mesos-user mailing list archives

From Tim Chen <...@mesosphere.io>
Subject Re: Spark Job Submitting on Mesos Cluster
Date Mon, 14 Sep 2015 20:13:59 GMT
Thanks Haosdent!

Tim

On Mon, Sep 14, 2015 at 1:29 AM, SLiZn Liu <sliznmailbox@gmail.com> wrote:

> I found the --no-switch_user flag in the Mesos slave configuration. Will
> give it a try. Thanks Tim and haosdent!
>
> On Mon, Sep 14, 2015 at 4:15 PM haosdent <haosdent@gmail.com> wrote:
>
>> > turn off --switch-user flag in the Mesos slave
>> --no-switch_user :-)
>>
>> On Mon, Sep 14, 2015 at 4:03 PM, Tim Chen <tim@mesosphere.io> wrote:
>>
>>> Actually, --proxy-user determines which user you impersonate to run the
>>> driver, not the user that is going to be passed to Mesos to run as.
>>>
>>> The way to use a particular user when running a Spark job is to set the
>>> SPARK_USER environment variable, and that user will be passed to Mesos.
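>>>
>>> For example, something along these lines should work (the user name
>>> "spark" and the master address are purely illustrative placeholders):
>>>
>>>     SPARK_USER=spark spark-submit --master mesos://<master-host>:5050 ...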
>>>
>>> Alternatively, you can turn off the --switch-user flag on the Mesos
>>> slave so that all jobs will just run as the slave's current user.
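>>>
>>> For instance, starting the slave roughly like this (other flags omitted,
>>> and the ZooKeeper address is just a placeholder) keeps every task running
>>> as the slave's own user:
>>>
>>>     mesos-slave --master=zk://<zk-host>:2181/mesos --no-switch_user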
>>>
>>> Tim
>>>
>>> On Sun, Sep 13, 2015 at 11:20 PM, SLiZn Liu <sliznmailbox@gmail.com>
>>> wrote:
>>>
>>>> Thx Tommy, did you mean adding a proxy user like this:
>>>>
>>>> spark-submit --proxy-user <MESOS-STARTER> ...
>>>>
>>>> where <MESOS-STARTER> represents the user who started Mesos?
>>>>
>>>> and is this parameter documented anywhere?
>>>>
>>>> On Mon, Sep 14, 2015 at 1:34 PM tommy xiao <xiaods@gmail.com> wrote:
>>>>
>>>>> @SLiZn Liu  yes, you need to add the proxy_user parameter, and your
>>>>> cluster should have that proxy user in /etc/passwd on every node.
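>>>>>
>>>>> For example, a quick sanity check is to run "id <proxy-user>" on each
>>>>> node (where <proxy-user> is just a placeholder for the account you pass
>>>>> to --proxy-user) and make sure it resolves everywhere.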
>>>>>
>>>>> 2015-09-14 13:05 GMT+08:00 haosdent <haosdent@gmail.com>:
>>>>>
>>>>>> Did you start your Mesos cluster as root?
>>>>>>
>>>>>> On Mon, Sep 14, 2015 at 12:10 PM, SLiZn Liu <sliznmailbox@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> Hi Mesos Users,
>>>>>>>
>>>>>>> I’m trying to run Spark jobs on my Mesos cluster. However, I
>>>>>>> discovered that my Spark job must be submitted by the same user who
>>>>>>> started Mesos, otherwise an ExecutorLostFailure is raised and the
>>>>>>> job won’t be executed. Is there any way that every user can share
>>>>>>> the same Mesos cluster in harmony? =D
>>>>>>>
>>>>>>> BR,
>>>>>>> Todd Leo
>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Best Regards,
>>>>>> Haosdent Huang
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Deshi Xiao
>>>>> Twitter: xds2000
>>>>> E-mail: xiaods(AT)gmail.com
>>>>>
>>>>
>>>
>>
>>
>> --
>> Best Regards,
>> Haosdent Huang
>>
>
