incubator-mesos-user mailing list archives

From Matei Zaharia <ma...@eecs.berkeley.edu>
Subject Re: spark+mesos: increasing protocol buffer message size limit
Date Mon, 11 Jun 2012 06:37:21 GMT
Ah, got it. Any idea why they're that large? Maybe you could use broadcast variables if you
have a large object used in the closures.
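For context, the reason a large object captured in a closure inflates task messages is that it gets re-serialized into every single task, while a broadcast variable ships it to each node only once. A minimal stdlib sketch of the size difference (plain Python `pickle`, not Spark's actual serializer — the names here are illustrative, not Spark API):

```python
import pickle

# A large lookup table that every per-record task needs (stand-in for
# the "large object used in the closures" from the thread).
big_table = {i: str(i) * 10 for i in range(10000)}

# If the table is captured in each task's payload, every task carries it:
per_task = len(pickle.dumps((big_table, 42)))

# The broadcast-variable idea: ship the table once per node, then each
# task only carries a small descriptor (here, just the record id).
shipped_once = len(pickle.dumps(big_table))
small_task = len(pickle.dumps(42))

# Each task is now tiny compared to the closure-captured version.
assert per_task > 100 * small_task
print(per_task, shipped_once, small_task)
```

This is why large closures can bump into a fixed protobuf message ceiling: the per-task message size scales with the captured object, not with the actual work description.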

Anyway, I plan to send task bodies out-of-band in future releases to avoid these kinds of
problems. If you want something like this now, there's a "dev" branch in the Spark repo that
contains a "coarse-grained Mesos" execution mode where Spark just acquires one task on each
node and keeps launching mini-tasks inside it. You can enable it by setting the property spark.mesos.coarse
to true. But in the future I want to send the tasks through a separate channel even in the
"real" Mesos use case, and also to change the way tasks are sent so that each closure is ideally
shipped only once to each node.
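For readers wanting to try the coarse-grained mode mentioned above: in Spark releases of this era, properties like this were Java system properties set at launch. One plausible way to pass it (the `SPARK_JAVA_OPTS` variable is an assumption — check the mechanism for your specific Spark version):

```shell
# Hypothetical launch configuration: set the property named in the
# thread as a JVM system property before the SparkContext is created.
export SPARK_JAVA_OPTS="-Dspark.mesos.coarse=true"
```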

Matei

On Jun 10, 2012, at 11:27 PM, Arpan Ghosh wrote:

> We are using actors. These are the task objects that get serialized and sent over before a parallelize/foreach starts.
> 
> On Sun, Jun 10, 2012 at 11:10 PM, Matei Zaharia <matei@eecs.berkeley.edu> wrote:
> Are you using Mesos framework messages by any chance? It might be even better to avoid
that, because they add some overhead regardless of the protobuf size limit due to a lot of
copies happening in the stack.
> 
> Matei
> 
> On Jun 9, 2012, at 12:40 AM, Arpan Ghosh wrote:
> 
> > Hi,
> >
> > I am running a spark job on a mesos cluster and would like to increase the size limit on the protocol buffer messages. I understand that I have to call SetTotalBytesLimit() but I wasn't sure of where the CodedInputStream was being created while running a spark job on mesos.
> >
> >
> > Thanks
> >
> > Arpan
> >
> >
> 
> 

