spark-user mailing list archives

From Ashic Mahtab <as...@live.com>
Subject RE: Scala Lazy values and partitions
Date Fri, 19 Dec 2014 14:32:06 GMT
Just to confirm, once per VM means that it'll be the same instance across all applications
in a particular JVM instance (i.e. executor). So even if the spark application is terminated,
the instance will live on, correct? I think that's what Sean said, and it seems logical.
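[Not part of the original thread: a minimal, self-contained sketch of the once-per-JVM behaviour being discussed. The names Context, create(), and initCount are made up for illustration; a plain counter stands in for an expensive resource, and repeated accesses stand in for tasks across partitions running in the same executor JVM.]

    // Hypothetical sketch: a lazy val inside an object is initialized at most
    // once per JVM, no matter how many times it is accessed. In Spark terms,
    // that means once per executor, shared across all partitions it processes.
    object Context {
      var initCount = 0 // counts how many times create() actually runs

      def create(): String = {
        initCount += 1
        "connected" // stand-in for an expensive resource (connection, client, ...)
      }

      // Initialized on first access, cached for the lifetime of the JVM.
      lazy val context: String = create()
    }

    object LazyDemo {
      def main(args: Array[String]): Unit = {
        // Simulate many tasks/partitions touching the same object in one JVM.
        (1 to 100).foreach(_ => Context.context)
        println(Context.initCount) // 1: create() ran exactly once
      }
    }

Running LazyDemo prints 1, since every access after the first reuses the cached value.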

From: gerard.maas@gmail.com
Date: Fri, 19 Dec 2014 12:52:23 +0100
Subject: Re: Scala Lazy values and partitions
To: ashic@live.com
CC: user@spark.apache.org

It will be instantiated once per VM, which translates to once per executor.
-kr, Gerard.
On Fri, Dec 19, 2014 at 12:21 PM, Ashic Mahtab <ashic@live.com> wrote:


Hi Guys,
Are Scala lazy vals instantiated once per executor, or once per partition? For example,
if I have:

object Something {
    lazy val context = create()

    def foo(item: Item) = context.doSomething(item)
}

and I do

someRdd.foreach(Something.foo)

then will context get instantiated once per executor, or once per partition?

Thanks,
Ashic.
