hadoop-mapreduce-user mailing list archives

From Allen Wittenauer <awittena...@linkedin.com>
Subject Re: any plans to deploy OSGi bundles on cluster?
Date Tue, 04 Jan 2011 04:28:19 GMT

On Jan 2, 2011, at 9:51 AM, Hiller, Dean (Contractor) wrote:

> I was looking at the distributed cache and how I need to copy local jars to
> HDFS.  I was wondering if there were any plans to just deploy an OSGi
> bundle (i.e. introspect and auto-deploy jars from the bundle to the
> distributed cache, then make the API calls to deploy them to the
> slave nodes, so there is no work for the developer to do except deploy
> OSGi bundles).

	AFAIK, no.
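
	For reference, the manual workflow the question describes looks roughly like the sketch
below (old mapred-era API; the local and HDFS paths are made up for illustration): you copy the
jar into HDFS yourself, then register it with the distributed cache so it ends up on the task
classpath.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.filecache.DistributedCache;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ShipJar {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // 1. Copy the local jar into HDFS so the tasktrackers can fetch it.
        Path local = new Path("build/myjob-deps.jar");            // hypothetical local path
        Path inHdfs = new Path("/user/dean/lib/myjob-deps.jar");  // hypothetical HDFS path
        fs.copyFromLocalFile(local, inHdfs);

        // 2. Register the jar with the distributed cache and add it to the task classpath.
        DistributedCache.addFileToClassPath(inHdfs, conf);

        // ... configure and submit the job with this conf as usual ...
      }
    }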

> Not to mention, the OSGi classloader mechanism is so sweet that I could
> deploy jar A to be used by all my jobs, and also deploy jar B version 1
> and jar B version 2, which could be used at the same time by different
> jobs without classloading problems.

	Given that distributed caches are set per-job, this isn't a problem with Hadoop either.
Each job's tasks get their own JVMs.  The only case I know of where versioning is an issue
is when a user jar conflicts with a jar bundled with Hadoop.  [... and that problem is either
fixed or will be committed soon to trunk]
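
	To make the per-job isolation concrete, here is a minimal sketch (the jar paths are
hypothetical): two job configurations each ship their own version of "jar B" through the
distributed cache, and since each job's tasks run in their own JVMs with a per-job classpath,
the two versions never collide.

    import org.apache.hadoop.filecache.DistributedCache;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapred.JobConf;

    public class TwoVersions {
      public static void main(String[] args) throws Exception {
        // Job 1 runs against jar B version 1.
        JobConf job1 = new JobConf(TwoVersions.class);
        DistributedCache.addFileToClassPath(new Path("/libs/jarB-1.0.jar"), job1);

        // Job 2 runs against jar B version 2; its tasks get separate JVMs,
        // so the two versions never share a classpath.
        JobConf job2 = new JobConf(TwoVersions.class);
        DistributedCache.addFileToClassPath(new Path("/libs/jarB-2.0.jar"), job2);

        // ... submit job1 and job2 independently (e.g. via JobClient.runJob) ...
      }
    }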