From: Robert Evans
To: "hdfs-dev@hadoop.apache.org"
Date: Mon, 9 Jul 2012 11:13:31 -0700
Subject: Re: OSGi and classloaders

Guillaume,

The problem with Configuration is that it is public, so changing it does
not just impact Hadoop. It also impacts all of the projects that use it,
either directly as part of the Map/Reduce APIs or for storing their own
configuration.

Within Hadoop proper there are several places where it cannot simply be
static. For Map/Reduce, a Configuration object is created for each job,
so from a client's perspective there may be multiple different instances
of Configuration in flight at any point in time, one for each job. HDFS
also supports this, with multiple separate configurations in the client
simultaneously. For some processes, like the NameNode, DataNode and the
ResourceManager, you may be able to get away with a single static
configuration, but from the client's perspective that may be difficult.
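As a rough, hypothetical sketch of what that looks like from the client
side (the class name, queue name and NameNode address below are made up
for illustration; only Configuration, Job and FileSystem are real Hadoop
classes):

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.mapreduce.Job;

    public class MultiConfClient {
      public static void main(String[] args) throws Exception {
        // One Configuration per job: both are alive in the same client JVM.
        Configuration confA = new Configuration();
        confA.set("mapreduce.job.queuename", "queueA");     // illustrative value

        Configuration confB = new Configuration();
        confB.set("fs.defaultFS", "hdfs://other-nn:8020");  // illustrative value

        Job jobA = Job.getInstance(confA, "job-a");
        Job jobB = Job.getInstance(confB, "job-b");

        // The HDFS client works the same way: different Configuration objects
        // can resolve to different FileSystem instances in one process.
        FileSystem fsA = FileSystem.get(confA);
        FileSystem fsB = FileSystem.get(URI.create("hdfs://other-nn:8020"), confB);

        System.out.println(fsA.getUri() + " vs " + fsB.getUri());
      }
    }

Collapsing those into one static Configuration would force jobA and jobB
to share every setting, which is why a plain singleton is hard to do from
the client side.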
I am not really sure about the NodeManager, because it interacts with
HDFS on behalf of the end user and I am not completely sure how
Configuration fits into that picture.

--Bobby Evans

On 7/9/12 10:04 AM, "Guillaume Nodet" wrote:

>Right, that would surely be incompatible. The initial work I did was on
>1.0.3, and those problems can be solved in a simpler (though less clean)
>way in that branch, mainly because there is a single jar which contains
>everything, so it causes fewer problems in OSGi.
>
>For trunk, is there any valid reason to create multiple configurations?
>Or is the idea of a singleton something that I can investigate working
>on? I'm not very familiar with Hadoop internals, so I may very well be
>missing some edge cases. If not, I can come up with a patch that would
>transform Configuration into a singleton, leading to more flexibility
>for OSGi and a performance improvement by avoiding re-parsing the xml
>configuration multiple times.
>
>On Mon, Jul 9, 2012 at 4:37 PM, Robert Evans wrote:
>
>> Guillaume,
>>
>> I am not super familiar with OSGi. I have used it a little in the
>> past, but that was 5+ years ago. I am in favor of something that will
>> fix the CLASSPATH problems that we currently have and would allow for
>> CLASSPATH isolation between Hadoop itself and the applications that
>> use Hadoop. If OSGi can do this cleanly then I am +1 for moving to
>> OSGi.
>>
>> However, we are trying to maintain binary compatibility within major
>> version numbers, in preparation for rolling upgrades. Many of the
>> things you have suggested, like moving classes from one package to
>> another and doing some serious rework of Configuration, will break
>> not only binary compatibility but also API compatibility.
>>
>> If we do go this route, just be aware that it is most likely something
>> that would have to force a major version bump, which right now means
>> trunk (the 3.0 line).
>>
>> --Bobby Evans
>>
>> On 7/9/12 8:24 AM, "Guillaume Nodet" wrote:
>>
>> >I'm working with Jean-Baptiste to make Hadoop work in OSGi.
>> >OSGi works with classloaders in a very specific way, which leads to
>> >several problems with Hadoop.
>> >
>> >Let me quickly explain how OSGi works. In OSGi, you deploy bundles,
>> >which are jars with additional OSGi metadata. This metadata is used
>> >by the OSGi framework to create a classloader for the bundle.
>> >However, the classloaders are not organized in a tree like in a JEE
>> >environment, but rather in some kind of graph, where each classloader
>> >has limited visibility and limited exposure. This is controlled at
>> >the package level by specifying which packages are exported and which
>> >packages are imported by a given bundle. This has two main
>> >consequences:
>> > * OSGi does not support split packages well, where the same package
>> >is exported by two different bundles
>> > * a classloader does not have visibility of everything, as it would
>> >in a usual flat classloader environment or even a JEE-like one
>> >
>> >The first problem arises, for example, with the org.apache.hadoop.fs
>> >package, which is split across the hadoop-common and hadoop-hdfs jars
>> >(the latter defines the Hdfs class). There may be other cases, but I
>> >haven't hit them yet. To solve this problem, it'd be better if such
>> >classes were moved into a different package.
>> >
>> >The second problem is much more complicated. I think most of the
>> >classloading is done from Configuration.
>> >However, Configuration has an internal classloader which is set by
>> >the constructor to the thread context classloader (defaulting to the
>> >Configuration class' classloader), and new Configuration objects are
>> >created everywhere in the code. In addition, creating new
>> >Configuration objects forces the parsing of the configuration files
>> >several times. Also, in OSGi, configuration is better done through
>> >the standard OSGi ConfigurationAdmin service, so it would be nice to
>> >integrate the configuration into ConfigAdmin when running in OSGi.
>> >For the above reasons, I'd like to know what you would think of
>> >transforming the Configuration object into a real singleton, or at
>> >least replacing the "new Configuration()" calls spread everywhere
>> >with access to a singleton via Configuration.getInstance().
>> >This would allow the Hadoop OSGi layer to manage the Configuration
>> >in a more OSGi-friendly way, allowing the use of a specific subclass
>> >which could better manage the class loading in an OSGi environment
>> >and integrate with ConfigAdmin. This may also remove the need for
>> >keeping a registry of existing Configuration objects and having to
>> >update them when a default resource is added, for example.
>> >
>> >Some of the above problems have been addressed in some way in
>> >HADOOP-7977, but the fixes I've been working on were more related to
>> >the Hadoop 1.0.x branch and are not really applicable to trunk.
>> >
>> >One last point: the two problems above are mainly due to the fact
>> >that I've been assuming that individual Hadoop jars are transformed
>> >into native bundles. This would go away if we had a single bundle
>> >containing all the individual jars (as it was with hadoop-core-1.0.x),
>> >but having more fine-grained jars is better imho.
>> >
>> >Thoughts welcomed.
>> >
>> >--
>> >------------------------
>> >Guillaume Nodet
>> >------------------------
>> >Blog: http://gnodet.blogspot.com/
>> >------------------------
>> >FuseSource, Integration everywhere
>> >http://fusesource.com
>>
>
>--
>------------------------
>Guillaume Nodet
>------------------------
>Blog: http://gnodet.blogspot.com/
>------------------------
>FuseSource, Integration everywhere
>http://fusesource.com
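A minimal, hypothetical sketch may help picture the proposal being
discussed. Configuration.getInstance() does not exist in Hadoop, and the
holder class below is not part of any Hadoop code; it only illustrates
the shape of the idea: parse the default resources once, and let an OSGi
layer swap in a subclass that knows about bundle classloaders or
ConfigurationAdmin.

    import org.apache.hadoop.conf.Configuration;

    // Hypothetical illustration only: not an existing Hadoop API.
    public final class ConfigurationHolder {

      private static volatile Configuration instance;

      private ConfigurationHolder() {}

      public static Configuration getInstance() {
        if (instance == null) {
          synchronized (ConfigurationHolder.class) {
            if (instance == null) {
              // Default resources (core-default.xml, core-site.xml) are
              // parsed once here instead of on every "new Configuration()".
              instance = new Configuration();
            }
          }
        }
        return instance;
      }

      // An OSGi layer could install its own subclass, e.g. one that
      // resolves classes through bundle classloaders or reads values
      // from the OSGi ConfigurationAdmin service.
      public static synchronized void setInstance(Configuration conf) {
        instance = conf;
      }
    }

Whether such a single shared instance can coexist with the per-job
Configuration objects described earlier in the thread is the open
question raised above.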