Subject: Re: [DISCUSS] Looking to a 2.8.0 release
From: Vinod Kumar Vavilapalli
To: common-dev@hadoop.apache.org
Date: Wed, 25 Nov 2015 11:11:38 -0800
Message-Id: <5E2BA599-FA73-42DC-8DDE-CD05EB020FBD@apache.org>

Haohui,

It'll help to document this whole line of discussion about the hdfs jar change and its impact/non-impact for existing users, so there is less confusion; a rough sketch of the two dependency options is appended below the quoted thread.

Thanks
+Vinod

> On Nov 11, 2015, at 3:26 PM, Haohui Mai wrote:
>
> bq. If and only if they take the Hadoop class path at face value.
> Many applications don't because of conflicting dependencies and
> instead import specific jars.
>
> We do make the assumption that applications need to pick up all the
> dependencies (either automatically or manually). The situation is
> similar to adding a new dependency to hdfs in a minor release.
>
> Maven / Gradle obviously help, but I'd love to hear more about how
> you get it to work. In trunk, hadoop-env.sh adds 118 jars to the
> class path. Are you manually importing 118 jars for every single
> application?
>
>
> On Wed, Nov 11, 2015 at 3:09 PM, Haohui Mai wrote:
>> bq. currently pulling in hadoop-client gives downstream apps
>> hadoop-hdfs-client, but not hadoop-hdfs server side, right?
>>
>> Right now hadoop-client pulls in hadoop-hdfs directly to ensure a
>> smooth transition. Maybe we can revisit that decision in 2.9 / 3.x?
>>
>> On Wed, Nov 11, 2015 at 3:00 PM, Steve Loughran wrote:
>>>
>>>> On 11 Nov 2015, at 22:15, Haohui Mai wrote:
>>>>
>>>> bq. it basically makes the assumption that everyone recompiles for
>>>> every minor release.
>>>>
>>>> I don't think that statement holds. HDFS-6200 keeps classes in the
>>>> same package. hdfs-client becomes a transitive dependency of the
>>>> original hdfs jar.
>>>>
>>>> Applications continue to work without recompilation, as the classes
>>>> keep the same names and will be available on the classpath. They
>>>> have the option of switching to depending only on hdfs-client to
>>>> minimize the dependency footprint once they are comfortable.
>>>>
>>>> I'm not claiming that there are no bugs in HDFS-6200, but just like
>>>> other features we discover bugs and fix them continuously.
>>>>
>>>> ~Haohui
>>>>
>>>
>>> currently pulling in hadoop-client gives downstream apps hadoop-hdfs-client, but not hadoop-hdfs server side, right?
>
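For concreteness, the two dependency options discussed in the quoted thread look roughly as follows in a Gradle build file (Kotlin DSL). This is a minimal illustrative sketch, not an official Hadoop example: the choice of Gradle over Maven and the 2.8.0 version string are assumptions, and the hadoop-hdfs-client artifact is only published in releases that include HDFS-6200.

    // build.gradle.kts -- illustrative sketch only; coordinates are the real
    // org.apache.hadoop artifacts, the 2.8.0 version is assumed for this thread.
    plugins {
        java
    }

    repositories {
        mavenCentral()
    }

    dependencies {
        // Option 1: take the Hadoop classpath "at face value".
        // hadoop-client pulls in hadoop-common, hadoop-hdfs, etc. transitively,
        // so an application never hand-imports the ~118 jars hadoop-env.sh lists.
        implementation("org.apache.hadoop:hadoop-client:2.8.0")

        // Option 2: the slimmer route enabled by HDFS-6200 -- depend only on the
        // client-side HDFS jar once the application is known not to use any
        // server-side classes.
        // implementation("org.apache.hadoop:hadoop-hdfs-client:2.8.0")
    }

With option 1, the behaviour Haohui describes applies: hadoop-hdfs still arrives transitively, which is what keeps existing applications working without recompilation. Option 2 trims the classpath but remains an opt-in that downstream projects can take when they are comfortable, as noted above.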