From: Alejandro Abdelnur
Date: Tue, 24 Jun 2014 16:44:21 -0700
Subject: Re: Moving to JDK7, JDK8 and new major releases
To: mapreduce-dev@hadoop.apache.org
Cc: common-dev@hadoop.apache.org, hdfs-dev@hadoop.apache.org, yarn-dev@hadoop.apache.org

After reading this thread and thinking a bit about it, I think it should be OK to make the move up to JDK7 in Hadoop 2, for the following reasons:

* Existing Hadoop 2 releases and related projects are already running on JDK7 in production.
* Commercial Hadoop vendors have already done a lot of work to ensure Hadoop works on JDK7 while keeping it working on JDK6.
* Unlike many of the 3rd-party libraries used by Hadoop, the JDK is much stricter about backwards compatibility.
IMPORTANT: I take this as an exception and not as carte blanche for 3rd-party dependencies, or for moving from JDK7 to JDK8 (though it could be OK for the latter if we end up in the same state of affairs).

Even for Hadoop 2.5, I think we could do the move:

* Create the Hadoop 2.5 release branch.
* Have one nightly Jenkins job that builds the Hadoop 2.5 branch with JDK6, to ensure no JDK7 language/API features creep into Hadoop 2.5. Keep this for all Hadoop 2.5.x releases.
* Sanity tests for the Hadoop 2.5.x releases should be done with JDK7.
* Apply Steve's patch to require JDK7 on trunk and branch-2.
* Move all Apache Jenkins jobs to build/test using JDK7.
* Starting from Hadoop 2.6 we support JDK7 language/API features.

Effectively, what we are ensuring is that Hadoop 2.5.x builds and tests with both JDK6 and JDK7, and that all tests towards the release are done with JDK7. Users can proactively upgrade to JDK7 before upgrading to Hadoop 2.5.x; or, if they upgrade to Hadoop 2.5.x and run into any issue because of JDK6 (which would be quite unlikely), they can reactively upgrade to JDK7.

Thoughts?

On Tue, Jun 24, 2014 at 4:22 PM, Andrew Wang wrote:

> Hi all,
>
> On dependencies, we've bumped library versions when we think it's safe and the APIs in the new version are compatible, or when the dependency is not leaked to the app classpath (e.g. the JUnit version bump). I think the JIRAs Arun mentioned fall into one of those categories. Steve can do a better job explaining this to me, but we haven't bumped things like Jetty or Guava because they are on the classpath and are not compatible. There is this line in the compat guidelines:
>
> - Existing MapReduce, YARN & HDFS applications and frameworks should work unmodified within a major release, i.e. the Apache Hadoop ABI is supported.
>
> Since Hadoop apps can and do depend on the Hadoop classpath, the classpath is effectively part of our API. I'm sure there are user apps out there that will break if we make incompatible changes to the classpath. I haven't read up on the MR JIRA Arun mentioned, but MR isn't the only YARN app out there.
>
> Sticking to the theme of "work unmodified", let's think about the user effort required to upgrade their JDK. This can be a very expensive task. It might need approval up and down the org, meaning lots of certification, testing, and signoff. Considering the amount of user effort involved here, it really seems like dropping a JDK is something that should only happen in a major release. Else, there's the potential for nasty surprises in a supposedly "minor" release.
>
> That said, we are in an unhappy place right now regarding JDK6, and it's true that almost everyone's moved off of JDK6 at this point. So, I'd be okay with an intermediate 2.x release that drops JDK6 support (but no incompatible changes to the classpath like Guava). This is basically free, and we could start using JDK7 idioms like multi-catch and the new NIO stuff in Hadoop code (a minor draw, I guess).
>
> My higher-level goal, though, is to avoid going through this same pain again when JDK7 goes EOL. I'd like to do a JDK8-based release before then for this reason. This is why I suggested skipping an intermediate 2.x+JDK7 release and leapfrogging to 3.0+JDK8. 10 months is really not that far in the future, and it seems like a better place to focus our efforts.
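[For readers unfamiliar with the JDK7 features Andrew mentions above, a minimal, hypothetical sketch (not taken from Hadoop) of the kind of code that becomes possible once the source level moves to 7: the NIO.2 file API, try-with-resources, and multi-catch.]

    // Illustrative only -- not Hadoop code. Shows JDK7-only idioms:
    // NIO.2 (java.nio.file), try-with-resources, and multi-catch.
    import java.io.BufferedReader;
    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    public class Jdk7Idioms {
      public static void main(String[] args) {
        Path path = Paths.get("example.txt");                 // NIO.2 (JDK7+)
        // try-with-resources closes the reader automatically (JDK7+)
        try (BufferedReader reader =
                 Files.newBufferedReader(path, StandardCharsets.UTF_8)) {
          String line;
          while ((line = reader.readLine()) != null) {
            System.out.println(line);
          }
        } catch (IOException | SecurityException e) {         // multi-catch (JDK7+)
          System.err.println("failed to read " + path + ": " + e);
        }
      }
    }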
> I was also hoping it'd be realistic to fix our classpath leakage by then, since then we'd have a nice, tight, future-proofed new major release.
>
> Thanks,
> Andrew
>
> On Tue, Jun 24, 2014 at 11:43 AM, Arun C Murthy wrote:
>
> > Andrew,
> >
> > Thanks for starting this thread. I'll edit the wiki to provide more context around rolling upgrades etc., which, as I pointed out in the original thread, are key IMHO.
> >
> > On Jun 24, 2014, at 11:17 AM, Andrew Wang wrote:
> >
> > > https://wiki.apache.org/hadoop/MovingToJdk7and8
> > >
> > > I think based on our current compatibility guidelines, Proposal A is the most attractive. We're pretty hamstrung by the requirement to keep the classpath the same, which would be solved by either OSGi or shading our deps (but that's a different discussion).
> >
> > I don't see that anywhere in our current compatibility guidelines.
> >
> > As you can see from http://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/Compatibility.html we do not have such a policy (pasted here for convenience):
> >
> > Java Classpath
> >
> > User applications built against Hadoop might add all Hadoop jars (including Hadoop's library dependencies) to the application's classpath. Adding new dependencies or updating the version of existing dependencies may interfere with those in applications' classpaths.
> >
> > Policy
> >
> > Currently, there is NO policy on when Hadoop's dependencies can change.
> >
> > Furthermore, we have *already* changed our classpath in hadoop-2.x. Again, as I pointed out in the previous thread, here is the precedent:
> >
> > On Jun 21, 2014, at 5:59 PM, Arun C Murthy wrote:
> >
> > > Also, this is something we already have done, i.e. we updated some of our software deps in hadoop-2.4 vs. hadoop-2.2 - clearly not something as dramatic as the JDK. Here are some examples:
> > > https://issues.apache.org/jira/browse/HADOOP-9991
> > > https://issues.apache.org/jira/browse/HADOOP-10102
> > > https://issues.apache.org/jira/browse/HADOOP-10103
> > > https://issues.apache.org/jira/browse/HADOOP-10104
> > > https://issues.apache.org/jira/browse/HADOOP-10503
> >
> > thanks,
> > Arun

--
Alejandro
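[To make the classpath concern in the quoted policy concrete, a small, hypothetical user-application snippet (not from Hadoop or any real project) that leans on whatever Guava version Hadoop happens to put on the task classpath.]

    // Hypothetical user job code. It compiles against the Guava that Hadoop
    // ships on its classpath, without declaring Guava as a dependency of its own.
    import com.google.common.base.Splitter;
    import com.google.common.collect.Lists;
    import java.util.List;

    public class CsvTokenizer {
      public static List<String> tokenize(String line) {
        // Works only as long as a compatible Guava stays on the Hadoop
        // classpath. If a Hadoop release bumped Guava to a version that
        // removed or changed these methods, this code would fail at runtime
        // (NoSuchMethodError / NoClassDefFoundError) without being recompiled.
        return Lists.newArrayList(Splitter.on(',').trimResults().split(line));
      }
    }

[Because such an app never declares Guava itself, a dependency bump in a "minor" Hadoop release can surface as a runtime failure in user code, which is why the classpath behaves like part of the API in the discussion above.]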