hadoop-common-dev mailing list archives

From Steve Loughran <ste...@hortonworks.com>
Subject Re: [VOTE] Release Apache Hadoop 2.9.0 (RC3)
Date Sun, 19 Nov 2017 11:22:41 GMT

On 14 Nov 2017, at 00:10, Arun Suresh <asuresh@apache.org> wrote:

Hi Folks,

Apache Hadoop 2.9.0 is the first release of the Hadoop 2.9 line and will be the
starting release for the Apache Hadoop 2.9.x line - it includes 30 new features
with 500+ subtasks, 407 improvements, and 790 bug fixes - new fixed issues since

More information about the 2.9.0 release plan can be found here:

New RC is available at: https://home.apache.org/~asuresh/hadoop-2.9.0-RC3/

The RC tag in git is: release-2.9.0-RC3, and the latest commit id is:

The maven artifacts are available via repository.apache.org

We are carrying over the votes from the previous RC given that the delta is
the license fix.

Given the above, we are also going to stick with the original deadline for
the vote: ending on Friday 17th November 2017, 2pm PT.


I hadn't finished my testing yet; I'd been assuming that it was 5 days from the new RC. I
believe every RC should (must?) still have that 5-day test period. I have done all my core
tests, but haven't done the final tests of the S3Guard CLI. I did have to put in an
afternoon making my Spark cloud integration tests work against branch-2, so it wasn't
as if I could just D/L and test in half an hour.

As it was, I had filed one minor bug, but didn't consider that an issue as it was only with
the new FileSystem.create(path) builder. I was going to vote +1 unless those CLI tests
were completely broken.

What I do want is release notes highlighting which features we consider unstable/experimental,
to be used with caution:

  1.  Filesystem.create(Path)
  2.  S3Guard
  3.  AliyunOSS (it hasn't been out long enough to be trusted)

We ourselves know what's still stabilising; others should too.


D/L tar file, & .asc, check signature

gpg2 --verify hadoop-2.9.0.tar.gz.asc
gpg: assuming signed data in 'hadoop-2.9.0.tar.gz'
gpg: Signature made Mon 13 Nov 23:45:49 2017 GMT
gpg:                using RSA key 0x7ECDEEEA64ECB6E6
gpg: Good signature from "Arun Suresh <asuresh@apache.org>"
gpg: WARNING: This key is not certified with a trusted signature!
gpg:          There is no indication that the signature belongs to the owner.
Primary key fingerprint: 412B BFB1 27CB 48DA 6BA2  E3EE 7ECD EEEA 64EC B6E6

This is a valid signature; Arun is listed in KEYS. He's not trusted though: at the time of
checking, his public key hadn't been signed by anyone I trust, which shows we need more
cross-signing between committers. (Since then Andrew Wang has authenticated him, so he is
now transitively trusted.)
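In addition to the signature check above, releases ship checksum files; here is a minimal sketch of validating one. The `.sha256` filename and GNU coreutils `sha256sum` are assumptions, and a locally created file stands in for the real tarball:

```shell
# Hypothetical checksum check to pair with the gpg verification above.
# The tarball here is a local stand-in, not the real download.
tmpdir="$(mktemp -d)"
cd "${tmpdir}"
printf 'release-bytes' > hadoop-2.9.0.tar.gz           # stand-in for the real D/L
sha256sum hadoop-2.9.0.tar.gz > hadoop-2.9.0.tar.gz.sha256
sha256sum --check hadoop-2.9.0.tar.gz.sha256           # prints "hadoop-2.9.0.tar.gz: OK"
```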

Downstream build

Clean build of Spark through the staged artifacts (must remember to rm the local copies when
the release ships). This verifies artifacts and source code compatibility.

mvn clean install -Pyarn,hadoop-2.7,hadoop-cloud,snapshots-and-staging  -Dhadoop.version=2.9.0

Everything compiled fine. I did not do a full test run; I didn't have the time and without
knowing whether everything worked locally on the 2.8.1 release, could have been a distraction
on a failure.
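A sketch of the reminder above: purging the staged 2.9.0 artifacts from a local Maven repository once the release ships, so later builds resolve the final jars. The default ~/.m2 path and the find-based approach are assumptions, not a blessed procedure:

```shell
# Hypothetical cleanup: delete every cached org.apache.hadoop artifact
# directory versioned 2.9.0, leaving other versions untouched.
purge_staged() {
  local repo="${1:-$HOME/.m2/repository}"
  find "${repo}/org/apache/hadoop" -depth -type d -name '2.9.0' -exec rm -rf {} +
}

# Demo against a throwaway repository layout:
repo="$(mktemp -d)"
mkdir -p "${repo}/org/apache/hadoop/hadoop-common/2.9.0" \
         "${repo}/org/apache/hadoop/hadoop-common/2.8.1"
purge_staged "${repo}"
ls "${repo}/org/apache/hadoop/hadoop-common"           # prints "2.8.1"
```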

Downstream tests:

Ran all my Spark cloud integration tests: S3 (S3 Ireland + S3Guard), Azure (Azure Ireland),
Swift (RAX US).


There were 3 failures of SparkSQL & ORC in test setup, but I think that's related to the
latest Spark code & my injected-into-org.apache.spark-namespace tests; they're a bit brittle
to changes in spark-master.

All the work used the "legacy" FileOutputCommitter; with s3guard turned on. This delivers
the consistency needed for Job and task commit to be correct, though without the performance
needed for it to be usable in production. That's dependent on HADOOP-13786, which I don't
propose to backport. (it loves its Java 8 & targets 3.1)

hadoop cloud module object store tests

Checked out the relevant git commit & did a local run of hadoop-aws, hadoop-azure and

* s3a happy, including s3guard
* All the s3n & s3:// tests skipped/failed because I didn't have the test bindings for
those set up: untested.

Azure wasb: happy; one transient failure which

ADLS: not set up to test this
OSS: not set up to test this


Found one test failure due to Swift not supporting FileSystem.createNonRecursive(); this means
the new builder-based FileSystem.create(Path) mechanism doesn't work for it. Nothing that
serious, given it's an experimental API, but it does show that we are undertesting this stuff.


Note: the swift tests I ran downstream with spark did work. This is only the new API call
which fails.
