hbase-dev mailing list archives

From Andrew Purtell <apurt...@apache.org>
Subject Re: [VOTE] First release candidate for HBase 2.4.0 (RC1) is available
Date Thu, 10 Dec 2020 22:29:27 GMT
Sounds good.

> I would instead
> suggest bringing a DISCUSS to the dev/private list about Apache HBase
> adopting Stack's downstreamer project (assuming he's willing to make
> the donation) and incorporating it into our nightly suite.

Assuming Stack is willing to acknowledge the donation, we can just file a
JIRA for this with a link to the GitHub project and discuss further in the
context of the JIRA, including the IP clearance process
(https://incubator.apache.org/ip-clearance/).


On Thu, Dec 10, 2020 at 1:52 PM Nick Dimiduk <ndimiduk@apache.org> wrote:

> On Thu, Dec 10, 2020 at 12:17 PM Andrew Purtell <apurtell@apache.org>
> wrote:
> >
> > Thanks Nick. So we have three dependency-related followups?
>
> It was my pleasure.
>
> > 1. Jetty 9.3 issue with some Java 11 versions. (Possible fix by managing
> > up to Jetty 9.4 transitively.)
>
> The Jetty dependency is not something I would attempt to "patch" on our
> side. Instead, if we have a place in our book where we can warn users
> of various hidden gotchas around HBase, I propose we document it
> there.
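The gotcha being documented is easy to sketch: jetty 9.3's `JavaVersion` rejects four-component JDK version strings like 11.0.9.1, while jetty 9.4's parser copes with them. The following is an illustrative reproduction only, not jetty's actual code; the class and method names are invented:

```java
// Illustrative sketch only -- NOT jetty source. Shows why a strict
// "major.minor.micro" parser (jetty 9.3 style) rejects a four-component
// JDK version like 11.0.9.1, while a lenient parser (jetty 9.4 style)
// reads just the leading components and succeeds.
public class JavaVersionSketch {

    // Strict: demands exactly three dot-separated components.
    static int[] parseStrict(String v) {
        String[] parts = v.split("\\.");
        if (parts.length != 3) {
            throw new IllegalArgumentException("Invalid Java version " + v);
        }
        return new int[] { Integer.parseInt(parts[0]),
                           Integer.parseInt(parts[1]),
                           Integer.parseInt(parts[2]) };
    }

    // Lenient: takes up to three leading numeric components, ignores the rest.
    static int[] parseLenient(String v) {
        String[] parts = v.split("[^0-9]+");
        int[] out = new int[3];
        for (int i = 0; i < 3 && i < parts.length; i++) {
            out[i] = Integer.parseInt(parts[i]);
        }
        return out;
    }

    public static void main(String[] args) {
        int[] v = parseLenient("11.0.9.1");
        System.out.println(v[0] + "." + v[1] + "." + v[2]); // 11.0.9
        try {
            parseStrict("11.0.9.1");
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage()); // Invalid Java version 11.0.9.1
        }
    }
}
```

This matches the "Invalid Java version 11.0.9.1" failures reported further down the thread.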
>
> > 2. Something with servlet-api as pertains to Spark 1.6.0
>
> "pertains to Spark 1.6.0" is speculation on my part. I would instead
> suggest bringing a DISCUSS to the dev/private list about Apache HBase
> adopting Stack's downstreamer project (assuming he's willing to make
> the donation) and incorporating it into our nightly suite. We can
> shake out whatever specific issues it finds once proper dependency
> diligence has been conducted.
>
> > 3. Exclude zookeeper where we bring in hadoop-common
>
> I think this is one we can do. I don't think an issue was raised from
> this finding after 2.3.0, so I have filed HBASE-25384.
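For reference, the workaround Nick describes further down (excluding zookeeper from hadoop-common in the downstreamer build) looks roughly like this in a consumer's pom.xml. This is a sketch of that exclusion, not project-blessed guidance; the version shown matches the -Dhadoop.version=2.10.0 used in his test:

```xml
<!-- Sketch of the workaround: hadoop-common still depends on zookeeper
     3.4.9, which is missing
     org.apache.zookeeper.common.X509Exception$SSLContextException,
     so exclude it and let HBase supply its own zookeeper. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-common</artifactId>
  <version>2.10.0</version>
  <exclusions>
    <exclusion>
      <groupId>org.apache.zookeeper</groupId>
      <artifactId>zookeeper</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```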
>
> > On Thu, Dec 10, 2020 at 11:47 AM Nick Dimiduk <ndimiduk@apache.org> wrote:
> >
> > > +1 (binding)
> > >
> > > * Signature: ok
> > > * Checksum : ok
> > > * Compatibility Report vs 2.3.0: ok
> > > * Rat check (1.8.0_275 + Hadoop2): ok
> > >  - mvn clean apache-rat:check
> > > * Built from source (1.8.0_275 + Hadoop2): ok
> > >  - mvn clean install -DskipTests
> > > * Unit tests pass (1.8.0_275 + Hadoop2): ok
> > >  - mvn package -P runSmallTests -Dsurefire.rerunFailingTestsCount=3
> > > * Rat check (1.8.0_275 + Hadoop3): ok
> > >  - mvn clean apache-rat:check -D hadoop.profile=3.0
> > > * Rat check (11.0.8 + Hadoop3): ok
> > >  - mvn clean apache-rat:check -D hadoop.profile=3.0
> > > * Built from source (1.8.0_275 + Hadoop3): ok
> > >  - mvn clean install -D hadoop.profile=3.0 -DskipTests
> > > * Built from source (11.0.8 + Hadoop3): ok
> > >  - mvn clean install -D hadoop.profile=3.0 -DskipTests
> > > * Unit tests pass (1.8.0_275 + Hadoop3): ok
> > >  - mvn package -P runSmallTests -D hadoop.profile=3.0
> > > -Dsurefire.rerunFailingTestsCount=3
> > > * Unit tests pass (11.0.8 + Hadoop3): ok
> > >  - mvn package -P runSmallTests -D hadoop.profile=3.0
> > > -Dsurefire.rerunFailingTestsCount=3
> > > * Unit tests: ok
> > >  - Jenkins is looking pretty good overall.
> > >
> > > * saintstack/hbase-downstreamer: had an issue with servlet api. This
> > > looks like a transitive class path problem, but I haven't made time to
> > > investigate in further detail; manual inspection of maven
> > > dependency:tree looks fine, as pertains to the servlet-api jar.
> > > Looking at that project's dependencies, I'm guessing this means that
> > > HBase 2.4.0 is not compatible with Spark 1.6.0. For me, this is
> > > concerning but not a blocker.
> > >  - mvn -Dhbase.2.version=2.4.0
> > > -Dhbase.staging.repository='https://repository.apache.org/content/repositories/orgapachehbase-1420/'
> > > -Dhadoop.version=2.10.0 -Dslf4j.version=1.7.30 clean package
> > >  - had to exclude zookeeper from hadoop-common, which still depends on
> > > zookeeper-3.4.9 and is missing
> > > org/apache/zookeeper/common/X509Exception$SSLContextException (same as
> > > 2.3.0 release)
> > >
> > > On Wed, Dec 9, 2020 at 5:29 PM Nick Dimiduk <ndimiduk@apache.org> wrote:
> > > >
> > > > On Wed, Dec 9, 2020 at 5:23 PM Andrew Purtell <andrew.purtell@gmail.com> wrote:
> > > >>
> > > >> ICYMI, the nightly build passes because the JDK version there has
> > > >> three components (“11.0.6”) and yours fails because your JDK version
> > > >> has four components (“11.0.9.1”).
> > > >
> > > >
> > > > Thanks, I did miss this (and I had to look up ICYMI).
> > > >
> > > >> As this is a release vote you do not need to withdraw the veto, it
> > > >> does not block, but I would ask that you fix your local environment to
> > > >> conform to the limitations of Jetty as transitively pulled in via that
> > > >> optional profile and retest. This is not an HBase issue and it should
> > > >> not be relevant in voting. (IMHO)
> > > >
> > > >
> > > > Since this is indeed a jetty issue, I agree that this is not an HBase
> > > > issue and I rescind the -1. Let me grab a slightly different JDK11
> > > > build and resume my evaluation.
> > > >
> > > >> > On Dec 9, 2020, at 5:10 PM, Andrew Purtell <andrew.purtell@gmail.com> wrote:
> > > >> >
> > > >> > Nick,
> > > >> >
> > > >> > Given the nature of this issue I’d ask that you try Duo’s
> > > >> > suggestion, and if an earlier version of Hadoop 3 succeeds, that
> > > >> > this be sufficient this time around.
> > > >> >
> > > >> > All,
> > > >> >
> > > >> > I will start a DISCUSS thread as follow up as to what should be
> > > >> > considered required and veto worthy for a RC and what should not,
> > > >> > with regard to optional build profiles. In my opinion ‘required’
> > > >> > should be defined as what is enabled by default in the release
> > > >> > build, and ‘optional’ is everything else until we agree to
> > > >> > specifically include one or more optional build profiles to the
> > > >> > required list.
> > > >> >
> > > >> >
> > > >> >> On Dec 9, 2020, at 4:31 PM, 张铎 <palomino219@gmail.com> wrote:
> > > >> >>
> > > >> >> OK, I think the problem is a bug in jetty 9.3. The JavaVersion
> > > >> >> implementations for 9.3 and 9.4 are completely different: there is
> > > >> >> no problem parsing 11.0.9.1 for jetty 9.4, but jetty 9.3 can only
> > > >> >> parse versions with two dots, i.e., 11.0.9.
> > > >> >>
> > > >> >> I think you could add -Dhadoop-three.version to set the hadoop 3
> > > >> >> version explicitly to a newer version which uses jetty 9.4 to
> > > >> >> solve the problem. IIRC the newest release of each active release
> > > >> >> line has upgraded to jetty 9.4, and that's why we need to shade
> > > >> >> jetty, as jetty 9.3 and 9.4 are incompatible.
> > > >> >>
> > > >> >> Thanks.
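Duo's suggestion amounts to an invocation along these lines; hadoop-three.version is the property named in his message, and the 3.1.4 value here is only an assumed example of a Hadoop 3 release that has moved to jetty 9.4, not a recommendation:

```
mvn clean install -Dhadoop.profile=3.0 -Dhadoop-three.version=3.1.4
```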
> > > >> >>
> > > >> >> 张铎 (Duo Zhang) <palomino219@gmail.com> wrote on Thu, Dec 10, 2020 at 8:21 AM:
> > > >> >>
> > > >> >>> On nightly jdk11 build the jdk version is
> > > >> >>>
> > > >> >>> AdoptOpenJDK-11.0.6+10
> > > >> >>>
> > > >> >>> Andrew Purtell <apurtell@apache.org> wrote on Thu, Dec 10, 2020 at 7:21 AM:
> > > >> >>>
> > > >> >>>> Let me rephrase.
> > > >> >>>>
> > > >> >>>> I'm all for testing the optional configurations. I'm less
> > > >> >>>> supportive of vetoing releases when an optional configuration
> > > >> >>>> has some issue due to a third party component. I would like to
> > > >> >>>> see us veto only on the required configurations, and otherwise
> > > >> >>>> file JIRAs to fix up the nits on the optional ones.
> > > >> >>>>
> > > >> >>>> On Wed, Dec 9, 2020 at 3:19 PM Andrew Purtell <apurtell@apache.org> wrote:
> > > >> >>>>
> > > >> >>>>>> parseJDK9:71, JavaVersion (org.eclipse.jetty.util)
> > > >> >>>>>
> > > >> >>>>> So unless I am mistaken, some Jetty utility class is not able
> > > >> >>>>> to parse the version string of your particular JDK/JRE.
> > > >> >>>>>
> > > >> >>>>> We can try to downgrade Jetty but I am not sure in general how
> > > >> >>>>> we are supposed to take on the risk of third party dependencies
> > > >> >>>>> doing the wrong thing in an optional configuration. I for one
> > > >> >>>>> do not want to deal with a combinatorial explosion of
> > > >> >>>>> transitive dependencies when releasing.
> > > >> >>>>>
> > > >> >>>>>
> > > >> >>>>> On Wed, Dec 9, 2020 at 2:41 PM Nick Dimiduk <ndimiduk@apache.org> wrote:
> > > >> >>>>>
> > > >> >>>>>> This is coming out of Jetty + Hadoop. This build has a
> > > >> >>>>>> regression in our JDK11 support. Or perhaps there's a
> > > >> >>>>>> regression in the upstream Hadoop against which JDK11 builds.
> > > >> >>>>>>
> > > >> >>>>>> I'm afraid I must vote -1 until we can sort out the issue.
> > > >> >>>>>> I'd appreciate it if someone else can attempt to repro, to
> > > >> >>>>>> help ensure it's not just me.
> > > >> >>>>>>
> > > >> >>>>>> Thanks,
> > > >> >>>>>> Nick
> > > >> >>>>>>
> > > >> >>>>>> (Apologies for the crude stack trace; this is copied out of
> > > >> >>>>>> an attached debugger)
> > > >> >>>>>>
> > > >> >>>>>> parseJDK9:71, JavaVersion (org.eclipse.jetty.util)
> > > >> >>>>>> parse:49, JavaVersion (org.eclipse.jetty.util)
> > > >> >>>>>> <clinit>:43, JavaVersion (org.eclipse.jetty.util)
> > > >> >>>>>> findAndFilterContainerPaths:185, WebInfConfiguration (org.eclipse.jetty.webapp)
> > > >> >>>>>> preConfigure:155, WebInfConfiguration (org.eclipse.jetty.webapp)
> > > >> >>>>>> preConfigure:485, WebAppContext (org.eclipse.jetty.webapp)
> > > >> >>>>>> doStart:521, WebAppContext (org.eclipse.jetty.webapp)
> > > >> >>>>>> start:68, AbstractLifeCycle (org.eclipse.jetty.util.component)
> > > >> >>>>>> start:131, ContainerLifeCycle (org.eclipse.jetty.util.component)
> > > >> >>>>>> doStart:113, ContainerLifeCycle (org.eclipse.jetty.util.component)
> > > >> >>>>>> doStart:61, AbstractHandler (org.eclipse.jetty.server.handler)
> > > >> >>>>>> start:68, AbstractLifeCycle (org.eclipse.jetty.util.component)
> > > >> >>>>>> start:131, ContainerLifeCycle (org.eclipse.jetty.util.component)
> > > >> >>>>>> start:427, Server (org.eclipse.jetty.server)
> > > >> >>>>>> doStart:105, ContainerLifeCycle (org.eclipse.jetty.util.component)
> > > >> >>>>>> doStart:61, AbstractHandler (org.eclipse.jetty.server.handler)
> > > >> >>>>>> doStart:394, Server (org.eclipse.jetty.server)
> > > >> >>>>>> start:68, AbstractLifeCycle (org.eclipse.jetty.util.component)
> > > >> >>>>>> start:1140, HttpServer2 (org.apache.hadoop.http)
> > > >> >>>>>> start:177, NameNodeHttpServer (org.apache.hadoop.hdfs.server.namenode)
> > > >> >>>>>> startHttpServer:872, NameNode (org.apache.hadoop.hdfs.server.namenode)
> > > >> >>>>>> initialize:694, NameNode (org.apache.hadoop.hdfs.server.namenode)
> > > >> >>>>>> <init>:940, NameNode (org.apache.hadoop.hdfs.server.namenode)
> > > >> >>>>>> <init>:913, NameNode (org.apache.hadoop.hdfs.server.namenode)
> > > >> >>>>>> createNameNode:1646, NameNode (org.apache.hadoop.hdfs.server.namenode)
> > > >> >>>>>> createNameNode:1314, MiniDFSCluster (org.apache.hadoop.hdfs)
> > > >> >>>>>> configureNameService:1083, MiniDFSCluster (org.apache.hadoop.hdfs)
> > > >> >>>>>> createNameNodesAndSetConf:958, MiniDFSCluster (org.apache.hadoop.hdfs)
> > > >> >>>>>> initMiniDFSCluster:890, MiniDFSCluster (org.apache.hadoop.hdfs)
> > > >> >>>>>> <init>:518, MiniDFSCluster (org.apache.hadoop.hdfs)
> > > >> >>>>>> build:477, MiniDFSCluster$Builder (org.apache.hadoop.hdfs)
> > > >> >>>>>> startMiniDFSCluster:108, AsyncFSTestBase (org.apache.hadoop.hbase.io.asyncfs)
> > > >> >>>>>> setUp:87, TestFanOutOneBlockAsyncDFSOutput (org.apache.hadoop.hbase.io.asyncfs)
> > > >> >>>>>> invoke0:-1, NativeMethodAccessorImpl (jdk.internal.reflect)
> > > >> >>>>>> invoke:62, NativeMethodAccessorImpl (jdk.internal.reflect)
> > > >> >>>>>> invoke:43, DelegatingMethodAccessorImpl (jdk.internal.reflect)
> > > >> >>>>>> invoke:566, Method (java.lang.reflect)
> > > >> >>>>>> runReflectiveCall:59, FrameworkMethod$1 (org.junit.runners.model)
> > > >> >>>>>> run:12, ReflectiveCallable (org.junit.internal.runners.model)
> > > >> >>>>>> invokeExplosively:56, FrameworkMethod (org.junit.runners.model)
> > > >> >>>>>> invokeMethod:33, RunBefores (org.junit.internal.runners.statements)
> > > >> >>>>>> evaluate:24, RunBefores (org.junit.internal.runners.statements)
> > > >> >>>>>> evaluate:27, RunAfters (org.junit.internal.runners.statements)
> > > >> >>>>>> evaluate:38, SystemExitRule$1 (org.apache.hadoop.hbase)
> > > >> >>>>>> call:288, FailOnTimeout$CallableStatement (org.junit.internal.runners.statements)
> > > >> >>>>>> call:282, FailOnTimeout$CallableStatement (org.junit.internal.runners.statements)
> > > >> >>>>>> run:264, FutureTask (java.util.concurrent)
> > > >> >>>>>> run:834, Thread (java.lang)
> > > >> >>>>>>
> > > >> >>>>>> On Wed, Dec 9, 2020 at 2:08 PM Nick Dimiduk <ndimiduk@apache.org> wrote:
> > > >> >>>>>>
> > > >> >>>>>>> On Mon, Dec 7, 2020 at 1:51 PM Nick Dimiduk <ndimiduk@apache.org> wrote:
> > > >> >>>>>>>
> > > >> >>>>>>>> Has anyone successfully built/run this RC with JDK11 and
> > > >> >>>>>>>> Hadoop3 profile? I'm seeing test failures locally in the
> > > >> >>>>>>>> hbase-asyncfs module. Reproducible with:
> > > >> >>>>>>>>
> > > >> >>>>>>>> $ JAVA_HOME=/Library/Java/JavaVirtualMachines/adoptopenjdk-11.jdk/Contents/Home
> > > >> >>>>>>>> mvn clean install -Dhadoop.profile=3.0
> > > >> >>>>>>>> -Dtest=org.apache.hadoop.hbase.io.asyncfs.TestFanOutOneBlockAsyncDFSOutput
> > > >> >>>>>>>> ...
> > > >> >>>>>>>> [INFO] Running
> > > >> >>>>>>>> org.apache.hadoop.hbase.io.asyncfs.TestFanOutOneBlockAsyncDFSOutput
> > > >> >>>>>>>>
> > > >> >>>>>>>> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0,
> > > >> >>>>>>>> Time elapsed: 1.785 s <<< FAILURE! - in
> > > >> >>>>>>>> org.apache.hadoop.hbase.io.asyncfs.TestFanOutOneBlockAsyncDFSOutput
> > > >> >>>>>>>>
> > > >> >>>>>>>> [ERROR]
> > > >> >>>>>>>> org.apache.hadoop.hbase.io.asyncfs.TestFanOutOneBlockAsyncDFSOutput
> > > >> >>>>>>>> Time elapsed: 1.775 s  <<< ERROR!
> > > >> >>>>>>>>
> > > >> >>>>>>>> java.lang.ExceptionInInitializerError
> > > >> >>>>>>>>       at org.apache.hadoop.hbase.io.asyncfs.TestFanOutOneBlockAsyncDFSOutput.setUp(TestFanOutOneBlockAsyncDFSOutput.java:87)
> > > >> >>>>>>>>
> > > >> >>>>>>>> Caused by: java.lang.IllegalArgumentException: Invalid Java
> > > >> >>>>>>>> version 11.0.9.1
> > > >> >>>>>>>>       at org.apache.hadoop.hbase.io.asyncfs.TestFanOutOneBlockAsyncDFSOutput.setUp(TestFanOutOneBlockAsyncDFSOutput.java:87)
> > > >> >>>>>>>
> > > >> >>>>>>> This failure is not isolated to macOS. I ran this build on
> > > >> >>>>>>> an Ubuntu VM with the same AdoptOpenJDK 11.0.9.1. Why don't
> > > >> >>>>>>> we see this in Jenkins?
> > > >> >>>>>>>
> > > >> >>>>>>> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0,
> > > >> >>>>>>> Time elapsed: 0.011 s <<< FAILURE! - in
> > > >> >>>>>>> org.apache.hadoop.hbase.regionserver.wal.TestAsyncProtobufLog
> > > >> >>>>>>>
> > > >> >>>>>>> [ERROR] org.apache.hadoop.hbase.regionserver.wal.TestAsyncProtobufLog
> > > >> >>>>>>> Time elapsed: 0.003 s  <<< ERROR!
> > > >> >>>>>>>
> > > >> >>>>>>> java.lang.ExceptionInInitializerError
> > > >> >>>>>>>       at org.apache.hadoop.hbase.regionserver.wal.TestAsyncProtobufLog.setUpBeforeClass(TestAsyncProtobufLog.java:56)
> > > >> >>>>>>>       Caused by: java.lang.IllegalArgumentException: Invalid
> > > >> >>>>>>> Java version 11.0.9.1
> > > >> >>>>>>>       at org.apache.hadoop.hbase.regionserver.wal.TestAsyncProtobufLog.setUpBeforeClass(TestAsyncProtobufLog.java:56)
> > > >> >>>>>>>
> > > >> >>>>>>> On Thu, Dec 3, 2020 at 4:05 PM Andrew Purtell <apurtell@apache.org> wrote:
> > > >> >>>>>>>>
> > > >> >>>>>>>>> Please vote on this Apache hbase release candidate, hbase-2.4.0RC1
> > > >> >>>>>>>>>
> > > >> >>>>>>>>> The VOTE will remain open for at least 72 hours.
> > > >> >>>>>>>>>
> > > >> >>>>>>>>> [ ] +1 Release this package as Apache hbase 2.4.0
> > > >> >>>>>>>>> [ ] -1 Do not release this package because ...
> > > >> >>>>>>>>>
> > > >> >>>>>>>>> The tag to be voted on is 2.4.0RC1:
> > > >> >>>>>>>>>
> > > >> >>>>>>>>>   https://github.com/apache/hbase/tree/2.4.0RC1
> > > >> >>>>>>>>>
> > > >> >>>>>>>>> The release files, including signatures, digests, as well
> > > >> >>>>>>>>> as CHANGES.md and RELEASENOTES.md included in this RC can
> > > >> >>>>>>>>> be found at:
> > > >> >>>>>>>>>
> > > >> >>>>>>>>>   https://dist.apache.org/repos/dist/dev/hbase/2.4.0RC1/
> > > >> >>>>>>>>>
> > > >> >>>>>>>>> Customarily Maven artifacts would be available in a staging
> > > >> >>>>>>>>> repository. Unfortunately I was forced to terminate the
> > > >> >>>>>>>>> Maven deploy step after the upload ran for more than four
> > > >> >>>>>>>>> hours and my build equipment needed to be relocated, with
> > > >> >>>>>>>>> loss of network connectivity. This RC has been delayed long
> > > >> >>>>>>>>> enough. A temporary Maven repository is not a requirement
> > > >> >>>>>>>>> for a vote. I will retry Maven deploy tomorrow. I can
> > > >> >>>>>>>>> promise the artifacts for this RC will be staged in Apache
> > > >> >>>>>>>>> Nexus and ready for release well ahead of the earliest
> > > >> >>>>>>>>> possible time this vote can complete.
> > > >> >>>>>>>>>
> > > >> >>>>>>>>> Artifacts were signed with the apurtell@apache.org key
> > > >> >>>>>>>>> which can be found in:
> > > >> >>>>>>>>>
> > > >> >>>>>>>>>   https://dist.apache.org/repos/dist/release/hbase/KEYS
> > > >> >>>>>>>>>
> > > >> >>>>>>>>> The API compatibility report for this RC can be found at:
> > > >> >>>>>>>>>
> > > >> >>>>>>>>>   https://dist.apache.org/repos/dist/dev/hbase/2.4.0RC1/api_compare_2.4.0RC1_to_2.3.0.html
> > > >> >>>>>>>>>
> > > >> >>>>>>>>> The changes are mostly added methods, which conform to the
> > > >> >>>>>>>>> compatibility guidelines for a new minor release. There is
> > > >> >>>>>>>>> one change to the public Region interface that alters the
> > > >> >>>>>>>>> return type of a method. This is equivalent to a removal
> > > >> >>>>>>>>> then addition and can be a binary compatibility problem.
> > > >> >>>>>>>>> However, to your RM's eye the change looks intentional and
> > > >> >>>>>>>>> is part of an API improvement project, and a compatibility
> > > >> >>>>>>>>> method is not possible here because Java doesn't consider
> > > >> >>>>>>>>> return type when deciding if one method signature
> > > >> >>>>>>>>> duplicates another.
> > > >> >>>>>>>>>
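The return-type point can be seen directly with java.lang.invoke.MethodType: a JVM method descriptor includes the return type, so changing a method's return type yields a different descriptor, and callers compiled against the old one fail to link with NoSuchMethodError. List/ArrayList below are stand-ins, not the actual Region interface types:

```java
import java.lang.invoke.MethodType;
import java.util.ArrayList;
import java.util.List;

public class ReturnTypeDemo {
    public static void main(String[] args) {
        // Descriptor of a hypothetical no-arg method before and after a
        // return-type change. The return type is part of the descriptor:
        MethodType before = MethodType.methodType(List.class);
        MethodType after = MethodType.methodType(ArrayList.class);
        System.out.println(before.toMethodDescriptorString()); // ()Ljava/util/List;
        System.out.println(after.toMethodDescriptorString());  // ()Ljava/util/ArrayList;
        // Different descriptors mean the change behaves as a removal plus
        // an addition at link time, and since javac treats methods that
        // differ only in return type as duplicates, the class cannot keep
        // the old signature around as a compatibility method.
    }
}
```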
> > > >> >>>>>>>>> To learn more about Apache HBase, please see
> > > >> >>>>>>>>>
> > > >> >>>>>>>>>   http://hbase.apache.org/
> > > >> >>>>>>>>>
> > > >> >>>>>>>>> Thanks,
> > > >> >>>>>>>>> Your HBase Release Manager
> > > >> >>>>>>>>>
> > > >> >>>>>>>>
> > > >> >>>>>>
> > > >> >>>>>
> > > >> >>>>>
> > > >> >>>>> --
> > > >> >>>>> Best regards,
> > > >> >>>>> Andrew
> > > >> >>>>>
> > > >> >>>>> Words like orphans lost among the crosstalk, meaning torn from
> > > >> >>>>> truth's decrepit hands
> > > >> >>>>>  - A23, Crosstalk
> > > >> >>>>>
> > > >> >>>>
> > > >> >>>>
> > > >> >>>> --
> > > >> >>>> Best regards,
> > > >> >>>> Andrew
> > > >> >>>>
> > > >> >>>> Words like orphans lost among the crosstalk, meaning torn from
> > > >> >>>> truth's decrepit hands
> > > >> >>>>  - A23, Crosstalk
> > > >> >>>>
> > > >> >>>
> > >
> >
> >
> > --
> > Best regards,
> > Andrew
> >
> > Words like orphans lost among the crosstalk, meaning torn from truth's
> > decrepit hands
> >    - A23, Crosstalk
>


-- 
Best regards,
Andrew

Words like orphans lost among the crosstalk, meaning torn from truth's
decrepit hands
   - A23, Crosstalk
