hadoop-common-dev mailing list archives

From Eric Yang <ey...@hortonworks.com>
Subject Re: When are incompatible changes acceptable (HDFS-12990)
Date Tue, 09 Jan 2018 23:15:02 GMT
While I agree the original port change was unnecessary, I don't think the Hadoop NN port
change is a bad thing.

I worked on a Hadoop distro whose NN RPC port defaulted to 9000.  When we migrated from
BigInsights to IOP, and now to HDP, we had to move customer Hive metadata to the new NN RPC
port.  It only took one developer (myself) to write the migration tool.  The resulting
workload was not as bad as most people anticipated, because Hadoop depends on its
configuration files to reference the namenode; most of the code works transparently.  The
change also helped harden the downstream testing tools.
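As a sketch of that configuration indirection (host name and port below are illustrative),
clients and services resolve the namenode through a single property in core-site.xml, so a
port change is a one-line edit rather than a code change:

```xml
<!-- core-site.xml: everything that talks to HDFS resolves the namenode
     through this one property, so moving the NN RPC port only means
     updating the value here. Host name and port are illustrative. -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://nn.example.com:9000</value>
  </property>
</configuration>
```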

We will never know how many people are actively working on Hadoop 3.0.0.  Perhaps a couple
hundred developers, or thousands.  I think the switch back may have saved a few developers
some work, but more people could be impacted by an unexpected change in a future minor
release.  I recommend keeping the current values to avoid rule bending and future
frustration.


On 1/9/18, 11:21 AM, "Chris Douglas" <cdouglas@apache.org> wrote:

    Particularly since 9820 isn't in the contiguous range of ports in
    HDFS-9427, is there any value in this change?
    Let's change it back to prevent the disruption to users, but
    downstream projects should treat this as a bug in their tests. Please
    open JIRAs in affected projects. -C
    On Tue, Jan 9, 2018 at 5:18 AM, larry mccay <lmccay@apache.org> wrote:
    > On Mon, Jan 8, 2018 at 11:28 PM, Aaron T. Myers <atm@apache.org> wrote:
    >> Thanks a lot for the response, Larry. Comments inline.
    >> On Mon, Jan 8, 2018 at 6:44 PM, larry mccay <lmccay@apache.org> wrote:
    >>> Question...
    >>> Can this be addressed in some way during or before upgrade that allows it
    >>> to only affect new installs?
    >>> Even a config-based workaround prior to upgrade might make this change
    >>> less disruptive.
    >>> If part of the upgrade process includes a step (maybe even a script) to
    >>> set the NN RPC port explicitly beforehand, it would allow existing
    >>> deployments and related clients to remain whole - otherwise they will
    >>> pick up the new default port.
    >> Perhaps something like this could be done, but I think there are downsides
    >> to anything like this. For example, I'm sure there are plenty of
    >> applications written on top of Hadoop that have tests which hard-code the
    >> port number. Nothing we do in a setup script will help here. If we don't
    >> change the default port back to what it was, these tests will likely all
    >> have to be updated.
    > I may not have made my point clear enough.
    > What I meant to say is to fix the default port but direct folks to
    > explicitly set the port they are using in a deployment (the current
    > default) so that it doesn't change out from under them - unless they are
    > fine with it changing.
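The kind of pre-upgrade pinning described above can be a one-property change. A sketch,
with an illustrative host name, pinning the pre-3.0 default port (8020) so an upgrade
cannot silently move it:

```xml
<!-- hdfs-site.xml: explicitly set the NN RPC endpoint to the value the
     deployment already uses, so a change in the compiled-in default has
     no effect on this cluster. Host name is illustrative. -->
<property>
  <name>dfs.namenode.rpc-address</name>
  <value>nn.example.com:8020</value>
</property>
```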
    >>> Meta note: we shouldn't be so pedantic about policy that we can't back
    >>> out something that is considered a bug or even mistake.
    >> This is my bigger point. Rigidly adhering to the compat guidelines in this
    >> instance helps almost no one, while hurting many folks.
    >> We basically made a mistake when we decided to change the default NN port
    >> with little upside, even between major versions. We discovered this very
    >> quickly, and we have an opportunity to fix it now and in so doing likely
    >> disrupt very, very few users and downstream applications. If we don't
    >> change it, we'll be causing difficulty for our users, downstream
    >> developers, and ourselves, potentially for years.
    > Agreed.
    >> Best,
    >> Aaron
