hadoop-common-issues mailing list archives

From "Sean Busbey (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HADOOP-11656) Classpath isolation for downstream clients
Date Tue, 12 May 2015 21:21:04 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-11656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14540773#comment-14540773 ]

Sean Busbey commented on HADOOP-11656:

bq. If we're going the route of shading for clients, IMO there is less incentive to use a different
mechanism on the framework side; what would be a reason not to consider shading on the framework
side if we're shading for the client? I think it would be great to provide the same type of
solution for both the client side and the framework side, and that would simplify things
a lot for users. Also, note that the build side of things would bring those two aspects together
anyway (see below).

There's a whole lot that can go wrong shading, especially in a framework as complicated as
YARN. So long as we can provide a cleaner abstraction server side, we should seek to do that.
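For reference, client-side shading along these lines is typically done with the maven-shade-plugin's relocation feature. A minimal sketch follows; the relocated package prefix is illustrative only, not the actual layout used in HADOOP-11804:

```xml
<!-- Sketch: relocate Guava inside a shaded client jar so it cannot
     conflict with a downstream application's own Guava version.
     The shadedPattern prefix here is an example, not Hadoop's actual one. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
      <configuration>
        <relocations>
          <relocation>
            <pattern>com.google.common</pattern>
            <shadedPattern>org.apache.hadoop.shaded.com.google.common</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
```

Relocation rewrites both the bundled class files and the bytecode references to them, which is exactly why it works for clients but gets hairy for a framework like YARN that reflects on class names and hands classloaders to user code.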

bq. The name "hadoop-client-reactor" is rather awkward, as the reactor has a specific meaning in
programming, and this is not that.

Fair enough, using obsolete maven terminology is probably a bad idea. In the current patch
on HADOOP-11804 I went with "hadoop-client-modules" for the name of the multi-module pom.

bq. Unfortunately, it doesn't provide much upgrade help for applications that rely on the
classes found in the fallback case.

bq. Could you please elaborate on this point? Do you mean things will break if user code relied
on a Hadoop dependency implicitly (without bringing their own copy) and Hadoop upgraded it
to an incompatible version? Note that this type of issue may exist with the OSGi approach
as well. If OSGi exported that particular dependency, then the user would start relying on
it implicitly too unless they bring their own copy. And in that case, if Hadoop
upgraded that dependency, the user code would break in the same manner.

bq. If Hadoop does not intend to support that use case, OSGi does allow the possibility of not
exporting these dependencies, in which case the user code will simply break right from the
beginning until the user fixes their build to bring the dependency.

We get around this issue when using OSGi containers by exporting a different set of dependencies
depending on what the client application tells us it needs. By default in branch-2 we presume
not telling us means they need whatever the last release on branch-2 was. By default in trunk
/ branch-3 we presume it means "export nothing."

When a client application says "I need Hadoop 2.2 dependencies" we can export a set of dependencies
that matches that release. See the paragraph that starts with "To maintain backwards compatibility..."
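In OSGi terms, the "export the dependency set the client asked for" idea above would amount to a per-release dependency bundle. A hedged sketch of such a manifest; the bundle name, packages, and versions are illustrative, not anything Hadoop actually ships:

```
# Hypothetical MANIFEST.MF fragment for a "Hadoop 2.2 dependencies" bundle.
# A client that declares it needs the 2.2 dependency set would be wired
# to this bundle; the packages and versions below are examples only.
Bundle-SymbolicName: org.apache.hadoop.deps-2.2
Bundle-Version: 2.2.0
Export-Package: com.google.common.base;version="11.0.2",
 com.google.common.collect;version="11.0.2",
 org.apache.commons.logging;version="1.1.1"
```

The same idea carries over to the shading approach: a versioned "dependencies from release X" artifact plays the role of this bundle's export list.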

bq. The only caveat is what the underlying system bundles (Hadoop+system) should export. If we're
going to use OSGi, I think we should only export the actual public APIs and types the user
code can couple to. The implication of that decision is that things will fail miserably if
any of the implicit dependencies is missing from the user code, and we'd spend a lot of time
tracking down missing dependencies for users. Trust me, this is a non-trivial support cost.

This is exactly what the "hadoop dependencies from version X" bundles will solve.

bq. I haven't thought through this completely, but we do need to think about the impact on user
builds. To create their app (e.g. an MR app), what Maven artifacts would they need to depend
on? Note that users usually have a single project for their client as well as the code that's
executed on the cluster. Do we anticipate any changes users are required to make (e.g. cleaning
up their 3rd party dependencies)? Although in theory everyone should have a clean pom,
sadly the reality is very different, and we need to be able to tell users what
is needed before they can start leveraging this.

This is what the new opt-in modules for downstream folks aim to solve. Take a look at the
POC on HADOOP-11804. I'm currently using HBase as a downstream test application for the HDFS
side of things.
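For a downstream build, consuming such an opt-in client artifact would look something like the following POM fragment. The artifactId is extrapolated from the "hadoop-client-modules" naming above and is hypothetical here; the actual module names and version in HADOOP-11804 may differ:

```xml
<!-- Hypothetical downstream dependency on a shaded client artifact that
     pulls in no third-party dependencies; artifactId and version are
     illustrative, not confirmed by this discussion. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client-api</artifactId>
  <version>3.0.0</version>
</dependency>
```

The point for users with messy poms is that this single coordinate replaces the transitive sprawl of today's hadoop-client, so their own Guava, Jackson, etc. versions stop colliding with Hadoop's.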

> Classpath isolation for downstream clients
> ------------------------------------------
>                 Key: HADOOP-11656
>                 URL: https://issues.apache.org/jira/browse/HADOOP-11656
>             Project: Hadoop Common
>          Issue Type: New Feature
>            Reporter: Sean Busbey
>            Assignee: Sean Busbey
>              Labels: classloading, classpath, dependencies, scripts, shell
>         Attachments: HADOOP-11656_proposal.md
> Currently, Hadoop exposes downstream clients to a variety of third party libraries. As
our code base grows and matures we increase the set of libraries we rely on. At the same time,
as our user base grows we increase the likelihood that some downstream project will run into
a conflict while attempting to use a different version of some library we depend on. This
has already happened several times with e.g. Guava, for HBase, Accumulo, and Spark (and I'm
sure others).
> While YARN-286 and MAPREDUCE-1700 provided an initial effort, they default to off and
they don't do anything to help dependency conflicts on the driver side or for folks talking
to HDFS directly. This should serve as an umbrella for changes needed to do things thoroughly
on the next major version.
> We should ensure that downstream clients
> 1) can depend on a client artifact for each of HDFS, YARN, and MapReduce that doesn't
pull in any third party dependencies
> 2) only see our public API classes (or as close to this as feasible) when executing user
provided code, whether client side in a launcher/driver or on the cluster in a container or
within MR.
> This provides us with a double benefit: users get less grief when they want to run substantially
ahead or behind the versions we need and the project is freer to change our own dependency
versions because they'll no longer be in our compatibility promises.
> Project specific task jiras to follow after I get some justifying use cases written in
the comments.

This message was sent by Atlassian JIRA
