hadoop-common-issues mailing list archives

From "Sean Busbey (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HADOOP-13916) Document how downstream clients should make use of the new shaded client artifacts
Date Fri, 01 Sep 2017 12:22:00 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-13916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16150425#comment-16150425 ]

Sean Busbey commented on HADOOP-13916:

Skipping to the end of HADOOP-11656, the proposal is to publish shaded jars to maven for downstream
projects to build/test against. This repo will be responsible for...?

The repo provides a place where we can show use of the client facing modules, similar to how
Steve mentions the cloud integration stuff.
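
To make the intent concrete, a downstream project would depend only on the shaded client artifacts rather than hadoop-common and its transitive dependencies. A minimal sketch of such a pom fragment (artifact names per the shaded-client work; the version shown is illustrative only) might look like:

```xml
<!-- Sketch: a downstream pom depends only on the shaded client artifacts. -->
<dependencies>
  <!-- Compile against the public API surface. -->
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client-api</artifactId>
    <version>3.0.0-alpha2</version> <!-- illustrative version -->
  </dependency>
  <!-- Runtime scope: relocated third-party classes stay off the compile classpath. -->
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client-runtime</artifactId>
    <version>3.0.0-alpha2</version>
    <scope>runtime</scope>
  </dependency>
</dependencies>
```

Keeping hadoop-client-runtime at runtime scope keeps the shaded third-party classes out of downstream compile-time resolution, which is exactly the isolation the shaded client is meant to provide.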

bq. Does this need to be a separate repo though?

Over in HBase it's been very useful for our similar downstream example to be in its own repo.
For one thing, it ensures that updates to it have attention called to them. If it were in
the main repo it'd be too easy for a dev who introduced a breaking change to just also update
the example at the same time. Additionally, it has recently allowed us to start expressly
showing downstream examples that successfully cross major versions.


Thus far, there's been no downside to that repo being just some github hosted thing. I do,
however, think it'd be better generally for these things to be in repos under the control
of an appropriate PMC.

bq. Do we need to branch it and release it with Hadoop versions?

One gap in the HBase downstream example is that we haven't ever really done releases. Here in Hadoop
we could continue that tradition, but I'd much rather have something we can refer back to
later. That would mean having releases, though hopefully at a much lower rate than the main project.

bq. How does precommit work? 

This is an excellent question. The most familiar course would be an additional JIRA tracker
plus a copy of the existing tracker-specific precommit jobs. That sounds like
a fair bit of overhead, though.

bq. maybe "hadoop-downstream-tests", create one in github as a PoC

I'm happy to do this however the PMC prefers. Our goal is compilable code that we can have
snippets of in project documentation. That documentation should live in the main repo along
with the rest of our web facing stuff. Since the educational value should be in the documentation
that walks through things, we'd hopefully mitigate the long-term risk of, e.g., that code living
in a personal github repo that might cease to exist.

> Document how downstream clients should make use of the new shaded client artifacts
> ----------------------------------------------------------------------------------
>                 Key: HADOOP-13916
>                 URL: https://issues.apache.org/jira/browse/HADOOP-13916
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: documentation
>    Affects Versions: 3.0.0-alpha2
>            Reporter: Sean Busbey
>            Assignee: Sean Busbey
> provide a quickstart that walks through using the new shaded dependencies with Maven
> to create a simple downstream project.

This message was sent by Atlassian JIRA

