Subject: Re: [DISCUSS] About creation of Hadoop Thirdparty repository for shaded artifacts
From: Ayush Saxena <ayushtkn@gmail.com>
Date: Thu, 9 Jan 2020 16:47:00 +0530
To: Brahma Reddy Battula
Cc: Hadoop Common, hdfs-dev@hadoop.apache.org, mapreduce-dev@hadoop.apache.org, yarn-dev
Message-Id: <22D1616B-8094-4E52-98BD-A9E23A35ABE0@gmail.com>

Hi All,
FYI: We will be going ahead with the present approach and will merge by
tomorrow EOD, considering no one has objections.

Thanx Everyone!!!

-Ayush

> On 07-Jan-2020, at 9:22 PM, Brahma Reddy Battula wrote:
>
> Hi Sree Vaddi, Owen, Stack, Duo Zhang,
>
> We can move forward based on your comments, just waiting for your
> reply. Hope all of your comments are answered. (Unification we can think
> of in a parallel thread, as Vinay mentioned.)
>
> On Mon, 6 Jan 2020 at 6:21 PM, Vinayakumar B <vinayakumarb@apache.org>
> wrote:
>
>> Hi Sree,
>>
>>> apache/hadoop-thirdparty, how would it fit into ASF? As an Incubating
>>> Project? Or as a TLP?
>>> Or as a new project definition?
>>
>> As already mentioned by Ayush, this will be a subproject of Hadoop.
>> Releases will be voted on by the Hadoop PMC as per the ASF process.
>>
>>> The effort to streamline and put in an accepted standard for the
>>> dependencies that require shading seems beyond the siloed efforts of
>>> hadoop, hbase, etc.
>>
>>> I propose we bring all the decision makers from all these artifacts into
>>> one room and decide the best course of action.
>>> I am looking at it as: no project should ever have to shade any artifacts
>>> except as an absolutely necessary alternative.
>>
>> This is the ideal proposal for any project. But unfortunately some projects
>> take their own course based on need.
>>
>> In the current case of protobuf in Hadoop:
>> The protobuf upgrade from 2.5.0 (which is already EOL) was not taken up, to
>> avoid downstream failures. Since Hadoop is a platform, its dependencies
>> will get added to downstream projects' classpath. So any change in Hadoop's
>> dependencies will directly affect downstreams. Hadoop strictly follows
>> backward compatibility as far as possible.
>> Though protobuf provides wire compatibility b/w versions, it doesn't
>> provide compatibility for generated sources.
>> Now, to support ARM, the protobuf upgrade is mandatory. Using the shading
>> technique, Hadoop internally can upgrade to shaded protobuf 3.x and
>> still have the 2.5.0 protobuf (deprecated) for downstreams.
>>
>> This shading is necessary to have both versions of protobuf supported
>> (2.5.0, non-shaded, for downstream's classpath and 3.x, shaded, for
>> hadoop's internal usage).
>> And this entire work is to be done before the 3.3.0 release.
>>
>> So, though it's ideal to make a common approach for all projects, I suggest
>> for Hadoop we go ahead as per the current approach.
>> We can also start a parallel effort to address these problems in a
>> separate discussion/proposal. Once a solution is available we can revisit
>> and adopt the new solution accordingly in all such projects (ex: HBase,
>> Hadoop, Ratis).
>>
>> -Vinay
>>
>>> On Mon, Jan 6, 2020 at 12:39 AM Ayush Saxena wrote:
>>>
>>> Hey Sree
>>>
>>>> apache/hadoop-thirdparty, how would it fit into ASF? As an Incubating
>>>> Project? Or as a TLP?
>>>> Or as a new project definition?
>>>
>>> A subproject of Apache Hadoop, having its own independent release cycles.
>>> Maybe you can put this into the same column as ozone or as
>>> submarine (a couple of months ago).
>>>
>>> Unifying for all seems interesting, but each project is independent and has
>>> its own limitations and way of thinking. I don't think it would be an easy
>>> task to bring all to the same table and get them to agree on a common
>>> approach.
>>>
>>> I guess this has been in discussion for quite long, and there hasn't
>>> been any other alternative suggested. Still, we can hold up for a week in
>>> case someone comes up with a better solution; else we can continue in the
>>> present direction.
>>>
>>> -Ayush
>>>
>>> On Sun, 5 Jan 2020 at 05:03, Sree Vaddi wrote:
>>>
>>>> apache/hadoop-thirdparty, how would it fit into ASF? As an Incubating
>>>> Project? Or as a TLP?
>>>> Or as a new project definition?
>>>>
>>>> The effort to streamline and put in an accepted standard for the
>>>> dependencies that require shading seems beyond the siloed efforts of
>>>> hadoop, hbase, etc.
>>>>
>>>> I propose we bring all the decision makers from all these artifacts into
>>>> one room and decide the best course of action. I am looking at it as: no
>>>> project should ever have to shade any artifacts except as an absolutely
>>>> necessary alternative.
>>>>
>>>> Thank you.
>>>> /Sree
>>>>
>>>> On Saturday, January 4, 2020, 7:49:18 AM PST, Vinayakumar B <
>>>> vinayakumarb@apache.org> wrote:
>>>>
>>>> Hi,
>>>> Sorry for the late reply.
>>>>
>>>>> To be exact, how can we better use the thirdparty repo? Looking at
>>>>> HBase as an example, it looks like everything that is known to break a
>>>>> lot after an update gets shaded into the hbase-thirdparty artifact:
>>>>> guava, netty, etc.
>>>>> Is the purpose to isolate these naughty dependencies?
>>>>
>>>> Yes, shading is to isolate these naughty dependencies from the downstream
>>>> classpath and have independent control over these upgrades without
>>>> breaking downstreams.
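The isolation described above is done via class relocation in maven-shade-plugin. The following is a minimal sketch of such a relocation block; the plugin coordinates are real, but the execution details shown here are an illustrative assumption, not the actual hadoop-thirdparty pom:

```xml
<!-- Sketch only: relocates every com.google.protobuf class under the
     org.apache.hadoop.thirdparty prefix inside the shaded jar. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <relocations>
          <relocation>
            <pattern>com.google.protobuf</pattern>
            <shadedPattern>org.apache.hadoop.thirdparty.com.google.protobuf</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
```

With a configuration along these lines, the shaded jar carries relocated copies of the classes, so only the relocated names appear on hadoop's internal classpath while the original names stay free for downstreams.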
>>>>
>>>> The first PR, https://github.com/apache/hadoop-thirdparty/pull/1, to
>>>> create the protobuf shaded jar is ready to merge.
>>>>
>>>> Please take a look if anyone is interested; it will be merged maybe after
>>>> two days if there are no objections.
>>>>
>>>> -Vinay
>>>>
>>>> On Thu, Oct 10, 2019 at 3:30 AM Wei-Chiu Chuang wrote:
>>>>
>>>>> Hi, I am late to this but I am keen to understand more.
>>>>>
>>>>> To be exact, how can we better use the thirdparty repo? Looking at HBase
>>>>> as an example, it looks like everything that is known to break a lot
>>>>> after an update gets shaded into the hbase-thirdparty artifact: guava,
>>>>> netty, etc.
>>>>> Is the purpose to isolate these naughty dependencies?
>>>>>
>>>>> On Wed, Oct 9, 2019 at 12:38 PM Vinayakumar B <vinayakumarb@apache.org>
>>>>> wrote:
>>>>>
>>>>>> Hi All,
>>>>>>
>>>>>> I have updated the PR as per Owen O'Malley's suggestions:
>>>>>>
>>>>>> i. Renamed the module to 'hadoop-shaded-protobuf37'
>>>>>> ii. Kept the shaded package as 'o.a.h.thirdparty.protobuf37'
>>>>>>
>>>>>> Please review!!
>>>>>>
>>>>>> Thanks,
>>>>>> -Vinay
>>>>>>
>>>>>> On Sat, Sep 28, 2019 at 10:29 AM 张铎 (Duo Zhang) <palomino219@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> For HBase we have a separate repo for hbase-thirdparty:
>>>>>>>
>>>>>>> https://github.com/apache/hbase-thirdparty
>>>>>>>
>>>>>>> We will publish the artifacts to nexus so we do not need to include
>>>>>>> binaries in our git repo, just add a dependency in the pom:
>>>>>>>
>>>>>>> https://mvnrepository.com/artifact/org.apache.hbase.thirdparty/hbase-shaded-protobuf
>>>>>>>
>>>>>>> And it has its own release cycles, only when there are special
>>>>>>> requirements or we want to upgrade some of the dependencies.
>>>>>>> This is the vote thread for the newest release, where we want to
>>>>>>> provide a shaded gson for jdk7:
>>>>>>>
>>>>>>> https://lists.apache.org/thread.html/f12c589baabbc79c7fb2843422d4590bea982cd102e2bd9d21e9884b@%3Cdev.hbase.apache.org%3E
>>>>>>>
>>>>>>> Thanks.
>>>>>>>
>>>>>>> Vinayakumar B wrote on Sat, Sep 28, 2019 at 1:28 AM:
>>>>>>>
>>>>>>>> Please find replies inline.
>>>>>>>>
>>>>>>>> -Vinay
>>>>>>>>
>>>>>>>> On Fri, Sep 27, 2019 at 10:21 PM Owen O'Malley <owen.omalley@gmail.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> I'm very unhappy with this direction. In particular, I don't think
>>>>>>>>> git is a good place for distribution of binary artifacts. Furthermore,
>>>>>>>>> the PMC shouldn't be releasing anything without a release vote.
>>>>>>>>
>>>>>>>> The proposed solution doesn't release any binaries in git. It's
>>>>>>>> actually a complete sub-project which follows the entire release
>>>>>>>> process, including a VOTE in public. I have already mentioned that the
>>>>>>>> release process is similar to hadoop's.
>>>>>>>> To be specific, it uses the (almost) same script used in hadoop to
>>>>>>>> generate artifacts, sign and deploy to the staging repository. Please
>>>>>>>> let me know if I am conveying anything wrong.
>>>>>>>>
>>>>>>>>> I'd propose that we make a third party module that contains the
>>>>>>>>> *source* of the pom files to build the relocated jars. This should
>>>>>>>>> absolutely be treated as a last resort for the mostly Google projects
>>>>>>>>> that regularly break binary compatibility (eg. Protobuf & Guava).
>>>>>>>>
>>>>>>>> The same has been implemented in the PR
>>>>>>>> https://github.com/apache/hadoop-thirdparty/pull/1.
>>>>>>>> Please check and let me know if I misunderstood. Yes, this is the last
>>>>>>>> option we have AFAIK.
>>>>>>>>
>>>>>>>>> In terms of naming, I'd propose something like:
>>>>>>>>>
>>>>>>>>>   org.apache.hadoop.thirdparty.protobuf2_5
>>>>>>>>>   org.apache.hadoop.thirdparty.guava28
>>>>>>>>>
>>>>>>>>> In particular, I think we absolutely need to include the version of
>>>>>>>>> the underlying project. On the other hand, since we should not be
>>>>>>>>> shading *everything*, we can drop the leading com.google.
>>>>>>>>
>>>>>>>> IMO, this naming convention is easy for identifying the underlying
>>>>>>>> project, but it will be difficult to maintain going forward if the
>>>>>>>> underlying project's version changes. Since the thirdparty module has
>>>>>>>> its own releases, each of those releases can be mapped to a specific
>>>>>>>> version of the underlying project. Even the binary artifact can include
>>>>>>>> a MANIFEST with underlying project details, as per Steve's suggestion
>>>>>>>> on HADOOP-13363.
>>>>>>>> That said, if you still prefer to have the project number in the
>>>>>>>> artifact id, it can be done.
>>>>>>>>
>>>>>>>>> The Hadoop project can make releases of the thirdparty module:
>>>>>>>>>
>>>>>>>>>   <dependency>
>>>>>>>>>     <groupId>org.apache.hadoop</groupId>
>>>>>>>>>     <artifactId>hadoop-thirdparty-protobuf25</artifactId>
>>>>>>>>>     <version>1.0</version>
>>>>>>>>>   </dependency>
>>>>>>>>>
>>>>>>>>> Note that the version has to be the hadoop thirdparty release number,
>>>>>>>>> which is part of why you need to have the underlying version in the
>>>>>>>>> artifact name. These we can push to maven central as new releases
>>>>>>>>> from Hadoop.
>>>>>>>>
>>>>>>>> Exactly; the same has been implemented in the PR. The hadoop-thirdparty
>>>>>>>> module has its own releases. But in the HADOOP jira, thirdparty
>>>>>>>> versions can be differentiated using the prefix "thirdparty-".
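The naming question above concerns only the package prefix; the relocation itself is a mechanical rewrite of class references. The following toy, self-contained sketch shows the mapping; `RelocationDemo` is a hypothetical illustration written for this thread, not project code:

```java
// Toy model of what shade-plugin relocation (and the one-time source
// relocation in hadoop) does to protobuf class references.
class RelocationDemo {
    static final String PATTERN = "com.google.protobuf";
    static final String SHADED = "org.apache.hadoop.thirdparty.com.google.protobuf";

    // Rewrites a class reference under the shaded prefix; leaves others alone.
    static String relocate(String reference) {
        return reference.startsWith(PATTERN)
                ? SHADED + reference.substring(PATTERN.length())
                : reference;
    }

    public static void main(String[] args) {
        // -> org.apache.hadoop.thirdparty.com.google.protobuf.Message
        System.out.println(relocate("com.google.protobuf.Message"));
        // Non-protobuf references are untouched.
        System.out.println(relocate("org.apache.hadoop.ipc.Client"));
    }
}
```

Downstream code keeps seeing the original com.google.protobuf names from protobuf-java:2.5.0, while hadoop's internals reference only the relocated names.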
>>>>>>>>
>>>>>>>> The same solution is being followed in HBase. Maybe people involved in
>>>>>>>> HBase can add some points here.
>>>>>>>>
>>>>>>>>> Thoughts?
>>>>>>>>>
>>>>>>>>> .. Owen
>>>>>>>>>
>>>>>>>>> On Fri, Sep 27, 2019 at 8:38 AM Vinayakumar B <vinayakumarb@apache.org>
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>>> Hi All,
>>>>>>>>>>
>>>>>>>>>> I wanted to discuss the separate repo for thirdparty dependencies
>>>>>>>>>> which we need to shade and include in Hadoop components' jars.
>>>>>>>>>>
>>>>>>>>>> Apologies for the big text ahead, but this needs clear explanation!!
>>>>>>>>>>
>>>>>>>>>> Right now the most needed such dependency is protobuf. The protobuf
>>>>>>>>>> dependency was not upgraded from 2.5.0 onwards for fear that
>>>>>>>>>> downstream builds, which depend on the transitive protobuf dependency
>>>>>>>>>> coming from hadoop's jars, may fail with the upgrade. Apparently
>>>>>>>>>> protobuf does not guarantee source compatibility, though it
>>>>>>>>>> guarantees wire compatibility between versions. Because of this
>>>>>>>>>> behavior, a version upgrade may cause breakage in known and unknown
>>>>>>>>>> (private?) downstreams.
>>>>>>>>>>
>>>>>>>>>> So to tackle this, we came up with the following proposal in
>>>>>>>>>> HADOOP-13363.
>>>>>>>>>>
>>>>>>>>>> Luckily, as far as I know, no APIs, either public to users or between
>>>>>>>>>> Hadoop processes, directly use protobuf classes in signatures. (If
>>>>>>>>>> any exist, please let us know.)
>>>>>>>>>>
>>>>>>>>>> Proposal:
>>>>>>>>>> ------------
>>>>>>>>>>
>>>>>>>>>>   1. Create artifact(s) which contain shaded dependencies. All such
>>>>>>>>>> shading/relocation will be with the known prefix
>>>>>>>>>> **org.apache.hadoop.thirdparty.**.
>>>>>>>>>>   2.
>>>>>>>>>> Right now the protobuf jar (ex:
>>>>>>>>>> o.a.h.thirdparty:hadoop-shaded-protobuf) to start with; all
>>>>>>>>>> **com.google.protobuf** classes will be relocated as
>>>>>>>>>> **org.apache.hadoop.thirdparty.com.google.protobuf**.
>>>>>>>>>>   3. Hadoop modules which need protobuf as a dependency will add this
>>>>>>>>>> shaded artifact as a dependency (ex:
>>>>>>>>>> o.a.h.thirdparty:hadoop-shaded-protobuf).
>>>>>>>>>>   4. All previous usages of "com.google.protobuf" will be relocated
>>>>>>>>>> to "org.apache.hadoop.thirdparty.com.google.protobuf" in the code and
>>>>>>>>>> committed. Please note, this replacement is one-time, directly in the
>>>>>>>>>> source code, NOT during compile and package.
>>>>>>>>>>   5. Once all usages of "com.google.protobuf" are relocated, hadoop
>>>>>>>>>> doesn't care which version of the original "protobuf-java" is in the
>>>>>>>>>> dependency tree.
>>>>>>>>>>   6. Just keep "protobuf-java:2.5.0" in the dependency tree so as not
>>>>>>>>>> to break the downstreams. But hadoop will actually be using the
>>>>>>>>>> latest protobuf present in "o.a.h.thirdparty:hadoop-shaded-protobuf".
>>>>>>>>>>
>>>>>>>>>>   7. Coming back to the separate repo, the following are the most
>>>>>>>>>> appropriate reasons for keeping the shaded dependency artifact in a
>>>>>>>>>> separate repo instead of a submodule:
>>>>>>>>>>
>>>>>>>>>>   7a. These artifacts need not be built all the time. They need to be
>>>>>>>>>> built only when there is a change in the dependency version or the
>>>>>>>>>> build process.
>>>>>>>>>>   7b. If added as a "submodule in the Hadoop repo",
>>>>>>>>>> maven-shade-plugin:shade will execute only in the package phase.
>>>>>>>>>> That means "mvn compile" or "mvn test-compile" would fail: this
>>>>>>>>>> artifact would not yet have the relocated classes (it would still
>>>>>>>>>> have the original classes), resulting in compilation failure.
>>>>>>>>>> Workaround: build the thirdparty submodule first and exclude the
>>>>>>>>>> "thirdparty" submodule in other executions. This would be a complex
>>>>>>>>>> process compared to keeping it in a separate repo.
>>>>>>>>>>
>>>>>>>>>>   7c. The separate repo will be a subproject of Hadoop, using the
>>>>>>>>>> same HADOOP jira project, with different versioning prefixed with
>>>>>>>>>> "thirdparty-" (ex: thirdparty-1.0.0).
>>>>>>>>>>   7d. The separate repo will have the same release process as Hadoop.
>>>>>>>>>>
>>>>>>>>>> HADOOP-13363 (https://issues.apache.org/jira/browse/HADOOP-13363) is
>>>>>>>>>> an umbrella jira tracking the changes for the protobuf upgrade.
>>>>>>>>>>
>>>>>>>>>> A PR (https://github.com/apache/hadoop-thirdparty/pull/1) has been
>>>>>>>>>> raised for separate repo creation in HADOOP-16595
>>>>>>>>>> (https://issues.apache.org/jira/browse/HADOOP-16595).
>>>>>>>>>>
>>>>>>>>>> Please provide your inputs on the proposal and review the PR to
>>>>>>>>>> proceed with the proposal.
>>>>>>>>>>
>>>>>>>>>> -Thanks,
>>>>>>>>>> Vinay
>>>>>>>>>>
>>>>>>>>>> On Fri, Sep 27, 2019 at 11:54 AM Vinod Kumar Vavilapalli
>>>>>>>>>> <vinodkv@apache.org> wrote:
>>>>>>>>>>
>>>>>>>>>>> Moving the thread to the dev lists.
>>>>>>>>>>>
>>>>>>>>>>> Thanks
>>>>>>>>>>> +Vinod
>>>>>>>>>>>
>>>>>>>>>>>> On Sep 23, 2019, at 11:43 PM, Vinayakumar B
>>>>>>>>>>>> <vinayakumarb@apache.org> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>> Thanks Marton,
>>>>>>>>>>>>
>>>>>>>>>>>> The currently created 'hadoop-thirdparty' repo is empty right now.
>>>>>>>>>>>> Whether to use that repo for the shaded artifact or not will be
>>>>>>>>>>>> monitored in the HADOOP-13363 umbrella jira. Please feel free to
>>>>>>>>>>>> join the discussion.
>>>>>>>>>>>>
>>>>>>>>>>>> There is no existing codebase being moved out of the hadoop repo,
>>>>>>>>>>>> so I think right now we are good to go.
>>>>>>>>>>>>
>>>>>>>>>>>> -Vinay
>>>>>>>>>>>>
>>>>>>>>>>>> On Mon, Sep 23, 2019 at 11:38 PM Marton Elek <elek@apache.org>
>>>>>>>>>>>> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> I am not sure if it's defined when a vote is required.
>>>>>>>>>>>>>
>>>>>>>>>>>>> https://www.apache.org/foundation/voting.html
>>>>>>>>>>>>>
>>>>>>>>>>>>> Personally I think it's a big enough change to send a notification
>>>>>>>>>>>>> to the dev lists with a 'lazy consensus' closure.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Marton
>>>>>>>>>>>>>
>>>>>>>>>>>>> On 2019/09/23 17:46:37, Vinayakumar B <vinayakumarb@apache.org>
>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>> Hi,
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> As discussed in HADOOP-13363, the protobuf 3.x jar (and maybe
>>>>>>>>>>>>>> more in future) will be kept as a shaded artifact in a separate
>>>>>>>>>>>>>> repo, which will be referred to as a dependency in hadoop
>>>>>>>>>>>>>> modules. This approach avoids shading of every submodule during
>>>>>>>>>>>>>> the build.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> So the question is: is any VOTE required before asking to create
>>>>>>>>>>>>>> a git repo?
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On the self-serve platform
>>>>>>>>>>>>>> https://gitbox.apache.org/setup/newrepo.html I can see that the
>>>>>>>>>>>>>> requester should be a PMC member.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Wanted to confirm here first.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> -Vinay
>
> --
> --Brahma Reddy Battula

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-help@hadoop.apache.org