From: Sean Owen <sowen@cloudera.com>
Date: Tue, 19 Dec 2017 18:55:47 +0000
Subject: Re: Publishing official docker images for KubernetesSchedulerBackend
To: Erik Erlandson <eerlands@redhat.com>
Cc: dev <dev@spark.apache.org>

Unfortunately, you'll need to chase down the licenses of all the bits that are distributed directly by the project. This was a big job back in the day for the Maven artifacts, and it takes some work to maintain. Most of the work is one-time, at least.
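
If it helps, much of the Maven-side auditing can be bootstrapped with tooling; something like the license-maven-plugin can aggregate the declared licenses of the whole dependency tree (a sketch; the plugin goal is from memory and worth verifying):

    # Summarize the declared license of every transitive dependency.
    mvn org.codehaus.mojo:license-maven-plugin:aggregate-add-third-party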

On Tue, Dec 19, 2017 at 12:53 PM Erik Erlandson <eerlands@redhat.com> wrote:
Agreed that the GPL family would be "toxic."

The current images have been at least informally confirmed to use licenses that are ASF-compatible. Is there an officially sanctioned method of license auditing that can be applied here?

On Tue, Dec 19, 2017 at 11:45 AM, Sean Owen <sowen@cloudera.com> wrote:
I think that's all correct, though the licensing of third-party dependencies is actually a difficult and sticky part. The ASF couldn't make a software release including any GPL software, for example, and it's not just a matter of adding a disclaimer. Any actual bits distributed by the PMC would have to follow all the license rules.

On Tue, Dec 19, 2017 at 12:34 PM Erik Erlandson <eerlands@redhat.com> wrote:
I've been looking a bit more into the ASF's legal posture on licensing and container images. What I have found indicates that the ASF considers container images to be just another variety of distribution channel. As such, it is acceptable to publish official releases; for example, an image such as spark:v2.3.0 built from the v2.3.0 source is fine. It is not acceptable to do something like regularly publish spark:latest built from the head of master.

More detail here:
https://issues.apache.org/jira/browse/LEGAL-270

So as I understand it, making a release-tagged public image as part of each official release does not pose any problems.
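
To illustrate the distinction, release publishing might look something like this sketch (the repository name and Dockerfile path are placeholders, not a settled proposal):

    # Build from the released source tag and publish under a matching tag.
    git checkout v2.3.0
    docker build -t apache/spark:v2.3.0 -f path/to/Dockerfile .
    docker push apache/spark:v2.3.0
    # By contrast, regularly rebuilding from the head of master and pushing
    # a moving "spark:latest" tag would be distributing unreleased code.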

With respect to the licenses of other ancillary dependencies that are also installed on such container images, I noticed this clause in the legal boilerplate for the Flink images:

> As with all Docker images, these likely also contain other software
> which may be under other licenses (such as Bash, etc. from the base
> distribution, along with any direct or indirect dependencies of the
> primary software being contained).

So it may be sufficient to resolve this via disclaimer.

-Erik

On Thu, Dec 14, 2017 at 7:55 PM, Erik Erlandson <eerlands@redhat.com> wrote:
Currently the containers are based on Alpine, which pulls in BSD 2-Clause and MIT licensing:
https://github.com/apache/spark/pull/19717#discussion_r154502824

To the best of my understanding, neither of those poses a problem. If we based the image on CentOS, I'd also expect the licensing of any image dependencies to be compatible.
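
As a rough audit of an Alpine-based image, one could dump the declared license of every installed package straight from apk's database; a sketch (the image name is a stand-in for whatever base we use, and the database path/format is my recollection, worth verifying):

    # Print "package <tab> license" for everything installed in the image.
    docker run --rm openjdk:8-alpine \
        awk -F: '$1 == "P" {pkg = $2} $1 == "L" {print pkg "\t" $2}' \
        /lib/apk/db/installed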

On Thu, Dec 14, 2017 at 7:19 PM, Mark Hamstra <mark@clearstorydata.com> wrote:
What licensing issues come into play?

On Thu, Dec 14, 2017 at 4:00 PM, Erik Erlandson <eerlands@redhat.com> wrote:
We've been discussing the topic of container images a bit more. The Kubernetes back-end operates by executing some specific CMD and ENTRYPOINT logic, which is different from Mesos and probably not practical to unify at this level.

However, these CMD and ENTRYPOINT configurations are essentially just a thin skin on top of an image that is simply an install of a Spark distro. We feel that a single "spark-base" image should be publishable, one that is consumable by kube-spark images, mesos-spark images, and likely any other community image whose primary purpose is running Spark components. The kube-specific Dockerfiles would be written "FROM spark-base" and just add the small command and entrypoint layers. Likewise, the mesos images could add any specialization layers they need on top of the "spark-base" image.
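
For illustration, the layering might look like the sketch below (the base image, file layout, and entrypoint here are placeholders, not a concrete proposal):

    # spark-base (one Dockerfile): nothing but a Spark distro in an image.
    FROM openjdk:8-alpine
    COPY spark-2.3.0-bin /opt/spark
    ENV SPARK_HOME /opt/spark

    # kube-spark (a separate Dockerfile): a thin skin over the base that
    # adds only the kube-specific command and entrypoint layers.
    FROM spark-base
    ENTRYPOINT ["/opt/spark/bin/spark-class"]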

Does this factorization sound reasonable to others?
Cheers,
Erik


On Wed, Nov 29, 2017 at 10:04 AM, Mridul Muralidharan <mridul@gmail.com> wrote:
We do support running on Apache Mesos via docker images, so this would not be restricted to k8s. But unlike Mesos support, which has other modes of running, I believe k8s support depends more heavily on the availability of docker images.


Regards,
Mridul


On Wed, Nov 29, 2017 at 8:56 AM, Sean Owen <sowen@cloudera.com> wrote:
Would it be logical to provide Docker-based distributions of other pieces of Spark, or is this specific to K8S? The problem is that we wouldn't generally provide such a distribution of Spark, for the reasons you give: if we did that, then why not RPMs and so on?

On Wed, Nov 29, 2017 at 10:41 AM Anirudh Ramanathan <ramanathana@google.com> wrote:
In this context, I think the docker images are similar to the binaries rather than an extension. It's packaging the compiled distribution to save people the effort of building one themselves, akin to the binaries or the Python package.

For reference, this is the base dockerfile for the main image that we intend to publish. It's not particularly complicated. The driver and executor images are based on that base image and only customize the CMD (any file/directory inclusions are extraneous and will be removed).

Is there only one way to build it? That's a bit harder to reason about. The base image, I'd argue, is likely always going to be built that way. For the driver and executor images, there may be cases where people want to customize them (like putting all dependencies into them, for example). In those cases, as long as our images are bare-bones, they can use the spark-driver/spark-executor images we publish as the base and build their customization as a layer on top.
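
To make that concrete, a downstream image might layer its customization on a published one along these lines ("spark-driver" as a published name is this thread's proposal rather than an existing artifact; the tag and paths are assumptions):

    # User-side Dockerfile: extend a published bare-bones driver image
    # with application dependencies instead of rebuilding from scratch.
    FROM spark-driver:v2.3.0
    COPY target/my-app-assembly.jar /opt/spark/jars/
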
I think the composability of docker images makes this a bit different from, say, Debian packages. We can publish canonical images that serve as both a complete image for most Spark applications and a stable substrate to build customization upon.

On Wed, Nov 29, 2017 at 7:38 AM, Mark Hamstra <mark@clearstorydata.com> wrote:
It's probably also worth considering whether there is only one, well-defined, correct way to create such an image, or whether this is a reasonable avenue for customization. Part of why we don't do something like maintain and publish canonical Debian packages for Spark is that different organizations doing packaging and distribution of infrastructures or operating systems can reasonably want to do this in a custom (or non-customary) way. If there is really only one reasonable way to do a docker image, then my bias starts to tend more toward the Spark PMC taking on the responsibility to maintain and publish that image. If there is more than one way to do it and publishing a particular image is more of a convenience, then my bias tends more away from maintaining and publishing it.

On Wed, Nov 29, 2017 at 5:14 AM, Sean Owen <sowen@cloudera.com> wrote:
Source code is the primary release; compiled binary releases are conveniences that are also released. A docker image sounds fairly different, though. To the extent it's the standard delivery mechanism for some artifact (think: pyspark on PyPI as well), that makes sense, but is that the situation? If it's more of an extension or alternate presentation of Spark components, that typically wouldn't be part of a Spark release. The ones the PMC takes responsibility for maintaining ought to be the core, critical means of distribution alone.

On Wed, Nov 29, 2017 at 2:52 AM Anirudh Ramanathan <ramanathana@google.com> wrote:
Hi all,

We're all working towards the Kubernetes scheduler backend (full steam ahead!) that's targeted at Spark 2.3. One of the questions that comes up often is docker images.

While we're making Dockerfiles available so that people can create their own docker images from source, ideally we'd want to publish official docker images as part of the release process.

I understand that the ASF has procedures around this, and we would want to get that started to help get these artifacts published by 2.3. I'd love to start a discussion around this and hear the community's thoughts.

--
Thanks,
Anirudh Ramanathan
