Subject: Re: Why does the user need write permission on the location of external hive table?
From: Sandeep Giri
To: user@hive.apache.org
Date: Mon, 6 Jun 2016 21:29:12 +0530

Yes, Mich, that's right. That folder is read-only to me.

That's my question: why do we need modification permission on the location
while creating an external table? This data is read-only. In Hive, how can
we process huge data on which we don't have write permission? Is cloning
this data the only possibility?

On May 31, 2016 3:15 PM, "Mich Talebzadeh" wrote:

> Right, that directory belongs to hdfs:hdfs and no one else bar that user
> can write to it.
>
> If you are connecting via beeline you need to specify the user and
> password:
>
> beeline -u jdbc:hive2://rhes564:10010/default
> org.apache.hive.jdbc.HiveDriver -n hduser -p xxxx
>
> When I look at the permissions I see that only hdfs can write to it, not
> user sandeep?
>
> HTH
>
> Dr Mich Talebzadeh
>
> LinkedIn: https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
> http://talebzadehmich.wordpress.com
>
> On 31 May 2016 at 09:20, Sandeep Giri <sandeep@cloudxlab.com> wrote:
>
>> Yes, when I run hadoop fs it gives results correctly.
>>
>> hadoop fs -ls /data/SentimentFiles/SentimentFiles/upload/data/tweets_raw/
>> Found 30 items
>> -rw-r--r--   3 hdfs hdfs       6148 2015-12-04 15:19 /data/SentimentFiles/SentimentFiles/upload/data/tweets_raw/.DS_Store
>> -rw-r--r--   3 hdfs hdfs     803323 2015-12-04 15:19 /data/SentimentFiles/SentimentFiles/upload/data/tweets_raw/FlumeData.1367523670393.gz
>> -rw-r--r--   3 hdfs hdfs     284355 2015-12-04 15:19 /data/SentimentFiles/SentimentFiles/upload/data/tweets_raw/FlumeData.1367523670394.gz
>> ....
>>
>> On Tue, May 31, 2016 at 1:42 PM, Mich Talebzadeh <mich.talebzadeh@gmail.com> wrote:
>>
>>> Is this location correct and valid?
>>>
>>> LOCATION '/data/SentimentFiles/SentimentFiles/upload/data/tweets_raw/'
>>>
>>> Dr Mich Talebzadeh
>>>
>>> LinkedIn: https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>>> http://talebzadehmich.wordpress.com
>>>
>>> On 31 May 2016 at 08:50, Sandeep Giri <sandeep@cloudxlab.com> wrote:
>>>
>>>> Hi Hive Team,
>>>>
>>>> As per my understanding, in Hive you can create two kinds of tables:
>>>> managed and external.
>>>>
>>>> In the case of a managed table, you own the data, and hence when you
>>>> drop the table the data is deleted.
>>>>
>>>> In the case of an external table, you don't have ownership of the
>>>> data, and hence when you delete such a table the underlying data is
>>>> not deleted; only the metadata is deleted.
>>>>
>>>> Now, recently I have observed that you cannot create an external table
>>>> over a location on which you don't have write (modification)
>>>> permission in HDFS. I completely fail to understand this.
>>>>
>>>> Use case: it is quite common that the data you are churning is huge
>>>> and read-only. So, to churn such data via Hive, will you have to copy
>>>> this huge data to a location on which you have write permission?
>>>>
>>>> Please help.
>>>>
>>>> My data is located in an HDFS folder
>>>> (/data/SentimentFiles/SentimentFiles/upload/data/tweets_raw/) on which
>>>> I only have read-only permission, and I am trying to execute the
>>>> following command:
>>>>
>>>> CREATE EXTERNAL TABLE tweets_raw (
>>>>     id BIGINT,
>>>>     created_at STRING,
>>>>     source STRING,
>>>>     favorited BOOLEAN,
>>>>     retweet_count INT,
>>>>     retweeted_status STRUCT<
>>>>         text:STRING,
>>>>         users:STRUCT<screen_name:STRING,name:STRING>>,
>>>>     entities STRUCT<
>>>>         urls:ARRAY<STRUCT<expanded_url:STRING>>,
>>>>         user_mentions:ARRAY<STRUCT<screen_name:STRING,name:STRING>>,
>>>>         hashtags:ARRAY<STRUCT<text:STRING>>>,
>>>>     text STRING,
>>>>     user1 STRUCT<
>>>>         screen_name:STRING,
>>>>         name:STRING,
>>>>         friends_count:INT,
>>>>         followers_count:INT,
>>>>         statuses_count:INT,
>>>>         verified:BOOLEAN,
>>>>         utc_offset:STRING, -- was INT but nulls are strings
>>>>         time_zone:STRING>,
>>>>     in_reply_to_screen_name STRING,
>>>>     year INT,
>>>>     month INT,
>>>>     day INT,
>>>>     hour INT
>>>> )
>>>> ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
>>>> WITH SERDEPROPERTIES ("ignore.malformed.json" = "true")
>>>> LOCATION '/data/SentimentFiles/SentimentFiles/upload/data/tweets_raw/';
>>>>
>>>> It throws the following error:
>>>>
>>>> FAILED: Execution Error, return code 1 from
>>>> org.apache.hadoop.hive.ql.exec.DDLTask.
>>>> MetaException(message:java.security.AccessControlException: Permission
>>>> denied: user=sandeep, access=WRITE,
>>>> inode="/data/SentimentFiles/SentimentFiles/upload/data/tweets_raw":hdfs:hdfs:drwxr-xr-x
>>>>     at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
>>>>     at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:219)
>>>>     at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
>>>>     at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1771)
>>>>     at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1755)
>>>>     at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPathAccess(FSDirectory.java:1729)
>>>>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAccess(FSNamesystem.java:8348)
>>>>     at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.checkAccess(NameNodeRpcServer.java:1978)
>>>>     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.checkAccess(ClientNamenodeProtocolServerSideTranslatorPB.java:1443)
>>>>     at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>>>>     at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>>>>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
>>>>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2151)
>>>>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2147)
>>>>     at java.security.AccessController.doPrivileged(Native Method)
>>>>     at javax.security.auth.Subject.doAs(Subject.java:422)
>>>>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
>>>>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2145)
>>>>
>>>> --
>>>> Regards,
>>>> Sandeep Giri,
>>>> +1-(347) 781-4573 (US)
>>>> +91-953-899-8962 (IN)
>>>> www.CloudxLab.com (A Hadoop cluster for practicing)
>>
>> --
>> Regards,
>> Sandeep Giri,
>> +1-(347) 781-4573 (US)
>> +91-953-899-8962 (IN)
>> www.CloudxLab.com
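
A minimal workaround sketch, assuming the cluster has HDFS ACLs enabled
(dfs.namenode.acls.enabled=true) and that an administrator can run commands
as the hdfs superuser -- neither is confirmed in the thread above. The idea
is to grant the table creator WRITE on the location's directory inode via an
ACL instead of cloning the data; the user name sandeep and the path are
taken from the error message:

    # Inspect the current permissions/ACL on the table location.
    hdfs dfs -getfacl /data/SentimentFiles/SentimentFiles/upload/data/tweets_raw

    # As the hdfs superuser, grant sandeep rwx on the directory inode only.
    # The failing check (FSNamesystem.checkAccess, access=WRITE in the stack
    # trace above) is made against this directory; the files underneath are
    # already world-readable per the -ls output, so they need no change.
    sudo -u hdfs hdfs dfs -setfacl -m user:sandeep:rwx \
        /data/SentimentFiles/SentimentFiles/upload/data/tweets_raw

    # Verify, then re-run the CREATE EXTERNAL TABLE statement as sandeep.
    hdfs dfs -getfacl /data/SentimentFiles/SentimentFiles/upload/data/tweets_raw

Alternatively, a user who already has write access (hdfs here) can run the
CREATE EXTERNAL TABLE statement once; other users can then query the table
subject to Hive's own authorization, and the underlying files stay read-only
either way.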