From: Namikaze Minato
Date: Sun, 8 Nov 2015 01:10:07 +0100
Subject: Re: hadoop not using whole disk for HDFS
To: user@hadoop.apache.org

I hope you understand that you sent 5 emails to several hundred (thousand?)
people in the world in 15 minutes... Please think before hitting the "send"
button.

In Unix (AND Windows) you can mount a drive into a folder. This means just
that the disk is accessible from that folder; mounting a 2 TB drive at /home
does not increase the capacity of /, nor does it use any space on / to do so.
Just think of / as one drive, which contains everything EXCEPT /home and is,
for example, 50 GB big, and of /home as another drive which is 2 TB big.
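For example (device name hypothetical), the sketch below shows the idea:

    # Mount a second 2 TB disk (here /dev/sdb1) into /home.
    mount /dev/sdb1 /home

    # df now reports / and /home as separate filesystems,
    # each with its own size and usage.
    df -h / /home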
What you need is to make your Hadoop understand that it should use /home (to
be precise, a folder in /home, not the complete partition) as HDFS storage
space. Now I will let the other people in the thread discuss with you the
technicalities of setting that parameter in the right config file, as I don't
have the knowledge about this specific matter.

Regards,
LLoyd

On 8 November 2015 at 00:00, Adaryl "Bob" Wakefield, MBA
<adaryl.wakefield@hotmail.com> wrote:

> No, it's flat out saying that that config cannot be set with anything
> starting with /home.
>
> Adaryl "Bob" Wakefield, MBA
> Principal
> Mass Street Analytics, LLC
> 913.938.6685
> www.linkedin.com/in/bobwakefieldmba
> Twitter: @BobLovesData
>
> *From:* Naganarasimha G R (Naga)
> *Sent:* Thursday, November 05, 2015 10:58 PM
> *To:* user@hadoop.apache.org
> *Subject:* RE: hadoop not using whole disk for HDFS
>
> Hi Bob,
>
> I suspect Ambari would not allow creating a folder directly under */home*;
> it might allow */home/<user_name>/hdfs*, since directories under /home are
> expected to be users' home dirs.
>
> Regards,
> + Naga
> ------------------------------
> *From:* Naganarasimha G R (Naga) [garlanaganarasimha@huawei.com]
> *Sent:* Friday, November 06, 2015 09:34
> *To:* user@hadoop.apache.org
> *Subject:* RE: hadoop not using whole disk for HDFS
>
> Thanks Brahma, didn't realize he might have configured both directories; I
> was assuming Bob had configured a single new directory "/hdfs/data". So
> virtually it is showing additional space.
> *Manually try to add a data dir in /home, for your use case, and restart
> datanodes.*
> Not sure about the impacts in Ambari, but worth a try! A more permanent
> solution would be a remount:
>
> Filesystem               Size  Used  Avail  Use%  Mounted on
> /dev/mapper/centos-home  2.7T   33M   2.7T    1%  /home
> ------------------------------
> *From:* Brahma Reddy Battula [brahmareddy.battula@huawei.com]
> *Sent:* Friday, November 06, 2015 08:19
> *To:* user@hadoop.apache.org
> *Subject:* RE: hadoop not using whole disk for HDFS
>
> For each configured *dfs.datanode.data.dir*, HDFS thinks it is in a
> separate partition and counts the capacity separately. So when another dir
> /hdfs/data is added, HDFS thinks a new partition is added, so it increased
> the capacity by 50GB per node, i.e. 100GB for 2 nodes.
>
> Not allowing the /home directory to be configured for data.dir might be
> Ambari's constraint; instead you can *manually try to add a data dir* in
> /home, for your use case, and restart datanodes.
>
> Thanks & Regards
>
> Brahma Reddy Battula
>
> ------------------------------
> *From:* Naganarasimha G R (Naga) [garlanaganarasimha@huawei.com]
> *Sent:* Friday, November 06, 2015 7:20 AM
> *To:* user@hadoop.apache.org
> *Subject:* RE: hadoop not using whole disk for HDFS
>
> Hi Bob,
>
> *1. I wasn't able to set the config to /home/hdfs/data. I got an error
> that told me I'm not allowed to set that config to the /home directory. So
> I made it /hdfs/data.*
>
> *Naga:* I am not sure about the HDP distro, but if you make it point to
> */hdfs/data*, it will still be pointing to the root mount itself, i.e.
>
> Filesystem               Size  Used  Avail  Use%  Mounted on
> /dev/mapper/centos-root   50G   12G    39G   23%  /
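For example, df accepts a path and reports the filesystem that actually
holds it, so a quick check (path taken from this thread) would be:

    # Shows which mount backs the DataNode dir; /hdfs/data resolves to /.
    df -h /hdfs/data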
> Other Alternative is to mount the drive to some folder other than /home
> and then try.
>
> *2. When I restarted, the space available increased by a whopping 100GB.*
>
> *Naga:* I am particularly not sure how this happened; maybe you can
> recheck. If you enter the command *"df -h <path of the NM data dir
> configured>"* you will find out how much disk space is available on the
> related mount for which the path is configured.
>
> Regards,
>
> + Naga
>
> ------------------------------
> *From:* Adaryl "Bob" Wakefield, MBA [adaryl.wakefield@hotmail.com]
> *Sent:* Friday, November 06, 2015 06:54
> *To:* user@hadoop.apache.org
> *Subject:* Re: hadoop not using whole disk for HDFS
>
> Is there a maximum amount of disk space that HDFS will use? Is 100GB the
> max? When we're supposed to be dealing with "big data", why is the amount
> of data to be held on any one box such a small number when you've got
> terabytes available?
>
> Adaryl "Bob" Wakefield, MBA
> Principal
> Mass Street Analytics, LLC
> 913.938.6685
> www.linkedin.com/in/bobwakefieldmba
> Twitter: @BobLovesData
>
> *From:* Adaryl "Bob" Wakefield, MBA
> *Sent:* Wednesday, November 04, 2015 4:38 PM
> *To:* user@hadoop.apache.org
> *Subject:* Re: hadoop not using whole disk for HDFS
>
> This is an experimental cluster and there isn't anything I can't lose. I
> ran into some issues. I'm running the Hortonworks distro and am managing
> things through Ambari.
>
> 1. I wasn't able to set the config to /home/hdfs/data. I got an error that
> told me I'm not allowed to set that config to the /home directory. So I
> made it /hdfs/data.
> 2. When I restarted, the space available increased by a whopping 100GB.
>
> Adaryl "Bob" Wakefield, MBA
> Principal
> Mass Street Analytics, LLC
> 913.938.6685
> www.linkedin.com/in/bobwakefieldmba
> Twitter: @BobLovesData
>
> *From:* Naganarasimha G R (Naga)
> *Sent:* Wednesday, November 04, 2015 4:26 PM
> *To:* user@hadoop.apache.org
> *Subject:* RE: hadoop not using whole disk for HDFS
>
> Better would be to stop the daemons, copy the data from */hadoop/hdfs/data*
> to */home/hdfs/data*, reconfigure *dfs.datanode.data.dir* to
> */home/hdfs/data*, and then start the daemons. That works if the data is
> comparatively small!
>
> Ensure you have a backup if you have any critical data!
>
> Regards,
>
> + Naga
> ------------------------------
> *From:* Adaryl "Bob" Wakefield, MBA [adaryl.wakefield@hotmail.com]
> *Sent:* Thursday, November 05, 2015 03:40
> *To:* user@hadoop.apache.org
> *Subject:* Re: hadoop not using whole disk for HDFS
>
> So like I can just create a new folder in the home directory like:
> /home/hdfs/data
> and then set dfs.datanode.data.dir to:
> /hadoop/hdfs/data,/home/hdfs/data
>
> Restart the node and that should do it, correct?
>
> Adaryl "Bob" Wakefield, MBA
> Principal
> Mass Street Analytics, LLC
> 913.938.6685
> www.linkedin.com/in/bobwakefieldmba
> Twitter: @BobLovesData
>
> *From:* Naganarasimha G R (Naga)
> *Sent:* Wednesday, November 04, 2015 3:59 PM
> *To:* user@hadoop.apache.org
> *Subject:* RE: hadoop not using whole disk for HDFS
>
> Hi Bob,
>
> Seems like you have configured the data dir to be something other than a
> folder in */home*; if so, try creating another folder and adding it to
> *"dfs.datanode.data.dir"*, separated by a comma, instead of trying to
> reset the default.
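For example (paths taken from this thread), the resulting hdfs-site.xml
entry would look something like:

    <property>
      <name>dfs.datanode.data.dir</name>
      <!-- Comma-separated list of storage dirs; the DataNode counts the
           capacity of each dir's underlying mount separately. -->
      <value>/hadoop/hdfs/data,/home/hdfs/data</value>
    </property>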
> And it is also advised not to use the root partition "/" for the HDFS data
> dir: if the dir usage hits the maximum, then the OS might fail to function
> properly.
>
> Regards,
>
> + Naga
> ------------------------------
> *From:* P lva [ruvikal@gmail.com]
> *Sent:* Thursday, November 05, 2015 03:11
> *To:* user@hadoop.apache.org
> *Subject:* Re: hadoop not using whole disk for HDFS
>
> What does your dfs.datanode.data.dir point to?
>
> On Wed, Nov 4, 2015 at 4:14 PM, Adaryl "Bob" Wakefield, MBA <
> adaryl.wakefield@hotmail.com> wrote:
>
>> Filesystem               Size  Used  Avail  Use%  Mounted on
>> /dev/mapper/centos-root   50G   12G    39G   23%  /
>> devtmpfs                  16G     0    16G    0%  /dev
>> tmpfs                     16G     0    16G    0%  /dev/shm
>> tmpfs                     16G   1.4G   15G    9%  /run
>> tmpfs                     16G     0    16G    0%  /sys/fs/cgroup
>> /dev/sda2                494M   123M  372M   25%  /boot
>> /dev/mapper/centos-home  2.7T    33M  2.7T    1%  /home
>>
>> That's from one datanode. The second one is nearly identical. I
>> discovered that 50GB is actually a default. That seems really weird. Disk
>> space is cheap. Why would you not just use most of the disk, and why is
>> it so hard to reset the default?
>>
>> Adaryl "Bob" Wakefield, MBA
>> Principal
>> Mass Street Analytics, LLC
>> 913.938.6685
>> www.linkedin.com/in/bobwakefieldmba
>> Twitter: @BobLovesData
>>
>> *From:* Chris Nauroth
>> *Sent:* Wednesday, November 04, 2015 12:16 PM
>> *To:* user@hadoop.apache.org
>> *Subject:* Re: hadoop not using whole disk for HDFS
>>
>> How are those drives partitioned? Is it possible that the directories
>> pointed to by the dfs.datanode.data.dir property in hdfs-site.xml reside
>> on partitions that are sized to only 100 GB? Running commands like df
>> would be a good way to check this at the OS level, independently of
>> Hadoop.
>>
>> --Chris Nauroth
>>
>> From: MBA
>> Reply-To: "user@hadoop.apache.org"
>> Date: Tuesday, November 3, 2015 at 11:16 AM
>> To: "user@hadoop.apache.org"
>> Subject: Re: hadoop not using whole disk for HDFS
>>
>> Yeah. It has the current value of 1073741824, which is like 1.07 gig.
>>
>> B.
>> *From:* Chris Nauroth
>> *Sent:* Tuesday, November 03, 2015 11:57 AM
>> *To:* user@hadoop.apache.org
>> *Subject:* Re: hadoop not using whole disk for HDFS
>>
>> Hi Bob,
>>
>> Does the hdfs-site.xml configuration file contain the property
>> dfs.datanode.du.reserved? If this is defined, then the DataNode
>> intentionally will not use this space for storage of replicas.
>>
>> <property>
>>   <name>dfs.datanode.du.reserved</name>
>>   <value>0</value>
>>   <description>Reserved space in bytes per volume. Always leave this
>>   much space free for non dfs use.</description>
>> </property>
>>
>> --Chris Nauroth
>>
>> From: MBA
>> Reply-To: "user@hadoop.apache.org"
>> Date: Tuesday, November 3, 2015 at 10:51 AM
>> To: "user@hadoop.apache.org"
>> Subject: hadoop not using whole disk for HDFS
>>
>> I've got the Hortonworks distro running on a three node cluster. For some
>> reason the disk available for HDFS is MUCH less than the total disk
>> space. Both of my data nodes have 3TB hard drives. Only 100GB of that is
>> being used for HDFS. Is it possible that I have a setting wrong
>> somewhere?
>>
>> B.