hadoop-general mailing list archives

From rahul <rmalv...@apple.com>
Subject Re: Total Space Available on Hadoop Cluster Or Hadoop version of "df".
Date Sun, 03 Oct 2010 17:58:13 GMT
Thanks Jonathan,

It's really a great help.

Rahul
On Oct 2, 2010, at 9:32 PM, Jonathan Gray wrote:

> Rahul,
> 
> There is a ton of documentation available for Hadoop (including books).
> 
> Best place to start is the wiki: http://wiki.apache.org/hadoop/
> 
> On your specific issue, you need to configure Hadoop to tell it which directories to use for storing data.
> 
> The configuration parameter name is 'dfs.data.dir' and you need to put in a comma-delimited list of directories to use to store data.
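> 
> For reference, a minimal hdfs-site.xml sketch (the paths below are just
> placeholders for your own mount points):
> 
>   <property>
>     <name>dfs.data.dir</name>
>     <value>/disk1/hdfs/data,/disk2/hdfs/data,/disk3/hdfs/data</value>
>   </property>
> 
> Each directory should sit on a different physical disk; the DataNode
> spreads new blocks across all of them. The NameNode metadata location is
> controlled separately by 'dfs.name.dir'.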
> 
> JG
> 
>> -----Original Message-----
>> From: rahul [mailto:rmalviya@apple.com]
>> Sent: Saturday, October 02, 2010 9:53 AM
>> To: general@hadoop.apache.org
>> Subject: Re: Total Space Available on Hadoop Cluster Or Hadoop version
>> of "df".
>> 
>> Hi Marcos,
>> 
>> The same thing is happening for me as well.
>> 
>> I have multiple disks mounted on my system, but by default when I
>> formatted, it used only the disk where the Hadoop binaries are present.
>> 
>> Is there a way I can format all the drives mounted on my system?
>> 
>> So is there some way to control which drives or locations we want to
>> format for HDFS?
>> 
>> Thanks,
>> Rahul
>> 
>> On Oct 2, 2010, at 7:39 AM, Marcos Pinto wrote:
>> 
>>> I got the same problem; I remember it was something related to the
>>> user's partition. For example, I created a hadoop user, so HDFS took
>>> the partition closest to that user. I don't remember exactly, but it
>>> was something like that. I hope it helps you in some way.
>>> 
>>> On Sat, Oct 2, 2010 at 2:13 AM, Glenn Gore <Glenn.Gore@melbourneit.com.au> wrote:
>>> 
>>>> hadoop dfsadmin -report
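>>>> 
>>>> That should give you the cluster-wide totals (Configured Capacity, DFS
>>>> Used, DFS Remaining) plus a per-datanode breakdown, which is the
>>>> closest thing to a cluster-level "df". The summary looks roughly like
>>>> this (numbers made up):
>>>> 
>>>>   Configured Capacity: 2000398934016 (1.82 TB)
>>>>   DFS Remaining: 1880364040192 (1.71 TB)
>>>>   DFS Used: 120034893824 (111.79 GB)
>>>>   Datanodes available: 2 (2 total, 0 dead)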
>>>> 
>>>> Regards
>>>> 
>>>> Glenn
>>>> 
>>>> 
>>>> -----Original Message-----
>>>> From: rahul [mailto:rmalviya@apple.com]
>>>> Sent: Sat 10/2/2010 2:27 PM
>>>> To: general@hadoop.apache.org
>>>> Subject: Total Space Available on Hadoop Cluster Or Hadoop version of "df".
>>>> 
>>>> Hi,
>>>> 
>>>> I am using Hadoop version 0.20.2 for data processing, with a Hadoop
>>>> cluster set up on two nodes.
>>>> 
>>>> And I am continuously adding more space to the nodes.
>>>> 
>>>> Can somebody let me know how to get the total space available on the
>>>> Hadoop cluster using the command line?
>>>> 
>>>> or
>>>> 
>>>> a Hadoop equivalent of the Unix "df" command.
>>>> 
>>>> Any input is helpful.
>>>> 
>>>> Thanks
>>>> Rahul
>>>> 
>>>> 
> 

