hadoop-general mailing list archives

From rahul <rmalv...@apple.com>
Subject Re: Total Space Available on Hadoop Cluster Or Hadoop version of "df".
Date Sat, 02 Oct 2010 16:52:42 GMT
Hi Marcos,

The same thing is happening for me as well.

I have multiple disks mounted on my system, but by default, when I formatted
it, HDFS only took the nearest disk, the one on which the Hadoop binary is
present.

Is there a way I can get HDFS to use all of the drives mounted on my system?

In other words, can we control which drives or directories are used for HDFS
storage?
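
I believe the DataNode storage locations are controlled by the dfs.data.dir
property in hdfs-site.xml (a comma-separated list of directories), so a
minimal sketch, with the mount points below as placeholders, would be
something like:

  <property>
    <name>dfs.data.dir</name>
    <!-- comma-separated list; the DataNode spreads its blocks across all of these -->
    <value>/mnt/disk1/hdfs/data,/mnt/disk2/hdfs/data</value>
  </property>

Is that the right knob, or is there more to it?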

Thanks,
Rahul

On Oct 2, 2010, at 7:39 AM, Marcos Pinto wrote:

> I got the same problem. I remember it was something related to the user's
> partition: for example, I created a hadoop user, so HDFS took the partition
> closest to that user.
> I don't remember exactly, but it was something like that. I hope it helps
> you in some way.
> 
> 
> On Sat, Oct 2, 2010 at 2:13 AM, Glenn Gore <Glenn.Gore@melbourneit.com.au> wrote:
> 
>> hadoop dfsadmin -report
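>> 
>> The first few lines of the report give the cluster-wide totals; if you only
>> want the headline numbers, something like this should work (assuming the
>> usual "Configured Capacity" / "DFS Remaining" labels in the 0.20 output):
>> 
>>   hadoop dfsadmin -report | grep -E 'Configured Capacity|DFS Remaining|DFS Used'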
>> 
>> Regards
>> 
>> Glenn
>> 
>> 
>> -----Original Message-----
>> From: rahul [mailto:rmalviya@apple.com]
>> Sent: Sat 10/2/2010 2:27 PM
>> To: general@hadoop.apache.org
>> Subject: Total Space Available on Hadoop Cluster Or Hadoop version of "df".
>> 
>> Hi,
>> 
>> I am using Hadoop version 0.20.2 for data processing, with a Hadoop
>> cluster set up on two nodes.
>> 
>> I am continuously adding more space to the nodes.
>> 
>> Can somebody let me know how to get the total space available on the
>> Hadoop cluster from the command line?
>> 
>> In other words, is there a Hadoop equivalent of the Unix "df" command?
>> 
>> Any input is helpful.
>> 
>> Thanks
>> Rahul
>> 
>> 

