hadoop-common-user mailing list archives

From "zjffdu" <zjf...@gmail.com>
Subject RE: Cluster Disk Usage
Date Fri, 21 Aug 2009 15:22:58 GMT
Arvind,

You can use this API to get the amount of space used by the file system:

FileSystem.getUsed();


But I could not find an API that reports the remaining space directly. You
could write some code of your own to compute it:

remaining disk space = total disk space - operating system space -
FileSystem.getUsed()
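For what it's worth, the DistributedFileSystem handle also exposes raw capacity/usage counters (the same numbers `hadoop dfsadmin -report` prints), which avoid hard-coding an estimate for the operating-system space. This is only a sketch, untested against 0.19, and assumes `getRawCapacity()`/`getRawUsed()` are available on that version's DistributedFileSystem:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class DfsFreeSpace {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        if (fs instanceof DistributedFileSystem) {
            DistributedFileSystem dfs = (DistributedFileSystem) fs;
            long capacity = dfs.getRawCapacity(); // total configured DFS capacity, in bytes
            long used = dfs.getRawUsed();         // bytes used across all datanodes
            long remaining = capacity - used;     // rough estimate of free space
            System.out.println("Capacity:  " + capacity);
            System.out.println("Used:      " + used);
            System.out.println("Remaining: " + remaining);
        } else {
            System.out.println("Used: " + fs.getUsed());
        }
    }
}
```

Note these calls go to the NameNode, which aggregates datanode heartbeat reports, so they are cheap relative to scanning the cluster; the figures are cluster-wide, not per-datanode.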



-----Original Message-----
From: Arvind Sharma [mailto:arvind321@yahoo.com] 
Sent: August 20, 2009 16:45
To: common-user@hadoop.apache.org
Subject: Re: Cluster Disk Usage

Sorry, I also sent a direct e-mail in reply to one response...

There I asked one question - what is the cost of these APIs? Are they
expensive calls? Does the API only go to the NameNode, which stores this
data?

Thanks!
Arvind




________________________________
From: Arvind Sharma <arvind321@yahoo.com>
To: common-user@hadoop.apache.org
Sent: Thursday, August 20, 2009 4:01:02 PM
Subject: Re: Cluster Disk Usage

Using hadoop-0.19.2




________________________________
From: Arvind Sharma <arvind321@yahoo.com>
To: common-user@hadoop.apache.org
Sent: Thursday, August 20, 2009 3:56:53 PM
Subject: Cluster Disk Usage

Is there a way to find out how much disk space - overall or per Datanode
basis - is available before creating a file ?

I am trying to address an issue where the disk got full (config error) and
the client was not able to create a file on the HDFS.

I want to be able to check whether there is space left on the grid before
trying to create the file.

Arvind
