hadoop-mapreduce-user mailing list archives

From Harsh J <ha...@cloudera.com>
Subject Re: how to increase DFS capacity
Date Sat, 09 Apr 2011 08:06:03 GMT
Moving this to the hdfs-user@ list. Please use that list in the future
for HDFS specific queries.

(bcc'd mapreduce-user)

My reply inline.

On Thu, Apr 7, 2011 at 8:52 PM, zhengjun chen <zhjchen.sa@gmail.com> wrote:
> My problem is: when running my program, it always fails with a "Disk
> quota exceeded" error.
> I checked with "hadoop dfsadmin -report" command, it shows DFS Used: 98%
> So, how do I increase Present capacity to solve this problem?

Has a quota been set for your user by your cluster's administrator?
What does df -h show for the disk holding your dfs.data.dir?
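To check both possibilities, something like the following should do (the /user/zhengjun and /path/to/dfs/data paths are only examples; substitute your own home directory and the dfs.data.dir value from your hdfs-site.xml):

```shell
# Is an HDFS name/space quota set on your home directory?
# QUOTA and SPACE_QUOTA columns show "none"/"inf" when unset.
hadoop fs -count -q /user/zhengjun

# How full is the local disk backing dfs.data.dir?
df -h /path/to/dfs/data
```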

> On Thu, Apr 7, 2011 at 10:18 AM, zhengjun chen <zhjchen.sa@gmail.com> wrote:
>>
>> I want to increase DFS capacity. Which command I should use?
>> Using "hadoop dfsadmin -report", the following is my current
>> configuration.
>> Configured Capacity: 53687091200 (50 GB)
>> Present Capacity: 2568660701 (2.39 GB)
>> DFS Remaining: 2560737280 (2.38 GB)
>> DFS Used: 7923421 (7.56 MB)
>> DFS Used%: 0.31%
>> How can I increase Present Capacity?
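The figures in that report are internally consistent: Present Capacity is DFS Remaining plus DFS Used, and the gap between Configured Capacity and Present Capacity is space taken by non-DFS files on the same disks. A quick check of the numbers above:

```shell
# Figures from the dfsadmin -report above, in bytes
configured=53687091200   # Configured Capacity (50 GB)
present=2568660701       # Present Capacity (2.39 GB)
remaining=2560737280     # DFS Remaining (2.38 GB)
used=7923421             # DFS Used (7.56 MB)

# Present Capacity = DFS Remaining + DFS Used
echo $(( remaining + used ))      # 2568660701

# Space consumed by non-DFS files on the same disks
echo $(( configured - present ))  # 51118430499 bytes, ~47.6 GB
```

So roughly 47.6 GB of the 50 GB is used by files outside HDFS, which is why Present Capacity is so small even though DFS Used% is only 0.31%.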

Present capacity is generally the space freely available on the disk.
Perhaps your disk is full, if no quota has been applied?
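If no HDFS quota is set, the likely culprit is non-HDFS data filling the partition. A rough sketch for tracking it down (the paths below are examples; point them at the mount holding your dfs.data.dir):

```shell
# Which mount is nearly full? (example path)
df -h /

# Largest entries under a suspect directory (example: /var)
du -sh /var/* 2>/dev/null | sort -rh | head -5
```

Freeing that space, or adding disks/datanodes, is what raises Present Capacity; there is no dfsadmin command that grows it directly.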

-- 
Harsh J
