hadoop-common-user mailing list archives

From "Edward J. Yoon" <edwardy...@apache.org>
Subject Q, DiskOutOfSpaceException
Date Tue, 02 Dec 2008 03:11:34 GMT
After a DiskOutOfSpaceException occurred, all the daemons died except
the TaskTracker. Isn't that problematic?

/Edward

---------- Forwarded message ----------
From: Edward J. Yoon <edwardyoon@apache.org>
Date: Tue, Dec 2, 2008 at 12:04 PM
Subject: Re: Bulk import question.
To: apurtell@apache.org
Cc: hbase-user@hadoop.apache.org, 02635@nhncorp.com


I'm considering storing large-scale web-mail data in HBase. As you
know, there is a lot of bulk mail (e.g. spam, group mail, etc.), so I
tested these cases.

Here's an additional question: do we have a monitoring tool for disk space?
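(Editor's note: beyond `hadoop dfsadmin -report`, which prints DFS capacity and remaining space per datanode, a minimal free-space check can be scripted outside Hadoop. The sketch below uses only the Python standard library; the path and threshold are assumptions, not anything from the thread.)

```python
import shutil

def disk_free_ratio(path="/"):
    # Fraction of the volume holding `path` that is still free.
    usage = shutil.disk_usage(path)
    return usage.free / usage.total

def has_enough_space(path="/", min_free=0.10):
    # True while the volume keeps at least `min_free` of its capacity free.
    # Pointing `path` at the dfs.data.dir volume would watch the datanode disk.
    return disk_free_ratio(path) >= min_free
```

Run periodically (e.g. from cron) and alert when `has_enough_space` turns false.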

/Edward

On Tue, Dec 2, 2008 at 11:42 AM, Andrew Purtell <apurtell@apache.org> wrote:
> Edward,
>
> You are running with insufficient resources -- too little CPU
> for your task and too little disk for your data.
>
> If you are running a mapreduce task and DFS runs out of space
> for the temporary files, then you indeed should expect
> aberrant job status from the Hadoop job framework, for
> example such things as completion status running backwards.
>
> I do agree that under these circumstances HBase daemons
> should fail more gracefully, by entering some kind of
> degraded read only mode, if DFS is not totally dead. I
> suspect this is already on a to do list somewhere, and I
> vaguely recall a jira filed on that topic.
>
>   - Andy
>
>
>> From: Edward J. Yoon <edwardyoon@apache.org>
>> Subject: Re: Bulk import question.
>> To: hbase-user@hadoop.apache.org, apurtell@apache.org
>> Date: Monday, December 1, 2008, 6:26 PM
>> It was caused by a datanode DiskOutOfSpaceException. But I think
>> the daemons should not die.
>>
>> On Wed, Nov 26, 2008 at 1:08 PM, Edward J. Yoon
>> <edwardyoon@apache.org> wrote:
>> > Hmm, this happens to me often. I'll check the logs.
>> >
>> > On Fri, Nov 21, 2008 at 9:46 AM, Andrew Purtell
>> <apurtell@yahoo.com> wrote:
>> > > I think a 2 node cluster is simply too small for
>> > > the full load of everything.
>> > >
>
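(Editor's note: the degraded read-only mode Andy describes could look roughly like the toy model below. All names are hypothetical; this is a sketch of the idea, not actual HBase code.)

```python
class RegionServerSketch:
    """Toy model of a daemon that degrades to read-only instead of dying."""

    def __init__(self, min_free_ratio=0.05):
        self.min_free_ratio = min_free_ratio
        self.read_only = False
        self.store = {}

    def on_dfs_status(self, free_bytes, total_bytes):
        # Instead of letting the daemon die when DFS space runs low,
        # flip into read-only mode while free space is below the threshold.
        self.read_only = (free_bytes / total_bytes) < self.min_free_ratio

    def put(self, row, value):
        # Writes are refused in degraded mode.
        if self.read_only:
            raise IOError("degraded read-only mode: DFS nearly full")
        self.store[row] = value

    def get(self, row):
        # Reads keep working even in degraded mode.
        return self.store.get(row)
```

Once DFS space is reclaimed, `on_dfs_status` would clear the flag and writes resume.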



--
Best Regards, Edward J. Yoon @ NHN, corp.
edwardyoon@apache.org
http://blog.udanax.org



