hbase-user mailing list archives

From Andrew Purtell <apurt...@apache.org>
Subject Re: Bulk import question.
Date Tue, 02 Dec 2008 02:42:17 GMT
Edward,

You are running with insufficient resources -- too little CPU
for your task and too little disk for your data. 

If you are running a mapreduce job and DFS runs out of space
for the temporary files, then you should indeed expect
aberrant job status from the Hadoop job framework, such as
completion percentages running backwards.
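
To make that failure mode easier to catch ahead of time, below
is a minimal sketch of a pre-flight free-space check one could
run before kicking off a bulk-import job. It assumes a Hadoop
release that has FileSystem.getStatus()/FsStatus (on older
releases the same numbers come from 'hadoop dfsadmin -report'),
and the 10% headroom threshold is only an illustration, not a
recommendation:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FsStatus;

public class DfsSpaceCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Ask the NameNode for aggregate capacity and remaining space.
        FsStatus status = fs.getStatus();
        long capacity = status.getCapacity();
        long remaining = status.getRemaining();
        double freeRatio = (double) remaining / capacity;

        System.out.printf("DFS capacity: %d bytes, remaining: %d bytes (%.1f%% free)%n",
                capacity, remaining, freeRatio * 100);

        // Refuse to start the import if less than 10% of DFS is free,
        // since map output spills and temporary files need room too.
        // The threshold is an arbitrary example value.
        if (freeRatio < 0.10) {
            System.err.println("Insufficient DFS space for bulk import; aborting.");
            System.exit(1);
        }
    }
}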

I do agree that under these circumstances the HBase daemons
should fail more gracefully, for example by entering some kind
of degraded read-only mode if DFS is not totally dead. I
suspect this is already on a to-do list somewhere, and I
vaguely recall a JIRA filed on that topic.

   - Andy


> From: Edward J. Yoon <edwardyoon@apache.org>
> Subject: Re: Bulk import question.
> To: hbase-user@hadoop.apache.org, apurtell@apache.org
> Date: Monday, December 1, 2008, 6:26 PM
> It was caused by a 'Datanode DiskOutOfSpaceException'. But I
> think the daemons should not die.
> 
> On Wed, Nov 26, 2008 at 1:08 PM, Edward J. Yoon
> <edwardyoon@apache.org> wrote:
> > Hmm. This happens to me often. I'll check the logs.
> >
> > On Fri, Nov 21, 2008 at 9:46 AM, Andrew Purtell
> <apurtell@yahoo.com> wrote:
> > > I think a 2-node cluster is simply too small for
> > > the full load of everything.
> > >



      
