hbase-user mailing list archives

From "Edward J. Yoon" <edwardy...@apache.org>
Subject Re: Bulk import question.
Date Tue, 02 Dec 2008 05:00:56 GMT
> Let us know how else we can help along your project.

Yup, thanks. :)

On Tue, Dec 2, 2008 at 1:50 PM, Michael Stack <stack@duboce.net> wrote:
> There is none in HBase; it doesn't manage the filesystem, so it doesn't
> make much sense to add it there (we could add it as a metric, I suppose).
> In HDFS there are facilities for asking that it only fill a percentage or
> an explicit amount of the allocated space -- see hadoop-default.xml.  I'm
> not sure how well these work.
>
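> For example, recent hadoop-default.xml files carry properties along these
> lines -- just a sketch, so check the names and defaults against the
> version you are running:
>
>  <property>
>    <name>dfs.datanode.du.reserved</name>
>    <value>1073741824</value>
>    <!-- bytes per volume always left free for non-DFS use -->
>  </property>
>  <property>
>    <name>dfs.datanode.du.pct</name>
>    <value>0.98</value>
>    <!-- fraction of the real available space DFS will count as usable -->
>  </property>
>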
> I'd suggest you consider the advice given by the lads -- jgray on how to
> monitor your cluster (including disk usage) and apurtell on not having
> enough resources -- if you want to get serious about your cluster.
>
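> For a quick look at disk usage in the meantime, the stock dfsadmin report
> is usually enough (output details vary a bit by version):
>
>  $ hadoop dfsadmin -report
>
> It prints configured capacity, DFS used, and remaining space for the
> cluster as a whole and for each datanode.
>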
> Let us know how else we can help along your project.
>
> St.Ack
>
>
>
> Edward J. Yoon wrote:
>>
>> I'm considering storing large-scale web-mail data in HBase.
>> As you know, there is a lot of mail-bomb traffic (e.g. spam, group
>> mail, etc.), so I tested these.
>>
>> Here's an additional question: do we have a monitoring tool for disk space?
>>
>> /Edward
>>
>> On Tue, Dec 2, 2008 at 11:42 AM, Andrew Purtell <apurtell@apache.org>
>> wrote:
>>>
>>> Edward,
>>>
>>> You are running with insufficient resources -- too little CPU
>>> for your task and too little disk for your data.
>>>
>>> If you are running a mapreduce job and DFS runs out of space
>>> for the temporary files, then you should indeed expect
>>> aberrant job status from the Hadoop job framework -- for
>>> example, completion status running backwards.
>>>
>>> I do agree that under these circumstances the HBase daemons
>>> should fail more gracefully, by entering some kind of
>>> degraded read-only mode if DFS is not totally dead. I
>>> suspect this is already on a to-do list somewhere, and I
>>> vaguely recall a jira filed on that topic.
>>>
>>>  - Andy
>>>
>>>>
>>>> From: Edward J. Yoon <edwardyoon@apache.org>
>>>> Subject: Re: Bulk import question.
>>>> To: hbase-user@hadoop.apache.org, apurtell@apache.org
>>>> Date: Monday, December 1, 2008, 6:26 PM
>>>> It was caused by a datanode DiskOutOfSpaceException. But I
>>>> think the daemons should not die.
>>>>
>>>> On Wed, Nov 26, 2008 at 1:08 PM, Edward J. Yoon
>>>> <edwardyoon@apache.org> wrote:
>>>>>
>>>>> Hmm. It happens to me often. I'll check the logs.
>>>>>
>>>>> On Fri, Nov 21, 2008 at 9:46 AM, Andrew Purtell
>>>>> <apurtell@yahoo.com> wrote:
>>>>>
>>>>>>
>>>>>> I think a 2-node cluster is simply too small for
>>>>>> the full load of everything.

-- 
Best Regards, Edward J. Yoon @ NHN, corp.
edwardyoon@apache.org
http://blog.udanax.org
