hbase-user mailing list archives

From "Edward J. Yoon" <edwardy...@apache.org>
Subject Re: Bulk import question.
Date Tue, 02 Dec 2008 02:26:07 GMT
It was caused by a 'Datanode DiskOutOfSpaceException'. But I think the
daemons should not die.
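
One thing that may help with that particular failure is reserving some
headroom on the datanode volumes, so DFS treats a disk as full before it
actually is. A rough sketch, assuming Hadoop 0.18's hadoop-site.xml; the
1 GB figure is only an example:

  <property>
    <name>dfs.datanode.du.reserved</name>
    <!-- bytes per volume kept free for non-DFS use; 1 GB here is illustrative -->
    <value>1073741824</value>
  </property>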

On Wed, Nov 26, 2008 at 1:08 PM, Edward J. Yoon <edwardyoon@apache.org> wrote:
> Hmm. It happens to me often. I'll check the logs.
>
> On Fri, Nov 21, 2008 at 9:46 AM, Andrew Purtell <apurtell@yahoo.com> wrote:
>> I think a 2 node cluster is simply too small for the full
>> load of everything.
>>
>> When I go that small I leave DFS out of the picture and run
>> HBase (in "local" mode) on top of a local file system on one
>> node and the jobtracker and tasktrackers on the other.
>> Even then I upped RAM on the HBase node to 3GB and run HBase
>> with 2GB heap for satisfactory results.
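
A bare-bones sketch of the setup described above, using HBase 0.18 config
names; the heap figure and the local path are only examples:

  # in hbase-env.sh: JVM heap for HBase, in MB (about the 2GB mentioned above)
  export HBASE_HEAPSIZE=2000

  <!-- in hbase-site.xml: point hbase.rootdir at the local filesystem instead of DFS -->
  <property>
    <name>hbase.rootdir</name>
    <value>file:///home/hbase/data</value>
  </property>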
>>
>>   - Andy
>>
>>
>>> From: stack <stack@duboce.net>
>>> Subject: Re: Bulk import question.
>>> To: hbase-user@hadoop.apache.org
>>> Date: Thursday, November 20, 2008, 9:40 AM
>>> Edward J. Yoon wrote:
>>> > When I tried to bulk import, I received the error below
>>> > (the code is the same as the hbase wiki example).
>>> >
>>> > 08/11/20 15:23:50 INFO mapred.JobClient:  map 62% reduce 0%
>>> > 08/11/20 15:27:10 INFO mapred.JobClient:  map 30% reduce 0%
>>> >
>>> > Is that possible? Also, the hadoop/hbase daemons crashed.
>>> >
>>> Percentage done can go in reverse if the framework loses a
>>> bunch of maps (e.g. after a crash).
>>>
>>> > - hadoop-0.18.2 & hbase-0.18.1
>>> > - 4 CPU, SATA hard disk, Physical Memory 16,626,844 KB
>>> > - 2 node cluster
>>> >
>>>
>>> So, on each node you have datanode, tasktracker, and
>>> regionserver running and then on one of the nodes you also
>>> have namenode plus jobtracker?  How many tasks per server?
>>> Two, the default?
>>>
>>> Check out your regionserver logs.  My guess is one likely
>>> crashed, perhaps because it was starved of time or because
>>> its datanode was not responding nicely because it was
>>> loaded.
>>>
>>> You've enabled DEBUG in hbase so you can get detail, and
>>> upped your file descriptors and your xceiverCount?
>>> (See the FAQ for how.)
>>>
>>> St.Ack
>>>
>>>
>>> >
>>> > ----
>>> > 08/11/20 15:23:36 INFO mapred.JobClient:  map 57% reduce 0%
>>> > 08/11/20 15:23:40 INFO mapred.JobClient:  map 59% reduce 0%
>>> > 08/11/20 15:23:45 INFO mapred.JobClient:  map 60% reduce 0%
>>> > 08/11/20 15:23:50 INFO mapred.JobClient:  map 62% reduce 0%
>>> > 08/11/20 15:27:10 INFO mapred.JobClient:  map 30% reduce 0%
>>> > 08/11/20 15:27:10 INFO mapred.JobClient: Task Id : attempt_200811131622_0019_m_000000_0, Status : FAILED
>>> > org.apache.hadoop.hbase.client.RetriesExhaustedException: Trying to contact region server 61.247.201.164:60020 for region mail,,1227162121175, row '?:', but failed after 10 attempts.
>>> > Exceptions:
>>> > java.io.IOException: Call failed on local exception
>>> > java.io.IOException: Call failed on local exception
>>> > java.io.IOException: Call failed on local exception
>>> > java.io.IOException: Call failed on local exception
>>> > java.io.IOException: Call failed on local exception
>>> > java.io.IOException: Call failed on local exception
>>> > java.io.IOException: Call failed on local exception
>>> > java.io.IOException: Call failed on local exception
>>> > java.io.IOException: Call failed on local exception
>>> > java.io.IOException: Call failed on local exception
>>> >
>>> >         at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.getRegionServerWithRetries(HConnectionManager.java:863)
>>> >         at org.apache.hadoop.hbase.client.HTable.commit(HTable.java:964)
>>> >         at org.apache.hadoop.hbase.client.HTable.commit(HTable.java:950)
>>> >         at com.nhn.mail.Runner$InnerMap.map(Runner.java:59)
>>> >         at com.nhn.mail.Runner$InnerMap.map(Runner.java:38)
>>> >         at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:47)
>>> >         at org.apache.hadoop.mapred.MapTask.run(MapTask.java:227)
>>> >         at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:2207)
>>> >
>>> >
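
For context, the kind of wiki-style import job this thread is about boils
down to a map task along these lines. This is only a sketch against the
0.18-era client API: the class, table, and column names are made up, and
the exact HTable/BatchUpdate signatures may differ slightly between
releases.

import java.io.IOException;

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.io.BatchUpdate;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

// Sketch of a bulk-import mapper: each map() call turns one input line
// into a BatchUpdate and commits it straight to the table.
public class UploadMap extends MapReduceBase
    implements Mapper<LongWritable, Text, NullWritable, NullWritable> {

  private HTable table;

  @Override
  public void configure(JobConf job) {
    try {
      // One HTable per task; "mail" matches the region named in the trace.
      table = new HTable(new HBaseConfiguration(), "mail");
    } catch (IOException e) {
      throw new RuntimeException("Cannot open table", e);
    }
  }

  public void map(LongWritable offset, Text line,
      OutputCollector<NullWritable, NullWritable> collector, Reporter reporter)
      throws IOException {
    // Illustrative parsing: tab-separated row key and value.
    String[] fields = line.toString().split("\t", 2);
    BatchUpdate update = new BatchUpdate(fields[0]);
    update.put("content:body", fields[1].getBytes());
    // This commit is the call that eventually gives up with
    // RetriesExhaustedException when the regionserver stops answering.
    table.commit(update);
  }
}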
>>
>>
>>
>>
>
>
>
> --
> Best Regards, Edward J. Yoon @ NHN, corp.
> edwardyoon@apache.org
> http://blog.udanax.org
>



-- 
Best Regards, Edward J. Yoon @ NHN, corp.
edwardyoon@apache.org
http://blog.udanax.org
