hbase-user mailing list archives

From "Edward J. Yoon" <edwardy...@apache.org>
Subject Re: DiskErrorException
Date Fri, 19 Dec 2008 01:24:20 GMT
> Scanners need to be closed.   See if that makes a difference.

Oh, I missed that. I'll retry.
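
Something like the following is what I'll try -- just a rough sketch against
the 0.18-era client API (Scanner/RowResult), with placeholder names rather
than my actual TableMap code:

  import java.io.IOException;
  import org.apache.hadoop.hbase.client.HTable;
  import org.apache.hadoop.hbase.client.Scanner;
  import org.apache.hadoop.hbase.io.RowResult;

  void scanRange(HTable table, byte[][] cols,
                 byte[] startRow, byte[] stopRow) throws IOException {
    Scanner scanner = table.getScanner(cols, startRow, stopRow);
    try {
      RowResult row;
      while ((row = scanner.next()) != null) {
        // process the row; still no output collection
      }
    } finally {
      scanner.close();  // always release the server-side scanner, even on error
    }
  }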

> When you say 25 million entries, is that 5000 columns per row?  How many
> families?
>
> Each cell is 200MB?

Each row has 5,000 columns, and each cell is 8 bytes.
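
About the "failed to report status for 602 seconds. Killing!" messages in the
log below: I think the map should also call reporter.progress() every so often
while it walks a long scan, so the TaskTracker knows the task is still alive
(mapred.task.timeout defaults to 600 seconds). A rough sketch reusing the scan
loop above -- "reporter" is the Reporter argument passed to map(), and the
batch size of 1000 is just an example:

  int rows = 0;
  RowResult row;
  while ((row = scanner.next()) != null) {
    // ... work on the row ...
    if (++rows % 1000 == 0) {
      reporter.progress();  // resets the mapred.task.timeout clock
    }
  }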

On Fri, Dec 19, 2008 at 10:05 AM, stack <stack@duboce.net> wrote:
> Scanners need to be closed.   See if that makes a difference.
>
> When you say 25 million entries, is that 5000 columns per row?  How many
> families?
>
> Each cell is 200MB?
>
> St.Ack
>
>
> Edward J. Yoon wrote:
>>
>> This is my input and mapper code information. There is no reduce and no
>> output collection, and this DEE problem always occurs.
>>
>> == Input Table Info ==
>>
>> Rows - 5,000
>> The number of entries - 25,000,000
>> The size of entries - 200 MB
>>
>> == Mapper ==
>>
>> Map {
>>  // key/value is a range.
>>  // no output collection
>>
>>  Scanner scan = table.getScanner(cols, key, value);
>> }
>>
>> On Fri, Dec 19, 2008 at 5:55 AM, stack <stack@duboce.net> wrote:
>>
>>>
>>> The DEE might be symptom of the job being killed Edward.  If you look in
>>> logs of attempt_200812181605_0013_m_000001_0, does it say what it was
>>> stuck
>>> doing such that the task timed out?
>>> St.Ack
>>>
>>> Edward J. Yoon wrote:
>>>
>>>>
>>>> Could someone review this error log? What is the DiskErrorException?
>>>>
>>>> ----
>>>> 08/12/18 18:00:21 WARN mapred.JobClient: Use genericOptions for the option -libjars
>>>> 08/12/18 18:00:21 WARN mapred.JobClient: No job jar file set.  User classes may not be found. See JobConf(Class) or JobConf#setJar(String).
>>>> 08/12/18 18:00:21 INFO mapred.TableInputFormatBase: split: 0->d8g054.nhncorp.com:,000000000003182
>>>> 08/12/18 18:00:21 INFO mapred.TableInputFormatBase: split: 1->d8g053.nhncorp.com:000000000003182,
>>>> 08/12/18 18:00:21 INFO mapred.JobClient: Running job: job_200812181605_0013
>>>> 08/12/18 18:00:22 INFO mapred.JobClient:  map 0% reduce 0%
>>>> 08/12/18 18:00:28 INFO mapred.JobClient:  map 50% reduce 0%
>>>> 08/12/18 18:00:38 INFO mapred.JobClient:  map 50% reduce 8%
>>>> 08/12/18 18:00:41 INFO mapred.JobClient:  map 50% reduce 16%
>>>> 08/12/18 18:10:43 INFO mapred.JobClient: Task Id : attempt_200812181605_0013_m_000001_0, Status : FAILED
>>>> Task attempt_200812181605_0013_m_000001_0 failed to report status for 602 seconds. Killing!
>>>> 08/12/18 18:11:39 INFO mapred.JobClient: Task Id : attempt_200812181605_0013_m_000001_1, Status : FAILED
>>>> Task attempt_200812181605_0013_m_000001_1 failed to report status for 602 seconds. Killing!
>>>>
>>>> Hadoop task tracker:
>>>>
>>>> 2008-12-18 18:15:31,760 INFO org.apache.hadoop.mapred.TaskTracker: attempt_200812181605_0013_r_000001_0 0.16666667% reduce > copy (1 of 2 at 0.00 MB/s) >
>>>> 2008-12-18 18:15:32,450 INFO org.apache.hadoop.mapred.TaskTracker: org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find taskTracker/jobcache/job_200812181605_0013/attempt_200812181605_0013_r_000001_0/output/file.out in any of the configured local directories
>>>> 2008-12-18 18:15:34,763 INFO org.apache.hadoop.mapred.TaskTracker: attempt_200812181605_0013_r_000001_0 0.16666667% reduce > copy (1 of 2 at 0.00 MB/s) >
>>>> 2008-12-18 18:15:37,453 INFO org.apache.hadoop.mapred.TaskTracker: org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find taskTracker/jobcache/job_200812181605_0013/attempt_200812181605_0013_r_000001_0/output/file.out in any of the configured local directories
>>>>
>>>>
>>>>
>>>
>>>
>>
>>
>>
>>
>
>



-- 
Best Regards, Edward J. Yoon @ NHN, corp.
edwardyoon@apache.org
http://blog.udanax.org
