hadoop-hdfs-user mailing list archives

From Ben Kim <benkimkim...@gmail.com>
Subject Re: Decommissioning a datanode takes forever
Date Tue, 22 Jan 2013 00:28:31 GMT
Hi Varun, thank you for the response.

No, there don't seem to be any corrupted blocks in my cluster.
I ran "hadoop fsck -blocks /" and it didn't report any corrupted blocks.

However, these two WARNings keep repeating in the namenode log, ever since
the decommission started:

   - 2013-01-22 09:16:30,908 WARN
   org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Cannot roll edit log,
   edits.new files already exists in all healthy directories:
   - 2013-01-22 09:12:10,885 WARN
   org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Not able to place
   enough replicas, still in need of 1

There isn't any WARN or ERROR in the decommissioning datanode's log.
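For anyone following along, this is roughly how I've been checking the decommission state (a sketch assuming the standard dfs.hosts.exclude setup from the wiki guide; the exclude-file path is illustrative):

```shell
# Per-datanode report; a node being retired shows
# "Decommission Status : Decommission in progress"
hadoop dfsadmin -report

# After editing the exclude file (the path configured via
# dfs.hosts.exclude in hdfs-site.xml), tell the namenode to re-read it
hadoop dfsadmin -refreshNodes

# Check overall block health across the namespace
hadoop fsck / -blocks
```

These all need to run against a live cluster as the HDFS superuser, so treat the output descriptions above as what I see here rather than a guaranteed format.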

Ben


On Mon, Jan 21, 2013 at 3:05 PM, varun kumar <varun.uid@gmail.com> wrote:

> Hi Ben,
>
> Are there any corrupted blocks in your Hadoop cluster?
>
> Regards,
> Varun Kumar
>
>
> On Mon, Jan 21, 2013 at 8:22 AM, Ben Kim <benkimkimben@gmail.com> wrote:
>
>> Hi!
>>
>> I followed the decommissioning guide on the Hadoop HDFS wiki.
>>
>> The HDFS web UI shows that the decommissioning process has successfully
>> begun.
>>
>> It started re-replicating 80,000 blocks across the cluster, but for
>> some reason it stopped at 9,059 blocks. I've waited 30 hours with no
>> further progress.
>>
>> Anyone have any idea?
>>  --
>>
>> *Benjamin Kim*
>> *benkimkimben at gmail*
>>
>
>
>
> --
> Regards,
> Varun Kumar.P
>



-- 

*Benjamin Kim*
*benkimkimben at gmail*
