hadoop-mapreduce-user mailing list archives

From Adnan Karač <adnanka...@gmail.com>
Subject Re: Cannot obtain block length for LocatedBlock
Date Tue, 26 May 2015 09:13:18 GMT
Hi Brahma,

Thanks for the quick response. I assumed that running the file check without
the *openforwrite* option would still list the file containing this block,
whether or not it was open for write. However, I have just tried it with the
option as well; unfortunately, no success.
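
As a side note, it can help to quote the block id when grepping fsck output,
so the shell cannot glob a stray trailing * and grep matches the id and
generation stamp literally. A minimal sketch below; the file path and the
fsck output line are made up for illustration, and real output would come
from `hdfs fsck -openforwrite -files -blocks -locations /`:

```shell
# Hypothetical fsck output line -- the path /tmp/app/part-00000 and the
# layout are illustrative only, not taken from a real cluster.
fsck_sample='/tmp/app/part-00000 139397 bytes, 1 block(s), OPENFORWRITE:
0. BP-1632531813-172.19.67.67-1393407344218:blk_1109280129_1099547327549 len=139397'

# Quote the pattern and use -F (fixed string) so grep treats the
# underscore-separated block id and generation stamp literally.
printf '%s\n' "$fsck_sample" | grep -F 'blk_1109280129_1099547327549'
```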

Adnan

On Tue, May 26, 2015 at 10:12 AM, Brahma Reddy Battula <
brahmareddy.battula@huawei.com> wrote:

>
> Can you try like following..?
>
> hdfs fsck -openforwrite -files -blocks -locations / |
> grep blk_1109280129_1099547327549
>
>
>  Thanks & Regards
>
>  Brahma Reddy Battula
>
>
>    ------------------------------
> *From:* Adnan Karač [adnankarac@gmail.com]
> *Sent:* Tuesday, May 26, 2015 1:34 PM
> *To:* user@hadoop.apache.org
> *Subject:* Cannot obtain block length for LocatedBlock
>
>   Hi all,
>
>  I have an MR job that runs and then exits with the following exception:
>
>  java.io.IOException: Cannot obtain block length for LocatedBlock
> {BP-1632531813-172.19.67.67-1393407344218:blk_1109280129_1099547327549;
> getBlockSize()=139397; corrupt=false; offset=0; locs=[172.19.67.67:50010,
> 172.19.67.78:50010, 172.19.67.84:50010]}
>
>  Now, the fun part is that I don't know which file is in question. In
> order to find out, I did this:
>
>  hdfs fsck -files -blocks / | grep blk_1109280129_1099547327549
>
>  Interestingly enough, it came up with nothing.
>
>  Did anyone experience anything similar? Or does anyone have a piece of
> advice on how to resolve this?
>
>  The Hadoop version is 2.3.0.
>
>  Thanks in advance!
>
>  --
> Adnan Karač
>
>



-- 
Adnan Karač
