hadoop-mapreduce-user mailing list archives

From sudhakara st <sudhakara...@gmail.com>
Subject Re: How to monitor what hdfs block is served to a client?
Date Tue, 07 Jul 2015 12:29:01 GMT
You have to customize the InputFormat by extending FileInputFormat and
overriding the methods getSplits(JobContext job) and
computeSplitSize(long blockSize, long minSize, long maxSize).
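The split-versus-block arithmetic behind the quoted question can be illustrated with a short standalone sketch (the class and helper below are hypothetical illustrations, not part of the Hadoop API): a ~167 MB file with a 128 MB block size, read as three ~56 MB splits.

```java
// Sketch of mapping input splits to HDFS block indices, using the exact
// byte offsets from the quoted question. Hypothetical helper, not Hadoop code.
public class SplitBlockMap {

    static final long BLOCK_SIZE = 128L * 1024 * 1024; // 134217728 bytes

    // Returns the inclusive range [firstBlock, lastBlock] of HDFS block
    // indices that a split starting at `offset` with `length` bytes touches.
    static long[] blocksForSplit(long offset, long length) {
        long first = offset / BLOCK_SIZE;
        long last = (offset + length - 1) / BLOCK_SIZE;
        return new long[]{first, last};
    }

    public static void main(String[] args) {
        long splitLen = 58397994L; // length of each split in the question
        long[] offsets = {0L, 58397994L, 116795988L};
        for (long off : offsets) {
            long[] b = blocksForSplit(off, splitLen);
            System.out.println("split at " + off
                    + " reads blocks " + b[0] + ".." + b[1]);
        }
    }
}
```

Note that the first two splits fall entirely inside block 0, while the third split straddles the 134217728-byte boundary, so the task reading it will fetch data from both block 0 and block 1.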

On Sat, Jun 20, 2015 at 4:55 AM, Shiyao Ma <i@introo.me> wrote:

> Hi.
>
> How to monitor the block transmission log of datanodes?
>
>
> A more detailed example:
>
> My hdfs block size is 128MB. I have a file stored on hdfs with size
> 167.08MB.
>
> Also, I have a client, requesting the whole file with three splits, e.g.,
>
> hdfs://myserver:9000/myfile:0+58397994  (0-56MB)
>
> hdfs://myserver:9000/myfile:58397994+58397994 (56MB-112MB)
>
> hdfs://myserver:9000/myfile:116795988+58397994 (112MB-168MB)
>
>
> The situation is kinda fixed and I cannot modify the split size.
> Nevertheless, I'd like to know what block transmission is happening
> under the hood.
>



-- 

Regards,
...sudhakara
