hbase-user mailing list archives

From Haijia Zhou <leons...@gmail.com>
Subject Re: FILE_BYTES_READ counter missing for HBase mapreduce job
Date Thu, 05 Sep 2013 18:50:48 GMT
Additional info:
The mapreduce job I run is a map-only job. It has no reducers and it
writes data directly to HDFS in the mapper.
 Could this be the reason why there's no value for FILE_BYTES_READ?
 If so, is there any easy way to get the total input data size?

 Thanks
Haijia
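
[Editor's note: a map-only job with no spills to local disk never reads local files, so FILE_BYTES_READ is simply absent. One workaround is to total the input size yourself with a custom counter in the mapper. The sketch below assumes the 0.94-era HBase API (`Result.raw()`, `KeyValue.getLength()`); the class and counter names are illustrative, not from the original job.]

```java
import java.io.IOException;

import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapper;

// Sketch: a custom counter that sums the size of every KeyValue the
// mapper sees, giving a "total input bytes" figure the framework's
// FileSystemCounters don't provide for a TableInputFormat scan.
public class ScanMapper extends TableMapper<ImmutableBytesWritable, Put> {

  // Counter name is made up for this example.
  enum InputCounters { BYTES_SCANNED }

  @Override
  protected void map(ImmutableBytesWritable row, Result value, Context context)
      throws IOException, InterruptedException {
    long bytes = 0;
    for (KeyValue kv : value.raw()) {
      bytes += kv.getLength();  // serialized length of this cell
    }
    context.getCounter(InputCounters.BYTES_SCANNED).increment(bytes);

    // ... existing processing / direct HDFS writes go here ...
  }
}
```

After the job finishes, the total appears alongside the other counters in the job status report (and is available programmatically via `job.getCounters()`).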


On Thu, Sep 5, 2013 at 2:46 PM, Haijia Zhou <leonster@gmail.com> wrote:

> Hi,
>  Basically I have a mapreduce job to scan a hbase table and do some
> processing. After the job finishes, I only got three filesystem counters:
> HDFS_BYTES_READ, HDFS_BYTES_WRITTEN and FILE_BYTES_WRITTEN.
>  The value of HDFS_BYTES_READ is not very useful here because it shows the
> size of the .META. table, not the size of the input records.
>  I am looking for counter FILE_BYTES_READ but somehow it's missing in the
> job status report.
>
>  Does anyone know what I might miss here?
>
>  Thanks
> Haijia
>
> P.S. The job status report
>  FileSystemCounters        Map              Reduce   Total
>  HDFS_BYTES_READ           340,124          0        340,124
>  FILE_BYTES_WRITTEN        190,431,329      0        190,431,329
>  HDFS_BYTES_WRITTEN        272,538,467,123  0        272,538,467,123
>
