hadoop-common-user mailing list archives

From Bryan Duxbury <br...@rapleaf.com>
Subject Massive discrepancies in job's bytes written/read
Date Wed, 18 Mar 2009 00:26:52 GMT
Hey all,

Looking at the stats for a number of our jobs, the amount of data the
UI claims we've read from or written to HDFS is vastly larger than the
amount of data that should be involved in the job. For instance, we
have a job that combines small files into big files, operating on
around 2 TB of data. The output in HDFS (via hadoop dfs -du) matches
the expected size, but the jobtracker UI claims that we've read and
written around 22 TB of data!
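To put a number on it (a rough sketch; the 2 TB and 22 TB figures are the approximate values quoted above):

```python
# Rough sanity check on the reported counter discrepancy.
# The sizes are the approximate figures from the message above.
TB = 1024 ** 4

expected_bytes = 2 * TB    # output size per `hadoop dfs -du`
reported_bytes = 22 * TB   # HDFS bytes read/written per the jobtracker UI

inflation = reported_bytes / expected_bytes
print(f"counters inflated by roughly {inflation:.0f}x")
```

So the counters overstate the job's real HDFS traffic by roughly an order of magnitude.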

By all accounts, Hadoop is actually *doing* the right thing - we're
not observing excess reading or writing anywhere. However, this
massive discrepancy makes the job stats essentially worthless for
understanding I/O in our jobs.

Does anyone know why there's such an enormous difference? Have others  
experienced this problem?
