hadoop-mapreduce-user mailing list archives

From arun k <arunk...@gmail.com>
Subject Re: Capturing Map/reduce task run times and bytes read
Date Sat, 03 Dec 2011 14:30:01 GMT

I wanted to confirm this because, if it doesn't, I want to write code to
capture it myself.

Does it make sense to classify a map/reduce task as I/O-bound or CPU-bound
based on its I/O rate?
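One rough way to act on that idea is to compare each task's byte throughput against a cutoff. This is only a sketch: the 10 MB/s threshold below is an arbitrary illustration I picked for the example, not a Hadoop default or anything stated in this thread.

```python
# Rough heuristic: classify a task as I/O-bound or CPU-bound by its I/O rate.
# The 10 MB/s cutoff is an arbitrary illustration, not a Hadoop default.
IO_BOUND_THRESHOLD = 10 * 1024 * 1024  # bytes per second

def classify(total_bytes, run_time_ms):
    """Return 'I/O-bound' if the task's byte throughput exceeds the cutoff."""
    rate = total_bytes / (run_time_ms / 1000.0)
    return "I/O-bound" if rate >= IO_BOUND_THRESHOLD else "CPU-bound"

# Hypothetical task: 512 MB read/written in 20 s -> ~25.6 MB/s
print(classify(512 * 1024 * 1024, 20_000))  # I/O-bound
```

In practice the cutoff would have to be tuned per cluster (disk and network speeds vary), and a task can be both slow and low-throughput, so this is at best a first-pass filter.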


On Sat, Dec 3, 2011 at 2:43 PM, Harsh J <harsh@cloudera.com> wrote:

> Arun,
> Inline again.
> On 03-Dec-2011, at 12:39 PM, arun k wrote:
> Q> Is the map/reduce task run time displayed in the web GUI
> decent/accurate enough?
> Don't see why not. We only display what's been genuinely collected. What
> you get out of an API on the CLI is absolutely the same thing. Or perhaps I
> do not understand your question completely here - what's led you to ask
> this?
> Q>If i want to do find the IO rate of a task, will the task run time
> divided by total number of FIle bytes and HDFS bytes read/written give it
> approximately ?
> Yes, that should give you a stop-watch measure. Task start -> Task end,
> and the counters the task puts up for itself.
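The stop-watch measure described above can be written out directly; the counter values and timestamps below are invented for illustration, but the arithmetic is exactly bytes over wall time.

```python
def io_rate_bytes_per_sec(file_bytes, hdfs_bytes, start_ms, finish_ms):
    """Stop-watch I/O rate: (FILE bytes + HDFS bytes) / task wall time."""
    return (file_bytes + hdfs_bytes) / ((finish_ms - start_ms) / 1000.0)

# Hypothetical task: 100 MB of FILE I/O + 64 MB of HDFS I/O over 8 seconds
rate = io_rate_bytes_per_sec(100 << 20, 64 << 20, 0, 8_000)
print(f"{rate / (1 << 20):.1f} MB/s")  # 20.5 MB/s
```

Note this averages over the whole task, so setup, sort, and compute time are all folded in; it approximates the rate rather than measuring I/O phases separately.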
> Q> Does the FILE bytes read counter for the reduce task include the map
> output bytes fetched over the network, or only the bytes read locally
> from the map outputs after they have been copied?
> FILE counters are from whatever is read off a local filesystem (file:///),
> so would mean the latter. If you look again, you will notice another
> counter named "Reduce shuffle bytes" that gives you the former count -
> separately.
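So the two quantities live in separate counters, as described above. A minimal sketch of reading them apart, assuming counter names as they appear in the 1.x-era job web UI ("Reduce shuffle bytes" and "FILE_BYTES_READ"); the values here are invented:

```python
# Counters for one hypothetical reduce task (values invented for illustration).
counters = {
    "Reduce shuffle bytes": 300 << 20,  # map output fetched during the copy phase
    "FILE_BYTES_READ": 180 << 20,       # local-filesystem reads after the copy
}

shuffled_mb = counters["Reduce shuffle bytes"] >> 20
local_mb = counters["FILE_BYTES_READ"] >> 20
print(f"shuffled: {shuffled_mb} MB, local reads: {local_mb} MB")
```

Summing the two would double-count data that was both shuffled in and later re-read during the merge, which is why the thread stresses they are reported separately.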
