hadoop-general mailing list archives

From elton sky <eltonsky9...@gmail.com>
Subject Re: Why single thread for HDFS?
Date Thu, 08 Jul 2010 01:37:03 GMT
Steve,

> I do have access to that code if I can get at the right bit of the
> repository, if you really want me to look at it in detail ask, with the
> caveats that I'm away for the rest of the month and somewhat busy. Apart
> from that there's no reason why I shouldn't be able to make the changes to
> DfsClient public. Keep reminding me :)
>

Sounds great. Could you please let me know (by email or otherwise) how I can
access that code?


> I think right now you get a list of blocks via
> DfsClient.getBlockLocations(); this is a list of hosts where blocks live.
> There is no data about which disk on the specific host.
>
> I believe that what Russ did was move the decisions from DfsInputStream
> -which picks a block location for you, with a bias to the local host- and
> instead lets the calling program make the decision as to where to fetch each
> block. This meant he could set the renderer up to request blocks from
> different hosts.
>
> He had tried to use the JT to schedule the rendering code, but that didn't
> work as MapReduce has the notion of "reduction": less data out than in, so
> it moves work to where the data is. In rendering it's more MapExpand; the
> operation is the transformation of PDF pages into 600dpi 32bpp bitmaps,
> which then need to be streamed to the (very large) printer at its print
> rate, in the correct order. It was easiest to have a specific machine on the
> cluster -with no datanodes or TTs- set up to do the rendering, and just ask
> the filesystem for where things are.
>
> Like I said, I don't think there was anything tricky done in DfsClient,
> more a matter of making some data known internally to the DfsClient code
> public, so that the client app can decide where to fetch data. If the
> DfsClient knew which HDD the data was on in a datanode, the client app could
> use that in its decision making too, so that if the 9 machines each had 6
> HDDs, you could keep them all busy.
>
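For reference, enumerating a file's block locations through the public
FileSystem API looks roughly like this (the path is made up; as Steve says,
there is simply no per-disk information in what comes back):

import java.util.Arrays;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockLocationDump {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path file = new Path("/data/input.pdf");   // hypothetical path

    FileStatus status = fs.getFileStatus(file);
    // One BlockLocation per block: offset, length and the hosts holding
    // replicas. Note there is nothing here about which disk on a host --
    // exactly the gap discussed above.
    BlockLocation[] blocks =
        fs.getFileBlockLocations(status, 0, status.getLen());
    for (BlockLocation b : blocks) {
      System.out.println("offset=" + b.getOffset()
          + " len=" + b.getLength()
          + " hosts=" + Arrays.toString(b.getHosts()));
    }
  }
}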

Firstly, nodes in a cluster are usually VMs. But if each VM is attached to
multiple physical disks, we can still parallelize across those disks.


> If you're talking about M/R jobs, you don't want to do threads in your
> map() routine. While this is possible, it's going to be really hard to
> justify the extra parallelism along with the need to wait for all of the
> threads to complete before you can end the map() method.

Secondly, I agree with Gautam and Michael: in an MR job it's probably not a
good idea to read input in parallel inside the map() method, because map()
processes one line from HDFS at a time. That single-threaded approach is
simple and elegant, though it keeps the connection to the source open for a
long time, until the end of the map.
I can think of a trick, like letting map() pull and process the first block's
worth of input while other threads simultaneously pull the remaining input to
the map task's local disk. But this sounds messy...
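A rough sketch of that trick, using the plain FileSystem API rather than a
real Mapper (both paths are made up), would look something like:

import java.io.FileOutputStream;
import java.io.OutputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class PrefetchReader {
  public static void main(String[] args) throws Exception {
    final FileSystem fs = FileSystem.get(new Configuration());
    final Path src = new Path("/data/input.txt");          // hypothetical path
    final long firstBlock = fs.getFileStatus(src).getBlockSize();

    // Background thread: skip the first block and spool the rest of the
    // file to local disk while the foreground processes block one.
    Thread prefetch = new Thread(new Runnable() {
      public void run() {
        try {
          FSDataInputStream in = fs.open(src);
          OutputStream out = new FileOutputStream("/tmp/spill.dat"); // hypothetical
          try {
            in.seek(firstBlock);
            byte[] buf = new byte[64 * 1024];
            int n;
            while ((n = in.read(buf)) > 0) {
              out.write(buf, 0, n);
            }
          } finally {
            in.close();
            out.close();
          }
        } catch (Exception e) {
          e.printStackTrace();
        }
      }
    });
    prefetch.start();

    // Foreground: read and process the first block straight from HDFS.
    FSDataInputStream in = fs.open(src);
    try {
      long remaining = firstBlock;
      byte[] buf = new byte[64 * 1024];
      int n;
      while (remaining > 0
          && (n = in.read(buf, 0, (int) Math.min(buf.length, remaining))) > 0) {
        // ... hand buf[0..n) to the record-processing logic ...
        remaining -= n;
      }
    } finally {
      in.close();
    }
    prefetch.join();   // then carry on from /tmp/spill.dat
  }
}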

Aside from those two points, I think we can still do disk-level parallelism.
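For example, something like the sketch below fetches each block of a file on
its own thread with positioned reads, using only the public API (path made
up; whole blocks are buffered in memory just to keep the example short). With
blocks spread over different hosts, and ideally different disks, the reads
proceed concurrently:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ParallelBlockReader {
  public static void main(String[] args) throws Exception {
    final FileSystem fs = FileSystem.get(new Configuration());
    final Path file = new Path("/data/input.dat");   // hypothetical path

    FileStatus st = fs.getFileStatus(file);
    BlockLocation[] blocks = fs.getFileBlockLocations(st, 0, st.getLen());

    // One worker per block. Each opens its own stream and does a
    // positioned read of just its block.
    ExecutorService pool =
        Executors.newFixedThreadPool(Math.max(1, Math.min(blocks.length, 8)));
    List<Future<byte[]>> parts = new ArrayList<Future<byte[]>>();
    for (BlockLocation b : blocks) {
      final long off = b.getOffset();
      final int len = (int) b.getLength();   // whole block in RAM: sketch only
      parts.add(pool.submit(new Callable<byte[]>() {
        public byte[] call() throws Exception {
          FSDataInputStream in = fs.open(file);
          try {
            byte[] buf = new byte[len];
            in.readFully(off, buf, 0, len);   // positioned read
            return buf;
          } finally {
            in.close();
          }
        }
      }));
    }
    long total = 0;
    for (Future<byte[]> f : parts) {
      total += f.get().length;   // reassemble / process in block order
    }
    pool.shutdown();
    System.out.println("read " + total + " bytes in parallel");
  }
}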
