hadoop-common-dev mailing list archives

From Yanbo Liang <yanboha...@gmail.com>
Subject Re: which part of Hadoop is responsible of distributing the input file fragments to datanodes?
Date Thu, 15 Nov 2012 08:13:00 GMT
I guess you mean that you want to implement your own strategy for block distribution.
If so, follow this call chain in the code:
FSNamesystem.getAdditionalBlock() ---> BlockManager.chooseTarget()
 ---> BlockPlacementPolicy.chooseTarget().
You need to implement your own BlockPlacementPolicy. Then, when the client
issues an addBlock RPC, the NameNode will assign DataNodes to store the
replicas according to your rules.
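As an illustration only (not the real Hadoop API): the decision that a custom chooseTarget() makes can be sketched as a toy method that ranks candidate DataNodes by a rule of your own, here remaining free space. In actual HDFS you would subclass BlockPlacementPolicy (the chooseTarget() signature takes DatanodeDescriptor objects and varies between versions) and register your class via the dfs.block.replicator.classname configuration property. The names below (GreedyPlacement, chooseTargets) are hypothetical, and DataNodes are simplified to (name, free bytes) pairs.

```java
import java.util.*;

// Toy stand-in for the decision a custom BlockPlacementPolicy makes.
// Real Hadoop passes DatanodeDescriptor objects; here a "datanode" is
// just a name mapped to its free space in bytes.
public class GreedyPlacement {

    // Choose up to numReplicas targets, preferring nodes with the most
    // free space -- substitute whatever calculation you need here.
    static List<String> chooseTargets(Map<String, Long> freeSpaceByNode,
                                      int numReplicas) {
        List<Map.Entry<String, Long>> nodes =
                new ArrayList<>(freeSpaceByNode.entrySet());
        // Custom placement rule: sort by free space, descending.
        nodes.sort((a, b) -> Long.compare(b.getValue(), a.getValue()));
        List<String> targets = new ArrayList<>();
        for (int i = 0; i < Math.min(numReplicas, nodes.size()); i++) {
            targets.add(nodes.get(i).getKey());
        }
        return targets;
    }

    public static void main(String[] args) {
        Map<String, Long> cluster = new LinkedHashMap<>();
        cluster.put("dn1", 10L * 1024 * 1024);
        cluster.put("dn2", 50L * 1024 * 1024);
        cluster.put("dn3", 30L * 1024 * 1024);
        System.out.println(chooseTargets(cluster, 2)); // [dn2, dn3]
    }
}
```

The real chooseTarget() additionally receives the writer, already-chosen nodes, and the block size, so a production policy usually delegates to BlockPlacementPolicyDefault and only overrides the ranking step.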

2012/11/15 salmakhalil <salma_7975@hotmail.com>

> What I want to do exactly is to redistribute the input file fragments over
> the nodes of the cluster according to some calculations. I need to find the
> part that starts distributing the input file so that I can add my own code
> there.
>
>
>
> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/which-part-of-Hadoop-is-responsible-of-distributing-the-input-file-fragments-to-datanodes-tp4019530p4020330.html
> Sent from the Hadoop lucene-dev mailing list archive at Nabble.com.
>
