hadoop-common-user mailing list archives

From Jason Venner <jason.had...@gmail.com>
Subject Re: How can I deploy 100 blocks onto 10 datanodes with each node have 10 blocks?
Date Tue, 20 Oct 2009 03:54:08 GMT
If you set your replication count to one and create 10 files on each
datanode, you will get the pattern you are trying for.

By default, when a file is created on a machine hosting a datanode, that
datanode receives one replica of the file and is responsible for
forwarding the file data to the next replica, if any.
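The recipe above can be sketched from the shell. This is a hypothetical example, not a tested command sequence: it assumes a running HDFS cluster, the Hadoop 0.20-era property names `dfs.replication` and `dfs.block.size`, and made-up local paths and HDFS paths. The key point is that the loop body must be run on each datanode host in turn, so that the single replica lands on the local datanode.

```shell
# On each of the 10 datanode hosts, upload 10 files with replication
# factor 1. A 128 MB block size (134217728 bytes) keeps each file in a
# single block as long as the file itself is smaller than that.
# Paths and filenames here are placeholders.
for i in $(seq 1 10); do
  hadoop fs -D dfs.replication=1 -D dfs.block.size=134217728 \
    -put /local/data/part-$i /user/huang/$(hostname)-block-$i
done
```

For files already in HDFS, `hadoop fs -setrep 1 <path>` drops the replication count to one, but it does not let you choose which datanode keeps the surviving replica; only writing from the target host gives that control.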

On Thu, Oct 15, 2009 at 1:39 PM, Huang Qian <skyswind@gmail.com> wrote:

> Hi everyone. I am working on a project with Hadoop and have run into a
> problem. How can I deploy 100 files, each containing a single block (by
> setting the block size and controlling the file size), onto 10
> datanodes so that each datanode holds exactly 10 blocks? I know the
> file system places blocks automatically, but I want to make sure the
> assigned files are distributed evenly. How can I do this with the
> Hadoop tools or API?
> Huang Qian(黄骞)
> Institute of Remote Sensing and GIS,Peking University
> Phone: (86-10) 5276-3109
> Mobile: (86) 1590-126-8883
> Address:Rm.554,Building 1,ChangChunXinYuan,Peking
> Univ.,Beijing(100871),CHINA

Pro Hadoop, a book to guide you from beginner to hadoop mastery,
www.prohadoopbook.com a community for Hadoop Professionals
