hadoop-common-user mailing list archives

From "C.V.Krishnakumar" <cvkrishnaku...@me.com>
Subject Re: using 'fs -put' from datanode: all data written to that node's hdfs and not distributed
Date Tue, 13 Jul 2010 16:32:04 GMT
I am a newbie, and I am curious: how did you discover that all the blocks were written to that
datanode's HDFS? I thought replication by the namenode was transparent. Am I missing something?
On Jul 12, 2010, at 4:21 PM, Nathan Grice wrote:

> We are trying to load data into HDFS from one of the slaves, and when the put
> command is run from a slave (datanode), all of the blocks are written to that
> datanode's HDFS and not distributed across the nodes in the cluster. It
> does not seem to matter what destination format we use (/filename vs.
> hdfs://master:9000/filename); it always behaves the same.
> Conversely, running the same command from the namenode distributes the files
> across the datanodes.
> Is there something I am missing?
> -Nathan
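
The behavior Nathan describes is consistent with HDFS's default replica placement policy: when the writing client is itself a datanode, the first replica of every block is placed on that local node (the remaining replicas go to other nodes). You can confirm where replicas actually landed with `hadoop fsck <path> -files -blocks -locations`. Below is a minimal sketch of that placement policy, not Hadoop's actual code; the node and rack names are made up for illustration:

```python
import random

def place_replicas(writer, nodes, racks, replication=3):
    """Sketch of the HDFS default replica placement policy.

    writer: the datanode the client runs on, or None if the client
            is outside the cluster.
    nodes:  list of datanode names.
    racks:  dict mapping node name -> rack id.
    Returns the list of nodes chosen for one block's replicas.
    """
    chosen = []
    # 1st replica: on the writing datanode itself if the client is one,
    # otherwise on a random node. This is why a 'fs -put' run on a slave
    # always lands the first copy of every block on that slave.
    first = writer if writer in nodes else random.choice(nodes)
    chosen.append(first)
    # 2nd replica: a node on a different rack than the first.
    if replication > 1:
        remote = [n for n in nodes
                  if racks[n] != racks[first] and n not in chosen]
        if remote:
            chosen.append(random.choice(remote))
    # 3rd replica: another node on the same rack as the second.
    if replication > 2 and len(chosen) > 1:
        same = [n for n in nodes
                if racks[n] == racks[chosen[1]] and n not in chosen]
        if same:
            chosen.append(random.choice(same))
    return chosen
```

So nothing is misconfigured: with replication enabled the other copies are still distributed; it is only the first replica of each block that is pinned to the writing datanode. Running the put from a machine that is not a datanode (or raising the replication factor) spreads the data.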
