hadoop-common-user mailing list archives

From C G <parallel...@yahoo.com>
Subject Re: DFS Block Allocation
Date Fri, 21 Dec 2007 01:52:32 GMT
Hmmm....this thread is very interesting - I didn't know most of the stuff mentioned here.
  Ted, when you say "copy in the distro" do you need to include the configuration files from
the running grid?  You don't need to actually start HDFS on this node do you?
  If I'm following this approach correctly, I would want to have an "xfer server" whose job
it is to essentially run dfs -copyFromLocal on all inbound-to-HDFS data. Once I'm certain
that my data has copied correctly, I can delete the local files on the xfer server.
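A minimal sketch of that xfer-server loop (the paths, the HDFS target directory, and the install location are all assumptions; the hadoop distro on the xfer server must carry the running grid's conf files). It deletes a local file only after `-copyFromLocal` reports success:

```shell
#!/bin/sh
# Hypothetical xfer-server ingest loop -- all paths below are assumptions.
HADOOP_HOME=/opt/hadoop      # assumed location of the hadoop distro (with grid conf/)
SRC_DIR=/data/inbound        # assumed local staging directory on the xfer server
DEST_DIR=/ingest             # assumed target directory in HDFS

for f in "$SRC_DIR"/*; do
  [ -f "$f" ] || continue
  # Copy the file into HDFS; the namenode is whatever conf/ points at.
  if "$HADOOP_HOME"/bin/hadoop dfs -copyFromLocal "$f" "$DEST_DIR/"; then
    # Transfer succeeded, so the local staging copy can go.
    rm -f "$f"
  fi
  # On failure the file stays in $SRC_DIR for a retry.
done
```

Relying on the exit status of `-copyFromLocal` is the simplest success check; a more paranoid version could also compare sizes with `hadoop dfs -ls` before deleting.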
  This is great news, as my current system wastes a lot of time copying data from data acquisition
servers to the master node. If I can copy to HDFS directly from my acquisition servers then
I am a happy guy....
  C G

Ted Dunning <tdunning@veoh.com> wrote:

Just copy the hadoop distro directory to the other machine and use whatever
command you were using before.

A program that uses hadoop just has to have access to all of the nodes
across the net. It doesn't assume anything else.
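In other words, something like this (hostnames and paths are assumptions): copy the distro, including the conf/ directory that names the grid's namenode, and the same dfs commands work from the new machine. No HDFS daemons are started there; the distro acts purely as a client.

```shell
# Hypothetical setup of a new client machine -- names below are assumptions.
# Copy the hadoop distro (with conf/hadoop-site.xml) from a grid node.
scp -r gridnode:/opt/hadoop /opt/hadoop

# These commands now talk to the remote namenode named in conf/.
# Nothing is started locally.
/opt/hadoop/bin/hadoop dfs -ls /
/opt/hadoop/bin/hadoop dfs -copyFromLocal /tmp/localfile.dat /ingest/localfile.dat
```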

On 12/20/07 2:35 PM, "Jeff Eastman" wrote:

> .... Can you give me a pointer on how to accomplish this (upload from other
> machine)? 
