hadoop-mapreduce-user mailing list archives

From Wellington Chevreuil <wellington.chevre...@gmail.com>
Subject Re: Uploading file to HDFS
Date Fri, 19 Apr 2013 10:01:27 GMT
Can't you use Flume for that?
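
For example, a minimal Flume setup (just a sketch; the agent name "a1", the
spool directory, and the HDFS path below are placeholders) could watch a
local directory and stream whatever lands there into HDFS:

  # flume.conf
  a1.sources = r1
  a1.channels = c1
  a1.sinks = k1

  # Spooling-directory source: ingests files dropped into /data/incoming
  a1.sources.r1.type = spooldir
  a1.sources.r1.spoolDir = /data/incoming
  a1.sources.r1.channels = c1

  # File channel: buffers events on disk between source and sink
  a1.channels.c1.type = file

  # HDFS sink: writes the events under the target HDFS directory
  a1.sinks.k1.type = hdfs
  a1.sinks.k1.hdfs.path = hdfs://namenode:8020/ingest
  a1.sinks.k1.channel = c1

Then start the agent with:

  flume-ng agent -n a1 -f flume.conf -c conf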


2013/4/19 David Parks <davidparks21@yahoo.com>

> I just realized another trick you might try. The Hadoop dfs client can
> read input from STDIN, so you could use netcat to pipe the data across to
> HDFS without hitting the hard drive. I haven’t tried it, but here’s what
> I think might work:
>
> On the Hadoop box, open a listening port and feed that to the HDFS command:
>
> nc -l 2342 | hdfs dfs -copyFromLocal - /tmp/x.txt
>
> On the remote server:
>
> cat my_big_2tb_file | nc 10.1.1.1 2342
>
> I haven’t tried it yet, but in theory this should work. I just happened
> to test out the hdfs dfs command reading from stdin. You might have to
> correct the above syntax; I wrote it off the top of my head.
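>
> A variation (also untested, and assuming gzip and nc are available on
> both ends) would compress the stream in flight to cut transfer time. On
> the Hadoop box:
>
> nc -l 2342 | gunzip -c | hdfs dfs -copyFromLocal - /tmp/x.txt
>
> And on the remote server:
>
> cat my_big_2tb_file | gzip -c | nc 10.1.1.1 2342
>
> Afterwards you could sanity-check the copy by comparing md5sum
> my_big_2tb_file against hdfs dfs -cat /tmp/x.txt | md5sum.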
>
> Dave
>
> From: 超级塞亚人 [mailto:sheldom@gmail.com]
> Sent: Friday, April 19, 2013 11:35 AM
> To: user@hadoop.apache.org
> Subject: Uploading file to HDFS
>
> I have a problem. Our cluster has 32 nodes, and each node's disk is 1TB.
> I want to upload a 2TB file to HDFS. How can I put the file on the
> namenode and upload it to HDFS?
>
