hadoop-mapreduce-user mailing list archives

From Julian Wissmann <julian.wissm...@sdace.de>
Subject Piping output of hadoop command
Date Mon, 18 Feb 2013 17:16:06 GMT
Hi,

we're running a Hadoop cluster with HBase to evaluate it as the database
for a research project, and we've more or less decided to go with it.
So now I'm exploring backup mechanisms and have decided to experiment
with Hadoop's export functionality for that.

What I am trying to achieve is getting data out of HBase and into HDFS
via export, and then copying it out of HDFS onto a backup system.
However, while copying the data out of HDFS to the backup machine, I am
running into problems.
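For context, the export step I'm using is roughly the stock HBase Export MapReduce job (TABLE_NAME and /backup/TABLE_NAME are placeholders, not my actual names):

```shell
# HBase's bundled Export job: dumps the table as SequenceFiles into an HDFS directory.
hbase org.apache.hadoop.hbase.mapreduce.Export TABLE_NAME /backup/TABLE_NAME
```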

What I am trying to do is the following:

hadoop fs -copyToLocal FILE_IN_HDFS | ssh REMOTE_HOST "dd of=TARGET_FILE"

It creates a file on the remote host, but that file is 0 KB in size;
instead of any data being streamed over there, the copied file just
lands in my local home folder.

The command output looks like this (dd's messages translated from German):

hadoop fs -copyToLocal FILE_IN_HDFS | ssh REMOTE_HOST "dd of=FILE_ON_REMOTE_HOST"
0+0 records in
0+0 records out
0 bytes (0 B) copied, 1.10011 s, 0.0 kB/s

I cannot think of any reason why this command would behave this way. Is
this some Java-ism that I'm missing here (like not writing to stdout
correctly), or am I actually doing it wrong?
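For reference, my current guess is that -copyToLocal expects a local destination path and never writes to stdout (with no destination given, it drops the file into the current directory, which would explain it landing in my home folder), so the pipe carries nothing. The variant I'm going to try next uses hadoop fs -cat, which does stream a file's bytes to stdout (FILE_IN_HDFS, REMOTE_HOST, and TARGET_FILE are placeholders as before):

```shell
# -cat streams the HDFS file to stdout, so dd on the remote end actually receives data.
hadoop fs -cat FILE_IN_HDFS | ssh REMOTE_HOST "dd of=TARGET_FILE"
```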

The Hadoop version is 2.0.0-cdh4.1.2.

Regards

Julian
