hadoop-hdfs-user mailing list archives

From Jean-Marc Spaggiari <jean-m...@spaggiari.org>
Subject Re: Piping output of hadoop command
Date Mon, 18 Feb 2013 17:23:28 GMT
Hi Julian,

I think it's not writing to standard output but to standard error.

You might want to test this:

hadoop fs -copyToLocal FILE_IN_HDFS 2>&1 | ssh REMOTE_HOST "dd of=TARGET_FILE"

That redirects stderr to stdout as well.

Not sure, but it might be your issue.
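A quick way to see the difference between the two redirections, using plain shell commands instead of the actual hadoop invocation (FILE_IN_HDFS, REMOTE_HOST and TARGET_FILE above are placeholders for your own paths):

```shell
# 2>&1 merges stderr into stdout, so a pipe or capture sees both streams;
# 1>&2 does the opposite and diverts stdout to stderr, leaving the pipe empty.
merged=$( { echo out; echo err >&2; } 2>&1 )
echo "$merged"          # both "out" and "err" survive the capture

lost=$( { echo out 1>&2; } 2>/dev/null )
echo "lost='$lost'"     # stdout was sent to stderr, so nothing was captured
```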


2013/2/18, Julian Wissmann <julian.wissmann@sdace.de>:
> Hi,
> we're running a Hadoop cluster with HBase to evaluate it as the
> database for a research project, and we've more or less decided to
> go with it.
> Now I'm exploring backup mechanisms and have decided to experiment
> with Hadoop's export functionality for that.
> What I am trying to achieve is to get data out of HBase and into HDFS
> via hadoop export, and then copy it out of HDFS onto a backup system.
> However, while copying data out of HDFS to the backup machine, I am
> experiencing problems.
> What I am trying to do is the following:
> hadoop fs -copyToLocal FILE_IN_HDFS | ssh REMOTE_HOST "dd of=TARGET_FILE"
> It creates a file on the remote host, but that file is 0 kB in
> size; instead of any data being copied over, the file just lands in
> my home folder.
> The output of the hadoop fs -copyToLocal command looks like this:
> 0+0 records in
> 0+0 records out
> 0 bytes (0 B) copied, 1.10011 s, 0.0 kB/s
> I cannot think of any reason why this command would behave this
> way. Is this some Java-ism that I'm missing here (like not correctly
> handling stdout), or am I actually doing it wrong?
> The Hadoop Version is 2.0.0-cdh4.1.2
> Regards
> Julian
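For what it's worth, a streaming variant may be worth trying here. This is a sketch, with FILE_IN_HDFS, REMOTE_HOST and TARGET_FILE again as placeholders: `hadoop fs -copyToLocal` expects a local destination path (with none given, it copies into the current directory, which would explain the file landing in the home folder), while `hadoop fs -cat` writes the file's bytes to stdout, so the pipe actually carries data.

```shell
# Sketch, not tested against this cluster:
#   hadoop fs -cat FILE_IN_HDFS | ssh REMOTE_HOST "dd of=TARGET_FILE"
# The receiving end demonstrated with local stand-ins, showing that
# dd writes whatever arrives on its stdin to the target file:
printf 'sample payload' | dd of=/tmp/dd_pipe_demo 2>/dev/null
cat /tmp/dd_pipe_demo
```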
