hadoop-user mailing list archives

From "David Parks" <davidpark...@yahoo.com>
Subject RE: Fastest way to transfer files
Date Sat, 29 Dec 2012 10:29:08 GMT
Here’s an example of running distcp (actually in this case S3DistCp, but it’s about the
same, just new DistCp()) from Java:

ToolRunner.run(getConf(), new S3DistCp(), new String[] {
       "--src",        "/src/dir/",
       "--srcPattern", ".*(itemtable)-r-[0-9]*.*",
       "--dest",       "s3://yourbucket/results/",
       "--s3Endpoint", "s3.amazonaws.com" });




From: Joep Rottinghuis [mailto:jrottinghuis@gmail.com] 
Sent: Saturday, December 29, 2012 2:51 PM
To: user@hadoop.apache.org
Cc: user@hadoop.apache.org; hdfs-user@hadoop.apache.org
Subject: Re: Fastest way to transfer files


Not sure why you are implying a contradiction when you say: "... distcp is useful _but_ you
want to do 'it' in java..."


First of all distcp _is_ written in Java.

You can call distcp or any other MR job from Java just fine.





Sent from my iPhone

On Dec 28, 2012, at 12:01 PM, burakkk <burak.isikli@gmail.com> wrote:


I have two different HDFS clusters. I need to transfer files between these environments. What's
the fastest way to transfer files in that situation?


I've researched it and found the distcp command. It's useful, but I want to do it in Java, so
is there any way to do that?


Is there any way to transfer files chunk by chunk from one HDFS cluster to another, or
to implement a process that works on chunks without reading the whole file at once?
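On the chunk-by-chunk point: HDFS streams obtained from FileSystem.open() and FileSystem.create() implement the standard Java InputStream/OutputStream contract, so a chunked copy between clusters is the ordinary buffered-copy loop. A minimal sketch of that loop follows, using in-memory streams to stand in for the HDFS streams (the chunk size of 4096 is an arbitrary illustration, not a recommended value):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.Arrays;

public class ChunkedCopy {
    // Copy a stream in fixed-size chunks. With Hadoop on the classpath, the
    // same loop works unchanged when 'in' comes from srcFs.open(path) and
    // 'out' from destFs.create(path) on two different clusters.
    static long copyInChunks(InputStream in, OutputStream out, int chunkSize)
            throws IOException {
        byte[] buf = new byte[chunkSize];
        long total = 0;
        int n;
        while ((n = in.read(buf)) != -1) {  // read at most one chunk
            out.write(buf, 0, n);           // write only the bytes actually read
            total += n;
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        // In-memory stand-ins for HDFS streams, for illustration only.
        byte[] data = new byte[100_000];
        for (int i = 0; i < data.length; i++) data[i] = (byte) (i % 251);
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        long copied = copyInChunks(new ByteArrayInputStream(data), sink, 4096);
        System.out.println(copied == data.length
                && Arrays.equals(data, sink.toByteArray()));
    }
}
```

Note that distcp already does this kind of streaming copy for you, in parallel across a MapReduce job, which is why it is usually the faster option for bulk transfers.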



Best Regards...



BURAK ISIKLI | http://burakisikli.wordpress.com

