hadoop-common-user mailing list archives

From Chris K Wensel <ch...@wensel.net>
Subject Re: Using S3 Block FileSystem as HDFS replacement
Date Wed, 02 Jul 2008 04:41:13 GMT
> How do I put something into the fs?
> Something like "bin/hadoop fs -put input input" will not work, since s3
> is not the default fs, so I tried bin/hadoop fs -put input
> s3://ID:SECRET@BUCKET/input (and some variations of it), but it didn't
> work; I always got an error complaining that I had not provided the
> ID/secret for s3.
>

 > hadoop distcp ...

should work with your s3 URLs.
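
For example, something along these lines (just a sketch; the source path
/user/you/input is a placeholder, and ID, SECRET, and BUCKET are your own
credentials and bucket name):

 > bin/hadoop distcp /user/you/input s3://ID:SECRET@BUCKET/input

distcp runs as a MapReduce job, so the source path resolves against your
default fs and the files get written out to the s3 bucket.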

>
> 08/07/01 22:12:55 INFO mapred.FileInputFormat: Total input paths to process : 2
> 08/07/01 22:12:57 INFO mapred.JobClient: Running job: job_200807012133_0010
> 08/07/01 22:12:58 INFO mapred.JobClient:  map 100% reduce 100%
> java.io.IOException: Job failed!
> at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1062)
> (...)
>
> I tried several times, and with the wordcount example, but the error was
> always the same.
>

Unsure. Many things could be conspiring against you. Leave the defaults
in hadoop-site.xml and use distcp to copy things around. That's the
simplest approach, I think.
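
If you want the s3:// URLs to work without embedding the key in the URL,
the usual route is to put the credentials in hadoop-site.xml. A sketch
(the values are placeholders for your own keys):

  <property>
    <name>fs.s3.awsAccessKeyId</name>
    <value>YOUR_AWS_ACCESS_KEY_ID</value>
  </property>
  <property>
    <name>fs.s3.awsSecretAccessKey</name>
    <value>YOUR_AWS_SECRET_ACCESS_KEY</value>
  </property>

With those set, plain s3://BUCKET/path URLs (no ID:SECRET in the URL)
should stop complaining about missing credentials, for both fs -put and
distcp.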

ckw
