hadoop-common-dev mailing list archives

From Oleg Ruchovets <oruchov...@gmail.com>
Subject produce map/reduce on multiple files
Date Tue, 23 Mar 2010 10:08:14 GMT
Hi,
All the examples I have found run a MapReduce job on a single file, but in my
situation I have more than one.

Suppose I have such folder on HDFS which contains some files:

    /my_hadoop_hdfs/my_folder:
                /my_hadoop_hdfs/my_folder/file1.txt
                /my_hadoop_hdfs/my_folder/file2.txt
                /my_hadoop_hdfs/my_folder/file3.txt


How can I execute a Hadoop MapReduce job on file1.txt, file2.txt and file3.txt?

Is it possible to pass the folder to the Hadoop job as a parameter, so that all
files in it are processed by the MapReduce job?
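For what it's worth, a directory is a valid input path in Hadoop: FileInputFormat expands it to the regular files it contains, so all three files become input splits of one job. Below is a minimal driver sketch under that assumption; the class name FolderJobDriver, the output path, and the commented-out Mapper/Reducer wiring are placeholders, not anything from this thread.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class FolderJobDriver {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "process-folder");
        job.setJarByClass(FolderJobDriver.class);

        // Passing the directory itself as the input path: FileInputFormat
        // enumerates every file inside it (file1.txt, file2.txt, file3.txt)
        // and turns them into input splits for the job.
        FileInputFormat.addInputPath(job, new Path("/my_hadoop_hdfs/my_folder"));
        FileOutputFormat.setOutputPath(job, new Path("/my_hadoop_hdfs/output"));

        // job.setMapperClass(...);   // your Mapper here
        // job.setReducerClass(...);  // your Reducer here

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

The same idea works from the command line: most example jobs (e.g. the bundled wordcount) accept a directory as their input argument, so `hadoop jar myjob.jar FolderJobDriver` with a folder input covers all files in it without listing them individually.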

Thanks in advance,
Oleg.
