hadoop-general mailing list archives

From Shuja Rehman <shujamug...@gmail.com>
Subject Configure Groovy for map/reduce jobs in hadoop
Date Tue, 22 Jun 2010 12:34:27 GMT
Hi,

I have written a map/reduce job in Groovy and I run it with Hadoop
streaming as follows:

hadoop jar
/usr/lib/hadoop-0.20/contrib/streaming/hadoop-0.20.2+228-streaming.jar \
-inputformat StreamInputFormat \
-inputreader "StreamXmlRecordReader,begin=<mdc xmlns:HTML=\"
http://www.w3.org/TR/REC-xml\">,end=</mdc>" \
-input /user/root/telecom/ \
-jobconf mapred.map.tasks=1 \
-output q5 \
-mapper /home/ftpuser1/Nodemapper2.groovy \
-jobconf mapred.reduce.tasks=0 \
-file /home/ftpuser1/Nodemapper2.groovy
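
For illustration, a stripped-down mapper of the same shape looks like this
(a minimal sketch only, not the actual Nodemapper2.groovy; the real script
parses the XML records, which I have omitted here):

#!/usr/bin/env groovy
// Minimal Hadoop streaming mapper sketch (illustration only, not the
// actual Nodemapper2.groovy). Streaming feeds one record per line on
// stdin and expects tab-separated key/value pairs on stdout.
System.in.eachLine { String line ->
    // Placeholder for the real XML handling: emit the record length
    // keyed by a constant.
    println "record\t${line.length()}"
}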


The problem is that the output files are 0 bytes in size. I checked the
task logs, and they contain the following error:

/usr/bin/env: groovy: No such file or directory


Groovy is installed on the system; I verified this by running the mapper
directly with the following command:

cat 1.xml | /root/Nodemapper2.groovy

This command runs fine and produces output, so the interpreter works from
my interactive shell. The mapper starts with the following line:

#!/usr/bin/env groovy
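
My suspicion is that /usr/bin/env inside the streaming task cannot find
groovy on the PATH that the task sees, even though it is on the PATH of my
shell. One workaround I may try (assuming groovy actually lives at
/usr/local/bin/groovy on every task node, which I have not confirmed) is
to hardcode the interpreter path in the shebang:

#!/usr/local/bin/groovy
// Hypothetical absolute path in place of the env lookup; this only
// works if groovy is installed at this exact path on all task nodes.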

Can anybody tell me whether any special configuration is required to run
Groovy map/reduce jobs with Hadoop?


-- 
Regards
Shuja-ur-Rehman Baig
_________________________________
MS CS - School of Science and Engineering
Lahore University of Management Sciences (LUMS)
Sector U, DHA, Lahore, 54792, Pakistan
Cell: +92 3214207445
