hadoop-common-user mailing list archives

From janani venkat <jan...@gmail.com>
Subject distributed cache
Date Fri, 09 Apr 2010 10:18:54 GMT
I'm quite new to Hadoop. I set up a single node and ran the sample map-reduce
programs on it. They worked fine.
1) I want to run the distributed cache code (on a single node or a 2-node
cluster) and view the output, but I don't understand how to specify the input
files, how to set up the paths in JobConf, or where to add the functions
specified in the instructions.
2) I also want to know how to view the output files (logs).
3) The documentation talks about speculative execution, which is enabled by
default in JobConf. But where exactly in the Hadoop installation can the
actual logic of speculative execution be found? I mean the specific code
that gets executed when it is triggered.

Waiting for guidance..

