hadoop-common-user mailing list archives

From: Yang <teddyyyy...@gmail.com>
Subject: can't disable speculative execution?
Date: Thu, 12 Jul 2012 03:46:23 GMT
I set the following params to false in my Pig script (0.10.0):

SET mapred.map.tasks.speculative.execution false;
SET mapred.reduce.tasks.speculative.execution false;
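
(In case it matters, I believe the same properties can also be passed on the pig
command line with -D, e.g.

pig -Dmapred.map.tasks.speculative.execution=false \
    -Dmapred.reduce.tasks.speculative.execution=false myscript.pig

where myscript.pig is just a stand-in for my script; either way the values should
end up in the job conf.)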


I also verified in the JobTracker UI, in job.xml, that they are indeed
set correctly.

When the job finished, the JobTracker UI showed that there was only one attempt
for each task (in fact I have only one task).

But when I went to the TaskTracker node and looked under the
/var/log/hadoop/userlogs/job_id_here/
dir, there were 3 attempt dirs:
 job_201207111710_0024 # ls
attempt_201207111710_0024_m_000000_0  attempt_201207111710_0024_m_000001_0
 attempt_201207111710_0024_m_000002_0  job-acls.xml

So 3 attempts were indeed fired?

I need to get this under control because I'm trying to debug the mappers
through Eclipse; if more than one mapper process is fired, they all try to
connect to the same debugger port, and the end result is that none of them
is able to attach to the debugger.
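
(For context, the debugger hookup is done by passing JDWP options to the task
JVM via mapred.child.java.opts, something along the lines of

SET mapred.child.java.opts '-agentlib:jdwp=transport=dt_socket,server=n,suspend=y,address=myhost:8787';

with myhost:8787 just a placeholder for wherever Eclipse is listening -- so every
mapper JVM dials the same listener, which is why I need exactly one mapper JVM
per run.)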


Thanks
Yang
