pig-user mailing list archives

From Thejas M Nair <te...@yahoo-inc.com>
Subject Re: ERROR 6015: During execution, encountered a Hadoop error | ERROR 1066: Unable to open iterator for alias grouped_records
Date Mon, 13 Dec 2010 14:29:10 GMT
It seems like a problem with the Hadoop configuration that is probably not
specific to Pig. Are you able to run other MR jobs, such as the wordcount
example?
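
For example, something along these lines would exercise plain MapReduce with
Pig out of the picture (the jar name and paths are just placeholders for a
stock 0.20.2 install, adjust to your setup):

    hadoop jar $HADOOP_HOME/hadoop-0.20.2-examples.jar wordcount /some/input /some/output

If that also fails in the reduce phase with the same NullPointerException in
GetMapEventsThread, the problem is in the Hadoop setup rather than in the Pig
script.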

I searched for the exception string and found a few matches, including:

http://mail-archives.apache.org/mod_mbox/hadoop-mapreduce-user/201007.mbox/%3C441D899DB0E264409344AFC3D1998EEA01229FE6@email4.us.syncsort.com%3E

Thanks,
Thejas


On 12/13/10 6:09 AM, "deepak.n85@wipro.com" <deepak.n85@wipro.com> wrote:

> Thanks Thejas,
> 
> Reduce Task Logs:
> 
> 2010-12-13 18:15:08,340 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=SHUFFLE, sessionId=
> 2010-12-13 18:15:09,062 INFO org.apache.hadoop.mapred.ReduceTask: ShuffleRamManager: MemoryLimit=141937872, MaxSingleShuffleLimit=35484468
> 2010-12-13 18:15:09,076 INFO org.apache.hadoop.mapred.ReduceTask: attempt_201012121200_0018_r_000000_3 Thread started: Thread for merging on-disk files
> 2010-12-13 18:15:09,076 INFO org.apache.hadoop.mapred.ReduceTask: attempt_201012121200_0018_r_000000_3 Thread waiting: Thread for merging on-disk files
> 2010-12-13 18:15:09,081 INFO org.apache.hadoop.mapred.ReduceTask: attempt_201012121200_0018_r_000000_3 Thread started: Thread for merging in memory files
> 2010-12-13 18:15:09,082 INFO org.apache.hadoop.mapred.ReduceTask: attempt_201012121200_0018_r_000000_3 Need another 2 map output(s) where 0 is already in progress
> 2010-12-13 18:15:09,083 INFO org.apache.hadoop.mapred.ReduceTask: attempt_201012121200_0018_r_000000_3 Scheduled 0 outputs (0 slow hosts and 0 dup hosts)
> 2010-12-13 18:15:09,083 INFO org.apache.hadoop.mapred.ReduceTask: attempt_201012121200_0018_r_000000_3 Thread started: Thread for polling Map Completion Events
> 2010-12-13 18:15:09,092 FATAL org.apache.hadoop.mapred.TaskRunner: attempt_201012121200_0018_r_000000_3 GetMapEventsThread Ignoring exception : java.lang.NullPointerException
>         at java.util.concurrent.ConcurrentHashMap.get(ConcurrentHashMap.java:768)
>         at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$GetMapEventsThread.getMapCompletionEvents(ReduceTask.java:2683)
>         at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$GetMapEventsThread.run(ReduceTask.java:2605)
> 2010-12-13 18:15:11,389 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=CLEANUP, sessionId=
> 2010-12-13 18:15:12,107 INFO org.apache.hadoop.mapred.TaskRunner: Runnning cleanup for the task
> 2010-12-13 18:15:12,119 INFO org.apache.hadoop.mapred.TaskRunner: Task:attempt_201012121200_0018_r_000000_3 is done. And is in the process of commiting
> 2010-12-13 18:15:12,138 INFO org.apache.hadoop.mapred.TaskRunner: Task 'attempt_201012121200_0018_r_000000_3' done.
> 
> ________________________________
> From: Thejas M Nair [mailto:tejas@yahoo-inc.com]
> Sent: Monday, December 13, 2010 7:32 PM
> To: user@pig.apache.org; Deepak Choudhary N (WT01 - Product Engineering
> Services)
> Subject: Re: ERROR 6015: During execution, encountered a Hadoop error | ERROR
> 1066: Unable to open iterator for alias grouped_records
> 
> From the JobTracker web UI, you should be able to see the MR job run by this
> Pig query. If you follow the links, you should be able to find the reduce task
> logs.
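> 
> (Assuming default 0.20.x settings: the JobTracker web UI is usually at
> http://<jobtracker-host>:50030/, and on each TaskTracker node the per-attempt
> task logs typically end up under something like
> 
>     ${HADOOP_LOG_DIR}/userlogs/<attempt-id>/{stdout,stderr,syslog}
> 
> The exact location depends on how hadoop.log.dir is set, so treat that path as
> a guess.)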
> 
> Thanks,
> Thejas
> 
> 
> On 12/13/10 5:11 AM, "deepak.n85@wipro.com" <deepak.n85@wipro.com> wrote:
> 
> My Script:
> 
> records = LOAD 'hdfs://hadoop.namenode:54310/data' USING PigStorage(',')
> AS (Year:int, Month:int, DayofMonth:int, DayofWeek:int);
> grouped_records = GROUP records BY Month;
> DUMP grouped_records;
> 
> Hadoop Version: 0.20.2
> Pig Version: 0.7.0
> 
> I couldn't find the reduce task logs. Where are they generated?
> 
> Surprisingly, Pig jobs do not seem to generate any Hadoop (namenode, datanode,
> tasktracker, etc.) logs.
> 
> 
> -----Original Message-----
> From: Dmitriy Ryaboy [mailto:dvryaboy@gmail.com]
> Sent: Monday, December 13, 2010 4:51 PM
> To: user@pig.apache.org
> Subject: Re: ERROR 6015: During execution, encountered a Hadoop error | ERROR
> 1066: Unable to open iterator for alias grouped_records
> 
> Can you send along your script and the reduce task logs?
> What version of Pig and Hadoop are you using?
> 
> Thanks,
> -Dmitriy
> 
> On Sun, Dec 12, 2010 at 10:36 PM, <deepak.n85@wipro.com> wrote:
> 
>> Hi,
>> 
>> I loaded a CSV file with about 10 fields using PigStorage and tried to
>> do a GROUP BY on one of the fields. The MapReduce job gets created,
>> and the mappers finish execution.
>> 
>> But after that, the job fails with the following error messages:
>> 
>> 2010-12-13 10:31:08,902 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 100% complete
>> 2010-12-13 10:31:08,902 [main] ERROR org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 1 map reduce job(s) failed!
>> 2010-12-13 10:31:08,911 [main] ERROR org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Failed to produce result in: "hdfs://hadoop.namenode:54310/tmp/temp2041073534/tmp-2060206542"
>> 2010-12-13 10:31:08,911 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Failed!
>> 2010-12-13 10:31:08,961 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 6015: During execution, encountered a Hadoop error.
>> 2010-12-13 10:31:08,961 [main] ERROR org.apache.pig.tools.grunt.Grunt -
>> org.apache.pig.impl.logicalLayer.FrontendException: ERROR 1066: Unable to open iterator for alias grouped_records
>>         at org.apache.pig.PigServer.openIterator(PigServer.java:521)
>>         at org.apache.pig.tools.grunt.GruntParser.processDump(GruntParser.java:544)
>>         at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:241)
>>         at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:162)
>>         at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:138)
>>         at org.apache.pig.tools.grunt.Grunt.run(Grunt.java:75)
>>         at org.apache.pig.Main.main(Main.java:357)
>> Caused by: org.apache.pig.backend.executionengine.ExecException: ERROR 6015: During execution, encountered a Hadoop error.
>>         at java.util.concurrent.ConcurrentHashMap.get(ConcurrentHashMap.java:768)
>>         at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$GetMapEventsThread.getMapCompletionEvents(ReduceTask.java:2683)
>> Caused by: java.lang.NullPointerException
>>         ... 2 more
>> 
>> The filter statements (map-only jobs) work properly, so it's not that
>> nothing runs at all.
>> 
>> What's the issue here?
> 


