hadoop-common-user mailing list archives

From: Edward Capriolo <edlinuxg...@gmail.com>
Subject: Re: Hadoop 0.20 map/reduce Failing for old API
Date: Fri, 27 Nov 2009 17:04:54 GMT
On Fri, Nov 27, 2009 at 10:46 AM, Arv Mistry <arv@kindsight.net> wrote:
> Thanks Rekha, I was missing the new library
> (hadoop-0.20.1-hdfs-core.jar) in my client.
>
> It seems to run a little further but I'm now getting a
> ClassCastException returned by the mapper. Note, this worked with the
> 0.19 load, so I'm assuming there's something additional in the
> configuration that I'm missing. Can anyone help?
>
> java.lang.ClassCastException: org.apache.hadoop.mapred.MultiFileSplit cannot be cast to org.apache.hadoop.mapred.FileSplit
>        at org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:54)
>        at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:338)
>        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:307)
>        at org.apache.hadoop.mapred.Child.main(Child.java:170)
>
> Cheers Arv
>
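For reference, TextInputFormat's getRecordReader casts each split to FileSplit, so the error above generally means the job configuration selected MultiFileInputFormat (whose splits are MultiFileSplits), for example via a stale config file or leftover jar. A minimal old-API driver sketch, with hypothetical class and path arguments, that pins the input format explicitly:

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.FileInputFormat;
    import org.apache.hadoop.mapred.FileOutputFormat;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.TextInputFormat;

    public class MyJobDriver {
        public static void main(String[] args) throws Exception {
            JobConf conf = new JobConf(MyJobDriver.class);
            conf.setJobName("my-job");
            // Force FileSplit-based input; guards against a stale
            // MultiFileInputFormat setting from an old configuration.
            conf.setInputFormat(TextInputFormat.class);
            conf.setOutputKeyClass(LongWritable.class);
            conf.setOutputValueClass(Text.class);
            FileInputFormat.setInputPaths(conf, new Path(args[0]));
            FileOutputFormat.setOutputPath(conf, new Path(args[1]));
            JobClient.runJob(conf);  // defaults to identity mapper/reducer
        }
    }
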
> -----Original Message-----
> From: Rekha Joshi [mailto:rekhajos@yahoo-inc.com]
> Sent: November 26, 2009 11:45 PM
> To: common-user@hadoop.apache.org
> Subject: Re: Hadoop 0.20 map/reduce Failing for old API
>
> The exit status of 1 usually indicates configuration issues or an
> incorrect command invocation in hadoop 0.20 (incorrect params), if not a
> JVM crash. In your logs there is no indication of a crash, but some
> paths/commands could be the cause. Can you check if your lib paths/data
> paths are correct?
>
> If it is a memory-intensive task, you may also try tuning values for
> mapred.child.java.opts / mapred.job.map.memory.mb. Thanks!
>
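Those two knobs, set programmatically on the old-API JobConf, look like the sketch below; the values are illustrative only, not recommendations:

    import org.apache.hadoop.mapred.JobConf;

    public class MemorySettings {
        static JobConf withMemorySettings(JobConf conf) {
            conf.set("mapred.child.java.opts", "-Xmx512m");  // heap for each task JVM
            conf.set("mapred.job.map.memory.mb", "1024");    // per-map memory limit,
                                                             // honored if the scheduler enforces it
            return conf;
        }
    }
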
> On 11/27/09 1:28 AM, "Arv Mistry" <arv@kindsight.net> wrote:
>
> Hi,
>
> We've recently upgraded to hadoop 0.20. Writing to HDFS seems to be
> working fine, but the map/reduce jobs are failing with the following
> exception. Note, we have not moved to the new map/reduce API yet. In the
> client that launches the job, the only change I have made is to now load
> the three files: core-site.xml, hdfs-site.xml and mapred-site.xml rather
> than the hadoop-site.xml. Any ideas?
>
> INFO   | jvm 1    | 2009/11/26 13:47:26 | 2009-11-26 13:47:26,328 INFO [FileInputFormat] Total input paths to process : 711
> INFO   | jvm 1    | 2009/11/26 13:47:28 | 2009-11-26 13:47:28,033 INFO [JobClient] Running job: job_200911241319_0003
> INFO   | jvm 1    | 2009/11/26 13:47:29 | 2009-11-26 13:47:29,036 INFO [JobClient]  map 0% reduce 0%
> INFO   | jvm 1    | 2009/11/26 13:47:36 | 2009-11-26 13:47:36,068 INFO [JobClient] Task Id : attempt_200911241319_0003_m_000003_0, Status : FAILED
> INFO   | jvm 1    | 2009/11/26 13:47:36 | java.io.IOException: Task process exit with nonzero status of 1.
> INFO   | jvm 1    | 2009/11/26 13:47:36 |       at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:418)
> INFO   | jvm 1    | 2009/11/26 13:47:36 |
> INFO   | jvm 1    | 2009/11/26 13:47:36 | 2009-11-26 13:47:36,094 WARN [JobClient] Error reading task outputhttp://dev-cs1.ca.kindsight.net:50060/tasklog?plaintext=true&taskid=attempt_200911241319_0003_m_000003_0&filter=stdout
> INFO   | jvm 1    | 2009/11/26 13:47:36 | 2009-11-26 13:47:36,096 WARN [JobClient] Error reading task outputhttp://dev-cs1.ca.kindsight.net:50060/tasklog?plaintext=true&taskid=attempt_200911241319_0003_m_000003_0&filter=stderr
> INFO   | jvm 1    | 2009/11/26 13:47:51 | 2009-11-26 13:47:51,162 INFO [JobClient] Task Id : attempt_200911241319_0003_m_000000_0, Status : FAILED
> INFO   | jvm 1    | 2009/11/26 13:47:51 | java.io.IOException: Task process exit with nonzero status of 1.
> INFO   | jvm 1    | 2009/11/26 13:47:51 |       at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:418)
> INFO   | jvm 1    | 2009/11/26 13:47:51 |
> INFO   | jvm 1    | 2009/11/26 13:47:51 | 2009-11-26 13:47:51,166 WARN [JobClient] Error reading task outputhttp://dev-cs1.ca.kindsight.net:50060/tasklog?plaintext=true&taskid=attempt_200911241319_0003_m_000000_0&filter=stdout
> INFO   | jvm 1    | 2009/11/26 13:47:51 | 2009-11-26 13:47:51,167 WARN [JobClient] Error reading task outputhttp://dev-cs1.ca.kindsight.net:50060/tasklog?plaintext=true&taskid=attempt_200911241319_0003_m_000000_0&filter=stderr
> INFO   | jvm 1    | 2009/11/26 13:47:52 | 2009-11-26 13:47:52,173 INFO [JobClient]  map 50% reduce 0%
> INFO   | jvm 1    | 2009/11/26 13:48:03 | 2009-11-26 13:48:03,219 INFO [JobClient] Task Id : attempt_200911241319_0003_m_000001_0, Status : FAILED
> INFO   | jvm 1    | 2009/11/26 13:48:03 | Map output lost, rescheduling: getMapOutput(attempt_200911241319_0003_m_000001_0,0) failed :
> INFO   | jvm 1    | 2009/11/26 13:48:03 | org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find taskTracker/jobcache/job_200911241319_0003/attempt_200911241319_0003_m_000001_0/output/file.out.index in any of the configured local directories
> INFO   | jvm 1    | 2009/11/26 13:48:03 |       at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathToRead(LocalDirAllocator.java:389)
> INFO   | jvm 1    | 2009/11/26 13:48:03 |       at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathToRead(LocalDirAllocator.java:138)
> INFO   | jvm 1    | 2009/11/26 13:48:03 |       at org.apache.hadoop.mapred.TaskTracker$MapOutputServlet.doGet(TaskTracker.java:2886)
> INFO   | jvm 1    | 2009/11/26 13:48:03 |       at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
> INFO   | jvm 1    | 2009/11/26 13:48:03 |       at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
> INFO   | jvm 1    | 2009/11/26 13:48:03 |       at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:502)
> INFO   | jvm 1    | 2009/11/26 13:48:03 |       at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:363)
> INFO   | jvm 1    | 2009/11/26 13:48:03 |       at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
> INFO   | jvm 1    | 2009/11/26 13:48:03 |       at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:181)
> INFO   | jvm 1    | 2009/11/26 13:48:03 |       at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
> INFO   | jvm 1    | 2009/11/26 13:48:03 |       at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:417)
> INFO   | jvm 1    | 2009/11/26 13:48:03 |       at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
> INFO   | jvm 1    | 2009/11/26 13:48:03 |       at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
> INFO   | jvm 1    | 2009/11/26 13:48:03 |       at org.mortbay.jetty.Server.handle(Server.java:324)
> INFO   | jvm 1    | 2009/11/26 13:48:03 |       at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:534)
> INFO   | jvm 1    | 2009/11/26 13:48:03 |       at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:864)
> INFO   | jvm 1    | 2009/11/26 13:48:03 |       at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:533)
> INFO   | jvm 1    | 2009/11/26 13:48:03 |       at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:207)
> INFO   | jvm 1    | 2009/11/26 13:48:03 |       at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:403)
> INFO   | jvm 1    | 2009/11/26 13:48:03 |       at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:409)
> INFO   | jvm 1    | 2009/11/26 13:48:03 |       at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:522)
> INFO   | jvm 1    | 2009/11/26 13:48:03 |
> INFO   | jvm 1    | 2009/11/26 13:48:06 | 2009-11-26 13:48:06,235 INFO [JobClient] Task Id : attempt_200911241319_0003_m_000000_1, Status : FAILED
> INFO   | jvm 1    | 2009/11/26 13:48:06 | java.io.IOException: Task process exit with nonzero status of 1.
> INFO   | jvm 1    | 2009/11/26 13:48:06 |       at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:418)
> INFO   | jvm 1    | 2009/11/26 13:48:06 |
> INFO   | jvm 1    | 2009/11/26 13:48:06 | java.io.IOException: Task process exit with nonzero status of 1.
> INFO   | jvm 1    | 2009/11/26 13:48:06 |       at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:418)
> INFO   | jvm 1    | 2009/11/26 13:48:06 |
> INFO   | jvm 1    | 2009/11/26 13:48:06 | 2009-11-26 13:48:06,239 WARN [JobClient] Error reading task outputhttp://dev-cs1.ca.kindsight.net:50060/tasklog?plaintext=true&taskid=attempt_200911241319_0003_m_000000_1&filter=stdout
> INFO   | jvm 1    | 2009/11/26 13:48:06 | 2009-11-26 13:48:06,245 WARN [JobClient] Error reading task outputhttp://dev-cs1.ca.kindsight.net:50060/tasklog?plaintext=true&taskid=attempt_200911241319_0003_m_000000_1&filter=stderr
> INFO   | jvm 1    | 2009/11/26 13:48:13 | 2009-11-26 13:48:13,302 INFO [JobClient]  map 0% reduce 0%
> INFO   | jvm 1    | 2009/11/26 13:48:16 | 2009-11-26 13:48:16,315 INFO [JobClient]  map 50% reduce 0%
> INFO   | jvm 1    | 2009/11/26 13:48:18 | 2009-11-26 13:48:18,324 INFO [JobClient] Task Id : attempt_200911241319_0003_m_000000_2, Status : FAILED
> INFO   | jvm 1    | 2009/11/26 13:48:18 | java.io.IOException: Task process exit with nonzero status of 1.
>
>
>
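On the 0.20 configuration loading Arv describes above: one common client-side pattern is to add the three files as resources explicitly. A sketch, where the conf/ paths are hypothetical and should match the actual install layout:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;

    public class ClientConf {
        static Configuration load() {
            Configuration conf = new Configuration();
            // The pre-0.20 hadoop-site.xml, split by concern in 0.20.
            conf.addResource(new Path("conf/core-site.xml"));
            conf.addResource(new Path("conf/hdfs-site.xml"));
            conf.addResource(new Path("conf/mapred-site.xml"));
            return conf;
        }
    }

If the files sit on the client's classpath under those default names, Configuration and JobConf will normally pick up core-site.xml and mapred-site.xml on their own.
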

Given that you just added one jar file and are now seeing a
ClassCastException, your upgrade may have problems. Did you try to
upgrade in the same hadoop directory and possibly leave files from the
old install in the same directories as the new ones?
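
One quick way to check for mixed jars is to print where the JVM actually loads a suspect class from, run with the same classpath as the failing client. A sketch:

    import org.apache.hadoop.mapred.TextInputFormat;

    public class WhichJar {
        public static void main(String[] args) {
            // getCodeSource() can be null for bootstrap classes, but not
            // for classes loaded from an application jar like this one.
            System.out.println(TextInputFormat.class
                .getProtectionDomain().getCodeSource().getLocation());
        }
    }

If the printed location is an old 0.19 jar or a directory of stale classes, the mixed-install theory holds.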
