hadoop-general mailing list archives

From Wojciech Langiewicz <wlangiew...@gmail.com>
Subject java.io.IOException: Split metadata size exceeded 10000000
Date Tue, 15 Mar 2011 09:55:26 GMT
Hello,
I'm having this problem when running MapReduce jobs over about 10 TB of data
(smaller jobs run fine):
2011-03-15 07:48:22,031 ERROR org.apache.hadoop.mapred.JobTracker: Job initialization failed:
java.io.IOException: Split metadata size exceeded 10000000. Aborting job job_201103141436_0058
        at org.apache.hadoop.mapreduce.split.SplitMetaInfoReader.readSplitMetaInfo(SplitMetaInfoReader.java:48)
        at org.apache.hadoop.mapred.JobInProgress.createSplits(JobInProgress.java:732)
        at org.apache.hadoop.mapred.JobInProgress.initTasks(JobInProgress.java:633)
        at org.apache.hadoop.mapred.JobTracker.initJob(JobTracker.java:3965)
        at org.apache.hadoop.mapred.EagerTaskInitializationListener$InitJob.run(EagerTaskInitializationListener.java:79)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:619)

2011-03-15 07:48:22,031 INFO org.apache.hadoop.mapred.JobTracker: Failing job job_201103141436_0058

What settings should I change to run this job?
I'm using CDH3b3.
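
From the stack trace it looks like the limit is enforced in SplitMetaInfoReader, 
so my guess (untested) is that the relevant knob is 
mapreduce.jobtracker.split.metainfo.maxsize, set in mapred-site.xml on the 
JobTracker host and picked up after a JobTracker restart, something like:

  <!-- mapred-site.xml on the JobTracker host; property name and behaviour
       are my guess, not verified against CDH3b3 -->
  <property>
    <name>mapreduce.jobtracker.split.metainfo.maxsize</name>
    <!-- raise the 10000000 default, or use -1 to disable the check entirely -->
    <value>-1</value>
  </property>

Is that the right property for CDH3b3, or should I instead be reducing the 
number of input splits for a job of this size?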
Thanks in advance for any answers.

--
Wojciech Langiewicz
