hadoop-hdfs-user mailing list archives

From Jason Venner <jason.had...@gmail.com>
Subject Re: hdfs error when starting the jobtracker
Date Sat, 19 Dec 2009 20:46:14 GMT
Common reasons for the JobTracker not starting are:
1) namenode not running
2) namenode still in safe mode
2.1) no, or insufficient, datanodes running
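The checklist above can be sketched as a quick diagnostic script. This is only a sketch: it assumes it is run on a cluster node with Hadoop's bin/ directory on the PATH and with `jps` available; the commands themselves (`hadoop dfsadmin -safemode get`, `hadoop dfsadmin -report`) are standard in this Hadoop release.

```shell
#!/bin/sh
# Diagnostic sketch for the three causes above (assumes a cluster node
# with Hadoop's bin/ on PATH; skips steps whose tools are missing).

# 1) Is the NameNode process up at all?
if jps 2>/dev/null | grep -q NameNode; then
    echo "NameNode process found"
else
    echo "NameNode process NOT found (or jps unavailable)"
fi

# 2) and 3) need the hadoop CLI; skip quietly if it is not installed here.
if command -v hadoop >/dev/null 2>&1; then
    # The JobTracker cannot write jobtracker.info while the NN is in safe mode.
    hadoop dfsadmin -safemode get

    # "bad datanode[0] nodes == null" usually means no live DataNodes.
    hadoop dfsadmin -report | grep -i "datanodes available"
fi
```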

On Thu, Dec 17, 2009 at 7:36 PM, Iman E <hadoop_ami@yahoo.com> wrote:

> Hi,
>   I do have this basic question about hadoop configuration. Whenever I try
> to start the jobtracker it will remain in "initializing" mode forever, and
> when I checked the log file, I found the following errors:
>
> several lines like these for different slaves in my cluster:
>
> 2009-12-17 17:47:43,717 INFO org.apache.hadoop.hdfs.DFSClient: Exception
> in createBlockOutputStream java.net.SocketTimeoutException:
> 66000 millis timeout while waiting for channel to be ready for connect. ch :
> java.nio.channels.SocketChannel[connection-pending
> remote=/XXX.XXX.XXX.XXX:50010]
> 2009-12-17 17:47:43,717 INFO org.apache.hadoop.hdfs.DFSClient: Abandoning
> block blk_7740448897934265604_1010
> 2009-12-17 17:47:43,720 INFO org.apache.hadoop.hdfs.DFSClient: Waiting to
> find target node: XXX.XXX.XXX.XXX:50010
>
> then
>
> 2009-12-17 17:47:49,727 WARN org.apache.hadoop.hdfs.DFSClient:
> DataStreamer Exception: java.io.IOException: Unable to create new block.
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2812)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2076)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2262)
> 2009-12-17 17:47:49,728 WARN org.apache.hadoop.hdfs.DFSClient: Error
> Recovery for block blk_7740448897934265604_1010 bad datanode[0] nodes == null
> 2009-12-17 17:47:49,728 WARN org.apache.hadoop.hdfs.DFSClient: Could not
> get block locations. Source file "${mapred.system.dir}/mapred/system/jobtracker.info" - Aborting...
> 2009-12-17 17:47:49,728 WARN org.apache.hadoop.mapred.JobTracker: Writing
> to file ${fs.default.name}/${mapred.system.dir}/mapred/system/jobtracker.info failed!
> 2009-12-17 17:47:49,728 WARN org.apache.hadoop.mapred.JobTracker:
> FileSystem is not ready yet!
> 2009-12-17 17:47:49,749 WARN org.apache.hadoop.mapred.JobTracker: Failed to
> initialize recovery manager.
> java.net.SocketTimeoutException: 66000 millis timeout while waiting for
> channel to be ready for connect. ch :
> java.nio.channels.SocketChannel[connection-pending
> remote=/XXX.XXX.XXX.XXX:50010]
>         at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:213)
>         at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:404)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.createBlockOutputStream(DFSClient.java:2837)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2793)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2076)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2262)
> 2009-12-17 17:47:59,757 WARN org.apache.hadoop.mapred.JobTracker:
> Retrying...
>
> then it will start all over again.
>
> I am not sure what the reason for this error is. I tried leaving
> mapred.system.dir at its default value, and also overriding it in
> mapred-site.xml with both local and shared directories, but to no avail. In all
> cases this error shows up in the log file: Writing to file
> ${fs.default.name}/${mapred.system.dir}/mapred/system/jobtracker.info failed!
> Is it true that Hadoop appends these values together? What should I do to
> avoid this? Does anyone know what I am doing wrong or what could be causing
> these errors?
>
> Thanks
>
>
>
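On the question of Hadoop appending the values: yes, mapred.system.dir is a path inside the default filesystem, so the JobTracker resolves it against fs.default.name, which is why the log shows the two joined together. A minimal sketch of the relevant mapred-site.xml entry, with an example path only:

```xml
<!-- mapred-site.xml: example value only. mapred.system.dir names a
     directory inside the default filesystem (fs.default.name), so it
     should be an absolute HDFS path, not a local filesystem path. -->
<property>
  <name>mapred.system.dir</name>
  <value>/mapred/system</value>
</property>
```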


-- 
Pro Hadoop, a book to guide you from beginner to hadoop mastery,
http://www.amazon.com/dp/1430219424?tag=jewlerymall
www.prohadoopbook.com a community for Hadoop Professionals
