From: Fourie Joubert <fourie.joubert@up.ac.za>
Date: Thu, 10 May 2012 08:52:39 +0200
To: common-user@hadoop.apache.org, harsh@cloudera.com
Subject: Re: DataNodeRegistration problem
Message-ID: <4FAB65B7.3000907@up.ac.za>
References: <4FAA89E8.7020301@up.ac.za>

Hi

Yes - that was indeed the problem...
I cleaned up the Java installs on all the nodes, did a clean reinstall of
Sun jdk1.6.0_23, and the problem is gone.

Many thanks and regards!

Fourie

On 05/09/2012 05:47 PM, Harsh J wrote:
> You may be hitting https://issues.apache.org/jira/browse/HDFS-1115?
> Have you ensured Sun JDK is the only JDK available on the machines and
> that your services aren't using OpenJDK accidentally?
>
> On Wed, May 9, 2012 at 8:44 PM, Fourie Joubert wrote:
>> Hi
>>
>> I am running Hadoop-1.0.1 with Sun jdk1.6.0_23.
>>
>> My system is a head node with 14 compute blades.
>>
>> When trying to start Hadoop, I get the following message in the logs for
>> each data node:
>>
>> 2012-05-09 16:53:35,548 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(137.215.75.201:50010, storageID=DS-2067460883-137.215.75.201-50010-1336575105195, infoPort=50075, ipcPort=50020):DataXceiver
>> java.net.SocketException: Protocol not available
>> ...
>> ...
>>
>> The full log is shown below.
>>
>> I can't seem to get past this problem - any help or advice would be
>> sincerely appreciated.
>>
>> Kindest regards!
>>
>> Fourie
>>
>> 2012-05-09 16:53:31,800 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
>> /************************************************************
>> STARTUP_MSG: Starting DataNode
>> STARTUP_MSG:   host = wonko1/137.215.75.201
>> STARTUP_MSG:   args = []
>> STARTUP_MSG:   version = 1.0.1
>> STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1243785; compiled by 'hortonfo' on Tue Feb 14 08:15:38 UTC 2012
>> ************************************************************/
>> 2012-05-09 16:53:31,934 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
>> 2012-05-09 16:53:31,945 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
>> 2012-05-09 16:53:31,946 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
>> 2012-05-09 16:53:31,946 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
>> 2012-05-09 16:53:32,022 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
>> 2012-05-09 16:53:32,232 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Registered FSDatasetStatusMBean
>> 2012-05-09 16:53:32,242 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened info server at 50010
>> 2012-05-09 16:53:32,244 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is 1048576 bytes/s
>> 2012-05-09 16:53:32,291 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
>> 2012-05-09 16:53:32,347 INFO org.apache.hadoop.http.HttpServer: Added global filtersafety (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
>> 2012-05-09 16:53:32,359 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: dfs.webhdfs.enabled = false
>> 2012-05-09 16:53:32,359 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50075
>> 2012-05-09 16:53:32,359 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50075 webServer.getConnectors()[0].getLocalPort() returned 50075
>> 2012-05-09 16:53:32,360 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50075
>> 2012-05-09 16:53:32,360 INFO org.mortbay.log: jetty-6.1.26
>> 2012-05-09 16:53:32,590 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50075
>> 2012-05-09 16:53:32,594 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
>> 2012-05-09 16:53:32,595 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source DataNode registered.
>> 2012-05-09 16:53:32,614 INFO org.apache.hadoop.ipc.Server: Starting SocketReader
>> 2012-05-09 16:53:32,616 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcDetailedActivityForPort50020 registered.
>> 2012-05-09 16:53:32,616 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcActivityForPort50020 registered.
>> 2012-05-09 16:53:32,618 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: dnRegistration = DatanodeRegistration(wonko1.bi.up.ac.za:50010, storageID=DS-2067460883-137.215.75.201-50010-1336575105195, infoPort=50075, ipcPort=50020)
>> 2012-05-09 16:53:32,620 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting asynchronous block report scan
>> 2012-05-09 16:53:32,620 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(137.215.75.201:50010, storageID=DS-2067460883-137.215.75.201-50010-1336575105195, infoPort=50075, ipcPort=50020)In DataNode.run, data = FSDataset{dirpath='/hadooplocal/datadir/current'}
>> 2012-05-09 16:53:32,620 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Finished asynchronous block report scan in 0ms
>> 2012-05-09 16:53:32,621 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
>> 2012-05-09 16:53:32,621 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting
>> 2012-05-09 16:53:32,623 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 50020: starting
>> 2012-05-09 16:53:32,623 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 50020: starting
>> 2012-05-09 16:53:32,623 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: using BLOCKREPORT_INTERVAL of 3600000msec Initial delay: 0msec
>> 2012-05-09 16:53:32,623 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 50020: starting
>> 2012-05-09 16:53:32,626 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Reconciled asynchronous block report against current state in 0 ms
>> 2012-05-09 16:53:32,628 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 0 blocks took 0 msec to generate and 2 msecs for RPC and NN processing
>> 2012-05-09 16:53:32,628 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting Periodic block scanner.
>> 2012-05-09 16:53:32,629 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Generated rough (lockless) block report in 0 ms
>> 2012-05-09 16:53:32,629 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Reconciled asynchronous block report against current state in 0 ms
>> 2012-05-09 16:53:35,548 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(137.215.75.201:50010, storageID=DS-2067460883-137.215.75.201-50010-1336575105195, infoPort=50075, ipcPort=50020):DataXceiver
>> java.net.SocketException: Protocol not available
>>         at sun.nio.ch.Net.getIntOption0(Native Method)
>>         at sun.nio.ch.Net.getIntOption(Net.java:181)
>>         at sun.nio.ch.SocketChannelImpl$1.getInt(SocketChannelImpl.java:419)
>>         at sun.nio.ch.SocketOptsImpl.getInt(SocketOptsImpl.java:60)
>>         at sun.nio.ch.SocketOptsImpl.receiveBufferSize(SocketOptsImpl.java:142)
>>         at sun.nio.ch.SocketOptsImpl$IP$TCP.receiveBufferSize(SocketOptsImpl.java:286)
>>         at sun.nio.ch.OptionAdaptor.getReceiveBufferSize(OptionAdaptor.java:148)
>>         at sun.nio.ch.SocketAdaptor.getReceiveBufferSize(SocketAdaptor.java:336)
>>         at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:238)
>>         at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:107)
>>         at java.lang.Thread.run(Thread.java:636)
>> 2012-05-09 16:54:36,377 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(137.215.75.201:50010, storageID=DS-2067460883-137.215.75.201-50010-1336575105195,
>> infoPort=50075, ipcPort=50020):DataXceiver
>> java.net.SocketException: Protocol not available
>>         at sun.nio.ch.Net.getIntOption0(Native Method)
>>         at sun.nio.ch.Net.getIntOption(Net.java:181)
>>         at sun.nio.ch.SocketChannelImpl$1.getInt(SocketChannelImpl.java:419)
>>         at sun.nio.ch.SocketOptsImpl.getInt(SocketOptsImpl.java:60)
>>         at sun.nio.ch.SocketOptsImpl.receiveBufferSize(SocketOptsImpl.java:142)
>>         at sun.nio.ch.SocketOptsImpl$IP$TCP.receiveBufferSize(SocketOptsImpl.java:286)
>>         at sun.nio.ch.OptionAdaptor.getReceiveBufferSize(OptionAdaptor.java:148)
>>         at sun.nio.ch.SocketAdaptor.getReceiveBufferSize(SocketAdaptor.java:336)
>>         at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:238)
>>         at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:107)
>>         at java.lang.Thread.run(Thread.java:636)
>> 2012-05-09 16:54:46,427 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(137.215.75.201:50010, storageID=DS-2067460883-137.215.75.201-50010-1336575105195, infoPort=50075, ipcPort=50020):DataXceiver
>> java.net.SocketException: Protocol not available
>>         at sun.nio.ch.Net.getIntOption0(Native Method)
>>
>> --
>> --------------
>> Prof Fourie Joubert
>> Bioinformatics and Computational Biology Unit
>> Department of Biochemistry
>> University of Pretoria
>> fourie.joubert@up.ac.za
>> http://www.bi.up.ac.za
>> Tel. +27-12-420-5825
>> Fax. +27-12-420-5800
>>
>> -------------------------------------------------------------------------
>> This message and attachments are subject to a disclaimer. Please refer
>> to www.it.up.ac.za/documentation/governance/disclaimer/ for full details.

--
--------------
Prof Fourie Joubert
Bioinformatics and Computational Biology Unit
Department of Biochemistry
University of Pretoria
fourie.joubert@up.ac.za
http://www.bi.up.ac.za
Tel. +27-12-420-5825
Fax. +27-12-420-5800

-------------------------------------------------------------------------
This message and attachments are subject to a disclaimer. Please refer
to www.it.up.ac.za/documentation/governance/disclaimer/ for full details.
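[Archive note: the resolution above - making sure Sun JDK, not OpenJDK, is what the daemons actually run (HDFS-1115) - can be spot-checked per node. The sketch below is not from the thread; the helper name `jvm_flavor` and the banner-substring heuristic are illustrative assumptions, and the final line simply inspects whatever `java` is first on the PATH.]

```shell
#!/bin/sh
# Classify the active JVM by the first line of its version banner.
# OpenJDK builds print "OpenJDK" in the banner; Sun/Oracle builds do not.
jvm_flavor() {
    case "$1" in
        *OpenJDK*) echo "openjdk" ;;  # runtime that triggers the HDFS-1115 symptom
        *)         echo "other"   ;;  # Sun JDK (or anything else)
    esac
}

# Check the JVM this node resolves on its PATH:
jvm_flavor "$(java -version 2>&1 | head -n 1)"
```

Running this via `ssh` on each blade (or comparing `readlink -f "$(command -v java)"` across nodes) would reveal a stray OpenJDK install before the datanodes start throwing `Protocol not available`.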