hadoop-common-user mailing list archives

From Sanford Rockowitz <rockow...@minsoft.com>
Subject exceptions copying files into HDFS
Date Sun, 12 Dec 2010 06:41:32 GMT
Folks,

I'm a Hadoop newbie, and I hope this is an appropriate place to post 
this question.

I'm trying to work through the initial examples. When I try to copy
files into HDFS, Hadoop throws exceptions. I imagine it's something in
my configuration, but I'm at a loss to figure out what.
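
In case the setup matters, the steps leading up to this were essentially
the single-node quickstart; this is reconstructed from memory, so treat
the exact invocations as approximate:

   bin/hadoop namenode -format   # format a fresh namenode
   bin/start-dfs.sh              # start the namenode and datanode
   bin/hadoop dfsadmin -report   # confirm the datanode registered
   hadoop fs -put conf input     # the step that fails (console below)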

I'm running on openSUSE 11.3, using Oracle Java 1.6.0_23. The problem
occurs whether I use 32-bit or 64-bit Java, and in both vanilla Apache
Hadoop 0.20.2 and Cloudera's 0.20.2+737.

Following are the console output, the datanode log file, and the 
relevant configuration files.

Thanks in advance for any pointers.

Sanford

=== CONSOLE ===

rock@ritter:~/programs/hadoop-0.20.2+737> hadoop fs -put conf input
10/12/11 21:04:41 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.io.EOFException
10/12/11 21:04:41 INFO hdfs.DFSClient: Abandoning block blk_1699203955671139323_1010
10/12/11 21:04:41 INFO hdfs.DFSClient: Excluding datanode 127.0.0.1:50010
10/12/11 21:04:41 WARN hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/rock/input/fair-scheduler.xml could only be replicated to 0 nodes, instead of 1
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1415)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:588)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:528)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1319)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1315)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1063)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1313)

        at org.apache.hadoop.ipc.Client.call(Client.java:1054)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
        at $Proxy0.addBlock(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
        at $Proxy0.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3166)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3036)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1900(DFSClient.java:2288)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2483)

10/12/11 21:04:41 WARN hdfs.DFSClient: Error Recovery for block blk_1699203955671139323_1010 bad datanode[0] nodes == null
10/12/11 21:04:41 WARN hdfs.DFSClient: Could not get block locations. Source file "/user/rock/input/fair-scheduler.xml" - Aborting...
put: java.io.IOException: File /user/rock/input/fair-scheduler.xml could only be replicated to 0 nodes, instead of 1
10/12/11 21:04:41 ERROR hdfs.DFSClient: Exception closing file /user/rock/input/fair-scheduler.xml : org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/rock/input/fair-scheduler.xml could only be replicated to 0 nodes, instead of 1
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1415)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:588)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:528)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1319)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1315)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1063)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1313)

org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/rock/input/fair-scheduler.xml could only be replicated to 0 nodes, instead of 1
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1415)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:588)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:528)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1319)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1315)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1063)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1313)

        at org.apache.hadoop.ipc.Client.call(Client.java:1054)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
        at $Proxy0.addBlock(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
        at $Proxy0.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3166)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3036)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1900(DFSClient.java:2288)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2483)
rock@ritter:~/programs/hadoop-0.20.2+737>


=== DATANODE LOG ===

And here are the corresponding contents of the datanode log:
2010-12-11 21:02:37,541 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG:   host = ritter.minsoft.com/127.0.0.2
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.20.2+737
STARTUP_MSG:   build = git://bruno-desktop/ on branch  -r 98c55c28258aa6f42250569bd7fa431ac657bdbd; compiled by 'bruno' on Mon Oct 11 09:37:19 PDT 2010
************************************************************/
2010-12-11 21:02:42,046 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Registered FSDatasetStatusMBean
2010-12-11 21:02:42,047 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened info server at 50010
2010-12-11 21:02:42,049 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is 1048576 bytes/s
2010-12-11 21:02:42,085 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2010-12-11 21:02:42,124 INFO org.apache.hadoop.http.HttpServer: Added global filtersafety (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
2010-12-11 21:02:42,130 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50075
2010-12-11 21:02:42,130 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50075 webServer.getConnectors()[0].getLocalPort() returned 50075
2010-12-11 21:02:42,130 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50075
2010-12-11 21:02:42,130 INFO org.mortbay.log: jetty-6.1.14
2010-12-11 21:02:47,772 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50075
2010-12-11 21:02:47,782 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=DataNode, sessionId=null
2010-12-11 21:02:47,797 INFO org.apache.hadoop.ipc.metrics.RpcMetrics: Initializing RPC Metrics with hostName=DataNode, port=50020
2010-12-11 21:02:47,798 INFO org.apache.hadoop.ipc.metrics.RpcDetailedMetrics: Initializing RPC Metrics with hostName=DataNode, port=50020
2010-12-11 21:02:47,800 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: dnRegistration = DatanodeRegistration(ritter.minsoft.com:50010, storageID=DS-1618752214-127.0.0.2-50010-1292091159510, infoPort=50075, ipcPort=50020)
2010-12-11 21:02:47,813 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-1618752214-127.0.0.2-50010-1292091159510, infoPort=50075, ipcPort=50020)In DataNode.run, data = FSDataset{dirpath='/tmp/hadoop-rock/dfs/data/current'}
2010-12-11 21:02:47,816 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2010-12-11 21:02:47,818 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: using BLOCKREPORT_INTERVAL of 3600000msec Initial delay: 0msec
2010-12-11 21:02:47,819 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting
2010-12-11 21:02:47,819 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 50020: starting
2010-12-11 21:02:47,819 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 50020: starting
2010-12-11 21:02:47,819 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 50020: starting
2010-12-11 21:02:47,827 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 0 blocks got processed in 6 msecs
2010-12-11 21:02:47,827 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting Periodic block scanner.
2010-12-11 21:04:41,371 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-1618752214-127.0.0.2-50010-1292091159510, infoPort=50075, ipcPort=50020):DataXceiver
java.net.SocketException: Operation not supported
        at sun.nio.ch.Net.getIntOption0(Native Method)
        at sun.nio.ch.Net.getIntOption(Net.java:181)
        at sun.nio.ch.SocketChannelImpl$1.getInt(SocketChannelImpl.java:419)
        at sun.nio.ch.SocketOptsImpl.getInt(SocketOptsImpl.java:60)
        at sun.nio.ch.SocketOptsImpl.receiveBufferSize(SocketOptsImpl.java:142)
        at sun.nio.ch.SocketOptsImpl$IP$TCP.receiveBufferSize(SocketOptsImpl.java:286)
        at sun.nio.ch.OptionAdaptor.getReceiveBufferSize(OptionAdaptor.java:148)
        at sun.nio.ch.SocketAdaptor.getReceiveBufferSize(SocketAdaptor.java:336)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:255)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:122)
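
(One thing I notice in the trace above: the failure is inside
getReceiveBufferSize, and the startup banner shows the host resolving to
127.0.0.2, an alias openSUSE adds to /etc/hosts by default. I don't know
whether either is the culprit, but in case it's an IPv6 or name-resolution
issue, a workaround I can try is forcing IPv4 in conf/hadoop-env.sh:

   export HADOOP_OPTS="-Djava.net.preferIPv4Stack=true"   # a guess, not a diagnosis

and/or mapping the hostname to 127.0.0.1 in /etc/hosts.)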

=== CONFIG FILES ===

rock@ritter:~/programs/hadoop-0.20.2+737/conf> cat core-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost</value>
    <!-- default port 8020 -->
  </property>
</configuration>
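
(The fs.default.name value above leans on the default port; per the comment
in the file, spelling it out explicitly would presumably look like this,
with :8020 being the only change:

    <value>hdfs://localhost:8020</value>

The logs suggest the client and namenode already agree on the address, so
this is just for completeness.)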


rock@ritter:~/programs/hadoop-0.20.2+737/conf> cat hdfs-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
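
(The datanode log shows block storage under /tmp/hadoop-rock/dfs/data, i.e.
everything sits on the hadoop.tmp.dir default. Probably unrelated to the
exceptions, but a sketch of pinning the data somewhere persistent, assuming
a writable /home/rock/hadoop-data (a made-up path):

  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/rock/hadoop-data</value>  <!-- hypothetical location -->
  </property>

In 0.20, dfs.name.dir and dfs.data.dir both derive from hadoop.tmp.dir, so
this one property moves both.)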

