hadoop-common-dev mailing list archives

From "Hudson (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-3559) test-libhdfs fails on linux
Date Tue, 24 Jun 2008 12:26:45 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-3559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12607597#action_12607597 ]

Hudson commented on HADOOP-3559:
--------------------------------

Integrated in Hadoop-trunk #528 (See [http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/528/])

> test-libhdfs fails on linux
> ---------------------------
>
>                 Key: HADOOP-3559
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3559
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: libhdfs
>    Affects Versions: 0.18.0
>         Environment: linux
>            Reporter: Mukund Madhugiri
>            Assignee: Lohit Vijayarenu
>            Priority: Blocker
>             Fix For: 0.18.0
>
>         Attachments: HADOOP-3559-1.patch
>
>
> test-libhdfs fails on linux
> test-libhdfs:
>     [mkdir] Created dir: /workspace/trunk/build/test/libhdfs
>     [mkdir] Created dir: /workspace/trunk/build/test/libhdfs/logs
>     [mkdir] Created dir: /workspace/trunk/build/test/libhdfs/dfs/name
>      [exec] ./tests/test-libhdfs.sh	
>      [exec] 08/06/13 03:25:15 INFO dfs.NameNode: STARTUP_MSG: 
>      [exec] /************************************************************
>      [exec] STARTUP_MSG: Starting NameNode
>      [exec] STARTUP_MSG:   host = hudson/NNN.NNN.NNN.NNN
>      [exec] STARTUP_MSG:   args = [-format]
>      [exec] STARTUP_MSG:   version = 0.18.0
>      [exec] STARTUP_MSG:   build = http://svn.apache.org/repos/asf/hadoop/core/trunk -r 667040; compiled by 'hudsonqa' on Fri Jun 13 03:24:28 UTC 2008
>      [exec] ************************************************************/
>      [exec] Re-format filesystem in ../../../build/test/libhdfs/dfs/name ? (Y or N) 08/06/13 03:25:15 INFO fs.FSNamesystem: fsOwner=hudsonqa,users
>      [exec] 08/06/13 03:25:15 INFO fs.FSNamesystem: supergroup=supergroup
>      [exec] 08/06/13 03:25:15 INFO fs.FSNamesystem: isPermissionEnabled=true
>      [exec] 08/06/13 03:25:15 INFO dfs.FSNamesystemMetrics: Initializing FSNamesystemMeterics using context object:org.apache.hadoop.metrics.spi.NullContext
>      [exec] 08/06/13 03:25:16 INFO fs.FSNamesystem: Registered FSNamesystemStatusMBean
>      [exec] 08/06/13 03:25:16 INFO dfs.Storage: Image file of size 82 saved in 0 seconds.
>      [exec] 08/06/13 03:25:16 INFO dfs.Storage: Storage directory ../../../build/test/libhdfs/dfs/name has been successfully formatted.
>      [exec] 08/06/13 03:25:16 INFO dfs.NameNode: SHUTDOWN_MSG: 
>      [exec] /************************************************************
>      [exec] SHUTDOWN_MSG: Shutting down NameNode at hudson/NNN.NNN.NNN.NNN
>      [exec] ************************************************************/
>      [exec] starting namenode, logging to /workspace/trunk/build/test/libhdfs/logs/hadoop-hudsonqa-namenode-hudsonqa.out
>      [exec] starting datanode, logging to /workspace/trunk/build/test/libhdfs/logs/hadoop-hudsonqa-datanode-hudsonqa.out
>      [exec] CLASSPATH=/workspace/trunk/src/c++/libhdfs/tests/conf:/workspace/trunk/conf:/workspace/trunk/src/c++/libhdfs/tests/conf:/workspace/trunk/conf:/home/hudsonqa/tools/java/jdk1.5.0_11-32bit/lib/tools.jar:/workspace/trunk/build/classes:/workspace/trunk/build:/workspace/trunk/build/test/classes:/workspace/trunk/lib/commons-cli-2.0-SNAPSHOT.jar:/workspace/trunk/lib/commons-codec-1.3.jar:/workspace/trunk/lib/commons-httpclient-3.0.1.jar:/workspace/trunk/lib/commons-logging-1.0.4.jar:/workspace/trunk/lib/commons-logging-api-1.0.4.jar:/workspace/trunk/lib/commons-net-1.4.1.jar:/workspace/trunk/lib/jets3t-0.6.0.jar:/workspace/trunk/lib/jetty-5.1.4.jar:/workspace/trunk/lib/junit-3.8.1.jar:/workspace/trunk/lib/kfs-0.1.3.jar:/workspace/trunk/lib/log4j-1.2.13.jar:/workspace/trunk/lib/oro-2.0.8.jar:/workspace/trunk/lib/servlet-api.jar:/workspace/trunk/lib/slf4j-api-1.4.3.jar:/workspace/trunk/lib/slf4j-log4j12-1.4.3.jar:/workspace/trunk/lib/xmlenc-0.52.jar:/workspace/trunk/lib/jsp-2.0/*.jar LD_PRELOAD=/workspace/trunk/build/libhdfs/libhdfs.so /workspace/trunk/build/libhdfs/hdfs_test
>      [exec] 08/06/13 03:25:22 WARN fs.FileSystem: "localhost:23000" is a deprecated filesystem name. Use "hdfs://localhost:23000/" instead.
>      [exec] 08/06/13 03:25:24 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:23000. Already tried 0 time(s).
>      [exec] 08/06/13 03:25:25 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:23000. Already tried 1 time(s).
>      [exec] 08/06/13 03:25:26 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:23000. Already tried 2 time(s).
>      [exec] 08/06/13 03:25:27 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:23000. Already tried 3 time(s).
>      [exec] 08/06/13 03:25:28 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:23000. Already tried 4 time(s).
>      [exec] 08/06/13 03:25:29 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:23000. Already tried 5 time(s).
>      [exec] 08/06/13 03:25:30 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:23000. Already tried 6 time(s).
>      [exec] 08/06/13 03:25:31 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:23000. Already tried 7 time(s).
>      [exec] 08/06/13 03:25:32 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:23000. Already tried 8 time(s).
>      [exec] 08/06/13 03:25:33 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:23000. Already tried 9 time(s).
>      [exec] Exception in thread "main" java.io.IOException: Call failed on local exception
>      [exec] 	at org.apache.hadoop.ipc.Client.call(Client.java:710)
>      [exec] 	at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:216)
>      [exec] 	at org.apache.hadoop.dfs.$Proxy0.getProtocolVersion(Unknown Source)
>      [exec] 	at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:319)
>      [exec] 	at org.apache.hadoop.dfs.DFSClient.createRPCNamenode(DFSClient.java:103)
>      [exec] 	at org.apache.hadoop.dfs.DFSClient.<init>(DFSClient.java:173)
>      [exec] 	at org.apache.hadoop.dfs.DistributedFileSystem.initialize(DistributedFileSystem.java:67)
>      [exec] 	at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1335)
>      [exec] 	at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:56)
>      [exec] 	at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1346)
>      [exec] 	at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:209)
>      [exec] 	at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:114)
>      [exec] Caused by: java.net.ConnectException: Connection refused
>      [exec] 	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>      [exec] 	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:527)
>      [exec] 	at sun.nio.ch.SocketAdaptor.connect(SocketAdaptor.java:100)
>      [exec] 	at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:300)
>      [exec] 	at org.apache.hadoop.ipc.Client$Connection.access$1700(Client.java:177)
>      [exec] 	at org.apache.hadoop.ipc.Client.getConnection(Client.java:781)
>      [exec] 	at org.apache.hadoop.ipc.Client.call(Client.java:696)
>      [exec] 	... 11 more
>      [exec] Call to org.apache.hadoop.fs.FileSystem::get failed!
>      [exec] Oops! Failed to connect to hdfs!
>      [exec] no datanode to stop
>      [exec] no namenode to stop
>      [exec] exiting with 255
>      [exec] make: *** [test] Error 255
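
What the log shows is a namenode that formats its storage directory successfully but never comes back up, so every IPC attempt against localhost:23000 is refused and hdfs_test aborts with "Oops! Failed to connect to hdfs!". For anyone reproducing the failure by hand, the sketch below is a minimal libhdfs connect probe in the spirit of that test. It is a hypothetical standalone program, not the actual hdfs_test.c, and it assumes libhdfs has been built under build/libhdfs and that CLASSPATH and the JVM library path are set the same way the test script sets them in the log above.

    /* connect_probe.c - minimal libhdfs connect check (hypothetical example,
     * not the hdfs_test.c shipped with Hadoop). */
    #include <stdio.h>
    #include "hdfs.h"

    int main(void) {
        /* Same namenode address the failing test uses; in the log above the
         * connection is refused because no namenode is listening there. */
        hdfsFS fs = hdfsConnect("localhost", 23000);
        if (!fs) {
            /* hdfsConnect returns NULL when the underlying FileSystem.get()
             * call throws, which is the failure path seen in the log. */
            fprintf(stderr, "Failed to connect to hdfs on localhost:23000\n");
            return 255;
        }
        fprintf(stdout, "Connected.\n");
        hdfsDisconnect(fs);
        return 0;
    }

Compile it against the libhdfs header and library (roughly gcc connect_probe.c -I src/c++/libhdfs -L build/libhdfs -lhdfs, paths assumed from the workspace layout above, with libjvm on the library path) and run it with the same CLASSPATH as the test. The separate WARN about "localhost:23000" being a deprecated filesystem name only indicates that the test configuration still uses the bare host:port form rather than the recommended hdfs://localhost:23000/ URI; it is cosmetic and not the cause of the refused connections.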

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

