hadoop-common-dev mailing list archives

From "Doug Cutting (JIRA)" <j...@apache.org>
Subject [jira] Updated: (HADOOP-459) libhdfs leaks memory when writing to files
Date Fri, 03 Nov 2006 19:24:18 GMT
     [ http://issues.apache.org/jira/browse/HADOOP-459?page=all ]

Doug Cutting updated HADOOP-459:
--------------------------------

    Status: Open  (was: Patch Available)

After applying this, 'ant clean test-libhdfs' fails with:

compile-libhdfs:
    [mkdir] Created dir: /home/cutting/src/hadoop/trunk/build/libhdfs
     [exec] gcc -g -Wall -O2 -fPIC -m32 -I/home/cutting/local/jdk/include -I/home/cutting/local/jdk/include/linux -c hdfs.c -o /home/cutting/src/hadoop/trunk/build/libhdfs/hdfs.o
     [exec] hdfs.c: In function 'hdfsGetWorkingDirectory':
     [exec] hdfs.c:1048: warning: pointer targets in initialization differ in signedness
     [exec] hdfs.c:1054: warning: pointer targets in passing argument 3 of '(*env)->ReleaseStringUTFChars' differ in signedness
     [exec] hdfs.c: In function 'hdfsGetHosts':
     [exec] hdfs.c:1212: warning: pointer targets in assignment differ in signedness
     [exec] hdfs.c:1215: warning: pointer targets in passing argument 3 of '(*env)->ReleaseStringUTFChars' differ in signedness
     [exec] hdfs.c: In function 'getFileInfo':
     [exec] hdfs.c:1412: warning: pointer targets in initialization differ in signedness
     [exec] hdfs.c:1416: warning: pointer targets in passing argument 3 of '(*env)->ReleaseStringUTFChars' differ in signedness
     [exec] make: *** No rule to make target `/home/cutting/src/hadoop/trunk/build/libhdfs/hdfsJniHelper.o', needed by `/home/cutting/src/hadoop/trunk/build/libhdfs/libhdfs.so.1'.  Stop.
     [exec] Result: 2
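
For what it's worth, the signedness warnings look like the usual JNI string-type mixup: GetStringUTFChars returns 'const char *', so storing the result in a signed or unsigned byte pointer, and then passing that back as argument 3 of ReleaseStringUTFChars, trips gcc's pointer-signedness check. A minimal sketch of the warning-free pattern (the names here are illustrative, not taken from the patch):

    #include <jni.h>

    /* Illustrative only: keep the buffer typed exactly as the JNI
     * string functions expect, so no signedness casts are needed. */
    static void useUTFString(JNIEnv *env, jstring jPath)
    {
        const char *path = (*env)->GetStringUTFChars(env, jPath, NULL);
        if (path != NULL) {
            /* ... use path ... */
            (*env)->ReleaseStringUTFChars(env, jPath, path);
        }
    }

The make failure itself looks like a different problem: the link rule for libhdfs.so.1 depends on hdfsJniHelper.o, but the Makefile apparently has no rule to build it, so the object list for the link step is out of sync with the compile rules.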

That said, before applying it, 'ant clean test-libhdfs' fails with:

test-libhdfs:
   [delete] Deleting directory /home/cutting/src/hadoop/trunk/build/libhdfs/tests/logs
    [mkdir] Created dir: /home/cutting/src/hadoop/trunk/build/libhdfs/tests/logs
     [exec] ./tests/test-libhdfs.sh
     [exec] 06/11/03 11:20:33 INFO conf.Configuration: parsing file:/home/cutting/src/hadoop/trunk/conf/hadoop-default.xml
     [exec] starting namenode, logging to /home/cutting/src/hadoop/trunk/build/libhdfs/tests/logs/hadoop-cutting-namenode-cutting-dc5100.out
     [exec] starting datanode, logging to /home/cutting/src/hadoop/trunk/build/libhdfs/tests/logs/hadoop-cutting-datanode-cutting-dc5100.out
     [exec] CLASSPATH=/home/cutting/src/hadoop/trunk/src/c++/libhdfs/tests/conf:/home/cutting/src/hadoop/trunk/conf:/home/cutting/src/hadoop/trunk/build/hadoop-0.7.3-dev.jar:/home/cutting/src/hadoop/trunk/lib/commons-cli-2.0-SNAPSHOT.jar:/home/cutting/src/hadoop/trunk/lib/commons-logging-1.0.4.jar:/home/cutting/src/hadoop/trunk/lib/commons-logging-api-1.0.4.jar:/home/cutting/src/hadoop/trunk/lib/jetty-5.1.4.jar:/home/cutting/src/hadoop/trunk/lib/junit-3.8.1.jar:/home/cutting/src/hadoop/trunk/lib/log4j-1.2.13.jar:/home/cutting/src/hadoop/trunk/lib/lucene-core-1.9.1.jar:/home/cutting/src/hadoop/trunk/lib/servlet-api.jar LD_PRELOAD=/home/cutting/src/hadoop/trunk/build/libhdfs/libhdfs.so /home/cutting/src/hadoop/trunk/build/libhdfs/hdfs_test
     [exec] 06/11/03 11:20:40 INFO conf.Configuration: parsing file:/home/cutting/src/hadoop/trunk/conf/hadoop-default.xml
     [exec] 06/11/03 11:20:40 INFO ipc.Client: org.apache.hadoop.io.ObjectWritableConnection culler maxidletime= 1000ms
     [exec] 06/11/03 11:20:40 INFO ipc.Client: org.apache.hadoop.io.ObjectWritable Connection Culler: starting
     [exec] 06/11/03 11:20:40 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:23000. Already tried 1 time(s).
     [exec] 06/11/03 11:20:41 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:23000. Already tried 2 time(s).
     [exec] 06/11/03 11:20:42 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:23000. Already tried 3 time(s).
     [exec] 06/11/03 11:20:43 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:23000. Already tried 4 time(s).
     [exec] 06/11/03 11:20:44 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:23000. Already tried 5 time(s).
     [exec] 06/11/03 11:20:45 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:23000. Already tried 6 time(s).
     [exec] 06/11/03 11:20:46 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:23000. Already tried 7 time(s).
     [exec] 06/11/03 11:20:47 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:23000. Already tried 8 time(s).
     [exec] 06/11/03 11:20:48 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:23000. Already tried 9 time(s).
     [exec] 06/11/03 11:20:49 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:23000. Already tried 10 time(s).
     [exec] Exception in thread "main" java.net.ConnectException: Connection refused
     [exec]     at java.net.PlainSocketImpl.socketConnect(Native Method)
     [exec]     at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333)
     [exec]     at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195)
     [exec]     at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182)
     [exec]     at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366)
     [exec]     at java.net.Socket.connect(Socket.java:516)
     [exec]     at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:138)
     [exec]     at org.apache.hadoop.ipc.Client.getConnection(Client.java:517)
     [exec]     at org.apache.hadoop.ipc.Client.call(Client.java:445)
     [exec]     at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:164)
     [exec]     at org.apache.hadoop.dfs.$Proxy0.getProtocolVersion(Unknown Source)
     [exec]     at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:248)
     [exec]     at org.apache.hadoop.dfs.DFSClient.<init>(DFSClient.java:105)
     [exec]     at org.apache.hadoop.dfs.DistributedFileSystem.<init>(DistributedFileSystem.java:49)
     [exec]     at org.apache.hadoop.fs.FileSystem.getNamed(FileSystem.java:104)
     [exec]     at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:89)
     [exec] Call to org.apache.hadoop.fs.FileSystem::get failed!
     [exec] 06/11/03 11:20:50 INFO conf.Configuration: parsing file:/home/cutting/src/hadoop/trunk/conf/hadoop-default.xml
     [exec] ./tests/test-libhdfs.sh: line 45: 26791 Aborted                 CLASSPATH=$HADOOP_CONF_DIR:$CLASSPATH LD_PRELOAD="$LIBHDFS_BUILD_DIR/libhdfs.so" $LIBHDFS_BUILD_DIR/$HDFS_TEST
     [exec] #
     [exec] # An unexpected error has been detected by HotSpot Virtual Machine:
     [exec] #
     [exec] #  SIGSEGV (0xb) at pc=0x402ff1c3, pid=26791, tid=1083387680
     [exec] #
     [exec] # Java VM: Java HotSpot(TM) Client VM (1.5.0_08-b03 mixed mode, sharing)
     [exec] # Problematic frame:
     [exec] # V  [libjvm.so+0x1a51c3]
     [exec] #
     [exec] # An error report file with more information is saved as hs_err_pid26791.log
     [exec] #
     [exec] # If you would like to submit a bug report, please visit:
     [exec] #   http://java.sun.com/webapps/bugreport/crash.jsp
     [exec] #
     [exec] stopping datanode
     [exec] no namenode to stop

BUILD SUCCESSFUL

Which is a separate issue...
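
If it helps narrow that crash down: the SIGSEGV comes right after "Call to org.apache.hadoop.fs.FileSystem::get failed!", which suggests hdfs_test may keep using the filesystem handle after the connect fails (the namenode never came up, hence the ten refused connections). A rough sketch of a guarded connect, assuming hdfsConnect returns NULL on failure and using the test's localhost:23000 address:

    #include <stdio.h>
    #include "hdfs.h"

    int main(void)
    {
        /* 23000 is the port the test's conf points the client at. */
        hdfsFS fs = hdfsConnect("localhost", 23000);
        if (fs == NULL) {
            fprintf(stderr, "hdfs_test: cannot connect to localhost:23000\n");
            return 1;  /* fail cleanly instead of segfaulting */
        }
        /* ... read/write checks would go here ... */
        hdfsDisconnect(fs);
        return 0;
    }

That would at least turn the JVM abort into a clean test failure while the namenode startup problem gets sorted out separately.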

> libhdfs leaks memory when writing to files
> ------------------------------------------
>
>                 Key: HADOOP-459
>                 URL: http://issues.apache.org/jira/browse/HADOOP-459
>             Project: Hadoop
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.7.1, 0.7.0, 0.6.2, 0.6.1, 0.6.0, 0.5.0
>            Reporter: Christian Kunz
>         Assigned To: Sameer Paranjpye
>         Attachments: libhdfs_fixes.txt
>
>
> hdfsWrite leaks memory when called repeatedly. The same probably applies to repeated reads using hdfsRead.
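
For anyone following along, the usual cause of this kind of leak in JNI wrappers is local references that are never released: every hdfsWrite call materializes a fresh jbyteArray for the caller's buffer, and without an explicit DeleteLocalRef those references accumulate for as long as the native thread stays attached. A simplified sketch of the suspected pattern (not the patch itself; see libhdfs_fixes.txt for the actual fix):

    #include <jni.h>

    /* Sketch only.  Each call creates a local ref for the copy of the
     * caller's buffer; releasing it keeps repeated writes from growing
     * the JVM's local-reference table without bound. */
    static void writeBuffer(JNIEnv *env, jobject jOutStream,
                            jmethodID midWrite,
                            const void *buffer, jsize length)
    {
        jbyteArray jbuf = (*env)->NewByteArray(env, length);
        if (jbuf == NULL) {
            return;  /* OutOfMemoryError pending in the JVM */
        }
        (*env)->SetByteArrayRegion(env, jbuf, 0, length,
                                   (const jbyte *) buffer);
        /* e.g. invoke FSDataOutputStream.write(byte[]) */
        (*env)->CallVoidMethod(env, jOutStream, midWrite, jbuf);
        /* The line that matters for long-running native callers: */
        (*env)->DeleteLocalRef(env, jbuf);
    }

The same reasoning applies to hdfsRead, where the byte array travels in the other direction.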

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
