hadoop-hdfs-issues mailing list archives

From "Shubhangi Garg (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HDFS-4467) Segmentation fault in libhdfs while connecting to HDFS in an application running a Hive Query
Date Mon, 04 Feb 2013 12:24:12 GMT

     [ https://issues.apache.org/jira/browse/HDFS-4467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Shubhangi Garg updated HDFS-4467:
---------------------------------

    Description: 
Connecting to HDFS using the compiled libhdfs library gives a segmentation fault and memory leaks, both easily verifiable with valgrind.

Even the simple application program given below leaks memory:


#include "hdfs.h"
#include <iostream>

int main(int argc, char **argv) {

    hdfsFS fs = hdfsConnect("localhost", 9000);
    const char* writePath = "/tmp/testfile.txt";
    hdfsFile writeFile = hdfsOpenFile(fs, writePath, O_WRONLY|O_CREAT, 0, 0, 0);
    if(!writeFile) {
          fprintf(stderr, "Failed to open %s for writing!\n", writePath);
          exit(-1);
    }
    char* buffer = "Hello, World!";
    tSize num_written_bytes = hdfsWrite(fs, writeFile, (void*)buffer, strlen(buffer)+1);
    if (hdfsFlush(fs, writeFile)) {
           fprintf(stderr, "Failed to 'flush' %s\n", writePath);
          exit(-1);
    }
   hdfsCloseFile(fs, writeFile);
}
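
For completeness, the sample is compiled against libhdfs roughly as follows. This is a sketch only: the Hadoop install path, the 32-bit library directory, and the JVM location are assumptions based on a stock hadoop 1.0.4 tarball on Ubuntu 12.04, not commands taken from the original report.

shell> export HADOOP_HOME=/usr/local/hadoop-1.0.4
shell> export CLASSPATH=$(ls $HADOOP_HOME/*.jar $HADOOP_HOME/lib/*.jar | tr '\n' ':')
shell> g++ sample.cpp -o sample \
           -I$HADOOP_HOME/src/c++/libhdfs \
           -L$HADOOP_HOME/c++/Linux-i386-32/lib -lhdfs \
           -L/usr/lib/jvm/java-6-openjdk/jre/lib/i386/server -ljvm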


shell> valgrind --leak-check=full ./sample

==12773== LEAK SUMMARY:
==12773==    definitely lost: 7,893 bytes in 21 blocks
==12773==    indirectly lost: 4,460 bytes in 23 blocks
==12773==      possibly lost: 119,833 bytes in 121 blocks
==12773==    still reachable: 1,349,514 bytes in 8,953 blocks
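
One caveat when reading these numbers: libhdfs starts an in-process JVM, so part of the "possibly lost" and "still reachable" totals may be JVM-internal bookkeeping rather than libhdfs allocations. A valgrind suppression file along the following lines can filter out allocations made inside libjvm.so; the file name and pattern are illustrative assumptions, not part of the original report.

# jvm.supp: ignore leak records whose allocation stack passes through libjvm.so
{
   embedded_jvm_allocations
   Memcheck:Leak
   ...
   obj:*/libjvm.so
}

shell> valgrind --leak-check=full --suppressions=jvm.supp ./sample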



  was:
Connecting to HDFS using the compiled libhdfs library gives a segmentation fault and memory leaks, both easily verifiable with valgrind.

Even the simple application program given below leaks memory:


    Environment: Ubuntu 12.04 (32 bit), application in C++, hadoop 1.0.4  (was: Ubuntu 12.04, application in C++)
        Summary: Segmentation fault in libhdfs while connecting to HDFS in an application running a Hive Query  (was: Segmentation fault in libhdfs while connecting to HDFS and running a Hive Query)

I am using libhdfs to import data into HDFS from different databases and to populate Hive tables. It is this libhdfs usage that produces the segmentation faults.

Even the sample program above leaks memory, as the valgrind summary shows; a sketch of the import pattern follows below.
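
For context, the application's import path looks roughly like the sketch below. The table path, the record source, and the helper name are illustrative assumptions, not the actual application code; it simply streams delimited rows into a file under the Hive table's warehouse directory.

#include "hdfs.h"
#include <fcntl.h>   // O_WRONLY, O_CREAT
#include <string>
#include <vector>

// Illustrative helper: write one delimited row per line into a Hive table file.
int importRows(hdfsFS fs, const std::vector<std::string>& rows) {
    const char* path = "/user/hive/warehouse/mytable/part-00000";  // assumed layout
    hdfsFile out = hdfsOpenFile(fs, path, O_WRONLY|O_CREAT, 0, 0, 0);
    if (!out) return -1;
    for (size_t i = 0; i < rows.size(); ++i) {
        std::string line = rows[i] + "\n";
        if (hdfsWrite(fs, out, (void*)line.data(), line.size()) < 0) {
            hdfsCloseFile(fs, out);
            return -1;
        }
    }
    hdfsFlush(fs, out);        // force the buffered bytes out to HDFS
    return hdfsCloseFile(fs, out);
}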
                
> Segmentation fault in libhdfs while connecting to HDFS in an application running a Hive Query
> ----------------------------------------------------------------------------------------------
>
>                 Key: HDFS-4467
>                 URL: https://issues.apache.org/jira/browse/HDFS-4467
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: libhdfs
>    Affects Versions: 1.0.4
>         Environment: Ubuntu 12.04 (32 bit), application in C++, hadoop 1.0.4
>            Reporter: Shubhangi Garg

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
