hadoop-hdfs-dev mailing list archives

From "vince zhang (JIRA)" <j...@apache.org>
Subject [jira] [Created] (HDFS-10369) hdfsread crash when reading data reaches to 128M
Date Thu, 05 May 2016 13:58:12 GMT
vince zhang created HDFS-10369:
----------------------------------

             Summary: hdfsread crash when reading data reaches to 128M
                 Key: HDFS-10369
                 URL: https://issues.apache.org/jira/browse/HDFS-10369
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: fs
            Reporter: vince zhang


See the code below. It crashes after the line
printf("hdfsGetDefaultBlockSize2:%d, ret:%d\n", hdfsGetDefaultBlockSize(fs), ret);
executes, i.e. on the second hdfsRead() call.

  hdfsFile read_file = hdfsOpenFile(fs, "/testpath", O_RDONLY, 0, 0, 1);
  int total = hdfsAvailable(fs, read_file);
  printf("Total:%d\n", total);
  /* "size" (the read length) is not declared in the original report.
     Note: the original line read malloc(sizeof(size+1) * sizeof(char)),
     which allocates only sizeof(int) bytes; (size + 1) is presumably
     what was intended. */
  char* buffer = (char*)malloc((size + 1) * sizeof(char));
  int ret = -1;
  int len = 0;
  ret = hdfsSeek(fs, read_file, 134152192);
  printf("hdfsGetDefaultBlockSize1:%d, ret:%d\n", hdfsGetDefaultBlockSize(fs), ret);
  ret = hdfsRead(fs, read_file, (void*)buffer, size);
  printf("hdfsGetDefaultBlockSize2:%d, ret:%d\n", hdfsGetDefaultBlockSize(fs), ret);
  ret = hdfsRead(fs, read_file, (void*)buffer, size);
  printf("hdfsGetDefaultBlockSize3:%d, ret:%d\n", hdfsGetDefaultBlockSize(fs), ret);
  return 0;



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-help@hadoop.apache.org

