hadoop-hdfs-user mailing list archives

From hadoop <had...@wangzw.org>
Subject Report two bugs
Date Tue, 13 Mar 2012 21:31:15 GMT
Hi,

I am testing Hadoop 0.23.1 and have found two issues. I think they are bugs and would
like someone to confirm.

Issue 1:

STEPS:
1. Deploy a single-node HDFS 0.23.1 cluster and configure HDFS as follows (a sketch of the configuration appears after these steps):
A) enable webhdfs
B) enable append
C) disable permissions
2. Start HDFS.
3. Run the attached test script.
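
For reference, here is a minimal hdfs-site.xml sketch of that configuration. The property names are my assumption based on common HDFS releases and may differ in 0.23.1 (older releases use dfs.permissions rather than dfs.permissions.enabled):

    <configuration>
      <!-- A) enable webhdfs -->
      <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
      </property>
      <!-- B) enable append -->
      <property>
        <name>dfs.support.append</name>
        <value>true</value>
      </property>
      <!-- C) disable permissions (assumed name; verify against your release) -->
      <property>
        <name>dfs.permissions.enabled</name>
        <value>false</value>
      </property>
    </configuration>
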
RESULT:
expected: a file named testFile is created and populated with 32K * 5000 zeros, and HDFS stays healthy.
I got: the script never finishes; the file is created but is not populated as expected, because the append operation fails.

The datanode log shows that the block scanner reports a bad replica and the namenode decides to delete it. Since this is a single-node cluster, the append then fails. It makes no sense that the script fails every single run.
Datanode and namenode logs are attached.

Issue 2:

STEPS:
1. Create a new EMPTY file.
2. Read it using webhdfs (a curl sketch follows these steps).
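
For reproduction, here is a minimal sketch using the documented WebHDFS REST operations (CREATE, then OPEN). The namenode host, port, and user name are placeholders, and 50070 is only the usual default namenode HTTP port:

    # 1. Create a new EMPTY file. CREATE is a two-step operation: the
    #    namenode answers with a 307 redirect to a datanode.
    curl -i -X PUT "http://<NAMENODE>:50070/webhdfs/v1/testFile?op=CREATE&user.name=<USER>"
    # Send an empty body to the Location URL from the 307 response:
    curl -i -X PUT -T /dev/null "<REDIRECT_LOCATION>"

    # 2. Read the file back; -L follows the redirect to the datanode.
    curl -i -L "http://<NAMENODE>:50070/webhdfs/v1/testFile?op=OPEN&user.name=<USER>"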

RESULT:
expected: an empty file
I got:
{"RemoteException":{"exception":"IOException","javaClassName":"java.io.IOException","message":"Offset=0 out of the range [0, 0); OPEN, path=/testFile"}}

First of all, [0, 0) is not a valid range: it is empty, so no offset can fall inside it. And I think reading an empty file should simply succeed.

Are these two issues bugs? If they are, I will report them on JIRA.


Thanks


Zhanwei Wang