hadoop-hdfs-user mailing list archives

From Todd Lipcon <t...@cloudera.com>
Subject Re: Report two bugs
Date Wed, 14 Mar 2012 17:09:58 GMT
Hi,

I didn't look in detail at the attached logs, but these smell like bugs
to me, or at least like something worth investigating. Please do file
them on the JIRA.

Thanks
-Todd

On Tue, Mar 13, 2012 at 2:31 PM, hadoop <hadoop@wangzw.org> wrote:
> Hi,
>
> I am testing Hadoop 0.23.1 and have found two issues. I think they are bugs and need someone to confirm them.
>
> Issue 1:
>
> STEP:
> 1. Deploy a single-node HDFS 0.23.1 cluster and configure HDFS as follows (a sketch of the hdfs-site.xml entries is below this list):
> A) enable webhdfs
> B) enable append
> C) disable permissions
> 2. Start HDFS.
> 3. Run the test script as attached.
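>
> For reference, the relevant hdfs-site.xml entries might look like this (property names as I understand them for 0.23; worth double-checking against hdfs-default.xml):
>
>   <configuration>
>     <!-- A) enable webhdfs -->
>     <property>
>       <name>dfs.webhdfs.enabled</name>
>       <value>true</value>
>     </property>
>     <!-- B) enable append -->
>     <property>
>       <name>dfs.support.append</name>
>       <value>true</value>
>     </property>
>     <!-- C) disable permission checking -->
>     <property>
>       <name>dfs.permissions.enabled</name>
>       <value>false</value>
>     </property>
>   </configuration>
>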
> RESULT:
> expected: a file named testFile should be created and populated with 32K * 5000 zeros, and HDFS should be OK.
> I got: the script cannot finish; the file has been created but not populated as expected, because the append operation actually failed.
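>
> To make the failure mode concrete without the attachment, a loop of roughly this shape reproduces the pattern (my sketch, not the attached script; the class name and chunking are illustrative):
>
>   import org.apache.hadoop.conf.Configuration;
>   import org.apache.hadoop.fs.FSDataOutputStream;
>   import org.apache.hadoop.fs.FileSystem;
>   import org.apache.hadoop.fs.Path;
>
>   public class AppendLoop {
>     public static void main(String[] args) throws Exception {
>       FileSystem fs = FileSystem.get(new Configuration());
>       Path p = new Path("/testFile");
>       byte[] zeros = new byte[32 * 1024];      // one 32K chunk of zeros
>       fs.create(p).close();                    // start from an empty file
>       for (int i = 0; i < 5000; i++) {         // 32K * 5000 in total
>         FSDataOutputStream out = fs.append(p); // reopen for append each round
>         out.write(zeros);
>         out.close();
>       }
>     }
>   }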
>
> The datanode log shows that the block scanner reported a bad replica and the namenode decided to delete it. Since it is a single-node cluster, the append then fails. It makes no sense that the script fails every time.
> Datanode and Namenode logs are attached.
>
> Issue 2:
>
> STEP:
> 1. Create a new EMPTY file.
> 2. Read it using webhdfs (a request sketch follows this list).
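>
> In case it helps, step 2 is the WebHDFS OPEN operation. A minimal Java sketch of the request (localhost and the default 0.23 namenode HTTP port 50070 are assumptions about the setup):
>
>   import java.io.InputStream;
>   import java.net.HttpURLConnection;
>   import java.net.URL;
>
>   public class WebHdfsOpen {
>     public static void main(String[] args) throws Exception {
>       // OPEN normally replies with a 307 redirect to a datanode,
>       // which HttpURLConnection follows automatically for a GET.
>       URL url = new URL("http://localhost:50070/webhdfs/v1/testFile?op=OPEN");
>       HttpURLConnection conn = (HttpURLConnection) url.openConnection();
>       System.out.println("HTTP " + conn.getResponseCode());
>       InputStream in = conn.getInputStream(); // throws IOException on an error status
>       System.out.println("first read returns: " + in.read()); // -1 for an empty file
>       in.close();
>     }
>   }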
>
> RESULT:
> expected: an empty file is returned
> I got: {"RemoteException":{"exception":"IOException","javaClassName":"java.io.IOException","message":"Offset=0 out of the range [0, 0); OPEN, path=/testFile"}}
>
> First of all, [0, 0) is not a valid range, and I think reading an empty file should be OK.
>
> Are these two issues bugs? If they are, I will report them on the JIRA.
>
>
> Thanks
>
>
> Zhanwei Wang



-- 
Todd Lipcon
Software Engineer, Cloudera
