hadoop-common-dev mailing list archives

From "Raghu Angadi (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-1657) NNBench benchmark hangs with trunk
Date Wed, 01 Aug 2007 19:40:53 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-1657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12517054 ]

Raghu Angadi commented on HADOOP-1657:
--------------------------------------

Thanks Nigel. 

+1 for your changes.

> NNBench benchmark hangs with trunk
> ----------------------------------
>
>                 Key: HADOOP-1657
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1657
>             Project: Hadoop
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.14.0
>            Reporter: Raghu Angadi
>            Assignee: Raghu Angadi
>            Priority: Blocker
>             Fix For: 0.14.0
>
>         Attachments: HADOOP-1657.patch, HADOOP-1657.patch, HADOOP-1657.patch, HADOOP-1657.patch, HADOOP-1657.patch
>
>
> NNBench runs with a small block size (say 20) but uses the default value of 512 for
> io.bytes.per.checksum. Since HADOOP-1134, the block size must be a multiple of
> io.bytes.per.checksum, so the fix is to set io.bytes.per.checksum to the same value as the
> block size (see the configuration sketch below).
> I think the following changes to NNBench would help in general (at least the first one);
> a bounded-retry sketch also follows below:
>  - NNBench does not log these exceptions. I think it should.
>  - It calls create() in an infinite loop until the create succeeds. Maybe we should set an
>    upper limit, say a maximum of 10000 attempts.
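
A minimal sketch of the configuration fix described in the quoted description, written
against the old Hadoop FileSystem API. It is illustrative only: the class name, path,
replication factor, and buffer-size handling are assumptions, not NNBench's actual code.

    // Sketch: force io.bytes.per.checksum to match the small benchmark block size, so
    // that blockSize % bytesPerChecksum == 0 as required since HADOOP-1134.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ChecksumBlockSizeSketch {
      public static void main(String[] args) throws Exception {
        long blockSize = 20;                                    // small benchmark block size
        Configuration conf = new Configuration();
        conf.setInt("io.bytes.per.checksum", (int) blockSize);  // instead of the default 512
        FileSystem fs = FileSystem.get(conf);
        FSDataOutputStream out = fs.create(
            new Path("/benchmarks/NNBench/file_0"),             // illustrative path
            true,                                               // overwrite
            conf.getInt("io.file.buffer.size", 4096),           // buffer size
            (short) 1,                                          // replication
            blockSize);
        out.close();
      }
    }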
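For the second suggestion (a cap on create() retries, with the failures logged), a
bounded-retry sketch; the class name, MAX_ATTEMPTS, the logger, and the path are
assumptions for illustration, not the actual NNBench code.

    // Sketch: retry create() at most MAX_ATTEMPTS times and log each failure,
    // instead of looping forever and silently swallowing the exception.
    import java.io.IOException;
    import org.apache.commons.logging.Log;
    import org.apache.commons.logging.LogFactory;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class BoundedCreateSketch {
      private static final Log LOG = LogFactory.getLog(BoundedCreateSketch.class);
      private static final int MAX_ATTEMPTS = 10000;           // upper limit suggested above

      static FSDataOutputStream createWithRetries(FileSystem fs, Path path) throws IOException {
        for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
          try {
            return fs.create(path);
          } catch (IOException e) {
            LOG.warn("create() failed on attempt " + attempt, e); // log rather than ignore
          }
        }
        throw new IOException("create() did not succeed after " + MAX_ATTEMPTS + " attempts");
      }

      public static void main(String[] args) throws IOException {
        FileSystem fs = FileSystem.get(new Configuration());
        FSDataOutputStream out = createWithRetries(fs, new Path("/benchmarks/NNBench/file_0"));
        out.close();
      }
    }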

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

