hadoop-common-dev mailing list archives

From "Raghu Angadi (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-1259) DFS should enforce block size is a multiple of io.bytes.per.checksum
Date Fri, 13 Apr 2007 18:46:15 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-1259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12488746 ]

Raghu Angadi commented on HADOOP-1259:

> [...] bytesPerChecksum since checksum is always.

This sentence was not finished. It should read: "the last checksum could be for less than
bytesPerChecksum bytes for a block, since checksums are kept only per block, not for the whole file."
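The arithmetic behind that correction can be sketched as follows (an illustrative helper, not actual Hadoop code): when a block's length is not a multiple of bytesPerChecksum, the final checksum covers only the remainder.

```java
// Sketch (illustrative, not DFSClient code): size of the data range
// covered by the last checksum of a block.
public class LastChunk {
    // Returns the number of bytes covered by the final checksum of a
    // block of blockLen bytes, with one checksum per bytesPerChecksum bytes.
    static long lastChunkLen(long blockLen, int bytesPerChecksum) {
        long rem = blockLen % bytesPerChecksum;
        // If the block length divides evenly, the last chunk is full-sized.
        return rem == 0 ? Math.min(blockLen, bytesPerChecksum) : rem;
    }

    public static void main(String[] args) {
        // A 1000-byte block with io.bytes.per.checksum = 512 has two
        // checksum chunks: 512 bytes and 488 bytes.
        System.out.println(lastChunkLen(1000, 512)); // prints 488
    }
}
```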

> DFS should enforce block size is a multiple of io.bytes.per.checksum 
> ---------------------------------------------------------------------
>                 Key: HADOOP-1259
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1259
>             Project: Hadoop
>          Issue Type: Improvement
>            Reporter: Raghu Angadi
> DFSClient currently does not enforce that dfs.block.size is a multiple of io.bytes.per.checksum.
This is not really a problem currently, but it can affect future upgrades like HADOOP-1134 (see one
of the comments there: http://issues.apache.org/jira/browse/HADOOP-1134#action_12488542).
> I propose that DFSClient should fail loudly and ask the user politely to change the config
to meet this condition. Of course, we will also change the documentation for dfs.block.size.
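The proposed check could look roughly like the following sketch (the class and method names are illustrative, not actual DFSClient code): validate the two config values up front and fail with a clear message rather than proceeding with a misconfigured client.

```java
// Hypothetical sketch of the proposed validation (names are illustrative):
// reject the configuration if dfs.block.size is not a multiple of
// io.bytes.per.checksum, with a message telling the user what to change.
public class BlockSizeCheck {
    static void checkBlockSize(long blockSize, int bytesPerChecksum) {
        if (bytesPerChecksum <= 0 || blockSize % bytesPerChecksum != 0) {
            throw new IllegalArgumentException(
                "io.bytes.per.checksum (" + bytesPerChecksum + ") must be a " +
                "positive divisor of dfs.block.size (" + blockSize + "); " +
                "please adjust your configuration.");
        }
    }

    public static void main(String[] args) {
        // The historical defaults (64 MB block, 512-byte checksum chunks) pass.
        checkBlockSize(64L * 1024 * 1024, 512);
    }
}
```

Failing at client startup keeps the error close to the misconfiguration, instead of surfacing later as a hard-to-diagnose checksum or upgrade problem.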

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
