hadoop-common-dev mailing list archives

From "Doug Cutting (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-1259) DFS should enforce block size is a multiple of io.bytes.per.checksum
Date Fri, 13 Apr 2007 18:46:15 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-1259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12488747 ]

Doug Cutting commented on HADOOP-1259:
--------------------------------------

So the reason to do this is to simplify future checksum upgrades, right?  But I don't see
how it complicates upgrades to permit the final checksum in each block to represent fewer
bytes than bytesPerChecksum.  If we want to change bytesPerChecksum for a file or a block
we can do that as a datanode-local operation, not requiring access to the namenode or other
datanodes.  The client may become a bit more complicated, but not much.  But then we can add
file append without further changes to the client or datanodes.
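
For concreteness, here is a minimal sketch (illustration only, not Hadoop code; the class and
method names are made up) of how the final checksum of a block can legitimately cover fewer
bytes than bytesPerChecksum when the block length is not an exact multiple:

    // Illustration only: per-chunk coverage when blockLen is not a
    // multiple of bytesPerChecksum. Names are hypothetical.
    public class ChunkLayoutExample {
      static int[] chunkSizes(long blockLen, int bytesPerChecksum) {
        int numChunks = (int) ((blockLen + bytesPerChecksum - 1) / bytesPerChecksum);
        int[] sizes = new int[numChunks];
        for (int i = 0; i < numChunks; i++) {
          long remaining = blockLen - (long) i * bytesPerChecksum;
          sizes[i] = (int) Math.min(bytesPerChecksum, remaining);
        }
        return sizes;
      }
      public static void main(String[] args) {
        // A 1000-byte block with bytesPerChecksum = 512 yields chunks of
        // 512 and 488 bytes; the last checksum protects only 488 bytes.
        System.out.println(java.util.Arrays.toString(chunkSizes(1000, 512)));
      }
    }

Handling that partial final chunk is purely local to the client and the datanode holding the
block, which is why it need not involve the namenode.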

> DFS should enforce block size is a multiple of io.bytes.per.checksum 
> ---------------------------------------------------------------------
>
>                 Key: HADOOP-1259
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1259
>             Project: Hadoop
>          Issue Type: Improvement
>            Reporter: Raghu Angadi
>
> DFSClient currently does not enforce that dfs.block.size is a multiple of io.bytes.per.checksum.
> This is not really a problem currently, but it can complicate future upgrades like HADOOP-1134
> (see one of the comments there: http://issues.apache.org/jira/browse/HADOOP-1134#action_12488542).
> I propose that DFSClient should fail loudly and ask the user politely to change the config
> to meet this condition. Of course, we will also change the documentation for dfs.block.size.
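
A minimal sketch of the kind of client-side check being proposed, under stated assumptions:
in the real code DFSClient would read both values from the Configuration, and the method name
and exception message here are hypothetical, not the actual wording:

    import java.io.IOException;

    // Illustration only: reject a block size that is not a multiple of
    // io.bytes.per.checksum, so the client fails loudly at file creation
    // time instead of writing a block with an awkward checksum layout.
    public class BlockSizeCheckExample {
      static void checkBlockSize(long blockSize, int bytesPerChecksum) throws IOException {
        if (blockSize % bytesPerChecksum != 0) {
          throw new IOException("dfs.block.size (" + blockSize + ") must be a multiple of "
              + "io.bytes.per.checksum (" + bytesPerChecksum
              + "); please adjust the configuration.");
        }
      }
      public static void main(String[] args) throws IOException {
        checkBlockSize(64L * 1024 * 1024, 512);     // ok: 64 MB is a multiple of 512
        checkBlockSize(64L * 1024 * 1024 + 1, 512); // fails loudly with an IOException
      }
    }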

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

