hadoop-common-dev mailing list archives

From "Raghu Angadi (JIRA)" <j...@apache.org>
Subject [jira] Issue Comment Edited: (HADOOP-2154) Non-interleaved checksums would optimize block transfers.
Date Thu, 29 Nov 2007 20:10:43 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-2154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12546864 ]

rangadi edited comment on HADOOP-2154 at 11/29/07 12:08 PM:
-----------------------------------------------------------------

Also "the original data (not partitioned into chunks)" does not mean the full 128 MB, right? It is up to something like 64 kB, or whatever the io buffer size is.
Edit: This is what I mean by the 2nd option in the 2nd comment above.


      was (Author: rangadi):
    Also "the original data (not partitioned into chunks)" does not mean the full 128 MB, right? It is up to something like 64 kB, or whatever the io buffer size is.

  
> Non-interleaved checksums would optimize block transfers.
> ---------------------------------------------------------
>
>                 Key: HADOOP-2154
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2154
>             Project: Hadoop
>          Issue Type: Improvement
>          Components: dfs
>    Affects Versions: 0.14.0
>            Reporter: Konstantin Shvachko
>            Assignee: Rajagopal Natarajan
>             Fix For: 0.16.0
>
>
> Currently, when a block is transferred to a data-node, the client interleaves data chunks with the respective checksums.
> This requires creating an extra copy of the original data in a new buffer interleaved with the crcs.
> We can avoid the extra copying if the data and the crc are fed to the socket one after another.
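To illustrate the difference the issue describes, here is a minimal sketch in Java. The class and method names below are hypothetical, not Hadoop's actual DataNode code: the interleaved layout copies each data chunk next to its CRC in a freshly allocated buffer, while the non-interleaved layout simply writes the original data buffer and the CRC buffer to the stream one after another, with no intermediate copy.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;

// Hypothetical illustration of interleaved vs. non-interleaved checksum layout.
public class ChecksumLayout {
    static final int CHUNK = 512; // bytes of data per checksum chunk
    static final int CRC = 4;     // a CRC-32 is 4 bytes

    // Interleaved: allocate a new buffer and copy [chunk][crc][chunk][crc]...
    // This is the extra copy the issue wants to avoid.
    static byte[] interleave(byte[] data, byte[] crcs) {
        int chunks = data.length / CHUNK;
        byte[] out = new byte[data.length + chunks * CRC];
        int pos = 0;
        for (int i = 0; i < chunks; i++) {
            System.arraycopy(data, i * CHUNK, out, pos, CHUNK);
            pos += CHUNK;
            System.arraycopy(crcs, i * CRC, out, pos, CRC);
            pos += CRC;
        }
        return out;
    }

    // Non-interleaved: feed data and crcs to the (socket) stream back to back;
    // the original buffers are written as-is, so no new buffer is built.
    static void writeSequential(ByteArrayOutputStream sock, byte[] data, byte[] crcs)
            throws IOException {
        sock.write(data);
        sock.write(crcs);
    }
}
```

Both layouts put the same number of bytes on the wire; the difference is only whether a new interleaved buffer must be built first.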

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

