hadoop-common-dev mailing list archives

From "Doug Cutting (JIRA)" <j...@apache.org>
Subject [jira] Updated: (HADOOP-1450) checksums should be closer to data generation and consumption
Date Thu, 31 May 2007 22:04:17 GMT

     [ https://issues.apache.org/jira/browse/HADOOP-1450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Doug Cutting updated HADOOP-1450:
---------------------------------

    Attachment: HADOOP-1450.patch

This patch changes the outer buffers to contain just bytesPerSum bytes, and uses the
user-specified buffer size for the inner buffers.  This should catch more memory errors,
especially when large buffers are used.
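To illustrate the arrangement the patch moves toward, here is a minimal sketch (not the actual Hadoop class; the class name, constructor, and CRC32 checksum choice are assumptions for illustration): the outer buffer holds only bytesPerSum bytes, so each chunk is checksummed almost immediately after it is written, while the data then passes into an inner BufferedOutputStream sized by the user-specified buffer size.

```java
import java.io.BufferedOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.zip.CRC32;

// Hypothetical sketch of a small-outer-buffer checksumming stream.
// Outer buffer = bytesPerSum bytes (checksummed per chunk);
// inner buffer = user-specified size (the large one).
class SmallOuterChecksumStream extends OutputStream {
    private final byte[] chunk;          // outer buffer: bytesPerSum bytes
    private int count = 0;
    private final OutputStream data;     // inner stream with the large buffer
    private final OutputStream sums;     // destination for per-chunk checksums
    private final CRC32 crc = new CRC32();

    SmallOuterChecksumStream(OutputStream data, OutputStream sums,
                             int bytesPerSum, int userBufferSize) {
        this.chunk = new byte[bytesPerSum];
        this.data = new BufferedOutputStream(data, userBufferSize);
        this.sums = sums;
    }

    @Override
    public void write(int b) throws IOException {
        chunk[count++] = (byte) b;
        if (count == chunk.length) {
            flushChunk();                // checksum as soon as a chunk fills
        }
    }

    private void flushChunk() throws IOException {
        crc.reset();
        crc.update(chunk, 0, count);     // checksum computed before data lingers
        long sum = crc.getValue();
        for (int i = 3; i >= 0; i--) {   // write CRC as 4 big-endian bytes
            sums.write((int) (sum >>> (8 * i)));
        }
        data.write(chunk, 0, count);     // data now sits in the big inner buffer
        count = 0;
    }

    @Override
    public void close() throws IOException {
        if (count > 0) {
            flushChunk();                // checksum any trailing partial chunk
        }
        data.close();
        sums.close();
    }
}
```

On the read side the same inversion applies: validation happens on the small outer chunks, as close to the consumer as possible, while the large inner buffer does the bulk I/O.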

> checksums should be closer to data generation and consumption
> -------------------------------------------------------------
>
>                 Key: HADOOP-1450
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1450
>             Project: Hadoop
>          Issue Type: Improvement
>          Components: fs
>            Reporter: Doug Cutting
>             Fix For: 0.14.0
>
>         Attachments: HADOOP-1450.patch
>
>
> ChecksumFileSystem checksums data by inserting a filter between two buffers.  The
> outermost buffer should be as small as possible, so that, when writing, checksums are
> computed before the data has spent much time in memory, and, when reading, checksums
> are validated as close to their time of use as possible.  Currently the outer buffer
> is the larger, using the bufferSize specified by the user, and the inner is small, so
> that most reads and writes will bypass it, as an optimization.  Instead, the outer
> buffer should be made to be bytesPerChecksum, and the inner buffer should be the
> user-specified buffer size.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

