hadoop-common-dev mailing list archives

From "Chris Douglas (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-1656) HDFS does not record the blocksize for a file
Date Mon, 27 Aug 2007 21:35:30 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-1656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12523112 ]

Chris Douglas commented on HADOOP-1656:

I could only find one piece of code where this could be a problem: dfs.FileDataServlet::pickSrcDatanode.
For a file longer than the reported blocksize, it assumes that a file of length n*blocksize
contains n blocks (unless the file is zero-length, in which case it asks for only one block).
It will still work, but the number of blocks surveyed is no longer what is claimed.
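The arithmetic behind that assumption can be sketched as follows. This is a hypothetical standalone illustration, not the actual FileDataServlet code: `assumedBlockCount` mirrors the described logic of dividing the file length by the *reported* blocksize, which disagrees with the real block list whenever the file was written with a different blocksize.

```java
public class BlockCountSketch {
    // Hypothetical helper mirroring the assumption described above:
    // derive a block count from the file length and a reported blocksize.
    static long assumedBlockCount(long fileLength, long reportedBlockSize) {
        if (fileLength == 0) {
            return 1; // zero-length file: ask for only one block
        }
        // ceil(fileLength / reportedBlockSize)
        return (fileLength + reportedBlockSize - 1) / reportedBlockSize;
    }

    public static void main(String[] args) {
        long fileLength  = 256L * 1024 * 1024; // 256 MB file
        long writtenWith = 128L * 1024 * 1024; // actually written with 128 MB blocks
        long reported    =  64L * 1024 * 1024; // namenode reports the 64 MB default

        // Using the reported default overcounts: 4 blocks surveyed
        // instead of the 2 the file really has.
        System.out.println(assumedBlockCount(fileLength, reported));    // 4
        System.out.println(assumedBlockCount(fileLength, writtenWith)); // 2
    }
}
```

Once the Namenode records the per-file blocksize, the second call is the one the servlet could make, and the surveyed count matches the real block list.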

The only other case, in TestDFSShell, is irrelevant: querying the blocksize of a zero-length
file need only not throw. That the test expects zero doesn't matter.

> HDFS does not record the blocksize for a file
> ---------------------------------------------
>                 Key: HADOOP-1656
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1656
>             Project: Hadoop
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.13.0
>            Reporter: Sameer Paranjpye
>            Assignee: dhruba borthakur
>             Fix For: 0.15.0
>         Attachments: blockSize4.patch
> The blocksize that a file is created with is not recorded by the Namenode. It is used
> only by the client when it writes the file. Invoking 'getBlockSize' merely returns the
> size of the first block. The Namenode should record the blocksize.
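The pitfall the report describes can be shown with a small sketch. This is a hypothetical standalone example, not the HDFS API: `getBlockSizeGuess` stands in for the behavior of returning the first block's length when no per-file blocksize was recorded.

```java
import java.util.List;

public class FirstBlockGuess {
    // Hypothetical stand-in for the behavior described in the issue:
    // with no recorded blocksize, report the first block's length.
    static long getBlockSizeGuess(List<Long> blockLengths) {
        return blockLengths.isEmpty() ? 0L : blockLengths.get(0);
    }

    public static void main(String[] args) {
        // A 10-byte file written with a 64 MB blocksize has a single
        // 10-byte block, so the "blocksize" reported is 10, not 64 MB.
        List<Long> blocks = List.of(10L);
        System.out.println(getBlockSizeGuess(blocks)); // 10, not 67108864
    }
}
```

Recording the creation-time blocksize in the Namenode, as the patch proposes, removes the need for this guess.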

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
