hadoop-common-issues mailing list archives

From "Uma Maheswara Rao G (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HADOOP-7623) S3FileSystem reports block-size as length of File if file length is less than a block
Date Mon, 12 Sep 2011 12:55:08 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-7623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13102625#comment-13102625 ]

Uma Maheswara Rao G commented on HADOOP-7623:

It looks like S3 gets the block size directly from the actual size of the first block:

private static long findBlocksize(INode inode) {
    final Block[] ret = inode.getBlocks();
    return ret == null ? 0L : ret[0].getLength();
}

I also think we should serialize the block size into the INode, as DFS does. Let's see whether
there is a specific reason for doing it this way in S3 alone. Can someone who was involved with
S3FileSystem initially clarify?

The problem here is that, until the first block completes, we will not know the exact block size,
if I am not wrong. Is this your issue?
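To illustrate the two behaviors being discussed, here is a minimal, self-contained sketch. The `INode` class below is a hypothetical stand-in, not the real `org.apache.hadoop.fs.s3.INode`: it assumes the inode could carry a serialized `blockSize` field, which is exactly the change being proposed and does not exist in the current code.

```java
public class BlockSizeSketch {
    // Hypothetical simplified inode: the real S3 INode stores only the
    // blocks themselves, not the configured block size.
    static class INode {
        final long blockSize;      // configured block size, serialized with the inode (proposed)
        final long[] blockLengths; // actual lengths of the stored blocks

        INode(long blockSize, long[] blockLengths) {
            this.blockSize = blockSize;
            this.blockLengths = blockLengths;
        }
    }

    // Current behavior: report the first block's length as the block size.
    static long findBlockSizeFromFirstBlock(INode inode) {
        return inode.blockLengths.length == 0 ? 0L : inode.blockLengths[0];
    }

    // Proposed behavior: report the serialized block size, as HDFS does.
    static long findBlockSizeStored(INode inode) {
        return inode.blockSize;
    }

    public static void main(String[] args) {
        // A 2048-byte file written with a 64 MB (67108864) block size.
        INode small = new INode(67108864L, new long[] { 2048L });
        System.out.println(findBlockSizeFromFirstBlock(small)); // 2048 (the reported bug)
        System.out.println(findBlockSizeStored(small));         // 67108864
    }
}
```

With the stored block size, a 2048-byte file would report 67108864, matching HDFS, instead of its own length.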


> S3FileSystem reports block-size as length of File if file length is less than a block
> -------------------------------------------------------------------------------------
>                 Key: HADOOP-7623
>                 URL: https://issues.apache.org/jira/browse/HADOOP-7623
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs/s3
>    Affects Versions: 0.20.1, 0.21.0
>            Reporter: Subroto Sanyal
>              Labels: hadoop
>             Fix For: 0.24.0
> In S3FileSystem, create a file with a block size of 67108864.
> Write 2048 bytes of data into the file (less than 67108864).
> Assert the block size of the file: the block size reported will be 2048 rather than 67108864.
> This behavior is not in line with HDFS.

This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira

