hadoop-common-issues mailing list archives

From "Dan Hecht (JIRA)" <j...@apache.org>
Subject [jira] [Created] (HADOOP-11584) s3a file block size set to 0
Date Wed, 11 Feb 2015 20:15:11 GMT
Dan Hecht created HADOOP-11584:
----------------------------------

             Summary: s3a file block size set to 0
                 Key: HADOOP-11584
                 URL: https://issues.apache.org/jira/browse/HADOOP-11584
             Project: Hadoop Common
          Issue Type: Bug
          Components: fs/s3
    Affects Versions: 2.6.0
            Reporter: Dan Hecht


The consequence is that MapReduce probably does not split s3a files in the expected way.
This is similar to HADOOP-5861 (which was for s3n, though s3n passed 5 GB rather than 0
for the block size).

FileInputFormat.getSplits() relies on the FileStatus block size being set:
{code}
        if (isSplitable(job, path)) {
          long blockSize = file.getBlockSize();
          long splitSize = computeSplitSize(blockSize, minSize, maxSize);
{code}
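With a block size of 0, computeSplitSize() collapses to the configured minimum split size. A self-contained sketch of that arithmetic (FileInputFormat.computeSplitSize() is Math.max(minSize, Math.min(maxSize, blockSize)); the min/max split-size defaults shown are assumptions):

```java
public class SplitSizeDemo {
    // Mirrors FileInputFormat.computeSplitSize():
    //   Math.max(minSize, Math.min(maxSize, blockSize))
    static long computeSplitSize(long blockSize, long minSize, long maxSize) {
        return Math.max(minSize, Math.min(maxSize, blockSize));
    }

    public static void main(String[] args) {
        long minSize = 1L;              // assumed split.minsize default
        long maxSize = Long.MAX_VALUE;  // assumed split.maxsize default

        // With a sane block size (128 MB), the split size tracks the block size:
        System.out.println(computeSplitSize(128L << 20, minSize, maxSize)); // 134217728

        // With s3a's block size of 0, the split size degenerates to minSize,
        // so the input is chopped into pathologically small splits:
        System.out.println(computeSplitSize(0L, minSize, maxSize)); // 1
    }
}
```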

However, S3AFileSystem does not set the FileStatus block size field. From S3AFileStatus.java:
{code}
  // Files
  public S3AFileStatus(long length, long modification_time, Path path) {
    super(length, false, 1, 0, modification_time, path);
    isEmptyDirectory = false;
  }
{code}

I think it should use S3AFileSystem.getDefaultBlockSize() for each file's block size (where
it's currently passing 0).
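A minimal, self-contained sketch of the proposed change, using simplified stand-ins for the Hadoop classes (the real S3AFileStatus extends org.apache.hadoop.fs.FileStatus, and the block size would come from S3AFileSystem.getDefaultBlockSize(); the 32 MB value below is only an illustrative assumption):

```java
// Simplified stand-in for FileStatus: only the fields relevant here are modeled.
class FileStatusSketch {
    private final long length;
    private final long blockSize;

    FileStatusSketch(long length, boolean isdir, int replication,
                     long blockSize, long modificationTime) {
        this.length = length;
        this.blockSize = blockSize;
    }

    long getBlockSize() { return blockSize; }
}

// Sketch of the fixed file constructor: instead of hard-coding 0 in the
// block-size slot, thread through the filesystem's default block size.
class S3AFileStatusSketch extends FileStatusSketch {
    S3AFileStatusSketch(long length, long modificationTime, long blockSize) {
        super(length, false, 1, blockSize, modificationTime); // was: 0
    }
}

public class BlockSizeFixDemo {
    public static void main(String[] args) {
        // Illustrative value for what getDefaultBlockSize() might return:
        long defaultBlockSize = 32L << 20;
        FileStatusSketch st =
            new S3AFileStatusSketch(1_000_000L, 0L, defaultBlockSize);
        System.out.println(st.getBlockSize()); // 33554432, no longer 0
    }
}
```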



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
