hadoop-common-issues mailing list archives

From "Steve Loughran (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HADOOP-12444) Consider implementing lazy seek in S3AInputStream
Date Fri, 01 Apr 2016 17:19:25 GMT

     [ https://issues.apache.org/jira/browse/HADOOP-12444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]

Steve Loughran updated HADOOP-12444:
------------------------------------
    Attachment: HADOOP-12444-004.patch

I've had a look; I estimate it will be ready after 2+ more revisions, but the corner cases need to be
worked through first — that is: failure handling on IO errors and on bad arguments. New tests are probably
appropriate too, as I think readFully could have its requirements explored more (that is:
if patch 003 passed the contract tests, we need to extend the tests).

1. (Fixed in -004.) {{seekInStream}} would update {{this.pos}} before trying to close the current
stream; that {{pos}} value is used to decide whether to abort or close. With the ordering proposed,
this would mean that for a 4GB file at position 0, a seek(4GB) would have triggered reading
the full 4GB of data. This would be a subtle regression on HADOOP-11570.

The fix (in this patch) is to change the order:
{code}
    closeStream(this.requestedStreamLen);
    // now update the target position
    pos = targetPos;
{code}
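To make the ordering issue concrete, here is a minimal, hypothetical sketch. The names ({{closeStream}} returning a decision, the 64KB drain threshold) are illustrative only, not the actual S3AInputStream code, but they model the close-vs-abort choice HADOOP-11570 introduced:

```java
// Illustrative sketch only: field/method names and the 64KB threshold are
// assumptions, not the real S3AInputStream implementation.
class LazySeekSketch {
    long contentLength = 4L * 1024 * 1024 * 1024; // a 4GB object
    long pos = 0;                                 // actual position in the open stream

    /** Decide close() vs abort() from how much unread data remains. */
    String closeStream() {
        long remaining = contentLength - pos;
        // Draining a few KB is cheaper than a new connection;
        // draining gigabytes is not (the HADOOP-11570 concern).
        return remaining > 64 * 1024 ? "abort" : "close";
    }

    String seekWrongOrder(long targetPos) {
        pos = targetPos;      // BUG: pos updated first...
        return closeStream(); // ...remaining now looks like 0, so it drains 4GB
    }

    String seekFixedOrder(long targetPos) {
        String action = closeStream(); // decide while pos is still accurate
        pos = targetPos;               // now update the target position
        return action;
    }
}
```

From position 0, a seek to 4GB with the wrong ordering chooses "close" (draining the whole object); with the fixed ordering it correctly chooses "abort".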


2. Why does {{getPos()}} re-open the stream in some cases?

I don't see why this should ever be needed in a lazy implementation; it would make the following
sequence expensive, rather than just a sequence of updates to internal state.

{code}
seek(0)
seek(256)
getPos()
seek(0)
{code}
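The lazy model implied here can be sketched as follows; field names ({{nextReadPos}}, {{streamOpen}}) are illustrative, not the exact S3AInputStream members:

```java
// Illustrative sketch of lazy seek: seek() and getPos() only touch
// bookkeeping state; the stream is (re)opened on the first read().
class LazyPos {
    private long nextReadPos;   // position the next read should start at
    private boolean streamOpen; // whether an HTTP stream is currently open

    void seek(long targetPos) {
        nextReadPos = targetPos; // bookkeeping only: no network I/O
    }

    long getPos() {
        return nextReadPos;      // must not (re)open the stream
    }

    int read() {
        if (!streamOpen) {
            streamOpen = true;   // the real open happens here, on demand
        }
        nextReadPos++;
        return 0;                // placeholder byte
    }

    boolean isStreamOpen() {
        return streamOpen;
    }
}
```

Under this model the seek(0), seek(256), getPos(), seek(0) sequence above never opens a connection at all.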


3. {{readFully()}} semantics

{code}
  @Override
  public synchronized void readFully(long position, byte[] buffer, int
      offset, int length) throws IOException {

    checkNotClosed();

    if (this.contentLength == 0 || (nextReadPos > contentLength - 1)) {
      return;
    }
{code}

I don't think this complies with {{PositionedReadable.readFully}}
 [http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/filesystem/fsdatainputstream.html],
though I see the specification there is itself incorrect (too much of read()'s text was copied in).

If something is wrong with the source lengths, {{readFully}} *MUST* fail with an exception,
rather than just return silently.
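A sketch of the required semantics, failing loudly on a bad range instead of returning silently. The method shape follows {{PositionedReadable.readFully}}; the validation details and the elided read loop are assumptions for illustration:

```java
import java.io.EOFException;
import java.io.IOException;

// Illustrative sketch only: validation logic is an assumption about the
// required behaviour, not the actual patch code.
class ReadFullySketch {
    final long contentLength;

    ReadFullySketch(long contentLength) {
        this.contentLength = contentLength;
    }

    void readFully(long position, byte[] buffer, int offset, int length)
            throws IOException {
        if (position < 0 || offset < 0 || length < 0
                || length > buffer.length - offset) {
            throw new IndexOutOfBoundsException("bad offset/length");
        }
        if (length == 0) {
            return;  // a zero-byte read is not an error
        }
        if (position + length > contentLength) {
            // MUST raise an error, not return silently
            throw new EOFException("requested range [" + position + ", "
                + (position + length) + ") past end of file of length "
                + contentLength);
        }
        // ... loop over read(position, buffer, offset, length) until
        // 'length' bytes are copied, throwing EOFException on short reads ...
    }
}
```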

> Consider implementing lazy seek in S3AInputStream
> -------------------------------------------------
>
>                 Key: HADOOP-12444
>                 URL: https://issues.apache.org/jira/browse/HADOOP-12444
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 2.7.1
>            Reporter: Rajesh Balamohan
>            Assignee: Rajesh Balamohan
>         Attachments: HADOOP-12444-004.patch, HADOOP-12444.1.patch, HADOOP-12444.2.patch,
HADOOP-12444.3.patch, HADOOP-12444.WIP.patch, hadoop-aws-test-reports.tar.gz
>
>
> - Currently, "read(long position, byte[] buffer, int offset, int length)" is not implemented
in S3AInputStream (unlike DFSInputStream). So, "readFully(long position, byte[] buffer, int
offset, int length)" in S3AInputStream goes through the default implementation of seek(),
read(), seek() in FSInputStream. 
> - However, seek() in S3AInputStream involves re-opening the connection to S3 every time
(https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L115).
 
> - It would be good to consider having a lazy seek implementation to reduce connection
overheads to S3. (e.g. Presto implements lazy seek: https://github.com/facebook/presto/blob/master/presto-hive/src/main/java/com/facebook/presto/hive/PrestoS3FileSystem.java#L623)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
