hadoop-common-issues mailing list archives

From "Steve Loughran (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HADOOP-13130) s3a failures can surface as RTEs, not IOEs
Date Fri, 13 May 2016 18:18:13 GMT

     [ https://issues.apache.org/jira/browse/HADOOP-13130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran updated HADOOP-13130:
------------------------------------
    Attachment: HADOOP-13130-001.patch

Amazon S3 service and client exceptions are caught and wrapped into IOEs.

If they map to standard exceptions (e.g. 404 -> not found, 416 -> EOF) then that is done... I
opened up some of the constructors on the existing hadoop.fs exceptions to ease wrapping the
amazon ones here.
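The status-code mapping can be sketched as below. This is only illustrative: the class and method names here are stand-ins, not the patch's actual code, and the SDK exception is simulated rather than the real {{AmazonServiceException}}.

```java
import java.io.EOFException;
import java.io.FileNotFoundException;
import java.io.IOException;

public class S3AExceptionTranslation {

    // Simplified stand-in for the SDK's AmazonServiceException (a RuntimeException).
    public static class ServiceRTE extends RuntimeException {
        private final int statusCode;
        public ServiceRTE(String message, int statusCode) {
            super(message);
            this.statusCode = statusCode;
        }
        public int getStatusCode() { return statusCode; }
    }

    /**
     * Map well-known HTTP status codes onto the standard hadoop.fs
     * exceptions: 404 -> FileNotFoundException, 416 -> EOFException.
     * Anything unrecognized becomes a plain IOException wrapping the cause.
     */
    public static IOException translate(String path, ServiceRTE e) {
        switch (e.getStatusCode()) {
            case 404:
                return new FileNotFoundException(path + ": " + e.getMessage());
            case 416:
                return new EOFException(path + ": " + e.getMessage());
            default:
                return new IOException(path + ": " + e.getMessage(), e);
        }
    }
}
```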

If they aren't known, there are two new IOEs, {{AwsServiceIOException}} and {{AwsS3IOException}},
which wrap {{AmazonServiceException}} and {{AmazonS3Exception}} respectively. These relay
all the getters to the wrapped cause, such as {{getStatusCode()}}, {{getRawResponseContent()}}, etc.
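The wrapper pattern looks roughly like this. The real classes are in the patch; the names and the stand-in SDK exception below are simplified for illustration only.

```java
import java.io.IOException;

public class WrapperSketch {

    // Simplified stand-in for com.amazonaws.AmazonServiceException.
    public static class ServiceRTE extends RuntimeException {
        private final int statusCode;
        private final String rawResponseContent;
        public ServiceRTE(String message, int statusCode, String raw) {
            super(message);
            this.statusCode = statusCode;
            this.rawResponseContent = raw;
        }
        public int getStatusCode() { return statusCode; }
        public String getRawResponseContent() { return rawResponseContent; }
    }

    // An IOException keeping the service exception as its cause, relaying
    // the interesting getters so callers who catch plain IOException can
    // still inspect the underlying service response.
    public static class AwsServiceIOE extends IOException {
        public AwsServiceIOE(ServiceRTE cause) {
            super(cause.getMessage(), cause);
        }
        public int getStatusCode() {
            return ((ServiceRTE) getCause()).getStatusCode();
        }
        public String getRawResponseContent() {
            return ((ServiceRTE) getCause()).getRawResponseContent();
        }
    }
}
```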

I've gone through all the code to make sure that all invocations of the s3 object are, ultimately,
caught and translated to IOEs. For the main FS operations, I've done this by splitting each
operation into an innerX and a public X (rename, delete, ..), with the outer one doing the catch
and translate. Some operations do exception handling more internally ({{getFileStatus}} in
particular), so that's more complex. The {{S3AFastOutputStream}} is also somewhat convoluted.
Reviews there are welcome.
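The inner/outer split can be sketched as below; the class, the stand-in client exception, and the simulated failure are hypothetical, not the patch's code.

```java
import java.io.IOException;

public class InnerOuterSketch {

    // Stand-in for the SDK's AmazonClientException (a RuntimeException).
    public static class ClientRTE extends RuntimeException {
        public ClientRTE(String message) { super(message); }
    }

    /**
     * Public operation: delegates to the inner one and owns the
     * catch-and-translate, so callers only ever see IOEs.
     */
    public boolean delete(String path, boolean recursive) throws IOException {
        try {
            return innerDelete(path, recursive);
        } catch (ClientRTE e) {
            throw new IOException("delete on " + path + ": " + e.getMessage(), e);
        }
    }

    /** Inner operation: talks to the (simulated) S3 client, may throw RTEs. */
    private boolean innerDelete(String path, boolean recursive) {
        throw new ClientRTE("simulated service failure");
    }
}
```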

It's hard to test these codepaths without fault injection or knowledge of specific buckets which
don't exist, files you can't read or write, etc. We could get away with that against AWS S3, but
such tests wouldn't work against other endpoints. What I have done is one test of
# create an 8k file
# seek to near the end
# overwrite with a 4K file
# seek to 6K
# attempt a read(), expect -1
# attempt a readFully at 5K, expect EOF exception
# attempt a read(byte[]), expect -1

This shows that the logic for catching the situation of an InputStream whose underlying file has
been shortened works reliably everywhere; I also check that deletion results in {{FileNotFoundException}}
being passed up
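The core of the shortened-file scenario can be reproduced with plain java.io, standing in for the Hadoop stream API (the truncation below simulates the 4K overwrite; this is a demonstration of the semantics, not the actual test in the patch):

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;

public class ShortenedFileDemo {

    /**
     * Returns the result of a single read() issued after the file was
     * shortened underneath an already-open handle; -1 signals EOF.
     */
    public static int readAfterShorten() throws IOException {
        File f = File.createTempFile("shorten", ".bin");
        f.deleteOnExit();
        try (RandomAccessFile raf = new RandomAccessFile(f, "rw")) {
            raf.setLength(8 * 1024);       // 1. create an 8K file
            raf.seek(8 * 1024 - 100);      // 2. seek to near the end
            raf.setLength(4 * 1024);       // 3. "overwrite" with a 4K file (simulated by truncation)
            raf.seek(6 * 1024);            // 4. seek to 6K, past the new EOF
            return raf.read();             // 5. read() past EOF returns -1
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(readAfterShorten()); // prints -1
    }
}
```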

> s3a failures can surface as RTEs, not IOEs
> ------------------------------------------
>
>                 Key: HADOOP-13130
>                 URL: https://issues.apache.org/jira/browse/HADOOP-13130
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 2.7.2
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>         Attachments: HADOOP-13130-001.patch
>
>
> S3A failures happening in the AWS library surface as {{AmazonClientException}} derivatives,
rather than IOEs. As the amazon exceptions are runtime exceptions, any code which catches
IOEs for error handling breaks.
> The fix will be to catch and wrap. The hard thing will be to wrap it with meaningful
exceptions rather than a generic IOE. Furthermore, if anyone has been catching AWS exceptions,
they are going to be disappointed. That means that fixing this situation could be considered
"incompatible" —but only for code which contains assumptions about the underlying FS and
the exceptions they raise.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org

