hadoop-common-issues mailing list archives

From "Steve Loughran (JIRA)" <j...@apache.org>
Subject [jira] [Resolved] (HADOOP-9577) Actual data loss using s3n (against US Standard region)
Date Fri, 16 Jan 2015 13:39:35 GMT

     [ https://issues.apache.org/jira/browse/HADOOP-9577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]

Steve Loughran resolved HADOOP-9577.
------------------------------------
    Resolution: Won't Fix

I'm going to close this as something we don't currently plan to fix in the Hadoop core codebase,
given that Netflix's S3mper and EMR itself both offer a solution: a consistent metadata store
backed by Amazon DynamoDB.

The other way to get guaranteed create consistency is "don't use US East". US Standard offers no
consistency guarantees, whereas every other region offers read-after-write consistency for newly
created objects, but not for updates or deletes.
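The failure mode being closed here can be sketched without any Hadoop dependencies. Below is a minimal, hypothetical simulation (the filesystem class and paths are invented for illustration, not Hadoop or AWS APIs) of how an eventually-consistent exists() check silently cancels a task commit:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for an eventually-consistent store such as
// S3 US Standard: a successful write is not immediately visible to exists().
class EventuallyConsistentFs {
    private final Map<String, String> visible = new HashMap<>();
    private final Map<String, String> pending = new HashMap<>();

    void write(String path, String data) { pending.put(path, data); }

    // Visibility lags until the store "settles".
    boolean exists(String path) { return visible.containsKey(path); }

    void settle() { visible.putAll(pending); pending.clear(); }
}

public class CommitRace {
    // Mirrors the logic the report describes: commit only if the
    // temporary output appears to exist.
    static boolean needsTaskCommit(EventuallyConsistentFs fs, String tmp) {
        return fs.exists(tmp);
    }

    public static void main(String[] args) {
        EventuallyConsistentFs fs = new EventuallyConsistentFs();
        fs.write("/tmp/attempt_0001/part-00000", "records");

        // The task wrote its output, but exists() has not caught up yet, so
        // the framework concludes there is nothing to commit: silent data loss.
        System.out.println(needsTaskCommit(fs, "/tmp/attempt_0001/part-00000"));

        fs.settle();  // minutes later the object becomes visible, too late
        System.out.println(needsTaskCommit(fs, "/tmp/attempt_0001/part-00000"));
    }
}
```

A DynamoDB-backed metadata store sidesteps this by recording each write in a strongly consistent table, so the commit decision never depends on S3 listing visibility.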

> Actual data loss using s3n (against US Standard region)
> -------------------------------------------------------
>
>                 Key: HADOOP-9577
>                 URL: https://issues.apache.org/jira/browse/HADOOP-9577
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs/s3
>    Affects Versions: 1.0.3
>            Reporter: Joshua Caplan
>            Priority: Critical
>
>  The implementation of needsTaskCommit() assumes that the FileSystem used for writing
temporary outputs is consistent.  That happens not to be the case when using the S3 native
filesystem in the US Standard region.  It is actually quite common in larger jobs for the
exists() call to return false even if the task attempt wrote output minutes earlier, which
essentially cancels the commit operation with no error.  That's real life data loss right
there, folks.
> The saddest part is that the Hadoop APIs do not seem to provide any legitimate means
for the various RecordWriters to communicate with the OutputCommitter.  In my projects I have
created a static map of semaphores keyed by TaskAttemptID, which all my custom RecordWriters
have to be aware of.  That's pretty lame.
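The workaround the reporter describes, a static map of semaphores keyed by task attempt ID acting as a side channel between RecordWriters and the OutputCommitter, might look roughly like the following. This is an illustrative sketch, not the reporter's actual code; the class and method names (WriterRegistry, markOutputWritten, outputWasWritten) are invented, and the key is a plain String rather than Hadoop's TaskAttemptID type:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Semaphore;

// Hypothetical side channel: RecordWriters record successful writes here,
// and the committer consults it instead of trusting fs.exists() on S3.
public class WriterRegistry {
    private static final Map<String, Semaphore> WRITES = new ConcurrentHashMap<>();

    // Called by each custom RecordWriter after it successfully writes output.
    public static void markOutputWritten(String taskAttemptId) {
        WRITES.computeIfAbsent(taskAttemptId, id -> new Semaphore(0)).release();
    }

    // Called from needsTaskCommit(): commit iff at least one writer
    // recorded a successful write for this attempt.
    public static boolean outputWasWritten(String taskAttemptId) {
        Semaphore s = WRITES.get(taskAttemptId);
        return s != null && s.tryAcquire();
    }

    public static void main(String[] args) {
        markOutputWritten("attempt_201505_0001_m_000000_0");
        System.out.println(outputWasWritten("attempt_201505_0001_m_000000_0"));
        System.out.println(outputWasWritten("attempt_201505_0001_m_000001_0"));
    }
}
```

The static map works only because writer and committer run in the same task JVM; as the reporter notes, every custom RecordWriter has to know about it, which is exactly the coupling the Hadoop API fails to provide a clean channel for.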



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
