accumulo-notifications mailing list archives

From "Eric Newton (JIRA)" <j...@apache.org>
Subject [jira] [Comment Edited] (ACCUMULO-1053) continuous ingest detected data loss
Date Fri, 15 Feb 2013 18:27:13 GMT

    [ https://issues.apache.org/jira/browse/ACCUMULO-1053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13579380#comment-13579380 ]

Eric Newton edited comment on ACCUMULO-1053 at 2/15/13 6:25 PM:
----------------------------------------------------------------

It looks like recoverLease is an async operation. You request it, and sometime later, it
finishes. From the javadoc:

"Start the lease recovery of a file"

"@return true if the file is already closed"

HDFS unit tests do this:

{noformat}
while (!fs.recoverLease(path)) {
   Thread.sleep(5000);
}
{noformat}

I've updated my workspace with this approach to wait for the file to be closed.  In my initial
tests, this seems to provide the necessary wait for the commit of the last block to the file.
Interestingly, HBase does not do this, but has a hard-coded one-second sleep after the recovery
(I'm looking at 0.94.4).
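A bounded version of that polling loop might look like the sketch below. The BooleanSupplier stands in for fs.recoverLease(path), since a real call needs a live HDFS cluster; the helper name, timeout, and poll interval are illustrative, not anything Accumulo or HDFS actually ships:

```java
import java.util.concurrent.TimeUnit;
import java.util.function.BooleanSupplier;

public class LeaseWait {
    // Poll an async "is it done?" check until it returns true or a deadline
    // passes. DistributedFileSystem.recoverLease has this shape: it returns
    // true only once the file is closed, so callers must loop on it.
    static boolean waitUntilClosed(BooleanSupplier recovered, long timeoutMs, long pollMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (!recovered.getAsBoolean()) {
            if (System.currentTimeMillis() >= deadline) {
                return false; // gave up; the caller decides how to handle it
            }
            TimeUnit.MILLISECONDS.sleep(pollMs);
        }
        return true;
    }

    public static void main(String[] args) throws InterruptedException {
        // Simulate a lease recovery that completes on the third poll.
        int[] calls = {0};
        boolean closed = waitUntilClosed(() -> ++calls[0] >= 3, 1000, 10);
        System.out.println("closed=" + closed + " polls=" + calls[0]);
    }
}
```

Bounding the wait avoids hanging forever if the namenode never closes the file, at the cost of having to pick a timeout, which is presumably why both the bare loop above and HBase's fixed sleep punt on it.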

> continuous ingest detected data loss
> ------------------------------------
>
>                 Key: ACCUMULO-1053
>                 URL: https://issues.apache.org/jira/browse/ACCUMULO-1053
>             Project: Accumulo
>          Issue Type: Bug
>          Components: test, tserver
>            Reporter: Eric Newton
>            Assignee: Eric Newton
>            Priority: Critical
>             Fix For: 1.5.0
>
>
> Now that we're logging directly to HDFS, we added datanodes to the agitator. That is, we
> are now killing datanodes during ingest, and now we are losing data.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
