accumulo-notifications mailing list archives

From "Keith Turner (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (ACCUMULO-813) clear block caches on IOException
Date Thu, 27 Dec 2012 20:36:12 GMT

    [ https://issues.apache.org/jira/browse/ACCUMULO-813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13540136#comment-13540136
] 

Keith Turner commented on ACCUMULO-813:
---------------------------------------

Was this related to data with a bad visibility?  If so, ACCUMULO-360 may help avoid the situation.
  ACCUMULO-918 and ACCUMULO-844 will help users deal with this situation should it occur.

Handling issues that arise higher in the iterator stack is tricky, but not impossible.  For
a corrupt rfile, where the problem is detected in the rfile code, handling the situation should
be more straightforward.  I want to understand the issue you ran into more before working on
it.
                
> clear block caches on IOException
> ---------------------------------
>
>                 Key: ACCUMULO-813
>                 URL: https://issues.apache.org/jira/browse/ACCUMULO-813
>             Project: Accumulo
>          Issue Type: Improvement
>          Components: tserver
>            Reporter: Eric Newton
>            Assignee: Keith Turner
>            Priority: Blocker
>             Fix For: 1.5.0
>
>
> A user generated a bulk import file with illegal data.  After re-generating the file,
> they thought they could just move the file into HDFS with the new name.  Unfortunately, the
> block cache remembered some of the data, which caused the data at the block boundaries to
> be corrupt.
> One possible solution is to clear the block cache when an IOException occurs on a read.
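
The proposed improvement could be sketched roughly as follows. This is a simplified illustration, not Accumulo's actual BlockCache or rfile code: the `BlockCache`, `CachingReader`, key format, and `evictFile` method are all hypothetical stand-ins, assuming cached blocks are keyed by filename plus block offset.

```java
import java.io.IOException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical block cache keyed by "filename:blockOffset".
class BlockCache {
    private final Map<String, byte[]> blocks = new ConcurrentHashMap<>();

    byte[] get(String key) { return blocks.get(key); }
    void put(String key, byte[] data) { blocks.put(key, data); }

    // Drop every cached block belonging to the given file, so a
    // replaced file is re-read from HDFS rather than served stale.
    void evictFile(String filename) {
        blocks.keySet().removeIf(k -> k.startsWith(filename + ":"));
    }
}

class CachingReader {
    private final BlockCache cache;

    CachingReader(BlockCache cache) { this.cache = cache; }

    byte[] readBlock(String filename, long offset) throws IOException {
        String key = filename + ":" + offset;
        byte[] cached = cache.get(key);
        if (cached != null) return cached;
        try {
            byte[] data = readFromFileSystem(filename, offset);
            cache.put(key, data);
            return data;
        } catch (IOException e) {
            // The suggested improvement: on a read error, assume any
            // cached blocks for this file may be stale and evict them.
            cache.evictFile(filename);
            throw e;
        }
    }

    // Stand-in for the real HDFS read; throws to simulate a bad file.
    byte[] readFromFileSystem(String filename, long offset) throws IOException {
        throw new IOException("simulated read failure for " + filename);
    }
}
```

Note that eviction on IOException only helps once a read actually fails; it would not, by itself, catch the original bug where stale cached blocks were silently mixed with fresh data from the replaced file.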

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
