accumulo-notifications mailing list archives

From "Keith Turner (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (ACCUMULO-4391) Source deepcopies cannot be used safely in separate threads in tserver
Date Thu, 08 Sep 2016 18:39:20 GMT

    [ https://issues.apache.org/jira/browse/ACCUMULO-4391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15474665#comment-15474665 ]

Keith Turner commented on ACCUMULO-4391:
----------------------------------------

[~ivan.bella] I have been looking at the pull request for a bit, and I am wondering if you
have a sense of what specifically is causing the problem. I have read back over the comments
to try to determine this. Do you think one thread is closing an RFile while other threads
are reading from deep copies of that RFile? If so, do you think the threads reading from
deep copies of the closed RFile end up using decompressors that were already returned to
the pool, so that multiple threads use the same decompressor?
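For concreteness, here is a hypothetical sketch of the race I am describing. The class and
the pool are invented (this is not Accumulo or Hadoop code); the point is only that releasing
a pooled, stateful object while another thread still holds a reference lets two threads end
up sharing the same instance:

    import java.util.ArrayDeque;
    import java.util.Deque;

    // Hypothetical illustration of the suspected race; all names are invented.
    class DecompressorRaceSketch {
        // stand-in for the codec's shared decompressor pool
        static final Deque<Object> POOL = new ArrayDeque<>();

        static Object checkout() {
            synchronized (POOL) {
                Object d = POOL.poll();
                return d != null ? d : new Object(); // a new "decompressor"
            }
        }

        static void release(Object d) {
            synchronized (POOL) {
                POOL.push(d);
            }
        }

        public static void main(String[] args) {
            Object d = checkout();       // a deep copy is still reading with d ...
            release(d);                  // ... when close() returns d to the pool
            Object d2 = checkout();      // another thread now checks d out again
            System.out.println(d == d2); // true: two threads share one stateful
                                         // decompressor
        }
    }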

> Source deepcopies cannot be used safely in separate threads in tserver
> ----------------------------------------------------------------------
>
>                 Key: ACCUMULO-4391
>                 URL: https://issues.apache.org/jira/browse/ACCUMULO-4391
>             Project: Accumulo
>          Issue Type: Bug
>          Components: core
>    Affects Versions: 1.6.5
>            Reporter: Ivan Bella
>            Assignee: Ivan Bella
>             Fix For: 1.6.6, 1.7.3, 1.8.1, 2.0.0
>
>   Original Estimate: 24h
>          Time Spent: 10.5h
>  Remaining Estimate: 13.5h
>
> We have iterators that create deep copies of the source and use them in separate threads.
> As it turns out this is not safe, and we end up with many exceptions, mostly down in the
> ZlibDecompressor library. Curiously, if you turn on the data cache for the table being
> scanned, the errors disappear.
> After much hunting, it turns out that the real bug is in the BoundedRangeFileInputStream.
> The read() method therein appropriately synchronizes on the underlying FSDataInputStream;
> however, the available() method does not. Adding similar synchronization on that stream
> fixes the issue. On a side note, the available() call is only invoked within the hadoop
> CompressionInputStream for use in the getPos() call. That call does not appear to actually
> be used, at least in this context.
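
For reference, a minimal sketch of the fix described in the report above -- not the actual
patch. A plain InputStream stands in for the shared FSDataInputStream, and the field names
(in, pos, end) are assumptions; the point is that available() takes the same lock that
read() already holds:

    import java.io.IOException;
    import java.io.InputStream;

    // Minimal sketch of the described fix; not the real
    // BoundedRangeFileInputStream. A plain InputStream stands in for the
    // shared FSDataInputStream, and the field names are assumptions.
    class BoundedRangeStreamSketch extends InputStream {
        private final InputStream in; // shared underlying stream
        private long pos;             // current offset within the range
        private final long end;       // exclusive end offset of the range

        BoundedRangeStreamSketch(InputStream in, long start, long end) {
            this.in = in;
            this.pos = start;
            this.end = end;
        }

        @Override
        public int read() throws IOException {
            synchronized (in) { // read() already locked the shared stream
                if (pos >= end) {
                    return -1;
                }
                int b = in.read();
                if (b >= 0) {
                    pos++;
                }
                return b;
            }
        }

        @Override
        public int available() {
            synchronized (in) { // the fix: take the same lock here as well
                return (int) Math.min(Integer.MAX_VALUE, end - pos);
            }
        }
    }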



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
