hadoop-hdfs-issues mailing list archives

From "Todd Lipcon (JIRA)" <j...@apache.org>
Subject [jira] Updated: (HDFS-1529) Incorrect handling of interrupts in waitForAckedSeqno can cause deadlock
Date Tue, 07 Dec 2010 02:19:09 GMT

     [ https://issues.apache.org/jira/browse/HDFS-1529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Todd Lipcon updated HDFS-1529:
------------------------------

    Attachment: hdfs-1529.txt

Fixed test cases that were missing a finally { cluster.shutdown() } block.
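The cleanup pattern the patch adds can be sketched as follows. This is a hypothetical stand-in (the real tests use Hadoop's MiniDFSCluster; FakeCluster here is an invented stub so the sketch is self-contained): the finally block guarantees the cluster is shut down even when the test body throws.

```java
// Hypothetical sketch of the try/finally cleanup pattern described above.
// FakeCluster is a stand-in for Hadoop's MiniDFSCluster test harness.
public class ShutdownInFinallyDemo {
    static class FakeCluster {
        void shutdown() {
            System.out.println("cluster shut down");
        }
    }

    public static void main(String[] args) {
        FakeCluster cluster = new FakeCluster();
        try {
            // Simulate a test body that fails partway through.
            throw new RuntimeException("test assertion failed");
        } catch (RuntimeException expected) {
            System.out.println("caught: " + expected.getMessage());
        } finally {
            // Runs whether or not the test body threw, so the cluster
            // never leaks into subsequent test cases.
            cluster.shutdown();
        }
    }
}
```

Without the finally block, a failing test would leave the cluster running and could cause later tests in the same JVM to fail spuriously.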

> Incorrect handling of interrupts in waitForAckedSeqno can cause deadlock
> ------------------------------------------------------------------------
>
>                 Key: HDFS-1529
>                 URL: https://issues.apache.org/jira/browse/HDFS-1529
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs client
>    Affects Versions: 0.22.0
>            Reporter: Todd Lipcon
>            Assignee: Todd Lipcon
>            Priority: Blocker
>         Attachments: hdfs-1529.txt, hdfs-1529.txt, Test.java
>
>
> In HDFS-895 the handling of interrupts during hflush/close was changed to preserve interrupt
> status. This ends up creating an infinite loop in waitForAckedSeqno if the waiting thread
> gets interrupted, since Object.wait() has the surprising semantics that it does not give up
> the lock even momentarily if the thread is already in the interrupted state at the beginning
> of the call.
> We should decide what the correct behavior is here: if a thread is interrupted while it is
> calling hflush() or close(), should we (a) throw an exception, perhaps InterruptedIOException,
> (b) ignore the interrupt, or (c) wait for the flush to finish but preserve interrupt status
> on exit?
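The mechanism behind the loop can be demonstrated in isolation. The sketch below is not the DFSOutputStream code, just a minimal illustration of the Object.wait() behavior described above: if the calling thread's interrupt status is already set, wait() throws InterruptedException immediately rather than blocking, so a retry loop that re-asserts interrupt status (HDFS-895-style handling) before calling wait() again will spin forever.

```java
// Minimal demonstration of the Object.wait() semantics described in this issue.
public class InterruptedWaitDemo {
    public static void main(String[] args) {
        Object lock = new Object();
        // Set interrupt status up front, as if a previous wait() had been
        // interrupted and the handler re-asserted the flag.
        Thread.currentThread().interrupt();
        synchronized (lock) {
            try {
                // With interrupt status already set, this throws immediately
                // instead of blocking (the throw also clears the flag).
                lock.wait();
                System.out.println("waited normally");
            } catch (InterruptedException ie) {
                System.out.println("InterruptedException thrown immediately");
                // HDFS-895-style handling: preserve interrupt status. In a
                // while-loop around wait() this makes the next iteration
                // throw again at once -- the infinite loop in
                // waitForAckedSeqno.
                Thread.currentThread().interrupt();
            }
        }
        System.out.println("interrupt status preserved: "
                + Thread.currentThread().isInterrupted());
    }
}
```

Wrapping the wait() in a retry loop, as the ack-waiting code effectively does, turns this immediate-throw behavior into a busy loop that never releases the waiting thread.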

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

