hadoop-hdfs-issues mailing list archives

From "James Clampffer (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HDFS-10311) libhdfs++: DatanodeConnection::Cancel should not delete the underlying socket
Date Thu, 21 Apr 2016 18:05:25 GMT

     [ https://issues.apache.org/jira/browse/HDFS-10311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

James Clampffer updated HDFS-10311:
    Attachment: HDFS-10311.HDFS-8707.002.patch

New patch addressing [~bobhansen]'s comments:
- got rid of the extra is_open check
- return e.what()
- don't hold the lock before invoking event hooks

> libhdfs++: DatanodeConnection::Cancel should not delete the underlying socket
> -----------------------------------------------------------------------------
>                 Key: HDFS-10311
>                 URL: https://issues.apache.org/jira/browse/HDFS-10311
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: hdfs-client
>            Reporter: James Clampffer
>            Assignee: James Clampffer
>         Attachments: HDFS-10311.HDFS-8707.000.patch, HDFS-10311.HDFS-8707.001.patch, HDFS-10311.HDFS-8707.002.patch
> DataNodeConnectionImpl calls reset on the unique_ptr that references the underlying asio::tcp::socket.
> If this happens after the continuation pipeline checks the cancel state but before asio uses
> the socket, it will segfault because unique_ptr::reset will explicitly change its value to
> nullptr.
> Cancel should only call shutdown() and close() on the socket but keep the instance of
> it alive.  The socket can probably also be turned into a member of DataNodeConnectionImpl
> to get rid of the unique pointer and simplify things a bit.

This message was sent by Atlassian JIRA
