hadoop-hdfs-issues mailing list archives

From "James Clampffer (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-9699) libhdfs++: Add appropriate catch blocks for ASIO operations that throw
Date Tue, 01 Mar 2016 15:40:18 GMT

    [ https://issues.apache.org/jira/browse/HDFS-9699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15173914#comment-15173914 ]

James Clampffer commented on HDFS-9699:
---------------------------------------

bq. Perhaps we should put it into a loop and respond to exceptions by re-entering the stack
so we won't lose the worker thread. I'll put that together in another patch.

Are you talking about doing something like:
{code}
while (!fs_shutdown) {
  try {
    my_io_service.run();
  } catch (const std::exception &e) {
    // log e.what() and loop back into run() so the worker thread survives
  }
}
{code}

If so, I think that's a solid approach as long as it logs a lot.
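
For reference, a minimal sketch of what the catch-and-log body could look like, assuming standalone asio and plain std::cerr standing in for whatever logger we actually wire up; the names here are placeholders, not the patch itself:

{code}
#include <asio.hpp>
#include <atomic>
#include <iostream>

// Keep the worker thread alive across exceptions escaping run(), logging
// enough detail to reconstruct what happened before re-entering the loop.
void io_worker(asio::io_service &my_io_service, std::atomic<bool> &fs_shutdown) {
  while (!fs_shutdown) {
    try {
      my_io_service.run();
    } catch (const asio::system_error &e) {
      std::cerr << "asio worker caught system_error: " << e.what()
                << " (code " << e.code().value() << "); re-entering run()\n";
    } catch (const std::exception &e) {
      std::cerr << "asio worker caught exception: " << e.what()
                << "; re-entering run()\n";
    }
  }
}
{code}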

> libhdfs++: Add appropriate catch blocks for ASIO operations that throw
> ----------------------------------------------------------------------
>
>                 Key: HDFS-9699
>                 URL: https://issues.apache.org/jira/browse/HDFS-9699
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: hdfs-client
>            Reporter: James Clampffer
>            Assignee: James Clampffer
>         Attachments: HDFS-6966.HDFS-8707.000.patch, HDFS-9699.HDFS-8707.001.patch, cancel_backtrace.txt
>
>
> libhdfs++ doesn't create exceptions of its own, but it should be able to gracefully handle
> exceptions thrown by libraries it uses, particularly asio.
> libhdfs++ should be able to catch most exceptions within reason, either at the call site
> or in the code that spins up asio worker threads.  Certain system exceptions like std::bad_alloc
> don't need to be caught because by that point the process is likely in an unrecoverable state.
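
To illustrate the call-site option mentioned in the description, a hypothetical sketch (standalone asio; the function name and error handling are placeholders, not the actual libhdfs++ code):

{code}
#include <asio.hpp>
#include <iostream>
#include <string>

// Catch the recoverable asio exception where the call is made and turn it
// into an ordinary error return; std::bad_alloc is deliberately left
// uncaught since the process is likely unrecoverable by that point.
bool resolve_endpoint(asio::io_service &io, const std::string &host,
                      const std::string &port) {
  try {
    asio::ip::tcp::resolver resolver(io);
    asio::ip::tcp::resolver::query query(host, port);
    resolver.resolve(query);  // throws asio::system_error on failure
    return true;
  } catch (const asio::system_error &e) {
    std::cerr << "resolve failed: " << e.what() << "\n";
    return false;
  }
}
{code}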



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
