hadoop-common-dev mailing list archives

From "Robert Chansler (JIRA)" <j...@apache.org>
Subject [jira] Updated: (HADOOP-2062) Standardize long-running, daemon-like, threads in hadoop daemons
Date Tue, 25 Mar 2008 03:03:26 GMT

     [ https://issues.apache.org/jira/browse/HADOOP-2062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Robert Chansler updated HADOOP-2062:
------------------------------------

    Fix Version/s:     (was: 0.17.0)

> Standardize long-running, daemon-like, threads in hadoop daemons
> ----------------------------------------------------------------
>
>                 Key: HADOOP-2062
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2062
>             Project: Hadoop Core
>          Issue Type: Improvement
>          Components: dfs, mapred
>            Reporter: Arun C Murthy
>            Assignee: Arun C Murthy
>
> There are several long-running, independent threads in hadoop daemons (at least in the
> JobTracker - e.g. ExpireLaunchingTasks, ExpireTrackers, TaskCommitQueue etc.) which need
> to stay alive as long as the daemon itself and hence should be impervious to errors and
> exceptions (e.g. HADOOP-2051).
> Currently, each of them seems to be hand-crafted (again, specifically in the JobTracker)
> and different from the others.
> I propose we standardize on an implementation of a long-running, impervious daemon thread
> which can be used all over the shop. Such a thread should be explicitly shut down by the
> hadoop daemon and shouldn't be vulnerable to any exceptions/errors.
> This will most likely look something like this:
> {noformat}
> import org.apache.commons.logging.Log;
> import org.apache.commons.logging.LogFactory;
> import org.apache.hadoop.util.StringUtils;
>
> public abstract class DaemonThread extends Thread {
>   public static final Log LOG = LogFactory.getLog(DaemonThread.class);
>   {
>     setDaemon(true);                              // always a daemon
>   }
>
>   /** One pass of the thread's work; called repeatedly until interrupted. */
>   public abstract void innerLoop() throws InterruptedException;
>
>   public final void run() {
>     while (!isInterrupted()) {
>       try {
>         innerLoop();
>       } catch (InterruptedException ie) {
>         LOG.warn(getName() + " interrupted, exiting...");
>         return;                                   // catching the exception clears the
>                                                   // interrupt flag, so exit explicitly
>       } catch (Throwable t) {
>         LOG.error(getName() + " got an exception: " +
>                   StringUtils.stringifyException(t));
>       }
>     }
>   }
> }
> {noformat}
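> As a rough usage sketch (the class and thread names here are illustrative, not existing
> JobTracker code), an expiry-style thread could then be written as:
> {noformat}
> public class ExpireTrackersThread extends DaemonThread {
>   private final long expiryInterval;              // sleep time between scans, in ms
>
>   public ExpireTrackersThread(long expiryInterval) {
>     this.expiryInterval = expiryInterval;
>     setName("expireTrackers");
>   }
>
>   public void innerLoop() throws InterruptedException {
>     Thread.sleep(expiryInterval);
>     // scan for lost trackers here; a RuntimeException thrown by the scan is
>     // logged by DaemonThread.run() instead of killing the thread
>   }
> }
> {noformat}
> The owning daemon would stop such a thread with a plain interrupt() during shutdown.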
> In fact, we could probably hijack org.apache.hadoop.util.Daemon since it isn't used anywhere
> (Doug, is it still used in Nutch?) or at least subclass it.
> Thoughts? Could someone from hdfs/hbase chime in?

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

