hadoop-common-issues mailing list archives

From "Todd Lipcon (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-5640) Allow ServicePlugins to hook callbacks into key service events
Date Thu, 20 Aug 2009 18:25:14 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-5640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12745543#action_12745543 ]

Todd Lipcon commented on HADOOP-5640:

Hey Sanjay,

It's been a while, but I think the issue was this:

- In the Thrift/HDFS RPC stuff, both the DN and the NN have plugins. The DN exposes operations
that let the client read chunks out of blocks, and the NN of course exposes the metadata ops
(getBlockLocations, etc.).
- The trick is that we don't want to force the DN thrift plugins to bind to a hardcoded port.
So, by default the DN plugin binds to an ephemeral port, and once the server is up and listening,
it passes along the actual port to the NN Plugin via Thrift. Thus, the NN Plugin maintains
a list of DatanodeID -> DN Plugin Thrift Port and can give out the thrift port to clients
in response to getBlockLocations.
- This is all well and good, since that registration happens via a Thrift call from the DN to
the NN when the DN plugin starts. The issue comes up when the NN restarts while the DN stays up.
In that case, the DN service loop automatically re-registers itself with the new NN, but the
DN plugin has no idea that it has to re-register. Hence the DatanodeID->ThriftPort map
in the NN Plugin is empty and has no way of repopulating itself.
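
The NN-side bookkeeping described above can be sketched roughly like this (the class and method names here are hypothetical, for illustration only, not the actual Thrift plugin API):

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the NN plugin's DatanodeID -> Thrift port map.
public class NamenodePluginSketch {
  // DatanodeID (represented here as a plain host:port String) mapped to
  // the ephemeral Thrift port the DN plugin reported after binding.
  private final Map<String, Integer> dnThriftPorts = new ConcurrentHashMap<>();

  // Invoked when a DN plugin registers the ephemeral port it bound to.
  public void registerDatanodePort(String datanodeId, int thriftPort) {
    dnThriftPorts.put(datanodeId, thriftPort);
  }

  // Used when answering getBlockLocations-style queries: resolve each
  // datanode holding a replica to its plugin's Thrift port.
  // Returns null if the datanode's plugin never registered (the exact
  // failure mode after a NN restart, before any re-registration hook).
  public Integer thriftPortFor(String datanodeId) {
    return dnThriftPorts.get(datanodeId);
  }
}
{code}

When the NN restarts, a fresh instance of this map starts out empty, which is the problem the registration hook below solves.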

The only solutions here were (a) to add a hook on registration so the DN Plugin can reregister,
or (b) add a second heartbeat from the DN Plugin to the NN Plugin. We decided that the heartbeats
added unnecessary complexity as well as extra load on the NN, so went with the hook solution.
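
Solution (a) can be illustrated with a minimal sketch; the interface and method names below are assumptions for illustration, not the committed HADOOP-5640 API:

{code:java}
// Hypothetical sketch of the registration hook: the DN service loop fires
// the callback on every successful DN->NN registration, including the
// automatic re-registration that follows a NameNode restart.
public class RegistrationHookSketch {
  interface DataNodePlugin {
    void namenodeRegistered(String namenodeAddress);
  }

  static class ThriftDnPlugin implements DataNodePlugin {
    private final int boundThriftPort;
    int registrations = 0;  // exposed so the usage example can observe calls

    ThriftDnPlugin(int boundThriftPort) {
      this.boundThriftPort = boundThriftPort;
    }

    @Override
    public void namenodeRegistered(String namenodeAddress) {
      // Re-send our ephemeral Thrift port so the (possibly freshly
      // restarted) NN plugin can rebuild its DatanodeID -> port map.
      registrations++;
      System.out.println("re-registering port " + boundThriftPort
          + " with " + namenodeAddress);
    }
  }
}
{code}

Because the hook fires on every registration rather than only the first, the NN plugin's map repopulates itself after a restart without any extra heartbeat traffic.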

> Allow ServicePlugins to hook callbacks into key service events
> --------------------------------------------------------------
>                 Key: HADOOP-5640
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5640
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: util
>            Reporter: Todd Lipcon
>            Assignee: Todd Lipcon
>         Attachments: hadoop-5640.txt, hadoop-5640.txt, HADOOP-5640.v2.txt, hadoop-5640.v3.txt
> HADOOP-5257 added the ability for NameNode and DataNode to start and stop ServicePlugin
> implementations at NN/DN start/stop. However, this is insufficient integration for some common
> use cases.
> We should add some functionality for Plugins to subscribe to events generated by the
> service they're plugging into. Some potential hook points are:
> NameNode:
>   - new datanode registered
>   - datanode has died
>   - exception caught
>   - etc?
> DataNode:
>   - startup
>   - initial registration with NN complete (this is important for HADOOP-4707 to sync
> up datanode.dnRegistration.name with the NN-side registration)
>   - namenode reconnect
>   - some block transfer hooks?
>   - exception caught
> I see two potential routes for implementation:
> 1) We make an enum for the types of hookpoints and have a general function in the ServicePlugin
> interface. Something like:
> {code:java}
> enum HookPoint {
>  ...
> }
> void runHook(HookPoint hp, Object value);
> {code}
> 2) We make classes specific to each "pluggable" as was originally suggested in HADOOP-5257.
> Something like:
> {code:java}
> class DataNodePlugin {
>   void datanodeStarted() {}
>   void receivedNewBlock(block info, etc) {}
>   void caughtException(Exception e) {}
>   ...
> }
> {code}
> I personally prefer option (2) since we can ensure plugin API compatibility at compile-time,
> and we avoid an ugly switch statement in a runHook() function.
> Interested to hear what people's thoughts are here.

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
