hadoop-common-dev mailing list archives

From "Todd Lipcon (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-4383) Add standard interface/methods for all services to query IPC and HTTP addresses and ports
Date Tue, 26 May 2009 23:19:45 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-4383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12713322#action_12713322 ]

Todd Lipcon commented on HADOOP-4383:

Rather than adding interfaces to the specific daemons, I'd like to propose going the other
direction and factoring this into a ServiceLocator interface. This would provide the traditional
"name service" role which many distributed systems implicitly assume exists.

When a daemon opens up a server, it would register the endpoint (IP(s), port, protocol type) with
the ServiceLocator. When a daemon (or client) wants to connect to a specific endpoint, it
queries the ServiceLocator to find it. The initial implementation of this interface would simply
perform lookups in Configuration, maintaining the status quo, but I foresee a lot of other very
useful potential implementations:
 - The J2EE-ish solution (I'm not a big J2EE guy, but I think JMS or JNDI are the appropriate
TLAs here?)
 - ZooKeeper
 - mdns (aka zeroconf)
 - RFC 2136 dynamic DNS updates
 - Organization-specific service locators (e.g. SmartFrog)
 - Amazon Elastic IP (e.g. automatically attach an Elastic IP to the NN when the NN boots)
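The core of the proposal could look something like the following minimal Java sketch. All names here (ServiceLocator, register, lookup, InMemoryServiceLocator) are hypothetical and not from any patch; a trivial in-memory map stands in for the Configuration-backed default described above:

```java
import java.net.InetSocketAddress;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the proposed interface; illustrative names only.
interface ServiceLocator {
    /** Called by a daemon when it opens up a server. */
    void register(String service, String protocol, InetSocketAddress endpoint);

    /** Called by a daemon or client that wants to connect to an endpoint. */
    InetSocketAddress lookup(String service, String protocol);
}

// Trivial in-memory implementation; the real default would instead perform
// lookups in Configuration, maintaining the status quo.
class InMemoryServiceLocator implements ServiceLocator {
    private final Map<String, InetSocketAddress> endpoints = new ConcurrentHashMap<>();

    @Override
    public void register(String service, String protocol, InetSocketAddress endpoint) {
        endpoints.put(service + "/" + protocol, endpoint);
    }

    @Override
    public InetSocketAddress lookup(String service, String protocol) {
        return endpoints.get(service + "/" + protocol);
    }
}
```

A ZooKeeper- or DNS-backed implementation would only need to swap out the map for the corresponding registration and lookup calls.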

Making this nicely pluggable through contrib jars would allow flexibility while keeping core
clean.
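One way to get that pluggability with stock Java is java.util.ServiceLoader, which discovers implementations advertised in the META-INF/services entries of whatever contrib jars are on the classpath. A sketch, with a hypothetical locator interface declared inline so the example is self-contained:

```java
import java.net.InetSocketAddress;
import java.util.ServiceLoader;

// Hypothetical locator interface (illustrative names only).
interface ServiceLocator {
    void register(String service, String protocol, InetSocketAddress endpoint);
    InetSocketAddress lookup(String service, String protocol);
}

class ServiceLocatorFactory {
    // Returns the first implementation a contrib jar advertises via a
    // META-INF/services provider-configuration file, or null if none is on
    // the classpath. A real factory would fall back to the
    // Configuration-backed default instead of returning null.
    static ServiceLocator load() {
        for (ServiceLocator impl : ServiceLoader.load(ServiceLocator.class)) {
            return impl;
        }
        return null;
    }
}
```

Dropping a contrib jar on the classpath would then be the only step needed to switch, say, from Configuration lookups to ZooKeeper.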

This should serve several goals in parallel:
  - Factors out common code regarding "bind address" configurations: wildcard addresses, localhost
vs wildcard vs external IPs, etc.
  - Reduces the reliance on "writing back" into Conf objects at service start time, which
I think most people would agree is a somewhat dirty practice.
  - Provides the pluggable methods we'll need later if we look into automatic failover of the masters
  - Provides better integration with external systems already in use in various organizations
(e.g. SmartFrog, Thrift-based service directories, etc.)
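On the first goal, the common code in question is small but easy to get subtly wrong. A sketch of the kind of helper that could be factored out (the class and method names are made up): a server bound to the wildcard address can't register that address as-is, because clients need a concrete address to connect to.

```java
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.UnknownHostException;

class BindAddressUtil {
    // Turn a wildcard bind address (0.0.0.0) into a connectable address by
    // substituting the local hostname's address, keeping the bound port.
    static InetSocketAddress toConnectAddress(InetSocketAddress bound) {
        if (bound.getAddress() != null && bound.getAddress().isAnyLocalAddress()) {
            try {
                return new InetSocketAddress(InetAddress.getLocalHost(), bound.getPort());
            } catch (UnknownHostException e) {
                return bound;  // local hostname unresolvable; leave as-is
            }
        }
        return bound;  // already a concrete address
    }
}
```

Every daemon currently reimplements some variant of this before writing its address back into its Conf; registering through a single locator would let it live in exactly one place.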

> Add standard interface/methods for all services to query IPC and HTTP addresses and ports
> -----------------------------------------------------------------------------------------
>                 Key: HADOOP-4383
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4383
>             Project: Hadoop Core
>          Issue Type: Improvement
>          Components: dfs, mapred
>    Affects Versions: 0.20.0
>            Reporter: Steve Loughran
>            Priority: Minor
> This is something I've ended up doing in subclasses of all the services: methods to get
> at the IPC and HTTP ports and addresses. Some services have exported methods for this (JobTracker),
> others package-private member variables (namenode), while others don't expose the data at
> all (Datanode keeps the http server private).
> A uniform way to query any service for its live port and address values would make some aspects
> of service management much easier, such as feeding those values into http page monitoring

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
