hadoop-common-dev mailing list archives

From "George Porter (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-4049) Cross-system causal tracing within Hadoop
Date Wed, 10 Sep 2008 23:54:44 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-4049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12630038#action_12630038 ]

George Porter commented on HADOOP-4049:

The following are some notes on creating a pluggable path-based tracing framework.  They are
the result of conversations among Ari, Andy, George, Rodrigo, Owen, Mac, Arun, and others.
This will hopefully serve as a starting point for further discussion.

Path-based tracing consists of two different operations: propagation and instrumentation.
Propagation is responsible for keeping "path state" needed to reconstruct the event graph
flowing along the datapath. Path state must be maintained within a single thread or JVM, and
communicated across network protocols. You can think of path state as a small set of bytes
that follow a given operation such as a DFS write or an RPC call through each of the machines
involved in that call. Instrumentation is responsible for creating events. These events can
make use of the path state, and can also modify that path state. Instrumentation points are
called at key places in the code, such as when an RPC client is about to invoke a call across
the network, or when that call is received by the server.

At first, there will be three abstract instrumentation classes:
* HDFSInstrumentation
* IPCInstrumentation
* MapReduceInstrumentation

Specific path-based tracing frameworks will create concrete subclasses (e.g., XTraceIPCInstrumentation).
There would be one abstract path state class, PathState.  This state is very small, consisting
of a type, a length, and a few bytes. Concrete subclasses can impose semantics on the bytes.
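To make the shape of that state concrete, here is a minimal sketch of what such a PathState container could look like. Only the name PathState and the type/length/bytes layout come from the notes above; the fields, method names, and the [type][length][bytes] wire encoding are illustrative assumptions, not part of the proposal.

```java
// Hypothetical sketch of the proposed PathState: a type tag, a length, and a
// small opaque byte payload whose semantics are defined by concrete subclasses
// (e.g., an X-Trace task/operation ID).
abstract class PathState {
    private final byte type;      // identifies the concrete tracing framework
    private final byte[] payload; // small opaque metadata carried along the path

    protected PathState(byte type, byte[] payload) {
        this.type = type;
        this.payload = payload.clone();
    }

    public byte getType() { return type; }

    public int getLength() { return payload.length; }

    public byte[] getPayload() { return payload.clone(); }

    /** Assumed encoding: serialize as [type][length][bytes] so the state can
      * be appended to a wire protocol such as the IPC header. */
    public byte[] toBytes() {
        byte[] out = new byte[2 + payload.length];
        out[0] = type;
        out[1] = (byte) payload.length;
        System.arraycopy(payload, 0, out, 2, payload.length);
        return out;
    }
}
```

Keeping the payload opaque at this layer is what makes the framework pluggable: Hadoop only moves the bytes, and each tracing framework's subclasses decide what they mean.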

A proposal for the IPC instrumentation abstract class is:

public abstract class IPCInstrumentation {
    /** Called when the client initiates an RPC call with invocation 'i' */
    public abstract void clientStartCall(Invocation i);

    /** Called when the server receives the RPC call, before it begins processing */
    public abstract void serverReceiveCall();

    /** Called when the server has finished processing, and is about to return
      * the result 'retvalue' */
    public abstract void serverSendResponse(Writable retvalue);

    /** Called when the client receives the response from the server */
    public abstract void clientReceiveResponse();

    /** Called when the RPC invocation 'i' throws an exception 't' on the server */
    public abstract void remoteException(Invocation i, Throwable t);

    /** Called when a failure occurs reaching the server (e.g., a network failure);
      * 'i' is the invocation that failed, and 't' is the failure exception */
    public abstract void ipcInfrastructureFailure(Invocation i, Throwable t);
}

In terms of propagation, the IPCInstrumentation class will have a reference to a PathState
object and, if that reference is non-null, will include it in the IPC protocol. The IPCInstrumentation
methods can get, set, and modify that PathState object.
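As a rough illustration of how a client might drive these hooks around an RPC, here is a self-contained sketch. The IPCInstrumentation class and its two client-side method names come from the proposal above (trimmed to those hooks for brevity); Invocation's shape, the Writable stand-in, the NullInstrumentation class, and the call flow are all assumptions for the sake of the example.

```java
// Illustrative only: wiring the proposed client-side hooks into a call path.
interface Writable {}                 // stand-in for org.apache.hadoop.io.Writable
class Invocation {                    // assumed minimal shape of an RPC invocation
    final String method;
    Invocation(String method) { this.method = method; }
}

abstract class IPCInstrumentation {
    public abstract void clientStartCall(Invocation i);
    public abstract void clientReceiveResponse();
    // remaining hooks from the proposal elided for brevity
}

/** Hypothetical no-op implementation, so tracing can be disabled cheaply. */
class NullInstrumentation extends IPCInstrumentation {
    public void clientStartCall(Invocation i) {}
    public void clientReceiveResponse() {}
}

class Client {
    private final IPCInstrumentation inst;
    Client(IPCInstrumentation inst) { this.inst = inst; }

    Writable call(Invocation i) {
        inst.clientStartCall(i);      // hook: fires before the request leaves the client
        Writable result = doNetworkCall(i);
        inst.clientReceiveResponse(); // hook: fires after the response arrives
        return result;
    }

    // placeholder for the actual wire call, which would also carry any PathState bytes
    private Writable doNetworkCall(Invocation i) { return new Writable() {}; }
}
```

A concrete framework such as an XTraceIPCInstrumentation subclass would implement the hooks to emit events and to read or update the PathState it carries, while the no-op implementation keeps the instrumented call path free of null checks.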

> Cross-system causal tracing within Hadoop
> -----------------------------------------
>                 Key: HADOOP-4049
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4049
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: dfs, ipc, mapred
>            Reporter: George Porter
>         Attachments: multiblockread.png, multiblockwrite.png
> Much of Hadoop's behavior is client-driven, with clients responsible for contacting individual
datanodes to read and write data, as well as dividing up work for map and reduce tasks.  In
a large deployment with many concurrent users, identifying the effects of individual clients
on the infrastructure is a challenge.  The use of data pipelining in HDFS and Map/Reduce makes
it hard to follow the effects of a given client request through the system.
> This proposal is to instrument the HDFS, IPC, and Map/Reduce layers of Hadoop with X-Trace.
 X-Trace is an open-source framework for capturing causality of events in a distributed system.
 It can correlate operations making up a single user request, even if those operations span
multiple machines.  As an example, you could use X-Trace to follow an HDFS write operation
as it is pipelined through intermediate nodes.  Additionally, you could trace a single Map/Reduce
job and see how it is decomposed into lower-layer HDFS operations.
> Matei Zaharia and Andy Konwinski initially integrated X-Trace with a local copy of the
0.14 release, and I've brought that code up to release 0.17.  Performing the integration involves
modifying the IPC protocol, inter-datanode protocol, and some data structures in the map/reduce
layer to include 20 bytes of tracing metadata.  With release 0.18, the generated traces could
be collected with Chukwa.
> I've attached some example traces of HDFS and IPC layers from the 0.17 patch to this
JIRA issue.
> More information about X-Trace is available from http://www.x-trace.net/ as well as in
a paper that appeared at NSDI 2007, available online at http://www.usenix.org/events/nsdi07/tech/fonseca.html

