hadoop-common-dev mailing list archives

From "Ari Rabkin (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-4049) Cross-system causal tracing within Hadoop
Date Mon, 15 Sep 2008 05:53:45 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-4049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12630941#action_12630941 ]

Ari Rabkin commented on HADOOP-4049:
------------------------------------

George:  Looks good, and thanks so much for doing this.  A few thoughts, if I may kibitz instead
of coding:

- You have raw byte[] arrays sprinkled around, e.g., in Server.  Can we hide these behind
abstract classes? Something like "RPCCallInstrumentationState" (rough sketch below).
- Why do we need the thread-local stuff in IPCInstrumentation?  Couldn't it be pushed down
to the concrete XTraceIPCInstrumentation?
- Hadoop's I/O libraries have utility functions for serializing variable-length ints
(hadoop.io.WritableUtils.writeVInt and readVInt).  I think Owen is pushing for using them
for the serialized length field we send with RPC optional fields; the sketch below uses them
for exactly that.
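
To make the first and third points concrete, here is a rough sketch of the kind of wrapper
I have in mind.  The class name comes from my suggestion above and nothing like it exists in
the current patch; it just shows raw tracing bytes hidden behind one type, serialized with
the existing WritableUtils VInt helpers:

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

import org.apache.hadoop.io.WritableUtils;

/**
 * Sketch only: an opaque holder for the per-call tracing bytes, so that
 * Server and Client never handle raw byte[] directly.
 */
public class RPCCallInstrumentationState {
  private final byte[] metadata;   // e.g. the 20-byte X-Trace metadata

  public RPCCallInstrumentationState(byte[] metadata) {
    this.metadata = metadata;
  }

  /** Serialize as a VInt length followed by the raw bytes. */
  public void write(DataOutputStream out) throws IOException {
    WritableUtils.writeVInt(out, metadata.length);
    out.write(metadata);
  }

  /** Read back a state object written by write(). */
  public static RPCCallInstrumentationState read(DataInputStream in)
      throws IOException {
    int len = WritableUtils.readVInt(in);
    byte[] buf = new byte[len];
    in.readFully(buf);
    return new RPCCallInstrumentationState(buf);
  }
}

One nice side effect of the VInt length: when there is no tracing metadata to send, the
optional field collapses to a single zero byte on the wire.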

> Cross-system causal tracing within Hadoop
> -----------------------------------------
>
>                 Key: HADOOP-4049
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4049
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: dfs, ipc, mapred
>            Reporter: George Porter
>         Attachments: HADOOP-4049.patch, multiblockread.png, multiblockwrite.png
>
>
> Much of Hadoop's behavior is client-driven, with clients responsible for contacting individual
datanodes to read and write data, as well as dividing up work for map and reduce tasks.  In
a large deployment with many concurrent users, identifying the effects of individual clients
on the infrastructure is a challenge.  The use of data pipelining in HDFS and Map/Reduce makes
it hard to follow the effects of a given client request through the system.
> This proposal is to instrument the HDFS, IPC, and Map/Reduce layers of Hadoop with X-Trace.
 X-Trace is an open-source framework for capturing causality of events in a distributed system.
 It can correlate operations making up a single user request, even if those operations span
multiple machines.  As an example, you could use X-Trace to follow an HDFS write operation
as it is pipelined through intermediate nodes.  Additionally, you could trace a single Map/Reduce
job and see how it is decomposed into lower-layer HDFS operations.
> Matei Zaharia and Andy Konwinski initially integrated X-Trace with a local copy of the
0.14 release, and I've brought that code up to release 0.17.  Performing the integration involves
modifying the IPC protocol, inter-datanode protocol, and some data structures in the map/reduce
layer to include 20 bytes of tracing metadata.  With release 0.18, the generated traces could
be collected with Chukwa.
> I've attached some example traces of HDFS and IPC layers from the 0.17 patch to this
JIRA issue.
> More information about X-Trace is available from http://www.x-trace.net/ as well as in
a paper that appeared at NSDI 2007, available online at http://www.usenix.org/events/nsdi07/tech/fonseca.html

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

