From Greg Brandt <brandt.g...@gmail.com>
Subject Helix IPC update
Date Mon, 11 Aug 2014 17:33:19 GMT
We've fleshed out the idea for Helix IPC a little more (HELIX-470), and the results are fairly promising.

The prototype is Netty-based, and our benchmark was able to generate ~1Gbps of traffic (on a network with a 1 GigE switch) with ~1KB messages.

The API is simplified somewhat from the original ticket. The idea is that we
just provide the most basic transport layer possible to allow for maximum
flexibility at the application layer.

One can send messages and register callbacks using a HelixIPCService:

import io.netty.buffer.ByteBuf;
import java.util.Set;
import java.util.UUID;

public interface HelixIPCService {
    // Send a message to a set of resolved physical destinations.
    void send(Set<HelixAddress> destinations, int messageType, UUID messageId, ByteBuf message);
    // Register a handler for all messages of a given type.
    void registerCallback(int messageType, HelixIPCCallback callback);
}

public interface HelixIPCCallback {
    void onMessage(HelixMessageScope scope, UUID messageId, ByteBuf message);
}

A HelixResolver is used to generate the HelixAddress(es) that correspond to a HelixMessageScope (i.e. physical machine addresses):

public interface HelixResolver {
    Set<HelixAddress> getDestinations(HelixMessageScope scope);
    HelixAddress getSource(HelixMessageScope scope);
}
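
Putting the two together, a send might look roughly like this (again a sketch: Unpooled comes from io.netty:netty-buffer, and the scope is assumed to have been constructed already):

void replicate(HelixIPCService ipcService, HelixResolver resolver,
               HelixMessageScope scope, byte[] payload) {
    // Resolve the logical scope into physical machine addresses.
    Set<HelixAddress> destinations = resolver.getDestinations(scope);
    // Unpooled.wrappedBuffer wraps the array without copying it.
    ByteBuf message = Unpooled.wrappedBuffer(payload);
    ipcService.send(destinations, MSG_REPLICATE, UUID.randomUUID(), message);
}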

And a HelixMessageScope contains the cluster, resource, partition, state, and sender information that fully describes the message's path in the cluster.
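
If that lands as a builder-style API, constructing a scope might look something like this (purely illustrative; nothing in this message specifies the construction API, so the builder and field names below are assumptions):

// Hypothetical builder; the actual construction API isn't shown here.
HelixMessageScope scope = new HelixMessageScope.Builder()
    .cluster("MY_CLUSTER")
    .resource("MyDB")
    .partition("MyDB_0")
    .state("SLAVE")
    .build();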

One thing to note is that Netty's ByteBuf is part of the API here. It was chosen specifically to avoid memory copies when doing things like data replication. Also, it's available via io.netty:netty-buffer, so using it doesn't mean pulling in all of Netty's dependencies. It's preferable to java.nio.ByteBuffer because it supports buffer pooling and has a much richer API.
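
To make the pooling and zero-copy points concrete, here's a small sketch against Netty's public ByteBuf API (io.netty.buffer; independent of anything Helix-specific):

// Pooled allocation avoids GC churn under high message throughput.
ByteBuf buf = PooledByteBufAllocator.DEFAULT.buffer(1024);
buf.writeBytes(payload);

// slice() shares the underlying memory rather than copying it, which is
// what lets the same payload fan out to several destinations cheaply.
ByteBuf view = buf.slice();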

-Greg
