incubator-kafka-dev mailing list archives

From Taylor Gautier <>
Subject Re: Kafka is live in prod @ 100%
Date Tue, 06 Dec 2011 17:14:51 GMT
Sure, I can update the wiki - it already has this case listed, but I
can add the details.

No - we do not delete topics online. We run a tier of siloed Kafka
instances with our own sharding layered on top.  With no ZooKeeper, we
found that we can bounce a Kafka instance in under 30 seconds.

In general we do not treat our pub/sub messages as guaranteed
delivery, so for the moment anyway we don't have to worry about losing
some messages while bouncing a host - that said, in practice we do not
lose messages anyway.

We deliver messages via a custom UDP-to-Kafka relay.

This relay accepts UDP messages from our PHP tier, buffers them, and
sends them to Kafka.

The advantage of this tier is that PHP cannot be stalled by any
latency we might have in the Kafka tier: PHP just fires off a UDP
packet and moves on.

And since the relay buffers messages and automatically reconnects to
hosts in the Kafka tier, in practice we see a small delay in delivery
if we bounce a host, but we do not lose the message.

There are obvious improvements all around that we can make to our
solution - it's just at V1.  But as I have said before, we are happy
with Kafka overall and cannot wait to start layering more features on.
On Dec 6, 2011, at 8:45 AM, Jun Rao <> wrote:

> Hi, Taylor,
> Thanks for the update. This is great. Could you update your usage in Kafka
> wiki? Also, do you delete topics online? If so, how do you do that?
> Jun
> On Tue, Dec 6, 2011 at 8:30 AM, Taylor Gautier <> wrote:
>> I've already mentioned this before, but I wanted to give a quick shout to
>> let you guys know that our newest game, Deckadence, is 100% live as of
>> yesterday.
>> Check it out at
>> A little about our use case:
>>  - Deckadence is a game of buying and selling - or rather trading -
>>  cards.  Every user on Tagged owns a card.  There are 100M users on Tagged,
>>  so that means there are 100M cards to trade.
>>  - Kafka enables real-time delivery of events in the game
>>  - An end-user browser makes a long-poll HTTP event connection to receive
>>  1:1 messages and 1:M messages from a specialized HTTP server we built for
>>  this purpose.  1:M messages are delivered from Kafka.
>>  - Because of this design, we can publish a message anywhere inside our
>>  datacenter and send it directly and immediately to any other system
>>  that is subscribed to Kafka, or to an end-user browser
>>  - Every update event for every card is sent to a unique topic that
>>  represents the user's card.
>>  - When a user is browsing any card or list of cards - say a search
>>  result - their browser subscribes to all of the cards on screen.
>>  - The effect of this is that any change to any card seen on-screen is
>>  seen in real time by all users of the game
>>  - Our primary producers and consumers are PHP and NodeJS, respectively
>> Well, I plan to write up more about this use case in the near future.  As
>> you might have guessed, this is just about as far away from the original
>> intent of Kafka as you could get - we have PHP that sends messages to
>> Kafka.  Since it's not good to hold a TCP connection open in PHP, we had to
>> do some trickery here.  There was no existing Node client so we had to
>> write our own.  And since there are 100 million users registered on Tagged,
>> that means we could have in theory 100M topics.  Of course in practice we
>> have far fewer than that.  One of the main things we currently have to do
>> is aggressively clean topics.  But basically we have many topics, few
>>  messages (relatively) per topic.  And order matters, so we had to make
>>  sure we could handle the number of topics we would create while still
>>  guaranteeing ordered delivery and receipt.
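The "aggressively clean topics" bookkeeping might look roughly like this; `TopicJanitor` and its TTL policy are illustrative assumptions, not Tagged's tooling (and, per the reply above, any actual topic deletion happens offline, not while the broker is live):

```python
import time

class TopicJanitor:
    """Tracks the last-publish time per topic and reports which topics
    are idle past a TTL, so they can be aggressively cleaned up.  The
    cleanup step itself is left abstract."""

    def __init__(self, ttl_seconds, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock          # injectable for testing
        self.last_seen = {}         # topic -> last publish timestamp

    def record_publish(self, topic):
        self.last_seen[topic] = self.clock()

    def idle_topics(self):
        # Topics whose last publish is older than the TTL.
        now = self.clock()
        return [t for t, ts in self.last_seen.items() if now - ts > self.ttl]
```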
>> In the future I have big plans for Kafka, another feature is currently in
>> private test and will be released to the public soon (it uses Kafka in a
>> more traditional way).  And we hope to have many more in 2012...
