kafka-dev mailing list archives

From Jeyhun Karimov <je.kari...@gmail.com>
Subject Re: Handling 2 to 3 Million Events before Kafka
Date Wed, 21 Jun 2017 13:20:28 GMT

With Kafka you can increase overall throughput by adding more nodes to the
cluster.
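To make that concrete, here is a back-of-envelope sizing sketch for the 5 GB/sec target mentioned below. The ~100 MB/s sustained write throughput per broker and the replication factor of 3 are illustrative assumptions, not figures from this thread; measure your own hardware before trusting them.

```java
/** Rough broker-count estimate for a target ingest rate.
 *  All inputs are assumptions to be replaced with measured values. */
public class CapacityEstimator {

    /** targetMBps: aggregate producer rate; perBrokerMBps: assumed sustained
     *  sequential-write throughput of one broker; replicationFactor multiplies
     *  the total bytes the cluster must physically write. */
    public static int brokersNeeded(long targetMBps, long perBrokerMBps,
                                    int replicationFactor) {
        long totalWriteMBps = targetMBps * replicationFactor;
        return (int) Math.ceil((double) totalWriteMBps / perBrokerMBps);
    }

    public static void main(String[] args) {
        // 5 GB/s target (from the thread), assuming ~100 MB/s per broker
        // and replication factor 3.
        System.out.println(brokersNeeded(5L * 1024, 100, 3)); // prints 154
    }
}
```

The point of the exercise is that disk I/O, not CPU, usually sets the per-broker ceiling, which is why adding nodes (and partitions) is the standard way to scale Kafka writes.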
I had a similar issue, where we needed to ingest vast amounts of data into a
streaming system.
In our case, Kafka was a bottleneck because of disk I/O. To solve it, we
implemented a (simple) distributed pub-sub system in C that keeps the data in
memory. You should also take into account your network bandwidth and the
(upper-bound) capacity of your processing engine or HTTP server.
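The core idea of such an in-memory pub-sub layer can be sketched in a few lines. This is a hypothetical single-process illustration (in Java rather than the C system described above): topics fan messages out to subscriber callbacks directly from memory, so no disk write sits on the hot path.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

/** Minimal in-memory pub-sub sketch: data is never persisted, so
 *  throughput is bounded by memory bandwidth and the network,
 *  at the cost of losing messages on a crash. */
public class InMemoryPubSub {
    // topic name -> subscriber callbacks; CopyOnWriteArrayList makes
    // iteration during publish safe against concurrent subscribes.
    private final Map<String, List<Consumer<byte[]>>> topics =
            new ConcurrentHashMap<>();

    public void subscribe(String topic, Consumer<byte[]> handler) {
        topics.computeIfAbsent(topic, t -> new CopyOnWriteArrayList<>())
              .add(handler);
    }

    public void publish(String topic, byte[] message) {
        // Fan out synchronously to every subscriber of the topic.
        for (Consumer<byte[]> h : topics.getOrDefault(topic, List.of())) {
            h.accept(message);
        }
    }
}
```

The trade-off versus Kafka is durability: keeping everything in memory removes the disk I/O bottleneck but also removes replayability and crash recovery, so it only fits workloads that can tolerate loss.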


On Wed, Jun 21, 2017 at 2:58 PM SenthilKumar K <senthilec566@gmail.com> wrote:

> Hi Team ,   Sorry if this question is irrelevant to the Kafka group ...
> I have been trying to solve the problem of handling 5 GB/sec ingestion. Kafka
> is a really good candidate for us to handle this ingestion rate ..
> 100K machines ----> { Http Server (Jetty/Netty) } --> Kafka Cluster..
> I see the problem in the Http Server, where it can't handle beyond 50K events
> per instance ..  I'm thinking some other solution would be the right choice
> before Kafka ..
> Has anyone worked on a similar use case at a similar load ? Suggestions/Thoughts ?
> --Senthil

