storm-user mailing list archives

From Aaron Niskodé-Dossett <>
Subject Re: Increasing worker parallelism decreases throughput and increases tuple timeout
Date Tue, 06 Sep 2016 13:39:56 GMT
Hi Kris,

One possibility is that the Serializer isn't actually caching the schema
<-> id mappings and is hitting the schema registry every time.  The call to
register() in getFingerprint() [1] in particular can be finicky, since the
cache is ultimately an IDENTITY hash map, not a regular old hash map [2].
I'm familiar with the Avro deserializer you're using and thought it
accounted for this, but perhaps not.
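To illustrate the identity-map pitfall: a minimal, self-contained sketch (the class and method names are mine, and plain Strings stand in for two separately parsed but structurally identical Avro Schema objects). An IdentityHashMap compares keys with ==, so an equal-but-distinct key misses the cache and the expensive lookup runs again.

```java
import java.util.HashMap;
import java.util.IdentityHashMap;
import java.util.Map;

public class IdentityCacheDemo {
    // Two equal-but-distinct key objects, standing in for two
    // separately parsed, structurally identical schemas.
    static final String KEY_A = new String("{\"type\":\"string\"}");
    static final String KEY_B = new String("{\"type\":\"string\"}");

    static boolean identityMapHits() {
        Map<String, Long> cache = new IdentityHashMap<>();
        cache.put(KEY_A, 1L);
        // IdentityHashMap compares keys with ==, so the
        // equal-but-distinct KEY_B misses the cache.
        return cache.containsKey(KEY_B);
    }

    static boolean hashMapHits() {
        Map<String, Long> cache = new HashMap<>();
        cache.put(KEY_A, 1L);
        // HashMap uses equals()/hashCode(), so KEY_B hits.
        return cache.containsKey(KEY_B);
    }

    public static void main(String[] args) {
        System.out.println("IdentityHashMap hit: " + identityMapHits());
        System.out.println("HashMap hit: " + hashMapHits());
    }
}
```

In the serializer's case that means caching only works if the very same Schema instance is passed in on every call; a fresh-but-equal instance per tuple defeats the cache entirely.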

You could add timing information to the getFingerprint() and getSchema()
calls in ConfluentAvroSerializer.  If the results indicate cache misses,
that's probably your culprit.
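Something along these lines for the timing (a rough sketch, not the real serializer code: slowRegistryCall() is a hypothetical stand-in for the registry round-trip that getFingerprint()/getSchema() would make on a cache miss):

```java
public class TimedLookup {
    // Counts simulated registry round-trips; a working cache should
    // keep this at 1 no matter how many tuples are processed.
    static long registryCalls = 0;

    // Hypothetical stand-in for the remote schema-registry lookup.
    static long slowRegistryCall() {
        registryCalls++;
        return 1L; // pretend fingerprint/id
    }

    // Wrap the lookup with System.nanoTime(): a cache hit should take
    // microseconds, a registry round-trip milliseconds or more.
    static long timedFingerprint() {
        long start = System.nanoTime();
        long fingerprint = slowRegistryCall();
        long micros = (System.nanoTime() - start) / 1_000;
        System.out.println("getFingerprint: " + micros + " us");
        return fingerprint;
    }

    public static void main(String[] args) {
        for (int i = 0; i < 3; i++) {
            timedFingerprint();
        }
        // If every call pays the full round-trip cost, the cache
        // is being missed on every tuple.
        System.out.println("registry calls: " + registryCalls);
    }
}
```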

Best, Aaron


On Tue, Sep 6, 2016 at 7:40 AM Kristopher Kane <> wrote:

> Hi everyone.
> I have a simple topology that uses the Avro serializer (
> and writes to Elasticsearch.
> The topology is like this:
> Kafka (raw scheme) -> Avro deserializer -> Elasticsearch
> This topology runs well with one worker; however, once I add one more
> worker (two total) and change nothing else, the topology throughput
> drops and tuples start timing out.
> I've attached visualvm/jstatd to the workers when in multi worker mode -
> and added some jmx configs to the worker opts - but I am unable to see
> anything glaring.
> I've never seen Storm act this way, but I have also never worked with a
> custom serializer, so I assume that it is the culprit, though I cannot
> explain why.
> Any pointers would be appreciated.
> Kris
