incubator-s4-user mailing list archives

From Matthieu Morel <mmo...@apache.org>
Subject Re: About 200,000 events/s
Date Mon, 24 Jun 2013 16:25:22 GMT
Hi,

I would suggest the following:

1/ Check how many events you can generate when creating events read from the file, without
sending them to a remote stream. This gives you the upper bound for a single adapter (producer).

2/ Check how much you can consume in the app cluster. By default the remote senders are blocking,
i.e. the adapter won't inject more than the app cluster can consume. This gives you an
upper bound for the consumer.

3/ Use more adapter processes. In the benchmarks subproject, you can configure the number
of injection processes, and you might need more than one.

4/ Make sure the tuning parameters you are setting are appropriate. For instance, I am not
sure that using 100 threads for serializing events is a good setting (see my notes about context
switching in a previous mail).
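For step 1, the producer-side upper bound can be estimated with a standalone probe like the sketch below. Note that `LineEvent` and `measureCreationRate` are hypothetical names for illustration, not S4 API: the idea is simply to build events from file lines without emitting them and report the creation-only rate.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

public class InjectionRateProbe {

    // Placeholder for the application's event type; real code would build
    // the same event objects the adapter normally sends.
    record LineEvent(String payload) {}

    // Create one event per input line WITHOUT sending it, and return
    // the achieved creation rate in events per second.
    static double measureCreationRate(Path input) throws IOException {
        List<String> lines = Files.readAllLines(input);
        long start = System.nanoTime();
        long count = 0;
        long chars = 0;
        for (String line : lines) {
            LineEvent e = new LineEvent(line); // create only, do NOT send
            chars += e.payload().length();     // touch the event so work is not optimized away
            count++;
        }
        double seconds = (System.nanoTime() - start) / 1e9;
        return count / seconds;
    }

    public static void main(String[] args) throws IOException {
        // Self-contained demo: write a small synthetic input file.
        Path tmp = Files.createTempFile("events", ".txt");
        List<String> sample = new ArrayList<>();
        for (int i = 0; i < 100_000; i++) sample.add("event-" + i);
        Files.write(tmp, sample);
        System.out.printf("creation-only rate: %.0f events/s%n", measureCreationRate(tmp));
        Files.delete(tmp);
    }
}
```

Whatever rate this prints is the most a single adapter process can possibly inject; if it is below your target, more adapter processes (step 3) are needed regardless of any consumer-side tuning.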

Also note that 200k msg/s/stream/node corresponds to the average rate over one minute _once
the cluster has reached steady state_. Indeed JVMs typically perform better after a while,
due to various kinds of dynamic optimizations. Do make sure your experiments are long enough.
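One way to see the warm-up effect is to report throughput per time window and discard the early windows before averaging. The sketch below is illustrative only (window size and the counting loop stand in for real event creation and sending):

```java
import java.util.ArrayList;
import java.util.List;

public class WarmupAwareBenchmark {

    // Measure achieved events/s for each successive time window.
    // The first windows reflect JIT warm-up and should be discarded
    // before computing the steady-state average.
    static List<Long> perWindowRates(int windows, long windowNanos) {
        List<Long> rates = new ArrayList<>();
        long start = System.nanoTime();
        long count = 0;
        while (rates.size() < windows) {
            count++; // stands in for creating and sending one event
            long now = System.nanoTime();
            if (now - start >= windowNanos) {
                rates.add(count * 1_000_000_000L / (now - start));
                count = 0;
                start = now;
            }
        }
        return rates;
    }

    public static void main(String[] args) {
        // 5 windows of 0.1 s each; a real benchmark would use longer windows.
        List<Long> rates = perWindowRates(5, 100_000_000L);
        for (int i = 0; i < rates.size(); i++)
            System.out.printf("window %d: %d events/s%n", i + 1, rates.get(i));
    }
}
```

If the per-window rates are still climbing at the end of a run, the experiment was too short to measure the steady-state rate.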

Regards,

Matthieu


On Jun 24, 2013, at 11:19 , Sky Zhao <sky.zhao@ericsson.com> wrote:

> I tried to use an Adapter to send S4 events. The metrics report shows:
> 20,10462,88.63259092217602,539.6449108859357,18.577650313690874,6.241814566462701
> 40,36006,417.83633322358764,914.1057643161282,97.55624823196746,33.40088245418529
> 60,63859,674.1012974987167,1075.2326549158463,176.33878995148274,61.646803531230724
> 80,97835,953.6282787690939,1232.2934375999696,271.48890371088254,96.56144395108957
> 100,131535,1162.2060916405578,1323.3704459079934,363.98505627735324,131.98430793014757
> 120,165282,1327.52314133145,1384.2675551261093,453.5195236495672,167.61679021575551
> 140,190776,1305.7285112621298,1368.4361242524062,504.7782182758366,191.36049732440895
>  
> 20,000 events per 20s  => 1000 EVENTS/s
>  
> Very slow. I modified S4_HOME/subprojects/s4-comm/bin/default.s4.comm.properties:
>  
> s4.comm.emitter.class=org.apache.s4.comm.tcp.TCPEmitter
> s4.comm.emitter.remote.class=org.apache.s4.comm.tcp.TCPRemoteEmitter
> s4.comm.listener.class=org.apache.s4.comm.tcp.TCPListener
>  
> # I/O channel connection timeout, when applicable (e.g. used by netty)
> s4.comm.timeout=1000
>  
> # NOTE: the following numbers should be tuned according to the application, use case, and infrastructure
>  
> # how many threads to use for the sender stage (i.e. serialization)
> #s4.sender.parallelism=1
> s4.sender.parallelism=100
> # maximum number of events in the buffer of the sender stage
> #s4.sender.workQueueSize=10000
> s4.sender.workQueueSize=100000
> # maximum sending rate from a given node, in events / s (used with throttling sender executors)
> s4.sender.maxRate=200000
>  
> # how many threads to use for the *remote* sender stage (i.e. serialization)
> #s4.remoteSender.parallelism=1
> s4.remoteSender.parallelism=100
> # maximum number of events in the buffer of the *remote* sender stage
> #s4.remoteSender.workQueueSize=10000
> s4.remoteSender.workQueueSize=100000
> # maximum *remote* sending rate from a given node, in events / s (used with throttling *remote* sender executors)
> s4.remoteSender.maxRate=200000
>  
> # maximum number of pending writes to a given comm channel
> #s4.emitter.maxPendingWrites=1000
> s4.emitter.maxPendingWrites=10000
>  
> # maximum number of events in the buffer of the processing stage
> #s4.stream.workQueueSize=10000
> s4.stream.workQueueSize=100000
>  
> This only improved the rate from 500 events/s to 1,000 events/s.
>  
> Reading the 88 MB file takes only 8 s, but sending 1,237,632 events now takes 620 s. Why is it so slow? S4 is supposed to handle 200,000 events/s; how can I get up to that rate? Please give me detailed instructions.

