mina-users mailing list archives

From Bruno de Carvalho <kindern...@gmail.com>
Subject Message processing issue with MINA
Date Fri, 24 Jul 2009 03:26:16 GMT
Hi,


Before I begin, let me just throw out a big 'thank you' to the folks
who made MINA what it is. It's truly a remarkable library, and besides
using and abusing it, I've also been recommending it to everyone I
know ;)

On to the problem: I'm having an issue with message processing times.
The test code involves a client and a server, both launched from within
the same application. Before the client floods the server with N
objects, the test start instant is saved with System.currentTimeMillis().
When the last packet is received on the server side, the elapsed time is
taken and the average time it takes a packet to get from the client to
the server (I'll refer to this as a packet's 'lifetime') is calculated
as totalTestTime/numberOfPackets.
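
For concreteness, the global measurement is along these lines (just a
sketch, not the exact code from LifetimeIssues.java; class and field
names are made up, and I'm assuming MINA 2.x package names):

    import org.apache.mina.core.service.IoHandlerAdapter;
    import org.apache.mina.core.session.IoSession;

    // Simplified server handler: counts packets and prints the global
    // average lifetime (totalTestTime / numberOfPackets) when the last
    // expected packet arrives.
    public class GlobalAverageHandler extends IoHandlerAdapter {

        private final long testStart;      // saved with System.currentTimeMillis() before the flood
        private final int expectedPackets; // N
        private int received;

        public GlobalAverageHandler(long testStart, int expectedPackets) {
            this.testStart = testStart;
            this.expectedPackets = expectedPackets;
        }

        @Override
        public void messageReceived(IoSession session, Object message) {
            if (++received == expectedPackets) {
                long totalTestTime = System.currentTimeMillis() - testStart;
                System.out.println("global average lifetime: "
                        + ((double) totalTestTime / expectedPackets) + " ms");
            }
        }
    }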

If I calculate this with another approach, which keeps each packet's
before-send instant in an array and computes, upon reception, its
individual lifetime (time from client to server), I get average values
way above the global average.
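
The individual measurement is essentially this (again an illustrative
sketch; it assumes each packet carries a sequence number, which is not
necessarily how LifetimeIssues.java does it):

    // Client stamps each packet's send instant into an array; the
    // receiver subtracts that instant on arrival to get the packet's
    // individual lifetime.
    public class PerPacketTimer {

        private final long[] sendInstant;

        public PerPacketTimer(int numberOfPackets) {
            this.sendInstant = new long[numberOfPackets];
        }

        // called by the client right before session.write(packet)
        public void markSent(int sequence) {
            sendInstant[sequence] = System.currentTimeMillis();
        }

        // called on reception of packet 'sequence' (client and server
        // share the same JVM in the single-application test)
        public long lifetimeOf(int sequence) {
            return System.currentTimeMillis() - sendInstant[sequence];
        }
    }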

In numbers: running the test multiple times, I consistently get a
global average lifetime of ~1 ms, while the individual lifetime
averages range from 40 to 80 ms.

If I introduce a sleep as short as 4-5 ms between sending each packet
from the client, the results become consistent: the global lifetime
average tends to match the individual lifetime average. So it looks as
if the server is choking on many simultaneous packets. Is there any way
around this?
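
The 'fix' amounts to nothing more than this kind of throttled send loop
(sketch only; 'session' and 'packets' stand in for whatever the real
test uses, and I'm again assuming MINA 2.x's IoSession):

    import java.util.List;
    import org.apache.mina.core.session.IoSession;

    public class ThrottledSender {

        // Writes each packet and pauses a few milliseconds in between;
        // with a 4-5 ms pause the global and individual averages line up.
        public static void sendThrottled(IoSession session, List<?> packets,
                long pauseMillis) throws InterruptedException {
            for (Object packet : packets) {
                session.write(packet);
                Thread.sleep(pauseMillis);
            }
        }
    }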

The code is ultra-simple and available at
http://bruno.factor45.org/LifetimeIssues.java in case someone wants to
see what I'm talking about (change ".java" to ".zip" for the full
project with libs, ready to run).

I thought it could be a threads-to-CPU issue, so I tested with the
client and server in two different applications (the only difference
being that the measurement is also made on the client side, with the
server mirroring the packets back). The same thing happens. I even
tried with the client and server on different machines, only to find
that it still happens.

I'm basically looking for a way to support heavy bursts without that
per-packet performance penalty. Is it possible?


Best regards,
  Bruno 

