directory-dev mailing list archives

From: Vinod Panicker <vino...@gmail.com>
Subject: Re: [mina] Performance issues
Date: Thu, 24 Mar 2005 05:20:21 GMT
Phew... lots of replies. I thought it would be better to reply in one
consolidated mail instead of "carpet bombing" the list.

Guys, please remember that the client and the server are doing
absolutely nothing apart from maintaining connected sockets, so these
numbers won't translate directly into real-life performance.

Alright... the latest stats are as follows:

Server - 30K concurrent connections on a Linux box with 512 MB RAM.
Ran out of Direct Buffer Memory (more on this later).
Client - 30K concurrent connections from a single Windows 2003 Server
box with 1 GB RAM (no problems).
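
As an aside on the direct buffer memory - on Sun's JVM the ceiling for
direct buffers can be raised independently of the heap with a flag,
e.g. (the class name and sizes below are just placeholders):

    java -Xmx256m -XX:MaxDirectMemorySize=256m com.example.BenchServer

That only postpones the problem, of course; the real fix is point 2 in
the problem list below.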

Basically the client is going full throttle with connect requests. 
Obviously we will run benchmarks with more sedate clients that
occasionally cause load spikes.
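
For the curious, the load generator is conceptually nothing more than
a connect loop that holds on to every socket it opens - roughly like
the sketch below (simplified, not the actual test code; host, port and
count are placeholders):

    import java.net.InetSocketAddress;
    import java.nio.channels.SocketChannel;
    import java.util.ArrayList;
    import java.util.List;

    public class ConnectFlood {
        public static void main(String[] args) throws Exception {
            List channels = new ArrayList();
            InetSocketAddress addr = new InetSocketAddress("server-host", 8080);
            for (int i = 0; i < 30000; i++) {
                SocketChannel ch = SocketChannel.open();
                ch.connect(addr);  // blocking connect, one after the other
                channels.add(ch);  // hold the reference so the socket stays open
            }
            System.out.println("Connected: " + channels.size());
            Thread.sleep(Long.MAX_VALUE);  // keep the connections up
        }
    }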

I agree with what Berin says - every Internet server should know its
limits.  That's where these benchmarks help: we can estimate the
maximum load under which the system can still provide the expected
performance levels.

Alex, there's still a lot more to do.  I will compile all of this into
something that Trustin can put someplace.  I'm also tinkering around
with jvmstat to obtain exact figures.  Regarding the data limits per
partition, I won't be able to help out there (I think you should be
looking at external storage for that kind of benchmark).

Alright, problem areas:

1. No backlog support is currently provided with IoProtocolAcceptor
(see the sketch after this list for what I mean by a tunable backlog).

2. When the server accepts a connection, 8K of direct buffer memory is
allocated.  I suspect this is too much.  From what I saw in the code,
this is for the read buffer.  Taking a directory server such as
ApacheDS as an example, I doubt the incoming data is ever going to
fill an 8K buffer, and since it's a direct buffer, more care should be
taken with this kind of memory.  If MINA were being used for an FTP
server, the figure would make sense.  My suggestion is to make both
the buffer size and the kind of buffer (direct or non-direct) tunable,
depending on where MINA is used (again, see the sketch after this
list).

3. Client/server CPU usage.  The number of connections per second
steadily decreases as the total number of connections grows, yet both
the client and the server sit at 100% CPU - that is what is weird.
I'm trying to isolate the problem.  I'm currently a bit suspicious of
the way the Connector works, but I won't be able to point to anything
concrete until more permutations and combinations have been tried.
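
To make points 1 and 2 concrete, here is a rough sketch at the plain
java.nio level of what I mean - the MINA-level knobs would obviously
look different, and the port, backlog and buffer values below are just
placeholders:

    import java.net.InetSocketAddress;
    import java.nio.ByteBuffer;
    import java.nio.channels.ServerSocketChannel;
    import java.nio.channels.SocketChannel;

    public class AcceptorSketch {
        public static void main(String[] args) throws Exception {
            int backlog = 1024;         // point 1: user-configurable accept backlog
            int readBufferSize = 1024;  // point 2: user-configurable read buffer size...
            boolean useDirect = false;  // ...and kind (direct vs. heap)

            ServerSocketChannel ssc = ServerSocketChannel.open();
            ssc.socket().bind(new InetSocketAddress(8080), backlog);

            while (true) {
                SocketChannel ch = ssc.accept();
                // One read buffer per connection.  For an LDAP-style protocol a
                // small heap buffer is usually plenty; an FTP data channel might
                // justify a large direct one.
                ByteBuffer readBuffer = useDirect
                        ? ByteBuffer.allocateDirect(readBufferSize)
                        : ByteBuffer.allocate(readBufferSize);
                ch.read(readBuffer);  // real code would hand this to a selector loop
            }
        }
    }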

I'm expecting that MINA should be able to take 100K connections on the
server.  A single client can't go much beyond 65K connections to one
server, since it runs out of local port numbers.

Currently I don't see any OS/kernel issues that should prevent this;
if any do come up, I'll hopefully be able to work around them.

When I ran a test for file descriptors, the Linux/FreeBSD boxes could
easily allocate up to 262144 FDs (I stopped testing at that number).
The Windows box is limited to a bit over 100K - maybe I can get around
that as well.
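
For reference, the kind of FD test I mean is nothing fancy -
conceptually just something like this (simplified; /dev/null is
obviously Unix-specific):

    import java.io.FileInputStream;
    import java.util.ArrayList;
    import java.util.List;

    public class FdTest {
        public static void main(String[] args) {
            List streams = new ArrayList();
            try {
                while (true) {
                    // each successful open consumes one file descriptor
                    streams.add(new FileInputStream("/dev/null"));
                }
            } catch (Exception e) {
                // typically "Too many open files" once the per-process limit is hit
                System.out.println("Opened " + streams.size() + " FDs before: " + e);
            }
        }
    }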

Once I hit the target of 100K concurrent connections, I'll start doing
I/O on those connections to measure throughput.

There have been benchmarks put up that claim a thread-per-connection
model is much better than NIO for throughput and performance
(http://www.theserverside.com/discussions/thread.tss?thread_id=26700),
but the point I'm trying to make is that a thread-per-connection model
can never provide the level of scalability that the NIO model can.
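
To illustrate what I mean by scalability: with NIO a single thread can
service every connected socket through one selector, so the
per-connection cost is a channel and a buffer rather than a whole
thread and its stack.  The core of such a loop looks roughly like this
(bare-bones, error handling omitted):

    import java.net.InetSocketAddress;
    import java.nio.ByteBuffer;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.Selector;
    import java.nio.channels.ServerSocketChannel;
    import java.nio.channels.SocketChannel;
    import java.util.Iterator;

    public class SelectorLoop {
        public static void main(String[] args) throws Exception {
            Selector selector = Selector.open();
            ServerSocketChannel ssc = ServerSocketChannel.open();
            ssc.configureBlocking(false);
            ssc.socket().bind(new InetSocketAddress(8080));
            ssc.register(selector, SelectionKey.OP_ACCEPT);

            ByteBuffer buf = ByteBuffer.allocate(1024);  // one scratch buffer for all reads
            while (true) {
                selector.select();  // one thread waits here for *all* connections
                Iterator it = selector.selectedKeys().iterator();
                while (it.hasNext()) {
                    SelectionKey key = (SelectionKey) it.next();
                    it.remove();
                    if (key.isAcceptable()) {
                        SocketChannel ch = ((ServerSocketChannel) key.channel()).accept();
                        if (ch != null) {
                            ch.configureBlocking(false);
                            ch.register(selector, SelectionKey.OP_READ);
                        }
                    } else if (key.isReadable()) {
                        buf.clear();
                        if (((SocketChannel) key.channel()).read(buf) < 0) {
                            key.channel().close();  // peer disconnected
                        }
                    }
                }
            }
        }
    }

A thread-per-connection server pays a full stack (often hundreds of KB
by default) and a scheduler entry for every idle connection, which is
exactly what hurts at these connection counts.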

Phew... long mail.  And since it seems all of you are in very
different time zones, more long mails are likely to follow (I'm in
IST).

Regards,
Vinod.

-- snipped * --
