directory-dev mailing list archives

From Berin Loritsch <blorit...@d-haven.org>
Subject Re: [mina] Performance issues
Date Thu, 24 Mar 2005 14:03:41 GMT
Vinod Panicker wrote:
> Guys, please remember that there is absolutely no functionality being
> undertaken by the client and the server apart from just maintaining
> connected sockets.  So in real-life performance, these numbers won't
> make much difference.

Yep.  Real workloads will make a huge difference to those numbers.

> 
> Alright... Latest stats are as follows - 
> 
> Server - 30K concurrent connections on a Linux box with 512 MB RAM. 
> Ran out of Direct Buffer Memory (more on this later)
> Client - 30K concurrent connections from a single Windows 2003 server
> box 1 GB RAM (no problems)

Ok, let me wipe the drool out of the corner of my mouth.  Trustin, you 
win!  We can probably retire the last remnants of the stuff I started way 
back in incubator days.  I'm looking at some other fish right now (one 
of them is a pretty ambitious catch), so I won't be able to finish what 
I started anyway.

> 
> Alright, problem areas - 
> 
> 1. No backlog support currently provided with IoProtocolAcceptor

This is something that the blocking sockets do for us.  Essentially it 
amounts to just not responding to a socket until we are done with 
another.  There should also be a timeout to simply kill the connection 
if it has been in the backlog too long.
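
For reference, here is a minimal sketch (plain java.net, not MINA's 
IoProtocolAcceptor API) of what the blocking model gives us: the 
backlog is just a constructor argument, and the kernel queues or 
refuses connections beyond it until accept() drains the queue.  A read 
timeout stands in for the kill-after-too-long behavior described above 
(applied after accept, since the kernel owns the backlog itself); the 
port and values are illustrative.

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class BacklogSketch {
    public static void main(String[] args) throws IOException {
        int backlog = 50;  // kernel accept-queue size; extra connects wait here
        ServerSocket server = new ServerSocket(8080, backlog);
        while (true) {
            Socket socket = server.accept();  // blocks; the kernel holds the rest
            socket.setSoTimeout(30 * 1000);   // kill clients that stall too long
            // hand the socket off to a worker thread here
        }
    }
}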

> 
> 2. When the server accepts a connection, 8K of Direct Buffer Memory is
> being allocated.  I suspect this is too much.  Depending on where MINA
> is used, this memory should be tunable.  From what I saw in the code,
> this is for the read buffer.  Taking an example of a Directory Server
> such as ApacheDS, I doubt incoming data is going to fill the 8K
> buffer.  And since it's a direct buffer, more care should be taken to use
> up this kind of memory.  If MINA is being used for an FTP server, this
> figure would make sense.  My suggestion would be to make the buffer
> size and kind of buffer (direct, non-direct) values tunable.

Agreed.  8K is two pages of memory on x86 systems.  One page should be 
plenty for most cases.  And I have a feeling that the system will 
reserve a whole page even if we are only using 1/4 of it.  I have no 
tests to prove that though.
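
As a sketch of that tuning (the class and constructor here are 
hypothetical, not MINA's actual API), both the read-buffer size and 
the kind of buffer would come from configuration instead of being 
hard-coded to an 8K direct buffer:

import java.nio.ByteBuffer;

public final class ReadBufferFactory {
    private final int size;        // e.g. 4096 for a single x86 page
    private final boolean direct;  // direct buffers avoid a copy during I/O,
                                   // but the memory pool for them is scarcer

    public ReadBufferFactory(int size, boolean direct) {
        this.size = size;
        this.direct = direct;
    }

    public ByteBuffer newReadBuffer() {
        return direct ? ByteBuffer.allocateDirect(size)
                      : ByteBuffer.allocate(size);
    }
}

A directory server could then run with small heap buffers while an FTP 
server keeps 8K direct ones.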

> 
> 3.  Client/Server CPU usage.  The number of connections per second
> steadily keeps decreasing as the total connections increase.  Client &
> Server both are at 100% CPU.  That is what is weird.  I'm aiming to
> isolate the problem.  I'm currently a bit doubtful about the way the
> Connector works, but won't be able to point out anything concrete until
> more permutations & combinations are tried.

A couple of thoughts here.  First, it is normal for the number of 
connections per second to decrease as the total connections increase. 
The key is how quickly it decreases.  Ideally we would see a smooth 
taper in connections per second instead of an abrupt drop; that means 
the system scales well.  Secondly, 100% CPU can be a symptom of how 
you are running your loops.  I have found that calling Thread.yield() 
will sometimes allow another thread to run, and sometimes not, on the 
same system.  No matter what, the processor is still pegged.  The next 
question to ask is: how often do we really need to poll?  I find that 
even forcing the loop to sleep 1 millisecond between iterations will 
greatly reduce the load on the processor.  Over the lifetime of a 
connection the loss of a millisecond is nothing, but the reduction in 
stress on the CPU is tremendous, and it frees the CPU for real work. 
The same thing happens with game loops: if you draw a new screen on 
every iteration you might get 200+ FPS, but the processor is busy 
doing work you will never truly appreciate.
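
To make that loop shape concrete, here is a sketch (not MINA's actual 
loop) of the 1 millisecond pause; note that selector.select(1), which 
blocks with a timeout, buys much the same relief:

import java.io.IOException;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.util.Iterator;

public class PollLoop {
    public static void run(Selector selector)
            throws IOException, InterruptedException {
        while (true) {
            if (selector.selectNow() > 0) {  // non-blocking poll
                Iterator<SelectionKey> it =
                        selector.selectedKeys().iterator();
                while (it.hasNext()) {
                    SelectionKey key = it.next();
                    it.remove();
                    // dispatch isReadable()/isWritable() work here
                }
            }
            Thread.sleep(1);  // 1 ms pause: big CPU relief, tiny latency cost
        }
    }
}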

> 
> I'm expecting that MINA should be able to take 100K connections on the
> server.  It can't go over 65K for the client due to lack of available
> port numbers.

That's where multiple clients come into play.  Quick question: are you 
checking the number of sustained client connections or the number of 
connection requests?

> 
> Currently there are no OS/Kernel issues that should prevent this -
> hopefully I can solve them if I hit them.
> 
> When I ran a test for file descriptors, the Linux/FreeBSD boxes could
> easily allocate up to 262144 FD's (I stopped testing at this number). 
> The Windows box gets limited to a bit over 100K - maybe I can get
> around that as well.

I'm not sure how you are handling threading, but on normal user accounts 
with Linux there is a limit of 500 threads/processes total for that user.

> 
> Once I hit the target of 100K concurrent connections, I'll be starting
> with IO on the concurrent connections to measure throughput values.
> 
> There have been benchmarks put up that say that a
> thread-per-connection model is much better than NIO for throughput and
> performance (http://www.theserverside.com/discussions/thread.tss?thread_id=26700)
> but the point I'm trying to make is that the level of scalability the
> NIO model can provide can never be provided by a thread-per-connection
> model.

Yep, and sometimes scalability is far more important than raw 
performance.  The best analogy I can come up with is that of the 
transition from traditional manufacturing to assembly line 
manufacturing.  You might be able to build one car in less time the 
traditional way, but you can't build the same number of cars per hour.
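
For contrast, a sketch of the thread-per-connection model from that 
benchmark discussion (illustrative, not the benchmark's code): each 
accepted socket gets a dedicated thread, which is fast per connection 
but runs into thread stacks and context-switch overhead long before 
the connection counts above:

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class ThreadPerConnection {
    public static void main(String[] args) throws IOException {
        ServerSocket server = new ServerSocket(8080);
        while (true) {
            final Socket socket = server.accept();
            new Thread(new Runnable() {  // one thread per client
                public void run() {
                    try {
                        // blocking read/write loop for this one client
                    } finally {
                        try { socket.close(); } catch (IOException e) { /* ignore */ }
                    }
                }
            }).start();
        }
    }
}

Each thread also costs a stack (commonly hundreds of KB by default), 
so 30K connections would mean gigabytes of stack alone; the selector 
model serves them all from a handful of threads.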

> 
> Phew.. long mail.. And since it seems that all you guys are in a way
> different time zone, more long mails are likely to come by (I'm in
> IST).

Thanks for the update.  It's very encouraging.
