thrift-dev mailing list archives

From "Rocco Corsi (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (THRIFT-4488) Performance impact of Nagle disabled
Date Mon, 14 Jan 2019 16:10:00 GMT

    [ https://issues.apache.org/jira/browse/THRIFT-4488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16742251#comment-16742251 ]

Rocco Corsi commented on THRIFT-4488:
-------------------------------------

Sorry for not responding to your earlier post.

As mentioned, the performance hit from the Nagle setting may or may not be noticeable; even in my testing, performance was sometimes worse and sometimes better depending on the specific traffic type. It may only show up with binary traffic on the C++ implementation and might not be seen otherwise. Also, I only noticed it when looking at packets in Wireshark, and since Wireshark does not fully support the Thrift protocol, I don't know how many people have looked at this at the network level.
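
In case it helps anyone reproduce the comparison, here is a minimal client-side sketch (not from our setup; the host and port are placeholders): the C++ TSocket exposes setNoDelay(), so Nagle can be toggled per connection for an A/B test without patching the library.

{code:cpp}
#include <thrift/transport/TSocket.h>

using apache::thrift::transport::TSocket;

int main() {
  // Placeholder endpoint; substitute the real server.
  TSocket socket("thrift-server.example.com", 9090);
  socket.open();
  socket.setNoDelay(false); // false = leave Nagle's algorithm enabled
  // ... drive the generated Thrift client over this socket and measure ...
  socket.close();
  return 0;
}
{code}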

Do you have a link to what you fixed? It would be interesting to see what that is.

Do you know which version of Thrift will include it?  Once we move to that version, I could
retest.

Regarding whether we should keep the issue open: do you see any point in keeping it open? It has not gathered any responses from anyone else claiming to also be experiencing the issue. It might be something that only impacts my environment; I really don't know.

I really don't mind if you close it, or keep it open as a reminder to others who might want to look into it some day.

> Performance impact of Nagle disabled
> ------------------------------------
>
>                 Key: THRIFT-4488
>                 URL: https://issues.apache.org/jira/browse/THRIFT-4488
>             Project: Thrift
>          Issue Type: Bug
>          Components: C++ - Library
>    Affects Versions: 0.10.0
>            Reporter: Rocco Corsi
>            Priority: Major
>
> Running a SUSE 12 SP2 x86_64 C++ Thrift server that uses OpenSSL. Our Thrift service uses oneway methods exclusively, so a Java client sends requests as oneway calls and the C++ server responds with oneway method calls too.
> Noticed that the Java client's API method calls were mostly contained within one or two packets, but the C++ server's responses are being split over many packets, often with one data byte per packet. This is not really a good use of the SSL protocol. Under high load, the extra packets can exhaust the random data cache and stall the SSL library.
> As an experiment, I re-enabled Nagle's algorithm on the C++ Thrift server (modified TServerSocket.cpp) and ran tests at various load levels with various numbers of Java clients. Comparing results with Nagle disabled and enabled, the performance difference varied from -10% to +40%, with most of the results on the plus side.
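> To be concrete about what the experiment changed: the stock TServerSocket.cpp sets TCP_NODELAY (disabling Nagle), and the test flipped that option back off. A sketch of the idea (not the exact patch; the actual code location and names vary by version):
> {code:cpp}
> #include <netinet/in.h>
> #include <netinet/tcp.h>
> #include <sys/socket.h>
>
> // Sketch only: re-enable Nagle's algorithm on a socket fd.
> // flag = 0 keeps Nagle on (small writes coalesced); 1 sets TCP_NODELAY.
> static int reenableNagle(int fd) {
>   int flag = 0;
>   return setsockopt(fd, IPPROTO_TCP, TCP_NODELAY,
>                     reinterpret_cast<const char*>(&flag), sizeof(flag));
> }
> {code}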
> Additionally, I am working with a Wireshark developer on decoding Thrift traffic, and the large number of packets that need to be reassembled is causing huge headaches in programming the dissector. Hopefully he can fix that, but it seems very difficult from what he tells me.
> Our C++ Thrift server is based on TBufferedTransport and TBinaryProtocol. We briefly tried changing to TFramedTransport, but that didn't appear to make any difference, and the client no longer worked (we did try to change it to match the server; maybe we did something wrong).
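> For reference, the framed variant we attempted looked roughly like this (a sketch, with a placeholder endpoint; framing must match on both sides, which is probably where we went wrong):
> {code:cpp}
> #include <thrift/transport/TBufferTransports.h>
> #include <thrift/transport/TSocket.h>
>
> using namespace apache::thrift::transport;
>
> // Server side: swap TBufferedTransportFactory for TFramedTransportFactory.
> shared_ptr<TTransportFactory> transportFactory(new TFramedTransportFactory());
>
> // Client side must match, or the two ends cannot parse each other's frames:
> shared_ptr<TTransport> sock(new TSocket("host", 9090)); // placeholder endpoint
> shared_ptr<TTransport> transport(new TFramedTransport(sock));
> {code}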
> Is there a problem with the way we are creating our C++ Thrift server (TBuffered + TBinary)? See further below for more complete info. Shouldn't TBufferedTransport send complete API messages and prevent the large number of packets? Is TBinaryProtocol the problem?
> Would it be asking too much to give the Thrift server user the choice of enabling Nagle or not during server creation?
> or
> Is there a problem with TBufferedTransport or TBinaryProtocol, or something else we are doing wrong?
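> If the Nagle option were exposed, something as small as this would do (purely hypothetical API, not in the library today):
> {code:cpp}
> // Hypothetical: let the user opt back into Nagle at construction time.
> shared_ptr<TServerSocket> serverSocket(new TServerSocket(9090));
> serverSocket->setTcpNoDelay(false); // hypothetical setter; default stays true
> {code}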
> Thanks for your time.
> This is how we create our C++ Thrift server:
> {code:cpp}
>     // Service handler/processor factories (generated code).
>     shared_ptr<toNappIfFactory> handlerFactory(new NappServiceHandlerFactory());
>     shared_ptr<TProcessorFactory> processorFactory(new toNappProcessorFactory(handlerFactory));
>     // Buffered transport + binary protocol, as discussed above.
>     shared_ptr<TTransportFactory> transportFactory(new TBufferedTransportFactory());
>     shared_ptr<TProtocolFactory> protocolFactory(new TBinaryProtocolFactory());
>     // Fixed-size thread pool for the server.
>     shared_ptr<ThreadManager> threadManager(ThreadManager::newSimpleThreadManager(NUMBER_OF_SERVER_THREADS));
>     shared_ptr<PlatformThreadFactory> threadFactory(new PlatformThreadFactory());
>     threadManager->threadFactory(threadFactory);
>     threadManager->start();
>     // SSL server socket built by our own helper.
>     shared_ptr<TServerSocket> socket(nappServerSocketBuilder->buildSSLServerSocket(nappServerSocketBuilder->getPortNumber(), s_sslConfig));
>     shared_ptr<TServerTransport> serverTransport(socket);
>     shared_ptr<TServer> server(new TThreadPoolServer(processorFactory,
>                                                      serverTransport,
>                                                      transportFactory,
>                                                      protocolFactory,
>                                                      threadManager));
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
