ignite-dev mailing list archives

From Nikita Amelchev <nsamelc...@gmail.com>
Subject Re: Transport compression (not store compression)
Date Fri, 16 Feb 2018 10:06:18 GMT
Vladimir Ozerov, I also agree that your solution is good.

I will check this flag before adding a client to the map of clients. If one of
the nodes has the flag, the session will be marked "compressed". I will
provide a solution soon.
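As a sketch of what marking the session "compressed" could look like (all class, method, and field names here are hypothetical, not from the Ignite codebase), the rule is simply an OR over the two nodes' flags:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: mark a session "compressed" before adding the client
// to the map of clients. If at least one of the two nodes has the
// compression flag, the session is compressed. Names are illustrative only.
public class ClientSessionRegistry {
    private final Map<String, Boolean> compressedByClient = new ConcurrentHashMap<>();

    /** Registers a client connection and records whether its session is compressed. */
    public void register(String clientId, boolean localFlagEnabled, boolean remoteFlagEnabled) {
        compressedByClient.put(clientId, localFlagEnabled || remoteFlagEnabled);
    }

    /** True if the session with the given client was marked "compressed". */
    public boolean isCompressed(String clientId) {
        return compressedByClient.getOrDefault(clientId, Boolean.FALSE);
    }
}
```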

Dmitriy Setrakyan, I will implement and test the compression flag after I
write tests with real operations (put, get, etc.) on Yardstick.

2018-02-16 0:05 GMT+03:00 Dmitriy Setrakyan <dsetrakyan@apache.org>:

> Vova, I think your solution is fine, but I think we will always have some
> messages compressed and others not. For example, in many cases, especially
> when messages are relatively small, compressing them will introduce an
> unnecessary overhead, and most likely slow down the cluster.
>
> Why not have a compression flag or compression bit at the per-message level?
> We check whether the bit is set, and if it is, we decompress the message on
> the receiving side before processing it.
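A minimal sketch of such a per-message bit (the flag layout, names, and use of DEFLATE are assumptions for illustration, not the Ignite wire format): the sender sets the bit only when it actually compressed the payload, so small messages can skip compression entirely, and the receiver checks the bit before inflating.

```java
import java.io.ByteArrayOutputStream;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

// Hypothetical sketch of a per-message compression bit. The sender sets a
// bit in the message flags only when the payload was actually compressed
// (e.g. large messages); the receiver checks the bit and decompresses
// before processing. The flag layout is illustrative, not the Ignite format.
public class PerMessageCompression {
    public static final byte COMPRESSED_BIT = 0x01;

    /** Compresses a payload with DEFLATE (sender side, for large messages). */
    public static byte[] compress(byte[] raw) {
        Deflater deflater = new Deflater();
        deflater.setInput(raw);
        deflater.finish();
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[1024];
        while (!deflater.finished())
            out.write(buf, 0, deflater.deflate(buf));
        deflater.end();
        return out.toByteArray();
    }

    /** Receiver side: decompress only if the compression bit is set. */
    public static byte[] decode(byte flags, byte[] payload) throws DataFormatException {
        if ((flags & COMPRESSED_BIT) == 0)
            return payload; // small message sent as-is, no decompression overhead
        Inflater inflater = new Inflater();
        inflater.setInput(payload);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[1024];
        while (!inflater.finished())
            out.write(buf, 0, inflater.inflate(buf));
        inflater.end();
        return out.toByteArray();
    }
}
```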
>
> D.
>
> On Thu, Feb 15, 2018 at 12:24 AM, Vladimir Ozerov <vozerov@gridgain.com>
> wrote:
>
> > I think that we should not guess at how the clients are used. They could
> > be used in any way - in the same network, in another network, in Docker,
> > in a hypervisor, etc. This holds for both thin and thick clients. It is
> > essential that we design the configuration API in a way that compression
> > could be enabled for only some participants.
> >
> > What if we do this as follows:
> > 1) Define "IgniteConfiguration.compressionEnabled" flag
> > 2) When two nodes communicate and at least one of them has this flag,
> then
> > all data sent between them is compressed.
> >
> > Makes sense?
> >
> > On Thu, Feb 15, 2018 at 8:50 AM, Nikita Amelchev <nsamelchev@gmail.com>
> > wrote:
> >
> > > Hello, Igniters.
> > >
> > > I have not seen use cases where a heavy client Ignite node is placed in
> > > a much worse network than the server. I'm not sure we should encourage
> > > a bad cluster architecture.
> > >
> > > Usually, in my use cases, the servers and clients are located in the
> > > same network. And if the cluster has SSL enabled, it makes sense to
> > > enable compression even if the network is fast. It also makes sense
> > > when we have a high load on the network and the CPU is poorly utilized.
> > >
> > > I'll run Yardstick tests for real operations like get, put, etc. and
> > > for SQL requests.
> > >
> > > I propose to add configurable compression for the thin client/ODBC/JDBC
> > > as a separate issue, because it would enlarge the current PR.
> > >
> > > Even if it really makes sense to compress traffic only between client
> > > and server Ignite nodes, that should also be a separate issue, so it
> > > does not enlarge this PR - especially since this compression
> > > architecture may not be accepted by the community.
> > >
> > > 2018-02-05 13:02 GMT+03:00 Nikita Amelchev <nsamelchev@gmail.com>:
> > >
> > > > Thanks for your comments,
> > > >
> > > > I will try to separate network compression for clients and servers.
> > > >
> > > > It makes sense to enable compression on servers if we have SSL turned
> > > > on. I tested rebalancing time, and compression+SSL is faster. SSL
> > > > throughput is limited to about 800 Mbit/s per connection, and with
> > > > compression enabled it is boosted to about 1100 Mbit/s.
> > > >
> > > > 2018-02-02 18:52 GMT+03:00 Alexey Kuznetsov <akuznetsov@apache.org>:
> > > >
> > > >> I think Igor is right.
> > > >>
> > > >> Usually servers are connected via a fast local network,
> > > >> but clients could be in an external, slow network.
> > > >> In this scenario compression will be very useful.
> > > >>
> > > >> Once I had such a scenario - a client connected to the cluster via a
> > > >> 300 KB/s network and tried to transfer ~10 MB of uncompressed data,
> > > >> which took ~30 seconds. After I implemented compression, it became
> > > >> ~1 MB and transferred in ~3 seconds.
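The arithmetic behind that anecdote (assuming a ~300 KB/s link and roughly 10:1 compression, as stated above; the figures are approximate) works out as follows:

```java
// Back-of-the-envelope numbers for the scenario above: ~10 MB of raw data
// over a ~300 KB/s link versus ~1 MB after compression. Figures are
// approximate, matching the original example.
public class TransferTimeEstimate {
    /** Transfer time in seconds for a payload of the given size over the given link. */
    public static double seconds(double payloadKB, double linkKBperSec) {
        return payloadKB / linkKBperSec;
    }

    public static void main(String[] args) {
        double linkKBperSec = 300.0;
        System.out.printf("raw:        ~%.0f s%n", seconds(10 * 1024, linkKBperSec)); // ~34 s
        System.out.printf("compressed: ~%.0f s%n", seconds(1 * 1024, linkKBperSec));  // ~3 s
    }
}
```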
> > > >>
> > > >> I think we should take care of all the mentioned problems with NIO
> > > >> threads in order not to slow down the whole cluster.
> > > >>
> > > >>
> > > >> On Fri, Feb 2, 2018 at 10:05 PM, gvvinblade <gvvinblade@gmail.com>
> > > wrote:
> > > >>
> > > >> > Nikita,
> > > >> >
> > > >> > Yes, you're right. Maybe I wasn't clear enough.
> > > >> >
> > > >> > Usually server nodes are placed in the same fast network segment
> > > >> > (one datacenter); in any case we need the ability to set up
> > > >> > compression per connection using some filter like
> > > >> > useCompression(ClusterNode, ClusterNode) to compress traffic only
> > > >> > between server and client nodes.
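A per-connection filter of that shape could look like the following sketch (the ClusterNode stand-in and the client-vs-server policy are assumptions built from the signature mentioned above, not existing Ignite API):

```java
// Hypothetical per-connection filter in the spirit of
// useCompression(ClusterNode, ClusterNode): compress only those links where
// at least one endpoint is a client node, leaving fast server-to-server
// links in the same datacenter uncompressed. ClusterNode is a stand-in type.
public class CompressionConnectionFilter {
    public interface ClusterNode {
        boolean isClient();
    }

    /** True if traffic between the two endpoints should be compressed. */
    public static boolean useCompression(ClusterNode a, ClusterNode b) {
        return a.isClient() || b.isClient();
    }
}
```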
> > > >> >
> > > >> > But the issue is still there: since the same NIO worker serves
> > > >> > both client and server connections, enabling compression may
> > > >> > impact whole-cluster performance, because NIO threads will
> > > >> > compress client messages instead of processing servers' compute
> > > >> > requests. That was my concern.
> > > >> >
> > > >> > Compression for clients is a really cool feature and useful in
> > > >> > some cases. Probably it makes sense to have two NIO servers, with
> > > >> > and without compression, to process server and client requests
> > > >> > separately, or to somehow pin worker threads to client or server
> > > >> > sessions...
> > > >> >
> > > >> > We also have to think about client connections (JDBC, ODBC, .NET
> > > >> > thin client, etc.) and set up compression for them separately.
> > > >> >
> > > >> > Anyway, I would compare put, get, putAll, getAll and SQL SELECT
> > > >> > operations for strings and POJOs, with one server and several
> > > >> > clients, with and without compression, setting up the server to
> > > >> > utilize all cores with NIO workers, just to get to know the
> > > >> > possible impact.
> > > >> >
> > > >> > Possible configuration for servers with 16 cores:
> > > >> >
> > > >> > Selectors cnt = 16
> > > >> > Connections per node = 4
> > > >> >
> > > >> > Where client nodes perform operations in 16 threads
> > > >> >
> > > >> > Regards,
> > > >> > Igor
> > > >> >
> > > >> >
> > > >> >
> > > >> >
> > > >> > --
> > > >> > Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/
> > > >> >
> > > >>
> > > >>
> > > >>
> > > >> --
> > > >> Alexey Kuznetsov
> > > >>
> > > >
> > > >
> > > >
> > > > --
> > > > Best wishes,
> > > > Amelchev Nikita
> > > >
> > >
> > >
> > >
> > > --
> > > Best wishes,
> > > Amelchev Nikita
> > >
> >
>



-- 
Best wishes,
Amelchev Nikita
