ignite-dev mailing list archives

From Denis Magda <dma...@apache.org>
Subject Re: Merging all network components to a single one
Date Tue, 07 Mar 2017 19:32:03 GMT
Personally, I’m fully in favor of this idea.

BTW, there is already a ticket for this task created by Yakov some time ago:
https://issues.apache.org/jira/browse/IGNITE-3480

—
Denis

> On Mar 7, 2017, at 10:50 AM, Dmitriy Setrakyan <dsetrakyan@apache.org> wrote:
> 
> Yakov,
> 
> I think you are proposing to have a single NIO (or TCP) SPI and have all
> other SPIs and components register with it and receive message callbacks.
> Is that right? If yes, then I really like the idea.
> 
> D.
> 
> On Tue, Mar 7, 2017 at 2:15 AM, Yakov Zhdanov <yzhdanov@apache.org> wrote:
> 
>> Guys,
>> 
>> I have an idea of merging all the network components into one.
>> 
>> Now we have the following components interacting via network:
>> 1. discovery
>> 2. communication
>> 3. REST
>> 4. ODBC
>> 5. ignite-hadoop
>> 6. time processor (being removed together with clock mode)
>> 7. IPC communication endpoint
>> 
>> Components 2-6 each use their own GridNioServer with a separate set of
>> selector threads, so the total selector thread count may exceed the number
>> of cores. TCP discovery uses the blocking socket API.
>> 
>> All of the above means that many TCP ports may need to be opened on each
>> node. In secured environments with firewalls, where opening new ports
>> requires special permissions, Ignite installation may become painful.
>> 
>> What if we had only one TCP port per node (of course, we could still bind
>> the components to different ports) and a single component that encapsulates
>> all network activity and resource management? Any component that needs
>> network interaction could register a filter chain with the network component
>> and start sending and receiving network messages.
>> 
>> In other words, I suggest having a single set of NIO selectors and a clean
>> API for installing network listeners that satisfies the demands of all other
>> components. E.g., discovery, communication, and REST would not open their
>> own servers, but would instead go to the new NetworkProcessor and set up
>> listener chains on some ports (possibly on the default one, with
>> NetworkProcessor properly dispatching incoming connections to the
>> components).
>> 
>> The current implementation has the following drawbacks, which the new
>> approach would fix:
>> 1. It may require too many open ports.
>> 2. The selector thread count may exceed the number of CPUs, which may lead
>> to performance degradation.
>> 
>> Please share your thoughts or ask questions.
>> 
>> --Yakov
>> 


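As an illustration of the proposal, here is a minimal sketch of what such a
registration API could look like in Java. NetworkProcessor is the component
named in the thread above; NetworkListener, NetworkSession, and the
register(...) signature are hypothetical names used only for this sketch,
not part of the actual IGNITE-3480 design.

// Hypothetical sketch of the proposed API. NetworkProcessor is the
// component Yakov describes; the listener/session interfaces and the
// register(...) signature are illustrative assumptions, not a real design.
import java.nio.ByteBuffer;

/** Callback a component (discovery, communication, REST, ...) installs. */
interface NetworkListener {
    /** Invoked by the shared selector threads when a message arrives. */
    void onMessage(NetworkSession ses, ByteBuffer msg);
}

/** Handle a registered component uses to send outbound messages. */
interface NetworkSession {
    void send(ByteBuffer msg);
    void close();
}

/** Single per-node owner of all TCP ports and NIO selector threads. */
interface NetworkProcessor {
    /**
     * Installs a listener chain on the given port (or a shared default
     * port); incoming connections are dispatched through the chain, so
     * discovery, communication, REST, etc. never open their own servers.
     */
    void register(int port, NetworkListener... chain);
}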