ignite-dev mailing list archives

From Raymond Wilson <raymond_wil...@trimble.com>
Subject RE: Massive commit sizes for processes with local Ignite Grid servers
Date Thu, 07 Sep 2017 23:12:52 GMT
Hi Alexey,



The server itself is pretty simple: it starts up and waits for requests
without preloading anything.



Once the process has initialized everything (except for activating the
Ignite server node), the Commit Size is around 750Mb. It then waits for
Active to become true, at which point it creates four caches on the server.
Once this has completed, the Commit Size grows to around 3Gb.



The odd thing here is that there shouldn’t really be anything else loaded
in terms of libraries, as it should all be in place by the time the node is
activated.



I don’t have any memory constraints on the caches as this should be managed
under the umbrella memory policy provided for the server node as a whole.



I see the same behaviour if I run the server with no persistent data as
when running it again after processing some 250Mb of data across the four
caches. The commit size itself doesn’t change during operations (these are
really just the Ignite server node reacting to cache requests made by a
totally separate Ignite client node in another process).



Perhaps 3Gb commit is the minimum amount the Ignite JVM requests?



Thanks,

Raymond.





*From:* Alexey Kukushkin [mailto:kukushkinalexey@gmail.com]
*Sent:* Thursday, September 7, 2017 11:45 PM
*To:* user@ignite.apache.org
*Cc:* dev@ignite.apache.org
*Subject:* Re: Massive commit sizes for processes with local Ignite Grid
servers



Raymond,



So you see 3 "extra" GB taken by your server app. Is it possible your app
really loads an additional 3GB of referenced libraries and data besides
Ignite? Did you try temporarily changing the code to NOT start Ignite and
see how much memory such an app takes?



On Thu, Sep 7, 2017 at 1:49 PM, Raymond Wilson <raymond_wilson@trimble.com>
wrote:

Hi Dmitry,



Thanks for the pointer to the MemoryPolicy.



I added the following:



            cfg.MemoryConfiguration = new MemoryConfiguration()
            {
                SystemCacheMaxSize = (long)1 * 1024 * 1024 * 1024,
                DefaultMemoryPolicyName = "defaultPolicy",
                MemoryPolicies = new[]
                {
                    new MemoryPolicyConfiguration
                    {
                        Name = "defaultPolicy",
                        InitialSize = 128 * 1024 * 1024,  // 128 MB
                        MaxSize = 1L * 1024 * 1024 * 1024  // 1 GB
                    }
                }
            };



After running both servers, the commit size peaked at 4Gb for both
processes (with ~430Mb actually allocated), which is a significant
improvement, though still higher than might be expected.
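A quick back-of-the-envelope check of that peak, assuming the committed size is roughly the JVM heap plus the two off-heap regions configured above (attributing the ~1Gb remainder to JVM metaspace, thread stacks and other native allocations is an assumption on my part, not a measurement):

```python
GB = 1024 ** 3

# Figures taken from the configuration above.
jvm_max_heap = 1 * GB    # JvmMaxMemoryMb = 1024
system_cache = 1 * GB    # SystemCacheMaxSize
default_policy = 1 * GB  # "defaultPolicy" MaxSize

configured = jvm_max_heap + system_cache + default_policy
print(configured / GB)  # 3.0 -- the remaining ~1Gb of the 4Gb peak would be
                        # JVM metaspace, thread stacks, etc. (assumption)
```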



Thanks,

Raymond.







*From:* Dmitry Pavlov [mailto:dpavlov.spb@gmail.com]
*Sent:* Thursday, September 7, 2017 10:22 PM
*To:* user@ignite.apache.org; dev@ignite.apache.org
*Subject:* Re: Massive commit sizes for processes with local Ignite Grid
servers



Hi Raymond,



Since version 2.0, total memory usage is determined as the sum of the heap
size and the MaxSizes of the memory policies (overall segment sizes). If no
policy is configured, 80% of physical RAM is used for each node (before
2.2). This behaviour will change in 2.2.
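A minimal sketch of that accounting in Python (my own rough model of the rule described above, not an official formula; it ignores JVM metaspace, thread stacks and other native allocations):

```python
GB = 1024 ** 3

def commit_estimate(heap_max, policy_max_sizes, system_cache_max=0):
    """Approximate per-node committed memory as heap plus the MaxSize of
    every memory policy plus the system cache region (all off-heap)."""
    return heap_max + sum(policy_max_sizes) + system_cache_max

# Before 2.2, an unconfigured default policy can reserve up to 80% of
# physical RAM, so on a 16Gb box a node with a 1Gb heap could commit:
ram = 16 * GB
print(round(commit_estimate(1 * GB, [int(0.8 * ram)]) / GB, 1))  # 13.8
```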



To run several nodes on one PC, it may be necessary to manually set up the
MemoryConfiguration and memory policy(ies).





Hi Igniters, esp. Pavel T.



please share your thoughts: to which Java property is SystemCacheMaxSize
now mapped?



Sincerely,

Dmitriy Pavlov



P.S. Please see example of configuration

https://apacheignite-net.readme.io/docs/durable-memory



MemoryPolicies = new[]
{
    new MemoryPolicyConfiguration
    {
        Name = "defaultPolicy",
        MaxSize = 4L * 1024 * 1024 * 1024  // 4 GB
    }
}



On Thu, 7 Sep 2017 at 12:44, Raymond Wilson <raymond_wilson@trimble.com> wrote:

I tried an experiment where I ran only two instances of the server locally;
this is the result in the Task Manager:





*From:* Raymond Wilson [mailto:raymond_wilson@trimble.com]
*Sent:* Thursday, September 7, 2017 9:21 PM
*To:* user@ignite.apache.org; 'dev@ignite.apache.org' <dev@ignite.apache.org>
*Subject:* Massive commit sizes for processes with local Ignite Grid servers



I’m running a set of four server applications on a local system to simulate
a cluster.



Each of the servers has the following memory configurations set:



        public override void ConfigureRaptorGrid(IgniteConfiguration cfg)
        {
            cfg.JvmInitialMemoryMb = 512;  // Set to minimum advised memory for the Ignite grid JVM of 512Mb
            cfg.JvmMaxMemoryMb = 1 * 1024; // Set max to 1Gb

            // Don't permit the Ignite node to use more than 1Gb RAM (handy when running locally...)
            cfg.MemoryConfiguration = new MemoryConfiguration()
            {
                SystemCacheMaxSize = (long)1 * 1024 * 1024 * 1024
            };
        }



The snap below is from the Windows 10 Task Manager, where I have included
the Commit Size column. As can be seen, the four identical servers are
using very large and wildly varying commit sizes. Some Googling suggests
this is due to the JVM allocating the largest contiguous block of virtual
memory it can, but I would not have expected this size to be larger than
the configured memory for the JVM (1Gb, plus memory from the wider process
it is running in, though that is only a few hundred Mb at most).





The result is that my local system reports ~50-60Gb of committed memory on
a system with 16Gb of physical RAM, and I don’t think it likes it!
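A rough sanity check of that figure, assuming each node's unconfigured memory policy reserves the pre-2.2 default of 80% of physical RAM described in Dmitry's reply above (a hypothesis consistent with the numbers, not a confirmed diagnosis):

```python
GB = 1024 ** 3
ram = 16 * GB
nodes = 4

# Pre-2.2 default: each node's unconfigured memory policy may reserve
# up to 80% of physical RAM (see Dmitry's reply above).
per_node_policy = int(0.8 * ram)
total_committed = nodes * (per_node_policy + 1 * GB)  # + 1Gb heap each

print(round(total_committed / GB, 1))  # 55.2 -- in the observed ~50-60Gb range
```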



Is there a way to configure the Ignite JVM to be a better citizen with
respect to the committed size it requests from the host operating system?



Thanks,

Raymond.







-- 

Best regards,

Alexey
