ignite-user mailing list archives

From Raymond Wilson <raymond_wil...@trimble.com>
Subject RE: Effective size limit for cache items in Ignite
Date Mon, 13 Feb 2017 01:43:11 GMT
Ah, I found the CopyOnRead flag in the cache configuration.

Unfortunately, it seems to have the same behaviour regardless of the
setting for this flag.

If I create an example like the one below, querying the same element from
the cache many times takes about the same amount of time in both cases.
Visual Studio also reports a large number of GC episodes as it cleans up
the large freed MyCacheClass instances. Is this flag only applicable to
Java contexts? I also tried setting KeepBinaryInStore to true, though that
made no noticeable difference.


    using System;
    using Apache.Ignite.Core;
    using Apache.Ignite.Core.Cache;
    using Apache.Ignite.Core.Cache.Configuration;

    public class MyCacheClass
    {
        public String name = String.Empty;

        private byte[] localData = null;

        public MyCacheClass(String _name)
        {
            name = _name;
            localData = new byte[4000000];
        }
    }

    class Program
    {
        static void Main(string[] args)
        {
            IIgnite ignite = Ignition.Start();

            // Add a cache to Ignite
            ICache<String, MyCacheClass> cache = ignite.CreateCache<String, MyCacheClass>
                (new CacheConfiguration()
                {
                    Name = "TestCache",
                    CopyOnRead = false,
                    KeepBinaryInStore = true
                });

            // Add a cache item
            cache.Put("First", new MyCacheClass("FirstItem"));

            // Query the same cache item back repeatedly
            for (int i = 0; i < 30000; i++)
            {
                MyCacheClass first = cache.Get("First");
            }
        }
    }

*From:* Raymond Wilson [mailto:raymond_wilson@trimble.com]
*Sent:* Monday, February 13, 2017 11:35 AM
*To:* user@ignite.apache.org
*Subject:* Effective size limit for cache items in Ignite


What is the practical size limit for items in an Ignite cache?

I suspect the answer is something like “As large as the memory you have to
hold it”, but my question is aimed more at the practicality of storing large
items in a cache, given the overhead of pulling copies of those items out of
the cache in response to a Cache.Get() request.

For instance, let’s say I had cache items in the 64KB size range, and had
requests that commonly refer to those cache items to perform some work on
them. Will each Cache.Get() request require an extraction and repackaging of
the cache item before it is handed back to the caller as a new (copied)
version of that item, or is there a way for just a reference to the cache
item to be returned to the caller?

I understand there is a way to designate the information in a cache as just
blobs of data with no serialisation semantics. In this case does a
Cache.Get() return a pointer or a copy (with a local locking semantic to
prevent change)?


