ignite-user mailing list archives

From Denis Magda <dma...@gridgain.com>
Subject Re: OutOfMemoryError with Hadoop backing filesystem
Date Mon, 09 Nov 2015 09:08:13 GMT
Hi, 

Since IGFS is backed by Hadoop as a secondary file system, you can safely
enable an eviction policy for IGFS.

IgfsPerBlockLruEvictionPolicy has two parameters: maxSize and maxBlocks.
When either threshold is exceeded, the policy evicts the least recently used
data blocks from memory. When previously evicted data is requested again, it
is loaded back into memory from Hadoop automatically.
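The behaviour itself is plain LRU. As a rough, self-contained sketch of what "evict the least recently used block once a cap is exceeded" means (this uses a JDK LinkedHashMap, not Ignite code, and the cap of 3 blocks is only illustrative of maxBlocks):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class LruBlockCacheSketch {
    /** Simulates a cap-3 LRU block cache: insert blocks 1..4, then touch block 2. */
    static List<Long> demo() {
        final int maxBlocks = 3; // illustrative cap, in the spirit of the policy's maxBlocks

        // accessOrder=true means iteration order is least-recently-used first.
        Map<Long, byte[]> blocks = new LinkedHashMap<Long, byte[]>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<Long, byte[]> eldest) {
                return size() > maxBlocks; // evict the LRU block once the cap is exceeded
            }
        };

        for (long id = 1; id <= 4; id++)
            blocks.put(id, new byte[0]); // block 1 is evicted when block 4 arrives

        blocks.get(2L); // touching block 2 makes it the most recently used

        return new ArrayList<>(blocks.keySet());
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints [3, 4, 2]
    }
}
```

In Ignite, an evicted block is simply re-read from the secondary (Hadoop) file system on the next access, so eviction trades memory for read latency rather than losing data.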

The eviction policy with default parameters can be added this way to your
configuration:

<bean id="dataCacheCfgBase"
      class="org.apache.ignite.configuration.CacheConfiguration" abstract="true">
    <property name="evictionPolicy">
        <bean class="org.apache.ignite.cache.eviction.igfs.IgfsPerBlockLruEvictionPolicy"/>
    </property>
    .........
</bean>
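To tune the thresholds explicitly, the policy bean can carry both parameters as nested properties. The values below are purely illustrative; the property names correspond to the policy's maxSize/maxBlocks setters:

```xml
<property name="evictionPolicy">
    <bean class="org.apache.ignite.cache.eviction.igfs.IgfsPerBlockLruEvictionPolicy">
        <!-- Illustrative: evict once cached data exceeds 512 MB (bytes)... -->
        <property name="maxSize" value="536870912"/>
        <!-- ...or once more than 100000 blocks are cached, whichever comes first. -->
        <property name="maxBlocks" value="100000"/>
    </bean>
</property>
```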


Regards,
Denis



--
View this message in context: http://apache-ignite-users.70518.x6.nabble.com/OutOfMemoryError-with-Hadoop-backing-filesystem-tp1854p1885.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.
