lucene-solr-user mailing list archives

From Johannes Siegert <>
Subject high memory usage with small data set
Date Wed, 29 Jan 2014 13:49:33 GMT

We are using Apache Solr Cloud in a production environment. When the
maximum heap space is reached, Solr response times degrade for a short
period while the garbage collector runs.

We use the following configuration:

- Apache Tomcat as the web server running the Solr web application
- 13 indices with about 1,500,000 entries in total (300 MB)
- 5 servers with one replica per index (5 GB max heap space each)
- All indices have the following caches:
    - the largest document cache has 4096 entries; the other indices
have between 64 and 1536 entries
    - the largest query cache has 1024 entries; the other indices have
between 64 and 768 entries
    - the largest filter cache has 1536 entries; the other indices have
between 64 and 1024 entries
- the directory-factory implementation is NRTCachingDirectoryFactory
- the index is updated once per hour (no auto commit)
- approx. 5000 requests per hour per server
- large filter queries (up to 15000 bytes and 1500 boolean operators)
- many facet queries (about 30% of requests)
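For reference, the cache limits above would correspond to solrconfig.xml entries along these lines (a sketch for the largest index only; the cache classes and autowarmCount values are illustrative assumptions, not taken from the actual configuration):

```xml
<!-- solrconfig.xml: cache sizes for the largest index, as described above.
     Cache classes and autowarmCount values are assumptions. -->
<query>
  <filterCache      class="solr.FastLRUCache" size="1536" initialSize="512"  autowarmCount="0"/>
  <queryResultCache class="solr.LRUCache"     size="1024" initialSize="512"  autowarmCount="0"/>
  <documentCache    class="solr.LRUCache"     size="4096" initialSize="1024" autowarmCount="0"/>
</query>
```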


We started with 512 MB of heap space in use. Over several days the heap
usage grew until the 5 GB maximum was reached. At that point the problem
described above occurred. Since then, heap usage stays between 50 and 90
percent. No OutOfMemoryException occurs.
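To confirm that the pauses coincide with garbage collection, GC logging can be enabled for the Tomcat JVM, e.g. via bin/setenv.sh (a sketch; the log path and the HotSpot flags shown are standard for Java 6/7-era JVMs, but not taken from our setup):

```shell
# bin/setenv.sh: turn on GC logging for the Solr/Tomcat JVM (illustrative).
# The resulting log shows pause durations and heap occupancy per collection.
CATALINA_OPTS="$CATALINA_OPTS -verbose:gc -XX:+PrintGCDetails \
  -XX:+PrintGCDateStamps -Xloggc:/var/log/tomcat/gc.log"
export CATALINA_OPTS
```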


1. Why does Solr use 5 GB of RAM with this small amount of data?
2. What impact do the large filter queries have on RAM usage?
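On question 2, a back-of-the-envelope estimate is instructive: Solr's filterCache typically stores one bitset per cached filter, at one bit per document (this per-entry cost is an assumption based on Solr's default bitset-backed DocSet; key overhead and sparse sets are ignored, and the 1.5M figure from above is treated as a single index for simplicity):

```python
# Rough filterCache memory estimate: one bit per document per cached entry
# (Solr's bitset-backed DocSet); key overhead and sparse sets are ignored.
max_doc = 1_500_000                 # document count from the message
bytes_per_entry = max_doc // 8      # one bit per doc -> 187,500 bytes (~183 KiB)
filter_cache_size = 1536            # largest filter cache from the message
total_bytes = bytes_per_entry * filter_cache_size
print(bytes_per_entry)              # 187500
print(total_bytes / 2**20)          # ~274.7 MiB for one fully populated cache
```

With 13 indices each holding its own caches per core, fully warmed filter caches alone could plausibly account for a large share of the heap growth.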


Johannes Siegert
