lucene-java-user mailing list archives

From Glen Newton <glen.new...@gmail.com>
Subject Re: What kind of System Resources are required to index 625 million row table...???
Date Mon, 15 Aug 2011 23:08:44 GMT
> We have increased the heap up to 4 GB... on an 8 GB machine...
> That's why we'd like a methodology for calculating memory requirements
> to see if this application is even feasible.

Please indicate whether you are speaking about the indexing part or the
searching part. At times it is unclear which one you mean.
:-)

The IBM Java VM limits the amount of NIO direct buffer memory; the
default is 64MB. This may be impacting your indexing and searching.
Consider setting it to a larger size
(-XX:MaxDirectMemorySize=<size>), perhaps similar to the RAMBuffer
size in your IndexWriter (assuming an NIOFSDirectory). See
https://www.ibm.com/developerworks/java/jdk/aix/j664/sdkguide.aix64.html
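For example, a rough sketch of what I mean (Lucene 3.2-style API; the path,
analyzer and sizes below are only illustrative, not recommendations):

  import java.io.File;
  import org.apache.lucene.analysis.standard.StandardAnalyzer;
  import org.apache.lucene.index.IndexWriter;
  import org.apache.lucene.index.IndexWriterConfig;
  import org.apache.lucene.store.NIOFSDirectory;
  import org.apache.lucene.util.Version;

  public class IndexerSketch {
      public static void main(String[] args) throws Exception {
          // Sketch only: keep the IndexWriter RAM buffer roughly in line with the
          // direct-memory limit passed on the command line,
          // e.g. java -XX:MaxDirectMemorySize=256m IndexerSketch
          NIOFSDirectory dir = new NIOFSDirectory(new File("/path/to/index")); // placeholder path
          IndexWriterConfig cfg = new IndexWriterConfig(Version.LUCENE_32,
                  new StandardAnalyzer(Version.LUCENE_32));
          cfg.setRAMBufferSizeMB(256.0); // placeholder value
          IndexWriter writer = new IndexWriter(dir, cfg);
          // ... addDocument() calls ...
          writer.close();
      }
  }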

With regard to the machine, you didn't indicate how much swap you are using.

Heap: unless there are other things running, you could try up to 7 GB of heap.

You should also consider using huge pages. PPC64 supports 4K (the default)
and 16M page sizes (although this is more likely to speed things up than to
solve your heap problem...).
 General info for AIX and PPC:
http://publib.boulder.ibm.com/infocenter/aix/v6r1/index.jsp?topic=/com.ibm.aix.prftungd/doc/prftungd/large_page_ovw.htm
Java vm command line:
"-Xlp<size>
    AIX: Requests the JVM to allocate the Java heap (the heap from
which Java objects are allocated) with large (16 MB) pages, if a size
is not specified. If large pages are not available, the Java heap is
allocated with the next smaller page size that is supported by the
system. AIX requires special configuration to enable large pages. For
more information about configuring AIX support for large pages, see
http://publib.boulder.ibm.com/infocenter/aix/v6r1/topic/com.ibm.aix.prftungd/doc/prftungd/large_page_ovw.htm.
The SDK supports the use of large pages only to back the Java heap
shared memory segments. The JVM uses shmget() with the SHM_LGPG and
SHM_PIN flags to allocate large pages. The -Xlp option replaces the
environment variable IBM_JAVA_LARGE_PAGE_SIZE, which is now ignored if
set.
    AIX, Linux, and Windows only: If a <size> is specified, the JVM
attempts to allocate the JIT code cache memory using pages of that
size. If unsuccessful, or if executable pages of that size are not
supported, the JIT code cache memory is allocated using the smallest
available executable page size."

 General info on huge pages & Java, MySql, Linux, AIX:
http://zzzoot.blogspot.com/2009/02/java-mysql-increased-performance-with.html
 [my blog]
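As a rough example of what the invocation might look like (the class name and
sizes are placeholders, and AIX must first be configured for 16 MB pages as
described in the links above):

  java -Xlp -Xms512m -Xmx7g com.example.YourIndexer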


Consider some of the following Java VM command-line options (some are IBM
VM-specific):
-   -Xgcpolicy:subpool    "Uses an improved object allocation
algorithm to achieve better performance when allocating objects on the
heap. This option might improve performance on large SMP systems"
-  -Xcompressedrefs   "Use -Xcompressedrefs in any of these
situations: When your Java application does not need more than a 25
GB Java heap.     When your application uses a lot of native memory
and needs the JVM to run in a small footprint."
-  -Xcompactexplicitgc    "Enables full compaction each time
System.gc() is called."
-  -Xcompactgc   "Compacts on all garbage collections (system and global)."
-  -Xsoftrefthreshold<number> "Sets the value used by the GC to
determine the number of GCs after which a soft reference is cleared if
its referent has not been marked. The default is 32, meaning that the
soft reference is cleared after 32 * (percentage of free heap space)
GC cycles where its referent was not marked." Reducing this will clear
out soft references sooner. If any soft-reference-based caching is
being used, cache hits will go down but memory will be freed up
faster. But this will not directly solve your OOM problem: "All soft
references are guaranteed to have been cleared before the
OutOfMemoryError is thrown."
    The default (no compaction option specified) makes the GC compact
based on a series of triggers that attempt to compact only when it is
beneficial to the future performance of the JVM." - from
https://www.ibm.com/developerworks/java/jdk/aix/j664/sdkguide.aix64.html
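To make that concrete, an invocation might look something like this
(illustrative only; I would test each option separately before combining
them, and the sizes and class name are placeholders):

  java -Xgcpolicy:subpool -Xcompressedrefs -Xsoftrefthreshold16 \
       -Xms512m -Xmx7g -XX:MaxDirectMemorySize=256m \
       com.example.YourSearcher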

Very useful document on IBM Java VM: "Diagnostics Guide: IBM Developer
Kit and Runtime Environment, Java: Technology Edition, Version 6"
 http://download.boulder.ibm.com/ibmdl/pub/software/dw/jdk/diagnosis/diag60.pdf
  [page references refer to this document]
Relevant tips from this document on memory management:
- "Ensure that the heap never pages; that is, the maximum heap size
must be able to be contained in physical memory." p,8  Note that this
is a performance tip, not an OOM tip
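You can sanity-check whether the box is paging with the standard AIX tools
while a run is in progress, e.g. (the interval is arbitrary):

  lsps -a       # paging space usage
  vmstat 5      # watch the pi/po (page-in / page-out) columns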

You are using "-Xms4072m -Xmx4072m". The IBM documentation suggests
this is not a good choice:
"When you have established the maximum heap size that you need, you might
want to set the minimum heap size to the same value; for example, -Xms512M
-Xmx512M. However, using the same values is typically not a good idea,
because it
delays the start of garbage collection until the heap is full.
Therefore, the first time
that the GC runs, the process can take longer. Also, the heap is more
likely to be
fragmented and require a heap compaction. You are advised to start your
application with the minimum heap size that your application requires. When the
GC starts up, it will run frequently and efficiently, because the heap
is small." - p43

AIX allows different malloc policies to be used in the underlying
system calls. Consider using the WATSON (!) malloc policy (pp. 134 and 136,
and http://publib.boulder.ibm.com/infocenter/pseries/v5r3/index.jsp?topic=/com.ibm.aix.genprogc/doc/genprogc/sys_mem_alloc.htm).

Finally (or before doing all of this! :-) ), do some profiling, both
inside of Java and of the AIX native heap using svmon (see "Native
Heap Exhaustion", p.135).
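For example (assuming you know the JVM's process id):

  ps -ef | grep java     # find the JVM pid
  svmon -P <pid>         # per-process breakdown of real, virtual and paging-space use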

-Glen Newton
http://zzzoot.blogspot.com/




On Mon, Aug 15, 2011 at 5:55 PM, Bennett, Tony <Bennett.Tony@con-way.com> wrote:
> Thanks for the quick response.
>
> As to your questions:
>
>  Can you talk a bit more about what the search part of this is?
>  What are you hoping to get that you don't already have by adding in search?
>  Choices for fields can have impact on performance, memory, etc.
>
> We currently have an "exact match" search facility, which uses SQL.
> We would like to add "text search" capabilities...
> ...initially, having the ability to search the 229 character field for a given word
> or phrase, instead of an exact match.
> A future enhancement would be to add a synonym list.
> As to "field choice", yes, it is possible that all fields would be involved in the "search"...
> ...in the interest of full disclosure, the fields are:
>   - corp  - corporation that owns the document
>   - type  - document type
>   - tmst  - creation timestamp
>   - xmlid - xml namespace ID
>   - tag   - meta data qualifier
>   - data  - actual metadata  (example:  carton of red 3 ring binders )
>
>
>
>  Was this single threaded or multi-threaded?  How big was the resulting index?
>
> The search would be a threaded application.
>
>  How big was the resulting index?
>
> The index that was built was 70 GB in size.
>
>  Have you tried increasing the heap size?
>
> We have increased the heap up to 4 GB... on an 8 GB machine...
> That's why we'd like a methodology for calculating memory requirements
> to see if this application is even feasible.
>
> Thanks,
> -tony
>
>
> -----Original Message-----
> From: Grant Ingersoll [mailto:gsingers@apache.org]
> Sent: Monday, August 15, 2011 2:33 PM
> To: java-user@lucene.apache.org
> Subject: Re: What kind of System Resources are required to index 625 million row table...???
>
>
> On Aug 15, 2011, at 2:39 PM, Bennett, Tony wrote:
>
>> We are examining the possibility of using Lucene to provide Text Search
>> capabilities for a 625 million row DB2 table.
>>
>> The table has 6 fields, all which must be stored in the Lucene Index.
>> The largest column is 229 characters, the others are 8, 12, 30, and 1....
>> ...with an additional column that is an 8 byte integer (i.e. a 'C' long long).
>
> Can you talk a bit more about what the search part of this is?  What are you hoping
> to get that you don't already have by adding in search?  Choices for fields can have
> impact on performance, memory, etc.
>
>>
>> We have written a test app on a development system (AIX 6.1),
>> and have successfully Indexed 625 million rows...
>> ...which took about 22 hours.
>
> Was this single threaded or multi-threaded?  How big was the resulting index?
>
>
>>
>> When writing the "search" application... we find a simple version works, however,
>> if we add a Filter or a "sort" to it... we get an "out of memory" exception.
>>
>
> How many terms do you have in your index and in the field you are sorting/filtering on?
> Have you tried increasing the heap size?
>
>
>> Before continuing our research, we'd like to find a way to determine
>> what system resources are required to run this kind of application...???
>
> I don't know that there is a straight forward answer here with the information you've
> presented.  It can depend on how you intend to search/sort/filter/facet, etc.  General
> rule of thumb is that when you get over 100M documents, you need to shard, but you also
> have pretty small documents so your mileage may vary.  I've seen indexes in your range
> on a single machine (for small docs) with low search volumes, but that isn't to say it
> will work for you without more insight into your documents, etc.
>
>> In other words, how do we calculate the memory needs...???
>>
>> Have others created a similar sized Index to run on a single "shared" server...???
>>
>
> Off the cuff, I think you are pushing the capabilities of doing this on a single machine,
> especially the one you have spec'd out below.
>
>>
>> Current Environment:
>>
>>       Lucene Version: 3.2
>>       Java Version:   J2RE 6.0 IBM J9 2.4 AIX ppc64-64 build jvmap6460-20090215_29883
>>                        (i.e. 64 bit Java 6)
>>       OS:                     AIX 6.1
>>       Platform:               PPC  (IBM P520)
>>       cores:          2
>>       Memory:         8 GB
>>       jvm memory:     -Xms4072m -Xmx4072m
>>
>> Any guidance would be greatly appreciated.
>>
>> -tony
>
> --------------------------------------------
> Grant Ingersoll
> Lucid Imagination
> http://www.lucidimagination.com
>
>





