lucene-java-user mailing list archives

From Morus Walter <>
Subject RE: multivalue fields
Date Mon, 17 May 2004 10:19:49 GMT
Alex McManus writes:
> > Maybe your fields are too long so that only part of it gets indexed (look
> at IndexWriter.maxFieldLength).
> This is interesting, I've had a look at the JavaDoc and I think I
> understand. The maximum field length describes the maximum number of unique
> terms, not the maximum number of words/tokens. Therefore, even if I have a
> 4Gb field, I could quite safely have a maxFieldLength of, say, 100k words
> which should safely handle the maximum number of unique words, rather than
> 800 million which would be needed to handle every token.
> Is this correct? 

A short look at the source says no.

maxFieldLength is handed to DocumentWriter where one finds

          TokenStream stream = analyzer.tokenStream(fieldName, reader);
          try {
            for (Token t = stream.next(); t != null; t = stream.next()) {
              position += (t.getPositionIncrement() - 1);
              addPosition(fieldName, t.termText(), position++);
              if (++length > maxFieldLength) break;
            }
          } finally {
            stream.close();
          }

so the limit counts every token, not just the distinct terms.
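A minimal, self-contained sketch of that loop's counting behavior (plain Java, no Lucene; the class and method names are made up for illustration). Note that the token is added before the length check, so repeated occurrences of a term that is already indexed still count toward the limit:

```java
import java.util.ArrayList;
import java.util.List;

public class FieldLengthDemo {
    // Mimics the DocumentWriter loop quoted above: the add runs first,
    // then the counter is checked, so every token -- including repeats
    // of a term already seen -- counts toward maxFieldLength.
    static List<String> indexTokens(String text, int maxFieldLength) {
        List<String> indexed = new ArrayList<>();
        int length = 0;
        for (String token : text.split("\\s+")) {
            indexed.add(token);                   // stands in for addPosition()
            if (++length > maxFieldLength) break; // same cut-off as the original
        }
        return indexed;
    }

    public static void main(String[] args) {
        // Six tokens but only four distinct terms; with a limit of 4
        // the last "be" is still dropped, because repeats of an
        // already-indexed term are not deduplicated before the cut-off.
        System.out.println(indexTokens("to be or not to be", 4));
    }
}
```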

> Is 100k a worrying maxFieldLength, in terms of how much memory this would
> consume?
Depends on the size of your documents ;-)
I use 250000 without problems, but my documents are not as big (< 40000
tokens). I just want to make sure not to lose any text during indexing.
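For reference, a sketch of raising the limit at indexing time, assuming the Lucene API of this era (where maxFieldLength is a public field on IndexWriter; the index path and analyzer here are placeholders, not anything from the thread):

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;

public class RaiseLimit {
    public static void main(String[] args) throws Exception {
        // Hypothetical index path; replace with your own.
        IndexWriter writer = new IndexWriter("/tmp/index", new StandardAnalyzer(), true);
        // Raise the per-field token limit before adding documents;
        // tokens past the limit are silently dropped.
        writer.maxFieldLength = 250000;
        // ... add documents ...
        writer.close();
    }
}
```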

> Does Lucene issue a warning if this limit is exceeded during indexing (it
> would be quite worrying if it was silently discarding terms)?
I guess the idea behind this limit is that the relevant terms should occur
in the first n words, and indexing the rest just increases index size.

