lucene-dev mailing list archives

From "Michael McCandless (JIRA)" <>
Subject [jira] Commented: (LUCENE-532) [PATCH] Indexing on Hadoop distributed file system
Date Fri, 10 Nov 2006 15:22:42 GMT
Michael McCandless commented on LUCENE-532:

I think this is the same issue as LUCENE-532 (I just marked that one as a dup).

But there was one difference: does HDFS allow writing to the same file (eg "segments") more
than once?  I thought it did not because it's "write once"?  Do we need to avoid writing to
the same file more than once to work with HDFS (lock-less commits get us closer)?
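
For concreteness, a minimal sketch of what "writing the same file more than once" means on
pre-append HDFS, against the org.apache.hadoop.fs API (the path and the values written are
made up for illustration): a file is created, written front to back, and closed; writing it
"again" can only mean re-creating it wholesale with overwrite=true, since a closed file can
never be re-opened for writing.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class SegmentsRewrite {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        Path segments = new Path("/index/segments");  // hypothetical path

        // First write: fine -- an HDFS file is written once, then closed.
        FSDataOutputStream out = fs.create(segments, true /* overwrite */);
        out.writeInt(-1);  // stand-in for a format version
        out.close();

        // "Writing the same file more than once" means replacing it
        // wholesale; there is no re-opening a closed file to modify it.
        out = fs.create(segments, true);
        out.writeInt(-2);
        out.close();
      }
    }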

> [PATCH] Indexing on Hadoop distributed file system
> --------------------------------------------------
>                 Key: LUCENE-532
>                 URL:
>             Project: Lucene - Java
>          Issue Type: Improvement
>          Components: Index
>    Affects Versions: 1.9
>            Reporter: Igor Bolotin
>            Priority: Minor
>         Attachments: indexOnDFS.patch, SegmentTermEnum.patch, TermInfosWriter.patch
> In my current project we needed a way to create very large Lucene indexes on Hadoop distributed
file system. When we tried to do it directly on DFS using Nutch FsDirectory class - we immediately
found that indexing fails because the seek() method throws UnsupportedOperationException.
The reason for this behavior is clear - DFS does not support random updates and so the seek()
method can't be supported (at least not easily).
> Well, if we can't support random updates - the question is: do we really need them? A search
of the Lucene code revealed 2 places which call seek(): one is in TermInfosWriter
and another one is in CompoundFileWriter. As we weren't planning to use CompoundFileWriter -
the only place that concerned us was in TermInfosWriter.
> TermInfosWriter uses seek() in its close() method to write the total number of
terms in the file back into the beginning of the file. It was very simple to change the file format
a little bit and write the number of terms into the last 8 bytes of the file instead of writing it
into the beginning of the file. The only other place that has to be fixed for this to work
is the SegmentTermEnum constructor - to read this piece of information at position = file length
- 8.
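
A sketch of the format change being described (the class and method names here are mine for
illustration, not the patch's): close() appends the term count as a trailer instead of seeking
back to the header, and the reader fetches it from fileLength - 8.

    import java.io.IOException;
    import org.apache.lucene.store.IndexInput;
    import org.apache.lucene.store.IndexOutput;

    class TermCountTrailer {

      // Writer side (what TermInfosWriter.close() does in the patch):
      // the count becomes the last 8 bytes of the file, so no seek() is needed.
      static void writeTrailer(IndexOutput out, long termCount) throws IOException {
        out.writeLong(termCount);
        out.close();
      }

      // Reader side (what the SegmentTermEnum constructor does in the patch):
      static long readTrailer(IndexInput in) throws IOException {
        long pos = in.getFilePointer();
        in.seek(in.length() - 8);  // term count lives at position = file length - 8
        long termCount = in.readLong();
        in.seek(pos);              // restore position for normal enumeration
        return termCount;
      }
    }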
> With this format hack - we were able to use FsDirectory to write the index directly to DFS
without any problems. Well - we still don't index directly to DFS for performance reasons,
but at least we can build small local indexes and merge them into the main index on DFS without
copying the big main index back and forth.
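
That workflow looks roughly like this with the stock Lucene 1.9 API (the FsDirectory import
and constructor are quoted from memory from Nutch-era sources and may differ in your version):
build a small index on fast local disk, then let an IndexWriter over the DFS directory merge
it in, so the big index never has to round-trip to local disk.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.store.Directory;
    import org.apache.lucene.store.FSDirectory;
    import org.apache.nutch.indexer.FsDirectory;  // assumed package; check your Nutch version

    public class MergeIntoDfs {
      public static void main(String[] args) throws Exception {
        // Small index built on fast local disk (paths are hypothetical).
        Directory local = FSDirectory.getDirectory("/tmp/local-index", false);

        // Main index living on DFS, wrapped by Nutch's FsDirectory.
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Directory main = new FsDirectory(fs, new Path("/index/main"), false, conf);

        // Merge the local segments into the DFS-resident index; the merge
        // runs against DFS directly, so the big index is never copied
        // back and forth between DFS and local disk.
        IndexWriter writer = new IndexWriter(main, new StandardAnalyzer(), false);
        writer.addIndexes(new Directory[] { local });
        writer.close();
      }
    }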
