couchdb-user mailing list archives

From Chris Anderson <>
Subject Re: Building IFI View for Text Queries
Date Wed, 06 Jan 2010 21:01:42 GMT
On Wed, Jan 6, 2010 at 12:57 PM, Nic Pottier <> wrote:
> On Wed, Jan 6, 2010 at 12:39 PM, Chris Anderson <> wrote:
>>> Any way to get an insight as to how big the index is?  I can see how
>>> big my database is (78M with ~11k docs) but I'd be curious to know how
>>> big that view is stored in memory.
>> The view is stored on disk. Look in the CouchDB data directory
>> /usr/local/var/lib/couchdb for the view directory.
> I only see the primary database file here, so I guess I get a feeling
> for the total size, but not what portion of that size is from the
> view.  I suppose I could delete it, look at the size, then rebuild,
> comparing the growth?

There is actually a separate index directory called


inside that directory. Within it is one index file per design document; that file's size is the actual index size.
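The check above can be scripted. Here's a minimal sketch, assuming the on-disk index files carry the `.view` extension (as they do on CouchDB installs of this era); the data-directory path and the function name are only illustrative, not part of CouchDB's API:

```python
import os

def view_index_sizes(data_dir):
    # Walk the CouchDB data directory and report the size, in bytes,
    # of every on-disk view index file (one *.view file per design doc).
    sizes = {}
    for root, _dirs, files in os.walk(data_dir):
        for name in files:
            if name.endswith(".view"):
                path = os.path.join(root, name)
                sizes[path] = os.path.getsize(path)
    return sizes

# e.g. view_index_sizes("/usr/local/var/lib/couchdb")
```

Comparing the totals before and after rebuilding a view gives the same answer as the delete-and-rebuild approach, without touching the database file.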

>> Our reduce is not key-bounded, so [id array] would end up being the
>> list of unique ids in the entire database for full-reduce.
> Ok, that's kind of what I suspected.  Are there any plans to offer
> multiple levels of mapping?  It seems like it would still fit into
> pattern of individual updates and tree aggregation and could allow for
> fast recreation of these kind of indexes.  Just a random question /
> idea..

We're definitely into alternate query engines. Lucene is pretty popular with CouchDB, and the way it is kept up to date uses the same architecture as the "built-in" map/reduce: the indexer checkpoints the database's update sequence and only processes documents that have changed since its last run. This is also how you'd hook up SQLite or Neo4j.
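That "kept up to date" architecture can be sketched in a few lines. This is a toy model, not couchdb-lucene's actual code: all names are made up for illustration, and the `changes` list stands in for the entries a real integration would fetch from the database's update feed since its checkpoint.

```python
class ExternalIndexer:
    # Toy external index (think Lucene, SQLite, Neo4j) kept current the
    # same way CouchDB's built-in views are: by replaying the database's
    # change feed from the last sequence number already processed.

    def __init__(self):
        self.last_seq = 0   # checkpoint, analogous to a view's update seq
        self.index = {}     # doc id -> indexed text

    def update(self, changes):
        # `changes` is a list of {"seq": ..., "id": ..., "doc": ...}
        # entries, newest last, standing in for the HTTP update feed.
        for change in changes:
            if change["seq"] <= self.last_seq:
                continue    # already reflected in the index
            self.index[change["id"]] = change["doc"].get("text", "")
            self.last_seq = change["seq"]


idx = ExternalIndexer()
idx.update([{"seq": 1, "id": "a", "doc": {"text": "hello"}}])
idx.update([{"seq": 1, "id": "a", "doc": {"text": "hello"}},
            {"seq": 2, "id": "b", "doc": {"text": "world"}}])
# idx.index now holds both docs; seq 1 was skipped the second time.
```

The key property is incrementality: re-running `update` never reprocesses documents below the checkpoint, so keeping the index fresh costs work proportional to what changed, not to the database size.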

>> The storage inefficiency you describe is likely what would force you
>> from a pure Couch to a Lucene FTI solution first, as your data begins
>> to scale.
> Understood. I'll take another look at the Lucene integration..
> how many people are using that?

Chris Anderson
