lucene-dev mailing list archives

From Simon Willnauer <>
Subject Re: modularization discussion
Date Thu, 05 May 2011 08:15:44 GMT
Hey folks

On Tue, May 3, 2011 at 6:49 PM, Michael McCandless
<> wrote:
> Isn't our end goal here a bunch of well factored search modules?  Ie,
> fast forward a year or two and I think we should have modules like
> these:

I think we have two camps here (10k-feet view):

1. those who want to move towards modularization and would likely
support all the modules Mike has listed below
2. those who want to stick with Solr's current architecture and remain
"monolithic" (not negative in this case) as much as possible

I think we can meet somewhere in between and agree on certain modules
that should be available to Lucene users as well. The ones I have in
mind are primary search features like:
- Faceting
- Highlighting
- Suggest
- Function queries (consolidation is needed here!)
- Analyzer factories
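To make the "analyzer factories" idea concrete, here is a minimal sketch of what a shared factory registry could look like, so that Solr's XML configuration layer and plain Lucene users consume the same factories. All names and signatures here are hypothetical, not the real Lucene/Solr API; a tokenizer is modeled as a plain function for illustration.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

// Hypothetical sketch, not the real Lucene/Solr API: a shared registry of
// analyzer factories that both Solr's XML config layer and plain Lucene
// users could consume. A "tokenizer" is modeled as a plain function here.
public class AnalyzerFactories {

    // A factory builds an analysis component from string parameters,
    // much like Solr's <analyzer> XML configuration does today.
    interface TokenizerFactory {
        Function<String, List<String>> create(Map<String, String> args);
    }

    private static final Map<String, TokenizerFactory> REGISTRY = new HashMap<>();

    static void register(String name, TokenizerFactory factory) {
        REGISTRY.put(name, factory);
    }

    static TokenizerFactory lookup(String name) {
        TokenizerFactory f = REGISTRY.get(name);
        if (f == null) {
            throw new IllegalArgumentException("unknown factory: " + name);
        }
        return f;
    }

    public static void main(String[] args) {
        // The "whitespace" factory splits on the pattern given in its args.
        register("whitespace", params -> text ->
                Arrays.asList(text.split(params.getOrDefault("pattern", "\\s+"))));

        Function<String, List<String>> tokenizer = lookup("whitespace").create(Map.of());
        System.out.println(tokenizer.apply("fast forward a year"));
        // prints [fast, forward, a, year]
    }
}
```

The point of the indirection is that the factory layer, not the webapp, owns configuration, so the same registry works whether the parameters come from Solr XML or from embedded-Lucene code.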

Things like distribution and replication should remain in Solr IMO,
but they might be moved behind a more extensible API so that people
can add their own implementations. I am thinking of things like the
ZooKeeper support, which might not be a good fit for everybody, e.g.
for folks who already have a JGroups infrastructure in place. So I
think we can work towards two distinct goals:

1. extract common search features into modules
2. refactor Solr to be more "elastic" / "distributed" and extensible
with respect to those goals
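As a rough illustration of that second goal, cluster coordination could sit behind a small SPI so that the ZooKeeper support becomes one implementation among several, with a JGroups-based one dropped in where that infrastructure already exists. The interface and class names below are purely hypothetical, not existing Solr classes; an in-memory stand-in is used for the demo.

```java
import java.util.Collections;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the "extensible API" idea: put cluster
// coordination behind a small SPI so that ZooKeeper becomes one
// implementation and a JGroups-based one can be dropped in.
public class CoordinationSpi {

    interface ClusterCoordinator {
        void join(String nodeId);
        void leave(String nodeId);
        Set<String> liveNodes();
    }

    // In-memory stand-in used for the demo; a ZooKeeperCoordinator or
    // JGroupsCoordinator would implement the same interface.
    static class InMemoryCoordinator implements ClusterCoordinator {
        private final Set<String> nodes = ConcurrentHashMap.newKeySet();
        @Override public void join(String nodeId) { nodes.add(nodeId); }
        @Override public void leave(String nodeId) { nodes.remove(nodeId); }
        @Override public Set<String> liveNodes() {
            return Collections.unmodifiableSet(nodes);
        }
    }

    public static void main(String[] args) {
        ClusterCoordinator coordinator = new InMemoryCoordinator();
        coordinator.join("node1");
        coordinator.join("node2");
        coordinator.leave("node1");
        System.out.println(coordinator.liveNodes()); // prints [node2]
    }
}
```

The rest of Solr would program against `ClusterCoordinator` only, so swapping the backing coordination technology would not touch search code at all.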

Maybe we can reach agreement on such a basis, though.

Let me know what you think.

>  * Faceting
>  * Highlighting
>  * Suggest (good patch is on LUCENE-2995)
>  * Schema
>  * Query impls
>  * Query parsers
>  * Analyzers (good progress here already, thanks Robert!),
>    incl. factories/XML configuration (still need this)
>  * Database import (DIH)
>  * Web app
>  * Distribution/replication
>  * Doc set representations
>  * Collapse/grouping
>  * Caches
>  * Similarity/scoring impls (BM25, etc.)
>  * Codecs
>  * Joins
>  * Lucene core
> In this future, much of this code came from what is now Solr and
> Lucene, but we should freely and aggressively poach from other
> projects when appropriate (and license/provenance is OK).
> I keep seeing all these cool "compressed int set" projects popping
> up... surely these are useful for us.  Solr poached a doc set impl
> from Nutch; probably there's other stuff to poach from Nutch, Mahout,
> etc.
> Katta's doing something sweet with distribution/replication; let's
> poach & merge w/ Solr's approach.  There are various facet impls out
> there (Bobo browse/Zoie; Toke's; Elastic Search); let's poach & merge
> with Solr's.
> Elastic Search has lots of cool stuff, too, under ASL2.
> All these external open-source projects are fair game for poaching and
> refactoring into shared modules, along with what is now Solr and
> Lucene sources.
> In this ideal future, Solr becomes the bundling and default/example
> configuration of the Web App and other modules, much like how the
> various Linux distros bundle different stuff together around the Linux
> kernel.  And if you are an advanced app and don't need the webapp
> part, you can cherry-pick the super-duper modules you do need and
> embed them directly into your app.
> Isn't this the future we are working towards?
> Mike

