lucene-solr-user mailing list archives

From "Jason Rutherglen" <>
Subject Re: Some new SOLR features
Date Thu, 18 Sep 2008 11:43:47 GMT
> multi-core allows you to instantiate a completely
> new core and swap it for the old one, but it's a bit of a heavyweight
> approach.

Multi-core seems like more of a hack to get around running multiple
JVMs.  It doesn't seem like the most elegant solution for most
problems, because usually the same configuration files can be used
while only the schemas differ.  This is true for the systems in
question because they load data for indexing from separate tables in
an SQL database.  I put a field in the documents marking which table
or object type each one came from, and then filter on it.  This is
not ideal if one wants the term frequencies kept separate between the
tables, and possibly for speed: if one table is really small, queries
against it should return faster rather than still iterating over the
data of the other tables.
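As a sketch of that demarcation approach: a single string field in schema.xml records the source table, and queries restrict on it with a filter query.  The field name `doc_type` and the type values below are made up for illustration, and this assumes the stock `string` field type.

```xml
<!-- Hypothetical schema.xml fragment: one field marks which source
     table / object type each document was loaded from. -->
<field name="doc_type" type="string" indexed="true" stored="true" required="true"/>
```

A request would then filter with something like `q=name:ipod&fq=doc_type:products`; since `fq` filters are cached independently of the main query, the per-type restriction is cheap after the first use.  But as noted above, all types still share one set of term statistics.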

On Wed, Sep 17, 2008 at 2:21 PM, Yonik Seeley <> wrote:
> On Wed, Sep 17, 2008 at 1:27 PM, Jason Rutherglen
> <> wrote:
>> If the configuration code is going to be rewritten then I would like
>> to see the ability to dynamically update the configuration and schema
>> without needing to reboot the server.
> Exactly.  Actually, multi-core allows you to instantiate a completely
> new core and swap it for the old one, but it's a bit of a heavyweight
> approach.
> The key is finding the right granularity of change.
> My current thought is that a schema object would not be mutable, but
> that one could easily swap in a new schema object for an index at any
> time.  That would allow a single request to see a stable view of the
> schema, while preventing having to make every aspect of the schema
> thread-safe.
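The swap-in approach described above could be sketched roughly as follows.  The class names here are illustrative, not Solr's actual API: an immutable schema object, and a holder whose reference is swapped atomically so in-flight requests keep a stable view without any per-field locking.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical immutable schema: all state is final, so any thread
// holding a reference sees a consistent, never-changing view.
final class IndexSchema {
    final int version;
    final List<String> fields;
    IndexSchema(int version, List<String> fields) {
        this.version = version;
        this.fields = List.copyOf(fields); // unmodifiable defensive copy
    }
}

final class SchemaHolder {
    // Swapping the reference is atomic; the schema itself never mutates.
    private final AtomicReference<IndexSchema> current;

    SchemaHolder(IndexSchema initial) {
        current = new AtomicReference<>(initial);
    }

    // A request pins the schema once and uses it for its whole lifetime.
    IndexSchema snapshot() { return current.get(); }

    // Reconfiguration builds a whole new schema and swaps it in;
    // requests started before the swap keep their old snapshot.
    void swap(IndexSchema next) { current.set(next); }
}
```

A request that called `snapshot()` before a swap continues to see the old schema, while new requests see the new one, which is exactly the "stable view per request" property described above.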
>> Also I would like the
>> configuration classes to just contain data and not have so many
>> methods that operate on the filesystem.
> That's the plan... completely separate the serialized and in-memory
> representations.
>> This way the configuration
>> object can be serialized, and loaded by the server dynamically.  It
>> would be great for the schema to work the same way.
> Nothing will stop one from using java serialization for config
> persistence, however I am a fan of human readable for config files...
> so much easier to debug and support.  Right now, people can
> cut-n-paste relevant parts of their config in email for support, or to
> a wiki to explain things, etc.
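A data-only config bean of the kind Jason suggests might be sketched like this; the class `SolrConfigData` and its fields are made up here, not real Solr classes.  Because the bean holds plain state and no filesystem logic, it can be turned into bytes and loaded by a server dynamically.

```java
import java.io.*;

// Hypothetical data-only config bean: just state, no methods that
// touch the filesystem, so it can be shipped over the wire.
final class SolrConfigData implements Serializable {
    private static final long serialVersionUID = 1L;
    final String indexDir;
    final int mergeFactor;
    SolrConfigData(String indexDir, int mergeFactor) {
        this.indexDir = indexDir;
        this.mergeFactor = mergeFactor;
    }
}

final class ConfigTransport {
    // Serialize the bean to bytes (e.g. to send to another node).
    static byte[] toBytes(SolrConfigData c) {
        try {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(buf)) {
                out.writeObject(c);
            }
            return buf.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Reconstruct the bean on the receiving side.
    static SolrConfigData fromBytes(byte[] bytes) {
        try (ObjectInputStream in =
                 new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return (SolrConfigData) in.readObject();
        } catch (IOException | ClassNotFoundException e) {
            throw new RuntimeException(e);
        }
    }
}
```

This works, but it illustrates the trade-off raised above: the byte stream is opaque, so nothing can be cut-and-pasted into an email or a wiki the way an XML config can.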
> Of course, if you are talking about being able to have custom filters
> or analyzers (new classes that don't even exist on the server yet),
> then it does start to get interesting.  This intersects with
> deployment in general... and I'm not sure what the right answer is.
> What if Lucene or Solr needs an upgrade?  It would be nice if that
> could also automatically be handled in a large cluster... what are
> the options for handling that?  Is there a role here for OSGi to play?
>  It sounds like at least some of that is outside of the Solr domain.
> An alternative to serializing everything would be to ship a new schema
> along with a new jar file containing the custom components.
> -Yonik
