lucene-solr-dev mailing list archives

From "Yonik Seeley (JIRA)" <>
Subject [jira] Commented: (SOLR-1277) Implement a Solr specific naming service (using Zookeeper)
Date Thu, 10 Dec 2009 20:30:18 GMT


Yonik Seeley commented on SOLR-1277:

Nice work Mark!  I'll try and get what you have up and running.

bq. So then the idea would be: a user sets up everything in the model (we need good tools for
this if that's the case), then the system builds the state automatically? When a search request
comes in, we grab which shards to hit, cache them, and use them until a Watch event tells
us to look again?

Yep... but there are race conditions, so each request should specify which shard it is querying
on that node.  The node needs to notify us if it no longer has that shard.
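
A minimal sketch of both halves of that answer: a shard list cached until a Watch event invalidates it, and a per-request guard against the race where a shard moves between lookup and query. The ZooKeeper lookup is abstracted behind a Supplier so the logic runs standalone; in practice it would wrap a zk.getChildren() call that re-registers the watcher on each fetch. All class and method names here are hypothetical, not Solr APIs.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Supplier;

/** Shard list cached until a Watch-style callback invalidates it. */
class ShardListCache {
    private final Supplier<List<String>> fetcher;  // e.g. wraps zk.getChildren(path, watcher)
    private final AtomicReference<List<String>> cached = new AtomicReference<>();

    ShardListCache(Supplier<List<String>> fetcher) {
        this.fetcher = fetcher;
    }

    /** Return the cached shard list, fetching lazily on first use. */
    List<String> getShards() {
        List<String> shards = cached.get();
        if (shards == null) {
            cached.compareAndSet(null, fetcher.get());
            shards = cached.get();
        }
        return shards;
    }

    /** Called from the Watcher when ZooKeeper signals a change. */
    void invalidate() {
        cached.set(null);
    }
}

/** Per-request check: the query names the shard it expects; the node rejects it if the shard moved. */
class ShardGuard {
    private volatile String currentShard;

    ShardGuard(String shard) { this.currentShard = shard; }

    void reassign(String shard) { this.currentShard = shard; }

    /** Throws if the requested shard is no longer served here, telling the client to refresh. */
    void verify(String requestedShard) {
        if (!requestedShard.equals(currentShard)) {
            throw new IllegalStateException("Shard " + requestedShard
                + " no longer hosted here; refresh the shard list and retry");
        }
    }
}
```

The stale-cache window is harmless as long as the node-side check catches the mismatch; the client then invalidates its cache and retries.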

bq. How about how a host registers itself?

Seems simple, but I ended up retyping and deleting my answer to you 3 times.  Some complicating factors:
- a machine may have multiple network interfaces
- a network interface may have multiple IP addresses (and either IPv4 or IPv6)
- there may be multiple NIS/DNS entries for an IP
- there may be multiple virtual machines on a single physical box

I do think that a node should be able to register itself though, and that a user should be
able to override that.
We could perhaps start off with just identifying a node by the IP address + port the servlet
container is bound to (or if multiple, just the first IP?) and model physical_box like
other topology items... rack, switch, datacenter, etc.
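
The ambiguity above is easy to demonstrate: a single machine commonly exposes several interfaces, each with one or more IPv4/IPv6 addresses, so "the node's IP" is underdetermined. The sketch below enumerates everything the JVM can see via java.net.NetworkInterface and shows one possible self-registration default (first non-loopback address + port). This is a standalone illustration, not Solr code, and the NodeAddresses name is hypothetical.

```java
import java.net.InetAddress;
import java.net.NetworkInterface;
import java.net.SocketException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

class NodeAddresses {
    /** Collect every address on every interface, in interface order. */
    static List<InetAddress> allAddresses() throws SocketException {
        List<InetAddress> result = new ArrayList<>();
        for (NetworkInterface nic : Collections.list(NetworkInterface.getNetworkInterfaces())) {
            result.addAll(Collections.list(nic.getInetAddresses()));
        }
        return result;
    }

    /**
     * One possible "register yourself" default: the first non-loopback
     * address plus the servlet container's port. A user-supplied override
     * would take precedence over this guess.
     */
    static String defaultNodeId(int port) throws SocketException {
        for (InetAddress addr : allAddresses()) {
            if (!addr.isLoopbackAddress()) {
                return addr.getHostAddress() + ":" + port;
            }
        }
        return "127.0.0.1:" + port;  // nothing but loopback available
    }
}
```

Running allAddresses() on a laptop with Wi-Fi, a VPN tunnel, and IPv6 enabled typically returns half a dozen candidates, which is why "just the first IP" has to be paired with a user override.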

> Implement a Solr specific naming service (using Zookeeper)
> ----------------------------------------------------------
>                 Key: SOLR-1277
>                 URL:
>             Project: Solr
>          Issue Type: New Feature
>    Affects Versions: 1.4
>            Reporter: Jason Rutherglen
>            Assignee: Grant Ingersoll
>            Priority: Minor
>             Fix For: 1.5
>         Attachments: log4j-1.2.15.jar, SOLR-1277.patch, SOLR-1277.patch, SOLR-1277.patch,
SOLR-1277.patch, zookeeper-3.2.1.jar
>   Original Estimate: 672h
>  Remaining Estimate: 672h
> The goal is to give Solr server clusters self-healing attributes
> where if a server fails, indexing and searching don't stop and
> all of the partitions remain searchable. For configuration, the
> ability to centrally deploy a new configuration without servers
> going offline.
> We can start with basic failover and go from there?
> Features:
> * Automatic failover (i.e. when a server fails, clients stop
> trying to index to or search it)
> * Centralized configuration management (i.e. new solrconfig.xml
> or schema.xml propagates to a live Solr cluster)
> * Optionally allow shards of a partition to be moved to another
> server (i.e. if a server gets hot, move the hot segments out to
> cooler servers). Ideally we'd have a way to detect hot segments
> and move them seamlessly. With NRT this becomes somewhat more
> difficult but not impossible?

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
