lucene-solr-dev mailing list archives

From "Yonik Seeley (JIRA)" <j...@apache.org>
Subject [jira] Commented: (SOLR-1277) Implement a Solr specific naming service (using Zookeeper)
Date Wed, 16 Dec 2009 21:20:18 GMT

    [ https://issues.apache.org/jira/browse/SOLR-1277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12791588#action_12791588 ]

Yonik Seeley commented on SOLR-1277:
------------------------------------

bq. How are we addressing a failed connection to a slave server, and instead of failing the
request, re-making the request to an adjacent slave?

Yes, I didn't spell it out, but that's the HA part of why you have multiple copies of a shard
(in addition to increasing capacity).
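As a rough illustration of that HA path, here is a minimal sketch of failing over across the copies of a shard; the names (pickReplica, isReachable) are hypothetical and not Solr APIs:

```java
import java.util.List;
import java.util.function.Predicate;

public class ShardFailover {
    /**
     * Try each copy of a shard in turn and route the request to the
     * first reachable replica; return null only if every copy is down.
     */
    public static String pickReplica(List<String> replicas,
                                     Predicate<String> isReachable) {
        for (String replica : replicas) {
            if (isReachable.test(replica)) {
                return replica; // send the request here
            }
        }
        return null; // the whole shard is unavailable
    }
}
```

With two or more copies per shard, a single dead node just shifts traffic to an adjacent copy instead of failing the request.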

bq. The way things work now, if someone searched during the GC, they'd get all the results
back, the search would just take longer. They'd see the hour glass spinning, know the results
were slow for this search, but still coming. I was/am not sure if we wanted to replicate
that.

I think we always need to support that.  Whether and when a Solr request times out should
be decided on a per-request basis, and the default should probably be not to time out at all
(or at least to have a very high timeout).  This really doesn't have anything to do with ZooKeeper.

ZooKeeper gives us the layout of the cluster.  It doesn't seem like we need (yet) fast failure
detection from ZooKeeper - other nodes can do this synchronously themselves (and would need
to anyway) on things like connection failures.  App-level timeouts should not mark the node
as failed since we don't know how long the request was supposed to take.
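That policy boils down to classifying the failure mode; a hypothetical sketch (names are illustrative, not Solr code):

```java
public class FailureClassifier {
    public enum Outcome { OK, CONNECTION_REFUSED, READ_TIMEOUT }

    /**
     * Only hard connection failures mark a node as failed. A read
     * timeout tells us nothing about node health, since we don't know
     * how long that particular request was supposed to take.
     */
    public static boolean shouldMarkNodeFailed(Outcome outcome) {
        return outcome == Outcome.CONNECTION_REFUSED;
    }
}
```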


> Implement a Solr specific naming service (using Zookeeper)
> ----------------------------------------------------------
>
>                 Key: SOLR-1277
>                 URL: https://issues.apache.org/jira/browse/SOLR-1277
>             Project: Solr
>          Issue Type: New Feature
>    Affects Versions: 1.4
>            Reporter: Jason Rutherglen
>            Assignee: Grant Ingersoll
>            Priority: Minor
>             Fix For: 1.5
>
>         Attachments: log4j-1.2.15.jar, SOLR-1277.patch, SOLR-1277.patch, SOLR-1277.patch,
SOLR-1277.patch, zookeeper-3.2.1.jar
>
>   Original Estimate: 672h
>  Remaining Estimate: 672h
>
> The goal is to give Solr server clusters self-healing attributes
> where if a server fails, indexing and searching don't stop and
> all of the partitions remain searchable. For configuration, the
> ability to centrally deploy a new configuration without servers
> going offline.
> We can start with basic failover and go from there?
> Features:
> * Automatic failover (i.e. when a server fails, clients stop
> trying to index to or search it)
> * Centralized configuration management (i.e. new solrconfig.xml
> or schema.xml propagates to a live Solr cluster)
> * Optionally allow shards of a partition to be moved to another
> server (i.e. if a server gets hot, move the hot segments out to
> cooler servers). Ideally we'd have a way to detect hot segments
> and move them seamlessly. With NRT this becomes somewhat more
> difficult but not impossible?

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

