lucene-dev mailing list archives

From "Ted Dunning (Commented) (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (SOLR-2765) Shard/Node states
Date Sat, 08 Oct 2011 01:31:29 GMT

    [ https://issues.apache.org/jira/browse/SOLR-2765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13123345#comment-13123345 ]

Ted Dunning commented on SOLR-2765:
-----------------------------------

{quote}
That answers part of it. I am trying to consider this with regard to the project I am currently
working on. On this project we also have the case where additional slices/shards become
available, serving more data. So in this case it's not a question of replicas but of
completely new slices.
{quote}
This is just a case of slices appearing that are not yet replicated.  It should be no different
from the case where all of the nodes handling those slices die simultaneously.

{quote}
We also distribute our queries (much like the latest solrj on trunk does) by randomly
choosing a server with the role "searcher". I think this means that each searcher needs to
be aware of all of the other available servers with the role "searcher" in order to execute
the query. I suppose the servers with the role "indexer" do not need to build the watchers,
as they are not being used in the query process (assuming they aren't also searchers).
{quote}
I don't follow this at all.  Why would servers need to know about other servers?

This wouldn't be quite n^2 in any case.  It would be order n for each change, but the constant
factor would be quite small for reasonable clusters.  The total cost would be about n^2/2 as the
cluster came up, but even for 1000 nodes this would clear in less than a minute, which is much
less time than it would take to actually bring the servers up.
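
To make that arithmetic concrete (my numbers; the event rate is an assumption, not a measurement): when the i-th node comes up it notifies the i-1 watchers already registered, so bringing up n nodes costs about n(n-1)/2, roughly n^2/2, notifications in total.  For n = 1000 that is about 500,000 watch events, and at even 10,000 events per second ZK clears the backlog in under a minute.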

I still don't think that there would be any need for all servers to know about this.  The
clients need to know, but not the servers.

If you mean that a searcher serves as a query proxy between the clients and the servers, then
you would require a notification to each searcher for each change.  If you have k searchers
and n nodes, bringing up the cluster would require about kn/2 notifications.  For 100 proxies
and 1000 search nodes, this is a few seconds of ZK work.  Again, this is much less than the
time required to bring up so many nodes.
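
Concretely: 100 proxies x 1000 nodes / 2 = 50,000 notifications, and at an assumed rate of 10,000 watch events per second that is about five seconds of ZK work.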

If you put server status into a single file, far fewer notifications would actually be sent:
as notifications are delayed, the watchers are delayed in being reset, so you get natural
quenching while still staying very nearly up to date.
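
The quenching falls out of ZK's one-shot watch semantics.  Here is a minimal sketch (my own illustration, not Solr code; the znode path and the handleStatus hook are hypothetical):

{code:java}
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

public class StatusFileWatcher implements Watcher {
    // Hypothetical path for the single summarized status file.
    private static final String STATUS_PATH = "/cluster/status";
    private final ZooKeeper zk;

    public StatusFileWatcher(ZooKeeper zk) {
        this.zk = zk;
    }

    // Reading the file and re-arming the one-shot watch happen in the same
    // getData call, so any writes that landed while we were busy are folded
    // into this single read.
    public void readStatus() throws KeeperException, InterruptedException {
        Stat stat = new Stat();
        byte[] data = zk.getData(STATUS_PATH, this, stat);
        handleStatus(data, stat.getVersion());
    }

    @Override
    public void process(WatchedEvent event) {
        if (event.getType() == Event.EventType.NodeDataChanged) {
            try {
                readStatus();  // one read, however many writes happened meanwhile
            } catch (Exception e) {
                // real code would retry with backoff
            }
        }
    }

    // Hypothetical hook: parse the summarized file into a local cluster view.
    private void handleStatus(byte[] data, int version) {
    }
}
{code}

Because the watch is only re-armed on the next read, a burst of m writes costs each watcher at most a read or two rather than m notifications.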


Regarding putting the information about the available collections into the live nodes, I think
that would be inefficient for the clients compared to putting it into a summarized file.
And for commanding the nodes, it is very bad practice to mix command and status files in ZK.
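
For example, one layout that keeps the two apart (paths purely illustrative, not a proposal for the actual layout):

{code}
/cluster/status               # summarized status file; clients read and watch only this
/cluster/nodes/node-0001      # ephemeral marker per live node
/cluster/command/node-0001    # commands to a node, kept separate from status
{code}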

I am a ZooKeeper brother, btw.  (PMC member and all that)

                
> Shard/Node states
> -----------------
>
>                 Key: SOLR-2765
>                 URL: https://issues.apache.org/jira/browse/SOLR-2765
>             Project: Solr
>          Issue Type: Sub-task
>          Components: SolrCloud, update
>            Reporter: Yonik Seeley
>             Fix For: 4.0
>
>         Attachments: incremental_update.patch, shard-roles.patch
>
>
> Need state for shards that indicate they are recovering, active/enabled, or disabled.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
For additional commands, e-mail: dev-help@lucene.apache.org

