lucene-dev mailing list archives

From jimtronic <>
Subject Is this a SolrCloud bug? Or expected behavior?
Date Tue, 18 Sep 2012 15:49:31 GMT
I've got a setup as follows:

- 13 cores 
- 2 servers 
- running Solr 4.0 Beta with numShards=1 and an external ZooKeeper.
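
For context, each node is started along these lines (the ZooKeeper address is a placeholder):

java -DnumShards=1 -DzkHost=zk1.example.com:2181 -jar start.jar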

I'm trying to figure out why some complex queries are running so slowly in
this setup versus quickly in standalone mode.

Given a query like: /select?q=(some complex query) 

It runs fast, and gets faster as the caches warm, when only one server is running:

1. ?fl=*&q=(complex query)&wt=json&rows=24 (QTime 3) 

When I issue the same query to the cluster and watch the logs, it looks
like it's actually performing the query three times, like so:

1. ?q=(complex query)&isShard=true (QTime 2)

2. ?ids=(ids from query 1)&q=(complex query)&isShard=true (QTime 4)

3.  ?fl=*&q=(complex query)&wt=json&rows=24 (QTime 459) 

Why is it performing #3? It already has everything it needs after #2, and #3
seems to be really slow even when warmed and cached.

As stated above, this query is fast when running on a single server that is
warmed and cached. 

Since my query is complex, I could understand some slowness if I were
attempting this across multiple shards, but since there's only one shard,
shouldn't it just pick one server and query it?

I can "fix" this by adding "distrib=false" to my original queries, but then
that sort of makes the whole cluster meaningless.
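
For example, the same query with distribution disabled (parameters taken from the query above):

?fl=*&q=(complex query)&wt=json&rows=24&distrib=false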

Ideally, I'd just spin up a new server that attaches itself to ZooKeeper,
add it to my load balancer, and forget about it.

