lucene-solr-user mailing list archives

From "Victor D'agostino" <victor.d.agost...@fiducial.net>
Subject Re: Setting up a two nodes Solr Cloud 5.4.1 environment
Date Tue, 29 Mar 2016 11:52:14 GMT
Hi guys

It seems I tried to add two additional shards to an existing Solr 
ensemble, and this is not supported (or I didn't find out how).

So after setting up ZooKeeper, I first set up node 2 and then set up 
node 1 with:
wget --no-proxy 
"http://node1:8983/solr/admin/collections?&collection.configName=xxxxx&name=db&replicationFactor=1&action=CREATE&numShards=4&maxShardsPerNode=2"

Because node 2 was already up, two shards were created on each node.
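(For the record: in a compositeId collection CREATESHARD is refused, and the supported Collections API way to grow the collection after creation is SPLITSHARD. A minimal sketch, reusing the node1 host and db collection names from above; it only builds and prints the URL rather than calling a live cluster:)

```shell
# SPLITSHARD divides an existing shard's hash range in two; it is the
# Collections API way to add shards to a compositeId collection.
# "node1" and "shard1" mirror the names used above.
URL="http://node1:8983/solr/admin/collections?action=SPLITSHARD&collection=db&shard=shard1"
echo "$URL"
# Against a live cluster: wget --no-proxy "$URL"
```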

Regards
Victor

-------- Original Message --------
*Subject: *Re: Setting up a two nodes Solr Cloud 5.4.1 environment
*From: *Victor D'agostino <victor.d.agostino@fiducial.net>
*To: *solr-user@lucene.apache.org
*Cc: *Erick Erickson <erickerickson@gmail.com>
*Date: *29/03/2016 09:58
> Hi Erick
>
> Thanks for your help; here is what I've done.
>
> 1. I deleted the ZooKeeper and Solr installations.
> 2. I set up ZooKeeper on my two servers.
> 3. I successfully set up Solr Cloud node 1 with the same API call (one 
> collection named db and two cores):
>      wget --no-proxy 
> "http://$HOSTNAME:8983/solr/admin/collections?numShards=2&collection.configName=copiemail3&route.name=compositeId&maxShardsPerNode=2&router.field=mail_id&name=db&replicationFactor=1&action=CREATE"
>
> 4. I didn't use the core API anymore.
>     I tried to set up node 2 with the collections API 
> <https://cwiki.apache.org/confluence/display/solr/Collections+API#CollectionsAPI-api8>
> and here is the error message ("shards can be added only to 'implicit' 
> collections"):
>
> *Request:*
> wget --no-proxy 
> "http://$HOSTNAME:8983/solr/admin/collections?action=CREATESHARD&collection=db&shard=db_shard3_replica1"
>
> *Error log:*
> 2016-03-29 08:49:09.422 INFO  (qtp2085805465-13) [   ] 
> o.a.s.h.a.CollectionsHandler Invoked Collection Action :createshard 
> with params shard=db_shard3_replica1&action=CREATESHARD&collection=db
> 2016-03-29 08:49:09.425 ERROR (qtp2085805465-13) [   ] 
> o.a.s.h.RequestHandlerBase org.apache.solr.common.SolrException: 
> shards can be added only to 'implicit' collections
>         at 
> org.apache.solr.handler.admin.CollectionsHandler$CollectionOperation$10.call(CollectionsHandler.java:468)
>         at 
> org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:176)
>         at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:156)
>         at 
> org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:664)
>         at 
> org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:438)
>         at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:223)
>         at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:181)
>         at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
>         at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
>         at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>         at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
>         at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
>         at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
>         at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
>         at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>         at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
>         at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>         at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
>         at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
>         at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
>         at org.eclipse.jetty.server.Server.handle(Server.java:499)
>         at 
> org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
>         at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
>         at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
>         at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
>         at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
>         at java.lang.Thread.run(Thread.java:745)
>
>
> If I do a status check on Solr node 2 (lxlyosol31), I can see that ZooKeeper 
> is OK but node 2 is not in the cluster:
>
> /etc/init.d/solr status
>
> Found 1 Solr nodes:
>
> Solr process 3883 running on port 8983
> {
>   "solr_home":"/data/solr-5.4.1/server/solr",
>   "version":"5.4.1 1725212 - jpountz - 2016-01-18 11:51:45",
>   "startTime":"2016-03-29T07:49:15.192Z",
>   "uptime":"0 days, 0 hours, 7 minutes, 48 seconds",
>   "memory":"259.7 MB (%10.8) of 2.4 GB",
>   "cloud":{
>     "ZooKeeper":"lxlyosol30:2181,lxlyosol31:2181",
>     "liveNodes":"2",
>     "collections":"1"}}
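(The error above is expected: CREATESHARD only applies to the implicit router. For a compositeId collection, the Collections API call for placing a replica of an existing shard on the second node would be ADDREPLICA. A sketch, assuming lxlyosol31's live-node name follows the usual host:8983_solr pattern; it only builds and prints the URL:)

```shell
# ADDREPLICA places a replica of an existing shard on a chosen node; unlike
# CREATESHARD it works with the compositeId router.
# "lxlyosol31:8983_solr" is a placeholder for the target live-node name as
# shown in the Cloud tree of the admin UI.
URL="http://node1:8983/solr/admin/collections?action=ADDREPLICA&collection=db&shard=shard1&node=lxlyosol31:8983_solr"
echo "$URL"
# Against a live cluster: wget --no-proxy "$URL"
```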
>
>
> Regards
> Victor
>
>
> -------- Original Message --------
> *Subject: *Re: Setting up a two nodes Solr Cloud 5.4.1 environment
> *From: *Erick Erickson <erickerickson@gmail.com>
> *To: *solr-user <solr-user@lucene.apache.org>
> *Date: *25/03/2016 19:44
>> bq:  (collections API won't work because i use compositeId routing mode)
>>
>> This had better NOT be true or SolrCloud is horribly broken. compositeId is
>> the default and it's tested all the time by unit tests. So is implicit, for
>> that matter.
>>
>> One question I have is that you've specified a routing field with this param:
>>
>> router.field=mail_id
>>
>> so the data is being routed based on a hash of that field; your GUID-based
>> id field is totally ignored for routing purposes. That may be what you intend,
>> but it's confusing that you mentioned the GUID in that context.
>>
>> As far as Solr is concerned, you only have a 2-shard system on node1, I think.
>> There's some legacy code in Solr that tries to add cores back into a collection
>> when it thinks it sees an orphan; perhaps that's being triggered (incorrectly)
>> by the presence of the cores on node2 you're creating with the cores admin API.
>>
>> Under any circumstances, here's what I recommend:
>> 1> Wipe out your current collection, Zookeeper's data too.
>> 2> You should be able to simply create your collection with both
>>      nodes up with the appropriate replicationFactor etc.
>> 3> Post any errors you get from using the Collections API and
>>     we'll figure out what's up.
>> 4> Do not use the Core Admin API to try to add replicas to a SolrCloud
>>       collection. Really. Under the covers, SolrCloud actually _does_
>>       use the core admin API to create replicas, but as you're seeing it
>>       must be used exactly correctly to give the desired results.
>> 5> You should _not_ have to put any files anywhere on the nodes
>>       to create replicas....
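(Steps 1> and 2> could be sketched as follows; the hostname and config name are taken from earlier in this thread, and the script only prints the URLs it would call rather than hitting a live cluster:)

```shell
# Hedged sketch of the recommendation above. "node1" is a placeholder;
# either node works once the old state is gone.
HOST=node1
DELETE_URL="http://${HOST}:8983/solr/admin/collections?action=DELETE&name=db"
CREATE_URL="http://${HOST}:8983/solr/admin/collections?action=CREATE&name=db&numShards=4&replicationFactor=1&maxShardsPerNode=2&collection.configName=copiemail3&router.field=mail_id"
echo "$DELETE_URL"
# With BOTH nodes up, a single CREATE call lets Solr spread the 4 shards
# across the 2 live nodes by itself.
echo "$CREATE_URL"
```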
>>
>> Best,
>> Erick
>>
>> On Fri, Mar 25, 2016 at 10:05 AM, Victor D'agostino
>> <victor.d.agostino@fiducial.net>  wrote:
>>> Hi guys
>>>
>>> I am trying to set up a Solr Cloud environment of two Solr 5.4.1 nodes, but
>>> the data are always indexed on the first node although the unique id is a
>>> GUID.
>>>
>>> It looks like I can't add an additional node. Could you tell me where I'm
>>> going wrong?
>>>
>>> I am trying to set up a collection named "db" with two shards on each node,
>>> without replicas. The config is named "copiemail3".
>>>
>>>
>>> On node 1, I put schema.xml, solrconfig.xml, etc. in
>>> $SOLRHOME/configsets/copiemail3/conf/
>>> Then I do an upconfig to zkp1 with zkcli.sh.
>>> I start Solr and create my collection with the API:
>>>   wget --no-proxy
>>> "http://$HOSTNAME:8983/solr/admin/collections?numShards=2&collection.configName=copiemail3&route.name=compositeId&maxShardsPerNode=2&router.field=mail_id&name=db&replicationFactor=1&action=CREATE"
>>> My first two shards are created, Cloud is enabled, and I also enable ping
>>> with the API:
>>>   wget --no-proxy
>>> "http://$HOSTNAME:8983/solr/db_shard1_replica1/admin/ping?action=enable"
>>>   wget --no-proxy
>>> "http://$HOSTNAME:8983/solr/db_shard2_replica1/admin/ping?action=enable"
>>> Finally, I restart Solr.
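(The upconfig step, spelled out: zkcli.sh ships with Solr 5.x under server/scripts/cloud-scripts/. The ZK address "zkp1:2181" and the install path below are assumptions based on this thread; the sketch only builds and prints the command:)

```shell
# Upload the config set to ZooKeeper so the Collections API can find it
# under the name "copiemail3". Paths and the ZK host are assumptions.
SOLRHOME=/data/solr-5.4.1/server/solr
CMD="zkcli.sh -zkhost zkp1:2181 -cmd upconfig \
  -confdir $SOLRHOME/configsets/copiemail3/conf -confname copiemail3"
echo "$CMD"
```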
>>>
>>> On node 2,
>>> I start Solr and create the two shards with the cores API (the collections
>>> API won't work because I use compositeId routing mode):
>>>   wget --no-proxy
>>> "http://$HOSTNAME:8983/solr/admin/cores?action=CREATE&schema=schema.xml&shard=shard3&instanceDir=db_shard3_replica1&indexInfo=false&name=db_shard3_replica1&config=solrconfig.xml&collection=db&dataDir=data"
>>>   wget --no-proxy
>>> "http://$HOSTNAME:8983/solr/admin/cores?action=CREATE&schema=schema.xml&shard=shard4&instanceDir=db_shard4_replica1&indexInfo=false&name=db_shard4_replica1&config=solrconfig.xml&collection=db&dataDir=data"
>>> As on node 1, I activate the ping and restart Solr.
>>>
>>> On each Solr admin interface I can see the ZooKeeper config is good (4 live
>>> nodes). The Cloud view seems OK because I have my db collection, with 4
>>> shards on two nodes (all leaders).
>>>
>>>
>>> Search requests are well distributed, but as I said before, the data are
>>> always indexed on the two shards on the first node:
>>>
>>>
>>>
>>> [root@node1 ~]# /data/solr-5.4.1/bin/solr healthcheck -c db -z
>>> localhost:2181
>>> {
>>>    "collection":"db",
>>>    "status":"healthy",
>>>    "numDocs":10000,
>>>    "numShards":4,
>>>    "shards":[
>>>      {
>>>        "shard":"shard1",
>>>        "status":"healthy",
>>>        "replicas":[{
>>>            "name":"core_node2",
>>>            "url":"http://10.69.220.46:8983/solr/db_shard1_replica1/",
>>>            "numDocs":5175,
>>>            "status":"active",
>>>            "uptime":"0 days, 0 hours, 14 minutes, 35 seconds",
>>>            "memory":"339.3 MB (%14.1) of 2.4 GB",
>>>            "leader":true}]},
>>>      {
>>>        "shard":"shard2",
>>>        "status":"healthy",
>>>        "replicas":[{
>>>            "name":"core_node1",
>>>            "url":"http://10.69.220.46:8983/solr/db_shard2_replica1/",
>>>            "numDocs":4825,
>>>            "status":"active",
>>>            "uptime":"0 days, 0 hours, 14 minutes, 35 seconds",
>>>            "memory":"339.3 MB (%14.1) of 2.4 GB",
>>>            "leader":true}]},
>>>      {
>>>        "shard":"shard3",
>>>        "status":"healthy",
>>>        "replicas":[{
>>>            "name":"core_node3",
>>>            "url":"http://10.69.220.47:8983/solr/db_shard3_replica1/",
>>>            "numDocs":0,
>>>            "status":"active",
>>>            "uptime":"0 days, 0 hours, 13 minutes, 44 seconds",
>>>            "memory":"177 MB (%7.4) of 2.4 GB",
>>>            "leader":true}]},
>>>      {
>>>        "shard":"shard4",
>>>        "status":"healthy",
>>>        "replicas":[{
>>>            "name":"core_node4",
>>>            "url":"http://10.69.220.47:8983/solr/db_shard4_replica1/",
>>>            "numDocs":0,
>>>            "status":"active",
>>>            "uptime":"0 days, 0 hours, 13 minutes, 44 seconds",
>>>            "memory":"177 MB (%7.4) of 2.4 GB",
>>>            "leader":true}]}]}
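(The empty shards 3 and 4 are consistent with hash routing: a compositeId collection created with numShards=2 splits the whole 32-bit hash range between those two shards at CREATE time, so cores added later via the core admin API own no slice of the range and never receive documents. Purely to illustrate range-style partitioning; this is NOT Solr's hash function, which is MurmurHash2 over router.field; cksum stands in here:)

```shell
# Illustrative only: every routing value hashes into a range carved up
# among the ORIGINAL shards. cksum is a stand-in for Solr's MurmurHash2.
NUM_SHARDS=2   # the collection was created with numShards=2
for id in "aaa@example.com" "bbb@example.com" "ccc@example.com"; do
  h=$(printf '%s' "$id" | cksum | cut -d' ' -f1)
  echo "$id -> shard$(( h % NUM_SHARDS + 1 ))"
done
# Shards created outside CREATE/SPLITSHARD have no hash range, which
# matches the numDocs:0 on shard3 and shard4 in the healthcheck above.
```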
>>>
>>>
>>> Thanks for your help
>>>
>>> Victor
>>>
>>>
>>> 
>>> ________________
>>> This message and any attached documents may contain confidential
>>> information. If it was not intended for you, please delete it and notify
>>> the sender immediately. Any use of this message not in accordance with its
>>> purpose, and any distribution or publication, in whole or in part and by
>>> any means, is strictly forbidden. As communications over the Internet are
>>> not secure, the integrity of this message is not guaranteed and the
>>> sending company cannot be held responsible for its content.
>


