lucene-solr-user mailing list archives

From Will Miller <will.mil...@ospgroup.com>
Subject All documents indexed into the same shard despite different prefix in id field
Date Mon, 15 Dec 2014 19:39:25 GMT
I have a SolrCloud cluster with two servers, and I created a collection with two shards using
this command:


http://server1:8983/admin/collections?action=CREATE&name=products&numShards=2
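
As far as I know, compositeId is the default router when numShards is given and replicationFactor defaults to 1, so I believe the call above is equivalent to spelling those defaults out explicitly:


http://server1:8983/admin/collections?action=CREATE&name=products&numShards=2&replicationFactor=1&router.name=compositeId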


When I look at clusterstate.json in the Solr admin page, I can see the collection is correctly
placed across the two servers:


  "products":{
    "replicationFactor":"1",
    "shards":{
      "shard1":{
        "range":"80000000-ffffffff",
        "state":"active",
        "replicas":{"core_node2":{
            "core":"products_shard1_replica1",
            "base_url":"http://10.0.0.5:8983/solr",
            "node_name":"10.0.0.5:8983_solr",
            "state":"active",
            "leader":"true"}}},
      "shard2":{
        "range":"0-7fffffff",
        "state":"active",
        "replicas":{"core_node1":{
            "core":"products_shard2_replica1",
            "base_url":"http://10.0.0.6:8983/solr",
            "node_name":"10.0.0.6:8983_solr",
            "state":"active",
            "leader":"true"}}}},
    "router":{"name":"compositeId"},
    "maxShardsPerNode":"1",
    "autoAddReplicas":"false"}


However, when I index products with different prefixes in the id field, all of the documents
go into the same shard. This shows up in shards.info when querying for *:*:


"shards.info":{
    "http://10.0.0.6:8983/solr/products_shard2_replica1/":{
      "time":11,
      "shardAddress":"http://10.0.0.6:8983/solr/products_shard2_replica1/",
      "numFound":0,
      "maxScore":"NaN"},
    "http://10.0.0.5:8983/solr/products_shard1_replica1/":{
      "time":11,
      "shardAddress":"http://10.0.0.5:8983/solr/products_shard1_replica1/",
      "numFound":230,
      "maxScore":"NaN"}}


There were 230 documents in the set I indexed, using three different prefixes (RM!, WW! and
BH!), yet all of them were routed to the same shard. Is there anything I can do to debug this
further?


Thanks,

Will




