From: Apache Wiki <wikidi...@apache.org>
Subject: [Solr Wiki] Trivial Update of "SolrCloud" by YonikSeeley
Date: Sun, 14 Mar 2010 05:23:12 GMT
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Solr Wiki" for change notification.

The "SolrCloud" page has been changed by YonikSeeley.
The comment on this change is: creating core with CoreAdmin.
http://wiki.apache.org/solr/SolrCloud?action=diff&rev1=29&rev2=30

--------------------------------------------------

  
  When Solr runs an embedded ZooKeeper server, it defaults to using the Solr port plus 1000 as the ZooKeeper client port. In addition, it defaults to the client port plus one for the ZooKeeper server port, and the client port plus two for the ZooKeeper leader election port. So in the first example, with Solr running at 8983, the embedded ZooKeeper server used port 9983 for the client port and ports 9984 and 9985 for the server ports.
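  
  For illustration, a minimal sketch of these defaults in action, assuming the Jetty start.jar setup and the zkRun/zkHost system properties used in the examples above (the second node's port follows the 7574 convention used elsewhere on this page):
  
  {{{
  # first node: Solr on 8983; the embedded ZooKeeper client port defaults to 9983
  java -DzkRun -jar start.jar
  
  # second node: Solr on 7574, pointed at the first node's embedded ZooKeeper
  java -Djetty.port=7574 -DzkHost=localhost:9983 -jar start.jar
  }}}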
  
+ 
+ == Creating cores via CoreAdmin ==
+ New Solr cores may also be created and associated with a collection via CoreAdmin.
+ 
+ Additional cloud-related parameters for the CREATE action:
+  * '''collection''' - the name of the collection this core belongs to. Defaults to the name of the core.
+  * '''shard''' - the shard id this core represents.
+  * '''collection.<param>=<value>''' - causes a property of <param>=<value> to be set if a new collection is being created.
+   * Use collection.configName=<configname> to point to the config for a new collection (see the second example below).

+ 
+ Example:
+ {{{
+ curl 'http://localhost:8983/solr/admin/cores?action=CREATE&name=mycore&collection=collection1&shard=shard2'
+ }}}
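+ 
+ For example, the first core of a brand-new collection can be created and pointed at a named config via collection.configName. This sketch assumes a config set called myconf has already been uploaded to ZooKeeper, as in the bootstrap examples above; the core, collection, and shard names are illustrative:
+ {{{
+ curl 'http://localhost:8983/solr/admin/cores?action=CREATE&name=mycore2&collection=newcollection&shard=shard1&collection.configName=myconf'
+ }}}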
+ 
+ 
- == Distributed Request ==
+ == Distributed Requests ==
  Explicitly specify the addresses of shards you want to query:
  
  {{{
@@ -158, +174 @@

  {{{
  http://localhost:8983/solr/collection1/select?distrib=true
  }}}
+ Query specific shard ids. In this example, the user has partitioned the index by date, creating a new shard every month.
+ 
+ {{{
+ http://localhost:8983/solr/collection1/select?shards=shard_200812,shard_200912,shard_201001&distrib=true
+ }}}
  NOT IMPLEMENTED: Query all shards of a compatible collection, explicitly specified:
  
  {{{
- http://localhost:8983/solr/collection1/select?collection=collection1
- }}}
- NOT IMPLEMENTED: Query all shards of a compatible collection, explicitly specified:
- 
- {{{
  http://localhost:8983/solr/collection1/select?collection=collection1_recent
  }}}
  NOT IMPLEMENTED: Query all shards of multiple compatible collections, explicitly specified:
  
  {{{
  http://localhost:8983/solr/collection1/select?collection=collection1_NY,collection1_NJ,collection1_CT
- }}}
- Query specific shard ids. In this example, the user has partitioned the index by date, creating a new shard every month.
- 
- {{{
- http://localhost:8983/solr/collection1/select?shards=shard_200812,shard_200912,shard_201001&distrib=true
  }}}
  = Developer Section =
  == TODO ==
   * when this stuff is merged to trunk, integrate the user documentation above into DistributedSearch
-  * hook up the cloud state to distributed search
    * optionally allow user to query by collection
    * optionally allow user to query by multiple collections (assume schemas are compatible)
   * SolrJ support for users to get the server list to query from ZK
@@ -189, +199 @@

    * this includes a new code path where there may be no servers up for a shard (i.e. no exception from the LB server is thrown because we never try)
    * seems to require propagating more info into the search handler... need to know what logical shard is missing? if it's just a list of
     . URLs (the shard addresses) then servers that aren't up would just be represented by a blank space: localhost:8983,,localhost:7574
-  * optionally run ZK server in Solr for simple dev cluster (zk data dir under the solr data dir?)
-  * document/demo the simplest two server distributed search solution in "getting started"... make it easy!
-  * look into changing the name of the ugly DEFAULT_CORE (SOLR-1722)
   * when using master/slave replication, optionally remove the periodic polling that slaves do and replace with a watch on a znode that can immediately ping/pull an index when the version changes. Seems like low priority since index version polls can be frequent with low overhead.
  
  == High level design goals ==
