lucene-solr-commits mailing list archives

From Apache Wiki <wikidi...@apache.org>
Subject [Solr Wiki] Update of "SolrCloud" by AlanWoodward
Date Wed, 08 Aug 2012 09:41:56 GMT
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Solr Wiki" for change notification.

The "SolrCloud" page has been changed by AlanWoodward:
http://wiki.apache.org/solr/SolrCloud?action=diff&rev1=54&rev2=55

Comment:
Update configuration locations and admin GUI URLs

  
  {{{
  cd example
- java -Dbootstrap_confdir=./solr/collection1/conf -Dcollection.configName=myconf -DzkRun -DnumShards=2 -jar start.jar
+ java -Dbootstrap_confdir=./solr/conf -Dcollection.configName=myconf -DzkRun -DnumShards=2 -jar start.jar
  }}}
   * {{{-DzkRun}}} causes an embedded zookeeper server to be run as part of this Solr server.
-  * {{{-Dbootstrap_confdir=./solr/collection1/conf}}} Since we don't yet have a config in zookeeper, this parameter causes the local configuration directory {{{./solr/collection1/conf}}} to be uploaded as the "myconf" config.  The name "myconf" is taken from the "collection.configName" param below.
+  * {{{-Dbootstrap_confdir=./solr/conf}}} Since we don't yet have a config in zookeeper, this parameter causes the local configuration directory {{{./solr/conf}}} to be uploaded as the "myconf" config.  The name "myconf" is taken from the "collection.configName" param below.
   * {{{-Dcollection.configName=myconf}}} sets the config to use for the new collection. Omitting this param will cause the config name to default to "configuration1".
   * {{{-DnumShards=2}}} the number of logical partitions we plan on splitting the index into.
  
- Browse to http://localhost:8983/solr/#/cloud to see the state of the cluster (the zookeeper distributed filesystem).
+ Browse to http://localhost:8983/solr/#/~cloud to see the state of the cluster (the zookeeper distributed filesystem).
  
  You can see from the zookeeper browser that the Solr configuration files were uploaded under "myconf", and that a new document collection called "collection1" was created.  Under collection1 is a list of shards, the pieces that make up the complete collection.
  
@@ -56, +56 @@

   * {{{-Djetty.port=7574}}}  is just one way to tell the Jetty servlet container to use a different port.
   * {{{-DzkHost=localhost:9983}}} points to the Zookeeper ensemble containing the cluster state.  In this example we're running a single Zookeeper server embedded in the first Solr server.  By default, an embedded Zookeeper server runs at the Solr port plus 1000, so 9983.
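For reference, these flags belong to the command that starts a second Solr instance against the same ZooKeeper (the full command sits outside this diff hunk). A sketch, assuming the example2 directory used later on this page:

{{{
cd example2
java -Djetty.port=7574 -DzkHost=localhost:9983 -jar start.jar
}}}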
  
- If you refresh the zookeeper browser, you should now see both shard1 and shard2 in collection1.  View http://localhost:8983/solr/#/cloud.
+ If you refresh the zookeeper browser, you should now see both shard1 and shard2 in collection1.  View http://localhost:8983/solr/#/~cloud.
  
  Next, index some documents. If you want to whip up some Java you can use the CloudSolrServer solrj impl and simply init it with the address to ZooKeeper. Or simply randomly choose which instance to add documents to - they will be automatically forwarded to where they belong:
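A minimal SolrJ sketch of that approach, assuming the Solr 4.x CloudSolrServer API, the embedded ZooKeeper at localhost:9983 from this example, and illustrative field values:

{{{
import org.apache.solr.client.solrj.impl.CloudSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class CloudIndexer {
  public static void main(String[] args) throws Exception {
    // Init with the ZooKeeper address; the client reads the cluster state from there.
    CloudSolrServer server = new CloudSolrServer("localhost:9983");
    server.setDefaultCollection("collection1");

    SolrInputDocument doc = new SolrInputDocument();
    doc.addField("id", "1");                  // illustrative field values
    doc.addField("name", "A test document");
    server.add(doc);    // forwarded to the shard where this document belongs
    server.commit();
    server.shutdown();
  }
}
}}}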
  
@@ -94, +94 @@

  cd example2B
  java -Djetty.port=7500 -DzkHost=localhost:9983 -jar start.jar
  }}}
- Refresh the zookeeper browser page [[http://localhost:8983/solr/#/cloud|Solr Zookeeper Admin UI]] and verify that 4 solr nodes are up, and that each shard is present at 2 nodes.
+ Refresh the zookeeper browser page [[http://localhost:8983/solr/#/~cloud|Solr Zookeeper Admin UI]] and verify that 4 solr nodes are up, and that each shard is present at 2 nodes.
  
  Because we have been telling Solr that we want two logical shards, instances 3 and 4 are automatically assigned to be replicas of instances one and two when they start up.
  
@@ -128, +128 @@

  
  {{{
  cd example
- java -Dbootstrap_confdir=./solr/collection1/conf -Dcollection.configName=myconf -DzkRun -DzkHost=localhost:9983,localhost:8574,localhost:9900 -DnumShards=2 -jar start.jar
+ java -Dbootstrap_confdir=./solr/conf -Dcollection.configName=myconf -DzkRun -DzkHost=localhost:9983,localhost:8574,localhost:9900 -DnumShards=2 -jar start.jar
  }}}
  {{{
  cd example2
@@ -142, +142 @@

  cd example2B
  java -Djetty.port=7500 -DzkHost=localhost:9983,localhost:8574,localhost:9900 -jar start.jar
  }}}
- Now since we are running three embedded zookeeper servers as an ensemble, everything can keep working even if a server is lost. To demonstrate this, kill the exampleB server by pressing CTRL+C in its window and then browse to the [[http://localhost:8983/solr/#/cloud|Solr Zookeeper Admin UI]] to verify that the zookeeper service still works.
+ Now since we are running three embedded zookeeper servers as an ensemble, everything can keep working even if a server is lost. To demonstrate this, kill the exampleB server by pressing CTRL+C in its window and then browse to the [[http://localhost:8983/solr/#/~cloud|Solr Zookeeper Admin UI]] to verify that the zookeeper service still works.
  
  == ZooKeeper ==
  A group of Zookeeper servers running together for fault tolerance and high availability is called an ensemble.  For production, it's recommended that you run an external zookeeper ensemble rather than having Solr run embedded servers.  See the [[http://zookeeper.apache.org/|Apache ZooKeeper]] site for more information on downloading and running a zookeeper ensemble. More specifically, try [[http://zookeeper.apache.org/doc/r3.3.4/zookeeperStarted.html|Getting Started]] and [[http://zookeeper.apache.org/doc/r3.3.4/zookeeperAdmin.html|ZooKeeper Admin]]. It's actually pretty simple to get going. You can stick to having Solr run ZooKeeper, but keep in mind that a ZooKeeper cluster is not easily changed dynamically. Until further support is added to ZooKeeper, changes are best done with rolling restarts. Handling this in a separate process from Solr will usually be preferable.
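As a sketch of that setup, a Solr node simply points {{{-DzkHost}}} at the external ensemble instead of using {{{-DzkRun}}} (hostnames are placeholders; 2181 is ZooKeeper's conventional client port; on the very first run you would still pass the bootstrap params shown earlier):

{{{
cd example
java -DzkHost=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181 -DnumShards=2 -jar start.jar
}}}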
