lucene-solr-commits mailing list archives

From Apache Wiki <wikidi...@apache.org>
Subject [Solr Wiki] Trivial Update of "SolrCloud" by YonikSeeley
Date Sun, 31 Jan 2010 22:13:51 GMT
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Solr Wiki" for change notification.

The "SolrCloud" page has been changed by YonikSeeley.
The comment on this change is: snapshot: more work on demo.
http://wiki.apache.org/solr/SolrCloud?action=diff&rev1=18&rev2=19

--------------------------------------------------

  
  Solr embeds and uses ZooKeeper as a repository for cluster configuration and coordination
- think of it as a distributed filesystem.
  
+ Since we'll need two Solr servers for this example, simply make a copy of the example directory for the second server.
+ {{{
+ cp -r example example2
+ }}}
+ 
- This example starts up a Solr server and creates a new solr cluster.
+ This command starts up a Solr server and bootstraps a new Solr cluster.
  {{{
  cd example
  java -Dbootstrap_confname=myconf -Dbootstrap_confdir=./solr/conf -DzkRun -jar start.jar
@@ -27, +32 @@

   * {{{-Dbootstrap_confname=myconf}}} tells this Solr node to use the "myconf" configuration stored within ZooKeeper.
   * {{{-Dbootstrap_confdir=./solr/conf}}} since "myconf" does not actually exist yet, this parameter causes the local configuration directory {{{./solr/conf}}} to be uploaded to ZooKeeper as the "myconf" config.
  
- Browse to http://localhost:8983/solr/admin/zookeeper.jsp to see the state of the cluster (the zookeeper distributed filesystem).
+ Browse to http://localhost:8983/solr/collection1/admin/zookeeper.jsp to see the state of the cluster (the zookeeper distributed filesystem).
+ 
+ You can see from the zookeeper browser that the Solr configuration files were uploaded under "myconf", and that a new document collection called "collection1" was created.  Under collection1 is a list of shards, the pieces that make up the complete collection.
+ 
+ Now we want to start up our second server, assigning it a different shard, or piece of the collection.
+ Simply change the shardId parameter for the appropriate solr core in solr.xml:
+ {{{
+ cd example2
+ perl -pi -e 's/shard1/shard2/g' solr/solr.xml
+ #note: if you don't have perl installed, you can simply hand edit solr.xml, changing shard1 to shard2
+ }}}
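
If you don't have perl handy, the same substitution can be done with sed. The sketch below is self-contained: it uses a scratch file with an illustrative shard attribute rather than the real solr.xml, so the file contents are an assumption for demonstration only.

```shell
# Demonstrate the shard1 -> shard2 rewrite with sed instead of perl.
# The solr.xml contents here are illustrative, not the real file.
mkdir -p /tmp/solrcloud-demo
printf '<core name="collection1" shard="shard1"/>\n' > /tmp/solrcloud-demo/solr.xml

# -i.bak edits the file in place and keeps a .bak backup;
# the suffix form works with both GNU and BSD sed.
sed -i.bak 's/shard1/shard2/g' /tmp/solrcloud-demo/solr.xml

cat /tmp/solrcloud-demo/solr.xml
# -> <core name="collection1" shard="shard2"/>
```

In the walkthrough above, the equivalent command from the example2 directory would be `sed -i.bak 's/shard1/shard2/g' solr/solr.xml`.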
+ 
+ Then start the second server, pointing it at the cluster:
+ {{{
+ java -Djetty.port=7574 -DhostPort=7574 -DzkHost=localhost:9983 -jar start.jar
+ }}}
+ 
+  * {{{-Djetty.port=7574}}} is just one way to tell the Jetty servlet container to use a different port.
+  * {{{-DhostPort=7574}}} tells Solr what port the servlet container is running on.
+  * {{{-DzkHost=localhost:9983}}} points to the ZooKeeper ensemble containing the cluster state.  In this example we're running a single ZooKeeper server embedded in the first server.  By default, an embedded ZooKeeper server runs at the solr port plus 1000, so 9983.
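
The "plus 1000" rule for the embedded ZooKeeper port is easy to sanity-check in the shell:

```shell
# Embedded ZooKeeper listens at the Solr port plus 1000.
SOLR_PORT=8983
ZK_PORT=$((SOLR_PORT + 1000))
echo "$ZK_PORT"   # 9983 -- the value passed to -DzkHost above
```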
+ 
  
  == ZooKeeper ==
  == Distributed Request ==
