lucene-solr-commits mailing list archives

From Apache Wiki <wikidi...@apache.org>
Subject [Solr Wiki] Trivial Update of "SolrCloud" by JanHoydahl
Date Thu, 11 Feb 2010 15:39:17 GMT
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Solr Wiki" for change notification.

The "SolrCloud" page has been changed by JanHoydahl.
The comment on this change is: Just added a sed command line doing the same as the perl oneliner.
http://wiki.apache.org/solr/SolrCloud?action=diff&rev1=27&rev2=28

--------------------------------------------------

  
  {{{
  cd example2
+ sed -i .bak 's/shard1/shard2/g' solr/solr.xml
- perl -pi -e 's/shard1/shard2/g' solr/solr.xml
+ #OR perl -pi -e 's/shard1/shard2/g' solr/solr.xml
- #note: if you don't have perl installed, you can simply hand edit solr.xml, changing shard1 to shard2
+ #note: if you don't have sed or perl installed, you can simply hand edit solr.xml, changing shard1 to shard2
  }}}
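One caveat worth noting: `sed -i .bak` (with a space before the suffix) is BSD sed syntax, as found on macOS; GNU sed on Linux expects the backup suffix attached directly to `-i`. A sketch of the GNU form, run against a stand-in file rather than the real solr.xml:

```shell
# Sketch (GNU sed, Linux): the backup suffix must follow -i with no space;
# BSD sed (macOS) requires the spaced form shown above instead.
printf 'shard1\n' > solr.xml.demo            # stand-in for solr/solr.xml
sed -i.bak 's/shard1/shard2/g' solr.xml.demo
cat solr.xml.demo                            # prints: shard2
```

Either way, the original file is kept as a `.bak` backup, which is handy if the substitution goes wrong.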
  Then start the second server, pointing it at the cluster:
  
@@ -114, +115 @@

  For production, it's recommended that you run an external zookeeper ensemble rather than having Solr run embedded zookeeper servers.  For this example, we'll use the embedded servers for simplicity.
  
  First, stop all 4 servers and then clean up the zookeeper data directories for a fresh start.
+ 
  {{{
  rm -r example*/solr/zoo_data
  }}}
- 
  We will be running the servers again at ports 8983, 7574, 8900, and 7500.  The default is to run an embedded zookeeper server at hostPort+1000, so if we run an embedded zookeeper on the first three servers, the ensemble address will be {{{localhost:9983,localhost:8574,localhost:9900}}}.
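The arithmetic behind that ensemble string can be sketched in plain shell (nothing Solr-specific is assumed here):

```shell
# Sketch: the first three Solr servers each run an embedded ZooKeeper
# at hostPort + 1000; join the results into the ensemble address string.
ensemble=""
for port in 8983 7574 8900; do
  ensemble="${ensemble:+$ensemble,}localhost:$((port + 1000))"
done
echo "$ensemble"   # localhost:9983,localhost:8574,localhost:9900
```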
  
  As a convenience, we'll have the first server upload the solr config to the cluster.  You will notice it blocks until you have actually started the second server.  This is due to zookeeper needing a quorum before it can operate.
@@ -126, +127 @@

  cd example
  java -Dbootstrap_confname=myconf -Dbootstrap_confdir=./solr/conf -DzkRun -DzkHost=localhost:9983,localhost:8574,localhost:9900 -jar start.jar
  }}}
- 
  {{{
  cd example2
  java -Djetty.port=7574 -DhostPort=7574 -DzkRun -DzkHost=localhost:9983,localhost:8574,localhost:9900 -jar start.jar
  }}}
- 
  {{{
  cd exampleB
  java -Djetty.port=8900 -DhostPort=8900 -DzkRun -DzkHost=localhost:9983,localhost:8574,localhost:9900 -jar start.jar
  }}}
- 
  {{{
  cd example2B
  java -Djetty.port=7500 -DhostPort=7500 -DzkHost=localhost:9983,localhost:8574,localhost:9900 -jar start.jar
  }}}
- 
- Now since we are running three embedded zookeeper servers as an ensemble, everything can keep working even if a server is lost.
- To demonstrate this, kill the exampleB server by pressing CTRL+C in it's window and then browse to http://localhost:8983/solr/admin/zookeeper.jsp to verify that the zookeeper service still works.
+ Now since we are running three embedded zookeeper servers as an ensemble, everything can keep working even if a server is lost. To demonstrate this, kill the exampleB server by pressing CTRL+C in its window and then browse to http://localhost:8983/solr/admin/zookeeper.jsp to verify that the zookeeper service still works.
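The reason this works is ZooKeeper's majority rule: an ensemble of n servers keeps operating as long as a quorum of floor(n/2)+1 of them is reachable. A quick sketch of the arithmetic:

```shell
# Sketch: a ZooKeeper ensemble of n servers needs a majority (quorum)
# of floor(n/2) + 1 servers; with n=3, a quorum is 2, so losing any
# single server (like exampleB above) leaves the service running.
n=3
echo $(( n / 2 + 1 ))   # prints: 2
```

This is also why losing two of the three embedded servers would take the cluster down, and why production ensembles typically use 3 or 5 servers rather than an even number.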
  
  == ZooKeeper ==
  Multiple ZooKeeper servers running together for fault tolerance and high availability are called an ensemble.  For production, it's recommended that you run an external zookeeper ensemble rather than having Solr run embedded servers.  See the [[http://hadoop.apache.org/zookeeper/|Apache ZooKeeper]] site for more information on downloading and running a zookeeper ensemble.
@@ -181, +177 @@

  {{{
  http://localhost:8983/solr/collection1/select?collection=collection1_NY,collection1_NJ,collection1_CT
  }}}
- 
  Query specific shard ids. In this example, the user has partitioned the index by date, creating a new shard every month.
  
  {{{
  http://localhost:8983/solr/collection1/select?shards=shard_200812,shard_200912,shard_201001&distrib=true
  }}}
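Assembling such a shards parameter can be sketched in shell (the shard_YYYYMM names simply follow the convention in the example URL, not a fixed Solr requirement):

```shell
# Sketch: build the shards= value for a monthly-partitioned index,
# one shard per month, using the shard_YYYYMM naming from the example.
shards=""
for month in 200812 200912 201001; do
  shards="${shards:+$shards,}shard_$month"
done
echo "$shards"   # shard_200812,shard_200912,shard_201001
```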
- 
- 
  = Developer Section =
  == TODO ==
   * when this stuff is merged to trunk, integrate the user documentation above into DistributedSearch
