lucene-commits mailing list archives

From ctarg...@apache.org
Subject lucene-solr:jira/solr-10290: SOLR-10296: conversion, letter S part 1
Date Mon, 08 May 2017 00:47:48 GMT
Repository: lucene-solr
Updated Branches:
  refs/heads/jira/solr-10290 ff9fdcf1f -> 3f9dc3859


SOLR-10296: conversion, letter S part 1


Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/3f9dc385
Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/3f9dc385
Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/3f9dc385

Branch: refs/heads/jira/solr-10290
Commit: 3f9dc385915c69dce15d224898908e8c25b26c3a
Parents: ff9fdcf
Author: Cassandra Targett <ctargett@apache.org>
Authored: Sun May 7 19:47:01 2017 -0500
Committer: Cassandra Targett <ctargett@apache.org>
Committed: Sun May 7 19:47:01 2017 -0500

----------------------------------------------------------------------
 .../src/requestdispatcher-in-solrconfig.adoc    |   2 +-
 solr/solr-ref-guide/src/response-writers.adoc   |   2 +-
 solr/solr-ref-guide/src/schema-api.adoc         | 344 +++++++++----------
 .../src/schema-browser-screen.adoc              |  10 +-
 ...schema-factory-definition-in-solrconfig.adoc |  30 +-
 solr/solr-ref-guide/src/schemaless-mode.adoc    |  34 +-
 solr/solr-ref-guide/src/securing-solr.adoc      |   4 +-
 solr/solr-ref-guide/src/segments-info.adoc      |   1 -
 ...tting-up-an-external-zookeeper-ensemble.adoc |  48 +--
 .../shards-and-indexing-data-in-solrcloud.adoc  |  36 +-
 10 files changed, 249 insertions(+), 262 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/3f9dc385/solr/solr-ref-guide/src/requestdispatcher-in-solrconfig.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/requestdispatcher-in-solrconfig.adoc b/solr/solr-ref-guide/src/requestdispatcher-in-solrconfig.adoc
index 82b8534..e8064b7 100644
--- a/solr/solr-ref-guide/src/requestdispatcher-in-solrconfig.adoc
+++ b/solr/solr-ref-guide/src/requestdispatcher-in-solrconfig.adoc
@@ -55,7 +55,7 @@ The `<httpCaching>` element controls HTTP cache control headers. Do not confuse
 
 This element allows for three attributes and one sub-element. The attributes of the `<httpCaching>` element control whether a 304 response to a GET request is allowed, and if so, what sort of response it should be. When an HTTP client application issues a GET, it may optionally specify that a 304 response is acceptable if the resource has not been modified since the last time it was fetched.
 
-[width="100%",cols="50%,50%",options="header",]
+[width="100%",options="header",]
 |===
 |Parameter |Description
 |never304 |If present with the value `true`, then a GET request will never respond with a 304 code, even if the requested resource has not been modified. When this attribute is set to true, the next two attributes are ignored. Setting this to true is handy for development, as the 304 response can be confusing when tinkering with Solr responses through a web browser or other client that supports cache headers.
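The `never304` behavior documented in the hunk above can be exercised with a conditional GET. A minimal sketch, assuming a local `gettingstarted` collection (the live request is commented out because it needs a running Solr):

```shell
# Solr's <httpCaching> settings decide whether this conditional GET can
# be answered with "304 Not Modified" instead of a full response body.
URL="http://localhost:8983/solr/gettingstarted/select?q=*:*"
echo "conditional GET against: $URL"
# With never304=true Solr always sends a full 200 response, which is
# handy when a client's cache would otherwise hide changes during
# development:
# curl -i -H "If-Modified-Since: $(date -u '+%a, %d %b %Y %H:%M:%S GMT')" "$URL"
```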

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/3f9dc385/solr/solr-ref-guide/src/response-writers.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/response-writers.adoc b/solr/solr-ref-guide/src/response-writers.adoc
index 4a333b4..7324ffa 100644
--- a/solr/solr-ref-guide/src/response-writers.adoc
+++ b/solr/solr-ref-guide/src/response-writers.adoc
@@ -7,7 +7,7 @@ A Response Writer generates the formatted response of a search. Solr supports a
 
 The `wt` parameter selects the Response Writer to be used. The table below lists the most common settings for the `wt` parameter.
 
-[width="100%",cols="50%,50%",options="header",]
+[width="100%",options="header",]
 |===
 |`wt` Parameter Setting |Response Writer Selected
 |csv |<<ResponseWriters-CSVResponseWriter,CSVResponseWriter>>
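The table touched by this hunk maps `wt` values to Response Writers; selecting one is just a query parameter. A sketch with an illustrative collection name (the curl line, commented out, needs a running Solr):

```shell
# The same query rendered by three different Response Writers.
BASE="http://localhost:8983/solr/gettingstarted/select?q=*:*"
for fmt in json xml csv; do
  echo "request: ${BASE}&wt=${fmt}"
  # curl -s "${BASE}&wt=${fmt}" | head -n 3
done
```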

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/3f9dc385/solr/solr-ref-guide/src/schema-api.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/schema-api.adoc b/solr/solr-ref-guide/src/schema-api.adoc
index f189fce..02c51a9 100644
--- a/solr/solr-ref-guide/src/schema-api.adoc
+++ b/solr/solr-ref-guide/src/schema-api.adoc
@@ -2,18 +2,18 @@
 :page-shortname: schema-api
 :page-permalink: schema-api.html
 
+The Schema API allows you to use an HTTP API to manage many of the elements of your schema.
+
 The Schema API utilizes the ManagedIndexSchemaFactory class, which is the default schema factory in modern Solr versions. See the section <<schema-factory-definition-in-solrconfig.adoc#schema-factory-definition-in-solrconfig,Schema Factory Definition in SolrConfig>> for more information about choosing a schema factory for your index.
 
 This API provides read and write access to the Solr schema for each collection (or core, when using standalone Solr). Read access to all schema elements is supported. Fields, dynamic fields, field types and copyField rules may be added, removed or replaced. Future Solr releases will extend write access to allow more schema elements to be modified.
 
 .Why is hand editing of the managed schema discouraged?
-[IMPORTANT]
+[NOTE]
 ====
-
 The file named "managed-schema" in the example configurations may include a note that recommends never hand-editing the file. Before the Schema API existed, such edits were the only way to make changes to the schema, and users may have a strong desire to continue making changes this way.
 
 The reason that this is discouraged is because hand-edits of the schema may be lost if the Schema API described here is later used to make a change, unless the core or collection is reloaded or Solr is restarted before using the Schema API. If care is taken to always reload or restart after a manual edit, then there is no problem at all with doing those edits.
-
 ====
 
 The API allows two output modes for all calls: JSON or XML. When requesting the complete schema, there is another output mode which is XML modeled after the managed-schema file itself, which is in XML format.
@@ -23,14 +23,12 @@ When modifying the schema with the API, a core reload will automatically occur i
 .Re-index after schema modifications!
 [IMPORTANT]
 ====
-
 If you modify your schema, you will likely need to re-index all documents. If you do not, you may lose access to documents, or not be able to interpret them properly, e.g. after replacing a field type.
 
 Modifying your schema will never modify any documents that are already indexed. You must re-index documents in order to apply schema changes to them. Queries and updates made after the change may encounter errors that were not present before the change. Completely deleting the index and rebuilding it is usually the only option to fix such errors.
-
 ====
 
-The base address for the API is `http://<host>:<port>/solr/<collection_name>`. If for example you run Solr's "```cloud```" example (via the `bin/solr` command shown below), which creates a "```gettingstarted```" collection, then the base URL for that collection (as in all the sample URLs in this section) would be: `http://localhost:8983/solr/gettingstarted` .
+The base address for the API is `\http://<host>:<port>/solr/<collection_name>`. If, for example, you run Solr's "```cloud```" example (via the `bin/solr` command shown below), which creates a "```gettingstarted```" collection, then the base URL for that collection (as in all the sample URLs in this section) would be: `\http://localhost:8983/solr/gettingstarted`.
 
 [source,bash]
 ----
@@ -40,27 +38,33 @@ bin/solr -e cloud -noprompt
 [[SchemaAPI-APIEntryPoints]]
 == API Entry Points
 
-`/schema`: <<SchemaAPI-RetrievetheEntireSchema,retrieve>> the schema, or <<SchemaAPI-ModifytheSchema,modify>> the schema to add, remove, or replace fields, dynamic fields, copy fields, or field types `/schema/fields`: <<SchemaAPI-ListFields,retrieve information>> about all defined fields or a specific named field `/schema/dynamicfields`: <<SchemaAPI-ListDynamicFields,retrieve information>> about all dynamic field rules or a specific named dynamic rule `/schema/fieldtypes`: <<SchemaAPI-ListFieldTypes,retrieve information>> about all field types or a specific field type `/schema/copyfields`: <<SchemaAPI-ListCopyFields,retrieve information>> about copy fields `/schema/name`: <<SchemaAPI-ShowSchemaName,retrieve>> the schema name `/schema/version`: <<SchemaAPI-ShowtheSchemaVersion,retrieve>> the schema version `/schema/uniquekey`: <<SchemaAPI-ListUniqueKey,retrieve>> the defined uniqueKey `/schema/similarity`: <<SchemaAPI-ShowGlobalSimilarity,retrieve>> the global similarity definition `/schema/solrqueryparser/defaultoperator`: <<SchemaAPI-GettheDefaultQueryOperator,retrieve>> the default operator
+* `/schema`: <<SchemaAPI-RetrievetheEntireSchema,retrieve>> the schema, or <<SchemaAPI-ModifytheSchema,modify>> the schema to add, remove, or replace fields, dynamic fields, copy fields, or field types
+* `/schema/fields`: <<SchemaAPI-ListFields,retrieve information>> about all defined fields or a specific named field
+* `/schema/dynamicfields`: <<SchemaAPI-ListDynamicFields,retrieve information>> about all dynamic field rules or a specific named dynamic rule
+* `/schema/fieldtypes`: <<SchemaAPI-ListFieldTypes,retrieve information>> about all field types or a specific field type
+* `/schema/copyfields`: <<SchemaAPI-ListCopyFields,retrieve information>> about copy fields
+* `/schema/name`: <<SchemaAPI-ShowSchemaName,retrieve>> the schema name
+* `/schema/version`: <<SchemaAPI-ShowtheSchemaVersion,retrieve>> the schema version
+* `/schema/uniquekey`: <<SchemaAPI-ListUniqueKey,retrieve>> the defined uniqueKey
+* `/schema/similarity`: <<SchemaAPI-ShowGlobalSimilarity,retrieve>> the global similarity definition
+* `/schema/solrqueryparser/defaultoperator`: <<SchemaAPI-GettheDefaultQueryOperator,retrieve>> the default operator
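The entry points in the re-bulleted list above are all plain GETs under the collection's base URL. A sketch, with an illustrative field and collection name (the commented request needs a running Solr):

```shell
# Fetch a single field's definition from the /schema/fields entry point.
FIELD="author"
URL="http://localhost:8983/solr/gettingstarted/schema/fields/${FIELD}"
echo "$URL"
# curl -s "$URL"
```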
 
 [[SchemaAPI-ModifytheSchema]]
 == Modify the Schema
 
-`POST /__collection__/schema`
+`POST /_collection_/schema`
 
 To add, remove or replace fields, dynamic field rules, copy field rules, or new field types, you can send a POST request to the `/collection/schema/` endpoint with a sequence of commands to perform the requested actions. The following commands are supported:
 
 * `add-field`: add a new field with parameters you provide.
 * `delete-field`: delete a field.
 * `replace-field`: replace an existing field with one that is differently configured.
-
 * `add-dynamic-field`: add a new dynamic field rule with parameters you provide.
 * `delete-dynamic-field`: delete a dynamic field rule.
 * `replace-dynamic-field`: replace an existing dynamic field rule with one that is differently configured.
-
 * `add-field-type`: add a new field type with parameters you provide.
 * `delete-field-type`: delete a field type.
 * `replace-field-type`: replace an existing field type with one that is differently configured.
-
 * `add-copy-field`: add a new copy field rule.
 * `delete-copy-field`: delete a copy field rule.
 
@@ -82,7 +86,7 @@ For example, to define a new stored field named "sell-by", of type "tdate", you
 [source,bash]
 ----
 curl -X POST -H 'Content-type:application/json' --data-binary '{
-  "add-field":{ 
+  "add-field":{
      "name":"sell-by",
      "type":"tdate",
      "stored":true }
@@ -115,7 +119,7 @@ For example, to replace the definition of an existing field "sell-by", to make i
 [source,bash]
 ----
 curl -X POST -H 'Content-type:application/json' --data-binary '{
-  "replace-field":{ 
+  "replace-field":{
      "name":"sell-by",
      "type":"date",
      "stored":false }
@@ -134,7 +138,7 @@ For example, to create a new dynamic field rule where all incoming fields ending
 [source,bash]
 ----
 curl -X POST -H 'Content-type:application/json' --data-binary '{
-  "add-dynamic-field":{ 
+  "add-dynamic-field":{
      "name":"*_s",
      "type":"string",
      "stored":true }
@@ -167,7 +171,7 @@ For example, to replace the definition of the "*_s" dynamic field rule with one
 [source,bash]
 ----
 curl -X POST -H 'Content-type:application/json' --data-binary '{
-  "replace-dynamic-field":{ 
+  "replace-dynamic-field":{
      "name":"*_s",
      "type":"text_general",
      "stored":false }
@@ -195,12 +199,12 @@ curl -X POST -H 'Content-type:application/json' --data-binary '{
            "class":"solr.PatternReplaceCharFilterFactory",
            "replacement":"$1$1",
            "pattern":"([a-zA-Z])\\\\1+" }],
-        "tokenizer":{ 
+        "tokenizer":{
            "class":"solr.WhitespaceTokenizerFactory" },
         "filters":[{
            "class":"solr.WordDelimiterFilterFactory",
            "preserveOriginal":"0" }]}}
-}' http://localhost:8983/solr/gettingstarted/schema 
+}' http://localhost:8983/solr/gettingstarted/schema
 ----
 
 Note in this example that we have only defined a single analyzer section that will apply to index analysis and query analysis. If we wanted to define separate analysis, we would replace the `analyzer` section in the above example with separate sections for `indexAnalyzer` and `queryAnalyzer`. As in this example:
@@ -213,12 +217,12 @@ curl -X POST -H 'Content-type:application/json' --data-binary '{
      "class":"solr.TextField",
      "indexAnalyzer":{
         "tokenizer":{
-           "class":"solr.PathHierarchyTokenizerFactory", 
+           "class":"solr.PathHierarchyTokenizerFactory",
            "delimiter":"/" }},
      "queryAnalyzer":{
-        "tokenizer":{ 
+        "tokenizer":{
            "class":"solr.KeywordTokenizerFactory" }}}
-}' http://localhost:8983/solr/gettingstarted/schema 
+}' http://localhost:8983/solr/gettingstarted/schema
 ----
 
 [[SchemaAPI-DeleteaFieldType]]
@@ -232,7 +236,7 @@ For example, to delete the field type named "myNewTxtField", you can make a POST
 ----
 curl -X POST -H 'Content-type:application/json' --data-binary '{
   "delete-field-type":{ "name":"myNewTxtField" }
-}' http://localhost:8983/solr/gettingstarted/schema 
+}' http://localhost:8983/solr/gettingstarted/schema
 ----
 
 [[SchemaAPI-ReplaceaFieldType]]
@@ -252,9 +256,9 @@ curl -X POST -H 'Content-type:application/json' --data-binary '{
      "class":"solr.TextField",
      "positionIncrementGap":"100",
      "analyzer":{
-        "tokenizer":{ 
+        "tokenizer":{
            "class":"solr.StandardTokenizerFactory" }}}
-}' http://localhost:8983/solr/gettingstarted/schema 
+}' http://localhost:8983/solr/gettingstarted/schema
 ----
 
 [[SchemaAPI-AddaNewCopyFieldRule]]
@@ -264,7 +268,7 @@ The `add-copy-field` command adds a new copy field rule to your schema.
 
 The attributes supported by the command are the same as when creating copy field rules by manually editing the `schema.xml`, as below:
 
-[width="100%",cols="34%,33%,33%",options="header",]
+[width="100%",options="header",]
 |===
 |Name |Required |Description
 |source |Yes |The source field.
@@ -320,12 +324,12 @@ curl -X POST -H 'Content-type:application/json' --data-binary '{
            "class":"solr.PatternReplaceCharFilterFactory",
            "replacement":"$1$1",
            "pattern":"([a-zA-Z])\\\\1+" }],
-        "tokenizer":{ 
+        "tokenizer":{
            "class":"solr.WhitespaceTokenizerFactory" },
         "filters":[{
            "class":"solr.WordDelimiterFilterFactory",
            "preserveOriginal":"0" }]}},
-   "add-field" : { 
+   "add-field" : {
       "name":"sell-by",
       "type":"myNewTxtField",
       "stored":true }
@@ -337,15 +341,15 @@ Or, the same command can be repeated, as in this example:
 [source,bash]
 ----
 curl -X POST -H 'Content-type:application/json' --data-binary '{
-  "add-field":{ 
+  "add-field":{
      "name":"shelf",
      "type":"myNewTxtField",
      "stored":true },
-  "add-field":{ 
+  "add-field":{
      "name":"location",
      "type":"myNewTxtField",
      "stored":true },
-  "add-copy-field":{ 
+  "add-copy-field":{
      "source":"shelf",
       "dest":[ "location", "catchall" ]}
 }' http://localhost:8983/solr/gettingstarted/schema
@@ -369,9 +373,13 @@ curl -X POST -H 'Content-type:application/json' --data-binary '{
 [[SchemaAPI-SchemaChangesamongReplicas]]
 === Schema Changes among Replicas
 
-When running in SolrCloud mode, changes made to the schema on one node will propagate to all replicas in the collection. You can pass the *updateTimeoutSecs* parameter with your request to set the number of seconds to wait until all replicas confirm they applied the schema updates. This helps your client application be more robust in that you can be sure that all replicas have a given schema change within a defined amount of time. If agreement is not reached by all replicas in the specified time, then the request fails and the error message will include information about which replicas had trouble. In most cases, the only option is to re-try the change after waiting a brief amount of time. If the problem persists, then you'll likely need to investigate the server logs on the replicas that had trouble applying the changes. If you do not supply an *updateTimeoutSecs* parameter, the default behavior is for the receiving node to return immediately after persisting the updates to ZooKeeper. All other replicas will apply the updates asynchronously. Consequently, without supplying a timeout, your client application cannot be sure that all replicas have applied the changes.
+When running in SolrCloud mode, changes made to the schema on one node will propagate to all replicas in the collection.
+
+You can pass the `updateTimeoutSecs` parameter with your request to set the number of seconds to wait until all replicas confirm they applied the schema updates. This helps your client application be more robust in that you can be sure that all replicas have a given schema change within a defined amount of time.
+
+If agreement is not reached by all replicas in the specified time, then the request fails and the error message will include information about which replicas had trouble. In most cases, the only option is to re-try the change after waiting a brief amount of time. If the problem persists, then you'll likely need to investigate the server logs on the replicas that had trouble applying the changes.
 
-<<main,Back to Top>>
+If you do not supply an `updateTimeoutSecs` parameter, the default behavior is for the receiving node to return immediately after persisting the updates to ZooKeeper. All other replicas will apply the updates asynchronously. Consequently, without supplying a timeout, your client application cannot be sure that all replicas have applied the changes.
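As a sketch of the paragraphs above, the timeout rides along as a query parameter on the modifying request (the value and field name are illustrative; the commented curl needs a running SolrCloud cluster):

```shell
# Ask every replica to confirm the schema change within 30 seconds.
TIMEOUT=30
URL="http://localhost:8983/solr/gettingstarted/schema?updateTimeoutSecs=${TIMEOUT}"
echo "$URL"
# curl -X POST -H 'Content-type:application/json' --data-binary '{
#   "add-field":{ "name":"on_hand", "type":"boolean", "stored":true }
# }' "$URL"
```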
 
 [[SchemaAPI-RetrieveSchemaInformation]]
 == Retrieve Schema Information
@@ -383,14 +391,14 @@ To modify the schema, see the previous section <<SchemaAPI-ModifytheSchema,Modif
 [[SchemaAPI-RetrievetheEntireSchema]]
 === Retrieve the Entire Schema
 
-`GET /__collection__/schema`
+`GET /_collection_/schema`
 
 [[SchemaAPI-INPUT]]
 ==== INPUT
 
 *Path Parameters*
 
-[width="100%",cols="50%,50%",options="header",]
+[width="100%",options="header",]
 |===
 |Key |Description
 |collection |The collection (or core) name.
@@ -400,10 +408,10 @@ To modify the schema, see the previous section <<SchemaAPI-ModifytheSchema,Modif
 
 The query parameters should be added to the API request after '?'.
 
-[width="100%",cols="20%,20%,20%,20%,20%",options="header",]
+[width="100%",options="header",]
 |===
 |Key |Type |Required |Default |Description
-|wt |string |No |json |Defines the format of the response. The options are **json**, *xml* or **schema.xml**. If not specified, JSON will be returned by default.
+|wt |string |No |json |Defines the format of the response. The options are *json*, *xml* or *schema.xml*. If not specified, JSON will be returned by default.
 |===
 
 [[SchemaAPI-OUTPUT]]
@@ -449,8 +457,7 @@ curl http://localhost:8983/solr/gettingstarted/schema?wt=json
               "class":"solr.PatternReplaceFilterFactory",
               "replace":"all",
               "replacement":"",
-              "pattern":"([^a-z])"}]}},
-...
+              "pattern":"([^a-z])"}]}}],
     "fields":[{
         "name":"_version_",
         "type":"long",
@@ -466,8 +473,7 @@ curl http://localhost:8983/solr/gettingstarted/schema?wt=json
         "type":"string",
         "multiValued":true,
         "indexed":true,
-        "stored":true},
-...
+        "stored":true}],
     "copyFields":[{
         "source":"author",
         "dest":"text"},
@@ -477,7 +483,6 @@ curl http://localhost:8983/solr/gettingstarted/schema?wt=json
       {
         "source":"content",
         "dest":"text"},
-...
       {
         "source":"author",
         "dest":"author_s"}]}}
@@ -564,21 +569,20 @@ curl http://localhost:8983/solr/gettingstarted/schema?wt=schema.xml
 </schema>
 ----
 
-<<main,Back to Top>>
 
 [[SchemaAPI-ListFields]]
 === List Fields
 
-`GET /__collection__/schema/fields`
+`GET /_collection_/schema/fields`
 
-`GET /__collection__/schema/fields/__fieldname__`
+`GET /_collection_/schema/fields/_fieldname_`
 
 [[SchemaAPI-INPUT.1]]
 ==== INPUT
 
 *Path Parameters*
 
-[width="100%",cols="50%,50%",options="header",]
+[width="100%",options="header",]
 |===
 |Key |Description
 |collection |The collection (or core) name.
@@ -589,13 +593,13 @@ curl http://localhost:8983/solr/gettingstarted/schema?wt=schema.xml
 
 The query parameters can be added to the API request after a '?'.
 
-[width="100%",cols="20%,20%,20%,20%,20%",options="header",]
+[width="100%",options="header",]
 |===
 |Key |Type |Required |Default |Description
-|wt |string |No |json |Defines the format of the response. The options are *json* or **xml**. If not specified, JSON will be returned by default.
+|wt |string |No |json |Defines the format of the response. The options are *json* or *xml*. If not specified, JSON will be returned by default.
 |fl |string |No |(all fields) |Comma- or space-separated list of one or more fields to return. If not specified, all fields will be returned by default.
-|includeDynamic |boolean |No |false |If **true**, and if the *fl* query parameter is specified or the *fieldname* path parameter is used, matching dynamic fields are included in the response and identified with the *dynamicBase* property. If neither the *fl* query parameter nor the *fieldname* path parameter is specified, the *includeDynamic* query parameter is ignored. If **false**, matching dynamic fields will not be returned.
-|showDefaults |boolean |No |false |If **true**, all default field properties from each field's field type will be included in the response (e.g. *tokenized* for **solr.TextField**). If **false**, only explicitly specified field properties will be included.
+|includeDynamic |boolean |No |false |If *true*, and if the *fl* query parameter is specified or the *fieldname* path parameter is used, matching dynamic fields are included in the response and identified with the *dynamicBase* property. If neither the *fl* query parameter nor the *fieldname* path parameter is specified, the *includeDynamic* query parameter is ignored. If *false*, matching dynamic fields will not be returned.
+|showDefaults |boolean |No |false |If *true*, all default field properties from each field's field type will be included in the response (e.g. *tokenized* for `solr.TextField`). If *false*, only explicitly specified field properties will be included.
 |===
 
 [[SchemaAPI-OUTPUT.1]]
@@ -617,53 +621,52 @@ curl http://localhost:8983/solr/gettingstarted/schema/fields?wt=json
 
 The sample output below has been truncated to only show a few fields.
 
-[source,javascript]
+[source,json]
 ----
 {
     "fields": [
         {
-            "indexed": true, 
-            "name": "_version_", 
-            "stored": true, 
+            "indexed": true,
+            "name": "_version_",
+            "stored": true,
             "type": "long"
-        }, 
+        },
         {
-            "indexed": true, 
-            "name": "author", 
-            "stored": true, 
+            "indexed": true,
+            "name": "author",
+            "stored": true,
             "type": "text_general"
-        }, 
+        },
         {
-            "indexed": true, 
-            "multiValued": true, 
-            "name": "cat", 
-            "stored": true, 
+            "indexed": true,
+            "multiValued": true,
+            "name": "cat",
+            "stored": true,
             "type": "string"
-        }, 
-...
-    ], 
+        },
+"..."
+    ],
     "responseHeader": {
-        "QTime": 1, 
+        "QTime": 1,
         "status": 0
     }
 }
 ----
 
-<<main,Back to Top>>
 
 [[SchemaAPI-ListDynamicFields]]
 === List Dynamic Fields
 
-`GET /__collection__/schema/dynamicfields`
+`GET /_collection_/schema/dynamicfields`
 
-`GET /__collection__/schema/dynamicfields/__name__`
+`GET /_collection_/schema/dynamicfields/_name_`
 
 [[SchemaAPI-INPUT.2]]
 ==== INPUT
 
 *Path Parameters*
 
-[width="100%",cols="50%,50%",options="header",]
+[width="100%",options="header",]
 |===
 |Key |Description
 |collection |The collection (or core) name.
@@ -674,11 +677,11 @@ The sample output below has been truncated to only show a few fields.
 
 The query parameters can be added to the API request after a '?'.
 
-[width="100%",cols="20%,20%,20%,20%,20%",options="header",]
+[width="100%",options="header",]
 |===
 |Key |Type |Required |Default |Description
-|wt |string |No |json |Defines the format of the response. The options are *json,* **xml**. If not specified, JSON will be returned by default.
-|showDefaults |boolean |No |false |If **true**, all default field properties from each dynamic field's field type will be included in the response (e.g. *tokenized* for **solr.TextField**). If **false**, only explicitly specified field properties will be included.
+|wt |string |No |json |Defines the format of the response. The options are *json,* *xml*. If not specified, JSON will be returned by default.
+|showDefaults |boolean |No |false |If *true*, all default field properties from each dynamic field's field type will be included in the response (e.g. *tokenized* for `solr.TextField`). If *false*, only explicitly specified field properties will be included.
 |===
 
 [[SchemaAPI-OUTPUT.2]]
@@ -700,63 +703,61 @@ curl http://localhost:8983/solr/gettingstarted/schema/dynamicfields?wt=json
 
 The sample output below has been truncated.
 
-[source,javascript]
+[source,json]
 ----
 {
     "dynamicFields": [
         {
-            "indexed": true, 
-            "name": "*_coordinate", 
-            "stored": false, 
+            "indexed": true,
+            "name": "*_coordinate",
+            "stored": false,
             "type": "tdouble"
-        }, 
+        },
         {
-            "multiValued": true, 
-            "name": "ignored_*", 
+            "multiValued": true,
+            "name": "ignored_*",
             "type": "ignored"
-        }, 
+        },
         {
-            "name": "random_*", 
+            "name": "random_*",
             "type": "random"
-        }, 
+        },
         {
-            "indexed": true, 
-            "multiValued": true, 
-            "name": "attr_*", 
-            "stored": true, 
+            "indexed": true,
+            "multiValued": true,
+            "name": "attr_*",
+            "stored": true,
             "type": "text_general"
-        }, 
+        },
         {
-            "indexed": true, 
-            "multiValued": true, 
-            "name": "*_txt", 
-            "stored": true, 
+            "indexed": true,
+            "multiValued": true,
+            "name": "*_txt",
+            "stored": true,
             "type": "text_general"
-        } 
-...
-    ], 
+        }
+"..."
+    ],
     "responseHeader": {
-        "QTime": 1, 
+        "QTime": 1,
         "status": 0
     }
 }
 ----
 
-<<main,Back to Top>>
-
 [[SchemaAPI-ListFieldTypes]]
 === List Field Types
 
-`GET /__collection__/schema/fieldtypes`
+`GET /_collection_/schema/fieldtypes`
 
-`GET /__collection__/schema/fieldtypes/__name__`
+`GET /_collection_/schema/fieldtypes/_name_`
 
 [[SchemaAPI-INPUT.3]]
 ==== INPUT
 
 *Path Parameters*
 
-[width="100%",cols="50%,50%",options="header",]
+[width="100%",options="header",]
 |===
 |Key |Description
 |collection |The collection (or core) name.
@@ -767,11 +768,11 @@ The sample output below has been truncated.
 
 The query parameters can be added to the API request after a '?'.
 
-[width="100%",cols="20%,20%,20%,20%,20%",options="header",]
+[width="100%",options="header",]
 |===
 |Key |Type |Required |Default |Description
-|wt |string |No |json |Defines the format of the response. The options are *json* or **xml**. If not specified, JSON will be returned by default.
-|showDefaults |boolean |No |false |If **true**, all default field properties from each field type will be included in the response (e.g. *tokenized* for **solr.TextField**). If **false**, only explicitly specified field properties will be included.
+|wt |string |No |json |Defines the format of the response. The options are *json* or *xml*. If not specified, JSON will be returned by default.
+|showDefaults |boolean |No |false |If *true*, all default field properties from each field type will be included in the response (e.g. *tokenized* for `solr.TextField`). If *false*, only explicitly specified field properties will be included.
 |===
 
 [[SchemaAPI-OUTPUT.3]]
@@ -793,70 +794,66 @@ curl http://localhost:8983/solr/gettingstarted/schema/fieldtypes?wt=json
 
 The sample output below has been truncated to show a few different field types from different parts of the list.
 
-[source,javascript]
+[source,json]
 ----
 {
     "fieldTypes": [
         {
             "analyzer": {
-                "class": "solr.TokenizerChain", 
+                "class": "solr.TokenizerChain",
                 "filters": [
                     {
                         "class": "solr.LowerCaseFilterFactory"
-                    }, 
+                    },
                     {
                         "class": "solr.TrimFilterFactory"
-                    }, 
+                    },
                     {
-                        "class": "solr.PatternReplaceFilterFactory", 
-                        "pattern": "([^a-z])", 
-                        "replace": "all", 
+                        "class": "solr.PatternReplaceFilterFactory",
+                        "pattern": "([^a-z])",
+                        "replace": "all",
                         "replacement": ""
                     }
-                ], 
+                ],
                 "tokenizer": {
                     "class": "solr.KeywordTokenizerFactory"
                 }
-            }, 
-            "class": "solr.TextField", 
-            "dynamicFields": [], 
-            "fields": [], 
-            "name": "alphaOnlySort", 
-            "omitNorms": true, 
+            },
+            "class": "solr.TextField",
+            "dynamicFields": [],
+            "fields": [],
+            "name": "alphaOnlySort",
+            "omitNorms": true,
             "sortMissingLast": true
-        }, 
-...
+        },
         {
-            "class": "solr.TrieFloatField", 
+            "class": "solr.TrieFloatField",
             "dynamicFields": [
-                "*_fs", 
+                "*_fs",
                 "*_f"
-            ], 
+            ],
             "fields": [
-                "price", 
+                "price",
                 "weight"
-            ], 
-            "name": "float", 
-            "positionIncrementGap": "0", 
+            ],
+            "name": "float",
+            "positionIncrementGap": "0",
             "precisionStep": "0"
-        }, 
-...
+        }]
 }
 ----
 
-<<main,Back to Top>>
-
 [[SchemaAPI-ListCopyFields]]
 === List Copy Fields
 
-`GET /__collection__/schema/copyfields`
+`GET /_collection_/schema/copyfields`
 
 [[SchemaAPI-INPUT.4]]
 ==== INPUT
 
 *Path Parameters*
 
-[width="100%",cols="50%,50%",options="header",]
+[width="100%",options="header",]
 |===
 |Key |Description
 |collection |The collection (or core) name.
@@ -866,10 +863,10 @@ The sample output below has been truncated to show a few different field types f
 
 The query parameters can be added to the API request after a '?'.
 
-[width="100%",cols="20%,20%,20%,20%,20%",options="header",]
+[width="100%",options="header",]
 |===
 |Key |Type |Required |Default |Description
-|wt |string |No |json |Defines the format of the response. The options are *json* or **xml**. If not specified, JSON will be returned by default.
+|wt |string |No |json |Defines the format of the response. The options are *json* or *xml*. If not specified, JSON will be returned by default.
 |source.fl |string |No |(all source fields) |Comma- or space-separated list of one or more copyField source fields to include in the response - copyField directives with all other source fields will be excluded from the response. If not specified, all copyField-s will be included in the response.
 |dest.fl |string |No |(all dest fields) |Comma- or space-separated list of one or more copyField dest fields to include in the response - copyField directives with all other dest fields will be excluded. If not specified, all copyField-s will be included in the response.
 |===
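As an illustrative sketch (not part of this commit), the `source.fl` and `dest.fl` filters can be combined in a single request URL; the `gettingstarted` collection name follows the examples used throughout this page:

```shell
# Build a copyfields request limited to rules whose source is "cat" or "author".
# The collection name "gettingstarted" matches the examples in this guide.
SOLR="http://localhost:8983/solr/gettingstarted"
URL="${SOLR}/schema/copyfields?wt=json&source.fl=cat,author"
echo "$URL"
# With a running Solr instance, fetch it:
# curl "$URL"
```

Only copyField directives with `cat` or `author` as the source would be returned; all others are excluded.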
@@ -893,48 +890,46 @@ curl http://localhost:8983/solr/gettingstarted/schema/copyfields?wt=json
 
 The sample output below has been truncated to the first few copy definitions.
 
-[source,javascript]
+[source,json]
 ----
 {
     "copyFields": [
         {
-            "dest": "text", 
+            "dest": "text",
             "source": "author"
-        }, 
+        },
         {
-            "dest": "text", 
+            "dest": "text",
             "source": "cat"
-        }, 
+        },
         {
-            "dest": "text", 
+            "dest": "text",
             "source": "content"
-        }, 
+        },
         {
-            "dest": "text", 
+            "dest": "text",
             "source": "content_type"
-        }, 
-...
-    ], 
+        }
+    ],
     "responseHeader": {
-        "QTime": 3, 
+        "QTime": 3,
         "status": 0
     }
 }
 ----
 
-<<main,Back to Top>>
 
 [[SchemaAPI-ShowSchemaName]]
 === Show Schema Name
 
-`GET /__collection__/schema/name`
+`GET /_collection_/schema/name`
 
 [[SchemaAPI-INPUT.5]]
 ==== INPUT
 
 *Path Parameters*
 
-[width="100%",cols="50%,50%",options="header",]
+[width="100%",options="header",]
 |===
 |Key |Description
 |collection |The collection (or core) name.
@@ -944,10 +939,10 @@ The sample output below has been truncated to the first few copy definitions.
 
 The query parameters can be added to the API request after a '?'.
 
-[width="100%",cols="20%,20%,20%,20%,20%",options="header",]
+[width="100%",options="header",]
 |===
 |Key |Type |Required |Default |Description
-|wt |string |No |json |Defines the format of the response. The options are *json* or **xml**. If not specified, JSON will be returned by default.
+|wt |string |No |json |Defines the format of the response. The options are *json* or *xml*. If not specified, JSON will be returned by default.
 |===
 
 [[SchemaAPI-OUTPUT.5]]
@@ -965,7 +960,7 @@ Get the schema name.
 curl http://localhost:8983/solr/gettingstarted/schema/name?wt=json
 ----
 
-[source,javascript]
+[source,json]
 ----
 {
   "responseHeader":{
@@ -974,19 +969,18 @@ curl http://localhost:8983/solr/gettingstarted/schema/name?wt=json
   "name":"example"}
 ----
 
-<<main,Back to Top>>
 
 [[SchemaAPI-ShowtheSchemaVersion]]
 === Show the Schema Version
 
-`GET /__collection__/schema/version`
+`GET /_collection_/schema/version`
 
 [[SchemaAPI-INPUT.6]]
 ==== INPUT
 
 *Path Parameters*
 
-[width="100%",cols="50%,50%",options="header",]
+[width="100%",options="header",]
 |===
 |Key |Description
 |collection |The collection (or core) name.
@@ -996,10 +990,10 @@ curl http://localhost:8983/solr/gettingstarted/schema/name?wt=json
 
 The query parameters can be added to the API request after a '?'.
 
-[width="100%",cols="20%,20%,20%,20%,20%",options="header",]
+[width="100%",options="header",]
 |===
 |Key |Type |Required |Default |Description
-|wt |string |No |json |Defines the format of the response. The options are *json* or **xml**. If not specified, JSON will be returned by default.
+|wt |string |No |json |Defines the format of the response. The options are *json* or *xml*. If not specified, JSON will be returned by default.
 |===
 
 [[SchemaAPI-OUTPUT.6]]
@@ -1019,7 +1013,7 @@ Get the schema version
 curl http://localhost:8983/solr/gettingstarted/schema/version?wt=json
 ----
 
-[source,javascript]
+[source,json]
 ----
 {
   "responseHeader":{
@@ -1028,19 +1022,18 @@ curl http://localhost:8983/solr/gettingstarted/schema/version?wt=json
   "version":1.5}
 ----
 
-<<main,Back to Top>>
 
 [[SchemaAPI-ListUniqueKey]]
 === List UniqueKey
 
-`GET /__collection__/schema/uniquekey`
+`GET /_collection_/schema/uniquekey`
 
 [[SchemaAPI-INPUT.7]]
 ==== INPUT
 
 *Path Parameters*
 
-[width="100%",cols="50%,50%",options="header",]
+[width="100%",options="header",]
 |===
 |Key |Description
 |collection |The collection (or core) name.
@@ -1050,10 +1043,10 @@ curl http://localhost:8983/solr/gettingstarted/schema/version?wt=json
 
 The query parameters can be added to the API request after a '?'.
 
-[width="100%",cols="20%,20%,20%,20%,20%",options="header",]
+[width="100%",options="header",]
 |===
 |Key |Type |Required |Default |Description
-|wt |string |No |json |Defines the format of the response. The options are *json* or **xml**. If not specified, JSON will be returned by default.
+|wt |string |No |json |Defines the format of the response. The options are *json* or *xml*. If not specified, JSON will be returned by default.
 |===
 
 [[SchemaAPI-OUTPUT.7]]
@@ -1073,7 +1066,7 @@ List the uniqueKey.
 curl http://localhost:8983/solr/gettingstarted/schema/uniquekey?wt=json
 ----
 
-[source,javascript]
+[source,json]
 ----
 {
   "responseHeader":{
@@ -1082,19 +1075,18 @@ curl http://localhost:8983/solr/gettingstarted/schema/uniquekey?wt=json
   "uniqueKey":"id"}
 ----
 
-<<main,Back to Top>>
 
 [[SchemaAPI-ShowGlobalSimilarity]]
 === Show Global Similarity
 
-`GET /__collection__/schema/similarity`
+`GET /_collection_/schema/similarity`
 
 [[SchemaAPI-INPUT.8]]
 ==== INPUT
 
 *Path Parameters*
 
-[width="100%",cols="50%,50%",options="header",]
+[width="100%",options="header",]
 |===
 |Key |Description
 |collection |The collection (or core) name.
@@ -1104,10 +1096,10 @@ curl http://localhost:8983/solr/gettingstarted/schema/uniquekey?wt=json
 
 The query parameters can be added to the API request after a '?'.
 
-[width="100%",cols="20%,20%,20%,20%,20%",options="header",]
+[width="100%",options="header",]
 |===
 |Key |Type |Required |Default |Description
-|wt |string |No |json |Defines the format of the response. The options are *json* or **xml**. If not specified, JSON will be returned by default.
+|wt |string |No |json |Defines the format of the response. The options are *json* or *xml*. If not specified, JSON will be returned by default.
 |===
 
 [[SchemaAPI-OUTPUT.8]]
@@ -1127,7 +1119,7 @@ Get the similarity implementation.
 curl http://localhost:8983/solr/gettingstarted/schema/similarity?wt=json
 ----
 
-[source,javascript]
+[source,json]
 ----
 {
   "responseHeader":{
@@ -1137,19 +1129,18 @@ curl http://localhost:8983/solr/gettingstarted/schema/similarity?wt=json
     "class":"org.apache.solr.search.similarities.DefaultSimilarityFactory"}}
 ----
 
-<<main,Back to Top>>
 
 [[SchemaAPI-GettheDefaultQueryOperator]]
 === Get the Default Query Operator
 
-`GET /__collection__/schema/solrqueryparser/defaultoperator`
+`GET /_collection_/schema/solrqueryparser/defaultoperator`
 
 [[SchemaAPI-INPUT.9]]
 ==== INPUT
 
 *Path Parameters*
 
-[width="100%",cols="50%,50%",options="header",]
+[width="100%",options="header",]
 |===
 |Key |Description
 |collection |The collection (or core) name.
@@ -1159,10 +1150,10 @@ curl http://localhost:8983/solr/gettingstarted/schema/similarity?wt=json
 
 The query parameters can be added to the API request after a '?'.
 
-[width="100%",cols="20%,20%,20%,20%,20%",options="header",]
+[width="100%",options="header",]
 |===
 |Key |Type |Required |Default |Description
-|wt |string |No |json |Defines the format of the response. The options are *json* or **xml**. If not specified, JSON will be returned by default.
+|wt |string |No |json |Defines the format of the response. The options are *json* or *xml*. If not specified, JSON will be returned by default.
 |===
 
 [[SchemaAPI-OUTPUT.9]]
@@ -1182,7 +1173,7 @@ Get the default operator.
 curl http://localhost:8983/solr/gettingstarted/schema/solrqueryparser/defaultoperator?wt=json
 ----
 
-[source,javascript]
+[source,json]
 ----
 {
   "responseHeader":{
@@ -1191,7 +1182,6 @@ curl http://localhost:8983/solr/gettingstarted/schema/solrqueryparser/defaultope
   "defaultOperator":"OR"}
 ----
 
-<<main,Back to Top>>
 
 [[SchemaAPI-ManageResourceData]]
 == Manage Resource Data

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/3f9dc385/solr/solr-ref-guide/src/schema-browser-screen.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/schema-browser-screen.adoc b/solr/solr-ref-guide/src/schema-browser-screen.adoc
index 801c592..ad526c7 100644
--- a/solr/solr-ref-guide/src/schema-browser-screen.adoc
+++ b/solr/solr-ref-guide/src/schema-browser-screen.adoc
@@ -2,20 +2,20 @@
 :page-shortname: schema-browser-screen
 :page-permalink: schema-browser-screen.html
 
-The Schema Browser screen lets you review schema data in a browser window. If you have accessed this window from the Analysis screen, it will be opened to a specific field, dynamic field rule or field type. If there is nothing chosen, use the pull-down menu to choose the field or field type.
+The Schema Browser screen lets you review schema data in a browser window.
 
-image::images/schema-browser-screen/schema_browser_terms.png[image,height=400]
+If you have accessed this window from the Analysis screen, it will be opened to a specific field, dynamic field rule or field type. If there is nothing chosen, use the pull-down menu to choose the field or field type.
 
+.Schema Browser Screen
+image::images/schema-browser-screen/schema_browser_terms.png[image,height=400]
 
 The screen provides a great deal of useful information about each particular field and fieldtype in the Schema, and provides a quick UI for adding fields or fieldtypes using the <<schema-api.adoc#schema-api,Schema API>> (if enabled). In the example above, we have chosen the `cat` field. On the left side of the main view window, we see the field name, that it is copied to the `_text_` field (because of a copyField rule), and that it uses the `strings` fieldtype. Click on one of those field or fieldtype names, and you can see the corresponding definitions.
 
 In the right part of the main view, we see the specific properties of how the `cat` field is defined – either explicitly or implicitly via its fieldtype, as well as how many documents have populated this field. Then we see the analyzer used for indexing and query processing. Click the icon to the left of either of those, and you'll see the definitions for the tokenizers and/or filters that are used. The output of these processes is the information you see when testing how content is handled for a particular field with the <<analysis-screen.adoc#analysis-screen,Analysis Screen>>.
 
-Under the analyzer information is a button to **Load Term Info**. Clicking that button will show the top _N_ terms that are in a sample shard for that field, as well as a histogram showing the number of terms with various frequencies. Click on a term, and you will be taken to the <<query-screen.adoc#query-screen,Query Screen>> to see the results of a query of that term in that field. If you want to always see the term information for a field, choose *Autoload* and it will always appear when there are terms for a field. A histogram shows the number of terms with a given frequency in the field.
+Under the analyzer information is a button to *Load Term Info*. Clicking that button will show the top _N_ terms in a sample shard for that field, as well as a histogram showing the number of terms with various frequencies. Click on a term, and you will be taken to the <<query-screen.adoc#query-screen,Query Screen>> to see the results of a query for that term in that field. If you want to always see the term information for a field, choose *Autoload* and it will appear whenever the field has terms.
 
 [IMPORTANT]
 ====
-
 Term Information is loaded from a single, arbitrarily selected core in the collection, to provide a representative sample. Full <<faceting.adoc#faceting,Field Facet>> query results are needed to see precise term counts across the entire collection.
-
 ====

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/3f9dc385/solr/solr-ref-guide/src/schema-factory-definition-in-solrconfig.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/schema-factory-definition-in-solrconfig.adoc b/solr/solr-ref-guide/src/schema-factory-definition-in-solrconfig.adoc
index 051e4cb..400bc16 100644
--- a/solr/solr-ref-guide/src/schema-factory-definition-in-solrconfig.adoc
+++ b/solr/solr-ref-guide/src/schema-factory-definition-in-solrconfig.adoc
@@ -2,13 +2,15 @@
 :page-shortname: schema-factory-definition-in-solrconfig
 :page-permalink: schema-factory-definition-in-solrconfig.html
 
-Solr's <<schema-api.adoc#schema-api,Schema API>> enables remote clients to access <<documents-fields-and-schema-design.adoc#documents-fields-and-schema-design,schema>> information, and make schema modifications, through a REST interface. Other features such as Solr's <<schemaless-mode.adoc#schemaless-mode,Schemaless Mode>> also work via schema modifications made programatically at run time.
+Solr's <<schema-api.adoc#schema-api,Schema API>> enables remote clients to access <<documents-fields-and-schema-design.adoc#documents-fields-and-schema-design,schema>> information, and make schema modifications, through a REST interface.
+
+Other features such as Solr's <<schemaless-mode.adoc#schemaless-mode,Schemaless Mode>> also work via schema modifications made programmatically at run time.
 
 [IMPORTANT]
 ====
+Using the Managed Schema is required to be able to use the Schema API to modify your schema. However, using Managed Schema does not by itself mean you are also using Solr in Schemaless Mode (or "schema guessing" mode).
 
-Using the Managed Schema is required to be able to use the Schema API to modify your schema. However, using Managed Schema does not by itself mean you are also using Solr in Schemaless Mode (or "schema guessing" mode). Schemaless mode requires enabling the Managed Schema if it is not already, but full schema guessing requires additional configuration as described in <<schemaless-mode.adoc#schemaless-mode,other sections of this Guide>>.
-
+Schemaless mode requires enabling the Managed Schema if it is not already, but full schema guessing requires additional configuration as described in the section <<schemaless-mode.adoc#schemaless-mode,Schemaless Mode>>.
 ====
 
 While the "read" features of the Schema API are supported for all schema types, support for making schema modifications programmatically depends on the `<schemaFactory/>` in use.
@@ -20,8 +22,8 @@ When a `<schemaFactory/>` is not explicitly declared in a `solrconfig.xml` file,
 
 [source,xml]
 ----
- <!-- An example of Solr's implicit default behavior if no 
-      no schemaFactory is explicitly defined. 
+ <!-- An example of Solr's implicit default behavior if no
+      schemaFactory is explicitly defined.
  -->
   <schemaFactory class="ManagedIndexSchemaFactory">
     <bool name="mutable">true</bool>
@@ -49,7 +51,9 @@ An alternative to using a managed schema is to explicitly configure a `ClassicIn
 [[SchemaFactoryDefinitioninSolrConfig-Switchingfromschema.xmltoManagedSchema]]
 === Switching from `schema.xml` to Managed Schema
 
-If you have an existing Solr collection that uses `ClassicIndexSchemaFactory`, and you wish to convert to use a managed schema, you can simplify modify the `solrconfig.xml` to specify the use of the `ManagedIndexSchemaFactory`. Once Solr is restarted and it detects that a `schema.xml` file exists, but the `managedSchemaResourceName` file (ie: "`managed-schema`") does not exist, the existing `schema.xml` file will be renamed to `schema.xml.bak` and the contents are re-written to the managed schema file. If you look at the resulting file, you'll see this at the top of the page:
+If you have an existing Solr collection that uses `ClassicIndexSchemaFactory`, and you wish to convert it to use a managed schema, you can simply modify the `solrconfig.xml` to specify the use of the `ManagedIndexSchemaFactory`.
+
+Once Solr is restarted and it detects that a `schema.xml` file exists, but the `managedSchemaResourceName` file (i.e., "`managed-schema`") does not exist, the existing `schema.xml` file will be renamed to `schema.xml.bak` and its contents re-written to the managed schema file. If you look at the resulting file, you'll see this at the top of the page:
 
 [source,xml]
 ----
@@ -63,19 +67,15 @@ You are now free to use the <<schema-api.adoc#schema-api,Schema API>> as much as
 
 If you have started Solr with managed schema enabled and you would like to switch to manually editing a `schema.xml` file, you should take the following steps:
 
-// TODO: This 'ol' has problematic nested lists inside of it, needs manual editing
-
-1.  Rename the `managed-schema` file to `schema.xml`.
-2.  Modify `solrconfig.xml` to replace the `schemaFactory` class.
-1.  Remove any `ManagedIndexSchemaFactory` definition if it exists.
-2.  Add a `ClassicIndexSchemaFactory` definition as shown above
-3.  Reload the core(s).
+. Rename the `managed-schema` file to `schema.xml`.
+. Modify `solrconfig.xml` to replace the `schemaFactory` class.
+.. Remove any `ManagedIndexSchemaFactory` definition if it exists.
+.. Add a `ClassicIndexSchemaFactory` definition as shown above.
+. Reload the core(s).
 
 If you are using SolrCloud, you may need to modify the files via ZooKeeper. The `bin/solr` script provides an easy way to download the files from ZooKeeper and upload them back after edits. See the section <<solr-control-script-reference.adoc#SolrControlScriptReference-ZooKeeperOperations,ZooKeeper Operations>> for more information.
 
 [TIP]
 ====
-
 To have full control over your `schema.xml` file, you may also want to disable schema guessing, which allows unknown fields to be added to the schema during indexing. The properties that enable this feature are discussed in the section <<schemaless-mode.adoc#schemaless-mode,Schemaless Mode>>.
-
 ====
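For step 2 of the switch back to `schema.xml` described above, the resulting `solrconfig.xml` entry would look something like the sketch below (assuming no other `schemaFactory` settings are needed):

```xml
<!-- Replaces any ManagedIndexSchemaFactory definition; Solr will then
     read the schema from the schema.xml file renamed in step 1. -->
<schemaFactory class="ClassicIndexSchemaFactory"/>
```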

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/3f9dc385/solr/solr-ref-guide/src/schemaless-mode.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/schemaless-mode.adoc b/solr/solr-ref-guide/src/schemaless-mode.adoc
index 53849ea..459f854 100644
--- a/solr/solr-ref-guide/src/schemaless-mode.adoc
+++ b/solr/solr-ref-guide/src/schemaless-mode.adoc
@@ -2,11 +2,13 @@
 :page-shortname: schemaless-mode
 :page-permalink: schemaless-mode.html
 
-Schemaless Mode is a set of Solr features that, when used together, allow users to rapidly construct an effective schema by simply indexing sample data, without having to manually edit the schema. These Solr features, all controlled via `solrconfig.xml`, are:
+Schemaless Mode is a set of Solr features that, when used together, allow users to rapidly construct an effective schema by simply indexing sample data, without having to manually edit the schema.
 
-1.  Managed schema: Schema modifications are made at runtime through Solr APIs, which requires the use of `schemaFactory` that supports these changes - see <<schema-factory-definition-in-solrconfig.adoc#schema-factory-definition-in-solrconfig,Schema Factory Definition in SolrConfig>> for more details.
-2.  Field value class guessing: Previously unseen fields are run through a cascading set of value-based parsers, which guess the Java class of field values - parsers for Boolean, Integer, Long, Float, Double, and Date are currently available.
-3.  Automatic schema field addition, based on field value class(es): Previously unseen fields are added to the schema, based on field value Java classes, which are mapped to schema field types - see <<solr-field-types.adoc#solr-field-types,Solr Field Types>>.
+These Solr features, all controlled via `solrconfig.xml`, are:
+
+. Managed schema: Schema modifications are made at runtime through Solr APIs, which requires the use of a `schemaFactory` that supports these changes - see <<schema-factory-definition-in-solrconfig.adoc#schema-factory-definition-in-solrconfig,Schema Factory Definition in SolrConfig>> for more details.
+. Field value class guessing: Previously unseen fields are run through a cascading set of value-based parsers, which guess the Java class of field values - parsers for Boolean, Integer, Long, Float, Double, and Date are currently available.
+. Automatic schema field addition, based on field value class(es): Previously unseen fields are added to the schema, based on field value Java classes, which are mapped to schema field types - see <<solr-field-types.adoc#solr-field-types,Solr Field Types>>.
 
 [[SchemalessMode-UsingtheSchemalessExample]]
 == Using the Schemaless Example
@@ -18,11 +20,11 @@ The three features of schemaless mode are pre-configured in the `data_driven_sch
 bin/solr start -e schemaless
 ----
 
-This will launch a Solr server, and automatically create a collection (named "```gettingstarted```") that contains only three fields in the initial schema: `id`, `_version_`, and `_text_`.
+This will launch a Solr server, and automatically create a collection (named "```gettingstarted```") that contains only three fields in the initial schema: `id`, `\_version_`, and `\_text_`.
 
-You can use the `/schema/fields` <<schema-api.adoc#schema-api,Schema API>> to confirm this: `curl http://localhost:8983/solr/gettingstarted/schema/fields` will output:
+You can use the `/schema/fields` <<schema-api.adoc#schema-api,Schema API>> to confirm this: `curl \http://localhost:8983/solr/gettingstarted/schema/fields` will output:
 
-[source,javascript]
+[source,json]
 ----
 {
   "responseHeader":{
@@ -49,11 +51,9 @@ You can use the `/schema/fields` <<schema-api.adoc#schema-api,Schema API>> to co
       "uniqueKey":true}]}
 ----
 
-[IMPORTANT]
+[TIP]
 ====
-
-Because the `data_driven_schema_configs` config set includes a `copyField` directive that causes all content to be indexed in a predefined "catch-all" `_text_` field, to enable single-field search that includes all fields' content, the index will be larger than it would be without the `copyField`. When you nail down your schema, consider removing the `_text_` field and the corresponding `copyField` directive if you don't need it.
-
+The `data_driven_schema_configs` config set includes a `copyField` directive that indexes all content into a predefined "catch-all" `\_text_` field, to enable single-field search that includes all fields' content. As a result, the index will be larger than it would be without the `copyField`. When you nail down your schema, consider removing the `\_text_` field and the corresponding `copyField` directive if you don't need it.
 ====
 
 [[SchemalessMode-ConfiguringSchemalessMode]]
@@ -168,9 +168,7 @@ Once the UpdateRequestProcessorChain has been defined, you must instruct your Up
 
 [IMPORTANT]
 ====
-
 After each of these changes has been made, Solr should be restarted (or you can reload the cores to load the new `solrconfig.xml` definitions).
-
 ====
 
 [[SchemalessMode-ExamplesofIndexedDocuments]]
@@ -196,9 +194,9 @@ Output indicating success:
 </response>
 ----
 
-The fields now in the schema (output from `curl http://localhost:8983/solr/gettingstarted/schema/fields` ):
+The fields now in the schema (output from `curl \http://localhost:8983/solr/gettingstarted/schema/fields` ):
 
-[source,javascript]
+[source,text]
 ----
 {
   "responseHeader":{
@@ -209,7 +207,7 @@ The fields now in the schema (output from `curl http://localhost:8983/solr/getti
       "type":"strings"},      // Field value guessed as String -> strings fieldType
     {
       "name":"Artist",
-      "type":"strings"},      // Field value guessed as String -> strings fieldType 
+      "type":"strings"},      // Field value guessed as String -> strings fieldType
     {
       "name":"FromDistributor",
       "type":"tlongs"},       // Field value guessed as Long -> tlongs fieldType
@@ -232,18 +230,16 @@ The fields now in the schema (output from `curl http://localhost:8983/solr/getti
     },
     {
       "name":"id",
-... 
+...
     }]}
 ----
 
 .You Can Still Be Explicit
 [TIP]
 ====
-
 Even if you want to use schemaless mode for most fields, you can still use the <<schema-api.adoc#schema-api,Schema API>> to pre-emptively create some fields, with explicit types, before you index documents that use them.
 
 Internally, the Schema API and the Schemaless Update Processors both use the same <<schema-factory-definition-in-solrconfig.adoc#schema-factory-definition-in-solrconfig,Managed Schema>> functionality.
-
 ====
 
 Once a field has been added to the schema, its field type is fixed. As a consequence, adding documents with field value(s) that conflict with the previously guessed field type will fail. For example, after adding the above document, the "```Sold```" field has the fieldType `tlongs`, but the document below has a non-integral decimal value in this field:
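As a hedged sketch of the tip above, the payload below pre-creates the `Sold` field with an explicit floating-point type before indexing, avoiding the type-guessing conflict just described (the `tdouble` type name is an assumption; check the field types in your configset):

```shell
# Illustrative Schema API payload: pre-create the "Sold" field with an
# explicit floating-point type so later decimal values are not rejected.
# "tdouble" is an assumed type name; check your configset's field types.
PAYLOAD='{"add-field":{"name":"Sold","type":"tdouble","stored":true}}'
echo "$PAYLOAD"
# With a running Solr instance:
# curl -X POST -H 'Content-type:application/json' \
#   --data-binary "$PAYLOAD" http://localhost:8983/solr/gettingstarted/schema
```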

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/3f9dc385/solr/solr-ref-guide/src/securing-solr.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/securing-solr.adoc b/solr/solr-ref-guide/src/securing-solr.adoc
index 9cb7c1a..e8a226b 100644
--- a/solr/solr-ref-guide/src/securing-solr.adoc
+++ b/solr/solr-ref-guide/src/securing-solr.adoc
@@ -15,7 +15,5 @@ When planning how to secure Solr, you should consider which of the available fea
 
 [WARNING]
 ====
-
-No Solr API, including the Admin UI, is designed to be exposed to non-trusted parties. Tune your firewall so that only trusted computers and people are allowed access. Because of this, the project will not regard e.g. Admin UI XSS issues as security vulnerabilities. However, we still ask you to report such issues in JIRA.
-
+No Solr API, including the Admin UI, is designed to be exposed to non-trusted parties. Tune your firewall so that only trusted computers and people are allowed access. Because of this, the project will not regard e.g., Admin UI XSS issues as security vulnerabilities. However, we still ask you to report such issues in JIRA.
 ====

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/3f9dc385/solr/solr-ref-guide/src/segments-info.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/segments-info.adoc b/solr/solr-ref-guide/src/segments-info.adoc
index 143ec8a..1f11fb1 100644
--- a/solr/solr-ref-guide/src/segments-info.adoc
+++ b/solr/solr-ref-guide/src/segments-info.adoc
@@ -6,5 +6,4 @@ The Segments Info screen lets you see a visualization of the various segments in
 
 image::images/segments-info/segments_info.png[image,width=486,height=250]
 
-
 This information may be useful for people to help make decisions about the optimal <<indexconfig-in-solrconfig.adoc#IndexConfiginSolrConfig-MergingIndexSegments,merge settings>> for their data.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/3f9dc385/solr/solr-ref-guide/src/setting-up-an-external-zookeeper-ensemble.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/setting-up-an-external-zookeeper-ensemble.adoc b/solr/solr-ref-guide/src/setting-up-an-external-zookeeper-ensemble.adoc
index 261853a..1db80a8 100644
--- a/solr/solr-ref-guide/src/setting-up-an-external-zookeeper-ensemble.adoc
+++ b/solr/solr-ref-guide/src/setting-up-an-external-zookeeper-ensemble.adoc
@@ -2,23 +2,27 @@
 :page-shortname: setting-up-an-external-zookeeper-ensemble
 :page-permalink: setting-up-an-external-zookeeper-ensemble.html
 
-Although Solr comes bundled with http://zookeeper.apache.org[Apache ZooKeeper], you should consider yourself discouraged from using this internal ZooKeeper in production, because shutting down a redundant Solr instance will also shut down its ZooKeeper server, which might not be quite so redundant. Because a ZooKeeper ensemble must have a quorum of more than half its servers running at any given time, this can be a problem.
+Although Solr comes bundled with http://zookeeper.apache.org[Apache ZooKeeper], using this internal ZooKeeper in production is discouraged.
+
+Shutting down a redundant Solr instance will also shut down its ZooKeeper server, which might not be quite so redundant. Because a ZooKeeper ensemble must have a quorum of more than half its servers running at any given time, this can be a problem.
 
 The solution to this problem is to set up an external ZooKeeper ensemble. Fortunately, while this process can seem intimidating due to the number of powerful options, setting up a simple ensemble is actually quite straightforward, as described below.
 
 .How Many ZooKeepers?
-[NOTE]
-====
-
-"For a ZooKeeper service to be active, there must be a majority of non-failing machines that can communicate with each other. *_To create a deployment that can tolerate the failure of F machines, you should count on deploying 2xF+1 machines_* . Thus, a deployment that consists of three machines can handle one failure, and a deployment of five machines can handle two failures. Note that a deployment of six machines can only handle two failures since three machines is not a majority.
+[quote,ZooKeeper Administrator's Guide,http://zookeeper.apache.org/doc/r3.4.6/zookeeperAdmin.html]
+____
+"For a ZooKeeper service to be active, there must be a majority of non-failing machines that can communicate with each other. *To create a deployment that can tolerate the failure of F machines, you should count on deploying 2xF+1 machines*. Thus, a deployment that consists of three machines can handle one failure, and a deployment of five machines can handle two failures. Note that a deployment of six machines can only handle two failures since three machines is not a majority.
 
 For this reason, ZooKeeper deployments are usually made up of an odd number of machines."
+____
 
-_-- http://zookeeper.apache.org/doc/r3.4.6/zookeeperAdmin.html[ZooKeeper Administrator's Guide]._
+When planning how many ZooKeeper nodes to configure, keep in mind that the main principle for a ZooKeeper ensemble is maintaining a majority of servers to serve requests. This majority is also called a _quorum_.
 
-====
+It is generally recommended to have an odd number of ZooKeeper servers in your ensemble, so a majority is maintained.
+
+For example, if you only have two ZooKeeper nodes and one goes down, 50% of available servers is not a majority, so ZooKeeper will no longer serve requests. However, if you have three ZooKeeper nodes and one goes down, you still have 66% of your servers available, and ZooKeeper will continue normally while you repair the one down node. If you have five nodes, you could continue operating with two down nodes if necessary.
 
-When planning how many ZooKeeper nodes to configure, keep in mind that the main principle for a ZooKeeper ensemble is maintaining a majority of servers to serve requests. This majority is also called a __quorum__. It is generally recommended to have an odd number of ZooKeeper servers in your ensemble, so a majority is maintained. For example, if you only have two ZooKeeper nodes and one goes down, 50% of available servers is not a majority, so ZooKeeper will no longer serve requests. However, if you have three ZooKeeper nodes and one goes down, you have 66% of available servers available, and ZooKeeper will continue normally while you repair the one down node. If you have 5 nodes, you could continue operating with two down nodes if necessary. More information on ZooKeeper clusters is available from the ZooKeeper documentation at http://zookeeper.apache.org/doc/r3.4.6/zookeeperAdmin.html#sc_zkMulitServerSetup.
+More information on ZooKeeper clusters is available from the ZooKeeper documentation at http://zookeeper.apache.org/doc/r3.4.6/zookeeperAdmin.html#sc_zkMulitServerSetup.
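The 2xF+1 rule quoted above reduces to a one-line calculation; this hypothetical helper (not part of Solr or ZooKeeper) just illustrates the arithmetic:

```python
def tolerated_failures(ensemble_size: int) -> int:
    # A quorum is a strict majority, so an ensemble of N nodes
    # tolerates (N - 1) // 2 failures; equivalently, 2F+1 nodes tolerate F.
    return (ensemble_size - 1) // 2

# 3 nodes tolerate 1 failure, 5 tolerate 2, and 6 still only tolerate 2.
```

This is why adding a sixth node buys no extra fault tolerance over five.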
 
 [[SettingUpanExternalZooKeeperEnsemble-DownloadApacheZooKeeper]]
 == Download Apache ZooKeeper
@@ -27,11 +31,9 @@ The first step in setting up Apache ZooKeeper is, of course, to download the sof
 
 [IMPORTANT]
 ====
-
 When using stand-alone ZooKeeper, you need to take care to keep your version of ZooKeeper updated with the latest version distributed with Solr. Since you are using it as a stand-alone application, it does not get upgraded when you upgrade Solr.
 
 Solr currently uses Apache ZooKeeper v3.4.6.
-
 ====
 
 [[SettingUpanExternalZooKeeperEnsemble-SettingUpaSingleZooKeeper]]
@@ -39,12 +41,10 @@ Solr currently uses Apache ZooKeeper v3.4.6.
 
 [[SettingUpanExternalZooKeeperEnsemble-Createtheinstance]]
 === Create the instance
-
 Creating the instance is a simple matter of extracting the files into a specific target directory. The actual directory itself doesn't matter, as long as you know where it is, and where you'd like to have ZooKeeper store its internal data.
 
 [[SettingUpanExternalZooKeeperEnsemble-Configuretheinstance]]
 === Configure the instance
-
 The next step is to configure your ZooKeeper instance. To do that, create the following file: `<ZOOKEEPER_HOME>/conf/zoo.cfg`. To this file, add the following information:
 
 [source,plain]
@@ -56,11 +56,11 @@ clientPort=2181
 
 The parameters are as follows:
 
-**tickTime**: Part of what ZooKeeper does is to determine which servers are up and running at any given time, and the minimum session time out is defined as two "ticks". The `tickTime` parameter specifies, in miliseconds, how long each tick should be.
+`tickTime`:: Part of what ZooKeeper does is to determine which servers are up and running at any given time, and the minimum session timeout is defined as two "ticks". The `tickTime` parameter specifies, in milliseconds, how long each tick should be.
 
-**dataDir**: This is the directory in which ZooKeeper will store data about the cluster. This directory should start out empty.
+`dataDir`:: This is the directory in which ZooKeeper will store data about the cluster. This directory should start out empty.
 
-**clientPort**: This is the port on which Solr will access ZooKeeper.
+`clientPort`:: This is the port on which Solr will access ZooKeeper.
 
 Once this file is in place, you're ready to start the ZooKeeper instance.
 
@@ -90,7 +90,7 @@ Add a node pointing to an existing ZooKeeper at port 2181:
  bin/solr start -cloud -s <path to solr home for new node> -p 8987 -z localhost:2181
 ----
 
-*NOTE:* When you are not using an example to start solr, make sure you upload the configuration set to zookeeper before creating the collection.
+NOTE: When you are not using an example to start Solr, make sure you upload the configuration set to ZooKeeper before creating the collection.
 
 [[SettingUpanExternalZooKeeperEnsemble-ShutdownZooKeeper]]
 === Shut down ZooKeeper
@@ -104,7 +104,7 @@ With an external ZooKeeper ensemble, you need to set things up just a little mor
 
 The difference is that rather than simply starting up the servers, you need to configure them to know about and talk to each other first. So your original `zoo.cfg` file might look like this:
 
-[source,java]
+[source,text]
 ----
 dataDir=/var/lib/zookeeperdata/1
 clientPort=2181
@@ -117,17 +117,17 @@ server.3=localhost:2890:3890
 
 Here you see three new parameters:
 
-**initLimit**: Amount of time, in ticks, to allow followers to connect and sync to a leader. In this case, you have 5 ticks, each of which is 2000 milliseconds long, so the server will wait as long as 10 seconds to connect and sync with the leader.
+initLimit:: Amount of time, in ticks, to allow followers to connect and sync to a leader. In this case, you have 5 ticks, each of which is 2000 milliseconds long, so the server will wait as long as 10 seconds to connect and sync with the leader.
 
-**syncLimit**: Amount of time, in ticks, to allow followers to sync with ZooKeeper. If followers fall too far behind a leader, they will be dropped.
+syncLimit:: Amount of time, in ticks, to allow followers to sync with ZooKeeper. If followers fall too far behind a leader, they will be dropped.
 
-**server.X**: These are the IDs and locations of all servers in the ensemble, the ports on which they communicate with each other. The server ID must additionally stored in the `<dataDir>/myid` file and be located in the `dataDir` of each ZooKeeper instance. The ID identifies each server, so in the case of this first instance, you would create the file `/var/lib/zookeeperdata/1/myid` with the content "1".
+server.X:: These are the IDs and locations of all servers in the ensemble, and the ports on which they communicate with each other. The server ID must additionally be stored in the `<dataDir>/myid` file located in the `dataDir` of each ZooKeeper instance. The ID identifies each server, so in the case of this first instance, you would create the file `/var/lib/zookeeperdata/1/myid` with the content "1".
 
 Now, whereas with Solr you need to create entirely new directories to run multiple instances, all you need for a new ZooKeeper instance, even if it's on the same machine for testing purposes, is a new configuration file. To complete the example you'll create two more configuration files.
 
 The `<ZOOKEEPER_HOME>/conf/zoo2.cfg` file should have the content:
 
-[source,java]
+[source,text]
 ----
 tickTime=2000
 dataDir=c:/sw/zookeeperdata/2
@@ -141,7 +141,7 @@ server.3=localhost:2890:3890
 
 You'll also need to create `<ZOOKEEPER_HOME>/conf/zoo3.cfg`:
 
-[source,java]
+[source,text]
 ----
 tickTime=2000
 dataDir=c:/sw/zookeeperdata/3
@@ -157,7 +157,7 @@ Finally, create your `myid` files in each of the `dataDir` directories so that e
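Creating the `myid` files can be sketched as follows. This is a hypothetical helper assuming all three `dataDir` values share a common root on one host (the configs above mix `/var/lib/zookeeperdata` and `c:/sw/zookeeperdata` paths, so adjust to your layout):

```python
import pathlib

def write_myids(root, ids=(1, 2, 3)):
    # For each instance, create <root>/<n>/myid containing just its ID.
    # The shared-root layout is an assumption for this sketch.
    for n in ids:
        d = pathlib.Path(root) / str(n)
        d.mkdir(parents=True, exist_ok=True)
        (d / "myid").write_text(f"{n}\n")
```

Each file's content must match the `server.X` entry for that instance in the config files.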
 
 To start the servers, you can simply explicitly reference the configuration files:
 
-[source,java]
+[source,bash]
 ----
 cd <ZOOKEEPER_HOME>
 bin/zkServer.sh start zoo.cfg
@@ -167,7 +167,7 @@ bin/zkServer.sh start zoo3.cfg
 
 Once these servers are running, you can reference them from Solr just as you did before:
 
-[source,java]
+[source,bash]
 ----
  bin/solr start -e cloud -z localhost:2181,localhost:2182,localhost:2183 -noprompt
 ----

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/3f9dc385/solr/solr-ref-guide/src/shards-and-indexing-data-in-solrcloud.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/shards-and-indexing-data-in-solrcloud.adoc b/solr/solr-ref-guide/src/shards-and-indexing-data-in-solrcloud.adoc
index 9936163..930779c 100644
--- a/solr/solr-ref-guide/src/shards-and-indexing-data-in-solrcloud.adoc
+++ b/solr/solr-ref-guide/src/shards-and-indexing-data-in-solrcloud.adoc
@@ -2,19 +2,19 @@
 :page-shortname: shards-and-indexing-data-in-solrcloud
 :page-permalink: shards-and-indexing-data-in-solrcloud.html
 
-When your collection is too large for one node, you can break it up and store it in sections by creating multiple **shards**.
+When your collection is too large for one node, you can break it up and store it in sections by creating multiple *shards*.
 
 A Shard is a logical partition of the collection, containing a subset of documents from the collection, such that every document in a collection is contained in exactly one Shard. Which shard contains each document in a collection depends on the overall "Sharding" strategy for that collection. For example, you might have a collection where the "country" field of each document determines which shard it is part of, so documents from the same country are co-located. A different collection might simply use a "hash" on the uniqueKey of each document to determine its Shard.
 
 Before SolrCloud, Solr supported Distributed Search, which allowed one query to be executed across multiple shards, so the query was executed against the entire Solr index and no documents would be missed from the search results. So splitting an index across shards is not exclusively a SolrCloud concept. There were, however, several problems with the distributed approach that necessitated improvement with SolrCloud:
 
-1.  Splitting an index into shards was somewhat manual.
-2.  There was no support for distributed indexing, which meant that you needed to explicitly send documents to a specific shard; Solr couldn't figure out on its own what shards to send documents to.
-3.  There was no load balancing or failover, so if you got a high number of queries, you needed to figure out where to send them and if one shard died it was just gone.
+. Splitting an index into shards was somewhat manual.
+. There was no support for distributed indexing, which meant that you needed to explicitly send documents to a specific shard; Solr couldn't figure out on its own what shards to send documents to.
+. There was no load balancing or failover, so if you got a high number of queries, you needed to figure out where to send them and if one shard died it was just gone.
 
 SolrCloud fixes all those problems. There is support for distributing both the index process and the queries automatically, and ZooKeeper provides failover and load balancing. Additionally, every shard can also have multiple replicas for additional robustness.
 
-In SolrCloud there are no masters or slaves. Instead, every shard consists of at least one physical **replica**, exactly one of which is a **leader**. Leaders are automatically elected, initially on a first-come-first-served basis, and then based on the Zookeeper process described at http://zookeeper.apache.org/doc/trunk/recipes.html#sc_leaderElection[http://zookeeper.apache.org/doc/trunk/recipes.html#sc_leaderElection.].
+In SolrCloud there are no masters or slaves. Instead, every shard consists of at least one physical *replica*, exactly one of which is a *leader*. Leaders are automatically elected, initially on a first-come-first-served basis, and then based on the ZooKeeper process described at http://zookeeper.apache.org/doc/trunk/recipes.html#sc_leaderElection[http://zookeeper.apache.org/doc/trunk/recipes.html#sc_leaderElection].
 
 If a leader goes down, one of the other replicas is automatically elected as the new leader.
 
@@ -23,26 +23,26 @@ When a document is sent to a Solr node for indexing, the system first determines
 [[ShardsandIndexingDatainSolrCloud-DocumentRouting]]
 == Document Routing
 
-Solr offers the ability to specify the router implementation used by a collection by specifying the `router.name` parameter when <<collections-api.adoc#CollectionsAPI-create,creating your collection>>. If you use the (default) "```compositeId```" router, you can send documents with a prefix in the document ID which will be used to calculate the hash Solr uses to determine the shard a document is sent to for indexing. The prefix can be anything you'd like it to be (it doesn't have to be the shard name, for example), but it must be consistent so Solr behaves consistently. For example, if you wanted to co-locate documents for a customer, you could use the customer name or ID as the prefix. If your customer is "IBM", for example, with a document with the ID "12345", you would insert the prefix into the document id field: "IBM!12345". The exclamation mark ('!') is critical here, as it distinguishes the prefix used to determine which shard to direct the document to.
+Solr offers the ability to specify the router implementation used by a collection by specifying the `router.name` parameter when <<collections-api.adoc#CollectionsAPI-create,creating your collection>>.
 
-Then at query time, you include the prefix(es) into your query with the `_route_` parameter (i.e., `q=solr&_route_=IBM!`) to direct queries to specific shards. In some situations, this may improve query performance because it overcomes network latency when querying all the shards.
+If you use the (default) "```compositeId```" router, you can send documents with a prefix in the document ID which will be used to calculate the hash Solr uses to determine the shard a document is sent to for indexing. The prefix can be anything you'd like it to be (it doesn't have to be the shard name, for example), but it must be consistent so Solr behaves consistently. For example, if you wanted to co-locate documents for a customer, you could use the customer name or ID as the prefix. If your customer is "IBM" and you have a document with the ID "12345", you would insert the prefix into the document ID field: "IBM!12345". The exclamation mark ('!') is critical here, as it distinguishes the prefix used to determine which shard to direct the document to.
+
+Then at query time, you include the prefix(es) into your query with the `\_route_` parameter (i.e., `q=solr&_route_=IBM!`) to direct queries to specific shards. In some situations, this may improve query performance because it overcomes network latency when querying all the shards.
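The prefix convention is simple to illustrate. This hypothetical helper splits a single-level compositeId into its route prefix and local ID; it is only a sketch of the convention, not Solr's implementation:

```python
def split_route_key(doc_id: str):
    # "IBM!12345" -> ("IBM", "12345"); an ID with no '!' has no route prefix.
    # Handles one routing level only; "USA!IBM!12345" would need a second split.
    prefix, sep, local = doc_id.partition("!")
    return (prefix, local) if sep else (None, doc_id)
```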
 
 [IMPORTANT]
 ====
-
-The `_route_` parameter replaces `shard.keys`, which has been deprecated and will be removed in a future Solr release.
-
+The `\_route_` parameter replaces `shard.keys`, which has been deprecated and will be removed in a future Solr release.
 ====
 
 The `compositeId` router supports prefixes containing up to 2 levels of routing. For example: a prefix routing first by region, then by customer: "USA!IBM!12345".
 
 Another use case could be if the customer "IBM" has a lot of documents and you want to spread them across multiple shards. The syntax for such a use case would be: "shard_key/num!document_id", where /num is the number of bits from the shard key to use in the composite hash.
 
-So "IBM/3!12345" will take 3 bits from the shard key and 29 bits from the unique doc id, spreading the tenant over 1/8th of the shards in the collection. Likewise if the num value was 2 it would spread the documents across 1/4th the number of shards. At query time, you include the prefix(es) along with the number of bits into your query with the `_route_` parameter (i.e., `q=solr&_route_=IBM/3!`) to direct queries to specific shards.
+So "IBM/3!12345" will take 3 bits from the shard key and 29 bits from the unique doc id, spreading the tenant over 1/8th of the shards in the collection. Likewise, if the num value was 2, it would spread the documents across 1/4th the number of shards. At query time, you include the prefix(es) along with the number of bits into your query with the `\_route_` parameter (i.e., `q=solr&_route_=IBM/3!`) to direct queries to specific shards.
 
 If you do not want to influence how documents are stored, you don't need to specify a prefix in your document ID.
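The /num mechanism can be sketched as follows: the top num bits of the composite hash come from the shard key and the remaining bits from the document ID, so one tenant's documents all land in the same 1/2^num slice of the hash ring. Solr actually uses MurmurHash3; the crc32 below is only a stand-in for illustration:

```python
import zlib

def composite_hash(shard_key: str, doc_id: str, bits: int = 3) -> int:
    # Top `bits` bits come from the shard-key hash, the remaining
    # (32 - bits) bits from the doc-id hash. Illustrative only:
    # Solr uses MurmurHash3, not crc32.
    low_bits = 32 - bits
    high = zlib.crc32(shard_key.encode()) >> low_bits
    low = zlib.crc32(doc_id.encode()) & ((1 << low_bits) - 1)
    return (high << low_bits) | low
```

With bits=3, any two documents sharing the shard key "IBM" produce hashes with identical top 3 bits, which is what confines them to 1/8th of the shards.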
 
-If you created the collection and defined the "implicit" router at the time of creation, you can additionally define a `router.field` parameter to use a field from each document to identify a shard where the document belongs. If the field specified is missing in the document, however, the document will be rejected. You could also use the `_route_` parameter to name a specific shard.
+If you created the collection and defined the "implicit" router at the time of creation, you can additionally define a `router.field` parameter to use a field from each document to identify a shard where the document belongs. If the field specified is missing in the document, however, the document will be rejected. You could also use the `\_route_` parameter to name a specific shard.
 
 [[ShardsandIndexingDatainSolrCloud-ShardSplitting]]
 == Shard Splitting
@@ -56,9 +56,13 @@ More details on how to use shard splitting is in the section on the Collection A
 [[ShardsandIndexingDatainSolrCloud-IgnoringCommitsfromClientApplicationsinSolrCloud]]
 == Ignoring Commits from Client Applications in SolrCloud
 
-In most cases, when running in SolrCloud mode, indexing client applications should not send explicit commit requests. Rather, you should configure auto commits with `openSearcher=false` and auto soft-commits to make recent updates visible in search requests. This ensures that auto commits occur on a regular schedule in the cluster. To enforce a policy where client applications should not send explicit commits, you should update all client applications that index data into SolrCloud. However, that is not always feasible, so Solr provides the IgnoreCommitOptimizeUpdateProcessorFactory, which allows you to ignore explicit commits and/or optimize requests from client applications without having refactor your client application code. To activate this request processor you'll need to add the following to your solrconfig.xml:
+In most cases, when running in SolrCloud mode, indexing client applications should not send explicit commit requests. Rather, you should configure auto commits with `openSearcher=false` and auto soft-commits to make recent updates visible in search requests. This ensures that auto commits occur on a regular schedule in the cluster.
+
+To enforce a policy where client applications should not send explicit commits, you should update all client applications that index data into SolrCloud. However, that is not always feasible, so Solr provides the `IgnoreCommitOptimizeUpdateProcessorFactory`, which allows you to ignore explicit commits and/or optimize requests from client applications without having to refactor your client application code.
+
+To activate this request processor you'll need to add the following to your `solrconfig.xml`:
 
-[source,plain]
+[source,xml]
 ----
 <updateRequestProcessorChain name="ignore-commit-from-client" default="true">
   <processor class="solr.IgnoreCommitOptimizeUpdateProcessorFactory">
@@ -74,7 +78,7 @@ As shown in the example above, the processor will return 200 to the client but w
 
 In the following example, the processor will raise an exception with a 403 code with a customized error message:
 
-[source,plain]
+[source,xml]
 ----
 <updateRequestProcessorChain name="ignore-commit-from-client" default="true">
   <processor class="solr.IgnoreCommitOptimizeUpdateProcessorFactory">
@@ -89,7 +93,7 @@ In the following example, the processor will raise an exception with a 403 code
 
 Lastly, you can also configure it to just ignore optimize and let commits pass through:
 
-[source,plain]
+[source,xml]
 ----
 <updateRequestProcessorChain name="ignore-optimize-only-from-client-403">
   <processor class="solr.IgnoreCommitOptimizeUpdateProcessorFactory">

