lucene-commits mailing list archives

From ctarg...@apache.org
Subject [06/11] lucene-solr:branch_7_0: SOLR-11050: remove Confluence-style anchors and fix all incoming links
Date Fri, 14 Jul 2017 18:35:04 GMT
http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/c6771499/solr/solr-ref-guide/src/running-solr.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/running-solr.adoc b/solr/solr-ref-guide/src/running-solr.adoc
index ecc4112..f18183e 100644
--- a/solr/solr-ref-guide/src/running-solr.adoc
+++ b/solr/solr-ref-guide/src/running-solr.adoc
@@ -114,7 +114,7 @@ Solr also provides a number of useful examples to help you learn about key featu
 bin/solr -e techproducts
 ----
 
-Currently, the available examples you can run are: techproducts, dih, schemaless, and cloud. See the section <<solr-control-script-reference.adoc#SolrControlScriptReference-RunningwithExampleConfigurations,Running with Example Configurations>> for details on each example.
+Currently, the available examples you can run are: techproducts, dih, schemaless, and cloud. See the section <<solr-control-script-reference.adoc#running-with-example-configurations,Running with Example Configurations>> for details on each example.
 
 .Getting Started with SolrCloud
 [NOTE]
@@ -171,7 +171,7 @@ You may want to add a few sample documents before trying to index your own conte
 
 In the `bin/` directory is the post script, a command line tool which can be used to index different types of documents. Do not worry too much about the details for now. The <<indexing-and-basic-data-operations.adoc#indexing-and-basic-data-operations,Indexing and Basic Data Operations>> section has all the details on indexing.
 
-To see some information about the usage of `bin/post`, use the `-help` option. Windows users, see the section for <<post-tool.adoc#PostTool-WindowsSupport,Post Tool on Windows>>.
+To see some information about the usage of `bin/post`, use the `-help` option. Windows users, see the section for <<post-tool.adoc#post-tool-windows-support,Post Tool on Windows>>.
 
 `bin/post` can post various types of content to Solr, including files in Solr's native XML and JSON formats, CSV files, a directory tree of rich documents, or even a simple short web crawl. See the examples at the end of `bin/post -help` for various commands to easily get started posting your content into Solr.
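As a toy illustration of the file-type dispatch just described, a posting tool might choose a content type from the file extension before sending the file to Solr's update endpoints. The mapping below is an assumption for illustration only, not `bin/post`'s actual table:

```python
# Illustrative sketch: extension-to-content-type lookup of the kind a
# posting tool performs. The mapping is an assumption, not bin/post's
# real implementation.
from pathlib import Path

CONTENT_TYPES = {
    ".xml": "application/xml",
    ".json": "application/json",
    ".csv": "application/csv",
}

def guess_type(path: str, default: str = "application/octet-stream") -> str:
    # Fall back to a generic binary type for rich documents (PDF, etc.)
    return CONTENT_TYPES.get(Path(path).suffix.lower(), default)

assert guess_type("docs/books.csv") == "application/csv"
assert guess_type("report.pdf") == "application/octet-stream"
```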
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/c6771499/solr/solr-ref-guide/src/schema-api.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/schema-api.adoc b/solr/solr-ref-guide/src/schema-api.adoc
index 893936f..a12eeb5 100644
--- a/solr/solr-ref-guide/src/schema-api.adoc
+++ b/solr/solr-ref-guide/src/schema-api.adoc
@@ -52,7 +52,7 @@ The base address for the API is `\http://<host>:<port>/solr/<collection_name>`.
 bin/solr -e cloud -noprompt
 ----
 
-== API Entry Points
+== Schema API Entry Points
 
 * `/schema`: <<Retrieve the Entire Schema,retrieve>> the schema, or <<Modify the Schema,modify>> the schema to add, remove, or replace fields, dynamic fields, copy fields, or field types
 * `/schema/fields`: <<List Fields,retrieve information>> about all defined fields or a specific named field
@@ -408,14 +408,12 @@ The query parameters should be added to the API request after '?'.
 `wt`::
 Defines the format of the response. The options are *json*, *xml* or *schema.xml*. If not specified, JSON will be returned by default.
 
-[[SchemaAPI-OUTPUT]]
 ==== Retrieve Schema Response
 
 *Output Content*
 
 The output will include all fields, field types, dynamic rules and copy field rules, in the format requested (JSON or XML). The schema name and version are also included.
 
-[[SchemaAPI-EXAMPLES]]
 ==== Retrieve Schema Examples
 
 Get the entire schema in JSON.
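As a hypothetical client-side sketch of such a request (host, port, and collection name are placeholders), the URL follows the base address given earlier in the section:

```python
# Sketch of building a Schema API request URL; values are placeholders.
from urllib.parse import urlencode

def schema_url(host: str, port: int, collection: str, wt: str = "json") -> str:
    # Base address per this section: http://<host>:<port>/solr/<collection_name>
    return f"http://{host}:{port}/solr/{collection}/schema?" + urlencode({"wt": wt})

assert schema_url("localhost", 8983, "gettingstarted") == \
    "http://localhost:8983/solr/gettingstarted/schema?wt=json"
```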

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/c6771499/solr/solr-ref-guide/src/schema-factory-definition-in-solrconfig.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/schema-factory-definition-in-solrconfig.adoc b/solr/solr-ref-guide/src/schema-factory-definition-in-solrconfig.adoc
index a16979b..4f26591 100644
--- a/solr/solr-ref-guide/src/schema-factory-definition-in-solrconfig.adoc
+++ b/solr/solr-ref-guide/src/schema-factory-definition-in-solrconfig.adoc
@@ -85,7 +85,7 @@ If you have started Solr with managed schema enabled and you would like to switc
 .. Add a `ClassicIndexSchemaFactory` definition as shown above
 . Reload the core(s).
 
-If you are using SolrCloud, you may need to modify the files via ZooKeeper. The `bin/solr` script provides an easy way to download the files from ZooKeeper and upload them back after edits. See the section <<solr-control-script-reference.adoc#SolrControlScriptReference-ZooKeeperOperations,ZooKeeper Operations>> for more information.
+If you are using SolrCloud, you may need to modify the files via ZooKeeper. The `bin/solr` script provides an easy way to download the files from ZooKeeper and upload them back after edits. See the section <<solr-control-script-reference.adoc#zookeeper-operations,ZooKeeper Operations>> for more information.
 
 [TIP]
 ====

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/c6771499/solr/solr-ref-guide/src/segments-info.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/segments-info.adoc b/solr/solr-ref-guide/src/segments-info.adoc
index c5a4395..b0d72fe 100644
--- a/solr/solr-ref-guide/src/segments-info.adoc
+++ b/solr/solr-ref-guide/src/segments-info.adoc
@@ -22,4 +22,4 @@ The Segments Info screen lets you see a visualization of the various segments in
 
 image::images/segments-info/segments_info.png[image,width=486,height=250]
 
-This information may be useful for people to help make decisions about the optimal <<indexconfig-in-solrconfig.adoc#IndexConfiginSolrConfig-MergingIndexSegments,merge settings>> for their data.
+This information may be useful for people to help make decisions about the optimal <<indexconfig-in-solrconfig.adoc#merging-index-segments,merge settings>> for their data.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/c6771499/solr/solr-ref-guide/src/shards-and-indexing-data-in-solrcloud.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/shards-and-indexing-data-in-solrcloud.adoc b/solr/solr-ref-guide/src/shards-and-indexing-data-in-solrcloud.adoc
index d2dbcf7..3d0a87d 100644
--- a/solr/solr-ref-guide/src/shards-and-indexing-data-in-solrcloud.adoc
+++ b/solr/solr-ref-guide/src/shards-and-indexing-data-in-solrcloud.adoc
@@ -36,10 +36,9 @@ If a leader goes down, one of the other replicas is automatically elected as the
 
 When a document is sent to a Solr node for indexing, the system first determines which Shard that document belongs to, and then which node is currently hosting the leader for that shard. The document is then forwarded to the current leader for indexing, and the leader forwards the update to all of the other replicas.
 
-[[ShardsandIndexingDatainSolrCloud-DocumentRouting]]
 == Document Routing
 
-Solr offers the ability to specify the router implementation used by a collection by specifying the `router.name` parameter when <<collections-api.adoc#CollectionsAPI-create,creating your collection>>.
+Solr offers the ability to specify the router implementation used by a collection by specifying the `router.name` parameter when <<collections-api.adoc#create,creating your collection>>.
 
 If you use the (default) "```compositeId```" router, you can send documents with a prefix in the document ID which will be used to calculate the hash Solr uses to determine the shard a document is sent to for indexing. The prefix can be anything you'd like it to be (it doesn't have to be the shard name, for example), but it must be consistent so Solr behaves consistently. For example, if you wanted to co-locate documents for a customer, you could use the customer name or ID as the prefix. If your customer is "IBM", for example, with a document with the ID "12345", you would insert the prefix into the document id field: "IBM!12345". The exclamation mark ('!') is critical here, as it distinguishes the prefix used to determine which shard to direct the document to.
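The prefix rule above can be modeled in a few lines. Only the everything-before-`!` convention comes from the text; the 32-bit hash below is illustrative, not Solr's actual MurmurHash3-based routing:

```python
# Toy model of compositeId-style routing: hash the prefix before '!'
# (CRC32 here stands in for Solr's real hash) to pick a shard.
import zlib

def route(doc_id: str, num_shards: int) -> int:
    # Everything before the first '!' is the routing prefix, if present;
    # an ID without '!' hashes as a whole.
    prefix = doc_id.split("!", 1)[0]
    return zlib.crc32(prefix.encode("utf-8")) % num_shards

# Documents sharing the "IBM!" prefix always land on the same shard.
assert route("IBM!12345", 4) == route("IBM!67890", 4)
```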
 
@@ -55,16 +54,14 @@ If you do not want to influence how documents are stored, you don't need to spec
 
 If you created the collection and defined the "implicit" router at the time of creation, you can additionally define a `router.field` parameter to use a field from each document to identify a shard where the document belongs. If the field specified is missing in the document, however, the document will be rejected. You could also use the `\_route_` parameter to name a specific shard.
 
-[[ShardsandIndexingDatainSolrCloud-ShardSplitting]]
 == Shard Splitting
 
 When you create a collection in SolrCloud, you decide on the initial number of shards to be used. But it can be difficult to know in advance the number of shards that you need, particularly when organizational requirements can change at a moment's notice, and the cost of finding out later that you chose wrong can be high, involving creating new cores and re-indexing all of your data.
 
 The ability to split shards is in the Collections API. It currently allows splitting a shard into two pieces. The existing shard is left as-is, so the split action effectively makes two copies of the data as new shards. You can delete the old shard at a later time when you're ready.
 
-More details on how to use shard splitting is in the section on the Collection API's <<collections-api.adoc#CollectionsAPI-splitshard,SPLITSHARD command>>.
+More details on how to use shard splitting are in the section on the Collection API's <<collections-api.adoc#splitshard,SPLITSHARD command>>.
 
-[[ShardsandIndexingDatainSolrCloud-IgnoringCommitsfromClientApplicationsinSolrCloud]]
 == Ignoring Commits from Client Applications in SolrCloud
 
 In most cases, when running in SolrCloud mode, indexing client applications should not send explicit commit requests. Rather, you should configure auto commits with `openSearcher=false` and auto soft-commits to make recent updates visible in search requests. This ensures that auto commits occur on a regular schedule in the cluster.
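For reference, the commit policy described above is typically expressed in `solrconfig.xml` along these lines (the intervals shown are illustrative, not recommendations):

```xml
<!-- Hard commits flush to stable storage but do not open a new searcher -->
<autoCommit>
  <maxTime>60000</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>
<!-- Soft commits make recent updates visible to search requests -->
<autoSoftCommit>
  <maxTime>5000</maxTime>
</autoSoftCommit>
```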

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/c6771499/solr/solr-ref-guide/src/solr-control-script-reference.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/solr-control-script-reference.adoc b/solr/solr-ref-guide/src/solr-control-script-reference.adoc
index 45a9e80..368aacc 100644
--- a/solr/solr-ref-guide/src/solr-control-script-reference.adoc
+++ b/solr/solr-ref-guide/src/solr-control-script-reference.adoc
@@ -83,7 +83,7 @@ The available options are:
 * dih
 * schemaless
 +
-See the section <<SolrControlScriptReference-RunningwithExampleConfigurations,Running with Example Configurations>> below for more details on the example configurations.
+See the section <<Running with Example Configurations>> below for more details on the example configurations.
 +
 *Example*: `bin/solr start -e schemaless`
 
@@ -185,7 +185,6 @@ When starting in SolrCloud mode, the interactive script session will prompt you
 
 For more information about starting Solr in SolrCloud mode, see also the section <<getting-started-with-solrcloud.adoc#getting-started-with-solrcloud,Getting Started with SolrCloud>>.
 
-[[SolrControlScriptReference-RunningwithExampleConfigurations]]
 ==== Running with Example Configurations
 
 `bin/solr start -e <name>`
@@ -297,7 +296,6 @@ Solr process 39827 running on port 8865
     "collections":"2"}}
 ----
 
-[[SolrControlScriptReference-Healthcheck]]
 === Healthcheck
 
 The `healthcheck` command generates a JSON-formatted health report for a collection when running in SolrCloud mode. The health report provides information about the state of every replica for all shards in a collection, including the number of committed documents and its current state.
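A client consuming that report might summarize it like the sketch below. The report here is condensed and its exact field names are assumptions, loosely shaped like the example response shown later in this section:

```python
# Parse an illustrative healthcheck-style report and list shard leaders.
# The report structure is an assumption for illustration.
import json

report = json.loads("""
{"collection": "gettingstarted",
 "status": "healthy",
 "shards": [{"shard": "shard1",
             "replicas": [{"name": "core_node1", "status": "active", "leader": true},
                          {"name": "core_node2", "status": "active"}]}]}
""")

# One possible summary: map each shard to its leader replica.
leaders = {s["shard"]: next(r["name"] for r in s["replicas"] if r.get("leader"))
           for s in report["shards"]}
assert leaders == {"shard1": "core_node1"}
```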
@@ -306,7 +304,6 @@ The `healthcheck` command generates a JSON-formatted health report for a collect
 
 `bin/solr healthcheck -help`
 
-[[SolrControlScriptReference-AvailableParameters.2]]
 ==== Healthcheck Parameters
 
 `-c <collection>`::
@@ -371,7 +368,6 @@ Below is an example healthcheck request and response using a non-standard ZooKee
           "leader":true}]}]}
 ----
 
-[[SolrControlScriptReference-CollectionsandCores]]
 == Collections and Cores
 
 The `bin/solr` script can also help you create new collections (in SolrCloud mode) or cores (in standalone mode), or delete collections.
@@ -566,7 +562,6 @@ If the `-updateIncludeFileOnly` option is set to *true*, then only the settings
 
 If the `-updateIncludeFileOnly` option is set to *false*, then the settings in `bin/solr.in.sh` or `bin\solr.in.cmd` will be updated, and `security.json` will be removed. However, the `basicAuth.conf` file is not removed with either option.
 
-[[SolrControlScriptReference-ZooKeeperOperations]]
 == ZooKeeper Operations
 
 The `bin/solr` script allows certain operations affecting ZooKeeper. These operations are for SolrCloud mode only. The operations are available as sub-commands, which each have their own set of options.
@@ -577,7 +572,6 @@ The `bin/solr` script allows certain operations affecting ZooKeeper. These opera
 
 NOTE: Solr should have been started at least once before issuing these commands to initialize ZooKeeper with the znodes Solr expects. Once ZooKeeper is initialized, Solr doesn't need to be running on any node to use these commands.
 
-[[SolrControlScriptReference-UploadaConfigurationSet]]
 === Upload a Configuration Set
 
 Use the `zk upconfig` command to upload one of the pre-configured configuration sets or a customized configuration set to ZooKeeper.
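The upload amounts to mirroring a local directory tree into ZooKeeper. A sketch of the path mapping, assuming configsets live under `/configs/<name>` in ZooKeeper:

```python
# Sketch: map files in a local configset directory to znode paths of the
# form /configs/<name>/<relative-path>. The layout is an assumption.
from pathlib import Path
import tempfile

def znode_paths(confdir: str, name: str) -> list:
    base = Path(confdir)
    return sorted(f"/configs/{name}/{p.relative_to(base)}"
                  for p in base.rglob("*") if p.is_file())

with tempfile.TemporaryDirectory() as d:
    (Path(d) / "lang").mkdir()
    (Path(d) / "solrconfig.xml").write_text("<config/>")
    (Path(d) / "lang" / "stopwords_en.txt").write_text("")
    assert znode_paths(d, "mynewconfig") == [
        "/configs/mynewconfig/lang/stopwords_en.txt",
        "/configs/mynewconfig/solrconfig.xml",
    ]
```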
@@ -618,10 +612,9 @@ bin/solr zk upconfig -z 111.222.333.444:2181 -n mynewconfig -d /path/to/configse
 .Reload Collections When Changing Configurations
 [WARNING]
 ====
-This command does *not* automatically make changes effective! It simply uploads the configuration sets to ZooKeeper. You can use the Collection API's <<collections-api.adoc#CollectionsAPI-reload,RELOAD command>> to reload any collections that uses this configuration set.
+This command does *not* automatically make changes effective! It simply uploads the configuration sets to ZooKeeper. You can use the Collection API's <<collections-api.adoc#reload,RELOAD command>> to reload any collections that use this configuration set.
 ====
 
-[[SolrControlScriptReference-DownloadaConfigurationSet]]
 === Download a Configuration Set
 
 Use the `zk downconfig` command to download a configuration set from ZooKeeper to the local filesystem.
@@ -791,12 +784,10 @@ An example of this command with the parameters is:
 `bin/solr zk ls /collections`
 
 
-[[SolrControlScriptReference-Createaznode_supportschroot_]]
 === Create a znode (supports chroot)
 
 Use the `zk mkroot` command to create a znode. The primary use case for this command is to support ZooKeeper's "chroot" concept. However, it can also be used to create arbitrary paths.
 
-[[SolrControlScriptReference-AvailableParameters.9]]
 ==== Create znode Parameters
 
 `<path>`::

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/c6771499/solr/solr-ref-guide/src/solr-glossary.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/solr-glossary.adoc b/solr/solr-ref-guide/src/solr-glossary.adoc
index 1feed2f..de27081 100644
--- a/solr/solr-ref-guide/src/solr-glossary.adoc
+++ b/solr/solr-ref-guide/src/solr-glossary.adoc
@@ -33,7 +33,7 @@ Where possible, terms are linked to relevant parts of the Solr Reference Guide f
 [[SolrGlossary-A]]
 === A
 
-[[atomicupdates]]<<updating-parts-of-documents.adoc#UpdatingPartsofDocuments-AtomicUpdates,Atomic updates>>::
+[[atomicupdates]]<<updating-parts-of-documents.adoc#atomic-updates,Atomic updates>>::
 An approach to updating only one or more fields of a document, instead of reindexing the entire document.
 
 
@@ -120,7 +120,7 @@ A JVM instance running Solr. Also known as a Solr server.
 [[SolrGlossary-O]]
 === O
 
-[[optimisticconcurrency]]<<updating-parts-of-documents.adoc#UpdatingPartsofDocuments-OptimisticConcurrency,Optimistic concurrency>>::
+[[optimisticconcurrency]]<<updating-parts-of-documents.adoc#optimistic-concurrency,Optimistic concurrency>>::
 Also known as "optimistic locking", this is an approach that allows for updates to documents currently in the index while retaining locking or version control.
 
 [[overseer]]Overseer::

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/c6771499/solr/solr-ref-guide/src/spell-checking.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/spell-checking.adoc b/solr/solr-ref-guide/src/spell-checking.adoc
index adb784a..b46c8a1 100644
--- a/solr/solr-ref-guide/src/spell-checking.adoc
+++ b/solr/solr-ref-guide/src/spell-checking.adoc
@@ -212,7 +212,7 @@ This parameter turns on SpellCheck suggestions for the request. If *true*, then
 [[SpellChecking-Thespellcheck.qorqParameter]]
 === The spellcheck.q or q Parameter
 
-This parameter specifies the query to spellcheck. If `spellcheck.q` is defined, then it is used; otherwise the original input query is used. The `spellcheck.q` parameter is intended to be the original query, minus any extra markup like field names, boosts, and so on. If the `q` parameter is specified, then the `SpellingQueryConverter` class is used to parse it into tokens; otherwise the <<tokenizers.adoc#Tokenizers-WhiteSpaceTokenizer,`WhitespaceTokenizer`>> is used. The choice of which one to use is up to the application. Essentially, if you have a spelling "ready" version in your application, then it is probably better to use `spellcheck.q`. Otherwise, if you just want Solr to do the job, use the `q` parameter.
+This parameter specifies the query to spellcheck. If `spellcheck.q` is defined, then it is used; otherwise the original input query is used. The `spellcheck.q` parameter is intended to be the original query, minus any extra markup like field names, boosts, and so on. If the `q` parameter is specified, then the `SpellingQueryConverter` class is used to parse it into tokens; otherwise the <<tokenizers.adoc#white-space-tokenizer,`WhitespaceTokenizer`>> is used. The choice of which one to use is up to the application. Essentially, if you have a spelling "ready" version in your application, then it is probably better to use `spellcheck.q`. Otherwise, if you just want Solr to do the job, use the `q` parameter.
 
 [NOTE]
 ====

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/c6771499/solr/solr-ref-guide/src/stream-decorators.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/stream-decorators.adoc b/solr/solr-ref-guide/src/stream-decorators.adoc
index e65f18a..4db4a82 100644
--- a/solr/solr-ref-guide/src/stream-decorators.adoc
+++ b/solr/solr-ref-guide/src/stream-decorators.adoc
@@ -382,7 +382,7 @@ cartesianProduct(
 }
 ----
 
-As you can see in the examples above, the `cartesianProduct` function does support flattening tuples across multiple fields and/or evaluators. 
+As you can see in the examples above, the `cartesianProduct` function does support flattening tuples across multiple fields and/or evaluators.
 
 == classify
 
@@ -615,8 +615,6 @@ eval(expr)
 In the example above, the `eval` expression reads the first tuple from the underlying expression. It then compiles and
 executes the string Streaming Expression in the `expr_s` field.
 
-
-[[StreamingExpressions-executor]]
 == executor
 
 The `executor` function wraps a stream source that contains streaming expressions, and executes the expressions in parallel. The `executor` function looks for the expression in the `expr_s` field in each tuple. The `executor` function has an internal thread pool that runs tasks that compile and run expressions in parallel on the same worker node. This function can also be parallelized across worker nodes by wrapping it in the <<parallel,`parallel`>> function to provide parallel execution of expressions across a cluster.
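The thread-pool pattern just described can be modeled in miniature. Here arithmetic strings stand in for streaming expressions; this is a toy analogy, not Solr's implementation:

```python
# Toy model of the executor pattern: each tuple carries an expression
# string in expr_s, and a thread pool evaluates them in parallel.
from concurrent.futures import ThreadPoolExecutor

tuples = [{"expr_s": "1 + 1"}, {"expr_s": "2 * 3"}]

with ThreadPoolExecutor(max_workers=2) as pool:
    # eval() on arithmetic strings stands in for compiling and running
    # a streaming expression.
    results = list(pool.map(lambda t: eval(t["expr_s"]), tuples))

assert results == [2, 6]
```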
@@ -984,7 +982,6 @@ The worker nodes can be from the same collection as the data, or they can be a d
 * `zkHost`: (Optional) The ZooKeeper connect string where the worker collection resides.
 * `sort`: The sort criteria for ordering tuples returned by the worker nodes.
 
-[[StreamingExpressions-Syntax.25]]
 === parallel Syntax
 
 [source,text]
@@ -1000,10 +997,9 @@ The worker nodes can be from the same collection as the data, or they can be a d
 
 The expression above shows a `parallel` function wrapping a `reduce` function. This will cause the `reduce` function to be run in parallel across 20 worker nodes.
 
-[[StreamingExpressions-priority]]
 == priority
 
-The `priority` function is a simple priority scheduler for the <<StreamingExpressions-executor,executor>> function. The executor function doesn't directly have a concept of task prioritization; instead it simply executes tasks in the order that they are read from it's underlying stream. The `priority` function provides the ability to schedule a higher priority task ahead of lower priority tasks that were submitted earlier.
+The `priority` function is a simple priority scheduler for the <<executor>> function. The `executor` function doesn't directly have a concept of task prioritization; instead it simply executes tasks in the order that they are read from its underlying stream. The `priority` function provides the ability to schedule a higher priority task ahead of lower priority tasks that were submitted earlier.
 
 The `priority` function wraps two <<stream-sources.adoc#topic,topics>> that are both emitting tuples that contain streaming expressions to execute. The first topic is considered the higher priority task queue.
 
@@ -1011,14 +1007,12 @@ Each time the `priority` function is called, it checks the higher priority task
 
 The `priority` function will only emit a batch of tasks from one of the queues each time it is called. This ensures that no lower priority tasks are executed until the higher priority queue has no tasks to run.
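That drain-one-queue-per-call rule can be modeled with two queues. This is a toy sketch of the scheduling behavior described above, not Solr's implementation; "batch" here simplistically means the whole queue:

```python
# Toy model: each call returns a batch from the high-priority queue if it
# has tasks, otherwise from the low-priority queue.
from collections import deque

def next_batch(high: deque, low: deque) -> list:
    queue = high if high else low
    batch = list(queue)
    queue.clear()
    return batch

high, low = deque(["h1"]), deque(["l1", "l2"])
assert next_batch(high, low) == ["h1"]        # high-priority tasks first
assert next_batch(high, low) == ["l1", "l2"]  # low-priority only when high is empty
```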
 
-[[StreamingExpressions-Parameters.25]]
-=== Parameters
+=== priority Parameters
 
 * `topic expression`: (Mandatory) the high priority task queue
 * `topic expression`: (Mandatory) the lower priority task queue
 
-[[StreamingExpressions-Syntax.26]]
-=== Syntax
+=== priority Syntax
 
 [source,text]
 ----
@@ -1092,7 +1086,7 @@ The example about shows the rollup function wrapping the search function. Notice
 
 == scoreNodes
 
-See section in <<graph-traversal.adoc#GraphTraversal-UsingthescoreNodesFunctiontoMakeaRecommendation,graph traversal>>.
+See the section in <<graph-traversal.adoc#using-the-scorenodes-function-to-make-a-recommendation,graph traversal>>.
 
 == select
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/c6771499/solr/solr-ref-guide/src/streaming-expressions.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/streaming-expressions.adoc b/solr/solr-ref-guide/src/streaming-expressions.adoc
index 5ea3dd9..1474aaa 100644
--- a/solr/solr-ref-guide/src/streaming-expressions.adoc
+++ b/solr/solr-ref-guide/src/streaming-expressions.adoc
@@ -46,7 +46,6 @@ Streams from outside systems can be joined with streams originating from Solr an
 Both streaming expressions and the streaming API are considered experimental, and the APIs are subject to change.
 ====
 
-[[StreamingExpressions-StreamLanguageBasics]]
 == Stream Language Basics
 
 Streaming Expressions are composed of streaming functions which work with a Solr collection. They emit a stream of tuples (key/value Maps).
@@ -55,7 +54,6 @@ Many of the provided streaming functions are designed to work with entire result
 
 Some streaming functions act as stream sources to originate the stream flow. Other streaming functions act as stream decorators to wrap other stream functions and perform operations on the stream of tuples. Many stream functions can be parallelized across a worker collection. This can be particularly powerful for relational algebra functions.
 
-[[StreamingExpressions-StreamingRequestsandResponses]]
 === Streaming Requests and Responses
 
 Solr has a `/stream` request handler that takes streaming expression requests and returns the tuples as a JSON stream. This request handler is implicitly defined, meaning there is nothing that has to be defined in `solrconfig.xml` - see <<implicit-requesthandlers.adoc#implicit-requesthandlers,Implicit RequestHandlers>>.
@@ -112,7 +110,6 @@ StreamFactory streamFactory = new StreamFactory().withCollectionZkHost("collecti
 ParallelStream pstream = (ParallelStream)streamFactory.constructStream("parallel(collection1, group(search(collection1, q=\"*:*\", fl=\"id,a_s,a_i,a_f\", sort=\"a_s asc,a_f asc\", partitionKeys=\"a_s\"), by=\"a_s asc\"), workers=\"2\", zkHost=\""+zkHost+"\", sort=\"a_s asc\")");
 ----
 
-[[StreamingExpressions-DataRequirements]]
 === Data Requirements
 
 Because streaming expressions rely on the `/export` handler, many of the field and field type requirements to use `/export` are also requirements for `/stream`, particularly for `sort` and `fl` parameters. Please see the section <<exporting-result-sets.adoc#exporting-result-sets,Exporting Result Sets>> for details.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/c6771499/solr/solr-ref-guide/src/taking-solr-to-production.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/taking-solr-to-production.adoc b/solr/solr-ref-guide/src/taking-solr-to-production.adoc
index 9763410..fe7ed08 100644
--- a/solr/solr-ref-guide/src/taking-solr-to-production.adoc
+++ b/solr/solr-ref-guide/src/taking-solr-to-production.adoc
@@ -20,17 +20,14 @@
 
 This section provides guidance on how to set up Solr to run in production on *nix platforms, such as Ubuntu. Specifically, we’ll walk through the process of setting up a single Solr instance on a Linux host and then provide tips on how to support multiple Solr nodes running on the same host.
 
-[[TakingSolrtoProduction-ServiceInstallationScript]]
 == Service Installation Script
 
 Solr includes a service installation script (`bin/install_solr_service.sh`) to help you install Solr as a service on Linux. Currently, the script only supports CentOS, Debian, Red Hat, SUSE and Ubuntu Linux distributions. Before running the script, you need to determine a few parameters about your setup. Specifically, you need to decide where to install Solr and which system user should be the owner of the Solr files and process.
 
-[[TakingSolrtoProduction-Planningyourdirectorystructure]]
 === Planning Your Directory Structure
 
 We recommend separating your live Solr files, such as logs and index files, from the files included in the Solr distribution bundle, as that makes it easier to upgrade Solr and is considered a good practice to follow as a system administrator.
 
-[[TakingSolrtoProduction-SolrInstallationDirectory]]
 ==== Solr Installation Directory
 
 By default, the service installation script will extract the distribution archive into `/opt`. You can change this location using the `-i` option when running the installation script. The script will also create a symbolic link to the versioned directory of Solr. For instance, if you run the installation script for Solr {solr-docs-version}.0, then the following directory structure will be used:
@@ -43,19 +40,16 @@ By default, the service installation script will extract the distribution archiv
 
 Using a symbolic link insulates any scripts from being dependent on the specific Solr version. If, down the road, you need to upgrade to a later version of Solr, you can just update the symbolic link to point to the upgraded version of Solr. We’ll use `/opt/solr` to refer to the Solr installation directory in the remaining sections of this page.
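The re-pointing step can be sketched with a throwaway directory standing in for `/opt` (the version numbers are illustrative):

```shell
# Simulate the upgrade: re-point the stable "solr" symlink to a new
# versioned directory without touching any scripts that reference it.
tmp=$(mktemp -d)
mkdir -p "$tmp/solr-7.0.0" "$tmp/solr-7.1.0"
ln -s "$tmp/solr-7.0.0" "$tmp/solr"     # initial install
ln -sfn "$tmp/solr-7.1.0" "$tmp/solr"   # re-point after an upgrade
target=$(readlink "$tmp/solr")
```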
 
-[[TakingSolrtoProduction-SeparateDirectoryforWritableFiles]]
 ==== Separate Directory for Writable Files
 
 You should also separate writable Solr files into a different directory; by default, the installation script uses `/var/solr`, but you can override this location using the `-d` option. With this approach, the files in `/opt/solr` will remain untouched and all files that change while Solr is running will live under `/var/solr`.
 
-[[TakingSolrtoProduction-CreatetheSolruser]]
 === Create the Solr User
 
 Running Solr as `root` is not recommended for security reasons, and the <<solr-control-script-reference.adoc#solr-control-script-reference,control script>> start command will refuse to do so. Consequently, you should determine the username of a system user that will own all of the Solr files and the running Solr process. By default, the installation script will create the *solr* user, but you can override this setting using the `-u` option. If your organization has specific requirements for creating new user accounts, then you should create the user before running the script. The installation script will make the Solr user the owner of the `/opt/solr` and `/var/solr` directories.
 
 You are now ready to run the installation script.
 
-[[TakingSolrtoProduction-RuntheSolrInstallationScript]]
 === Run the Solr Installation Script
 
 To run the script, you'll need to download the latest Solr distribution archive and then do the following:
@@ -97,12 +91,10 @@ If you do not want to start the service immediately, pass the `-n` option. You c
 
 We'll cover some additional configuration settings you can make to fine-tune your Solr setup in a moment. Before moving on, let's take a closer look at the steps performed by the installation script. This gives you a better overview and will help you understand important details about your Solr installation when reading other pages in this guide; such as when a page refers to Solr home, you'll know exactly where that is on your system.
 
-[[TakingSolrtoProduction-SolrHomeDirectory]]
 ==== Solr Home Directory
 
 The Solr home directory (not to be confused with the Solr installation directory) is where Solr manages core directories with index files. By default, the installation script uses `/var/solr/data`. If the `-d` option is used on the install script, then this will change to the `data` subdirectory in the location given to the `-d` option. Take a moment to inspect the contents of the Solr home directory on your system. If you do not <<using-zookeeper-to-manage-configuration-files.adoc#using-zookeeper-to-manage-configuration-files,store `solr.xml` in ZooKeeper>>, the home directory must contain a `solr.xml` file. When Solr starts up, the Solr Control Script passes the location of the home directory using the `-Dsolr.solr.home=...` system property.
 
-[[TakingSolrtoProduction-Environmentoverridesincludefile]]
 ==== Environment Overrides Include File
 
 The service installation script creates an environment-specific include file that overrides defaults used by the `bin/solr` script. The main advantage of using an include file is that it provides a single location where all of your environment-specific overrides are defined. Take a moment to inspect the contents of the `/etc/default/solr.in.sh` file, which is the default path set up by the installation script. If you used the `-s` option on the install script to change the name of the service, then the first part of the filename will be different. For a service named `solr-demo`, the file will be named `/etc/default/solr-demo.in.sh`. There are many settings that you can override using this file. However, at a minimum, this script needs to define the `SOLR_PID_DIR` and `SOLR_HOME` variables, such as:
@@ -115,7 +107,6 @@ SOLR_HOME=/var/solr/data
 
 The `SOLR_PID_DIR` variable sets the directory where the <<solr-control-script-reference.adoc#solr-control-script-reference,control script>> will write out a file containing the Solr server’s process ID.
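As a minimal sketch, an include file satisfying that requirement could look like the following; both paths assume the installer defaults described above.

```shell
# Minimal /etc/default/solr.in.sh; paths assume the defaults
# created by the installation script.
SOLR_PID_DIR=/var/solr
SOLR_HOME=/var/solr/data
```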
 
-[[TakingSolrtoProduction-Logsettings]]
 ==== Log Settings
 
 Solr uses Apache Log4J for logging. The installation script copies `/opt/solr/server/resources/log4j.properties` to `/var/solr/log4j.properties`. Take a moment to verify that the Solr include file is configured to send logs to the correct location by checking the following settings in `/etc/default/solr.in.sh`:
@@ -128,7 +119,6 @@ SOLR_LOGS_DIR=/var/solr/logs
 
 For more information about Log4J configuration, please see: <<configuring-logging.adoc#configuring-logging,Configuring Logging>>
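As a sketch, the log-related entries being verified typically look like the following; the paths assume the installer defaults described above.

```shell
# Log settings in /etc/default/solr.in.sh; values assume the
# installer defaults (log4j.properties copied to /var/solr).
LOG4J_PROPS=/var/solr/log4j.properties
SOLR_LOGS_DIR=/var/solr/logs
```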
 
-[[TakingSolrtoProduction-init.dscript]]
 ==== init.d Script
 
 When running a service like Solr on Linux, it’s common to set up an init.d script so that system administrators can control Solr using the `service` tool, such as: `service solr start`. The installation script creates a very basic init.d script to help you get started. Take a moment to inspect the `/etc/init.d/solr` file, which is the default script name set up by the installation script. If you used the `-s` option on the install script to change the name of the service, then the filename will be different. Notice that the following variables are set up for your environment based on the parameters passed to the installation script:
@@ -149,7 +139,6 @@ service solr start
 
 The `/etc/init.d/solr` script also supports the *stop*, *restart*, and *status* commands. Please keep in mind that the init script that ships with Solr is very basic and is intended to show you how to set up Solr as a service. However, it’s also common to use more advanced tools like *supervisord* or *upstart* to control Solr as a service on Linux. While showing how to integrate Solr with tools like supervisord is beyond the scope of this guide, the `init.d/solr` script should provide enough guidance to help you get started. Also, the installation script sets the Solr service to start automatically when the host machine initializes.
 
-[[TakingSolrtoProduction-ProgressCheck]]
 === Progress Check
 
 In the next section, we cover some additional environment settings to help you fine-tune your production setup. However, before we move on, let's review what we've achieved thus far. Specifically, you should be able to control Solr using `/etc/init.d/solr`. Please verify the following commands work with your setup:
@@ -174,10 +163,8 @@ Solr process PID running on port 8983
 
 If the `status` command is not successful, look for error messages in `/var/solr/logs/solr.log`.
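For reference, the verification above boils down to commands along these lines; the service name assumes the default `solr`.

```shell
# Stop, start, and check the Solr service via its init.d script.
sudo service solr stop
sudo service solr start
sudo service solr status

# If status is not successful, inspect the log for errors.
tail -n 100 /var/solr/logs/solr.log
```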
 
-[[TakingSolrtoProduction-Finetuneyourproductionsetup]]
 == Fine-Tune Your Production Setup
 
-[[TakingSolrtoProduction-MemoryandGCSettings]]
 === Memory and GC Settings
 
 By default, the `bin/solr` script sets the maximum Java heap size to 512M (`-Xmx512m`), which is fine for getting started with Solr. For production, you’ll want to increase the maximum heap size based on the memory requirements of your search application; values between 10 and 20 gigabytes are not uncommon for production servers. When you need to change the memory settings for your Solr server, use the `SOLR_JAVA_MEM` variable in the include file, such as:
@@ -189,13 +176,11 @@ SOLR_JAVA_MEM="-Xms10g -Xmx10g"
 
 Also, the <<solr-control-script-reference.adoc#solr-control-script-reference,Solr Control Script>> comes with a set of pre-configured Java Garbage Collection settings that have been shown to work well with Solr for a number of different workloads. However, these settings may not work well for your specific use of Solr. Consequently, you may need to change the GC settings, which should also be done with the `GC_TUNE` variable in the `/etc/default/solr.in.sh` include file. For more information about tuning your memory and garbage collection settings, see: <<jvm-settings.adoc#jvm-settings,JVM Settings>>.
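For example, heap and GC overrides both live in the include file; the `GC_TUNE` flags below are purely illustrative, not a recommendation for any particular workload.

```shell
# Heap and GC overrides in /etc/default/solr.in.sh; the flag
# values shown here are illustrative only.
SOLR_JAVA_MEM="-Xms10g -Xmx10g"
GC_TUNE="-XX:+UseG1GC -XX:MaxGCPauseMillis=250"
```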
 
-[[TakingSolrtoProduction-Out-of-MemoryShutdownHook]]
 ==== Out-of-Memory Shutdown Hook
 
 The `bin/solr` script registers the `bin/oom_solr.sh` script to be called by the JVM if an `OutOfMemoryError` occurs. The `oom_solr.sh` script will issue a `kill -9` to the Solr process that experiences the `OutOfMemoryError`. This behavior is recommended when running in SolrCloud mode so that ZooKeeper is immediately notified that a node has experienced a non-recoverable error. Take a moment to inspect the contents of the `/opt/solr/bin/oom_solr.sh` script so that you are familiar with the actions the script will perform if it is invoked by the JVM.
 
-[[TakingSolrtoProduction-SolrCloud]]
-=== SolrCloud
+=== Going to Production with SolrCloud
 
 To run Solr in SolrCloud mode, you need to set the `ZK_HOST` variable in the include file to point to your ZooKeeper ensemble. Running the embedded ZooKeeper is not supported in production environments. For instance, if you have a ZooKeeper ensemble hosted on the following three hosts on the default client port 2181 (zk1, zk2, and zk3), then you would set:
 
@@ -206,7 +191,6 @@ ZK_HOST=zk1,zk2,zk3
 
 When the `ZK_HOST` variable is set, Solr will launch in "cloud" mode.
 
-[[TakingSolrtoProduction-ZooKeeperchroot]]
 ==== ZooKeeper chroot
 
 If you're using a ZooKeeper instance that is shared by other systems, it's recommended to isolate the SolrCloud znode tree using ZooKeeper's chroot support. For instance, to ensure all znodes created by SolrCloud are stored under `/solr`, you can put `/solr` on the end of your `ZK_HOST` connection string, such as:
@@ -225,12 +209,9 @@ bin/solr zk mkroot /solr -z <ZK_node>:<ZK_PORT>
 
 [NOTE]
 ====
-
 If you also want to bootstrap ZooKeeper with existing `solr_home`, you can instead use the `zkcli.sh` / `zkcli.bat` `bootstrap` command, which will also create the chroot path if it does not exist. See <<command-line-utilities.adoc#command-line-utilities,Command Line Utilities>> for more info.
-
 ====
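Putting the chroot together with the ensemble, the include-file entry might look like the following; the hostnames and the `/solr` chroot path are illustrative.

```shell
# ZooKeeper ensemble on the default client port, with all SolrCloud
# znodes isolated under the /solr chroot.
ZK_HOST=zk1:2181,zk2:2181,zk3:2181/solr
```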
 
-[[TakingSolrtoProduction-SolrHostname]]
 === Solr Hostname
 
 Use the `SOLR_HOST` variable in the include file to set the hostname of the Solr server.
@@ -242,7 +223,6 @@ SOLR_HOST=solr1.example.com
 
 Setting the hostname of the Solr server is recommended, especially when running in SolrCloud mode, as this determines the address of the node when it registers with ZooKeeper.
 
-[[TakingSolrtoProduction-Overridesettingsinsolrconfig.xml]]
 === Override Settings in solrconfig.xml
 
 Solr allows configuration properties to be overridden using Java system properties passed at startup using the `-Dproperty=value` syntax. For instance, in `solrconfig.xml`, the default auto soft commit settings are set to:
@@ -268,7 +248,6 @@ The `bin/solr` script simply passes options starting with `-D` on to the JVM dur
 SOLR_OPTS="$SOLR_OPTS -Dsolr.autoSoftCommit.maxTime=10000"
 ----
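Because each assignment appends to the previous value, multiple `-D` overrides can be stacked in the include file. A quick sketch (the second property is just an illustrative example):

```shell
# Each -D override is appended to SOLR_OPTS in turn.
SOLR_OPTS="-Dsolr.autoSoftCommit.maxTime=10000"
SOLR_OPTS="$SOLR_OPTS -Dsolr.lock.type=native"
echo "$SOLR_OPTS"
```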
 
-[[TakingSolrtoProduction-RunningmultipleSolrnodesperhost]]
 == Running Multiple Solr Nodes Per Host
 
 The `bin/solr` script is capable of running multiple instances on one machine, but for a *typical* installation, this is not a recommended setup. Extra CPU and memory resources are required for each additional instance. A single instance is easily capable of handling multiple indexes.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/c6771499/solr/solr-ref-guide/src/the-terms-component.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/the-terms-component.adoc b/solr/solr-ref-guide/src/the-terms-component.adoc
index 69e1b07..c8b51ca 100644
--- a/solr/solr-ref-guide/src/the-terms-component.adoc
+++ b/solr/solr-ref-guide/src/the-terms-component.adoc
@@ -22,12 +22,10 @@ The Terms Component provides access to the indexed terms in a field and the numb
 
 In a sense, this search component provides fast field-faceting over the whole index, not restricted by the base query or any filters. The document frequencies returned are the number of documents that match the term, including any documents that have been marked for deletion but not yet removed from the index.
 
-[[TheTermsComponent-ConfiguringtheTermsComponent]]
 == Configuring the Terms Component
 
 By default, the Terms Component is already configured in `solrconfig.xml` for each collection.
 
-[[TheTermsComponent-DefiningtheTermsComponent]]
 === Defining the Terms Component
 
 Defining the Terms search component is straightforward: simply give it a name and use the class `solr.TermsComponent`.
@@ -39,7 +37,6 @@ Defining the Terms search component is straightforward: simply give it a name an
 
 This makes the component available for use, but by itself it will not be usable until included with a request handler.
 
-[[TheTermsComponent-UsingtheTermsComponentinaRequestHandler]]
 === Using the Terms Component in a Request Handler
 
 The terms component is included with the `/terms` request handler, which is among Solr's out-of-the-box request handlers - see <<implicit-requesthandlers.adoc#implicit-requesthandlers,Implicit RequestHandlers>>.
@@ -48,7 +45,6 @@ Note that the defaults for this request handler set the parameter "terms" to tru
 
 You could add this component to another handler if you wanted to, and pass "terms=true" in the HTTP request in order to get terms back. If it is only defined in a separate handler, you must use that handler when querying in order to get terms and not regular documents as results.
 
-[[TheTermsComponent-TermsComponentParameters]]
 === Terms Component Parameters
 
 The parameters below allow you to control what terms are returned. You can also configure any of these with the request handler if you'd like to set them permanently. Or, you can add them to the query request. These parameters are:
@@ -159,12 +155,10 @@ The response to a terms request is a list of the terms and their document freque
 
 You may also be interested in the {solr-javadocs}/solr-core/org/apache/solr/handler/component/TermsComponent.html[TermsComponent javadoc].
 
-[[TheTermsComponent-Examples]]
-== Examples
+== Terms Component Examples
 
 All of the following sample queries work with Solr's "`bin/solr -e techproducts`" example.
 
-[[TheTermsComponent-GetTop10Terms]]
 === Get Top 10 Terms
 
 This query requests the first ten terms in the name field: `\http://localhost:8983/solr/techproducts/terms?terms.fl=name`
@@ -195,8 +189,6 @@ Results:
 </response>
 ----
 
-
-[[TheTermsComponent-GetFirst10TermsStartingwithLetter_a_]]
 === Get First 10 Terms Starting with Letter 'a'
 
 This query requests the first ten terms in the name field, in index order (instead of the top 10 results by document count): `\http://localhost:8983/solr/techproducts/terms?terms.fl=name&terms.lower=a&terms.sort=index`
@@ -227,7 +219,6 @@ Results:
 </response>
 ----
 
-[[TheTermsComponent-SolrJinvocation]]
 === SolrJ Invocation
 
 [source,java]
@@ -245,7 +236,6 @@ Results:
     List<Term> terms = request.process(getSolrClient()).getTermsResponse().getTerms("terms_s");
 ----
 
-[[TheTermsComponent-UsingtheTermsComponentforanAuto-SuggestFeature]]
 == Using the Terms Component for an Auto-Suggest Feature
 
 If the <<suggester.adoc#suggester,Suggester>> doesn't suit your needs, you can use the Terms component in Solr to build a similar feature for your own search application. Simply submit a query specifying whatever characters the user has typed so far as a prefix. For example, if the user has typed "at", the search engine's interface would submit the following query:
@@ -288,7 +278,6 @@ Result:
 }
 ----
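From the command line, the same prefix lookup can be sketched with curl; this assumes the techproducts example is running on localhost, and `terms.prefix` carries the user's partial input.

```shell
# Ask the /terms handler for terms in the name field starting with "at".
curl 'http://localhost:8983/solr/techproducts/terms?terms.fl=name&terms.prefix=at&wt=json'
```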
 
-[[TheTermsComponent-DistributedSearchSupport]]
 == Distributed Search Support
 
 The TermsComponent also supports distributed indexes. For the `/terms` request handler, you must provide the following two parameters:

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/c6771499/solr/solr-ref-guide/src/tokenizers.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/tokenizers.adoc b/solr/solr-ref-guide/src/tokenizers.adoc
index 7a8bdeb..7718723 100644
--- a/solr/solr-ref-guide/src/tokenizers.adoc
+++ b/solr/solr-ref-guide/src/tokenizers.adoc
@@ -49,7 +49,6 @@ The following sections describe the tokenizer factory classes included in this r
 
 For user tips about Solr's tokenizers, see http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters.
 
-[[Tokenizers-StandardTokenizer]]
 == Standard Tokenizer
 
 This tokenizer splits the text field into tokens, treating whitespace and punctuation as delimiters. Delimiter characters are discarded, with the following exceptions:
@@ -80,7 +79,6 @@ The Standard Tokenizer supports http://unicode.org/reports/tr29/#Word_Boundaries
 
 *Out:* "Please", "email", "john.doe", "foo.com", "by", "03", "09", "re", "m37", "xq"
 
-[[Tokenizers-ClassicTokenizer]]
 == Classic Tokenizer
 
 The Classic Tokenizer preserves the same behavior as the Standard Tokenizer of Solr versions 3.1 and previous. It does not use the http://unicode.org/reports/tr29/#Word_Boundaries[Unicode standard annex UAX#29] word boundary rules that the Standard Tokenizer uses. This tokenizer splits the text field into tokens, treating whitespace and punctuation as delimiters. Delimiter characters are discarded, with the following exceptions:
@@ -110,7 +108,6 @@ The Classic Tokenizer preserves the same behavior as the Standard Tokenizer of S
 
 *Out:* "Please", "email", "john.doe@foo.com", "by", "03-09", "re", "m37-xq"
 
-[[Tokenizers-KeywordTokenizer]]
 == Keyword Tokenizer
 
 This tokenizer treats the entire text field as a single token.
@@ -132,7 +129,6 @@ This tokenizer treats the entire text field as a single token.
 
 *Out:* "Please, email john.doe@foo.com by 03-09, re: m37-xq."
 
-[[Tokenizers-LetterTokenizer]]
 == Letter Tokenizer
 
 This tokenizer creates tokens from strings of contiguous letters, discarding all non-letter characters.
@@ -154,7 +150,6 @@ This tokenizer creates tokens from strings of contiguous letters, discarding all
 
 *Out:* "I", "can", "t"
 
-[[Tokenizers-LowerCaseTokenizer]]
 == Lower Case Tokenizer
 
 Tokenizes the input stream by delimiting at non-letters and then converting all letters to lowercase. Whitespace and non-letters are discarded.
@@ -176,7 +171,6 @@ Tokenizes the input stream by delimiting at non-letters and then converting all
 
 *Out:* "i", "just", "love", "my", "iphone"
 
-[[Tokenizers-N-GramTokenizer]]
 == N-Gram Tokenizer
 
 Reads the field text and generates n-gram tokens of sizes in the given range.
@@ -219,7 +213,6 @@ With an n-gram size range of 4 to 5:
 
 *Out:* "bicy", "bicyc", "icyc", "icycl", "cycl", "cycle", "ycle"
 
-[[Tokenizers-EdgeN-GramTokenizer]]
 == Edge N-Gram Tokenizer
 
 Reads the field text and generates edge n-gram tokens of sizes in the given range.
@@ -279,7 +272,6 @@ Edge n-gram range of 2 to 5, from the back side:
 
 *Out:* "oo", "loo", "aloo", "baloo"
 
-[[Tokenizers-ICUTokenizer]]
 == ICU Tokenizer
 
 This tokenizer processes multilingual text and tokenizes it appropriately based on its script attribute.
@@ -319,7 +311,6 @@ To use this tokenizer, you must add additional .jars to Solr's classpath (as des
 
 ====
 
-[[Tokenizers-PathHierarchyTokenizer]]
 == Path Hierarchy Tokenizer
 
 This tokenizer creates synonyms from file path hierarchies.
@@ -347,7 +338,6 @@ This tokenizer creates synonyms from file path hierarchies.
 
 *Out:* "c:", "c:/usr", "c:/usr/local", "c:/usr/local/apache"
 
-[[Tokenizers-RegularExpressionPatternTokenizer]]
 == Regular Expression Pattern Tokenizer
 
 This tokenizer uses a Java regular expression to break the input text stream into tokens. The expression provided by the pattern argument can be interpreted either as a delimiter that separates tokens, or to match patterns that should be extracted from the text as tokens.
@@ -407,7 +397,6 @@ Extract part numbers which are preceded by "SKU", "Part" or "Part Number", case
 
 *Out:* "1234", "5678", "126-987"
 
-[[Tokenizers-SimplifiedRegularExpressionPatternTokenizer]]
 == Simplified Regular Expression Pattern Tokenizer
 
 This tokenizer is similar to the `PatternTokenizerFactory` described above, but uses Lucene {lucene-javadocs}/core/org/apache/lucene/util/automaton/RegExp.html[`RegExp`] pattern matching to construct distinct tokens for the input stream. The syntax is more limited than `PatternTokenizerFactory`, but the tokenization is quite a bit faster.
@@ -431,7 +420,6 @@ To match tokens delimited by simple whitespace characters:
 </analyzer>
 ----
 
-[[Tokenizers-SimplifiedRegularExpressionPatternSplittingTokenizer]]
 == Simplified Regular Expression Pattern Splitting Tokenizer
 
 This tokenizer is similar to the `SimplePatternTokenizerFactory` described above, but uses Lucene {lucene-javadocs}/core/org/apache/lucene/util/automaton/RegExp.html[`RegExp`] pattern matching to identify sequences of characters that should be used to split tokens. The syntax is more limited than `PatternTokenizerFactory`, but the tokenization is quite a bit faster.
@@ -455,7 +443,6 @@ To match tokens delimited by simple whitespace characters:
 </analyzer>
 ----
 
-[[Tokenizers-UAX29URLEmailTokenizer]]
 == UAX29 URL Email Tokenizer
 
 This tokenizer splits the text field into tokens, treating whitespace and punctuation as delimiters. Delimiter characters are discarded, with the following exceptions:
@@ -491,7 +478,6 @@ The UAX29 URL Email Tokenizer supports http://unicode.org/reports/tr29/#Word_Bou
 
 *Out:* "Visit", "http://accarol.com/contact.htm?from=external&a=10", "or", "e", "mail", "bob.cratchet@accarol.com"
 
-[[Tokenizers-WhiteSpaceTokenizer]]
 == White Space Tokenizer
 
 Simple tokenizer that splits the text stream on whitespace and returns sequences of non-whitespace characters as tokens. Note that any punctuation _will_ be included in the tokens.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/c6771499/solr/solr-ref-guide/src/transforming-result-documents.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/transforming-result-documents.adoc b/solr/solr-ref-guide/src/transforming-result-documents.adoc
index 9e1d4ad..754060d 100644
--- a/solr/solr-ref-guide/src/transforming-result-documents.adoc
+++ b/solr/solr-ref-guide/src/transforming-result-documents.adoc
@@ -126,14 +126,14 @@ A default style can be configured by specifying an "args" parameter in your conf
 
 === [child] - ChildDocTransformerFactory
 
-This transformer returns all <<uploading-data-with-index-handlers.adoc#UploadingDatawithIndexHandlers-NestedChildDocuments,descendant documents>> of each parent document matching your query in a flat list nested inside the matching parent document. This is useful when you have indexed nested child documents and want to retrieve the child documents for the relevant parent documents for any type of search query.
+This transformer returns all <<uploading-data-with-index-handlers.adoc#nested-child-documents,descendant documents>> of each parent document matching your query in a flat list nested inside the matching parent document. This is useful when you have indexed nested child documents and want to retrieve the child documents for the relevant parent documents for any type of search query.
 
 [source,plain]
 ----
 fl=id,[child parentFilter=doc_type:book childFilter=doc_type:chapter limit=100]
 ----
 
-Note that this transformer can be used even though the query itself is not a <<other-parsers.adoc#OtherParsers-BlockJoinQueryParsers,Block Join query>>.
+Note that this transformer can be used even though the query itself is not a <<other-parsers.adoc#block-join-query-parsers,Block Join query>>.
 
 When using this transformer, the `parentFilter` parameter must be specified, and works the same as in all Block Join Queries. Additional optional parameters are:
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/c6771499/solr/solr-ref-guide/src/update-request-processors.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/update-request-processors.adoc b/solr/solr-ref-guide/src/update-request-processors.adoc
index cbc6013..a11d74a 100644
--- a/solr/solr-ref-guide/src/update-request-processors.adoc
+++ b/solr/solr-ref-guide/src/update-request-processors.adoc
@@ -142,10 +142,10 @@ However executing a processor only on the forwarding nodes is a great way of dis
 .Custom update chain post-processors may never be invoked on a recovering replica
 [WARNING]
 ====
-While a replica is in <<read-and-write-side-fault-tolerance.adoc#ReadandWriteSideFaultTolerance-WriteSideFaultTolerance,recovery>>, inbound update requests are buffered to the transaction log. After recovery has completed successfully, those buffered update requests are replayed. As of this writing, however, custom update chain post-processors are never invoked for buffered update requests. See https://issues.apache.org/jira/browse/SOLR-8030[SOLR-8030]. To work around this problem until SOLR-8030 has been fixed, *avoid specifying post-processors in custom update chains*.
+While a replica is in <<read-and-write-side-fault-tolerance.adoc#write-side-fault-tolerance,recovery>>, inbound update requests are buffered to the transaction log. After recovery has completed successfully, those buffered update requests are replayed. As of this writing, however, custom update chain post-processors are never invoked for buffered update requests. See https://issues.apache.org/jira/browse/SOLR-8030[SOLR-8030]. To work around this problem until SOLR-8030 has been fixed, *avoid specifying post-processors in custom update chains*.
 ====
 
-=== Atomic Updates
+=== Atomic Update Processor Factory
 
 If the `AtomicUpdateProcessorFactory` is in the update chain before the `DistributedUpdateProcessor`, the incoming document to the chain will be a partial document.
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/c6771499/solr/solr-ref-guide/src/updatehandlers-in-solrconfig.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/updatehandlers-in-solrconfig.adoc b/solr/solr-ref-guide/src/updatehandlers-in-solrconfig.adoc
index 040da86..43314574 100644
--- a/solr/solr-ref-guide/src/updatehandlers-in-solrconfig.adoc
+++ b/solr/solr-ref-guide/src/updatehandlers-in-solrconfig.adoc
@@ -27,12 +27,10 @@ The settings in this section are configured in the `<updateHandler>` element in
 </updateHandler>
 ----
 
-[[UpdateHandlersinSolrConfig-Commits]]
 == Commits
 
 Data sent to Solr is not searchable until it has been _committed_ to the index. The reason for this is that in some cases commits can be slow and they should be done in isolation from other possible commit requests to avoid overwriting data. So, it's preferable to provide control over when data is committed. Several options are available to control the timing of commits.
 
-[[UpdateHandlersinSolrConfig-commitandsoftCommit]]
 === commit and softCommit
 
 In Solr, a `commit` is an action which asks Solr to "commit" pending changes to the Lucene index files. By default commit actions result in a "hard commit" of all the Lucene index files to stable storage (disk). When a client includes a `commit=true` parameter with an update request, this ensures that all index segments affected by the adds & deletes on an update are written to disk as soon as index updates are completed.
@@ -41,7 +39,6 @@ If an additional flag `softCommit=true` is specified, then Solr performs a 'soft
 
 For more information about Near Real Time operations, see <<near-real-time-searching.adoc#near-real-time-searching,Near Real Time Searching>>.
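As a sketch, explicit commits can be issued with the update request itself; this assumes the techproducts example running on localhost.

```shell
# Hard commit: flush affected index segments to stable storage.
curl 'http://localhost:8983/solr/techproducts/update?commit=true'

# Soft commit: make changes visible to searchers without flushing
# all segments to disk.
curl 'http://localhost:8983/solr/techproducts/update?commit=true&softCommit=true'
```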
 
-[[UpdateHandlersinSolrConfig-autoCommit]]
 === autoCommit
 
 These settings control how often pending updates will be automatically pushed to the index. An alternative to `autoCommit` is to use `commitWithin`, which can be defined when making the update request to Solr (i.e., when pushing documents), or in an update RequestHandler.
@@ -77,7 +74,6 @@ You can also specify 'soft' autoCommits in the same way that you can specify 'so
 </autoSoftCommit>
 ----
 
-[[UpdateHandlersinSolrConfig-commitWithin]]
 === commitWithin
 
 The `commitWithin` settings allow forcing document commits to happen in a defined time period. This is used most frequently with <<near-real-time-searching.adoc#near-real-time-searching,Near Real Time Searching>>, and for that reason the default is to perform a soft commit. This does not, however, replicate new documents to slave servers in a master/slave environment. If that's a requirement for your implementation, you can force a hard commit by adding a parameter, as in this example:
@@ -91,7 +87,6 @@ The `commitWithin` settings allow forcing document commits to happen in a define
 
 With this configuration, when you call `commitWithin` as part of your update message, it will automatically perform a hard commit every time.
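Clients can also pass `commitWithin` per request rather than configuring it. A hedged sketch, assuming a local techproducts collection (the document id is made up):

```shell
# Ask Solr to commit within 10 seconds (10000 ms) of this update.
curl 'http://localhost:8983/solr/techproducts/update?commitWithin=10000' \
  -H 'Content-Type: application/json' \
  --data-binary '[{"id":"mydoc","name":"example doc"}]'
```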
 
-[[UpdateHandlersinSolrConfig-EventListeners]]
 == Event Listeners
 
 The UpdateHandler section is also where update-related event listeners can be configured. These can be triggered to occur after any commit (`event="postCommit"`) or only after optimize commands (`event="postOptimize"`).
@@ -113,7 +108,6 @@ Any arguments to pass to the program. The default is none.
 `env`::
 Any environment variables to set. The default is none.
 
-[[UpdateHandlersinSolrConfig-TransactionLog]]
 == Transaction Log
 
 As described in the section <<realtime-get.adoc#realtime-get,RealTime Get>>, a transaction log is required for that feature. It is configured in the `updateHandler` section of `solrconfig.xml`.
@@ -127,7 +121,7 @@ Realtime Get currently relies on the update log feature, which is enabled by def
 </updateLog>
 ----
 
-Three additional expert-level configuration settings affect indexing performance and how far a replica can fall behind on updates before it must enter into full recovery - see the section on <<read-and-write-side-fault-tolerance.adoc#ReadandWriteSideFaultTolerance-WriteSideFaultTolerance,write side fault tolerance>> for more information:
+Three additional expert-level configuration settings affect indexing performance and how far a replica can fall behind on updates before it must enter into full recovery - see the section on <<read-and-write-side-fault-tolerance.adoc#write-side-fault-tolerance,write side fault tolerance>> for more information:
 
 `numRecordsToKeep`::
 The number of update records to keep per log. The default is `100`.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/c6771499/solr/solr-ref-guide/src/updating-parts-of-documents.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/updating-parts-of-documents.adoc b/solr/solr-ref-guide/src/updating-parts-of-documents.adoc
index 5ff8a28..e6b5175 100644
--- a/solr/solr-ref-guide/src/updating-parts-of-documents.adoc
+++ b/solr/solr-ref-guide/src/updating-parts-of-documents.adoc
@@ -20,15 +20,14 @@
 
 Once you have indexed the content you need in your Solr index, you will want to start thinking about your strategy for dealing with changes to those documents. Solr supports three approaches to updating documents that have only partially changed.
 
-The first is __<<UpdatingPartsofDocuments-AtomicUpdates,atomic updates>>__. This approach allows changing only one or more fields of a document without having to re-index the entire document.
+The first is _<<Atomic Updates,atomic updates>>_. This approach allows changing only one or more fields of a document without having to re-index the entire document.
 
-The second approach is known as __<<UpdatingPartsofDocuments-In-PlaceUpdates,in-place updates>>__. This approach is similar to atomic updates (is a subset of atomic updates in some sense), but can be used only for updating single valued non-indexed and non-stored docValue-based numeric fields.
+The second approach is known as _<<In-Place Updates,in-place updates>>_. This approach is similar to atomic updates (is a subset of atomic updates in some sense), but can be used only for updating single valued non-indexed and non-stored docValue-based numeric fields.
 
-The third approach is known as _<<UpdatingPartsofDocuments-OptimisticConcurrency,optimistic concurrency>>_ or __optimistic locking__. It is a feature of many NoSQL databases, and allows conditional updating a document based on its version. This approach includes semantics and rules for how to deal with version matches or mis-matches.
+The third approach is known as _<<Optimistic Concurrency,optimistic concurrency>>_ or _optimistic locking_. It is a feature of many NoSQL databases, and allows conditional updating a document based on its version. This approach includes semantics and rules for how to deal with version matches or mis-matches.
 
 Atomic Updates (and in-place updates) and Optimistic Concurrency may be used as independent strategies for managing changes to documents, or they may be combined: you can use optimistic concurrency to conditionally apply an atomic update.
 
-[[UpdatingPartsofDocuments-AtomicUpdates]]
 == Atomic Updates
 
 Solr supports several modifiers that atomically update values of a document. This allows updating only specific fields, which can help speed up indexing in environments where the speed of index additions is critical to the application.
@@ -52,7 +51,6 @@ Removes all occurrences of the specified regex from a multiValued field. May be
 `inc`::
 Increments a numeric value by a specific amount. Must be specified as a single numeric value.
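The modifiers above are sent as nested objects in place of plain field values. A sketch using `set`, `add`, and `inc`, assuming a local techproducts collection (the document id is made up):

```shell
# Atomically set price, add a category, and bump a counter on one document.
curl -X POST -H 'Content-Type: application/json' \
  'http://localhost:8983/solr/techproducts/update?commit=true' \
  --data-binary '[{"id":"mydoc",
                   "price":{"set":99.99},
                   "cat":{"add":"electronics"},
                   "popularity":{"inc":1}}]'
```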
 
-[[UpdatingPartsofDocuments-FieldStorage]]
 === Field Storage
 
 The core functionality of atomically updating a document requires that all fields in your schema must be configured as stored (`stored="true"`) or docValues (`docValues="true"`) except for fields which are `<copyField/>` destinations, which must be configured as `stored="false"`. Atomic updates are applied to the document represented by the existing stored field values. All data in copyField destination fields must originate from ONLY copyField sources.
@@ -61,8 +59,7 @@ If `<copyField/>` destinations are configured as stored, then Solr will attempt
 
 There are other kinds of derived fields that must also be set so they aren't stored. Some spatial field types, such as `solr.BBoxField` and `solr.LatLonType`, use derived fields. `CurrencyFieldType` also uses derived fields. These types create additional fields which are normally specified by a dynamic field definition. That dynamic field definition must not be stored, or indexing will fail.
 
-[[UpdatingPartsofDocuments-Example]]
-=== Example
+=== Example Updating Part of a Document
 
 If the following document exists in our collection:
 
@@ -102,7 +99,6 @@ The resulting document in our collection will be:
 }
 ----
 
-[[UpdatingPartsofDocuments-In-PlaceUpdates]]
 == In-Place Updates
 
 In-place updates are very similar to atomic updates; in some sense, this is a subset of atomic updates. In regular atomic updates, the entire document is re-indexed internally during the application of the update. However, in this approach, only the fields to be updated are affected and the rest of the document is not re-indexed internally. Hence, the efficiency of updating in-place is unaffected by the size of the documents that are updated (i.e., number of fields, size of fields, etc.). Apart from these internal differences, there is no functional difference between atomic updates and in-place updates.
@@ -121,8 +117,7 @@ Set or replace the field value(s) with the specified value(s). May be specified
 `inc`::
 Increments a numeric value by a specific amount. Must be specified as a single numeric value.
 
-[[UpdatingPartsofDocuments-Example.1]]
-=== Example
+=== In-Place Update Example
 
 If the price and popularity fields are defined in the schema as:
 
@@ -169,17 +164,16 @@ The resulting document in our collection will be:
 }
 ----
 
-[[UpdatingPartsofDocuments-OptimisticConcurrency]]
 == Optimistic Concurrency
 
 Optimistic Concurrency is a feature of Solr that can be used by client applications which update/replace documents to ensure that the document they are replacing/updating has not been concurrently modified by another client application. This feature works by requiring a `\_version_` field on all documents in the index, and comparing that to a `\_version_` specified as part of the update command. By default, Solr's Schema includes a `\_version_` field, and this field is automatically added to each new document.
 
 In general, using optimistic concurrency involves the following work flow:
 
-1.  A client reads a document. In Solr, one might retrieve the document with the `/get` handler to be sure to have the latest version.
-2.  A client changes the document locally.
-3.  The client resubmits the changed document to Solr, for example, perhaps with the `/update` handler.
-4.  If there is a version conflict (HTTP error code 409), the client starts the process over.
+. A client reads a document. In Solr, one might retrieve the document with the `/get` handler to be sure to have the latest version.
+. A client changes the document locally.
+. The client resubmits the changed document to Solr, for example with the `/update` handler.
+. If there is a version conflict (HTTP error code 409), the client starts the process over.
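The four-step workflow above can be sketched as a read-modify-write retry loop. Here `get_doc` and `update_doc` are hypothetical stand-ins for calls to the `/get` and `/update` handlers, not real client-library APIs:

```python
def update_with_retry(doc_id, change, get_doc, update_doc, max_retries=5):
    """Optimistic-concurrency loop that retries on a 409 version conflict.

    get_doc(doc_id) -> dict including the current _version_ value.
    update_doc(doc) -> HTTP status code (409 on version conflict).
    """
    for _ in range(max_retries):
        doc = get_doc(doc_id)        # 1. read the latest version via /get
        doc.update(change)           # 2. change the document locally
        status = update_doc(doc)     # 3. resubmit with _version_ included
        if status != 409:            # 4. on conflict, start over
            return status
    raise RuntimeError("too many version conflicts")
```

On each retry the document is re-read, so the resubmitted update always carries the latest `\_version_` seen by this client.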
 
 When the client resubmits a changed document to Solr, the `\_version_` can be included with the update to invoke optimistic concurrency control. Specific semantics are used to define when the document should be updated or when to report a conflict.
 
@@ -233,7 +227,6 @@ $ curl 'http://localhost:8983/solr/techproducts/query?q=*:*&fl=id,_version_'
 
 For more information, please also see https://www.youtube.com/watch?v=WYVM6Wz-XTw[Yonik Seeley's presentation on NoSQL features in Solr 4] from Apache Lucene EuroCon 2012.
 
-[[UpdatingPartsofDocuments-DocumentCentricVersioningConstraints]]
 == Document Centric Versioning Constraints
 
 Optimistic Concurrency is extremely powerful, and works very efficiently because it uses internally assigned, globally unique values for the `\_version_` field. However, in some situations users may want to configure their own document-specific version field, where the version values are assigned on a per-document basis by an external system, and have Solr reject updates that attempt to replace a document with an "older" version. In situations like this, the {solr-javadocs}/solr-core/org/apache/solr/update/processor/DocBasedVersionConstraintsProcessorFactory.html[`DocBasedVersionConstraintsProcessorFactory`] can be useful.
@@ -252,9 +245,7 @@ Once configured, this update processor will reject (HTTP error code 409) any att
 .versionField vs `\_version_`
 [IMPORTANT]
 ====
-
 The `\_version_` field used by Solr for its normal optimistic concurrency also has important semantics in how updates are distributed to replicas in SolrCloud, and *MUST* be assigned internally by Solr. Users cannot re-purpose that field and specify it as the `versionField` for use in the `DocBasedVersionConstraintsProcessorFactory` configuration.
-
 ====
 
 `DocBasedVersionConstraintsProcessorFactory` supports two additional configuration params which are optional:

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/c6771499/solr/solr-ref-guide/src/upgrading-solr.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/upgrading-solr.adoc b/solr/solr-ref-guide/src/upgrading-solr.adoc
index 6a60b8d..a1db074 100644
--- a/solr/solr-ref-guide/src/upgrading-solr.adoc
+++ b/solr/solr-ref-guide/src/upgrading-solr.adoc
@@ -45,7 +45,7 @@ If you are already using Solr 6.5, Solr 6.6 should not present any major problem
 ** The metrics "avgRequestsPerMinute", "5minRateRequestsPerMinute" and "15minRateRequestsPerMinute" have been replaced by corresponding per-second rates viz. "avgRequestsPerSecond", "5minRateRequestsPerSecond" and "15minRateRequestsPerSecond" for consistency with stats output in other parts of Solr.
 * A new highlighter named UnifiedHighlighter has been added. You are encouraged to try out the UnifiedHighlighter by setting `hl.method=unified` and to report feedback. It might become the default in 7.0. It's more efficient/faster than the other highlighters, especially compared to the original Highlighter. That said, some options aren't supported yet. It will get more features in time, especially with your input. See HighlightParams.java for a listing of highlight parameters annotated with which highlighters use them. `hl.useFastVectorHighlighter` is now considered deprecated in favor of `hl.method=fastVector`.
 * The <<query-settings-in-solrconfig.adoc#query-settings-in-solrconfig,`maxWarmingSearchers` parameter>> now defaults to 1, and more importantly commits will now block if this limit is exceeded instead of throwing an exception (a good thing). Consequently there is no longer a risk of overlapping commits. Nonetheless users should continue to avoid excessive committing. Users are advised to remove any pre-existing maxWarmingSearchers entries from their solrconfig.xml files.
-* The <<other-parsers.adoc#OtherParsers-ComplexPhraseQueryParser,Complex Phrase query parser>> now supports leading wildcards. Beware of its possible heaviness, users are encouraged to use ReversedWildcardFilter in index time analysis.
+* The <<other-parsers.adoc#complex-phrase-query-parser,Complex Phrase query parser>> now supports leading wildcards. Beware of its possible heaviness; users are encouraged to use ReversedWildcardFilter in index-time analysis.
 * The JMX metric "avgTimePerRequest" (and the corresponding metric in the metrics API for each handler) used to be a simple non-decaying average based on total cumulative time and the number of requests. The new Codahale Metrics implementation applies exponential decay to this value, which heavily biases the average towards the last 5 minutes.
 * Index-time boosts are now deprecated. As a replacement, index-time scoring factors should be indexed in a separate field and combined with the query score using a function query. These boosts will be removed in Solr 7.0.
 * Parallel SQL now uses Apache Calcite as its SQL framework. As part of this change the default aggregation mode has been changed to facet rather than map_reduce. There have also been changes to the SQL aggregate response and some SQL syntax changes. Consult the <<parallel-sql-interface.adoc#parallel-sql-interface,Parallel SQL Interface>> documentation for full details.
@@ -57,7 +57,7 @@ If you are already using Solr 6.5, Solr 6.6 should not present any major problem
 * `SolrClient.shutdown()` has been removed, use {solr-javadocs}/solr-solrj/org/apache/solr/client/solrj/SolrClient.html[`SolrClient.close()`] instead.
 * The deprecated `zkCredientialsProvider` element in `solrcloud` section of `solr.xml` is now removed. Use the correct spelling (<<zookeeper-access-control.adoc#zookeeper-access-control,`zkCredentialsProvider`>>) instead.
 * Internal/expert - `ResultContext` was significantly changed and expanded to allow for multiple full query results (`DocLists`) per Solr request. `TransformContext` was rendered redundant and was removed. See https://issues.apache.org/jira/browse/SOLR-7957[SOLR-7957] for details.
-* Several changes have been made regarding the "<<other-schema-elements.adoc#OtherSchemaElements-Similarity,`Similarity`>>" used in Solr, in order to provide better default behavior for new users. There are 3 key impacts of these changes on existing users who upgrade:
+* Several changes have been made regarding the "<<other-schema-elements.adoc#similarity,`Similarity`>>" used in Solr, in order to provide better default behavior for new users. There are 3 key impacts of these changes on existing users who upgrade:
 ** `DefaultSimilarityFactory` has been removed. If you currently have `DefaultSimilarityFactory` explicitly referenced in your `schema.xml`, edit your config to use the functionally identical `ClassicSimilarityFactory`. See https://issues.apache.org/jira/browse/SOLR-8239[SOLR-8239] for more details.
 ** The implicit default Similarity used when no `<similarity/>` is configured in `schema.xml` has been changed to `SchemaSimilarityFactory`. Users who wish to preserve back-compatible behavior should either explicitly configure `ClassicSimilarityFactory`, or ensure that the `luceneMatchVersion` for the collection is less than 6.0. See https://issues.apache.org/jira/browse/SOLR-8270[SOLR-8270] + https://issues.apache.org/jira/browse/SOLR-8271[SOLR-8271] for details.
 ** `SchemaSimilarityFactory` has been modified to use `BM25Similarity` as the default for `fieldTypes` that do not explicitly declare a Similarity. The legacy behavior of using `ClassicSimilarity` as the default will occur if the `luceneMatchVersion` for the collection is less than 6.0, or the `'defaultSimFromFieldType'` configuration option may be used to specify any default of your choosing. See https://issues.apache.org/jira/browse/SOLR-8261[SOLR-8261] + https://issues.apache.org/jira/browse/SOLR-8329[SOLR-8329] for more details.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/c6771499/solr/solr-ref-guide/src/uploading-data-with-index-handlers.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/uploading-data-with-index-handlers.adoc b/solr/solr-ref-guide/src/uploading-data-with-index-handlers.adoc
index a8c56bb..ff59d61 100644
--- a/solr/solr-ref-guide/src/uploading-data-with-index-handlers.adoc
+++ b/solr/solr-ref-guide/src/uploading-data-with-index-handlers.adoc
@@ -25,7 +25,6 @@ The recommended way to configure and use request handlers is with path based nam
 
 A single unified update request handler supports XML, CSV, JSON, and javabin update requests, delegating to the appropriate `ContentStreamLoader` based on the `Content-Type` of the <<content-streams.adoc#content-streams,ContentStream>>.
 
-[[UploadingDatawithIndexHandlers-UpdateRequestHandlerConfiguration]]
 == UpdateRequestHandler Configuration
 
 The default configuration file already includes the update request handler.
@@ -35,12 +34,10 @@ The default configuration file has the update request handler configured by defa
 <requestHandler name="/update" class="solr.UpdateRequestHandler" />
 ----
 
-[[UploadingDatawithIndexHandlers-XMLFormattedIndexUpdates]]
 == XML Formatted Index Updates
 
 Index update commands can be sent as XML messages to the update handler using `Content-type: application/xml` or `Content-type: text/xml`.
 
-[[UploadingDatawithIndexHandlers-AddingDocuments]]
 === Adding Documents
 
 The XML schema recognized by the update handler for adding documents is very straightforward:
@@ -84,11 +81,9 @@ If the document schema defines a unique key, then by default an `/update` operat
 
 If you have a unique key field, but you feel confident that you can safely bypass the uniqueness check (e.g., you build your indexes in batch, and your indexing code guarantees it never adds the same document more than once) you can specify the `overwrite="false"` option when adding your documents.
 
-[[UploadingDatawithIndexHandlers-XMLUpdateCommands]]
 === XML Update Commands
 
-[[UploadingDatawithIndexHandlers-CommitandOptimizeOperations]]
-==== Commit and Optimize Operations
+==== Commit and Optimize During Updates
 
 The `<commit>` operation writes all documents loaded since the last commit to one or more segment files on the disk. Before a commit has been issued, newly indexed content is not visible to searches. The commit operation opens a new searcher, and triggers any event listeners that have been configured.
 
@@ -114,7 +109,6 @@ Here are examples of <commit> and <optimize> using optional attributes:
 <optimize waitSearcher="false"/>
 ----
 
-[[UploadingDatawithIndexHandlers-DeleteOperations]]
 ==== Delete Operations
 
 Documents can be deleted from the index in two ways. "Delete by ID" deletes the document with the specified ID, and can be used only if a UniqueID field has been defined in the schema. "Delete by Query" deletes all documents matching a specified query, although `commitWithin` is ignored for a Delete by Query. A single delete message can contain multiple delete operations.
@@ -136,12 +130,10 @@ When using the Join query parser in a Delete By Query, you should use the `score
 
 ====
 
-[[UploadingDatawithIndexHandlers-RollbackOperations]]
 ==== Rollback Operations
 
 The rollback command rolls back all adds and deletes made to the index since the last commit. It neither calls any event listeners nor creates a new searcher. Its syntax is simple: `<rollback/>`.
 
-[[UploadingDatawithIndexHandlers-UsingcurltoPerformUpdates]]
 === Using curl to Perform Updates
 
 You can use the `curl` utility to perform any of the above commands, using its `--data-binary` option to append the XML message to the `curl` command, generating an HTTP POST request. For example:
@@ -168,7 +160,7 @@ For posting XML messages contained in a file, you can use the alternative form:
 curl http://localhost:8983/solr/my_collection/update -H "Content-Type: text/xml" --data-binary @myfile.xml
 ----
 
-Short requests can also be sent using a HTTP GET command, URL-encoding the request, as in the following. Note the escaping of "<" and ">":
+Short requests can also be sent using an HTTP GET command, if enabled in the <<requestdispatcher-in-solrconfig.adoc#requestparsers-element,RequestDispatcher in SolrConfig>> element, by URL-encoding the request, as in the following. Note the escaping of "<" and ">":
 
 [source,bash]
 ----
@@ -189,7 +181,6 @@ Responses from Solr take the form shown here:
 
 The status field will be non-zero in case of failure.
 
-[[UploadingDatawithIndexHandlers-UsingXSLTtoTransformXMLIndexUpdates]]
 === Using XSLT to Transform XML Index Updates
 
 The UpdateRequestHandler allows you to index any arbitrary XML using the `<tr>` parameter to apply an https://en.wikipedia.org/wiki/XSLT[XSL transformation]. You must have an XSLT stylesheet in the `conf/xslt` directory of your <<config-sets.adoc#config-sets,config set>> that can transform the incoming data to the expected `<add><doc/></add>` format, and use the `tr` parameter to specify the name of that stylesheet.
@@ -250,23 +241,20 @@ You can also use the stylesheet in `XsltUpdateRequestHandler` to transform an in
 curl "http://localhost:8983/solr/my_collection/update?commit=true&tr=updateXml.xsl" -H "Content-Type: text/xml" --data-binary @myexporteddata.xml
 ----
 
-[[UploadingDatawithIndexHandlers-JSONFormattedIndexUpdates]]
 == JSON Formatted Index Updates
 
 Solr can accept JSON that conforms to a defined structure, or can accept arbitrary JSON-formatted documents. If sending arbitrarily formatted JSON, there are some additional parameters that need to be sent with the update request, described below in the section <<transforming-and-indexing-custom-json.adoc#transforming-and-indexing-custom-json,Transforming and Indexing Custom JSON>>.
 
-[[UploadingDatawithIndexHandlers-Solr-StyleJSON]]
 === Solr-Style JSON
 
 JSON formatted update requests may be sent to Solr's `/update` handler using `Content-Type: application/json` or `Content-Type: text/json`.
 
 JSON formatted updates can take 3 basic forms, described in depth below:
 
-* <<UploadingDatawithIndexHandlers-AddingaSingleJSONDocument,A single document to add>>, expressed as a top level JSON Object. To differentiate this from a set of commands, the `json.command=false` request parameter is required.
-* <<UploadingDatawithIndexHandlers-AddingMultipleJSONDocuments,A list of documents to add>>, expressed as a top level JSON Array containing a JSON Object per document.
-* <<UploadingDatawithIndexHandlers-SendingJSONUpdateCommands,A sequence of update commands>>, expressed as a top level JSON Object (aka: Map).
+* <<Adding a Single JSON Document,A single document to add>>, expressed as a top level JSON Object. To differentiate this from a set of commands, the `json.command=false` request parameter is required.
+* <<Adding Multiple JSON Documents,A list of documents to add>>, expressed as a top level JSON Array containing a JSON Object per document.
+* <<Sending JSON Update Commands,A sequence of update commands>>, expressed as a top level JSON Object (aka: Map).
 
-[[UploadingDatawithIndexHandlers-AddingaSingleJSONDocument]]
 ==== Adding a Single JSON Document
 
 The simplest way to add Documents via JSON is to send each document individually as a JSON Object, using the `/update/json/docs` path:
@@ -280,7 +268,6 @@ curl -X POST -H 'Content-Type: application/json' 'http://localhost:8983/solr/my_
 }'
 ----
 
-[[UploadingDatawithIndexHandlers-AddingMultipleJSONDocuments]]
 ==== Adding Multiple JSON Documents
 
 Adding multiple documents at one time via JSON can be done with a JSON Array of JSON Objects, where each object represents a document:
@@ -307,7 +294,6 @@ A sample JSON file is provided at `example/exampledocs/books.json` and contains
 curl 'http://localhost:8983/solr/techproducts/update?commit=true' --data-binary @example/exampledocs/books.json -H 'Content-type:application/json'
 ----
 
-[[UploadingDatawithIndexHandlers-SendingJSONUpdateCommands]]
 ==== Sending JSON Update Commands
 
 In general, the JSON update syntax supports all of the update commands that the XML update handler supports, through a straightforward mapping. Multiple commands, adding and deleting documents, may be contained in one message:
@@ -377,7 +363,6 @@ You can also specify `\_version_` with each "delete":
 
 You can specify the version of deletes in the body of the update request as well.
 
-[[UploadingDatawithIndexHandlers-JSONUpdateConveniencePaths]]
 === JSON Update Convenience Paths
 
 In addition to the `/update` handler, there are a few additional JSON-specific request handler paths available by default in Solr that implicitly override the behavior of some request parameters:
@@ -395,13 +380,11 @@ In addition to the `/update` handler, there are a few additional JSON specific r
 
 The `/update/json` path may be useful for clients sending in JSON formatted update commands from applications where setting the Content-Type proves difficult, while the `/update/json/docs` path can be particularly convenient for clients that always want to send in documents – either individually or as a list – without needing to worry about the full JSON command syntax.
 
-[[UploadingDatawithIndexHandlers-CustomJSONDocuments]]
 === Custom JSON Documents
 
 Solr can support custom JSON. This is covered in the section <<transforming-and-indexing-custom-json.adoc#transforming-and-indexing-custom-json,Transforming and Indexing Custom JSON>>.
 
 
-[[UploadingDatawithIndexHandlers-CSVFormattedIndexUpdates]]
 == CSV Formatted Index Updates
 
 CSV formatted update requests may be sent to Solr's `/update` handler using `Content-Type: application/csv` or `Content-Type: text/csv`.
@@ -413,7 +396,6 @@ A sample CSV file is provided at `example/exampledocs/books.csv` that you can us
 curl 'http://localhost:8983/solr/my_collection/update?commit=true' --data-binary @example/exampledocs/books.csv -H 'Content-type:application/csv'
 ----
 
-[[UploadingDatawithIndexHandlers-CSVUpdateParameters]]
 === CSV Update Parameters
 
 The CSV handler allows the specification of many parameters in the URL in the form: `f._parameter_._optional_fieldname_=_value_`.
@@ -498,7 +480,6 @@ Add the given offset (as an integer) to the `rowid` before adding it to the docu
 +
 Example: `rowidOffset=10`
 
-[[UploadingDatawithIndexHandlers-IndexingTab-Delimitedfiles]]
 === Indexing Tab-Delimited files
 
 The same feature used to index CSV documents can also be easily used to index tab-delimited files (TSV files) and even handle backslash escaping rather than CSV encapsulation.
@@ -517,7 +498,6 @@ This file could then be imported into Solr by setting the `separator` to tab (%0
 curl 'http://localhost:8983/solr/my_collection/update/csv?commit=true&separator=%09&escape=%5c' --data-binary @/tmp/result.txt
 ----
 
-[[UploadingDatawithIndexHandlers-CSVUpdateConveniencePaths]]
 === CSV Update Convenience Paths
 
 In addition to the `/update` handler, there is an additional CSV-specific request handler path available by default in Solr that implicitly overrides the behavior of some request parameters:
@@ -530,16 +510,14 @@ In addition to the `/update` handler, there is an additional CSV specific reques
 
 The `/update/csv` path may be useful for clients sending in CSV formatted update commands from applications where setting the Content-Type proves difficult.
 
-[[UploadingDatawithIndexHandlers-NestedChildDocuments]]
 == Nested Child Documents
 
-Solr indexes nested documents in blocks as a way to model documents containing other documents, such as a blog post parent document and comments as child documents -- or products as parent documents and sizes, colors, or other variations as child documents. At query time, the <<other-parsers.adoc#OtherParsers-BlockJoinQueryParsers,Block Join Query Parsers>> can search these relationships. In terms of performance, indexing the relationships between documents may be more efficient than attempting to do joins only at query time, since the relationships are already stored in the index and do not need to be computed.
+Solr indexes nested documents in blocks as a way to model documents containing other documents, such as a blog post parent document and comments as child documents -- or products as parent documents and sizes, colors, or other variations as child documents. At query time, the <<other-parsers.adoc#block-join-query-parsers,Block Join Query Parsers>> can search these relationships. In terms of performance, indexing the relationships between documents may be more efficient than attempting to do joins only at query time, since the relationships are already stored in the index and do not need to be computed.
 
-Nested documents may be indexed via either the XML or JSON data syntax (or using <<using-solrj.adoc#using-solrj,SolrJ)>> - but regardless of syntax, you must include a field that identifies the parent document as a parent; it can be any field that suits this purpose, and it will be used as input for the <<other-parsers.adoc#OtherParsers-BlockJoinQueryParsers,block join query parsers>>.
+Nested documents may be indexed via either the XML or JSON data syntax (or using <<using-solrj.adoc#using-solrj,SolrJ>>), but regardless of syntax, you must include a field that identifies the parent document as a parent; it can be any field that suits this purpose, and it will be used as input for the <<other-parsers.adoc#block-join-query-parsers,block join query parsers>>.
 
 To support nested documents, the schema must include an indexed/non-stored field `\_root_`. The value of that field is populated automatically and is the same for all documents in the block, regardless of the inheritance depth.
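As a rough sketch, a nested document expressed in JSON has the following shape. The field names and values are illustrative; children sit under the special `\_childDocuments_` key, and the parent carries whatever field you choose to mark it as a parent:

```python
import json

# Shape of a nested (block) document in JSON. "content_type" is the
# illustrative parent-marker field; any field suited to the purpose
# works. Children are listed under the special _childDocuments_ key.
nested_doc = {
    "id": "1",
    "title": "A parent document",
    "content_type": "parentDocument",
    "_childDocuments_": [
        {"id": "2", "comments": "A child document"},
    ],
}

payload = json.dumps(nested_doc)
```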
 
-[[UploadingDatawithIndexHandlers-XMLExamples]]
 === XML Examples
 
 For example, here are two documents and their child documents:
@@ -570,7 +548,6 @@ For example, here are two documents and their child documents:
 
 In this example, we have indexed the parent documents with the field `content_type`, which has the value "parentDocument". We could have also used a boolean field, such as `isParent`, with a value of "true", or any other similar approach.
 
-[[UploadingDatawithIndexHandlers-JSONExamples]]
 === JSON Examples
 
 This example is equivalent to the XML example above; note the special `\_childDocuments_` key needed to indicate the nested documents in JSON.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/c6771499/solr/solr-ref-guide/src/using-javascript.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/using-javascript.adoc b/solr/solr-ref-guide/src/using-javascript.adoc
index 25aabf8..d2247fb 100644
--- a/solr/solr-ref-guide/src/using-javascript.adoc
+++ b/solr/solr-ref-guide/src/using-javascript.adoc
@@ -22,7 +22,7 @@ Using Solr from JavaScript clients is so straightforward that it deserves a spec
 
 HTTP requests can be sent to Solr using the standard `XMLHttpRequest` mechanism.
 
-Out of the box, Solr can send <<response-writers.adoc#ResponseWriters-JSONResponseWriter,JavaScript Object Notation (JSON) responses>>, which are easily interpreted in JavaScript. Just add `wt=json` to the request URL to have responses sent as JSON.
+Out of the box, Solr can send <<response-writers.adoc#json-response-writer,JavaScript Object Notation (JSON) responses>>, which are easily interpreted in JavaScript. Just add `wt=json` to the request URL to have responses sent as JSON.
 
 For more information and an excellent example, read the SolJSON page on the Solr Wiki:
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/c6771499/solr/solr-ref-guide/src/using-python.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/using-python.adoc b/solr/solr-ref-guide/src/using-python.adoc
index e790ec9..84a7b4c 100644
--- a/solr/solr-ref-guide/src/using-python.adoc
+++ b/solr/solr-ref-guide/src/using-python.adoc
@@ -18,7 +18,7 @@
 // specific language governing permissions and limitations
 // under the License.
 
-Solr includes an output format specifically for <<response-writers.adoc#ResponseWriters-PythonResponseWriter,Python>>, but <<response-writers.adoc#ResponseWriters-JSONResponseWriter,JSON output>> is a little more robust.
+Solr includes an output format specifically for <<response-writers.adoc#python-response-writer,Python>>, but <<response-writers.adoc#json-response-writer,JSON output>> is a little more robust.
 
 == Simple Python
 

