lucene-commits mailing list archives

From ctarg...@apache.org
Subject [10/10] lucene-solr:master: SOLR-11050: remove unneeded anchors for pages that have no incoming links from other pages
Date Thu, 13 Jul 2017 01:01:44 GMT
SOLR-11050: remove unneeded anchors for pages that have no incoming links from other pages


Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/74ab1616
Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/74ab1616
Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/74ab1616

Branch: refs/heads/master
Commit: 74ab16168c8a988e5190cc6e032039c43a262f0e
Parents: 47731ce
Author: Cassandra Targett <ctargett@apache.org>
Authored: Wed Jul 12 11:56:50 2017 -0500
Committer: Cassandra Targett <ctargett@apache.org>
Committed: Wed Jul 12 19:57:59 2017 -0500

----------------------------------------------------------------------
 solr/solr-ref-guide/src/about-this-guide.adoc   | 59 ++++++++++----------
 solr/solr-ref-guide/src/about-tokenizers.adoc   |  1 -
 ...adding-custom-plugins-in-solrcloud-mode.adoc |  9 ---
 solr/solr-ref-guide/src/analyzers.adoc          |  2 -
 .../src/basic-authentication-plugin.adoc        |  9 ---
 solr/solr-ref-guide/src/blob-store-api.adoc     |  3 -
 .../solr-ref-guide/src/charfilterfactories.adoc |  4 --
 .../src/collapse-and-expand-results.adoc        |  2 -
 .../src/command-line-utilities.adoc             | 24 +++-----
 .../solr-ref-guide/src/configuring-logging.adoc |  5 --
 ...adir-and-directoryfactory-in-solrconfig.adoc |  3 -
 solr/solr-ref-guide/src/dataimport-screen.adoc  |  1 -
 solr/solr-ref-guide/src/de-duplication.adoc     |  5 --
 solr/solr-ref-guide/src/defining-fields.adoc    |  5 +-
 .../detecting-languages-during-indexing.adoc    |  4 --
 .../src/distributed-requests.adoc               |  7 +--
 .../distributed-search-with-index-sharding.adoc |  7 +--
 solr/solr-ref-guide/src/docvalues.adoc          |  2 -
 solr/solr-ref-guide/src/enabling-ssl.adoc       | 20 +------
 .../src/exporting-result-sets.adoc              |  6 --
 solr/solr-ref-guide/src/faceting.adoc           |  2 +-
 .../field-type-definitions-and-properties.adoc  |  2 -
 .../src/getting-started-with-solrcloud.adoc     |  5 --
 .../src/hadoop-authentication-plugin.adoc       |  5 --
 solr/solr-ref-guide/src/highlighting.adoc       | 13 +----
 .../solr-ref-guide/src/how-solrcloud-works.adoc |  7 +--
 .../src/indexing-and-basic-data-operations.adoc |  1 -
 .../src/initparams-in-solrconfig.adoc           |  3 +-
 .../src/introduction-to-solr-indexing.adoc      |  2 -
 solr/solr-ref-guide/src/jvm-settings.adoc       |  3 -
 .../src/kerberos-authentication-plugin.adoc     | 19 +------
 .../src/local-parameters-in-queries.adoc        |  3 -
 solr/solr-ref-guide/src/logging.adoc            |  1 -
 solr/solr-ref-guide/src/managed-resources.adoc  | 10 +---
 .../src/mbean-request-handler.adoc              |  3 +-
 solr/solr-ref-guide/src/merging-indexes.adoc    |  2 -
 solr/solr-ref-guide/src/morelikethis.adoc       |  9 +--
 .../src/near-real-time-searching.adoc           | 10 +---
 solr/solr-ref-guide/src/post-tool.adoc          |  4 +-
 .../src/request-parameters-api.adoc             | 15 +----
 solr/solr-ref-guide/src/result-clustering.adoc  | 36 ++++--------
 solr/solr-ref-guide/src/result-grouping.adoc    | 15 ++---
 .../src/rule-based-authorization-plugin.adoc    | 10 +---
 .../src/rule-based-replica-placement.adoc       | 24 +-------
 ...schema-factory-definition-in-solrconfig.adoc |  4 --
 solr/solr-ref-guide/src/schemaless-mode.adoc    | 17 ++----
 ...tting-up-an-external-zookeeper-ensemble.adoc | 21 ++-----
 .../src/solr-jdbc-apache-zeppelin.adoc          |  3 -
 .../src/solr-jdbc-dbvisualizer.adoc             | 15 +----
 solr/solr-ref-guide/src/spatial-search.adoc     | 23 +-------
 solr/solr-ref-guide/src/suggester.adoc          | 36 ++----------
 .../src/the-query-elevation-component.adoc      | 11 +---
 .../solr-ref-guide/src/the-stats-component.adoc | 16 ++----
 .../src/the-term-vector-component.adoc          |  8 +--
 .../src/the-well-configured-solr-instance.adoc  |  2 -
 .../transforming-and-indexing-custom-json.adoc  | 52 ++++++++++-------
 .../src/transforming-result-documents.adoc      | 21 +------
 solr/solr-ref-guide/src/uima-integration.adoc   |  2 -
 ...anding-analyzers-tokenizers-and-filters.adoc |  4 --
 .../src/update-request-processors.adoc          | 21 +------
 .../src/upgrading-a-solr-cluster.adoc           |  8 ---
 solr/solr-ref-guide/src/upgrading-solr.adoc     |  4 --
 ...g-data-with-solr-cell-using-apache-tika.adoc | 48 ++--------------
 .../solr-ref-guide/src/using-jmx-with-solr.adoc | 10 +---
 solr/solr-ref-guide/src/using-python.adoc       |  2 -
 solr/solr-ref-guide/src/using-solrj.adoc        |  8 ---
 ...zookeeper-to-manage-configuration-files.adoc | 23 +++-----
 solr/solr-ref-guide/src/v2-api.adoc             |  5 --
 .../src/velocity-response-writer.adoc           |  3 -
 ...king-with-currencies-and-exchange-rates.adoc | 21 +++----
 .../src/working-with-enum-fields.adoc           | 10 +---
 .../src/zookeeper-access-control.adoc           |  8 ---
 72 files changed, 166 insertions(+), 622 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/74ab1616/solr/solr-ref-guide/src/about-this-guide.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/about-this-guide.adoc b/solr/solr-ref-guide/src/about-this-guide.adoc
index 2168c1c..3a44ab0 100644
--- a/solr/solr-ref-guide/src/about-this-guide.adoc
+++ b/solr/solr-ref-guide/src/about-this-guide.adoc
@@ -1,6 +1,7 @@
 = About This Guide
 :page-shortname: about-this-guide
 :page-permalink: about-this-guide.html
+:page-toc: false
 // Licensed to the Apache Software Foundation (ASF) under one
 // or more contributor license agreements.  See the NOTICE file
 // distributed with this work for additional information
@@ -26,48 +27,48 @@ Designed to provide high-level documentation, this guide is intended to be more
 
 The material as presented assumes that you are familiar with some basic search concepts and that you can read XML. It does not assume that you are a Java programmer, although knowledge of Java is helpful when working directly with Lucene or when developing custom extensions to a Lucene/Solr installation.
 
-[[AboutThisGuide-SpecialInlineNotes]]
-== Special Inline Notes
+== Hosts and Port Examples
 
-Special notes are included throughout these pages. There are several types of notes:
+The default port when running Solr is 8983. The samples, URLs and screenshots in this guide may show different ports, because the port number that Solr uses is configurable.
 
-Information blocks::
-+
-NOTE: These provide additional information that's useful for you to know.
+If you have not customized your installation of Solr, please make sure that you use port 8983 when following the examples, or configure your own installation to use the port numbers shown in the examples. For information about configuring port numbers, see the section <<managing-solr.adoc#managing-solr,Managing Solr>>.
 
-Important::
-+
-IMPORTANT: These provide information that is critical for you to know.
+Similarly, URL examples use `localhost` throughout; if you are accessing Solr from a location remote to the server hosting Solr, replace `localhost` with the proper domain or IP where Solr is running.
 
-Tip::
-+
-TIP: These provide helpful tips.
+For example, we might provide a sample query like:
 
-Caution::
-+
-CAUTION: These provide details on scenarios or configurations you should be careful with.
+`\http://localhost:8983/solr/gettingstarted/select?q=brown+cow`
 
-Warning::
-+
-WARNING: These are meant to warn you from a possibly dangerous change or action.
+There are several items in this URL you might need to change locally. First, if your server is running at "www.example.com", you'll replace "localhost" with the proper domain. If you aren't using port 8983, you'll replace that also. Finally, you'll want to replace "gettingstarted" (the collection or core name) with the proper one in use in your implementation. The URL would then become:
 
+`\http://www.example.com/solr/mycollection/select?q=brown+cow`
 
-[[AboutThisGuide-HostsandPortExamples]]
-== Hosts and Port Examples
+== Paths
 
-The default port when running Solr is 8983. The samples, URLs and screenshots in this guide may show different ports, because the port number that Solr uses is configurable. If you have not customized your installation of Solr, please make sure that you use port 8983 when following the examples, or configure your own installation to use the port numbers shown in the examples. For information about configuring port numbers, see the section <<managing-solr.adoc#managing-solr,Managing Solr>>.
+Path information is given relative to `solr.home`, which is the location under the main Solr installation where Solr's collections and their `conf` and `data` directories are stored.
 
-Similarly, URL examples use 'localhost' throughout; if you are accessing Solr from a location remote to the server hosting Solr, replace 'localhost' with the proper domain or IP where Solr is running.
+When running the various examples mentioned throughout this tutorial (e.g., `bin/solr -e techproducts`), the `solr.home` will be a sub-directory of `example/` created for you automatically.
 
-For example, we might provide a sample query like:
+== Special Inline Notes
 
-`\http://localhost:8983/solr/gettingstarted/select?q=brown+cow`
+Special notes are included throughout these pages. There are several types of notes:
 
-There are several items in this URL you might need to change locally. First, if your server is running at "www.example.com", you'll replace "localhost" with the proper domain. If you aren't using port 8983, you'll replace that also. Finally, you'll want to replace "gettingstarted" (the collection or core name) with the proper one in use in your implementation. The URL would then become:
+=== Information blocks
 
-`\http://www.example.com/solr/mycollection/select?q=brown+cow`
+NOTE: These provide additional information that's useful for you to know.
 
-[[AboutThisGuide-Paths]]
-== Paths
+=== Important
+
+IMPORTANT: These provide information that is critical for you to know.
 
-Path information is given relative to `solr.home`, which is the location under the main Solr installation where Solr's collections and their `conf` and `data` directories are stored. When running the various examples mentioned through out this tutorial (i.e., `bin/solr -e techproducts`) the `solr.home` will be a sub-directory of `example/` created for you automatically.
+=== Tip
+
+TIP: These provide helpful tips.
+
+=== Caution
+
+CAUTION: These provide details on scenarios or configurations you should be careful with.
+
+=== Warning
+
+WARNING: These are meant to warn you from a possibly dangerous change or action.
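The host/port/collection substitutions described in the "Hosts and Port Examples" section above can be sketched as a small shell snippet. This is purely illustrative; the host, port, and collection values are the hypothetical ones from the section's own example:

```shell
# Hypothetical deployment values; substitute your own host, port, and collection name.
SOLR_HOST="www.example.com"
SOLR_PORT="8983"
COLLECTION="mycollection"

# Assemble the query URL the same way the examples above do.
URL="http://${SOLR_HOST}:${SOLR_PORT}/solr/${COLLECTION}/select?q=brown+cow"
echo "$URL"
```

Parameterizing the URL this way makes it easy to keep sample queries working across differently configured installations.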

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/74ab1616/solr/solr-ref-guide/src/about-tokenizers.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/about-tokenizers.adoc b/solr/solr-ref-guide/src/about-tokenizers.adoc
index 5bee36c..06227b4 100644
--- a/solr/solr-ref-guide/src/about-tokenizers.adoc
+++ b/solr/solr-ref-guide/src/about-tokenizers.adoc
@@ -37,7 +37,6 @@ A `TypeTokenFilterFactory` is available that creates a `TypeTokenFilter` that fi
 
 For a complete list of the available TokenFilters, see the section <<tokenizers.adoc#tokenizers,Tokenizers>>.
 
-[[AboutTokenizers-WhenTouseaCharFiltervs.aTokenFilter]]
 == When to Use a CharFilter vs. a TokenFilter
 
 There are several pairs of CharFilters and TokenFilters that have related (e.g., `MappingCharFilter` and `ASCIIFoldingFilter`) or nearly identical (e.g., `PatternReplaceCharFilterFactory` and `PatternReplaceFilterFactory`) functionality, and it may not always be obvious which is the best choice.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/74ab1616/solr/solr-ref-guide/src/adding-custom-plugins-in-solrcloud-mode.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/adding-custom-plugins-in-solrcloud-mode.adoc b/solr/solr-ref-guide/src/adding-custom-plugins-in-solrcloud-mode.adoc
index f9277f0..6e9864e 100644
--- a/solr/solr-ref-guide/src/adding-custom-plugins-in-solrcloud-mode.adoc
+++ b/solr/solr-ref-guide/src/adding-custom-plugins-in-solrcloud-mode.adoc
@@ -30,12 +30,10 @@ In addition to requiring that Solr by running in <<solrcloud.adoc#solrcloud,Solr
 Before enabling this feature, users should carefully consider the issues discussed in the <<Securing Runtime Libraries>> section below.
 ====
 
-[[AddingCustomPluginsinSolrCloudMode-UploadingJarFiles]]
 == Uploading Jar Files
 
 The first step is to use the <<blob-store-api.adoc#blob-store-api,Blob Store API>> to upload your jar files. This will put your jars in the `.system` collection and distribute them across your SolrCloud nodes. These jars are added to a separate classloader and are only accessible to components that are configured with the property `runtimeLib=true`. These components are loaded lazily because the `.system` collection may not be loaded when a particular core is loaded.
 
-[[AddingCustomPluginsinSolrCloudMode-ConfigAPICommandstouseJarsasRuntimeLibraries]]
 == Config API Commands to use Jars as Runtime Libraries
 
 The runtime library feature uses a special set of commands for the <<config-api.adoc#config-api,Config API>> to add, update, or remove jar files currently available in the blob store to the list of runtime libraries.
@@ -74,14 +72,12 @@ curl http://localhost:8983/solr/techproducts/config -H 'Content-type:application
 }'
 ----
 
-[[AddingCustomPluginsinSolrCloudMode-SecuringRuntimeLibraries]]
 == Securing Runtime Libraries
 
 A drawback of this feature is that it could be used to load malicious executable code into the system. However, it is possible to restrict the system to load only trusted jars using http://en.wikipedia.org/wiki/Public_key_infrastructure[PKI] to verify that the executables loaded into the system are trustworthy.
 
 The following steps will allow you to enable security for this feature. The instructions assume you have started all your Solr nodes with the `-Denable.runtime.lib=true` option.
 
-[[Step1_GenerateanRSAPrivateKey]]
 === Step 1: Generate an RSA Private Key
 
 The first step is to generate an RSA private key. The example below uses a 512-bit key, but you should use the strength appropriate to your needs.
@@ -91,7 +87,6 @@ The first step is to generate an RSA private key. The example below uses a 512-b
 $ openssl genrsa -out priv_key.pem 512
 ----
 
-[[Step2_OutputthePublicKey]]
 === Step 2: Output the Public Key
 
 The public portion of the key should be output in DER format so Java can read it.
@@ -101,7 +96,6 @@ The public portion of the key should be output in DER format so Java can read it
 $ openssl rsa -in priv_key.pem -pubout -outform DER -out pub_key.der
 ----
 
-[[Step3_LoadtheKeytoZooKeeper]]
 === Step 3: Load the Key to ZooKeeper
 
 The `.der` files that are output from Step 2 should then be loaded to ZooKeeper under a node `/keys/exe` so they are available to every node. You can load any number of public keys to that node and all are valid. If a key is removed from the directory, the signatures made with that key will cease to be valid. So, before removing a key, make sure to update your runtime library configurations with valid signatures, using the `update-runtimelib` command.
@@ -130,7 +124,6 @@ $ .bin/zkCli.sh -server localhost:9983
 
 After this, any attempt to load a jar will fail. All your jars must be signed with one of your private keys for Solr to trust them. The process to sign your jars and use the signature is outlined in Steps 4-6.
 
-[[Step4_SignthejarFile]]
 === Step 4: Sign the jar File
 
 Next you need to sign the sha1 digest of your jar file and get the base64 string.
@@ -142,7 +135,6 @@ $ openssl dgst -sha1 -sign priv_key.pem myjar.jar | openssl enc -base64
 
 The output of this step will be a string that you will need when adding the jar to your classpath in Step 6 below.
 
-[[Step5_LoadthejartotheBlobStore]]
 === Step 5: Load the jar to the Blob Store
 
 Load your jar to the Blob store, using the <<blob-store-api.adoc#blob-store-api,Blob Store API>>. This step does not require a signature; you will need the signature in Step 6 to add it to your classpath.
@@ -155,7 +147,6 @@ http://localhost:8983/solr/.system/blob/{blobname}
 
 The blob name that you give the jar file in this step will be used as the name in the next step.
 
-[[Step6_AddthejartotheClasspath]]
 === Step 6: Add the jar to the Classpath
 
 Finally, add the jar to the classpath using the Config API as detailed above. In this step, you will need to provide the signature of the jar that you got in Step 4.
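For orientation, Steps 1, 2, and 4 above can be strung together as a single shell sketch. This assumes `openssl` is available on the path; `myjar.jar` is a hypothetical stand-in for your real plugin jar, and the 512-bit key matches the illustrative strength used in Step 1:

```shell
# Step 1: generate an RSA private key (512-bit for illustration; use a stronger key in practice).
openssl genrsa -out priv_key.pem 512

# Step 2: output the public key in DER format so Java can read it.
# pub_key.der is what gets loaded to ZooKeeper under /keys/exe in Step 3.
openssl rsa -in priv_key.pem -pubout -outform DER -out pub_key.der

# Hypothetical stand-in jar so the sketch is self-contained.
echo "fake jar contents" > myjar.jar

# Step 4: sign the sha1 digest of the jar and base64-encode the signature.
# This base64 string is the "sig" value used when adding the jar in Step 6.
SIG=$(openssl dgst -sha1 -sign priv_key.pem myjar.jar | openssl enc -base64)
echo "$SIG"
```

Steps 3, 5, and 6 then proceed as documented above, using `pub_key.der` and the captured signature string.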

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/74ab1616/solr/solr-ref-guide/src/analyzers.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/analyzers.adoc b/solr/solr-ref-guide/src/analyzers.adoc
index c274f8e..ae1ae90 100644
--- a/solr/solr-ref-guide/src/analyzers.adoc
+++ b/solr/solr-ref-guide/src/analyzers.adoc
@@ -60,7 +60,6 @@ In this case, no Analyzer class was specified on the `<analyzer>` element. Rathe
 The output of an Analyzer affects the _terms_ indexed in a given field (and the terms used when parsing queries against those fields) but it has no impact on the _stored_ value for the fields. For example: an analyzer might split "Brown Cow" into two indexed terms "brown" and "cow", but the stored value will still be a single String: "Brown Cow"
 ====
 
-[[Analyzers-AnalysisPhases]]
 == Analysis Phases
 
 Analysis takes place in two contexts. At index time, when a field is being created, the token stream that results from analysis is added to an index and defines the set of terms (including positions, sizes, and so on) for the field. At query time, the values being searched for are analyzed and the terms that result are matched against those that are stored in the field's index.
@@ -89,7 +88,6 @@ In this theoretical example, at index time the text is tokenized, the tokens are
 
 At query time, the only normalization that happens is to convert the query terms to lowercase. The filtering and mapping steps that occur at index time are not applied to the query terms. Queries must then, in this example, be very precise, using only the normalized terms that were stored at index time.
 
-[[Analyzers-AnalysisforMulti-TermExpansion]]
 === Analysis for Multi-Term Expansion
 
 In some types of queries (e.g., Prefix, Wildcard, or Regex), the input provided by the user is not natural language intended for Analysis. Things like Synonyms or Stop word filtering do not work in a logical way in these types of Queries.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/74ab1616/solr/solr-ref-guide/src/basic-authentication-plugin.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/basic-authentication-plugin.adoc b/solr/solr-ref-guide/src/basic-authentication-plugin.adoc
index f728216..2a48d7c 100644
--- a/solr/solr-ref-guide/src/basic-authentication-plugin.adoc
+++ b/solr/solr-ref-guide/src/basic-authentication-plugin.adoc
@@ -22,7 +22,6 @@ Solr can support Basic authentication for users with the use of the BasicAuthPlu
 
 An authorization plugin is also available to configure Solr with permissions to perform various activities in the system. The authorization plugin is described in the section <<rule-based-authorization-plugin.adoc#rule-based-authorization-plugin,Rule-Based Authorization Plugin>>.
 
-[[BasicAuthenticationPlugin-EnableBasicAuthentication]]
 == Enable Basic Authentication
 
 To use Basic authentication, you must first create a `security.json` file. This file, and where to put it, is described in detail in the section <<authentication-and-authorization-plugins.adoc#AuthenticationandAuthorizationPlugins-EnablePluginswithsecurity.json,Enable Plugins with security.json>>.
@@ -68,7 +67,6 @@ If you are using SolrCloud, you must upload `security.json` to ZooKeeper. You ca
 bin/solr zk cp file:path_to_local_security.json zk:/security.json -z localhost:9983
 ----
 
-[[BasicAuthenticationPlugin-Caveats]]
 === Caveats
 
 There are a few things to keep in mind when using the Basic authentication plugin.
@@ -77,19 +75,16 @@ There are a few things to keep in mind when using the Basic authentication plugi
 * A user with write access to `security.json` will be able to modify all the permissions and how users have been assigned permissions. Special care should be taken to grant security-editing access only to appropriate users.
 * Your network should, of course, be secure. Even with Basic authentication enabled, you should not unnecessarily expose Solr to the outside world.
 
-[[BasicAuthenticationPlugin-EditingAuthenticationPluginConfiguration]]
 == Editing Authentication Plugin Configuration
 
 An Authentication API allows modifying user IDs and passwords. The API provides an endpoint with specific commands to set user details or delete a user.
 
-[[BasicAuthenticationPlugin-APIEntryPoint]]
 === API Entry Point
 
 `admin/authentication`
 
 This endpoint is not collection-specific, so users are created for the entire Solr cluster. If users need to be restricted to a specific collection, that can be done with the authorization rules.
 
-[[BasicAuthenticationPlugin-AddaUserorEditaPassword]]
 === Add a User or Edit a Password
 
 The `set-user` command allows you to add users and change their passwords. For example, the following defines two users and their passwords:
@@ -101,7 +96,6 @@ curl --user solr:SolrRocks http://localhost:8983/solr/admin/authentication -H 'C
                "harry":"HarrysSecret"}}'
 ----
 
-[[BasicAuthenticationPlugin-DeleteaUser]]
 === Delete a User
 
 The `delete-user` command allows you to remove a user. The user password does not need to be sent to remove a user. In the following example, we've asked that user IDs 'tom' and 'harry' be removed from the system.
@@ -112,7 +106,6 @@ curl --user solr:SolrRocks http://localhost:8983/solr/admin/authentication -H 'C
  "delete-user": ["tom","harry"]}'
 ----
 
-[[BasicAuthenticationPlugin-Setaproperty]]
 === Set a Property
 
 Set arbitrary properties for the authentication plugin. The only supported property is `blockUnknown`.
@@ -123,7 +116,6 @@ curl --user solr:SolrRocks http://localhost:8983/solr/admin/authentication -H 'C
  "set-property": {"blockUnknown":false}}'
 ----
 
-[[BasicAuthenticationPlugin-UsingBasicAuthwithSolrJ]]
 === Using BasicAuth with SolrJ
 
 In SolrJ, the basic authentication credentials need to be set for each request as in this example:
@@ -144,7 +136,6 @@ req.setBasicAuthCredentials(userName, password);
 QueryResponse rsp = req.process(solrClient);
 ----
 
-[[BasicAuthenticationPlugin-UsingCommandLinescriptswithBasicAuth]]
 === Using Command Line scripts with BasicAuth
 
 Add the following line to the `solr.in.sh` or `solr.in.cmd` file. This example tells the `bin/solr` command line to use "basic" as the type of authentication, and to pass credentials with the username "solr" and password "SolrRocks":

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/74ab1616/solr/solr-ref-guide/src/blob-store-api.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/blob-store-api.adoc b/solr/solr-ref-guide/src/blob-store-api.adoc
index 63297b9..267ed1d 100644
--- a/solr/solr-ref-guide/src/blob-store-api.adoc
+++ b/solr/solr-ref-guide/src/blob-store-api.adoc
@@ -28,7 +28,6 @@ When using the blob store, note that the API does not delete or overwrite a prev
 
 The blob store API is implemented as a requestHandler. A special collection named ".system" is used to store the blobs. This collection can be created in advance, but if it does not exist it will be created automatically.
 
-[[BlobStoreAPI-Aboutthe.systemCollection]]
 == About the .system Collection
 
 Before uploading blobs to the blob store, a special collection must be created and it must be named `.system`. Solr will automatically create this collection if it does not already exist, but you can also create it manually if you choose.
@@ -46,7 +45,6 @@ curl http://localhost:8983/solr/admin/collections?action=CREATE&name=.system&rep
 
 IMPORTANT: The `bin/solr` script cannot be used to create the `.system` collection.
 
-[[BlobStoreAPI-UploadFilestoBlobStore]]
 == Upload Files to Blob Store
 
 After the `.system` collection has been created, files can be uploaded to the blob store with a request similar to the following:
@@ -132,7 +130,6 @@ For the latest version of a blob, the \{version} can be omitted,
 curl http://localhost:8983/solr/.system/blob/{blobname}?wt=filestream > {outputfilename}
 ----
 
-[[BlobStoreAPI-UseaBlobinaHandlerorComponent]]
 == Use a Blob in a Handler or Component
 
 To use the blob as the class for a request handler or search component, you create a request handler in `solrconfig.xml` as usual. You will need to define the following parameters:

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/74ab1616/solr/solr-ref-guide/src/charfilterfactories.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/charfilterfactories.adoc b/solr/solr-ref-guide/src/charfilterfactories.adoc
index 6010a31..8f0dd0f 100644
--- a/solr/solr-ref-guide/src/charfilterfactories.adoc
+++ b/solr/solr-ref-guide/src/charfilterfactories.adoc
@@ -22,7 +22,6 @@ CharFilter is a component that pre-processes input characters.
 
 CharFilters can be chained like Token Filters and placed in front of a Tokenizer. CharFilters can add, change, or remove characters while preserving the original character offsets to support features like highlighting.
 
-[[CharFilterFactories-solr.MappingCharFilterFactory]]
 == solr.MappingCharFilterFactory
 
 This filter creates `org.apache.lucene.analysis.MappingCharFilter`, which can be used for changing one string to another (for example, for normalizing `é` to `e`).
@@ -65,7 +64,6 @@ Mapping file syntax:
 |===
 ** A backslash followed by any other character is interpreted as if the character were present without the backslash.
 
-[[CharFilterFactories-solr.HTMLStripCharFilterFactory]]
 == solr.HTMLStripCharFilterFactory
 
 This filter creates `org.apache.solr.analysis.HTMLStripCharFilter`. This CharFilter strips HTML from the input stream and passes the result to another CharFilter or a Tokenizer.
@@ -114,7 +112,6 @@ Example:
 </analyzer>
 ----
 
-[[CharFilterFactories-solr.ICUNormalizer2CharFilterFactory]]
 == solr.ICUNormalizer2CharFilterFactory
 
 This filter performs pre-tokenization Unicode normalization using http://site.icu-project.org[ICU4J].
@@ -138,7 +135,6 @@ Example:
 </analyzer>
 ----
 
-[[CharFilterFactories-solr.PatternReplaceCharFilterFactory]]
 == solr.PatternReplaceCharFilterFactory
 
 This filter uses http://www.regular-expressions.info/reference.html[regular expressions] to replace or change character patterns.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/74ab1616/solr/solr-ref-guide/src/collapse-and-expand-results.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/collapse-and-expand-results.adoc b/solr/solr-ref-guide/src/collapse-and-expand-results.adoc
index 106fd1c..3d610a9 100644
--- a/solr/solr-ref-guide/src/collapse-and-expand-results.adoc
+++ b/solr/solr-ref-guide/src/collapse-and-expand-results.adoc
@@ -27,7 +27,6 @@ The Collapsing query parser groups documents (collapsing the result set) accordi
 In order to use these features with SolrCloud, the documents must be located on the same shard. To ensure document co-location, you can define the `router.name` parameter as `compositeId` when creating the collection. For more information on this option, see the section <<shards-and-indexing-data-in-solrcloud.adoc#ShardsandIndexingDatainSolrCloud-DocumentRouting,Document Routing>>.
 ====
 
-[[CollapseandExpandResults-CollapsingQueryParser]]
 == Collapsing Query Parser
 
 The `CollapsingQParser` is really a _post filter_ that provides more performant field collapsing than Solr's standard approach when the number of distinct groups in the result set is high. This parser collapses the result set to a single document per group before it forwards the result set to the rest of the search components. So all downstream components (faceting, highlighting, etc...) will work with the collapsed result set.
@@ -121,7 +120,6 @@ fq={!collapse field=group_field hint=top_fc}
 
 The CollapsingQParserPlugin fully supports the QueryElevationComponent.
 
-[[CollapseandExpandResults-ExpandComponent]]
 == Expand Component
 
 The ExpandComponent can be used to expand the groups that were collapsed by the http://heliosearch.org/the-collapsingqparserplugin-solrs-new-high-performance-field-collapsing-postfilter/[CollapsingQParserPlugin].

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/74ab1616/solr/solr-ref-guide/src/command-line-utilities.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/command-line-utilities.adoc b/solr/solr-ref-guide/src/command-line-utilities.adoc
index e927f02..2c0d511 100644
--- a/solr/solr-ref-guide/src/command-line-utilities.adoc
+++ b/solr/solr-ref-guide/src/command-line-utilities.adoc
@@ -36,7 +36,6 @@ The `zkcli.sh` provided by Solr is not the same as the https://zookeeper.apache.
 ZooKeeper's `zkCli.sh` provides a completely general, application-agnostic shell for manipulating data in ZooKeeper. Solr's `zkcli.sh` – discussed in this section – is specific to Solr, and has command line arguments specific to dealing with Solr data in ZooKeeper.
 ====
 
-[[CommandLineUtilities-UsingSolr_sZooKeeperCLI]]
 == Using Solr's ZooKeeper CLI
 
 Use the `help` option to get a list of available commands from the script itself, as in `./server/scripts/cloud-scripts/zkcli.sh help`.
@@ -91,23 +90,20 @@ The short form parameter options may be specified with a single dash (eg: `-c my
 The long form parameter options may be specified using either a single dash (eg: `-collection mycollection`) or a double dash (eg: `--collection mycollection`)
 ====
 
-[[CommandLineUtilities-ZooKeeperCLIExamples]]
 == ZooKeeper CLI Examples
 
 Below are some examples of using the `zkcli.sh` CLI, which assume you have already started the SolrCloud example (`bin/solr -e cloud -noprompt`).
 
 If you are on a Windows machine, simply replace `zkcli.sh` with `zkcli.bat` in these examples.
 
-[[CommandLineUtilities-Uploadaconfigurationdirectory]]
-=== Upload a configuration directory
+=== Upload a Configuration Directory
 
 [source,bash]
 ----
 ./server/scripts/cloud-scripts/zkcli.sh -zkhost 127.0.0.1:9983 -cmd upconfig -confname my_new_config -confdir server/solr/configsets/_default/conf
 ----
 
-[[CommandLineUtilities-BootstrapZooKeeperfromexistingSOLR_HOME]]
-=== Bootstrap ZooKeeper from existing SOLR_HOME
+=== Bootstrap ZooKeeper from an Existing solr.home
 
 [source,bash]
 ----
@@ -120,32 +116,28 @@ If you are on Windows machine, simply replace `zkcli.sh` with `zkcli.bat` in the
 Using the bootstrap command with a ZooKeeper chroot in the `-zkhost` parameter, e.g. `-zkhost 127.0.0.1:2181/solr`, will automatically create the chroot path before uploading the configs.
 ====
 
-[[CommandLineUtilities-PutarbitrarydataintoanewZooKeeperfile]]
-=== Put arbitrary data into a new ZooKeeper file
+=== Put Arbitrary Data into a New ZooKeeper File
 
 [source,bash]
 ----
 ./server/scripts/cloud-scripts/zkcli.sh -zkhost 127.0.0.1:9983 -cmd put /my_zk_file.txt 'some data'
 ----
 
-[[CommandLineUtilities-PutalocalfileintoanewZooKeeperfile]]
-=== Put a local file into a new ZooKeeper file
+=== Put a Local File into a New ZooKeeper File
 
 [source,bash]
 ----
 ./server/scripts/cloud-scripts/zkcli.sh -zkhost 127.0.0.1:9983 -cmd putfile /my_zk_file.txt /tmp/my_local_file.txt
 ----
 
-[[CommandLineUtilities-Linkacollectiontoaconfigurationset]]
-=== Link a collection to a configuration set
+=== Link a Collection to a ConfigSet
 
 [source,bash]
 ----
 ./server/scripts/cloud-scripts/zkcli.sh -zkhost 127.0.0.1:9983 -cmd linkconfig -collection gettingstarted -confname my_new_config
 ----
 
-[[CommandLineUtilities-CreateanewZooKeeperpath]]
-=== Create a new ZooKeeper path
+=== Create a New ZooKeeper Path
 
 This can be useful to create a chroot path in ZooKeeper before the first cluster start.
 
@@ -154,9 +146,7 @@ This can be useful to create a chroot path in ZooKeeper before first cluster sta
 ./server/scripts/cloud-scripts/zkcli.sh -zkhost 127.0.0.1:2181 -cmd makepath /solr
 ----
 
-
-[[CommandLineUtilities-Setaclusterproperty]]
-=== Set a cluster property
+=== Set a Cluster Property
 
 This command will add or modify a single cluster property in `clusterprops.json`. Use this command instead of the usual getfile \-> edit \-> putfile cycle.
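A minimal sketch of such an invocation, built as a string so the pieces are easy to see. The property name and value (`urlScheme`/`https`) are assumptions for illustration, not a recommendation from this page:

```shell
# Illustrative only: assemble a zkcli.sh clusterprop invocation.
# The property name/value (urlScheme=https) are assumed examples.
ZKCLI=./server/scripts/cloud-scripts/zkcli.sh
CMD="$ZKCLI -zkhost 127.0.0.1:9983 -cmd clusterprop -name urlScheme -val https"
echo "$CMD"
```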
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/74ab1616/solr/solr-ref-guide/src/configuring-logging.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/configuring-logging.adoc b/solr/solr-ref-guide/src/configuring-logging.adoc
index 7e22f38..05a6c74 100644
--- a/solr/solr-ref-guide/src/configuring-logging.adoc
+++ b/solr/solr-ref-guide/src/configuring-logging.adoc
@@ -25,7 +25,6 @@ Solr logs are a key way to know what's happening in the system. There are severa
 In addition to the logging options described below, there is a way to configure which request parameters (such as parameters sent as part of queries) are logged with an additional request parameter called `logParamsList`. See the section on <<common-query-parameters.adoc#CommonQueryParameters-ThelogParamsListParameter,Common Query Parameters>> for more information.
 ====
 
-[[ConfiguringLogging-TemporaryLoggingSettings]]
 == Temporary Logging Settings
 
 You can control the amount of logging output in Solr by using the Admin Web interface. Select the *LOGGING* link. Note that this page only lets you change settings in the running system and is not saved for the next run. (For more information about the Admin Web interface, see <<using-the-solr-administration-user-interface.adoc#using-the-solr-administration-user-interface,Using the Solr Administration User Interface>>.)
@@ -59,7 +58,6 @@ Log levels settings are as follows:
 
 Multiple settings at one time are allowed.
 
-[[ConfiguringLogging-LoglevelAPI]]
 === Log Level API
 
 You can also send REST commands to the logging endpoint to achieve the same thing. For example:
@@ -70,7 +68,6 @@ There is also a way of sending REST commands to the logging endpoint to do the s
 curl -s http://localhost:8983/solr/admin/info/logging --data-binary "set=root:WARN&wt=json"
 ----
 
-[[ConfiguringLogging-ChoosingLogLevelatStartup]]
 == Choosing Log Level at Startup
 
 You can temporarily choose a different logging level as you start Solr. There are two ways:
@@ -87,7 +84,6 @@ bin/solr start -f -v
 bin/solr start -f -q
 ----
 
-[[ConfiguringLogging-PermanentLoggingSettings]]
 == Permanent Logging Settings
 
 Solr uses http://logging.apache.org/log4j/1.2/[Log4J version 1.2] for logging, which is configured using `server/resources/log4j.properties`. Take a moment to inspect the contents of the `log4j.properties` file so that you are familiar with its structure. By default, Solr log messages will be written to `SOLR_LOGS_DIR/solr.log`.
@@ -109,7 +105,6 @@ On every startup of Solr, the start script will clean up old logs and rotate the
 
 You can disable the automatic log rotation at startup by changing the setting `SOLR_LOG_PRESTART_ROTATION` found in `bin/solr.in.sh` or `bin/solr.in.cmd` to false.
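As a sketch, the relevant line in `bin/solr.in.sh` would look like the following (the surrounding contents of the include file are not shown here):

```shell
# Sketch: in bin/solr.in.sh, set this variable to false to disable
# the automatic log rotation performed by the start script.
SOLR_LOG_PRESTART_ROTATION=false
echo "SOLR_LOG_PRESTART_ROTATION=$SOLR_LOG_PRESTART_ROTATION"
```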
 
-[[ConfiguringLogging-LoggingSlowQueries]]
 == Logging Slow Queries
 
 For high-volume search applications, logging every query can generate a large amount of logs and, depending on the volume, potentially impact performance. If you mine these logs for additional insights into your application, then logging every query request may be useful.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/74ab1616/solr/solr-ref-guide/src/datadir-and-directoryfactory-in-solrconfig.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/datadir-and-directoryfactory-in-solrconfig.adoc b/solr/solr-ref-guide/src/datadir-and-directoryfactory-in-solrconfig.adoc
index c68a3ad..f3e8dc9 100644
--- a/solr/solr-ref-guide/src/datadir-and-directoryfactory-in-solrconfig.adoc
+++ b/solr/solr-ref-guide/src/datadir-and-directoryfactory-in-solrconfig.adoc
@@ -35,7 +35,6 @@ If you are using replication to replicate the Solr index (as described in <<lega
 
 NOTE: If the environment variable `SOLR_DATA_HOME` is defined, or if `solr.data.home` is configured for your DirectoryFactory, the location of the data directory will be `<SOLR_DATA_HOME>/<instance_name>/data`.
 
-[[DataDirandDirectoryFactoryinSolrConfig-SpecifyingtheDirectoryFactoryForYourIndex]]
 == Specifying the DirectoryFactory For Your Index
 
 The default {solr-javadocs}/solr-core/org/apache/solr/core/StandardDirectoryFactory.html[`solr.StandardDirectoryFactory`] is filesystem based, and tries to pick the best implementation for the current JVM and platform. You can force a particular implementation and/or config options by specifying {solr-javadocs}/solr-core/org/apache/solr/core/MMapDirectoryFactory.html[`solr.MMapDirectoryFactory`], {solr-javadocs}/solr-core/org/apache/solr/core/NIOFSDirectoryFactory.html[`solr.NIOFSDirectoryFactory`], or {solr-javadocs}/solr-core/org/apache/solr/core/SimpleFSDirectoryFactory.html[`solr.SimpleFSDirectoryFactory`].
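For instance, a `solrconfig.xml` entry forcing the memory-mapped implementation might look like this (a sketch; substitute whichever factory class fits your platform):

```xml
<!-- Sketch: force MMapDirectoryFactory instead of the auto-selecting default -->
<directoryFactory name="DirectoryFactory"
                  class="solr.MMapDirectoryFactory"/>
```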
@@ -57,7 +56,5 @@ The {solr-javadocs}/solr-core/org/apache/solr/core/RAMDirectoryFactory.html[`sol
 
 [NOTE]
 ====
-
 If you are using Hadoop and would like to store your indexes in HDFS, you should use the {solr-javadocs}/solr-core/org/apache/solr/core/HdfsDirectoryFactory.html[`solr.HdfsDirectoryFactory`] instead of either of the above implementations. For more details, see the section <<running-solr-on-hdfs.adoc#running-solr-on-hdfs,Running Solr on HDFS>>.
-
 ====

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/74ab1616/solr/solr-ref-guide/src/dataimport-screen.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/dataimport-screen.adoc b/solr/solr-ref-guide/src/dataimport-screen.adoc
index 363a2bd..9f3cb43 100644
--- a/solr/solr-ref-guide/src/dataimport-screen.adoc
+++ b/solr/solr-ref-guide/src/dataimport-screen.adoc
@@ -23,7 +23,6 @@ The Dataimport screen shows the configuration of the DataImportHandler (DIH) and
 .The Dataimport Screen
 image::images/dataimport-screen/dataimport.png[image,width=485,height=250]
 
-
 This screen also lets you adjust various options to control how the data is imported to Solr, and view the data import configuration file that controls the import.
 
 For more information about data importing with DIH, see the section on <<uploading-structured-data-store-data-with-the-data-import-handler.adoc#uploading-structured-data-store-data-with-the-data-import-handler,Uploading Structured Data Store Data with the Data Import Handler>>.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/74ab1616/solr/solr-ref-guide/src/de-duplication.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/de-duplication.adoc b/solr/solr-ref-guide/src/de-duplication.adoc
index 3e9cd46..67f8d8c 100644
--- a/solr/solr-ref-guide/src/de-duplication.adoc
+++ b/solr/solr-ref-guide/src/de-duplication.adoc
@@ -26,7 +26,6 @@ Preventing duplicate or near duplicate documents from entering an index or taggi
 * Lookup3Signature: 64-bit hash used for exact duplicate detection. This is much faster than MD5 and smaller to index.
 * http://wiki.apache.org/solr/TextProfileSignature[TextProfileSignature]: Fuzzy hashing implementation from Apache Nutch for near duplicate detection. It's tunable but works best on longer text.
 
-
 Other, more sophisticated algorithms for fuzzy/near hashing can be added later.
 
 [IMPORTANT]
@@ -36,12 +35,10 @@ Adding in the de-duplication process will change the `allowDups` setting so that
 Of course the `signatureField` could be the unique field, but generally you want the unique field to be unique. When a document is added, a signature will automatically be generated and attached to the document in the specified `signatureField`.
 ====
 
-[[De-Duplication-ConfigurationOptions]]
 == Configuration Options
 
 There are two places in Solr to configure de-duplication: in `solrconfig.xml` and in `schema.xml`.
 
-[[De-Duplication-Insolrconfig.xml]]
 === In solrconfig.xml
 
 The `SignatureUpdateProcessorFactory` has to be registered in `solrconfig.xml` as part of an <<update-request-processors.adoc#update-request-processors,Update Request Processor Chain>>, as in this example:
@@ -84,8 +81,6 @@ Set to *false* to disable de-duplication processing. The default is *true*.
 overwriteDupes::
 If true (the default), when a document already exists that matches this signature, it will be overwritten.
 
-
-[[De-Duplication-Inschema.xml]]
 === In schema.xml
 
 If you are using a separate field for storing the signature, you must have it indexed:

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/74ab1616/solr/solr-ref-guide/src/defining-fields.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/defining-fields.adoc b/solr/solr-ref-guide/src/defining-fields.adoc
index 8e6de9c..ef93d60 100644
--- a/solr/solr-ref-guide/src/defining-fields.adoc
+++ b/solr/solr-ref-guide/src/defining-fields.adoc
@@ -20,8 +20,7 @@
 
 Fields are defined in the `fields` element of `schema.xml`. Once you have the field types set up, defining the fields themselves is simple.
 
-[[DefiningFields-Example]]
-== Example
+== Example Field Definition
 
 The following example defines a field named `price` with a type named `float` and a default value of `0.0`; the `indexed` and `stored` properties are explicitly set to `true`, while any other properties specified on the `float` field type are inherited.
 
@@ -30,7 +29,6 @@ The following example defines a field named `price` with a type named `float` an
 <field name="price" type="float" default="0.0" indexed="true" stored="true"/>
 ----
 
-[[DefiningFields-FieldProperties]]
 == Field Properties
 
 Field definitions can have the following properties:
@@ -44,7 +42,6 @@ The name of the `fieldType` for this field. This will be found in the `name` att
 `default`::
 A default value that will be added automatically to any document that does not have a value in this field when it is indexed. If this property is not specified, there is no default.
 
-[[DefiningFields-OptionalFieldTypeOverrideProperties]]
 == Optional Field Type Override Properties
 
 Fields can have many of the same properties as field types. Properties from the table below that are specified on an individual field will override any explicit value for that property specified on the `fieldType` of the field, or any implicit default property value provided by the underlying `fieldType` implementation. The table below is reproduced from <<field-type-definitions-and-properties.adoc#field-type-definitions-and-properties,Field Type Definitions and Properties>>, which has more details:

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/74ab1616/solr/solr-ref-guide/src/detecting-languages-during-indexing.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/detecting-languages-during-indexing.adoc b/solr/solr-ref-guide/src/detecting-languages-during-indexing.adoc
index 4003f1a..392a0df 100644
--- a/solr/solr-ref-guide/src/detecting-languages-during-indexing.adoc
+++ b/solr/solr-ref-guide/src/detecting-languages-during-indexing.adoc
@@ -31,12 +31,10 @@ For specific information on each of these language identification implementation
 
 For more information about language analysis in Solr, see <<language-analysis.adoc#language-analysis,Language Analysis>>.
 
-[[DetectingLanguagesDuringIndexing-ConfiguringLanguageDetection]]
 == Configuring Language Detection
 
 You can configure the `langid` UpdateRequestProcessor in `solrconfig.xml`. Both implementations take the same parameters, which are described in the following section. At a minimum, you must specify the fields for language identification and a field for the resulting language code.
 
-[[DetectingLanguagesDuringIndexing-ConfiguringTikaLanguageDetection]]
 === Configuring Tika Language Detection
 
 Here is an example of a minimal Tika `langid` configuration in `solrconfig.xml`:
@@ -51,7 +49,6 @@ Here is an example of a minimal Tika `langid` configuration in `solrconfig.xml`:
 </processor>
 ----
 
-[[DetectingLanguagesDuringIndexing-ConfiguringLangDetectLanguageDetection]]
 === Configuring LangDetect Language Detection
 
 Here is an example of a minimal LangDetect `langid` configuration in `solrconfig.xml`:
@@ -66,7 +63,6 @@ Here is an example of a minimal LangDetect `langid` configuration in `solrconfig
 </processor>
 ----
 
-[[DetectingLanguagesDuringIndexing-langidParameters]]
 == langid Parameters
 
 As previously mentioned, both implementations of the `langid` UpdateRequestProcessor take the same parameters.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/74ab1616/solr/solr-ref-guide/src/distributed-requests.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/distributed-requests.adoc b/solr/solr-ref-guide/src/distributed-requests.adoc
index 6d2c585..9fc80a7 100644
--- a/solr/solr-ref-guide/src/distributed-requests.adoc
+++ b/solr/solr-ref-guide/src/distributed-requests.adoc
@@ -22,7 +22,6 @@ When a Solr node receives a search request, the request is routed behind the sce
 
 The chosen replica acts as an aggregator: it creates internal requests to randomly chosen replicas of every shard in the collection, coordinates the responses, issues any subsequent internal requests as needed (for example, to refine facets values, or request additional stored fields), and constructs the final response for the client.
 
-[[DistributedRequests-LimitingWhichShardsareQueried]]
 == Limiting Which Shards are Queried
 
 While one of the advantages of using SolrCloud is the ability to query very large collections distributed among various shards, in some cases <<shards-and-indexing-data-in-solrcloud.adoc#ShardsandIndexingDatainSolrCloud-DocumentRouting,you may know that you are only interested in results from a subset of your shards>>. You have the option of searching over all of your data or just parts of it.
@@ -71,7 +70,6 @@ And of course, you can specify a list of shards (seperated by commas) each defin
 http://localhost:8983/solr/gettingstarted/select?q=*:*&shards=shard1,localhost:7574/solr/gettingstarted|localhost:7500/solr/gettingstarted
 ----
 
-[[DistributedRequests-ConfiguringtheShardHandlerFactory]]
 == Configuring the ShardHandlerFactory
 
 You can directly configure aspects of the concurrency and thread-pooling used within distributed search in Solr. This allows for finer grained control and you can tune it to target your own specific requirements. The default configuration favors throughput over latency.
@@ -118,7 +116,6 @@ If specified, the thread pool will use a backing queue instead of a direct hando
 `fairnessPolicy`::
 Chooses the JVM specifics dealing with fair policy queuing. If enabled, distributed searches will be handled in a First-In-First-Out fashion at a cost to throughput. If disabled, throughput will be favored over latency. The default is `false`.
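A sketch of how parameters like these fit into a `shardHandlerFactory` declaration in `solrconfig.xml` (the numeric values here are illustrative, not recommendations):

```xml
<!-- Illustrative values only -->
<requestHandler name="/select" class="solr.SearchHandler">
  <shardHandlerFactory class="HttpShardHandlerFactory">
    <int name="socketTimeout">1000</int>
    <int name="connTimeout">5000</int>
    <bool name="fairnessPolicy">false</bool>
  </shardHandlerFactory>
</requestHandler>
```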
 
-[[DistributedRequests-ConfiguringstatsCache_DistributedIDF_]]
 == Configuring statsCache (Distributed IDF)
 
 Document and term statistics are needed in order to calculate relevancy. Solr provides four implementations out of the box when it comes to document stats calculation:
@@ -135,15 +132,13 @@ The implementation can be selected by setting `<statsCache>` in `solrconfig.xml`
 <statsCache class="org.apache.solr.search.stats.ExactStatsCache"/>
 ----
 
-[[DistributedRequests-AvoidingDistributedDeadlock]]
 == Avoiding Distributed Deadlock
 
 Each shard serves top-level query requests and then makes sub-requests to all of the other shards. Care should be taken to ensure that the max number of threads serving HTTP requests is greater than the possible number of requests from both top-level clients and other shards. If this is not the case, the configuration may result in a distributed deadlock.
 
 For example, a deadlock might occur in the case of two shards, each with just a single thread to service HTTP requests. Both threads could receive a top-level request concurrently, and make sub-requests to each other. Because there are no more remaining threads to service requests, the incoming requests will be blocked until the other pending requests are finished, but they will not finish since they are waiting for the sub-requests. By ensuring that Solr is configured to handle a sufficient number of threads, you can avoid deadlock situations like this.
 
-[[DistributedRequests-PreferLocalShards]]
-== Prefer Local Shards
+== preferLocalShards Parameter
 
 Solr allows you to pass an optional boolean parameter named `preferLocalShards` to indicate that a distributed query should prefer local replicas of a shard when available. In other words, if a query includes `preferLocalShards=true`, then the query controller will look for local replicas to service the query instead of selecting replicas at random from across the cluster. This is useful when a query requests many fields or large fields to be returned per document because it avoids moving large amounts of data over the network when it is available locally. In addition, this feature can be useful for minimizing the impact of a problematic replica with degraded performance, as it reduces the likelihood that the degraded replica will be hit by other healthy replicas.
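A sketch of such a request, assuming the `gettingstarted` collection from the earlier examples (the URL is assembled and printed rather than sent, since it targets a live cluster):

```shell
# Illustrative: prefer local replicas for a distributed query.
# Assumes the gettingstarted collection on a local node.
SOLR_URL="http://localhost:8983/solr/gettingstarted/select"
QUERY="q=*:*&preferLocalShards=true"
echo "${SOLR_URL}?${QUERY}"
```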
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/74ab1616/solr/solr-ref-guide/src/distributed-search-with-index-sharding.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/distributed-search-with-index-sharding.adoc b/solr/solr-ref-guide/src/distributed-search-with-index-sharding.adoc
index b1ad8dc..0e6e7d8 100644
--- a/solr/solr-ref-guide/src/distributed-search-with-index-sharding.adoc
+++ b/solr/solr-ref-guide/src/distributed-search-with-index-sharding.adoc
@@ -26,14 +26,12 @@ Everything on this page is specific to legacy setup of distributed search. Users
 
 Updates may be reordered (i.e., replica A may see update X then Y, while replica B sees update Y then X). *deleteByQuery* also handles reorders the same way, to ensure replicas are consistent. All replicas of a shard are consistent, even if the updates arrive in a different order on different replicas.
 
-[[DistributedSearchwithIndexSharding-DistributingDocumentsacrossShards]]
 == Distributing Documents across Shards
 
 When not using SolrCloud, it is up to you to get all your documents indexed on each shard of your server farm. Solr supports distributed indexing (routing) in its true form only in the SolrCloud mode.
 
 In the legacy distributed mode, Solr does not calculate universal term/doc frequencies. For most large-scale implementations, it is not likely to matter that Solr calculates TF/IDF at the shard level. However, if your collection is heavily skewed in its distribution across servers, you may find misleading relevancy results in your searches. In general, it is probably best to randomly distribute documents to your shards.
 
-[[DistributedSearchwithIndexSharding-ExecutingDistributedSearcheswiththeshardsParameter]]
 == Executing Distributed Searches with the shards Parameter
 
 If a query request includes the `shards` parameter, the Solr server distributes the request across all the shards listed as arguments to the parameter. The `shards` parameter uses this syntax:
@@ -63,7 +61,6 @@ The following components support distributed search:
 * The *Stats* component, which returns simple statistics for numeric fields within the DocSet.
 * The *Debug* component, which helps with debugging.
 
-[[DistributedSearchwithIndexSharding-LimitationstoDistributedSearch]]
 == Limitations to Distributed Search
 
 Distributed searching in Solr has the following limitations:
@@ -78,12 +75,10 @@ Distributed searching in Solr has the following limitations:
 
 Formerly, a limitation was that TF/IDF relevancy computations only used shard-local statistics. This is still the case by default. If your data isn't randomly distributed, or if you want more exact statistics, then remember to configure the ExactStatsCache.
 
-[[DistributedSearchwithIndexSharding-AvoidingDistributedDeadlock]]
-== Avoiding Distributed Deadlock
+== Avoiding Distributed Deadlock with Distributed Search
 
 Like in SolrCloud mode, inter-shard requests could lead to a distributed deadlock. It can be avoided by following the instructions in the section <<distributed-requests.adoc#distributed-requests,Distributed Requests>>.
 
-[[DistributedSearchwithIndexSharding-TestingIndexShardingonTwoLocalServers]]
 == Testing Index Sharding on Two Local Servers
 
 For simple functional testing, it's easiest to just set up two local Solr servers on different ports. (In a production environment, of course, these servers would be deployed on separate machines.)

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/74ab1616/solr/solr-ref-guide/src/docvalues.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/docvalues.adoc b/solr/solr-ref-guide/src/docvalues.adoc
index b2debda..2ec3677 100644
--- a/solr/solr-ref-guide/src/docvalues.adoc
+++ b/solr/solr-ref-guide/src/docvalues.adoc
@@ -28,7 +28,6 @@ For other features that we now commonly associate with search, such as sorting,
 
 In Lucene 4.0, a new approach was introduced. DocValue fields are now column-oriented fields with a document-to-value mapping built at index time. This approach promises to relieve some of the memory requirements of the fieldCache and make lookups for faceting, sorting, and grouping much faster.
 
-[[DocValues-EnablingDocValues]]
 == Enabling DocValues
 
 To use docValues, you only need to enable it for a field that you will use it with. As with all schema design, you need to define a field type and then define fields of that type with docValues enabled. All of these actions are done in `schema.xml`.
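For example, a field definition with docValues enabled might look like this (the field and type names here are illustrative):

```xml
<!-- Illustrative field with docValues enabled in schema.xml -->
<field name="manu_exact" type="string" indexed="false" stored="false" docValues="true"/>
```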
@@ -76,7 +75,6 @@ Lucene index back-compatibility is only supported for the default codec. If you
 
 If `docValues="true"` for a field, then DocValues will automatically be used any time the field is used for <<common-query-parameters.adoc#CommonQueryParameters-ThesortParameter,sorting>>, <<faceting.adoc#faceting,faceting>> or <<function-queries.adoc#function-queries,function queries>>.
 
-[[DocValues-RetrievingDocValuesDuringSearch]]
 === Retrieving DocValues During Search
 
 Field values retrieved during search queries are typically returned from stored values. However, non-stored docValues fields will be also returned along with other stored fields when all fields (or pattern matching globs) are specified to be returned (e.g. "`fl=*`") for search queries depending on the effective value of the `useDocValuesAsStored` parameter for each field. For schema versions >= 1.6, the implicit default is `useDocValuesAsStored="true"`. See <<field-type-definitions-and-properties.adoc#field-type-definitions-and-properties,Field Type Definitions and Properties>> & <<defining-fields.adoc#defining-fields,Defining Fields>> for more details.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/74ab1616/solr/solr-ref-guide/src/enabling-ssl.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/enabling-ssl.adoc b/solr/solr-ref-guide/src/enabling-ssl.adoc
index be2025e..a741565 100644
--- a/solr/solr-ref-guide/src/enabling-ssl.adoc
+++ b/solr/solr-ref-guide/src/enabling-ssl.adoc
@@ -24,10 +24,8 @@ This section describes enabling SSL using a self-signed certificate.
 
 For background on SSL certificates and keys, see http://www.tldp.org/HOWTO/SSL-Certificates-HOWTO/.
 
-[[EnablingSSL-BasicSSLSetup]]
 == Basic SSL Setup
 
-[[EnablingSSL-Generateaself-signedcertificateandakey]]
 === Generate a Self-Signed Certificate and a Key
 
 To generate a self-signed certificate and a single key that will be used to authenticate both the server and the client, we'll use the JDK https://docs.oracle.com/javase/8/docs/technotes/tools/unix/keytool.html[`keytool`] command and create a separate keystore. This keystore will also be used as a truststore below. It's possible to use the keystore that comes with the JDK for these purposes, and to use a separate truststore, but those options aren't covered here.
@@ -45,7 +43,6 @@ keytool -genkeypair -alias solr-ssl -keyalg RSA -keysize 2048 -keypass secret -s
 
 The above command will create a keystore file named `solr-ssl.keystore.jks` in the current directory.
 
-[[EnablingSSL-ConvertthecertificateandkeytoPEMformatforusewithcURL]]
 === Convert the Certificate and Key to PEM Format for Use with cURL
 
 cURL isn't capable of using JKS formatted keystores, so the JKS keystore needs to be converted to PEM format, which cURL understands.
@@ -73,7 +70,6 @@ If you want to use cURL on OS X Yosemite (10.10), you'll need to create a certif
 openssl pkcs12 -nokeys -in solr-ssl.keystore.p12 -out solr-ssl.cacert.pem
 ----
 
-[[EnablingSSL-SetcommonSSLrelatedsystemproperties]]
 === Set Common SSL-Related System Properties
 
 The Solr Control Script is already set up to pass SSL-related Java system properties to the JVM. To activate the SSL settings, uncomment and update the set of properties beginning with `SOLR_SSL_*` in `bin/solr.in.sh` (or `bin\solr.in.cmd` on Windows).
@@ -116,7 +112,6 @@ REM Enable clients to authenticate (but not require)
 set SOLR_SSL_WANT_CLIENT_AUTH=false
 ----
 
-[[EnablingSSL-RunSingleNodeSolrusingSSL]]
 === Run Single Node Solr using SSL
 
 Start Solr using the command shown below; by default clients will not be required to authenticate:
@@ -133,12 +128,10 @@ bin/solr -p 8984
 bin\solr.cmd -p 8984
 ----
 
-[[EnablingSSL-SolrCloud]]
 == SSL with SolrCloud
 
 This section describes how to run a two-node SolrCloud cluster with no initial collections and a single-node external ZooKeeper. The commands below assume you have already created the keystore described above.
 
-[[EnablingSSL-ConfigureZooKeeper]]
 === Configure ZooKeeper
 
 NOTE: ZooKeeper does not support encrypted communication with clients like Solr. There are several related JIRA tickets where SSL support is being planned/worked on: https://issues.apache.org/jira/browse/ZOOKEEPER-235[ZOOKEEPER-235]; https://issues.apache.org/jira/browse/ZOOKEEPER-236[ZOOKEEPER-236]; https://issues.apache.org/jira/browse/ZOOKEEPER-1000[ZOOKEEPER-1000]; and https://issues.apache.org/jira/browse/ZOOKEEPER-2120[ZOOKEEPER-2120].
@@ -163,10 +156,8 @@ server\scripts\cloud-scripts\zkcli.bat -zkhost localhost:2181 -cmd clusterprop -
 
 If you have set up your ZooKeeper cluster to use a <<taking-solr-to-production.adoc#TakingSolrtoProduction-ZooKeeperchroot,chroot for Solr>>, make sure you use the correct `zkhost` string with `zkcli`, e.g. `-zkhost localhost:2181/solr`.
 
-[[EnablingSSL-RunSolrCloudwithSSL]]
 === Run SolrCloud with SSL
 
-[[EnablingSSL-CreateSolrhomedirectoriesfortwonodes]]
 ==== Create Solr Home Directories for Two Nodes
 
 Create two copies of the `server/solr/` directory which will serve as the Solr home directories for each of your two SolrCloud nodes:
@@ -187,7 +178,6 @@ xcopy /E server\solr cloud\node1\
 xcopy /E server\solr cloud\node2\
 ----
 
-[[EnablingSSL-StartthefirstSolrnode]]
 ==== Start the First Solr Node
 
 Next, start the first Solr node on port 8984. Be sure to stop the standalone server first if you started it when working through the previous section on this page.
@@ -220,7 +210,6 @@ bin/solr -cloud -s cloud/node1 -z localhost:2181 -p 8984 -Dsolr.ssl.checkPeerNam
 bin\solr.cmd -cloud -s cloud\node1 -z localhost:2181 -p 8984 -Dsolr.ssl.checkPeerName=false
 ----
 
-[[EnablingSSL-StartthesecondSolrnode]]
 ==== Start the Second Solr Node
 
 Finally, start the second Solr node on port 7574 - again, to skip hostname verification, add `-Dsolr.ssl.checkPeerName=false`:
@@ -237,14 +226,13 @@ bin/solr -cloud -s cloud/node2 -z localhost:2181 -p 7574
 bin\solr.cmd -cloud -s cloud\node2 -z localhost:2181 -p 7574
 ----
 
-[[EnablingSSL-ExampleClientActions]]
 == Example Client Actions
 
 [IMPORTANT]
 ====
 cURL on OS X Mavericks (10.9) has degraded SSL support. For more information and workarounds to allow one-way SSL, see http://curl.haxx.se/mail/archive-2013-10/0036.html. cURL on OS X Yosemite (10.10) is improved - 2-way SSL is possible - see http://curl.haxx.se/mail/archive-2014-10/0053.html.
 
-The cURL commands in the following sections will not work with the system `curl` on OS X Yosemite (10.10). Instead, the certificate supplied with the `-E` param must be in PKCS12 format, and the file supplied with the `--cacert` param must contain only the CA certificate, and no key (see <<EnablingSSL-ConvertthecertificateandkeytoPEMformatforusewithcURL,above>> for instructions on creating this file):
+The cURL commands in the following sections will not work with the system `curl` on OS X Yosemite (10.10). Instead, the certificate supplied with the `-E` param must be in PKCS12 format, and the file supplied with the `--cacert` param must contain only the CA certificate, and no key (see <<Convert the Certificate and Key to PEM Format for Use with cURL,above>> for instructions on creating this file):
 
 [source,bash]
 curl -E solr-ssl.keystore.p12:secret --cacert solr-ssl.cacert.pem ...
@@ -271,7 +259,6 @@ bin\solr.cmd create -c mycollection -shards 2
 
 The `create` action will pass the `SOLR_SSL_*` properties set in your include file to the SolrJ code used to create the collection.
 
-[[EnablingSSL-RetrieveSolrCloudclusterstatususingcURL]]
 === Retrieve SolrCloud Cluster Status using cURL
 
 To get the resulting cluster status (again, if you have not enabled client authentication, remove the `-E solr-ssl.pem:secret` option):
@@ -317,7 +304,6 @@ You should get a response that looks like this:
     "properties":{"urlScheme":"https"}}}
 ----
 
-[[EnablingSSL-Indexdocumentsusingpost.jar]]
 === Index Documents using post.jar
 
 Use `post.jar` to index some example documents to the SolrCloud collection created above:
@@ -329,7 +315,6 @@ cd example/exampledocs
 java -Djavax.net.ssl.keyStorePassword=secret -Djavax.net.ssl.keyStore=../../server/etc/solr-ssl.keystore.jks -Djavax.net.ssl.trustStore=../../server/etc/solr-ssl.keystore.jks -Djavax.net.ssl.trustStorePassword=secret -Durl=https://localhost:8984/solr/mycollection/update -jar post.jar *.xml
 ----
 
-[[EnablingSSL-QueryusingcURL]]
 === Query Using cURL
 
 Use cURL to query the SolrCloud collection created above, from a directory containing the PEM formatted certificate and key created above (e.g. `example/etc/`) - if you have not enabled client authentication (system property `-Djetty.ssl.clientAuth=true`), then you can remove the `-E solr-ssl.pem:secret` option:
@@ -339,8 +324,7 @@ Use cURL to query the SolrCloud collection created above, from a directory conta
 curl -E solr-ssl.pem:secret --cacert solr-ssl.pem "https://localhost:8984/solr/mycollection/select?q=*:*&wt=json&indent=on"
 ----
 
-[[EnablingSSL-IndexadocumentusingCloudSolrClient]]
-=== Index a document using CloudSolrClient
+=== Index a Document using CloudSolrClient
 
 From a Java client using SolrJ, index a document. In the code below, the `javax.net.ssl.*` system properties are set programmatically, but you could instead specify them on the `java` command line, as in the `post.jar` example above:
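The listing itself is elided by this hunk; the command-line alternative the paragraph mentions would look much like the `post.jar` invocation above. This is a sketch only — the client class and jar names here are hypothetical, while the `javax.net.ssl.*` properties are the standard JSSE system properties:

[source,bash]
----
# Hypothetical client class/jar names; the javax.net.ssl.* properties
# are passed on the command line instead of being set programmatically.
java -Djavax.net.ssl.keyStore=server/etc/solr-ssl.keystore.jks \
     -Djavax.net.ssl.keyStorePassword=secret \
     -Djavax.net.ssl.trustStore=server/etc/solr-ssl.keystore.jks \
     -Djavax.net.ssl.trustStorePassword=secret \
     -cp myapp.jar MyCloudSolrClientApp
----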
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/74ab1616/solr/solr-ref-guide/src/exporting-result-sets.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/exporting-result-sets.adoc b/solr/solr-ref-guide/src/exporting-result-sets.adoc
index 33852fa..0f8866d 100644
--- a/solr/solr-ref-guide/src/exporting-result-sets.adoc
+++ b/solr/solr-ref-guide/src/exporting-result-sets.adoc
@@ -25,19 +25,16 @@ This feature uses a stream sorting technique that begins to send records within
 
 The cases where this functionality may be useful include: session analysis, distributed merge joins, time series roll-ups, aggregations on high cardinality fields, fully distributed field collapsing, and sort based stats.
 
-[[ExportingResultSets-FieldRequirements]]
 == Field Requirements
 
 All the fields being sorted and exported must have docValues set to true. For more information, see the section on <<docvalues.adoc#docvalues,DocValues>>.
 
-[[ExportingResultSets-The_exportRequestHandler]]
 == The /export RequestHandler
 
 The `/export` request handler with the appropriate configuration is one of Solr's out-of-the-box request handlers - see <<implicit-requesthandlers.adoc#implicit-requesthandlers,Implicit RequestHandlers>> for more information.
 
 Note that this request handler's properties are defined as "invariants", which means they cannot be overridden by other properties passed at another time (such as at query time).
 
-[[ExportingResultSets-RequestingResultsExport]]
 == Requesting Results Export
 
 You can use `/export` to make requests to export the result set of a query.
@@ -53,19 +50,16 @@ Here is an example of an export request of some indexed log data:
 http://localhost:8983/solr/core_name/export?q=my-query&sort=severity+desc,timestamp+desc&fl=severity,timestamp,msg
 ----
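The example URL above can be assembled from its parts. This sketch (using the same hypothetical `core_name`, query, sort, and field list as the example) just builds and prints the request URL:

[source,bash]
----
# Build the /export request URL from its parts (hypothetical core_name
# and field names, matching the example above).
BASE="http://localhost:8983/solr/core_name/export"
Q="my-query"
SORT="severity+desc,timestamp+desc"   # up to four single-valued docValues fields
FL="severity,timestamp,msg"
URL="${BASE}?q=${Q}&sort=${SORT}&fl=${FL}"
echo "$URL"
----

Passing `"$URL"` to `curl` would stream the full sorted result set rather than a single page of results.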
 
-[[ExportingResultSets-SpecifyingtheSortCriteria]]
 === Specifying the Sort Criteria
 
 The `sort` property defines how documents will be sorted in the exported result set. Results can be sorted by any field that has a field type of int, long, float, double, or string. The sort fields must be single-valued fields.
 
 Up to four sort fields can be specified per request, with the 'asc' or 'desc' properties.
 
-[[ExportingResultSets-SpecifyingtheFieldList]]
 === Specifying the Field List
 
 The `fl` property defines the fields that will be exported with the result set. Any of the field types that can be sorted (i.e., int, long, float, double, string, date, boolean) can be used in the field list. The fields can be single or multi-valued. However, returning scores and using wildcards in the field list are not supported at this time.
 
-[[ExportingResultSets-DistributedSupport]]
 == Distributed Support
 
 See the section <<streaming-expressions.adoc#streaming-expressions,Streaming Expressions>> for distributed support.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/74ab1616/solr/solr-ref-guide/src/faceting.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/faceting.adoc b/solr/solr-ref-guide/src/faceting.adoc
index b0a79c0..44db506 100644
--- a/solr/solr-ref-guide/src/faceting.adoc
+++ b/solr/solr-ref-guide/src/faceting.adoc
@@ -21,7 +21,7 @@
 
 Faceting is the arrangement of search results into categories based on indexed terms.
 
-Searchers are presented with the indexed terms, along with numerical counts of how many matching documents were found were each term. Faceting makes it easy for users to explore search results, narrowing in on exactly the results they are looking for.
+Searchers are presented with the indexed terms, along with numerical counts of how many matching documents were found for each term. Faceting makes it easy for users to explore search results, narrowing in on exactly the results they are looking for.
 
 [[Faceting-GeneralParameters]]
 == General Parameters

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/74ab1616/solr/solr-ref-guide/src/field-type-definitions-and-properties.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/field-type-definitions-and-properties.adoc b/solr/solr-ref-guide/src/field-type-definitions-and-properties.adoc
index 89b8e90..27c3222 100644
--- a/solr/solr-ref-guide/src/field-type-definitions-and-properties.adoc
+++ b/solr/solr-ref-guide/src/field-type-definitions-and-properties.adoc
@@ -27,7 +27,6 @@ A field type definition can include four types of information:
 * If the field type is `TextField`, a description of the field analysis for the field type.
 * Field type properties - depending on the implementation class, some properties may be mandatory.
 
-[[FieldTypeDefinitionsandProperties-FieldTypeDefinitionsinschema.xml]]
 == Field Type Definitions in schema.xml
 
 Field types are defined in `schema.xml`. Each field type is defined between `fieldType` elements. They can optionally be grouped within a `types` element. Here is an example of a field type definition for a type called `text_general`:
@@ -137,7 +136,6 @@ The default values for each property depend on the underlying `FieldType` class,
 
 // TODO: SOLR-10655 END
 
-[[FieldTypeDefinitionsandProperties-FieldTypeSimilarity]]
 == Field Type Similarity
 
 A field type may optionally specify a `<similarity/>` that will be used when scoring documents that refer to fields with this type, as long as the "global" similarity for the collection allows it.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/74ab1616/solr/solr-ref-guide/src/getting-started-with-solrcloud.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/getting-started-with-solrcloud.adoc b/solr/solr-ref-guide/src/getting-started-with-solrcloud.adoc
index d512660..30dd9b1 100644
--- a/solr/solr-ref-guide/src/getting-started-with-solrcloud.adoc
+++ b/solr/solr-ref-guide/src/getting-started-with-solrcloud.adoc
@@ -33,10 +33,8 @@ In this section you will learn how to start a SolrCloud cluster using startup sc
 This tutorial assumes that you're already familiar with the basics of using Solr. If you need a refresher, please see the <<getting-started.adoc#getting-started,Getting Started section>> to get a grounding in Solr concepts. If you load documents as part of that exercise, you should start over with a fresh Solr installation for these SolrCloud tutorials.
 ====
 
-[[GettingStartedwithSolrCloud-SolrCloudExample]]
 == SolrCloud Example
 
-[[GettingStartedwithSolrCloud-InteractiveStartup]]
 === Interactive Startup
 
 The `bin/solr` script makes it easy to get started with SolrCloud as it walks you through the process of launching Solr nodes in cloud mode and adding a collection. To get started, simply do:
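The launch command itself is elided by this hunk; it is the interactive form of the `-noprompt` invocation shown later on the page:

[source,bash]
----
bin/solr -e cloud
----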
@@ -120,7 +118,6 @@ To stop Solr in SolrCloud mode, you would use the `bin/solr` script and issue th
 bin/solr stop -all
 ----
 
-[[GettingStartedwithSolrCloud-Startingwith-noprompt]]
 === Starting with -noprompt
 
 You can also get SolrCloud started with all the defaults instead of the interactive session using the following command:
@@ -130,7 +127,6 @@ You can also get SolrCloud started with all the defaults instead of the interact
 bin/solr -e cloud -noprompt
 ----
 
-[[GettingStartedwithSolrCloud-RestartingNodes]]
 === Restarting Nodes
 
 You can restart your SolrCloud nodes using the `bin/solr` script. For instance, to restart node1 running on port 8983 (with an embedded ZooKeeper server), you would do:
@@ -149,7 +145,6 @@ bin/solr restart -c -p 7574 -z localhost:9983 -s example/cloud/node2/solr
 
 Notice that you need to specify the ZooKeeper address (`-z localhost:9983`) when starting node2 so that it can join the cluster with node1.
 
-[[GettingStartedwithSolrCloud-Addinganodetoacluster]]
 === Adding a node to a cluster
 
 Adding a node to an existing cluster is a bit advanced and involves a little more understanding of Solr. Once you start up a SolrCloud cluster using the startup scripts, you can add a new node to it by:

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/74ab1616/solr/solr-ref-guide/src/hadoop-authentication-plugin.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/hadoop-authentication-plugin.adoc b/solr/solr-ref-guide/src/hadoop-authentication-plugin.adoc
index 1c17fbc..7fc1943 100644
--- a/solr/solr-ref-guide/src/hadoop-authentication-plugin.adoc
+++ b/solr/solr-ref-guide/src/hadoop-authentication-plugin.adoc
@@ -38,7 +38,6 @@ There are two plugin classes:
 For most SolrCloud or standalone Solr setups, the `HadoopAuthPlugin` should suffice.
 ====
 
-[[HadoopAuthenticationPlugin-PluginConfiguration]]
 == Plugin Configuration
 
 `class`::
@@ -70,11 +69,8 @@ Configures proxy users for the underlying Hadoop authentication mechanism. This
 `clientBuilderFactory`::
 Optional. The `HttpClientBuilderFactory` implementation used for the Solr internal communication. Only applicable for `ConfigurableInternodeAuthHadoopPlugin`.
 
-
-[[HadoopAuthenticationPlugin-ExampleConfigurations]]
 == Example Configurations
 
-[[HadoopAuthenticationPlugin-KerberosAuthenticationusingHadoopAuthenticationPlugin]]
 === Kerberos Authentication using Hadoop Authentication Plugin
 
 This example lets you configure Solr to use Kerberos Authentication, similar to how you would use the <<kerberos-authentication-plugin.adoc#kerberos-authentication-plugin,Kerberos Authentication Plugin>>.
@@ -105,7 +101,6 @@ To setup this plugin, use the following in your `security.json` file.
 }
 ----
 
-[[HadoopAuthenticationPlugin-SimpleAuthenticationwithDelegationTokens]]
 === Simple Authentication with Delegation Tokens
 
 Similar to the previous example, this is an example of setting up a Solr cluster that uses delegation tokens. Refer to the parameters in the Hadoop authentication library's https://hadoop.apache.org/docs/stable/hadoop-auth/Configuration.html[documentation] or refer to the section <<kerberos-authentication-plugin.adoc#kerberos-authentication-plugin,Kerberos Authentication Plugin>> for further details. Please note that this example does not use Kerberos and the requests made to Solr must contain valid delegation tokens.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/74ab1616/solr/solr-ref-guide/src/highlighting.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/highlighting.adoc b/solr/solr-ref-guide/src/highlighting.adoc
index b0d094d..dbad2d6 100644
--- a/solr/solr-ref-guide/src/highlighting.adoc
+++ b/solr/solr-ref-guide/src/highlighting.adoc
@@ -24,7 +24,6 @@ The fragments are included in a special section of the query response (the `high
 
 Highlighting is extremely configurable, perhaps more than any other part of Solr. There are many parameters each for fragment sizing, formatting, ordering, backup/alternate behavior, and more options that are hard to categorize. Nonetheless, highlighting is very simple to use.
 
-[[Highlighting-Usage]]
 == Usage
 
 === Common Highlighter Parameters
@@ -36,7 +35,7 @@ Use this parameter to enable or disable highlighting. The default is `false`. If
 `hl.method`::
 The highlighting implementation to use. Acceptable values are: `unified`, `original`, `fastVector`. The default is `original`.
 +
-See the <<Highlighting-ChoosingaHighlighter,Choosing a Highlighter>> section below for more details on the differences between the available highlighters.
+See the <<Choosing a Highlighter>> section below for more details on the differences between the available highlighters.
 
 `hl.fl`::
 Specifies a list of fields to highlight. Accepts a comma- or space-delimited list of fields for which Solr should generate highlighted snippets.
@@ -92,7 +91,6 @@ The default is `51200` characters.
 
 There are more parameters supported as well depending on the highlighter (via `hl.method`) chosen.
 
-[[Highlighting-HighlightingintheQueryResponse]]
 === Highlighting in the Query Response
 
 In the response to a query, Solr includes highlighting data in a section separate from the documents. It is up to a client to determine how to process this response and display the highlights to users.
@@ -136,7 +134,6 @@ Note the two sections `docs` and `highlighting`. The `docs` section contains the
 
 The `highlighting` section includes the ID of each document, and the field that contains the highlighted portion. In this example, we used the `hl.fl` parameter to say we wanted query terms highlighted in the "manu" field. When there is a match to the query term in that field, it will be included for each document ID in the list.
 
-[[Highlighting-ChoosingaHighlighter]]
 == Choosing a Highlighter
 
 Solr provides a `HighlightComponent` (a `SearchComponent`) and it's in the default list of components for search handlers. It offers a somewhat unified API over multiple actual highlighting implementations (or simply "highlighters") that do the business of highlighting.
@@ -173,7 +170,6 @@ The Unified Highlighter is exclusively configured via search parameters. In cont
 
 In addition to further information below, more information can be found in the {solr-javadocs}/solr-core/org/apache/solr/highlight/package-summary.html[Solr javadocs].
 
-[[Highlighting-SchemaOptionsandPerformanceConsiderations]]
 === Schema Options and Performance Considerations
 
 Fundamental to the internals of highlighting are detecting the _offsets_ of the individual words that match the query. Some of the highlighters can run the stored text through the analysis chain defined in the schema, some can look them up from _postings_, and some can look them up from _term vectors._ These choices have different trade-offs:
@@ -198,7 +194,6 @@ This is definitely the fastest option for highlighting wildcard queries on large
 +
 This adds substantial weight to the index – similar in size to the compressed stored text. If you are using the Unified Highlighter then this is not a recommended configuration since it's slower and heavier than postings with light term vectors. However, this could make sense if full term vectors are already needed for another use-case.
 
-[[Highlighting-TheUnifiedHighlighter]]
 == The Unified Highlighter
 
 The Unified Highlighter supports these following additional parameters to the ones listed earlier:
@@ -243,7 +238,6 @@ Indicates which character to break the text on. Use only if you have defined `hl
 This is useful when the text has already been manipulated in advance to have a special delineation character at desired highlight passage boundaries. This character will still appear in the text as the last character of a passage.
 
 
-[[Highlighting-TheOriginalHighlighter]]
 == The Original Highlighter
 
 The Original Highlighter supports these following additional parameters to the ones listed earlier:
@@ -314,7 +308,6 @@ If this may happen and you know you don't need them for highlighting (i.e. your
 
 The Original Highlighter has a plugin architecture that enables new functionality to be registered in `solrconfig.xml`. The "```techproducts```" configset shows most of these settings explicitly. You can use it as a guide to provide your own components to include a `SolrFormatter`, `SolrEncoder`, and `SolrFragmenter`.
 
-[[Highlighting-TheFastVectorHighlighter]]
 == The FastVector Highlighter
 
 The FastVector Highlighter (FVH) can be used in conjunction with the Original Highlighter if not all fields should be highlighted with the FVH. In such a mode, set `hl.method=original` and `f.yourTermVecField.hl.method=fastVector` for all fields that should use the FVH. One annoyance to keep in mind is that the Original Highlighter uses `hl.simple.pre` whereas the FVH (and other highlighters) use `hl.tag.pre`.
@@ -349,15 +342,12 @@ The maximum number of phrases to analyze when searching for the highest-scoring
 `hl.multiValuedSeparatorChar`::
 Text to use to separate one value from the next for a multi-valued field. The default is " " (a space).
 
-
-[[Highlighting-UsingBoundaryScannerswiththeFastVectorHighlighter]]
 === Using Boundary Scanners with the FastVector Highlighter
 
 The FastVector Highlighter will occasionally truncate highlighted words. To prevent this, implement a boundary scanner in `solrconfig.xml`, then use the `hl.boundaryScanner` parameter to specify the boundary scanner for highlighting.
 
 Solr supports two boundary scanners: `breakIterator` and `simple`.
 
-[[Highlighting-ThebreakIteratorBoundaryScanner]]
 ==== The breakIterator Boundary Scanner
 
 The `breakIterator` boundary scanner offers excellent performance right out of the box by taking locale and boundary type into account. In most cases you will want to use the `breakIterator` boundary scanner. To implement the `breakIterator` boundary scanner, add this code to the `highlighting` section of your `solrconfig.xml` file, adjusting the type, language, and country values as appropriate to your application:
@@ -375,7 +365,6 @@ The `breakIterator` boundary scanner offers excellent performance right out of t
 
 Possible values for the `hl.bs.type` parameter are WORD, LINE, SENTENCE, and CHARACTER.
 
-[[Highlighting-ThesimpleBoundaryScanner]]
 ==== The simple Boundary Scanner
 
 The `simple` boundary scanner scans term boundaries for a specified maximum character value (`hl.bs.maxScan`) and for common delimiters such as punctuation marks (`hl.bs.chars`), and may be useful for some custom use cases. To implement the `simple` boundary scanner, add this code to the `highlighting` section of your `solrconfig.xml` file, adjusting the values as appropriate to your application:

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/74ab1616/solr/solr-ref-guide/src/how-solrcloud-works.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/how-solrcloud-works.adoc b/solr/solr-ref-guide/src/how-solrcloud-works.adoc
index 519a888..5e364ce 100644
--- a/solr/solr-ref-guide/src/how-solrcloud-works.adoc
+++ b/solr/solr-ref-guide/src/how-solrcloud-works.adoc
@@ -27,13 +27,11 @@ The following sections provide general information about how various SolrC
 
 If you are already familiar with SolrCloud concepts and basic functionality, you can skip to the section covering <<solrcloud-configuration-and-parameters.adoc#solrcloud-configuration-and-parameters,SolrCloud Configuration and Parameters>>.
 
-[[HowSolrCloudWorks-KeySolrCloudConcepts]]
 == Key SolrCloud Concepts
 
 A SolrCloud cluster consists of some "logical" concepts layered on top of some "physical" concepts.
 
-[[HowSolrCloudWorks-Logical]]
-=== Logical
+=== Logical Concepts
 
 * A Cluster can host multiple Collections of Solr Documents.
 * A collection can be partitioned into multiple Shards, which contain a subset of the Documents in the Collection.
@@ -41,8 +39,7 @@ A SolrCloud cluster consists of some "logical" concepts layered on top of some "
 ** The theoretical limit to the number of Documents that a Collection can reasonably contain.
 ** The amount of parallelization that is possible for an individual search request.
 
-[[HowSolrCloudWorks-Physical]]
-=== Physical
+=== Physical Concepts
 
 * A Cluster is made up of one or more Solr Nodes, which are running instances of the Solr server process.
 * Each Node can host multiple Cores.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/74ab1616/solr/solr-ref-guide/src/indexing-and-basic-data-operations.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/indexing-and-basic-data-operations.adoc b/solr/solr-ref-guide/src/indexing-and-basic-data-operations.adoc
index 932ac8e..ece3989 100644
--- a/solr/solr-ref-guide/src/indexing-and-basic-data-operations.adoc
+++ b/solr/solr-ref-guide/src/indexing-and-basic-data-operations.adoc
@@ -43,7 +43,6 @@ This section describes how Solr adds data to its index. It covers the following
 
 * *<<uima-integration.adoc#uima-integration,UIMA Integration>>*: Information about integrating Solr with Apache's Unstructured Information Management Architecture (UIMA). UIMA lets you define custom pipelines of Analysis Engines that incrementally add metadata to your documents as annotations.
 
-[[IndexingandBasicDataOperations-IndexingUsingClientAPIs]]
 == Indexing Using Client APIs
 
 Using client APIs, such as <<using-solrj.adoc#using-solrj,SolrJ>>, from your applications is an important option for updating Solr indexes. See the <<client-apis.adoc#client-apis,Client APIs>> section for more information.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/74ab1616/solr/solr-ref-guide/src/initparams-in-solrconfig.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/initparams-in-solrconfig.adoc b/solr/solr-ref-guide/src/initparams-in-solrconfig.adoc
index 1120e43..180f424 100644
--- a/solr/solr-ref-guide/src/initparams-in-solrconfig.adoc
+++ b/solr/solr-ref-guide/src/initparams-in-solrconfig.adoc
@@ -55,8 +55,7 @@ For example, if an `<initParams>` section has the name "myParams", you can call
 [source,xml]
 <requestHandler name="/dump1" class="DumpRequestHandler" initParams="myParams"/>
 
-[[InitParamsinSolrConfig-Wildcards]]
-== Wildcards
+== Wildcards in initParams
 
 An `<initParams>` section can support wildcards to define nested paths that should use the parameters defined. A single asterisk (\*) denotes that a nested path one level deeper should use the parameters. Double asterisks (**) denote that all nested paths, no matter how deep, should use the parameters.
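For example (a sketch following the pattern described above, not taken from the elided page content — the `df`/`_text_` default is an assumption), a section whose parameters should apply to every handler nested under `/query`:

[source,xml]
----
<initParams path="/query/**">
  <lst name="defaults">
    <str name="df">_text_</str>
  </lst>
</initParams>
----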
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/74ab1616/solr/solr-ref-guide/src/introduction-to-solr-indexing.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/introduction-to-solr-indexing.adoc b/solr/solr-ref-guide/src/introduction-to-solr-indexing.adoc
index 888d8db..83a9378 100644
--- a/solr/solr-ref-guide/src/introduction-to-solr-indexing.adoc
+++ b/solr/solr-ref-guide/src/introduction-to-solr-indexing.adoc
@@ -38,12 +38,10 @@ If the field name is defined in the Schema that is associated with the index, th
 
 For more information on indexing in Solr, see the https://wiki.apache.org/solr/FrontPage[Solr Wiki].
 
-[[IntroductiontoSolrIndexing-TheSolrExampleDirectory]]
 == The Solr Example Directory
 
 When starting Solr with the "-e" option, the `example/` directory will be used as the base directory for the example Solr instances that are created. This directory also includes an `example/exampledocs/` subdirectory containing sample documents in a variety of formats that you can use to experiment with indexing into the various examples.
 
-[[IntroductiontoSolrIndexing-ThecurlUtilityforTransferringFiles]]
 == The curl Utility for Transferring Files
 
 Many of the instructions and examples in this section make use of the `curl` utility for transferring content through a URL. `curl` posts and retrieves data over HTTP, FTP, and many other protocols. Most Linux distributions include a copy of `curl`. You'll find curl downloads for Linux, Windows, and many other operating systems at http://curl.haxx.se/download.html. Documentation for `curl` is available here: http://curl.haxx.se/docs/manpage.html.
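As a reminder of the general form used throughout the section, a typical `curl` indexing call looks like this (the collection name is a placeholder; `money.xml` is one of the sample files shipped in `example/exampledocs/`):

[source,bash]
----
# Placeholder collection name; posts an XML update and commits it.
curl "http://localhost:8983/solr/my_collection/update?commit=true" \
  -H "Content-Type: text/xml" \
  --data-binary @example/exampledocs/money.xml
----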

