lucene-commits mailing list archives

From is...@apache.org
Subject [09/28] lucene-solr:jira/solr-6630: Merging master
Date Sat, 29 Jul 2017 21:59:46 GMT
http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8d00e53b/solr/solr-ref-guide/src/language-analysis.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/language-analysis.adoc b/solr/solr-ref-guide/src/language-analysis.adoc
index c2b02ff..a6d04da 100644
--- a/solr/solr-ref-guide/src/language-analysis.adoc
+++ b/solr/solr-ref-guide/src/language-analysis.adoc
@@ -26,7 +26,6 @@ In other languages the tokenization rules are often not so simple. Some European
 
 For information about language detection at index time, see <<detecting-languages-during-indexing.adoc#detecting-languages-during-indexing,Detecting Languages During Indexing>>.
 
-[[LanguageAnalysis-KeywordMarkerFilterFactory]]
 == KeywordMarkerFilterFactory
 
 Protects words from being modified by stemmers. A customized protected word list may be specified with the "protected" attribute in the schema. Any words in the protected word list will not be modified by any stemmer in Solr.
@@ -44,7 +43,6 @@ A sample Solr `protwords.txt` with comments can be found in the `sample_techprod
 </fieldtype>
 ----
 
-[[LanguageAnalysis-KeywordRepeatFilterFactory]]
 == KeywordRepeatFilterFactory
 
 Emits each token twice, once with the `KEYWORD` attribute and once without.
@@ -69,8 +67,6 @@ A sample fieldType configuration could look like this:
 
 IMPORTANT: When a token is added twice, it will also score twice (double), so you may have to re-tune your ranking rules.
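
A minimal field type sketch combining this filter with a stemmer (the field type name is illustrative; `RemoveDuplicatesTokenFilterFactory` drops the duplicate whenever the stemmer leaves a token unchanged):

[source,xml]
----
<fieldtype name="text_stem_keep_original" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <!-- emit each token once with the KEYWORD attribute and once without -->
    <filter class="solr.KeywordRepeatFilterFactory"/>
    <filter class="solr.PorterStemFilterFactory"/>
    <!-- collapse pairs where stemming produced no change -->
    <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
  </analyzer>
</fieldtype>
----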
 
-
-[[LanguageAnalysis-StemmerOverrideFilterFactory]]
 == StemmerOverrideFilterFactory
 
 Overrides stemming algorithms by applying a custom mapping, then protecting these terms from being modified by stemmers.
@@ -90,7 +86,6 @@ A sample http://svn.apache.org/repos/asf/lucene/dev/trunk/solr/core/src/test-fil
 </fieldtype>
 ----
 
-[[LanguageAnalysis-DictionaryCompoundWordTokenFilter]]
 == Dictionary Compound Word Token Filter
 
 This filter splits, or _decompounds_, compound words into individual words using a dictionary of the component words. Each input token is passed through unchanged. If it can also be decompounded into subwords, each subword is also added to the stream at the same logical position.
@@ -129,7 +124,6 @@ Assume that `germanwords.txt` contains at least the following words: `dumm kopf
 
 *Out:* "Donaudampfschiff"(1), "Donau"(1), "dampf"(1), "schiff"(1), "dummkopf"(2), "dumm"(2), "kopf"(2)
 
-[[LanguageAnalysis-UnicodeCollation]]
 == Unicode Collation
 
 Unicode Collation is a language-sensitive method of sorting text that can also be used for advanced search purposes.
@@ -175,7 +169,6 @@ Expert options:
 
 `variableTop`:: Single character or contraction. Controls what is variable for `alternate`.
 
-[[LanguageAnalysis-SortingTextforaSpecificLanguage]]
 === Sorting Text for a Specific Language
 
 In this example, text is sorted according to the default German rules provided by ICU4J.
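
A field type sketch for such a German sort field (the name is illustrative):

[source,xml]
----
<fieldType name="collatedGERMAN" class="solr.ICUCollationField"
           locale="de"
           strength="primary" />
----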
@@ -223,7 +216,6 @@ An example using the "city_sort" field to sort:
 q=*:*&fl=city&sort=city_sort+asc
 ----
 
-[[LanguageAnalysis-SortingTextforMultipleLanguages]]
 === Sorting Text for Multiple Languages
 
 There are two approaches to supporting multiple languages: if there is a small list of languages you wish to support, consider defining collated fields for each language and using `copyField`. However, adding a large number of sort fields can increase disk and indexing costs. An alternative approach is to use the Unicode `default` collator.
@@ -237,7 +229,6 @@ The Unicode `default` or `ROOT` locale has rules that are designed to work well
            strength="primary" />
 ----
 
-[[LanguageAnalysis-SortingTextwithCustomRules]]
 === Sorting Text with Custom Rules
 
 You can define your own set of sorting rules. It's easiest to take existing rules that are close to what you want and customize them.
@@ -277,7 +268,6 @@ This rule set can now be used for custom collation in Solr:
            strength="primary" />
 ----
 
-[[LanguageAnalysis-JDKCollation]]
 === JDK Collation
 
 As mentioned above, ICU Unicode Collation is better in several ways than JDK Collation, but if you cannot use ICU4J for some reason, you can use `solr.CollationField`.
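
A comparable German sort field using JDK collation might be declared like this (a sketch; the locale is given via `language`/`country` rather than ICU's `locale` attribute):

[source,xml]
----
<fieldType name="collatedGERMAN" class="solr.CollationField"
           language="de"
           country="DE"
           strength="primary" />
----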
@@ -321,7 +311,6 @@ Using a Tailored ruleset:
 
 == ASCII & Decimal Folding Filters
 
-[[LanguageAnalysis-AsciiFolding]]
 === ASCII Folding
 
 This filter converts alphabetic, numeric, and symbolic Unicode characters which are not in the first 127 ASCII characters (the "Basic Latin" Unicode block) into their ASCII equivalents, if one exists. Only those characters with reasonable ASCII alternatives are converted.
@@ -348,7 +337,6 @@ This can increase recall by causing more matches. On the other hand, it can redu
 
 *Out:* "Bjorn", "Angstrom"
 
-[[LanguageAnalysis-DecimalDigitFolding]]
 === Decimal Digit Folding
 
 This filter converts any character in the Unicode "Decimal Number" general category (`Nd`) into its equivalent Basic Latin digit (0-9).
@@ -369,7 +357,6 @@ This can increase recall by causing more matches. On the other hand, it can redu
 </analyzer>
 ----
 
-[[LanguageAnalysis-Language-SpecificFactories]]
 == Language-Specific Factories
 
 These factories are each designed to work with specific languages. The languages covered here are:
@@ -380,8 +367,8 @@ These factories are each designed to work with specific languages. The languages
 * <<Catalan>>
 * <<Traditional Chinese>>
 * <<Simplified Chinese>>
-* <<LanguageAnalysis-Czech,Czech>>
-* <<LanguageAnalysis-Danish,Danish>>
+* <<Czech>>
+* <<Danish>>
 
 * <<Dutch>>
 * <<Finnish>>
@@ -389,7 +376,7 @@ These factories are each designed to work with specific languages. The languages
 * <<Galician>>
 * <<German>>
 * <<Greek>>
-* <<LanguageAnalysis-Hebrew_Lao_Myanmar_Khmer,Hebrew, Lao, Myanmar, Khmer>>
+* <<hebrew-lao-myanmar-khmer,Hebrew, Lao, Myanmar, Khmer>>
 * <<Hindi>>
 * <<Indonesian>>
 * <<Italian>>
@@ -410,7 +397,6 @@ These factories are each designed to work with specific languages. The languages
 * <<Turkish>>
 * <<Ukrainian>>
 
-[[LanguageAnalysis-Arabic]]
 === Arabic
 
 Solr provides support for the http://www.mtholyoke.edu/~lballest/Pubs/arab_stem05.pdf[Light-10] (PDF) stemming algorithm, and Lucene includes an example stopword list.
@@ -432,7 +418,6 @@ This algorithm defines both character normalization and stemming, so these are s
 </analyzer>
 ----
 
-[[LanguageAnalysis-BrazilianPortuguese]]
 === Brazilian Portuguese
 
 This is a Java filter written specifically for stemming the Brazilian dialect of the Portuguese language. It uses the Lucene class `org.apache.lucene.analysis.br.BrazilianStemmer`. Although that stemmer can be configured to use a list of protected words (which should not be stemmed), this factory does not accept any arguments to specify such a list.
@@ -457,7 +442,6 @@ This is a Java filter written specifically for stemming the Brazilian dialect of
 
 *Out:* "pra", "pra"
 
-[[LanguageAnalysis-Bulgarian]]
 === Bulgarian
 
 Solr includes a light stemmer for Bulgarian, following http://members.unine.ch/jacques.savoy/Papers/BUIR.pdf[this algorithm] (PDF), and Lucene includes an example stopword list.
@@ -477,7 +461,6 @@ Solr includes a light stemmer for Bulgarian, following http://members.unine.ch/j
 </analyzer>
 ----
 
-[[LanguageAnalysis-Catalan]]
 === Catalan
 
 Solr can stem Catalan using the Snowball Porter Stemmer with an argument of `language="Catalan"`. Solr includes a set of contractions for Catalan, which can be stripped using `solr.ElisionFilterFactory`.
@@ -507,14 +490,13 @@ Solr can stem Catalan using the Snowball Porter Stemmer with an argument of `lan
 
 *Out:* "llengu"(1), "llengu"(2)
 
-[[LanguageAnalysis-TraditionalChinese]]
 === Traditional Chinese
 
-The default configuration of the <<tokenizers.adoc#Tokenizers-ICUTokenizer,ICU Tokenizer>> is suitable for Traditional Chinese text.  It follows the Word Break rules from the Unicode Text Segmentation algorithm for non-Chinese text, and uses a dictionary to segment Chinese words.  To use this tokenizer, you must add additional .jars to Solr's classpath (as described in the section <<lib-directives-in-solrconfig.adoc#lib-directives-in-solrconfig,Lib Directives in SolrConfig>>). See the `solr/contrib/analysis-extras/README.txt` for information on which jars you need to add to your `SOLR_HOME/lib`.
+The default configuration of the <<tokenizers.adoc#icu-tokenizer,ICU Tokenizer>> is suitable for Traditional Chinese text.  It follows the Word Break rules from the Unicode Text Segmentation algorithm for non-Chinese text, and uses a dictionary to segment Chinese words.  To use this tokenizer, you must add additional .jars to Solr's classpath (as described in the section <<lib-directives-in-solrconfig.adoc#lib-directives-in-solrconfig,Lib Directives in SolrConfig>>). See the `solr/contrib/analysis-extras/README.txt` for information on which jars you need to add to your `SOLR_HOME/lib`.
 
-<<tokenizers.adoc#Tokenizers-StandardTokenizer,Standard Tokenizer>> can also be used to tokenize Traditional Chinese text.  Following the Word Break rules from the Unicode Text Segmentation algorithm, it produces one token per Chinese character.  When combined with <<LanguageAnalysis-CJKBigramFilter,CJK Bigram Filter>>, overlapping bigrams of Chinese characters are formed.
+<<tokenizers.adoc#standard-tokenizer,Standard Tokenizer>> can also be used to tokenize Traditional Chinese text.  Following the Word Break rules from the Unicode Text Segmentation algorithm, it produces one token per Chinese character.  When combined with <<CJK Bigram Filter>>, overlapping bigrams of Chinese characters are formed.
 
-<<LanguageAnalysis-CJKWidthFilter,CJK Width Filter>> folds fullwidth ASCII variants into the equivalent Basic Latin forms.
+<<CJK Width Filter>> folds fullwidth ASCII variants into the equivalent Basic Latin forms.
 
 *Examples:*
 
@@ -537,10 +519,9 @@ The default configuration of the <<tokenizers.adoc#Tokenizers-ICUTokenizer,ICU T
 </analyzer>
 ----
 
-[[LanguageAnalysis-CJKBigramFilter]]
 === CJK Bigram Filter
 
-Forms bigrams (overlapping 2-character sequences) of CJK characters that are generated from <<tokenizers.adoc#Tokenizers-StandardTokenizer,Standard Tokenizer>> or <<tokenizers.adoc#Tokenizers-ICUTokenizer,ICU Tokenizer>>.
+Forms bigrams (overlapping 2-character sequences) of CJK characters that are generated from <<tokenizers.adoc#standard-tokenizer,Standard Tokenizer>> or <<tokenizers.adoc#icu-tokenizer,ICU Tokenizer>>.
 
 By default, all CJK characters produce bigrams, but finer grained control is available by specifying orthographic type arguments `han`, `hiragana`, `katakana`, and `hangul`.  When set to `false`, characters of the corresponding type will be passed through as unigrams, and will not be included in any bigrams.
 
@@ -560,18 +541,17 @@ In all cases, all non-CJK input is passed through unmodified.
 
 `outputUnigrams`:: (true/false) If true, in addition to forming bigrams, all characters are also passed through as unigrams. Default is false.
 
-See the example under <<LanguageAnalysis-TraditionalChinese,Traditional Chinese>>.
+See the example under <<Traditional Chinese>>.
 
-[[LanguageAnalysis-SimplifiedChinese]]
 === Simplified Chinese
 
-For Simplified Chinese, Solr provides support for Chinese sentence and word segmentation with the <<LanguageAnalysis-HMMChineseTokenizer,HMM Chinese Tokenizer>>. This component includes a large dictionary and segments Chinese text into words with the Hidden Markov Model. To use this tokenizer, you must add additional .jars to Solr's classpath (as described in the section <<lib-directives-in-solrconfig.adoc#lib-directives-in-solrconfig,Lib Directives in SolrConfig>>). See the `solr/contrib/analysis-extras/README.txt` for information on which jars you need to add to your `SOLR_HOME/lib`.
+For Simplified Chinese, Solr provides support for Chinese sentence and word segmentation with the <<HMM Chinese Tokenizer>>. This component includes a large dictionary and segments Chinese text into words with the Hidden Markov Model. To use this tokenizer, you must add additional .jars to Solr's classpath (as described in the section <<lib-directives-in-solrconfig.adoc#lib-directives-in-solrconfig,Lib Directives in SolrConfig>>). See the `solr/contrib/analysis-extras/README.txt` for information on which jars you need to add to your `SOLR_HOME/lib`.
 
-The default configuration of the <<tokenizers.adoc#Tokenizers-ICUTokenizer,ICU Tokenizer>> is also suitable for Simplified Chinese text.  It follows the Word Break rules from the Unicode Text Segmentation algorithm for non-Chinese text, and uses a dictionary to segment Chinese words.  To use this tokenizer, you must add additional .jars to Solr's classpath (as described in the section <<lib-directives-in-solrconfig.adoc#lib-directives-in-solrconfig,Lib Directives in SolrConfig>>). See the `solr/contrib/analysis-extras/README.txt` for information on which jars you need to add to your `SOLR_HOME/lib`.
+The default configuration of the <<tokenizers.adoc#icu-tokenizer,ICU Tokenizer>> is also suitable for Simplified Chinese text.  It follows the Word Break rules from the Unicode Text Segmentation algorithm for non-Chinese text, and uses a dictionary to segment Chinese words.  To use this tokenizer, you must add additional .jars to Solr's classpath (as described in the section <<lib-directives-in-solrconfig.adoc#lib-directives-in-solrconfig,Lib Directives in SolrConfig>>). See the `solr/contrib/analysis-extras/README.txt` for information on which jars you need to add to your `SOLR_HOME/lib`.
 
 Also useful for Chinese analysis:
 
-<<LanguageAnalysis-CJKWidthFilter,CJK Width Filter>> folds fullwidth ASCII variants into the equivalent Basic Latin forms, and folds halfwidth Katakana variants into their equivalent fullwidth forms.
+<<CJK Width Filter>> folds fullwidth ASCII variants into the equivalent Basic Latin forms, and folds halfwidth Katakana variants into their equivalent fullwidth forms.
 
 *Examples:*
 
@@ -598,7 +578,6 @@ Also useful for Chinese analysis:
 </analyzer>
 ----
 
-[[LanguageAnalysis-HMMChineseTokenizer]]
 === HMM Chinese Tokenizer
 
 For Simplified Chinese, Solr provides support for Chinese sentence and word segmentation with the `solr.HMMChineseTokenizerFactory` in the `analysis-extras` contrib module. This component includes a large dictionary and segments Chinese text into words with the Hidden Markov Model. To use this tokenizer, see `solr/contrib/analysis-extras/README.txt` for instructions on which jars you need to add to your `solr_home/lib`.
@@ -613,9 +592,8 @@ To use the default setup with fallback to English Porter stemmer for English wor
 
 `<analyzer class="org.apache.lucene.analysis.cn.smart.SmartChineseAnalyzer"/>`
 
-Or to configure your own analysis setup, use the `solr.HMMChineseTokenizerFactory` along with your custom filter setup.  See an example of this in the <<LanguageAnalysis-SimplifiedChinese,Simplified Chinese>> section.
+Or to configure your own analysis setup, use the `solr.HMMChineseTokenizerFactory` along with your custom filter setup.  See an example of this in the <<Simplified Chinese>> section.
 
-[[LanguageAnalysis-Czech]]
 === Czech
 
 Solr includes a light stemmer for Czech, following https://dl.acm.org/citation.cfm?id=1598600[this algorithm], and Lucene includes an example stopword list.
@@ -641,12 +619,11 @@ Solr includes a light stemmer for Czech, following https://dl.acm.org/citation.c
 
 *Out:* "preziden", "preziden", "preziden"
 
-[[LanguageAnalysis-Danish]]
 === Danish
 
 Solr can stem Danish using the Snowball Porter Stemmer with an argument of `language="Danish"`.
 
-Also relevant are the <<LanguageAnalysis-Scandinavian,Scandinavian normalization filters>>.
+Also relevant are the <<Scandinavian,Scandinavian normalization filters>>.
 
 *Factory class:* `solr.SnowballPorterFilterFactory`
 
@@ -671,8 +648,6 @@ Also relevant are the <<LanguageAnalysis-Scandinavian,Scandinavian normalization
 
 *Out:* "undersøg"(1), "undersøg"(2)
 
-
-[[LanguageAnalysis-Dutch]]
 === Dutch
 
 Solr can stem Dutch using the Snowball Porter Stemmer with an argument of `language="Dutch"`.
@@ -700,7 +675,6 @@ Solr can stem Dutch using the Snowball Porter Stemmer with an argument of `langu
 
 *Out:* "kanal", "kanal"
 
-[[LanguageAnalysis-Finnish]]
 === Finnish
 
 Solr includes support for stemming Finnish, and Lucene includes an example stopword list.
@@ -726,10 +700,8 @@ Solr includes support for stemming Finnish, and Lucene includes an example stopw
 *Out:* "kala", "kala"
 
 
-[[LanguageAnalysis-French]]
 === French
 
-[[LanguageAnalysis-ElisionFilter]]
 ==== Elision Filter
 
 Removes article elisions from a token stream. This filter can be useful for languages such as French, Catalan, Italian, and Irish.
@@ -760,7 +732,6 @@ Removes article elisions from a token stream. This filter can be useful for lang
 
 *Out:* "histoire", "art"
 
-[[LanguageAnalysis-FrenchLightStemFilter]]
 ==== French Light Stem Filter
 
 Solr includes three stemmers for French: one in the `solr.SnowballPorterFilterFactory`, a lighter stemmer called `solr.FrenchLightStemFilterFactory`, and an even less aggressive stemmer called `solr.FrenchMinimalStemFilterFactory`. Lucene includes an example stopword list.
@@ -800,7 +771,6 @@ Solr includes three stemmers for French: one in the `solr.SnowballPorterFilterFa
 *Out:* "le", "chat", "le", "chat"
 
 
-[[LanguageAnalysis-Galician]]
 === Galician
 
 Solr includes a stemmer for Galician following http://bvg.udc.es/recursos_lingua/stemming.jsp[this algorithm], and Lucene includes an example stopword list.
@@ -826,8 +796,6 @@ Solr includes a stemmer for Galician following http://bvg.udc.es/recursos_lingua
 
 *Out:* "feliz", "luz"
 
-
-[[LanguageAnalysis-German]]
 === German
 
 Solr includes four stemmers for German: one in the `solr.SnowballPorterFilterFactory language="German"`, a stemmer called `solr.GermanStemFilterFactory`, a lighter stemmer called `solr.GermanLightStemFilterFactory`, and an even less aggressive stemmer called `solr.GermanMinimalStemFilterFactory`. Lucene includes an example stopword list.
@@ -868,8 +836,6 @@ Solr includes four stemmers for German: one in the `solr.SnowballPorterFilterFac
 
 *Out:* "haus", "haus"
 
-
-[[LanguageAnalysis-Greek]]
 === Greek
 
 This filter converts uppercase letters in the Greek character set to the equivalent lowercase character.
@@ -893,7 +859,6 @@ Use of custom charsets is no longer supported as of Solr 3.1. If you need to ind
 </analyzer>
 ----
 
-[[LanguageAnalysis-Hindi]]
 === Hindi
 
 Solr includes support for stemming Hindi following http://computing.open.ac.uk/Sites/EACLSouthAsia/Papers/p6-Ramanathan.pdf[this algorithm] (PDF), support for common spelling differences through the `solr.HindiNormalizationFilterFactory`, support for encoding differences through the `solr.IndicNormalizationFilterFactory` following http://ldc.upenn.edu/myl/IndianScriptsUnicode.html[this algorithm], and Lucene includes an example stopword list.
@@ -914,8 +879,6 @@ Solr includes support for stemming Hindi following http://computing.open.ac.uk/S
 </analyzer>
 ----
 
-
-[[LanguageAnalysis-Indonesian]]
 === Indonesian
 
 Solr includes support for stemming Indonesian (Bahasa Indonesia) following http://www.illc.uva.nl/Publications/ResearchReports/MoL-2003-02.text.pdf[this algorithm] (PDF), and Lucene includes an example stopword list.
@@ -941,7 +904,6 @@ Solr includes support for stemming Indonesian (Bahasa Indonesia) following http:
 
 *Out:* "bagai", "bagai"
 
-[[LanguageAnalysis-Italian]]
 === Italian
 
 Solr includes two stemmers for Italian: one in the `solr.SnowballPorterFilterFactory language="Italian"`, and a lighter stemmer called `solr.ItalianLightStemFilterFactory`. Lucene includes an example stopword list.
@@ -969,7 +931,6 @@ Solr includes two stemmers for Italian: one in the `solr.SnowballPorterFilterFac
 
 *Out:* "propag", "propag", "propag"
 
-[[LanguageAnalysis-Irish]]
 === Irish
 
 Solr can stem Irish using the Snowball Porter Stemmer with an argument of `language="Irish"`. Solr includes `solr.IrishLowerCaseFilterFactory`, which can handle Irish-specific constructs. Solr also includes a set of contractions for Irish which can be stripped using `solr.ElisionFilterFactory`.
@@ -999,22 +960,20 @@ Solr can stem Irish using the Snowball Porter Stemmer with an argument of `langu
 
 *Out:* "siopadóir", "síceapaite", "fearr", "athair"
 
-[[LanguageAnalysis-Japanese]]
 === Japanese
 
 Solr includes support for analyzing Japanese, via the Lucene Kuromoji morphological analyzer, which includes several analysis components - more details on each below:
 
-* <<LanguageAnalysis-JapaneseIterationMarkCharFilter,`JapaneseIterationMarkCharFilter`>> normalizes Japanese horizontal iteration marks (odoriji) to their expanded form.
-* <<LanguageAnalysis-JapaneseTokenizer,`JapaneseTokenizer`>> tokenizes Japanese using morphological analysis, and annotates each term with part-of-speech, base form (a.k.a. lemma), reading and pronunciation.
-* <<LanguageAnalysis-JapaneseBaseFormFilter,`JapaneseBaseFormFilter`>> replaces original terms with their base forms (a.k.a. lemmas).
-* <<LanguageAnalysis-JapanesePartOfSpeechStopFilter,`JapanesePartOfSpeechStopFilter`>> removes terms that have one of the configured parts-of-speech.
-* <<LanguageAnalysis-JapaneseKatakanaStemFilter,`JapaneseKatakanaStemFilter`>> normalizes common katakana spelling variations ending in a long sound character (U+30FC) by removing the long sound character.
+* <<Japanese Iteration Mark CharFilter,`JapaneseIterationMarkCharFilter`>> normalizes Japanese horizontal iteration marks (odoriji) to their expanded form.
+* <<Japanese Tokenizer,`JapaneseTokenizer`>> tokenizes Japanese using morphological analysis, and annotates each term with part-of-speech, base form (a.k.a. lemma), reading and pronunciation.
+* <<Japanese Base Form Filter,`JapaneseBaseFormFilter`>> replaces original terms with their base forms (a.k.a. lemmas).
+* <<Japanese Part Of Speech Stop Filter,`JapanesePartOfSpeechStopFilter`>> removes terms that have one of the configured parts-of-speech.
+* <<Japanese Katakana Stem Filter,`JapaneseKatakanaStemFilter`>> normalizes common katakana spelling variations ending in a long sound character (U+30FC) by removing the long sound character.
 
 Also useful for Japanese analysis, from lucene-analyzers-common:
 
-* <<LanguageAnalysis-CJKWidthFilter,`CJKWidthFilter`>> folds fullwidth ASCII variants into the equivalent Basic Latin forms, and folds halfwidth Katakana variants into their equivalent fullwidth forms.
+* <<CJK Width Filter,`CJKWidthFilter`>> folds fullwidth ASCII variants into the equivalent Basic Latin forms, and folds halfwidth Katakana variants into their equivalent fullwidth forms.
 
-[[LanguageAnalysis-JapaneseIterationMarkCharFilter]]
 ==== Japanese Iteration Mark CharFilter
 
 Normalizes horizontal Japanese iteration marks (odoriji) to their expanded form. Vertical iteration marks are not supported.
@@ -1027,7 +986,6 @@ Normalizes horizontal Japanese iteration marks (odoriji) to their expanded form.
 
 `normalizeKana`:: set to `false` to not normalize kana iteration marks (default is `true`)
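
A sketch placing the char filter ahead of the tokenizer (`normalizeKanji`, the companion argument for kanji iteration marks, is assumed here with its default of `true`):

[source,xml]
----
<analyzer>
  <!-- expand odoriji before tokenization -->
  <charFilter class="solr.JapaneseIterationMarkCharFilterFactory"
              normalizeKanji="true" normalizeKana="true"/>
  <tokenizer class="solr.JapaneseTokenizerFactory"/>
</analyzer>
----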
 
-[[LanguageAnalysis-JapaneseTokenizer]]
 ==== Japanese Tokenizer
 
 Tokenizer for Japanese that uses morphological analysis, and annotates each term with part-of-speech, base form (a.k.a. lemma), reading and pronunciation.
@@ -1052,7 +1010,6 @@ For some applications it might be good to use `search` mode for indexing and `no
 
 `discardPunctuation`:: set to `false` to keep punctuation, `true` to discard (the default)
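
A tokenizer sketch using `search` mode:

[source,xml]
----
<analyzer>
  <!-- search mode also emits smaller segments of compound nouns -->
  <tokenizer class="solr.JapaneseTokenizerFactory" mode="search"/>
</analyzer>
----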
 
-[[LanguageAnalysis-JapaneseBaseFormFilter]]
 ==== Japanese Base Form Filter
 
 Replaces original terms' text with the corresponding base form (lemma). (`JapaneseTokenizer` annotates each term with its base form.)
@@ -1061,7 +1018,6 @@ Replaces original terms' text with the corresponding base form (lemma). (`Japane
 
 (no arguments)
 
-[[LanguageAnalysis-JapanesePartOfSpeechStopFilter]]
 ==== Japanese Part Of Speech Stop Filter
 
 Removes terms with one of the configured parts-of-speech. `JapaneseTokenizer` annotates terms with parts-of-speech.
@@ -1074,12 +1030,11 @@ Removes terms with one of the configured parts-of-speech. `JapaneseTokenizer` an
 
 `enablePositionIncrements`:: if `luceneMatchVersion` is `4.3` or earlier and `enablePositionIncrements="false"`, no position holes will be left by this filter when it removes tokens. *This argument is invalid if `luceneMatchVersion` is `5.0` or later.*
 
-[[LanguageAnalysis-JapaneseKatakanaStemFilter]]
 ==== Japanese Katakana Stem Filter
 
 Normalizes common katakana spelling variations ending in a long sound character (U+30FC) by removing the long sound character.
 
-<<LanguageAnalysis-CJKWidthFilter,`solr.CJKWidthFilterFactory`>> should be specified prior to this filter to normalize half-width katakana to full-width.
+<<CJK Width Filter,`solr.CJKWidthFilterFactory`>> should be specified prior to this filter to normalize half-width katakana to full-width.
 
 *Factory class:* `JapaneseKatakanaStemFilterFactory`
 
@@ -1087,7 +1042,6 @@ Normalizes common katakana spelling variations ending in a long sound character
 
 `minimumLength`:: terms below this length will not be stemmed. Default is 4, value must be 2 or more.
 
-[[LanguageAnalysis-CJKWidthFilter]]
 ==== CJK Width Filter
 
 Folds fullwidth ASCII variants into the equivalent Basic Latin forms, and folds halfwidth Katakana variants into their equivalent fullwidth forms.
@@ -1115,14 +1069,13 @@ Example:
 </fieldType>
 ----
 
-[[LanguageAnalysis-Hebrew_Lao_Myanmar_Khmer]]
+[[hebrew-lao-myanmar-khmer]]
 === Hebrew, Lao, Myanmar, Khmer
 
 Lucene provides support, in addition to UAX#29 word break rules, for Hebrew's use of the double and single quote characters, and for segmenting Lao, Myanmar, and Khmer into syllables with the `solr.ICUTokenizerFactory` in the `analysis-extras` contrib module. To use this tokenizer, see `solr/contrib/analysis-extras/README.txt` for instructions on which jars you need to add to your `solr_home/lib`.
 
-See <<tokenizers.adoc#Tokenizers-ICUTokenizer,the ICUTokenizer>> for more information.
+See <<tokenizers.adoc#icu-tokenizer,the ICUTokenizer>> for more information.
 
-[[LanguageAnalysis-Latvian]]
 === Latvian
 
 Solr includes support for stemming Latvian, and Lucene includes an example stopword list.
@@ -1150,16 +1103,14 @@ Solr includes support for stemming Latvian, and Lucene includes an example stopw
 
 *Out:* "tirg", "tirg"
 
-[[LanguageAnalysis-Norwegian]]
 === Norwegian
 
 Solr includes two classes for stemming Norwegian, `NorwegianLightStemFilterFactory` and `NorwegianMinimalStemFilterFactory`. Lucene includes an example stopword list.
 
 Another option is to use the Snowball Porter Stemmer with an argument of `language="Norwegian"`.
 
-Also relevant are the <<LanguageAnalysis-Scandinavian,Scandinavian normalization filters>>.
+Also relevant are the <<Scandinavian,Scandinavian normalization filters>>.
 
-[[LanguageAnalysis-NorwegianLightStemmer]]
 ==== Norwegian Light Stemmer
 
 The `NorwegianLightStemFilterFactory` requires a "two-pass" sort for the -dom and -het endings. This means that in the first pass the word "kristendom" is stemmed to "kristen", and then all the general rules apply so it will be further stemmed to "krist". The effect of this is that "kristen," "kristendom," "kristendommen," and "kristendommens" will all be stemmed to "krist."
@@ -1209,7 +1160,6 @@ The second pass is to pick up -dom and -het endings. Consider this example:
 
 *Out:* "forelske"
 
-[[LanguageAnalysis-NorwegianMinimalStemmer]]
 ==== Norwegian Minimal Stemmer
 
 The `NorwegianMinimalStemFilterFactory` stems plural forms of Norwegian nouns only.
@@ -1244,10 +1194,8 @@ The `NorwegianMinimalStemFilterFactory` stems plural forms of Norwegian nouns on
 
 *Out:* "bil"
 
-[[LanguageAnalysis-Persian]]
 === Persian
 
-[[LanguageAnalysis-PersianFilterFactories]]
 ==== Persian Filter Factories
 
 Solr includes support for normalizing Persian, and Lucene includes an example stopword list.
@@ -1267,7 +1215,6 @@ Solr includes support for normalizing Persian, and Lucene includes an example st
 </analyzer>
 ----
 
-[[LanguageAnalysis-Polish]]
 === Polish
 
 Solr provides support for Polish stemming with the `solr.StempelPolishStemFilterFactory`, and `solr.MorfologikFilterFactory` for lemmatization, in the `contrib/analysis-extras` module. The `solr.StempelPolishStemFilterFactory` component includes an algorithmic stemmer with tables for Polish. To use either of these filters, see `solr/contrib/analysis-extras/README.txt` for instructions on which jars you need to add to your `solr_home/lib`.
@@ -1308,7 +1255,6 @@ Note the lower case filter is applied _after_ the Morfologik stemmer; this is be
 
 The Morfologik dictionary parameter value is a constant specifying which dictionary to choose. The dictionary resource must be named `path/to/_language_.dict` and have an associated `.info` metadata file. See http://morfologik.blogspot.com/[the Morfologik project] for details. If the dictionary attribute is not provided, the Polish dictionary is loaded and used by default.
 
-[[LanguageAnalysis-Portuguese]]
 === Portuguese
 
 Solr includes four stemmers for Portuguese: one in the `solr.SnowballPorterFilterFactory`, an alternative stemmer called `solr.PortugueseStemFilterFactory`, a lighter stemmer called `solr.PortugueseLightStemFilterFactory`, and an even less aggressive stemmer called `solr.PortugueseMinimalStemFilterFactory`. Lucene includes an example stopword list.
@@ -1352,8 +1298,6 @@ Solr includes four stemmers for Portuguese: one in the `solr.SnowballPorterFilte
 
 *Out:* "pra", "pra"
 
-
-[[LanguageAnalysis-Romanian]]
 === Romanian
 
 Solr can stem Romanian using the Snowball Porter Stemmer with an argument of `language="Romanian"`.
@@ -1375,11 +1319,8 @@ Solr can stem Romanian using the Snowball Porter Stemmer with an argument of `la
 </analyzer>
 ----
 
-
-[[LanguageAnalysis-Russian]]
 === Russian
 
-[[LanguageAnalysis-RussianStemFilter]]
 ==== Russian Stem Filter
 
 Solr includes two stemmers for Russian: one in the `solr.SnowballPorterFilterFactory language="Russian"`, and a lighter stemmer called `solr.RussianLightStemFilterFactory`. Lucene includes an example stopword list.
@@ -1399,11 +1340,9 @@ Solr includes two stemmers for Russian: one in the `solr.SnowballPorterFilterFac
 </analyzer>
 ----
 
-
-[[LanguageAnalysis-Scandinavian]]
 === Scandinavian
 
-Scandinavian is a language group spanning three languages <<LanguageAnalysis-Norwegian,Norwegian>>, <<LanguageAnalysis-Swedish,Swedish>> and <<LanguageAnalysis-Danish,Danish>> which are very similar.
+Scandinavian is a language group spanning three very similar languages: <<Norwegian>>, <<Swedish>> and <<Danish>>.
 
 Swedish å, ä, ö are in fact the same letters as Norwegian and Danish å, æ, ø and thus interchangeable when used between these languages. They are however folded differently when people type them on a keyboard lacking these characters.
 
@@ -1413,7 +1352,6 @@ There are two filters for helping with normalization between Scandinavian langua
 
 See also each language section for other relevant filters.
 
-[[LanguageAnalysis-ScandinavianNormalizationFilter]]
 ==== Scandinavian Normalization Filter
 
 This filter normalizes use of the interchangeable Scandinavian characters æÆäÄöÖøØ and folded variants (aa, ao, ae, oe and oo) by transforming them to åÅæÆøØ.
@@ -1441,7 +1379,6 @@ It's a semantically less destructive solution than `ScandinavianFoldingFilter`,
 
 *Out:* "blåbærsyltetøj", "blåbærsyltetøj", "blåbærsyltetøj", "blabarsyltetoj"
 
-[[LanguageAnalysis-ScandinavianFoldingFilter]]
 ==== Scandinavian Folding Filter
 
 This filter folds Scandinavian characters åÅäæÄÆ\->a and öÖøØ\->o. It also discriminates against the use of double vowels aa, ae, ao, oe and oo, leaving just the first one.
@@ -1469,10 +1406,8 @@ It's a semantically more destructive solution than `ScandinavianNormalizationFil
 
 *Out:* "blabarsyltetoj", "blabarsyltetoj", "blabarsyltetoj", "blabarsyltetoj"
 
-[[LanguageAnalysis-Serbian]]
 === Serbian
 
-[[LanguageAnalysis-SerbianNormalizationFilter]]
 ==== Serbian Normalization Filter
 
 Solr includes a filter that normalizes Serbian Cyrillic and Latin characters. Note that this filter only works with lowercased input.
@@ -1499,7 +1434,6 @@ See the Solr wiki for tips & advice on using this filter: https://wiki.apache.or
 </analyzer>
 ----
 
-[[LanguageAnalysis-Spanish]]
 === Spanish
 
 Solr includes two stemmers for Spanish: one in the `solr.SnowballPorterFilterFactory language="Spanish"`, and a lighter stemmer called `solr.SpanishLightStemFilterFactory`. Lucene includes an example stopword list.
@@ -1526,15 +1460,13 @@ Solr includes two stemmers for Spanish: one in the `solr.SnowballPorterFilterFac
 *Out:* "tor", "tor", "tor"
 
 
-[[LanguageAnalysis-Swedish]]
 === Swedish
 
-[[LanguageAnalysis-SwedishStemFilter]]
 ==== Swedish Stem Filter
 
 Solr includes two stemmers for Swedish: one in the `solr.SnowballPorterFilterFactory language="Swedish"`, and a lighter stemmer called `solr.SwedishLightStemFilterFactory`. Lucene includes an example stopword list.
 
-Also relevant are the <<LanguageAnalysis-Scandinavian,Scandinavian normalization filters>>.
+Also relevant are the <<Scandinavian,Scandinavian normalization filters>>.
 
 *Factory class:* `solr.SwedishStemFilterFactory`
 
@@ -1557,8 +1489,6 @@ Also relevant are the <<LanguageAnalysis-Scandinavian,Scandinavian normalization
 
 *Out:* "klok", "klok", "klok"
 
-
-[[LanguageAnalysis-Thai]]
 === Thai
 
 This filter converts sequences of Thai characters into individual Thai words. Unlike European languages, Thai does not use whitespace to delimit words.
@@ -1577,7 +1507,6 @@ This filter converts sequences of Thai characters into individual Thai words. Un
 </analyzer>
 ----
 
-[[LanguageAnalysis-Turkish]]
 === Turkish
 
 Solr includes support for stemming Turkish with the `solr.SnowballPorterFilterFactory`; support for case-insensitive search with the `solr.TurkishLowerCaseFilterFactory`; support for stripping apostrophes and following suffixes with `solr.ApostropheFilterFactory` (see http://www.ipcsit.com/vol57/015-ICNI2012-M021.pdf[Role of Apostrophes in Turkish Information Retrieval]); support for a form of stemming that truncates tokens at a configurable maximum length through the `solr.TruncateTokenFilterFactory` (see http://www.users.muohio.edu/canf/papers/JASIST2008offPrint.pdf[Information Retrieval on Turkish Texts]); and Lucene includes an example stopword list.
@@ -1613,10 +1542,6 @@ Solr includes support for stemming Turkish with the `solr.SnowballPorterFilterFa
 </analyzer>
 ----
 
-[[LanguageAnalysis-BacktoTop#main]]
-===
-
-[[LanguageAnalysis-Ukrainian]]
 === Ukrainian
 
 Solr provides support for Ukrainian lemmatization with the `solr.MorfologikFilterFactory`, in the `contrib/analysis-extras` module. To use this filter, see `solr/contrib/analysis-extras/README.txt` for instructions on which jars you need to add to your `solr_home/lib`.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8d00e53b/solr/solr-ref-guide/src/learning-to-rank.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/learning-to-rank.adoc b/solr/solr-ref-guide/src/learning-to-rank.adoc
index 64a461b..d2687c1 100644
--- a/solr/solr-ref-guide/src/learning-to-rank.adoc
+++ b/solr/solr-ref-guide/src/learning-to-rank.adoc
@@ -22,21 +22,17 @@ With the *Learning To Rank* (or *LTR* for short) contrib module you can configur
 
 The module also supports feature extraction inside Solr. The only thing you need to do outside Solr is train your own ranking model.
 
-[[LearningToRank-Concepts]]
-== Concepts
+== Learning to Rank Concepts
 
-[[LearningToRank-Re-Ranking]]
 === Re-Ranking
 
-Re-Ranking allows you to run a simple query for matching documents and then re-rank the top N documents using the scores from a different, complex query. This page describes the use of *LTR* complex queries, information on other rank queries included in the Solr distribution can be found on the <<query-re-ranking.adoc#query-re-ranking,Query Re-Ranking>> page.
+Re-Ranking allows you to run a simple query for matching documents and then re-rank the top N documents using the scores from a different, more complex query. This page describes the use of *LTR* complex queries, information on other rank queries included in the Solr distribution can be found on the <<query-re-ranking.adoc#query-re-ranking,Query Re-Ranking>> page.
 
-[[LearningToRank-LearningToRank]]
-=== Learning To Rank
+=== Learning To Rank Models
 
 In information retrieval systems, https://en.wikipedia.org/wiki/Learning_to_rank[Learning to Rank] is used to re-rank the top N retrieved documents using trained machine learning models. The hope is that such sophisticated models can make more nuanced ranking decisions than standard ranking functions like https://en.wikipedia.org/wiki/Tf%E2%80%93idf[TF-IDF] or https://en.wikipedia.org/wiki/Okapi_BM25[BM25].
 
-[[LearningToRank-Model]]
-==== Model
+==== Ranking Model
 
 A ranking model computes the scores used to rerank documents. Irrespective of any particular algorithm or implementation, a ranking model's computation can use three types of inputs:
 
@@ -44,27 +40,23 @@ A ranking model computes the scores used to rerank documents. Irrespective of an
 * features that represent the document being scored
 * features that represent the query for which the document is being scored
 
-[[LearningToRank-Feature]]
 ==== Feature
 
 A feature is a value, a number, that represents some quantity or quality of the document being scored or of the query for which documents are being scored. For example, documents often have a 'recency' quality, and 'number of past purchases' might be a quantity that is passed to Solr as part of the search query.
 
-[[LearningToRank-Normalizer]]
 ==== Normalizer
 
 Some ranking models expect features on a particular scale. A normalizer can be used to translate arbitrary feature values into normalized values e.g. on a 0..1 or 0..100 scale.
 
-[[LearningToRank-Training]]
-=== Training
+=== Training Models
 
-[[LearningToRank-Featureengineering]]
-==== Feature engineering
+==== Feature Engineering
 
 The LTR contrib module includes several feature classes as well as support for custom features. Each feature class's javadocs contain an example to illustrate use of that class. The process of https://en.wikipedia.org/wiki/Feature_engineering[feature engineering] itself is then entirely up to your domain expertise and creativity.
 
 [cols=",,,",options="header",]
 |===
-|Feature |Class |Example parameters |<<LearningToRank-ExternalFeatureInformation,External Feature Information>>
+|Feature |Class |Example parameters |<<External Feature Information>>
 |field length |{solr-javadocs}/solr-ltr/org/apache/solr/ltr/feature/FieldLengthFeature.html[FieldLengthFeature] |`{"field":"title"}` |not (yet) supported
 |field value |{solr-javadocs}/solr-ltr/org/apache/solr/ltr/feature/FieldValueFeature.html[FieldValueFeature] |`{"field":"hits"}` |not (yet) supported
 |original score |{solr-javadocs}/solr-ltr/org/apache/solr/ltr/feature/OriginalScoreFeature.html[OriginalScoreFeature] |`{}` |not applicable
@@ -84,12 +76,10 @@ The LTR contrib module includes several feature classes as well as support for c
 |(custom) |(custom class extending {solr-javadocs}/solr-ltr/org/apache/solr/ltr/norm/Normalizer.html[Normalizer]) |
 |===
 
-[[LearningToRank-Featureextraction]]
 ==== Feature Extraction
 
 The ltr contrib module includes a <<transforming-result-documents.adoc#transforming-result-documents,`[features]` transformer>> to support the calculation and return of feature values for https://en.wikipedia.org/wiki/Feature_extraction[feature extraction] purposes, especially when you do not yet have an actual reranking model.
 
-[[LearningToRank-Featureselectionandmodeltraining]]
 ==== Feature Selection and Model Training
 
 Feature selection and model training take place offline and outside Solr. The ltr contrib module supports two generalized forms of models as well as custom models. Each model class's javadocs contain an example to illustrate configuration of that class. In the form of JSON files your trained model or models (e.g. different models for different customer geographies) can then be directly uploaded into Solr using provided REST APIs.
@@ -102,8 +92,7 @@ Feature selection and model training take place offline and outside Solr. The lt
 |(custom) |(custom class extending {solr-javadocs}/solr-ltr/org/apache/solr/ltr/model/LTRScoringModel.html[LTRScoringModel]) |(not applicable)
 |===
 
-[[LearningToRank-QuickStartExample]]
-== Quick Start Example
+== Quick Start with LTR
 
 The `"techproducts"` example included with Solr is pre-configured with the plugins required for learning-to-rank, but they are disabled by default.
 
@@ -114,7 +103,6 @@ To enable the plugins, please specify the `solr.ltr.enabled` JVM System Property
 bin/solr start -e techproducts -Dsolr.ltr.enabled=true
 ----
 
-[[LearningToRank-Uploadingfeatures]]
 === Uploading Features
 
 To upload features in a `/path/myFeatures.json` file, please run:
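
A sketch of the command, using the LTR feature-store REST endpoint:

[source,bash]
----
curl -XPUT 'http://localhost:8983/solr/techproducts/schema/feature-store' \
  --data-binary "@/path/myFeatures.json" \
  -H 'Content-type:application/json'
----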
@@ -154,7 +142,6 @@ To view the features you just uploaded please open the following URL in a browse
 ]
 ----
 
-[[LearningToRank-Extractingfeatures]]
 === Extracting Features
 
 To extract features as part of a query, add `[features]` to the `fl` parameter, for example:
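
[source,text]
http://localhost:8983/solr/techproducts/query?q=test&fl=id,score,[features]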
@@ -184,7 +171,6 @@ The output XML will include feature values as a comma-separated list, resembling
   }}
 ----
 
-[[LearningToRank-Uploadingamodel]]
 === Uploading a Model
 
 To upload the model in a `/path/myModel.json` file, please run:
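
A sketch of the command, using the LTR model-store REST endpoint:

[source,bash]
----
curl -XPUT 'http://localhost:8983/solr/techproducts/schema/model-store' \
  --data-binary "@/path/myModel.json" \
  -H 'Content-type:application/json'
----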
@@ -219,7 +205,6 @@ To view the model you just uploaded please open the following URL in a browser:
 }
 ----
 
-[[LearningToRank-Runningarerankquery]]
 === Running a Rerank Query
 
 To rerank the results of a query, add the `rq` parameter to your search, for example:
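
[source,text]
http://localhost:8983/solr/techproducts/query?q=test&rq={!ltr model=myModel reRankDocs=100}&fl=id,score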
@@ -258,12 +243,10 @@ The output XML will include feature values as a comma-separated list, resembling
   }}
 ----
 
-[[LearningToRank-ExternalFeatureInformation]]
 === External Feature Information
 
 The {solr-javadocs}/solr-ltr/org/apache/solr/ltr/feature/ValueFeature.html[ValueFeature] and {solr-javadocs}/solr-ltr/org/apache/solr/ltr/feature/SolrFeature.html[SolrFeature] classes support the use of external feature information, `efi` for short.
 
-[[LearningToRank-Uploadingfeatures.1]]
 ==== Uploading Features
 
 To upload features in a `/path/myEfiFeatures.json` file, please run:
@@ -308,9 +291,8 @@ To view the features you just uploaded please open the following URL in a browse
 ]
 ----
 
-As an aside, you may have noticed that the `myEfiFeatures.json` example uses `"store":"myEfiFeatureStore"` attributes: read more about feature `store` in the <<Lifecycle>> section of this page.
+As an aside, you may have noticed that the `myEfiFeatures.json` example uses `"store":"myEfiFeatureStore"` attributes: read more about feature `store` in the <<LTR Lifecycle>> section of this page.
 
-[[LearningToRank-Extractingfeatures.1]]
 ==== Extracting Features
 
 To extract `myEfiFeatureStore` features as part of a query, add `efi.*` parameters to the `[features]` part of the `fl` parameter, for example:
@@ -321,7 +303,6 @@ http://localhost:8983/solr/techproducts/query?q=test&fl=id,cat,manu,score,[featu
 [source,text]
 http://localhost:8983/solr/techproducts/query?q=test&fl=id,cat,manu,score,[features store=myEfiFeatureStore efi.text=test efi.preferredManufacturer=Apache efi.fromMobile=0 efi.answer=13]
 
-[[LearningToRank-Uploadingamodel.1]]
 ==== Uploading a Model
 
 To upload the model in a `/path/myEfiModel.json` file, please run:
@@ -359,7 +340,6 @@ To view the model you just uploaded please open the following URL in a browser:
 }
 ----
 
-[[LearningToRank-Runningarerankquery.1]]
 ==== Running a Rerank Query
 
 To obtain the feature values computed during reranking, add `[features]` to the `fl` parameter and `efi.*` parameters to the `rq` parameter, for example:
@@ -368,39 +348,34 @@ To obtain the feature values computed during reranking, add `[features]` to the
 http://localhost:8983/solr/techproducts/query?q=test&rq={!ltr model=myEfiModel efi.text=test efi.preferredManufacturer=Apache efi.fromMobile=1}&fl=id,cat,manu,score,[features]
 
 [source,text]
-http://localhost:8983/solr/techproducts/query?q=test&rq=\{!ltr model=myEfiModel efi.text=test efi.preferredManufacturer=Apache efi.fromMobile=0 efi.answer=13}&fl=id,cat,manu,score,[features]]
+http://localhost:8983/solr/techproducts/query?q=test&rq={!ltr model=myEfiModel efi.text=test efi.preferredManufacturer=Apache efi.fromMobile=0 efi.answer=13}&fl=id,cat,manu,score,[features]
 
 Notice the absence of `efi.*` parameters in the `[features]` part of the `fl` parameter.
 
-[[LearningToRank-Extractingfeatureswhilstreranking]]
 ==== Extracting Features While Reranking
 
 To extract features for `myEfiFeatureStore` features while still reranking with `myModel`:
 
 [source,text]
-http://localhost:8983/solr/techproducts/query?q=test&rq=\{!ltr model=myModel}&fl=id,cat,manu,score,[features store=myEfiFeatureStore efi.text=test efi.preferredManufacturer=Apache efi.fromMobile=1]] link:[]
+http://localhost:8983/solr/techproducts/query?q=test&rq={!ltr model=myModel}&fl=id,cat,manu,score,[features store=myEfiFeatureStore efi.text=test efi.preferredManufacturer=Apache efi.fromMobile=1]
 
-Notice the absence of `efi.*` parameters in the `rq` parameter (because `myModel` does not use `efi` feature) and the presence of `efi.*` parameters in the `[features]` part of the `fl` parameter (because `myEfiFeatureStore` contains `efi` features).
+Notice the absence of `efi.\*` parameters in the `rq` parameter (because `myModel` does not use `efi` features) and the presence of `efi.*` parameters in the `[features]` part of the `fl` parameter (because `myEfiFeatureStore` contains `efi` features).
 
-Read more about model evolution in the <<Lifecycle>> section of this page.
+Read more about model evolution in the <<LTR Lifecycle>> section of this page.
 
-[[LearningToRank-Trainingexample]]
 === Training Example
 
 Example training data and a demo 'train and upload model' script can be found in the `solr/contrib/ltr/example` folder in the https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git[Apache lucene-solr git repository] which is mirrored on https://github.com/apache/lucene-solr/tree/releases/lucene-solr/6.4.0/solr/contrib/ltr/example[github.com] (the `solr/contrib/ltr/example` folder is not shipped in the solr binary release).
 
-[[LearningToRank-Installation]]
-== Installation
+== Installation of LTR
 
 The ltr contrib module requires the `dist/solr-ltr-*.jar` JARs.
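
A `solrconfig.xml` sketch for including them (paths assume the standard distribution layout):

[source,xml]
----
<lib dir="${solr.install.dir:../../../..}/contrib/ltr/lib/" regex=".*\.jar" />
<lib dir="${solr.install.dir:../../../..}/dist/" regex="solr-ltr-\d.*\.jar" />
----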
 
-[[LearningToRank-Configuration]]
-== Configuration
+== LTR Configuration
 
 Learning-To-Rank is a contrib module and therefore its plugins must be configured in `solrconfig.xml`.
 
-[[LearningToRank-Minimumrequirements]]
-=== Minimum requirements
+=== Minimum Requirements
 
 * Include the required contrib JARs. Note that by default paths are relative to the Solr core so they may need adjustments to your configuration, or an explicit specification of the `$solr.install.dir`.
 +
@@ -437,15 +412,12 @@ Learning-To-Rank is a contrib module and therefore its plugins must be configure
 </transformer>
 ----
 
-[[LearningToRank-Advancedoptions]]
 === Advanced Options
 
-[[LearningToRank-LTRThreadModule]]
 ==== LTRThreadModule
 
 A thread module can be configured for the query parser and/or the transformer to parallelize the creation of feature weights. For details, please refer to the {solr-javadocs}/solr-ltr/org/apache/solr/ltr/LTRThreadModule.html[LTRThreadModule] javadocs.
 
-[[LearningToRank-Featurevectorcustomization]]
 ==== Feature Vector Customization
 
 The features transformer returns dense CSV values such as `featureA=0.1,featureB=0.2,featureC=0.3,featureD=0.0`.
@@ -462,7 +434,6 @@ For sparse CSV output such as `featureA:0.1 featureB:0.2 featureC:0.3` you can c
 </transformer>
 ----
 
-[[LearningToRank-Implementationandcontributions]]
 ==== Implementation and Contributions
 
 .How does Solr Learning-To-Rank work under the hood?
@@ -481,10 +452,8 @@ Contributions for further models, features and normalizers are welcome. Related
 * http://wiki.apache.org/lucene-java/HowToContribute
 ====
 
-[[LearningToRank-Lifecycle]]
-== Lifecycle
+== LTR Lifecycle
 
-[[LearningToRank-Featurestores]]
 === Feature Stores
 
 It is recommended that you organize all your features into stores which are akin to namespaces:
@@ -501,7 +470,6 @@ To inspect the content of the `commonFeatureStore` feature store:
 
 `\http://localhost:8983/solr/techproducts/schema/feature-store/commonFeatureStore`
 
-[[LearningToRank-Models]]
 === Models
 
 * A model uses features from exactly one feature store.
@@ -537,13 +505,11 @@ To delete the `currentFeatureStore` feature store:
 curl -XDELETE 'http://localhost:8983/solr/techproducts/schema/feature-store/currentFeatureStore'
 ----
 
-[[LearningToRank-Applyingchanges]]
 === Applying Changes
 
 The feature store and the model store are both <<managed-resources.adoc#managed-resources,Managed Resources>>. Changes made to managed resources are not applied to the active Solr components until the Solr collection (or Solr core in single server mode) is reloaded.
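
For example, to reload the `techproducts` collection after modifying a feature store or model store:

[source,bash]
----
curl 'http://localhost:8983/solr/admin/collections?action=RELOAD&name=techproducts'
----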
 
-[[LearningToRank-Examples]]
-=== Examples
+=== LTR Examples
 
 ==== One Feature Store, Multiple Ranking Models
 
@@ -628,7 +594,6 @@ The feature store and the model store are both <<managed-resources.adoc#managed-
 }
 ----
 
-[[LearningToRank-Modelevolution]]
 ==== Model Evolution
 
 * `linearModel201701` uses features from `featureStore201701`
@@ -752,8 +717,7 @@ The feature store and the model store are both <<managed-resources.adoc#managed-
 }
 ----
 
-[[LearningToRank-AdditionalResources]]
-== Additional Resources
+== Additional LTR Resources
 
 * "Learning to Rank in Solr" presentation at Lucene/Solr Revolution 2015 in Austin:
 ** Slides: http://www.slideshare.net/lucidworks/learning-to-rank-in-solr-presented-by-michael-nilsson-diego-ceccarelli-bloomberg-lp

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8d00e53b/solr/solr-ref-guide/src/local-parameters-in-queries.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/local-parameters-in-queries.adoc b/solr/solr-ref-guide/src/local-parameters-in-queries.adoc
index 1ed8eea..e7becd7 100644
--- a/solr/solr-ref-guide/src/local-parameters-in-queries.adoc
+++ b/solr/solr-ref-guide/src/local-parameters-in-queries.adoc
@@ -32,7 +32,6 @@ We can prefix this query string with local parameters to provide more informatio
 
 These local parameters would change the query to require a match on both "solr" and "rocks" while searching the "title" field by default.
 
-[[LocalParametersinQueries-BasicSyntaxofLocalParameters]]
 == Basic Syntax of Local Parameters
 
 To specify a local parameter, insert the following before the argument to be modified:
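
In outline: begin with `{!`, add one or more space-separated key=value pairs, and end with `}`. For example: `{!q.op=AND df=title}`.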
@@ -45,7 +44,6 @@ To specify a local parameter, insert the following before the argument to be mod
 
 You may specify only one local parameters prefix per argument. Values in the key-value pairs may be quoted via single or double quotes, and backslash escaping works within quoted strings.
 
-[[LocalParametersinQueries-QueryTypeShortForm]]
 == Query Type Short Form
 
 If a local parameter value appears without a name, it is given the implicit name of "type". This allows short-form representation for the type of query parser to use when parsing a query string. Thus
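
`q={!dismax qf=myfield v='solr rocks'}`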
@@ -74,7 +72,6 @@ is equivalent to
 
 `q={!type=dismax qf=myfield v='solr rocks'}`
 
-[[LocalParametersinQueries-ParameterDereferencing]]
 == Parameter Dereferencing
 
 Parameter dereferencing, or indirection, lets you use the value of another argument rather than specifying it directly. This can be used to simplify queries, decouple user input from query parameters, or decouple front-end GUI parameters from defaults set in `solrconfig.xml`.
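
For example, the `v` key can reference another request parameter: `q={!dismax qf=myfield v=$qq}&qq=solr rocks`.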

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8d00e53b/solr/solr-ref-guide/src/logging.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/logging.adoc b/solr/solr-ref-guide/src/logging.adoc
index d44dcad..8b847f7 100644
--- a/solr/solr-ref-guide/src/logging.adoc
+++ b/solr/solr-ref-guide/src/logging.adoc
@@ -27,7 +27,6 @@ image::images/logging/logging.png[image,width=621,height=250]
 
 While this example shows logged messages for only one core, if you have multiple cores in a single instance, they will each be listed, with the level for each.
 
-[[Logging-SelectingaLoggingLevel]]
 == Selecting a Logging Level
 
 When you select the *Level* link on the left, you see the hierarchy of classpaths and classnames for your instance. A row highlighted in yellow indicates that the class has logging capabilities. Click on a highlighted row, and a menu will appear to allow you to change the log level for that class. Characters in boldface indicate that the class will not be affected by level changes to root.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8d00e53b/solr/solr-ref-guide/src/major-changes-from-solr-5-to-solr-6.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/major-changes-from-solr-5-to-solr-6.adoc b/solr/solr-ref-guide/src/major-changes-from-solr-5-to-solr-6.adoc
index 6810e4b..9ec44d8 100644
--- a/solr/solr-ref-guide/src/major-changes-from-solr-5-to-solr-6.adoc
+++ b/solr/solr-ref-guide/src/major-changes-from-solr-5-to-solr-6.adoc
@@ -46,9 +46,9 @@ Built on streaming expressions, new in Solr 6 is a <<parallel-sql-interface.adoc
 
 Replication across data centers is now possible with <<cross-data-center-replication-cdcr.adoc#cross-data-center-replication-cdcr,Cross Data Center Replication>>. Using an active-passive model, a SolrCloud cluster can be replicated to another data center, and monitored with a new API.
 
-=== Graph Query Parser
+=== Graph QueryParser
 
-A new <<other-parsers.adoc#OtherParsers-GraphQueryParser,`graph` query parser>> makes it possible to to graph traversal queries of Directed (Cyclic) Graphs modelled using Solr documents.
+A new <<other-parsers.adoc#graph-query-parser,`graph` query parser>> makes it possible to do graph traversal queries of Directed (Cyclic) Graphs modelled using Solr documents.
 
 [[major-5-6-docvalues]]
 === DocValues

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8d00e53b/solr/solr-ref-guide/src/making-and-restoring-backups.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/making-and-restoring-backups.adoc b/solr/solr-ref-guide/src/making-and-restoring-backups.adoc
index 6f3383c..38da729 100644
--- a/solr/solr-ref-guide/src/making-and-restoring-backups.adoc
+++ b/solr/solr-ref-guide/src/making-and-restoring-backups.adoc
@@ -28,12 +28,12 @@ Support for backups when running SolrCloud is provided with the <<collections-ap
 
 Two commands are available:
 
-* `action=BACKUP`: This command backs up Solr indexes and configurations. More information is available in the section <<collections-api.adoc#CollectionsAPI-backup,Backup Collection>>.
-* `action=RESTORE`: This command restores Solr indexes and configurations. More information is available in the section <<collections-api.adoc#CollectionsAPI-restore,Restore Collection>>.
+* `action=BACKUP`: This command backs up Solr indexes and configurations. More information is available in the section <<collections-api.adoc#backup,Backup Collection>>.
+* `action=RESTORE`: This command restores Solr indexes and configurations. More information is available in the section <<collections-api.adoc#restore,Restore Collection>>.
 
 == Standalone Mode Backups
 
-Backups and restoration uses Solr's replication handler. Out of the box, Solr includes implicit support for replication so this API can be used. Configuration of the replication handler can, however, be customized by defining your own replication handler in `solrconfig.xml` . For details on configuring the replication handler, see the section <<index-replication.adoc#IndexReplication-ConfiguringtheReplicationHandler,Configuring the ReplicationHandler>>.
+Backups and restoration use Solr's replication handler. Out of the box, Solr includes implicit support for replication so this API can be used. Configuration of the replication handler can, however, be customized by defining your own replication handler in `solrconfig.xml`. For details on configuring the replication handler, see the section <<index-replication.adoc#configuring-the-replicationhandler,Configuring the ReplicationHandler>>.
 
 === Backup API
 
@@ -58,7 +58,7 @@ The path where the backup will be created. If the path is not absolute then the
`name`::
The snapshot will be created in a directory called `snapshot.<name>`. If a name is not specified then the directory name would have the following format: `snapshot.<yyyyMMddHHmmssSSS>`.
 
 `numberToKeep`::
-The number of backups to keep. If `maxNumberOfBackups` has been specified on the replication handler in `solrconfig.xml`, `maxNumberOfBackups` is always used and attempts to use `numberToKeep` will cause an error. Also, this parameter is not taken into consideration if the backup name is specified. More information about `maxNumberOfBackups` can be found in the section <<index-replication.adoc#IndexReplication-ConfiguringtheReplicationHandler,Configuring the ReplicationHandler>>.
+The number of backups to keep. If `maxNumberOfBackups` has been specified on the replication handler in `solrconfig.xml`, `maxNumberOfBackups` is always used and attempts to use `numberToKeep` will cause an error. Also, this parameter is not taken into consideration if the backup name is specified. More information about `maxNumberOfBackups` can be found in the section <<index-replication.adoc#configuring-the-replicationhandler,Configuring the ReplicationHandler>>.
 
 `repository`::
 The name of the repository to be used for the backup. If no repository is specified then the local filesystem repository will be used automatically.
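
As an illustrative sketch (assuming the `techproducts` example core and the default implicit `/replication` handler), a backup request might look like:

`\http://localhost:8983/solr/techproducts/replication?command=backup&numberToKeep=3`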

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8d00e53b/solr/solr-ref-guide/src/managed-resources.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/managed-resources.adoc b/solr/solr-ref-guide/src/managed-resources.adoc
index 72b879a..deb10cc 100644
--- a/solr/solr-ref-guide/src/managed-resources.adoc
+++ b/solr/solr-ref-guide/src/managed-resources.adoc
@@ -33,15 +33,13 @@ All of the examples in this section assume you are running the "techproducts" So
 bin/solr -e techproducts
 ----
 
-[[ManagedResources-Overview]]
-== Overview
+== Managed Resources Overview
 
 Let's begin learning about managed resources by looking at a couple of examples provided by Solr for managing stop words and synonyms using a REST API. After reading this section, you'll be ready to dig into the details of how managed resources are implemented in Solr so you can start building your own implementation.
 
-[[ManagedResources-Stopwords]]
-=== Stop Words
+=== Managing Stop Words
 
-To begin, you need to define a field type that uses the <<filter-descriptions.adoc#FilterDescriptions-ManagedStopFilter,ManagedStopFilterFactory>>, such as:
+To begin, you need to define a field type that uses the <<filter-descriptions.adoc#managed-stop-filter,ManagedStopFilterFactory>>, such as:
 
 [source,xml,subs="verbatim,callouts"]
 ----
@@ -56,7 +54,7 @@ To begin, you need to define a field type that uses the <<filter-descriptions.ad
 
 There are two important things to notice about this field type definition:
 
-<1> The filter implementation class is `solr.ManagedStopFilterFactory`. This is a special implementation of the <<filter-descriptions.adoc#FilterDescriptions-StopFilter,StopFilterFactory>> that uses a set of stop words that are managed from a REST API.
+<1> The filter implementation class is `solr.ManagedStopFilterFactory`. This is a special implementation of the <<filter-descriptions.adoc#stop-filter,StopFilterFactory>> that uses a set of stop words that are managed from a REST API.
 
<2> The `managed="english"` attribute gives a name to the set of managed stop words, in this case indicating the stop words are for English text.
 
@@ -134,8 +132,7 @@ curl -X DELETE "http://localhost:8983/solr/techproducts/schema/analysis/stopword
 
NOTE: PUT/POST is used to add terms to an existing list instead of replacing the list entirely. This is because it is more common to add a term to an existing list than it is to replace a list altogether, so the API favors the more common approach of incrementally adding terms, especially since deleting individual terms is also supported.
 
-[[ManagedResources-Synonyms]]
-=== Synonyms
+=== Managing Synonyms
 
For the most part, the API for managing synonyms behaves similarly to the API for stop words, except instead of working with a list of words, it uses a map, where the value for each entry in the map is a set of synonyms for a term. As with stop words, the `sample_techproducts_configs` <<config-sets.adoc#config-sets,configset>> includes a pre-built set of synonym mappings suitable for the sample data, activated by the following field type definition in `schema.xml`:
 
@@ -209,8 +206,7 @@ Note that the expansion is performed when processing the PUT request so the unde
 
 Lastly, you can delete a mapping by sending a DELETE request to the managed endpoint.
 
-[[ManagedResources-ApplyingChanges]]
-== Applying Changes
+== Applying Managed Resource Changes
 
 Changes made to managed resources via this REST API are not applied to the active Solr components until the Solr collection (or Solr core in single server mode) is reloaded.
 
@@ -227,7 +223,6 @@ However, the intent of this API implementation is that changes will be applied u
Changing things like stop words and synonym mappings typically requires re-indexing existing documents if they are used by index-time analyzers. The RestManager framework does not guard you from this; it simply makes it possible to programmatically build up a set of stop words, synonyms, etc.
 ====
 
-[[ManagedResources-RestManagerEndpoint]]
 == RestManager Endpoint
 
 Metadata about registered ManagedResources is available using the `/schema/managed` endpoint for each collection.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8d00e53b/solr/solr-ref-guide/src/mbean-request-handler.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/mbean-request-handler.adoc b/solr/solr-ref-guide/src/mbean-request-handler.adoc
index eebd082..65845ee 100644
--- a/solr/solr-ref-guide/src/mbean-request-handler.adoc
+++ b/solr/solr-ref-guide/src/mbean-request-handler.adoc
@@ -32,10 +32,9 @@ Restricts results by category name.
 Specifies whether statistics are returned with results. You can override the `stats` parameter on a per-field basis. The default is `false`.
 
 `wt`::
-The output format. This operates the same as the <<response-writers.adoc#response-writers,`wt` parameter in a query>>. The default is `xml`.
+The output format. This operates the same as the <<response-writers.adoc#response-writers,`wt` parameter in a query>>. The default is `json`.
 
-[[MBeanRequestHandler-Examples]]
-== Examples
+== MBeanRequestHandler Examples
 
 The following examples assume you are running Solr's `techproducts` example configuration:
 
@@ -48,9 +47,9 @@ To return information about the CACHE category only:
 
 `\http://localhost:8983/solr/techproducts/admin/mbeans?cat=CACHE`
 
-To return information and statistics about the CACHE category only, formatted in JSON:
+To return information and statistics about the CACHE category only, formatted in XML:
 
-`\http://localhost:8983/solr/techproducts/admin/mbeans?stats=true&cat=CACHE&indent=true&wt=json`
+`\http://localhost:8983/solr/techproducts/admin/mbeans?stats=true&cat=CACHE&wt=xml`
 
 To return information for everything, and statistics for everything except the `fieldCache`:
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8d00e53b/solr/solr-ref-guide/src/merging-indexes.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/merging-indexes.adoc b/solr/solr-ref-guide/src/merging-indexes.adoc
index 49afe4e..cf1cd37 100644
--- a/solr/solr-ref-guide/src/merging-indexes.adoc
+++ b/solr/solr-ref-guide/src/merging-indexes.adoc
@@ -27,7 +27,6 @@ To merge indexes, they must meet these requirements:
 
 Optimally, the two indexes should be built using the same schema.
 
-[[MergingIndexes-UsingIndexMergeTool]]
 == Using IndexMergeTool
 
 To merge the indexes, do the following:
@@ -43,9 +42,8 @@ java -cp $SOLR/server/solr-webapp/webapp/WEB-INF/lib/lucene-core-VERSION.jar:$SO
 This will create a new index at `/path/to/newindex` that contains both index1 and index2.
 . Copy this new directory to the location of your application's solr index (move the old one aside first, of course) and start Solr.
 
-[[MergingIndexes-UsingCoreAdmin]]
 == Using CoreAdmin
 
-The `MERGEINDEXES` command of the <<coreadmin-api.adoc#CoreAdminAPI-MERGEINDEXES,CoreAdminHandler>> can be used to merge indexes into a new core – either from one or more arbitrary `indexDir` directories or by merging from one or more existing `srcCore` core names.
+The `MERGEINDEXES` command of the <<coreadmin-api.adoc#coreadmin-mergeindexes,CoreAdminHandler>> can be used to merge indexes into a new core – either from one or more arbitrary `indexDir` directories or by merging from one or more existing `srcCore` core names.
 
-See the <<coreadmin-api.adoc#CoreAdminAPI-MERGEINDEXES,CoreAdminHandler>> section for details.
+See the <<coreadmin-api.adoc#coreadmin-mergeindexes,CoreAdminHandler>> section for details.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8d00e53b/solr/solr-ref-guide/src/meta-docs/asciidoc-syntax.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/meta-docs/asciidoc-syntax.adoc b/solr/solr-ref-guide/src/meta-docs/asciidoc-syntax.adoc
new file mode 100644
index 0000000..e0e5aec
--- /dev/null
+++ b/solr/solr-ref-guide/src/meta-docs/asciidoc-syntax.adoc
@@ -0,0 +1,344 @@
+= AsciiDoc Syntax Cheatsheet
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+The definitive manual on AsciiDoc syntax is in the http://asciidoctor.org/docs/user-manual/[Asciidoctor User Manual]. To help people get started, however, here is a simpler cheat sheet.
+
+== AsciiDoc vs Asciidoctor Syntax
+We use tools from the Asciidoctor project to build the HTML and PDF versions of the Ref Guide. Asciidoctor is a Ruby port of the original AsciiDoc project, which itself was mostly abandoned several years ago.
+
+While much of the syntax between the two is the same, there are many conventions supported by Asciidoctor that did not exist in AsciiDoc. While the Asciidoctor project has tried to provide back-compatibility with the older project, that may not be true forever. For this reason, it's strongly recommended to only use the Asciidoctor User Manual as a reference for any syntax that's not described here.
+
+== Basic AsciiDoc Syntax
+
+=== Bold
+
+Put asterisks around text to make it *bold*.
+
+More info: http://asciidoctor.org/docs/user-manual/#bold-and-italic
+
+=== Italics
+
+Use underlines on either side of a string to put text into _italics_.
+
+More info: http://asciidoctor.org/docs/user-manual/#bold-and-italic
+
+=== Headings
+
+Equal signs (`=`) are used for heading levels. Each equal sign is a level. Each page can *only* have one top level (i.e., only one section with a single `=`).
+
+Levels should be appropriately nested. During the build, validation occurs to ensure that level 3s are preceded by level 2s, level 4s are preceded by level 3s, etc. Including out-of-sequence heading levels (such as a level 3 then a level 5) will not fail the build, but will produce an error.
+
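+For example, a minimal page skeleton with properly nested heading levels looks like this:
+
+[source]
+----
+= Page Title
+
+== First Section
+
+=== A Sub-Section
+
+== Second Section
+----
+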
+More info: http://asciidoctor.org/docs/user-manual/#sections
+
+=== Code Examples
+
+Use backticks around text that should be monospaced, such as code or a class name in the body of a paragraph.
+
+More info: http://asciidoctor.org/docs/user-manual/#mono
+
+Longer code examples can be separated from text with `source` blocks. These allow defining the syntax being used so the code is properly highlighted.
+
+.Example Source Block
+[source]
+----
+[source,xml]
+<field name="id" type="string" indexed="true" stored="true" required="true" multiValued="false" />
+----
+
+If your code block will include line breaks, put 4 hyphens (`----`) before and after the entire block.
+
+More info: http://asciidoctor.org/docs/user-manual/#source-code-blocks
+
+=== Block Titles
+
+Titles can be added to most blocks (images, source blocks, tables, etc.) by simply prefacing the title with a period (`.`). For example, to add a title to the source block example above:
+
+[source]
+----
+.Example ID field
+[source,xml]
+<field name="id" type="string" indexed="true" stored="true" required="true" multiValued="false" />
+----
+
+== Links
+
+=== Link to Sites on the Internet
+When converting content to HTML or PDF, Asciidoctor will automatically render many link types (such as `http:` and `mailto:`) without any additional syntax.
+
+However, you can add a name to a link by adding the URI followed by square brackets:
+
+[source]
+http://lucene.apache.org/solr[Solr Website]
+
+=== Link to Other Pages/Sections of the Guide
+A warning up front: linking to other pages can be a little painful. There are slightly different rules depending on the type of link you want to create, and where you are linking from.
+
+The build process includes a validation for internal or inter-page links, so if you can build the docs locally, you can use that to verify you constructed your link properly (or pay attention to the Jenkins build after your commit).
+
+With all of the below examples, you can add text to display as the link title by adding a comma after the section reference followed by the display text, as in:
+
+[source]
+<<schema-api.adoc#modify-the-schema,Modify the Schema>>
+
+==== Link to a Section on the Same Page
+
+To link to an anchor (or section title) on the _same page_, you can simply use double angle brackets (`<< >>`) around the anchor/heading/section title you want to link to. Any section title (a heading that starts with equal signs) automatically becomes an anchor during conversion and is available for deep linking.
+
+Example::
+If I have a section on a page that looks like this (from `defining-fields.adoc`):
++
+[source]
+----
+== Field Properties
+
+Field definitions can have the following properties:
+----
++
+To link to this section from another part of the same `defining-fields.adoc` page, I simply need to put the section title in double angle brackets, as in:
++
+[source]
+See also the <<Field Properties>> section.
++
+The section title will be used as the display text; to customize that, add a comma after the section title, then the text you want used for display.
+
+More info: http://asciidoctor.org/docs/user-manual/#internal-cross-references
+
+==== Link to a Section with an Anchor ID
+When linking to any section (on the same page or another one), you must also be aware of any pre-defined anchors that may be in use (these will be in double brackets, like `[[ ]]`). When the page is converted, those will be the references your link needs to point to.
+
+Example::
+Take this example from `configsets-api.adoc`:
++
+[source]
+----
+[[configsets-create]]
+== Create a ConfigSet
+----
++
+To link to this section, there are two approaches depending on where you are linking from:
+
+* From the same page, simply use the anchor name: `\<<configsets-create>>`.
+* From another page, use the page name and the anchor name: `\<<configsets-api.adoc#configsets-create>>`.
+
+==== Link to Another Page
+To link to _another page_ or a section on another page, you must use the full filename and refer to the specific section you want to link to.
+
+Unfortunately, when you want to refer the reader to another page without deep-linking to a section, you cannot simply put the other file name in angle brackets and call it a day. This is due to the PDF conversion - once all the pages are combined into one big page for one big PDF, the lack of a specific reference causes inter-page links to fail.
+
+So, *you must always link to a specific section*. If all you want is a reference to the top of another page, you can use the `page-shortname` attribute found at the top of every page as your anchor reference.
+
+Example::
+The file `upgrading-solr.adoc` has a `page-shortname` at the top that looks like this:
++
+[source]
+----
+= Upgrading Solr
+:page-shortname: upgrading-solr
+:page-permalink: upgrading-solr.html
+----
++
+To construct a link to this page, we need to refer to the file name (`upgrading-solr.adoc`), then use the `page-shortname` as our anchor reference. As in:
++
+[source]
+For more information about upgrades, see <<upgrading-solr.adoc#upgrading-solr>>.
+
+TIP: As of July 2017, all pages have a `page-shortname` that is equivalent to the filename (without the `.adoc` part).
+
+==== Link to a Section on Another Page
+Linking to a section is conceptually the same as linking to the top of a page; you just need to take a little extra care to format the anchor ID in your link reference properly.
+
+When you link to a section on another page, you must convert the section title into the format that will be used for the section ID during conversion. These are the rules that transform a title into an ID:
+--
+* All characters are lower-cased.
+** `Using security.json with Solr` becomes `using security.json with solr`
+* All non-alphanumeric characters are removed, with the exception of hyphens (so all periods, commas, ampersands, parentheses, etc., are stripped).
+** `using security.json with solr` becomes `using security json with solr`
+* All whitespaces are replaced with hyphens.
+** `using security json with solr` becomes `using-security-json-with-solr`
+--
+Example::
+The file `schema-api.adoc` has a section "Modify the Schema" that looks like this:
++
+[source]
+----
+== Modify the Schema
+
+`POST /_collection_/schema`
+----
++
+To link to this section from another page, you would create a link structured like this:
++
+--
+* the file name of the page with the section (`schema-api.adoc`),
+* then the hash symbol (`#`),
+* then the converted section title (`modify-the-schema`),
+* then a comma and any link title for display.
+--
++
+The link in context would look like this:
++
+[source]
+For more information, see the section <<schema-api.adoc#modify-the-schema,Modify the Schema>>.
+
+More info: http://asciidoctor.org/docs/user-manual/#inter-document-cross-references
+
+== Lists
+
+AsciiDoc supports three types of lists:
+
+* Unordered lists
+* Ordered lists
+* Labeled lists
+
+Each type of list can be mixed with the other types. So, you could have an ordered list inside a labeled list if necessary.
+
+=== Unordered Lists
+Simple bulleted lists need each line to start with an asterisk (`*`). It should be the first character of the line, and be followed by a space.
+
+These lists also need to be separated from the surrounding text by a blank line.
+
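+For example (note the blank line that separates the list from the paragraph above it):
+
+[source]
+----
+The following features are supported:
+
+* first item
+* second item
+----
+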
+More info: http://asciidoctor.org/docs/user-manual/#unordered-lists
+
+=== Ordered Lists
+Numbered lists need each line to start with a period (`.`). It should be the first character of the line, and be followed by a space.
+
+This style is preferred over manually numbering your list.
+
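+For example:
+
+[source]
+----
+. Stop Solr.
+. Copy the new index into place.
+. Restart Solr.
+----
+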
+More info: http://asciidoctor.org/docs/user-manual/#ordered-lists
+
+=== Labeled Lists
+These are like question & answer lists or glossary definitions. Each line should start with the list item followed by double colons (`::`), then a space or new line.
+
+Labeled lists can be nested by adding an additional colon (such as `:::`, etc.).
+
+If your content will span multiple paragraphs or include source blocks, etc., you will want to add a plus sign (`+`) on a line by itself between the blocks to keep the sections attached to the list item, as in the sketch below.
+
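+A short sketch of a labeled list with a `+` continuation (the parameter names here are purely illustrative):
+
+[source]
+----
+`name`::
+A required name for the item.
++
+This continuation paragraph stays attached to the `name` entry.
+
+`location`::
+An optional location for the item.
+----
+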
+TIP: We prefer this style of list for parameters because it allows more freedom in how you present the details for each parameter. For example, it supports ordered or unordered lists inside it automatically, and you can include multiple paragraphs and source blocks without trying to cram them into a smaller table cell.
+
+More info: http://asciidoctor.org/docs/user-manual/#labeled-list
+
+== Images
+
+There are two ways to include an image: inline or as a block.
+
+Inline images are those where text will flow around the image. Block images are those that appear on their own line, set off from any other text on the page.
+
+Both approaches use the `image` tag before the image filename, but the number of colons after `image` define if it is inline or a block. Inline images use one colon (`image:`), while block images use two colons (`image::`).
+
+Block images automatically include a caption label and a number (such as `Figure 1`). If a block image includes a title, it will be included as the text of the caption.
+
+Optional attributes allow you to set the alt text, the size of the image, whether it should be a link, the float, and the alignment.
+
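+A sketch showing both styles (the image file names here are hypothetical):
+
+[source]
+----
+Click the image:icons/gear.png[Settings] icon to open the settings.
+
+.The Query Screen
+image::images/query-screen.png[Query Screen]
+----
+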
+More info: http://asciidoctor.org/docs/user-manual/#images
+
+== Tables
+
+Tables can be complex, but it is pretty easy to make a basic table that fits most needs.
+
+=== Basic Tables
+The basic structure of a table is similar to Markdown, with pipes (`|`) delimiting columns between rows:
+
+[source]
+----
+|===
+| col 1 row 1 | col 2 row 1|
+| col 1 row 2 | col 2 row 2|
+|===
+----
+
+Note the use of `|===` at the start and end. For basic tables that's not exactly required, but it does help to delimit the start and end of the table in case you accidentally introduce (or maybe prefer) spaces between the rows.
+
+=== Header Rows
+To add a header to a table, you need only set the `header` attribute at the start of the table:
+
+[source]
+----
+[options="header"]
+|===
+| header col 1 | header col 2|
+| col 1 row 1 | col 2 row 1|
+| col 1 row 2 | col 2 row 2|
+|===
+----
+
+=== Defining Column Styles
+If you need to apply specific styles to all rows in a column, you can do so with the `cols` attribute.
+
+This example will center all content in all rows:
+
+[source]
+----
+[cols="2*^" options="header"]
+|===
+| header col 1 | header col 2|
+| col 1 row 1 | col 2 row 1|
+| col 1 row 2 | col 2 row 2|
+|===
+----
+
+Alignments or other styles can also be applied to just one column. For example, this would center only the last column of the table:
+
+[source]
+----
+[cols="1,^1" options="header"]
+|===
+| header col 1 | header col 2|
+| col 1 row 1 | col 2 row 1|
+| col 1 row 2 | col 2 row 2|
+|===
+----
+
+Many more examples of formatting:
+
+* Columns: http://asciidoctor.org/docs/user-manual/#cols-format
+* Cells: http://asciidoctor.org/docs/user-manual/#cell
+
+=== More Options
+
+Tables can also be given footer rows, borders, and captions. You can determine the width of columns, or the width of the table as a whole.
+
+CSV or DSV can also be used instead of formatting the data in pipes.
+
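+For example, a sketch of the same basic table built from CSV data:
+
+[source]
+----
+[format="csv", options="header"]
+|===
+header col 1,header col 2
+col 1 row 1,col 2 row 1
+col 1 row 2,col 2 row 2
+|===
+----
+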
+More info: http://asciidoctor.org/docs/user-manual/#tables
+
+== Admonitions (Notes, Warnings)
+
+AsciiDoc supports several types of callout boxes, called "admonitions":
+
+* NOTE
+* TIP
+* IMPORTANT
+* CAUTION
+* WARNING
+
+It is enough to start a paragraph with one of these words followed by a colon (such as `NOTE:`). When it is converted to HTML or PDF, those sections will be formatted properly - indented from the main text and showing an icon inline.
+
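+For example:
+
+[source]
+----
+NOTE: Changes made via this API are not applied until the collection is reloaded.
+----
+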
+You can add a title to an admonition by making it an admonition block. The structure of an admonition block is like this:
+
+[source]
+----
+.Title of Note
+[NOTE]
+====
+Text of note
+====
+----
+
+In this example, the type of admonition is included in square brackets (`[NOTE]`), and the title is prefixed with a period. Four equal signs give the start and end points of the note text (which can include new lines, lists, code examples, etc.).
+
+More info: http://asciidoctor.org/docs/user-manual/#admonition

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8d00e53b/solr/solr-ref-guide/src/meta-docs/editing-tools.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/meta-docs/editing-tools.adoc b/solr/solr-ref-guide/src/meta-docs/editing-tools.adoc
new file mode 100644
index 0000000..81c71d0
--- /dev/null
+++ b/solr/solr-ref-guide/src/meta-docs/editing-tools.adoc
@@ -0,0 +1,39 @@
+= Tools for Working with AsciiDoc Files
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+== AsciiDoc vs Asciidoctor
+
+The Solr Ref Guide is written in _AsciiDoc_ format. This format is generally considered an extension of Markdown, because it has support for tables of contents, better table support, and other features that make it more appropriate for writing technical documentation.
+
+We are using a version of the AsciiDoc syntax along with tools from an open source project called https://asciidoctor.org[Asciidoctor]. This provides full support for the AsciiDoc syntax, but replaces the original Python processor with one written in Ruby. There is a Java implementation, known as https://github.com/asciidoctor/asciidoctorj[AsciidoctorJ]. Further extensions from the original AsciiDoc project include support for font-based icons and UI elements.
+
+== Helpful Tools
+
+You can write AsciiDoc without any special tools. It's simply text, with familiar syntax for bold (`*`) and italics (`_`).
+
+Having some tools in your editor is helpful, though.
+
+=== Doc Preview
+
+This allows you to see your document in something close to how it will appear once converted to HTML.
+
+The following information is from http://asciidoctor.org/docs/editing-asciidoc-with-live-preview.
+
+* Atom has AsciiDoc Preview, which gives you a panel that updates as you type. There are also a couple of other plugins to support AsciiDoc format and auto-complete.
+* There is a Live Preview browser plugin for Chrome, Firefox and Opera which allows you to open your AsciiDoc page in the browser. It will also update as you type.
+* There is also an IntelliJ IDEA plugin to support AsciiDoc format.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8d00e53b/solr/solr-ref-guide/src/meta-docs/jekyll.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/meta-docs/jekyll.adoc b/solr/solr-ref-guide/src/meta-docs/jekyll.adoc
new file mode 100644
index 0000000..f2a8b72
--- /dev/null
+++ b/solr/solr-ref-guide/src/meta-docs/jekyll.adoc
@@ -0,0 +1,88 @@
+= Making Changes to HTML Version
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+The Solr Ref Guide uses Jekyll to build the HTML version of the site.
+
+== What is Jekyll?
+
+Jekyll is a static site generator, meaning that it takes some set of documents and produces HTML pages. It allows for templating of the pages, so each page has the same look and feel without having to code headers, footers, logos, etc., into every page.
+
+Jekyll is an open source project written in Ruby, online at https://jekyllrb.com/.
+
+== Jekyll-Asciidoctor Plugin
+We use a plugin for Jekyll from the Asciidoctor project to integrate Jekyll with Asciidoc formatted content. The source for the plugin is available at https://github.com/asciidoctor/jekyll-asciidoc.
+
+This plugin allows us to use Asciidoctor-style variables with Jekyll, instead of having to maintain two sets of the same variables (one for the HTML version and another for the PDF version).
+
+== Jekyll Basics
+
+The following sections describe the main features of Jekyll that you will encounter while working with the Solr Ref Guide.
+
+=== _config.yml
+
+The `_config.yml` is a global configuration file that drives many of the options used when building the site (particularly in our use of Jekyll).
+
+We have templatized `_config.yml` so in our use of Jekyll you will find it as `solr-ref-guide/_config.yml.template`. This allows us to define some variables during the build, and use common Lucene/Solr build parameters (such as versions, etc.).
+
+=== Front Matter
+
+Front matter for Jekyll is like a header that defines the title of the page, and any other variables that may be helpful or even required when rendering the page.
+
+Every document that will be converted to HTML *must* include at least the page title at the top of the page.
+
+Many guides to Jekyll also say that defining the layout in the front matter is required. However, since we only have one layout for all pages, we have defined this as a default.
+
+The Solr Ref Guide uses the front matter to define some custom attributes on a per-page basis, as in the sketch after this list:
+
+* `page-shortname` - uniquely identifies the page
+* `page-permalink` - the permanent URL of the page
+* `page-children` - an ordered list of child pages, used to build the site navigation menu that appears to the left of each page's content (and to order the pages in the PDF)
+
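+A sketch of a page header using these attributes (the page and child names here are only illustrative; `page-children` is a comma-separated list):
+
+[source]
+----
+= Example Page Title
+:page-shortname: example-page
+:page-permalink: example-page.html
+:page-children: first-child-page, second-child-page
+----
+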
+There are also some optional custom attributes that can be defined in pages to affect the Table of Contents presentation in Jekyll:
+
+* `page-toclevels` - changes how "deep" the TOC will be in terms of nested section/sub-section titles (default = 2)
+* `page-tocclass` - changes the CSS class applied to the TOC, default = "normal", resulting in the class name `toc-normal`. The other option is "right", to put the TOC on the right side of the page.
+* `page-toc` - if this is false, then no TOCs will be generated for the page at all.
+
+NOTE: The special macro `{section-toc}` can be used anywhere in a page to create an "In this Section" TOC covering only the sub-headings in the same section. `:page-toc: false` will also prevent this macro from working, so if you want no "top level" TOC but do want section TOCs, use `:page-toclevels: 0`.
+
+=== Layouts
+
+Layouts define the "look and feel" of each page.
+
+Jekyll uses Liquid for page templates.
+
+For our implementation of Jekyll, layouts are found in `solr-ref-guide/src/_layouts`.
+
+=== Includes
+
+Include files are usually small files that are pulled into a layout when a page is being built. They are Liquid templates that define an area of the page. This allows flexibility across layouts - all pages can have the same header without duplicating code, but different pages could have different menu options.
+
+Include files that we use define the top navigation, the page header, the page footer, and tables of contents.
+
+For our implementation of Jekyll, include files are found in `solr-ref-guide/src/_includes`.
+
+=== Data Files
+
+Data files include data, such as lists, that should be available to each page. The left-hand navigation is an example of a data file.
+
+For our implementation of Jekyll, data files are found in `solr-ref-guide/src/_data`.
+
+== Building the HTML Site
+
+An Ant target `build-site` will build the full HTML site. This target builds the navigation for the left-hand menu, and converts all `.adoc` files to `.html`, including navigation and inter-document links.
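+
+Assuming a local Ant installation, a sketch of the invocation from the `solr/solr-ref-guide` directory:
+
+[source,bash]
+----
+cd solr/solr-ref-guide
+ant build-site
+----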

