lucene-commits mailing list archives

From hoss...@apache.org
Subject lucene-solr:jira/solr-10290: manual cleanup of the T files
Date Sat, 06 May 2017 00:17:18 GMT
Repository: lucene-solr
Updated Branches:
  refs/heads/jira/solr-10290 287ffe43c -> 790541615


manual cleanup of the T files


Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/79054161
Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/79054161
Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/79054161

Branch: refs/heads/jira/solr-10290
Commit: 7905416153ff5726d97fde2774f13224c42759c1
Parents: 287ffe4
Author: Chris Hostetter <hossman@apache.org>
Authored: Fri May 5 17:16:57 2017 -0700
Committer: Chris Hostetter <hossman@apache.org>
Committed: Fri May 5 17:17:09 2017 -0700

----------------------------------------------------------------------
 .../src/taking-solr-to-production.adoc          | 48 +++++++-------
 .../src/the-dismax-query-parser.adoc            |  8 +--
 .../src/the-extended-dismax-query-parser.adoc   | 18 +++---
 .../src/the-query-elevation-component.adoc      |  8 ++-
 .../src/the-standard-query-parser.adoc          | 21 +++---
 .../solr-ref-guide/src/the-stats-component.adoc |  6 +-
 .../src/the-term-vector-component.adoc          |  6 +-
 .../solr-ref-guide/src/the-terms-component.adoc |  2 +-
 .../src/the-well-configured-solr-instance.adoc  |  6 +-
 solr/solr-ref-guide/src/thread-dump.adoc        |  7 +-
 solr/solr-ref-guide/src/tokenizers.adoc         |  4 +-
 .../transforming-and-indexing-custom-json.adoc  |  6 +-
 .../src/transforming-result-documents.adoc      | 68 ++++++++++----------
 13 files changed, 109 insertions(+), 99 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/79054161/solr/solr-ref-guide/src/taking-solr-to-production.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/taking-solr-to-production.adoc b/solr/solr-ref-guide/src/taking-solr-to-production.adoc
index 34bfdd6..f81c245 100644
--- a/solr/solr-ref-guide/src/taking-solr-to-production.adoc
+++ b/solr/solr-ref-guide/src/taking-solr-to-production.adoc
@@ -17,12 +17,12 @@ We recommend separating your live Solr files, such as logs and index files,
from
 [[TakingSolrtoProduction-SolrInstallationDirectory]]
 ==== Solr Installation Directory
 
-By default, the service installation script will extract the distribution archive into `/opt`.
You can change this location using the `-i` option when running the installation script. The
script will also create a symbolic link to the versioned directory of Solr. For instance,
if you run the installation script for Solr X.0.0, then the following directory structure
will be used:
+By default, the service installation script will extract the distribution archive into `/opt`.
You can change this location using the `-i` option when running the installation script. The
script will also create a symbolic link to the versioned directory of Solr. For instance,
if you run the installation script for Solr {solr-docs-version}.0, then the following directory
structure will be used:
 
-[source,plain]
+[source,plain,subs="attributes"]
 ----
-/opt/solr-X.0.0
-/opt/solr -> /opt/solr-X.0.0
+/opt/solr-{solr-docs-version}.0
+/opt/solr -> /opt/solr-{solr-docs-version}.0
 ----
 
 Using a symbolic link insulates any scripts from being dependent on the specific Solr version.
If, down the road, you need to upgrade to a later version of Solr, you can just update the
symbolic link to point to the upgraded version of Solr. We’ll use `/opt/solr` to refer to
the Solr installation directory in the remaining sections of this page.
@@ -42,37 +42,37 @@ You are now ready to run the installation script.
 [[TakingSolrtoProduction-RuntheSolrInstallationScript]]
 === Run the Solr Installation Script
 
-To run the script, you'll need to download the latest Solr distribution archive and then
do the following (NOTE: replace `solr-X.Y.Z` with the actual version number):
+To run the script, you'll need to download the latest Solr distribution archive and then
do the following:
 
-[source,plain]
+[source,bash,subs="attributes"]
 ----
-$ tar xzf solr-X.Y.Z.tgz solr-X.Y.Z/bin/install_solr_service.sh --strip-components=2
+$ tar xzf solr-{solr-docs-version}.0.tgz solr-{solr-docs-version}.0/bin/install_solr_service.sh
--strip-components=2
 ----
 
 The previous command extracts the `install_solr_service.sh` script from the archive into
the current directory. If installing on Red Hat, please make sure *lsof* is installed before
running the Solr installation script (`sudo yum install lsof`). The installation script must
be run as root:
 
-[source,plain]
+[source,bash,subs="attributes"]
 ----
-$ sudo bash ./install_solr_service.sh solr-X.Y.Z.tgz
+$ sudo bash ./install_solr_service.sh solr-{solr-docs-version}.0.tgz
 ----
 
 By default, the script extracts the distribution archive into `/opt`, configures Solr to
write files into `/var/solr`, and runs Solr as the `solr` user. Consequently, the following
command produces the same result as the previous command:
 
-[source,plain]
+[source,bash,subs="attributes"]
 ----
-$ sudo bash ./install_solr_service.sh solr-X.Y.Z.tgz -i /opt -d /var/solr -u solr -s solr
-p 8983
+$ sudo bash ./install_solr_service.sh solr-{solr-docs-version}.0.tgz -i /opt -d /var/solr
-u solr -s solr -p 8983
 ----
 
 You can customize the service name, installation directories, port, and owner using options
passed to the installation script. To see available options, simply do:
 
-[source,plain]
+[source,bash]
 ----
 $ sudo bash ./install_solr_service.sh -help
 ----
 
 Once the script completes, Solr will be installed as a service and running in the background
on your server (on port 8983). To verify, you can do:
 
-[source,plain]
+[source,bash]
 ----
 $ sudo service solr status
 ----
@@ -84,7 +84,7 @@ We'll cover some additional configuration settings you can make to fine-tune
you
 [[TakingSolrtoProduction-SolrHomeDirectory]]
 ==== Solr Home Directory
 
-The Solr home directory (not to be confused with the Solr installation directory) is where
Solr manages core directories with index files. By default, the installation script uses `/var/solr/data`.
If the `-d` option is used on the install script, then this will change to the `data` subdirectory
in the location given to the -d option. Take a moment to inspect the contents of the Solr
home directory on your system. If you do not<<using-zookeeper-to-manage-configuration-files.adoc#using-zookeeper-to-manage-configuration-files,store
`solr.xml` in ZooKeeper>>, the home directory must contain a `solr.xml` file. When Solr
starts up, the Solr Control Script passes the location of the home directory using the `-Dsolr.solr.home
`system property.
+The Solr home directory (not to be confused with the Solr installation directory) is where
Solr manages core directories with index files. By default, the installation script uses `/var/solr/data`.
If the `-d` option is used on the install script, then this will change to the `data` subdirectory
in the location given to the -d option. Take a moment to inspect the contents of the Solr
home directory on your system. If you do not <<using-zookeeper-to-manage-configuration-files.adoc#using-zookeeper-to-manage-configuration-files,store
`solr.xml` in ZooKeeper>>, the home directory must contain a `solr.xml` file. When Solr
starts up, the Solr Control Script passes the location of the home directory using the `-Dsolr.solr.home=...`
system property.
 
 [[TakingSolrtoProduction-Environmentoverridesincludefile]]
 ==== Environment overrides include file
@@ -126,9 +126,9 @@ RUNAS=solr
 
 The `SOLR_INSTALL_DIR` and `SOLR_ENV` variables should be self-explanatory. The `RUNAS` variable
sets the owner of the Solr process, such as `solr`; if you don’t set this value, the script
will run Solr as **root**, which is not recommended for production. You can use the `/etc/init.d/solr`
script to start Solr by doing the following as root:
 
-[source,plain]
+[source,bash]
 ----
-# service solr start
+$ service solr start
 ----
 
 The `/etc/init.d/solr` script also supports the **stop**, **restart**, and *status* commands.
Please keep in mind that the init script that ships with Solr is very basic and is intended
to show you how to setup Solr as a service. However, it’s also common to use more advanced
tools like *supervisord* or *upstart* to control Solr as a service on Linux. While showing
how to integrate Solr with tools like supervisord is beyond the scope of this guide, the `init.d/solr`
script should provide enough guidance to help you get started. Also, the installation script
sets the Solr service to start automatically when the host machine initializes.
@@ -138,7 +138,7 @@ The `/etc/init.d/solr` script also supports the **stop**, **restart**,
and *stat
 
 In the next section, we cover some additional environment settings to help you fine-tune
your production setup. However, before we move on, let's review what we've achieved thus far.
Specifically, you should be able to control Solr using `/etc/init.d/solr`. Please verify the
following commands work with your setup:
 
-[source,plain]
+[source,bash]
 ----
 $ sudo service solr restart
 $ sudo service solr status
@@ -146,7 +146,7 @@ $ sudo service solr status
 
 The status command should give some basic information about the running Solr node that looks
similar to:
 
-[source,plain]
+[source,bash]
 ----
 Solr process PID running on port 8983
 {
@@ -202,7 +202,7 @@ ZK_HOST=zk1,zk2,zk3/solr
 
 Before using a chroot for the first time, you need to create the root path (znode) in ZooKeeper
by using the <<solr-control-script-reference.adoc#solr-control-script-reference,Solr
Control Script>>. We can use the mkroot command for that:
 
-[source,plain]
+[source,bash]
 ----
 $ bin/solr zk mkroot /solr -z <ZK_node>:<ZK_PORT>
 ----
@@ -231,7 +231,7 @@ Setting the hostname of the Solr server is recommended, especially when
running
 
 Solr allows configuration properties to be overridden using Java system properties passed
at startup using the `-Dproperty=value` syntax. For instance, in `solrconfig.xml`, the default
auto soft commit settings are set to:
 
-[source,plain]
+[source,xml]
 ----
 <autoSoftCommit>
   <maxTime>${solr.autoSoftCommit.maxTime:-1}</maxTime>
@@ -240,7 +240,7 @@ Solr allows configuration properties to be overridden using Java system
properti
 
 In general, whenever you see a property in a Solr configuration file that uses the `${solr.PROPERTY:DEFAULT_VALUE}`
syntax, then you know it can be overridden using a Java system property. For instance, to
set the maxTime for soft-commits to be 10 seconds, then you can start Solr with `-Dsolr.autoSoftCommit.maxTime=10000`,
such as:
 
-[source,plain]
+[source,bash]
 ----
 $ bin/solr start -Dsolr.autoSoftCommit.maxTime=10000
 ----
@@ -273,14 +273,14 @@ Because of the potential garbage collection issues and the particular
issues tha
 
 If your use case requires multiple instances, at a minimum you will need unique Solr home
directories for each node you want to run; ideally, each home should be on a different physical
disk so that multiple Solr nodes don’t have to compete with each other when accessing files
on disk. Having different Solr home directories implies that you’ll need a different include
file for each node. Moreover, if using the `/etc/init.d/solr` script to control Solr as a
service, then you’ll need a separate script for each node. The easiest approach is to use
the service installation script to add multiple services on the same host, such as:
 
-[source,plain]
+[source,bash,subs="attributes"]
 ----
-$ sudo bash ./install_solr_service.sh solr-X.Y.Z.tgz -s solr2 -p 8984
+$ sudo bash ./install_solr_service.sh solr-{solr-docs-version}.0.tgz -s solr2 -p 8984
 ----
 
 The command shown above will add a service named `solr2` running on port 8984 using `/var/solr2`
for writable (aka "live") files; the second server will still be owned and run by the `solr`
user and will use the Solr distribution files in `/opt`. After installing the solr2 service,
verify it works correctly by doing:
 
-[source,plain]
+[source,bash]
 ----
 $ sudo service solr2 restart
 $ sudo service solr2 status
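
The hunks above replace the hard-coded `solr-X.Y.Z` placeholders with the `{solr-docs-version}` attribute and enable `subs="attributes"`, so the rendered guide shows a concrete version string. A minimal sketch of how the rendered commands would read, assuming (purely for illustration) that the attribute resolves to a 7.0 release:

[source,bash]
----
# "7.0" is illustrative only; it stands in for whatever {solr-docs-version} resolves to at build time.
$ tar xzf solr-7.0.0.tgz solr-7.0.0/bin/install_solr_service.sh --strip-components=2
$ sudo bash ./install_solr_service.sh solr-7.0.0.tgz -i /opt -d /var/solr -u solr -s solr -p 8983
$ sudo service solr status
----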

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/79054161/solr/solr-ref-guide/src/the-dismax-query-parser.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/the-dismax-query-parser.adoc b/solr/solr-ref-guide/src/the-dismax-query-parser.adoc
index 32e28d5..af87dd6 100644
--- a/solr/solr-ref-guide/src/the-dismax-query-parser.adoc
+++ b/solr/solr-ref-guide/src/the-dismax-query-parser.adoc
@@ -10,9 +10,9 @@ The DisMax query parser supports an extremely simplified subset of the Lucene
Qu
 
 Interested in the technical concept behind the DisMax name? DisMax stands for Maximum Disjunction.
Here's a definition of a Maximum Disjunction or "DisMax" query:
 
-___________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________
+____
 A query that generates the union of documents produced by its subqueries, and that scores
each document with the maximum score for that document as produced by any subquery, plus a
tie breaking increment for any additional matching subqueries.
-___________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________
+____
 
 Whether or not you remember this explanation, do remember that the DisMax Query Parser was
primarily designed to be easy to use and to accept almost any input without returning an error.
 
@@ -21,7 +21,7 @@ Whether or not you remember this explanation, do remember that the DisMax
Query
 
 In addition to the common request parameter, highlighting parameters, and simple facet parameters,
the DisMax query parser supports the parameters described below. Like the standard query parser,
the DisMax query parser allows default parameter values to be specified in `solrconfig.xml`,
or overridden by query-time values in the request.
 
-[width="100%",cols="50%,50%",options="header",]
+[width="100%",cols="20%,80%",options="header",]
 |===
 |Parameter |Description
 |<<TheDisMaxQueryParser-TheqParameter,q>> |Defines the raw input strings for
the query.
@@ -191,7 +191,7 @@ Note that this instance is also configured with a default field list,
which can
 
 You can also override which fields are searched on and how much boost each field gets.
 
-`http://localhost:8983/solr/techproducts/select?defType=dismax&q=video&qf=features^20.0+text^0.3`
+`http://localhost:8983/solr/techproducts/select?defType=dismax&q=video&qf=features\^20.0+text^0.3`
 
 You can boost results that have a field that matches a specific value.
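
For the `qf` boosting example edited above, a hedged sketch of issuing the same request from the shell, assuming the `techproducts` example collection is running locally:

[source,bash]
----
# The backslash added before ^ in the .adoc source is only AsciiDoc escaping; the actual request is unchanged.
curl 'http://localhost:8983/solr/techproducts/select?defType=dismax&q=video&qf=features^20.0+text^0.3'
----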
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/79054161/solr/solr-ref-guide/src/the-extended-dismax-query-parser.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/the-extended-dismax-query-parser.adoc b/solr/solr-ref-guide/src/the-extended-dismax-query-parser.adoc
index 20d198e..b13287d 100644
--- a/solr/solr-ref-guide/src/the-extended-dismax-query-parser.adoc
+++ b/solr/solr-ref-guide/src/the-extended-dismax-query-parser.adoc
@@ -2,11 +2,13 @@
 :page-shortname: the-extended-dismax-query-parser
 :page-permalink: the-extended-dismax-query-parser.html
 
-The Extended DisMax (eDisMax) query parser is an improved version of the <<the-dismax-query-parser.adoc#the-dismax-query-parser,DisMax
query parser>>. In addition to supporting all the DisMax query parser parameters, Extended
Dismax:
+The Extended DisMax (eDisMax) query parser is an improved version of the <<the-dismax-query-parser.adoc#the-dismax-query-parser,DisMax
query parser>>.
+
+In addition to supporting all the DisMax query parser parameters, Extended Dismax:
 
 * supports the <<the-standard-query-parser.adoc#the-standard-query-parser,full Lucene
query parser syntax>>.
 * supports queries such as AND, OR, NOT, -, and +.
-* treats "and" and "or" as "AND" and "OR" in Lucene syntax mode.respects the 'magic field'
names `_val_` and `_query_`. These are not a real fields in the Schema, but if used it helps
do special things (like a function query in the case of `_val_` or a nested query in the case
of `_query_`). If `_val_` is used in a term or phrase query, the value is parsed as a function.
+* treats "and" and "or" as "AND" and "OR" in Lucene syntax mode.respects the 'magic field'
names `\_val_` and `\_query_`. These are not a real fields in the Schema, but if used it helps
do special things (like a function query in the case of `\_val_` or a nested query in the
case of `\_query_`). If `\_val_` is used in a term or phrase query, the value is parsed as
a function.
 * includes improved smart partial escaping in the case of syntax errors; fielded queries,
+/-, and phrase queries are still supported in this mode.
 * improves proximity boosting by using word shingles; you do not need the query to match
all words in the document before proximity boosting is applied.
 * includes advanced stopword handling: stopwords are not required in the mandatory part of
the query but are still used in the proximity boosting part. If a query consists of all stopwords,
such as "to be or not to be", then all words are required.
@@ -74,7 +76,7 @@ A Boolean parameter indicating if the `StopFilterFactory` configured in
the quer
 [[TheExtendedDisMaxQueryParser-TheufParameter]]
 === The `uf` Parameter
 
-Specifies which schema fields the end user is allowed to explicitly query. This parameter
supports wildcards. The default is to allow all fields, equivalent to `uf=*`. To allow only
title field, use `uf=title`. To allow title and all fields ending with _s, use `uf=title,*_s`.
To allow all fields except title, use `uf=*-title`. To disallow all fielded searches, use
`uf=-*`.
+Specifies which schema fields the end user is allowed to explicitly query. This parameter
supports wildcards. The default is to allow all fields, equivalent to `uf=\*`. To allow only
title field, use `uf=title`. To allow title and all fields ending with '_s', use `uf=title,*_s`.
To allow all fields except title, use `uf=*,-title`. To disallow all fielded searches, use
`uf=-*`.
 
 [[TheExtendedDisMaxQueryParser-Fieldaliasingusingper-fieldqfoverrides]]
 === Field aliasing using per-field `qf` overrides
@@ -167,7 +169,7 @@ With these parameters, the Dismax Query Parser generates a query that
looks some
 
 But it also generates another query that will only be used for boosting results:
 
-[source,java]
+[source,plain]
 ----
 field1:"foo bar"^50 OR field2:"foo bar"^20
 ----
@@ -206,11 +208,11 @@ Finally, in addition to the phrase fields (`pf`) parameter, `edismax`
also suppo
 // OLD_CONFLUENCE_ID: TheExtendedDisMaxQueryParser-Usingthe'magicfields'_val_and_query_
 
 [[TheExtendedDisMaxQueryParser-Usingthe_magicfields__val_and_query_]]
-== Using the 'magic fields' _val_ and _query_
+== Using the 'magic fields' `\_val_` and `\_query_`
 
-The Solr Query Parser's use of `_val_` and `_query_` differs from the Lucene Query Parser
in the following ways:
+The Solr Query Parser's use of `\_val_` and `\_query_` differs from the Lucene Query Parser
in the following ways:
 
-* If the magic field name `_val_` is used in a term or phrase query, the value is parsed
as a function.
+* If the magic field name `\_val_` is used in a term or phrase query, the value is parsed
as a function.
 
 * It provides a hook into http://wiki.apache.org/solr/FunctionQuery[`FunctionQuery`] syntax.
Quotes are necessary to encapsulate the function when it includes parentheses. For example:
 +
@@ -242,6 +244,6 @@ createdate:[1976-03-06T23:59:59.999Z/YEAR TO 1976-03-06T23:59:59.999Z]
 [IMPORTANT]
 ====
 
-TO must be uppercase, or Solr will report a 'Range Group' error.
+`TO` must be uppercase, or Solr will report a 'Range Group' error.
 
 ====
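
As a hedged way to sanity-check the uppercase `TO` requirement noted above, assuming the `techproducts` example collection (the date field name is illustrative):

[source,bash]
----
# Uppercase TO inside the range; brackets and the space are percent-encoded for the URL.
curl 'http://localhost:8983/solr/techproducts/select?q=manufacturedate_dt:%5B2006-01-01T00:00:00Z%20TO%20NOW%5D'
----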

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/79054161/solr/solr-ref-guide/src/the-query-elevation-component.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/the-query-elevation-component.adoc b/solr/solr-ref-guide/src/the-query-elevation-component.adoc
index 6005189..227ee2a 100644
--- a/solr/solr-ref-guide/src/the-query-elevation-component.adoc
+++ b/solr/solr-ref-guide/src/the-query-elevation-component.adoc
@@ -2,7 +2,9 @@
 :page-shortname: the-query-elevation-component
 :page-permalink: the-query-elevation-component.html
 
-The https://wiki.apache.org/solr/QueryElevationComponent[Query Elevation Component] lets
you configure the top results for a given query regardless of the normal Lucene scoring. This
is sometimes called "sponsored search," "editorial boosting," or "best bets." This component
matches the user query text to a configured map of top results. The text can be any string
or non-string IDs, as long as it's indexed. Although this component will work with any QueryParser,
it makes the most sense to use with <<the-dismax-query-parser.adoc#the-dismax-query-parser,DisMax>>
or <<the-extended-dismax-query-parser.adoc#the-extended-dismax-query-parser,eDisMax>>.
+The https://wiki.apache.org/solr/QueryElevationComponent[Query Elevation Component] lets
you configure the top results for a given query regardless of the normal Lucene scoring.
+
+This is sometimes called "sponsored search," "editorial boosting," or "best bets." This component
matches the user query text to a configured map of top results. The text can be any string
or non-string IDs, as long as it's indexed. Although this component will work with any QueryParser,
it makes the most sense to use with <<the-dismax-query-parser.adoc#the-dismax-query-parser,DisMax>>
or <<the-extended-dismax-query-parser.adoc#the-extended-dismax-query-parser,eDisMax>>.
 
 The https://wiki.apache.org/solr/QueryElevationComponent[Query Elevation Component] is supported
by distributed searching.
 
@@ -107,11 +109,11 @@ You can force Solr to return only the results specified in the elevation
file by
 
 The `[elevated]` <<transforming-result-documents.adoc#transforming-result-documents,Document
Transformer>> can be used to annotate each document with information about whether or
not it was elevated:
 
-http://localhost:8983/solr/techproducts/elevate?q=ipod&df=text&fl=id,%5Belevated%5D[`http://localhost:8983/solr/techproducts/elevate?q=ipod&df=text&fl=id,[elevated]`]
+`\http://localhost:8983/solr/techproducts/elevate?q=ipod&df=text&fl=id,[elevated]`
 
 Likewise, it can be helpful when troubleshooting to see all matching documents – including
documents that the elevation configuration would normally exclude. This is possible by using
the `markExcludes=true` parameter, and then using the `[excluded]` transformer:
 
-http://localhost:8983/solr/techproducts/elevate?q=ipod&df=text&markExcludes=true&fl=id,%5Belevated%5D,%5Bexcluded%5D[`http://localhost:8983/solr/techproducts/elevate?q=ipod&df=text&markExcludes=true&fl=id,[elevated],[excluded]`]
+`\http://localhost:8983/solr/techproducts/elevate?q=ipod&df=text&markExcludes=true&fl=id,[elevated],[excluded]`
 
 [[TheQueryElevationComponent-TheelevateIdsandexcludeIdsParameters]]
 === The `elevateIds` and `excludeIds` Parameters

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/79054161/solr/solr-ref-guide/src/the-standard-query-parser.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/the-standard-query-parser.adoc b/solr/solr-ref-guide/src/the-standard-query-parser.adoc
index 6e13ad1..0902648 100644
--- a/solr/solr-ref-guide/src/the-standard-query-parser.adoc
+++ b/solr/solr-ref-guide/src/the-standard-query-parser.adoc
@@ -63,10 +63,7 @@ Results:
 
 Here's an example of a query with a limited field list.
 
-[source,html]
-----
-http://localhost:8983/solr/techproducts/select?q=id:SP2514N&fl=id+name
-----
+`http://localhost:8983/solr/techproducts/select?q=id:SP2514N&fl=id+name`
 
 Results:
 
@@ -153,20 +150,20 @@ The distance referred to here is the number of term movements needed
to match th
 [[TheStandardQueryParser-RangeSearches]]
 === Range Searches
 
-A range search specifies a range of values for a field (a range with an upper bound and a
lower bound). The query matches documents whose values for the specified field or fields fall
within the range. Range queries can be inclusive or exclusive of the upper and lower bounds.
Sorting is done lexicographically, except on numeric fields. For example, the range query
below matches all documents whose `mod_date` field has a value between 20020101 and 20030101,
inclusive.
+A range search specifies a range of values for a field (a range with an upper bound and a
lower bound). The query matches documents whose values for the specified field or fields fall
within the range. Range queries can be inclusive or exclusive of the upper and lower bounds.
Sorting is done lexicographically, except on numeric fields. For example, the range query
below matches all documents whose `popularity` field has a value between 52 and 10,000, inclusive.
 
-`mod_date:[20020101 TO 20030101]`
+`popularity:[52 TO 10000]`
 
 Range queries are not limited to date fields or even numerical fields. You could also use
range queries with non-date fields:
 
-`title:{Aida TO Carmen`}
+`title:{Aida TO Carmen}`
 
 This will find all documents whose titles are between Aida and Carmen, but not including
Aida and Carmen.
 
 The brackets around a query determine its inclusiveness.
 
-* Square brackets [ ] denote an inclusive range query that matches values including the upper
and lower bound.
-* Curly brackets \{ } denote an exclusive range query that matches values between the upper
and lower bounds, but excluding the upper and lower bounds themselves.
+* Square brackets `[` & `]` denote an inclusive range query that matches values including
the upper and lower bound.
+* Curly brackets `{` & `}` denote an exclusive range query that matches values between
the upper and lower bounds, but excluding the upper and lower bounds themselves.
 * You can mix these types so one end of the range is inclusive and the other is exclusive.
Here's an example: `count:{1 TO 10]`
 
 // OLD_CONFLUENCE_ID: TheStandardQueryParser-BoostingaTermwith^
@@ -319,11 +316,11 @@ For example, to search for documents that contain "jakarta apache" but
not "Apac
 
 Solr gives the following characters special meaning when they appear in a query:
 
-+ - && || ! ( ) \{ } [ ] ^ " ~ * ? : /
+`+` `-` `&&` `||` `!` `(` `)` `{` `}` `[` `]` `^` `"` `~` `*` `?` `:` `/`
 
-To make Solr interpret any of these characters literally, rather as a special character,
precede the character with a backslash character \. For example, to search for (1+1):2 without
having Solr interpret the plus sign and parentheses as special characters for formulating
a sub-query with two terms, escape the characters by preceding each one with a backslash:
+To make Solr interpret any of these characters literally, rather than as a special character,
precede the character with a backslash character `\`. For example, to search for (1+1):2 without
having Solr interpret the plus sign and parentheses as special characters for formulating
a sub-query with two terms, escape the characters by preceding each one with a backslash:
 
-[source,html]
+[source,plain]
 ----
 \(1\+1\)\:2
 ----
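
A hedged shell version of the escaping example above, assuming the `techproducts` example collection; every character of `\(1\+1\)\:2` is percent-encoded so it survives the URL:

[source,bash]
----
# \ ( + ) : become %5C %28 %2B %29 %3A respectively.
curl 'http://localhost:8983/solr/techproducts/select?q=%5C%281%5C%2B1%5C%29%5C%3A2'
----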

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/79054161/solr/solr-ref-guide/src/the-stats-component.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/the-stats-component.adoc b/solr/solr-ref-guide/src/the-stats-component.adoc
index e7db6f9..246a369 100644
--- a/solr/solr-ref-guide/src/the-stats-component.adoc
+++ b/solr/solr-ref-guide/src/the-stats-component.adoc
@@ -42,9 +42,9 @@ This parameter can be specified using per-filed override (ie: `f.<field>.stats.c
 [[TheStatsComponent-Example]]
 === Example
 
-The query below demonstrates computing stats against two different fields numeric fields,
as well as stats over the results of a a 'termfreq()' function call using the 'text' field:
+The query below demonstrates computing stats against two different numeric fields,
as well as stats over the results of a `termfreq()` function call using the `text` field:
 
-`http://localhost:8983/solr/techproducts/select?q=*:*&stats=true&stats.field={!func}termfreq('text','memory')&stats.field=price&stats.field=popularity&rows=0&indent=true`
+`\http://localhost:8983/solr/techproducts/select?q=*:*&stats=true&stats.field={!func}termfreq('text','memory')&stats.field=price&stats.field=popularity&rows=0&indent=true`
 
 [source,xml]
 ----
@@ -153,7 +153,7 @@ Additional "Expert" local params are supported in some cases for affecting
the b
 
 Here we compute some statistics for the price field. The min, max, mean, 90th, and 99th percentile
price values are computed against all products that are in stock (`q=*:*` and `fq=inStock:true`),
and independently all of the default statistics are computed against all products regardless
of whether they are in stock or not (by excluding that filter).
 
-`http://localhost:8983/solr/techproducts/select?q=*:*&fq={!tag=stock_check}inStock:true&stats=true&stats.field={!ex=stock_check+key=instock_prices+min=true+max=true+mean=true+percentiles='90,99'}price&stats.field={!key=all_prices}price&rows=0&indent=true`
+`\http://localhost:8983/solr/techproducts/select?q=*:*&fq={!tag=stock_check}inStock:true&stats=true&stats.field={!ex=stock_check+key=instock_prices+min=true+max=true+mean=true+percentiles='90,99'}price&stats.field={!key=all_prices}price&rows=0&indent=true`
 
 [source,xml]
 ----

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/79054161/solr/solr-ref-guide/src/the-term-vector-component.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/the-term-vector-component.adoc b/solr/solr-ref-guide/src/the-term-vector-component.adoc
index af4d072..9b5a355 100644
--- a/solr/solr-ref-guide/src/the-term-vector-component.adoc
+++ b/solr/solr-ref-guide/src/the-term-vector-component.adoc
@@ -37,7 +37,7 @@ A request handler must then be configured to use this component name. In
the `te
 </requestHandler>
 ----
 
-Once your handler is defined, you may use in conjunction with any schema (that has a `uniqueKeyField)`
to fetch term vectors for fields configured with the `termVector` attribute, such as in the
`techproducts`for example:
+Once your handler is defined, you may use it in conjunction with any schema (that has a `uniqueKeyField`)
to fetch term vectors for fields configured with the `termVector` attribute, such as in the
`techproducts` sample schema. For example:
 
 [source,xml]
 ----
@@ -56,7 +56,7 @@ Once your handler is defined, you may use in conjunction with any schema
(that h
 
 The example below shows an invocation of this component using the above configuration:
 
-`http://localhost:8983/solr/techproducts/tvrh?q=*%3A*&start=0&rows=10&fl=id,includes`
+`\http://localhost:8983/solr/techproducts/tvrh?q=*:*&start=0&rows=10&fl=id,includes`
 
 [source,xml]
 ----
@@ -113,7 +113,7 @@ The example below shows an invocation of this component using the above
configur
 
 The example below shows the available request parameters for this component:
 
-`http://localhost:8983/solr/techproducts/tvrh?q=includes:[* TO *]&rows=10&indent=true&tv=true&tv.tf=true&tv.df=true&tv.positions=true&tv.offsets=true&tv.payloads=true&tv.fl=includes`
+`\http://localhost:8983/solr/techproducts/tvrh?q=includes:[* TO *]&rows=10&indent=true&tv=true&tv.tf=true&tv.df=true&tv.positions=true&tv.offsets=true&tv.payloads=true&tv.fl=includes`
 
 // TODO: This table has cells that won't work with PDF: https://github.com/ctargett/refguide-asciidoc-poc/issues/13
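
The term vector URLs cleaned up above can be exercised directly; a minimal sketch, assuming the `techproducts` example with its `/tvrh` handler configured as shown earlier in the page:

[source,bash]
----
# Basic term vector request over the includes field, matching the first example above.
curl 'http://localhost:8983/solr/techproducts/tvrh?q=*:*&start=0&rows=10&fl=id,includes'
----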
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/79054161/solr/solr-ref-guide/src/the-terms-component.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/the-terms-component.adoc b/solr/solr-ref-guide/src/the-terms-component.adoc
index 54a7753..fb33951 100644
--- a/solr/solr-ref-guide/src/the-terms-component.adoc
+++ b/solr/solr-ref-guide/src/the-terms-component.adoc
@@ -262,7 +262,7 @@ You can use the parameter `omitHeader=true` to omit the response header
from the
 
 Result:
 
-[source,plain]
+[source,json]
 ----
 {
   "terms": {

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/79054161/solr/solr-ref-guide/src/the-well-configured-solr-instance.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/the-well-configured-solr-instance.adoc b/solr/solr-ref-guide/src/the-well-configured-solr-instance.adoc
index 3995efb..4408811 100644
--- a/solr/solr-ref-guide/src/the-well-configured-solr-instance.adoc
+++ b/solr/solr-ref-guide/src/the-well-configured-solr-instance.adoc
@@ -3,7 +3,11 @@
 :page-permalink: the-well-configured-solr-instance.html
 :page-children: configuring-solrconfig-xml, solr-cores-and-solr-xml, configuration-apis,
implicit-requesthandlers, solr-plugins, jvm-settings
 
-This section tells you how to fine-tune your Solr instance for optimum performance. This
section covers the following topics:
+This section tells you how to fine-tune your Solr instance for optimum performance.
+
+// TODO: this page is basically a large TOC - do we want to keep it or reword it?
+
+This section covers the following topics:
 
 <<configuring-solrconfig-xml.adoc#configuring-solrconfig-xml,Configuring solrconfig.xml>>:
Describes how to work with the main configuration file for Solr, `solrconfig.xml`, covering
the major sections of the file.
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/79054161/solr/solr-ref-guide/src/thread-dump.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/thread-dump.adoc b/solr/solr-ref-guide/src/thread-dump.adoc
index 9d7abe1..2b66841 100644
--- a/solr/solr-ref-guide/src/thread-dump.adoc
+++ b/solr/solr-ref-guide/src/thread-dump.adoc
@@ -2,8 +2,11 @@
 :page-shortname: thread-dump
 :page-permalink: thread-dump.html
 
-The Thread Dump screen lets you inspect the currently active threads on your server. Each
thread is listed and access to the stacktraces is available where applicable. Icons to the
left indicate the state of the thread: for example, threads with a green check-mark in a green
circle are in a "RUNNABLE" state. On the right of the thread name, a down-arrow means you
can expand to see the stacktrace for that thread.
+The Thread Dump screen lets you inspect the currently active threads on your server.
 
+Each thread is listed and access to the stacktraces is available where applicable. Icons
to the left indicate the state of the thread: for example, threads with a green check-mark
in a green circle are in a "RUNNABLE" state. On the right of the thread name, a down-arrow
means you can expand to see the stacktrace for that thread.
+
+.List of Threads
 image::images/thread-dump/thread_dump_1.png[image,width=484,height=250]
 
 
@@ -22,9 +25,9 @@ When you move your cursor over a thread name, a box floats over the name
with th
 
 When you click on one of the threads that can be expanded, you'll see the stacktrace, as
in the example below:
 
+.Inspecting a Thread
 image::images/thread-dump/thread_dump_2.png[image,width=453,height=250]
 
 
-_Inspecting a thread_
 
 You can also check the *Show all Stacktraces* button to automatically enable expansion for
all threads.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/79054161/solr/solr-ref-guide/src/tokenizers.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/tokenizers.adoc b/solr/solr-ref-guide/src/tokenizers.adoc
index aa982b9..0c0cf31 100644
--- a/solr/solr-ref-guide/src/tokenizers.adoc
+++ b/solr/solr-ref-guide/src/tokenizers.adoc
@@ -2,6 +2,8 @@
 :page-shortname: tokenizers
 :page-permalink: tokenizers.html
 
+Tokenizers are responsible for breaking field data into lexical units, or _tokens_.
+
 You configure the tokenizer for a text field type in `schema.xml` with a `<tokenizer>`
element, as a child of `<analyzer>`:
 
 [source,xml]
@@ -154,7 +156,7 @@ Tokenizes the input stream by delimiting at non-letters and then converting
all
 </analyzer>
 ----
 
-*In:* "I just *LOVE* my iPhone!"
+*In:* "I just \*LOVE* my iPhone!"
 
 *Out:* "i", "just", "love", "my", "iphone"
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/79054161/solr/solr-ref-guide/src/transforming-and-indexing-custom-json.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/transforming-and-indexing-custom-json.adoc b/solr/solr-ref-guide/src/transforming-and-indexing-custom-json.adoc
index 0eb5b0b..667dbc6 100644
--- a/solr/solr-ref-guide/src/transforming-and-indexing-custom-json.adoc
+++ b/solr/solr-ref-guide/src/transforming-and-indexing-custom-json.adoc
@@ -144,13 +144,13 @@ If you are working in <<schemaless-mode.adoc#schemaless-mode,Schemaless
Mode>>,
 [[TransformingandIndexingCustomJSON-Wildcards]]
 == Wildcards
 
-Instead of specifying all the field names explicitly, it is possible to specify wildcards
to map fields automatically. There are two restrictions: wildcards can only be used at the
end of the `json-path`, and the split path cannot use wildcards. A single asterisk "*" maps
only to direct children, and a double asterisk "**" maps recursively to all descendants. The
following are example wildcard path mappings:
+Instead of specifying all the field names explicitly, it is possible to specify wildcards
to map fields automatically. There are two restrictions: wildcards can only be used at the
end of the `json-path`, and the split path cannot use wildcards. A single asterisk `\*` maps
only to direct children, and a double asterisk `\*\*` maps recursively to all descendants.
The following are example wildcard path mappings:
 
 * `f=$FQN:/**`: maps all fields to the fully qualified name (`$FQN`) of the JSON field. The
fully qualified name is obtained by concatenating all the keys in the hierarchy with a period
(`.`) as a delimiter. This is the default behavior if no `f` path mappings are specified.
 * `f=/docs/*`: maps all the fields under docs and in the name as given in json
 * `f=/docs/**`: maps all the fields under docs and its children in the name as given in json
-* http://searchField/docs/*[`f=searchField:/docs/*`] : maps all fields under /docs to a single
field called ‘searchField’
-* http://searchField/docs/**[`f=searchField:/docs/**`] : maps all fields under /docs and
its children to searchField
+* `f=searchField:/docs/*` : maps all fields under /docs to a single field called ‘searchField’
+* `f=searchField:/docs/**` : maps all fields under /docs and its children to searchField
 
 With wildcards we can further simplify our previous example as follows:
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/79054161/solr/solr-ref-guide/src/transforming-result-documents.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/transforming-result-documents.adoc b/solr/solr-ref-guide/src/transforming-result-documents.adoc
index 9af3ef6..47a7177 100644
--- a/solr/solr-ref-guide/src/transforming-result-documents.adoc
+++ b/solr/solr-ref-guide/src/transforming-result-documents.adoc
@@ -9,21 +9,21 @@ Document Transformers can be used to modify the information returned about
each
 
 When executing a request, a document transformer can be used by including it in the `fl`
parameter using square brackets, for example:
 
-[source,java]
+[source,plain]
 ----
 fl=id,name,score,[shard]
 ----
 
 Some transformers allow, or require, local parameters which can be specified as key value
pairs inside the brackets:
 
-[source,java]
+[source,plain]
 ----
 fl=id,name,score,[explain style=nl]
 ----
 
 As with regular fields, you can change the key used when a Transformer adds a field to a
document via a prefix:
 
-[source,java]
+[source,plain]
 ----
 fl=id,name,score,my_val_a:[value v=42 t=int],my_val_b:[value v=7 t=float]
 ----
@@ -40,7 +40,7 @@ The sections below discuss exactly what these various transformers do.
 
 Modifies every document to include the exact same value, as if it were a stored field in
every document:
 
-[source,java]
+[source,plain]
 ----
 q=*:*&fl=id,greeting:[value v='hello']
 ----
@@ -59,7 +59,7 @@ The above query would produce results like the following:
 
 By default, values are returned as a String, but a "```t```" parameter can be specified using
a value of int, float, double, or date to force a specific return type:
 
-[source,java]
+[source,plain]
 ----
 q=*:*&fl=id,my_number:[value v=42 t=int],my_string:[value v=42]
 ----
@@ -85,7 +85,7 @@ The "```value```" option forces an explicit value to always be used, while
the "
 
 Augments each document with an inline explanation of its score exactly like the information
available about each document in the debug section:
 
-[source,java]
+[source,plain]
 ----
 q=features:cache&wt=json&fl=id,[explain style=nl]
 ----
@@ -94,7 +94,7 @@ Supported values for "```style```" are "```text```", and "```html```", and
"nl"
 
 [source,json]
 ----
-  "response":{"numFound":2,"start":0,"docs":[
+{ "response":{"numFound":2,"start":0,"docs":[
       {
         "id":"6H500F0",
         "[explain]":{
@@ -121,7 +121,7 @@ A default style can be configured by specifying an "args" parameter in
your conf
 
 This transformer returns all <<uploading-data-with-index-handlers.adoc#UploadingDatawithIndexHandlers-NestedChildDocuments,descendant
documents>> of each parent document matching your query in a flat list nested inside
the matching parent document. This is useful when you have indexed nested child documents
and want to retrieve the child documents for the relevant parent documents for any type of
search query.
 
-[source,java]
+[source,plain]
 ----
 fl=id,[child parentFilter=doc_type:book childFilter=doc_type:chapter limit=100]
 ----
@@ -161,14 +161,14 @@ These transformers are available only when using the <<the-query-elevation-compo
 * `[elevated]` annotates each document to indicate if it was elevated or not.
 * `[excluded]` annotates each document to indicate if it would have been excluded - this
is only supported if you also use the `markExcludes` parameter.
 
-[source,java]
+[source,plain]
 ----
 fl=id,[elevated],[excluded]&excludeIds=GB18030TEST&elevateIds=6H500F0&markExcludes=true
 ----
 
 [source,json]
 ----
-  "response":{"numFound":32,"start":0,"docs":[
+{ "response":{"numFound":32,"start":0,"docs":[
       {
         "id":"6H500F0",
         "[elevated]":true,
@@ -191,7 +191,7 @@ fl=id,[elevated],[excluded]&excludeIds=GB18030TEST&elevateIds=6H500F0&markExclud
 
 These transformers replace field value containing a string representation of a valid XML
or JSON structure with the actual raw XML or JSON structure rather than just the string value.
Each applies only to the specific writer, such that `[json]` only applies to `wt=json` and
`[xml]` only applies to `wt=xml`.
 
-[source,java]
+[source,plain]
 ----
 fl=id,source_s:[json]&wt=json
 ----
@@ -231,24 +231,24 @@ Here is how it looks like in various formats:
 
 [source,json]
 ----
-"response":{
-  "numFound":2, "start":0,
-  "docs":[
-    {
-      "id":1,
-      "subject":["parentDocument"],
-      "title":["xrxvomgu"],
-      "children":{ 
-         "numFound":1, "start":0,
-         "docs":[
-            { "id":2,
-              "cat":["childDocument"]
-            }
-          ]
-    }},
-    {
-       "id":4,
-    ...
+{ "response":{
+    "numFound":2, "start":0,
+    "docs":[
+      {
+        "id":1,
+        "subject":["parentDocument"],
+        "title":["xrxvomgu"],
+        "children":{ 
+           "numFound":1, "start":0,
+           "docs":[
+              { "id":2,
+                "cat":["childDocument"]
+              }
+            ]
+      }},
+      {
+         "id":4,
+      ...
 ----
 
 [source,java]
@@ -261,7 +261,7 @@ Here is how it looks like in various formats:
 
 To appear in subquery document list, a field should be specified both fl parameters, in main
one fl (despite the main result documents have no this field) and in subquery's one eg `foo.fl`.
Of course, you can use wildcard in any or both of these parameters. For example, if field
title should appear in categories subquery, it can be done via one of these ways.
 
-[source,java]
+[source,plain]
 ----
 fl=...title,categories:[subquery]&categories.fl=title&categories.q=...
 fl=...title,categories:[subquery]&categories.fl=*&categories.q=...
@@ -304,7 +304,7 @@ If subquery collection has a different unique key field name (let's say
`foo_id`
 // OLD_CONFLUENCE_ID: TransformingResultDocuments-[geo]-Geospatialformatter
 
 [[TransformingResultDocuments-_geo_-Geospatialformatter]]
-=== [geo] - Geospatial formatter
+=== `[geo]` - Geospatial formatter
 
 Formats spatial data from a spatial field using a designated format type name. Two inner
parameters are required: `f` for the field name, and `w` for the format name. Example: `geojson:[geo
f=mySpatialField w=GeoJSON]`.
 
@@ -315,18 +315,18 @@ In addition, this feature is very useful with the `RptWithGeometrySpatialField`
 // OLD_CONFLUENCE_ID: TransformingResultDocuments-[features]-LTRFeatureLoggerTransformerFactory
 
 [[TransformingResultDocuments-_features_-LTRFeatureLoggerTransformerFactory]]
-=== [features] - LTRFeatureLoggerTransformerFactory
+=== `[features]` - LTRFeatureLoggerTransformerFactory
 
 The "LTR" prefix stands for <<learning-to-rank.adoc#learning-to-rank,Learning To Rank>>.
This transformer returns the values of features and it can be used for feature extraction
and feature logging.
 
-[source,java]
+[source,plain]
 ----
 fl=id,[features store=yourFeatureStore]
 ----
 
 This will return the values of the features in the `yourFeatureStore` store.
 
-[source,java]
+[source,plain]
 ----
 fl=id,[features]&rq={!ltr model=yourModel}
 ----
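
The `fl` snippets re-tagged as `plain` above are easiest to read as full requests; a hedged sketch combining two of the documented transformers, assuming the `techproducts` example collection:

[source,bash]
----
# [value] and [shard] transformers in the field list; brackets, the space, and quotes are percent-encoded.
curl 'http://localhost:8983/solr/techproducts/select?q=*:*&fl=id,name,greeting:%5Bvalue%20v=%27hello%27%5D,%5Bshard%5D'
----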

