lucene-commits mailing list archives

From cpoersc...@apache.org
Subject [35/50] [abbrv] lucene-solr:jira/solr-8668: squash merge jira/solr-10290 into master
Date Fri, 12 May 2017 13:43:28 GMT
http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/95968c69/solr/solr-ref-guide/src/blob-store-api.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/blob-store-api.adoc b/solr/solr-ref-guide/src/blob-store-api.adoc
new file mode 100644
index 0000000..8e23ed9
--- /dev/null
+++ b/solr/solr-ref-guide/src/blob-store-api.adoc
@@ -0,0 +1,135 @@
+= Blob Store API
+:page-shortname: blob-store-api
+:page-permalink: blob-store-api.html
+
+The Blob Store REST API provides REST methods to store, retrieve or list files in a Lucene index.
+
+It can be used to upload a jar file which contains standard Solr components such as RequestHandlers, SearchComponents, or other custom code you have written for Solr. Schema components _do not_ yet support the Blob Store.
+
+When using the blob store, note that the API does not delete or overwrite a previous object if a new one is uploaded with the same name. It always adds a new version of the blob to the index. Deletes can be performed with standard REST delete commands.
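+
+Because `.system` is a regular SolrCloud collection, older blobs can also be removed with an ordinary update request. A minimal sketch, assuming a blob named `test` and deleting every stored version of it:
+
+[source,bash]
+----
+curl -H 'Content-Type: application/json' \
+  'http://localhost:8983/solr/.system/update?commit=true' \
+  --data-binary '{"delete": {"query": "blobName:test"}}'
+----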
+
+*The blob store is only available when running in SolrCloud mode.* Solr in standalone mode does not support use of a blob store.
+
+The blob store API is implemented as a requestHandler. A special collection named ".system" is used to store the blobs. This collection can be created in advance, but if it does not exist it will be created automatically.
+
+[[BlobStoreAPI-Aboutthe.systemCollection]]
+== About the .system Collection
+
+Before uploading blobs to the blob store, a special collection must be created and it must be named `.system`. Solr will automatically create this collection if it does not already exist, but you can also create it manually if you choose.
+
+The BlobHandler is automatically registered in the .system collection. The `solrconfig.xml`, Schema, and other configuration files for the collection are automatically provided by the system and don't need to be defined specifically.
+
+If you do not use the `-shards` or `-replicationFactor` options, then defaults of `numShards=1` and `replicationFactor=3` (or the maximum number of nodes in the cluster) will be used.
+
+You can create the `.system` collection with the <<collections-api.adoc#collections-api,Collections API>>, as in this example:
+
+[source,bash]
+----
+curl "http://localhost:8983/solr/admin/collections?action=CREATE&name=.system&replicationFactor=2"
+----
+
+IMPORTANT: The `bin/solr` script cannot be used to create the `.system` collection.
+
+[[BlobStoreAPI-UploadFilestoBlobStore]]
+== Upload Files to Blob Store
+
+After the `.system` collection has been created, files can be uploaded to the blob store with a request similar to the following:
+
+[source,bash]
+----
+curl -X POST -H 'Content-Type: application/octet-stream' --data-binary @{filename} http://localhost:8983/solr/.system/blob/{blobname}
+----
+
+For example, to upload a file named "test1.jar" as a blob named "test", you would make a POST request like:
+
+[source,bash]
+----
+curl -X POST -H 'Content-Type: application/octet-stream' --data-binary @test1.jar http://localhost:8983/solr/.system/blob/test
+----
+
+A GET request will return the list of blobs and other details:
+
+[source,bash]
+----
+curl http://localhost:8983/solr/.system/blob?omitHeader=true
+----
+
+Output:
+
+[source,json]
+----
+{
+  "response":{"numFound":1,"start":0,"docs":[
+      {
+        "id":"test/1",
+        "md5":"20ff915fa3f5a5d66216081ae705c41b",
+        "blobName":"test",
+        "version":1,
+        "timestamp":"2015-02-04T16:45:48.374Z",
+        "size":13108}]
+  }
+}
+----
+
+Details on individual blobs can be accessed with a request similar to:
+
+[source,bash]
+----
+curl http://localhost:8983/solr/.system/blob/{blobname}
+----
+
+For example, this request will return only the blob named 'test':
+
+[source,bash]
+----
+curl http://localhost:8983/solr/.system/blob/test?omitHeader=true
+----
+
+Output:
+
+[source,json]
+----
+{
+  "response":{"numFound":1,"start":0,"docs":[
+      {
+        "id":"test/1",
+        "md5":"20ff915fa3f5a5d66216081ae705c41b",
+        "blobName":"test",
+        "version":1,
+        "timestamp":"2015-02-04T16:45:48.374Z",
+        "size":13108}]
+  }
+}
+----
+
+The filestream response writer can return a particular version of a blob for download, as in:
+
+[source,bash]
+----
+curl http://localhost:8983/solr/.system/blob/{blobname}/{version}?wt=filestream > {outputfilename}
+----
+
+For the latest version of a blob, the \{version} can be omitted:
+
+[source,bash]
+----
+curl http://localhost:8983/solr/.system/blob/{blobname}?wt=filestream > {outputfilename}
+----
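+
+For example, to download version 1 of the blob uploaded earlier as "test" back into a local file:
+
+[source,bash]
+----
+curl http://localhost:8983/solr/.system/blob/test/1?wt=filestream > test1.jar
+----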
+
+[[BlobStoreAPI-UseaBlobinaHandlerorComponent]]
+== Use a Blob in a Handler or Component
+
+To use the blob as the class for a request handler or search component, you create a request handler in `solrconfig.xml` as usual. You will need to define the following parameters:
+
+`class`:: the fully qualified class name. For example, if you created a new request handler class called CRUDHandler, you would enter `org.apache.solr.core.CRUDHandler`.
+`runtimeLib`:: Set to true to require that this component be loaded from the classloader that loads the runtime jars.
+
+For example, to use a blob named test, you would configure `solrconfig.xml` like this:
+
+[source,xml]
+----
+<requestHandler name="/myhandler" class="org.apache.solr.core.myHandler" runtimeLib="true" version="1">
+</requestHandler>
+----
+
+If there are parameters available in the custom handler, you can define them in the same way as any other request handler definition.
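+
+Loading classes from blobs at runtime generally also requires the jar to be registered with the collection where it will be used. A sketch of how that registration might look with the Config API's `add-runtimelib` command, assuming a collection named `techproducts` and the blob `test` at version 1 (verify the exact procedure for your Solr version):
+
+[source,bash]
+----
+curl -H 'Content-Type: application/json' \
+  http://localhost:8983/solr/techproducts/config \
+  --data-binary '{"add-runtimelib": {"name": "test", "version": 1}}'
+----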

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/95968c69/solr/solr-ref-guide/src/blockjoin-faceting.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/blockjoin-faceting.adoc b/solr/solr-ref-guide/src/blockjoin-faceting.adoc
new file mode 100644
index 0000000..05194de
--- /dev/null
+++ b/solr/solr-ref-guide/src/blockjoin-faceting.adoc
@@ -0,0 +1,99 @@
+= BlockJoin Faceting
+:page-shortname: blockjoin-faceting
+:page-permalink: blockjoin-faceting.html
+
+BlockJoin facets allow you to aggregate child facet counts by their parents.
+
+A common requirement is that, when a parent document has several child documents, all of them should increment a facet value count only once. This functionality is provided by `BlockJoinDocSetFacetComponent`; `BlockJoinFacetComponent` is just an alias kept for compatibility.
+
+CAUTION: This component is considered experimental, and must be explicitly enabled for a request handler in `solrconfig.xml`, in the same way as any other <<requesthandlers-and-searchcomponents-in-solrconfig.adoc#requesthandlers-and-searchcomponents-in-solrconfig,search component>>.
+
+This example shows how you could add this search component to `solrconfig.xml` and define it in a request handler:
+
+[source,xml]
+----
+  <searchComponent name="bjqFacetComponent" class="org.apache.solr.search.join.BlockJoinDocSetFacetComponent"/>
+
+  <requestHandler name="/bjqfacet" class="org.apache.solr.handler.component.SearchHandler">
+    <lst name="defaults">
+      <str name="shards.qt">/bjqfacet</str>
+    </lst>
+    <arr name="last-components">
+      <str>bjqFacetComponent</str>
+    </arr>
+  </requestHandler>
+----
+
+This component can be added to any search request handler, and it works with distributed search in SolrCloud mode.
+
+Documents should be added in child-parent blocks as described in <<uploading-data-with-index-handlers.adoc#UploadingDatawithIndexHandlers-NestedChildDocuments,indexing nested child documents>>. Examples:
+
+.Sample document
+[source,xml]
+----
+<add>
+  <doc>
+    <field name="id">1</field>
+    <field name="type_s">parent</field>
+    <doc>
+      <field name="id">11</field>
+      <field name="COLOR_s">Red</field>
+      <field name="SIZE_s">XL</field>
+      <field name="PRICE_i">6</field>
+    </doc>
+    <doc>
+      <field name="id">12</field>
+      <field name="COLOR_s">Red</field>
+      <field name="SIZE_s">XL</field>
+      <field name="PRICE_i">7</field>
+    </doc>
+    <doc>
+      <field name="id">13</field>
+      <field name="COLOR_s">Blue</field>
+      <field name="SIZE_s">L</field>
+      <field name="PRICE_i">5</field>
+    </doc>
+  </doc>
+  <doc>
+    <field name="id">2</field>
+    <field name="type_s">parent</field>
+    <doc>
+      <field name="id">21</field>
+      <field name="COLOR_s">Blue</field>
+      <field name="SIZE_s">XL</field>
+      <field name="PRICE_i">6</field>
+    </doc>
+    <doc>
+      <field name="id">22</field>
+      <field name="COLOR_s">Blue</field>
+      <field name="SIZE_s">XL</field>
+      <field name="PRICE_i">7</field>
+    </doc>
+    <doc>
+      <field name="id">23</field>
+      <field name="COLOR_s">Red</field>
+      <field name="SIZE_s">L</field>
+      <field name="PRICE_i">5</field>
+    </doc>
+  </doc>
+</add>
+----
+
+Queries are constructed the same way as for a <<other-parsers.adoc#OtherParsers-BlockJoinQueryParsers,Parent Block Join query>>. For example:
+
+[source,text]
+----
+http://localhost:8983/solr/bjqfacet?q={!parent which=type_s:parent}SIZE_s:XL&child.facet.field=COLOR_s
+----
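+
+The example above is written as a plain URL; when issued from a shell with curl, the local params and spaces need URL encoding. One way to do this (a sketch mirroring the parameters above) is to let curl encode them:
+
+[source,bash]
+----
+curl -G http://localhost:8983/solr/bjqfacet \
+  --data-urlencode 'q={!parent which=type_s:parent}SIZE_s:XL' \
+  --data-urlencode 'child.facet.field=COLOR_s'
+----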
+
+As a result we should have facets for Red (1) and Blue (1), because the matches on children `id=11` and `id=12` are aggregated into a single hit on the parent with `id=1`. The key components of the request are:
+
+// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
+
+[cols="30,70",options="header"]
+|===
+|URL Part | Meaning
+|`/bjqfacet` |The name of the request handler that has been defined with one of the block join facet components enabled.
+|`q={!parent ...}..` |The mandatory parent query as the main query. The parent query could also be a subordinate clause in a more complex query.
+|`child.facet.field=...` |The child document field, which might be repeated many times with several fields, as necessary.
+|===

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/95968c69/solr/solr-ref-guide/src/charfilterfactories.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/charfilterfactories.adoc b/solr/solr-ref-guide/src/charfilterfactories.adoc
new file mode 100644
index 0000000..20ff949
--- /dev/null
+++ b/solr/solr-ref-guide/src/charfilterfactories.adoc
@@ -0,0 +1,159 @@
+= CharFilterFactories
+:page-shortname: charfilterfactories
+:page-permalink: charfilterfactories.html
+
+CharFilter is a component that pre-processes input characters.
+
+CharFilters can be chained like Token Filters and placed in front of a Tokenizer. CharFilters can add, change, or remove characters while preserving the original character offsets to support features like highlighting.
+
+[[CharFilterFactories-solr.MappingCharFilterFactory]]
+== solr.MappingCharFilterFactory
+
+This filter creates `org.apache.lucene.analysis.MappingCharFilter`, which can be used for changing one string to another (for example, normalizing `é` to `e`).
+
+This filter requires specifying a `mapping` argument, which is the path and name of a file containing the mappings to perform.
+
+Example:
+
+[source,xml]
+----
+<analyzer>
+  <charFilter class="solr.MappingCharFilterFactory" mapping="mapping-FoldToASCII.txt"/>
+  <tokenizer ...>
+  [...]
+</analyzer>
+----
+
+Mapping file syntax:
+
+* Comment lines beginning with a hash mark (`#`), as well as blank lines, are ignored.
+* Each non-comment, non-blank line consists of a mapping of the form: `"source" => "target"`
+** Double-quoted source string, optional whitespace, an arrow (`=>`), optional whitespace, double-quoted target string.
+* Trailing comments on mapping lines are not allowed.
+* The source string must contain at least one character, but the target string may be empty.
+* The following character escape sequences are recognized within source and target strings:
++
+// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
++
+[cols="20,30,20,30",options="header"]
+|===
+|Escape Sequence |Resulting Character (http://www.ecma-international.org/publications/standards/Ecma-048.htm[ECMA-48] alias) |Unicode Character |Example Mapping Line
+|`\\` |`\` |U+005C |`"\\" => "/"`
+|`\"` |`"` |U+0022 |`"\"and\"" => "'and'"`
+|`\b` |backspace (BS) |U+0008 |`"\b" => " "`
+|`\t` |tab (HT) |U+0009 |`"\t" => ","`
+|`\n` |newline (LF) |U+000A |`"\n" => "<br>"`
+|`\f` |form feed (FF) |U+000C |`"\f" => "\n"`
+|`\r` |carriage return (CR) |U+000D |`"\r" => "/carriage-return/"`
+|`\uXXXX` |Unicode char referenced by the 4 hex digits |U+XXXX |`"\uFEFF" => ""`
+|===
+** A backslash followed by any other character is interpreted as if the character were present without the backslash.
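+
+For illustration, a small mapping file following these rules might look like this (the specific mappings are only examples):
+
+[source,text]
+----
+# Normalize some accented characters
+"á" => "a"
+"é" => "e"
+
+# Map a tab to a comma, using an escape sequence
+"\t" => ","
+----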
+
+[[CharFilterFactories-solr.HTMLStripCharFilterFactory]]
+== solr.HTMLStripCharFilterFactory
+
+This filter creates `org.apache.solr.analysis.HTMLStripCharFilter`. This CharFilter strips HTML from the input stream and passes the result to another CharFilter or a Tokenizer.
+
+This filter:
+
+* Removes HTML/XML tags while preserving other content.
+* Removes attributes within tags and supports optional attribute quoting.
+* Removes XML processing instructions, such as: <?foo bar?>
+* Removes XML comments.
+* Removes XML elements starting with <!>.
+* Removes contents of <script> and <style> elements.
+* Handles XML comments inside these elements (normal comment processing will not always work).
+* Replaces numeric character entity references like `&#65;` or `&#x7f;` with the corresponding character.
+* The terminating ';' is optional if the entity reference is at the end of the input; otherwise the terminating ';' is mandatory, to avoid false matches on something like "Alpha&Omega Corp".
+* Replaces all named character entity references with the corresponding character.
+* `&nbsp;` is replaced with a space instead of the 0xa0 character.
+* Newlines are substituted for block-level elements.
+* <CDATA> sections are recognized.
+* Inline tags, such as `<b>`, `<i>`, or `<span>` will be removed.
+* Uppercase character entities like `quot`, `gt`, `lt` and `amp` are recognized and handled as lowercase.
+
+TIP: The input need not be an HTML document. The filter removes only constructs that look like HTML. If the input doesn't include anything that looks like HTML, the filter won't remove any input.
+
+The table below presents examples of HTML stripping.
+
+[width="100%",options="header",]
+|===
+|Input |Output
+|`my <a href="www.foo.bar">link</a>` |my link
+|`<br>hello<!--comment-->` |hello
+|`hello<script><!-- f('<!--internal--></script>'); --></script>` |hello
+|`if a<b then print a;` |if a<b then print a;
+|`hello <td height=22 nowrap align="left">` |hello
+|`a<b &#65 Alpha&Omega Ω` |a<b A Alpha&Omega Ω
+|===
+
+Example:
+
+[source,xml]
+----
+<analyzer>
+  <charFilter class="solr.HTMLStripCharFilterFactory"/>
+  <tokenizer ...>
+  [...]
+</analyzer>
+----
+
+[[CharFilterFactories-solr.ICUNormalizer2CharFilterFactory]]
+== solr.ICUNormalizer2CharFilterFactory
+
+This filter performs pre-tokenization Unicode normalization using http://site.icu-project.org[ICU4J].
+
+Arguments:
+
+`name`:: A http://unicode.org/reports/tr15/[Unicode Normalization Form], one of `nfc`, `nfkc`, `nfkc_cf`. Default is `nfkc_cf`.
+
+`mode`:: Either `compose` or `decompose`. Default is `compose`. Use `decompose` with `name="nfc"` or `name="nfkc"` to get NFD or NFKD, respectively.
+
+`filter`:: A http://www.icu-project.org/apiref/icu4j/com/ibm/icu/text/UnicodeSet.html[UnicodeSet] pattern. Codepoints outside the set are always left unchanged. Default is `[]` (the null set, no filtering - all codepoints are subject to normalization).
+
+Example:
+
+[source,xml]
+----
+<analyzer>
+  <charFilter class="solr.ICUNormalizer2CharFilterFactory"/>
+  <tokenizer ...>
+  [...]
+</analyzer>
+----
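+
+The arguments described above can also be set explicitly. For example, combining `name="nfkc"` with `mode="decompose"` yields NFKD normalization (a sketch following the same placeholder style as the example above):
+
+[source,xml]
+----
+<analyzer>
+  <charFilter class="solr.ICUNormalizer2CharFilterFactory" name="nfkc" mode="decompose"/>
+  <tokenizer ...>
+  [...]
+</analyzer>
+----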
+
+[[CharFilterFactories-solr.PatternReplaceCharFilterFactory]]
+== solr.PatternReplaceCharFilterFactory
+
+This filter uses http://www.regular-expressions.info/reference.html[regular expressions] to replace or change character patterns.
+
+Arguments:
+
+`pattern`:: the regular expression pattern to apply to the incoming text.
+
+`replacement`:: the text to use to replace matching patterns.
+
+You can configure this filter in `schema.xml` like this:
+
+[source,xml]
+----
+<analyzer>
+  <charFilter class="solr.PatternReplaceCharFilterFactory"
+             pattern="([nN][oO]\.)\s*(\d+)" replacement="$1$2"/>
+  <tokenizer ...>
+  [...]
+</analyzer>
+----
+
+The table below presents examples of regex-based pattern replacement:
+
+// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
+
+[cols="20,20,10,20,30",options="header"]
+|===
+|Input |Pattern |Replacement |Output |Description
+|see-ing looking |`(\w+)(ing)` |`$1` |see-ing look |Removes "ing" from the end of a word.
+|see-ing looking |`(\w+)ing` |`$1` |see-ing look |Same as above. The second set of parentheses can be omitted.
+|No.1 NO. no. 543 |`[nN][oO]\.\s*(\d+)` |`#$1` |#1 NO. #543 |Replaces selected string literals.
+|abc=1234=5678 |`(\w+)=(\d+)=(\d+)` |`$3=$1=$2` |5678=abc=1234 |Changes the order of the groups.
+|===

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/95968c69/solr/solr-ref-guide/src/choosing-an-output-format.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/choosing-an-output-format.adoc b/solr/solr-ref-guide/src/choosing-an-output-format.adoc
new file mode 100644
index 0000000..133bed8
--- /dev/null
+++ b/solr/solr-ref-guide/src/choosing-an-output-format.adoc
@@ -0,0 +1,9 @@
+= Choosing an Output Format
+:page-shortname: choosing-an-output-format
+:page-permalink: choosing-an-output-format.html
+
+Many programming environments are able to send HTTP requests and retrieve responses. Parsing the responses is a slightly more thorny problem. Fortunately, Solr makes it easy to choose an output format that will be easy to handle on the client side.
+
+Specify a response format using the `wt` parameter in a query. The available response formats are documented in <<response-writers.adoc#response-writers,Response Writers>>.
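+
+For example, to request the same query results as JSON or as XML (the collection name `techproducts` here is just a placeholder):
+
+[source,bash]
+----
+curl "http://localhost:8983/solr/techproducts/select?q=*:*&wt=json"
+curl "http://localhost:8983/solr/techproducts/select?q=*:*&wt=xml"
+----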
+
+Most client APIs hide this detail for you, so for many types of client applications, you won't ever have to specify a `wt` parameter. In JavaScript, however, the interface to Solr is a little closer to the metal, so you will need to add this parameter yourself.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/95968c69/solr/solr-ref-guide/src/client-api-lineup.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/client-api-lineup.adoc b/solr/solr-ref-guide/src/client-api-lineup.adoc
new file mode 100644
index 0000000..06014e0
--- /dev/null
+++ b/solr/solr-ref-guide/src/client-api-lineup.adoc
@@ -0,0 +1,29 @@
+= Client API Lineup
+:page-shortname: client-api-lineup
+:page-permalink: client-api-lineup.html
+
+The Solr Wiki contains a list of client APIs at http://wiki.apache.org/solr/IntegratingSolr.
+
+Here is the list of client APIs, current at this writing (November 2011):
+
+// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
+
+[cols="20,20,60",options="header"]
+|===
+|Name |Environment |URL
+|SolRuby |Ruby |https://github.com/rsolr/rsolr
+|DelSolr |Ruby |https://github.com/avvo/delsolr
+|acts_as_solr |Rails |http://acts-as-solr.rubyforge.org/, http://rubyforge.org/projects/background-solr/
+|Flare |Rails |http://wiki.apache.org/solr/Flare
+|SolPHP |PHP |http://wiki.apache.org/solr/SolPHP
+|SolrJ |Java |http://wiki.apache.org/solr/SolJava
+|Python API |Python |http://wiki.apache.org/solr/SolPython
+|PySolr |Python |http://code.google.com/p/pysolr/
+|SolPerl |Perl |http://wiki.apache.org/solr/SolPerl
+|Solr.pm |Perl |http://search.cpan.org/~garafola/Solr-0.03/lib/Solr.pm
+|SolrForrest |Forrest/Cocoon |http://wiki.apache.org/solr/SolrForrest
+|SolrSharp |C# |http://www.codeplex.com/solrsharp
+|SolColdfusion |ColdFusion |http://solcoldfusion.riaforge.org/
+|SolrNet |.NET |https://github.com/mausch/SolrNet
+|AJAX Solr |AJAX |http://github.com/evolvingweb/ajax-solr/wiki
+|===

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/95968c69/solr/solr-ref-guide/src/client-apis.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/client-apis.adoc b/solr/solr-ref-guide/src/client-apis.adoc
new file mode 100644
index 0000000..9272520
--- /dev/null
+++ b/solr/solr-ref-guide/src/client-apis.adoc
@@ -0,0 +1,22 @@
+= Client APIs
+:page-shortname: client-apis
+:page-permalink: client-apis.html
+:page-children: introduction-to-client-apis, choosing-an-output-format, client-api-lineup, using-javascript, using-python, using-solrj, using-solr-from-ruby
+
+This section discusses the available client APIs for Solr. It covers the following topics:
+
+<<introduction-to-client-apis.adoc#introduction-to-client-apis,Introduction to Client APIs>>: A conceptual overview of Solr client APIs.
+
+<<choosing-an-output-format.adoc#choosing-an-output-format,Choosing an Output Format>>: Information about choosing a response format in Solr.
+
+<<using-javascript.adoc#using-javascript,Using JavaScript>>: Explains why a client API is not needed for JavaScript responses.
+
+<<using-python.adoc#using-python,Using Python>>: Information about Python and JSON responses.
+
+<<client-api-lineup.adoc#client-api-lineup,Client API Lineup>>: A list of all Solr Client APIs, with links.
+
+<<using-solrj.adoc#using-solrj,Using SolrJ>>: Detailed information about SolrJ, an API for working with Java applications.
+
+<<using-solr-from-ruby.adoc#using-solr-from-ruby,Using Solr From Ruby>>: Detailed information about using Solr with Ruby applications.
+
+<<mbean-request-handler.adoc#mbean-request-handler,MBean Request Handler>>: Describes the MBean request handler for programmatic access to Solr server statistics and information.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/95968c69/solr/solr-ref-guide/src/cloud-screens.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/cloud-screens.adoc b/solr/solr-ref-guide/src/cloud-screens.adoc
new file mode 100644
index 0000000..927faed
--- /dev/null
+++ b/solr/solr-ref-guide/src/cloud-screens.adoc
@@ -0,0 +1,29 @@
+= Cloud Screens
+:page-shortname: cloud-screens
+:page-permalink: cloud-screens.html
+
+When running in <<solrcloud.adoc#solrcloud,SolrCloud>> mode, a "Cloud" option will appear in the Admin UI between <<logging.adoc#logging,Logging>> and <<collections-core-admin.adoc#collections-core-admin,Collections/Core Admin>>.
+
+This screen provides status information about each collection & node in your cluster, as well as access to the low-level data being stored in <<using-zookeeper-to-manage-configuration-files.adoc#using-zookeeper-to-manage-configuration-files,ZooKeeper>>.
+
+.Only Visible When using SolrCloud
+[NOTE]
+====
+The "Cloud" menu option is only available on Solr instances running in <<getting-started-with-solrcloud.adoc#getting-started-with-solrcloud,SolrCloud
mode>>. Single node or master/slave replication instances of Solr will not display this
option.
+====
+
+Click on the Cloud option in the left-hand navigation, and a small sub-menu appears with options called "Tree", "Graph", "Graph (Radial)" and "Dump". The default view ("Graph") shows a graph of each collection, the shards that make up those collections, and the addresses of each replica for each shard.
+
+This example shows the very simple two-node cluster created using the `bin/solr -e cloud -noprompt` example command. In addition to the 2 shard, 2 replica "gettingstarted" collection, there is an additional "films" collection consisting of a single shard/replica:
+
+image::images/cloud-screens/cloud-graph.png[image,width=512,height=250]
+
+The "Graph (Radial)" option provides a different visual view of each node. Using the same
example cluster, the radial graph view looks like:
+
+image::images/cloud-screens/cloud-radial.png[image,width=478,height=250]
+
+The "Tree" option shows a directory structure of the data in ZooKeeper, including cluster
wide information regarding the `live_nodes` and `overseer` status, as well as collection specific
information such as the `state.json`, current shard leaders, and configuration files in use.
In this example, we see the `state.json` file definition for the "films" collection:
+
+image::images/cloud-screens/cloud-tree.png[image,width=487,height=250]
+
+The final option is "Dump", which returns a JSON document containing all nodes, their contents and their children (recursively). This can be used to export a snapshot of all the data that Solr has kept inside ZooKeeper and can aid in debugging SolrCloud problems.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/95968c69/solr/solr-ref-guide/src/codec-factory.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/codec-factory.adoc b/solr/solr-ref-guide/src/codec-factory.adoc
new file mode 100644
index 0000000..2e80788
--- /dev/null
+++ b/solr/solr-ref-guide/src/codec-factory.adoc
@@ -0,0 +1,21 @@
+= Codec Factory
+:page-shortname: codec-factory
+:page-permalink: codec-factory.html
+
+A `codecFactory` can be specified in `solrconfig.xml` to determine which Lucene {lucene-javadocs}/core/org/apache/lucene/codecs/Codec.html[`Codec`] is used when writing the index to disk.
+
+If not specified, Lucene's default codec is implicitly used, but a {solr-javadocs}/solr-core/org/apache/solr/core/SchemaCodecFactory.html[`solr.SchemaCodecFactory`] is also available which supports two key features:
+
+* Schema-based per-field-type configuration for `docValuesFormat` and `postingsFormat` - see the <<field-type-definitions-and-properties.adoc#field-type-properties,Field Type Properties>> section for more details.
+* A `compressionMode` option:
+** `BEST_SPEED` (default) is optimized for search speed performance
+** `BEST_COMPRESSION` is optimized for disk space usage
+
+Example:
+
+[source,xml]
+----
+<codecFactory class="solr.SchemaCodecFactory">
+  <str name="compressionMode">BEST_COMPRESSION</str>
+</codecFactory>
+----
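+
+When `solr.SchemaCodecFactory` is in use, the per-field-type formats mentioned above are configured in the schema rather than in `solrconfig.xml`. A sketch of what that might look like (the field type name and the format values are illustrative only; consult the javadocs for the formats available in your Lucene version):
+
+[source,xml]
+----
+<fieldType name="string_direct" class="solr.StrField" postingsFormat="Direct" docValuesFormat="Memory"/>
+----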

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/95968c69/solr/solr-ref-guide/src/collapse-and-expand-results.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/collapse-and-expand-results.adoc b/solr/solr-ref-guide/src/collapse-and-expand-results.adoc
new file mode 100644
index 0000000..481d61c
--- /dev/null
+++ b/solr/solr-ref-guide/src/collapse-and-expand-results.adoc
@@ -0,0 +1,133 @@
+= Collapse and Expand Results
+:page-shortname: collapse-and-expand-results
+:page-permalink: collapse-and-expand-results.html
+
+The Collapsing query parser and the Expand component combine to form an approach to grouping documents for field collapsing in search results.
+
+The Collapsing query parser groups documents (collapsing the result set) according to your parameters, while the Expand component provides access to documents in the collapsed group for use in results display or other processing by a client application. Collapse & Expand can together do what the older <<result-grouping.adoc#result-grouping,Result Grouping>> (`group=true`) does for _most_ use-cases but not all. Generally, you should prefer Collapse & Expand.
+
+[IMPORTANT]
+====
+In order to use these features with SolrCloud, the documents must be located on the same shard. To ensure document co-location, you can define the `router.name` parameter as `compositeId` when creating the collection. For more information on this option, see the section <<shards-and-indexing-data-in-solrcloud.adoc#ShardsandIndexingDatainSolrCloud-DocumentRouting,Document Routing>>.
+====
+
+[[CollapseandExpandResults-CollapsingQueryParser]]
+== Collapsing Query Parser
+
+The `CollapsingQParser` is really a _post filter_ that provides more performant field collapsing than Solr's standard approach when the number of distinct groups in the result set is high. This parser collapses the result set to a single document per group before it forwards the result set to the rest of the search components. So all downstream components (faceting, highlighting, etc.) will work with the collapsed result set.
+
+The CollapsingQParser accepts the following local parameters:
+
+// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
+
+[cols="20,60,20",options="header"]
+|===
+|Parameter |Description |Default
+|field |The field that is being collapsed on. The field must be a single valued String, Int or Float |none
+|min \| max a|
+Selects the group head document for each group based on which document has the min or max value of the specified numeric field or <<function-queries.adoc#function-queries,function query>>.
+
+At most only one of the min, max, or sort (see below) parameters may be specified.
+
+If none are specified, the group head document of each group will be selected based on the highest scoring document in that group. |none
+|sort a|
+Selects the group head document for each group based on which document comes first according to the specified <<common-query-parameters.adoc#CommonQueryParameters-ThesortParameter,sort string>>.
+
+At most only one of the min, max (see above), or sort parameters may be specified.
+
+If none are specified, the group head document of each group will be selected based on the highest scoring document in that group. |none
+|nullPolicy a|
+There are three null policies:
+
+* *ignore*: removes documents with a null value in the collapse field. This is the default.
+* *expand*: treats each document with a null value in the collapse field as a separate group.
+* *collapse*: collapses all documents with a null value into a single group using either highest score, or minimum/maximum.
+
+ |ignore
+|hint |Currently there is only one hint available: `top_fc`, which stands for top level FieldCache. The `top_fc` hint is only available when collapsing on String fields. `top_fc` usually provides the best query time speed but takes the longest to warm on startup or following a commit. `top_fc` will also result in having the collapsed field cached in memory twice if it's used for faceting or sorting. For very high cardinality (high distinct count) fields, `top_fc` may not fare so well. |none
+|size |Sets the initial size of the collapse data structures when collapsing on a *numeric field only*. The data structures used for collapsing grow dynamically when collapsing on numeric fields. Setting the size above the number of results expected in the result set will eliminate the resizing cost. |100,000
+|===
+
+*Sample Syntax:*
+
+Collapse on `group_field`, selecting the highest scoring document in each group:
+
+[source,text]
+----
+fq={!collapse field=group_field}
+----
+
+Collapse on `group_field` selecting the document in each group with the minimum value of `numeric_field`:
+
+[source,text]
+----
+fq={!collapse field=group_field min=numeric_field}
+----
+
+Collapse on `group_field` selecting the document in each group with the maximum value of `numeric_field`:
+
+[source,text]
+----
+fq={!collapse field=group_field max=numeric_field}
+----
+
+Collapse on `group_field` selecting the document in each group with the maximum value of a function. Note that the *cscore()* function can be used with the min/max options to use the score of the current document being collapsed.
+
+[source,text]
+----
+fq={!collapse field=group_field max=sum(cscore(),numeric_field)}
+----
+
+Collapse on `group_field` with a null policy so that all docs that do not have a value in the `group_field` will be treated as a single group. For each group, the selected document will be based first on a `numeric_field`, but ties will be broken by score:
+
+[source,text]
+----
+fq={!collapse field=group_field nullPolicy=collapse sort='numeric_field asc, score desc'}
+----
+
+Collapse on `group_field` with a hint to use the top level field cache:
+
+[source,text]
+----
+fq={!collapse field=group_field hint=top_fc}
+----
+
+The CollapsingQParserPlugin fully supports the QueryElevationComponent.
+
+[[CollapseandExpandResults-ExpandComponent]]
+== Expand Component
+
+The ExpandComponent can be used to expand the groups that were collapsed by the http://heliosearch.org/the-collapsingqparserplugin-solrs-new-high-performance-field-collapsing-postfilter/[CollapsingQParserPlugin].
+
+Example usage with the CollapsingQParserPlugin:
+
+[source,text]
+----
+q=foo&fq={!collapse field=ISBN}
+----
+
+In the query above, the CollapsingQParserPlugin will collapse the search results on the _ISBN_ field. The main search results will contain the highest ranking document from each book.
+
+The ExpandComponent can now be used to expand the results so you can see the documents grouped by ISBN. For example:
+
+[source,text]
+----
+q=foo&fq={!collapse field=ISBN}&expand=true
+----
+
+The `expand=true` parameter turns on the ExpandComponent. The ExpandComponent adds a new section to the search output labeled "expanded".
+
+Inside the expanded section there is a _map_ with each group head pointing to the expanded documents that are within the group. As applications iterate the main collapsed result set, they can access the _expanded_ map to retrieve the expanded groups.
+
+The ExpandComponent has the following parameters:
+
+// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
+
+[cols="20,60,20",options="header"]
+|===
+|Parameter |Description |Default
+|expand.sort |Orders the documents within the expanded groups |score desc
+|expand.rows |The number of rows to display in each group |5
+|expand.q |Overrides the main q parameter, determines which documents to include in the main group. |main q
+|expand.fq |Overrides main fq's, determines which documents to include in the main group. |main fq's
+|===
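+
+Putting these parameters together, a request that collapses on ISBN and returns up to 10 expanded documents per group might look like this (a sketch building on the earlier example; `expand.sort` and `expand.q`/`expand.fq` can be added in the same way):
+
+[source,text]
+----
+q=foo&fq={!collapse field=ISBN}&expand=true&expand.rows=10
+----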

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/95968c69/solr/solr-ref-guide/src/collection-specific-tools.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/collection-specific-tools.adoc b/solr/solr-ref-guide/src/collection-specific-tools.adoc
new file mode 100644
index 0000000..b94572a
--- /dev/null
+++ b/solr/solr-ref-guide/src/collection-specific-tools.adoc
@@ -0,0 +1,30 @@
+= Collection-Specific Tools
+:page-shortname: collection-specific-tools
+:page-permalink: collection-specific-tools.html
+:page-children: analysis-screen, dataimport-screen, documents-screen, files-screen, query-screen, stream-screen, schema-browser-screen
+
+In the left-hand navigation bar, you will see a pull-down menu titled "Collection Selector" that can be used to access collection-specific administration screens.
+
+.Only Visible When Using SolrCloud
+[NOTE]
+====
+The "Collection Selector" pull-down menu is only available on Solr instances running in <<solrcloud.adoc#solrcloud,SolrCloud
mode>>.
+
+Single node or master/slave replication instances of Solr will not display this menu; instead, the collection-specific UI pages described in this section will be available in the <<core-specific-tools.adoc#core-specific-tools,Core Selector pull-down menu>>.
+====
+
+Clicking on the Collection Selector pull-down menu will show a list of the collections in your Solr cluster, with a search box that can be used to find a specific collection by name. When you select a collection from the pull-down, the main display of the page will show some basic metadata about the collection, and a secondary menu will appear in the left nav with links to additional collection-specific administration screens.
+
+image::images/collection-specific-tools/collection_dashboard.png[image,width=482,height=250]
+
+The collection-specific UI screens are listed below, with a link to the section of this guide to find out more:
+
+// TODO: SOLR-10655 BEGIN: refactor this into a 'collection-screens-list.include.adoc' file for reuse
+* <<analysis-screen.adoc#analysis-screen,Analysis>> - lets you analyze the data found in specific fields.
+* <<dataimport-screen.adoc#dataimport-screen,Dataimport>> - shows you information about the current status of the Data Import Handler.
+* <<documents-screen.adoc#documents-screen,Documents>> - provides a simple form allowing you to execute various Solr indexing commands directly from the browser.
+* <<files-screen.adoc#files-screen,Files>> - shows the current core configuration files such as `solrconfig.xml`.
+* <<query-screen.adoc#query-screen,Query>> - lets you submit a structured query about various elements of a core.
+* <<stream-screen.adoc#stream-screen,Stream>> - allows you to submit streaming expressions and see results and parsing explanations.
+* <<schema-browser-screen.adoc#schema-browser-screen,Schema Browser>> - displays schema data in a browser window.
+// TODO: SOLR-10655 END

