lucenenet-commits mailing list archives

From lai...@apache.org
Subject [lucenenet] branch master updated: Website & API Doc site generator using DocFx script (#206)
Date Tue, 26 Feb 2019 14:15:04 GMT
This is an automated email from the ASF dual-hosted git repository.

laimis pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/lucenenet.git


The following commit(s) were added to refs/heads/master by this push:
     new 0d56d20  Website & API Doc site generator using DocFx script (#206)
0d56d20 is described below

commit 0d56d207a1dd1e75a71f4c39dc482c1225aafba6
Author: Shannon Deminick <sdeminick@gmail.com>
AuthorDate: Wed Feb 27 01:14:09 2019 +1100

    Website & API Doc site generator using DocFx script (#206)
    
    * Initial commit of powershell build to create an API doc site with docfx
    
    * Updates styles, etc... for the docs
    
    * updates build script to serve website
    
    * updates build to properly serve with an option to not clean cache files
    
    * adds index file for api docs
    
    * fixes a couple of crefs
    
    * creates custom docs files
    
    * updates script to ensure it only includes csproj files that are in the sln file
    
    * Adds wiki example docs, fixes up some toc, adds logging to build, fixes filter config, updates to latest docfx version, updates to correct LUCENENET TODO
    
    * Removes use of custom template files since we can just use the built in new metadata instead
    
    * Adds test files, fixing up some doc refs
    
    * Fixes namespace overwrite issue, adds solution for custom markdown plugin for parsing lucene tokens.
    
    * fixes exposed name of plugin
    
    * Moves source code for docs formatter to the 'src' folder
    
    * Updates build script to ensure the custom DocFx plugin is built for the custom tag parsing, adds readme, updates to latest docfx, includes more projects including the CLI project
    
    * Updates to latest docfx version
    
    * Splitting build into separate projects so we can browse APIs per project, fixes build issues with VS 2017 15.3, removes other test docs - will focus purely on the API docs part for now.
    
    * Gets projects all building separately, added a custom toc, and now we can browse by 'package'
    
    * updates build, ignore and toc
    
    * Gets project -> namespace api docs working, but the breadcrumb isn't working yet, so that still needs figuring out
    
    * turns it into a 3-level toc for now, which is better than before; awaiting feedback on GitHub
    
    * updates to latest docfx including the references in the docs plugin package
    
    * Gets CLI docs building and included as a header link and adds toc files for each folder
    
    * fixes some csproj refs
    
    * adds the Kuromoji package
    
    * Gets more building, includes the markdown docs for use as the namespace documentation
    
    * removes the replicator from the docs since that was erroring for some reason
    
    * Moves the docfx build yml files to a better temporary folder, making them easier to clean up; fixes the docfx build so that it properly builds (this was due to a change in the docfx version); puts the Replicator build back in
    
    * fixes the sln file since there was a duplicate project declared
    
    * fixes toc references
    
    * ensure the docfx log location is absolute
    
    * Adds demo, removes old unused doc example files, updates and includes a few more package.md files and updates the home page with correct linking
    
    * re-organizes the files that are included as files vs namespace overrides, updates docfx version and proj versions
    
    * Get the correct /api URI paths for the generated api docs
    
    * fix whitespace for the @lucene.experimental thing to work
    
    * Updates build to include TestFramework, updates index to match the Java Lucene docs with the proper xrefs
    
    * Gets the index page back to normal with the deep links to API docs, fixes up a bunch of xref links
    
    * removes duplicate entry
    
    * removes the test framework docs from building because this causes collision issues with the same namespaces in the classes... the test framework classes would need a namespace change
    
    * Gets the website up and running with a nice template, updates styles across both websites
    
    * moves the quick start into a partial
    
    * Gets most info and links all ready for the website
    
    * Updates more docs for the website and fixes some invalid links
    
    * commits whitespace changes as a result of a slightly different doc converting logic
    
    * Revert "commits whitespace changes as a result of a slightly different doc converting logic"
    
    This reverts commit c7847af4023321583ac420f36db99e53507f7d85.
    
    * Updates docs based on the new output of the converter
    
    * Gets more docs converted properly with the converter
    
    * Updates the doc converter to append yaml headers correctly
    
    * Fixes most of the xref links
    
    * Fixes link parsing in the doc converter
    
    * removes breadcrumb from download doc, more xrefs fixed
    
    * Attempting to modify the markdig markdown engine to process special tags inside the triple-slash comments ... but can't get it to work so will revert back to dfm, just keeping this here for history
    
    * Revert "Attempting to modify the markdig markdown engine to process special tags inside the triple-slash comments ... but can't get it to work so will revert back to dfm, just keeping this here for history"
    
    This reverts commit efb0b002c5f35c990cffd0b3badffdaefb3e82f1.
    
    * Gets the DFM markdown engine running again so the @lucene.experimental tags are replaced (see the sketch at the end of this list).
    
    * Updates some website info
    
    * Adds separate docs page to link to the various docs for different versions
    
    * fix typo
    
    * bumps the date, small change to the source code doc
    
    * Gets the download page all working for the different versions with checksums, etc...
    
    * Fixing links to the download-package
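
    A note on the custom tag handling described above (not part of the commit
    message itself): the LuceneDocsPlugins project replaces Javadoc-style
    "@lucene.experimental" / "@lucene.internal" tokens in the converted markdown
    with rendered notes. A minimal C# sketch of that idea -- illustrative only,
    since the real plugin hooks into the DFM engine via LuceneDfmEngineCustomizer
    and LuceneNoteBlockRule (see the diffstat below):

        using System.Text.RegularExpressions;

        public static class LuceneTokenSketch
        {
            // Matches Javadoc-style markers such as "@lucene.experimental".
            private static readonly Regex LuceneToken =
                new Regex(@"@lucene\.(experimental|internal)", RegexOptions.Compiled);

            // Rewrites each marker into a human-readable note line.
            public static string ReplaceTokens(string markdown) =>
                LuceneToken.Replace(markdown, match =>
                    $"Note: this API is {match.Groups[1].Value} and may change in incompatible ways in a future release.");
        }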
---
 .gitignore                                         |  11 +-
 Lucene.Net.sln                                     |  11 +-
 .../Analysis/Cjk/package.md                        |   2 +-
 .../Analysis/Cn/package.md                         |   2 +-
 .../Analysis/Compound/package.md                   |   8 +-
 .../Analysis/Payloads/package.md                   |  11 +-
 .../Analysis/Sinks/package.md                      |  15 +-
 .../Analysis/Snowball/package.md                   |   2 +-
 .../Analysis/Standard/Std31/package.md             |   2 +-
 .../Analysis/Standard/Std34/package.md             |   2 +-
 .../Analysis/Standard/Std36/package.md             |   2 +-
 .../Analysis/Standard/Std40/package.md             |   2 +-
 .../Analysis/Standard/package.md                   |  38 +--
 .../Collation/TokenAttributes/package.md           |   2 +-
 .../Collation/package.md                           |   4 +-
 src/Lucene.Net.Analysis.Common/overview.md         |  11 +-
 .../Collation/TokenAttributes/package.md           |   2 +-
 src/Lucene.Net.Analysis.ICU/overview.md            |  21 +-
 src/Lucene.Net.Analysis.Kuromoji/overview.md       |  13 +-
 src/Lucene.Net.Analysis.Phonetic/overview.md       |  13 +-
 src/Lucene.Net.Analysis.SmartCn/HHMM/package.md    |   2 +-
 src/Lucene.Net.Analysis.SmartCn/overview.md        |   6 +-
 src/Lucene.Net.Analysis.SmartCn/package.md         |   9 +-
 src/Lucene.Net.Analysis.Stempel/overview.md        |   7 +-
 src/Lucene.Net.Benchmark/ByTask/package.md         |  11 +-
 src/Lucene.Net.Benchmark/overview.md               |  11 +-
 src/Lucene.Net.Benchmark/package.md                |  11 +-
 src/Lucene.Net.Classification/overview.md          |  11 +-
 src/Lucene.Net.Demo/overview.md                    |  19 +-
 src/Lucene.Net.Expressions/JS/package.md           |   4 +-
 src/Lucene.Net.Expressions/overview.md             |  11 +-
 src/Lucene.Net.Expressions/package.md              |   6 +-
 src/Lucene.Net.Facet/Facets.cs                     |   2 +-
 .../Range/DoubleRangeFacetCounts.cs                |   2 +-
 src/Lucene.Net.Facet/Range/LongRangeFacetCounts.cs |   2 +-
 src/Lucene.Net.Facet/Range/Range.cs                |   2 +-
 src/Lucene.Net.Facet/Range/RangeFacetCounts.cs     |   2 +-
 src/Lucene.Net.Facet/SortedSet/package.md          |   2 +-
 .../Taxonomy/TaxonomyFacetSumFloatAssociations.cs  |   2 +-
 .../Taxonomy/TaxonomyFacetSumIntAssociations.cs    |   2 +-
 .../Taxonomy/TaxonomyFacetSumValueSource.cs        |   2 +-
 src/Lucene.Net.Facet/package.md                    |  13 +-
 src/Lucene.Net.Grouping/Function/package.md        |   2 +-
 src/Lucene.Net.Grouping/Term/package.md            |   2 +-
 src/Lucene.Net.Grouping/package.md                 |  13 +-
 .../VectorHighlight/package.md                     |   6 +-
 src/Lucene.Net.Highlighter/overview.md             |  11 +-
 src/Lucene.Net.Join/package.md                     |  21 +-
 src/Lucene.Net.Memory/overview.md                  |   4 +-
 src/Lucene.Net.Memory/package.md                   |  14 +-
 src/Lucene.Net.Misc/Index/Sorter/package.md        |   4 +-
 src/Lucene.Net.Misc/overview.md                    |  13 +-
 src/Lucene.Net.Queries/overview.md                 |  11 +-
 src/Lucene.Net.QueryParser/Classic/package.md      |  11 +-
 .../Flexible/Core/Builders/package.md              |   2 +-
 .../Flexible/Core/Config/package.md                |   4 +-
 .../Flexible/Core/Nodes/package.md                 |   8 +-
 .../Flexible/Core/Processors/package.md            |  10 +-
 .../Flexible/Core/package.md                       |   6 +-
 .../Flexible/Precedence/Processors/package.md      |   6 +-
 .../Flexible/Precedence/package.md                 |   2 +-
 .../Flexible/Standard/Builders/package.md          |   4 +-
 .../Flexible/Standard/package.md                   |   2 +-
 src/Lucene.Net.QueryParser/overview.md             |  25 +-
 src/Lucene.Net.Replicator/overview.md              |   4 +-
 src/Lucene.Net.Replicator/package.md               |   7 +-
 src/Lucene.Net.Sandbox/overview.md                 |  11 +-
 src/Lucene.Net.Spatial/overview.md                 |  11 +-
 src/Lucene.Net.Suggest/overview.md                 |  11 +-
 src/Lucene.Net.TestFramework/Analysis/package.md   |   2 +-
 .../Codecs/Compressing/package.md                  |   2 +-
 .../Codecs/Lucene40/package.md                     |   2 +-
 .../Codecs/Lucene41/package.md                     |   2 +-
 .../Codecs/Lucene41Ords/package.md                 |   2 +-
 .../Codecs/Lucene42/package.md                     |   2 +-
 .../Codecs/Lucene45/package.md                     |   2 +-
 .../Codecs/MockSep/package.md                      |   2 +-
 .../Codecs/NestedPulsing/package.md                |   2 +-
 src/Lucene.Net.TestFramework/Index/package.md      |   2 +-
 src/Lucene.Net.TestFramework/Search/package.md     |   2 +-
 src/Lucene.Net.TestFramework/Store/package.md      |   2 +-
 .../Util/Automaton/package.md                      |   2 +-
 src/Lucene.Net.TestFramework/Util/package.md       |   2 +-
 src/Lucene.Net.TestFramework/overview.md           |   7 +-
 src/Lucene.Net/Analysis/package.md                 | 115 +++----
 .../Compressing/CompressingStoredFieldsFormat.cs   |   3 +-
 src/Lucene.Net/Codecs/Compressing/package.md       |   7 +-
 src/Lucene.Net/Codecs/Lucene3x/package.md          |   7 +-
 src/Lucene.Net/Codecs/Lucene40/package.md          |  69 +++--
 src/Lucene.Net/Codecs/Lucene41/package.md          |  71 ++---
 src/Lucene.Net/Codecs/Lucene42/package.md          |  71 ++---
 src/Lucene.Net/Codecs/Lucene45/package.md          |  71 ++---
 src/Lucene.Net/Codecs/Lucene46/package.md          |  71 ++---
 src/Lucene.Net/Codecs/package.md                   |  15 +-
 src/Lucene.Net/Document/package.md                 |  19 +-
 src/Lucene.Net/Index/package.md                    |  27 +-
 src/Lucene.Net/Search/Payloads/package.md          |  18 +-
 src/Lucene.Net/Search/Similarities/package.md      |  25 +-
 src/Lucene.Net/Search/Spans/package.md             |   9 +-
 src/Lucene.Net/Search/package.md                   |  85 +++---
 src/Lucene.Net/Store/package.md                    |   7 +-
 src/Lucene.Net/Util/Automaton/package.md           |   4 +-
 src/Lucene.Net/Util/Fst/package.md                 |  10 +-
 src/Lucene.Net/Util/Packed/package.md              |  22 +-
 src/Lucene.Net/Util/package.md                     |   7 +-
 src/Lucene.Net/overview.md                         | 110 +++----
 src/docs/LuceneDocsPlugins/LuceneDocsPlugins.sln   |  22 ++
 .../LuceneDocsPlugins/LuceneDfmEngineCustomizer.cs |  27 ++
 .../LuceneDocsPlugins/LuceneDocsPlugins.csproj     | 107 +++++++
 .../LuceneDocsPlugins/LuceneNoteBlockRule.cs       |  26 ++
 .../LuceneDocsPlugins/LuceneNoteBlockToken.cs      |  26 ++
 .../LuceneRendererPartProvider.cs                  |  18 ++
 .../LuceneDocsPlugins/LuceneTokenRendererPart.cs   |  20 ++
 .../LuceneDocsPlugins/Properties/AssemblyInfo.cs   |  36 +++
 .../LuceneDocsPlugins/packages.config              |  20 ++
 src/docs/readme.md                                 |   0
 .../JavaDocToMarkdownConverter/App.config          |   8 +
 .../JavaDocToMarkdownConverter/DocConverter.cs     | 272 +++++++----------
 .../Formatters/CodeLinkReplacer.cs                 |  98 ++++++
 .../Formatters/DocTypeReplacer.cs                  |  17 ++
 .../Formatters/ExtraHtmlElementReplacer.cs         |  61 ++++
 .../Formatters/IReplacer.cs                        |  10 +
 .../Formatters/JavaDocFormatters.cs                |  25 ++
 .../Formatters/PatternReplacer.cs                  |  21 ++
 .../Formatters/RepoLinkReplacer.cs                 |  39 +++
 .../JavaDocToMarkdownConverter.csproj              |   8 +
 .../JavaDocToMarkdownConverter/Program.cs          |   2 +-
 .../JavaDocToMarkdownConverter/StringExtensions.cs | 134 +++++++++
 src/dotnet/tools/lucene-cli/docs/analysis/toc.yml  |   6 +
 .../tools/lucene-cli/docs/benchmark/index.md       |   7 +-
 src/dotnet/tools/lucene-cli/docs/benchmark/toc.yml |  12 +
 src/dotnet/tools/lucene-cli/docs/demo/toc.yml      |  18 ++
 src/dotnet/tools/lucene-cli/docs/index/toc.yml     |  26 ++
 src/dotnet/tools/lucene-cli/docs/lock/toc.yml      |   4 +
 src/dotnet/tools/lucene-cli/docs/toc.yml           |  15 +
 websites/apidocs/api/toc.yml                       |  48 +++
 websites/apidocs/docfx.json                        | 330 +++++++++++++++++++++
 websites/apidocs/docs.ps1                          | 165 +++++++++++
 websites/apidocs/filterConfig.yml                  |   4 +
 websites/apidocs/index.md                          |  68 +++++
 .../lucenetemplate/partials/navbar.tmpl.partial    |  22 ++
 websites/apidocs/lucenetemplate/styles/main.css    |  73 +++++
 websites/apidocs/lucenetemplate/styles/main.js     |  32 ++
 websites/apidocs/lucenetemplate/web.config         |   9 +
 websites/apidocs/toc.yml                           |   8 +
 websites/site/contributing/current-status.md       |  13 +
 websites/site/contributing/documentation.md        |  29 ++
 websites/site/contributing/index.md                |  53 ++++
 websites/site/contributing/issue-tracker.md        |   9 +
 websites/site/contributing/mailing-lists.md        |  34 +++
 websites/site/contributing/source.md               |  24 ++
 websites/site/contributing/toc.yml                 |  12 +
 websites/site/contributing/wiki.md                 |   9 +
 websites/site/docfx.json                           |  48 +++
 websites/site/docs.md                              |  20 ++
 websites/site/download/download.md                 |  26 ++
 websites/site/download/toc.yml                     |   6 +
 websites/site/download/version-2.md                |  22 ++
 websites/site/download/version-3.md                |  55 ++++
 websites/site/download/version-4.md                |  66 +++++
 websites/site/index.md                             |  18 ++
 websites/site/lucenetemplate/index.html.tmpl       |  58 ++++
 .../partials/head-content.tmpl.partial             |  27 ++
 .../site/lucenetemplate/partials/head.tmpl.partial |  24 ++
 .../partials/home-quick-start.tmpl.partial         |  70 +++++
 .../lucenetemplate/partials/navbar.tmpl.partial    |  22 ++
 websites/site/lucenetemplate/styles/main.css       |  73 +++++
 websites/site/lucenetemplate/styles/site.css       | 131 ++++++++
 websites/site/lucenetemplate/web.config            |   9 +
 websites/site/site.ps1                             |  86 ++++++
 websites/site/toc.yml                              |  12 +
 171 files changed, 3423 insertions(+), 793 deletions(-)

diff --git a/.gitignore b/.gitignore
index 7b482db..ce84baa 100644
--- a/.gitignore
+++ b/.gitignore
@@ -49,4 +49,13 @@ release/
 .tools/
 
 # NUnit test result file produced by nunit3-console.exe
-[Tt]est[Rr]esult.xml
\ No newline at end of file
+[Tt]est[Rr]esult.xml
+websites/**/_site/*
+websites/**/tools/*
+websites/**/_exported_templates/*
+websites/**/api/.manifest
+websites/**/docfx.log
+websites/**/lucenetemplate/plugins/*
+websites/apidocs/api/**/*.yml
+websites/apidocs/api/**/*.manifest
+!websites/apidocs/api/toc.yml
\ No newline at end of file
diff --git a/Lucene.Net.sln b/Lucene.Net.sln
index d80ff61..1991479 100644
--- a/Lucene.Net.sln
+++ b/Lucene.Net.sln
@@ -112,6 +112,15 @@ Project("{9A19103F-16F7-4668-BE54-9A1E7A4F7556}") = "Lucene.Net.Tests.Join", "sr
 EndProject
 Project("{9A19103F-16F7-4668-BE54-9A1E7A4F7556}") = "Lucene.Net.Tests.Memory", "src\Lucene.Net.Tests.Memory\Lucene.Net.Tests.Memory.csproj", "{3BE7B6EA-8DBC-45E2-947C-1CA7E63B5603}"
 EndProject
+Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "apidocs", "apidocs", "{58FD6E39-F30F-4566-90E5-B7C9D6BC0660}"
+	ProjectSection(SolutionItems) = preProject
+		apidocs\docfx.filter.yml = apidocs\docfx.filter.yml
+		apidocs\docfx.json = apidocs\docfx.json
+		apidocs\docs.ps1 = apidocs\docs.ps1
+		apidocs\index.md = apidocs\index.md
+		apidocs\toc.yml = apidocs\toc.yml
+	EndProjectSection
+EndProject
 Project("{9A19103F-16F7-4668-BE54-9A1E7A4F7556}") = "Lucene.Net.Tests.Misc", "src\Lucene.Net.Tests.Misc\Lucene.Net.Tests.Misc.csproj", "{F8DDC5B7-A621-4B67-AB4B-BBE083C05BB8}"
 EndProject
 Project("{9A19103F-16F7-4668-BE54-9A1E7A4F7556}") = "Lucene.Net.Tests.Queries", "src\Lucene.Net.Tests.Queries\Lucene.Net.Tests.Queries.csproj", "{AC750DC0-05A3-4F96-8CC5-CFC8FD01D4CF}"
@@ -357,8 +366,8 @@ Global
 		HideSolutionNode = FALSE
 	EndGlobalSection
 	GlobalSection(NestedProjects) = preSolution
-		{EFB2E31A-5917-49D5-A808-FE5061A550B4} = {8CA61D33-3590-4024-A304-7B1F75B50653}
 		{4DF7EACE-2B25-43F6-B558-8520BF20BD76} = {8CA61D33-3590-4024-A304-7B1F75B50653}
+		{EFB2E31A-5917-49D5-A808-FE5061A550B4} = {8CA61D33-3590-4024-A304-7B1F75B50653}
 		{119BBACD-D4DB-4E3B-922F-3DA83E0B29E2} = {4DF7EACE-2B25-43F6-B558-8520BF20BD76}
 		{CF3A74CA-FEFD-4F41-961B-CC8CF8D96286} = {8CA61D33-3590-4024-A304-7B1F75B50653}
 		{4B054831-5275-44E2-A4D4-CA0B19BEE19A} = {8CA61D33-3590-4024-A304-7B1F75B50653}
diff --git a/src/Lucene.Net.Analysis.Common/Analysis/Cjk/package.md b/src/Lucene.Net.Analysis.Common/Analysis/Cjk/package.md
index b4b5e73..c5bb917 100644
--- a/src/Lucene.Net.Analysis.Common/Analysis/Cjk/package.md
+++ b/src/Lucene.Net.Analysis.Common/Analysis/Cjk/package.md
@@ -16,7 +16,7 @@
  limitations under the License.
 -->
 
-<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
+
 
 Analyzer for Chinese, Japanese, and Korean, which indexes bigrams. 
 This analyzer generates bigram terms, which are overlapping groups of two adjacent Han, Hiragana, Katakana, or Hangul characters.
diff --git a/src/Lucene.Net.Analysis.Common/Analysis/Cn/package.md b/src/Lucene.Net.Analysis.Common/Analysis/Cn/package.md
index 50a3555..51fbfdc 100644
--- a/src/Lucene.Net.Analysis.Common/Analysis/Cn/package.md
+++ b/src/Lucene.Net.Analysis.Common/Analysis/Cn/package.md
@@ -16,7 +16,7 @@
  limitations under the License.
 -->
 
-<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
+
 
 Analyzer for Chinese, which indexes unigrams (individual chinese characters).
 
diff --git a/src/Lucene.Net.Analysis.Common/Analysis/Compound/package.md b/src/Lucene.Net.Analysis.Common/Analysis/Compound/package.md
index 77585b4..c807b87 100644
--- a/src/Lucene.Net.Analysis.Common/Analysis/Compound/package.md
+++ b/src/Lucene.Net.Analysis.Common/Analysis/Compound/package.md
@@ -74,8 +74,8 @@ filter available:
 
 #### HyphenationCompoundWordTokenFilter
 
-The [](xref:Lucene.Net.Analysis.Compound.HyphenationCompoundWordTokenFilter
-HyphenationCompoundWordTokenFilter) uses hyphenation grammars to find
+The [
+HyphenationCompoundWordTokenFilter](xref:Lucene.Net.Analysis.Compound.HyphenationCompoundWordTokenFilter) uses hyphenation grammars to find
 potential subwords that a worth to check against the dictionary. It can be used
 without a dictionary as well but then produces a lot of "nonword" tokens.
 The quality of the output tokens is directly connected to the quality of the
@@ -101,8 +101,8 @@ Credits for the hyphenation code go to the
 
 #### DictionaryCompoundWordTokenFilter
 
-The [](xref:Lucene.Net.Analysis.Compound.DictionaryCompoundWordTokenFilter
-DictionaryCompoundWordTokenFilter) uses a dictionary-only approach to
+The [
+DictionaryCompoundWordTokenFilter](xref:Lucene.Net.Analysis.Compound.DictionaryCompoundWordTokenFilter) uses a dictionary-only approach to
 find subwords in a compound word. It is much slower than the one that
 uses the hyphenation grammars. You can use it as a first start to
 see if your dictionary is good or not because it is much simpler in design.
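
For orientation, and not part of this patch: a minimal usage sketch of the
DictionaryCompoundWordTokenFilter described in the hunk above, assuming the
Lucene.NET 4.8 API surface (LuceneVersion, CharArraySet, WhitespaceTokenizer)
and a toy two-entry dictionary:

    using System;
    using System.IO;
    using Lucene.Net.Analysis.Compound;
    using Lucene.Net.Analysis.Core;
    using Lucene.Net.Analysis.TokenAttributes;
    using Lucene.Net.Analysis.Util;
    using Lucene.Net.Util;

    var version = LuceneVersion.LUCENE_48;
    // A toy dictionary of subwords; a real one would be far larger.
    var dictionary = new CharArraySet(version, new[] { "rind", "fleisch" }, true);
    var tokenizer = new WhitespaceTokenizer(version, new StringReader("rindfleisch"));
    using var filter = new DictionaryCompoundWordTokenFilter(version, tokenizer, dictionary);

    var term = filter.AddAttribute<ICharTermAttribute>();
    filter.Reset();
    while (filter.IncrementToken())
        Console.WriteLine(term); // the original token plus the dictionary subwords
    filter.End();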
diff --git a/src/Lucene.Net.Analysis.Common/Analysis/Payloads/package.md b/src/Lucene.Net.Analysis.Common/Analysis/Payloads/package.md
index bf1ec16..dc5c944 100644
--- a/src/Lucene.Net.Analysis.Common/Analysis/Payloads/package.md
+++ b/src/Lucene.Net.Analysis.Common/Analysis/Payloads/package.md
@@ -15,11 +15,8 @@
  See the License for the specific language governing permissions and
  limitations under the License.
 -->
-<HTML>
-<HEAD>
-    <TITLE>org.apache.lucene.analysis.payloads</TITLE>
-</HEAD>
-<BODY>
+
+
+
 Provides various convenience classes for creating payloads on Tokens.
-</BODY>
-</HTML>
\ No newline at end of file
+
diff --git a/src/Lucene.Net.Analysis.Common/Analysis/Sinks/package.md b/src/Lucene.Net.Analysis.Common/Analysis/Sinks/package.md
index d9b4794..4e89cd4 100644
--- a/src/Lucene.Net.Analysis.Common/Analysis/Sinks/package.md
+++ b/src/Lucene.Net.Analysis.Common/Analysis/Sinks/package.md
@@ -15,13 +15,10 @@
  See the License for the specific language governing permissions and
  limitations under the License.
 -->
-<HTML>
-<HEAD>
-   <TITLE>org.apache.lucene.analysis.sinks</TITLE>
-</HEAD>
-<BODY>
-[](xref:Lucene.Net.Analysis.Sinks.TeeSinkTokenFilter) and implementations
-of [](xref:Lucene.Net.Analysis.Sinks.TeeSinkTokenFilter.SinkFilter) that
+
+
+
+<xref:Lucene.Net.Analysis.Sinks.TeeSinkTokenFilter> and implementations
+of <xref:Lucene.Net.Analysis.Sinks.TeeSinkTokenFilter.SinkFilter> that
 might be useful.
-</BODY>
-</HTML>
\ No newline at end of file
+
diff --git a/src/Lucene.Net.Analysis.Common/Analysis/Snowball/package.md b/src/Lucene.Net.Analysis.Common/Analysis/Snowball/package.md
index fc93a1d..48ae57e 100644
--- a/src/Lucene.Net.Analysis.Common/Analysis/Snowball/package.md
+++ b/src/Lucene.Net.Analysis.Common/Analysis/Snowball/package.md
@@ -16,7 +16,7 @@
  limitations under the License.
 -->
 
-[](xref:Lucene.Net.Analysis.TokenFilter) and [](xref:Lucene.Net.Analysis.Analyzer) implementations that use Snowball
+<xref:Lucene.Net.Analysis.TokenFilter> and <xref:Lucene.Net.Analysis.Analyzer> implementations that use Snowball
 stemmers.
 
  This project provides pre-compiled version of the Snowball stemmers based on revision 500 of the Tartarus Snowball repository, together with classes integrating them with the Lucene search engine. 
diff --git a/src/Lucene.Net.Analysis.Common/Analysis/Standard/Std31/package.md b/src/Lucene.Net.Analysis.Common/Analysis/Standard/Std31/package.md
index aaee44b..7d67974 100644
--- a/src/Lucene.Net.Analysis.Common/Analysis/Standard/Std31/package.md
+++ b/src/Lucene.Net.Analysis.Common/Analysis/Standard/Std31/package.md
@@ -16,4 +16,4 @@
  limitations under the License.
 -->
 
-Backwards-compatible implementation to match [](xref:Lucene.Net.Util.Version.LUCENE_31)
\ No newline at end of file
+Backwards-compatible implementation to match [#LUCENE_31](xref:Lucene.Net.Util.Version)
\ No newline at end of file
diff --git a/src/Lucene.Net.Analysis.Common/Analysis/Standard/Std34/package.md b/src/Lucene.Net.Analysis.Common/Analysis/Standard/Std34/package.md
index 0417d24..4f5fe5f 100644
--- a/src/Lucene.Net.Analysis.Common/Analysis/Standard/Std34/package.md
+++ b/src/Lucene.Net.Analysis.Common/Analysis/Standard/Std34/package.md
@@ -16,4 +16,4 @@
  limitations under the License.
 -->
 
-Backwards-compatible implementation to match [](xref:Lucene.Net.Util.Version.LUCENE_34)
\ No newline at end of file
+Backwards-compatible implementation to match [#LUCENE_34](xref:Lucene.Net.Util.Version)
\ No newline at end of file
diff --git a/src/Lucene.Net.Analysis.Common/Analysis/Standard/Std36/package.md b/src/Lucene.Net.Analysis.Common/Analysis/Standard/Std36/package.md
index ee550da..a4be333 100644
--- a/src/Lucene.Net.Analysis.Common/Analysis/Standard/Std36/package.md
+++ b/src/Lucene.Net.Analysis.Common/Analysis/Standard/Std36/package.md
@@ -16,4 +16,4 @@
  limitations under the License.
 -->
 
-Backwards-compatible implementation to match [](xref:Lucene.Net.Util.Version.LUCENE_36)
\ No newline at end of file
+Backwards-compatible implementation to match [#LUCENE_36](xref:Lucene.Net.Util.Version)
\ No newline at end of file
diff --git a/src/Lucene.Net.Analysis.Common/Analysis/Standard/Std40/package.md b/src/Lucene.Net.Analysis.Common/Analysis/Standard/Std40/package.md
index 038f829..78c2c19 100644
--- a/src/Lucene.Net.Analysis.Common/Analysis/Standard/Std40/package.md
+++ b/src/Lucene.Net.Analysis.Common/Analysis/Standard/Std40/package.md
@@ -16,4 +16,4 @@
  limitations under the License.
 -->
 
-Backwards-compatible implementation to match [](xref:Lucene.Net.Util.Version.LUCENE_40)
\ No newline at end of file
+Backwards-compatible implementation to match [#LUCENE_40](xref:Lucene.Net.Util.Version)
\ No newline at end of file
diff --git a/src/Lucene.Net.Analysis.Common/Analysis/Standard/package.md b/src/Lucene.Net.Analysis.Common/Analysis/Standard/package.md
index fa2696c..10033d4 100644
--- a/src/Lucene.Net.Analysis.Common/Analysis/Standard/package.md
+++ b/src/Lucene.Net.Analysis.Common/Analysis/Standard/package.md
@@ -20,7 +20,7 @@
 
 The `org.apache.lucene.analysis.standard` package contains three fast grammar-based tokenizers constructed with JFlex:
 
-*   [](xref:Lucene.Net.Analysis.Standard.StandardTokenizer):
+*   <xref:Lucene.Net.Analysis.Standard.StandardTokenizer>:
         as of Lucene 3.1, implements the Word Break rules from the Unicode Text 
         Segmentation algorithm, as specified in 
         [Unicode Standard Annex #29](http://unicode.org/reports/tr29/).
@@ -28,32 +28,32 @@ The `org.apache.lucene.analysis.standard` package contains three fast grammar-ba
     **not** tokenized as single tokens, but are instead split up into 
         tokens according to the UAX#29 word break rules.
 
-        [](xref:Lucene.Net.Analysis.Standard.StandardAnalyzer StandardAnalyzer) includes
-        [](xref:Lucene.Net.Analysis.Standard.StandardTokenizer StandardTokenizer),
-        [](xref:Lucene.Net.Analysis.Standard.StandardFilter StandardFilter), 
-        [](xref:Lucene.Net.Analysis.Core.LowerCaseFilter LowerCaseFilter)
-        and [](xref:Lucene.Net.Analysis.Core.StopFilter StopFilter).
+        [StandardAnalyzer](xref:Lucene.Net.Analysis.Standard.StandardAnalyzer) includes
+        [StandardTokenizer](xref:Lucene.Net.Analysis.Standard.StandardTokenizer),
+        [StandardFilter](xref:Lucene.Net.Analysis.Standard.StandardFilter), 
+        [LowerCaseFilter](xref:Lucene.Net.Analysis.Core.LowerCaseFilter)
+        and [StopFilter](xref:Lucene.Net.Analysis.Core.StopFilter).
         When the `Version` specified in the constructor is lower than 
-    3.1, the [](xref:Lucene.Net.Analysis.Standard.ClassicTokenizer ClassicTokenizer)
+    3.1, the [ClassicTokenizer](xref:Lucene.Net.Analysis.Standard.ClassicTokenizer)
         implementation is invoked.
-*   [](xref:Lucene.Net.Analysis.Standard.ClassicTokenizer ClassicTokenizer):
+*   [ClassicTokenizer](xref:Lucene.Net.Analysis.Standard.ClassicTokenizer):
         this class was formerly (prior to Lucene 3.1) named 
         `StandardTokenizer`.  (Its tokenization rules are not
         based on the Unicode Text Segmentation algorithm.)
-        [](xref:Lucene.Net.Analysis.Standard.ClassicAnalyzer ClassicAnalyzer) includes
-        [](xref:Lucene.Net.Analysis.Standard.ClassicTokenizer ClassicTokenizer),
-        [](xref:Lucene.Net.Analysis.Standard.StandardFilter StandardFilter), 
-        [](xref:Lucene.Net.Analysis.Core.LowerCaseFilter LowerCaseFilter)
-        and [](xref:Lucene.Net.Analysis.Core.StopFilter StopFilter).
+        [ClassicAnalyzer](xref:Lucene.Net.Analysis.Standard.ClassicAnalyzer) includes
+        [ClassicTokenizer](xref:Lucene.Net.Analysis.Standard.ClassicTokenizer),
+        [StandardFilter](xref:Lucene.Net.Analysis.Standard.StandardFilter), 
+        [LowerCaseFilter](xref:Lucene.Net.Analysis.Core.LowerCaseFilter)
+        and [StopFilter](xref:Lucene.Net.Analysis.Core.StopFilter).
 
-*   [](xref:Lucene.Net.Analysis.Standard.UAX29URLEmailTokenizer UAX29URLEmailTokenizer):
+*   [UAX29URLEmailTokenizer](xref:Lucene.Net.Analysis.Standard.UAX29URLEmailTokenizer):
         implements the Word Break rules from the Unicode Text Segmentation
         algorithm, as specified in 
         [Unicode Standard Annex #29](http://unicode.org/reports/tr29/).
         URLs and email addresses are also tokenized according to the relevant RFCs.
 
-        [](xref:Lucene.Net.Analysis.Standard.UAX29URLEmailAnalyzer UAX29URLEmailAnalyzer) includes
-        [](xref:Lucene.Net.Analysis.Standard.UAX29URLEmailTokenizer UAX29URLEmailTokenizer),
-        [](xref:Lucene.Net.Analysis.Standard.StandardFilter StandardFilter),
-        [](xref:Lucene.Net.Analysis.Core.LowerCaseFilter LowerCaseFilter)
-        and [](xref:Lucene.Net.Analysis.Core.StopFilter StopFilter).
\ No newline at end of file
+        [UAX29URLEmailAnalyzer](xref:Lucene.Net.Analysis.Standard.UAX29URLEmailAnalyzer) includes
+        [UAX29URLEmailTokenizer](xref:Lucene.Net.Analysis.Standard.UAX29URLEmailTokenizer),
+        [StandardFilter](xref:Lucene.Net.Analysis.Standard.StandardFilter),
+        [LowerCaseFilter](xref:Lucene.Net.Analysis.Core.LowerCaseFilter)
+        and [StopFilter](xref:Lucene.Net.Analysis.Core.StopFilter).
\ No newline at end of file
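
As an aside (not part of this patch): the composition described above can be
exercised directly. A minimal sketch, assuming the Lucene.NET 4.8 API surface:

    using System;
    using Lucene.Net.Analysis.Standard;
    using Lucene.Net.Analysis.TokenAttributes;
    using Lucene.Net.Util;

    // StandardAnalyzer = StandardTokenizer + StandardFilter
    //                    + LowerCaseFilter + StopFilter, per the doc above.
    using var analyzer = new StandardAnalyzer(LuceneVersion.LUCENE_48);
    using var stream = analyzer.GetTokenStream("body", "The Quick Brown Fox");
    var term = stream.AddAttribute<ICharTermAttribute>();
    stream.Reset();
    while (stream.IncrementToken())
        Console.WriteLine(term); // lowercased terms; the stop word "the" is dropped
    stream.End();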
diff --git a/src/Lucene.Net.Analysis.Common/Collation/TokenAttributes/package.md b/src/Lucene.Net.Analysis.Common/Collation/TokenAttributes/package.md
index 1fcb461..1a702a6 100644
--- a/src/Lucene.Net.Analysis.Common/Collation/TokenAttributes/package.md
+++ b/src/Lucene.Net.Analysis.Common/Collation/TokenAttributes/package.md
@@ -16,4 +16,4 @@
  limitations under the License.
 -->
 
-Custom [](xref:Lucene.Net.Util.AttributeImpl) for indexing collation keys as index terms.
\ No newline at end of file
+Custom <xref:Lucene.Net.Util.AttributeImpl> for indexing collation keys as index terms.
\ No newline at end of file
diff --git a/src/Lucene.Net.Analysis.Common/Collation/package.md b/src/Lucene.Net.Analysis.Common/Collation/package.md
index 7d4f844..cca82e8 100644
--- a/src/Lucene.Net.Analysis.Common/Collation/package.md
+++ b/src/Lucene.Net.Analysis.Common/Collation/package.md
@@ -28,8 +28,8 @@
     very slow.)
 
 *   Effective Locale-specific normalization (case differences, diacritics, etc.).
-    ([](xref:Lucene.Net.Analysis.Core.LowerCaseFilter) and 
-    [](xref:Lucene.Net.Analysis.Miscellaneous.ASCIIFoldingFilter) provide these services
+    (<xref:Lucene.Net.Analysis.Core.LowerCaseFilter> and 
+    <xref:Lucene.Net.Analysis.Miscellaneous.ASCIIFoldingFilter> provide these services
     in a generic way that doesn't take into account locale-specific needs.)
 
 ## Example Usages
diff --git a/src/Lucene.Net.Analysis.Common/overview.md b/src/Lucene.Net.Analysis.Common/overview.md
index bd1a57a..7d8c3cf 100644
--- a/src/Lucene.Net.Analysis.Common/overview.md
+++ b/src/Lucene.Net.Analysis.Common/overview.md
@@ -1,4 +1,9 @@
-<!--
+---
+uid: Lucene.Net.Analysis.Common
+summary: *content
+---
+
+<!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
@@ -17,6 +22,6 @@
 
   Analyzers for indexing content in different languages and domains.
 
- For an introduction to Lucene's analysis API, see the [](xref:Lucene.Net.Analysis) package documentation. 
+ For an introduction to Lucene's analysis API, see the <xref:Lucene.Net.Analysis> package documentation. 
 
- This module contains concrete components ([](xref:Lucene.Net.Analysis.CharFilter)s, [](xref:Lucene.Net.Analysis.Tokenizer)s, and ([](xref:Lucene.Net.Analysis.TokenFilter)s) for analyzing different types of content. It also provides a number of [](xref:Lucene.Net.Analysis.Analyzer)s for different languages that you can use to get started quickly. 
\ No newline at end of file
+ This module contains concrete components (<xref:Lucene.Net.Analysis.CharFilter>s, <xref:Lucene.Net.Analysis.Tokenizer>s, and (<xref:Lucene.Net.Analysis.TokenFilter>s) for analyzing different types of content. It also provides a number of <xref:Lucene.Net.Analysis.Analyzer>s for different languages that you can use to get started quickly. 
\ No newline at end of file
diff --git a/src/Lucene.Net.Analysis.ICU/Collation/TokenAttributes/package.md b/src/Lucene.Net.Analysis.ICU/Collation/TokenAttributes/package.md
index 1fcb461..1a702a6 100644
--- a/src/Lucene.Net.Analysis.ICU/Collation/TokenAttributes/package.md
+++ b/src/Lucene.Net.Analysis.ICU/Collation/TokenAttributes/package.md
@@ -16,4 +16,4 @@
  limitations under the License.
 -->
 
-Custom [](xref:Lucene.Net.Util.AttributeImpl) for indexing collation keys as index terms.
\ No newline at end of file
+Custom <xref:Lucene.Net.Util.AttributeImpl> for indexing collation keys as index terms.
\ No newline at end of file
diff --git a/src/Lucene.Net.Analysis.ICU/overview.md b/src/Lucene.Net.Analysis.ICU/overview.md
index 2800513..c0f1c6d 100644
--- a/src/Lucene.Net.Analysis.ICU/overview.md
+++ b/src/Lucene.Net.Analysis.ICU/overview.md
@@ -1,4 +1,9 @@
-<!--
+---
+uid: Lucene.Net.Analysis.Icu
+summary: *content
+---
+
+<!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
@@ -16,10 +21,8 @@
 -->
 <!-- :Post-Release-Update-Version.LUCENE_XY: - several mentions in this file -->
 
-    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
-    <title>
-      Apache Lucene ICU integration module
-    </title>
+    
+    
 
 This module exposes functionality from 
 [ICU](http://site.icu-project.org/) to Apache Lucene. ICU4J is a Java
@@ -27,7 +30,7 @@ library that enhances Java's internationalization support by improving
 performance, keeping current with the Unicode Standard, and providing richer
 APIs. 
 
-For an introduction to Lucene's analysis API, see the [](xref:Lucene.Net.Analysis) package documentation.
+For an introduction to Lucene's analysis API, see the <xref:Lucene.Net.Analysis> package documentation.
 
  This module exposes the following functionality: 
 
@@ -84,8 +87,8 @@ For an introduction to Lucene's analysis API, see the [](xref:Lucene.Net.Analysi
     very slow.)
 
 *   Effective Locale-specific normalization (case differences, diacritics, etc.).
-    ([](xref:Lucene.Net.Analysis.Core.LowerCaseFilter) and 
-    [](xref:Lucene.Net.Analysis.Miscellaneous.ASCIIFoldingFilter) provide these services
+    (<xref:Lucene.Net.Analysis.Core.LowerCaseFilter> and 
+    <xref:Lucene.Net.Analysis.Miscellaneous.ASCIIFoldingFilter> provide these services
     in a generic way that doesn't take into account locale-specific needs.)
 
 ## Example Usages
@@ -266,7 +269,7 @@ For an introduction to Lucene's analysis API, see the [](xref:Lucene.Net.Analysi
 
 # [Backwards Compatibility]()
 
- This module exists to provide up-to-date Unicode functionality that supports the most recent version of Unicode (currently 6.3). However, some users who wish for stronger backwards compatibility can restrict [](xref:Lucene.Net.Analysis.Icu.ICUNormalizer2Filter) to operate on only a specific Unicode Version by using a {@link com.ibm.icu.text.FilteredNormalizer2}. 
+ This module exists to provide up-to-date Unicode functionality that supports the most recent version of Unicode (currently 6.3). However, some users who wish for stronger backwards compatibility can restrict <xref:Lucene.Net.Analysis.Icu.ICUNormalizer2Filter> to operate on only a specific Unicode Version by using a {@link com.ibm.icu.text.FilteredNormalizer2}. 
 
 ## Example Usages
 
diff --git a/src/Lucene.Net.Analysis.Kuromoji/overview.md b/src/Lucene.Net.Analysis.Kuromoji/overview.md
index 99acca2..8e5bcb1 100644
--- a/src/Lucene.Net.Analysis.Kuromoji/overview.md
+++ b/src/Lucene.Net.Analysis.Kuromoji/overview.md
@@ -1,4 +1,9 @@
-<!--
+---
+uid: Lucene.Net.Analysis.Kuromoji
+summary: *content
+---
+
+<!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
@@ -15,12 +20,10 @@
  limitations under the License.
 -->
 
-    <title>
-      Apache Lucene Kuromoji Analyzer
-    </title>
+    
 
   Kuromoji is a morphological analyzer for Japanese text.  
 
  This module provides support for Japanese text analysis, including features such as part-of-speech tagging, lemmatization, and compound word analysis. 
 
- For an introduction to Lucene's analysis API, see the [](xref:Lucene.Net.Analysis) package documentation. 
\ No newline at end of file
+ For an introduction to Lucene's analysis API, see the <xref:Lucene.Net.Analysis> package documentation. 
\ No newline at end of file
diff --git a/src/Lucene.Net.Analysis.Phonetic/overview.md b/src/Lucene.Net.Analysis.Phonetic/overview.md
index 77bee89..164ece7 100644
--- a/src/Lucene.Net.Analysis.Phonetic/overview.md
+++ b/src/Lucene.Net.Analysis.Phonetic/overview.md
@@ -1,4 +1,9 @@
-<!--
+---
+uid: Lucene.Net.Analysis.Phonetic
+summary: *content
+---
+
+<!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
@@ -15,12 +20,10 @@
  limitations under the License.
 -->
 
-    <title>
-      analyzers-phonetic
-    </title>
+    
 
   Analysis for indexing phonetic signatures (for sounds-alike search)
 
- For an introduction to Lucene's analysis API, see the [](xref:Lucene.Net.Analysis) package documentation. 
+ For an introduction to Lucene's analysis API, see the <xref:Lucene.Net.Analysis> package documentation. 
 
  This module provides analysis components (using encoders from [Apache Commons Codec](http://commons.apache.org/codec/)) that index and search phonetic signatures. 
\ No newline at end of file
diff --git a/src/Lucene.Net.Analysis.SmartCn/HHMM/package.md b/src/Lucene.Net.Analysis.SmartCn/HHMM/package.md
index eccb59d..d493f00 100644
--- a/src/Lucene.Net.Analysis.SmartCn/HHMM/package.md
+++ b/src/Lucene.Net.Analysis.SmartCn/HHMM/package.md
@@ -16,7 +16,7 @@
  limitations under the License.
 -->
 
-<META http-equiv="Content-Type" content="text/html; charset=UTF-8">
+
 
 SmartChineseAnalyzer Hidden Markov Model package.
 @lucene.experimental
\ No newline at end of file
diff --git a/src/Lucene.Net.Analysis.SmartCn/overview.md b/src/Lucene.Net.Analysis.SmartCn/overview.md
index 0a7e1ff..e844e2a 100644
--- a/src/Lucene.Net.Analysis.SmartCn/overview.md
+++ b/src/Lucene.Net.Analysis.SmartCn/overview.md
@@ -15,10 +15,8 @@
  limitations under the License.
 -->
 
-    <title>
-      smartcn
-    </title>
+    
 
   Analyzer for Simplified Chinese, which indexes words.
 
- For an introduction to Lucene's analysis API, see the [](xref:Lucene.Net.Analysis) package documentation. 
\ No newline at end of file
+ For an introduction to Lucene's analysis API, see the <xref:Lucene.Net.Analysis> package documentation. 
\ No newline at end of file
diff --git a/src/Lucene.Net.Analysis.SmartCn/package.md b/src/Lucene.Net.Analysis.SmartCn/package.md
index 6afbed8..ad648d5 100644
--- a/src/Lucene.Net.Analysis.SmartCn/package.md
+++ b/src/Lucene.Net.Analysis.SmartCn/package.md
@@ -1,4 +1,9 @@
-
+---
+uid: Lucene.Net.Analysis.Smartcn
+summary: *content
+---
+
+
 <!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
@@ -16,7 +21,7 @@
  limitations under the License.
 -->
 
-<META http-equiv="Content-Type" content="text/html; charset=UTF-8">
+
 
 Analyzer for Simplified Chinese, which indexes words.
 @lucene.experimental
diff --git a/src/Lucene.Net.Analysis.Stempel/overview.md b/src/Lucene.Net.Analysis.Stempel/overview.md
index a31c1ae..394ea91 100644
--- a/src/Lucene.Net.Analysis.Stempel/overview.md
+++ b/src/Lucene.Net.Analysis.Stempel/overview.md
@@ -1,4 +1,9 @@
-<!--
+---
+uid: Lucene.Net.Analysis.Stempel
+summary: *content
+---
+
+<!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
diff --git a/src/Lucene.Net.Benchmark/ByTask/package.md b/src/Lucene.Net.Benchmark/ByTask/package.md
index 9efd463..6703104 100644
--- a/src/Lucene.Net.Benchmark/ByTask/package.md
+++ b/src/Lucene.Net.Benchmark/ByTask/package.md
@@ -15,11 +15,9 @@
  See the License for the specific language governing permissions and
  limitations under the License.
 -->
-<html>
-<head>
-    <title>Benchmarking Lucene By Tasks</title>
-</head>
-<body>
+
+
+
 Benchmarking Lucene By Tasks.
 <div>
 
@@ -495,5 +493,4 @@ Example: max.buffered=buf:10:10:100:100 -
 
 </div>
 <div> </div>
-</body>
-</html>
\ No newline at end of file
+
diff --git a/src/Lucene.Net.Benchmark/overview.md b/src/Lucene.Net.Benchmark/overview.md
index b786443..2c2e6e1 100644
--- a/src/Lucene.Net.Benchmark/overview.md
+++ b/src/Lucene.Net.Benchmark/overview.md
@@ -1,4 +1,9 @@
-<!--
+---
+uid: Lucene.Net.Benchmark
+summary: *content
+---
+
+<!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
@@ -15,8 +20,6 @@
  limitations under the License.
 -->
 
-    <title>
-      benchmark
-    </title>
+    
 
   benchmark
\ No newline at end of file
diff --git a/src/Lucene.Net.Benchmark/package.md b/src/Lucene.Net.Benchmark/package.md
index b96f567..b9c74f9 100644
--- a/src/Lucene.Net.Benchmark/package.md
+++ b/src/Lucene.Net.Benchmark/package.md
@@ -15,11 +15,9 @@
  See the License for the specific language governing permissions and
  limitations under the License.
 -->
-<html>
-<head>
-    <title>Lucene Benchmarking Package</title>
-</head>
-<body>
+
+
+
 The benchmark contribution contains tools for benchmarking Lucene using standard, freely available corpora.
 <div>
 
@@ -42,5 +40,4 @@ The benchmark contribution contains tools for benchmarking Lucene using standard
     The original code for these classes was donated by Andrzej Bialecki at http://issues.apache.org/jira/browse/LUCENE-675 and has been updated by Grant Ingersoll to make some parts of the code reusable in other benchmarkers
 </div>
 <div> </div>
-</body>
-</html>
\ No newline at end of file
+
diff --git a/src/Lucene.Net.Classification/overview.md b/src/Lucene.Net.Classification/overview.md
index fa0f140..ecf2c14 100644
--- a/src/Lucene.Net.Classification/overview.md
+++ b/src/Lucene.Net.Classification/overview.md
@@ -1,4 +1,9 @@
-<!--
+---
+uid: Lucene.Net.Classification
+summary: *content
+---
+
+<!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
@@ -15,8 +20,6 @@
  limitations under the License.
 -->
 
-  <title>
-    classification
-  </title>
+  
 
 Provides a classification module which leverages Lucene index information.
\ No newline at end of file
diff --git a/src/Lucene.Net.Demo/overview.md b/src/Lucene.Net.Demo/overview.md
index ad0bdd0..4f87725 100644
--- a/src/Lucene.Net.Demo/overview.md
+++ b/src/Lucene.Net.Demo/overview.md
@@ -1,4 +1,9 @@
-<!--
+---
+uid: Lucene.Net.Demo
+summary: *content
+---
+
+<!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
@@ -103,7 +108,7 @@ The files discussed here are linked into this documentation directly: * [IndexFi
 
 As we discussed in the previous walk-through, the [IndexFiles](https://github.com/apache/lucenenet/blob/{tag}/src/Lucene.Net.Demo/IndexFiles.cs) class creates a Lucene Index. Let's take a look at how it does this.
 
-The <span class="codefrag">main()</span> method parses the command-line parameters, then in preparation for instantiating [](xref:Lucene.Net.Index.IndexWriter IndexWriter), opens a [](xref:Lucene.Net.Store.Directory Directory), and instantiates [](xref:Lucene.Net.Analysis.Standard.StandardAnalyzer StandardAnalyzer) and [](xref:Lucene.Net.Index.IndexWriterConfig IndexWriterConfig).
+The <span class="codefrag">main()</span> method parses the command-line parameters, then in preparation for instantiating [IndexWriter](xref:Lucene.Net.Index.IndexWriter), opens a [Directory](xref:Lucene.Net.Store.Directory), and instantiates [StandardAnalyzer](xref:Lucene.Net.Analysis.Standard.StandardAnalyzer) and [IndexWriterConfig](xref:Lucene.Net.Index.IndexWriterConfig).
 
 The value of the <span class="codefrag">-index</span> command-line parameter is the name of the filesystem directory where all index information should be stored. If <span class="codefrag">IndexFiles</span> is invoked with a relative path given in the <span class="codefrag">-index</span> command-line parameter, or if the <span class="codefrag">-index</span> command-line parameter is not given, causing the default relative index path "<span class="codefrag">index</span>" to be used, the i [...]
 
@@ -111,13 +116,13 @@ The <span class="codefrag">-docs</span> command-line parameter value is the loca
 
 The <span class="codefrag">-update</span> command-line parameter tells <span class="codefrag">IndexFiles</span> not to delete the index if it already exists. When <span class="codefrag">-update</span> is not given, <span class="codefrag">IndexFiles</span> will first wipe the slate clean before indexing any documents.
 
-Lucene [](xref:Lucene.Net.Store.Directory Directory)s are used by the <span class="codefrag">IndexWriter</span> to store information in the index. In addition to the [](xref:Lucene.Net.Store.FSDirectory FSDirectory) implementation we are using, there are several other <span class="codefrag">Directory</span> subclasses that can write to RAM, to databases, etc.
+Lucene [Directory](xref:Lucene.Net.Store.Directory)s are used by the <span class="codefrag">IndexWriter</span> to store information in the index. In addition to the [FSDirectory](xref:Lucene.Net.Store.FSDirectory) implementation we are using, there are several other <span class="codefrag">Directory</span> subclasses that can write to RAM, to databases, etc.
 
-Lucene [](xref:Lucene.Net.Analysis.Analyzer Analyzer)s are processing pipelines that break up text into indexed tokens, a.k.a. terms, and optionally perform other operations on these tokens, e.g. downcasing, synonym insertion, filtering out unwanted tokens, etc. The <span class="codefrag">Analyzer</span> we are using is <span class="codefrag">StandardAnalyzer</span>, which creates tokens using the Word Break rules from the Unicode Text Segmentation algorithm specified in [Unicode Standar [...]
+Lucene [Analyzer](xref:Lucene.Net.Analysis.Analyzer)s are processing pipelines that break up text into indexed tokens, a.k.a. terms, and optionally perform other operations on these tokens, e.g. downcasing, synonym insertion, filtering out unwanted tokens, etc. The <span class="codefrag">Analyzer</span> we are using is <span class="codefrag">StandardAnalyzer</span>, which creates tokens using the Word Break rules from the Unicode Text Segmentation algorithm specified in [Unicode Standard [...]
 
 The <span class="codefrag">IndexWriterConfig</span> instance holds all configuration for <span class="codefrag">IndexWriter</span>. For example, we set the <span class="codefrag">OpenMode</span> to use here based on the value of the <span class="codefrag">-update</span> command-line parameter.
 
-Looking further down in the file, after <span class="codefrag">IndexWriter</span> is instantiated, you should see the <span class="codefrag">indexDocs()</span> code. This recursive function crawls the directories and creates [](xref:Lucene.Net.Documents.Document Document) objects. The <span class="codefrag">Document</span> is simply a data object to represent the text content from the file as well as its creation time and location. These instances are added to the <span class="codefrag"> [...]
+Looking further down in the file, after <span class="codefrag">IndexWriter</span> is instantiated, you should see the <span class="codefrag">indexDocs()</span> code. This recursive function crawls the directories and creates [Document](xref:Lucene.Net.Documents.Document) objects. The <span class="codefrag">Document</span> is simply a data object to represent the text content from the file as well as its creation time and location. These instances are added to the <span class="codefrag">I [...]
 
 </div>
 
@@ -125,8 +130,8 @@ Looking further down in the file, after <span class="codefrag">IndexWriter</span
 
 <div class="section">
 
-The [SearchFiles](https://github.com/apache/lucenenet/blob/{tag}/src/Lucene.Net.Demo/SearchFiles.cs) class is quite simple. It primarily collaborates with an [](xref:Lucene.Net.Search.IndexSearcher IndexSearcher), [](xref:Lucene.Net.Analysis.Standard.StandardAnalyzer StandardAnalyzer), (which is used in the [IndexFiles](https://github.com/apache/lucenenet/blob/{tag}/src/Lucene.Net.Demo/IndexFiles.cs) class as well) and a [](xref:Lucene.Net.QueryParsers.Classic.QueryParser QueryParser). T [...]
+The [SearchFiles](https://github.com/apache/lucenenet/blob/{tag}/src/Lucene.Net.Demo/SearchFiles.cs) class is quite simple. It primarily collaborates with an [IndexSearcher](xref:Lucene.Net.Search.IndexSearcher), [StandardAnalyzer](xref:Lucene.Net.Analysis.Standard.StandardAnalyzer), (which is used in the [IndexFiles](https://github.com/apache/lucenenet/blob/{tag}/src/Lucene.Net.Demo/IndexFiles.cs) class as well) and a [QueryParser](xref:Lucene.Net.QueryParsers.Classic.QueryParser). The  [...]
 
-<span class="codefrag">SearchFiles</span> uses the [](xref:Lucene.Net.Search.IndexSearcher.Search(Lucene.Net.Search.Query,int) IndexSearcher.Search(query,n)) method that returns [](xref:Lucene.Net.Search.TopDocs TopDocs) with max <span class="codefrag">n</span> hits. The results are printed in pages, sorted by score (i.e. relevance).
+<span class="codefrag">SearchFiles</span> uses the [IndexSearcher.search](xref:Lucene.Net.Search.IndexSearcher#methods) method that returns [TopDocs](xref:Lucene.Net.Search.TopDocs) with max <span class="codefrag">n</span> hits. The results are printed in pages, sorted by score (i.e. relevance).
 
 </div>
\ No newline at end of file
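
For orientation, and not part of this patch: a hedged condensation of the
IndexFiles/SearchFiles flow walked through above, assuming the Lucene.NET 4.8
API surface (the "index" path and "contents" field are illustrative):

    using Lucene.Net.Analysis.Standard;
    using Lucene.Net.Documents;
    using Lucene.Net.Index;
    using Lucene.Net.Search;
    using Lucene.Net.Store;
    using Lucene.Net.Util;

    var version = LuceneVersion.LUCENE_48;
    using var dir = FSDirectory.Open("index");            // the -index directory
    var config = new IndexWriterConfig(version, new StandardAnalyzer(version))
    {
        OpenMode = OpenMode.CREATE_OR_APPEND              // mirrors -update
    };
    using (var writer = new IndexWriter(dir, config))
    {
        // indexDocs() builds one Document per file; this is a one-doc stand-in.
        var doc = new Document { new TextField("contents", "hello lucene", Field.Store.YES) };
        writer.AddDocument(doc);
    }

    using var reader = DirectoryReader.Open(dir);
    var searcher = new IndexSearcher(reader);
    // Search(query, n) returns TopDocs with at most n hits, as described above.
    TopDocs hits = searcher.Search(new TermQuery(new Term("contents", "lucene")), 10);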
diff --git a/src/Lucene.Net.Expressions/JS/package.md b/src/Lucene.Net.Expressions/JS/package.md
index 3bf25be..8b6d7c2 100644
--- a/src/Lucene.Net.Expressions/JS/package.md
+++ b/src/Lucene.Net.Expressions/JS/package.md
@@ -28,8 +28,8 @@ A Javascript expression is a numeric expression specified using an expression sy
 *   Trigonometric library functions: `acosh acos asinh asin atanh atan atan2 cosh cos sinh sin tanh tan`
 *   Distance functions: `haversin`
 *   Miscellaneous functions: `min, max`
-*   Arbitrary external variables - see [](xref:Lucene.Net.Expressions.Bindings)
+*   Arbitrary external variables - see <xref:Lucene.Net.Expressions.Bindings>
 
  JavaScript order of precedence rules apply for operators. Shortcut evaluation is used for logical operators—the second argument is only evaluated if the value of the expression cannot be determined after evaluating the first argument. For example, in the expression `a || b`, `b` is only evaluated if a is not true. 
 
- To compile an expression, use [](xref:Lucene.Net.Expressions.Js.JavascriptCompiler). 
\ No newline at end of file
+ To compile an expression, use <xref:Lucene.Net.Expressions.Js.JavascriptCompiler>. 
\ No newline at end of file
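
Not part of this patch, but as a concrete illustration: compiling and binding
one of these expressions, assuming the Lucene.NET 4.8 Expressions API (the
"popularity" field is hypothetical):

    using Lucene.Net.Expressions;
    using Lucene.Net.Expressions.JS;
    using Lucene.Net.Search;

    // Compile a JavaScript-syntax expression into an Expression.
    Expression expr = JavascriptCompiler.Compile("sqrt(_score) + ln(popularity)");

    // Bind the expression's external variables to concrete sort fields.
    var bindings = new SimpleBindings();
    bindings.Add(new SortField("_score", SortFieldType.SCORE));
    bindings.Add(new SortField("popularity", SortFieldType.INT32));

    // Sort results by the computed expression value, highest first.
    var sort = new Sort(expr.GetSortField(bindings, true));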
diff --git a/src/Lucene.Net.Expressions/overview.md b/src/Lucene.Net.Expressions/overview.md
index d3f1c5a..77ce8cf 100644
--- a/src/Lucene.Net.Expressions/overview.md
+++ b/src/Lucene.Net.Expressions/overview.md
@@ -1,4 +1,9 @@
-<!--
+---
+uid: Lucene.Net.Expressions
+summary: *content
+---
+
+<!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
@@ -19,6 +24,6 @@
 
  The expressions module is new to Lucene 4.6. It provides an API for dynamically computing per-document values based on string expressions. 
 
- The module is organized in two sections: 1. [](xref:Lucene.Net.Expressions) - The abstractions and simple utilities for common operations like sorting on an expression 2. [](xref:Lucene.Net.Expressions.Js) - A compiler for a subset of JavaScript expressions 
+ The module is organized in two sections: 1. <xref:Lucene.Net.Expressions> - The abstractions and simple utilities for common operations like sorting on an expression 2. <xref:Lucene.Net.Expressions.Js> - A compiler for a subset of JavaScript expressions 
 
- For sample code showing how to use the API, see [](xref:Lucene.Net.Expressions.Expression). 
\ No newline at end of file
+ For sample code showing how to use the API, see <xref:Lucene.Net.Expressions.Expression>. 
\ No newline at end of file
diff --git a/src/Lucene.Net.Expressions/package.md b/src/Lucene.Net.Expressions/package.md
index 07ef42d..bfa5392 100644
--- a/src/Lucene.Net.Expressions/package.md
+++ b/src/Lucene.Net.Expressions/package.md
@@ -17,8 +17,8 @@
 
 # expressions
 
- [](xref:Lucene.Net.Expressions.Expression) - result of compiling an expression, which can evaluate it for a given document. Each expression can have external variables are resolved by {@code Bindings}. 
+ <xref:Lucene.Net.Expressions.Expression> - result of compiling an expression, which can evaluate it for a given document. Each expression can have external variables are resolved by {@code Bindings}. 
 
- [](xref:Lucene.Net.Expressions.Bindings) - abstraction for binding external variables to a way to get a value for those variables for a particular document (ValueSource). 
+ <xref:Lucene.Net.Expressions.Bindings> - abstraction for binding external variables to a way to get a value for those variables for a particular document (ValueSource). 
 
- [](xref:Lucene.Net.Expressions.SimpleBindings) - default implementation of bindings which provide easy ways to bind sort fields and other expressions to external variables 
\ No newline at end of file
+ <xref:Lucene.Net.Expressions.SimpleBindings> - default implementation of bindings which provide easy ways to bind sort fields and other expressions to external variables 
\ No newline at end of file
diff --git a/src/Lucene.Net.Facet/Facets.cs b/src/Lucene.Net.Facet/Facets.cs
index 71c08a2..9a0ea3e 100644
--- a/src/Lucene.Net.Facet/Facets.cs
+++ b/src/Lucene.Net.Facet/Facets.cs
@@ -22,7 +22,7 @@ namespace Lucene.Net.Facet
     /// <summary>
     /// Common base class for all facets implementations.
     /// 
-    ///  @lucene.experimental 
+    /// @lucene.experimental 
     /// </summary>
     public abstract class Facets
     {
diff --git a/src/Lucene.Net.Facet/Range/DoubleRangeFacetCounts.cs b/src/Lucene.Net.Facet/Range/DoubleRangeFacetCounts.cs
index b212387..22e3ab0 100644
--- a/src/Lucene.Net.Facet/Range/DoubleRangeFacetCounts.cs
+++ b/src/Lucene.Net.Facet/Range/DoubleRangeFacetCounts.cs
@@ -46,7 +46,7 @@ namespace Lucene.Net.Facet.Range
     ///  <see cref="DoubleFieldSource"/> (this is the default used when you
    ///  pass just the field name).
     /// 
-    ///  @lucene.experimental 
+    /// @lucene.experimental 
     /// </para>
     /// </summary>
     public class DoubleRangeFacetCounts : RangeFacetCounts
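As a hedged illustration of the class this comment documents: counting hits in labeled double ranges over an assumed numeric field "price", given a `FacetsCollector fc` that has already collected hits (a sketch, not part of this commit):

    Facets facets = new DoubleRangeFacetCounts("price", fc,
        new DoubleRange("cheap", 0.0, true, 10.0, false),       // [0, 10)
        new DoubleRange("midrange", 10.0, true, 100.0, false)); // [10, 100)
    FacetResult result = facets.GetTopChildren(10, "price");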
diff --git a/src/Lucene.Net.Facet/Range/LongRangeFacetCounts.cs b/src/Lucene.Net.Facet/Range/LongRangeFacetCounts.cs
index 7cebc9d..fd5fddb 100644
--- a/src/Lucene.Net.Facet/Range/LongRangeFacetCounts.cs
+++ b/src/Lucene.Net.Facet/Range/LongRangeFacetCounts.cs
@@ -40,7 +40,7 @@ namespace Lucene.Net.Facet.Range
     /// <para/>
     /// NOTE: This was LongRangeFacetCounts in Lucene
     /// 
-    ///  @lucene.experimental 
+    /// @lucene.experimental 
     /// </summary>
     public class Int64RangeFacetCounts : RangeFacetCounts
     {
diff --git a/src/Lucene.Net.Facet/Range/Range.cs b/src/Lucene.Net.Facet/Range/Range.cs
index 7e5df1d..6ccbef5 100644
--- a/src/Lucene.Net.Facet/Range/Range.cs
+++ b/src/Lucene.Net.Facet/Range/Range.cs
@@ -23,7 +23,7 @@
     /// <summary>
     /// Base class for a single labeled range.
     /// 
-    ///  @lucene.experimental 
+    /// @lucene.experimental 
     /// </summary>
     public abstract class Range
     {
diff --git a/src/Lucene.Net.Facet/Range/RangeFacetCounts.cs b/src/Lucene.Net.Facet/Range/RangeFacetCounts.cs
index e1cdea4..b10591c 100644
--- a/src/Lucene.Net.Facet/Range/RangeFacetCounts.cs
+++ b/src/Lucene.Net.Facet/Range/RangeFacetCounts.cs
@@ -24,7 +24,7 @@ namespace Lucene.Net.Facet.Range
     /// <summary>
     /// Base class for range faceting.
     /// 
-    ///  @lucene.experimental 
+    /// @lucene.experimental 
     /// </summary>
     public abstract class RangeFacetCounts : Facets
     {
diff --git a/src/Lucene.Net.Facet/SortedSet/package.md b/src/Lucene.Net.Facet/SortedSet/package.md
index ae01d92..09d48bd 100644
--- a/src/Lucene.Net.Facet/SortedSet/package.md
+++ b/src/Lucene.Net.Facet/SortedSet/package.md
@@ -15,4 +15,4 @@
  limitations under the License.
 -->
 
-Provides faceting capabilities over facets that were indexed with [](xref:Lucene.Net.Facet.Sortedset.SortedSetDocValuesFacetField).
\ No newline at end of file
+Provides faceting capabilities over facets that were indexed with <xref:Lucene.Net.Facet.Sortedset.SortedSetDocValuesFacetField>.
\ No newline at end of file
diff --git a/src/Lucene.Net.Facet/Taxonomy/TaxonomyFacetSumFloatAssociations.cs b/src/Lucene.Net.Facet/Taxonomy/TaxonomyFacetSumFloatAssociations.cs
index 2b907cb..c1301ae 100644
--- a/src/Lucene.Net.Facet/Taxonomy/TaxonomyFacetSumFloatAssociations.cs
+++ b/src/Lucene.Net.Facet/Taxonomy/TaxonomyFacetSumFloatAssociations.cs
@@ -32,7 +32,7 @@ namespace Lucene.Net.Facet.Taxonomy
     /// <para/>
     /// NOTE: This was TaxonomyFacetSumFloatAssociations in Lucene
     /// 
-    ///  @lucene.experimental 
+    /// @lucene.experimental 
     /// </summary>
     public class TaxonomyFacetSumSingleAssociations : SingleTaxonomyFacets
     {
diff --git a/src/Lucene.Net.Facet/Taxonomy/TaxonomyFacetSumIntAssociations.cs b/src/Lucene.Net.Facet/Taxonomy/TaxonomyFacetSumIntAssociations.cs
index 7702b25..6804818 100644
--- a/src/Lucene.Net.Facet/Taxonomy/TaxonomyFacetSumIntAssociations.cs
+++ b/src/Lucene.Net.Facet/Taxonomy/TaxonomyFacetSumIntAssociations.cs
@@ -31,7 +31,7 @@ namespace Lucene.Net.Facet.Taxonomy
     /// <para/>
     /// NOTE: This was TaxonomyFacetSumIntAssociations in Lucene
     /// 
-    ///  @lucene.experimental 
+    /// @lucene.experimental 
     /// </summary>
     public class TaxonomyFacetSumInt32Associations : Int32TaxonomyFacets
     {
diff --git a/src/Lucene.Net.Facet/Taxonomy/TaxonomyFacetSumValueSource.cs b/src/Lucene.Net.Facet/Taxonomy/TaxonomyFacetSumValueSource.cs
index 438931d..d362dfa 100644
--- a/src/Lucene.Net.Facet/Taxonomy/TaxonomyFacetSumValueSource.cs
+++ b/src/Lucene.Net.Facet/Taxonomy/TaxonomyFacetSumValueSource.cs
@@ -37,7 +37,7 @@ namespace Lucene.Net.Facet.Taxonomy
     /// Aggregates sum of values from <see cref="FunctionValues.DoubleVal(int)"/> and <see cref="FunctionValues.DoubleVal(int, double[])"/>, 
     /// for each facet label.
     /// 
-    ///  @lucene.experimental 
+    /// @lucene.experimental 
     /// </summary>
     public class TaxonomyFacetSumValueSource : SingleTaxonomyFacets
     {
diff --git a/src/Lucene.Net.Facet/package.md b/src/Lucene.Net.Facet/package.md
index e3caf3e..d017190 100644
--- a/src/Lucene.Net.Facet/package.md
+++ b/src/Lucene.Net.Facet/package.md
@@ -1,4 +1,9 @@
-<!--
+---
+uid: Lucene.Net.Facet
+summary: *content
+---
+
+<!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
@@ -17,8 +22,8 @@
 
 # faceted search
 
- This module provides multiple methods for computing facet counts and value aggregations: * Taxonomy-based methods rely on a separate taxonomy index to map hierarchical facet paths to global int ordinals for fast counting at search time; these methods can compute counts (([](xref:Lucene.Net.Facet.Taxonomy.FastTaxonomyFacetCounts), [](xref:Lucene.Net.Facet.Taxonomy.TaxonomyFacetCounts)) aggregate long or double values [](xref:Lucene.Net.Facet.Taxonomy.TaxonomyFacetSumIntAssociations), []( [...]
+ This module provides multiple methods for computing facet counts and value aggregations: * Taxonomy-based methods rely on a separate taxonomy index to map hierarchical facet paths to global int ordinals for fast counting at search time; these methods can compute counts (<xref:Lucene.Net.Facet.Taxonomy.FastTaxonomyFacetCounts>, <xref:Lucene.Net.Facet.Taxonomy.TaxonomyFacetCounts>) or aggregate long or double values (<xref:Lucene.Net.Facet.Taxonomy.TaxonomyFacetSumIntAssociations>, <xref:Luc [...]
 
- At search time you first run your search, but pass a [](xref:Lucene.Net.Facet.FacetsCollector) to gather all hits (and optionally, scores for each hit). Then, instantiate whichever facet methods you'd like to use to compute aggregates. Finally, all methods implement a common [](xref:Lucene.Net.Facet.Facets) base API that you use to obtain specific facet counts. 
+ At search time you first run your search, but pass a <xref:Lucene.Net.Facet.FacetsCollector> to gather all hits (and optionally, scores for each hit). Then, instantiate whichever facet methods you'd like to use to compute aggregates. Finally, all methods implement a common <xref:Lucene.Net.Facet.Facets> base API that you use to obtain specific facet counts. 
 
- The various [](xref:Lucene.Net.Facet.FacetsCollector.Search) utility methods are useful for doing an "ordinary" search (sorting by score, or by a specified Sort) but also collecting into a [](xref:Lucene.Net.Facet.FacetsCollector) for subsequent faceting. 
\ No newline at end of file
+ The various [#search](xref:Lucene.Net.Facet.FacetsCollector) utility methods are useful for doing an "ordinary" search (sorting by score, or by a specified Sort) but also collecting into a <xref:Lucene.Net.Facet.FacetsCollector> for subsequent faceting. 
\ No newline at end of file
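A hedged end-to-end sketch of the search-then-facet flow described in that paragraph; `searcher`, `query`, `taxoReader`, and `config` (a `FacetsConfig`) are assumed to exist, and "Author" is a sample dimension name:

    FacetsCollector fc = new FacetsCollector();
    FacetsCollector.Search(searcher, query, 10, fc); // ordinary top-10 search, hits collected

    // Any Facets implementation can now aggregate over the collected hits.
    Facets facets = new FastTaxonomyFacetCounts(taxoReader, config, fc);
    FacetResult authors = facets.GetTopChildren(10, "Author");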
diff --git a/src/Lucene.Net.Grouping/Function/package.md b/src/Lucene.Net.Grouping/Function/package.md
index f6cb8a3..1b5043c 100644
--- a/src/Lucene.Net.Grouping/Function/package.md
+++ b/src/Lucene.Net.Grouping/Function/package.md
@@ -15,4 +15,4 @@
  limitations under the License.
 -->
 
-Support for grouping by [](xref:Lucene.Net.Queries.Function.ValueSource).
\ No newline at end of file
+Support for grouping by <xref:Lucene.Net.Queries.Function.ValueSource>.
\ No newline at end of file
diff --git a/src/Lucene.Net.Grouping/Term/package.md b/src/Lucene.Net.Grouping/Term/package.md
index f7dbcef..3008b51 100644
--- a/src/Lucene.Net.Grouping/Term/package.md
+++ b/src/Lucene.Net.Grouping/Term/package.md
@@ -15,4 +15,4 @@
  limitations under the License.
 -->
 
-Support for grouping by indexed terms via [](xref:Lucene.Net.Search.FieldCache).
\ No newline at end of file
+Support for grouping by indexed terms via <xref:Lucene.Net.Search.FieldCache>.
\ No newline at end of file
diff --git a/src/Lucene.Net.Grouping/package.md b/src/Lucene.Net.Grouping/package.md
index b5668ef..4e85c1e 100644
--- a/src/Lucene.Net.Grouping/package.md
+++ b/src/Lucene.Net.Grouping/package.md
@@ -1,4 +1,9 @@
-<!--
+---
+uid: Lucene.Net.Grouping
+summary: *content
+---
+
+<!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
@@ -50,15 +55,15 @@ Grouping requires a number of inputs:
      `withinGroupOffset`: which "slice" of top
       documents you want to retrieve from each group.
 
-The implementation is two-pass: the first pass ([](xref:Lucene.Net.Search.Grouping.Term.TermFirstPassGroupingCollector)) gathers the top groups, and the second pass ([](xref:Lucene.Net.Search.Grouping.Term.TermSecondPassGroupingCollector)) gathers documents within those groups. If the search is costly to run you may want to use the [](xref:Lucene.Net.Search.CachingCollector) class, which caches hits and can (quickly) replay them for the second pass. This way you only run the query once,  [...]
+The implementation is two-pass: the first pass (<xref:Lucene.Net.Search.Grouping.Term.TermFirstPassGroupingCollector>) gathers the top groups, and the second pass (<xref:Lucene.Net.Search.Grouping.Term.TermSecondPassGroupingCollector>) gathers documents within those groups. If the search is costly to run you may want to use the <xref:Lucene.Net.Search.CachingCollector> class, which caches hits and can (quickly) replay them for the second pass. This way you only run the query once, but yo [...]
 
 This module abstracts away what defines a group and how it is collected. All grouping collectors are abstract and currently have term-based implementations. One can implement collectors that, for example, group on multiple fields. 
 
 Known limitations:
 
 *   For the two-pass grouping search, the group field must be a
-    single-valued indexed field (or indexed as a [](xref:Lucene.Net.Documents.SortedDocValuesField)).
-    [](xref:Lucene.Net.Search.FieldCache) is used to load the [](xref:Lucene.Net.Index.SortedDocValues) for this field.
+    single-valued indexed field (or indexed as a <xref:Lucene.Net.Documents.SortedDocValuesField>).
+    <xref:Lucene.Net.Search.FieldCache> is used to load the <xref:Lucene.Net.Index.SortedDocValues> for this field.
*   Although Solr supports grouping by function and this module has an abstraction of what a group is, there are currently only
    implementations for grouping based on terms.
*   Sharding is not directly supported, though it is not too
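A rough sketch of the two-pass flow described above; the collector names mirror the Java API referenced in this package doc, and the exact Lucene.NET argument shapes are assumptions:

    // First pass: find the top 10 groups by the "author" field.
    var firstPass = new TermFirstPassGroupingCollector("author", groupSort, 10);
    searcher.Search(query, firstPass);
    var topGroups = firstPass.GetTopGroups(0, true); // groupOffset, fillFields

    // Second pass: collect the top documents within each of those groups.
    var secondPass = new TermSecondPassGroupingCollector(
        "author", topGroups, groupSort, withinGroupSort,
        5,     // maxDocsPerGroup
        true,  // getScores
        true,  // getMaxScores
        true); // fillSortFields
    searcher.Search(query, secondPass);
    var results = secondPass.GetTopGroups(0); // withinGroupOffset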
diff --git a/src/Lucene.Net.Highlighter/VectorHighlight/package.md b/src/Lucene.Net.Highlighter/VectorHighlight/package.md
index 75f50d9..5dbfe39 100644
--- a/src/Lucene.Net.Highlighter/VectorHighlight/package.md
+++ b/src/Lucene.Net.Highlighter/VectorHighlight/package.md
@@ -61,7 +61,7 @@ For your convenience, here is the offsets and positions info of the sample text.
 
 ### Step 1.
 
-In Step 1, Fast Vector Highlighter generates [](xref:Lucene.Net.Search.VectorHighlight.FieldQuery.QueryPhraseMap) from the user query. `QueryPhraseMap` consists of the following members:
+In Step 1, Fast Vector Highlighter generates <xref:Lucene.Net.Search.VectorHighlight.FieldQuery.QueryPhraseMap> from the user query. `QueryPhraseMap` consists of the following members:
 
     public class QueryPhraseMap {
       boolean terminal;
@@ -85,7 +85,7 @@ From the sample user query, the following `QueryPhraseMap` will be generated:
 
 ### Step 2.
 
-In Step 2, Fast Vector Highlighter generates [](xref:Lucene.Net.Search.VectorHighlight.FieldTermStack). Fast Vector Highlighter uses term vector data (must be stored [](xref:Lucene.Net.Documents.FieldType.SetStoreTermVectorOffsets(boolean)) and [](xref:Lucene.Net.Documents.FieldType.SetStoreTermVectorPositions(boolean))) to generate it. `FieldTermStack` keeps the terms in the user query. Therefore, in this sample case, Fast Vector Highlighter generates the following `FieldTermStack`:
+In Step 2, Fast Vector Highlighter generates <xref:Lucene.Net.Search.VectorHighlight.FieldTermStack>. Fast Vector Highlighter uses term vector data (must be stored [#setStoreTermVectorOffsets(boolean)](xref:Lucene.Net.Documents.FieldType) and [#setStoreTermVectorPositions(boolean)](xref:Lucene.Net.Documents.FieldType)) to generate it. `FieldTermStack` keeps the terms in the user query. Therefore, in this sample case, Fast Vector Highlighter generates the following `FieldTermStack`:
 
        FieldTermStack
     +------------------+
@@ -99,7 +99,7 @@ In Step 2, Fast Vector Highlighter generates [](xref:Lucene.Net.Search.VectorHig
 
 ### Step 3.
 
-In Step 3, Fast Vector Highlighter generates [](xref:Lucene.Net.Search.VectorHighlight.FieldPhraseList) by reference to `QueryPhraseMap` and `FieldTermStack`.
+In Step 3, Fast Vector Highlighter generates <xref:Lucene.Net.Search.VectorHighlight.FieldPhraseList> by reference to `QueryPhraseMap` and `FieldTermStack`.
 
        FieldPhraseList
     +----------------+-----------------+---+
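For callers, the three steps above happen internally; a typical hedged usage sketch (the field name "f" and fragment sizes are assumptions, and `reader`, `query`, and `docId` are supplied by the caller) looks like:

    var highlighter = new FastVectorHighlighter();
    FieldQuery fieldQuery = highlighter.GetFieldQuery(query);
    string[] fragments = highlighter.GetBestFragments(
        fieldQuery, reader, docId, "f",
        100, // fragCharSize
        3);  // maxNumFragments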
diff --git a/src/Lucene.Net.Highlighter/overview.md b/src/Lucene.Net.Highlighter/overview.md
index 29046cb..294d1f9 100644
--- a/src/Lucene.Net.Highlighter/overview.md
+++ b/src/Lucene.Net.Highlighter/overview.md
@@ -1,4 +1,9 @@
-<!--
+---
+uid: Lucene.Net.Highlighter
+summary: *content
+---
+
+<!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
@@ -15,9 +20,7 @@
  limitations under the License.
 -->
 
-    <title>
-      Highlighter
-    </title>
+    
 
   The highlight package contains classes to provide "keyword in context" features
   typically used to highlight search terms in the text of results pages.
\ No newline at end of file
diff --git a/src/Lucene.Net.Join/package.md b/src/Lucene.Net.Join/package.md
index c21b566..f79806d 100644
--- a/src/Lucene.Net.Join/package.md
+++ b/src/Lucene.Net.Join/package.md
@@ -1,4 +1,9 @@
-<!--
+---
+uid: Lucene.Net.Join
+summary: *content
+---
+
+<!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
@@ -19,15 +24,15 @@ This modules support index-time and query-time joins.
 
 ## Index-time joins
 
-The index-time joining support joins while searching, where joined documents are indexed as a single document block using [](xref:Lucene.Net.Index.IndexWriter.AddDocuments IndexWriter.AddDocuments()). This is useful for any normalized content (XML documents or database tables). In database terms, all rows for all joined tables matching a single row of the primary table must be indexed as a single document block, with the parent document being last in the group.
+Index-time joining supports joins while searching, where joined documents are indexed as a single document block using [IndexWriter.addDocuments](xref:Lucene.Net.Index.IndexWriter#methods). This is useful for any normalized content (XML documents or database tables). In database terms, all rows for all joined tables matching a single row of the primary table must be indexed as a single document block, with the parent document being last in the group.
 
-When you index in this way, the documents in your index are divided into parent documents (the last document of each block) and child documents (all others). You provide a [](xref:Lucene.Net.Search.Filter) that identifies the parent documents, as Lucene does not currently record any information about doc blocks.
+When you index in this way, the documents in your index are divided into parent documents (the last document of each block) and child documents (all others). You provide a <xref:Lucene.Net.Search.Filter> that identifies the parent documents, as Lucene does not currently record any information about doc blocks.
 
-At search time, use [](xref:Lucene.Net.Search.Join.ToParentBlockJoinQuery) to remap/join matches from any child [](xref:Lucene.Net.Search.Query) (ie, a query that matches only child documents) up to the parent document space. The resulting query can then be used as a clause in any query that matches parent.
+At search time, use <xref:Lucene.Net.Search.Join.ToParentBlockJoinQuery> to remap/join matches from any child <xref:Lucene.Net.Search.Query> (ie, a query that matches only child documents) up to the parent document space. The resulting query can then be used as a clause in any query that matches parent.
 
-If you only care about the parent documents matching the query, you can use any collector to collect the parent hits, but if you'd also like to see which child documents match for each parent document, use the [](xref:Lucene.Net.Search.Join.ToParentBlockJoinCollector) to collect the hits. Once the search is done, you retrieve a [](xref:Lucene.Net.Search.Grouping.TopGroups) instance from the [](xref:Lucene.Net.Search.Join.ToParentBlockJoinCollector.GetTopGroups ToParentBlockJoinCollector. [...]
+If you only care about the parent documents matching the query, you can use any collector to collect the parent hits, but if you'd also like to see which child documents match for each parent document, use the <xref:Lucene.Net.Search.Join.ToParentBlockJoinCollector> to collect the hits. Once the search is done, you retrieve a <xref:Lucene.Net.Search.Grouping.TopGroups> instance from the [ToParentBlockJoinCollector.getTopGroups](xref:Lucene.Net.Search.Join.ToParentBlockJoinCollector#metho [...]
 
-To map/join in the opposite direction, use [](xref:Lucene.Net.Search.Join.ToChildBlockJoinQuery).  This wraps
+To map/join in the opposite direction, use <xref:Lucene.Net.Search.Join.ToChildBlockJoinQuery>.  This wraps
   any query matching parent documents, creating the joined query
   matching only child documents.
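A hedged sketch of the index-time join pieces just described; the marker field "type" and the term values are assumptions, not API requirements:

    // Parents were indexed last in each block, tagged with a marker field.
    Filter parentsFilter = new FixedBitSetCachingWrapperFilter(
        new QueryWrapperFilter(new TermQuery(new Term("type", "parent"))));

    // Match child documents, then join the matches up to their parents.
    Query childQuery = new TermQuery(new Term("color", "red"));
    Query joined = new ToParentBlockJoinQuery(childQuery, parentsFilter, ScoreMode.Avg);
    TopDocs hits = searcher.Search(joined, 10); // hits are parent documents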
 
@@ -41,11 +46,11 @@ Query time joining has the following input:
   `fromQuery`:  The query executed to collect the from terms. This is usually the user specified query.
   `multipleValuesPerDocument`:  Whether the fromField contains more than one value per document
   `scoreMode`:  Defines how scores are translated to the other join side. If you don't care about scoring
-  use [](xref:Lucene.Net.Search.Join.ScoreMode.None) mode. This will disable scoring and is therefore more
+  use [#None](xref:Lucene.Net.Search.Join.ScoreMode) mode. This will disable scoring and is therefore more
   efficient (requires less memory and is faster).
   `toField`: The to field to join to
 
- Basically the query-time joining is accessible from one static method. The user of this method supplies the method with the described input and a `IndexSearcher` where the from terms need to be collected from. The returned query can be executed with the same `IndexSearcher`, but also with another `IndexSearcher`. Example usage of the [](xref:Lucene.Net.Search.Join.JoinUtil.CreateJoinQuery(String, boolean, String, Lucene.Net.Search.Query, Lucene.Net.Search.IndexSearcher, Lucene.Net.Searc [...]
+ Basically the query-time joining is accessible from one static method. The user of this method supplies the method with the described input and a `IndexSearcher` where the from terms need to be collected from. The returned query can be executed with the same `IndexSearcher`, but also with another `IndexSearcher`. Example usage of the [JoinUtil.createJoinQuery](xref:Lucene.Net.Search.Join.JoinUtil#methods) : 
 
       String fromField = "from"; // Name of the from field
      boolean multipleValuesPerDocument = false; // Set to true only when your fromField has multiple values per document in your index
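A compact C# rendering of the same call, hedged against the exact overload; the Java-style listing above remains this package doc's own example:

    Query joinQuery = JoinUtil.CreateJoinQuery(
        "from",          // fromField
        false,           // multipleValuesPerDocument
        "to",            // toField
        fromQuery,       // usually the user-specified query
        searcher,        // IndexSearcher to collect the "from" terms with
        ScoreMode.None); // cheapest mode when scores don't matter
    TopDocs hits = searcher.Search(joinQuery, 10);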
diff --git a/src/Lucene.Net.Memory/overview.md b/src/Lucene.Net.Memory/overview.md
index 8c82483..d317437 100644
--- a/src/Lucene.Net.Memory/overview.md
+++ b/src/Lucene.Net.Memory/overview.md
@@ -15,8 +15,6 @@
  limitations under the License.
 -->
 
-    <title>
-      memory
-    </title>
+    
 
   memory
\ No newline at end of file
diff --git a/src/Lucene.Net.Memory/package.md b/src/Lucene.Net.Memory/package.md
index f0b262c..57a0b84 100644
--- a/src/Lucene.Net.Memory/package.md
+++ b/src/Lucene.Net.Memory/package.md
@@ -1,4 +1,9 @@
-
+---
+uid: Lucene.Net.Index.Memory
+summary: *content
+---
+
+
 <!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
@@ -15,8 +20,7 @@
  See the License for the specific language governing permissions and
  limitations under the License.
 -->
-<HTML>
-<BODY>
+
+
 High-performance single-document main memory Apache Lucene fulltext search index.
-</BODY>
-</HTML>
\ No newline at end of file
+
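Since this page is terse, a minimal hedged sketch of MemoryIndex (the class this package provides) may help; `analyzer` and `query` are assumed to exist:

    var index = new MemoryIndex();
    index.AddField("content", "Readings about Salmons and other select Alaska fishing Manuals", analyzer);
    float score = index.Search(query); // > 0 means the single in-memory document matched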
diff --git a/src/Lucene.Net.Misc/Index/Sorter/package.md b/src/Lucene.Net.Misc/Index/Sorter/package.md
index b489a6e..4d6056c 100644
--- a/src/Lucene.Net.Misc/Index/Sorter/package.md
+++ b/src/Lucene.Net.Misc/Index/Sorter/package.md
@@ -22,10 +22,10 @@ reverse the order of the documents (by using SortField.Type.DOC in reverse).
 Multi-level sorts can be specified the same way you would when searching, by
 building Sort from multiple SortFields.
 
-[](xref:Lucene.Net.Index.Sorter.SortingMergePolicy) can be used to
+<xref:Lucene.Net.Index.Sorter.SortingMergePolicy> can be used to
 make Lucene sort segments before merging them. This will ensure that every
 segment resulting from a merge will be sorted according to the provided
-[](xref:Lucene.Net.Search.Sort). This however makes merging and
+<xref:Lucene.Net.Search.Sort>. This however makes merging and
 thus indexing slower.
 
 Sorted segments allow for early query termination when the sort order
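A hedged configuration sketch of the SortingMergePolicy wiring described above; the field name, the wrapped TieredMergePolicy, and the property-style setter are assumptions of this port:

    Sort sort = new Sort(new SortField("timestamp", SortFieldType.INT64));
    var sortingPolicy = new SortingMergePolicy(new TieredMergePolicy(), sort);

    var conf = new IndexWriterConfig(LuceneVersion.LUCENE_48, analyzer);
    conf.MergePolicy = sortingPolicy; // merged segments now come out sorted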
diff --git a/src/Lucene.Net.Misc/overview.md b/src/Lucene.Net.Misc/overview.md
index c47d765..f937f63 100644
--- a/src/Lucene.Net.Misc/overview.md
+++ b/src/Lucene.Net.Misc/overview.md
@@ -1,4 +1,9 @@
-<!--
+---
+uid: Lucene.Net.Misc
+summary: *content
+---
+
+<!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
@@ -15,9 +20,7 @@
  limitations under the License.
 -->
 
-    <title>
-      miscellaneous
-    </title>
+    
 
 ## Misc Tools
 
@@ -29,7 +32,7 @@ changing norms, finding high freq terms, and others.
 **NOTE**: This uses C++ sources (accessible via JNI), which you'll
 have to compile on your platform.
 
-[](xref:Lucene.Net.Store.NativeUnixDirectory) is a Directory implementation that bypasses the
+<xref:Lucene.Net.Store.NativeUnixDirectory> is a Directory implementation that bypasses the
 OS's buffer cache (using direct IO) for any IndexInput and IndexOutput
 used during merging of segments larger than a specified size (default
 10 MB).  This avoids evicting hot pages that are still in-use for
diff --git a/src/Lucene.Net.Queries/overview.md b/src/Lucene.Net.Queries/overview.md
index ba3f288..bf3d67e 100644
--- a/src/Lucene.Net.Queries/overview.md
+++ b/src/Lucene.Net.Queries/overview.md
@@ -1,4 +1,9 @@
-<!--
+---
+uid: Lucene.Net.Queries
+summary: *content
+---
+
+<!--
   Licensed to the Apache Software Foundation (ASF) under one or more
   contributor license agreements.  See the NOTICE file distributed with
   this work for additional information regarding copyright ownership.
@@ -15,8 +20,6 @@
   limitations under the License.
   -->
 
-    <title>
-      Queries
-    </title>
+    
 
   Queries
\ No newline at end of file
diff --git a/src/Lucene.Net.QueryParser/Classic/package.md b/src/Lucene.Net.QueryParser/Classic/package.md
index ee90202..99f7d70 100644
--- a/src/Lucene.Net.QueryParser/Classic/package.md
+++ b/src/Lucene.Net.QueryParser/Classic/package.md
@@ -1,4 +1,9 @@
-
+---
+uid: Lucene.Net.QueryParsers.Classic
+summary: *content
+---
+
+
 <!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
@@ -25,7 +30,7 @@ Sorry.
 Note that because JavaCC defines a class named <tt>Token</tt>, <tt>org.apache.lucene.analysis.Token</tt>
 must always be fully qualified in source code in this package.
 
-**NOTE**: [](xref:Lucene.Net.QueryParsers.Flexible.Standard) has an alternative queryparser that matches the syntax of this one, but is more modular,
+**NOTE**: <xref:Lucene.Net.QueryParsers.Flexible.Standard> has an alternative queryparser that matches the syntax of this one, but is more modular,
 enabling substantial customization to how a query is created.
 
 ## Query Parser Syntax
@@ -156,7 +161,7 @@ Note: You cannot use a * or ? symbol as the first character of a search.
 
 ### Regular Expression Searches
 
-Lucene supports regular expression searches matching a pattern between forward slashes "/". The syntax may change across releases, but the current supported syntax is documented in the [](xref:Lucene.Net.Util.Automaton.RegExp RegExp) class. For example to find documents containing "moat" or "boat": 
+Lucene supports regular expression searches matching a pattern between forward slashes "/". The syntax may change across releases, but the current supported syntax is documented in the [RegExp](xref:Lucene.Net.Util.Automaton.RegExp) class. For example to find documents containing "moat" or "boat": 
 
 /[mb]oat/
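As a hedged sketch, the classic parser turns this slash syntax into a regular-expression query; the field name "body" is an assumption:

    var parser = new QueryParser(LuceneVersion.LUCENE_48, "body", analyzer);
    Query q = parser.Parse("/[mb]oat/"); // matches documents containing "moat" or "boat"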
 
diff --git a/src/Lucene.Net.QueryParser/Flexible/Core/Builders/package.md b/src/Lucene.Net.QueryParser/Flexible/Core/Builders/package.md
index e61d138..718bc65 100644
--- a/src/Lucene.Net.QueryParser/Flexible/Core/Builders/package.md
+++ b/src/Lucene.Net.QueryParser/Flexible/Core/Builders/package.md
@@ -20,4 +20,4 @@ Necessary classes to implement query builders.
 
 ## Query Parser Builders
 
- The package <tt>org.apache.lucene.queryParser.builders</tt> contains the interface that builders must implement, it also contain a utility [](xref:Lucene.Net.QueryParsers.Flexible.Core.Builders.QueryTreeBuilder), which walks the tree and call the Builder for each node in the tree. Builder normally convert QueryNode Object into a Lucene Query Object, and normally it's a one-to-one mapping class. But other builders implementations can by written to convert QueryNode objects to other non l [...]
\ No newline at end of file
+ The package <tt>org.apache.lucene.queryParser.builders</tt> contains the interface that builders must implement; it also contains a utility <xref:Lucene.Net.QueryParsers.Flexible.Core.Builders.QueryTreeBuilder>, which walks the tree and calls the Builder for each node in the tree. A Builder normally converts a QueryNode object into a Lucene Query object, and normally it's a one-to-one mapping class. But other builder implementations can be written to convert QueryNode objects to other non luc [...]
\ No newline at end of file
diff --git a/src/Lucene.Net.QueryParser/Flexible/Core/Config/package.md b/src/Lucene.Net.QueryParser/Flexible/Core/Config/package.md
index 09b1859..ef21be5 100644
--- a/src/Lucene.Net.QueryParser/Flexible/Core/Config/package.md
+++ b/src/Lucene.Net.QueryParser/Flexible/Core/Config/package.md
@@ -22,6 +22,6 @@ Base classes used to configure the query processing.
 
 The package <tt>org.apache.lucene.queryparser.flexible.config</tt> contains the abstract query configuration handler class that all config handlers should extend. 
 
- See [](xref:Lucene.Net.QueryParsers.Flexible.Standard.Config.StandardQueryConfigHandler) for a reference implementation. 
+ See <xref:Lucene.Net.QueryParsers.Flexible.Standard.Config.StandardQueryConfigHandler> for a reference implementation. 
 
- The [](xref:Lucene.Net.QueryParsers.Flexible.Core.Config.QueryConfigHandler) and [](xref:Lucene.Net.QueryParsers.Flexible.Core.Config.FieldConfig) are used in the processors to access config information in a flexible and independent way. See [](xref:Lucene.Net.QueryParsers.Flexible.Standard.Processors.TermRangeQueryNodeProcessor) for a reference implementation. 
\ No newline at end of file
+ The <xref:Lucene.Net.QueryParsers.Flexible.Core.Config.QueryConfigHandler> and <xref:Lucene.Net.QueryParsers.Flexible.Core.Config.FieldConfig> are used in the processors to access config information in a flexible and independent way. See <xref:Lucene.Net.QueryParsers.Flexible.Standard.Processors.TermRangeQueryNodeProcessor> for a reference implementation. 
\ No newline at end of file
diff --git a/src/Lucene.Net.QueryParser/Flexible/Core/Nodes/package.md b/src/Lucene.Net.QueryParser/Flexible/Core/Nodes/package.md
index ba4712e..51ac092 100644
--- a/src/Lucene.Net.QueryParser/Flexible/Core/Nodes/package.md
+++ b/src/Lucene.Net.QueryParser/Flexible/Core/Nodes/package.md
@@ -20,11 +20,11 @@ Query nodes commonly used by query parser implementations.
 
 ## Query Nodes
 
- The package <tt>org.apache.lucene.queryParser.nodes</tt> contains all the basic query nodes. The interface that represents a query node is [](xref:Lucene.Net.QueryParsers.Flexible.Core.Nodes.QueryNode). 
+ The package <tt>org.apache.lucene.queryParser.nodes</tt> contains all the basic query nodes. The interface that represents a query node is <xref:Lucene.Net.QueryParsers.Flexible.Core.Nodes.QueryNode>. 
 
- [](xref:Lucene.Net.QueryParsers.Flexible.Core.Nodes.QueryNode)s are used by the text parser to create a syntax tree. These nodes are designed to be used by UI or other text parsers. The default Lucene text parser is [](xref:Lucene.Net.QueryParsers.Flexible.Standard.Parser.StandardSyntaxParser), it implements Lucene's standard syntax. 
+ <xref:Lucene.Net.QueryParsers.Flexible.Core.Nodes.QueryNode>s are used by the text parser to create a syntax tree. These nodes are designed to be used by UI or other text parsers. The default Lucene text parser is <xref:Lucene.Net.QueryParsers.Flexible.Standard.Parser.StandardSyntaxParser>, which implements Lucene's standard syntax. 
 
- [](xref:Lucene.Net.QueryParsers.Flexible.Core.Nodes.QueryNode) interface should be implemented by all query nodes, the class [](xref:Lucene.Net.QueryParsers.Flexible.Core.Nodes.QueryNodeImpl) implements [](xref:Lucene.Net.QueryParsers.Flexible.Core.Nodes.QueryNode) and is extended by all current query node implementations. 
+ The <xref:Lucene.Net.QueryParsers.Flexible.Core.Nodes.QueryNode> interface should be implemented by all query nodes; the class <xref:Lucene.Net.QueryParsers.Flexible.Core.Nodes.QueryNodeImpl> implements <xref:Lucene.Net.QueryParsers.Flexible.Core.Nodes.QueryNode> and is extended by all current query node implementations. 
 
 A query node tree can be printed to a stream, and it generates a pseudo-XML representation with all the nodes. 
 
@@ -32,6 +32,6 @@ Query nodes commonly used by query parser implementations.
 
  Grouping nodes: * AndQueryNode - used for AND operator * AnyQueryNode - used for ANY operator * OrQueryNode - used for OR operator * BooleanQueryNode - used when no operator is specified * ModifierQueryNode - used for modifier operator * GroupQueryNode - used for parenthesis * BoostQueryNode - used for boost operator * SlopQueryNode - phrase slop * FuzzyQueryNode - fuzzy node * TermRangeQueryNode - used for parametric field:[low_value TO high_value] * ProximityQueryNode - used for proxi [...]
 
- Leaf Nodes: * FieldQueryNode - field/value node * NumericQueryNode - used for numeric search * PathQueryNode - [](xref:Lucene.Net.QueryParsers.Flexible.Core.Nodes.QueryNode) object used with path-like queries * OpaqueQueryNode - Used as for part of the query that can be parsed by other parsers. schema/value * PrefixWildcardQueryNode - non-phrase wildcard query * QuotedFieldQUeryNode - regular phrase node * WildcardQueryNode - non-phrase wildcard query 
+ Leaf Nodes: * FieldQueryNode - field/value node * NumericQueryNode - used for numeric search * PathQueryNode - <xref:Lucene.Net.QueryParsers.Flexible.Core.Nodes.QueryNode> object used with path-like queries * OpaqueQueryNode - used for a part of the query that can be parsed by other parsers. schema/value * PrefixWildcardQueryNode - non-phrase wildcard query * QuotedFieldQueryNode - regular phrase node * WildcardQueryNode - non-phrase wildcard query 
 
  Utility Nodes: * DeletedQueryNode - used by processors on optimizations * MatchAllDocsQueryNode - used by processors on optimizations * MatchNoDocsQueryNode - used by processors on optimizations * NoTokenFoundQueryNode - used by tokenizers/lemmatizers/analyzers 
\ No newline at end of file
diff --git a/src/Lucene.Net.QueryParser/Flexible/Core/Processors/package.md b/src/Lucene.Net.QueryParser/Flexible/Core/Processors/package.md
index f2c74e8..6259637 100644
--- a/src/Lucene.Net.QueryParser/Flexible/Core/Processors/package.md
+++ b/src/Lucene.Net.QueryParser/Flexible/Core/Processors/package.md
@@ -22,12 +22,12 @@ Interfaces and implementations used by query node processors
 
  The package <tt>org.apache.lucene.queryParser.processors</tt> contains interfaces that should be implemented by every query node processor. 
 
- The interface that every query node processor should implement is [](xref:Lucene.Net.QueryParsers.Flexible.Core.Processors.QueryNodeProcessor). 
+ The interface that every query node processor should implement is <xref:Lucene.Net.QueryParsers.Flexible.Core.Processors.QueryNodeProcessor>. 
 
- A query node processor should be used to process a [](xref:Lucene.Net.QueryParsers.Flexible.Core.Nodes.QueryNode) tree. [](xref:Lucene.Net.QueryParsers.Flexible.Core.Nodes.QueryNode) trees can be programmatically created or generated by a text parser. See [](xref:Lucene.Net.QueryParsers.Flexible.Core.Parser) for more details about text parsers. 
+ A query node processor should be used to process a <xref:Lucene.Net.QueryParsers.Flexible.Core.Nodes.QueryNode> tree. <xref:Lucene.Net.QueryParsers.Flexible.Core.Nodes.QueryNode> trees can be programmatically created or generated by a text parser. See <xref:Lucene.Net.QueryParsers.Flexible.Core.Parser> for more details about text parsers. 
 
- A pipeline of processors can be assembled using [](xref:Lucene.Net.QueryParsers.Flexible.Core.Processors.QueryNodeProcessorPipeline). 
+ A pipeline of processors can be assembled using <xref:Lucene.Net.QueryParsers.Flexible.Core.Processors.QueryNodeProcessorPipeline>. 
 
- Implementors may want to extend [](xref:Lucene.Net.QueryParsers.Flexible.Core.Processors.QueryNodeProcessorImpl), which simplifies the implementation, because it walks automatically the [](xref:Lucene.Net.QueryParsers.Flexible.Core.Nodes.QueryNode). See [](xref:Lucene.Net.QueryParsers.Flexible.Core.Processors.QueryNodeProcessorImpl) for more details. 
\ No newline at end of file
+ Implementors may want to extend <xref:Lucene.Net.QueryParsers.Flexible.Core.Processors.QueryNodeProcessorImpl>, which simplifies the implementation because it automatically walks the <xref:Lucene.Net.QueryParsers.Flexible.Core.Nodes.QueryNode> tree. See <xref:Lucene.Net.QueryParsers.Flexible.Core.Processors.QueryNodeProcessorImpl> for more details. 
\ No newline at end of file
diff --git a/src/Lucene.Net.QueryParser/Flexible/Core/package.md b/src/Lucene.Net.QueryParser/Flexible/Core/package.md
index e2972ce..cd95505 100644
--- a/src/Lucene.Net.QueryParser/Flexible/Core/package.md
+++ b/src/Lucene.Net.QueryParser/Flexible/Core/package.md
@@ -26,12 +26,12 @@ Core classes of the flexible query parser framework.
 
 ### First Phase: Text Parsing
 
- The text parsing phase is performed by a text parser, which implements [](xref:Lucene.Net.QueryParsers.Flexible.Core.Parser.SyntaxParser) interface. A text parser is responsible to get a query string and convert it to a [](xref:Lucene.Net.QueryParsers.Flexible.Core.Nodes.QueryNode) tree, which is an object structure that represents the elements defined in the query string. 
+ The text parsing phase is performed by a text parser, which implements the <xref:Lucene.Net.QueryParsers.Flexible.Core.Parser.SyntaxParser> interface. A text parser is responsible for taking a query string and converting it to a <xref:Lucene.Net.QueryParsers.Flexible.Core.Nodes.QueryNode> tree, which is an object structure that represents the elements defined in the query string. 
 
 ### Second (optional) Phase: Query Processing
 
- The query processing phase is performed by a query processor, which implements [](xref:Lucene.Net.QueryParsers.Flexible.Core.Processors.QueryNodeProcessor). A query processor is responsible to perform any processing on a [](xref:Lucene.Net.QueryParsers.Flexible.Core.Nodes.QueryNode) tree. This phase is optional and is used only if an extra processing, validation, query expansion, etc needs to be performed in a [](xref:Lucene.Net.QueryParsers.Flexible.Core.Nodes.QueryNode) tree. The [](x [...]
+ The query processing phase is performed by a query processor, which implements <xref:Lucene.Net.QueryParsers.Flexible.Core.Processors.QueryNodeProcessor>. A query processor is responsible for performing any processing on a <xref:Lucene.Net.QueryParsers.Flexible.Core.Nodes.QueryNode> tree. This phase is optional and is used only if extra processing, validation, query expansion, etc. needs to be performed on a <xref:Lucene.Net.QueryParsers.Flexible.Core.Nodes.QueryNode> tree. The <xref:Luce [...]
 
 ### Third Phase: Query Building
 
- The query building phase is performed by a query builder, which implements [](xref:Lucene.Net.QueryParsers.Flexible.Core.Builders.QueryBuilder). A query builder is responsible to convert a [](xref:Lucene.Net.QueryParsers.Flexible.Core.Nodes.QueryNode) tree into an arbitrary object, which is usually used to be executed against a search index. 
\ No newline at end of file
+ The query building phase is performed by a query builder, which implements <xref:Lucene.Net.QueryParsers.Flexible.Core.Builders.QueryBuilder>. A query builder is responsible for converting a <xref:Lucene.Net.QueryParsers.Flexible.Core.Nodes.QueryNode> tree into an arbitrary object, which is usually executed against a search index. 
\ No newline at end of file
diff --git a/src/Lucene.Net.QueryParser/Flexible/Precedence/Processors/package.md b/src/Lucene.Net.QueryParser/Flexible/Precedence/Processors/package.md
index b9c02c9..a0d3b63 100644
--- a/src/Lucene.Net.QueryParser/Flexible/Precedence/Processors/package.md
+++ b/src/Lucene.Net.QueryParser/Flexible/Precedence/Processors/package.md
@@ -20,8 +20,8 @@ Processors used by Precedence Query Parser
 
 ## Lucene Precedence Query Parser Processors
 
- This package contains the 2 [](xref:Lucene.Net.QueryParsers.Flexible.Core.Processors.QueryNodeProcessor)s used by [](xref:Lucene.Net.QueryParsers.Flexible.Precedence.PrecedenceQueryParser). 
+ This package contains the 2 <xref:Lucene.Net.QueryParsers.Flexible.Core.Processors.QueryNodeProcessor>s used by <xref:Lucene.Net.QueryParsers.Flexible.Precedence.PrecedenceQueryParser>. 
 
- [](xref:Lucene.Net.QueryParsers.Flexible.Precedence.Processors.BooleanModifiersQueryNodeProcessor): this processor is used to apply [](xref:Lucene.Net.QueryParsers.Flexible.Core.Nodes.ModifierQueryNode)s on [](xref:Lucene.Net.QueryParsers.Flexible.Core.Nodes.BooleanQueryNode) children according to the boolean type or the default operator. 
+ <xref:Lucene.Net.QueryParsers.Flexible.Precedence.Processors.BooleanModifiersQueryNodeProcessor>: this processor is used to apply <xref:Lucene.Net.QueryParsers.Flexible.Core.Nodes.ModifierQueryNode>s on <xref:Lucene.Net.QueryParsers.Flexible.Core.Nodes.BooleanQueryNode> children according to the boolean type or the default operator. 
 
- [](xref:Lucene.Net.QueryParsers.Flexible.Precedence.Processors.PrecedenceQueryNodeProcessorPipeline): this processor pipeline is used by [](xref:Lucene.Net.QueryParsers.Flexible.Precedence.PrecedenceQueryParser). It extends [](xref:Lucene.Net.QueryParsers.Flexible.Standard.Processors.StandardQueryNodeProcessorPipeline) and rearrange the pipeline so the boolean precedence is processed correctly. Check [](xref:Lucene.Net.QueryParsers.Flexible.Precedence.Processors.PrecedenceQueryNodeProce [...]
\ No newline at end of file
+ <xref:Lucene.Net.QueryParsers.Flexible.Precedence.Processors.PrecedenceQueryNodeProcessorPipeline>: this processor pipeline is used by <xref:Lucene.Net.QueryParsers.Flexible.Precedence.PrecedenceQueryParser>. It extends <xref:Lucene.Net.QueryParsers.Flexible.Standard.Processors.StandardQueryNodeProcessorPipeline> and rearranges the pipeline so that boolean precedence is processed correctly. Check <xref:Lucene.Net.QueryParsers.Flexible.Precedence.Processors.PrecedenceQueryNodeProcessorPipe [...]
\ No newline at end of file
diff --git a/src/Lucene.Net.QueryParser/Flexible/Precedence/package.md b/src/Lucene.Net.QueryParser/Flexible/Precedence/package.md
index ebf4a0f..336cb6f 100644
--- a/src/Lucene.Net.QueryParser/Flexible/Precedence/package.md
+++ b/src/Lucene.Net.QueryParser/Flexible/Precedence/package.md
@@ -22,4 +22,4 @@ Precedence Query Parser Implementation
 
 The Precedence Query Parser extends the Standard Query Parser and enables boolean precedence. So, the query <a AND b OR c AND d> is parsed to <(+a +b) (+c +d)> instead of <+a +b +c +d>. 
 
- Check [](xref:Lucene.Net.QueryParsers.Flexible.Standard.StandardQueryParser) for more details about the supported syntax and query parser functionalities. 
\ No newline at end of file
+ Check <xref:Lucene.Net.QueryParsers.Flexible.Standard.StandardQueryParser> for more details about the supported syntax and query parser functionalities. 
\ No newline at end of file
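A hedged sketch of the precedence behavior described above; the analyzer choice and the `Parse(query, defaultField)` overload are assumptions:

    var parser = new PrecedenceQueryParser(new WhitespaceAnalyzer(LuceneVersion.LUCENE_48));
    Query q = parser.Parse("a AND b OR c AND d", "field");
    // Parsed as "(+a +b) (+c +d)" rather than "+a +b +c +d".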
diff --git a/src/Lucene.Net.QueryParser/Flexible/Standard/Builders/package.md b/src/Lucene.Net.QueryParser/Flexible/Standard/Builders/package.md
index 7bb2656..bd893fa 100644
--- a/src/Lucene.Net.QueryParser/Flexible/Standard/Builders/package.md
+++ b/src/Lucene.Net.QueryParser/Flexible/Standard/Builders/package.md
@@ -20,6 +20,6 @@ Standard Lucene Query Node Builders.
 
 ## Standard Lucene Query Node Builders
 
- The package org.apache.lucene.queryparser.flexible.standard.builders contains all the builders needed to build a Lucene Query object from a query node tree. These builders expect the query node tree was already processed by the [](xref:Lucene.Net.QueryParsers.Flexible.Standard.Processors.StandardQueryNodeProcessorPipeline). 
+ The package org.apache.lucene.queryparser.flexible.standard.builders contains all the builders needed to build a Lucene Query object from a query node tree. These builders expect that the query node tree has already been processed by the <xref:Lucene.Net.QueryParsers.Flexible.Standard.Processors.StandardQueryNodeProcessorPipeline>. 
 
- [](xref:Lucene.Net.QueryParsers.Flexible.Standard.Builders.StandardQueryTreeBuilder) is a builder that already contains a defined map that maps each QueryNode object with its respective builder. 
\ No newline at end of file
+ <xref:Lucene.Net.QueryParsers.Flexible.Standard.Builders.StandardQueryTreeBuilder> is a builder that already contains a defined map that maps each QueryNode object with its respective builder. 
\ No newline at end of file
diff --git a/src/Lucene.Net.QueryParser/Flexible/Standard/package.md b/src/Lucene.Net.QueryParser/Flexible/Standard/package.md
index e011c86..3484e17 100644
--- a/src/Lucene.Net.QueryParser/Flexible/Standard/package.md
+++ b/src/Lucene.Net.QueryParser/Flexible/Standard/package.md
@@ -24,4 +24,4 @@ Implementation of the {@linkplain org.apache.lucene.queryparser.classic Lucene c
 
  The classes contained in the package org.apache.lucene.queryParser.standard are used to reproduce the same behavior as the old query parser. 
 
- Check [](xref:Lucene.Net.QueryParsers.Flexible.Standard.StandardQueryParser) to quick start using the Lucene query parser. 
\ No newline at end of file
+ Check <xref:Lucene.Net.QueryParsers.Flexible.Standard.StandardQueryParser> for a quick start using the Lucene query parser. 
\ No newline at end of file
diff --git a/src/Lucene.Net.QueryParser/overview.md b/src/Lucene.Net.QueryParser/overview.md
index d572721..0ffaa9f 100644
--- a/src/Lucene.Net.QueryParser/overview.md
+++ b/src/Lucene.Net.QueryParser/overview.md
@@ -1,4 +1,9 @@
-<!--
+---
+uid: Lucene.Net.QueryParsers
+summary: *content
+---
+
+<!--
   Licensed to the Apache Software Foundation (ASF) under one or more
   contributor license agreements.  See the NOTICE file distributed with
   this work for additional information regarding copyright ownership.
@@ -15,9 +20,7 @@
   limitations under the License.
   -->
 
-    <title>
-      QueryParsers
-    </title>
+    
 
   Apache Lucene QueryParsers.
 
@@ -53,7 +56,7 @@
 
  This project contains the new Lucene query parser implementation, which matches the syntax of the core QueryParser but offers a more modular architecture to enable customization. 
 
- It's currently divided in 2 main packages: * [](xref:Lucene.Net.QueryParsers.Flexible.Core): it contains the query parser API classes, which should be extended by query parser implementations. * [](xref:Lucene.Net.QueryParsers.Flexible.Standard): it contains the current Lucene query parser implementation using the new query parser API. 
+ It's currently divided into 2 main packages: * <xref:Lucene.Net.QueryParsers.Flexible.Core>: it contains the query parser API classes, which should be extended by query parser implementations. * <xref:Lucene.Net.QueryParsers.Flexible.Standard>: it contains the current Lucene query parser implementation using the new query parser API. 
 
 ### Features
 
@@ -88,8 +91,8 @@
 <dt>QueryParser</dt>
 <dd>
 This layer is the text parsing layer which simply transforms the
-query text string into a [](xref:Lucene.Net.QueryParsers.Flexible.Core.Nodes.QueryNode) tree. Every text parser
-must implement the interface [](xref:Lucene.Net.QueryParsers.Flexible.Core.Parser.SyntaxParser).
+query text string into a <xref:Lucene.Net.QueryParsers.Flexible.Core.Nodes.QueryNode> tree. Every text parser
+must implement the interface <xref:Lucene.Net.QueryParsers.Flexible.Core.Parser.SyntaxParser>.
Lucene's default implementation implements it using JavaCC.
 </dd>
 
@@ -103,7 +106,7 @@ terms.
 
 <dt>QueryBuilder</dt>
 <dd>
-The third layer is a configurable map of builders, which map [](xref:Lucene.Net.QueryParsers.Flexible.Core.Nodes.QueryNode) types to its specific 
+The third layer is a configurable map of builders, which maps <xref:Lucene.Net.QueryParsers.Flexible.Core.Nodes.QueryNode> types to the specific 
builder that will transform the QueryNode into a Lucene Query object.
 </dd>
 
@@ -116,15 +119,15 @@ builder that will transform the QueryNode into Lucene Query object.
 ### StandardQueryParser and QueryParserWrapper
 
 The classic Lucene query parser is located under
-[](xref:Lucene.Net.QueryParsers.Classic).
+<xref:Lucene.Net.QueryParsers.Classic>.
 
 To make it simpler to use the new query parser 
-the class [](xref:Lucene.Net.QueryParsers.Flexible.Standard.StandardQueryParser) may be helpful,
+the class <xref:Lucene.Net.QueryParsers.Flexible.Standard.StandardQueryParser> may be helpful,
especially for people who do not want to extend the Query Parser.
 It uses the default Lucene query processors, text parser and builders, so
 you don't need to worry about dealing with those.
 
-[](xref:Lucene.Net.QueryParsers.Flexible.Standard.StandardQueryParser) usage:
+<xref:Lucene.Net.QueryParsers.Flexible.Standard.StandardQueryParser> usage:
 
           StandardQueryParser qpHelper = new StandardQueryParser();
           StandardQueryConfigHandler config =  qpHelper.getQueryConfigHandler();
diff --git a/src/Lucene.Net.Replicator/overview.md b/src/Lucene.Net.Replicator/overview.md
index f52ce85..c9d7d78 100644
--- a/src/Lucene.Net.Replicator/overview.md
+++ b/src/Lucene.Net.Replicator/overview.md
@@ -15,8 +15,6 @@
  limitations under the License.
 -->
 
-    <title>
-      replicator
-    </title>
+    
 
   Provides index files replication capabilities.
\ No newline at end of file
diff --git a/src/Lucene.Net.Replicator/package.md b/src/Lucene.Net.Replicator/package.md
index a628226..caa4753 100644
--- a/src/Lucene.Net.Replicator/package.md
+++ b/src/Lucene.Net.Replicator/package.md
@@ -1,4 +1,9 @@
-<!-- 
+---
+uid: Lucene.Net.Replicator
+summary: *content
+---
+
+<!-- 
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
diff --git a/src/Lucene.Net.Sandbox/overview.md b/src/Lucene.Net.Sandbox/overview.md
index bcbd313..77eff9c 100644
--- a/src/Lucene.Net.Sandbox/overview.md
+++ b/src/Lucene.Net.Sandbox/overview.md
@@ -1,4 +1,9 @@
-<!--
+---
+uid: Lucene.Net.Sandbox
+summary: *content
+---
+
+<!--
   Licensed to the Apache Software Foundation (ASF) under one or more
   contributor license agreements.  See the NOTICE file distributed with
   this work for additional information regarding copyright ownership.
@@ -15,8 +20,6 @@
   limitations under the License.
   -->
 
-    <title>
-      Sandbox
-    </title>
+    
 
   Sandbox
\ No newline at end of file
diff --git a/src/Lucene.Net.Spatial/overview.md b/src/Lucene.Net.Spatial/overview.md
index ebf20d2..51b1967 100644
--- a/src/Lucene.Net.Spatial/overview.md
+++ b/src/Lucene.Net.Spatial/overview.md
@@ -1,4 +1,9 @@
-<!--
+---
+uid: Lucene.Net.Spatial
+summary: *content
+---
+
+<!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
@@ -17,10 +22,10 @@
 
 # The Spatial Module for Apache Lucene
 
- The spatial module is new to Lucene 4, replacing the old "contrib" module that came before it. The principle interface to the module is a [](xref:Lucene.Net.Spatial.SpatialStrategy) which encapsulates an approach to indexing and searching based on shapes. Different Strategies have different features and performance profiles, which are documented at each Strategy implementation class level. 
+ The spatial module is new to Lucene 4, replacing the old "contrib" module that came before it. The principal interface to the module is a <xref:Lucene.Net.Spatial.SpatialStrategy>, which encapsulates an approach to indexing and searching based on shapes. Different Strategies have different features and performance profiles, which are documented at each Strategy implementation class level. 
 
  For some sample code showing how to use the API, see SpatialExample.java in the tests. 
 
  The spatial module uses [Spatial4j](https://github.com/spatial4j/spatial4j) heavily. Spatial4j is an ASL licensed library with these capabilities: * Provides shape implementations, namely point, rectangle, and circle. Both geospatial contexts and plain 2D Euclidean/Cartesian contexts are supported. With an additional dependency, it adds polygon and other geometry shape support via integration with [JTS Topology Suite](http://sourceforge.net/projects/jts-topo-suite/). This includes datel [...]
 
- Historical note: The new spatial module was once known as Lucene Spatial Playground (LSP) as an external project. In ~March 2012, LSP split into this new module as part of Lucene and Spatial4j externally. A large chunk of the LSP implementation originated as SOLR-2155 which uses trie/prefix-tree algorithms with a geohash encoding. That approach is implemented in [](xref:Lucene.Net.Spatial.Prefix.RecursivePrefixTreeStrategy) today. 
\ No newline at end of file
+ Historical note: The new spatial module was once known as Lucene Spatial Playground (LSP) as an external project. In ~March 2012, LSP was split into this new module within Lucene and the external Spatial4j project. A large chunk of the LSP implementation originated as SOLR-2155, which uses trie/prefix-tree algorithms with a geohash encoding. That approach is implemented in <xref:Lucene.Net.Spatial.Prefix.RecursivePrefixTreeStrategy> today. 
\ No newline at end of file
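A hedged sketch of the SpatialStrategy round trip; Spatial4n type and member names vary across versions, so treat every name here as an assumption:

    SpatialContext ctx = SpatialContext.GEO;
    SpatialPrefixTree grid = new GeohashPrefixTree(ctx, 11); // 11 geohash levels of precision
    SpatialStrategy strategy = new RecursivePrefixTreeStrategy(grid, "location");

    // Indexing: turn a shape into fields on a document.
    foreach (Field f in strategy.CreateIndexableFields(ctx.MakePoint(-80.93, 33.77)))
        doc.Add(f);

    // Searching: documents whose shape intersects a query rectangle.
    var args = new SpatialArgs(SpatialOperation.Intersects,
        ctx.MakeRectangle(-81.0, -80.0, 33.0, 34.0)); // minX, maxX, minY, maxY
    Filter filter = strategy.MakeFilter(args);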
diff --git a/src/Lucene.Net.Suggest/overview.md b/src/Lucene.Net.Suggest/overview.md
index 1b0f104..5b894cc 100644
--- a/src/Lucene.Net.Suggest/overview.md
+++ b/src/Lucene.Net.Suggest/overview.md
@@ -1,4 +1,9 @@
-<!--
+---
+uid: Lucene.Net.Suggest
+summary: *content
+---
+
+<!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
@@ -15,8 +20,6 @@
  limitations under the License.
 -->
 
-    <title>
-      suggest
-    </title>
+    
 
   Auto-suggest and spellchecking support.
\ No newline at end of file
diff --git a/src/Lucene.Net.TestFramework/Analysis/package.md b/src/Lucene.Net.TestFramework/Analysis/package.md
index abbc244..981c620 100644
--- a/src/Lucene.Net.TestFramework/Analysis/package.md
+++ b/src/Lucene.Net.TestFramework/Analysis/package.md
@@ -18,4 +18,4 @@
 
 Support for testing analysis components.
 
- The main classes of interest are: * [](xref:Lucene.Net.Analysis.BaseTokenStreamTestCase): Highly recommended to use its helper methods, (especially in conjunction with [](xref:Lucene.Net.Analysis.MockAnalyzer) or [](xref:Lucene.Net.Analysis.MockTokenizer)), as it contains many assertions and checks to catch bugs. * [](xref:Lucene.Net.Analysis.MockTokenizer): Tokenizer for testing. Tokenizer that serves as a replacement for WHITESPACE, SIMPLE, and KEYWORD tokenizers. If you are writing a [...]
\ No newline at end of file
+ The main classes of interest are: * <xref:Lucene.Net.Analysis.BaseTokenStreamTestCase>: It is highly recommended to use its helper methods (especially in conjunction with <xref:Lucene.Net.Analysis.MockAnalyzer> or <xref:Lucene.Net.Analysis.MockTokenizer>), as it contains many assertions and checks to catch bugs. * <xref:Lucene.Net.Analysis.MockTokenizer>: A tokenizer for testing that serves as a replacement for the WHITESPACE, SIMPLE, and KEYWORD tokenizers. If you are writing a compone [...]
\ No newline at end of file
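A hedged sketch of a test built on the helpers just listed; the NUnit attribute and the `Random` property follow this port's test framework conventions, and the expected tokens are illustrative:

    public class MyAnalyzerTest : BaseTokenStreamTestCase
    {
        [Test]
        public void TestSimpleTokens()
        {
            Analyzer a = new MockAnalyzer(Random);
            // The helper asserts tokens plus position/offset consistency in one call.
            AssertAnalyzesTo(a, "some text", new[] { "some", "text" });
        }
    }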
diff --git a/src/Lucene.Net.TestFramework/Codecs/Compressing/package.md b/src/Lucene.Net.TestFramework/Codecs/Compressing/package.md
index 08aeed5..6ddc8ae 100644
--- a/src/Lucene.Net.TestFramework/Codecs/Compressing/package.md
+++ b/src/Lucene.Net.TestFramework/Codecs/Compressing/package.md
@@ -16,4 +16,4 @@
  limitations under the License.
 -->
 
-Support for testing [](xref:Lucene.Net.Codecs.Compressing.CompressingStoredFieldsFormat).
\ No newline at end of file
+Support for testing <xref:Lucene.Net.Codecs.Compressing.CompressingStoredFieldsFormat>.
\ No newline at end of file
diff --git a/src/Lucene.Net.TestFramework/Codecs/Lucene40/package.md b/src/Lucene.Net.TestFramework/Codecs/Lucene40/package.md
index a98655b..0fc2c8c 100644
--- a/src/Lucene.Net.TestFramework/Codecs/Lucene40/package.md
+++ b/src/Lucene.Net.TestFramework/Codecs/Lucene40/package.md
@@ -16,4 +16,4 @@
  limitations under the License.
 -->
 
-Support for testing [](xref:Lucene.Net.Codecs.Lucene40.Lucene40PostingsFormat).
\ No newline at end of file
+Support for testing <xref:Lucene.Net.Codecs.Lucene40.Lucene40PostingsFormat>.
\ No newline at end of file
diff --git a/src/Lucene.Net.TestFramework/Codecs/Lucene41/package.md b/src/Lucene.Net.TestFramework/Codecs/Lucene41/package.md
index 456fa6b..1b35629 100644
--- a/src/Lucene.Net.TestFramework/Codecs/Lucene41/package.md
+++ b/src/Lucene.Net.TestFramework/Codecs/Lucene41/package.md
@@ -16,4 +16,4 @@
  limitations under the License.
 -->
 
-Support for testing [](xref:Lucene.Net.Codecs.Lucene41.Lucene41Codec).
\ No newline at end of file
+Support for testing <xref:Lucene.Net.Codecs.Lucene41.Lucene41Codec>.
\ No newline at end of file
diff --git a/src/Lucene.Net.TestFramework/Codecs/Lucene41Ords/package.md b/src/Lucene.Net.TestFramework/Codecs/Lucene41Ords/package.md
index fb0ab32..8d11e53 100644
--- a/src/Lucene.Net.TestFramework/Codecs/Lucene41Ords/package.md
+++ b/src/Lucene.Net.TestFramework/Codecs/Lucene41Ords/package.md
@@ -16,4 +16,4 @@
  limitations under the License.
 -->
 
-Codec for testing that supports [](xref:Lucene.Net.Index.TermsEnum.Ord())
\ No newline at end of file
+Codec for testing that supports [#ord()](xref:Lucene.Net.Index.TermsEnum)
\ No newline at end of file
diff --git a/src/Lucene.Net.TestFramework/Codecs/Lucene42/package.md b/src/Lucene.Net.TestFramework/Codecs/Lucene42/package.md
index 5123560..4e10dfd 100644
--- a/src/Lucene.Net.TestFramework/Codecs/Lucene42/package.md
+++ b/src/Lucene.Net.TestFramework/Codecs/Lucene42/package.md
@@ -16,4 +16,4 @@
  limitations under the License.
 -->
 
-Support for testing [](xref:Lucene.Net.Codecs.Lucene42.Lucene42Codec).
\ No newline at end of file
+Support for testing <xref:Lucene.Net.Codecs.Lucene42.Lucene42Codec>.
\ No newline at end of file
diff --git a/src/Lucene.Net.TestFramework/Codecs/Lucene45/package.md b/src/Lucene.Net.TestFramework/Codecs/Lucene45/package.md
index 588c2e8..31f4f8d 100644
--- a/src/Lucene.Net.TestFramework/Codecs/Lucene45/package.md
+++ b/src/Lucene.Net.TestFramework/Codecs/Lucene45/package.md
@@ -16,4 +16,4 @@
  limitations under the License.
 -->
 
-Support for testing [](xref:Lucene.Net.Codecs.Lucene45.Lucene45Codec).
\ No newline at end of file
+Support for testing <xref:Lucene.Net.Codecs.Lucene45.Lucene45Codec>.
\ No newline at end of file
diff --git a/src/Lucene.Net.TestFramework/Codecs/MockSep/package.md b/src/Lucene.Net.TestFramework/Codecs/MockSep/package.md
index caad729..285255b 100644
--- a/src/Lucene.Net.TestFramework/Codecs/MockSep/package.md
+++ b/src/Lucene.Net.TestFramework/Codecs/MockSep/package.md
@@ -16,4 +16,4 @@
  limitations under the License.
 -->
 
-Very simple implementations of [](xref:Lucene.Net.Codecs.Sep) for testing.
\ No newline at end of file
+Very simple implementations of <xref:Lucene.Net.Codecs.Sep> for testing.
\ No newline at end of file
diff --git a/src/Lucene.Net.TestFramework/Codecs/NestedPulsing/package.md b/src/Lucene.Net.TestFramework/Codecs/NestedPulsing/package.md
index b9d4eb5..e94d513 100644
--- a/src/Lucene.Net.TestFramework/Codecs/NestedPulsing/package.md
+++ b/src/Lucene.Net.TestFramework/Codecs/NestedPulsing/package.md
@@ -16,4 +16,4 @@
  limitations under the License.
 -->
 
-Codec for testing that wraps [](xref:Lucene.Net.Codecs.Pulsing.PulsingPostingsFormat) with itself.
\ No newline at end of file
+Codec for testing that wraps <xref:Lucene.Net.Codecs.Pulsing.PulsingPostingsFormat> with itself.
\ No newline at end of file
diff --git a/src/Lucene.Net.TestFramework/Index/package.md b/src/Lucene.Net.TestFramework/Index/package.md
index 2d2255d..71866dd 100644
--- a/src/Lucene.Net.TestFramework/Index/package.md
+++ b/src/Lucene.Net.TestFramework/Index/package.md
@@ -18,4 +18,4 @@
 
 Support for testing of indexes. 
 
- The primary classes are: * [](xref:Lucene.Net.Index.RandomIndexWriter): Randomizes the indexing experience. [](xref:Lucene.Net.Index.MockRandomMergePolicy): MergePolicy that makes random decisions. 
\ No newline at end of file
+ The primary classes are:
+
+ * <xref:Lucene.Net.Index.RandomIndexWriter>: Randomizes the indexing experience.
+ * <xref:Lucene.Net.Index.MockRandomMergePolicy>: MergePolicy that makes random decisions.
\ No newline at end of file
diff --git a/src/Lucene.Net.TestFramework/Search/package.md b/src/Lucene.Net.TestFramework/Search/package.md
index 4a9f0da..a8e5978 100644
--- a/src/Lucene.Net.TestFramework/Search/package.md
+++ b/src/Lucene.Net.TestFramework/Search/package.md
@@ -18,4 +18,4 @@
 
 Support for testing search components. 
 
- The primary classes are: * [](xref:Lucene.Net.Search.QueryUtils): Useful methods for testing Query classes. [](xref:Lucene.Net.Search.ShardSearchingTestBase): Base class for simulating distributed search. 
\ No newline at end of file
+ The primary classes are:
+
+ * <xref:Lucene.Net.Search.QueryUtils>: Useful methods for testing Query classes.
+ * <xref:Lucene.Net.Search.ShardSearchingTestBase>: Base class for simulating distributed search.
\ No newline at end of file
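
 For instance (a sketch; assumes an existing IndexSearcher named 'searcher' inside a LuceneTestCase subclass):

        Query query = new TermQuery(new Term("body", "hello"));
        // consistency checks: equals/hashCode, scoring and skipping behavior
        QueryUtils.check(random(), query, searcher);
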
diff --git a/src/Lucene.Net.TestFramework/Store/package.md b/src/Lucene.Net.TestFramework/Store/package.md
index aa87950..39fd766 100644
--- a/src/Lucene.Net.TestFramework/Store/package.md
+++ b/src/Lucene.Net.TestFramework/Store/package.md
@@ -18,5 +18,5 @@
 
 Support for testing store mechanisms. 
 
-The primary class is [](xref:Lucene.Net.Store.MockDirectoryWrapper), which
+The primary class is <xref:Lucene.Net.Store.MockDirectoryWrapper>, which
 wraps any Directory implementation and provides additional checks.
\ No newline at end of file
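
 A typical usage sketch (assumes a LuceneTestCase subclass; the rate value is illustrative):

        MockDirectoryWrapper dir = newMockDirectory();  // wraps a randomly chosen Directory
        dir.setRandomIOExceptionRate(0.05);             // randomly inject IOExceptions
        // ... exercise indexing/search code against 'dir' ...
        dir.close();                                    // unclosed files fail the test here
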
diff --git a/src/Lucene.Net.TestFramework/Util/Automaton/package.md b/src/Lucene.Net.TestFramework/Util/Automaton/package.md
index c9888f2..c21144f 100644
--- a/src/Lucene.Net.TestFramework/Util/Automaton/package.md
+++ b/src/Lucene.Net.TestFramework/Util/Automaton/package.md
@@ -16,5 +16,5 @@
  limitations under the License.
 -->
 
-Support for testing automata. The primary class is [](xref:Lucene.Net.Util.Automaton.AutomatonTestUtil),
+Support for testing automata. The primary class is <xref:Lucene.Net.Util.Automaton.AutomatonTestUtil>,
 which can generate random automata, has simplified implementations for testing, etc.
\ No newline at end of file
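
 For example (a sketch; assumes a LuceneTestCase subclass):

        Automaton a = AutomatonTestUtil.randomAutomaton(random()); // a random automaton
        String regexp = AutomatonTestUtil.randomRegexp(random());  // a random regular expression
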
diff --git a/src/Lucene.Net.TestFramework/Util/package.md b/src/Lucene.Net.TestFramework/Util/package.md
index 542b6a8..2b3f064 100644
--- a/src/Lucene.Net.TestFramework/Util/package.md
+++ b/src/Lucene.Net.TestFramework/Util/package.md
@@ -16,5 +16,5 @@
  limitations under the License.
 -->
 
-General test support.  The primary class is [](xref:Lucene.Net.Util.LuceneTestCase),
+General test support.  The primary class is <xref:Lucene.Net.Util.LuceneTestCase>,
 which extends JUnit with additional functionality.
\ No newline at end of file
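
 A minimal test sketch (class and field names are illustrative):

        public class TestMyFeature extends LuceneTestCase {
          public void testIndexAndSearch() throws Exception {
            Directory dir = newDirectory();  // randomized Directory, leak-checked on close
            RandomIndexWriter writer = new RandomIndexWriter(random(), dir);
            Document doc = new Document();
            doc.add(newTextField("body", "hello world", Field.Store.NO));
            writer.addDocument(doc);
            writer.close();
            dir.close();
          }
        }
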
diff --git a/src/Lucene.Net.TestFramework/overview.md b/src/Lucene.Net.TestFramework/overview.md
index 429e1ce..a29ad4c 100644
--- a/src/Lucene.Net.TestFramework/overview.md
+++ b/src/Lucene.Net.TestFramework/overview.md
@@ -1,4 +1,9 @@
-<!--
+---
+uid: Lucene.Net.TestFramework
+summary: *content
+---
+
+<!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
diff --git a/src/Lucene.Net/Analysis/package.md b/src/Lucene.Net/Analysis/package.md
index 861b25f..8e8e105 100644
--- a/src/Lucene.Net/Analysis/package.md
+++ b/src/Lucene.Net/Analysis/package.md
@@ -1,4 +1,9 @@
-
+---
+uid: Lucene.Net.Analysis
+summary: *content
+---
+
+
 <!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
@@ -16,7 +21,7 @@
  limitations under the License.
 -->
 
-API and code to convert text into indexable/searchable tokens. Covers [](xref:Lucene.Net.Analysis.Analyzer) and related classes.
+API and code to convert text into indexable/searchable tokens. Covers <xref:Lucene.Net.Analysis.Analyzer> and related classes.
 
 ## Parsing? Tokenization? Analysis!
 
@@ -63,9 +68,9 @@ and proximity searches (though sentence identification is not provided by Lucene
 
  The analysis package provides the mechanism to convert Strings and Readers into tokens that can be indexed by Lucene. There are four main classes in the package from which all analysis processes are derived. These are: 
 
-*   [](xref:Lucene.Net.Analysis.Analyzer) – An Analyzer is 
+*   <xref:Lucene.Net.Analysis.Analyzer> – An Analyzer is 
     responsible for building a 
-    [](xref:Lucene.Net.Analysis.TokenStream) which can be consumed
+    <xref:Lucene.Net.Analysis.TokenStream> which can be consumed
     by the indexing and searching processes.  See below for more information
     on implementing your own Analyzer.
 
@@ -79,41 +84,41 @@ and proximity searches (though sentence identification is not provided by Lucene
     constructors and reset() methods accept a CharFilter.  CharFilters may
     be chained to perform multiple pre-tokenization modifications.
 
-*   [](xref:Lucene.Net.Analysis.Tokenizer) – A Tokenizer is a 
-    [](xref:Lucene.Net.Analysis.TokenStream) and is responsible for
+*   <xref:Lucene.Net.Analysis.Tokenizer> – A Tokenizer is a 
+    <xref:Lucene.Net.Analysis.TokenStream> and is responsible for
     breaking up incoming text into tokens. In most cases, an Analyzer will
     use a Tokenizer as the first step in the analysis process.  However,
     to modify text prior to tokenization, use a CharStream subclass (see
     above).
 
-*   [](xref:Lucene.Net.Analysis.TokenFilter) – A TokenFilter is
-    also a [](xref:Lucene.Net.Analysis.TokenStream) and is responsible
+*   <xref:Lucene.Net.Analysis.TokenFilter> – A TokenFilter is
+    also a <xref:Lucene.Net.Analysis.TokenStream> and is responsible
     for modifying tokens that have been created by the Tokenizer.  Common 
     modifications performed by a TokenFilter are: deletion, stemming, synonym 
     injection, and down casing.  Not all Analyzers require TokenFilters.
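
 Putting the four classes together, a minimal custom Analyzer might look like the following sketch (assuming the WhitespaceTokenizer and LowerCaseFilter from the analysis-common module):

        Analyzer analyzer = new Analyzer() {
          @Override
          protected TokenStreamComponents createComponents(String fieldName, Reader reader) {
            Version matchVersion = Version.LUCENE_XY; // Substitute desired Lucene version for XY
            Tokenizer source = new WhitespaceTokenizer(matchVersion, reader);
            TokenStream result = new LowerCaseFilter(matchVersion, source);
            return new TokenStreamComponents(source, result);
          }
        };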
 
 ## Hints, Tips and Traps
 
- The synergy between [](xref:Lucene.Net.Analysis.Analyzer) and [](xref:Lucene.Net.Analysis.Tokenizer) is sometimes confusing. To ease this confusion, some clarifications: 
+ The synergy between <xref:Lucene.Net.Analysis.Analyzer> and <xref:Lucene.Net.Analysis.Tokenizer> is sometimes confusing. To ease this confusion, some clarifications: 
 
-*   The [](xref:Lucene.Net.Analysis.Analyzer) is responsible for the entire task of 
-    <u>creating</u> tokens out of the input text, while the [](xref:Lucene.Net.Analysis.Tokenizer)
+*   The <xref:Lucene.Net.Analysis.Analyzer> is responsible for the entire task of 
+    <u>creating</u> tokens out of the input text, while the <xref:Lucene.Net.Analysis.Tokenizer>
     is only responsible for <u>breaking</u> the input text into tokens. Very likely, tokens created 
-    by the [](xref:Lucene.Net.Analysis.Tokenizer) would be modified or even omitted 
-    by the [](xref:Lucene.Net.Analysis.Analyzer) (via one or more
-    [](xref:Lucene.Net.Analysis.TokenFilter)s) before being returned.
+    by the <xref:Lucene.Net.Analysis.Tokenizer> would be modified or even omitted 
+    by the <xref:Lucene.Net.Analysis.Analyzer> (via one or more
+    <xref:Lucene.Net.Analysis.TokenFilter>s) before being returned.
 
-*   [](xref:Lucene.Net.Analysis.Tokenizer) is a [](xref:Lucene.Net.Analysis.TokenStream), 
-    but [](xref:Lucene.Net.Analysis.Analyzer) is not.
+*   <xref:Lucene.Net.Analysis.Tokenizer> is a <xref:Lucene.Net.Analysis.TokenStream>, 
+    but <xref:Lucene.Net.Analysis.Analyzer> is not.
 
-*   [](xref:Lucene.Net.Analysis.Analyzer) is "field aware", but 
-    [](xref:Lucene.Net.Analysis.Tokenizer) is not.
+*   <xref:Lucene.Net.Analysis.Analyzer> is "field aware", but 
+    <xref:Lucene.Net.Analysis.Tokenizer> is not.
 
  Lucene Java provides a number of analysis capabilities, the most commonly used one being the StandardAnalyzer. Many applications will have a long and industrious life with nothing more than the StandardAnalyzer. However, there are a few other classes/packages that are worth mentioning: 
 
 1.  PerFieldAnalyzerWrapper – Most Analyzers perform the same operation on all
-    [](xref:Lucene.Net.Documents.Field)s.  The PerFieldAnalyzerWrapper can be used to associate a different Analyzer with different
-    [](xref:Lucene.Net.Documents.Field)s.
+    <xref:Lucene.Net.Documents.Field>s.  The PerFieldAnalyzerWrapper can be used to associate a different Analyzer with different
+    <xref:Lucene.Net.Documents.Field>s.
 
 2.  The analysis library located at the root of the Lucene distribution has a number of different Analyzer implementations to solve a variety
     of different problems related to searching.  Many of the Analyzers are designed to analyze non-English languages.
@@ -127,7 +132,7 @@ and proximity searches (though sentence identification is not provided by Lucene
  Applications usually do not invoke analysis – Lucene does it for them: 
 
 *   At indexing, as a consequence of 
-    [](xref:Lucene.Net.Index.IndexWriter.AddDocument(Iterable) addDocument(doc)),
+    [AddDocument](xref:Lucene.Net.Index.IndexWriter#methods),
     the Analyzer in effect for indexing is invoked for each indexed field of the added document.
 
 *   At search, a QueryParser may invoke the Analyzer during parsing.  Note that for some queries, analysis does not
@@ -143,7 +148,7 @@ and proximity searches (though sentence identification is not provided by Lucene
         try {
           ts.reset(); // Resets this stream to the beginning. (Required)
           while (ts.incrementToken()) {
-            // Use [](xref:Lucene.Net.Util.AttributeSource.ReflectAsString(boolean))
+            // Use [#reflectAsString(boolean)](xref:Lucene.Net.Util.AttributeSource)
             // for token stream debugging.
             System.out.println("token: " + ts.reflectAsString(true));
 
@@ -167,13 +172,13 @@ and proximity searches (though sentence identification is not provided by Lucene
 
 ### Field Section Boundaries
 
- When [](xref:Lucene.Net.Documents.Document.Add(Lucene.Net.Index.IndexableField) document.Add(field)) is called multiple times for the same field name, we could say that each such call creates a new section for that field in that document. In fact, a separate call to [](xref:Lucene.Net.Analysis.Analyzer.TokenStream(java.Lang.String, java.Io.Reader) tokenStream(field,reader)) would take place for each of these so called "sections". However, the default Analyzer behavior is to treat all th [...]
+ When [Document.add](xref:Lucene.Net.Documents.Document#methods) is called multiple times for the same field name, we could say that each such call creates a new section for that field in that document. In fact, a separate call to [TokenStream](xref:Lucene.Net.Analysis.Analyzer#methods) would take place for each of these so called "sections". However, the default Analyzer behavior is to treat all these sections as one large section. This allows phrase search and proximity search to seaml [...]
 
         document.add(new Field("f","first ends",...);
         document.add(new Field("f","starts two",...);
         indexWriter.addDocument(document);
 
- Then, a phrase search for "ends starts" would find that document. Where desired, this behavior can be modified by introducing a "position gap" between consecutive field "sections", simply by overriding [](xref:Lucene.Net.Analysis.Analyzer.GetPositionIncrementGap(java.Lang.String) Analyzer.GetPositionIncrementGap(fieldName)): 
+ Then, a phrase search for "ends starts" would find that document. Where desired, this behavior can be modified by introducing a "position gap" between consecutive field "sections", simply by overriding [Analyzer.getPositionIncrementGap](xref:Lucene.Net.Analysis.Analyzer#methods): 
 
       Version matchVersion = Version.LUCENE_XY; // Substitute desired Lucene version for XY
       Analyzer myAnalyzer = new StandardAnalyzer(matchVersion) {
@@ -184,7 +189,7 @@ and proximity searches (though sentence identification is not provided by Lucene
 
 ### Token Position Increments
 
- By default, all tokens created by Analyzers and Tokenizers have a [](xref:Lucene.Net.Analysis.TokenAttributes.PositionIncrementAttribute.GetPositionIncrement() position increment) of one. This means that the position stored for that token in the index would be one more than that of the previous token. Recall that phrase and proximity searches rely on position info. 
+ By default, all tokens created by Analyzers and Tokenizers have a [position increment](xref:Lucene.Net.Analysis.TokenAttributes.PositionIncrementAttribute#methods) of one. This means that the position stored for that token in the index would be one more than that of the previous token. Recall that phrase and proximity searches rely on position info. 
 
  If the selected analyzer filters the stop words "is" and "the", then for a document containing the string "blue is the sky", only the tokens "blue", "sky" are indexed, with position("sky") = 3 + position("blue"). Now, a phrase query "blue is the sky" would find that document, because the same analyzer filters the same stop words from that query. But the phrase query "blue sky" would not find that document because the position increment between "blue" and "sky" is only 1. 
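
 A stop filter illustrating this behavior might look like the following sketch (names are illustrative; a production implementation would use the analysis module's StopFilter):

        public final class SimpleStopFilter extends TokenFilter {
          private final CharTermAttribute termAtt = addAttribute(CharTermAttribute.class);
          private final PositionIncrementAttribute posIncAtt =
              addAttribute(PositionIncrementAttribute.class);
          private final Set<String> stopWords;

          SimpleStopFilter(TokenStream input, Set<String> stopWords) {
            super(input);
            this.stopWords = stopWords;
          }

          @Override
          public boolean incrementToken() throws IOException {
            int skipped = 0;
            while (input.incrementToken()) {
              if (!stopWords.contains(termAtt.toString())) {
                // carry over the positions of the removed stop words
                posIncAtt.setPositionIncrement(posIncAtt.getPositionIncrement() + skipped);
                return true;
              }
              skipped += posIncAtt.getPositionIncrement();
            }
            return false;
          }
        }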
 
@@ -229,7 +234,7 @@ and proximity searches (though sentence identification is not provided by Lucene
 
 ### Token Position Length
 
- By default, all tokens created by Analyzers and Tokenizers have a [](xref:Lucene.Net.Analysis.TokenAttributes.PositionLengthAttribute.GetPositionLength() position length) of one. This means that the token occupies a single position. This attribute is not indexed and thus not taken into account for positional queries, but is used by eg. suggesters. 
+ By default, all tokens created by Analyzers and Tokenizers have a [position length](xref:Lucene.Net.Analysis.TokenAttributes.PositionLengthAttribute#methods) of one. This means that the token occupies a single position. This attribute is not indexed and thus not taken into account for positional queries, but is used by, e.g., suggesters. 
 
 The main use case for position lengths is multi-word synonyms. With single-word synonyms, setting the position increment to 0 is enough to denote the fact that two words are synonyms, for example: 
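
 Assuming a synonym filter that has already obtained both attributes via addAttribute (an illustrative sketch, not the original example):

        // Single-word synonym ("quick" injected for "fast"): same position.
        posIncAtt.setPositionIncrement(0);
        // Multi-word synonym ("wifi" injected over "wi" + "fi"): same start
        // position, but the injected token spans two positions.
        posLenAtt.setPositionLength(2);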
 
@@ -264,17 +269,17 @@ and proximity searches (though sentence identification is not provided by Lucene
 *   Tokens that have the same start position must have the same start offset.
 *   Tokens that have the same end position (taking into account the
   position length) must have the same end offset.
-*   Tokenizers must call [](xref:Lucene.Net.Util.AttributeSource.ClearAttributes()) in
+*   Tokenizers must call [#clearAttributes()](xref:Lucene.Net.Util.AttributeSource) in
   incrementToken().
-*   Tokenizers must override [](xref:Lucene.Net.Analysis.TokenStream.End()), and pass the final
+*   Tokenizers must override [#end()](xref:Lucene.Net.Analysis.TokenStream), and pass the final
   offset (the total number of input characters processed) to both
-  parameters of [](xref:Lucene.Net.Analysis.TokenAttributes.OffsetAttribute.SetOffset(int, int)).
+  parameters of [SetOffset(int, int)](xref:Lucene.Net.Analysis.TokenAttributes.OffsetAttribute#methods).
 
  Although these rules might seem easy to follow, problems can quickly happen when chaining badly implemented filters that play with positions and offsets, such as synonym or n-grams filters. Here are good practices for writing correct filters: 
 
 *   Token filters should not modify offsets. If you feel that your filter would need to modify offsets, then it should probably be implemented as a tokenizer.
 *   Token filters should not insert positions. If a filter needs to add tokens, then they should all have a position increment of 0.
-*   When they add tokens, token filters should call [](xref:Lucene.Net.Util.AttributeSource.ClearAttributes()) first.
+*   When they add tokens, token filters should call [#clearAttributes()](xref:Lucene.Net.Util.AttributeSource) first.
 *   When they remove tokens, token filters should increment the position increment of the following token.
 *   Token filters should preserve position lengths.
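
 For instance, a filter that injects an extra token could follow these rules as in the sketch below (pendingTerm, savedStart and savedEnd are assumed fields; the attribute fields are as in the earlier sketches):

        @Override
        public boolean incrementToken() throws IOException {
          if (pendingTerm != null) {
            clearAttributes();                          // always clear before adding
            termAtt.setEmpty().append(pendingTerm);
            posIncAtt.setPositionIncrement(0);          // inserted tokens add no position
            offsetAtt.setOffset(savedStart, savedEnd);  // do not modify offsets
            pendingTerm = null;
            return true;
          }
          return input.incrementToken();
        }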
 
@@ -284,13 +289,13 @@ and proximity searches (though sentence identification is not provided by Lucene
 
 ### Attribute and AttributeSource
 
- Classes [](xref:Lucene.Net.Util.Attribute) and [](xref:Lucene.Net.Util.AttributeSource) serve as the basis upon which the analysis elements of "Flexible Indexing" are implemented. An Attribute holds a particular piece of information about a text token. For example, [](xref:Lucene.Net.Analysis.TokenAttributes.CharTermAttribute) contains the term text of a token, and [](xref:Lucene.Net.Analysis.TokenAttributes.OffsetAttribute) contains the start and end character offsets of a token. An At [...]
+ Classes <xref:Lucene.Net.Util.Attribute> and <xref:Lucene.Net.Util.AttributeSource> serve as the basis upon which the analysis elements of "Flexible Indexing" are implemented. An Attribute holds a particular piece of information about a text token. For example, <xref:Lucene.Net.Analysis.TokenAttributes.CharTermAttribute> contains the term text of a token, and <xref:Lucene.Net.Analysis.TokenAttributes.OffsetAttribute> contains the start and end character offsets of a token. An AttributeS [...]
 
  Lucene provides seven Attributes out of the box: 
 
 <table rules="all" frame="box" cellpadding="3">
   <tr>
-    <td>[](xref:Lucene.Net.Analysis.TokenAttributes.CharTermAttribute)</td>
+    <td><xref:Lucene.Net.Analysis.TokenAttributes.CharTermAttribute></td>
     <td>
       The term text of a token.  Implements {@link java.lang.CharSequence} 
       (providing methods length() and charAt(), and allowing e.g. for direct
@@ -299,31 +304,31 @@ and proximity searches (though sentence identification is not provided by Lucene
     </td>
   </tr>
   <tr>
-    <td>[](xref:Lucene.Net.Analysis.TokenAttributes.OffsetAttribute)</td>
+    <td><xref:Lucene.Net.Analysis.TokenAttributes.OffsetAttribute></td>
     <td>The start and end offset of a token in characters.</td>
   </tr>
   <tr>
-    <td>[](xref:Lucene.Net.Analysis.TokenAttributes.PositionIncrementAttribute)</td>
+    <td><xref:Lucene.Net.Analysis.TokenAttributes.PositionIncrementAttribute></td>
     <td>See above for detailed information about position increment.</td>
   </tr>
   <tr>
-    <td>[](xref:Lucene.Net.Analysis.TokenAttributes.PositionLengthAttribute)</td>
+    <td><xref:Lucene.Net.Analysis.TokenAttributes.PositionLengthAttribute></td>
     <td>The number of positions occupied by a token.</td>
   </tr>
   <tr>
-    <td>[](xref:Lucene.Net.Analysis.TokenAttributes.PayloadAttribute)</td>
+    <td><xref:Lucene.Net.Analysis.TokenAttributes.PayloadAttribute></td>
     <td>The payload that a Token can optionally have.</td>
   </tr>
   <tr>
-    <td>[](xref:Lucene.Net.Analysis.TokenAttributes.TypeAttribute)</td>
+    <td><xref:Lucene.Net.Analysis.TokenAttributes.TypeAttribute></td>
     <td>The type of the token. Default is 'word'.</td>
   </tr>
   <tr>
-    <td>[](xref:Lucene.Net.Analysis.TokenAttributes.FlagsAttribute)</td>
+    <td><xref:Lucene.Net.Analysis.TokenAttributes.FlagsAttribute></td>
     <td>Optional flags a token can have.</td>
   </tr>
   <tr>
-    <td>[](xref:Lucene.Net.Analysis.TokenAttributes.KeywordAttribute)</td>
+    <td><xref:Lucene.Net.Analysis.TokenAttributes.KeywordAttribute></td>
     <td>
       Keyword-aware TokenStreams/-Filters skip modification of tokens that
       return true from this attribute's isKeyword() method. 
@@ -343,48 +348,48 @@ The code fragment of the [analysis workflow
 protocol](#analysis-workflow) above shows a token stream being obtained, used, and then
 left for garbage. However, that does not mean that the components of
 that token stream will, in fact, be discarded. The default is just the
-opposite. [](xref:Lucene.Net.Analysis.Analyzer) applies a reuse
+opposite. <xref:Lucene.Net.Analysis.Analyzer> applies a reuse
 strategy to the tokenizer and the token filters. It will reuse
-them. For each new input, it calls [](xref:Lucene.Net.Analysis.Tokenizer.SetReader(java.Io.Reader)) 
+them. For each new input, it calls [#setReader(java.io.Reader)](xref:Lucene.Net.Analysis.Tokenizer) 
 to set the input. Your components must be prepared for this scenario,
 as described below.
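
 From the caller's side, reuse looks like this (a sketch; the field name and inputs are illustrative):

        Analyzer analyzer = new StandardAnalyzer(Version.LUCENE_XY); // Substitute desired Lucene version for XY
        TokenStream ts = analyzer.tokenStream("myfield", new StringReader("first text"));
        // ... consume and close ts as in the analysis workflow above ...
        // The same tokenizer/filter instances are handed back for the next input;
        // the Analyzer merely calls setReader(...) on the tokenizer:
        ts = analyzer.tokenStream("myfield", new StringReader("second text"));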
 
 #### Tokenizer
 
-*   You should create your tokenizer class by extending [](xref:Lucene.Net.Analysis.Tokenizer).
+*   You should create your tokenizer class by extending <xref:Lucene.Net.Analysis.Tokenizer>.
 
 *   Your tokenizer must **never** make direct use of the
   {@link java.io.Reader} supplied to its constructor(s). (A future
   release of Apache Lucene may remove the reader parameters from the
   Tokenizer constructors.)
-  [](xref:Lucene.Net.Analysis.Tokenizer) wraps the reader in an
+  <xref:Lucene.Net.Analysis.Tokenizer> wraps the reader in an
   object that helps enforce that applications comply with the [analysis workflow](#analysis-workflow). Thus, your class
   should only reference the input via the protected 'input' field
   of Tokenizer.
 
-*   Your tokenizer **must** override [](xref:Lucene.Net.Analysis.TokenStream.End()).
+*   Your tokenizer **must** override [#end()](xref:Lucene.Net.Analysis.TokenStream).
   Your implementation **must** call
   `super.end()`. It must set a correct final offset into
   the offset attribute, and finish up and other attributes to reflect
   the end of the stream.
 
-*   If your tokenizer overrides [](xref:Lucene.Net.Analysis.TokenStream.Reset())
-  or [](xref:Lucene.Net.Analysis.TokenStream.Close()), it
+*   If your tokenizer overrides [#reset()](xref:Lucene.Net.Analysis.TokenStream)
+  or [#close()](xref:Lucene.Net.Analysis.TokenStream), it
     **must** call the corresponding superclass method.
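
 A tokenizer skeleton honoring these rules (a simplified sketch that splits on commas and assumes no empty segments):

        public final class CommaTokenizer extends Tokenizer {
          private final CharTermAttribute termAtt = addAttribute(CharTermAttribute.class);
          private final OffsetAttribute offsetAtt = addAttribute(OffsetAttribute.class);
          private int offset = 0;

          public CommaTokenizer(Reader reader) {
            super(reader);
          }

          @Override
          public boolean incrementToken() throws IOException {
            clearAttributes();
            int start = offset;
            int c;
            while ((c = input.read()) != -1) { // only read via the protected 'input'
              offset++;
              if (c == ',') break;
              termAtt.append((char) c);
            }
            if (termAtt.length() == 0) return false;
            offsetAtt.setOffset(correctOffset(start), correctOffset(start + termAtt.length()));
            return true;
          }

          @Override
          public void end() throws IOException {
            super.end();                       // required superclass call
            int finalOffset = correctOffset(offset);
            offsetAtt.setOffset(finalOffset, finalOffset);
          }

          @Override
          public void reset() throws IOException {
            super.reset();                     // required superclass call
            offset = 0;
          }
        }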
 
 #### Token Filter
 
-  You should create your token filter class by extending [](xref:Lucene.Net.Analysis.TokenFilter).
-  If your token filter overrides [](xref:Lucene.Net.Analysis.TokenStream.Reset()),
-  [](xref:Lucene.Net.Analysis.TokenStream.End())
-  or [](xref:Lucene.Net.Analysis.TokenStream.Close()), it
+  You should create your token filter class by extending <xref:Lucene.Net.Analysis.TokenFilter>.
+  If your token filter overrides [#reset()](xref:Lucene.Net.Analysis.TokenStream),
+  [#end()](xref:Lucene.Net.Analysis.TokenStream)
+  or [#close()](xref:Lucene.Net.Analysis.TokenStream), it
   **must** call the corresponding superclass method.
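
 A skeleton (a sketch; real logic replaces the pass-through):

        public final class MyTokenFilter extends TokenFilter {
          private int state; // example per-stream state

          protected MyTokenFilter(TokenStream input) {
            super(input);
          }

          @Override
          public boolean incrementToken() throws IOException {
            return input.incrementToken(); // pass-through
          }

          @Override
          public void reset() throws IOException {
            super.reset(); // required: resets the wrapped stream
            state = 0;     // re-initialize per-stream state
          }
        }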
 
 #### Creating delegates
 
-  Forwarding classes (those which extend [](xref:Lucene.Net.Analysis.Tokenizer) but delegate
+  Forwarding classes (those which extend <xref:Lucene.Net.Analysis.Tokenizer> but delegate
   selected logic to another tokenizer) must also set the reader to the delegate in the overridden
-  [](xref:Lucene.Net.Analysis.Tokenizer.Reset()) method, e.g.:
+  [#reset()](xref:Lucene.Net.Analysis.Tokenizer) method, e.g.:
 
         public class ForwardingTokenizer extends Tokenizer {
            private Tokenizer delegate;
@@ -609,9 +614,9 @@ Now we're going to implement our own custom Attribute for part-of-speech tagging
 
  Now we also need to write the implementing class. The name of that class is important here: By default, Lucene checks if there is a class with the name of the Attribute with the suffix 'Impl'. In this example, we would consequently call the implementing class `PartOfSpeechAttributeImpl`. 
 
- This should be the usual behavior. However, there is also an expert-API that allows changing these naming conventions: [](xref:Lucene.Net.Util.AttributeSource.AttributeFactory). The factory accepts an Attribute interface as argument and returns an actual instance. You can implement your own factory if you need to change the default behavior. 
+ This should be the usual behavior. However, there is also an expert-API that allows changing these naming conventions: <xref:Lucene.Net.Util.AttributeSource.AttributeFactory>. The factory accepts an Attribute interface as argument and returns an actual instance. You can implement your own factory if you need to change the default behavior. 
 
- Now here is the actual class that implements our new Attribute. Notice that the class has to extend [](xref:Lucene.Net.Util.AttributeImpl): 
+ Now here is the actual class that implements our new Attribute. Notice that the class has to extend <xref:Lucene.Net.Util.AttributeImpl>: 
 
     public final class PartOfSpeechAttributeImpl extends AttributeImpl 
                                       implements PartOfSpeechAttribute {
@@ -759,7 +764,7 @@ Analyzers take Java {@link java.io.Reader}s as input. Of course you can wrap you
 to manipulate content, but this would have the big disadvantage that character offsets might be inconsistent with your original
 text.
 
-[](xref:Lucene.Net.Analysis.CharFilter) is designed to allow you to pre-process input like a FilterReader would, but also
+<xref:Lucene.Net.Analysis.CharFilter> is designed to allow you to pre-process input like a FilterReader would, but also
 preserve the original offsets associated with those characters. This way mechanisms like highlighting still work correctly.
 CharFilters can be chained.
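
 For example, HTML markup can be stripped before tokenization while keeping offsets pointing into the original text (a sketch assuming the HTMLStripCharFilter from the analysis-common module):

        Analyzer analyzer = new Analyzer() {
          @Override
          protected Reader initReader(String fieldName, Reader reader) {
            return new HTMLStripCharFilter(reader); // offsets are corrected automatically
          }

          @Override
          protected TokenStreamComponents createComponents(String fieldName, Reader reader) {
            Version matchVersion = Version.LUCENE_XY; // Substitute desired Lucene version for XY
            return new TokenStreamComponents(new WhitespaceTokenizer(matchVersion, reader));
          }
        };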
 
diff --git a/src/Lucene.Net/Codecs/Compressing/CompressingStoredFieldsFormat.cs b/src/Lucene.Net/Codecs/Compressing/CompressingStoredFieldsFormat.cs
index c88d8e7..7dcda78 100644
--- a/src/Lucene.Net/Codecs/Compressing/CompressingStoredFieldsFormat.cs
+++ b/src/Lucene.Net/Codecs/Compressing/CompressingStoredFieldsFormat.cs
@@ -48,7 +48,7 @@ namespace Lucene.Net.Codecs.Compressing
         /// Create a new <see cref="CompressingStoredFieldsFormat"/> with an empty segment
         /// suffix.
         /// </summary>
-        /// <seealso cref="CompressingStoredFieldsFormat.CompressingStoredFieldsFormat(string, string, CompressionMode, int)"/>
+        /// <seealso cref="CompressingStoredFieldsFormat(string, string, CompressionMode, int)"/>
         public CompressingStoredFieldsFormat(string formatName, CompressionMode compressionMode, int chunkSize)
             : this(formatName, "", compressionMode, chunkSize)
         {
@@ -83,6 +83,7 @@ namespace Lucene.Net.Codecs.Compressing
         /// to the size of your index).
         /// </summary>
         /// <param name="formatName"> The name of the <see cref="StoredFieldsFormat"/>. </param>
+        /// <param name="segmentSuffix"></param>
         /// <param name="compressionMode"> The <see cref="CompressionMode"/> to use. </param>
         /// <param name="chunkSize"> The minimum number of bytes of a single chunk of stored documents. </param>
         /// <seealso cref="CompressionMode"/>
diff --git a/src/Lucene.Net/Codecs/Compressing/package.md b/src/Lucene.Net/Codecs/Compressing/package.md
index 0216013..9b50655 100644
--- a/src/Lucene.Net/Codecs/Compressing/package.md
+++ b/src/Lucene.Net/Codecs/Compressing/package.md
@@ -1,4 +1,9 @@
-
+---
+uid: Lucene.Net.Codecs.Compressing
+summary: *content
+---
+
+
 <!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
diff --git a/src/Lucene.Net/Codecs/Lucene3x/package.md b/src/Lucene.Net/Codecs/Lucene3x/package.md
index ff55d41..ccadf82 100644
--- a/src/Lucene.Net/Codecs/Lucene3x/package.md
+++ b/src/Lucene.Net/Codecs/Lucene3x/package.md
@@ -1,4 +1,9 @@
-
+---
+uid: Lucene.Net.Codecs.Lucene3x
+summary: *content
+---
+
+
 <!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
diff --git a/src/Lucene.Net/Codecs/Lucene40/package.md b/src/Lucene.Net/Codecs/Lucene40/package.md
index 7cf4475..5493a56 100644
--- a/src/Lucene.Net/Codecs/Lucene40/package.md
+++ b/src/Lucene.Net/Codecs/Lucene40/package.md
@@ -1,4 +1,9 @@
-
+---
+uid: Lucene.Net.Codecs.Lucene40
+summary: *content
+---
+
+
 <!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
@@ -75,7 +80,7 @@ In Lucene, fields may be *stored*, in which case their text is stored in the ind
 
 The text of a field may be *tokenized* into terms to be indexed, or the text of a field may be used literally as a term to be indexed. Most fields are tokenized, but sometimes it is useful for certain identifier fields to be indexed literally.
 
-See the [](xref:Lucene.Net.Documents.Field Field) java docs for more information on Fields.
+See the [Field](xref:Lucene.Net.Documents.Field) java docs for more information on Fields.
 
 ### Segments
 
@@ -108,52 +113,52 @@ When documents are deleted, gaps are created in the numbering. These are eventua
 
 Each segment index maintains the following:
 
-*   [](xref:Lucene.Net.Codecs.Lucene40.Lucene40SegmentInfoFormat Segment info).
+*   [Segment info](xref:Lucene.Net.Codecs.Lucene40.Lucene40SegmentInfoFormat).
    This contains metadata about a segment, such as the number of documents,
   what files it uses, and so on.
 
-*   [](xref:Lucene.Net.Codecs.Lucene40.Lucene40FieldInfosFormat Field names). 
+*   [Field names](xref:Lucene.Net.Codecs.Lucene40.Lucene40FieldInfosFormat). 
    This contains the set of field names used in the index.
 
-*   [](xref:Lucene.Net.Codecs.Lucene40.Lucene40StoredFieldsFormat Stored Field values). 
+*   [Stored Field values](xref:Lucene.Net.Codecs.Lucene40.Lucene40StoredFieldsFormat). 
 This contains, for each document, a list of attribute-value pairs, where the attributes 
 are field names. These are used to store auxiliary information about the document, such as 
 its title, url, or an identifier to access a database. The set of stored fields are what is 
 returned for each hit when searching. This is keyed by document number.
 
-*   [](xref:Lucene.Net.Codecs.Lucene40.Lucene40PostingsFormat Term dictionary). 
+*   [Term dictionary](xref:Lucene.Net.Codecs.Lucene40.Lucene40PostingsFormat). 
 A dictionary containing all of the terms used in all of the
 indexed fields of all of the documents. The dictionary also contains the number
 of documents which contain the term, and pointers to the term's frequency and
 proximity data.
 
-*   [](xref:Lucene.Net.Codecs.Lucene40.Lucene40PostingsFormat Term Frequency data). 
+*   [Term Frequency data](xref:Lucene.Net.Codecs.Lucene40.Lucene40PostingsFormat). 
 For each term in the dictionary, the numbers of all the
 documents that contain that term, and the frequency of the term in that
 document, unless frequencies are omitted (IndexOptions.DOCS_ONLY)
 
-*   [](xref:Lucene.Net.Codecs.Lucene40.Lucene40PostingsFormat Term Proximity data). 
+*   [Term Proximity data](xref:Lucene.Net.Codecs.Lucene40.Lucene40PostingsFormat). 
 For each term in the dictionary, the positions that the
 term occurs in each document. Note that this will not exist if all fields in
 all documents omit position data.
 
-*   [](xref:Lucene.Net.Codecs.Lucene40.Lucene40NormsFormat Normalization factors). 
+*   [Normalization factors](xref:Lucene.Net.Codecs.Lucene40.Lucene40NormsFormat). 
 For each field in each document, a value is stored
 that is multiplied into the score for hits on that field.
 
-*   [](xref:Lucene.Net.Codecs.Lucene40.Lucene40TermVectorsFormat Term Vectors). 
+*   [Term Vectors](xref:Lucene.Net.Codecs.Lucene40.Lucene40TermVectorsFormat). 
 For each field in each document, the term vector (sometimes
 called document vector) may be stored. A term vector consists of term text and
 term frequency. To add Term Vectors to your index see the 
-[](xref:Lucene.Net.Documents.Field Field) constructors
+[Field](xref:Lucene.Net.Documents.Field) constructors
 
-*   [](xref:Lucene.Net.Codecs.Lucene40.Lucene40DocValuesFormat Per-document values). 
+*   [Per-document values](xref:Lucene.Net.Codecs.Lucene40.Lucene40DocValuesFormat). 
 Like stored values, these are also keyed by document
 number, but are generally intended to be loaded into main memory for fast
 access. Whereas stored values are generally intended for summary results from
 searches, per-document values are useful for things like scoring factors.
 
-*   [](xref:Lucene.Net.Codecs.Lucene40.Lucene40LiveDocsFormat Deleted documents). 
+*   [Deleted documents](xref:Lucene.Net.Codecs.Lucene40.Lucene40LiveDocsFormat). 
 An optional file indicating which documents are deleted.
 
 Details on each of these are provided in their linked pages.
@@ -185,7 +190,7 @@ The following table summarizes the names and extensions of the files in Lucene:
 <th>Brief Description</th>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Index.SegmentInfos Segments File)</td>
+<td>[Segments File](xref:Lucene.Net.Index.SegmentInfos)</td>
 <td>segments.gen, segments_N</td>
 <td>Stores information about a commit point</td>
 </tr>
@@ -196,78 +201,78 @@ The following table summarizes the names and extensions of the files in Lucene:
 file.</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Codecs.Lucene40.Lucene40SegmentInfoFormat Segment Info)</td>
+<td>[Segment Info](xref:Lucene.Net.Codecs.Lucene40.Lucene40SegmentInfoFormat)</td>
 <td>.si</td>
 <td>Stores metadata about a segment</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Store.CompoundFileDirectory Compound File)</td>
+<td>[Compound File](xref:Lucene.Net.Store.CompoundFileDirectory)</td>
 <td>.cfs, .cfe</td>
 <td>An optional "virtual" file consisting of all the other index files for
 systems that frequently run out of file handles.</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Codecs.Lucene40.Lucene40FieldInfosFormat Fields)</td>
+<td>[Fields](xref:Lucene.Net.Codecs.Lucene40.Lucene40FieldInfosFormat)</td>
 <td>.fnm</td>
 <td>Stores information about the fields</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Codecs.Lucene40.Lucene40StoredFieldsFormat Field Index)</td>
+<td>[Field Index](xref:Lucene.Net.Codecs.Lucene40.Lucene40StoredFieldsFormat)</td>
 <td>.fdx</td>
 <td>Contains pointers to field data</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Codecs.Lucene40.Lucene40StoredFieldsFormat Field Data)</td>
+<td>[Field Data](xref:Lucene.Net.Codecs.Lucene40.Lucene40StoredFieldsFormat)</td>
 <td>.fdt</td>
 <td>The stored fields for documents</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Codecs.Lucene40.Lucene40PostingsFormat Term Dictionary)</td>
+<td>[Term Dictionary](xref:Lucene.Net.Codecs.Lucene40.Lucene40PostingsFormat)</td>
 <td>.tim</td>
 <td>The term dictionary, stores term info</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Codecs.Lucene40.Lucene40PostingsFormat Term Index)</td>
+<td>[Term Index](xref:Lucene.Net.Codecs.Lucene40.Lucene40PostingsFormat)</td>
 <td>.tip</td>
 <td>The index into the Term Dictionary</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Codecs.Lucene40.Lucene40PostingsFormat Frequencies)</td>
+<td>[Frequencies](xref:Lucene.Net.Codecs.Lucene40.Lucene40PostingsFormat)</td>
 <td>.frq</td>
 <td>Contains the list of docs which contain each term along with frequency</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Codecs.Lucene40.Lucene40PostingsFormat Positions)</td>
+<td>[Positions](xref:Lucene.Net.Codecs.Lucene40.Lucene40PostingsFormat)</td>
 <td>.prx</td>
 <td>Stores position information about where a term occurs in the index</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Codecs.Lucene40.Lucene40NormsFormat Norms)</td>
+<td>[Norms](xref:Lucene.Net.Codecs.Lucene40.Lucene40NormsFormat)</td>
 <td>.nrm.cfs, .nrm.cfe</td>
 <td>Encodes length and boost factors for docs and fields</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Codecs.Lucene40.Lucene40DocValuesFormat Per-Document Values)</td>
+<td>[Per-Document Values](xref:Lucene.Net.Codecs.Lucene40.Lucene40DocValuesFormat)</td>
 <td>.dv.cfs, .dv.cfe</td>
 <td>Encodes additional scoring factors or other per-document information.</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Codecs.Lucene40.Lucene40TermVectorsFormat Term Vector Index)</td>
+<td>[Term Vector Index](xref:Lucene.Net.Codecs.Lucene40.Lucene40TermVectorsFormat)</td>
 <td>.tvx</td>
 <td>Stores offset into the document data file</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Codecs.Lucene40.Lucene40TermVectorsFormat Term Vector Documents)</td>
+<td>[Term Vector Documents](xref:Lucene.Net.Codecs.Lucene40.Lucene40TermVectorsFormat)</td>
 <td>.tvd</td>
 <td>Contains information about each document that has term vectors</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Codecs.Lucene40.Lucene40TermVectorsFormat Term Vector Fields)</td>
+<td>[Term Vector Fields](xref:Lucene.Net.Codecs.Lucene40.Lucene40TermVectorsFormat)</td>
 <td>.tvf</td>
 <td>The field level info about term vectors</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Codecs.Lucene40.Lucene40LiveDocsFormat Deleted Documents)</td>
+<td>[Deleted Documents](xref:Lucene.Net.Codecs.Lucene40.Lucene40LiveDocsFormat)</td>
 <td>.del</td>
 <td>Info about what documents are deleted</td>
 </tr>
@@ -321,9 +326,9 @@ file, previously they were stored in text format only.
 *   In version 3.4, fields can omit position data while still indexing term
 frequencies.
 *   In version 4.0, the format of the inverted index became extensible via
-the [](xref:Lucene.Net.Codecs.Codec Codec) api. Fast per-document storage
+the [Codec](xref:Lucene.Net.Codecs.Codec) api. Fast per-document storage
 ({@code DocValues}) was introduced. Normalization factors need no longer be a 
-single byte, they can be any [](xref:Lucene.Net.Index.NumericDocValues NumericDocValues). 
+single byte, they can be any [NumericDocValues](xref:Lucene.Net.Index.NumericDocValues). 
 Terms need not be unicode strings, they can be any byte sequence. Term offsets 
 can optionally be indexed into the postings lists. Payloads can be stored in the 
 term vectors.
@@ -332,6 +337,6 @@ term vectors.
 
 <div>
 
-Lucene uses a Java `int` to refer to document numbers, and the index file format uses an `Int32` on-disk to store document numbers. This is a limitation of both the index file format and the current implementation. Eventually these should be replaced with either `UInt64` values, or better yet, [](xref:Lucene.Net.Store.DataOutput.WriteVInt VInt) values which have no limit.
+Lucene uses a Java `int` to refer to document numbers, and the index file format uses an `Int32` on-disk to store document numbers. This is a limitation of both the index file format and the current implementation. Eventually these should be replaced with either `UInt64` values, or better yet, [VInt](xref:Lucene.Net.Store.DataOutput#methods) values which have no limit.
 
 </div>
\ No newline at end of file
diff --git a/src/Lucene.Net/Codecs/Lucene41/package.md b/src/Lucene.Net/Codecs/Lucene41/package.md
index 7da6918..de69212 100644
--- a/src/Lucene.Net/Codecs/Lucene41/package.md
+++ b/src/Lucene.Net/Codecs/Lucene41/package.md
@@ -1,4 +1,9 @@
-
+---
+uid: Lucene.Net.Codecs.Lucene41
+summary: *content
+---
+
+
 <!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
@@ -75,7 +80,7 @@ In Lucene, fields may be *stored*, in which case their text is stored in the ind
 
 The text of a field may be *tokenized* into terms to be indexed, or the text of a field may be used literally as a term to be indexed. Most fields are tokenized, but sometimes it is useful for certain identifier fields to be indexed literally.
 
-See the [](xref:Lucene.Net.Documents.Field Field) java docs for more information on Fields.
+See the [Field](xref:Lucene.Net.Documents.Field) java docs for more information on Fields.
 
 ### Segments
 
@@ -108,52 +113,52 @@ When documents are deleted, gaps are created in the numbering. These are eventua
 
 Each segment index maintains the following:
 
-*   [](xref:Lucene.Net.Codecs.Lucene40.Lucene40SegmentInfoFormat Segment info).
+*   [Segment info](xref:Lucene.Net.Codecs.Lucene40.Lucene40SegmentInfoFormat).
    This contains metadata about a segment, such as the number of documents,
   what files it uses, and so on.
 
-*   [](xref:Lucene.Net.Codecs.Lucene40.Lucene40FieldInfosFormat Field names). 
+*   [Field names](xref:Lucene.Net.Codecs.Lucene40.Lucene40FieldInfosFormat). 
    This contains the set of field names used in the index.
 
-*   [](xref:Lucene.Net.Codecs.Lucene41.Lucene41StoredFieldsFormat Stored Field values). 
+*   [Stored Field values](xref:Lucene.Net.Codecs.Lucene41.Lucene41StoredFieldsFormat). 
 This contains, for each document, a list of attribute-value pairs, where the attributes 
 are field names. These are used to store auxiliary information about the document, such as 
 its title, url, or an identifier to access a database. The set of stored fields are what is 
 returned for each hit when searching. This is keyed by document number.
 
-*   [](xref:Lucene.Net.Codecs.Lucene41.Lucene41PostingsFormat Term dictionary). 
+*   [Term dictionary](xref:Lucene.Net.Codecs.Lucene41.Lucene41PostingsFormat). 
 A dictionary containing all of the terms used in all of the
 indexed fields of all of the documents. The dictionary also contains the number
 of documents which contain the term, and pointers to the term's frequency and
 proximity data.
 
-*   [](xref:Lucene.Net.Codecs.Lucene41.Lucene41PostingsFormat Term Frequency data). 
+*   [Term Frequency data](xref:Lucene.Net.Codecs.Lucene41.Lucene41PostingsFormat). 
 For each term in the dictionary, the numbers of all the
 documents that contain that term, and the frequency of the term in that
 document, unless frequencies are omitted (IndexOptions.DOCS_ONLY)
 
-*   [](xref:Lucene.Net.Codecs.Lucene41.Lucene41PostingsFormat Term Proximity data). 
+*   [Term Proximity data](xref:Lucene.Net.Codecs.Lucene41.Lucene41PostingsFormat). 
 For each term in the dictionary, the positions that the
 term occurs in each document. Note that this will not exist if all fields in
 all documents omit position data.
 
-*   [](xref:Lucene.Net.Codecs.Lucene40.Lucene40NormsFormat Normalization factors). 
+*   [Normalization factors](xref:Lucene.Net.Codecs.Lucene40.Lucene40NormsFormat). 
 For each field in each document, a value is stored
 that is multiplied into the score for hits on that field.
 
-*   [](xref:Lucene.Net.Codecs.Lucene40.Lucene40TermVectorsFormat Term Vectors). 
+*   [Term Vectors](xref:Lucene.Net.Codecs.Lucene40.Lucene40TermVectorsFormat). 
 For each field in each document, the term vector (sometimes
 called document vector) may be stored. A term vector consists of term text and
 term frequency. To add Term Vectors to your index see the 
-[](xref:Lucene.Net.Documents.Field Field) constructors
+[Field](xref:Lucene.Net.Documents.Field) constructors
 
-*   [](xref:Lucene.Net.Codecs.Lucene40.Lucene40DocValuesFormat Per-document values). 
+*   [Per-document values](xref:Lucene.Net.Codecs.Lucene40.Lucene40DocValuesFormat). 
 Like stored values, these are also keyed by document
 number, but are generally intended to be loaded into main memory for fast
 access. Whereas stored values are generally intended for summary results from
 searches, per-document values are useful for things like scoring factors.
 
-*   [](xref:Lucene.Net.Codecs.Lucene40.Lucene40LiveDocsFormat Deleted documents). 
+*   [Deleted documents](xref:Lucene.Net.Codecs.Lucene40.Lucene40LiveDocsFormat). 
 An optional file indicating which documents are deleted.
 
 Details on each of these are provided in their linked pages.
@@ -185,7 +190,7 @@ The following table summarizes the names and extensions of the files in Lucene:
 <th>Brief Description</th>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Index.SegmentInfos Segments File)</td>
+<td>[Segments File](xref:Lucene.Net.Index.SegmentInfos)</td>
 <td>segments.gen, segments_N</td>
 <td>Stores information about a commit point</td>
 </tr>
@@ -196,83 +201,83 @@ The following table summarizes the names and extensions of the files in Lucene:
 file.</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Codecs.Lucene40.Lucene40SegmentInfoFormat Segment Info)</td>
+<td>[Segment Info](xref:Lucene.Net.Codecs.Lucene40.Lucene40SegmentInfoFormat)</td>
 <td>.si</td>
 <td>Stores metadata about a segment</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Store.CompoundFileDirectory Compound File)</td>
+<td>[Compound File](xref:Lucene.Net.Store.CompoundFileDirectory)</td>
 <td>.cfs, .cfe</td>
 <td>An optional "virtual" file consisting of all the other index files for
 systems that frequently run out of file handles.</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Codecs.Lucene40.Lucene40FieldInfosFormat Fields)</td>
+<td>[Fields](xref:Lucene.Net.Codecs.Lucene40.Lucene40FieldInfosFormat)</td>
 <td>.fnm</td>
 <td>Stores information about the fields</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Codecs.Lucene41.Lucene41StoredFieldsFormat Field Index)</td>
+<td>[Field Index](xref:Lucene.Net.Codecs.Lucene41.Lucene41StoredFieldsFormat)</td>
 <td>.fdx</td>
 <td>Contains pointers to field data</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Codecs.Lucene41.Lucene41StoredFieldsFormat Field Data)</td>
+<td>[Field Data](xref:Lucene.Net.Codecs.Lucene41.Lucene41StoredFieldsFormat)</td>
 <td>.fdt</td>
 <td>The stored fields for documents</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Codecs.Lucene41.Lucene41PostingsFormat Term Dictionary)</td>
+<td>[Term Dictionary](xref:Lucene.Net.Codecs.Lucene41.Lucene41PostingsFormat)</td>
 <td>.tim</td>
 <td>The term dictionary, stores term info</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Codecs.Lucene41.Lucene41PostingsFormat Term Index)</td>
+<td>[Term Index](xref:Lucene.Net.Codecs.Lucene41.Lucene41PostingsFormat)</td>
 <td>.tip</td>
 <td>The index into the Term Dictionary</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Codecs.Lucene41.Lucene41PostingsFormat Frequencies)</td>
+<td>[Frequencies](xref:Lucene.Net.Codecs.Lucene41.Lucene41PostingsFormat)</td>
 <td>.doc</td>
 <td>Contains the list of docs which contain each term along with frequency</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Codecs.Lucene41.Lucene41PostingsFormat Positions)</td>
+<td>[Positions](xref:Lucene.Net.Codecs.Lucene41.Lucene41PostingsFormat)</td>
 <td>.pos</td>
 <td>Stores position information about where a term occurs in the index</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Codecs.Lucene41.Lucene41PostingsFormat Payloads)</td>
+<td>[Payloads](xref:Lucene.Net.Codecs.Lucene41.Lucene41PostingsFormat)</td>
 <td>.pay</td>
 <td>Stores additional per-position metadata information such as character offsets and user payloads</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Codecs.Lucene40.Lucene40NormsFormat Norms)</td>
+<td>[Norms](xref:Lucene.Net.Codecs.Lucene40.Lucene40NormsFormat)</td>
 <td>.nrm.cfs, .nrm.cfe</td>
 <td>Encodes length and boost factors for docs and fields</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Codecs.Lucene40.Lucene40DocValuesFormat Per-Document Values)</td>
+<td>[Per-Document Values](xref:Lucene.Net.Codecs.Lucene40.Lucene40DocValuesFormat)</td>
 <td>.dv.cfs, .dv.cfe</td>
 <td>Encodes additional scoring factors or other per-document information.</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Codecs.Lucene40.Lucene40TermVectorsFormat Term Vector Index)</td>
+<td>[Term Vector Index](xref:Lucene.Net.Codecs.Lucene40.Lucene40TermVectorsFormat)</td>
 <td>.tvx</td>
 <td>Stores offset into the document data file</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Codecs.Lucene40.Lucene40TermVectorsFormat Term Vector Documents)</td>
+<td>[Term Vector Documents](xref:Lucene.Net.Codecs.Lucene40.Lucene40TermVectorsFormat)</td>
 <td>.tvd</td>
 <td>Contains information about each document that has term vectors</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Codecs.Lucene40.Lucene40TermVectorsFormat Term Vector Fields)</td>
+<td>[Term Vector Fields](xref:Lucene.Net.Codecs.Lucene40.Lucene40TermVectorsFormat)</td>
 <td>.tvf</td>
 <td>The field level info about term vectors</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Codecs.Lucene40.Lucene40LiveDocsFormat Deleted Documents)</td>
+<td>[Deleted Documents](xref:Lucene.Net.Codecs.Lucene40.Lucene40LiveDocsFormat)</td>
 <td>.del</td>
 <td>Info about what documents are deleted</td>
 </tr>
@@ -326,9 +331,9 @@ file, previously they were stored in text format only.
 *   In version 3.4, fields can omit position data while still indexing term
 frequencies.
 *   In version 4.0, the format of the inverted index became extensible via
-the [](xref:Lucene.Net.Codecs.Codec Codec) api. Fast per-document storage
+the [Codec](xref:Lucene.Net.Codecs.Codec) api. Fast per-document storage
 ({@code DocValues}) was introduced. Normalization factors need no longer be a 
-single byte, they can be any [](xref:Lucene.Net.Index.NumericDocValues NumericDocValues). 
+single byte, they can be any [NumericDocValues](xref:Lucene.Net.Index.NumericDocValues). 
 Terms need not be unicode strings, they can be any byte sequence. Term offsets 
 can optionally be indexed into the postings lists. Payloads can be stored in the 
 term vectors.
@@ -341,6 +346,6 @@ the term dictionary. Stored fields are compressed by default.
 
 <div>
 
-Lucene uses a Java `int` to refer to document numbers, and the index file format uses an `Int32` on-disk to store document numbers. This is a limitation of both the index file format and the current implementation. Eventually these should be replaced with either `UInt64` values, or better yet, [](xref:Lucene.Net.Store.DataOutput.WriteVInt VInt) values which have no limit.
+Lucene uses a Java `int` to refer to document numbers, and the index file format uses an `Int32` on-disk to store document numbers. This is a limitation of both the index file format and the current implementation. Eventually these should be replaced with either `UInt64` values, or better yet, [VInt](xref:Lucene.Net.Store.DataOutput#methods) values which have no limit.
 
 </div>
\ No newline at end of file
diff --git a/src/Lucene.Net/Codecs/Lucene42/package.md b/src/Lucene.Net/Codecs/Lucene42/package.md
index c3679a3..015b522 100644
--- a/src/Lucene.Net/Codecs/Lucene42/package.md
+++ b/src/Lucene.Net/Codecs/Lucene42/package.md
@@ -1,4 +1,9 @@
-
+---
+uid: Lucene.Net.Codecs.Lucene42
+summary: *content
+---
+
+
 <!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
@@ -75,7 +80,7 @@ In Lucene, fields may be *stored*, in which case their text is stored in the ind
 
 The text of a field may be *tokenized* into terms to be indexed, or the text of a field may be used literally as a term to be indexed. Most fields are tokenized, but sometimes it is useful for certain identifier fields to be indexed literally.
 
-See the [](xref:Lucene.Net.Documents.Field Field) java docs for more information on Fields.
+See the [Field](xref:Lucene.Net.Documents.Field) java docs for more information on Fields.
 
 ### Segments
 
@@ -108,52 +113,52 @@ When documents are deleted, gaps are created in the numbering. These are eventua
 
 Each segment index maintains the following:
 
-*   [](xref:Lucene.Net.Codecs.Lucene40.Lucene40SegmentInfoFormat Segment info).
+*   [Segment info](xref:Lucene.Net.Codecs.Lucene40.Lucene40SegmentInfoFormat).
    This contains metadata about a segment, such as the number of documents,
   what files it uses, and so on.
 
-*   [](xref:Lucene.Net.Codecs.Lucene42.Lucene42FieldInfosFormat Field names). 
+*   [Field names](xref:Lucene.Net.Codecs.Lucene42.Lucene42FieldInfosFormat). 
    This contains the set of field names used in the index.
 
-*   [](xref:Lucene.Net.Codecs.Lucene41.Lucene41StoredFieldsFormat Stored Field values). 
+*   [Stored Field values](xref:Lucene.Net.Codecs.Lucene41.Lucene41StoredFieldsFormat). 
 This contains, for each document, a list of attribute-value pairs, where the attributes 
 are field names. These are used to store auxiliary information about the document, such as 
 its title, url, or an identifier to access a database. The set of stored fields are what is 
 returned for each hit when searching. This is keyed by document number.
 
-*   [](xref:Lucene.Net.Codecs.Lucene41.Lucene41PostingsFormat Term dictionary). 
+*   [Term dictionary](xref:Lucene.Net.Codecs.Lucene41.Lucene41PostingsFormat). 
 A dictionary containing all of the terms used in all of the
 indexed fields of all of the documents. The dictionary also contains the number
 of documents which contain the term, and pointers to the term's frequency and
 proximity data.
 
-*   [](xref:Lucene.Net.Codecs.Lucene41.Lucene41PostingsFormat Term Frequency data). 
+*   [Term Frequency data](xref:Lucene.Net.Codecs.Lucene41.Lucene41PostingsFormat). 
 For each term in the dictionary, the numbers of all the
 documents that contain that term, and the frequency of the term in that
 document, unless frequencies are omitted (IndexOptions.DOCS_ONLY)
 
-*   [](xref:Lucene.Net.Codecs.Lucene41.Lucene41PostingsFormat Term Proximity data). 
+*   [Term Proximity data](xref:Lucene.Net.Codecs.Lucene41.Lucene41PostingsFormat). 
 For each term in the dictionary, the positions that the
 term occurs in each document. Note that this will not exist if all fields in
 all documents omit position data.
 
-*   [](xref:Lucene.Net.Codecs.Lucene42.Lucene42NormsFormat Normalization factors). 
+*   [Normalization factors](xref:Lucene.Net.Codecs.Lucene42.Lucene42NormsFormat). 
 For each field in each document, a value is stored
 that is multiplied into the score for hits on that field.
 
-*   [](xref:Lucene.Net.Codecs.Lucene42.Lucene42TermVectorsFormat Term Vectors). 
+*   [Term Vectors](xref:Lucene.Net.Codecs.Lucene42.Lucene42TermVectorsFormat). 
 For each field in each document, the term vector (sometimes
 called document vector) may be stored. A term vector consists of term text and
 term frequency. To add Term Vectors to your index see the 
-[](xref:Lucene.Net.Documents.Field Field) constructors
+[Field](xref:Lucene.Net.Documents.Field) constructors
 
-*   [](xref:Lucene.Net.Codecs.Lucene42.Lucene42DocValuesFormat Per-document values). 
+*   [Per-document values](xref:Lucene.Net.Codecs.Lucene42.Lucene42DocValuesFormat). 
 Like stored values, these are also keyed by document
 number, but are generally intended to be loaded into main memory for fast
 access. Whereas stored values are generally intended for summary results from
 searches, per-document values are useful for things like scoring factors.
 
-*   [](xref:Lucene.Net.Codecs.Lucene40.Lucene40LiveDocsFormat Deleted documents). 
+*   [Deleted documents](xref:Lucene.Net.Codecs.Lucene40.Lucene40LiveDocsFormat). 
 An optional file indicating which documents are deleted.
 
 Details on each of these are provided in their linked pages.
@@ -185,7 +190,7 @@ The following table summarizes the names and extensions of the files in Lucene:
 <th>Brief Description</th>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Index.SegmentInfos Segments File)</td>
+<td>[Segments File](xref:Lucene.Net.Index.SegmentInfos)</td>
 <td>segments.gen, segments_N</td>
 <td>Stores information about a commit point</td>
 </tr>
@@ -196,83 +201,83 @@ The following table summarizes the names and extensions of the files in Lucene:
 file.</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Codecs.Lucene40.Lucene40SegmentInfoFormat Segment Info)</td>
+<td>[Segment Info](xref:Lucene.Net.Codecs.Lucene40.Lucene40SegmentInfoFormat)</td>
 <td>.si</td>
 <td>Stores metadata about a segment</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Store.CompoundFileDirectory Compound File)</td>
+<td>[Compound File](xref:Lucene.Net.Store.CompoundFileDirectory)</td>
 <td>.cfs, .cfe</td>
 <td>An optional "virtual" file consisting of all the other index files for
 systems that frequently run out of file handles.</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Codecs.Lucene42.Lucene42FieldInfosFormat Fields)</td>
+<td>[Fields](xref:Lucene.Net.Codecs.Lucene42.Lucene42FieldInfosFormat)</td>
 <td>.fnm</td>
 <td>Stores information about the fields</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Codecs.Lucene41.Lucene41StoredFieldsFormat Field Index)</td>
+<td>[Field Index](xref:Lucene.Net.Codecs.Lucene41.Lucene41StoredFieldsFormat)</td>
 <td>.fdx</td>
 <td>Contains pointers to field data</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Codecs.Lucene41.Lucene41StoredFieldsFormat Field Data)</td>
+<td>[Field Data](xref:Lucene.Net.Codecs.Lucene41.Lucene41StoredFieldsFormat)</td>
 <td>.fdt</td>
 <td>The stored fields for documents</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Codecs.Lucene41.Lucene41PostingsFormat Term Dictionary)</td>
+<td>[Term Dictionary](xref:Lucene.Net.Codecs.Lucene41.Lucene41PostingsFormat)</td>
 <td>.tim</td>
 <td>The term dictionary, stores term info</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Codecs.Lucene41.Lucene41PostingsFormat Term Index)</td>
+<td>[Term Index](xref:Lucene.Net.Codecs.Lucene41.Lucene41PostingsFormat)</td>
 <td>.tip</td>
 <td>The index into the Term Dictionary</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Codecs.Lucene41.Lucene41PostingsFormat Frequencies)</td>
+<td>[Frequencies](xref:Lucene.Net.Codecs.Lucene41.Lucene41PostingsFormat)</td>
 <td>.doc</td>
 <td>Contains the list of docs which contain each term along with frequency</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Codecs.Lucene41.Lucene41PostingsFormat Positions)</td>
+<td>[Positions](xref:Lucene.Net.Codecs.Lucene41.Lucene41PostingsFormat)</td>
 <td>.pos</td>
 <td>Stores position information about where a term occurs in the index</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Codecs.Lucene41.Lucene41PostingsFormat Payloads)</td>
+<td>[Payloads](xref:Lucene.Net.Codecs.Lucene41.Lucene41PostingsFormat)</td>
 <td>.pay</td>
 <td>Stores additional per-position metadata information such as character offsets and user payloads</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Codecs.Lucene42.Lucene42NormsFormat Norms)</td>
+<td>[Norms](xref:Lucene.Net.Codecs.Lucene42.Lucene42NormsFormat)</td>
 <td>.nvd, .nvm</td>
 <td>Encodes length and boost factors for docs and fields</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Codecs.Lucene42.Lucene42DocValuesFormat Per-Document Values)</td>
+<td>[Per-Document Values](xref:Lucene.Net.Codecs.Lucene42.Lucene42DocValuesFormat)</td>
 <td>.dvd, .dvm</td>
 <td>Encodes additional scoring factors or other per-document information.</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Codecs.Lucene42.Lucene42TermVectorsFormat Term Vector Index)</td>
+<td>[Term Vector Index](xref:Lucene.Net.Codecs.Lucene42.Lucene42TermVectorsFormat)</td>
 <td>.tvx</td>
 <td>Stores offset into the document data file</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Codecs.Lucene42.Lucene42TermVectorsFormat Term Vector Documents)</td>
+<td>[Term Vector Documents](xref:Lucene.Net.Codecs.Lucene42.Lucene42TermVectorsFormat)</td>
 <td>.tvd</td>
 <td>Contains information about each document that has term vectors</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Codecs.Lucene42.Lucene42TermVectorsFormat Term Vector Fields)</td>
+<td>[Term Vector Fields](xref:Lucene.Net.Codecs.Lucene42.Lucene42TermVectorsFormat)</td>
 <td>.tvf</td>
 <td>The field level info about term vectors</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Codecs.Lucene40.Lucene40LiveDocsFormat Deleted Documents)</td>
+<td>[Deleted Documents](xref:Lucene.Net.Codecs.Lucene40.Lucene40LiveDocsFormat)</td>
 <td>.del</td>
<td>Info about which documents are deleted</td>
 </tr>
@@ -326,9 +331,9 @@ file, previously they were stored in text format only.
 *   In version 3.4, fields can omit position data while still indexing term
 frequencies.
 *   In version 4.0, the format of the inverted index became extensible via
-the [](xref:Lucene.Net.Codecs.Codec Codec) api. Fast per-document storage
+the [Codec](xref:Lucene.Net.Codecs.Codec) API. Fast per-document storage
 ({@code DocValues}) was introduced. Normalization factors need no longer be a 
-single byte, they can be any [](xref:Lucene.Net.Index.NumericDocValues NumericDocValues). 
+single byte; they can be any [NumericDocValues](xref:Lucene.Net.Index.NumericDocValues). 
Terms need not be Unicode strings; they can be any byte sequence. Term offsets 
 can optionally be indexed into the postings lists. Payloads can be stored in the 
 term vectors.
@@ -344,6 +349,6 @@ on multi-valued fields.
 
 <div>
 
-Lucene uses a Java `int` to refer to document numbers, and the index file format uses an `Int32` on-disk to store document numbers. This is a limitation of both the index file format and the current implementation. Eventually these should be replaced with either `UInt64` values, or better yet, [](xref:Lucene.Net.Store.DataOutput.WriteVInt VInt) values which have no limit.
+Lucene uses a Java `int` to refer to document numbers, and the index file format uses an `Int32` on-disk to store document numbers. This is a limitation of both the index file format and the current implementation. Eventually these should be replaced with either `UInt64` values, or better yet, [VInt](xref:Lucene.Net.Store.DataOutput#methods) values which have no limit.
 
 </div>
\ No newline at end of file
diff --git a/src/Lucene.Net/Codecs/Lucene45/package.md b/src/Lucene.Net/Codecs/Lucene45/package.md
index 7ca94f4..6d167be 100644
--- a/src/Lucene.Net/Codecs/Lucene45/package.md
+++ b/src/Lucene.Net/Codecs/Lucene45/package.md
@@ -1,4 +1,9 @@
-
+---
+uid: Lucene.Net.Codecs.Lucene45
+summary: *content
+---
+
+
 <!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
@@ -75,7 +80,7 @@ In Lucene, fields may be *stored*, in which case their text is stored in the ind
 
 The text of a field may be *tokenized* into terms to be indexed, or the text of a field may be used literally as a term to be indexed. Most fields are tokenized, but sometimes it is useful for certain identifier fields to be indexed literally.
 
-See the [](xref:Lucene.Net.Documents.Field Field) java docs for more information on Fields.
+See the [Field](xref:Lucene.Net.Documents.Field) java docs for more information on Fields.
 
 ### Segments
 
@@ -108,52 +113,52 @@ When documents are deleted, gaps are created in the numbering. These are eventua
 
 Each segment index maintains the following:
 
-*   [](xref:Lucene.Net.Codecs.Lucene40.Lucene40SegmentInfoFormat Segment info).
+*   [Segment info](xref:Lucene.Net.Codecs.Lucene40.Lucene40SegmentInfoFormat).
   This contains metadata about a segment, such as the number of documents
   and what files it uses.
 
-*   [](xref:Lucene.Net.Codecs.Lucene42.Lucene42FieldInfosFormat Field names). 
+*   [Field names](xref:Lucene.Net.Codecs.Lucene42.Lucene42FieldInfosFormat). 
    This contains the set of field names used in the index.
 
-*   [](xref:Lucene.Net.Codecs.Lucene41.Lucene41StoredFieldsFormat Stored Field values). 
+*   [Stored Field values](xref:Lucene.Net.Codecs.Lucene41.Lucene41StoredFieldsFormat). 
This contains, for each document, a list of attribute-value pairs, where the attributes 
are field names. These are used to store auxiliary information about the document, such as 
its title, URL, or an identifier to access a database. The set of stored fields is what is 
returned for each hit when searching. This is keyed by document number.
 
-*   [](xref:Lucene.Net.Codecs.Lucene41.Lucene41PostingsFormat Term dictionary). 
+*   [Term dictionary](xref:Lucene.Net.Codecs.Lucene41.Lucene41PostingsFormat). 
 A dictionary containing all of the terms used in all of the
 indexed fields of all of the documents. The dictionary also contains the number
 of documents which contain the term, and pointers to the term's frequency and
 proximity data.
 
-*   [](xref:Lucene.Net.Codecs.Lucene41.Lucene41PostingsFormat Term Frequency data). 
+*   [Term Frequency data](xref:Lucene.Net.Codecs.Lucene41.Lucene41PostingsFormat). 
 For each term in the dictionary, the numbers of all the
 documents that contain that term, and the frequency of the term in that
document, unless frequencies are omitted (IndexOptions.DOCS_ONLY).
 
-*   [](xref:Lucene.Net.Codecs.Lucene41.Lucene41PostingsFormat Term Proximity data). 
+*   [Term Proximity data](xref:Lucene.Net.Codecs.Lucene41.Lucene41PostingsFormat). 
 For each term in the dictionary, the positions that the
 term occurs in each document. Note that this will not exist if all fields in
 all documents omit position data.
 
-*   [](xref:Lucene.Net.Codecs.Lucene42.Lucene42NormsFormat Normalization factors). 
+*   [Normalization factors](xref:Lucene.Net.Codecs.Lucene42.Lucene42NormsFormat). 
 For each field in each document, a value is stored
 that is multiplied into the score for hits on that field.
 
-*   [](xref:Lucene.Net.Codecs.Lucene42.Lucene42TermVectorsFormat Term Vectors). 
+*   [Term Vectors](xref:Lucene.Net.Codecs.Lucene42.Lucene42TermVectorsFormat). 
 For each field in each document, the term vector (sometimes
 called document vector) may be stored. A term vector consists of term text and
term frequency. To add Term Vectors to your index, see the 
-[](xref:Lucene.Net.Documents.Field Field) constructors
+[Field](xref:Lucene.Net.Documents.Field) constructors.
 
-*   [](xref:Lucene.Net.Codecs.Lucene45.Lucene45DocValuesFormat Per-document values). 
+*   [Per-document values](xref:Lucene.Net.Codecs.Lucene45.Lucene45DocValuesFormat). 
 Like stored values, these are also keyed by document
 number, but are generally intended to be loaded into main memory for fast
 access. Whereas stored values are generally intended for summary results from
 searches, per-document values are useful for things like scoring factors.
 
-*   [](xref:Lucene.Net.Codecs.Lucene40.Lucene40LiveDocsFormat Deleted documents). 
+*   [Deleted documents](xref:Lucene.Net.Codecs.Lucene40.Lucene40LiveDocsFormat). 
 An optional file indicating which documents are deleted.
 
 Details on each of these are provided in their linked pages.
@@ -185,7 +190,7 @@ The following table summarizes the names and extensions of the files in Lucene:
 <th>Brief Description</th>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Index.SegmentInfos Segments File)</td>
+<td>[Segments File](xref:Lucene.Net.Index.SegmentInfos)</td>
 <td>segments.gen, segments_N</td>
 <td>Stores information about a commit point</td>
 </tr>
@@ -196,83 +201,83 @@ The following table summarizes the names and extensions of the files in Lucene:
 file.</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Codecs.Lucene40.Lucene40SegmentInfoFormat Segment Info)</td>
+<td>[Segment Info](xref:Lucene.Net.Codecs.Lucene40.Lucene40SegmentInfoFormat)</td>
 <td>.si</td>
 <td>Stores metadata about a segment</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Store.CompoundFileDirectory Compound File)</td>
+<td>[Compound File](xref:Lucene.Net.Store.CompoundFileDirectory)</td>
 <td>.cfs, .cfe</td>
 <td>An optional "virtual" file consisting of all the other index files for
 systems that frequently run out of file handles.</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Codecs.Lucene42.Lucene42FieldInfosFormat Fields)</td>
+<td>[Fields](xref:Lucene.Net.Codecs.Lucene42.Lucene42FieldInfosFormat)</td>
 <td>.fnm</td>
 <td>Stores information about the fields</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Codecs.Lucene41.Lucene41StoredFieldsFormat Field Index)</td>
+<td>[Field Index](xref:Lucene.Net.Codecs.Lucene41.Lucene41StoredFieldsFormat)</td>
 <td>.fdx</td>
 <td>Contains pointers to field data</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Codecs.Lucene41.Lucene41StoredFieldsFormat Field Data)</td>
+<td>[Field Data](xref:Lucene.Net.Codecs.Lucene41.Lucene41StoredFieldsFormat)</td>
 <td>.fdt</td>
 <td>The stored fields for documents</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Codecs.Lucene41.Lucene41PostingsFormat Term Dictionary)</td>
+<td>[Term Dictionary](xref:Lucene.Net.Codecs.Lucene41.Lucene41PostingsFormat)</td>
 <td>.tim</td>
 <td>The term dictionary, stores term info</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Codecs.Lucene41.Lucene41PostingsFormat Term Index)</td>
+<td>[Term Index](xref:Lucene.Net.Codecs.Lucene41.Lucene41PostingsFormat)</td>
 <td>.tip</td>
 <td>The index into the Term Dictionary</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Codecs.Lucene41.Lucene41PostingsFormat Frequencies)</td>
+<td>[Frequencies](xref:Lucene.Net.Codecs.Lucene41.Lucene41PostingsFormat)</td>
 <td>.doc</td>
 <td>Contains the list of docs which contain each term along with frequency</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Codecs.Lucene41.Lucene41PostingsFormat Positions)</td>
+<td>[Positions](xref:Lucene.Net.Codecs.Lucene41.Lucene41PostingsFormat)</td>
 <td>.pos</td>
 <td>Stores position information about where a term occurs in the index</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Codecs.Lucene41.Lucene41PostingsFormat Payloads)</td>
+<td>[Payloads](xref:Lucene.Net.Codecs.Lucene41.Lucene41PostingsFormat)</td>
 <td>.pay</td>
 <td>Stores additional per-position metadata information such as character offsets and user payloads</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Codecs.Lucene42.Lucene42NormsFormat Norms)</td>
+<td>[Norms](xref:Lucene.Net.Codecs.Lucene42.Lucene42NormsFormat)</td>
 <td>.nvd, .nvm</td>
 <td>Encodes length and boost factors for docs and fields</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Codecs.Lucene45.Lucene45DocValuesFormat Per-Document Values)</td>
+<td>[Per-Document Values](xref:Lucene.Net.Codecs.Lucene45.Lucene45DocValuesFormat)</td>
 <td>.dvd, .dvm</td>
 <td>Encodes additional scoring factors or other per-document information.</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Codecs.Lucene42.Lucene42TermVectorsFormat Term Vector Index)</td>
+<td>[Term Vector Index](xref:Lucene.Net.Codecs.Lucene42.Lucene42TermVectorsFormat)</td>
 <td>.tvx</td>
 <td>Stores offset into the document data file</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Codecs.Lucene42.Lucene42TermVectorsFormat Term Vector Documents)</td>
+<td>[Term Vector Documents](xref:Lucene.Net.Codecs.Lucene42.Lucene42TermVectorsFormat)</td>
 <td>.tvd</td>
 <td>Contains information about each document that has term vectors</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Codecs.Lucene42.Lucene42TermVectorsFormat Term Vector Fields)</td>
+<td>[Term Vector Fields](xref:Lucene.Net.Codecs.Lucene42.Lucene42TermVectorsFormat)</td>
 <td>.tvf</td>
 <td>The field level info about term vectors</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Codecs.Lucene40.Lucene40LiveDocsFormat Deleted Documents)</td>
+<td>[Deleted Documents](xref:Lucene.Net.Codecs.Lucene40.Lucene40LiveDocsFormat)</td>
 <td>.del</td>
<td>Info about which documents are deleted</td>
 </tr>
@@ -326,9 +331,9 @@ file, previously they were stored in text format only.
 *   In version 3.4, fields can omit position data while still indexing term
 frequencies.
 *   In version 4.0, the format of the inverted index became extensible via
-the [](xref:Lucene.Net.Codecs.Codec Codec) api. Fast per-document storage
+the [Codec](xref:Lucene.Net.Codecs.Codec) API. Fast per-document storage
 ({@code DocValues}) was introduced. Normalization factors need no longer be a 
-single byte, they can be any [](xref:Lucene.Net.Index.NumericDocValues NumericDocValues). 
+single byte; they can be any [NumericDocValues](xref:Lucene.Net.Index.NumericDocValues). 
Terms need not be Unicode strings; they can be any byte sequence. Term offsets 
 can optionally be indexed into the postings lists. Payloads can be stored in the 
 term vectors.
@@ -345,6 +350,6 @@ on multi-valued fields.
 
 <div>
 
-Lucene uses a Java `int` to refer to document numbers, and the index file format uses an `Int32` on-disk to store document numbers. This is a limitation of both the index file format and the current implementation. Eventually these should be replaced with either `UInt64` values, or better yet, [](xref:Lucene.Net.Store.DataOutput.WriteVInt VInt) values which have no limit.
+Lucene uses a Java `int` to refer to document numbers, and the index file format uses an `Int32` on-disk to store document numbers. This is a limitation of both the index file format and the current implementation. Eventually these should be replaced with either `UInt64` values, or better yet, [VInt](xref:Lucene.Net.Store.DataOutput#methods) values which have no limit.
 
 </div>
\ No newline at end of file
diff --git a/src/Lucene.Net/Codecs/Lucene46/package.md b/src/Lucene.Net/Codecs/Lucene46/package.md
index 8923857..754f86d 100644
--- a/src/Lucene.Net/Codecs/Lucene46/package.md
+++ b/src/Lucene.Net/Codecs/Lucene46/package.md
@@ -1,4 +1,9 @@
-
+---
+uid: Lucene.Net.Codecs.Lucene46
+summary: *content
+---
+
+
 <!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
@@ -75,7 +80,7 @@ In Lucene, fields may be *stored*, in which case their text is stored in the ind
 
 The text of a field may be *tokenized* into terms to be indexed, or the text of a field may be used literally as a term to be indexed. Most fields are tokenized, but sometimes it is useful for certain identifier fields to be indexed literally.
 
-See the [](xref:Lucene.Net.Documents.Field Field) java docs for more information on Fields.
+See the [Field](xref:Lucene.Net.Documents.Field) java docs for more information on Fields.
 
 ### Segments
 
@@ -108,52 +113,52 @@ When documents are deleted, gaps are created in the numbering. These are eventua
 
 Each segment index maintains the following:
 
-*   [](xref:Lucene.Net.Codecs.Lucene46.Lucene46SegmentInfoFormat Segment info).
+*   [Segment info](xref:Lucene.Net.Codecs.Lucene46.Lucene46SegmentInfoFormat).
   This contains metadata about a segment, such as the number of documents
   and what files it uses.
 
-*   [](xref:Lucene.Net.Codecs.Lucene46.Lucene46FieldInfosFormat Field names). 
+*   [Field names](xref:Lucene.Net.Codecs.Lucene46.Lucene46FieldInfosFormat). 
    This contains the set of field names used in the index.
 
-*   [](xref:Lucene.Net.Codecs.Lucene41.Lucene41StoredFieldsFormat Stored Field values). 
+*   [Stored Field values](xref:Lucene.Net.Codecs.Lucene41.Lucene41StoredFieldsFormat). 
This contains, for each document, a list of attribute-value pairs, where the attributes 
are field names. These are used to store auxiliary information about the document, such as 
its title, URL, or an identifier to access a database. The set of stored fields is what is 
returned for each hit when searching. This is keyed by document number.
 
-*   [](xref:Lucene.Net.Codecs.Lucene41.Lucene41PostingsFormat Term dictionary). 
+*   [Term dictionary](xref:Lucene.Net.Codecs.Lucene41.Lucene41PostingsFormat). 
 A dictionary containing all of the terms used in all of the
 indexed fields of all of the documents. The dictionary also contains the number
 of documents which contain the term, and pointers to the term's frequency and
 proximity data.
 
-*   [](xref:Lucene.Net.Codecs.Lucene41.Lucene41PostingsFormat Term Frequency data). 
+*   [Term Frequency data](xref:Lucene.Net.Codecs.Lucene41.Lucene41PostingsFormat). 
 For each term in the dictionary, the numbers of all the
 documents that contain that term, and the frequency of the term in that
document, unless frequencies are omitted (IndexOptions.DOCS_ONLY).
 
-*   [](xref:Lucene.Net.Codecs.Lucene41.Lucene41PostingsFormat Term Proximity data). 
+*   [Term Proximity data](xref:Lucene.Net.Codecs.Lucene41.Lucene41PostingsFormat). 
 For each term in the dictionary, the positions that the
 term occurs in each document. Note that this will not exist if all fields in
 all documents omit position data.
 
-*   [](xref:Lucene.Net.Codecs.Lucene42.Lucene42NormsFormat Normalization factors). 
+*   [Normalization factors](xref:Lucene.Net.Codecs.Lucene42.Lucene42NormsFormat). 
 For each field in each document, a value is stored
 that is multiplied into the score for hits on that field.
 
-*   [](xref:Lucene.Net.Codecs.Lucene42.Lucene42TermVectorsFormat Term Vectors). 
+*   [Term Vectors](xref:Lucene.Net.Codecs.Lucene42.Lucene42TermVectorsFormat). 
 For each field in each document, the term vector (sometimes
 called document vector) may be stored. A term vector consists of term text and
term frequency. To add Term Vectors to your index, see the 
-[](xref:Lucene.Net.Documents.Field Field) constructors
+[Field](xref:Lucene.Net.Documents.Field) constructors.
 
-*   [](xref:Lucene.Net.Codecs.Lucene42.Lucene42DocValuesFormat Per-document values). 
+*   [Per-document values](xref:Lucene.Net.Codecs.Lucene42.Lucene42DocValuesFormat). 
 Like stored values, these are also keyed by document
 number, but are generally intended to be loaded into main memory for fast
 access. Whereas stored values are generally intended for summary results from
 searches, per-document values are useful for things like scoring factors.
 
-*   [](xref:Lucene.Net.Codecs.Lucene40.Lucene40LiveDocsFormat Deleted documents). 
+*   [Deleted documents](xref:Lucene.Net.Codecs.Lucene40.Lucene40LiveDocsFormat). 
 An optional file indicating which documents are deleted.
 
 Details on each of these are provided in their linked pages.
@@ -185,7 +190,7 @@ The following table summarizes the names and extensions of the files in Lucene:
 <th>Brief Description</th>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Index.SegmentInfos Segments File)</td>
+<td>[Segments File](xref:Lucene.Net.Index.SegmentInfos)</td>
 <td>segments.gen, segments_N</td>
 <td>Stores information about a commit point</td>
 </tr>
@@ -196,83 +201,83 @@ The following table summarizes the names and extensions of the files in Lucene:
 file.</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Codecs.Lucene40.Lucene40SegmentInfoFormat Segment Info)</td>
+<td>[Segment Info](xref:Lucene.Net.Codecs.Lucene40.Lucene40SegmentInfoFormat)</td>
 <td>.si</td>
 <td>Stores metadata about a segment</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Store.CompoundFileDirectory Compound File)</td>
+<td>[Compound File](xref:Lucene.Net.Store.CompoundFileDirectory)</td>
 <td>.cfs, .cfe</td>
 <td>An optional "virtual" file consisting of all the other index files for
 systems that frequently run out of file handles.</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Codecs.Lucene46.Lucene46FieldInfosFormat Fields)</td>
+<td>[Fields](xref:Lucene.Net.Codecs.Lucene46.Lucene46FieldInfosFormat)</td>
 <td>.fnm</td>
 <td>Stores information about the fields</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Codecs.Lucene41.Lucene41StoredFieldsFormat Field Index)</td>
+<td>[Field Index](xref:Lucene.Net.Codecs.Lucene41.Lucene41StoredFieldsFormat)</td>
 <td>.fdx</td>
 <td>Contains pointers to field data</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Codecs.Lucene41.Lucene41StoredFieldsFormat Field Data)</td>
+<td>[Field Data](xref:Lucene.Net.Codecs.Lucene41.Lucene41StoredFieldsFormat)</td>
 <td>.fdt</td>
 <td>The stored fields for documents</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Codecs.Lucene41.Lucene41PostingsFormat Term Dictionary)</td>
+<td>[Term Dictionary](xref:Lucene.Net.Codecs.Lucene41.Lucene41PostingsFormat)</td>
 <td>.tim</td>
 <td>The term dictionary, stores term info</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Codecs.Lucene41.Lucene41PostingsFormat Term Index)</td>
+<td>[Term Index](xref:Lucene.Net.Codecs.Lucene41.Lucene41PostingsFormat)</td>
 <td>.tip</td>
 <td>The index into the Term Dictionary</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Codecs.Lucene41.Lucene41PostingsFormat Frequencies)</td>
+<td>[Frequencies](xref:Lucene.Net.Codecs.Lucene41.Lucene41PostingsFormat)</td>
 <td>.doc</td>
 <td>Contains the list of docs which contain each term along with frequency</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Codecs.Lucene41.Lucene41PostingsFormat Positions)</td>
+<td>[Positions](xref:Lucene.Net.Codecs.Lucene41.Lucene41PostingsFormat)</td>
 <td>.pos</td>
 <td>Stores position information about where a term occurs in the index</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Codecs.Lucene41.Lucene41PostingsFormat Payloads)</td>
+<td>[Payloads](xref:Lucene.Net.Codecs.Lucene41.Lucene41PostingsFormat)</td>
 <td>.pay</td>
 <td>Stores additional per-position metadata information such as character offsets and user payloads</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Codecs.Lucene42.Lucene42NormsFormat Norms)</td>
+<td>[Norms](xref:Lucene.Net.Codecs.Lucene42.Lucene42NormsFormat)</td>
 <td>.nvd, .nvm</td>
 <td>Encodes length and boost factors for docs and fields</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Codecs.Lucene42.Lucene42DocValuesFormat Per-Document Values)</td>
+<td>[Per-Document Values](xref:Lucene.Net.Codecs.Lucene42.Lucene42DocValuesFormat)</td>
 <td>.dvd, .dvm</td>
 <td>Encodes additional scoring factors or other per-document information.</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Codecs.Lucene42.Lucene42TermVectorsFormat Term Vector Index)</td>
+<td>[Term Vector Index](xref:Lucene.Net.Codecs.Lucene42.Lucene42TermVectorsFormat)</td>
 <td>.tvx</td>
 <td>Stores offset into the document data file</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Codecs.Lucene42.Lucene42TermVectorsFormat Term Vector Documents)</td>
+<td>[Term Vector Documents](xref:Lucene.Net.Codecs.Lucene42.Lucene42TermVectorsFormat)</td>
 <td>.tvd</td>
 <td>Contains information about each document that has term vectors</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Codecs.Lucene42.Lucene42TermVectorsFormat Term Vector Fields)</td>
+<td>[Term Vector Fields](xref:Lucene.Net.Codecs.Lucene42.Lucene42TermVectorsFormat)</td>
 <td>.tvf</td>
 <td>The field level info about term vectors</td>
 </tr>
 <tr>
-<td>[](xref:Lucene.Net.Codecs.Lucene40.Lucene40LiveDocsFormat Deleted Documents)</td>
+<td>[Deleted Documents](xref:Lucene.Net.Codecs.Lucene40.Lucene40LiveDocsFormat)</td>
 <td>.del</td>
<td>Info about which documents are deleted</td>
 </tr>
@@ -326,9 +331,9 @@ file, previously they were stored in text format only.
 *   In version 3.4, fields can omit position data while still indexing term
 frequencies.
 *   In version 4.0, the format of the inverted index became extensible via
-the [](xref:Lucene.Net.Codecs.Codec Codec) api. Fast per-document storage
+the [Codec](xref:Lucene.Net.Codecs.Codec) API. Fast per-document storage
 ({@code DocValues}) was introduced. Normalization factors need no longer be a 
-single byte, they can be any [](xref:Lucene.Net.Index.NumericDocValues NumericDocValues). 
+single byte; they can be any [NumericDocValues](xref:Lucene.Net.Index.NumericDocValues). 
Terms need not be Unicode strings; they can be any byte sequence. Term offsets 
 can optionally be indexed into the postings lists. Payloads can be stored in the 
 term vectors.
@@ -350,6 +355,6 @@ contain the zlib-crc32 checksum of the file.
 
 <div>
 
-Lucene uses a Java `int` to refer to document numbers, and the index file format uses an `Int32` on-disk to store document numbers. This is a limitation of both the index file format and the current implementation. Eventually these should be replaced with either `UInt64` values, or better yet, [](xref:Lucene.Net.Store.DataOutput.WriteVInt VInt) values which have no limit.
+Lucene uses a Java `int` to refer to document numbers, and the index file format uses an `Int32` on-disk to store document numbers. This is a limitation of both the index file format and the current implementation. Eventually these should be replaced with either `UInt64` values, or better yet, [VInt](xref:Lucene.Net.Store.DataOutput#methods) values which have no limit.
 
 </div>
\ No newline at end of file
diff --git a/src/Lucene.Net/Codecs/package.md b/src/Lucene.Net/Codecs/package.md
index 7128d2f..15f8f70 100644
--- a/src/Lucene.Net/Codecs/package.md
+++ b/src/Lucene.Net/Codecs/package.md
@@ -1,4 +1,9 @@
-
+---
+uid: Lucene.Net.Codecs
+summary: *content
+---
+
+
 <!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
@@ -18,13 +23,13 @@
 
 Codecs API: API for customization of the encoding and structure of the index.
 
- The Codec API allows you to customise the way the following pieces of index information are stored: * Postings lists - see [](xref:Lucene.Net.Codecs.PostingsFormat) * DocValues - see [](xref:Lucene.Net.Codecs.DocValuesFormat) * Stored fields - see [](xref:Lucene.Net.Codecs.StoredFieldsFormat) * Term vectors - see [](xref:Lucene.Net.Codecs.TermVectorsFormat) * FieldInfos - see [](xref:Lucene.Net.Codecs.FieldInfosFormat) * SegmentInfo - see [](xref:Lucene.Net.Codecs.SegmentInfoFormat) * N [...]
+ The Codec API allows you to customise the way the following pieces of index information are stored: * Postings lists - see <xref:Lucene.Net.Codecs.PostingsFormat> * DocValues - see <xref:Lucene.Net.Codecs.DocValuesFormat> * Stored fields - see <xref:Lucene.Net.Codecs.StoredFieldsFormat> * Term vectors - see <xref:Lucene.Net.Codecs.TermVectorsFormat> * FieldInfos - see <xref:Lucene.Net.Codecs.FieldInfosFormat> * SegmentInfo - see <xref:Lucene.Net.Codecs.SegmentInfoFormat> * Norms - see < [...]
 
   For some concrete implementations beyond Lucene's official index format, see
   the [Codecs module]({@docRoot}/../codecs/overview-summary.html).
 
- Codecs are identified by name through the Java Service Provider Interface. To create your own codec, extend [](xref:Lucene.Net.Codecs.Codec) and pass the new codec's name to the super() constructor: public class MyCodec extends Codec { public MyCodec() { super("MyCodecName"); } ... } You will need to register the Codec class so that the {@link java.util.ServiceLoader ServiceLoader} can find it, by including a META-INF/services/org.apache.lucene.codecs.Codec file on your classpath that c [...]
+ Codecs are identified by name through the Java Service Provider Interface. To create your own codec, extend <xref:Lucene.Net.Codecs.Codec> and pass the new codec's name to the super() constructor: public class MyCodec extends Codec { public MyCodec() { super("MyCodecName"); } ... } You will need to register the Codec class so that the {@link java.util.ServiceLoader ServiceLoader} can find it, by including a META-INF/services/org.apache.lucene.codecs.Codec file on your classpath that con [...]
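+
+ As a rough sketch of that pattern in Java (the codec name and the choice of delegate here are illustrative assumptions, not part of this commit):
+
+```java
+import org.apache.lucene.codecs.FilterCodec;
+import org.apache.lucene.codecs.lucene46.Lucene46Codec;
+
+// A hypothetical codec that delegates everything to the default codec.
+// It needs a no-arg constructor so the ServiceLoader can instantiate it,
+// and "MyCodecName" must be listed in a
+// META-INF/services/org.apache.lucene.codecs.Codec file on the classpath.
+public class MyCodec extends FilterCodec {
+    public MyCodec() {
+        super("MyCodecName", new Lucene46Codec());
+    }
+}
+```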
 
- If you just want to customise the [](xref:Lucene.Net.Codecs.PostingsFormat), or use different postings formats for different fields, then you can register your custom postings format in the same way (in META-INF/services/org.apache.lucene.codecs.PostingsFormat), and then extend the default [](xref:Lucene.Net.Codecs.Lucene46.Lucene46Codec) and override [](xref:Lucene.Net.Codecs.Lucene46.Lucene46Codec.GetPostingsFormatForField(String)) to return your custom postings format. 
+ If you just want to customise the <xref:Lucene.Net.Codecs.PostingsFormat>, or use different postings formats for different fields, then you can register your custom postings format in the same way (in META-INF/services/org.apache.lucene.codecs.PostingsFormat), and then extend the default <xref:Lucene.Net.Codecs.Lucene46.Lucene46Codec> and override [#getPostingsFormatForField(String)](xref:Lucene.Net.Codecs.Lucene46.Lucene46Codec) to return your custom postings format. 
 
- Similarly, if you just want to customise the [](xref:Lucene.Net.Codecs.DocValuesFormat) per-field, have a look at [](xref:Lucene.Net.Codecs.Lucene46.Lucene46Codec.GetDocValuesFormatForField(String)). 
\ No newline at end of file
+ Similarly, if you just want to customise the <xref:Lucene.Net.Codecs.DocValuesFormat> per-field, have a look at [#getDocValuesFormatForField(String)](xref:Lucene.Net.Codecs.Lucene46.Lucene46Codec). 
\ No newline at end of file
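+
+ A minimal sketch of that per-field override (the field name and the "Memory" postings format are assumptions for illustration):
+
+```java
+import org.apache.lucene.codecs.PostingsFormat;
+import org.apache.lucene.codecs.lucene46.Lucene46Codec;
+
+// Keep the default codec's behaviour, but resolve a different postings
+// format by SPI name for a single field.
+public class MyPerFieldCodec extends Lucene46Codec {
+    @Override
+    public PostingsFormat getPostingsFormatForField(String field) {
+        if ("id".equals(field)) {
+            return PostingsFormat.forName("Memory"); // requires the codecs module on the classpath
+        }
+        return super.getPostingsFormatForField(field);
+    }
+}
+```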
diff --git a/src/Lucene.Net/Document/package.md b/src/Lucene.Net/Document/package.md
index 6ac14e0..47ffde8 100644
--- a/src/Lucene.Net/Document/package.md
+++ b/src/Lucene.Net/Document/package.md
@@ -1,4 +1,9 @@
-
+---
+uid: Lucene.Net.Documents
+summary: *content
+---
+
+
 <!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
@@ -16,18 +21,18 @@
  limitations under the License.
 -->
 
-The logical representation of a [](xref:Lucene.Net.Documents.Document) for indexing and searching.
+The logical representation of a <xref:Lucene.Net.Documents.Document> for indexing and searching.
 
-The document package provides the user level logical representation of content to be indexed and searched. The package also provides utilities for working with [](xref:Lucene.Net.Documents.Document)s and [](xref:Lucene.Net.Index.IndexableField)s.
+The document package provides the user level logical representation of content to be indexed and searched. The package also provides utilities for working with <xref:Lucene.Net.Documents.Document>s and <xref:Lucene.Net.Index.IndexableField>s.
 
 ## Document and IndexableField
 
-A [](xref:Lucene.Net.Documents.Document) is a collection of [](xref:Lucene.Net.Index.IndexableField)s. A [](xref:Lucene.Net.Index.IndexableField) is a logical representation of a user's content that needs to be indexed or stored. [](xref:Lucene.Net.Index.IndexableField)s have a number of properties that tell Lucene how to treat the content (like indexed, tokenized, stored, etc.) See the [](xref:Lucene.Net.Documents.Field) implementation of [](xref:Lucene.Net.Index.IndexableField) for spe [...]
+A <xref:Lucene.Net.Documents.Document> is a collection of <xref:Lucene.Net.Index.IndexableField>s. A <xref:Lucene.Net.Index.IndexableField> is a logical representation of a user's content that needs to be indexed or stored. <xref:Lucene.Net.Index.IndexableField>s have a number of properties that tell Lucene how to treat the content (like indexed, tokenized, stored, etc.) See the <xref:Lucene.Net.Documents.Field> implementation of <xref:Lucene.Net.Index.IndexableField> for specifics on th [...]
 
-Note: it is common to refer to [](xref:Lucene.Net.Documents.Document)s having [](xref:Lucene.Net.Documents.Field)s, even though technically they have [](xref:Lucene.Net.Index.IndexableField)s.
+Note: it is common to refer to <xref:Lucene.Net.Documents.Document>s having <xref:Lucene.Net.Documents.Field>s, even though technically they have <xref:Lucene.Net.Index.IndexableField>s.
 
 ## Working with Documents
 
-First and foremost, a [](xref:Lucene.Net.Documents.Document) is something created by the user application. It is your job to create Documents based on the content of the files you are working with in your application (Word, txt, PDF, Excel or any other format.) How this is done is completely up to you. That being said, there are many tools available in other projects that can make the process of taking a file and converting it into a Lucene [](xref:Lucene.Net.Documents.Document). 
+First and foremost, a <xref:Lucene.Net.Documents.Document> is something created by the user application. It is your job to create Documents based on the content of the files you are working with in your application (Word, txt, PDF, Excel or any other format). How this is done is completely up to you. That being said, there are many tools available in other projects that can ease the process of taking a file and converting it into a Lucene <xref:Lucene.Net.Documents.Document>. 
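+
+ As a small illustration (field names and values here are hypothetical):
+
+```java
+import org.apache.lucene.document.Document;
+import org.apache.lucene.document.Field;
+import org.apache.lucene.document.StringField;
+import org.apache.lucene.document.TextField;
+
+Document doc = new Document();
+// An identifier indexed literally (not tokenized) and stored for retrieval.
+doc.add(new StringField("id", "doc-42", Field.Store.YES));
+// Free text, tokenized by the analyzer configured on the IndexWriter.
+doc.add(new TextField("body", "the quick brown fox", Field.Store.NO));
+```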
 
-The [](xref:Lucene.Net.Documents.DateTools) is a utility class to make dates and times searchable (remember, Lucene only searches text). [](xref:Lucene.Net.Documents.IntField), [](xref:Lucene.Net.Documents.LongField), [](xref:Lucene.Net.Documents.FloatField) and [](xref:Lucene.Net.Documents.DoubleField) are a special helper class to simplify indexing of numeric values (and also dates) for fast range range queries with [](xref:Lucene.Net.Search.NumericRangeQuery) (using a special sortable [...]
\ No newline at end of file
+<xref:Lucene.Net.Documents.DateTools> is a utility class to make dates and times searchable (remember, Lucene only searches text). <xref:Lucene.Net.Documents.IntField>, <xref:Lucene.Net.Documents.LongField>, <xref:Lucene.Net.Documents.FloatField> and <xref:Lucene.Net.Documents.DoubleField> are special helper classes to simplify indexing of numeric values (and also dates) for fast range queries with <xref:Lucene.Net.Search.NumericRangeQuery> (using a special sortable string repr [...]
\ No newline at end of file
diff --git a/src/Lucene.Net/Index/package.md b/src/Lucene.Net/Index/package.md
index a1f0996..9d299c3 100644
--- a/src/Lucene.Net/Index/package.md
+++ b/src/Lucene.Net/Index/package.md
@@ -1,4 +1,9 @@
-
+---
+uid: Lucene.Net.Index
+summary: *content
+---
+
+
 <!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
@@ -27,45 +32,45 @@ Code to maintain and access indices.
 #### 
     Fields
 
- [](xref:Lucene.Net.Index.Fields) is the initial entry point into the postings APIs, this can be obtained in several ways: // access indexed fields for an index segment Fields fields = reader.fields(); // access term vector fields for a specified document Fields fields = reader.getTermVectors(docid); Fields implements Java's Iterable interface, so its easy to enumerate the list of fields: // enumerate list of fields for (String field : fields) { // access the terms for this field Terms t [...]
+ <xref:Lucene.Net.Index.Fields> is the initial entry point into the postings APIs; it can be obtained in several ways: // access indexed fields for an index segment Fields fields = reader.fields(); // access term vector fields for a specified document Fields fields = reader.getTermVectors(docid); Fields implements Java's Iterable interface, so it's easy to enumerate the list of fields: // enumerate list of fields for (String field : fields) { // access the terms for this field Terms ter [...]
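+
+ Reflowed as a block for readability (the loop body past the truncation above is an assumption based on the surrounding API):
+
+```java
+// Access indexed fields for an index segment:
+Fields fields = reader.fields();
+// ...or access term vector fields for a specified document:
+// Fields fields = reader.getTermVectors(docid);
+
+// Fields implements Java's Iterable, so enumerating the field names is easy:
+for (String field : fields) {
+    Terms terms = fields.terms(field); // access the terms for this field
+}
+```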
 
 #### 
     Terms
 
- [](xref:Lucene.Net.Index.Terms) represents the collection of terms within a field, exposes some metadata and [statistics](#fieldstats), and an API for enumeration. // metadata about the field System.out.println("positions? " + terms.hasPositions()); System.out.println("offsets? " + terms.hasOffsets()); System.out.println("payloads? " + terms.hasPayloads()); // iterate through terms TermsEnum termsEnum = terms.iterator(null); BytesRef term = null; while ((term = termsEnum.next()) != null [...]
+ <xref:Lucene.Net.Index.Terms> represents the collection of terms within a field, exposes some metadata and [statistics](#fieldstats), and an API for enumeration. // metadata about the field System.out.println("positions? " + terms.hasPositions()); System.out.println("offsets? " + terms.hasOffsets()); System.out.println("payloads? " + terms.hasPayloads()); // iterate through terms TermsEnum termsEnum = terms.iterator(null); BytesRef term = null; while ((term = termsEnum.next()) != null)  [...]
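+
+ The same snippet as a block (the field name and the loop body past the truncation are assumptions):
+
+```java
+Terms terms = fields.terms("body");
+// Metadata about the field:
+System.out.println("positions? " + terms.hasPositions());
+System.out.println("offsets? " + terms.hasOffsets());
+System.out.println("payloads? " + terms.hasPayloads());
+// Iterate through the terms:
+TermsEnum termsEnum = terms.iterator(null);
+BytesRef term;
+while ((term = termsEnum.next()) != null) {
+    System.out.println(term.utf8ToString());
+}
+```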
 
 #### 
     Documents
 
- [](xref:Lucene.Net.Index.DocsEnum) is an extension of [](xref:Lucene.Net.Search.DocIdSetIterator)that iterates over the list of documents for a term, along with the term frequency within that document. int docid; while ((docid = docsEnum.nextDoc()) != DocIdSetIterator.NO_MORE_DOCS) { System.out.println(docid); System.out.println(docsEnum.freq()); } 
+ <xref:Lucene.Net.Index.DocsEnum> is an extension of <xref:Lucene.Net.Search.DocIdSetIterator> that iterates over the list of documents for a term, along with the term frequency within that document. int docid; while ((docid = docsEnum.nextDoc()) != DocIdSetIterator.NO_MORE_DOCS) { System.out.println(docid); System.out.println(docsEnum.freq()); } 
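+
+ The same snippet, reflowed:
+
+```java
+// Iterate the documents containing a term, with the term's frequency in each:
+int docid;
+while ((docid = docsEnum.nextDoc()) != DocIdSetIterator.NO_MORE_DOCS) {
+    System.out.println(docid);
+    System.out.println(docsEnum.freq());
+}
+```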
 
 #### 
     Positions
 
- [](xref:Lucene.Net.Index.DocsAndPositionsEnum) is an extension of [](xref:Lucene.Net.Index.DocsEnum) that additionally allows iteration of the positions a term occurred within the document, and any additional per-position information (offsets and payload) int docid; while ((docid = docsAndPositionsEnum.nextDoc()) != DocIdSetIterator.NO_MORE_DOCS) { System.out.println(docid); int freq = docsAndPositionsEnum.freq(); for (int i = 0; i < freq;="" i++)="" {="" system.out.println(docsandposit [...]
+ <xref:Lucene.Net.Index.DocsAndPositionsEnum> is an extension of <xref:Lucene.Net.Index.DocsEnum> that additionally allows iteration of the positions a term occurred within the document, and any additional per-position information (offsets and payload). int docid; while ((docid = docsAndPositionsEnum.nextDoc()) != DocIdSetIterator.NO_MORE_DOCS) { System.out.println(docid); int freq = docsAndPositionsEnum.freq(); for (int i = 0; i < freq; i++) { System.out.println(docsAndPosit [...]
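+
+ Reflowed (the body of the inner loop past the truncation is an assumption; printing each position is the natural completion):
+
+```java
+int docid;
+while ((docid = docsAndPositionsEnum.nextDoc()) != DocIdSetIterator.NO_MORE_DOCS) {
+    System.out.println(docid);
+    int freq = docsAndPositionsEnum.freq();
+    for (int i = 0; i < freq; i++) {
+        System.out.println(docsAndPositionsEnum.nextPosition());
+    }
+}
+```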
 
 ## Index Statistics
 
 #### 
     Term statistics
 
- * [](xref:Lucene.Net.Index.TermsEnum.DocFreq): Returns the number of documents that contain at least one occurrence of the term. This statistic is always available for an indexed term. Note that it will also count deleted documents, when segments are merged the statistic is updated as those deleted documents are merged away. [](xref:Lucene.Net.Index.TermsEnum.TotalTermFreq): Returns the number of occurrences of this term across all documents. Note that this statistic is unavailable (ret [...]
+ * [#docFreq](xref:Lucene.Net.Index.TermsEnum): Returns the number of documents that contain at least one occurrence of the term. This statistic is always available for an indexed term. Note that it will also count deleted documents; when segments are merged, the statistic is updated as those deleted documents are merged away. [#totalTermFreq](xref:Lucene.Net.Index.TermsEnum): Returns the number of occurrences of this term across all documents. Note that this statistic is unavailable (ret [...]
 
 #### 
     Field statistics
 
- * [](xref:Lucene.Net.Index.Terms.Size): Returns the number of unique terms in the field. This statistic may be unavailable (returns `-1`) for some Terms implementations such as [](xref:Lucene.Net.Index.MultiTerms), where it cannot be efficiently computed. Note that this count also includes terms that appear only in deleted documents: when segments are merged such terms are also merged away and the statistic is then updated. [](xref:Lucene.Net.Index.Terms.GetDocCount): Returns the number [...]
+ * [#size](xref:Lucene.Net.Index.Terms): Returns the number of unique terms in the field. This statistic may be unavailable (returns `-1`) for some Terms implementations such as <xref:Lucene.Net.Index.MultiTerms>, where it cannot be efficiently computed. Note that this count also includes terms that appear only in deleted documents: when segments are merged, such terms are also merged away and the statistic is then updated. [#getDocCount](xref:Lucene.Net.Index.Terms): Returns the number o [...]
 
 #### 
     Segment statistics
 
- * [](xref:Lucene.Net.Index.IndexReader.MaxDoc): Returns the number of documents (including deleted documents) in the index. [](xref:Lucene.Net.Index.IndexReader.NumDocs): Returns the number of live documents (excluding deleted documents) in the index. [](xref:Lucene.Net.Index.IndexReader.NumDeletedDocs): Returns the number of deleted documents in the index. [](xref:Lucene.Net.Index.Fields.Size): Returns the number of indexed fields. [](xref:Lucene.Net.Index.Fields.GetUniqueTermCount): R [...]
+ * [#maxDoc](xref:Lucene.Net.Index.IndexReader): Returns the number of documents (including deleted documents) in the index. [#numDocs](xref:Lucene.Net.Index.IndexReader): Returns the number of live documents (excluding deleted documents) in the index. [#numDeletedDocs](xref:Lucene.Net.Index.IndexReader): Returns the number of deleted documents in the index. [#size](xref:Lucene.Net.Index.Fields): Returns the number of indexed fields. [#getUniqueTermCount](xref:Lucene.Net.Index.Fields): R [...]
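+
+ For example, reading a few of these from an already-open reader (a sketch, assuming `reader` is an open DirectoryReader):
+
+```java
+System.out.println("maxDoc: " + reader.maxDoc());
+System.out.println("numDocs: " + reader.numDocs());
+System.out.println("numDeletedDocs: " + reader.numDeletedDocs());
+```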
 
 #### 
     Document statistics
 
- Document statistics are available during the indexing process for an indexed field: typically a [](xref:Lucene.Net.Search.Similarities.Similarity) implementation will store some of these values (possibly in a lossy way), into the normalization value for the document in its [](xref:Lucene.Net.Search.Similarities.Similarity.ComputeNorm) method. 
+ Document statistics are available during the indexing process for an indexed field: typically a <xref:Lucene.Net.Search.Similarities.Similarity> implementation will store some of these values (possibly in a lossy way) into the normalization value for the document in its [#computeNorm](xref:Lucene.Net.Search.Similarities.Similarity) method. 
 
- * [](xref:Lucene.Net.Index.FieldInvertState.GetLength): Returns the number of tokens for this field in the document. Note that this is just the number of times that [](xref:Lucene.Net.Analysis.TokenStream.IncrementToken) returned true, and is unrelated to the values in [](xref:Lucene.Net.Analysis.TokenAttributes.PositionIncrementAttribute). [](xref:Lucene.Net.Index.FieldInvertState.GetNumOverlap): Returns the number of tokens for this field in the document that had a position increment  [...]
+ * [#getLength](xref:Lucene.Net.Index.FieldInvertState): Returns the number of tokens for this field in the document. Note that this is just the number of times that [#incrementToken](xref:Lucene.Net.Analysis.TokenStream) returned true, and is unrelated to the values in <xref:Lucene.Net.Analysis.TokenAttributes.PositionIncrementAttribute>. [#getNumOverlap](xref:Lucene.Net.Index.FieldInvertState): Returns the number of tokens for this field in the document that had a position increment of [...]
 
- Additional user-supplied statistics can be added to the document as DocValues fields and accessed via [](xref:Lucene.Net.Index.AtomicReader.GetNumericDocValues). 
\ No newline at end of file
+ Additional user-supplied statistics can be added to the document as DocValues fields and accessed via [#getNumericDocValues](xref:Lucene.Net.Index.AtomicReader). 
\ No newline at end of file
diff --git a/src/Lucene.Net/Search/Payloads/package.md b/src/Lucene.Net/Search/Payloads/package.md
index 998c3b5..be6d2da 100644
--- a/src/Lucene.Net/Search/Payloads/package.md
+++ b/src/Lucene.Net/Search/Payloads/package.md
@@ -1,4 +1,9 @@
-<html>
+---
+uid: Lucene.Net.Search.Payloads
+summary: *content
+---
+
+
 <!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
@@ -15,13 +20,10 @@
  See the License for the specific language governing permissions and
  limitations under the License.
 -->
-<head>
-    <title>org.apache.lucene.search.payloads</title>
-</head>
-<body>
+
+
 The payloads package provides Query mechanisms for finding and using payloads.
 
- The following Query implementations are provided: 1. [](xref:Lucene.Net.Search.Payloads.PayloadTermQuery PayloadTermQuery) -- Boost a term's score based on the value of the payload located at that term. 2. [](xref:Lucene.Net.Search.Payloads.PayloadNearQuery PayloadNearQuery) -- A [](xref:Lucene.Net.Search.Spans.SpanNearQuery SpanNearQuery) that factors in the value of the payloads located at each of the positions where the spans occur. 
+ The following Query implementations are provided: 1. [PayloadTermQuery](xref:Lucene.Net.Search.Payloads.PayloadTermQuery) -- Boost a term's score based on the value of the payload located at that term. 2. [PayloadNearQuery](xref:Lucene.Net.Search.Payloads.PayloadNearQuery) -- A [SpanNearQuery](xref:Lucene.Net.Search.Spans.SpanNearQuery) that factors in the value of the payloads located at each of the positions where the spans occur. 
+
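+ A minimal sketch of the first of these (the field and term are illustrative assumptions):
+
+```java
+import org.apache.lucene.index.Term;
+import org.apache.lucene.search.Query;
+import org.apache.lucene.search.payloads.AveragePayloadFunction;
+import org.apache.lucene.search.payloads.PayloadTermQuery;
+
+// Score matches of "lucene" in "body" using the average of the payload
+// values stored at the term's positions.
+Term term = new Term("body", "lucene");
+Query query = new PayloadTermQuery(term, new AveragePayloadFunction());
+```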
 
-</body>
-</html>
\ No newline at end of file
diff --git a/src/Lucene.Net/Search/Similarities/package.md b/src/Lucene.Net/Search/Similarities/package.md
index c655791..242b1e0 100644
--- a/src/Lucene.Net/Search/Similarities/package.md
+++ b/src/Lucene.Net/Search/Similarities/package.md
@@ -1,4 +1,9 @@
-
+---
+uid: Lucene.Net.Search.Similarities
+summary: *content
+---
+
+
 <!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
@@ -17,7 +22,7 @@
 -->
 
 This package contains the various ranking models that can be used in Lucene. The
-abstract class [](xref:Lucene.Net.Search.Similarities.Similarity) serves
+abstract class <xref:Lucene.Net.Search.Similarities.Similarity> serves
 as the base for ranking functions. For searching, users can employ the models
 already implemented or create their own by extending one of the classes in this
 package.
@@ -28,28 +33,28 @@ package.
 
 ## Summary of the Ranking Methods
 
-[](xref:Lucene.Net.Search.Similarities.DefaultSimilarity) is the original Lucene scoring function. It is based on a highly optimized [Vector Space Model](http://en.wikipedia.org/wiki/Vector_Space_Model). For more information, see [](xref:Lucene.Net.Search.Similarities.TFIDFSimilarity).
+<xref:Lucene.Net.Search.Similarities.DefaultSimilarity> is the original Lucene scoring function. It is based on a highly optimized [Vector Space Model](http://en.wikipedia.org/wiki/Vector_Space_Model). For more information, see <xref:Lucene.Net.Search.Similarities.TFIDFSimilarity>.
 
-[](xref:Lucene.Net.Search.Similarities.BM25Similarity) is an optimized implementation of the successful Okapi BM25 model.
+<xref:Lucene.Net.Search.Similarities.BM25Similarity> is an optimized implementation of the successful Okapi BM25 model.
 
-[](xref:Lucene.Net.Search.Similarities.SimilarityBase) provides a basic implementation of the Similarity contract and exposes a highly simplified interface, which makes it an ideal starting point for new ranking functions. Lucene ships the following methods built on [](xref:Lucene.Net.Search.Similarities.SimilarityBase): * Amati and Rijsbergen's {@linkplain org.apache.lucene.search.similarities.DFRSimilarity DFR} framework; * Clinchant and Gaussier's {@linkplain org.apache.lucene.search. [...]
+<xref:Lucene.Net.Search.Similarities.SimilarityBase> provides a basic implementation of the Similarity contract and exposes a highly simplified interface, which makes it an ideal starting point for new ranking functions. Lucene ships the following methods built on <xref:Lucene.Net.Search.Similarities.SimilarityBase>: * Amati and Rijsbergen's {@linkplain org.apache.lucene.search.similarities.DFRSimilarity DFR} framework; * Clinchant and Gaussier's {@linkplain org.apache.lucene.search.simi [...]
 
 ## Changing Similarity
 
 Chances are the available Similarities are sufficient for all your searching needs. However, in some applications it may be necessary to customize your [Similarity](Similarity.html) implementation. For instance, some applications do not need to distinguish between shorter and longer documents (see [a "fair" similarity](http://www.gossamer-threads.com/lists/lucene/java-user/38967#38967)).
 
-To change [](xref:Lucene.Net.Search.Similarities.Similarity), one must do so for both indexing and searching, and the changes must happen before either of these actions take place. Although in theory there is nothing stopping you from changing mid-stream, it just isn't well-defined what is going to happen. 
+To change <xref:Lucene.Net.Search.Similarities.Similarity>, one must do so for both indexing and searching, and the changes must happen before either of these actions takes place. Although in theory there is nothing stopping you from changing mid-stream, it just isn't well-defined what is going to happen. 
 
-To make this change, implement your own [](xref:Lucene.Net.Search.Similarities.Similarity) (likely you'll want to simply subclass an existing method, be it [](xref:Lucene.Net.Search.Similarities.DefaultSimilarity) or a descendant of [](xref:Lucene.Net.Search.Similarities.SimilarityBase)), and then register the new class by calling [](xref:Lucene.Net.Index.IndexWriterConfig.SetSimilarity(Similarity)) before indexing and [](xref:Lucene.Net.Search.IndexSearcher.SetSimilarity(Similarity)) be [...]
+To make this change, implement your own <xref:Lucene.Net.Search.Similarities.Similarity> (likely you'll want to simply subclass an existing implementation, be it <xref:Lucene.Net.Search.Similarities.DefaultSimilarity> or a descendant of <xref:Lucene.Net.Search.Similarities.SimilarityBase>), and then register the new class by calling [#setSimilarity(Similarity)](xref:Lucene.Net.Index.IndexWriterConfig) before indexing and [#setSimilarity(Similarity)](xref:Lucene.Net.Search.IndexSearcher) before s [...]
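+
+ A hedged sketch of that wiring (the analyzer, reader and BM25 choice are assumptions):
+
+```java
+Similarity sim = new BM25Similarity();
+
+// Before indexing:
+IndexWriterConfig config = new IndexWriterConfig(Version.LUCENE_46, analyzer);
+config.setSimilarity(sim);
+
+// Before searching:
+IndexSearcher searcher = new IndexSearcher(reader);
+searcher.setSimilarity(sim);
+```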
 
 ### Extending {@linkplain org.apache.lucene.search.similarities.SimilarityBase}
 
- The easiest way to quickly implement a new ranking method is to extend [](xref:Lucene.Net.Search.Similarities.SimilarityBase), which provides basic implementations for the low level . Subclasses are only required to implement the [](xref:Lucene.Net.Search.Similarities.SimilarityBase.Score(BasicStats, float, float)) and [](xref:Lucene.Net.Search.Similarities.SimilarityBase.ToString()) methods.
+ The easiest way to quickly implement a new ranking method is to extend <xref:Lucene.Net.Search.Similarities.SimilarityBase>, which provides basic implementations for the low-level details. Subclasses are only required to implement the [Score(BasicStats, float, float)](xref:Lucene.Net.Search.Similarities.SimilarityBase#methods) and [#toString()](xref:Lucene.Net.Search.Similarities.SimilarityBase) methods.
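+
+ For instance, a toy ranking function (illustrative only, not a recommended model) needs little more than:
+
+```java
+// Score grows with raw term frequency and ignores everything else.
+public class MySimilarity extends SimilarityBase {
+    @Override
+    protected float score(BasicStats stats, float freq, float docLen) {
+        return freq;
+    }
+
+    @Override
+    public String toString() {
+        return "MySimilarity";
+    }
+}
+```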
 
-Another option is to extend one of the [frameworks](#framework) based on [](xref:Lucene.Net.Search.Similarities.SimilarityBase). These Similarities are implemented modularly, e.g. [](xref:Lucene.Net.Search.Similarities.DFRSimilarity) delegates computation of the three parts of its formula to the classes [](xref:Lucene.Net.Search.Similarities.BasicModel), [](xref:Lucene.Net.Search.Similarities.AfterEffect) and [](xref:Lucene.Net.Search.Similarities.Normalization). Instead of subclassing t [...]
+Another option is to extend one of the [frameworks](#framework) based on <xref:Lucene.Net.Search.Similarities.SimilarityBase>. These Similarities are implemented modularly, e.g. <xref:Lucene.Net.Search.Similarities.DFRSimilarity> delegates computation of the three parts of its formula to the classes <xref:Lucene.Net.Search.Similarities.BasicModel>, <xref:Lucene.Net.Search.Similarities.AfterEffect> and <xref:Lucene.Net.Search.Similarities.Normalization>. Instead of subclassing the Similar [...]
 
 ### Changing {@linkplain org.apache.lucene.search.similarities.DefaultSimilarity}
 
- If you are interested in use cases for changing your similarity, see the Lucene users's mailing list at [Overriding Similarity](http://www.gossamer-threads.com/lists/lucene/java-user/39125). In summary, here are a few use cases: 1. <p>The `SweetSpotSimilarity` in `org.apache.lucene.misc` gives small increases as the frequency increases a small amount and then greater increases when you hit the "sweet spot", i.e. where you think the frequency of terms is more significant.</p> 2. <p>Overr [...]
+ If you are interested in use cases for changing your similarity, see the Lucene users' mailing list at [Overriding Similarity](http://www.gossamer-threads.com/lists/lucene/java-user/39125). In summary, here are a few use cases: 1. <p>The `SweetSpotSimilarity` in `org.apache.lucene.misc` gives small increases as the frequency increases a small amount and then greater increases when you hit the "sweet spot", i.e. where you think the frequency of terms is more significant.</p> 2. <p>Overr [...]
 
> [One would override the Similarity in] ... any situation where you know more about your data than just that it's "text" is a situation where it *might* make sense to override your Similarity method.
\ No newline at end of file
diff --git a/src/Lucene.Net/Search/Spans/package.md b/src/Lucene.Net/Search/Spans/package.md
index 4f49917..db79d82 100644
--- a/src/Lucene.Net/Search/Spans/package.md
+++ b/src/Lucene.Net/Search/Spans/package.md
@@ -1,4 +1,9 @@
-
+---
+uid: Lucene.Net.Search.Spans
+summary: *content
+---
+
+
 <!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
@@ -20,7 +25,7 @@ The calculus of spans.
 
 A span is a `<doc,startPosition,endPosition>` tuple.
 
-The following span query operators are implemented: * A [](xref:Lucene.Net.Search.Spans.SpanTermQuery SpanTermQuery) matches all spans containing a particular [](xref:Lucene.Net.Index.Term Term). * A [](xref:Lucene.Net.Search.Spans.SpanNearQuery SpanNearQuery) matches spans which occur near one another, and can be used to implement things like phrase search (when constructed from [](xref:Lucene.Net.Search.Spans.SpanTermQuery)s) and inter-phrase proximity (when constructed from other [](x [...]
+The following span query operators are implemented:
+
+*   A [SpanTermQuery](xref:Lucene.Net.Search.Spans.SpanTermQuery) matches all spans containing a particular [Term](xref:Lucene.Net.Index.Term).
+*   A [SpanNearQuery](xref:Lucene.Net.Search.Spans.SpanNearQuery) matches spans which occur near one another, and can be used to implement things like phrase search (when constructed from <xref:Lucene.Net.Search.Spans.SpanTermQuery>s) and inter-phrase proximity (when constructed from other <xref:Luc [...]
 
 For example, a span query which matches "John Kerry" within ten
 words of "George Bush" within the first 100 words of the document
diff --git a/src/Lucene.Net/Search/package.md b/src/Lucene.Net/Search/package.md
index 7da753e..e2d61bf 100644
--- a/src/Lucene.Net/Search/package.md
+++ b/src/Lucene.Net/Search/package.md
@@ -1,4 +1,9 @@
-
+---
+uid: Lucene.Net.Search
+summary: *content
+---
+
+
 <!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
@@ -24,25 +29,25 @@ Code to search indices.
 
 ## Search Basics
 
- Lucene offers a wide variety of [](xref:Lucene.Net.Search.Query) implementations, most of which are in this package, its subpackages ([](xref:Lucene.Net.Search.Spans spans), [](xref:Lucene.Net.Search.Payloads payloads)), or the [queries module]({@docRoot}/../queries/overview-summary.html). These implementations can be combined in a wide variety of ways to provide complex querying capabilities along with information about where matches took place in the document collection. The [Query Cl [...]
+ Lucene offers a wide variety of <xref:Lucene.Net.Search.Query> implementations, most of which are in this package, its subpackages ([spans](xref:Lucene.Net.Search.Spans), [payloads](xref:Lucene.Net.Search.Payloads)), or the [queries module]({@docRoot}/../queries/overview-summary.html). These implementations can be combined in a wide variety of ways to provide complex querying capabilities along with information about where matches took place in the document collection. The [Query Classe [...]
 
- To perform a search, applications usually call [](xref:Lucene.Net.Search.IndexSearcher.Search(Query,int)) or [](xref:Lucene.Net.Search.IndexSearcher.Search(Query,Filter,int)). 
+ To perform a search, applications usually call [Search(Query, int)](xref:Lucene.Net.Search.IndexSearcher) or [Search(Query, Filter, int)](xref:Lucene.Net.Search.IndexSearcher). 
 
- Once a Query has been created and submitted to the [](xref:Lucene.Net.Search.IndexSearcher IndexSearcher), the scoring process begins. After some infrastructure setup, control finally passes to the [](xref:Lucene.Net.Search.Weight Weight) implementation and its [](xref:Lucene.Net.Search.Scorer Scorer) or [](xref:Lucene.Net.Search.BulkScorer BulkScore) instances. See the [Algorithm](#algorithm) section for more notes on the process. 
+ Once a Query has been created and submitted to the [IndexSearcher](xref:Lucene.Net.Search.IndexSearcher), the scoring process begins. After some infrastructure setup, control finally passes to the [Weight](xref:Lucene.Net.Search.Weight) implementation and its [Scorer](xref:Lucene.Net.Search.Scorer) or [BulkScorer](xref:Lucene.Net.Search.BulkScorer) instances. See the [Algorithm](#algorithm) section for more notes on the process. 
 
     <!-- TODO: this page over-links the same things too many times -->
 
 ## Query Classes
 
 #### 
-    [](xref:Lucene.Net.Search.TermQuery TermQuery)
+    [TermQuery](xref:Lucene.Net.Search.TermQuery)
 
-Of the various implementations of [](xref:Lucene.Net.Search.Query Query), the [](xref:Lucene.Net.Search.TermQuery TermQuery) is the easiest to understand and the most often used in applications. A [](xref:Lucene.Net.Search.TermQuery TermQuery) matches all the documents that contain the specified [](xref:Lucene.Net.Index.Term Term), which is a word that occurs in a certain [](xref:Lucene.Net.Documents.Field Field). Thus, a [](xref:Lucene.Net.Search.TermQuery TermQuery) identifies and scor [...]
+Of the various implementations of [Query](xref:Lucene.Net.Search.Query), the [TermQuery](xref:Lucene.Net.Search.TermQuery) is the easiest to understand and the most often used in applications. A [TermQuery](xref:Lucene.Net.Search.TermQuery) matches all the documents that contain the specified [Term](xref:Lucene.Net.Index.Term), which is a word that occurs in a certain [Field](xref:Lucene.Net.Documents.Field). Thus, a [TermQuery](xref:Lucene.Net.Search.TermQuery) identifies and scores all [...]
 
 #### 
-    [](xref:Lucene.Net.Search.BooleanQuery BooleanQuery)
+    [BooleanQuery](xref:Lucene.Net.Search.BooleanQuery)
 
-Things start to get interesting when one combines multiple [](xref:Lucene.Net.Search.TermQuery TermQuery) instances into a [](xref:Lucene.Net.Search.BooleanQuery BooleanQuery). A [](xref:Lucene.Net.Search.BooleanQuery BooleanQuery) contains multiple [](xref:Lucene.Net.Search.BooleanClause BooleanClause)s, where each clause contains a sub-query ([](xref:Lucene.Net.Search.Query Query) instance) and an operator (from [](xref:Lucene.Net.Search.BooleanClause.Occur BooleanClause.Occur)) descri [...]
+Things start to get interesting when one combines multiple [TermQuery](xref:Lucene.Net.Search.TermQuery) instances into a [BooleanQuery](xref:Lucene.Net.Search.BooleanQuery). A [BooleanQuery](xref:Lucene.Net.Search.BooleanQuery) contains multiple [BooleanClause](xref:Lucene.Net.Search.BooleanClause)s, where each clause contains a sub-query ([Query](xref:Lucene.Net.Search.Query) instance) and an operator (from [BooleanClause.Occur](xref:Lucene.Net.Search.BooleanClause.Occur)) describing h [...]
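
For instance, a minimal sketch combining two term queries (the field name `body` is hypothetical; the `Occur` enum is as exposed by Lucene.NET):

```csharp
using Lucene.Net.Index;
using Lucene.Net.Search;

// Documents MUST contain "lucene" and SHOULD (but need not) contain
// "scoring"; documents matching the optional clause rank higher.
var query = new BooleanQuery();
query.Add(new TermQuery(new Term("body", "lucene")), Occur.MUST);
query.Add(new TermQuery(new Term("body", "scoring")), Occur.SHOULD);
```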
 
 #### Phrases
 
@@ -51,33 +56,33 @@ Another common search is to find documents containing certain phrases. This
 
 1.  
 
-[](xref:Lucene.Net.Search.PhraseQuery PhraseQuery) — Matches a sequence of [](xref:Lucene.Net.Index.Term Term)s. [](xref:Lucene.Net.Search.PhraseQuery PhraseQuery) uses a slop factor to determine how many positions may occur between any two terms in the phrase and still be considered a match. The slop is 0 by default, meaning the phrase must match exactly.
+[PhraseQuery](xref:Lucene.Net.Search.PhraseQuery) — Matches a sequence of [Term](xref:Lucene.Net.Index.Term)s. [PhraseQuery](xref:Lucene.Net.Search.PhraseQuery) uses a slop factor to determine how many positions may occur between any two terms in the phrase and still be considered a match. The slop is 0 by default, meaning the phrase must match exactly.
 
 2.  
 
-[](xref:Lucene.Net.Search.MultiPhraseQuery MultiPhraseQuery) — A more general form of PhraseQuery that accepts multiple Terms for a position in the phrase. For example, this can be used to perform phrase queries that also incorporate synonyms. 3. <p>[](xref:Lucene.Net.Search.Spans.SpanNearQuery SpanNearQuery) — Matches a sequence of other [](xref:Lucene.Net.Search.Spans.SpanQuery SpanQuery) instances. [](xref:Lucene.Net.Search.Spans.SpanNearQuery SpanNearQuery) allows for much more compl [...]
+[MultiPhraseQuery](xref:Lucene.Net.Search.MultiPhraseQuery) — A more general form of PhraseQuery that accepts multiple Terms for a position in the phrase. For example, this can be used to perform phrase queries that also incorporate synonyms.
+
+3.  
+
+[SpanNearQuery](xref:Lucene.Net.Search.Spans.SpanNearQuery) — Matches a sequence of other [SpanQuery](xref:Lucene.Net.Search.Spans.SpanQuery) instances. [SpanNearQuery](xref:Lucene.Net.Search.Spans.SpanNearQuery) allows for much more complicat [...]
 
 #### 
-    [](xref:Lucene.Net.Search.TermRangeQuery TermRangeQuery)
+    [TermRangeQuery](xref:Lucene.Net.Search.TermRangeQuery)
 
-The [](xref:Lucene.Net.Search.TermRangeQuery TermRangeQuery) matches all documents that occur in the exclusive range of a lower [](xref:Lucene.Net.Index.Term Term) and an upper [](xref:Lucene.Net.Index.Term Term) according to [](xref:Lucene.Net.Index.TermsEnum.GetComparator TermsEnum.GetComparator()). It is not intended for numerical ranges; use [](xref:Lucene.Net.Search.NumericRangeQuery NumericRangeQuery) instead. For example, one could find all documents that have terms beginning with [...]
+The [TermRangeQuery](xref:Lucene.Net.Search.TermRangeQuery) matches all documents that occur in the exclusive range of a lower [Term](xref:Lucene.Net.Index.Term) and an upper [Term](xref:Lucene.Net.Index.Term) according to [TermsEnum.GetComparator()](xref:Lucene.Net.Index.TermsEnum#methods). It is not intended for numerical ranges; use [NumericRangeQuery](xref:Lucene.Net.Search.NumericRangeQuery) instead. For example, one could find all documents that have terms beginning with the letters [...]
 
 #### 
-    [](xref:Lucene.Net.Search.NumericRangeQuery NumericRangeQuery)
+    [NumericRangeQuery](xref:Lucene.Net.Search.NumericRangeQuery)
 
-The [](xref:Lucene.Net.Search.NumericRangeQuery NumericRangeQuery) matches all documents that occur in a numeric range. For NumericRangeQuery to work, you must index the values using a one of the numeric fields ([](xref:Lucene.Net.Documents.IntField IntField), [](xref:Lucene.Net.Documents.LongField LongField), [](xref:Lucene.Net.Documents.FloatField FloatField), or [](xref:Lucene.Net.Documents.DoubleField DoubleField)). 
+The [NumericRangeQuery](xref:Lucene.Net.Search.NumericRangeQuery) matches all documents that occur in a numeric range. For NumericRangeQuery to work, you must index the values using one of the numeric fields ([IntField](xref:Lucene.Net.Documents.IntField), [LongField](xref:Lucene.Net.Documents.LongField), [FloatField](xref:Lucene.Net.Documents.FloatField), or [DoubleField](xref:Lucene.Net.Documents.DoubleField)). 
 
 #### 
-    [](xref:Lucene.Net.Search.PrefixQuery PrefixQuery),
-    [](xref:Lucene.Net.Search.WildcardQuery WildcardQuery),
-    [](xref:Lucene.Net.Search.RegexpQuery RegexpQuery)
+    [PrefixQuery](xref:Lucene.Net.Search.PrefixQuery),
+    [WildcardQuery](xref:Lucene.Net.Search.WildcardQuery),
+    [RegexpQuery](xref:Lucene.Net.Search.RegexpQuery)
 
-While the [](xref:Lucene.Net.Search.PrefixQuery PrefixQuery) has a different implementation, it is essentially a special case of the [](xref:Lucene.Net.Search.WildcardQuery WildcardQuery). The [](xref:Lucene.Net.Search.PrefixQuery PrefixQuery) allows an application to identify all documents with terms that begin with a certain string. The [](xref:Lucene.Net.Search.WildcardQuery WildcardQuery) generalizes this by allowing for the use of <tt>*</tt> (matches 0 or more characters) and <tt>?< [...]
+While the [PrefixQuery](xref:Lucene.Net.Search.PrefixQuery) has a different implementation, it is essentially a special case of the [WildcardQuery](xref:Lucene.Net.Search.WildcardQuery). The [PrefixQuery](xref:Lucene.Net.Search.PrefixQuery) allows an application to identify all documents with terms that begin with a certain string. The [WildcardQuery](xref:Lucene.Net.Search.WildcardQuery) generalizes this by allowing for the use of <tt>*</tt> (matches 0 or more characters) and <tt>?</tt> [...]
 
 #### 
-    [](xref:Lucene.Net.Search.FuzzyQuery FuzzyQuery)
+    [FuzzyQuery](xref:Lucene.Net.Search.FuzzyQuery)
 
-A [](xref:Lucene.Net.Search.FuzzyQuery FuzzyQuery) matches documents that contain terms similar to the specified term. Similarity is determined using [Levenshtein (edit) distance](http://en.wikipedia.org/wiki/Levenshtein). This type of query can be useful when accounting for spelling variations in the collection. 
+A [FuzzyQuery](xref:Lucene.Net.Search.FuzzyQuery) matches documents that contain terms similar to the specified term. Similarity is determined using [Levenshtein (edit) distance](http://en.wikipedia.org/wiki/Levenshtein). This type of query can be useful when accounting for spelling variations in the collection. 
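
A quick sketch of these term-expanding queries side by side (the field name `body` is hypothetical):

```csharp
using Lucene.Net.Index;
using Lucene.Net.Search;

// Terms starting with "luc": lucene, lucid, ...
var prefix = new PrefixQuery(new Term("body", "luc"));

// Single-character wildcard: "te?t" matches "test" and "text".
var wildcard = new WildcardQuery(new Term("body", "te?t"));

// Terms within a small edit distance of "chowder", e.g. "chouder".
var fuzzy = new FuzzyQuery(new Term("body", "chowder"));
```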
 
 ## Scoring — Introduction
 
@@ -85,16 +90,16 @@ Lucene scoring is the heart of why we all love Lucene. It is blazingly fast and
 
 While this document won't answer your specific scoring issues, it will, hopefully, point you to the places that can help you figure out the *what* and *why* of Lucene scoring. 
 
-Lucene scoring supports a number of pluggable information retrieval [models](http://en.wikipedia.org/wiki/Information_retrieval#Model_types), including: * [Vector Space Model (VSM)](http://en.wikipedia.org/wiki/Vector_Space_Model) * [Probablistic Models](http://en.wikipedia.org/wiki/Probabilistic_relevance_model) such as [Okapi BM25](http://en.wikipedia.org/wiki/Probabilistic_relevance_model_(BM25)) and [DFR](http://en.wikipedia.org/wiki/Divergence-from-randomness_model) * [Language mode [...]
+Lucene scoring supports a number of pluggable information retrieval [models](http://en.wikipedia.org/wiki/Information_retrieval#Model_types), including: * [Vector Space Model (VSM)](http://en.wikipedia.org/wiki/Vector_Space_Model) * [Probabilistic Models](http://en.wikipedia.org/wiki/Probabilistic_relevance_model) such as [Okapi BM25](http://en.wikipedia.org/wiki/Probabilistic_relevance_model_(BM25)) and [DFR](http://en.wikipedia.org/wiki/Divergence-from-randomness_model) * [Language mode [...]
 
-The rest of this document will cover [Scoring basics](#scoringBasics) and explain how to change your [](xref:Lucene.Net.Search.Similarities.Similarity Similarity). Next, it will cover ways you can customize the lucene internals in [Custom Queries -- Expert Level](#customQueriesExpert), which gives details on implementing your own [](xref:Lucene.Net.Search.Query Query) class and related functionality. Finally, we will finish up with some reference material in the [Appendix](#algorithm). 
+The rest of this document will cover [Scoring basics](#scoringBasics) and explain how to change your [Similarity](xref:Lucene.Net.Search.Similarities.Similarity). Next, it will cover ways you can customize the lucene internals in [Custom Queries -- Expert Level](#customQueriesExpert), which gives details on implementing your own [Query](xref:Lucene.Net.Search.Query) class and related functionality. Finally, we will finish up with some reference material in the [Appendix](#algorithm). 
 
 ## Scoring — Basics
 
 Scoring is very much dependent on the way documents are indexed, so it is important to understand 
    indexing. (see [Lucene overview]({@docRoot}/overview-summary.html#overview_description) 
    before continuing on with this section) Be sure to use the useful
-   [](xref:Lucene.Net.Search.IndexSearcher.Explain(Lucene.Net.Search.Query, int) IndexSearcher.Explain(Query, doc))
+   [IndexSearcher.Explain(Query, doc)](xref:Lucene.Net.Search.IndexSearcher#methods)
    to understand how the score for a certain matching document was
    computed.
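
For instance, a quick sketch (here `searcher`, `query`, and `docId` are assumed to already be in scope):

```csharp
using System;
using Lucene.Net.Search;

// Ask the searcher to break down how it scored one matching document.
Explanation explanation = searcher.Explain(query, docId);
Console.WriteLine(explanation); // prints the nested scoring factors
```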
 
@@ -102,45 +107,45 @@ Generally, the Query determines which documents match (a binary decision), while
 
 #### Fields and Documents
 
-In Lucene, the objects we are scoring are [](xref:Lucene.Net.Documents.Document Document)s. A Document is a collection of [](xref:Lucene.Net.Documents.Field Field)s. Each Field has [](xref:Lucene.Net.Documents.FieldType semantics) about how it is created and stored ([](xref:Lucene.Net.Documents.FieldType.Tokenized() tokenized), [](xref:Lucene.Net.Documents.FieldType.Stored() stored), etc). It is important to note that Lucene scoring works on Fields and then combines the results to return [...]
+In Lucene, the objects we are scoring are [Document](xref:Lucene.Net.Documents.Document)s. A Document is a collection of [Field](xref:Lucene.Net.Documents.Field)s. Each Field has [semantics](xref:Lucene.Net.Documents.FieldType) about how it is created and stored ([Tokenized](xref:Lucene.Net.Documents.FieldType#methods), [Stored](xref:Lucene.Net.Documents.FieldType#methods), etc). It is important to note that Lucene scoring works on Fields and then combines the results to return Documents [...]
 
 #### Score Boosting
 
-Lucene allows influencing search results by "boosting" at different times: * **Index-time boost** by calling [](xref:Lucene.Net.Documents.Field.SetBoost(float) Field.SetBoost()) before a document is added to the index. * **Query-time boost** by setting a boost on a query clause, calling [](xref:Lucene.Net.Search.Query.SetBoost(float) Query.SetBoost()). 
+Lucene allows influencing search results by "boosting" at different times: * **Index-time boost** by calling [Field.SetBoost](xref:Lucene.Net.Documents.Field#methods) before a document is added to the index. * **Query-time boost** by setting a boost on a query clause, calling [Query.SetBoost](xref:Lucene.Net.Search.Query#methods). 
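
Both kinds of boost look roughly like this in code (a sketch assuming the `Boost` properties exposed by Lucene.NET 4.x; field names and values are illustrative):

```csharp
using Lucene.Net.Documents;
using Lucene.Net.Index;
using Lucene.Net.Search;

// Index-time boost: make matches in the title field weigh more.
var title = new TextField("title", "Lucene in Action", Field.Store.YES);
title.Boost = 2.0f;

// Query-time boost: weigh this clause double relative to its siblings.
var clause = new TermQuery(new Term("title", "lucene"));
clause.Boost = 2.0f;
```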
 
-Indexing time boosts are pre-processed for storage efficiency and written to storage for a field as follows: * All boosts of that field (i.e. all boosts under the same field name in that doc) are multiplied. * The boost is then encoded into a normalization value by the Similarity object at index-time: [](xref:Lucene.Net.Search.Similarities.Similarity.ComputeNorm computeNorm()). The actual encoding depends upon the Similarity implementation, but note that most use a lossy encoding (such a [...]
+Indexing time boosts are pre-processed for storage efficiency and written to storage for a field as follows: * All boosts of that field (i.e. all boosts under the same field name in that doc) are multiplied. * The boost is then encoded into a normalization value by the Similarity object at index-time: [ComputeNorm](xref:Lucene.Net.Search.Similarities.Similarity#methods). The actual encoding depends upon the Similarity implementation, but note that most use a lossy encoding (such as multi [...]
 
 ## Changing Scoring — Similarity
 
- Changing [](xref:Lucene.Net.Search.Similarities.Similarity Similarity) is an easy way to influence scoring, this is done at index-time with [](xref:Lucene.Net.Index.IndexWriterConfig.SetSimilarity(Lucene.Net.Search.Similarities.Similarity) IndexWriterConfig.SetSimilarity(Similarity)) and at query-time with [](xref:Lucene.Net.Search.IndexSearcher.SetSimilarity(Lucene.Net.Search.Similarities.Similarity) IndexSearcher.SetSimilarity(Similarity)). Be sure to use the same Similarity at query- [...]
+ Changing [Similarity](xref:Lucene.Net.Search.Similarities.Similarity) is an easy way to influence scoring; this is done at index-time with [IndexWriterConfig.SetSimilarity](xref:Lucene.Net.Index.IndexWriterConfig#methods) and at query-time with [IndexSearcher.SetSimilarity](xref:Lucene.Net.Search.IndexSearcher#methods). Be sure to use the same Similarity at query-time as at index-time (so that norms are encoded/decoded correctly); Lucene makes no effort to verify this. 
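
In code, that pairing looks roughly like this (a sketch assuming Lucene.NET 4.x; `analyzer` and `reader` are assumed to be in scope, and `BM25Similarity` stands in for any Similarity):

```csharp
using Lucene.Net.Index;
using Lucene.Net.Search;
using Lucene.Net.Search.Similarities;
using Lucene.Net.Util;

Similarity similarity = new BM25Similarity();

// Index-time: attach the Similarity to the writer configuration.
var config = new IndexWriterConfig(LuceneVersion.LUCENE_48, analyzer)
{
    Similarity = similarity
};

// Query-time: use the very same Similarity so norms decode correctly.
var searcher = new IndexSearcher(reader) { Similarity = similarity };
```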
 
 You can influence scoring by configuring a different built-in Similarity implementation, by tweaking its parameters, or by subclassing it to override behavior. Some implementations also offer a modular API which you can extend by plugging in a different component (e.g. a term frequency normalizer). 
 
- Finally, you can extend the low level [](xref:Lucene.Net.Search.Similarities.Similarity Similarity) directly to implement a new retrieval model, or to use external scoring factors particular to your application. For example, a custom Similarity can access per-document values via [](xref:Lucene.Net.Search.FieldCache FieldCache) or [](xref:Lucene.Net.Index.NumericDocValues) and integrate them into the score. 
+ Finally, you can extend the low level [Similarity](xref:Lucene.Net.Search.Similarities.Similarity) directly to implement a new retrieval model, or to use external scoring factors particular to your application. For example, a custom Similarity can access per-document values via [FieldCache](xref:Lucene.Net.Search.FieldCache) or <xref:Lucene.Net.Index.NumericDocValues> and integrate them into the score. 
 
- See the [](xref:Lucene.Net.Search.Similarities) package documentation for information on the built-in available scoring models and extending or changing Similarity. 
+ See the <xref:Lucene.Net.Search.Similarities> package documentation for information on the built-in available scoring models and extending or changing Similarity. 
 
 ## Custom Queries — Expert Level
 
 Custom queries are an expert level task, so tread carefully and be prepared to share your code if you want help. 
 
-With the warning out of the way, it is possible to change a lot more than just the Similarity when it comes to matching and scoring in Lucene. Lucene's search is a complex mechanism that is grounded by <span>three main classes</span>: 1. [](xref:Lucene.Net.Search.Query Query) — The abstract object representation of the user's information need. 2. [](xref:Lucene.Net.Search.Weight Weight) — The internal interface representation of the user's Query, so that Query objects may be reused. This [...]
+With the warning out of the way, it is possible to change a lot more than just the Similarity when it comes to matching and scoring in Lucene. Lucene's search is a complex mechanism that is grounded by three main classes: 1. [Query](xref:Lucene.Net.Search.Query) — The abstract object representation of the user's information need. 2. [Weight](xref:Lucene.Net.Search.Weight) — The internal interface representation of the user's Query, so that Query objects may be reused. This i [...]
 
 #### The Query Class
 
-In some sense, the [](xref:Lucene.Net.Search.Query Query) class is where it all begins. Without a Query, there would be nothing to score. Furthermore, the Query class is the catalyst for the other scoring classes as it is often responsible for creating them or coordinating the functionality between them. The [](xref:Lucene.Net.Search.Query Query) class has several methods that are important for derived classes: 1. [](xref:Lucene.Net.Search.Query.CreateWeight(IndexSearcher) createWeight(I [...]
+In some sense, the [Query](xref:Lucene.Net.Search.Query) class is where it all begins. Without a Query, there would be nothing to score. Furthermore, the Query class is the catalyst for the other scoring classes as it is often responsible for creating them or coordinating the functionality between them. The [Query](xref:Lucene.Net.Search.Query) class has several methods that are important for derived classes: 1. [CreateWeight(IndexSearcher)](xref:Lucene.Net.Search.Query#methods) — A [Weight](xref:Lucene.N [...]
 
 #### The Weight Interface
 
-The [](xref:Lucene.Net.Search.Weight Weight) interface provides an internal representation of the Query so that it can be reused. Any [](xref:Lucene.Net.Search.IndexSearcher IndexSearcher) dependent state should be stored in the Weight implementation, not in the Query class. The interface defines five methods that must be implemented: 1. [](xref:Lucene.Net.Search.Weight.GetQuery getQuery()) — Pointer to the Query that this Weight represents. 2. [](xref:Lucene.Net.Search.Weight.GetValueFo [...]
+The [Weight](xref:Lucene.Net.Search.Weight) interface provides an internal representation of the Query so that it can be reused. Any [IndexSearcher](xref:Lucene.Net.Search.IndexSearcher) dependent state should be stored in the Weight implementation, not in the Query class. The interface defines five methods that must be implemented: 1. [GetQuery](xref:Lucene.Net.Search.Weight#methods) — Pointer to the Query that this Weight represents. 2. [GetValueForNormalization](xref:Lucene.Net.Search [...]
 
 #### The Scorer Class
 
-The [](xref:Lucene.Net.Search.Scorer Scorer) abstract class provides common scoring functionality for all Scorer implementations and is the heart of the Lucene scoring process. The Scorer defines the following abstract (some of them are not yet abstract, but will be in future versions and should be considered as such now) methods which must be implemented (some of them inherited from [](xref:Lucene.Net.Search.DocIdSetIterator DocIdSetIterator)): 1. [](xref:Lucene.Net.Search.Scorer.NextDo [...]
+The [Scorer](xref:Lucene.Net.Search.Scorer) abstract class provides common scoring functionality for all Scorer implementations and is the heart of the Lucene scoring process. The Scorer defines the following abstract (some of them are not yet abstract, but will be in future versions and should be considered as such now) methods which must be implemented (some of them inherited from [DocIdSetIterator](xref:Lucene.Net.Search.DocIdSetIterator)): 1. [NextDoc](xref:Lucene.Net.Search.Scorer#m [...]
 
 #### The BulkScorer Class
 
-The [](xref:Lucene.Net.Search.BulkScorer BulkScorer) scores a range of documents. There is only one abstract method: 1. [](xref:Lucene.Net.Search.BulkScorer.Score(Lucene.Net.Search.Collector,int) score(Collector,int)) — Score all documents up to but not including the specified max document. 
+The [BulkScorer](xref:Lucene.Net.Search.BulkScorer) scores a range of documents. There is only one abstract method: 1. [Score](xref:Lucene.Net.Search.BulkScorer#methods) — Score all documents up to but not including the specified max document. 
 
 #### Why would I want to add my own Query?
 
@@ -150,14 +155,14 @@ In a nutshell, you want to add your own custom Query implementation when you thi
 
 This section is mostly notes on stepping through the Scoring process and serves as fertilizer for the earlier sections.
 
-In the typical search application, a [](xref:Lucene.Net.Search.Query Query) is passed to the [](xref:Lucene.Net.Search.IndexSearcher IndexSearcher), beginning the scoring process.
+In the typical search application, a [Query](xref:Lucene.Net.Search.Query) is passed to the [IndexSearcher](xref:Lucene.Net.Search.IndexSearcher), beginning the scoring process.
 
-Once inside the IndexSearcher, a [](xref:Lucene.Net.Search.Collector Collector) is used for the scoring and sorting of the search results. These important objects are involved in a search: 1. The [](xref:Lucene.Net.Search.Weight Weight) object of the Query. The Weight object is an internal representation of the Query that allows the Query to be reused by the IndexSearcher. 2. The IndexSearcher that initiated the call. 3. A [](xref:Lucene.Net.Search.Filter Filter) for limiting the result  [...]
+Once inside the IndexSearcher, a [Collector](xref:Lucene.Net.Search.Collector) is used for the scoring and sorting of the search results. These important objects are involved in a search: 1. The [Weight](xref:Lucene.Net.Search.Weight) object of the Query. The Weight object is an internal representation of the Query that allows the Query to be reused by the IndexSearcher. 2. The IndexSearcher that initiated the call. 3. A [Filter](xref:Lucene.Net.Search.Filter) for limiting the result set [...]
 
-Assuming we are not sorting (since sorting doesn't affect the raw Lucene score), we call one of the search methods of the IndexSearcher, passing in the [](xref:Lucene.Net.Search.Weight Weight) object created by [](xref:Lucene.Net.Search.IndexSearcher.CreateNormalizedWeight(Lucene.Net.Search.Query) IndexSearcher.CreateNormalizedWeight(Query)), [](xref:Lucene.Net.Search.Filter Filter) and the number of results we want. This method returns a [](xref:Lucene.Net.Search.TopDocs TopDocs) object [...]
+Assuming we are not sorting (since sorting doesn't affect the raw Lucene score), we call one of the search methods of the IndexSearcher, passing in the [Weight](xref:Lucene.Net.Search.Weight) object created by [IndexSearcher.createNormalizedWeight](xref:Lucene.Net.Search.IndexSearcher#methods), [Filter](xref:Lucene.Net.Search.Filter) and the number of results we want. This method returns a [TopDocs](xref:Lucene.Net.Search.TopDocs) object, which is an internal collection of search results [...]
 
-If a Filter is being used, some initial setup is done to determine which docs to include. Otherwise, we ask the Weight for a [](xref:Lucene.Net.Search.Scorer Scorer) for each [](xref:Lucene.Net.Index.IndexReader IndexReader) segment and proceed by calling [](xref:Lucene.Net.Search.BulkScorer.Score(Lucene.Net.Search.Collector) BulkScorer.Score(Collector)). 
+If a Filter is being used, some initial setup is done to determine which docs to include. Otherwise, we ask the Weight for a [Scorer](xref:Lucene.Net.Search.Scorer) for each [IndexReader](xref:Lucene.Net.Index.IndexReader) segment and proceed by calling [BulkScorer.Score](xref:Lucene.Net.Search.BulkScorer#methods). 
 
-At last, we are actually going to score some documents. The score method takes in the Collector (most likely the TopScoreDocCollector or TopFieldCollector) and does its business.Of course, here is where things get involved. The [](xref:Lucene.Net.Search.Scorer Scorer) that is returned by the [](xref:Lucene.Net.Search.Weight Weight) object depends on what type of Query was submitted. In most real world applications with multiple query terms, the [](xref:Lucene.Net.Search.Scorer Scorer) is [...]
+At last, we are actually going to score some documents. The score method takes in the Collector (most likely the TopScoreDocCollector or TopFieldCollector) and does its business. Of course, here is where things get involved. The [Scorer](xref:Lucene.Net.Search.Scorer) that is returned by the [Weight](xref:Lucene.Net.Search.Weight) object depends on what type of Query was submitted. In most real world applications with multiple query terms, the [Scorer](xref:Lucene.Net.Search.Scorer) is go [...]
 
-Assuming a BooleanScorer2, we first initialize the Coordinator, which is used to apply the coord() factor. We then get a internal Scorer based on the required, optional and prohibited parts of the query. Using this internal Scorer, the BooleanScorer2 then proceeds into a while loop based on the [](xref:Lucene.Net.Search.Scorer.NextDoc Scorer.NextDoc()) method. The nextDoc() method advances to the next document matching the query. This is an abstract method in the Scorer class and is thus [...]
\ No newline at end of file
+Assuming a BooleanScorer2, we first initialize the Coordinator, which is used to apply the coord() factor. We then get an internal Scorer based on the required, optional and prohibited parts of the query. Using this internal Scorer, the BooleanScorer2 then proceeds into a while loop based on the [Scorer.NextDoc()](xref:Lucene.Net.Search.Scorer#methods) method. The NextDoc() method advances to the next document matching the query. This is an abstract method in the Scorer class and is thus ov [...]
\ No newline at end of file
diff --git a/src/Lucene.Net/Store/package.md b/src/Lucene.Net/Store/package.md
index e8d2ba6..665ba5d 100644
--- a/src/Lucene.Net/Store/package.md
+++ b/src/Lucene.Net/Store/package.md
@@ -1,4 +1,9 @@
-
+---
+uid: Lucene.Net.Store
+summary: *content
+---
+
+
 <!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
diff --git a/src/Lucene.Net/Util/Automaton/package.md b/src/Lucene.Net/Util/Automaton/package.md
index e0243c0..7d46f74 100644
--- a/src/Lucene.Net/Util/Automaton/package.md
+++ b/src/Lucene.Net/Util/Automaton/package.md
@@ -35,8 +35,8 @@ alphabet and support for all standard (and a number of non-standard)
 regular expression operations.
 
 The most commonly used functionality is located in the classes
-<tt>[](xref:Lucene.Net.Util.Automaton.Automaton)</tt> and
-<tt>[](xref:Lucene.Net.Util.Automaton.RegExp)</tt>.
+<tt><xref:Lucene.Net.Util.Automaton.Automaton></tt> and
+<tt><xref:Lucene.Net.Util.Automaton.RegExp></tt>.
 
 For more information, go to the package home page at 
 <tt>[http://www.brics.dk/automaton/](http://www.brics.dk/automaton/)</tt>.
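
A brief sketch of typical usage (assuming the Lucene.NET port keeps the `RegExp.ToAutomaton()` and `BasicOperations.Run` entry points):

```csharp
using Lucene.Net.Util.Automaton;

// Compile a regular expression into an automaton...
var regex = new RegExp("ab+c");
Automaton automaton = regex.ToAutomaton();

// ...and test candidate strings against it.
bool accepted = BasicOperations.Run(automaton, "abbc"); // true
```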
diff --git a/src/Lucene.Net/Util/Fst/package.md b/src/Lucene.Net/Util/Fst/package.md
index 3f58a80..76e6057 100644
--- a/src/Lucene.Net/Util/Fst/package.md
+++ b/src/Lucene.Net/Util/Fst/package.md
@@ -24,13 +24,13 @@ Finite State Transducers](http://en.wikipedia.org/wiki/Finite_state_transducer)
 *   Fast and low memory overhead construction of the minimal FST 
        (but inputs must be provided in sorted order)
 *   Low object overhead and quick deserialization (byte[] representation)
-*   Optional two-pass compression: [](xref:Lucene.Net.Util.Fst.FST.Pack FST.Pack())
-*   [](xref:Lucene.Net.Util.Fst.Util.GetByOutput Lookup-by-output) when the 
+*   Optional two-pass compression: [FST.Pack()](xref:Lucene.Net.Util.Fst.FST#methods)
+*   [Lookup-by-output](xref:Lucene.Net.Util.Fst.Util#methods) when the 
        outputs are in sorted order (e.g., ordinals or file pointers)
-*   Pluggable [](xref:Lucene.Net.Util.Fst.Outputs Outputs) representation
-*   [](xref:Lucene.Net.Util.Fst.Util.ShortestPaths N-shortest-paths) search by
+*   Pluggable [Outputs](xref:Lucene.Net.Util.Fst.Outputs) representation
+*   [N-shortest-paths](xref:Lucene.Net.Util.Fst.Util#methods) search by
        weight
-*   Enumerators ([](xref:Lucene.Net.Util.Fst.IntsRefFSTEnum IntsRef) and [](xref:Lucene.Net.Util.Fst.BytesRefFSTEnum BytesRef)) that behave like {@link java.util.SortedMap SortedMap} iterators
+*   Enumerators ([IntsRef](xref:Lucene.Net.Util.Fst.IntsRefFSTEnum) and [BytesRef](xref:Lucene.Net.Util.Fst.BytesRefFSTEnum)) that behave like `SortedMap` iterators
 
 FST Construction example:
 
diff --git a/src/Lucene.Net/Util/Packed/package.md b/src/Lucene.Net/Util/Packed/package.md
index 24ac142..9898a4f 100644
--- a/src/Lucene.Net/Util/Packed/package.md
+++ b/src/Lucene.Net/Util/Packed/package.md
@@ -19,53 +19,53 @@
 
 The packed package provides sequential and random access capable arrays of positive longs, as well as routines for efficient serialization and deserialization of streams of packed integers. The implementations provide different trade-offs between memory usage and access speed. The standard usage scenario is replacing large int or long arrays in order to reduce the memory footprint. 
 
- The main access point is the [](xref:Lucene.Net.Util.Packed.PackedInts) factory. 
+ The main access point is the <xref:Lucene.Net.Util.Packed.PackedInts> factory. 
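
For example, a small sketch of the factory in use (assuming the Lucene.NET port of `PackedInts`):

```csharp
using Lucene.Net.Util.Packed;

// A random-access array of 1000 values, each stored in 10 bits,
// preferring the most compact in-memory layout.
PackedInts.Mutable packed = PackedInts.GetMutable(1000, 10, PackedInts.COMPACT);
packed.Set(0, 517);
long value = packed.Get(0); // 517
```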
 
 ### In-memory structures
 
-*   **[](xref:Lucene.Net.Util.Packed.PackedInts.Mutable)**
+*   **<xref:Lucene.Net.Util.Packed.PackedInts.Mutable>**
 
     *   Only supports positive longs.
     *   Requires the number of bits per value to be known in advance.
     *   Random-access for both writing and reading.
-*   **[](xref:Lucene.Net.Util.Packed.GrowableWriter)**
+*   **<xref:Lucene.Net.Util.Packed.GrowableWriter>**
 
    *   Same as PackedInts.Mutable but grows the number of bits per value when needed.
     *   Useful to build a PackedInts.Mutable from a read-once stream of longs.
-*   **[](xref:Lucene.Net.Util.Packed.PagedGrowableWriter)**
+*   **<xref:Lucene.Net.Util.Packed.PagedGrowableWriter>**
 
     *   Slices data into fixed-size blocks stored in GrowableWriters.
     *   Supports more than 2B values.
     *   You should use Appending(Delta)PackedLongBuffer instead if you don't need random write access.
-*   **[](xref:Lucene.Net.Util.Packed.AppendingDeltaPackedLongBuffer)**
+*   **<xref:Lucene.Net.Util.Packed.AppendingDeltaPackedLongBuffer>**
 
     *   Can store any sequence of longs.
     *   Compression is good when values are close to each other.
     *   Supports random reads, but only sequential writes.
     *   Can address up to 2^42 values.
-*   **[](xref:Lucene.Net.Util.Packed.AppendingPackedLongBuffer)**
+*   **<xref:Lucene.Net.Util.Packed.AppendingPackedLongBuffer>**
 
     *   Same as AppendingDeltaPackedLongBuffer but assumes values are 0-based.
-*   **[](xref:Lucene.Net.Util.Packed.MonotonicAppendingLongBuffer)**
+*   **<xref:Lucene.Net.Util.Packed.MonotonicAppendingLongBuffer>**
 
     *   Same as AppendingDeltaPackedLongBuffer except that compression is good when the stream is a succession of affine functions.
 
 ### Disk-based structures
 
-*   **[](xref:Lucene.Net.Util.Packed.PackedInts.Writer), [](xref:Lucene.Net.Util.Packed.PackedInts.Reader), [](xref:Lucene.Net.Util.Packed.PackedInts.ReaderIterator)**
+*   **<xref:Lucene.Net.Util.Packed.PackedInts.Writer>, <xref:Lucene.Net.Util.Packed.PackedInts.Reader>, <xref:Lucene.Net.Util.Packed.PackedInts.ReaderIterator>**
 
     *   Only supports positive longs.
     *   Requires the number of bits per value to be known in advance.
     *   Supports both fast sequential access with low memory footprint with ReaderIterator and random-access by either loading values in memory or leaving them on disk with Reader.
-*   **[](xref:Lucene.Net.Util.Packed.BlockPackedWriter), [](xref:Lucene.Net.Util.Packed.BlockPackedReader), [](xref:Lucene.Net.Util.Packed.BlockPackedReaderIterator)**
+*   **<xref:Lucene.Net.Util.Packed.BlockPackedWriter>, <xref:Lucene.Net.Util.Packed.BlockPackedReader>, <xref:Lucene.Net.Util.Packed.BlockPackedReaderIterator>**
 
     *   Splits the stream into fixed-size blocks.
     *   Compression is good when values are close to each other.
     *   Can address up to 2B * blockSize values.
-*   **[](xref:Lucene.Net.Util.Packed.MonotonicBlockPackedWriter), [](xref:Lucene.Net.Util.Packed.MonotonicBlockPackedReader)**
+*   **<xref:Lucene.Net.Util.Packed.MonotonicBlockPackedWriter>, <xref:Lucene.Net.Util.Packed.MonotonicBlockPackedReader>**
 
     *   Same as the non-monotonic variants except that compression is good when the stream is a succession of affine functions.
     *   The reason why there is no sequential access is that if you need sequential access, you should rather delta-encode and use BlockPackedWriter.
-*   **[](xref:Lucene.Net.Util.Packed.PackedDataOutput), [](xref:Lucene.Net.Util.Packed.PackedDataInput)**
+*   **<xref:Lucene.Net.Util.Packed.PackedDataOutput>, <xref:Lucene.Net.Util.Packed.PackedDataInput>**
 
     *   Writes sequences of longs where each long can use any number of bits.
\ No newline at end of file
diff --git a/src/Lucene.Net/Util/package.md b/src/Lucene.Net/Util/package.md
index dcbfc19..e2f92e4 100644
--- a/src/Lucene.Net/Util/package.md
+++ b/src/Lucene.Net/Util/package.md
@@ -1,4 +1,9 @@
-
+---
+uid: Lucene.Net.Util
+summary: *content
+---
+
+
 <!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
diff --git a/src/Lucene.Net/overview.md b/src/Lucene.Net/overview.md
index 4237a1c..2d8ce08 100644
--- a/src/Lucene.Net/overview.md
+++ b/src/Lucene.Net/overview.md
@@ -1,4 +1,9 @@
-<!--
+---
+uid: Lucene.Net
+summary: *content
+---
+
+<!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
@@ -46,59 +51,59 @@ Apache Lucene is a high-performance, full-featured text search engine library. H
 
 The Lucene API is divided into several packages:
 
-*   **[](xref:Lucene.Net.Analysis)**
-defines an abstract [](xref:Lucene.Net.Analysis.Analyzer Analyzer)
+*   **<xref:Lucene.Net.Analysis>**
+defines an abstract [Analyzer](xref:Lucene.Net.Analysis.Analyzer)
 API for converting text from a `java.io.Reader`
-into a [](xref:Lucene.Net.Analysis.TokenStream TokenStream),
-an enumeration of token [](xref:Lucene.Net.Util.Attribute Attribute)s. 
-A TokenStream can be composed by applying [](xref:Lucene.Net.Analysis.TokenFilter TokenFilter)s
-to the output of a [](xref:Lucene.Net.Analysis.Tokenizer Tokenizer). 
-Tokenizers and TokenFilters are strung together and applied with an [](xref:Lucene.Net.Analysis.Analyzer Analyzer). 
+into a [TokenStream](xref:Lucene.Net.Analysis.TokenStream),
+an enumeration of token [Attribute](xref:Lucene.Net.Util.Attribute)s. 
+A TokenStream can be composed by applying [TokenFilter](xref:Lucene.Net.Analysis.TokenFilter)s
+to the output of a [Tokenizer](xref:Lucene.Net.Analysis.Tokenizer). 
+Tokenizers and TokenFilters are strung together and applied with an [Analyzer](xref:Lucene.Net.Analysis.Analyzer). 
 [analyzers-common](../analyzers-common/overview-summary.html) provides a number of Analyzer implementations, including 
 [StopAnalyzer](../analyzers-common/org/apache/lucene/analysis/core/StopAnalyzer.html)
 and the grammar-based [StandardAnalyzer](../analyzers-common/org/apache/lucene/analysis/standard/StandardAnalyzer.html).
-*   **[](xref:Lucene.Net.Codecs)**
+*   **<xref:Lucene.Net.Codecs>**
 provides an abstraction over the encoding and decoding of the inverted index structure,
 as well as different implementations that can be chosen depending upon application needs.
 
-    **[](xref:Lucene.Net.Documents)**
-provides a simple [](xref:Lucene.Net.Documents.Document Document)
-class.  A Document is simply a set of named [](xref:Lucene.Net.Documents.Field Field)s,
+    **<xref:Lucene.Net.Documents>**
+provides a simple [Document](xref:Lucene.Net.Documents.Document)
+class.  A Document is simply a set of named [Field](xref:Lucene.Net.Documents.Field)s,
 whose values may be strings or instances of `java.io.Reader`.
-*   **[](xref:Lucene.Net.Index)**
-provides two primary classes: [](xref:Lucene.Net.Index.IndexWriter IndexWriter),
-which creates and adds documents to indices; and [](xref:Lucene.Net.Index.IndexReader),
+*   **<xref:Lucene.Net.Index>**
+provides two primary classes: [IndexWriter](xref:Lucene.Net.Index.IndexWriter),
+which creates and adds documents to indices; and <xref:Lucene.Net.Index.IndexReader>,
 which accesses the data in the index.
-*   **[](xref:Lucene.Net.Search)**
-provides data structures to represent queries (ie [](xref:Lucene.Net.Search.TermQuery TermQuery)
-for individual words, [](xref:Lucene.Net.Search.PhraseQuery PhraseQuery) 
-for phrases, and [](xref:Lucene.Net.Search.BooleanQuery BooleanQuery) 
-for boolean combinations of queries) and the [](xref:Lucene.Net.Search.IndexSearcher IndexSearcher)
-which turns queries into [](xref:Lucene.Net.Search.TopDocs TopDocs).
+*   **<xref:Lucene.Net.Search>**
+provides data structures to represent queries (i.e. [TermQuery](xref:Lucene.Net.Search.TermQuery)
+for individual words, [PhraseQuery](xref:Lucene.Net.Search.PhraseQuery) 
+for phrases, and [BooleanQuery](xref:Lucene.Net.Search.BooleanQuery) 
+for boolean combinations of queries) and the [IndexSearcher](xref:Lucene.Net.Search.IndexSearcher)
+which turns queries into [TopDocs](xref:Lucene.Net.Search.TopDocs).
 A number of [QueryParser](../queryparser/overview-summary.html)s are provided for producing
 query structures from strings or xml.
 
-    **[](xref:Lucene.Net.Store)**
-defines an abstract class for storing persistent data, the [](xref:Lucene.Net.Store.Directory Directory),
-which is a collection of named files written by an [](xref:Lucene.Net.Store.IndexOutput IndexOutput)
-and read by an [](xref:Lucene.Net.Store.IndexInput IndexInput). 
-Multiple implementations are provided, including [](xref:Lucene.Net.Store.FSDirectory FSDirectory),
-which uses a file system directory to store files, and [](xref:Lucene.Net.Store.RAMDirectory RAMDirectory)
+    **<xref:Lucene.Net.Store>**
+defines an abstract class for storing persistent data, the [Directory](xref:Lucene.Net.Store.Directory),
+which is a collection of named files written by an [IndexOutput](xref:Lucene.Net.Store.IndexOutput)
+and read by an [IndexInput](xref:Lucene.Net.Store.IndexInput). 
+Multiple implementations are provided, including [FSDirectory](xref:Lucene.Net.Store.FSDirectory),
+which uses a file system directory to store files, and [RAMDirectory](xref:Lucene.Net.Store.RAMDirectory)
 which implements files as memory-resident data structures.
-*   **[](xref:Lucene.Net.Util)**
-contains a few handy data structures and util classes, ie [](xref:Lucene.Net.Util.OpenBitSet OpenBitSet)
-and [](xref:Lucene.Net.Util.PriorityQueue PriorityQueue).
+*   **<xref:Lucene.Net.Util>**
+contains a few handy data structures and utility classes, i.e. [OpenBitSet](xref:Lucene.Net.Util.OpenBitSet)
+and [PriorityQueue](xref:Lucene.Net.Util.PriorityQueue).
 To use Lucene, an application should:
 
-1.  Create [](xref:Lucene.Net.Documents.Document Document)s by
+1.  Create [Document](xref:Lucene.Net.Documents.Document)s by
 adding
-[](xref:Lucene.Net.Documents.Field Field)s;
-2.  Create an [](xref:Lucene.Net.Index.IndexWriter IndexWriter)
-and add documents to it with [](xref:Lucene.Net.Index.IndexWriter.AddDocument(Iterable) addDocument());
+[Field](xref:Lucene.Net.Documents.Field)s;
+2.  Create an [IndexWriter](xref:Lucene.Net.Index.IndexWriter)
+and add documents to it with [AddDocument](xref:Lucene.Net.Index.IndexWriter#methods);
 3.  Call [QueryParser.parse()](../queryparser/org/apache/lucene/queryparser/classic/QueryParserBase.html#parse(java.lang.String))
 to build a query from a string; and
-4.  Create an [](xref:Lucene.Net.Search.IndexSearcher IndexSearcher)
-and pass the query to its [](xref:Lucene.Net.Search.IndexSearcher.Search(Lucene.Net.Search.Query, int) search())
+4.  Create an [IndexSearcher](xref:Lucene.Net.Search.IndexSearcher)
+and pass the query to its [Search](xref:Lucene.Net.Search.IndexSearcher#methods)
 method.
 Some simple examples of code which does this are:
 
@@ -106,36 +111,3 @@ Some simple examples of code which does this are:
 index for all the files contained in a directory.
 *    [SearchFiles.java](../demo/src-html/org/apache/lucene/demo/SearchFiles.html) prompts for
 queries and searches an index.
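
For a rough idea of what those four steps look like in C#, here is a minimal sketch (assuming Lucene.NET 4.8 APIs; names, field values and the query string are illustrative):

```csharp
using Lucene.Net.Analysis.Standard;
using Lucene.Net.Documents;
using Lucene.Net.Index;
using Lucene.Net.QueryParsers.Classic;
using Lucene.Net.Search;
using Lucene.Net.Store;
using Lucene.Net.Util;

var analyzer = new StandardAnalyzer(LuceneVersion.LUCENE_48);
using var directory = new RAMDirectory();

// 1-2. Create a Document with Fields and add it via an IndexWriter.
using (var writer = new IndexWriter(directory,
    new IndexWriterConfig(LuceneVersion.LUCENE_48, analyzer)))
{
    var doc = new Document();
    doc.Add(new TextField("body", "the quick brown fox", Field.Store.YES));
    writer.AddDocument(doc);
}

// 3. Build a query from a string.
var parser = new QueryParser(LuceneVersion.LUCENE_48, "body", analyzer);
Query query = parser.Parse("quick AND fox");

// 4. Search the index.
using var reader = DirectoryReader.Open(directory);
var searcher = new IndexSearcher(reader);
TopDocs hits = searcher.Search(query, 10);
```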
-To demonstrate these, try something like:
-
-> <tt>> **java -cp lucene-core.jar:lucene-demo.jar:lucene-analyzers-common.jar org.apache.lucene.demo.IndexFiles -index index -docs rec.food.recipes/soups**</tt>
-> 
-> <tt>adding rec.food.recipes/soups/abalone-chowder</tt>
-> 
-> <tt>  </tt>[ ... ]
-> 
-> <tt>> **java -cp lucene-core.jar:lucene-demo.jar:lucene-queryparser.jar:lucene-analyzers-common.jar org.apache.lucene.demo.SearchFiles**</tt>
-> 
-> <tt>Query: **chowder**</tt>
-> 
-> <tt>Searching for: chowder</tt>
-> 
-> <tt>34 total matching documents</tt>
-> 
-> <tt>1. rec.food.recipes/soups/spam-chowder</tt>
-> 
-> <tt>  </tt>[ ... thirty-four documents contain the word "chowder" ... ]
-> 
-> <tt>Query: **"clam chowder" AND Manhattan**</tt>
-> 
-> <tt>Searching for: +"clam chowder" +manhattan</tt>
-> 
-> <tt>2 total matching documents</tt>
-> 
-> <tt>1. rec.food.recipes/soups/clam-chowder</tt>
-> 
-> <tt>  </tt>[ ... two documents contain the phrase "clam chowder"
-> and the word "manhattan" ... ]
-> 
->     [ Note: "+" and "-" are canonical, but "AND", "OR"
-> and "NOT" may be used. ]
\ No newline at end of file
diff --git a/src/docs/LuceneDocsPlugins/LuceneDocsPlugins.sln b/src/docs/LuceneDocsPlugins/LuceneDocsPlugins.sln
new file mode 100644
index 0000000..99a8fbb
--- /dev/null
+++ b/src/docs/LuceneDocsPlugins/LuceneDocsPlugins.sln
@@ -0,0 +1,22 @@
+
+Microsoft Visual Studio Solution File, Format Version 12.00
+# Visual Studio 15
+VisualStudioVersion = 15.0.26430.14
+MinimumVisualStudioVersion = 10.0.40219.1
+Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "LuceneDocsPlugins", "LuceneDocsPlugins\LuceneDocsPlugins.csproj", "{D5D1C256-4A5A-4C57-949D-E9A1FFB6A5D1}"
+EndProject
+Global
+	GlobalSection(SolutionConfigurationPlatforms) = preSolution
+		Debug|Any CPU = Debug|Any CPU
+		Release|Any CPU = Release|Any CPU
+	EndGlobalSection
+	GlobalSection(ProjectConfigurationPlatforms) = postSolution
+		{D5D1C256-4A5A-4C57-949D-E9A1FFB6A5D1}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
+		{D5D1C256-4A5A-4C57-949D-E9A1FFB6A5D1}.Debug|Any CPU.Build.0 = Debug|Any CPU
+		{D5D1C256-4A5A-4C57-949D-E9A1FFB6A5D1}.Release|Any CPU.ActiveCfg = Release|Any CPU
+		{D5D1C256-4A5A-4C57-949D-E9A1FFB6A5D1}.Release|Any CPU.Build.0 = Release|Any CPU
+	EndGlobalSection
+	GlobalSection(SolutionProperties) = preSolution
+		HideSolutionNode = FALSE
+	EndGlobalSection
+EndGlobal
diff --git a/src/docs/LuceneDocsPlugins/LuceneDocsPlugins/LuceneDfmEngineCustomizer.cs b/src/docs/LuceneDocsPlugins/LuceneDocsPlugins/LuceneDfmEngineCustomizer.cs
new file mode 100644
index 0000000..3da552a
--- /dev/null
+++ b/src/docs/LuceneDocsPlugins/LuceneDocsPlugins/LuceneDfmEngineCustomizer.cs
@@ -0,0 +1,27 @@
+using System;
+using System.Collections.Generic;
+using System.Composition;
+using System.Diagnostics;
+using System.Linq;
+using System.Text;
+using System.Threading.Tasks;
+using Microsoft.DocAsCode.Common;
+using Microsoft.DocAsCode.Dfm;
+using Microsoft.DocAsCode.MarkdownLite;
+using Microsoft.DocAsCode.MarkdownLite.Matchers;
+
+namespace LuceneDocsPlugins
+{
+    /// <summary>
+    /// Exports our custom markdown parser via MEF to DocFx
+    /// </summary>
+    [Export(typeof(IDfmEngineCustomizer))]
+    public class LuceneDfmEngineCustomizer : IDfmEngineCustomizer
+    {
+        public void Customize(DfmEngineBuilder builder, IReadOnlyDictionary<string, object> parameters)
+        {
+            var index = builder.BlockRules.FindIndex(r => r is MarkdownHeadingBlockRule);
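+            // Place our rule just before the standard heading rule so "@lucene.*" lines are matched first.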
+            builder.BlockRules = builder.BlockRules.Insert(index, new LuceneNoteBlockRule());
+        }
+    }
+}
diff --git a/src/docs/LuceneDocsPlugins/LuceneDocsPlugins/LuceneDocsPlugins.csproj b/src/docs/LuceneDocsPlugins/LuceneDocsPlugins/LuceneDocsPlugins.csproj
new file mode 100644
index 0000000..1c1892e
--- /dev/null
+++ b/src/docs/LuceneDocsPlugins/LuceneDocsPlugins/LuceneDocsPlugins.csproj
@@ -0,0 +1,107 @@
+<?xml version="1.0" encoding="utf-8"?>
+<Project ToolsVersion="15.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
+  <Import Project="..\packages\Microsoft.Net.Compilers.2.2.0\build\Microsoft.Net.Compilers.props" Condition="Exists('..\packages\Microsoft.Net.Compilers.2.2.0\build\Microsoft.Net.Compilers.props')" />
+  <Import Project="$(MSBuildExtensionsPath)\$(MSBuildToolsVersion)\Microsoft.Common.props" Condition="Exists('$(MSBuildExtensionsPath)\$(MSBuildToolsVersion)\Microsoft.Common.props')" />
+  <PropertyGroup>
+    <Configuration Condition=" '$(Configuration)' == '' ">Debug</Configuration>
+    <Platform Condition=" '$(Platform)' == '' ">AnyCPU</Platform>
+    <ProjectGuid>{D5D1C256-4A5A-4C57-949D-E9A1FFB6A5D1}</ProjectGuid>
+    <OutputType>Library</OutputType>
+    <AppDesignerFolder>Properties</AppDesignerFolder>
+    <RootNamespace>LuceneDocsPlugins</RootNamespace>
+    <AssemblyName>LuceneDocsPlugins</AssemblyName>
+    <TargetFrameworkVersion>v4.6.1</TargetFrameworkVersion>
+    <FileAlignment>512</FileAlignment>
+    <TargetFrameworkProfile />
+    <NuGetPackageImportStamp>
+    </NuGetPackageImportStamp>
+  </PropertyGroup>
+  <PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Debug|AnyCPU' ">
+    <DebugSymbols>true</DebugSymbols>
+    <DebugType>full</DebugType>
+    <Optimize>false</Optimize>
+    <OutputPath>..\..\..\..\websites\apidocs\lucenetemplate\plugins\</OutputPath>
+    <DefineConstants>DEBUG;TRACE</DefineConstants>
+    <ErrorReport>prompt</ErrorReport>
+    <WarningLevel>4</WarningLevel>
+  </PropertyGroup>
+  <PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Release|AnyCPU' ">
+    <DebugType>pdbonly</DebugType>
+    <Optimize>true</Optimize>
+    <OutputPath>bin\Release\</OutputPath>
+    <DefineConstants>TRACE</DefineConstants>
+    <ErrorReport>prompt</ErrorReport>
+    <WarningLevel>4</WarningLevel>
+  </PropertyGroup>
+  <ItemGroup>
+    <Reference Include="HtmlAgilityPack, Version=1.4.9.0, Culture=neutral, PublicKeyToken=bd319b19eaf3b43a, processorArchitecture=MSIL">
+      <HintPath>..\packages\HtmlAgilityPack.1.4.9\lib\Net45\HtmlAgilityPack.dll</HintPath>
+    </Reference>
+    <Reference Include="Microsoft.DocAsCode.Common, Version=2.24.0.0, Culture=neutral, processorArchitecture=MSIL">
+      <HintPath>..\packages\Microsoft.DocAsCode.Common.2.24.0\lib\net461\Microsoft.DocAsCode.Common.dll</HintPath>
+    </Reference>
+    <Reference Include="Microsoft.DocAsCode.Dfm, Version=2.24.0.0, Culture=neutral, processorArchitecture=MSIL">
+      <HintPath>..\packages\Microsoft.DocAsCode.Dfm.2.24.0\lib\net461\Microsoft.DocAsCode.Dfm.dll</HintPath>
+    </Reference>
+    <Reference Include="Microsoft.DocAsCode.MarkdownLite, Version=2.24.0.0, Culture=neutral, processorArchitecture=MSIL">
+      <HintPath>..\packages\Microsoft.DocAsCode.MarkdownLite.2.24.0\lib\net461\Microsoft.DocAsCode.MarkdownLite.dll</HintPath>
+    </Reference>
+    <Reference Include="Microsoft.DocAsCode.Plugins, Version=2.24.0.0, Culture=neutral, processorArchitecture=MSIL">
+      <HintPath>..\packages\Microsoft.DocAsCode.Plugins.2.24.0\lib\net461\Microsoft.DocAsCode.Plugins.dll</HintPath>
+    </Reference>
+    <Reference Include="Microsoft.DocAsCode.YamlSerialization, Version=2.24.0.0, Culture=neutral, processorArchitecture=MSIL">
+      <HintPath>..\packages\Microsoft.DocAsCode.YamlSerialization.2.24.0\lib\net461\Microsoft.DocAsCode.YamlSerialization.dll</HintPath>
+    </Reference>
+    <Reference Include="Newtonsoft.Json, Version=9.0.0.0, Culture=neutral, PublicKeyToken=30ad4fe6b2a6aeed, processorArchitecture=MSIL">
+      <HintPath>..\packages\Newtonsoft.Json.9.0.1\lib\net45\Newtonsoft.Json.dll</HintPath>
+    </Reference>
+    <Reference Include="System" />
+    <Reference Include="System.Collections.Immutable, Version=1.2.1.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a, processorArchitecture=MSIL">
+      <HintPath>..\packages\System.Collections.Immutable.1.3.1\lib\portable-net45+win8+wp8+wpa81\System.Collections.Immutable.dll</HintPath>
+      <Private>True</Private>
+    </Reference>
+    <Reference Include="System.Composition.AttributedModel, Version=1.0.31.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a, processorArchitecture=MSIL">
+      <HintPath>..\packages\System.Composition.AttributedModel.1.0.31\lib\portable-net45+win8+wp8+wpa81\System.Composition.AttributedModel.dll</HintPath>
+    </Reference>
+    <Reference Include="System.Composition.Convention, Version=1.0.31.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a, processorArchitecture=MSIL">
+      <HintPath>..\packages\System.Composition.Convention.1.0.31\lib\portable-net45+win8+wp8+wpa81\System.Composition.Convention.dll</HintPath>
+    </Reference>
+    <Reference Include="System.Composition.Hosting, Version=1.0.31.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a, processorArchitecture=MSIL">
+      <HintPath>..\packages\System.Composition.Hosting.1.0.31\lib\portable-net45+win8+wp8+wpa81\System.Composition.Hosting.dll</HintPath>
+    </Reference>
+    <Reference Include="System.Composition.Runtime, Version=1.0.31.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a, processorArchitecture=MSIL">
+      <HintPath>..\packages\System.Composition.Runtime.1.0.31\lib\portable-net45+win8+wp8+wpa81\System.Composition.Runtime.dll</HintPath>
+    </Reference>
+    <Reference Include="System.Composition.TypedParts, Version=1.0.31.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a, processorArchitecture=MSIL">
+      <HintPath>..\packages\System.Composition.TypedParts.1.0.31\lib\portable-net45+win8+wp8+wpa81\System.Composition.TypedParts.dll</HintPath>
+    </Reference>
+    <Reference Include="System.Core" />
+    <Reference Include="System.Xml.Linq" />
+    <Reference Include="System.Data.DataSetExtensions" />
+    <Reference Include="Microsoft.CSharp" />
+    <Reference Include="System.Data" />
+    <Reference Include="System.Net.Http" />
+    <Reference Include="System.Xml" />
+    <Reference Include="YamlDotNet, Version=4.1.0.0, Culture=neutral, PublicKeyToken=ec19458f3c15af5e, processorArchitecture=MSIL">
+      <HintPath>..\packages\YamlDotNet.Signed.4.1.0\lib\net35\YamlDotNet.dll</HintPath>
+    </Reference>
+  </ItemGroup>
+  <ItemGroup>
+    <Compile Include="LuceneDfmEngineCustomizer.cs" />
+    <Compile Include="LuceneNoteBlockRule.cs" />
+    <Compile Include="LuceneNoteBlockToken.cs" />
+    <Compile Include="LuceneRendererPartProvider.cs" />
+    <Compile Include="LuceneTokenRendererPart.cs" />
+    <Compile Include="Properties\AssemblyInfo.cs" />
+  </ItemGroup>
+  <ItemGroup>
+    <None Include="packages.config" />
+  </ItemGroup>
+  <Import Project="$(MSBuildToolsPath)\Microsoft.CSharp.targets" />
+  <Target Name="EnsureNuGetPackageBuildImports" BeforeTargets="PrepareForBuild">
+    <PropertyGroup>
+      <ErrorText>This project references NuGet package(s) that are missing on this computer. Use NuGet Package Restore to download them.  For more information, see http://go.microsoft.com/fwlink/?LinkID=322105. The missing file is {0}.</ErrorText>
+    </PropertyGroup>
+    <Error Condition="!Exists('..\packages\Microsoft.Net.Compilers.2.2.0\build\Microsoft.Net.Compilers.props')" Text="$([System.String]::Format('$(ErrorText)', '..\packages\Microsoft.Net.Compilers.2.2.0\build\Microsoft.Net.Compilers.props'))" />
+  </Target>
+</Project>
\ No newline at end of file
diff --git a/src/docs/LuceneDocsPlugins/LuceneDocsPlugins/LuceneNoteBlockRule.cs b/src/docs/LuceneDocsPlugins/LuceneDocsPlugins/LuceneNoteBlockRule.cs
new file mode 100644
index 0000000..28af227
--- /dev/null
+++ b/src/docs/LuceneDocsPlugins/LuceneDocsPlugins/LuceneNoteBlockRule.cs
@@ -0,0 +1,26 @@
+using System.Text.RegularExpressions;
+using Microsoft.DocAsCode.MarkdownLite;
+
+namespace LuceneDocsPlugins
+{
+    /// <summary>
+    /// The regex rule to parse out the custom Lucene tokens
+    /// </summary>
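+    /// <remarks>
+    /// For example (illustrative): a markdown block consisting of exactly "@lucene.experimental" matches
+    /// and is emitted as a <see cref="LuceneNoteBlockToken"/> with the label "experimental".
+    /// </remarks>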
+    public class LuceneNoteBlockRule : IMarkdownRule
+    {       
+        public virtual Regex LabelRegex { get; } = new Regex("^@lucene\\.(?<notetype>(experimental|internal))$");
+
+        public virtual IMarkdownToken TryMatch(IMarkdownParser parser, IMarkdownParsingContext context)
+        {
+            var match = LabelRegex.Match(context.CurrentMarkdown);
+            if (match.Length == 0)
+            {
+                return null;
+            }
+            var sourceInfo = context.Consume(match.Length);
+            return new LuceneNoteBlockToken(this, parser.Context, match.Groups[1].Value, sourceInfo);
+        }
+
+        public virtual string Name => "LuceneNote";
+    }
+}
\ No newline at end of file
diff --git a/src/docs/LuceneDocsPlugins/LuceneDocsPlugins/LuceneNoteBlockToken.cs b/src/docs/LuceneDocsPlugins/LuceneDocsPlugins/LuceneNoteBlockToken.cs
new file mode 100644
index 0000000..310ac80
--- /dev/null
+++ b/src/docs/LuceneDocsPlugins/LuceneDocsPlugins/LuceneNoteBlockToken.cs
@@ -0,0 +1,26 @@
+using Microsoft.DocAsCode.MarkdownLite;
+
+namespace LuceneDocsPlugins
+{
+    /// <summary>
+    /// A token class representing our custom Lucene tokens
+    /// </summary>
+    public class LuceneNoteBlockToken : IMarkdownToken
+    {
+        public LuceneNoteBlockToken(IMarkdownRule rule, IMarkdownContext context, string label, SourceInfo sourceInfo)
+        {
+            Rule = rule;
+            Context = context;
+            Label = label;
+            SourceInfo = sourceInfo;
+        }
+
+        public IMarkdownRule Rule { get; }
+
+        public IMarkdownContext Context { get; }
+
+        public string Label { get; }
+
+        public SourceInfo SourceInfo { get; }
+    }
+}
\ No newline at end of file
diff --git a/src/docs/LuceneDocsPlugins/LuceneDocsPlugins/LuceneRendererPartProvider.cs b/src/docs/LuceneDocsPlugins/LuceneDocsPlugins/LuceneRendererPartProvider.cs
new file mode 100644
index 0000000..5ca2407
--- /dev/null
+++ b/src/docs/LuceneDocsPlugins/LuceneDocsPlugins/LuceneRendererPartProvider.cs
@@ -0,0 +1,18 @@
+using System.Collections.Generic;
+using System.Composition;
+using Microsoft.DocAsCode.Dfm;
+
+namespace LuceneDocsPlugins
+{
+    /// <summary>
+    /// Exports our custom renderer via MEF to DocFx
+    /// </summary>
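+    /// <remarks>
+    /// DocFx discovers exported <see cref="IDfmCustomizedRendererPartProvider"/> parts via MEF once the
+    /// plugin assembly is deployed with the site template (typically in its "plugins" folder).
+    /// </remarks>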
+    [Export(typeof(IDfmCustomizedRendererPartProvider))]
+    public class LuceneRendererPartProvider : IDfmCustomizedRendererPartProvider
+    {
+        public IEnumerable<IDfmCustomizedRendererPart> CreateParts(IReadOnlyDictionary<string, object> parameters)
+        {
+            yield return new LuceneTokenRendererPart();
+        }
+    }
+}
\ No newline at end of file
diff --git a/src/docs/LuceneDocsPlugins/LuceneDocsPlugins/LuceneTokenRendererPart.cs b/src/docs/LuceneDocsPlugins/LuceneDocsPlugins/LuceneTokenRendererPart.cs
new file mode 100644
index 0000000..19140fe
--- /dev/null
+++ b/src/docs/LuceneDocsPlugins/LuceneDocsPlugins/LuceneTokenRendererPart.cs
@@ -0,0 +1,20 @@
+using Microsoft.DocAsCode.Dfm;
+using Microsoft.DocAsCode.MarkdownLite;
+
+namespace LuceneDocsPlugins
+{
+    /// <summary>
+    /// Replaces our custom Lucene tokens with HTML markup
+    /// </summary>
+    public sealed class LuceneTokenRendererPart : DfmCustomizedRendererPartBase<IMarkdownRenderer, LuceneNoteBlockToken, MarkdownBlockContext>
+    {
+        private const string Message = "This is a Lucene.NET {0} API, use at your own risk";
+
+        public override string Name => "LuceneTokenRendererPart";
+
+        public override bool Match(IMarkdownRenderer renderer, LuceneNoteBlockToken token, MarkdownBlockContext context) => true;
+
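+        // For example (illustrative), a token with Label "experimental" renders as:
+        // <div class="lucene-block lucene-experimental">This is a Lucene.NET EXPERIMENTAL API, use at your own risk</div>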
+        public override StringBuffer Render(IMarkdownRenderer renderer, LuceneNoteBlockToken token, MarkdownBlockContext context) 
+            => "<div class=\"lucene-block lucene-" + token.Label.ToLower() + "\">" + string.Format(Message, token.Label.ToUpper()) + "</div>";
+    }
+}
\ No newline at end of file
diff --git a/src/docs/LuceneDocsPlugins/LuceneDocsPlugins/Properties/AssemblyInfo.cs b/src/docs/LuceneDocsPlugins/LuceneDocsPlugins/Properties/AssemblyInfo.cs
new file mode 100644
index 0000000..7c430c6
--- /dev/null
+++ b/src/docs/LuceneDocsPlugins/LuceneDocsPlugins/Properties/AssemblyInfo.cs
@@ -0,0 +1,36 @@
+using System.Reflection;
+using System.Runtime.CompilerServices;
+using System.Runtime.InteropServices;
+
+// General Information about an assembly is controlled through the following
+// set of attributes. Change these attribute values to modify the information
+// associated with an assembly.
+[assembly: AssemblyTitle("LuceneDocsPlugins")]
+[assembly: AssemblyDescription("")]
+[assembly: AssemblyConfiguration("")]
+[assembly: AssemblyCompany("")]
+[assembly: AssemblyProduct("LuceneDocsPlugins")]
+[assembly: AssemblyCopyright("Copyright ©  2017")]
+[assembly: AssemblyTrademark("")]
+[assembly: AssemblyCulture("")]
+
+// Setting ComVisible to false makes the types in this assembly not visible
+// to COM components.  If you need to access a type in this assembly from
+// COM, set the ComVisible attribute to true on that type.
+[assembly: ComVisible(false)]
+
+// The following GUID is for the ID of the typelib if this project is exposed to COM
+[assembly: Guid("d5d1c256-4a5a-4c57-949d-e9a1ffb6a5d1")]
+
+// Version information for an assembly consists of the following four values:
+//
+//      Major Version
+//      Minor Version
+//      Build Number
+//      Revision
+//
+// You can specify all the values or you can default the Build and Revision Numbers
+// by using the '*' as shown below:
+// [assembly: AssemblyVersion("1.0.*")]
+[assembly: AssemblyVersion("1.0.0.0")]
+[assembly: AssemblyFileVersion("1.0.0.0")]
diff --git a/src/docs/LuceneDocsPlugins/LuceneDocsPlugins/packages.config b/src/docs/LuceneDocsPlugins/LuceneDocsPlugins/packages.config
new file mode 100644
index 0000000..848589b
--- /dev/null
+++ b/src/docs/LuceneDocsPlugins/LuceneDocsPlugins/packages.config
@@ -0,0 +1,20 @@
+<?xml version="1.0" encoding="utf-8"?>
+<packages>
+  <package id="HtmlAgilityPack" version="1.4.9" targetFramework="net46" />
+  <package id="Microsoft.Composition" version="1.0.31" targetFramework="net461" />
+  <package id="Microsoft.DocAsCode.Common" version="2.24.0" targetFramework="net461" />
+  <package id="Microsoft.DocAsCode.Dfm" version="2.24.0" targetFramework="net461" />
+  <package id="Microsoft.DocAsCode.MarkdownLite" version="2.24.0" targetFramework="net461" />
+  <package id="Microsoft.DocAsCode.Plugins" version="2.24.0" targetFramework="net461" />
+  <package id="Microsoft.DocAsCode.YamlSerialization" version="2.24.0" targetFramework="net461" />
+  <package id="Microsoft.Net.Compilers" version="2.2.0" targetFramework="net46" developmentDependency="true" />
+  <package id="Newtonsoft.Json" version="9.0.1" targetFramework="net46" />
+  <package id="System.Collections.Immutable" version="1.3.1" targetFramework="net46" />
+  <package id="System.Composition" version="1.0.31" targetFramework="net461" />
+  <package id="System.Composition.AttributedModel" version="1.0.31" targetFramework="net461" />
+  <package id="System.Composition.Convention" version="1.0.31" targetFramework="net461" />
+  <package id="System.Composition.Hosting" version="1.0.31" targetFramework="net461" />
+  <package id="System.Composition.Runtime" version="1.0.31" targetFramework="net461" />
+  <package id="System.Composition.TypedParts" version="1.0.31" targetFramework="net461" />
+  <package id="YamlDotNet.Signed" version="4.1.0" targetFramework="net46" />
+</packages>
\ No newline at end of file
diff --git a/src/docs/readme.md b/src/docs/readme.md
new file mode 100644
index 0000000..e69de29
diff --git a/src/dotnet/tools/JavaDocToMarkdownConverter/JavaDocToMarkdownConverter/App.config b/src/dotnet/tools/JavaDocToMarkdownConverter/JavaDocToMarkdownConverter/App.config
index 8587218..1b44f2e 100644
--- a/src/dotnet/tools/JavaDocToMarkdownConverter/JavaDocToMarkdownConverter/App.config
+++ b/src/dotnet/tools/JavaDocToMarkdownConverter/JavaDocToMarkdownConverter/App.config
@@ -37,6 +37,14 @@
         <assemblyIdentity name="System.IO.FileSystem.Primitives" publicKeyToken="b03f5f7f11d50a3a" culture="neutral" />
         <bindingRedirect oldVersion="0.0.0.0-4.0.2.0" newVersion="4.0.2.0" />
       </dependentAssembly>
+      <dependentAssembly>
+        <assemblyIdentity name="HtmlAgilityPack" publicKeyToken="bd319b19eaf3b43a" culture="neutral" />
+        <bindingRedirect oldVersion="0.0.0.0-1.5.0.0" newVersion="1.5.0.0" />
+      </dependentAssembly>
+      <dependentAssembly>
+        <assemblyIdentity name="System.Net.Http" publicKeyToken="b03f5f7f11d50a3a" culture="neutral" />
+        <bindingRedirect oldVersion="0.0.0.0-4.1.1.0" newVersion="4.1.1.0" />
+      </dependentAssembly>
     </assemblyBinding>
   </runtime>
 </configuration>
diff --git a/src/dotnet/tools/JavaDocToMarkdownConverter/JavaDocToMarkdownConverter/DocConverter.cs b/src/dotnet/tools/JavaDocToMarkdownConverter/JavaDocToMarkdownConverter/DocConverter.cs
index badf7a1..2fbbad7 100644
--- a/src/dotnet/tools/JavaDocToMarkdownConverter/JavaDocToMarkdownConverter/DocConverter.cs
+++ b/src/dotnet/tools/JavaDocToMarkdownConverter/JavaDocToMarkdownConverter/DocConverter.cs
@@ -1,5 +1,8 @@
-using System;
+//using JavaDocToMarkdownConverter.MarkdownParsers;
+using JavaDocToMarkdownConverter.Formatters;
+using System;
 using System.Collections.Generic;
+using System.Globalization;
 using System.IO;
 using System.Linq;
 using System.Text;
@@ -8,6 +11,7 @@ using System.Threading.Tasks;
 
 namespace JavaDocToMarkdownConverter
 {
+
     /*
      * Licensed to the Apache Software Foundation (ASF) under one or more
      * contributor license agreements.  See the NOTICE file distributed with
@@ -26,12 +30,7 @@ namespace JavaDocToMarkdownConverter
      */
 
     public class DocConverter
-    {
-        private static Regex LinkRegex = new Regex(@"{@link\s*?(?<cref>org\.apache\.lucene\.[^}]*)\s?(?<text>[^}]*)}", RegexOptions.Compiled);
-        private static Regex RepoLinkRegex = new Regex(@"(?<=\()(?<cref>src-html/[^)]*)", RegexOptions.Compiled);
-
-        private static Regex JavaCodeExtension = new Regex(@".java$", RegexOptions.Compiled);
-        private static Regex DocType = new Regex(@"<!doctype[^>]*>", RegexOptions.Compiled);
+    {   
 
         /// <summary>
         /// 
@@ -76,186 +75,141 @@ namespace JavaDocToMarkdownConverter
             var converter = new Html2Markdown.Converter();
             var markdown = converter.ConvertFile(inputDoc);
 
-            markdown = ReplaceCodeLinks(markdown);
-            markdown = ReplaceRepoLinks(markdown);
+            var ns = ExtractNamespaceFromFile(outputFile);
+            if (NamespaceFileMappings.TryGetValue(ns, out var realNs))
+                ns = realNs;
 
-            // Remove <doctype>
-            markdown = DocType.Replace(markdown, string.Empty);
-
-            File.WriteAllText(outputFile, markdown, Encoding.UTF8);
-        }
-
-        private string ReplaceCodeLinks(string markdown)
-        {
-            Match link = LinkRegex.Match(markdown);
-            if (link.Success)
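+            //run the general replacers (code links, repo links, doctype, stray html) over the converted markdown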
+            foreach (var r in JavaDocFormatters.Replacers)
             {
-                do
-                {
-                    string cref = CorrectCRef(link.Groups["cref"].Value);
-                    string newLink;
-                    if (!string.IsNullOrWhiteSpace(link.Groups["text"].Value))
-                    {
-                        string linkText = link.Groups[2].Value;
-                        linkText = JavaCodeExtension.Replace(linkText, ".cs");
-                        //newLink = "<see cref=\"" + cref + "\">" + linkText + "</see>";
-                        newLink = "[" + linkText + "](xref:" + cref + ")";
-                    }
-                    else
-                    {
-                        //newLink = "<see cref=\"" + cref + "\"/>";
-                        newLink = "[](xref:" + cref + ")";
-                    }
-
-                    markdown = LinkRegex.Replace(markdown, newLink, 1);
-
-
-                } while ((link = LinkRegex.Match(markdown)).Success);
+                markdown = r.Replace(markdown);
             }
-
-            return markdown;
-        }
-
-        //https://github.com/apache/lucenenet/blob/Lucene.Net_4_8_0_beta00004/src/Lucene.Net.Analysis.Common/Analysis/Ar/ArabicAnalyzer.cs
-        private string ReplaceRepoLinks(string markdown)
-        {
-            Match link = RepoLinkRegex.Match(markdown);
-            if (link.Success)
+            if (JavaDocFormatters.CustomReplacers.TryGetValue(ns, out var cr))
             {
-                do
-                {
-                    string cref = CorrectRepoCRef(link.Groups["cref"].Value);
-                    cref = "https://github.com/apache/lucenenet/blob/{tag}/src/" + cref;
-
-                    markdown = RepoLinkRegex.Replace(markdown, cref, 1);
+                markdown = cr.Replace(markdown);
+            }
 
+            var appendYamlHeader = ShouldAppendYamlHeader(inputDoc, ns);
 
-                } while ((link = RepoLinkRegex.Match(markdown)).Success);
-            }
+            var fileContent = appendYamlHeader ? AppendYamlHeader(ns, markdown) : markdown;
 
-            return markdown;
+            File.WriteAllText(outputFile, fileContent, Encoding.UTF8);
         }
 
-        private IDictionary<string, string> packageToProjectName = new Dictionary<string, string>()
-        {
-            { "analysis.common" , "Lucene.Net.Analysis.Common"},
-            { "analysis.icu" , "Lucene.Net.Analysis.ICU"},
-            { "analysis.kuromoji" , "Lucene.Net.Analysis.Kuromoji"},
-            { "analysis.morfologik" , "Lucene.Net.Analysis.Morfologik"},
-            { "analysis.phonetic" , "Lucene.Net.Analysis.Phonetic"},
-            { "analysis.smartcn" , "Lucene.Net.Analysis.SmartCn"},
-            { "analysis.stempel" , "Lucene.Net.Analysis.Stempel"},
-            { "analysis.uima" , "Lucene.Net.Analysis.UIMA"},
-            { "benchmark" , "Lucene.Net.Benchmark"},
-            { "classification" , "Lucene.Net.Classification"},
-            { "codecs" , "Lucene.Net.Codecs"},
-            { "core" , "Lucene.Net"},
-            { "demo" , "Lucene.Net.Demo"},
-            { "expressions" , "Lucene.Net.Expressions"},
-            { "facet" , "Lucene.Net.Facet"},
-            { "grouping" , "Lucene.Net.Grouping"},
-            { "highlighter" , "Lucene.Net.Highlighter"},
-            { "join" , "Lucene.Net.Join"},
-            { "memory" , "Lucene.Net.Memory"},
-            { "misc" , "Lucene.Net.Misc"},
-            { "queries" , "Lucene.Net.Queries"},
-            { "queryparser" , "Lucene.Net.QueryParser"},
-            { "replicator" , "Lucene.Net.Replicator"},
-            { "sandbox" , "Lucene.Net.Sandbox"},
-            { "spatial" , "Lucene.Net.Spatial"},
-            { "suggest" , "Lucene.Net.Suggest"},
-            { "test-framework" , "Lucene.Net.TestFramework"},
-        };
-
-        private string CorrectRepoCRef(string cref)
+        /// <summary>
+        /// YAML headers are normally applied to "overview" files, but in some cases it is the equivalent "package" file 
+        /// that contains the documentation we want
+        /// </summary>
+        /// <param name="inputFile"></param>
+        /// <param name="ns"></param>
+        /// <returns></returns>
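+        /// <remarks>
+        /// For example (illustrative): for "Lucene.Net.Facet" the header goes on the package file, while for a
+        /// namespace not listed in <see cref="YamlHeadersForPackageFiles"/> it goes on the overview file.
+        /// </remarks>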
+        private bool ShouldAppendYamlHeader(string inputFile, string ns)
         {
-            string temp = cref;
-            if (temp.StartsWith("src-html"))
+            var fileName = Path.GetFileNameWithoutExtension(inputFile); //should be either "overview" or "package"
+            if (string.Equals("overview", fileName, StringComparison.InvariantCultureIgnoreCase))
             {
-                temp = temp.Replace("src-html/", "");
-            }
-
-            temp = temp.Replace("/", ".");
-            temp = temp.Replace(".html", ".cs");
-
-            var segments = temp.Split('.');
+                if (YamlHeadersForPackageFiles.Contains(ns, StringComparer.InvariantCultureIgnoreCase)) 
+                    return false; //don't append the yaml header for this overview file; it will be put on the package file instead
 
-            if (temp.StartsWith("analysis"))
-            {
-                string project;
-                if (packageToProjectName.TryGetValue(segments[3] + "." + segments[4], out project))
-                    temp = project + "/" + string.Join("/", segments.Skip(5).ToArray());
+                return true; //the default for 'overview' files
             }
-            else
+            else if (string.Equals("package", fileName, StringComparison.InvariantCultureIgnoreCase))
             {
-                string project;
-                if (packageToProjectName.TryGetValue(segments[3], out project))
-                    temp = project + "/" + string.Join("/", segments.Skip(4).ToArray());
+                if (YamlHeadersForPackageFiles.Contains(ns, StringComparer.InvariantCultureIgnoreCase))
+                    return true;
             }
 
-            temp = CorrectCRefCase(temp);
-            foreach (var item in namespaceCorrections)
+            return false;
+        }
+
+        /// <summary>
+        /// For these namespaces we'll use the package.md file instead of the overview.md file as the doc source
+        /// </summary>
+        private static readonly List<string> YamlHeadersForPackageFiles = new List<string>
             {
-                if (!item.Key.StartsWith("Lucene.Net"))
-                    temp = temp.Replace(item.Key, item.Value);
-            }
+                "Lucene.Net.Analysis.SmartCn",
+                "Lucene.Net.Facet",
+                "Lucene.Net.Grouping",
+                "Lucene.Net.Join",
+                "Lucene.Net.Index.Memory",
+                "Lucene.Net.Replicator",
+                "Lucene.Net.QueryParsers.Classic",
+                "Lucene.Net.Codecs",
+                "Lucene.Net.Analysis",
+                "Lucene.Net.Codecs.Compressing",
+                "Lucene.Net.Codecs.Lucene3x",
+                "Lucene.Net.Codecs.Lucene40",
+                "Lucene.Net.Codecs.Lucene41",
+                "Lucene.Net.Codecs.Lucene42",
+                "Lucene.Net.Codecs.Lucene45",
+                "Lucene.Net.Codecs.Lucene46",
+                "Lucene.Net.Documents",
+                "Lucene.Net.Index",
+                "Lucene.Net.Search",
+                "Lucene.Net.Search.Payloads",
+                "Lucene.Net.Search.Similarities",
+                "Lucene.Net.Search.Spans",
+                "Lucene.Net.Store",
+                "Lucene.Net.Util",
+            };
 
-            temp = Regex.Replace(temp, "/[Cc]s", ".cs");
+        /// <summary>
+        /// Maps the folder-derived namespace names to the actual namespaces
+        /// </summary>
+        private static readonly Dictionary<string, string> NamespaceFileMappings = new Dictionary<string, string>(StringComparer.InvariantCultureIgnoreCase)
+        { 
+            ["Lucene.Net.Memory"] = "Lucene.Net.Index.Memory",
+            ["Lucene.Net.QueryParser.Classic"] = "Lucene.Net.QueryParsers.Classic",
+            ["Lucene.Net.Document"] = "Lucene.Net.Documents",
+        };
 
-            return temp;
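+        /// <summary>
+        /// Builds the DocFx YAML header; for example (illustrative), for the namespace "Lucene.Net.Facet"
+        /// the prepended header is "---", "uid: Lucene.Net.Facet", "summary: *content", "---".
+        /// </summary>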
+        private string AppendYamlHeader(string ns, string fileContent)
+        {
+            var sb = new StringBuilder();
+            sb.AppendLine("---");
+            sb.Append("uid: ");
+            if (NamespaceFileMappings.TryGetValue(ns, out var realNs))
+                sb.AppendLine(realNs);
+            else
+                sb.AppendLine(ns);            
+            sb.AppendLine("summary: *content");
+            sb.AppendLine("---");
+            sb.AppendLine();
+
+            return sb + fileContent;
         }
 
-        private string CorrectCRef(string cref)
+        /// <summary>
+        /// Normally the files would live in a folder named after their namespace, but this isn't always the case, so we need to infer the namespace from the folder path
+        /// </summary>
+        /// <param name="outputFile"></param>
+        /// <returns></returns>
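+        /// <remarks>
+        /// For example (illustrative): an output path ending in "Lucene.Net.Analysis.Common\Analysis\Ar"
+        /// yields "Lucene.Net.Analysis.Common.Analysis.Ar" (each folder part split on '.' and title-cased).
+        /// </remarks>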
+        private string ExtractNamespaceFromFile(string outputFile)
         {
-            var caseCorrected = CorrectCRefCase(cref);
-            var temp = caseCorrected.Replace("org.Apache.Lucene.", "Lucene.Net.");
-            foreach (var item in namespaceCorrections)
-            {
-                temp = temp.Replace(item.Key, item.Value);
-            }
+            var folder = Path.GetDirectoryName(outputFile);
+            var folderParts = folder.Split(Path.DirectorySeparatorChar);
 
-            int index = temp.IndexOf('#');
-            if (index > -1)
+            var index = folderParts.Length - 1;
+            for (int i = index; i >= 0; i--)
             {
-                var sb = new StringBuilder(temp);
-                // special case - capitalize char after #
-                sb[index + 1] = char.ToUpperInvariant(sb[index + 1]);
-                // special case - replace Java # with .
-                temp = sb.ToString().Replace('#', '.');
+                if (folderParts[i].StartsWith("Lucene.Net", StringComparison.InvariantCultureIgnoreCase))
+                {
+                    index = i;
+                    break;
+                }
             }
 
-            return temp;
-        }
-
-        private IDictionary<string, string> namespaceCorrections = new Dictionary<string, string>()
-        {
-            { "Lucene.Net.Document", "Lucene.Net.Documents" },
-            { "Lucene.Net.Benchmark", "Lucene.Net.Benchmarks" },
-            { "Lucene.Net.Queryparser", "Lucene.Net.QueryParsers" },
-            { ".Tokenattributes", ".TokenAttributes" },
-            { ".Charfilter", ".CharFilter" },
-            { ".Commongrams", ".CommonGrams" },
-            { ".Ngram", ".NGram" },
-            { ".Hhmm", ".HHMM" },
-            { ".Blockterms", ".BlockTerms" },
-            { ".Diskdv", ".DiskDV" },
-            { ".Intblock", ".IntBlock" },
-            { ".Simpletext", ".SimpleText" },
-            { ".Postingshighlight", ".PostingsHighlight" },
-            { ".Vectorhighlight", ".VectorHighlight" },
-            { ".Complexphrase", ".ComplexPhrase" },
-            { ".Valuesource", ".ValueSources" },
-        };
-
-        private string CorrectCRefCase(string cref)
-        {
-            var sb = new StringBuilder(cref);
-            for (int i = 0; i < sb.Length - 1; i++)
+            var nsParts = new List<string>();
+            for (var i = index; i < folderParts.Length; i++)
             {
-                if (sb[i] == '.')
-                    sb[i + 1] = char.ToUpper(sb[i + 1]);
+                var innerParts = folderParts[i].Split('.');
+                foreach (var innerPart in innerParts)
+                {
+                    nsParts.Add(innerPart);
+                }
             }
-            return sb.ToString();
+                                    
+            var textInfo = new CultureInfo("en-US", false).TextInfo;
+            return string.Join(".", nsParts.Select(x => textInfo.ToTitleCase(x)).ToArray());
         }
 
 
diff --git a/src/dotnet/tools/JavaDocToMarkdownConverter/JavaDocToMarkdownConverter/Formatters/CodeLinkReplacer.cs b/src/dotnet/tools/JavaDocToMarkdownConverter/JavaDocToMarkdownConverter/Formatters/CodeLinkReplacer.cs
new file mode 100644
index 0000000..0d3cce2
--- /dev/null
+++ b/src/dotnet/tools/JavaDocToMarkdownConverter/JavaDocToMarkdownConverter/Formatters/CodeLinkReplacer.cs
@@ -0,0 +1,98 @@
+using Html2Markdown.Replacement;
+using System.Globalization;
+using System.Text.RegularExpressions;
+
+namespace JavaDocToMarkdownConverter.Formatters
+{
+
+    //TODO: This could instead be done with the LuceneDocsPlugins and our custom markdown parsing
+
+    public class CodeLinkReplacer : IReplacer
+    {
+        private static readonly Regex LinkRegex = new Regex(@"{@link\s*?(?<cref>org\.apache\.lucene\.[\w\.]*)\s?(?<text>[^}]*)}", RegexOptions.Compiled);
+        private static readonly Regex JavaCodeExtension = new Regex(@".java$", RegexOptions.Compiled);
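+
+        // Example (illustrative): "{@link org.apache.lucene.analysis.Analyzer}" becomes "<xref:Lucene.Net.Analysis.Analyzer>",
+        // while "{@link org.apache.lucene.analysis.Analyzer the analyzer}" becomes "[the analyzer](xref:Lucene.Net.Analysis.Analyzer)".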
+
+        public string Replace(string html)
+        {
+            return ReplaceCodeLinks(html);
+        }
+
+        private string ReplaceCodeLinks(string markdown)
+        {
+            Match link = LinkRegex.Match(markdown);
+            if (link.Success)
+            {
+                do
+                {
+                    string cref = link.Groups["cref"].Value.CorrectCRef();
+                    string newLink;
+
+                    //see https://dotnet.github.io/docfx/spec/docfx_flavored_markdown.html?tabs=tabid-1%2Ctabid-a#cross-reference 
+                    //for xref syntax support
+
+                    var text = link.Groups["text"].Value;
+                    
+                    if (HasLinkText(text, cref, out var methodName, out var methodLink))
+                    {
+                        if (string.IsNullOrWhiteSpace(methodName))
+                        {
+                            //string linkText = link.Groups[2].Value;
+                            var linkText = JavaCodeExtension.Replace(text, ".cs");
+                            newLink = "[" + linkText + "](xref:" + cref + ")";
+                        }
+                        else
+                        {
+                            newLink = "[" + methodName + "](xref:" + cref + "#" + methodLink + ")";
+                        }                        
+                    }
+                    else
+                    {
+                        newLink = "<xref:" + cref + ">";
+                    }
+
+                    markdown = LinkRegex.Replace(markdown, newLink, 1);
+
+
+                } while ((link = LinkRegex.Match(markdown)).Success);
+            }
+
+            return markdown;
+        }
+
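+        // Example (illustrative): for link text "#next() next" on cref "Lucene.Net.Index.TermsEnum", methodName
+        // becomes "Next" and link becomes "methods", producing "[Next](xref:Lucene.Net.Index.TermsEnum#methods)".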
+        private bool HasLinkText(string text, string cref, out string methodName, out string link)
+        {
+            methodName = null;
+            link = null;
+            if (!string.IsNullOrWhiteSpace(text))
+            {
+                if (text.Contains("#"))
+                {
+                    var lastSpace = text.LastIndexOf(' ');
+                    if (lastSpace >= 0)
+                    {
+                        methodName = text.Substring(lastSpace + 1);
+                        var lastBracket = methodName.LastIndexOf('(');
+                        if (lastBracket >= 0)
+                            methodName = methodName.Substring(0, lastBracket);
+                        if (char.IsLower(methodName[0]))
+                            methodName = char.ToUpper(methodName[0]) + methodName.Substring(1);
+
+                        link = text.Substring(1, lastSpace - 1).CorrectCRef();
+                        if (char.IsLower(link[0]))
+                            link = char.ToUpper(link[0]) + link.Substring(1);
+
+                        //the method link needs to be the fully qualified member name delimited by _.
+                        //HOWEVER, there's no way to make this work: the Lucene parameters are simple names like `iterator`, while
+                        //docfx method links require fully qualified types like System_Collections_Generic_IEnumerable_Lucene_Net_Index_IIndexableField,
+                        //and we don't have that information to extract. The best we can do is deep link to the #methods section of the class.
+                        //link = $"{string.Join("_", cref.Split('.'))}_{methodName}";
+
+                        link = "methods";
+                    }
+                }
+                return true;
+            }
+            return false;
+        }
+    }
+}
diff --git a/src/dotnet/tools/JavaDocToMarkdownConverter/JavaDocToMarkdownConverter/Formatters/DocTypeReplacer.cs b/src/dotnet/tools/JavaDocToMarkdownConverter/JavaDocToMarkdownConverter/Formatters/DocTypeReplacer.cs
new file mode 100644
index 0000000..d488cbf
--- /dev/null
+++ b/src/dotnet/tools/JavaDocToMarkdownConverter/JavaDocToMarkdownConverter/Formatters/DocTypeReplacer.cs
@@ -0,0 +1,17 @@
+using Html2Markdown.Replacement;
+using System.Text.RegularExpressions;
+
+namespace JavaDocToMarkdownConverter.Formatters
+{
+
+    public class DocTypeReplacer : IReplacer
+    {
+        private static readonly Regex DocType = new Regex(@"<!doctype[^>]*>", RegexOptions.Compiled);
+
+        public string Replace(string html)
+        {
+            return DocType.Replace(html, string.Empty);
+
+        }
+    }
+}
diff --git a/src/dotnet/tools/JavaDocToMarkdownConverter/JavaDocToMarkdownConverter/Formatters/ExtraHtmlElementReplacer.cs b/src/dotnet/tools/JavaDocToMarkdownConverter/JavaDocToMarkdownConverter/Formatters/ExtraHtmlElementReplacer.cs
new file mode 100644
index 0000000..25677f7
--- /dev/null
+++ b/src/dotnet/tools/JavaDocToMarkdownConverter/JavaDocToMarkdownConverter/Formatters/ExtraHtmlElementReplacer.cs
@@ -0,0 +1,61 @@
+using Html2Markdown.Replacement;
+using HtmlAgilityPack;
+using System.IO;
+using System.Text;
+using System.Text.RegularExpressions;
+
+namespace JavaDocToMarkdownConverter.Formatters
+{
+    /// <summary>
+    /// The normal markdown converter misses a few HTML elements for some reason, so this strips them out
+    /// </summary>
+    public class ExtraHtmlElementReplacer : IReplacer
+    {
+        private static readonly Regex MetaTag = new Regex(@"<meta\s+.*?>", RegexOptions.Compiled | RegexOptions.IgnoreCase);
+        private static readonly Regex TitleTag = new Regex(@"<title>.*?</title>", RegexOptions.Compiled | RegexOptions.Singleline | RegexOptions.IgnoreCase);
+        private static readonly Regex HeadTag = new Regex(@"<head>.*?</head>", RegexOptions.Compiled | RegexOptions.Singleline | RegexOptions.IgnoreCase);
+        private static readonly Regex BodyStart = new Regex(@"<body>", RegexOptions.Compiled | RegexOptions.IgnoreCase);
+        private static readonly Regex BodyEnd = new Regex(@"</body>", RegexOptions.Compiled | RegexOptions.IgnoreCase);
+        private static readonly Regex HtmlStart = new Regex(@"<html>", RegexOptions.Compiled | RegexOptions.IgnoreCase);
+        private static readonly Regex HtmlEnd = new Regex(@"</html>", RegexOptions.Compiled | RegexOptions.IgnoreCase);
+
+        public string Replace(string html)
+        {
+            foreach (var r in new[] { MetaTag, TitleTag, HeadTag, BodyStart, BodyEnd, HtmlStart, HtmlEnd })
+            {
+                html = r.Replace(html, string.Empty);
+            }
+
+            return html;
+            
+
+            //var htmlDoc = new HtmlDocument();
+            //using (var input = new StringReader(html))
+            //{
+            //    htmlDoc.Load(input);
+            //}
+
+            //foreach(var e in HtmlElements)
+            //{
+            //    RemoveElements(htmlDoc, e);
+            //}
+
+            //var sb = new StringBuilder();
+            //using (var output = new StringWriter(sb))
+            //{
+            //    htmlDoc.Save(output);
+            //}
+            //return sb.ToString();
+        }
+
+        //private void RemoveElements(HtmlDocument htmlDoc, string elementMatch)
+        //{
+        //    var metaTags = htmlDoc.DocumentNode.SelectNodes(elementMatch);
+        //    if (metaTags == null) return;
+        //    foreach (var m in metaTags)
+        //    {
+        //        m.ParentNode.RemoveChild(m);
+        //    }
+        //}
+    }
+}
diff --git a/src/dotnet/tools/JavaDocToMarkdownConverter/JavaDocToMarkdownConverter/Formatters/IReplacer.cs b/src/dotnet/tools/JavaDocToMarkdownConverter/JavaDocToMarkdownConverter/Formatters/IReplacer.cs
new file mode 100644
index 0000000..79d89ad
--- /dev/null
+++ b/src/dotnet/tools/JavaDocToMarkdownConverter/JavaDocToMarkdownConverter/Formatters/IReplacer.cs
@@ -0,0 +1,10 @@
+namespace JavaDocToMarkdownConverter.Formatters
+{
+
+    //This interface is exposed in newer versions of Html2Markdown, but those versions don't parse correctly, so we
+    //remain on our current version and define the abstraction ourselves.
+    public interface IReplacer
+    {
+        string Replace(string html);
+    }
+}
diff --git a/src/dotnet/tools/JavaDocToMarkdownConverter/JavaDocToMarkdownConverter/Formatters/JavaDocFormatters.cs b/src/dotnet/tools/JavaDocToMarkdownConverter/JavaDocToMarkdownConverter/Formatters/JavaDocFormatters.cs
new file mode 100644
index 0000000..c3fbd10
--- /dev/null
+++ b/src/dotnet/tools/JavaDocToMarkdownConverter/JavaDocToMarkdownConverter/Formatters/JavaDocFormatters.cs
@@ -0,0 +1,25 @@
+using System;
+using System.Collections.Generic;
+
+namespace JavaDocToMarkdownConverter.Formatters
+{
+    
+    public class JavaDocFormatters
+    {
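+        /// <summary>
+        /// The replacers to run, in order, over each converted file (note: each access allocates fresh instances)
+        /// </summary>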
+        public static IEnumerable<IReplacer> Replacers => new IReplacer[]
+            {
+                new CodeLinkReplacer(),
+                new RepoLinkReplacer(),
+                new DocTypeReplacer(),
+                new ExtraHtmlElementReplacer()
+            };
+
+        /// <summary>
+        /// Custom replacers applied only to files with specific uids, keyed by uid
+        /// </summary>
+        public static IDictionary<string, IReplacer> CustomReplacers => new Dictionary<string, IReplacer>(StringComparer.InvariantCultureIgnoreCase)
+        {
+            ["Lucene.Net"] = new PatternReplacer(new System.Text.RegularExpressions.Regex("To demonstrate these, try something like:.*$", System.Text.RegularExpressions.RegexOptions.Singleline))
+        };
+    }
+}
diff --git a/src/dotnet/tools/JavaDocToMarkdownConverter/JavaDocToMarkdownConverter/Formatters/PatternReplacer.cs b/src/dotnet/tools/JavaDocToMarkdownConverter/JavaDocToMarkdownConverter/Formatters/PatternReplacer.cs
new file mode 100644
index 0000000..27a7e35
--- /dev/null
+++ b/src/dotnet/tools/JavaDocToMarkdownConverter/JavaDocToMarkdownConverter/Formatters/PatternReplacer.cs
@@ -0,0 +1,21 @@
+using System.Text.RegularExpressions;
+
+namespace JavaDocToMarkdownConverter.Formatters
+{
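+    /// <summary>
+    /// Replaces every match of the supplied regex with the given replacement (the match is removed when the
+    /// replacement is null)
+    /// </summary>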
+    public class PatternReplacer : IReplacer
+    {
+        private readonly Regex pattern;
+        private readonly string replacement;
+
+        public PatternReplacer(Regex pattern, string replacement = null)
+        {
+            this.pattern = pattern;
+            this.replacement = replacement;
+        }
+
+        public string Replace(string html)
+        {
+            return pattern.Replace(html, replacement ?? string.Empty);
+        }
+    }
+}
diff --git a/src/dotnet/tools/JavaDocToMarkdownConverter/JavaDocToMarkdownConverter/Formatters/RepoLinkReplacer.cs b/src/dotnet/tools/JavaDocToMarkdownConverter/JavaDocToMarkdownConverter/Formatters/RepoLinkReplacer.cs
new file mode 100644
index 0000000..24ff096
--- /dev/null
+++ b/src/dotnet/tools/JavaDocToMarkdownConverter/JavaDocToMarkdownConverter/Formatters/RepoLinkReplacer.cs
@@ -0,0 +1,39 @@
+using Html2Markdown.Replacement;
+using System.Text.RegularExpressions;
+
+namespace JavaDocToMarkdownConverter.Formatters
+{
+
+    //TODO: This could instead be done with the LuceneDocsPlugins and our custom markdown parsing
+    //TODO: We need to pass in a tag here
+
+    public class RepoLinkReplacer : IReplacer
+    {
+        private static readonly Regex RepoLinkRegex = new Regex(@"(?<=\()(?<cref>src-html/[^)]*)", RegexOptions.Compiled);
+
+        public string Replace(string html)
+        {
+            return ReplaceRepoLinks(html);
+        }
+
+        //https://github.com/apache/lucenenet/blob/Lucene.Net_4_8_0_beta00004/src/Lucene.Net.Analysis.Common/Analysis/Ar/ArabicAnalyzer.cs
+        private string ReplaceRepoLinks(string markdown)
+        {
+            Match link = RepoLinkRegex.Match(markdown);
+            if (link.Success)
+            {
+                do
+                {
+                    string cref = link.Groups["cref"].Value.CorrectRepoCRef();
+                    cref = "https://github.com/apache/lucenenet/blob/{tag}/src/" + cref;
+
+                    markdown = RepoLinkRegex.Replace(markdown, cref, 1);
+
+
+                } while ((link = RepoLinkRegex.Match(markdown)).Success);
+            }
+
+            return markdown;
+        }
+    }
+}
diff --git a/src/dotnet/tools/JavaDocToMarkdownConverter/JavaDocToMarkdownConverter/JavaDocToMarkdownConverter.csproj b/src/dotnet/tools/JavaDocToMarkdownConverter/JavaDocToMarkdownConverter/JavaDocToMarkdownConverter.csproj
index 381aa73..3426d7c 100644
--- a/src/dotnet/tools/JavaDocToMarkdownConverter/JavaDocToMarkdownConverter/JavaDocToMarkdownConverter.csproj
+++ b/src/dotnet/tools/JavaDocToMarkdownConverter/JavaDocToMarkdownConverter/JavaDocToMarkdownConverter.csproj
@@ -122,9 +122,17 @@
     </Reference>
   </ItemGroup>
   <ItemGroup>
+    <Compile Include="Formatters\CodeLinkReplacer.cs" />
     <Compile Include="DocConverter.cs" />
+    <Compile Include="Formatters\DocTypeReplacer.cs" />
+    <Compile Include="Formatters\ExtraHtmlElementReplacer.cs" />
+    <Compile Include="Formatters\IReplacer.cs" />
+    <Compile Include="Formatters\JavaDocFormatters.cs" />
+    <Compile Include="Formatters\PatternReplacer.cs" />
+    <Compile Include="Formatters\RepoLinkReplacer.cs" />
     <Compile Include="Program.cs" />
     <Compile Include="Properties\AssemblyInfo.cs" />
+    <Compile Include="StringExtensions.cs" />
   </ItemGroup>
   <ItemGroup>
     <None Include="App.config" />
diff --git a/src/dotnet/tools/JavaDocToMarkdownConverter/JavaDocToMarkdownConverter/Program.cs b/src/dotnet/tools/JavaDocToMarkdownConverter/JavaDocToMarkdownConverter/Program.cs
index 2b87daf..975100d 100644
--- a/src/dotnet/tools/JavaDocToMarkdownConverter/JavaDocToMarkdownConverter/Program.cs
+++ b/src/dotnet/tools/JavaDocToMarkdownConverter/JavaDocToMarkdownConverter/Program.cs
@@ -1,11 +1,11 @@
 using System;
-using System.Collections.Generic;
 using System.Linq;
 using System.Text;
 using System.Threading.Tasks;
 
 namespace JavaDocToMarkdownConverter
 {
+
     /*
      * Licensed to the Apache Software Foundation (ASF) under one or more
      * contributor license agreements.  See the NOTICE file distributed with
diff --git a/src/dotnet/tools/JavaDocToMarkdownConverter/JavaDocToMarkdownConverter/StringExtensions.cs b/src/dotnet/tools/JavaDocToMarkdownConverter/JavaDocToMarkdownConverter/StringExtensions.cs
new file mode 100644
index 0000000..eeda50b
--- /dev/null
+++ b/src/dotnet/tools/JavaDocToMarkdownConverter/JavaDocToMarkdownConverter/StringExtensions.cs
@@ -0,0 +1,134 @@
+using System.Collections.Generic;
+using System.Linq;
+using System.Text;
+using System.Text.RegularExpressions;
+
+namespace JavaDocToMarkdownConverter
+{
+    public static class StringExtensions
+    {
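+        // Example (illustrative): "org.apache.lucene.analysis.Analyzer" becomes "Lucene.Net.Analysis.Analyzer".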
+        public static string CorrectCRef(this string cref)
+        {
+            var caseCorrected = CorrectCRefCase(cref);
+            var temp = caseCorrected.Replace("org.Apache.Lucene.", "Lucene.Net.");
+            foreach (var item in namespaceCorrections)
+            {
+                temp = temp.Replace(item.Key, item.Value);
+            }
+
+
+            //TODO: Not sure if this is necessary; the # delimits a method name and CodeLinkReplacer already takes care of it
+            int index = temp.IndexOf('#');
+            if (index > -1)
+            {
+                var sb = new StringBuilder(temp);
+                // special case - capitalize char after #
+                sb[index + 1] = char.ToUpperInvariant(sb[index + 1]);
+                // special case - replace Java # with .
+                temp = sb.ToString().Replace('#', '.');
+            }
+
+            return temp;
+        }
+
+        public static string CorrectCRefCase(this string cref)
+        {
+            var sb = new StringBuilder(cref);
+            for (int i = 0; i < sb.Length - 1; i++)
+            {
+                if (sb[i] == '.')
+                    sb[i + 1] = char.ToUpper(sb[i + 1]);
+            }
+            return sb.ToString();
+        }
+
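+        // Example (illustrative): "src-html/org/apache/lucene/facet/FacetsConfig.html" becomes
+        // "Lucene.Net.Facet/FacetsConfig.cs" (project name + repo-relative source path).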
+        public static string CorrectRepoCRef(this string cref)
+        {
+            string temp = cref;
+            if (temp.StartsWith("src-html"))
+            {
+                temp = temp.Replace("src-html/", "");
+            }
+
+            temp = temp.Replace("/", ".");
+            temp = temp.Replace(".html", ".cs");
+
+            var segments = temp.Split('.');
+
+            if (temp.StartsWith("analysis"))
+            {
+                string project;
+                if (packageToProjectName.TryGetValue(segments[3] + "." + segments[4], out project))
+                    temp = project + "/" + string.Join("/", segments.Skip(5).ToArray());
+            }
+            else
+            {
+                string project;
+                if (packageToProjectName.TryGetValue(segments[3], out project))
+                    temp = project + "/" + string.Join("/", segments.Skip(4).ToArray());
+            }
+
+            temp = CorrectCRefCase(temp);
+            foreach (var item in namespaceCorrections)
+            {
+                if (!item.Key.StartsWith("Lucene.Net"))
+                    temp = temp.Replace(item.Key, item.Value);
+            }
+
+            temp = Regex.Replace(temp, "/[Cc]s", ".cs");
+
+            return temp;
+        }
+
+        private static readonly IDictionary<string, string> packageToProjectName = new Dictionary<string, string>()
+        {
+            { "analysis.common" , "Lucene.Net.Analysis.Common"},
+            { "analysis.icu" , "Lucene.Net.Analysis.ICU"},
+            { "analysis.kuromoji" , "Lucene.Net.Analysis.Kuromoji"},
+            { "analysis.morfologik" , "Lucene.Net.Analysis.Morfologik"},
+            { "analysis.phonetic" , "Lucene.Net.Analysis.Phonetic"},
+            { "analysis.smartcn" , "Lucene.Net.Analysis.SmartCn"},
+            { "analysis.stempel" , "Lucene.Net.Analysis.Stempel"},
+            { "analysis.uima" , "Lucene.Net.Analysis.UIMA"},
+            { "benchmark" , "Lucene.Net.Benchmark"},
+            { "classification" , "Lucene.Net.Classification"},
+            { "codecs" , "Lucene.Net.Codecs"},
+            { "core" , "Lucene.Net"},
+            { "demo" , "Lucene.Net.Demo"},
+            { "expressions" , "Lucene.Net.Expressions"},
+            { "facet" , "Lucene.Net.Facet"},
+            { "grouping" , "Lucene.Net.Grouping"},
+            { "highlighter" , "Lucene.Net.Highlighter"},
+            { "join" , "Lucene.Net.Join"},
+            { "memory" , "Lucene.Net.Memory"},
+            { "misc" , "Lucene.Net.Misc"},
+            { "queries" , "Lucene.Net.Queries"},
+            { "queryparser" , "Lucene.Net.QueryParser"},
+            { "replicator" , "Lucene.Net.Replicator"},
+            { "sandbox" , "Lucene.Net.Sandbox"},
+            { "spatial" , "Lucene.Net.Spatial"},
+            { "suggest" , "Lucene.Net.Suggest"},
+            { "test-framework" , "Lucene.Net.TestFramework"},
+        };
+
+        private static readonly IDictionary<string, string> namespaceCorrections = new Dictionary<string, string>()
+        {
+            { "Lucene.Net.Document", "Lucene.Net.Documents" },
+            { "Lucene.Net.Benchmark", "Lucene.Net.Benchmarks" },
+            { "Lucene.Net.Queryparser", "Lucene.Net.QueryParsers" },
+            { ".Tokenattributes", ".TokenAttributes" },
+            { ".Charfilter", ".CharFilter" },
+            { ".Commongrams", ".CommonGrams" },
+            { ".Ngram", ".NGram" },
+            { ".Hhmm", ".HHMM" },
+            { ".Blockterms", ".BlockTerms" },
+            { ".Diskdv", ".DiskDV" },
+            { ".Intblock", ".IntBlock" },
+            { ".Simpletext", ".SimpleText" },
+            { ".Postingshighlight", ".PostingsHighlight" },
+            { ".Vectorhighlight", ".VectorHighlight" },
+            { ".Complexphrase", ".ComplexPhrase" },
+            { ".Valuesource", ".ValueSources" },
+        };
+    }
+}
diff --git a/src/dotnet/tools/lucene-cli/docs/analysis/toc.yml b/src/dotnet/tools/lucene-cli/docs/analysis/toc.yml
new file mode 100644
index 0000000..e6f5ab9
--- /dev/null
+++ b/src/dotnet/tools/lucene-cli/docs/analysis/toc.yml
@@ -0,0 +1,6 @@
+- name: kuromoji-build-dictionary
+  href: kuromoji-build-dictionary.md
+- name: stempel-compile-stems
+  href: stempel-compile-stems.md
+- name: stempel-patch-stems
+  href: stempel-patch-stems.md
\ No newline at end of file
diff --git a/src/dotnet/tools/lucene-cli/docs/benchmark/index.md b/src/dotnet/tools/lucene-cli/docs/benchmark/index.md
index 66d4e04..ecf7c4c 100644
--- a/src/dotnet/tools/lucene-cli/docs/benchmark/index.md
+++ b/src/dotnet/tools/lucene-cli/docs/benchmark/index.md
@@ -1,4 +1,9 @@
-# benchmark
+---
+uid: Lucene.Net.Cli.Benchmark
+summary: *content
+---
+
+# benchmark
 
 ## Description
 
diff --git a/src/dotnet/tools/lucene-cli/docs/benchmark/toc.yml b/src/dotnet/tools/lucene-cli/docs/benchmark/toc.yml
new file mode 100644
index 0000000..11df861
--- /dev/null
+++ b/src/dotnet/tools/lucene-cli/docs/benchmark/toc.yml
@@ -0,0 +1,12 @@
+- name: extract-reuters
+  href: extract-reuters.md
+- name: extract-wikipedia
+  href: extract-wikipedia.md
+- name: find-quality-queries
+  href: find-quality-queries.md
+- name: run-trec-eval
+  href: run-trec-eval.md
+- name: run
+  href: run.md
+- name: sample
+  href: sample.md
\ No newline at end of file
diff --git a/src/dotnet/tools/lucene-cli/docs/demo/toc.yml b/src/dotnet/tools/lucene-cli/docs/demo/toc.yml
new file mode 100644
index 0000000..69ab457
--- /dev/null
+++ b/src/dotnet/tools/lucene-cli/docs/demo/toc.yml
@@ -0,0 +1,18 @@
+- name: associations-facets
+  href: associations-facets.md
+- name: distance-facets
+  href: distance-facets.md
+- name: expression-aggregation-facets
+  href: expression-aggregation-facets.md
+- name: index-files
+  href: index-files.md
+- name: multi-category-lists-facets
+  href: multi-category-lists-facets.md
+- name: range-facets
+  href: range-facets.md
+- name: search-files
+  href: search-files.md
+- name: simple-facets
+  href: simple-facets.md
+- name: simple-sorted-set-facets
+  href: simple-sorted-set-facets.md
\ No newline at end of file
diff --git a/src/dotnet/tools/lucene-cli/docs/index/toc.yml b/src/dotnet/tools/lucene-cli/docs/index/toc.yml
new file mode 100644
index 0000000..7157307
--- /dev/null
+++ b/src/dotnet/tools/lucene-cli/docs/index/toc.yml
@@ -0,0 +1,26 @@
+- name: check
+  href: check.md
+- name: copy-segments
+  href: copy-segments.md
+- name: delete-segments
+  href: delete-segments.md
+- name: extract-cfs
+  href: extract-cfs.md
+- name: fix
+  href: fix.md
+- name: list-cfs
+  href: list-cfs.md
+- name: list-high-freq-terms
+  href: list-high-freq-terms.md
+- name: list-segments
+  href: list-segments.md
+- name: list-taxonomy-stats
+  href: list-taxonomy-stats.md
+- name: list-term-info
+  href: list-term-info.md
+- name: merge
+  href: merge.md  
+- name: split
+  href: split.md  
+- name: upgrade
+  href: upgrade.md
\ No newline at end of file
diff --git a/src/dotnet/tools/lucene-cli/docs/lock/toc.yml b/src/dotnet/tools/lucene-cli/docs/lock/toc.yml
new file mode 100644
index 0000000..badb9d4
--- /dev/null
+++ b/src/dotnet/tools/lucene-cli/docs/lock/toc.yml
@@ -0,0 +1,4 @@
+- name: stress-test
+  href: stress-test.md
+- name: verify-server
+  href: verify-server.md
\ No newline at end of file
diff --git a/src/dotnet/tools/lucene-cli/docs/toc.yml b/src/dotnet/tools/lucene-cli/docs/toc.yml
new file mode 100644
index 0000000..c3fe7d4
--- /dev/null
+++ b/src/dotnet/tools/lucene-cli/docs/toc.yml
@@ -0,0 +1,15 @@
+- name: Analysis
+  href: analysis/toc.yml
+  topicHref: analysis/index.md
+- name: Benchmark
+  href: benchmark/toc.yml
+  topicHref: benchmark/index.md
+- name: Index
+  href: index/toc.yml
+  topicHref: index/index.md
+- name: Lock
+  href: lock/toc.yml
+  topicHref: lock/index.md
+- name: Demo
+  href: demo/toc.yml
+  topicHref: demo/index.md
\ No newline at end of file
diff --git a/websites/apidocs/api/toc.yml b/websites/apidocs/api/toc.yml
new file mode 100644
index 0000000..574c41c
--- /dev/null
+++ b/websites/apidocs/api/toc.yml
@@ -0,0 +1,48 @@
+- name: Lucene.Net
+  href: ../obj/docfx/api/Lucene.Net/toc.yml
+  topicUid: Lucene.Net
+- name: Lucene.Net.Queries
+  href: ../obj/docfx/api/Lucene.Net.Queries/toc.yml
+  topicUid: Lucene.Net.Queries
+- name: Lucene.Net.Analysis
+  href: ../obj/docfx/api/Lucene.Net.Analysis/toc.yml
+  topicUid: Lucene.Net.Analysis
+- name: Lucene.Net.QueryParser
+  href: ../obj/docfx/api/Lucene.Net.QueryParser/toc.yml
+- name: Lucene.Net.Highlighter
+  href: ../obj/docfx/api/Lucene.Net.Highlighter/toc.yml
+  topicUid: Lucene.Net.Highlighter
+- name: Lucene.Net.Facet
+  href: ../obj/docfx/api/Lucene.Net.Facet/toc.yml
+  topicUid: Lucene.Net.Facet  
+- name: Lucene.Net.Classification
+  href: ../obj/docfx/api/Lucene.Net.Classification/toc.yml
+  topicUid: Lucene.Net.Classification
+- name: Lucene.Net.Expressions
+  href: ../obj/docfx/api/Lucene.Net.Expressions/toc.yml
+  topicUid: Lucene.Net.Expressions
+- name: Lucene.Net.Codecs
+  href: ../obj/docfx/api/Lucene.Net.Codecs/toc.yml
+  topicUid: Lucene.Net.Codecs
+- name: Lucene.Net.Join
+  href: ../obj/docfx/api/Lucene.Net.Join/toc.yml
+  topicUid: Lucene.Net.Join
+- name: Lucene.Net.Grouping
+  href: ../obj/docfx/api/Lucene.Net.Grouping/toc.yml
+  topicUid: Lucene.Net.Grouping
+- name: Lucene.Net.Suggest
+  href: ../obj/docfx/api/Lucene.Net.Suggest/toc.yml
+- name: Lucene.Net.Memory
+  href: ../obj/docfx/api/Lucene.Net.Memory/toc.yml
+- name: Lucene.Net.Spatial
+  href: ../obj/docfx/api/Lucene.Net.Spatial/toc.yml
+  topicUid: Lucene.Net.Spatial
+- name: Lucene.Net.Replicator
+  href: ../obj/docfx/api/Lucene.Net.Replicator/toc.yml
+  topicUid: Lucene.Net.Replicator
+- name: Lucene.Net.ICU
+  href: ../obj/docfx/api/Lucene.Net.ICU/toc.yml
+  topicUid: Lucene.Net.Analysis.ICU
+- name: Lucene.Net.Demo
+  href: ../obj/docfx/api/Lucene.Net.Demo/toc.yml
+  topicUid: Lucene.Net.Demo
\ No newline at end of file
diff --git a/websites/apidocs/docfx.json b/websites/apidocs/docfx.json
new file mode 100644
index 0000000..e011b78
--- /dev/null
+++ b/websites/apidocs/docfx.json
@@ -0,0 +1,330 @@
+{
+  "metadata": [
+    {
+      "src": [
+        {
+          "files": [
+            "Lucene.Net/Lucene.Net.csproj",
+            "Lucene.Net.Misc/Lucene.Net.Misc.csproj"      
+          ],
+          "exclude": ["**/obj/**", "**/bin/**", "**/Lucene.Net.Test*/**"],
+          "src": "../../src"
+        }
+      ],
+      "dest": "obj/docfx/api/Lucene.Net",
+      "properties": {
+          "TargetFramework": "netstandard1.6"
+      }
+    },
+    {
+      "src": [
+        {
+          "files": [
+            "Lucene.Net.Queries/Lucene.Net.Queries.csproj"
+          ],
+          "exclude": ["**/obj/**", "**/bin/**", "**/Lucene.Net.Test*/**"],
+          "src": "../../src"
+        }
+      ],
+      "dest": "obj/docfx/api/Lucene.Net.Queries",
+      "properties": {
+          "TargetFramework": "netstandard1.6"
+      }
+    },
+    {
+      "src": [
+        {
+          "files": [
+            "Lucene.Net.Facet/Lucene.Net.Facet.csproj"
+          ],
+          "exclude": ["**/obj/**", "**/bin/**", "**/Lucene.Net.Test*/**"],
+          "src": "../../src"
+        }
+      ],
+      "dest": "obj/docfx/api/Lucene.Net.Facet",
+      "properties": {
+          "TargetFramework": "netstandard1.6"
+      }
+    },
+    {
+      "src": [
+        {
+          "files": [
+            "Lucene.Net.Classification/Lucene.Net.Classification.csproj"
+          ],
+          "exclude": ["**/obj/**", "**/bin/**", "**/Lucene.Net.Test*/**"],
+          "src": "../../src"
+        }
+      ],
+      "dest": "obj/docfx/api/Lucene.Net.Classification",
+      "properties": {
+          "TargetFramework": "netstandard1.6"
+      }
+    },
+    {
+      "src": [
+        {
+          "files": [
+            "Lucene.Net.Expressions/Lucene.Net.Expressions.csproj"
+          ],
+          "exclude": ["**/obj/**", "**/bin/**", "**/Lucene.Net.Test*/**"],
+          "src": "../../src"
+        }
+      ],
+      "dest": "obj/docfx/api/Lucene.Net.Expressions",
+      "properties": {
+          "TargetFramework": "netstandard1.6"
+      }
+    },
+    {
+      "src": [
+        {
+          "files": [
+            "Lucene.Net.Codecs/Lucene.Net.Codecs.csproj"
+          ],
+          "exclude": ["**/obj/**", "**/bin/**", "**/Lucene.Net.Test*/**"],
+          "src": "../../src"
+        }
+      ],
+      "dest": "obj/docfx/api/Lucene.Net.Codecs",
+      "properties": {
+          "TargetFramework": "netstandard1.6"
+      }
+    },
+    {
+      "src": [
+        {
+          "files": [
+            "Lucene.Net.Join/Lucene.Net.Join.csproj"
+          ],
+          "exclude": ["**/obj/**", "**/bin/**", "**/Lucene.Net.Test*/**"],
+          "src": "../../src"
+        }
+      ],
+      "dest": "obj/docfx/api/Lucene.Net.Join",
+      "properties": {
+          "TargetFramework": "netstandard1.6"
+      }
+    },
+    {
+      "src": [
+        {
+          "files": [
+            "Lucene.Net.Grouping/Lucene.Net.Grouping.csproj"
+          ],
+          "exclude": ["**/obj/**", "**/bin/**", "**/Lucene.Net.Test*/**"],
+          "src": "../../src"
+      }
+      ],
+      "dest": "obj/docfx/api/Lucene.Net.Grouping",
+      "properties": {
+          "TargetFramework": "netstandard1.6"
+      }
+    },    
+    {
+      "src": [
+        {
+          "files": [
+            "Lucene.Net.Analysis.Common/Lucene.Net.Analysis.Common.csproj",
+            "Lucene.Net.Analysis.Stempel/Lucene.Net.Analysis.Stempel.csproj",
+            "Lucene.Net.Analysis.SmartCn/Lucene.Net.Analysis.SmartCn.csproj",
+            "Lucene.Net.Analysis.Phonetic/Lucene.Net.Analysis.Phonetic.csproj",
+            "Lucene.Net.Analysis.Kuromoji/Lucene.Net.Analysis.Kuromoji.csproj"
+          ],
+          "exclude": ["**/obj/**", "**/bin/**", "**/Lucene.Net.Test*/**"],
+          "src": "../../src"
+        }
+      ],
+      "dest": "obj/docfx/api/Lucene.Net.Analysis",
+      "properties": {
+          "TargetFramework": "netstandard1.6"
+      }
+    },
+    {
+      "src": [
+        {
+          "files": [
+            "Lucene.Net.QueryParser/Lucene.Net.QueryParser.csproj"
+          ],
+          "exclude": ["**/obj/**", "**/bin/**", "**/Lucene.Net.Test*/**"],
+          "src": "../../src"
+        }
+      ],
+      "dest": "obj/docfx/api/Lucene.Net.QueryParser",
+      "properties": {
+          "TargetFramework": "netstandard1.6"
+      }
+    },
+    {
+      "src": [
+        {
+          "files": [
+            "Lucene.Net.Suggest/Lucene.Net.Suggest.csproj"
+          ],
+          "exclude": ["**/obj/**", "**/bin/**", "**/Lucene.Net.Test*/**"],
+          "src": "../../src"
+        }
+      ],
+      "dest": "obj/docfx/api/Lucene.Net.Suggest",
+      "properties": {
+          "TargetFramework": "netstandard1.6"
+      }
+    },
+    {
+      "src": [
+        {
+          "files": [
+            "Lucene.Net.Memory/Lucene.Net.Memory.csproj"
+          ],
+          "exclude": ["**/obj/**", "**/bin/**", "**/Lucene.Net.Test*/**"],
+          "src": "../../src"
+        }
+      ],
+      "dest": "obj/docfx/api/Lucene.Net.Memory",
+      "properties": {
+          "TargetFramework": "netstandard1.6"
+      }
+    },    
+    {
+      "src": [
+        {
+          "files": [
+            "Lucene.Net.Spatial/Lucene.Net.Spatial.csproj"
+          ],
+          "exclude": ["**/obj/**", "**/bin/**", "**/Lucene.Net.Test*/**"],
+          "src": "../../src"
+        }
+      ],
+      "dest": "obj/docfx/api/Lucene.Net.Spatial",
+      "properties": {
+          "TargetFramework": "netstandard1.6"
+      }
+    },
+    {
+      "src": [
+        {
+          "files": [
+            "Lucene.Net.Highlighter/Lucene.Net.Highlighter.csproj"
+          ],
+          "exclude": ["**/obj/**", "**/bin/**", "**/Lucene.Net.Test*/**"],
+          "src": "../../src"
+        }
+      ],
+      "dest": "obj/docfx/api/Lucene.Net.Highlighter",
+      "properties": {
+          "TargetFramework": "netstandard1.6"
+      }
+    },
+    {
+      "src": [
+        {
+          "files": [
+            "Lucene.Net.Replicator/Lucene.Net.Replicator.csproj"
+          ],
+          "exclude": ["**/obj/**", "**/bin/**", "**/Lucene.Net.Test*/**"],
+          "src": "../../src"
+        }
+      ],
+      "dest": "obj/docfx/api/Lucene.Net.Replicator",
+      "properties": {
+          "TargetFramework": "netstandard1.6"
+      }
+    },
+    {
+      "src": [
+        {
+          "files": [
+            "Lucene.Net.ICU/Lucene.Net.ICU.csproj"
+          ],
+          "exclude": ["**/obj/**", "**/bin/**", "**/Lucene.Net.Test*/**"],
+          "src": "../../src/dotnet"
+        }
+      ],
+      "dest": "obj/docfx/api/Lucene.Net.ICU",
+      "properties": {
+          "TargetFramework": "netstandard1.6"
+      }
+    },
+    {
+      "src": [
+        {
+          "files": [
+            "Lucene.Net.Demo/Lucene.Net.Demo.csproj"
+          ],
+          "exclude": ["**/obj/**", "**/bin/**", "**/Lucene.Net.Test*/**"],
+          "src": "../../src"
+        }
+      ],
+      "dest": "obj/docfx/api/Lucene.Net.Demo",
+      "properties": {
+          "TargetFramework": "netstandard1.6"
+      }
+    }
+  ],
+  "build": {
+    "content": [
+      {
+        "files": ["Lucene.Net/overview.md"],
+        "src": "../../src",
+        "dest": "api"
+      },
+      {
+        "files": ["**.yml","**.md"],
+        "src": "obj/docfx/api",
+        "dest": "api"
+      },
+      {
+        "files": ["**.yml","**.md"],
+        "src": "api",
+        "dest": "api"
+      },
+      {
+        "files": ["**.md", "**.yml"],
+        "src": "../../src/dotnet/tools/lucene-cli/docs",
+        "dest": "cli"
+      },
+      {
+        "files": ["toc.yml", "*.md", "web.config"]
+      }
+    ],
+    "resource": [
+      {
+        "files": [
+          "logo/favicon.ico",
+          "logo/lucene-net-icon-64x64.png",
+          "logo/lucene-net-color.png",
+          "logo/lucene-net-reverse-color.png"
+        ],
+        "src": "../../branding"
+      }
+    ],   
+    "globalMetadata": {
+      "_appTitle": "Apache Lucene.NET 4.8.0 Documentation",      
+      "_disableContribution": false,
+      "_appFaviconPath": "logo/favicon.ico",
+      "_enableSearch": true,
+      "_appLogoPath": "logo/lucene-net-color.png",
+      "_appFooter": "Copyright © 2018, Licensed to the Apache Software Foundation (ASF)"
+    },
+    "overwrite": [
+      {
+        "files": ["**/package.md","**/overview.md"],
+        "src": "../../src",
+        "exclude": ["Lucene.Net/overview.md"]
+      }
+    ],
+    "dest": "_site",
+    "globalMetadataFiles": [],
+    "fileMetadataFiles": [],
+    "template": [
+      "default",
+      "lucenetemplate"
+    ],
+    "postProcessors": [],
+    "markdownEngineName": "dfm",
+    "noLangKeyword": false,
+    "keepFileLink": false,
+    "cleanupCacheHistory": false,
+    "disableGitFeatures": false
+  }
+}
\ No newline at end of file
diff --git a/websites/apidocs/docs.ps1 b/websites/apidocs/docs.ps1
new file mode 100644
index 0000000..8120e14
--- /dev/null
+++ b/websites/apidocs/docs.ps1
@@ -0,0 +1,165 @@
+# -----------------------------------------------------------------------------------
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the ""License""); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+# 
+# http://www.apache.org/licenses/LICENSE-2.0
+# 
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# -----------------------------------------------------------------------------------
+
+param (
+	[Parameter(Mandatory=$false)]
+	[int]
+	$ServeDocs = 1,
+	[Parameter(Mandatory=$false)]
+	[int]
+	$Clean = 0,
+	# LogLevel can be: Diagnostic, Verbose, Info, Warning, Error
+	[Parameter(Mandatory=$false)]
+	[string]
+	$LogLevel = 'Info'
+)
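+
+# Example invocations (illustrative):
+#   .\docs.ps1                        -> build the API docs and serve them locally (default)
+#   .\docs.ps1 -ServeDocs 0 -Clean 1  -> full clean static build without serving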
+
+[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
+
+$PSScriptFilePath = (Get-Item $MyInvocation.MyCommand.Path).FullName
+$RepoRoot = (get-item $PSScriptFilePath).Directory.Parent.Parent.FullName;
+$ApiDocsFolder = Join-Path -Path $RepoRoot -ChildPath "websites\apidocs";
+$ToolsFolder = Join-Path -Path $ApiDocsFolder -ChildPath "tools";
+# ensure the websites/apidocs/tools folder exists
+New-Item $ToolsFolder -type directory -force
+
+if ($Clean -eq 1) {
+	Write-Host "Cleaning tools..."
+	Remove-Item (Join-Path -Path $ToolsFolder "\*") -recurse -force -ErrorAction SilentlyContinue
+}
+
+New-Item "$ToolsFolder\tmp" -type directory -force
+
+# Go get docfx.exe if we don't have it
+New-Item "$ToolsFolder\docfx" -type directory -force
+$DocFxExe = "$ToolsFolder\docfx\docfx.exe"
+if (-not (test-path $DocFxExe))
+{
+	Write-Host "Retrieving docfx..."
+	$DocFxZip = "$ToolsFolder\tmp\docfx.zip"
+	Invoke-WebRequest "https://github.com/dotnet/docfx/releases/download/v2.38.1/docfx.zip" -OutFile $DocFxZip -TimeoutSec 60 
+	#unzip
+	Expand-Archive $DocFxZip -DestinationPath (Join-Path -Path $ToolsFolder -ChildPath "docfx")
+}
+
+# ensure we have NuGet
+New-Item "$ToolsFolder\nuget" -type directory -force
+$nuget = "$ToolsFolder\nuget\nuget.exe"
+if (-not (test-path $nuget))
+{
+  Write-Host "Download NuGet..."
+  Invoke-WebRequest "https://dist.nuget.org/win-x86-commandline/latest/nuget.exe" -OutFile $nuget -TimeoutSec 60
+}
+
+# ensure we have vswhere
+New-Item "$ToolsFolder\vswhere" -type directory -force
+$vswhere = "$ToolsFolder\vswhere\vswhere.exe"
+if (-not (test-path $vswhere))
+{
+  Write-Host "Download VsWhere..."
+  $path = "$ToolsFolder\tmp"
+  &$nuget install vswhere -OutputDirectory $path
+  $dir = ls "$path\vswhere.*" | sort -property Name -descending | select -first 1
+  $file = ls -path "$dir" -name vswhere.exe -recurse
+  mv "$dir\$file" $vswhere
+}
+
+Remove-Item -Recurse -Force "$ToolsFolder\tmp"
+
+# delete anything that already exists
+if ($Clean -eq 1) {
+	Write-Host "Cleaning..."
+	Remove-Item (Join-Path -Path $ApiDocsFolder "_site\*") -recurse -force -ErrorAction SilentlyContinue
+	Remove-Item (Join-Path -Path $ApiDocsFolder "_site") -force -ErrorAction SilentlyContinue
+	Remove-Item (Join-Path -Path $ApiDocsFolder "obj\*") -recurse -force -ErrorAction SilentlyContinue
+	Remove-Item (Join-Path -Path $ApiDocsFolder "obj") -force -ErrorAction SilentlyContinue
+}
+
+# Build our custom docfx tools
+
+$msbuild = &$vswhere -latest -products * -requires Microsoft.Component.MSBuild -property installationPath
+if ($msbuild) {
+  Write-Host "MSBuild path = $msbuild";
+
+  # Due to a bug with docfx and msbuild, we also need to set environment vars here
+  # https://github.com/dotnet/docfx/issues/1969
+
+  [Environment]::SetEnvironmentVariable("VSINSTALLDIR", "$msbuild")
+  [Environment]::SetEnvironmentVariable("VisualStudioVersion", "15.0")
+
+  # Then it turns out we also need 2015 build tools installed, wat!? 
+  # https://www.microsoft.com/en-us/download/details.aspx?id=48159
+  
+
+  $msbuild = join-path $msbuild 'MSBuild\15.0\Bin\MSBuild.exe'
+  if (-not (test-path $msbuild)) {
+	throw "MSBuild not found!"
+  }
+
+  # Build the plugin solution
+  $pluginSln = (Join-Path -Path $RepoRoot "src\docs\LuceneDocsPlugins\LuceneDocsPlugins.sln")
+  & $nuget restore $pluginSln
+
+  $PluginsFolder = (Join-Path -Path $ApiDocsFolder "lucenetemplate\plugins")
+  New-Item $PluginsFolder -type directory -force
+  & $msbuild $pluginSln "/p:OutDir=$PluginsFolder"
+
+  # Rebuild the main solution to ensure everything is in place correctly (only on clean)
+  if ($Clean -eq 1) {
+	$mainSln = (Join-Path -Path $RepoRoot "Lucene.Net.sln")
+	& $nuget restore $mainSln  
+	& $msbuild $mainSln "/t:Clean,Build"
+  }  
+
+}
+else {
+	throw "MSBuild not found!"
+}
+
+# NOTE: There's a ton of Lucene docs that we want to copy and re-format. I'm not sure if we can really automate this 
+# in a great way since the docs seem to be in many places, for example:
+# Home page - 	https://github.com/apache/lucene-solr/blob/branch_4x/lucene/site/xsl/index.xsl
+# Wiki docs - 	https://wiki.apache.org/lucene-java/FrontPage?action=show&redirect=FrontPageEN - not sure where the source is for this
+# Html pages - 	Example: https://github.com/apache/lucene-solr/blob/releases/lucene-solr/4.8.0/lucene/highlighter/src/java/org/apache/lucene/search/highlight/package.html - these seem to be throughout the source
+#				For these ones, could we go fetch them and download all *.html files from Git?
+
+$DocFxJson = Join-Path -Path $ApiDocsFolder "docfx.json"
+$DocFxLog = Join-Path -Path $ApiDocsFolder "obj\docfx.log"
+
+if($?) { 
+	if ($ServeDocs -eq 0){
+
+		Write-Host "Building metadata..."
+		if ($Clean -eq 1) {
+			& $DocFxExe metadata $DocFxJson -l "$DocFxLog" --loglevel $LogLevel --force
+		}
+		else {
+			& $DocFxExe metadata $DocFxJson -l "$DocFxLog" --loglevel $LogLevel
+		}
+
+		# build the output		
+		Write-Host "Building docs..."
+		& $DocFxExe build $DocFxJson -l "$DocFxLog" --loglevel $LogLevel
+	}
+	else {
+		# build + serve (for testing)
+		Write-Host "starting website..."
+		& $DocFxExe $DocFxJson --serve
+	}
+}
\ No newline at end of file
diff --git a/websites/apidocs/filterConfig.yml b/websites/apidocs/filterConfig.yml
new file mode 100644
index 0000000..ea8d3b8
--- /dev/null
+++ b/websites/apidocs/filterConfig.yml
@@ -0,0 +1,4 @@
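+# Hides the surrogate-range constants (fields) on the internal
+# Lucene.Net.Support.Character helper from the generated API metadata.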
+apiRules:
+- exclude:
+    uidRegex: ^Lucene\.Net\.Support\.Character\.\w*SURROGATE
+    type: Field
\ No newline at end of file
diff --git a/websites/apidocs/index.md b/websites/apidocs/index.md
new file mode 100644
index 0000000..5af1d6d
--- /dev/null
+++ b/websites/apidocs/index.md
@@ -0,0 +1,68 @@
+---
+title: Lucene.Net Docs - The documentation website for Lucene.Net
+description: The documentation website for Lucene.Net
+---
+
+Apache Lucene.Net 4.8.0 Documentation
+===============
+
+---------------
+
+Lucene.NET is a .NET full-text search engine. Lucene.NET is not a complete application,
+but rather a code library and API that can easily be used to add search capabilities
+to applications.
+
+This is the official API documentation for <b>Apache Lucene.NET 4.8.0</b>.
+
+## Getting Started
+
+The following section is intended as a "getting started" guide. It has three
+audiences: first-time users looking to install Apache Lucene.NET in their
+application; developers looking to modify or base the applications they develop
+on Lucene.NET; and developers looking to become involved in and contribute to the
+development of Lucene.NET. The goal is to help you "get started"; it does not go into
+great depth on the conceptual or internal details of Lucene:
+
+* [Lucene demo, its usage, and sources](xref:Lucene.Net.Demo): Tutorial and walk-through of the command-line Lucene demo.
+* [Introduction to Lucene's APIs](xref:Lucene.Net): High-level summary of the different Lucene packages.
+* [Analysis overview](xref:Lucene.Net.Analysis): Introduction to Lucene's analysis API. See also the [TokenStream consumer workflow](xref:Lucene.Net.Analysis.TokenStream).
+
+## Reference Documents
+
+* [Changes](https://github.com/apache/lucenenet/releases/tag/Lucene.Net_4_8_0): List of changes in this release.
+* System Requirements: Minimum and supported .NET versions. __TODO: Add link__
+* Migration Guide: What changed in Lucene 4; how to migrate code from Lucene 3.x. __TODO: Add link__
+* [File Formats](xref:Lucene.Net.Codecs.Lucene46): Guide to the supported index format used by Lucene. This can be customized by using [an alternate codec](xref:Lucene.Net.Codecs).
+* [Search and Scoring in Lucene](xref:Lucene.Net.Search): Introduction to how Lucene scores documents.
+* [Classic Scoring Formula](xref:Lucene.Net.Search.Similarities.TFIDFSimilarity): Formula of Lucene's classic [Vector Space](http://en.wikipedia.org/wiki/Vector_Space_Model) implementation. (look [here](xref:Lucene.Net.Search.Similarities) for other models)
+* [Classic QueryParser Syntax](xref:Lucene.Net.QueryParsers.Classic): Overview of the Classic QueryParser's syntax and features.
+
+## API Docs
+
+* [core](xref:Lucene.Net): Lucene core library
+* [analyzers-common](xref:Lucene.Net.Analysis): Analyzers for indexing content in different languages and domains.
+* __To be completed__: analyzers-icu: Analysis integration with ICU (International Components for Unicode).
+* [analyzers-kuromoji](xref:Lucene.Net.Analysis.Ja): Japanese Morphological Analyzer
+* __To be completed__: analyzers-morfologik: Analyzer for indexing Polish
+* [analyzers-phonetic](xref:Lucene.Net.Analysis.Phonetic): Analyzer for indexing phonetic signatures (for sounds-alike search)
+* [analyzers-smartcn](xref:Lucene.Net.Analysis.Cn.Smart): Analyzer for indexing Chinese
+* [analyzers-stempel](xref:Lucene.Net.Analysis.Stempel): Analyzer for indexing Polish
+* __To be completed__: analyzers-uima: Analysis integration with Apache UIMA
+* [benchmark](xref:Lucene.Net.Cli.Benchmark): System for benchmarking Lucene
+* [classification](xref:Lucene.Net.Classification): Classification module for Lucene
+* [codecs](xref:Lucene.Net.Codecs): Lucene codecs and postings formats.
+* [demo](xref:Lucene.Net.Demo): Simple example code
+* [expressions](xref:Lucene.Net.Expressions): Dynamically computed values to sort/facet/search on based on a pluggable grammar.
+* [facet](xref:Lucene.Net.Facet): Faceted indexing and search capabilities
+* [grouping](xref:Lucene.Net.Search.Grouping): Collectors for grouping search results.
+* [highlighter](xref:Lucene.Net.Search.Highlight): Highlights search keywords in results
+* [join](xref:Lucene.Net.Join): Index-time and Query-time joins for normalized content
+* [memory](xref:Lucene.Net.Index.Memory): Single-document in-memory index implementation
+* [misc](xref:Lucene.Net.Misc): Index tools and other miscellaneous code
+* [queries](xref:Lucene.Net.Queries): Filters and Queries that add to core Lucene
+* [queryparser](xref:Lucene.Net.QueryParsers.Classic): Query parsers and parsing framework
+* [replicator](xref:Lucene.Net.Replicator): Files replication utility
+* [sandbox](xref:Lucene.Net.Sandbox): Various third party contributions and new ideas
+* [spatial](xref:Lucene.Net.Spatial): Geospatial search
+* [suggest](xref:Lucene.Net.Search.Suggest): Auto-suggest and Spellchecking support
+* __Docs to be fixed__: test-framework: Framework for testing Lucene-based applications
\ No newline at end of file
diff --git a/websites/apidocs/lucenetemplate/partials/navbar.tmpl.partial b/websites/apidocs/lucenetemplate/partials/navbar.tmpl.partial
new file mode 100644
index 0000000..ab8f519
--- /dev/null
+++ b/websites/apidocs/lucenetemplate/partials/navbar.tmpl.partial
@@ -0,0 +1,22 @@
+{{!Copyright (c) Microsoft. All rights reserved. Licensed under the MIT license. See LICENSE file in the project root for full license information.}}
+
+<nav id="autocollapse" class="navbar ng-scope" role="navigation">
+  <div class="container">
+    <div class="navbar-header">
+      <button type="button" class="navbar-toggle" data-toggle="collapse" data-target="#navbar">
+        <span class="sr-only">Toggle navigation</span>
+        <span class="icon-bar"></span>
+        <span class="icon-bar"></span>
+        <span class="icon-bar"></span>
+      </button>
+      {{>partials/logo}}
+    </div>
+    <div class="collapse navbar-collapse" id="navbar">
+      <form class="navbar-form navbar-right" role="search" id="search">
+        <div class="form-group">
+          <input type="text" class="form-control" id="search-query" placeholder="Search" autocomplete="off">
+        </div>
+      </form>
+    </div>
+  </div>
+</nav>
diff --git a/websites/apidocs/lucenetemplate/styles/main.css b/websites/apidocs/lucenetemplate/styles/main.css
new file mode 100644
index 0000000..252cdb1
--- /dev/null
+++ b/websites/apidocs/lucenetemplate/styles/main.css
@@ -0,0 +1,73 @@
+/* .navbar-inverse {
+    background: #4a95da;
+    background: rgb(44, 95, 163);
+    background: -moz-linear-gradient(top, rgba(44, 95, 163, 1) 0%, rgba(64, 150, 238, 1) 100%);
+    background: -webkit-linear-gradient(top, rgba(44, 95, 163, 1) 0%, rgba(64, 150, 238, 1) 100%);
+    background: linear-gradient(to bottom, rgba(44, 95, 163, 1) 0%, rgba(64, 150, 238, 1) 100%);
+    filter: progid:DXImageTransform.Microsoft.gradient( startColorstr='#2c5fa3', endColorstr='#4096ee', GradientType=0);
+    border-color:white;
+  }
+  .navbar-inverse .navbar-nav>li>a, .navbar-inverse .navbar-text {
+    color: #fff;
+  }
+  .navbar-inverse .navbar-nav>.active>a {
+      background-color: #1764AA;
+  }
+  .navbar-inverse .navbar-nav>.active>a:focus, .navbar-inverse .navbar-nav>.active>a:hover {
+      background-color: #1764AA;
+  } */
+
+  .btn-primary:hover {
+    background-color: #1764AA;
+}
+button, a {
+    color: #1764AA;
+    /* #0095eb */
+}
+button:hover,
+button:focus,
+a:hover,
+a:focus {
+  color: #143653;
+  text-decoration: none;
+}
+nav.navbar {
+    background-color:white;
+}
+.navbar-brand  {
+    height: 80px;
+}
+.navbar-header .navbar-brand img {    
+    width:300px;
+    height:55px;
+    margin:10px 10px 10px 0px;
+}
+.navbar-toggle .icon-bar{
+    margin-top: 2px;
+    background-color:#0095eb;
+}
+.navbar-toggle {
+    border-color:#0095eb;
+}
+header ul.navbar-nav {
+    font-size:1.2em;
+    float:right;
+    font-weight: 600;
+}
+
+.sidefilter {
+    top:120px;
+}
+
+.sidetoc {
+    top: 180px;
+    background-color:rgb(247, 247, 247);
+}
+
+body .toc {
+    background-color:rgb(247, 247, 247);
+}
+
+.sidefilter {
+    background-color: rgb(247, 247, 247);
+}
diff --git a/websites/apidocs/lucenetemplate/styles/main.js b/websites/apidocs/lucenetemplate/styles/main.js
new file mode 100644
index 0000000..ad4722e
--- /dev/null
+++ b/websites/apidocs/lucenetemplate/styles/main.js
@@ -0,0 +1,32 @@
+$(function () {
+
+    renderAlerts();
+
+    function renderAlerts() {
+        $('.lucene-block').addClass('alert alert-info');
+    }
+
+    // //docfx has a hard coded value of 60px in height check for the nav bar
+    // //but our nav bar is taller so we need to work around this
+    // function fixAutoCollapseBug() {
+    //     autoCollapse();
+    //     $(window).on('resize', autoCollapse);
+    //     $(document).on('click', '.navbar-collapse.in', function (e) {
+    //         if ($(e.target).is('a')) {
+    //             $(this).collapse('hide');
+    //         }
+    //     });
+
+    //     function autoCollapse() {
+    //         var navbar = $('#autocollapse');
+    //         if (navbar.height() === null) {
+    //             setTimeout(autoCollapse, 310);
+    //         }
+    //         navbar.removeClass(collapsed);
+    //         if (navbar.height() > 60) {
+    //             navbar.addClass(collapsed);
+    //         }
+    //     }
+    // }
+
+})
\ No newline at end of file
diff --git a/websites/apidocs/lucenetemplate/web.config b/websites/apidocs/lucenetemplate/web.config
new file mode 100644
index 0000000..f646909
--- /dev/null
+++ b/websites/apidocs/lucenetemplate/web.config
@@ -0,0 +1,9 @@
+<?xml version="1.0"?>
+ 
+<configuration>
+    <system.webServer>
+        <staticContent>
+            <mimeMap fileExtension=".json" mimeType="application/json" />
+        </staticContent>
+    </system.webServer>
+</configuration> 
\ No newline at end of file
diff --git a/websites/apidocs/toc.yml b/websites/apidocs/toc.yml
new file mode 100644
index 0000000..7c6e30d
--- /dev/null
+++ b/websites/apidocs/toc.yml
@@ -0,0 +1,8 @@
+- name: Lucene.Net API
+  href: api/
+  topicUid: Lucene.Net
+- name: Lucene.Net CLI
+  href: ../../src/dotnet/tools/lucene-cli/docs/
+  topicHref: ../../src/dotnet/tools/lucene-cli/docs/index.md
+- name: Lucene.Net Website
+  href: https://lucenenetsite.azurewebsites.net
\ No newline at end of file
diff --git a/websites/site/contributing/current-status.md b/websites/site/contributing/current-status.md
new file mode 100644
index 0000000..dff3a88
--- /dev/null
+++ b/websites/site/contributing/current-status.md
@@ -0,0 +1,13 @@
+---
+uid: contributing/current-status
+---
+The current status of the Lucene.Net project
+===============
+
+---------------
+
+Work is currently underway on Lucene.Net 4.8.0 (currently in BETA).
+
+The latest stable version is Lucene.Net 3.0.3.
+
+For full details, see the wiki page: [https://cwiki.apache.org/confluence/display/LUCENENET/Current+Status](https://cwiki.apache.org/confluence/display/LUCENENET/Current+Status)
\ No newline at end of file
diff --git a/websites/site/contributing/documentation.md b/websites/site/contributing/documentation.md
new file mode 100644
index 0000000..fe23741
--- /dev/null
+++ b/websites/site/contributing/documentation.md
@@ -0,0 +1,29 @@
+---
+uid: contributing/documentation
+---
+Documentation & Website
+===============
+
+---------------
+
+_If you wish to help out with this website and the API documentation site, here's some info that you'll need_
+
+## Website
+
+The website source code is found in the same Git repository as the Lucene.Net code in the folder: `/websites/site`. The site is built with a static site generator called [DocFx](https://dotnet.github.io/docfx/) and all of the content/pages are created using Markdown files.
+
+To build the website and run it on your machine, run the PowerShell script `/websites/site/site.ps1`. You don't have to pass in any parameters; by default it will build the site and host it at [http://localhost:8080](http://localhost:8080). There are two parameters that you can use (see the example invocations below):
+
+* `-ServeDocs` _(default is 1)_ A value of `1` builds the docs and hosts the site; if `0` is specified, it builds the static site to be hosted elsewhere.
+* `-Clean` _(default is 0)_ A value of `1` clears all caches and tool files before building again. This is handy if a new version of DocFx is available or if odd things are occurring with the incremental build.
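+
+For example, a couple of typical invocations might look like this (the parameter combinations shown are illustrative):
+
+```
+# Build the site, clearing caches and tools first, and serve it at http://localhost:8080
+.\websites\site\site.ps1 -ServeDocs 1 -Clean 1
+
+# Build the static site only (no local server), e.g. for hosting elsewhere
+.\websites\site\site.ps1 -ServeDocs 0
+```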
+
+The file/folder structure within `/websites/site` is as follows:
+
+* `site.ps1` - the build script
+* `docfx.json` - the DocFx configuration file _(see docfx manual for further info)_
+* `*.md` - the root site content such as the index and download pages
+* `toc.yml` - these files determine the menu structures _(see docfx manual for further info)_
+* `contributing/*` - the Contributing section
+* `lucenetemplate/*` - the custom template files to style the website
+* `tools/*` - during the build process some tools will be downloaded which are stored here
+* `_site` - this is the exported static site that is generated
\ No newline at end of file
diff --git a/websites/site/contributing/index.md b/websites/site/contributing/index.md
new file mode 100644
index 0000000..48a1f66
--- /dev/null
+++ b/websites/site/contributing/index.md
@@ -0,0 +1,53 @@
+---
+uid: contributing
+---
+Lucene.Net project contributing guide
+===============
+
+---------------
+
+## Getting involved
+
+_There are lots of ways to help contribute to the Lucene.Net project!_
+
+Lucene.Net is a very large project (over 400,000 executable lines of code and nearly 1,000,000 lines of text total) and we welcome any and all help to maintain such an effort. 
+
+### Ask a Question
+
+If you have a general how-to question or need help from the Lucene.Net community, please email the Apache Lucene.Net-User mailing list by sending a message to:
+
+[user@lucenenet.apache.org](mailto:user@lucenenet.apache.org)
+
+We recommend you join the [user mailing list](https://cwiki.apache.org/confluence/display/LUCENENET/Mailing+Lists) to stay looped into all user discussions.
+
+Alternatively, you can get help via [StackOverflow](https://stackoverflow.com/questions/tagged/lucene.net).
+
+Please do not submit general how-to questions to JIRA; use JIRA for bug reports and tasks only.
+
+See __[mailing lists](xref:contributing/mailing-lists)__.
+
+### Start a Discussion
+
+To start a development discussion regarding technical features of Lucene.Net, please email the Apache Lucene.Net-Developer mailing list by sending a message to: 
+
+[dev@lucenenet.apache.org](mailto:dev@lucenenet.apache.org)
+
+We recommend you join both the [user and dev mailing lists](https://cwiki.apache.org/confluence/display/LUCENENET/Mailing+Lists) to stay looped in to all user and developer discussions.
+
+See __[mailing lists](xref:contributing/mailing-lists)__.
+
+### Website and Documentation
+
+Help keeping this website and the documentation up to date would be greatly appreciated. In particular, it would be great to migrate/consolidate a lot of the important information from the wiki to this website. See __[website and documentation](xref:contributing/documentation)__ for more information about contributing to this area.
+
+### Report a Bug
+
+To report a bug, please use the [JIRA issue tracker](xref:contributing/issue-tracker). You can sign up for a JIRA account [here](https://cwiki.apache.org/confluence/signup.action) (it just takes a minute).
+
+### Submit a Pull Request
+
+First have a look at the __[Current Status](xref:contributing/current-status)__ of the project to see where things are at.
+
+There's a [detailed contribution guide here](https://github.com/apache/lucenenet/blob/master/CONTRIBUTING.md). _(it would be good to migrate this guide to this website)_
+
+There is also a guide covering the basics of getting started with the repository and how we prefer to receive pull requests: [Git Setup and Pull Requests](https://cwiki.apache.org/confluence/display/LUCENENET/Git+Setup+and+Pull+Requests)
\ No newline at end of file
diff --git a/websites/site/contributing/issue-tracker.md b/websites/site/contributing/issue-tracker.md
new file mode 100644
index 0000000..d5eeaab
--- /dev/null
+++ b/websites/site/contributing/issue-tracker.md
@@ -0,0 +1,9 @@
+---
+uid: contributing/issue-tracker
+---
+Issue Tracker
+===============
+
+---------------
+
+Follow what we are working on, help us by submitting patches, or submit your own enhancement or bug requests at our issue tracker: __[Lucene.Net JIRA](https://issues.apache.org/jira/browse/LUCENENET)__
\ No newline at end of file
diff --git a/websites/site/contributing/mailing-lists.md b/websites/site/contributing/mailing-lists.md
new file mode 100644
index 0000000..5e8116a
--- /dev/null
+++ b/websites/site/contributing/mailing-lists.md
@@ -0,0 +1,34 @@
+---
+uid: contributing/mailing-lists
+---
+
+Mailing Lists
+===============
+
+---------------
+
+_To subscribe to the mailing lists, send an email to [subscribe@lucenenet.apache.org](mailto:subscribe@lucenenet.apache.org). To unsubscribe, send an email to [unsubscribe@lucenenet.apache.org](mailto:unsubscribe@lucenenet.apache.org)._
+
+### Developers 
+
+[dev@lucenenet.apache.org](mailto:dev@lucenenet.apache.org) 
+
+The developers mailing list is used to discuss the technical future of Lucene.Net. This mailing list also receives JIRA issues and comments, as well as notifications from GitHub.
+
+__[Dev Archive](http://mail-archives.apache.org/mod_mbox/lucenenet-dev/)__
+
+### Users 
+
+[user@lucenenet.apache.org](mailto:user@lucenenet.apache.org) 
+
+A list for general how-to questions and getting help from the community. Most (if not all) of our developers are also subscribed to this list.
+
+__[User Archive](http://mail-archives.apache.org/mod_mbox/lucenenet-user/)__
+
+### Commits
+
+[commits@lucenenet.apache.org](mailto:commits@lucenenet.apache.org)
+
+A list for keeping track of all source code commits to our repository.
+
+__[Commit Archive](http://mail-archives.apache.org/mod_mbox/lucenenet-commits/)__
\ No newline at end of file
diff --git a/websites/site/contributing/source.md b/websites/site/contributing/source.md
new file mode 100644
index 0000000..add62a3
--- /dev/null
+++ b/websites/site/contributing/source.md
@@ -0,0 +1,24 @@
+Source code
+===============
+
+---------------
+
+## Git repository
+
+Apache Lucene.Net uses Git as its source code management system.
+
+The official repository is here: __[https://git-wip-us.apache.org/repos/asf?p=lucenenet.git](https://git-wip-us.apache.org/repos/asf?p=lucenenet.git)__. 
+
+You can clone the repo with the command line or your favorite Git client, for example:
+
+```
+git clone https://git-wip-us.apache.org/repos/asf/lucenenet.git
+```
+
+There is also a mirror on GitHub __[https://github.com/apache/lucene.net](https://github.com/apache/lucene.net)__, which is synced from the Apache repository.
+
+Most work currently happens on the branch named __branch_4x__.
+
+## Building & testing
+
+The guide for building/testing is currently in the Git repository __[here](https://github.com/apache/lucenenet/blob/master/README.md#building-and-testing)__
\ No newline at end of file
diff --git a/websites/site/contributing/toc.yml b/websites/site/contributing/toc.yml
new file mode 100644
index 0000000..7dbe3fe
--- /dev/null
+++ b/websites/site/contributing/toc.yml
@@ -0,0 +1,12 @@
+- name: Mailing Lists
+  href: mailing-lists.md
+- name: Source Code
+  href: source.md
+- name: Issue Tracker
+  href: issue-tracker.md
+- name: Wiki
+  href: wiki.md
+- name: Documentation
+  href: documentation.md
+- name: Current status
+  href: current-status.md
\ No newline at end of file
diff --git a/websites/site/contributing/wiki.md b/websites/site/contributing/wiki.md
new file mode 100644
index 0000000..f0a3c31
--- /dev/null
+++ b/websites/site/contributing/wiki.md
@@ -0,0 +1,9 @@
+Wiki
+===============
+
+---------------
+
+There's a lot of scattered (_but important!_) information on the __[Lucene.Net wiki](https://cwiki.apache.org/confluence/display/LUCENENET/Lucene.Net)__. Quite a few of the links on this website currently link to the wiki, but the plan is to migrate all of the relevant and important docs over to this website.
+
+If that's something you think you might want to help out with, see the [Contributing section](xref:contributing) for more info. 
+
diff --git a/websites/site/docfx.json b/websites/site/docfx.json
new file mode 100644
index 0000000..eb4db41
--- /dev/null
+++ b/websites/site/docfx.json
@@ -0,0 +1,48 @@
+{
+  "build": {
+    "content": [
+      {
+        "files": [
+          "contributing/**.md", 
+          "contributing/**/toc.yml", 
+          "download/**.md", 
+          "download/**/toc.yml", 
+          "toc.yml", 
+          "*.md"]
+      }
+    ],
+    "resource": [
+      {
+        "files": [
+          "logo/favicon.ico",
+          "logo/lucene-net-icon-64x64.png",
+          "logo/lucene-net-color.png",
+          "logo/lucene-net-reverse-color.png"
+        ],
+        "src": "../../branding"
+      }
+    ],
+    "globalMetadata": {
+      "_appTitle": "Apache Lucene.NET 4.8.0",      
+      "_disableContribution": false,
+      "_disableBreadcrumb": false,
+      "_appFaviconPath": "logo/favicon.ico",
+      "_enableSearch": false,
+      "_appLogoPath": "logo/lucene-net-color.png",
+      "_appFooter": "Copyright © 2019 The Apache Software Foundation, Licensed under the <a href='http://www.apache.org/licenses/LICENSE-2.0' target='_blank'>Apache License, Version 2.0</a><br/> <small>Apache Lucene.Net, Lucene.Net, Apache, the Apache feather logo, and the Apache Lucene.Net project logo are trademarks of The Apache Software Foundation. <br/>All other marks mentioned may be trademarks or registered trademarks of their respective owners.</small>"
+    },
+    "dest": "_site",
+    "globalMetadataFiles": [],
+    "fileMetadataFiles": [],
+    "template": [
+      "default",
+      "lucenetemplate"
+    ],
+    "postProcessors": [],
+    "markdownEngineName": "markdig",
+    "noLangKeyword": false,
+    "keepFileLink": false,
+    "cleanupCacheHistory": false,
+    "disableGitFeatures": false
+  }
+}
\ No newline at end of file
diff --git a/websites/site/docs.md b/websites/site/docs.md
new file mode 100644
index 0000000..f6775bf
--- /dev/null
+++ b/websites/site/docs.md
@@ -0,0 +1,20 @@
+---
+_disableBreadcrumb: true
+---
+
+Lucene.Net Documentation
+===============
+
+---------------
+
+## Lucene 4.8.0
+
+The documentation website for Lucene.Net 4.8.0 is still a work in progress. It is currently available on a temporary website: [https://lucenenetdocs.azurewebsites.net/](https://lucenenetdocs.azurewebsites.net/)
+
+## Lucene 3.0.3
+
+The documentation website for Lucene.Net 3.0.3 is here: [http://lucenenet.apache.org/docs/3.0.3/Index.html](http://lucenenet.apache.org/docs/3.0.3/Index.html)
+
+## Lucene 2.9.4.1
+
+The documentation website for Lucene.Net 2.9.4.1 is here: [http://lucenenet.apache.org/docs/2.9.4/Index.html](http://lucenenet.apache.org/docs/2.9.4/Index.html)
\ No newline at end of file
diff --git a/websites/site/download/download.md b/websites/site/download/download.md
new file mode 100644
index 0000000..24cdbbd
--- /dev/null
+++ b/websites/site/download/download.md
@@ -0,0 +1,26 @@
+---
+uid: download
+---
+
+Download Lucene.Net
+===============
+
+---------------
+
+## [Lucene 4.8.0](xref:download/4)
+
+_Status:_ __`Beta`__
+
+_Released:_ __`Pending...`__
+
+## [Lucene 3.0.3](xref:download/3)
+
+_Status:_ __`Stable`__
+
+_Released:_ `2012-10-26`
+
+## [Lucene 2.9.4.1](xref:download/2)
+
+_Status:_ __`Stable`__
+
+_Released:_ `2011-12-02`
\ No newline at end of file
diff --git a/websites/site/download/toc.yml b/websites/site/download/toc.yml
new file mode 100644
index 0000000..00c1aeb
--- /dev/null
+++ b/websites/site/download/toc.yml
@@ -0,0 +1,6 @@
+- name: Version 4.8
+  href: version-4.md
+- name: Version 3.0.3
+  href: version-3.md
+- name: Version 2.9.4
+  href: version-2.md
\ No newline at end of file
diff --git a/websites/site/download/version-2.md b/websites/site/download/version-2.md
new file mode 100644
index 0000000..0d9200a
--- /dev/null
+++ b/websites/site/download/version-2.md
@@ -0,0 +1,22 @@
+---
+uid: download/2
+---
+
+Download Lucene.Net 2.9.4
+===============
+
+---------------
+
+## Lucene 2.9.4.1
+
+_Status:_ __`Stable`__
+
+_Released:_ `2011-12-02`
+
+<div class="nuget-well" style="text-align:left;">
+    PM> Install-Package Lucene.Net -Version 2.9.4.1
+</div>
+
+### Source code
+
+* [Git release tag](https://github.com/apache/lucenenet/releases/tag/Lucene.Net_2_9_4g_RC1)
\ No newline at end of file
diff --git a/websites/site/download/version-3.md b/websites/site/download/version-3.md
new file mode 100644
index 0000000..736d9be
--- /dev/null
+++ b/websites/site/download/version-3.md
@@ -0,0 +1,55 @@
+---
+uid: download/3
+---
+
+Download Lucene.Net 3.0.3
+===============
+
+---------------
+
+## Lucene 3.0.3
+
+_Status:_ __`Stable`__
+
+_Released:_ `2012-10-26`
+
+__[Release notes](https://cwiki.apache.org/confluence/display/LUCENENET/Lucene.Net+3.0.3)__
+
+<div class="nuget-well" style="text-align:left;">
+    PM> Install-Package Lucene.Net -Version 3.0.3
+</div>
+
+### Supported Frameworks
+
+- .NET Framework 4.0
+- .NET Framework 3.5
+
+### All Packages
+
+- [Lucene.Net](https://www.nuget.org/packages/Lucene.Net/3.0.3) - Core library
+- [Lucene.Net.Contrib](https://www.nuget.org/packages/Lucene.Net.Contrib/3.0.3) - Various user contributed functionality and extras
+- [Lucene.Net.Contrib.Spatial](https://www.nuget.org/packages/Lucene.Net.Contrib.Spatial/3.0.3) - Geospatial Search
+- [Lucene.Net.Contrib.Spatial.NTS](https://www.nuget.org/packages/Lucene.Net.Contrib.Spatial.NTS/3.0.3) - Geospatial search with support for NetTopologySuite.
+
+### Source code
+
+* [Git release tag](https://github.com/apache/lucenenet/releases/tag/Lucene.Net_3_0_3_RC2_final)
+
+### Binary releases
+
+<ul>
+<li><a href="https://www.apache.org/dist/lucenenet/3.0.3-RC2/Apache-Lucene.Net-3.0.3-RC2.bin.zip">Binary</a>
+ / <a href="https://www.apache.org/dist/lucenenet/3.0.3-RC2/Apache-Lucene.Net-3.0.3-RC2.bin.zip.asc">PGP Signature</a>
+ / <a href="https://www.apache.org/dist/lucenenet/3.0.3-RC2/Apache-Lucene.Net-3.0.3-RC2.bin.zip.sha1">SHA1 Checksum</a>
+ / <a href="https://www.apache.org/dist/lucenenet/3.0.3-RC2/Apache-Lucene.Net-3.0.3-RC2.bin.zip.md5">MD5 Checksum</a> </li>
+<li><a href="https://www.apache.org/dist/lucenenet/3.0.3-RC2/Apache-Lucene.Net-3.0.3-RC2.src.zip">Source</a>
+ / <a href="https://www.apache.org/dist/lucenenet/3.0.3-RC2/Apache-Lucene.Net-3.0.3-RC2.src.zip.asc">PGP Signature</a>
+ / <a href="https://www.apache.org/dist/lucenenet/3.0.3-RC2/Apache-Lucene.Net-3.0.3-RC2.src.zip.sha1">SHA1 Checksum</a>
+ / <a href="https://www.apache.org/dist/lucenenet/3.0.3-RC2/Apache-Lucene.Net-3.0.3-RC2.src.zip.md5">MD5 Checksum</a> </li>
+</ul>
+
+The above release files should be verified using the PGP signatures and the
+<a href="https://www.apache.org/dist/lucenenet/KEYS">project release KEYS</a>. See the
+<a href="https://www.apache.org/dyn/closer.cgi#verify">verification instructions</a> for a
+description of using the PGP and KEYS files for verification. SHA checksums
+are also provided as an alternative verification method.
diff --git a/websites/site/download/version-4.md b/websites/site/download/version-4.md
new file mode 100644
index 0000000..fa918a3
--- /dev/null
+++ b/websites/site/download/version-4.md
@@ -0,0 +1,66 @@
+---
+uid: download/4
+---
+
+Download Lucene.Net
+===============
+
+---------------
+
+## Lucene 4.8.0
+
+_Status:_ __`Beta`__
+
+_Released:_ __`Pending...`__
+
+<div class="nuget-well" style="text-align:left;">
+    PM> Install-Package Lucene.Net -Version 4.8.0-beta00005
+</div>
+
+### Source code
+
+* [Git Repository](https://github.com/apache/lucenenet)
+
+### Supported Frameworks
+
+- [.NET Standard 2.0](https://docs.microsoft.com/en-us/dotnet/standard/net-standard)
+- [.NET Standard 1.6](https://docs.microsoft.com/en-us/dotnet/standard/net-standard)
+- .NET Framework 4.5
+
+### All Packages
+
+<!--- TO BE ADDED WHEN RELEASED 
+
+- [Lucene.Net.Analysis.UIMA](https://www.nuget.org/packages/Lucene.Net.Analysis.UIMA/) - Analysis integration with Apache UIMA)
+
+-->
+
+- [Lucene.Net](https://www.nuget.org/packages/Lucene.Net/) - Core library
+- [Lucene.Net.Analysis.Common](https://www.nuget.org/packages/Lucene.Net.Analysis.Common/) - Analyzers for indexing content in different languages and domains
+- [Lucene.Net.Analysis.Kuromoji](https://www.nuget.org/packages/Lucene.Net.Analysis.Kuromoji/) - Japanese Morphological Analyzer 
+- [Lucene.Net.Analysis.Phonetic](https://www.nuget.org/packages/Lucene.Net.Analysis.Phonetic/) - Analyzer for indexing phonetic signatures (for sounds-alike search)
+- [Lucene.Net.Analysis.SmartCn](https://www.nuget.org/packages/Lucene.Net.Analysis.SmartCn/) - Analyzer for indexing Chinese
+- [Lucene.Net.Analysis.Stempel](https://www.nuget.org/packages/Lucene.Net.Analysis.Stempel/) - Analyzer for indexing Polish
+- [Lucene.Net.Benchmark](https://www.nuget.org/packages/Lucene.Net.Benchmark/) - System for benchmarking Lucene
+- [Lucene.Net.Classification](https://www.nuget.org/packages/Lucene.Net.Classification/) - Classification module for Lucene
+- [Lucene.Net.Codecs](https://www.nuget.org/packages/Lucene.Net.Codecs/) - Lucene codecs and postings formats
+- [Lucene.Net.Expressions](https://www.nuget.org/packages/Lucene.Net.Expressions/) - Dynamically computed values to sort/facet/search on based on a pluggable grammar
+- [Lucene.Net.Facet](https://www.nuget.org/packages/Lucene.Net.Facet/) - Faceted indexing and search capabilities
+- [Lucene.Net.Grouping](https://www.nuget.org/packages/Lucene.Net.Grouping/) - Collectors for grouping search results
+- [Lucene.Net.Highlighter](https://www.nuget.org/packages/Lucene.Net.Highlighter/) - Highlights search keywords in results
+- [Lucene.Net.ICU](https://www.nuget.org/packages/Lucene.Net.ICU/) - Specialized ICU (International Components for Unicode) Analyzers and Highlighters
+- [Lucene.Net.Join](https://www.nuget.org/packages/Lucene.Net.Join/) - Index-time and Query-time joins for normalized content
+- [Lucene.Net.Memory](https://www.nuget.org/packages/Lucene.Net.Memory/) - Single-document in-memory index implementation
+- [Lucene.Net.Misc](https://www.nuget.org/packages/Lucene.Net.Misc/) - Index tools and other miscellaneous code
+- [Lucene.Net.Queries](https://www.nuget.org/packages/Lucene.Net.Queries/) - Filters and Queries that add to core Lucene
+- [Lucene.Net.QueryParser](https://www.nuget.org/packages/Lucene.Net.QueryParser/) - Text to Query parsers and parsing framework
+- [Lucene.Net.Replicator](https://www.nuget.org/packages/Lucene.Net.Replicator/) - Files replication utility
+- [Lucene.Net.Sandbox](https://www.nuget.org/packages/Lucene.Net.Sandbox/) - Various third party contributions and new ideas
+- [Lucene.Net.Spatial](https://www.nuget.org/packages/Lucene.Net.Spatial/) - Geospatial search
+- [Lucene.Net.Suggest](https://www.nuget.org/packages/Lucene.Net.Suggest/) - Auto-suggest and Spellchecking support
+
+### Remaining work
+
+See __[Current Status](xref:contributing/current-status)__ for more details on the remaining work.
+
+This version is a direct port of the Java Lucene project at [this release](https://github.com/apache/lucene-solr/releases/tag/releases%2Flucene-solr%2F4.8.0).
\ No newline at end of file
diff --git a/websites/site/index.md b/websites/site/index.md
new file mode 100644
index 0000000..843d871
--- /dev/null
+++ b/websites/site/index.md
@@ -0,0 +1,18 @@
+---
+title: Welcome to the Lucene.Net website!
+description: Lucene.Net is a port of the Lucene search engine library, written in C# and targeted at .NET runtime users.
+documentType: index
+---
+
+Lucene.Net
+===============
+
+<h2 id="about" class="text-center">About the project</h2>
+
+Lucene.Net is a port of the Lucene search engine library, written in C# and targeted at .NET runtime users.
+
+### Our Goals
+
+* Maintain the existing line-by-line port from Java to C#, fully automating and commoditizing the process such that the project can easily synchronize with the Java Lucene release schedule
+* Maintain the high-performance requirements expected of a first-class C# search engine library
+* Maximize usability and power when used within the .NET runtime. To that end, it will present a highly idiomatic, carefully tailored API that takes advantage of many of the special features of the .NET runtime
\ No newline at end of file
diff --git a/websites/site/lucenetemplate/index.html.tmpl b/websites/site/lucenetemplate/index.html.tmpl
new file mode 100644
index 0000000..de41d61
--- /dev/null
+++ b/websites/site/lucenetemplate/index.html.tmpl
@@ -0,0 +1,58 @@
+{{!Copyright (c) Microsoft. All rights reserved. Licensed under the MIT license. See LICENSE file in the project root for full license information.}}
+{{!include(/^styles/.*/)}}
+{{!include(/^fonts/.*/)}}
+{{!include(favicon.ico)}}
+{{!include(logo.svg)}}
+<!DOCTYPE html>
+<!--[if IE]><![endif]-->
+<html lang="en">
+  {{>partials/head-content}}
+  <body id="homepage" data-spy="scroll" data-target="#affix">
+    <div id="wrapper">
+      <header>
+        {{>partials/navbar}}
+      </header>
+      <section id="intro" class="home-section">
+        <div class="container">
+          <p class="text-center">Lucene.Net is a high-performance search engine library for .NET</p>
+          <div class="row">
+              <div class="nuget-well col-xs-8 col-xs-offset-2 col-sm-6 col-sm-offset-3">
+                  Install-Package Lucene.Net -Pre
+              </div>
+          </div>
+          <div class="row">
+              <div class="text-center project-links">
+                  <a href="https://github.com/apache/lucenenet" target="_blank" >
+                      <i class="fa fa-github"></i>
+                  </a>
+                  <a href="http://mail-archives.apache.org/mod_mbox/lucenenet-dev/" target="_blank">
+                      <i class="fa fa-envelope-o"></i>
+                  </a>
+                  <a href="https://stackoverflow.com/questions/tagged/lucene.net" target="_blank">
+                      <i class="fa fa-stack-overflow"></i>
+                  </a>
+                  <a href="download.html">
+                      <i class="fa fa-download" target="_blank"></i>
+                  </a>
+              </div>
+          </div>
+        </div>
+      </section>
+{{>partials/home-quick-start}}
+      <section class="home-section">
+        <div class="container">
+          <div class="row">
+            {{{conceptual}}}
+          </div>
+        </div>
+      </section>
+      <section class="home-section">
+        <div class="container">
+          <div class="row"></div>
+        </div>
+      </section>
+      {{>partials/footer}}
+    </div>
+    {{>partials/scripts}}
+  </body>
+</html>
\ No newline at end of file
diff --git a/websites/site/lucenetemplate/partials/head-content.tmpl.partial b/websites/site/lucenetemplate/partials/head-content.tmpl.partial
new file mode 100644
index 0000000..50cc7c5
--- /dev/null
+++ b/websites/site/lucenetemplate/partials/head-content.tmpl.partial
@@ -0,0 +1,27 @@
+{{!Copyright (c) Microsoft. All rights reserved. Licensed under the MIT license. See LICENSE file in the project root for full license information.}}
+
+<head>
+  <meta charset="utf-8">
+  <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">
+  <title>{{#title}}{{title}}{{/title}}{{^title}}{{>partials/title}}{{/title}} {{#_appTitle}}| {{_appTitle}} {{/_appTitle}}</title>
+  <meta name="viewport" content="width=device-width">
+  <meta name="title" content="{{#title}}{{title}}{{/title}}{{^title}}{{>partials/title}}{{/title}} {{#_appTitle}}| {{_appTitle}} {{/_appTitle}}">
+  <meta name="generator" content="docfx {{_docfxVersion}}">
+  {{#_description}}<meta name="description" content="{{_description}}">{{/_description}}
+  <link rel="shortcut icon" href="{{_rel}}{{{_appFaviconPath}}}{{^_appFaviconPath}}favicon.ico{{/_appFaviconPath}}">
+  <link rel="stylesheet" href="{{_rel}}styles/docfx.vendor.css">
+  <link rel="stylesheet" href="{{_rel}}styles/docfx.css">
+  <link rel="stylesheet" href="{{_rel}}styles/main.css">
+  <meta property="docfx:navrel" content="{{_navRel}}">
+  <meta property="docfx:tocrel" content="{{_tocRel}}">
+  {{#_noindex}}<meta name="searchOption" content="noindex">{{/_noindex}}
+  {{#_enableSearch}}<meta property="docfx:rel" content="{{_rel}}">{{/_enableSearch}}
+  {{#_enableNewTab}}<meta property="docfx:newtab" content="true">{{/_enableNewTab}}
+  
+  <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/academicons/1.8.0/css/academicons.min.css" integrity="sha512-GGGNUPDhnG8LEAEDsjqYIQns+Gu8RBs4j5XGlxl7UfRaZBhCCm5jenJkeJL8uPuOXGqgl8/H1gjlWQDRjd3cUQ==" crossorigin="anonymous">
+  <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/4.7.0/css/font-awesome.min.css" integrity="sha512-SfTiTlX6kk+qitfevl/7LibUOeJWlt9rbyDn92a1DqWOw9vWG2MFoays0sgObmWazO5BQPiFucnnEAjpAB+/Sw==" crossorigin="anonymous">
+  
+  <link rel="stylesheet" href="//fonts.googleapis.com/css?family=Lato:400,700%7CMerriweather%7CRoboto+Mono">
+  <link rel="stylesheet" href="/styles/site.css">
+
+</head>
diff --git a/websites/site/lucenetemplate/partials/head.tmpl.partial b/websites/site/lucenetemplate/partials/head.tmpl.partial
new file mode 100644
index 0000000..2a02a27
--- /dev/null
+++ b/websites/site/lucenetemplate/partials/head.tmpl.partial
@@ -0,0 +1,24 @@
+{{!Copyright (c) Microsoft. All rights reserved. Licensed under the MIT license. See LICENSE file in the project root for full license information.}}
+
+<head>
+  <meta charset="utf-8">
+  <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">
+  <title>{{#title}}{{title}}{{/title}}{{^title}}{{>partials/title}}{{/title}} {{#_appTitle}}| {{_appTitle}} {{/_appTitle}}</title>
+  <meta name="viewport" content="width=device-width">
+  <meta name="title" content="{{#title}}{{title}}{{/title}}{{^title}}{{>partials/title}}{{/title}} {{#_appTitle}}| {{_appTitle}} {{/_appTitle}}">
+  <meta name="generator" content="docfx {{_docfxVersion}}">
+  {{#_description}}<meta name="description" content="{{_description}}">{{/_description}}
+  <link rel="shortcut icon" href="{{_rel}}{{{_appFaviconPath}}}{{^_appFaviconPath}}favicon.ico{{/_appFaviconPath}}">
+  <link rel="stylesheet" href="{{_rel}}styles/docfx.vendor.css">
+  <link rel="stylesheet" href="{{_rel}}styles/docfx.css">
+  <link rel="stylesheet" href="{{_rel}}styles/main.css">
+  <meta property="docfx:navrel" content="{{_navRel}}">
+  <meta property="docfx:tocrel" content="{{_tocRel}}">
+  {{#_noindex}}<meta name="searchOption" content="noindex">{{/_noindex}}
+  {{#_enableSearch}}<meta property="docfx:rel" content="{{_rel}}">{{/_enableSearch}}
+  {{#_enableNewTab}}<meta property="docfx:newtab" content="true">{{/_enableNewTab}}
+
+  <link rel="stylesheet" href="//fonts.googleapis.com/css?family=Lato:400,700%7CMerriweather%7CRoboto+Mono">
+  <link rel="stylesheet" href="/styles/site.css">
+
+</head>
diff --git a/websites/site/lucenetemplate/partials/home-quick-start.tmpl.partial b/websites/site/lucenetemplate/partials/home-quick-start.tmpl.partial
new file mode 100644
index 0000000..1d441dd
--- /dev/null
+++ b/websites/site/lucenetemplate/partials/home-quick-start.tmpl.partial
@@ -0,0 +1,70 @@
+<section id="quick-start" class="home-section">
+<div class="container">
+<div class="row">
+<div class="col-xs-12 col-md-6">
+<p class="text-center">Create an index and define a text analyzer</p>
+<pre class="clean">
+<code class="csharp">// Ensures index backwards compatibility
+var AppLuceneVersion = LuceneVersion.LUCENE_48;
+
+var indexLocation = @"C:\Index";
+var dir = FSDirectory.Open(indexLocation);
+
+//create an analyzer to process the text
+var analyzer = new StandardAnalyzer(AppLuceneVersion);
+
+//create an index writer
+var indexConfig = new IndexWriterConfig(AppLuceneVersion, analyzer);
+var writer = new IndexWriter(dir, indexConfig);
+</code>
+</pre>
+</div>
+<div class="col-xs-12 col-md-6">
+<p class="text-center">Add to the index</p>
+<pre class="clean">
+<code class="csharp">var source = new
+{
+    Name = "Kermit the Frog",
+    FavouritePhrase = "The quick brown fox jumps over the lazy dog"
+};
+var doc = new Document();
+// StringField indexes but doesn't tokenise
+doc.Add(new StringField("name", source.Name, Field.Store.YES));
+
+doc.Add(new TextField("favouritePhrase", source.FavouritePhrase, Field.Store.YES));
+
+writer.AddDocument(doc);
+writer.Flush(triggerMerge: false, applyAllDeletes: false);
+</code>
+</pre>
+</div>
+</div>
+<div class="row">
+<div class="col-xs-12 col-md-6">
+<p class="text-center">Construct a query</p>
+<pre class="clean">
+<code class="csharp">// search with a phrase
+var phrase = new MultiPhraseQuery();
+phrase.Add(new Term("favouritePhrase", "brown"));
+phrase.Add(new Term("favouritePhrase", "fox"));
+</code>
+</pre>
+</div>                    
+<div class="col-xs-12 col-md-6">
+<p class="text-center">Fetch the results</p>
+<pre class="clean">
+<code class="csharp">// re-use the writer to get real-time updates
+var searcher = new IndexSearcher(writer.GetReader(applyAllDeletes: true));
+var hits = searcher.Search(phrase, 20 /* top 20 */).ScoreDocs;
+foreach (var hit in hits)
+{
+&nbsp;&nbsp;&nbsp;&nbsp;var foundDoc = searcher.Doc(hit.Doc);
+&nbsp;&nbsp;&nbsp;&nbsp;hit.Score.Dump("Score");
+&nbsp;&nbsp;&nbsp;&nbsp;foundDoc.Get("name").Dump("Name");
+&nbsp;&nbsp;&nbsp;&nbsp;foundDoc.Get("favouritePhrase").Dump("Favourite Phrase");
+}
+</code>
+</pre>
+</div>
+</div>
+</div>
+</section>
\ No newline at end of file
diff --git a/websites/site/lucenetemplate/partials/navbar.tmpl.partial b/websites/site/lucenetemplate/partials/navbar.tmpl.partial
new file mode 100644
index 0000000..ab8f519
--- /dev/null
+++ b/websites/site/lucenetemplate/partials/navbar.tmpl.partial
@@ -0,0 +1,22 @@
+{{!Copyright (c) Microsoft. All rights reserved. Licensed under the MIT license. See LICENSE file in the project root for full license information.}}
+
+<nav id="autocollapse" class="navbar ng-scope" role="navigation">
+  <div class="container">
+    <div class="navbar-header">
+      <button type="button" class="navbar-toggle" data-toggle="collapse" data-target="#navbar">
+        <span class="sr-only">Toggle navigation</span>
+        <span class="icon-bar"></span>
+        <span class="icon-bar"></span>
+        <span class="icon-bar"></span>
+      </button>
+      {{>partials/logo}}
+    </div>
+    <div class="collapse navbar-collapse" id="navbar">
+      <form class="navbar-form navbar-right" role="search" id="search">
+        <div class="form-group">
+          <input type="text" class="form-control" id="search-query" placeholder="Search" autocomplete="off">
+        </div>
+      </form>
+    </div>
+  </div>
+</nav>
diff --git a/websites/site/lucenetemplate/styles/main.css b/websites/site/lucenetemplate/styles/main.css
new file mode 100644
index 0000000..812bf28
--- /dev/null
+++ b/websites/site/lucenetemplate/styles/main.css
@@ -0,0 +1,73 @@
+/* .navbar-inverse {
+    background: #4a95da;
+    background: rgb(44, 95, 163);
+    background: -moz-linear-gradient(top, rgba(44, 95, 163, 1) 0%, rgba(64, 150, 238, 1) 100%);
+    background: -webkit-linear-gradient(top, rgba(44, 95, 163, 1) 0%, rgba(64, 150, 238, 1) 100%);
+    background: linear-gradient(to bottom, rgba(44, 95, 163, 1) 0%, rgba(64, 150, 238, 1) 100%);
+    filter: progid:DXImageTransform.Microsoft.gradient( startColorstr='#2c5fa3', endColorstr='#4096ee', GradientType=0);
+    border-color:white;
+  }
+  .navbar-inverse .navbar-nav>li>a, .navbar-inverse .navbar-text {
+    color: #fff;
+  }
+  .navbar-inverse .navbar-nav>.active>a {
+      background-color: #1764AA;
+  }
+  .navbar-inverse .navbar-nav>.active>a:focus, .navbar-inverse .navbar-nav>.active>a:hover {
+      background-color: #1764AA;
+  } */
+
+  .btn-primary:hover {
+    background-color: #1764AA;
+}
+button, a {
+    color: #1764AA;
+    /* #0095eb */
+}
+button:hover,
+button:focus,
+a:hover,
+a:focus {
+  color: #143653;
+  text-decoration: none;
+}
+nav.navbar {
+    background-color:white;
+}
+.navbar-brand  {
+    height: 80px;
+}
+.navbar-header .navbar-brand img {    
+    width:300px;
+    height:55px;
+    margin:10px 10px 10px 0px;
+}
+.navbar-toggle .icon-bar{
+    margin-top: 2px;
+    background-color:#0095eb;
+}
+.navbar-toggle {
+    border-color:#0095eb;
+}
+header ul.navbar-nav {
+    /* font-size:1.2em; */
+    float:right;
+    font-weight: 600;
+}
+
+.sidefilter {
+    top:120px;
+}
+
+.sidetoc {
+    top: 180px;
+    background-color:rgb(247, 247, 247);
+}
+
+body .toc {
+    background-color:rgb(247, 247, 247);
+}
+
+.sidefilter {
+    background-color: rgb(247, 247, 247);
+}
\ No newline at end of file
diff --git a/websites/site/lucenetemplate/styles/site.css b/websites/site/lucenetemplate/styles/site.css
new file mode 100644
index 0000000..b4fe7f7
--- /dev/null
+++ b/websites/site/lucenetemplate/styles/site.css
@@ -0,0 +1,131 @@
+/* START From hugo academic css */
+#homepage section {
+    font-family: 'Merriweather', serif;
+    font-size: 16px;
+    line-height: 1.65;
+}
+#homepage pre, #homepage code {
+  font-family: 'Roboto Mono', 'Courier New', 'Courier', monospace;
+}
+#homepage h2, #homepage h3, #homepage h4  {
+    font-family: 'Lato', sans-serif;
+    font-weight: 400;
+    margin-bottom: 1em;
+    line-height: 1.25;
+    color: #313131;
+    text-rendering: optimizeLegibility;
+}
+#homepage h3 {
+    font-weight: 700;
+}
+nav.navbar {
+    font-family: 'Lato', sans-serif;
+    font-weight: 400;
+    line-height: 1.25;
+    text-rendering: optimizeLegibility;
+    font-size: 16px;
+}
+.home-section:first-of-type {
+    padding-top: 50px;
+}
+.home-section:nth-of-type(even) {
+    background-color: rgb(247, 247, 247);
+}
+@media screen and (min-width: 58em) {
+    #homepage section {
+        font-size: 20px;
+    }
+}
+/* END From hugo academic css */
+
+pre.clean {
+    border: none !important;
+    border-radius: 0 !important;
+    background-color: #f8f8f8;
+    overflow: auto;
+    display: block;
+    padding: 9.5px;
+    margin: 0 0 10px;
+    font-size: 13px;
+    line-height: 1.42857143;
+    color: #333;
+    word-break: break-all;
+    word-wrap: break-word;
+}
+
+#intro {
+    margin-top:80px;
+    /* Permalink - use to edit and share this gradient: http://colorzilla.com/gradient-editor/#2c5fa3+0,4096ee+100 */
+    background: rgb(44, 95, 163);
+    /* Old browsers */
+    background: -moz-linear-gradient(top, rgba(44, 95, 163, 1) 0%, rgba(64, 150, 238, 1) 100%);
+    /* FF3.6-15 */
+    background: -webkit-linear-gradient(top, rgba(44, 95, 163, 1) 0%, rgba(64, 150, 238, 1) 100%);
+    /* Chrome10-25,Safari5.1-6 */
+    background: linear-gradient(to bottom, rgba(44, 95, 163, 1) 0%, rgba(64, 150, 238, 1) 100%);
+    /* W3C, IE10+, FF16+, Chrome26+, Opera12+, Safari7+ */
+    filter: progid:DXImageTransform.Microsoft.gradient( startColorstr='#2c5fa3', endColorstr='#4096ee', GradientType=0);
+    /* IE6-9 */
+    color: white;
+}
+#intro p {
+    margin: 0 0 10px;
+    margin-bottom: 2rem;
+}
+
+.project-links {
+    margin-top: 20px;
+    font-size:30px;
+
+    vertical-align: bottom;
+}
+
+.project-links a {
+    color: white;
+}
+
+.project-links a:hover {
+    color: #0095eb;
+    text-decoration: none;
+    transition: color 0.6s ease;
+}
+
+.project-links i {
+    font-size: 1.7em;
+    margin-left: 2rem;
+}
+
+#intro h1, #intro h2, #intro h3, #intro h4, #intro h5 {
+    color: white;
+}
+
+.no-padding {
+    padding: 0 !important;
+    margin: 0 !important;
+}
+
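+/* Dark, console-style box used to display NuGet install commands */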
+.nuget-well {
+    -moz-border-radius: 5px;
+    -webkit-border-radius: 5px;
+    background-color: #202020;
+    border: 4px solid silver;
+    border-radius: 5px;
+    box-shadow: 2px 2px 3px #6e6e6e;
+    color: #e2e2e2;
+    display: block;
+    font: 1em 'andale mono', 'lucida console', monospace;
+    line-height: 1em;
+    overflow: auto;
+    padding: 15px;
+    text-align: center;
+}
+
+.home-section {
+    padding: 4rem 0 4rem 0;
+}
+
+@media screen and (min-width: 700px) {
+    .project-links {
+        margin-top: 4rem;
+    }
+}
\ No newline at end of file
diff --git a/websites/site/lucenetemplate/web.config b/websites/site/lucenetemplate/web.config
new file mode 100644
index 0000000..f646909
--- /dev/null
+++ b/websites/site/lucenetemplate/web.config
@@ -0,0 +1,10 @@
+<?xml version="1.0"?>
+ 
+<configuration>
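+    <!-- DocFx output includes .json files (e.g. the search index); without a registered MIME type IIS refuses to serve them -->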
+    <system.webServer>
+        <staticContent>
+            <mimeMap fileExtension=".json" mimeType="application/json" />
+        </staticContent>
+    </system.webServer>
+</configuration> 
\ No newline at end of file
diff --git a/websites/site/site.ps1 b/websites/site/site.ps1
new file mode 100644
index 0000000..c0f15b3
--- /dev/null
+++ b/websites/site/site.ps1
@@ -0,0 +1,90 @@
+# -----------------------------------------------------------------------------------
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the ""License""); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+# 
+# http://www.apache.org/licenses/LICENSE-2.0
+# 
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# -----------------------------------------------------------------------------------
+
+param (
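+	# ServeDocs: 1 (default) builds the docs and serves them locally for testing; 0 builds the static site only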
+	[Parameter(Mandatory=$false)]
+	[int]
+	$ServeDocs = 1,
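+	# Clean: 1 removes downloaded tools and previously generated output before building; 0 (default) reuses them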
+	[Parameter(Mandatory=$false)]
+	[int]
+	$Clean = 0,
+	# LogLevel can be: Diagnostic, Verbose, Info, Warning, Error
+	[Parameter(Mandatory=$false)]
+	[string]
+	$LogLevel = 'Info'
+)
+
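+# GitHub requires TLS 1.2, which older PowerShell sessions do not enable by default; needed for the docfx download below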
+[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
+
+$PSScriptFilePath = (Get-Item $MyInvocation.MyCommand.Path).FullName
+$RepoRoot = (get-item $PSScriptFilePath).Directory.Parent.Parent.FullName;
+$SiteFolder = Join-Path -Path $RepoRoot -ChildPath "websites\site";
+$ToolsFolder = Join-Path -Path $SiteFolder -ChildPath "tools";
+#ensure the websites/site/tools folder exists
+New-Item $ToolsFolder -type directory -force
+
+if ($Clean -eq 1) {
+	Write-Host "Cleaning tools..."
+	Remove-Item (Join-Path -Path $ToolsFolder "\*") -recurse -force -ErrorAction SilentlyContinue
+}
+
+New-Item "$ToolsFolder\tmp" -type directory -force
+
+# Go get docfx.exe if we don't have it
+New-Item "$ToolsFolder\docfx" -type directory -force
+$DocFxExe = "$ToolsFolder\docfx\docfx.exe"
+if (-not (test-path $DocFxExe))
+{
+	Write-Host "Retrieving docfx..."
+	$DocFxZip = "$ToolsFolder\tmp\docfx.zip"
+	Invoke-WebRequest "https://github.com/dotnet/docfx/releases/download/v2.38.1/docfx.zip" -OutFile $DocFxZip -TimeoutSec 60 
+	#unzip
+	Expand-Archive $DocFxZip -DestinationPath (Join-Path -Path $ToolsFolder -ChildPath "docfx")
+}
+
+Remove-Item -Recurse -Force "$ToolsFolder\tmp"
+
+# delete anything that already exists
+if ($Clean -eq 1) {
+	Write-Host "Cleaning..."
+	Remove-Item (Join-Path -Path $SiteFolder "_site\*") -recurse -force -ErrorAction SilentlyContinue
+	Remove-Item (Join-Path -Path $SiteFolder "_site") -force -ErrorAction SilentlyContinue
+	Remove-Item (Join-Path -Path $SiteFolder "obj\*") -recurse -force -ErrorAction SilentlyContinue
+	Remove-Item (Join-Path -Path $SiteFolder "obj") -force -ErrorAction SilentlyContinue
+}
+
+$DocFxJson = Join-Path -Path $SiteFolder "docfx.json"
+$DocFxLog = Join-Path -Path $SiteFolder "obj\docfx.log"
+
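+# $? is $true only when the preceding command succeeded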
+if ($?) {
+	if ($ServeDocs -eq 0){
+		# build the output		
+		Write-Host "Building docs..."
+		& $DocFxExe build $DocFxJson -l "$DocFxLog" --loglevel $LogLevel
+	}
+	else {
+		# build + serve (for testing)
+		Write-Host "Starting website..."
+		& $DocFxExe $DocFxJson --serve
+	}
+}
\ No newline at end of file
diff --git a/websites/site/toc.yml b/websites/site/toc.yml
new file mode 100644
index 0000000..b8b211a
--- /dev/null
+++ b/websites/site/toc.yml
@@ -0,0 +1,13 @@
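+# Main site navigation (DocFx toc.yml): href is the link target, topicHref the page rendered for the entry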
+- name: About
+  href: /#about
+- name: Quick start
+  href: /#quick-start
+- name: Download
+  href: download/
+  topicHref: download/download.md
+- name: Documentation
+  topicHref: docs.md
+- name: Contributing
+  href: contributing/
+  topicHref: contributing/index.md
\ No newline at end of file

