lucene-dev mailing list archives

From "Ganesh (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (SOLR-9466) During concurrency some Solr document are not seen even after soft and hard commit
Date Thu, 01 Sep 2016 14:29:21 GMT

    [ https://issues.apache.org/jira/browse/SOLR-9466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15455564#comment-15455564 ]

Ganesh commented on SOLR-9466:
------------------------------

Hi Shawn, Thanks for your reply.

Regarding the cache autowarm count: we have disabled most of the caches, and for the filter cache we have set autowarmCount to 0.
We have also set Tomcat's maxThreads to 5000.

We are in the process of upgrading to a newer version in our development environment; validating our product on the new version will take another 3 to 4 weeks.
Until then we need to support our production environment on version 4.10.2, so we badly need some help with this.

Do you think increasing Tomcat's maxThreads from 5000 to 10000 would help us here? We have already set autowarmCount to zero. To give a little background on our use case: our application can hit the Solr server with roughly 50 to 100 threads in parallel for adding / updating documents.
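
For reference, the Tomcat setting in question is the maxThreads attribute on the HTTP connector in conf/server.xml. A minimal sketch of that connector follows; the port matches the 7070 port our Solr runs on (as seen in the logs below), but the other attribute values are placeholders rather than actual production settings.

<!-- sketch only: everything except maxThreads is a placeholder value -->
<Connector port="7070" protocol="HTTP/1.1"
           connectionTimeout="20000"
           maxThreads="5000" />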

I have pasted my solrconfig here for reference. Please let us know if any configuration change
would help us get rid of these missing documents.

<?xml version="1.0" encoding="UTF-8" ?>
<!--  Licensed to the Apache Software Foundation (ASF) under one or more  contributor license
agreements.  See the NOTICE file distributed with  this work for additional information regarding
copyright ownership.  The ASF licenses this file to You under the Apache License, Version
2.0  (the "License"); you may not use this file except in compliance with  the License.  You
may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0  Unless required
by applicable law or agreed to in writing, software  distributed under the License is distributed
on an "AS IS" BASIS,  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 See the License for the specific language governing permissions and  limitations under the
License. -->
<config>
	<luceneMatchVersion>LUCENE_42</luceneMatchVersion>
	<lib dir="../../../contrib/extraction/lib" regex=".*\.jar" />
	<lib dir="../../../dist/" regex="solr-cell-\d.*\.jar" />
	<lib dir="../../../contrib/clustering/lib/" regex=".*\.jar" />
	<lib dir="../../../dist/" regex="solr-clustering-\d.*\.jar" />
	<lib dir="../../../contrib/langid/lib/" regex=".*\.jar" />
	<lib dir="../../../dist/" regex="solr-langid-\d.*\.jar" />
	<lib dir="../../../contrib/velocity/lib" regex=".*\.jar" />
	<lib dir="../../../dist/" regex="solr-velocity-\d.*\.jar" />
	<lib dir="/total/crap/dir/ignored" />
	<dataDir>${solr.data.dir:}</dataDir>
	<directoryFactory name="DirectoryFactory"  class="${solr.directoryFactory:solr.NRTCachingDirectoryFactory}"/>
	<codecFactory class="solr.SchemaCodecFactory"/>
	<indexConfig>
		<lockType>${solr.lock.type:native}</lockType>
	</indexConfig>
<jmx />
<updateHandler class="solr.DirectUpdateHandler2">
	<updateLog>
		<str name="dir">${solr.ulog.dir:}</str>
	</updateLog>
	<autoCommit>
		<maxTime>30000</maxTime>
		<openSearcher>false</openSearcher>
	</autoCommit>
	<autoSoftCommit>
		<maxTime>1000</maxTime>
	</autoSoftCommit>
</updateHandler>
<query>
	<maxBooleanClauses>1024</maxBooleanClauses>
<filterCache class="solr.FastLRUCache" size="256" initialSize="128" autowarmCount="0"/>
<enableLazyFieldLoading>false</enableLazyFieldLoading>
<queryResultWindowSize>20</queryResultWindowSize>
<queryResultMaxDocsCached>50</queryResultMaxDocsCached>
<listener event="newSearcher" class="solr.QuerySenderListener">
	<arr name="queries"/>
</listener>
<listener event="firstSearcher" class="solr.QuerySenderListener">
	<arr name="queries">
		<lst>
			<str name="q">static firstSearcher warming in solrconfig.xml</str>
		</lst>
	</arr>
</listener>
<useColdSearcher>false</useColdSearcher>
<maxWarmingSearchers>2</maxWarmingSearchers>
</query>
<requestDispatcher handleSelect="false" >
	<requestParsers enableRemoteStreaming="true"  multipartUploadLimitInKB="2048000" formdataUploadLimitInKB="2048"/>
	<httpCaching never304="true" />
</requestDispatcher>
<requestHandler name="/select" class="solr.SearchHandler">
	<lst name="defaults">
		<str name="echoParams">explicit</str>
		<int name="rows">10</int>
		<str name="df">text</str>
	</lst>
</requestHandler>
<requestHandler name="/query" class="solr.SearchHandler">
	<lst name="defaults">
		<str name="echoParams">explicit</str>
		<str name="wt">json</str>
		<str name="indent">true</str>
		<str name="df">text</str>
	</lst>
</requestHandler>
<requestHandler name="/get" class="solr.RealTimeGetHandler">
	<lst name="defaults">
		<str name="omitHeader">true</str>
		<str name="wt">json</str>
		<str name="indent">true</str>
	</lst>
</requestHandler>

<requestHandler name="/browse" class="solr.SearchHandler">
	<lst name="defaults">
		<str name="echoParams">explicit</str>
		<!-- VelocityResponseWriter settings -->
		<str name="wt">velocity</str>
		<str name="v.template">browse</str>
		<str name="v.layout">layout</str>
		<str name="title">Solritas</str>
		<!-- Query settings -->
		<str name="defType">edismax</str>
		<str name="qf">text^0.5 features^1.0 name^1.2 sku^1.5 id^10.0 manu^1.1 cat^1.4 title^10.0
description^5.0 keywords^5.0 author^2.0 resourcename^1.0</str>
		<str name="df">text</str>
		<str name="mm">100%</str>
		<str name="q.alt">*:*</str>
		<str name="rows">10</str>
		<str name="fl">*,score</str>
		<str name="mlt.qf">text^0.5 features^1.0 name^1.2 sku^1.5 id^10.0 manu^1.1 cat^1.4
title^10.0 description^5.0 keywords^5.0 author^2.0 resourcename^1.0</str>
		<str name="mlt.fl">text,features,name,sku,id,manu,cat,title,description,keywords,author,resourcename</str>
		<int name="mlt.count">3</int>
		<!-- Faceting defaults -->
		<str name="facet">on</str>
		<str name="facet.field">cat</str>
		<str name="facet.field">manu_exact</str>
		<str name="facet.field">content_type</str>
		<str name="facet.field">author_s</str>
		<str name="facet.query">ipod</str>
		<str name="facet.query">GB</str>
		<str name="facet.mincount">1</str>
		<str name="facet.pivot">cat,inStock</str>
		<str name="facet.range.other">after</str>
		<str name="facet.range">price</str>
		<int name="f.price.facet.range.start">0</int>
		<int name="f.price.facet.range.end">600</int>
		<int name="f.price.facet.range.gap">50</int>
		<str name="facet.range">popularity</str>
		<int name="f.popularity.facet.range.start">0</int>
		<int name="f.popularity.facet.range.end">10</int>
		<int name="f.popularity.facet.range.gap">3</int>
		<str name="facet.range">manufacturedate_dt</str>
		<str name="f.manufacturedate_dt.facet.range.start">NOW/YEAR-10YEARS</str>
		<str name="f.manufacturedate_dt.facet.range.end">NOW</str>
		<str name="f.manufacturedate_dt.facet.range.gap">+1YEAR</str>
		<str name="f.manufacturedate_dt.facet.range.other">before</str>
		<str name="f.manufacturedate_dt.facet.range.other">after</str>
		<!-- Highlighting defaults -->
		<str name="hl">on</str>
		<str name="hl.fl">content features title name</str>
		<str name="hl.encoder">html</str>
		<str name="hl.simple.pre">&lt;b&gt;</str>
		<str name="hl.simple.post">&lt;/b&gt;</str>
		<str name="f.title.hl.fragsize">0</str>
		<str name="f.title.hl.alternateField">title</str>
		<str name="f.name.hl.fragsize">0</str>
		<str name="f.name.hl.alternateField">name</str>
		<str name="f.content.hl.snippets">3</str>
		<str name="f.content.hl.fragsize">200</str>
		<str name="f.content.hl.alternateField">content</str>
		<str name="f.content.hl.maxAlternateFieldLength">750</str>
		<!-- Spell checking defaults -->
		<str name="spellcheck">on</str>
		<str name="spellcheck.extendedResults">false</str>
		<str name="spellcheck.count">5</str>
		<str name="spellcheck.alternativeTermCount">2</str>
		<str name="spellcheck.maxResultsForSuggest">5</str>
		<str name="spellcheck.collate">true</str>
		<str name="spellcheck.collateExtendedResults">true</str>
		<str name="spellcheck.maxCollationTries">5</str>
		<str name="spellcheck.maxCollations">3</str>
	</lst>
	<!-- append spellchecking to our list of components -->
	<arr name="last-components">
		<str>spellcheck</str>
	</arr>
</requestHandler>
<requestHandler name="/update" class="solr.UpdateRequestHandler"/>
<requestHandler name="/update/json" class="solr.JsonUpdateRequestHandler">
	<lst name="defaults">
		<str name="stream.contentType">application/json</str>
	</lst>
</requestHandler>
<requestHandler name="/update/csv" class="solr.CSVRequestHandler">
	<lst name="defaults">
		<str name="stream.contentType">application/csv</str>
	</lst>
</requestHandler>
<requestHandler name="/update/extract"  startup="lazy" class="solr.extraction.ExtractingRequestHandler"
>
	<lst name="defaults">
		<str name="lowernames">true</str>
		<str name="uprefix">ignored_</str>
		<!-- capture link hrefs but ignore div attributes -->
		<str name="captureAttr">true</str>
		<str name="fmap.a">links</str>
		<str name="fmap.div">ignored_</str>
	</lst>
</requestHandler>
<requestHandler name="/analysis/field"  startup="lazy" class="solr.FieldAnalysisRequestHandler"
/>
<requestHandler name="/analysis/document"  class="solr.DocumentAnalysisRequestHandler"
 startup="lazy" />
<requestHandler name="/admin/"  class="solr.admin.AdminHandlers" />
<requestHandler name="/admin/ping" class="solr.PingRequestHandler">
	<lst name="invariants">
		<str name="q">solrpingquery</str>
	</lst>
	<lst name="defaults">
		<str name="echoParams">all</str>
	</lst>
</requestHandler>
<!-- Echo the request contents back to the client -->
<requestHandler name="/debug/dump" class="solr.DumpRequestHandler" >
	<lst name="defaults">
		<str name="echoParams">explicit</str>
		<str name="echoHandler">true</str>
	</lst>
</requestHandler>
<requestHandler name="/replication" class="solr.ReplicationHandler" > </requestHandler>
<searchComponent name="spellcheck" class="solr.SpellCheckComponent">
	<str name="queryAnalyzerFieldType">textSpell</str>
	<!-- a spellchecker built from a field of the main index -->
	<lst name="spellchecker">
		<str name="name">default</str>
		<str name="field">name</str>
		<str name="classname">solr.DirectSolrSpellChecker</str>
		<!-- the spellcheck distance measure used, the default is the internal levenshtein -->
		<str name="distanceMeasure">internal</str>
		<!-- minimum accuracy needed to be considered a valid spellcheck suggestion -->
		<float name="accuracy">0.5</float>
		<!-- the maximum #edits we consider when enumerating terms: can be 1 or 2 -->
		<int name="maxEdits">2</int>
		<!-- the minimum shared prefix when enumerating terms -->
		<int name="minPrefix">1</int>
		<!-- maximum number of inspections per result. -->
		<int name="maxInspections">5</int>
		<!-- minimum length of a query term to be considered for correction -->
		<int name="minQueryLength">4</int>
		<!-- maximum threshold of documents a query term can appear in to be considered for correction -->
		<float name="maxQueryFrequency">0.01</float>
		<!-- uncomment this to require suggestions to occur in 1% of the documents<float name="thresholdTokenFrequency">.01</float>-->
	</lst>
	<!-- a spellchecker that can break or combine words.  See "/spell" handler below for usage -->
	<lst name="spellchecker">
		<str name="name">wordbreak</str>
		<str name="classname">solr.WordBreakSolrSpellChecker</str>
		<str name="field">name</str>
		<str name="combineWords">true</str>
		<str name="breakWords">true</str>
		<int name="maxChanges">10</int>
	</lst>
</searchComponent>
<requestHandler name="/spell" class="solr.SearchHandler" startup="lazy">
	<lst name="defaults">
		<str name="df">text</str>
		<str name="spellcheck.dictionary">default</str>
		<str name="spellcheck.dictionary">wordbreak</str>
		<str name="spellcheck">on</str>
		<str name="spellcheck.extendedResults">true</str>
		<str name="spellcheck.count">10</str>
		<str name="spellcheck.alternativeTermCount">5</str>
		<str name="spellcheck.maxResultsForSuggest">5</str>
		<str name="spellcheck.collate">true</str>
		<str name="spellcheck.collateExtendedResults">true</str>
		<str name="spellcheck.maxCollationTries">10</str>
		<str name="spellcheck.maxCollations">5</str>
	</lst>
	<arr name="last-components">
		<str>spellcheck</str>
	</arr>
</requestHandler>
<searchComponent name="tvComponent" class="solr.TermVectorComponent"/>
<requestHandler name="/tvrh" class="solr.SearchHandler" startup="lazy">
	<lst name="defaults">
		<str name="df">text</str>
		<bool name="tv">true</bool>
	</lst>
	<arr name="last-components">
		<str>tvComponent</str>
	</arr>
</requestHandler>
<searchComponent name="clustering" enable="${solr.clustering.enabled:false}" class="solr.clustering.ClusteringComponent"
>
	<!-- Declare an engine -->
	<lst name="engine">
		<!-- The name, only one can be named "default" -->
		<str name="name">default</str>
		<str name="carrot.algorithm">org.carrot2.clustering.lingo.LingoClusteringAlgorithm</str>
		<str name="LingoClusteringAlgorithm.desiredClusterCountBase">20</str>
		<str name="carrot.lexicalResourcesDir">clustering/carrot2</str>
		<str name="MultilingualClustering.defaultLanguage">ENGLISH</str>
	</lst>
	<lst name="engine">
		<str name="name">stc</str>
		<str name="carrot.algorithm">org.carrot2.clustering.stc.STCClusteringAlgorithm</str>
	</lst>
</searchComponent>
<requestHandler name="/clustering" startup="lazy" enable="${solr.clustering.enabled:false}"
class="solr.SearchHandler">
	<lst name="defaults">
		<bool name="clustering">true</bool>
		<str name="clustering.engine">default</str>
		<bool name="clustering.results">true</bool>
		<!-- The title field -->
		<str name="carrot.title">name</str>
		<str name="carrot.url">id</str>
		<!-- The field to cluster on -->
		<str name="carrot.snippet">features</str>
		<!-- produce summaries -->
		<bool name="carrot.produceSummary">true</bool>
		<!-- the maximum number of labels per cluster -->
		<!--<int name="carrot.numDescriptions">5</int>-->
		<!-- produce sub clusters -->
		<bool name="carrot.outputSubClusters">false</bool>
		<str name="defType">edismax</str>
		<str name="qf">text^0.5 features^1.0 name^1.2 sku^1.5 id^10.0 manu^1.1 cat^1.4</str>
		<str name="q.alt">*:*</str>
		<str name="rows">10</str>
		<str name="fl">*,score</str>
	</lst>
	<arr name="last-components">
		<str>clustering</str>
	</arr>
</requestHandler>
<searchComponent name="terms" class="solr.TermsComponent"/>
<!-- A request handler for demonstrating the terms component -->
<requestHandler name="/terms" class="solr.SearchHandler" startup="lazy">
	<lst name="defaults">
		<bool name="terms">true</bool>
		<bool name="distrib">false</bool>
	</lst>
	<arr name="components">
		<str>terms</str>
	</arr>
</requestHandler>
<searchComponent name="elevator" class="solr.QueryElevationComponent" >
	<!-- pick a fieldType to analyze queries -->
	<str name="queryFieldType">string</str>
	<str name="config-file">elevate.xml</str>
</searchComponent>
<!-- A request handler for demonstrating the elevator component -->
<requestHandler name="/elevate" class="solr.SearchHandler" startup="lazy">
	<lst name="defaults">
		<str name="echoParams">explicit</str>
		<str name="df">text</str>
	</lst>
	<arr name="last-components">
		<str>elevator</str>
	</arr>
</requestHandler>
<searchComponent class="solr.HighlightComponent" name="highlight">
	<highlighting>
		<fragmenter name="gap"  default="true" class="solr.highlight.GapFragmenter">
			<lst name="defaults">
				<int name="hl.fragsize">100</int>
			</lst>
		</fragmenter>
		<fragmenter name="regex"  class="solr.highlight.RegexFragmenter">
			<lst name="defaults">
				<!-- slightly smaller fragsizes work better because of slop -->
				<int name="hl.fragsize">70</int>
				<!-- allow 50% slop on fragment sizes -->
				<float name="hl.regex.slop">0.5</float>
				<!-- a basic sentence pattern -->
				<str name="hl.regex.pattern">[-\w ,/\n\&quot;&apos;]{20,200}</str>
			</lst>
		</fragmenter>
		<!-- Configure the standard formatter -->
		<formatter name="html"  default="true" class="solr.highlight.HtmlFormatter">
			<lst name="defaults">
				<str name="hl.simple.pre">
					<![CDATA[<em>]]>
				</str>
				<str name="hl.simple.post">
					<![CDATA[</em>]]>
				</str>
			</lst>
		</formatter>
		<!-- Configure the standard encoder -->
		<encoder name="html"  class="solr.highlight.HtmlEncoder" />
		<!-- Configure the standard fragListBuilder -->
		<fragListBuilder name="simple"  class="solr.highlight.SimpleFragListBuilder"/>
		<!-- Configure the single fragListBuilder -->
		<fragListBuilder name="single"  class="solr.highlight.SingleFragListBuilder"/>
		<!-- Configure the weighted fragListBuilder -->
		<fragListBuilder name="weighted"  default="true" class="solr.highlight.WeightedFragListBuilder"/>
		<!-- default tag FragmentsBuilder -->
		<fragmentsBuilder name="default"  default="true" class="solr.highlight.ScoreOrderFragmentsBuilder">
			<!--  </fragmentsBuilder><!-- multi-colored tag FragmentsBuilder -->
			<fragmentsBuilder name="colored"  class="solr.highlight.ScoreOrderFragmentsBuilder">
				<lst name="defaults">
					<str name="hl.tag.pre">
						<![CDATA[<b style="background:yellow">,<b style="background:lawgreen">,<b
style="background:aquamarine">,<b style="background:magenta">,<b style="background:palegreen">,<b
style="background:coral">,<b style="background:wheat">,<b style="background:khaki">,<b
style="background:lime">,<b style="background:deepskyblue">]]>
					</str>
					<str name="hl.tag.post">
						<![CDATA[</b>]]>
					</str>
				</lst>
			</fragmentsBuilder>
			<boundaryScanner name="default"  default="true" class="solr.highlight.SimpleBoundaryScanner">
				<lst name="defaults">
					<str name="hl.bs.maxScan">10</str>
					<str name="hl.bs.chars">.,!? &#9;&#10;&#13;</str>
				</lst>
			</boundaryScanner>
			<boundaryScanner name="breakIterator"  class="solr.highlight.BreakIteratorBoundaryScanner">
				<lst name="defaults">
					<!-- type should be one of CHARACTER, WORD(default), LINE and SENTENCE -->
					<str name="hl.bs.type">WORD</str>
					<!-- language and country are used when constructing Locale object.  -->
					<!-- And the Locale object will be used when getting instance of BreakIterator -->
					<str name="hl.bs.language">en</str>
					<str name="hl.bs.country">US</str>
				</lst>
			</boundaryScanner>
		</highlighting>
	</searchComponent>
	<queryResponseWriter name="xml" class="solr.XMLResponseWriter" />
	<queryResponseWriter default="true" name="PWxml" class="com.vg.pw.solr.PWXMLOperation.PWXMLResponseWriter"
/>
	<queryResponseWriter name="json" class="solr.JSONResponseWriter"/>
	<queryResponseWriter name="python" class="solr.PythonResponseWriter"/>
	<queryResponseWriter name="ruby" class="solr.RubyResponseWriter"/>
	<queryResponseWriter name="php" class="solr.PHPResponseWriter"/>
	<queryResponseWriter name="phps" class="solr.PHPSerializedResponseWriter"/>
	<queryResponseWriter name="csv" class="solr.CSVResponseWriter"/>
	<queryResponseWriter name="PWcsv" class="com.vg.pw.solr.csv.response.PWCSVResponseWriter"
/>
	<queryResponseWriter name="json" class="solr.JSONResponseWriter">
		<str name="content-type">text/plain; charset=UTF-8</str>
	</queryResponseWriter>
	<queryResponseWriter name="velocity" class="solr.VelocityResponseWriter" startup="lazy"/>
	<queryResponseWriter name="xslt" class="solr.XSLTResponseWriter">
		<int name="xsltCacheLifetimeSeconds">5</int>
	</queryResponseWriter>
	<admin>
		<defaultQuery>*:*</defaultQuery>
	</admin>
</config>
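
For completeness, a minimal sketch of the update XML that the /update handler above accepts; the document id below is made up purely for illustration, and the optional commitWithin attribute is just one way to rule out commit timing while reproducing a missing document.

<!-- illustrative only: the id value is hypothetical -->
<add commitWithin="10000">
	<doc>
		<field name="id">EXAMPLE-DOC-0001</field>
	</doc>
</add>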

> During concurrency some Solr document are not seen even after soft and hard commit
> ----------------------------------------------------------------------------------
>
>                 Key: SOLR-9466
>                 URL: https://issues.apache.org/jira/browse/SOLR-9466
>             Project: Solr
>          Issue Type: Bug
>      Security Level: Public(Default Security Level. Issues are Public) 
>          Components: SolrCloud
>    Affects Versions: 4.10.2
>         Environment: Cent OS
>            Reporter: Ganesh
>            Priority: Critical
>
> Solr cloud with 2 nodes (master-master), with 5 collections and 2 shards in each collection.

> During concurrent usage of SOLR, where both updates and searches are sent to the SOLR server, some of our updates / newly added documents are getting lost.
> We could see the update hitting Solr, and we could see it in Tomcat's localhost_access file as well as in catalina.out, but we still couldn't see that record while searching.
> Following are the catalina.out logs for a document which is getting indexed properly.
> Sep 01, 2016 7:39:31 AM org.apache.solr.update.processor.LogUpdateProcessor processAdd
> FINE: PRE_UPDATE add{,id=CUA0000004390000019223370564139207241C3LEA0000020769223370567404392838EXCC3000001}
{{params(crid=CUA0000004390000019223370564139207241C3LEA0000020769223370567404392838EXCC3000001),defaults(wt=xml)}}
> Sep 01, 2016 7:39:31 AM org.apache.solr.update.TransactionLog <init>
> FINE: New TransactionLog file=/ebdata2/solrdata/IOB_shard1_replica1/data/tlog/tlog.0000000000000220856,
exists=false, size=0, openExisting=false
> Sep 01, 2016 7:39:31 AM org.apache.solr.update.SolrCmdDistributor submit
> FINE: sending update to http://xx.xx.xx.xx:7070/solr/IOB_shard1_replica2/ retry:0 add{_version_=1544254202941800448,id=CUA0000004390000019223370564139207241C3LEA0000020769223370567404392838EXCC3000001}
params:update.distrib=FROMLEADER&distrib.from=http%3A%2F%2Fxx.xx.xx.xx%3A7070%2Fsolr%2FIOB_shard1_replica1%2F
> Sep 01, 2016 7:39:31 AM org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrServer$Runner
run
> FINE: starting runner: org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrServer$Runner@3fb794b2
> Sep 01, 2016 7:39:31 AM org.apache.solr.update.processor.LogUpdateProcessor finish
> FINE: PRE_UPDATE FINISH {{params(crid=CUA0000004390000019223370564139207241C3LEA0000020769223370567404392838EXCC3000001),defaults(wt=xml)}}
> Sep 01, 2016 7:39:31 AM org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrServer$Runner
run
> FINE: finished: org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrServer$Runner@3fb794b2
> Sep 01, 2016 7:39:31 AM org.apache.solr.update.processor.LogUpdateProcessor finish
> INFO: [IOB_shard1_replica1] webapp=/solr path=/update params={crid=CUA0000004390000019223370564139207241C3LEA0000020769223370567404392838EXCC3000001}
{add=[CUA0000004390000019223370564139207241C3LEA0000020769223370567404392838EXCC3000001 (1544254202941800448)]}
0 9
> Sep 01, 2016 7:39:31 AM org.apache.solr.servlet.SolrDispatchFilter doFilter
> FINE: Closing out SolrRequest: {{params(crid=CUA0000004390000019223370564139207241C3LEA0000020769223370567404392838EXCC3000001),defaults(wt=xml)}}
> For the document which is not getting indexed, we could see only the following log in catalina.out. We are not sure whether it is getting added to SOLR.
> Sep 01, 2016 7:39:56 AM org.apache.solr.update.processor.LogUpdateProcessor finish
> FINE: PRE_UPDATE FINISH {{params(crid=CUA0000004390000019223370564139182810C3LEA0000020179223370567061972057EXCC1000002),defaults(wt=xml)}}
> Sep 01, 2016 7:39:56 AM org.apache.solr.update.processor.LogUpdateProcessor finish
> INFO: [IOB_shard1_replica1] webapp=/solr path=/update params={crid=CUA0000004390000019223370564139182810C3LEA0000020179223370567061972057EXCC1000002}
{} 0 1
> Sep 01, 2016 7:39:56 AM org.apache.solr.servlet.SolrDispatchFilter doFilter
> FINE: Closing out SolrRequest: {{params(crid=CUA0000004390000019223370564139182810C3LEA0000020179223370567061972057EXCC1000002),defaults(wt=xml)}}
> We have set auto soft commit to 1 second and auto hard commit to 30 seconds.
> We are not getting any errors or exceptions in the log. 





