lucene-dev mailing list archives

From "Eks Dev (JIRA)" <>
Subject [jira] [Commented] (SOLR-4117) IO error while trying to get the size of the Directory
Date Wed, 28 Nov 2012 15:20:58 GMT


Eks Dev commented on SOLR-4117:

fwiw, we *think* we observed the following problem in a simple master-slave setup with NRTCachingDirectory...
I am not sure it has anything to do with this issue, because we did not see this exception; anyhow:

on replication, the slave gets the index from the master and works fine, then on:
1. graceful restart, the world looks fine
2. kill -9 or such, Solr does not start because the index gets corrupted (it actually should not)

We speculate that Solr now replicates directly to the Directory implementation and does
not ensure that replicated files get fsync-ed completely after replication. As far as I remember,
replication used to go to /temp (on disk) and then move the files if all went OK, working under the
assumption that everything was already persisted. Maybe this invariant does not hold any more
and some explicit fsync is needed for caching directories?

I might be completely wrong; we just observed the symptoms in a not very debug-friendly environment.
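The "copy to temp, fsync, then move" pattern described above can be sketched roughly like this (plain java.nio, not Solr's actual replication code; class and method names here are illustrative only):

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.nio.file.StandardOpenOption;

// Hypothetical sketch of durable file replication: copy to a temp name,
// fsync the bytes, then atomically rename into the index directory.
public class DurableCopy {
    public static void replicate(Path source, Path indexDir) throws IOException {
        Path tmp = indexDir.resolve(source.getFileName() + ".tmp");
        Files.copy(source, tmp, StandardCopyOption.REPLACE_EXISTING);
        // Explicitly fsync the temp file so its bytes are on disk
        // before the file becomes visible under its final name.
        try (FileChannel ch = FileChannel.open(tmp, StandardOpenOption.WRITE)) {
            ch.force(true);
        }
        // Atomic rename: after a kill -9, the directory holds either the
        // old file or the fully persisted new one, never a torn copy.
        Files.move(tmp, indexDir.resolve(source.getFileName()),
                StandardCopyOption.ATOMIC_MOVE);
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("index");
        Path src = Files.createTempDirectory("master").resolve("_0.cfs");
        Files.write(src, new byte[]{1, 2, 3});
        replicate(src, dir);
        System.out.println(Files.size(dir.resolve("_0.cfs"))); // prints 3
    }
}
```

If replication skips the fsync step (e.g. because writes stay in an in-memory cache like NRTCachingDirectory), a hard kill between rename and cache flush could leave exactly the kind of partially written index speculated about above.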

> IO error while trying to get the size of the Directory
> ------------------------------------------------------
>                 Key: SOLR-4117
>                 URL:
>             Project: Solr
>          Issue Type: Bug
>          Components: SolrCloud
>    Affects Versions: 5.0
>         Environment:
> Debian Squeeze, Tomcat 6, Sun Java 6, 10 nodes, 10 shards, rep. factor 2.
>            Reporter: Markus Jelsma
>            Assignee: Mark Miller
>            Priority: Minor
>             Fix For: 5.0
> With SOLR-4032 fixed we see other issues when randomly taking down nodes (nicely via
> tomcat restart) while indexing a few million web pages from Hadoop. We do make sure that at
> least one node is up for a shard but due to recovery issues it may not be live.
> One node seems to work but generates IO errors in the log and ZookeeperException in the
> GUI. In the GUI we only see:
> {code}
> SolrCore Initialization Failures
>     openindex_f:

> Please check your logs for more information
> {code}
> and in the log we only see the following exception:
> {code}
> 2012-11-28 11:47:26,652 ERROR [solr.handler.ReplicationHandler] - [http-8080-exec-28]
> - : IO error while trying to get the size of the Directory: directory
> '/opt/solr/cores/shard_f/data/index' does not exist
>         at
>         at
>         at
>         at org.apache.solr.core.DirectoryFactory.sizeOfDirectory(
>         at org.apache.solr.handler.ReplicationHandler.getIndexSize(
>         at org.apache.solr.handler.ReplicationHandler.getReplicationDetails(
>         at org.apache.solr.handler.ReplicationHandler.handleRequestBody(
>         at org.apache.solr.handler.RequestHandlerBase.handleRequest(
>         at org.apache.solr.core.RequestHandlers$LazyRequestHandlerWrapper.handleRequest(
>         at org.apache.solr.core.SolrCore.execute(
>         at org.apache.solr.servlet.SolrDispatchFilter.execute(
>         at org.apache.solr.servlet.SolrDispatchFilter.doFilter(
>         at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(
>         at org.apache.catalina.core.ApplicationFilterChain.doFilter(
>         at org.apache.catalina.core.StandardWrapperValve.invoke(
>         at org.apache.catalina.core.StandardContextValve.invoke(
>         at org.apache.catalina.core.StandardHostValve.invoke(
>         at org.apache.catalina.valves.ErrorReportValve.invoke(
>         at org.apache.catalina.core.StandardEngineValve.invoke(
>         at org.apache.catalina.connector.CoyoteAdapter.service(
>         at org.apache.coyote.http11.Http11NioProcessor.process(
>         at org.apache.coyote.http11.Http11NioProtocol$Http11ConnectionHandler.process(
>         at$
>         at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(
>         at java.util.concurrent.ThreadPoolExecutor$
>         at
> {code}
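The trace above shows sizeOfDirectory failing because the index directory is missing entirely. A defensive version would check for existence before walking the tree — a sketch only, under the assumption that "directory absent" should report zero rather than throw; this is not ReplicationHandler's actual code:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

// Illustrative guard for the failure above: return 0 for a not-yet-created
// index directory instead of throwing while computing its size.
public class IndexSize {
    public static long sizeOfDirectory(Path dir) throws IOException {
        if (!Files.isDirectory(dir)) {
            return 0L; // directory does not exist yet -> nothing to size
        }
        try (Stream<Path> files = Files.walk(dir)) {
            return files.filter(Files::isRegularFile)
                        .mapToLong(p -> {
                            try {
                                return Files.size(p);
                            } catch (IOException e) {
                                throw new UncheckedIOException(e);
                            }
                        })
                        .sum();
        }
    }
}
```

With a guard like this, a node whose core failed to initialize would still answer replication-details requests instead of logging the exception above.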


