manifoldcf-dev mailing list archives

From "Karl Wright (JIRA)" <>
Subject [jira] [Resolved] (CONNECTORS-1312) jcifs.smb.SmbException: Connection reset by peer: socket write error
Date Sat, 07 May 2016 19:57:12 GMT


Karl Wright resolved CONNECTORS-1312.
       Resolution: Fixed
         Assignee: Karl Wright
    Fix Version/s: ManifoldCF 2.5


I've committed this, but it is not clear to me that the problem is transient on your
setup. If it is not transient, the document will retry repeatedly and the job will
still abort eventually.

It is in general a bad idea to knowingly stress Windows servers, because they fail
in a myriad of different ways when you do that. We can fix this issue or that one,
but it's like putting fingers in a dike.
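For context, the committed fix amounts to treating this particular failure as retryable rather than fatal. A minimal, self-contained sketch of that classification logic is below; the class and method names are illustrative stand-ins, not ManifoldCF's actual connector API, and the matched message substrings are assumptions based on the exception in this ticket.

```java
// Hedged sketch: classify a JCIFS error message as transient (retry the
// document later) or permanent (let the job abort). Names and the retry
// policy constants are hypothetical, not ManifoldCF's real API.
public class RetryClassifier {

    // Hypothetical retry policy: up to 3 attempts, 5 minutes apart.
    static final int MAX_RETRIES = 3;
    static final long RETRY_DELAY_MS = 5L * 60L * 1000L;

    /** Returns true if the message looks like a transient network failure. */
    public static boolean isTransient(String message) {
        if (message == null)
            return false;
        String m = message.toLowerCase();
        return m.contains("connection reset by peer")
            || m.contains("connection timed out")
            || m.contains("socket write error");
    }
}
```

As Karl notes above, this only helps when the failure really is transient: a connector that keeps seeing the same error on every attempt will exhaust its retries and the job will still abort.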

> jcifs.smb.SmbException: Connection reset by peer: socket write error
> --------------------------------------------------------------------
>                 Key: CONNECTORS-1312
>                 URL:
>             Project: ManifoldCF
>          Issue Type: Bug
>          Components: JCIFS connector
>    Affects Versions: ManifoldCF 2.5
>         Environment: Windows x64, java 1.8.x
>            Reporter: Konstantin Avdeev
>            Assignee: Karl Wright
>             Fix For: ManifoldCF 2.5
> hi Karl,
> we've found another JCIFS exception: Windows share jobs stop when encountering a
> "Connection reset by peer" error, e.g.:
> {code}
> ERROR 2016-05-03 15:29:24,209 (Worker thread '80') - JCIFS: SmbException tossed processing
> jcifs.smb.SmbException: Connection reset by peer: socket write error
> Connection reset by peer: socket write error
> 	at Method)
> 	at
> 	at
> 	at jcifs.smb.SmbTransport.doSend(
> 	at jcifs.util.transport.Transport.sendrecv(
> 	at jcifs.smb.SmbTransport.send(
> 	at jcifs.smb.SmbSession.send(
> 	at jcifs.smb.SmbTree.send(
> 	at jcifs.smb.SmbFile.send(
> 	at jcifs.smb.SmbFileInputStream.readDirect(
> 	at
> 	at
> 	at
> 	at
> 	at java.nio.file.Files.copy(
> 	at java.nio.file.Files.copy(
> 	at
> 	at
> 	at
> 	at
> 	at org.apache.tika.detect.CompositeDetector.detect(
> 	at org.apache.tika.parser.AutoDetectParser.parse(
> 	at org.apache.manifoldcf.agents.transformation.tika.TikaParser.parse(
> 	at org.apache.manifoldcf.agents.transformation.tika.TikaExtractor.addOrReplaceDocumentWithException(
> 	at org.apache.manifoldcf.agents.incrementalingest.IncrementalIngester$PipelineAddEntryPoint.addOrReplaceDocumentWithException(
> 	at org.apache.manifoldcf.agents.incrementalingest.IncrementalIngester$PipelineAddFanout.sendDocument(
> 	at org.apache.manifoldcf.agents.incrementalingest.IncrementalIngester$PipelineObjectWithVersions.addOrReplaceDocumentWithException(
> 	at org.apache.manifoldcf.agents.incrementalingest.IncrementalIngester.documentIngest(
> 	at org.apache.manifoldcf.crawler.system.WorkerThread$ProcessActivity.ingestDocumentWithException(
> 	at org.apache.manifoldcf.crawler.system.WorkerThread$ProcessActivity.ingestDocumentWithException(
> 	at org.apache.manifoldcf.crawler.connectors.sharedrive.SharedDriveConnector.processDocuments(
> 	at
> {code}
> Current workaround: restart the job (manually or via the scheduler).
> Clearly, there are many errors for which it makes no sense to skip a failed URL
> and continue the job, e.g.:
> {code}
> Error: SmbAuthException thrown: Logon failure: unknown user name or bad password.
> {code}
> I'm thinking about a general solution, like defining a list (through the UI or
> properties.xml) of non-severe exceptions, like "file busy" or "symlink detected",
> so admins would be able to specify when the crawler should stop and when it
> should retry, skip, and go further.
> Thank you!
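The reporter's proposal could be sketched roughly as follows: an admin-supplied list of "non-severe" message patterns, consulted when a document fails, to decide between skipping that document and aborting the whole job. This is a hypothetical illustration, assuming the patterns come from a configuration source such as properties.xml; none of these classes exist in ManifoldCF.

```java
import java.util.List;

// Hedged sketch of the configurable-exception-list idea from this ticket.
// The class, enum, and the idea of matching on message substrings are all
// assumptions for illustration, not an existing ManifoldCF feature.
public class ExceptionPolicy {

    public enum Action { SKIP_AND_CONTINUE, ABORT_JOB }

    // Patterns an admin marked as non-severe (e.g. loaded from properties.xml).
    private final List<String> nonSeverePatterns;

    public ExceptionPolicy(List<String> nonSeverePatterns) {
        this.nonSeverePatterns = nonSeverePatterns;
    }

    /** Decide what to do with a failed document based on its error message. */
    public Action classify(String message) {
        if (message != null) {
            String m = message.toLowerCase();
            for (String pattern : nonSeverePatterns) {
                if (m.contains(pattern.toLowerCase()))
                    return Action.SKIP_AND_CONTINUE;
            }
        }
        // Anything unrecognized (e.g. bad credentials) stops the job.
        return Action.ABORT_JOB;
    }
}
```

Under this scheme, "file busy" or "connection reset by peer" could be listed as skippable, while an authentication failure like the `SmbAuthException` above would still abort the job.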

This message was sent by Atlassian JIRA
