lucene-dev mailing list archives

From "Mark Miller (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (SOLR-12290) Improve our servlet stream closing prevention code for users and devs.
Date Mon, 30 Apr 2018 04:40:00 GMT

     [ https://issues.apache.org/jira/browse/SOLR-12290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mark Miller updated SOLR-12290:
-------------------------------
    Description: 
Original Summary:

When a file is fetched for replication, we close the response output stream after writing the file, which ruins the connection for reuse.

We can't close response output streams; we need to reuse these connections. If we do close them, clients are hit with connection problems when they try to reuse the connection from their pool.
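
For illustration, here is a minimal sketch of the problematic pattern described above. This is not the actual ReplicationHandler code; the class and method names are hypothetical:

    // Illustrative only: closing the container's response stream after writing the
    // file takes the underlying connection out of play for keep-alive reuse.
    import java.io.IOException;
    import java.io.OutputStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import javax.servlet.http.HttpServletResponse;

    public class ReplicationFileSender {

      public void sendFile(Path indexFile, HttpServletResponse response) throws IOException {
        OutputStream out = response.getOutputStream();
        Files.copy(indexFile, out);
        out.flush();
        out.close(); // BUG: the container, not application code, should manage this stream
      }
    }

The fix is simply to flush and let the container close the stream when the request completes.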

New Summary:

At some point the above was addressed during refactoring. We should remove these neutered
closes and review our close shield code.


If you are here to track down why this is done:

Connection reuse requires that we read all streams fully and do not close them; instead, the container itself must manage request and response streams. If we allow them to be closed, we not only lose some connection reuse, but we can also cause spurious client errors that trigger expensive recoveries for no reason. The servlet spec allows us to count on the container to manage streams. Our job is simply to not close them and to always read them fully, on both the client and server side.
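
As a rough sketch of the close shield idea (not Solr's actual close-shield classes; the wrapper name here is hypothetical), a wrapper like the following swallows close() so application code cannot shut the container's stream:

    // Hypothetical close shield: hand this to application code instead of the real
    // container stream; a stray close() only flushes, and the container keeps
    // ownership of the connection.
    import java.io.FilterOutputStream;
    import java.io.IOException;
    import java.io.OutputStream;

    public class ShieldedOutputStream extends FilterOutputStream {

      public ShieldedOutputStream(OutputStream containerStream) {
        super(containerStream);
      }

      @Override
      public void write(byte[] b, int off, int len) throws IOException {
        out.write(b, off, len); // delegate bulk writes directly to the container stream
      }

      @Override
      public void close() throws IOException {
        flush(); // swallow the close; the container closes the real stream itself
      }
    }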


Java itself can help by reading streams fully up to some small default amount of unread stream slack, but that is very dangerous to count on, so we always manually consume anything left on a stream that our normal logic ends up not reading, for whatever reason.
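
A minimal sketch of that manual "eat up" step, assuming a hypothetical helper rather than Solr's real API:

    // Hypothetical helper: read and discard whatever is left on a request or
    // response body stream so the container can cleanly reuse the connection.
    import java.io.IOException;
    import java.io.InputStream;

    public final class StreamUtil {

      private StreamUtil() {}

      public static void consumeFully(InputStream stream) {
        byte[] scratch = new byte[8192];
        try {
          while (stream.read(scratch) != -1) {
            // discard; all that matters is that the stream is read to its end
          }
        } catch (IOException e) {
          // best effort: if the connection is already broken there is nothing left to drain
        }
      }
    }

Calling something like this before a request completes keeps unread body bytes from poisoning the next request on the same connection.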


 

 


> Improve our servlet stream closing prevention code for users and devs.
> ----------------------------------------------------------------------
>
>                 Key: SOLR-12290
>                 URL: https://issues.apache.org/jira/browse/SOLR-12290
>             Project: Solr
>          Issue Type: Task
>      Security Level: Public (Default Security Level. Issues are Public)
>            Reporter: Mark Miller
>            Assignee: Mark Miller
>            Priority: Minor
>         Attachments: SOLR-12290.patch, SOLR-12290.patch, SOLR-12290.patch, SOLR-12290.patch
>
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
For additional commands, e-mail: dev-help@lucene.apache.org

