db-derby-dev mailing list archives

From Samuel Andrew McIntyre <fuzzylo...@nonintuitive.com>
Subject Re: Help detecting client disconnects for network server
Date Sun, 10 Oct 2004 10:21:55 GMT

On Oct 9, 2004, at 11:57 PM, Jonas S Karlsson wrote:

> Another solution would be to require the client to "ping" the server
> at regular intervals, kind of like a remote "watchdog" process: the
> server clears a flag for that client on any communication and keeps the
> time when it last received communication.  A server watchdog thread
> can then, at regular intervals (x times longer than the interval at
> the client), check that the flag is cleared/the time is acceptable,
> and if not, "kill" the client connection.
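The scheme quoted above can be sketched in a few lines. This is a hypothetical illustration, not Derby code: the class name, the `Runnable` used as the "kill" action, and the single-client scope are all my own simplifications.

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch of the per-client watchdog idea: the server stamps the time of the
// last message received from a client, and a periodic check "kills" the
// connection when that stamp grows older than the allowed interval.
public class ClientWatchdog {
    private final AtomicLong lastHeard = new AtomicLong(System.currentTimeMillis());
    private final long timeoutMillis;          // x times the client's ping interval
    private final Runnable killConnection;     // hypothetical "kill" action

    public ClientWatchdog(long timeoutMillis, Runnable killConnection) {
        this.timeoutMillis = timeoutMillis;
        this.killConnection = killConnection;
    }

    // Call on every message (or explicit "ping") received from the client.
    public void heartbeat() {
        lastHeard.set(System.currentTimeMillis());
    }

    // Run this from a server watchdog thread at regular intervals.
    // Returns true if the client was judged dead and the kill action ran.
    public boolean check() {
        if (System.currentTimeMillis() - lastHeard.get() > timeoutMillis) {
            killConnection.run();
            return true;
        }
        return false;
    }
}
```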

I just want to point out that this is almost precisely what TCP 
keep-alive tries to do at a low level. In practice, it fails to 
determine anything about the state of the network between the client 
and server. This means that it is left up to the reader (i.e. the 
listener expecting a response to a ping) to decide what the absence of 
a response to such a 'ping' actually means.
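For reference, the low-level mechanism being discussed is just a socket option in the JDK; the probe intervals themselves are configured at the OS level, not per connection. A minimal sketch (the class and method names are mine):

```java
import java.net.Socket;
import java.net.SocketException;

// Enabling TCP keep-alive on a JDK socket. The OS then sends periodic
// probes on an idle connection; probe timing is kernel-configured and
// typically measured in hours by default, which is part of why it is
// rarely useful for detecting client death promptly.
public class KeepAliveExample {
    public static void enableKeepAlive(Socket s) throws SocketException {
        s.setKeepAlive(true);
    }
}
```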

Is it really necessary to reinvent keep-alive at a higher level? Why 
not just decide that 'X' (m)secs without a response indicates a 
failure significant enough to cancel a pending transaction? And if the 
value of 'X' needs to vary, let the application developer writing the 
client/server application decide what the appropriate value of 'X' 
should be.
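The "X msecs without response" policy also maps directly onto an existing socket option: a read timeout. A sketch of letting the application choose 'X' (again, names are hypothetical):

```java
import java.net.Socket;
import java.net.SocketException;

// Let the application developer pick 'X': any read that blocks longer
// than timeoutMillis throws java.net.SocketTimeoutException, which the
// caller can treat as grounds to cancel the pending transaction.
public class ReadTimeoutExample {
    public static void setResponseTimeout(Socket s, int timeoutMillis)
            throws SocketException {
        s.setSoTimeout(timeoutMillis);
    }
}
```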
