uima-user mailing list archives

From Charles Proefrock <chas....@hotmail.com>
Subject RE: Server Socket Timeout Woes
Date Tue, 29 Apr 2008 17:22:13 GMT
I'm excited to see this thread, both for its affirmation that someone has pushed Vinci scalability
as far as Steve has at LLNL, and for confirming that the currently released version has some
limitations.  At the risk of diverting this thread, let me share what we've found.
 
I'm on board with Adam's line of thinking.  We've just spent two weeks experimenting with the
various options for exclusive/random allocation of Vinci services, and found that 'exclusive'
is the most reliable way to balance load ('random' sometimes hands all of the clients the same
service while other services go unused).  The phrase "when a service is needed" isn't clear
in the documentation.  As Adam indicated, our finding is that "need" occurs only at client
thread initialization time, not on each process(CAS) call.  "Exclusive" isn't exactly accurate
either: two client threads can be handed the same service if the number of services available
is less than the number of threads initializing.  That behavior is robust (better to share a
remote than to have nothing allocated), but from our relatively small setup (two threads, two
remotes) it isn't clear what 'exclusive' actually guarantees, or how large a system can get
before 'exclusive' pans out as the right (or wrong) approach.
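To make the 'random' failure mode concrete: if N client threads each pick independently from
the same service list, collisions are expected, and with two clients and two services both
land on the same service about half the time.  A toy model in plain Java (not Vinci code; the
service names are made up):

    import java.util.Random;

    public class RandomAssignmentDemo {
        public static void main(String[] args) {
            String[] services = { "svc-A", "svc-B" };
            Random rng = new Random();
            // Each client picks independently; roughly half of all runs
            // assign both clients to the same service, leaving the other idle.
            for (int client = 1; client <= 2; client++) {
                System.out.println("client " + client + " -> "
                        + services[rng.nextInt(services.length)]);
            }
        }
    }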
 
In the face of services starting and stopping on remote computers (e.g., during a multi-platform
reboot), there seems to be no way to robustly take advantage of additional services coming
online.  If "when needed" meant each process(CAS) call (at least as an option, trading the
re-connect negotiation overhead for dynamic scalability), then a system that initializes with
5 remotes could balance out as 10, 20, or 30 remotes come online.  For now, we use the CPE
'numToProcess' parameter to make the CPE exit, then construct a new CPE and re-enter its
process() routine, so that we periodically seek out new services.
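Roughly, that restart loop looks like the sketch below (simplified; the descriptor path and
the latch wiring are placeholders, not our production code).  Each pass re-parses the CPE
descriptor and builds a fresh CPE, which re-negotiates with VNS and so picks up any services
that came online since the last cycle:

    import java.util.concurrent.CountDownLatch;

    import org.apache.uima.UIMAFramework;
    import org.apache.uima.cas.CAS;
    import org.apache.uima.collection.CollectionProcessingEngine;
    import org.apache.uima.collection.EntityProcessStatus;
    import org.apache.uima.collection.StatusCallbackListener;
    import org.apache.uima.collection.metadata.CpeDescription;
    import org.apache.uima.util.XMLInputSource;

    public class PeriodicCpeRunner {
        public static void main(String[] args) throws Exception {
            while (true) {
                // 'numToProcess' in the descriptor bounds each batch, so the
                // CPE finishes and we get a chance to rebuild it.
                CpeDescription desc = UIMAFramework.getXMLParser()
                        .parseCpeDescription(new XMLInputSource("cpe.xml"));
                CollectionProcessingEngine cpe =
                        UIMAFramework.produceCollectionProcessingEngine(desc);

                final CountDownLatch done = new CountDownLatch(1);
                cpe.addStatusCallbackListener(new StatusCallbackListener() {
                    public void initializationComplete() { }
                    public void batchProcessComplete() { }
                    public void collectionProcessComplete() { done.countDown(); }
                    public void paused() { }
                    public void resumed() { }
                    public void aborted() { done.countDown(); }
                    public void entityProcessComplete(CAS cas,
                            EntityProcessStatus status) { }
                });

                cpe.process();  // asynchronous; returns immediately
                done.await();   // block until this batch completes
            }
        }
    }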
Also, we are seeing a startup sequence in which the first document sent to each remote
sometimes comes back immediately with a connection/timeout exception, so we catch those
items and re-submit them at the end of the queue in case they really did fail due to a
genuine timeout.
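That catch-and-resubmit happens in the CPE status callback.  A sketch of the relevant method,
which would slot into the listener in the sketch above (documentIdOf() is a hypothetical helper
for however your collection reader tags each CAS):

    // Field on the listener: ids to resubmit on the next CPE cycle.
    private final java.util.Queue<String> retryQueue =
            new java.util.concurrent.ConcurrentLinkedQueue<String>();

    public void entityProcessComplete(CAS cas, EntityProcessStatus status) {
        if (status.isException()) {
            // The first document to each remote sometimes bounces with a
            // connection/timeout exception; queue it for the end of the run.
            retryQueue.add(documentIdOf(cas));
        }
    }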
 
Any feedback/collaboration would be appreciated.
 
- Charles
 



> Date: Wed, 23 Apr 2008 17:44:50 -0400
> From: alally@alum.rpi.edu
> To: uima-user@incubator.apache.org
> Subject: Re: Server Socket Timeout Woes
>
> On Wed, Apr 23, 2008 at 4:39 PM, Steve Suppe <ssuppe@llnl.gov> wrote:
> > Hello again,
> >
> > I think you are 100% right here. I managed to roll back to my patched
> > version of UIMA 2.1.0. In this one, I implemented the pool of threads as
> > automatically expandable. This seemed to solve all of our problems, and
> > things are chugging away very happily now.
> >
> > I know this is the user group, but is this something I should look to
> > contributing somehow?
>
> Definitely - you could open a JIRA issue and attach a patch. We
> should probably think a bit about how this thread pool was supposed to
> work, though. My first thought is that the clients would round-robin
> over the available threads, and each thread would be used for only one
> request and would then be relinquished back into the pool. But
> instead, it looks like the client holds onto a thread for the entire
> time that client is connected, which doesn't make a whole lot of
> sense. If the thread pool worked in a more sensible way, it might not
> need to be expandable.
>
> -Adam
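
A sketch of the per-request handoff Adam describes above, using a plain java.util.concurrent
pool rather than Vinci's actual internals (illustrative only; none of this is UIMA code):

    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class PerRequestPool {
        // Threads are borrowed for a single request and released when the
        // task completes, rather than pinned to one client connection.
        private static final ExecutorService POOL = Executors.newCachedThreadPool();

        static Future<String> handle(final String request) {
            return POOL.submit(new Callable<String>() {
                public String call() {
                    return "processed: " + request;  // stand-in for process(CAS)
                }
            });
        }

        public static void main(String[] args) throws Exception {
            // Many logical clients can share a small number of worker threads.
            for (int i = 0; i < 10; i++) {
                System.out.println(handle("doc-" + i).get());
            }
            POOL.shutdown();
        }
    }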