axis-java-user mailing list archives

From Borut Bolcina <>
Subject Intermediary node
Date Thu, 13 Nov 2003 10:29:30 GMT
Hi again,

since I had no luck and nobody answered the question I posted a month ago 
(subject: WebServices chaining), I'll try again.

Can you please help me understand how one can implement this 
'simple' scenario?


Many clients will connect with different "payload weights", which 
roughly determine the computational time for each request. The 
ultimate goal is for the intermediary node to act as a queue and a 
load balancer for a number of endpoints.

My first goal is simply for the intermediary to receive the 
request, read the headers, perform some database operations, make some 
choices and, if everything is OK, forward the request with new headers 
to the "real" web service (ENDPOINT) to do the job. If that succeeds, the 
intermediary again does some processing and returns the result back to 
the originator of the request - the CLIENT.
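To make the flow concrete, here is a minimal sketch of what I imagine the intermediary doing. All the names, the header check, and the stubbed ENDPOINT are invented for illustration only; a real implementation would of course forward the actual SOAP message through Axis rather than call a Java interface:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the intermediary flow. The Endpoint interface
// and the authorize() check stand in for the real SOAP call and the
// database lookup; they are assumptions, not a real Axis handler.
public class IntermediarySketch {

    // Stands in for the "real" web service (ENDPOINT).
    interface Endpoint {
        String process(Map<String, String> headers, String payload);
    }

    // Stands in for the database operations and choices the intermediary makes.
    static boolean authorize(Map<String, String> headers) {
        return headers.containsKey("client-id");
    }

    static String handle(Map<String, String> headers, String payload, Endpoint endpoint) {
        if (!authorize(headers)) {
            return "FAULT: unknown client";            // reject before forwarding
        }
        Map<String, String> newHeaders = new HashMap<>(headers);
        newHeaders.put("forwarded-by", "intermediary"); // forward with new headers
        String result = endpoint.process(newHeaders, payload);
        return "logged:" + result;                      // post-process, return to CLIENT
    }

    public static void main(String[] args) {
        Endpoint stub = (h, p) -> "done(" + p + ")";    // fake ENDPOINT for the sketch
        Map<String, String> headers = new HashMap<>();
        headers.put("client-id", "42");
        System.out.println(handle(headers, "job-1", stub)); // prints "logged:done(job-1)"
    }
}
```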

Now, what happens if another request comes in in the middle of this 
process? The intermediary should be clever enough to know that the 
ENDPOINT is busy and act accordingly. I have read about synchronous and 
asynchronous requests and other theoretical material I could find on 
the internet, but I am still confused. If asynchronous services were 
the way to go, then I guess the clients would have to be smarter and 
the choice of technology narrower. If a request-response mechanism could 
do the job, it would be easier, wouldn't it?

I am aware (it is even a requirement) that I will have to implement 
several ENDPOINTs working as a farm of identical services to increase 
efficiency, and the reliability of the service will be one of the major 
questions, so I don't want to start implementing a bad architecture 
which will limit my options later on.
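The queueing over a farm, as I picture it, could look something like the sketch below, written with java.util.concurrent just to illustrate the idea (the names and the fake endpoint call are my invention): a fixed pool with one worker per ENDPOINT, so requests beyond the farm's capacity simply wait in the pool's queue until an endpoint is free.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch only: the intermediary keeps a fixed pool sized to the number of
// ENDPOINTs in the farm. Extra requests queue automatically inside the
// ExecutorService until a worker (i.e. an endpoint) becomes available.
public class FarmQueueSketch {

    // Stands in for the slow "real" web service call (15-45 s in practice).
    static String callEndpoint(String request) {
        return "handled " + request;
    }

    static List<String> processAll(int farmSize, List<String> requests) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(farmSize);
        List<Future<String>> futures = new ArrayList<>();
        for (String req : requests) {
            futures.add(pool.submit(() -> callEndpoint(req))); // queued if farm is busy
        }
        List<String> results = new ArrayList<>();
        for (Future<String> f : futures) {
            results.add(f.get()); // blocks until that request is processed
        }
        pool.shutdown();
        return results;
    }

    public static void main(String[] args) throws Exception {
        // Five clients arrive, but only two ENDPOINTs exist; all still complete.
        List<String> requests = new ArrayList<>();
        for (int i = 0; i < 5; i++) {
            requests.add("request-" + i);
        }
        for (String result : processAll(2, requests)) {
            System.out.println(result);
        }
    }
}
```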

I have written a client, which I installed on several machines to stress 
test the ENDPOINT web service. I bombarded the poor workhorse from 
those machines with requests arriving just under a second apart. Each 
response took roughly 15 to 45 seconds, but they all got processed 
successfully. Now, I would be a happy person if I could squeeze this 
intermediary in between to do some logging. I guess the request-response 
nature of HTTP handled the queuing for me. How do I achieve this with 
an intermediary node?

Does my question narrow down to handling sessions? If a client sends a 
request that takes a long time to process, how do I handle another 
client's request that arrives seconds after the first one?

Am I missing something very crucial here?

Just to note, not to disturb you: I am developing in Java with 
WebObjects as the application server. WO uses the Axis engine.

