geronimo-dev mailing list archives

From Aaron Mulder <>
Subject Re: Remoting Update
Date Mon, 24 Nov 2003 14:38:16 GMT
On Mon, 24 Nov 2003, Hiram Chirino wrote:
> Thanks for updating the test case..  It made it easy to track down the 
> problem.  It turns out that the reason that the NotificationListener is 
> not properly removed is because the filters are being passed by value in 
> the addNotificationListener and  removeNotificationListener method 
> calls.  This means the server side gets two different copies of the 
> filter which are != to each other.  The fix is to implement the equals() 
> and hashCode() so that the two copies can be compared properly.
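The fix described above can be sketched roughly like this. The class and field names below are illustrative stand-ins, not the actual Geronimo filter class; the point is value-based equals()/hashCode(), so the copy deserialized on the server compares equal to the copy later passed to removeNotificationListener():

```java
import java.io.Serializable;
import javax.management.Notification;
import javax.management.NotificationFilter;

// Hypothetical filter: matches notifications from one module. Because the
// filter is passed by value over the wire, two deserialized copies must
// compare equal for the remove call to find the original registration.
public class DeploymentProgressFilter implements NotificationFilter, Serializable {
    private final String targetModuleId;

    public DeploymentProgressFilter(String targetModuleId) {
        this.targetModuleId = targetModuleId;
    }

    public boolean isNotificationEnabled(Notification n) {
        return targetModuleId.equals(n.getSource());
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof DeploymentProgressFilter)) return false;
        return targetModuleId.equals(((DeploymentProgressFilter) o).targetModuleId);
    }

    @Override
    public int hashCode() {
        return targetModuleId.hashCode();
    }
}
```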

	Okay, I added equals() and hashCode() to the filter, and now 
everything seems to be groovy!  (Oops, perhaps the wrong word to use 
around here.)  I did discover that if I add a NotificationListener with a 
NotificationFilter, I also have to pass the same filter to the remove 
call.  It's curious that you can't just remove the listener and have it 
figure out that the listener should go, regardless of whether it was 
registered with a filter or not.  In any case, I've checked in an 
updated ProgressObject that should work fine now.  Thanks again for the 
help.
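The add/remove symmetry above looks roughly like the following sketch, which uses a plain NotificationBroadcasterSupport as a stand-in for the server side (the anonymous listener and filter are illustrative):

```java
import java.util.concurrent.atomic.AtomicInteger;
import javax.management.Notification;
import javax.management.NotificationBroadcasterSupport;
import javax.management.NotificationFilter;
import javax.management.NotificationListener;

public class FilterSymmetryDemo {
    public static int run() throws Exception {
        NotificationBroadcasterSupport broadcaster = new NotificationBroadcasterSupport();
        final AtomicInteger received = new AtomicInteger();

        NotificationListener listener = new NotificationListener() {
            public void handleNotification(Notification n, Object handback) {
                received.incrementAndGet();
            }
        };
        NotificationFilter filter = new NotificationFilter() {
            public boolean isNotificationEnabled(Notification n) {
                return "progress".equals(n.getType());
            }
        };

        broadcaster.addNotificationListener(listener, filter, null);
        broadcaster.sendNotification(new Notification("progress", "src", 1L));

        // The same filter (or, once equals() is implemented, an equal copy)
        // must be supplied on removal; the three-argument form makes the
        // symmetry explicit.
        broadcaster.removeNotificationListener(listener, filter, null);
        broadcaster.sendNotification(new Notification("progress", "src", 2L));

        return received.get(); // 1: the second notification is not delivered
    }

    public static void main(String[] args) throws Exception {
        System.out.println(run());
    }
}
```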

> I agree..  we need a more robust solution.  A small hiccup of the client 
> should not take down the server.

	If I understand correctly, you're creating a proxy for the remote 
objects on the server side, and registering the proxies with the actual 
MBeanServer.  Is that correct?  If so, can the proxies unregister 
themselves whenever they get a remoting error?
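One way a proxy could unregister itself is sketched below. RemoteEndpoint and the broadcaster wiring are hypothetical stand-ins, not Geronimo's or MX4J's actual remoting classes; the idea is just that the first transport failure removes the registration instead of failing forever:

```java
import javax.management.ListenerNotFoundException;
import javax.management.Notification;
import javax.management.NotificationBroadcasterSupport;
import javax.management.NotificationListener;

// Server-side proxy that forwards notifications to a remote client and
// unregisters itself the first time the remoting call fails.
public class SelfRemovingProxy implements NotificationListener {
    public interface RemoteEndpoint {
        void send(Notification n) throws Exception; // the remoting call
    }

    private final NotificationBroadcasterSupport broadcaster;
    private final RemoteEndpoint client;

    public SelfRemovingProxy(NotificationBroadcasterSupport broadcaster,
                             RemoteEndpoint client) {
        this.broadcaster = broadcaster;
        this.client = client;
    }

    public void handleNotification(Notification n, Object handback) {
        try {
            client.send(n);
        } catch (Exception remotingError) {
            // Client is gone: stop delivering to it.
            try {
                broadcaster.removeNotificationListener(this);
            } catch (ListenerNotFoundException ignored) {
                // Already removed elsewhere; nothing to do.
            }
        }
    }
}
```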

	Though that still doesn't solve the underlying threading issue.  
IMHO, we have to get MX4J changed so it uses at most one thread per client
(or equivalently, per listener), not one thread per notification per
client.  In the time it takes a call to time out, you could easily have had
100 notifications for the missing client...  And then you're treated to a 
stack trace for each one, while the next 100 calls have already been 
started.
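The one-thread-per-client idea could look something like this sketch, assuming a hypothetical RemoteEndpoint for the remoting call (none of these names are MX4J's actual classes). A single worker drains a bounded queue, so a stalled client blocks only its own thread, and excess notifications are dropped rather than spawning more threads:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import javax.management.Notification;

// At most one thread per client: notifications are queued and drained by a
// single worker, so a call waiting to time out blocks only this client.
public class PerClientDispatcher {
    public interface RemoteEndpoint {
        void send(Notification n) throws Exception; // the remoting call
    }

    private final BlockingQueue<Notification> queue = new ArrayBlockingQueue<Notification>(100);
    private final Thread worker;

    public PerClientDispatcher(final RemoteEndpoint client) {
        worker = new Thread(new Runnable() {
            public void run() {
                try {
                    while (true) {
                        client.send(queue.take()); // may block until timeout
                    }
                } catch (Exception clientGone) {
                    // One failure (or an interrupt) ends delivery for this client.
                }
            }
        });
        worker.setDaemon(true);
        worker.start();
    }

    /** Called on the notification thread; never blocks. Returns false if the
        client has fallen behind and the notification was dropped. */
    public boolean dispatch(Notification n) {
        return queue.offer(n);
    }

    public void shutdown() {
        worker.interrupt();
    }
}
```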

