river-dev mailing list archives

From Greg Trasuk <tras...@stratuscom.com>
Subject Re: Clustered Jini Server? Was: Re: Mirroring to GitHub
Date Wed, 03 Jun 2015 14:04:18 GMT

There’s never a bad time to point out the excellent Jiniology series that’s still up on
Artima.com:

http://www.artima.com/jini/jiniology/index.html

In particular, the explanation of JavaSpaces: 
http://www.artima.com/jini/jiniology/js1.html

Cheers,

Greg Trasuk

> On Jun 3, 2015, at 7:42 AM, Palash Ray <paawak@gmail.com> wrote:
> 
> Interesting thought about using Java Spaces. However, for us, there is the
> extra maintenance of the Java Spaces server in production, which is my
> worry. Moreover, in our application, it is always a synchronous call from
> the Swing client to the Jini server. It would be a lot of effort to make
> this an asynchronous call to Java Spaces. So for our kind of application,
> this would not be suitable.
> 
> Thanks,
> Palash.
> 
> On Wed, Jun 3, 2015 at 1:14 AM, Simon Roberts <
> simon@dancingcloudservices.com> wrote:
> 
>> Hard to be sure if this is a sensible comment without knowing more about
>> what you're trying to do, but the typical "load balance" in a Jini
>> environment has traditionally been a Java Spaces server, into which "jobs"
>> (probably simply Runnable implementations) are placed. The clustered work
>> engines are configured to take (in a transaction) a job from the space,
>> process it, and put it back with an attribute indicating completion. On
>> putting the job back, the original take transaction is committed.
>> Therefore, if the server crashes before the job is completed, the "take"
>> evaporates, and some other work engine gets to re-take, hopefully
>> completing successfully. This model allows any number of work engines to be
>> load balanced with essentially zero communication between them, and no
>> actual load balancer exists (in the sense that no active component has to
>> keep track of the work engines). The workers take as fast as they're able
>> to do work, but no faster, so they don't get overloaded. You can bring
>> workers up, and shut them down, with zero reconfiguration.
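>> 
>> In code, that worker loop comes down to something like the sketch below. This
>> is only an illustration: the Job entry class, its "done" flag, and the lease
>> and timeout values are placeholders, not from any real project.
>> 
>> import net.jini.core.entry.Entry;
>> import net.jini.core.lease.Lease;
>> import net.jini.core.transaction.Transaction;
>> import net.jini.core.transaction.TransactionFactory;
>> import net.jini.core.transaction.server.TransactionManager;
>> import net.jini.space.JavaSpace;
>> 
>> public class Worker {
>>     // Placeholder entry type; a real one would carry the job's payload and result.
>>     public static class Job implements Entry {
>>         public String id;       // entries need public object-typed fields
>>         public Boolean done;    // FALSE = pending, TRUE = completed
>>         public Job() {}         // ...and a public no-arg constructor
>>     }
>> 
>>     private final JavaSpace space;
>>     private final TransactionManager txnMgr;
>> 
>>     public Worker(JavaSpace space, TransactionManager txnMgr) {
>>         this.space = space;
>>         this.txnMgr = txnMgr;
>>     }
>> 
>>     public void run() throws Exception {
>>         Job pending = new Job();        // template: matches any not-yet-done job
>>         pending.done = Boolean.FALSE;
>>         while (true) {
>>             Transaction txn = TransactionFactory.create(txnMgr, 60000L).transaction;
>>             try {
>>                 // Wait up to 30s for a pending job, well inside the 60s txn lease.
>>                 Job job = (Job) space.take(pending, txn, 30000L);
>>                 if (job == null) {      // nothing pending right now; try again
>>                     txn.abort();
>>                     continue;
>>                 }
>>                 process(job);                           // the actual work
>>                 job.done = Boolean.TRUE;
>>                 space.write(job, txn, Lease.FOREVER);   // completed job re-enters the space
>>                 txn.commit();                           // take + write become permanent
>>             } catch (Exception e) {
>>                 try { txn.abort(); } catch (Exception ignored) {}
>>                 // the taken job "evaporates" back and another worker can re-take it
>>             }
>>         }
>>     }
>> 
>>     private void process(Job job) { /* application-specific work goes here */ }
>> }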
>> 
>> Cheers,
>> Simon
>> 
>> 
>> On Tue, Jun 2, 2015 at 8:13 PM, Palash Ray <paawak@gmail.com> wrote:
>> 
>>> Thanks Dennis, I will definitely explore that option.
>>> 
>>> On Tue, Jun 2, 2015 at 9:37 PM, Dennis Reedy <dennis.reedy@gmail.com>
>>> wrote:
>>> 
>>>> Hi Palash,
>>>> 
>>>> Using reggie as a load balancer does not make the most sense; what you may
>>>> want to consider is to maintain a collection of discovered services and
>>>> simply round robin across them. You might want to start looking at the
>>>> ServiceDiscoveryManager and the LookupCache for this.
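>>>> 
>>>> Roughly, that could look like the sketch below. YourService stands in for
>>>> your actual remote interface, and in a real client you would also install a
>>>> security manager and give discovery a moment to populate the cache.
>>>> 
>>>> import java.util.concurrent.atomic.AtomicInteger;
>>>> import net.jini.core.lookup.ServiceItem;
>>>> import net.jini.core.lookup.ServiceTemplate;
>>>> import net.jini.discovery.DiscoveryGroupManagement;
>>>> import net.jini.discovery.LookupDiscoveryManager;
>>>> import net.jini.lease.LeaseRenewalManager;
>>>> import net.jini.lookup.LookupCache;
>>>> import net.jini.lookup.ServiceDiscoveryManager;
>>>> 
>>>> public class RoundRobinClient {
>>>>     // Stand-in for your actual remote service interface.
>>>>     public interface YourService extends java.rmi.Remote {}
>>>> 
>>>>     private final LookupCache cache;
>>>>     private final AtomicInteger next = new AtomicInteger();
>>>> 
>>>>     public RoundRobinClient() throws Exception {
>>>>         // Discover all reggies in all groups (or pass LookupLocators for unicast).
>>>>         LookupDiscoveryManager ldm = new LookupDiscoveryManager(
>>>>                 DiscoveryGroupManagement.ALL_GROUPS, null, null);
>>>>         ServiceDiscoveryManager sdm =
>>>>                 new ServiceDiscoveryManager(ldm, new LeaseRenewalManager());
>>>>         // Cache every registered YourService proxy; the cache tracks services
>>>>         // as they come and go, so there is nothing to reconfigure.
>>>>         ServiceTemplate tmpl =
>>>>                 new ServiceTemplate(null, new Class[] { YourService.class }, null);
>>>>         cache = sdm.createLookupCache(tmpl, null, null);
>>>>     }
>>>> 
>>>>     // Round-robin across whatever the cache currently holds.
>>>>     public YourService nextService() {
>>>>         ServiceItem[] items = cache.lookup(null, Integer.MAX_VALUE);
>>>>         if (items == null || items.length == 0) {
>>>>             throw new IllegalStateException("no services discovered yet");
>>>>         }
>>>>         int i = Math.floorMod(next.getAndIncrement(), items.length);
>>>>         return (YourService) items[i].service;
>>>>     }
>>>> }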
>>>> 
>>>> HTH
>>>> 
>>>> Dennis
>>>> 
>>>> 
>>>> On Tue, Jun 2, 2015 at 9:16 PM, Palash Ray <paawak@gmail.com> wrote:
>>>> 
>>>>> Excellent. Maybe we can help each other here.
>>>>> 
>>>>> Let me start by giving some more context around the problem.
>>>>> 
>>>>> *Problem*
>>>>> Our middle tier is a Jini-based RMI server. We have a Swing client
>>>>> that connects to it. In the middle tier, we have a lot of processing
>>>>> logic: fetch something from the database, do some calculation-intensive
>>>>> processing, write the results back to the database.
>>>>> 
>>>>> Of late there has been a huge increase in load: the number of Swing
>>>>> clients has increased, as has the bulk of the data to be processed. It
>>>>> has come to a point where our production server, which is a single
>>>>> machine, is creaking under the load.
>>>>> 
>>>>> So, we have decided to cluster it. We are planning to have at least 3 or 4
>>>>> Jini servers and a load balancer to spread out the load evenly.
>>>>> 
>>>>> I was doing a proof of concept using the Jini infrastructure itself. These
>>>>> were my thoughts:
>>>>> 
>>>>> *Option 1*
>>>>> 
>>>>> https://github.com/paawak/blog/tree/master/code/jini/unsecure/load-balancing
>>>>> 
>>>>> The load balancing architecture here is very very simple. There is a
>>>>> single load balancer with its own reggie running at 6670. This is the
>>>>> primary contact point for all clients.
>>>>> 
>>>>> There are multiple reggies involved for load balancing. The following
>>>>> convention is followed:
>>>>> 
>>>>> 1. The reggie for the load balancer is at 6670
>>>>> 2. The reggies for the actual Jini servers are at 5561, 5562, 5563, etc.
>>>>> 
>>>>> When the load-balancer receives a request from a client, it does the
>>>>> look-up at the appropriate Jini server and returns the remote service.
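>>>>> 
>>>>> For reference, that delegated look-up on a chosen back-end reggie is
>>>>> essentially a unicast discovery plus a template match. A rough sketch only:
>>>>> the host, port, and the MyRemoteService interface are placeholders.
>>>>> 
>>>>> import net.jini.core.discovery.LookupLocator;
>>>>> import net.jini.core.lookup.ServiceRegistrar;
>>>>> import net.jini.core.lookup.ServiceTemplate;
>>>>> 
>>>>> public class DelegatingLookup {
>>>>>     // Stand-in for the remote interface the back-end Jini servers register.
>>>>>     public interface MyRemoteService extends java.rmi.Remote {}
>>>>> 
>>>>>     // Ask the reggie on the chosen back-end (e.g. port 5561) for a matching
>>>>>     // proxy, which the load balancer then hands straight back to the client.
>>>>>     public static Object lookupOn(String host, int port) throws Exception {
>>>>>         LookupLocator locator = new LookupLocator("jini://" + host + ":" + port);
>>>>>         ServiceRegistrar reggie = locator.getRegistrar();    // unicast discovery
>>>>>         ServiceTemplate tmpl =
>>>>>                 new ServiceTemplate(null, new Class[] { MyRemoteService.class }, null);
>>>>>         return reggie.lookup(tmpl);    // null if nothing matching is registered
>>>>>     }
>>>>> }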
>>>>> 
>>>>> *Option 2*
>>>>> https://github.com/paawak/jini-in-a-war
>>>>> 
>>>>> I figured that if we can embed Jini in a Tomcat, then clustering the
>>>>> Tomcat would be very easy. But this is still work in progress, and there
>>>>> are a lot of details that I need to figure out.
>>>>> 
>>>>> Please let me know if the above makes sense or is around the same things
>>>>> that interest you. I would like to have an out-of-the-box Jini solution
>>>>> that *just works*. And I am happy to code for any solution that you guys
>>>>> think should be the way forward.
>>>>> 
>>>>> Thanks,
>>>>> Palash.
>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>>> On Tue, Jun 2, 2015 at 2:14 PM, Patricia Shanahan <pats@acm.org> wrote:
>>>>> 
>>>>>> Also, if there is any chance the bottleneck is in River, I would be very,
>>>>>> very interested in constructing a benchmark based on your workload that
>>>>>> demonstrates the scaling problem. I would like to run it against the latest
>>>>>> unreleased version, which I think may fix some scaling issues. If it still
>>>>>> shows scaling problems, I want to track them down and see whether they are
>>>>>> fixable without clustering.
>>>>>> 
>>>>>> My most recent professional background, before retiring, was as a
>>>>>> performance architect working on multiprocessor servers for Cray Research
>>>>>> and Sun Microsystems. When I first got involved in River I was thinking of
>>>>>> doing some performance analysis and improvement, one of my favorite games,
>>>>>> but could not find a suitable benchmark, or an actual user with a scaling
>>>>>> problem.
>>>>>> 
>>>>>> Patricia
>>>>>> 
>>>>>> 
>>>>>> On 6/2/2015 10:24 AM, Greg Trasuk wrote:
>>>>>> 
>>>>>>> 
>>>>>>> Palash:
>>>>>>> 
>>>>>>> Could you expand on your need for a “clustered Jini server”?  What
>>>>>>> features are you looking for, and what aspects of the application need
>>>>>>> to be clustered?  This might provide fertile grounds for development.
>>>>>>> 
>>>>>>> Cheers,
>>>>>>> 
>>>>>>> Greg Trasuk
>>>>>>> 
>>>>>>> On Jun 2, 2015, at 12:38 PM, Palash Ray <paawak@gmail.com> wrote:
>>>>>>>> 
>>>>>>>> Hi Greg, Patricia,
>>>>>>>> 
>>>>>>>> Really happy to see:
>>>>>>>> https://github.com/trasukg/river-container
>>>>>>>> 
>>>>>>>> I think we are headed in the right direction. I have been using River
>>>>>>>> for almost 2 years now, but only recently started taking an interest in
>>>>>>>> the code that makes it tick.
>>>>>>>> 
>>>>>>>> Our organisation is facing some scalability issues with Jini of late.
>>>>>>>> Well, I am not blaming Jini here, it's just that we need a clustered
>>>>>>>> Jini server.
>>>>>>>> 
>>>>>>>> To that end I was playing around with the code a bit. I have some
>>>>>>>> ideas which I can discuss with this group later.
>>>>>>>> 
>>>>>>>> I have created a small proof of concept of embedding Jini in a war and
>>>>>>>> running it in a webserver:
>>>>>>>> https://github.com/paawak/jini-in-a-war
>>>>>>>> 
>>>>>>>> Also, I keep blogging about Jini with whatever little understanding I
>>>>>>>> have:
>>>>>>>> http://palashray.com/java/jini/
>>>>>>>> 
>>>>>>>> In the coming days, I look forward to contributing to the River project.
>>>>>>>> 
>>>>>>>> Thanks,
>>>>>>>> Palash.
>>>>>>>> 
>>>>>>>> 
>>>>>>>> 
>>>>>>>> 
>>>>>>>> On 6/2/15, Greg Trasuk <trasukg@stratuscom.com> wrote:
>>>>>>>> 
>>>>>>>>> 
>>>>>>>>> Thanks, Jukka.  And by the way, I’m happy to see you’re still watching
>>>>>>>>> River!
>>>>>>>>> 
>>>>>>>>> Cheers,
>>>>>>>>> 
>>>>>>>>> Greg Trasuk.
>>>>>>>>> 
>>>>>>>>> On Jun 2, 2015, at 11:14 AM, Jukka Zitting <jukka.zitting@gmail.com> wrote:
>>>>>>>>>> 
>>>>>>>>>> Hi,
>>>>>>>>>> 
>>>>>>>>>> 2015-06-02 1:17 GMT-04:00 Greg Trasuk <trasukg@stratuscom.com>:
>>>>>>>>>> 
>>>>>>>>>>> I notice that for some reason, the 2.1 branch shows up as current, so
>>>>>>>>>>> you need to switch to the 2.2 branch explicitly to see the latest
>>>>>>>>>>> releases.  I’m not sure how to change that.
>>>>>>>>>>> 
>>>>>>>>>> 
>>>>>>>>>> You can file an INFRA issue to get the default branch and other GitHub
>>>>>>>>>> metadata changed.
>>>>>>>>>> 
>>>>>>>>>> Jukka
>>>>>>>>>> 
>>>>>>>>> 
>>>>>>>>> 
>>>>>>>>> 
>>>>>>> 
>>>>> 
>>>> 
>>> 
>> 
>> 
>> 
>> --
>> Simon Roberts
>> (303) 249 3613
>> 

