river-dev mailing list archives

From "Shay Hassidim" <s...@gigaspaces.com>
Subject RE: Distributed ExecutorService
Date Mon, 07 Dec 2009 17:53:02 GMT
It would be great if the group would take a look at the GigaSpaces
ExecutorService as a reference:


-----Original Message-----
From: Gregg Wonderly [mailto:gregg@wonderly.org] 
Sent: Monday, December 07, 2009 11:53 AM
To: river-dev@incubator.apache.org
Subject: Re: Distributed ExecutorService

This is the service model that I envisioned for the work I did to put
this into a JavaSpace-like environment with my
http://griddle.dev.java.net project.

I've always wanted to have the conversation about importing permissions,
but to some degree we've already done that.

o  The granting of permissions to a specific codebase URL already exists.
o  With HTTPMD, we can guarantee that the content of the jar hasn't changed,
    and thus the permissions we are granting are to be used by software
    we know the behavior of.
o  With DynamicPolicyProvider and the associated implementations, you can
    derive a dynamic "service" environment with appropriate changes to
    com.sun.jini.start.ServiceStarter to include conveyance of whatever
    policy and configuration you want to use.
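
The first two points combine into a grant of roughly this shape; the
host, jar path, digest, and the permission granted below are all
illustrative, assuming a download jar served over an HTTPMD URL:

```
// Illustrative policy fragment: the grant is tied to an HTTPMD codebase
// URL, so it applies only to a jar whose MD5 digest matches.
grant codeBase "httpmd://example.com:8080/service-dl.jar;md5=0123456789abcdef0123456789abcdef" {
    permission java.net.SocketPermission "*:1024-", "connect";
};
```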

I've used com.sun.jini.start.ServiceStarter and a derivative of my
http://startnow.dev.java.net project's org.wonderly.url.vhttp.Handler
StreamHandler implementation to create a client runtime environment that
downloads dynamically and caches for offline execution.

I think we have lots of bits and pieces that can work together, and the
bundle mechanism might be a standard way to package into a jar the
permissions that are actually required.

In the real world, many people still seem to have no stamina for
building an exact policy; instead, they look at the software they are
running as trusted or not trusted in totality, and use AllPermission as
the gating factor.

If we were to work on providing a "complete" permission conveyance,
would that be useful if there still are not adequate tools for
discovering/knowing exactly what permissions are required?
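
One way to attack the discovery problem, sketched below, is a
SecurityManager that records every permission checked rather than
enforcing policy; the class name and shape here are hypothetical, not an
existing River API:

```java
import java.security.Permission;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: a SecurityManager that records every permission
// checked instead of enforcing policy.  Running a service under it for a
// representative workload yields the grant list an exact policy needs.
class RecordingSecurityManager extends SecurityManager {
    private final Set<Permission> seen = ConcurrentHashMap.newKeySet();

    @Override
    public void checkPermission(Permission perm) {
        seen.add(perm);   // record rather than enforce
    }

    @Override
    public void checkPermission(Permission perm, Object context) {
        seen.add(perm);
    }

    public Set<Permission> recordedPermissions() {
        return seen;
    }
}
```

Dumping recordedPermissions() after such a run gives a starting point
for writing exact grants instead of falling back to AllPermission.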

Gregg Wonderly

Peter Firmstone wrote:
> I've had a few thoughts about the whole "move the code to the data" 
> concept (or "Move the code to the Service node") for some time, 
> considering it a low priority, I have kept quiet about it, until 
> recently when the topic came up during a recent email discussion.
> Current practice for River applications is to move code and data
> together in the form of marshalled objects.  Two particular groups of
> objects are of interest: code-intensive objects, whose methods process
> and create returned results, and data-intensive objects, where there is
> little to be done in the way of processing and only minor
> copy / transformations are performed on existing state.
> I think that the River platform addresses these Object groups quite
> effectively when the processing is known at compile time or when the
> service requirements are clear.  However, there are occasions when it
> would be less network intensive or simpler to submit the distributed
> equivalent of a ScheduledTask or Runnable to consume an existing
> data-intensive service at the origin of that service and make the
> desired result available via a temporary service or some other
> mechanism or protocol.  This applies in cases where the particular
> class files and libraries needed to perform processing are available
> at the service node, but not at the client, due to a legacy Java
> environment, no ability to load remote class files, or a constrained
> memory environment that cannot provide enough memory space for the
> processing required.  The result is that the uploaded runnable class
> file can be transformed into a locally available or compatible class
> file.
> The Runnable code might be uploaded to the service node by the client
> or a third-party mediator.  Any suggestions for what the mechanism
> should be would also be useful.  I'm thinking that a signed OSGi
> bundle containing a set of permissions would be a good model to start
> from, considering that OSGi already has many of the security
> mechanisms that would make such a thing possible.
> In essence the DistributedScheduledTask is a remote piece of client
> code that is executed in the service node.  I'm wondering just what a
> DistributedExecutorService should provide, if anyone else has had
> thoughts similar to mine.
> For instance, a Reporting Node in a cluster might send out the same
> DistributedScheduledTask to all available services of a particular
> type, to perform some intensive data processing or filtering remotely
> at each node, and retrieve the results from each after processing.
> The Reporting Node might have changing reporting requirements, similar
> to performing queries, for instance.
> Cheers,
> Peter.
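
For what it's worth, Peter's DistributedScheduledTask idea can be
sketched with nothing but java.util.concurrent; the interface name, the
Summarise task, and the single-thread executor standing in for a service
node are all illustrative, not a proposed API:

```java
import java.io.Serializable;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical sketch: a task that is Serializable so it can be
// marshalled to the service node, and Callable so the node can run it
// against locally held data.  Only the small result travels back.
interface DistributedScheduledTask<V> extends Callable<V>, Serializable {}

public class TaskSketch {
    // A plain array stands in for the data-intensive service's state.
    static class Summarise implements DistributedScheduledTask<Long> {
        private final long[] data;
        Summarise(long[] data) { this.data = data; }
        @Override public Long call() {
            long sum = 0;
            for (long v : data) sum += v;
            return sum;
        }
    }

    public static void main(String[] args) throws Exception {
        // Stand-in for the executor a service node would expose remotely.
        ExecutorService node = Executors.newSingleThreadExecutor();
        Future<Long> result = node.submit(new Summarise(new long[] {1, 2, 3, 4}));
        System.out.println(result.get()); // prints 10
        node.shutdown();
    }
}
```

The reporting-node scenario above is then just fanning the same task out
to many such executors and collecting the Futures.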
