river-dev mailing list archives

From Peter Firmstone <j...@zeus.net.au>
Subject Thinking Aloud - fundamental challenges of jvm based distributed computing
Date Tue, 15 Nov 2011 23:10:20 GMT
With River, we execute separately compiled bytecode at runtime by 
taking advantage of discovery and lookup services - a form of distributed 
dependency injection.  The Service API, Java and Jini platforms provide 
compatibility for separately compiled components: these are the parent 
classes and interfaces that clients and services use to communicate 
across compile-time boundaries, demarcated by the Service API.

Determining which classes should be shared between the proxy and client 
namespaces, and which shouldn't, creates challenges for developers, who 
currently need to define preferred class lists.
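
For example, a proxy codebase jar declares its preferred classes in a 
META-INF/PREFERRED.LIST resource, read by PreferredClassProvider.  A 
minimal sketch (the class name below is hypothetical):

```
PreferredResources-Version: 1.0

Name: com/example/service/ProxyImpl.class
Preferred: true
```

Classes marked preferred are resolved from the proxy's codebase even 
when a class with the same name exists on the client's classpath.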

My personal preference is to share as little as possible, how selfish!

The Java platform itself presents issues with memory isolation, 
namespace visibility, class resolution and codebase annotation loss for 
proxies and distributed objects.

proxy - remote reference, implements Remote
distributed object - local copy of an object, transferred by 
serialization, implements Serializable

If we share only the Service API, Java and Jini Platform classes and 
objects, then we minimise Java platform issues.

Proxy classes that extend Service API, Java or Jini platform classes 
don't need to be visible to the client (the reverse also holds), 
since client code uses the superclasses and interfaces in the Service 
API, Java or Jini platform.

In reality, client classes and any other libraries are loaded into the 
application ClassLoader along with the Service API and Jini platform, so 
all these additional classes are also visible to proxies.

If the proxy uses libraries, or has classes in the same namespace as the 
client, those classes will be resolved by the application ClassLoader 
unless they are preferred (and the resolved classes may or may not be 
compatible).  If the proxy is later transferred to another node in the 
network, the codebase annotations have been lost, because the classes 
were loaded by the application class loader and resolved from the 
classpath.  If the second remote node doesn't have the required classes 
on its classpath, it's game over.
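
The annotation loss can be observed directly with 
java.rmi.server.RMIClassLoader: a class resolved from the local 
classpath carries no codebase annotation for the next node to download 
from.  A minimal sketch (the class name is illustrative):

```java
import java.rmi.server.RMIClassLoader;

// Demonstrates codebase annotation loss: a class resolved from the
// local classpath has no codebase annotation (unless the
// java.rmi.server.codebase property is set), so a node receiving the
// marshalled object has no URL from which to download the class.
public class AnnotationLossDemo {
    public static void main(String[] args) {
        String annotation =
                RMIClassLoader.getClassAnnotation(AnnotationLossDemo.class);
        System.out.println("codebase annotation: " + annotation);
    }
}
```

With java.rmi.server.codebase unset, this prints "codebase annotation: 
null" - exactly the state a proxy must avoid before being re-marshalled.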

This problem could be solved if the classpath isn't visible to the 
proxy, only the Service API, Java and Jini platform classes.  Then 
developers wouldn't have to understand ClassLoader visibility and 
preferred classes, and could instead focus on developing services and 
getting their OS and network to support multicast discovery.
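
The desired isolation can be sketched with a plain ClassLoader 
hierarchy: a loader parented above the system (application) class 
loader cannot resolve classes from the classpath.  A minimal sketch 
(the empty-URL loader stands in for a proxy's codebase loader):

```java
import java.net.URL;
import java.net.URLClassLoader;

// Sketch of classpath invisibility: a class loader whose parent sits
// above the system (application) class loader cannot see classes on
// the classpath - the isolation a proxy loader would need.
public class VisibilityDemo {
    public static void main(String[] args) throws Exception {
        ClassLoader aboveApp = ClassLoader.getSystemClassLoader().getParent();
        // Empty codebase; a real proxy loader would list its codebase URLs.
        try (URLClassLoader proxyLoader =
                new URLClassLoader(new URL[0], aboveApp)) {
            try {
                proxyLoader.loadClass("VisibilityDemo"); // classpath only
                System.out.println("classpath visible");
            } catch (ClassNotFoundException expected) {
                System.out.println("classpath not visible");
            }
        }
    }
}
```

Because VisibilityDemo lives only on the classpath, and neither the 
proxy loader nor its parent delegates there, the lookup fails and the 
sketch prints "classpath not visible".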

Some options:

   1. Use the extension classloader (via a command line option, not
      /jre/lib/ext/) for the Service API and Jini platform classes; the
      classpath will then not be visible to proxies that reside in
      child classloaders of the extension classloader.  Note that
      classes in the extension classloader can have reduced permissions
      and don't have to be granted AllPermission.
   2. OR Create a new child classloader, for the application and all
   3. OR Use a subprocess JVM for all smart proxies (identified as
      classes implementing Remote and having codebase annotations); the
      JVM has been optimised to share platform class files with
      subprocess JVMs for fast startup and lower memory consumption. 
      However, this requires reflective proxies to represent smart
      proxies in the main JVM.  The part I haven't figured out yet is
      how to have a reflective proxy replaced by a smart proxy when it
      gets transferred to another machine.  Perhaps it might be possible
      using a marker interface on the reflective proxy, however I'll
      save that discussion for another thread.
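
The reflective stand-in in option 3 could be built on 
java.lang.reflect.Proxy.  A minimal sketch (the Greeter interface and 
the echoing handler are hypothetical; a real handler would marshal the 
call over IPC to the subprocess JVM):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

// Sketch of a reflective proxy standing in for a smart proxy that
// lives in a subprocess JVM; the handler is where calls would be
// forwarded over IPC.
public class ReflectiveStandIn {
    interface Greeter { String greet(String name); }

    public static void main(String[] args) {
        InvocationHandler handler = (proxy, method, margs) -> {
            // A real implementation would marshal the call to the
            // subprocess; here we just echo to show the mechanism.
            return "forwarded " + method.getName() + "(" + margs[0] + ")";
        };
        Greeter g = (Greeter) Proxy.newProxyInstance(
                Greeter.class.getClassLoader(),
                new Class<?>[] { Greeter.class },
                handler);
        System.out.println(g.greet("river"));
    }
}
```

The open problem stated above remains: on marshalling to another node, 
something must substitute the real smart proxy for this stand-in.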

To have any of these work properly would require a container.  Tim 
Blackman's Jini in a jar https://issues.apache.org/jira/browse/RIVER-342 
solves a number of the configuration, command line option and classpath 
visibility issues using a new URL scheme, which enables a Jini 
application to be run from a jar file.

Do you think it's time we worked towards a standard container, so that 
at some point in the future all the downstream projects would be able to 
support it alongside their existing containers?

What are the requirements for such a container?
