river-dev mailing list archives

From "Michał Kłeczek (XPro Sp. z o. o.)" <michal.klec...@xpro.biz>
Subject Re: OSGi
Date Mon, 30 Jan 2017 07:50:59 GMT
I absolutely agree with the requirements you state.

The problem with Jini's (and hence River's) usage of the TCCL is that it assumes 
a parent-child relationship between class loaders - which in turn causes 
the issues with transferring object graphs that I've described earlier.

What I understood while working on this is that OSGi is not the right 
choice either :)
What I also understood is that even having a "module" concept in Java is 
not enough :)

Any solution needs to make a distinction between:
- the static notion of a module (in OSGi it is a bundle; in current Jini, 
a module is represented by a codebase string)
- the dynamic notion of a module (in OSGi it is a BundleWiring; in current 
Jini it would be the _hierarchy_ of class loaders) - this is the runtime 
state representing a module together with its resolved dependencies
And the most important conclusion is that it is the latter that must be 
sent between parties in a distributed system to make it all work.
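
Just to make that distinction concrete, here is a minimal sketch (the type 
names are hypothetical and not part of the actual design - the real thing 
is sketched further below):

import java.io.Serializable;
import java.util.List;

//the static notion: just a description of a module (think codebase string / bundle id)
final class StaticModule implements Serializable {
   final String codebase;
   StaticModule(String codebase) { this.codebase = codebase; }
}

//the dynamic notion: a static module plus its *resolved* dependencies -
//this graph is what has to travel together with a marshalled object graph
final class ResolvedModule implements Serializable {
   final StaticModule module;
   final List<ResolvedModule> dependencies;
   ResolvedModule(StaticModule module, List<ResolvedModule> dependencies) {
     this.module = module;
     this.dependencies = dependencies;
   }
}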

To support class (API) evolution, the solution must also provide 
"open ends" in the module dependency graph. The difference from 
PreferredClassProvider is that these "open ends" must allow selecting 
one of many existing class loaders, instead of only the one particular 
ClassLoader that a client has set as the TCCL.
The TCCL is still needed to support many separate services in a single JVM: 
it is used to select the proper subset of class loaders as the set of 
candidates to choose from when resolving code bases. This makes sure that 
a single "static module" may produce many instances of a "dynamic module", 
each resolved differently, in the same JVM.

I am working on this and hope to be able to provide an initial 
implementation soon.
The solution I am working on assumes code bases are represented as 
serializable objects, so the examples I am giving are based on that.

The basic idea might be presented as follows (of course, details related 
to concurrency, weak references and the proper method visibility needed to 
make the thing secure are left out):

//imports needed by this sketch
import java.io.Serializable;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.function.Predicate;

class ClassResolver {
   //one resolver per "service context" in the JVM, keyed by class loader
   static final Map<ClassLoader, ClassResolver> globalResolverMap = new HashMap<>();

   //returns the resolver of the current context, selected via the TCCL
   static ClassResolver getContextClassResolver() {
     return globalResolverMap.get(Thread.currentThread().getContextClassLoader());
   }

   final Map<ClassLoader, CodeBase> codeBaseMap = new HashMap<>();
   final Map<CodeBase, ClassLoader> existingLoadersMap = new HashMap<>();
   final Set<ApiCodeBase> apiImplementations = new HashSet<>();

   ClassLoader getClassLoader(CodeBase cb) {
     CodeBase resolved = resolve(cb);
     ClassLoader loader = existingLoadersMap.get(resolved);
     if (loader == null) {
       loader = resolved.createLoader(this);
       //update the caches etc.
       existingLoadersMap.put(resolved, loader);
       codeBaseMap.put(loader, resolved);
     }
     return loader;
   }

   CodeBase resolve(CodeBase cb) {
     if (cb instanceof ApiCodeBase) {
       return resolveApi((ApiCodeBase) cb);
     }
     return cb;
   }

   CodeBase resolveApi(ApiCodeBase apiCb) {
     //Java 8 style
     //simplified - in reality we want to select a "best match",
     //not just the first already-resolved code base the matcher accepts
     return existingLoadersMap.keySet().stream()
         .filter(apiCb::matchesCodeBase)
         .findFirst()
         .orElse(apiCb);
   }
}

abstract class CodeBase implements Serializable {
   //creates a ClassLoader, using the provided resolver to resolve any dependencies
   protected abstract ClassLoader createLoader(ClassResolver resolver);
}

class ApiCodeBase extends CodeBase {
   Predicate<? super CodeBase> matcher;   //the "open end": which code bases may implement this API
   CodeBase defaultImplementation;

   @Override
   protected ClassLoader createLoader(ClassResolver resolver) {
     //nothing matched locally - fall back to the default implementation
     return defaultImplementation.createLoader(resolver);
   }

   boolean matchesCodeBase(CodeBase cb) {
     return matcher.test(cb);
   }
}

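To show the consuming side, a minimal usage sketch (the helper is 
hypothetical; the actual integration with the marshalling streams is left 
out as well):

//given a CodeBase object received from a remote party (e.g. read from the
//stream alongside the class descriptor), ask the per-service resolver -
//selected via the TCCL - for a class loader and load the named class through it
class CodeBaseClassLoading {
   static Class<?> loadClass(String name, CodeBase remoteCodeBase)
       throws ClassNotFoundException {
     ClassResolver resolver = ClassResolver.getContextClassResolver();
     ClassLoader loader = resolver.getClassLoader(remoteCodeBase);
     return Class.forName(name, false, loader);
   }
}
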
So when a service provider initializes its runtime environment, it creates 
a set of CodeBase subclass instances connected to each other in a way 
specific to the particular class loading implementation.
Some of them might be ApiCodeBase instances that will use their default 
implementations to create class loaders in the service provider's 
environment (since they are resolved against a "clean" ClassResolver on startup).
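
For example (a hedged sketch - UrlCodeBase and the trivial matcher below 
are hypothetical and only show the shape of the wiring; a real matcher 
would inspect module metadata such as a name and version range):

import java.net.URL;
import java.net.URLClassLoader;

//hypothetical concrete CodeBase: loads a module's classes from a set of jar URLs
class UrlCodeBase extends CodeBase {
   final URL[] urls;

   UrlCodeBase(URL... urls) { this.urls = urls; }

   @Override
   protected ClassLoader createLoader(ClassResolver resolver) {
     //a real implementation would also resolve the module's declared
     //dependencies through the resolver; omitted here
     return new URLClassLoader(urls, null);
   }
}

class ProviderBootstrap {
   //wires up an "open ended" API module with its default implementation
   static ApiCodeBase apiModule(UrlCodeBase apiJars) {
     ApiCodeBase api = new ApiCodeBase();
     api.matcher = cb -> cb instanceof UrlCodeBase;  //stand-in predicate only
     api.defaultImplementation = apiJars;            //used when nothing matches (e.g. provider startup)
     return api;
   }
}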

Any client that deserializes an object graph provided by the service 
will either:
1. Not have a matching CodeBase to select from when resolving an ApiCodeBase 
(a situation similar to service provider startup), or
2. Have a matching CodeBase to select from:
a) if it is an ApiCodeBase - it may be resolved again when the graph is 
transferred further
b) if it is not an ApiCodeBase - it will "lock" the "open end" (see the 
sketch below)

So any party is free to mark modules as "private" or "public".
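
Continuing the hypothetical UrlCodeBase example, case 2b might look like 
this on the client side - by resolving its own local copy of the API module 
first, the client makes it a candidate, so an incoming ApiCodeBase whose 
matcher accepts it will reuse the client's class loader instead of falling 
back to the default implementation:

import java.net.URL;

class ClientBootstrap {
   static void exposeLocalApi(ClassResolver resolver) throws Exception {
     UrlCodeBase localApi =
         new UrlCodeBase(new URL("file:/opt/client/lib/service-api.jar"));  //hypothetical path
     resolver.getClassLoader(localApi);  //resolved and cached as a candidate ("locks" the open end)
   }
}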

The tricky thing is still handling code base "bounce backs".
If one is not careful when specifying the API, a service might encounter 
the "lost codebase" problem, which can be mitigated by having API modules 
consist of interfaces only.
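
A hedged illustration of that mitigation (hypothetical names): the API 
module exposes only interfaces, so a proxy that bounces back never drags 
implementation classes into the API module's "open end".

//lives in the interface-only API module (published as an ApiCodeBase)
interface Printer {
   void print(String document);
}

//lives in a separate, concrete implementation CodeBase; never referenced from the API module
class PrinterImpl implements Printer, java.io.Serializable {
   public void print(String document) {
     System.out.println(document);
   }
}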

Thanks,
Michal

Gregg Wonderly wrote:
> But codebase identity is not a single thing.  If you are going to allow a client to
> interact with multiple services and use those services, together, to create a composite
> service or just be a client application, you need all of the classes to interact.  One
> of the benefits of dynamic class loading, and the selling point of how Jini was first
> presented (and I still consider this to be a big deal), is the notion that you can
> introduce a new version of a service which might already exist in duplicate to try out
> the new version.  Thus, the same class name can have multiple versions presented by
> multiple jars or bundles.  You need to load the right one, and expose it to the
> client(s) in a way that keeps things distinctly separated.
>
>         service A ->  codeSource1, codesource2, codesource3
> new service A ->  codesource4, codesource2, codesource5
>
> If you get the new service A, you need (as if it was a separate service) to resolve it
> using the proper code sources.  I understand how to do this with TCCL, and I also
> understand how it might be done with some other class loading mechanism.  The question
> is, for OSGi bundles, how does a bundle loader manager make that any different from
> TCCL, in that ultimately you still have a “tree” or “set” of dependencies that are
> resolved into the composite codebase.  Bundles introduce a larger collection of active
> classes and mechanisms managing the dependency graph.  There are tools to assemble
> bundles and lots of other associated details.  TCCL makes it trivial to “know” what
> codesource is active and is no different in complexity than casting a class loader back
> to a bundle class loader to find fields which detail the collection of involved classes.
>
> I am not an OSGi user.  I am not trying to be an OSGi opponent.  What I am trying to
> say is that I consider all the commentary in those articles about TCCL not working to
> be just inexperience and an argument to try and justify a different position or
> interpretation of what the real problem is.
>
> The real problem is that there is not one “module” concept in Java (another one is
> almost here in JDK 9/Jigsaw).  No one is working together on this, and OSGi is solving
> problems in a small part of the world of software.  It works well for embedded, static
> systems.  I think OSGi misses the mark on dynamic systems because of the piecemeal
> loading and resolving of classes.  I am not sure that OSGi developers really understand
> everything that Jini can do because of the choices made (and not made) in the design.
> The people who put Jini together had a great many years of experience piecing together
> systems which needed to work well with a faster rate of variability and adaptation to
> the environment than what most people seem to experience in their classes and work
> environments, which are locked down by extremely controlled distribution strategies
> that end up slowing development in an attempt to control everything that doesn’t
> actually cause quality to suffer.
>
> Gregg
>
>

