river-dev mailing list archives

From Peter Firmstone <peter.firmst...@zeus.net.au>
Subject Have we been doing codebase annotations wrong?
Date Wed, 31 Jan 2018 07:26:17 GMT

Could RMIClassLoader have been better conceived?

Could a simpler alternative be utilised instead?

For example, classes are resolved differently during deserialization 
than at runtime. At runtime a ClassLoader delegates to its parent 
ClassLoader, or in the case of modular systems, to the ClassLoaders of 
imported modules. RMIClassLoader doesn’t resolve classes through 
ClassLoader hierarchies, but instead tries to locate each ClassLoader 
directly based on a codebase annotation.

The problem with this approach is that not all ClassLoaders provide 
codebase annotations, and class resolution may differ at distinct nodes 
in the network.

Currently codebase annotations may change when marshalling between 
nodes, depending on where each class is resolved.

Refer to: http://sorcersoft.org/resources/jini/smli_tr-2006-149.pdf
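To make the contrast concrete, here's a minimal, self-contained sketch of hierarchical resolution (DelegationDemo is an invented name, not a River class): a child ClassLoader with no classes of its own still resolves a class through parent delegation, whereas RMIClassLoader keys each lookup on an annotation string instead of on this hierarchy.

```java
// Sketch: runtime resolution delegates up the ClassLoader hierarchy.
public class DelegationDemo {
    public static void main(String[] args) throws Exception {
        ClassLoader parent = DelegationDemo.class.getClassLoader();
        // A child loader that defines no classes of its own still
        // resolves java.util.List by delegating to its ancestors.
        ClassLoader child = new ClassLoader(parent) {};
        Class<?> c = Class.forName("java.util.List", false, child);
        System.out.println(c.getName()); // java.util.List
        // RMIClassLoader, by contrast, locates a loader directly from a
        // per-class codebase annotation rather than from this hierarchy.
    }
}
```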

A much simpler approach.

We can define the ClassLoader at each endpoint.

In JERI, a ServerEndpoint can be assigned a default ClassLoader, by 
passing it as a parameter to its InvocationLayerFactory. The client 
Endpoint’s default ClassLoader is the ClassLoader of its dynamic proxy 
instance (the ClassLoader where the java.lang.reflect.Proxy dynamically 
generated instance is loaded).

So if a service has a smart proxy, its codebase should be present in a 
ClassLoader at both the ServerEndpoint and the client Endpoint, so that 
the default ClassLoaders at both Endpoints contain that codebase.

The default ClassLoader at each Endpoint is now responsible for class 
resolution; RMIClassLoader is no longer required. In fact, every class 
in the stream no longer needs to carry a codebase annotation either.
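One way to picture "the default loader is responsible for resolution" is an ObjectInputStream that resolves every class through a single endpoint-supplied loader, with no per-class annotation lookup. This is only an illustrative sketch (EndpointObjectInputStream and EndpointStreamDemo are invented names, and a real implementation would also handle primitive and proxy class descriptors):

```java
import java.io.*;

// Sketch: resolve every class through one endpoint-supplied default
// loader (and its parents), never via a per-class codebase annotation.
class EndpointObjectInputStream extends ObjectInputStream {
    private final ClassLoader defaultLoader;

    EndpointObjectInputStream(InputStream in, ClassLoader defaultLoader)
            throws IOException {
        super(in);
        this.defaultLoader = defaultLoader;
    }

    @Override
    protected Class<?> resolveClass(ObjectStreamClass desc)
            throws IOException, ClassNotFoundException {
        // No annotation lookup: the default loader decides.
        return Class.forName(desc.getName(), false, defaultLoader);
    }
}

public class EndpointStreamDemo {
    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        ObjectOutputStream out = new ObjectOutputStream(bos);
        out.writeObject("hello");
        out.flush();
        try (ObjectInputStream in = new EndpointObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()),
                EndpointStreamDemo.class.getClassLoader())) {
            System.out.println(in.readObject()); // hello
        }
    }
}
```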

But what about client parameter objects, or exported remote handback 
objects passed as parameters, I hear you ask?

Simple: we use a marker interface, so these objects can identify 
themselves to the stream. A bootstrap proxy can then be provided by the 
stream that uses only local classes, present at both endpoints. The 
original object is stored in a MarshalledInstance and serialized 
together with the bootstrap proxy to the remote Endpoint. That allows 
the originator to be authenticated and its codebase provisioned into a 
ClassLoader, which becomes the default loader the MarshalledInstance 
uses to deserialize the object in question. The identity key of the 
ClassLoader will be a combination of the bootstrap proxy’s 
InvocationHandler identity and its codebase annotation, so it can be 
cached. If a remote object is contained within the MarshalledInstance, 
its JERI Endpoint will use the default ClassLoader passed to the 
MarshalledInstance as its default loader.
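The caching scheme described above can be sketched as follows. Every name here (BootstrapProxy, LoaderCache, CacheDemo) is hypothetical, invented for illustration; the point is only that the cache key combines the bootstrap proxy's InvocationHandler identity with its codebase annotation, so two lookups with the same pair share one provisioned loader:

```java
import java.util.*;

// Hypothetical stand-in for the bootstrap proxy's identifying state.
final class BootstrapProxy {
    final Object handlerIdentity; // stands in for InvocationHandler identity
    final String codebase;        // codebase annotation the proxy supplies

    BootstrapProxy(Object handlerIdentity, String codebase) {
        this.handlerIdentity = handlerIdentity;
        this.codebase = codebase;
    }
}

// Cache keyed on (handler identity, codebase annotation).
final class LoaderCache {
    private final Map<List<Object>, ClassLoader> cache = new HashMap<>();

    ClassLoader loaderFor(BootstrapProxy p, ClassLoader parent) {
        List<Object> key = Arrays.asList(p.handlerIdentity, p.codebase);
        // In a real provider this would provision the codebase; here an
        // empty child loader stands in for the provisioned loader.
        return cache.computeIfAbsent(key, k -> new ClassLoader(parent) {});
    }
}

public class CacheDemo {
    public static void main(String[] args) {
        LoaderCache cache = new LoaderCache();
        Object handler = new Object();
        BootstrapProxy a = new BootstrapProxy(handler, "http://host/svc.jar");
        BootstrapProxy b = new BootstrapProxy(handler, "http://host/svc.jar");
        ClassLoader l1 = cache.loaderFor(a, CacheDemo.class.getClassLoader());
        ClassLoader l2 = cache.loaderFor(b, CacheDemo.class.getClassLoader());
        System.out.println(l1 == l2); // same key -> cached loader reused: true
    }
}
```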

Note: If we’re in a traditional Java hierarchical ClassLoader system 
(not modular), we’d want the current stream’s default loader to be the 
parent loader of the resolved or provisioned ClassLoader passed to the 
MarshalledInstance.

So now you don’t get codebase annotation loss, and the ServerEndpoint 
and client Endpoint have ClassLoaders with compatible class resolution. 
The codebase annotation becomes a configuration concern of the service.

In addition, MethodConstraints can also be applied to exported objects 
nested within other services. It can be passed using the stream context. 
This ensures that minimum principal authentication, integrity and 
confidentiality apply to all nested objects.
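The stream-context idea can be sketched with invented stand-ins (Constraints, RemoteMethodControlLike, NestedProxy and ContextDemo are hypothetical names, not River types): the caller's constraints travel in a context map alongside the stream, and each nested exported object found in the graph has them applied before it is handed back.

```java
import java.util.*;

// Hypothetical stand-in for a MethodConstraints object.
final class Constraints {
    final String summary;
    Constraints(String summary) { this.summary = summary; }
}

// Stand-in for an object supporting constraint application
// (in River this role is played by RemoteMethodControl).
interface RemoteMethodControlLike {
    void applyConstraints(Constraints c);
}

final class NestedProxy implements RemoteMethodControlLike {
    Constraints applied;
    public void applyConstraints(Constraints c) { applied = c; }
}

public class ContextDemo {
    public static void main(String[] args) {
        // The stream context carries the minimum constraints.
        Map<String, Object> streamContext = new HashMap<>();
        streamContext.put("methodConstraints",
                new Constraints("integrity, client auth"));
        // On deserialization, each nested exported object picks the
        // constraints out of the context before being returned.
        NestedProxy nested = new NestedProxy();
        nested.applyConstraints(
                (Constraints) streamContext.get("methodConstraints"));
        System.out.println(nested.applied.summary);
    }
}
```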

The good news is that most of the mechanisms are already present and 
backward compatibility can be preserved allowing eventual migration.

Remember that a smart proxy may have no real server back-end 
communication at all (except for providing the codebase); it’s just an 
object that gets serialized around different nodes. In this case the 
bootstrap proxy is still used to provide the codebase annotation and 
for trust verification.

How does trust work in this system?

Provided you still trust the bootstrap proxy’s service, after method 
constraints ensuring confidentiality and minimum principal 
authentication have been applied, it provides the codebase annotation. 
If the integrity constraint is true, the codebase scheme is checked for 
integrity; or, if the jar is signed, it is validated by a provider. 
(The signers can be anonymous and advised by the bootstrap proxy.) You 
now trust that the code will validate input during deserialization, and 
if the deserialized object implements RemoteMethodControl, you apply 
MethodConstraints to it as well. The object bytes in serial form may 
have originated from a third party (also with MethodConstraints applied, 
but possibly not trusted by the original node); in any case it’s 
important for the input to be validated during deserialization.

Note this system would also utilise a Service Provider Interface to 
communicate with the bootstrap proxy, and preferred classes can still be 
supported, simply by using a PreferredClassLoader when loading the codebase.
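For readers unfamiliar with preferred classes, the resolution rule can be sketched like this (PreferredLoaderSketch and PreferredDemo are invented names; River's actual PreferredClassLoader reads PREFERRED.LIST from the codebase jars rather than taking a set of names): names marked preferred are loaded child-first from the codebase, everything else delegates parent-first as usual.

```java
import java.util.Set;

// Sketch of "preferred" resolution: preferred names are loaded
// child-first from the codebase; all others delegate parent-first.
class PreferredLoaderSketch extends ClassLoader {
    private final Set<String> preferred;

    PreferredLoaderSketch(ClassLoader parent, Set<String> preferred) {
        super(parent);
        this.preferred = preferred;
    }

    @Override
    protected Class<?> loadClass(String name, boolean resolve)
            throws ClassNotFoundException {
        if (preferred.contains(name)) {
            // Child-first: consult this loader's codebase before delegating.
            Class<?> c = findLoadedClass(name);
            if (c == null) c = findClass(name); // would read codebase URLs
            if (resolve) resolveClass(c);
            return c;
        }
        return super.loadClass(name, resolve); // normal parent delegation
    }
}

public class PreferredDemo {
    public static void main(String[] args) throws Exception {
        // Empty preferred list: everything delegates to the parent as usual.
        PreferredLoaderSketch loader = new PreferredLoaderSketch(
                PreferredDemo.class.getClassLoader(), Set.of());
        System.out.println(
                Class.forName("java.util.Map", false, loader).getName());
    }
}
```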

This system also makes support for modular environments like OSGi 
relatively simple compared to RMIClassLoaderSpi. Additionally, it 
allows an OSGi node to interact with traditional nodes / services, 
provided jar files have bundle manifests and the configured codebase 
annotation string contains all required jar files, including 
dependencies (the OSGi provider can ignore the dependencies, provided 
the first jar is the proxy bundle).

I've currently got a prototype I'm working on if anyone's interested.


