geronimo-dev mailing list archives

From Kresten Krab Thorup <>
Subject Re: Donation of a CORBA Orb
Date Mon, 04 Jul 2005 15:53:42 GMT

Thanks for your questions; as I was trying to answer them, the response  
ended up becoming rather lengthy...

First off, I don't think it makes much sense to just toss a  
bunch of code over the fence; that would be a dead end.  We need to  
set it up so that it becomes part of the community, and to do this  
I am thinking it would be good to engage the community in  
rewriting some parts of the ORB that we were thinking of refactoring  
anyway.  If we break the contribution into smaller chunks, I think  
there is a better chance of making it everybody's property.   
As for the schedule, we're thinking that it would come in pieces  
over the next 4 months or so.  We need to do some clean-up and  
partitioning from the rest of our code base, and as I said before, we  
would like to introduce this whole thing gently, both to us and to the  
community.

I think of this as a sing-along or "karaoke" project: somewhere in  
between "writing from scratch" and "donation".  We will take the  
individual parts of the Trifork ORB, clean them up with coding  
standards, javadoc, etc.; and have the community be part of reworking  
and improving the pieces along the way.  For some parts, such as the  
core io, I would like it to be redone entirely; whereas much of the  
higher-level logic such as RMI/IIOP and POA handling can go almost  
straight through.  When we're all the way through, the entire code  
base will be appropriately "apachified."

As for where it should be placed ASF-wise, I am thinking that it  
would be best to place it as part of Geronimo initially, because it  
is a good thing to have a concrete project [the appserver] to drive  
the requirements.  Also, the featureset required to do Java EE is  
somewhat less than the full CORBA spec (for instance, we have no  
interface/implementation repository, and no IDL compiler).

Our CORBA subsystem is written almost entirely by two people (one of  
whom is me), and so we would initially have two people on this.   
Depending on how this thing takes off we can add more people over time.

I have started working on a road map for how the donation could  
progress over time, and I will share this with you once it's a bit  
further along.    However, just to tease everyone, below is an  
example of where I am thinking we could start.

As for matching the technology, I don't really know where to begin;  
but an ORB is a relatively standard piece of technology.  I can offer  
some random highlights:

- The ORB is quite clean in the sense that it has literally no  
statics, since it is designed to be able to run multiple concurrent  
ORB instances in the same VM.
- In our RMI/IIOP (and thus our EJB container) we generate all stubs  
directly in memory; so there is no need to have javac in the loop.   
However, since CORBA stubs (and thus RMI/IIOP stubs) need to extend  
a class [javax.rmi.CORBA.Stub], JDK Proxies won't do, so we  
have our own implementation of java.lang.reflect.Proxy based on BCEL  
that allows a proxy to be a subclass of a given class.
- Many EJB containers have a shortcut to effectively implement  
LocalEJB semantics for local calls to make applications run faster;  
even though this really breaks the semantics.  Our RMI/IIOP has a  
middle-ground option: a very efficient "in-memory deep copy"  
with very-close-to-full semantics.  It avoids  
writing everything to a byte array, passes immutable objects such as  
Strings, java.lang.Integer, etc. straight through, and copies all  
other objects properly.
- All wire-level security and transaction processing is done cleanly  
in interceptors; so for these parts we can probably use most of what  
you already have.
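The stub-generation point above can be illustrated with a small, self-contained sketch: instances produced by java.lang.reflect.Proxy always extend the Proxy class itself, which is exactly why a stub that must extend javax.rmi.CORBA.Stub needs generated bytecode (e.g. via BCEL) instead. The Greeter interface and handler below are made up purely for illustration:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class ProxyLimitation {
    interface Greeter { String greet(); }

    public static void main(String[] args) {
        // A JDK dynamic proxy: implements interfaces, but its superclass
        // is fixed as java.lang.reflect.Proxy -- it cannot extend a
        // user-chosen class such as javax.rmi.CORBA.Stub.
        InvocationHandler h = (proxy, method, a) -> "hello from proxy";
        Greeter g = (Greeter) Proxy.newProxyInstance(
                Greeter.class.getClassLoader(),
                new Class<?>[] { Greeter.class }, h);
        System.out.println(g.getClass().getSuperclass().getName());
        System.out.println(g.greet());
    }
}
```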
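The "in-memory deep copy" option described above might look roughly like the following sketch. This is my assumption of the shape, not Trifork's code: known immutables are shared across the call, and a serialization round-trip stands in here for the real in-memory graph walk that avoids the byte array entirely:

```java
import java.io.*;

public class LocalCopy {
    // Sketch: pass known immutables straight through; otherwise fall
    // back to a copy (here a serialization round-trip for brevity).
    static Object copy(Object o) throws Exception {
        if (o == null || o instanceof String || o instanceof Integer
                || o instanceof Long || o instanceof Boolean) {
            return o; // immutable: safe to share with the callee
        }
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(o);
        }
        try (ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()))) {
            return ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        String s = "hello";
        System.out.println(copy(s) == s);        // same object: shared
        int[] a = { 1, 2, 3 };
        int[] b = (int[]) copy(a);
        System.out.println(b != a && b[0] == 1); // distinct, equal copy
    }
}
```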

The non-portable part is essentially in how you tell CORBA to use  
SSL; as you have surely experienced yourselves.

Please let me know if you have any questions.

Kresten Krab Thorup
CTO, Trifork

A final side note: internally we call this CORBA project "Navajo," as  
the IIOP protocol is appropriately backwards and obscure to be named  
after the Navajo code talkers who were employed to relay secret  
messages in the Pacific during WWII :-), see 

Anyway, here is the "first project" I am thinking of...

== first project ==

Right now the Trifork ORB is using NIO for the server-side of IIOP,  
but "classic" IO for the client side.  The NIO part is great because  
it lets us run all corba handling in a single selector thread backed  
by the appserver's thread pool.  However, with the experience from  
working with this for the last 5 years, I would like to redo the core  
I/O subsystem, and so I have started to do the first steps towards  
this rework.

The benefits of this, apart from cleaning up code that has grown over  
time, would be:

- Reduce copying data through the stack.
- Reduce thread usage further to support even more clients.
- Off-load reading & writing (e.g. response writing) to the framework  
so as to better handle slow clients.

There are many reasons why I would like to do this; here is one:    
One optimization that we did at one point was to pool byte arrays,  
because the allocation of byte arrays (read: zeroing out memory) took  
way too much time. [I know, generally pooling in JVMs is a bad  
idea, but in this case it made a lot of sense.]  With this rewrite we  
would gain the same optimization one more time.  CORBA  
Input/OutputStreams should be backed by java.nio.ByteBuffers directly,  
which will then be passed straight down to the NIO interface.
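A ByteBuffer-backed stream of the kind just described could be sketched like this; the class name and API are illustrative, not the actual Trifork classes:

```java
import java.io.IOException;
import java.io.OutputStream;
import java.nio.ByteBuffer;

public class BufferOutputStream extends OutputStream {
    // Sketch: marshal directly into a (possibly direct) ByteBuffer, so
    // the buffer can be handed straight to NIO without an extra copy.
    private final ByteBuffer buf;

    BufferOutputStream(ByteBuffer buf) { this.buf = buf; }

    @Override public void write(int b) { buf.put((byte) b); }
    @Override public void write(byte[] b, int off, int len) {
        buf.put(b, off, len);
    }

    public static void main(String[] args) throws IOException {
        ByteBuffer bb = ByteBuffer.allocateDirect(16);
        BufferOutputStream os = new BufferOutputStream(bb);
        os.write("GIOP".getBytes());  // e.g. the IIOP message magic
        bb.flip();                    // ready to hand to a channel
        byte[] out = new byte[bb.remaining()];
        bb.get(out);
        System.out.println(new String(out));
        os.close();
    }
}
```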

The effort as I see it falls in two parts:

- Asynchronous I/O API (AIO).  Based on the abstractions we have  
internally in the ORB, I'm doing a generalized version of something  
very similar to IBM's aio4j: future-based socket I/O. Unlike aio4j,  
however, the API runs straight on top of Java SE 1.4 [no native  
code], and hooks into an external thread pool to provide the same API  
based on both NIO and "classic" IO technology.

- IIOP Streams on AIO. Based on the above, write InputStream/ 
OutputStream implementations, as  well as connection management,  
backed by the AIO infrastructure and NIO direct buffers such that the  
underlying OS can stream data straight into the high-level structures.
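The "future-based I/O over classic IO plus an external thread pool" half of the AIO idea above can be sketched as follows; aio4j itself is not used here, and asyncRead is a made-up name standing in for whatever the real API ends up looking like:

```java
import java.io.*;
import java.util.concurrent.*;

public class FutureIo {
    // Sketch: a future-based read built from blocking ("classic") IO
    // submitted to an external pool -- one of the two backends the
    // unified AIO API would offer (the other being NIO selectors).
    static Future<byte[]> asyncRead(InputStream in, int n,
                                    ExecutorService pool) {
        return pool.submit(() -> {
            byte[] buf = new byte[n];
            int off = 0;
            while (off < n) {
                int r = in.read(buf, off, n - off);
                if (r < 0) throw new EOFException();
                off += r;
            }
            return buf;
        });
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        PipedOutputStream out = new PipedOutputStream();
        PipedInputStream in = new PipedInputStream(out);
        Future<byte[]> f = asyncRead(in, 5, pool); // returns immediately
        out.write("hello".getBytes());             // data arrives later
        System.out.println(new String(f.get()));   // future completes
        pool.shutdown();
    }
}
```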


On Jul 4, 2005, at 2:37 PM, Geir Magnusson Jr. wrote:

> Joern,
> Thanks for the note.  This is the right place to discuss.
> There are two separate threads of discussion that I can think of :
> 1) Technical - if the donation fits technically into what we are  
> doing (I'm sure it does...)
> 2) Administrative - how, where and when
> As for #2 my questions to you are :
> a) What is the timing for this donation?  How soon?
> b) Do you see this as coming straight to Geronimo to be part of  
> Geronimo initially, or to the Apache Incubator where we could work  
> on it with you and then make the decision of coming to the Geronimo  
> project or being something else, like a stand-alone top-level project.
> c) I assume that you'd be offering committers to help us with the  
> codebase and to continue working and expanding.  Do you have an  
> idea of how many?
> Thanks for doing this, and we look forward to discussion on both  
> subjects above.
> geir
> -- 
> Geir Magnusson Jr                                  +1-203-665-6437
