harmony-dev mailing list archives

From Neil Macneale <mac4-harm...@theory.org>
Subject Re: Security
Date Sat, 02 Jul 2005 05:09:11 GMT

Tom Tromey wrote:
>>"Ben" == Ben Laurie <ben@algroup.co.uk> writes: 
> Ben> I can't think of _any_ other interesting security properties that Java
> Ben> has and C lacks. Am I missing something?
> Probably not.  At some point any VM has to do untrusted things.  There
> may be reasons that this "window" is bigger or smaller, and smaller is
> probably preferable, but it doesn't seem to me to be a necessary
> consequence of the implementation language.
> That said, it does make sense to think not only about how to implement
> security, but also how to verify it, and likewise how to ensure the VM
> remains secure in the face of a lot of mutation.

This is actually more along the lines of what I was thinking when I
wrote the first post, though I admit I wasn't very clear about that :-/
I don't want to restart the C/C++ vs. Java argument. People are
ultimately the ones who introduce security problems, and people are
ultimately the ones who find them. Tools and verifiers get you part of
the way, but at some point the code needs to be read by a human
expressly for the purpose of finding vulnerabilities. Unlike
functionality defects, which are generally ferreted out by having lots
of eyeballs, security defects are usually more subtle.
How does this get dealt with in an open source project where security is
of high importance?

OpenBSD requires an audit for all submissions, if I understand
correctly. Should we require something like that? The problem with such
processes and requirements is that they start to feel like my job  :-)

Since people are itchin' to get going on code, there will probably be a
phase of seed code which is torn apart, with lots of stub functions and
so forth. All this thrashing usually results in pieces of code that
were never fully dealt with correctly. It's inevitable, IMHO. So how do
we keep vulnerabilities from "shipping" with milestone releases? Should
we require a peer review on every package, or component, or file? If
PGP signatures are required, then a script can be written to tell us
what has not been read. But this gets back to a process, and that may
be too heavyweight for this particular group. What solutions have
people seen in other communities which have worked? I'm particularly
interested in Apache's approach (I assume there is one).
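To make the signature idea concrete, here is a minimal sketch of the kind of script I mean: given the set of files in a release and the set of files that carry a reviewer's signature, report what has not been audited yet. This is purely illustrative — the file names and the `unreviewed` helper are hypothetical, and a real version would actually verify detached PGP signatures rather than take a list on faith.

```java
import java.util.*;

// Hypothetical sketch: report release files lacking a reviewer signature.
// In practice the "signed" set would come from verifying detached PGP
// signatures (e.g. with gpg), not from a hard-coded list.
public class AuditGap {
    static List<String> unreviewed(Collection<String> release,
                                   Collection<String> signed) {
        List<String> missing = new ArrayList<>(release);
        missing.removeAll(new HashSet<>(signed));  // drop everything already reviewed
        Collections.sort(missing);                 // stable, readable report order
        return missing;
    }

    public static void main(String[] args) {
        // Illustrative file names, not real Harmony paths.
        List<String> release = List.of("vm/Thread.c", "vm/Heap.c", "lib/String.java");
        List<String> signed  = List.of("vm/Heap.c");
        System.out.println("Not yet audited: " + unreviewed(release, signed));
    }
}
```

Run as part of the release checklist, a non-empty report would block the milestone until someone reads (and signs) the remaining files.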

My main concern is that Harmony will get to a milestone, say 1.0, and a
massive effort will be required to search for security holes. That is
the worst-case scenario in my opinion. If this J2SE implementation is
going to be taken seriously, then people need to trust it, and that is
not the way to build trust.

> For checking we'll probably be adding tests to Mauve for various
> security things as we start working on the security infrastructure in
> libgcj.  These kinds of tests still miss a lot though.
> One idea we've discussed a little is writing new FindBugs checks to
> look for the required security calls.  But this doesn't protect us
> from bugs in the native code or bugs allowing access to non-standard
> weird things that shouldn't be generally accessible (we have some
> interesting code in gnu.gcj.*).
> Tom

I'm all for the use of tools to scan for common errors. I'd like to hear
more about people's experience with different verifiers.
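To illustrate the flavor of check I have in mind — this is not FindBugs itself, just a crude stand-in — a scanner can flag source lines that call sensitive APIs so a human knows where to focus an audit. The API list below is an assumption on my part; a real tool like FindBugs works on bytecode and is far more precise.

```java
import java.util.*;

// Hedged illustration of tool-based scanning: flag lines that mention
// security-sensitive APIs for manual review. A real checker (FindBugs)
// analyzes bytecode and data flow; this just matches source text.
public class SensitiveCallScan {
    // Illustrative watch list; a real one would be much longer.
    static final List<String> SENSITIVE = List.of(
        "Runtime.getRuntime().exec", "System.loadLibrary", "Class.forName");

    static List<Integer> flaggedLines(List<String> sourceLines) {
        List<Integer> hits = new ArrayList<>();
        for (int i = 0; i < sourceLines.size(); i++) {
            for (String api : SENSITIVE) {
                if (sourceLines.get(i).contains(api)) {
                    hits.add(i + 1);  // report 1-based line numbers
                    break;
                }
            }
        }
        return hits;
    }

    public static void main(String[] args) {
        List<String> src = List.of(
            "class Demo {",
            "  void run() throws Exception {",
            "    Runtime.getRuntime().exec(\"ls\");",
            "  }",
            "}");
        System.out.println("Review lines: " + flaggedLines(src));
    }
}
```

The point is not that text matching is adequate — it obviously is not — but that even a cheap tool narrows the set of code a human auditor has to read closely.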
