river-dev mailing list archives

From Peter Firmstone <j...@zeus.net.au>
Subject Re: [Fwd: Re: Anonymity, Security - ProcessBuilder and Process]
Date Sun, 12 Jun 2011 23:08:35 GMT
Hmm, interesting.  Phoenix is currently used to activate the service, so 
it might be used to host a proxy?

So far my thoughts have been based around an executor-like interface 
called IsolatedProcess, where the current Subject can be used to elevate 
permissions in the second JVM, over a JERI connection based on streams 
and pipes for a locally isolated JVM.

A remote sacrificial drone node might also be used to run multiple JVMs 
as IsolatedProcess services using secure JERI (SSL); that's sort of 
getting into surrogate territory too, I suppose.  But to find it you'd 
need a lookup service, so there's a circularity problem with that.

Serializable Callable tasks are submitted; reflective proxies are 
returned for remote objects, or a serializable object is returned for 
non-remote objects.  The smart proxy remains in the isolated process 
JVM, replaced by a reflective proxy at the client, while any 
reflective-proxy-based services are passed straight through to the client.

Invocations on the proxy's methods would be transferred to the second 
JVM as tasks via an invocation handler.

This would be low level; the idea is that the client wouldn't be aware 
that the proxy it's dealing with is running in a separate JVM.

E.g. (generics added for clarity):

public interface IsolatedProcess {

    <T> Future<T> process(Callable<T> task) throws IOException;

}
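To make the "invisible second JVM" idea concrete, here's a minimal sketch of the invocation-handler side.  All names here (LocalIsolatedProcess, ForwardingHandler, Echo) are illustrative, and the in-process executor merely stands in for the real pipe/JERI transport to a second JVM:

```java
import java.io.IOException;
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class IsolatedProxyDemo {

    // The interface from above, with generics added for illustration.
    interface IsolatedProcess {
        <T> Future<T> process(Callable<T> task) throws IOException;
    }

    // Hypothetical in-process stand-in: a real implementation would
    // serialize the task to a second JVM over pipes/JERI.
    static class LocalIsolatedProcess implements IsolatedProcess {
        private final ExecutorService exec = Executors.newSingleThreadExecutor(r -> {
            Thread t = new Thread(r);
            t.setDaemon(true);  // don't keep the demo JVM alive
            return t;
        });

        public <T> Future<T> process(Callable<T> task) {
            return exec.submit(task);
        }
    }

    // Turns each call on the reflective proxy into a task submitted to
    // the IsolatedProcess, then blocks for the result.
    static class ForwardingHandler implements InvocationHandler {
        private final IsolatedProcess process;
        private final Object target;  // would live in the second JVM for real

        ForwardingHandler(IsolatedProcess process, Object target) {
            this.process = process;
            this.target = target;
        }

        public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
            return process.process(() -> method.invoke(target, args)).get();
        }
    }

    interface Echo { String echo(String msg); }

    public static void main(String[] args) {
        Echo real = msg -> "echo:" + msg;
        Echo proxy = (Echo) Proxy.newProxyInstance(
                Echo.class.getClassLoader(),
                new Class<?>[] { Echo.class },
                new ForwardingHandler(new LocalIsolatedProcess(), real));
        System.out.println(proxy.echo("hi"));  // prints "echo:hi"
    }
}
```

The client only ever sees the Echo interface; it can't tell whether the call ran locally or was shipped to another process.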

But I'm open to a Phoenix experiment.
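On the Process stream point raised further down the thread (streams over OS pipes can't be allowed to fill up), a minimal sketch of the kind of drainer thread involved; the class name and buffer size are illustrative:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Drains a stream on its own thread so a child JVM never blocks
// writing into a full OS pipe buffer.
public class StreamDrainer implements Runnable {
    private final InputStream in;
    private final OutputStream sink;

    public StreamDrainer(InputStream in, OutputStream sink) {
        this.in = in;
        this.sink = sink;
    }

    public void run() {
        byte[] buf = new byte[8192];
        try {
            int n;
            while ((n = in.read(buf)) != -1) {
                sink.write(buf, 0, n);
                sink.flush();
            }
        } catch (IOException e) {
            // pipe closed by the other end; nothing left to drain
        }
    }
}
```

A caller would start one drainer per stream, e.g. new Thread(new StreamDrainer(child.getInputStream(), System.out)).start() for a child Process's stdout.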

Cheers,

Peter.

Gregg Wonderly wrote:
> I just wonder if most of phoenix would not already be used.  Basically, we'd provide
> a service definition that would perform a lookup with a designated serviceID, and then we'd
> lookup that service and use it locally.  That, for example, would allow service endpoint actions
> for recovery from comms problems etc. to "just" happen, as well as allow a crashed endpoint
> JVM to recover.
>
> Gregg
>
> Sent from my iPhone
>
> On Jun 12, 2011, at 2:54 AM, Peter Firmstone <jini@zeus.net.au> wrote:
>
>
>> Tom Hobbs wrote:
>>
>>> Always communicating in a separate JVM is going to have obvious performance
>>> costs.  Do we know what they are and are they acceptable? 
>>>
>> It's hard to say at this stage, without an implementation, but it will consume more
>> resources.
>>
>> I figure a good compromise would be that each registrar proxy be responsible for its
>> own JVM and any services it provides.  The client would run from its own JVM.
>>
>> A separate JVM for remote code reduces the amount of client and platform code visible
>> to proxies.  Shared class (static) variables are not possible between client and downloaded
>> code.  This would also allow different conflicting libraries to be kept separate.
>>
>> The Isolates API would be more desirable, but not available.
>>
>> Just an experiment at this stage, time will tell...  anyone wanting to help, sing
>> out.
>>
>>
>>> Is it going to be
>>> easy to turn off for people who trust what they're downloading and don't want
>>> to pay the perf costs etc?
>>>
>> I hope so, haven't considered configuration at this stage.
>>
>> Cheers,
>>
>> Peter.
>>
>>
>>> On 11 Jun 2011 20:49, "Peter Firmstone" <jini@zeus.net.au> wrote:
>>>
>>>> Dan Creswell wrote:
>>>>
>>>>> On 8 June 2011 05:31, Peter Firmstone <jini@zeus.net.au> wrote:
>>>>>
>>>>>> Phoenix wakes (Activates) up a Service when it's required on the server
>>>>>> side.  I haven't thought of a good name for it, but unlike Phoenix, the
>>>>>> concept is to perform discovery, lookup and execute smart proxies on behalf
>>>>>> of the client JVM at the client node, although I concede you could run a
>>>>>> service from it also.  Reflective proxies would be used to make smart
>>>>>> proxies appear to the client as though they're running in the same JVM.
>>>>>>
>>>>>> Process has some peculiarities when it comes to input and output streams:
>>>>>> they cannot block, and thus require threads and buffers to ensure IO streams
>>>>>> are always drained.  Process uses streams over operating system pipes to
>>>>>> communicate between the primary JVM and subprocess JVM.
>>>>>>
>>>>>> I've been toying around with some JERI endpoints, specifically for Process
>>>>>> streams and pipes; still, I'm not sure if I should consider it a secure
>>>>>> method of communication just because it's local.  Do you think I should
>>>>>> encrypt the streams?
>>>>>>
>>>>> So you want to use pipes?
>>>>>
>>>>> The answer to whether you want to encrypt the streams or not is down
>>>>> to what kind of threat you're trying to mitigate. And the threats
>>>>> possible are determined by what solution you adopt. Pipes are
>>>>> basically shared memory, what kind of attacks are you worrying about
>>>>> in that scenario?
>>>>>
>>>> I guess the attacker would be someone who already has user access to a
>>>> system; if that's the case, the game's probably lost for most systems.
>>>>
>>>> I'm trying to consider the semantics of such a connection with regard to
>>>> InvocationConstraints:
>>>>
>>>> Integrity,
>>>> Confidentiality,
>>>> ServerAuthentication,
>>>> ClientAuthentication.
>>>>
>>>> It really doesn't support any of the above constraints, but we're not
>>>> going to use it for discovery etc.
>>>>
>>>> The intended purpose is to isolate downloaded code in a separate jvm and
>>>> communicate with it using a reflective proxy.
>>>>
>>>>>> Cheers,
>>>>>>
>>>>>> Peter.
>

