hadoop-general mailing list archives

From: Doug Cutting <cutt...@apache.org>
Subject: Re: HTTP transport?
Date: Tue, 29 Sep 2009 23:35:14 GMT
Raghu Angadi wrote:
> Does this mean the current Avro RPC transport (an improved version of Hadoop 
> RPC) can still exist as long as it is supported by developers?

Sure, folks can create new transports for Avro.  Hadoop Common, for 
example, already contains some code that tunnels Avro RPCs inside Hadoop RPCs.
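Roughly, a new transport is just a Transceiver implementation.  A bare 
skeleton looks something like this (the class name and the tunneling 
details are placeholders, not the actual Hadoop Common code):

import java.io.IOException;
import java.nio.ByteBuffer;
import java.util.List;
import org.apache.avro.ipc.Transceiver;

public class TunnelTransceiver extends Transceiver {

  public String getRemoteName() {
    return "hadoop-rpc-tunnel";            // identifies the remote peer
  }

  public List<ByteBuffer> readBuffers() throws IOException {
    // Read one framed Avro message from the underlying channel.
    throw new UnsupportedOperationException("sketch only");
  }

  public void writeBuffers(List<ByteBuffer> buffers) throws IOException {
    // Frame the Avro message and send it over the underlying channel.
    throw new UnsupportedOperationException("sketch only");
  }
}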

> Where does security lie: Avro or the transport layer?

That's not yet clear.  If we settle on HTTP as the preferred transport, 
then the transport should probably handle security, since many security 
standards already exist for HTTP and many HTTP servers and clients 
already support adding new security mechanisms.  I'd rather not 
re-invent all this in Avro if we can avoid it.

> If it is part of the transport: how does an app get hold of required 
> information (e.g., user identity)?

Perhaps the way we currently do this in the RPC server, with thread 
locals?  For example, the Avro RPC servlet could have a static method 
that returns the value of HttpServletRequest#getUserPrincipal().
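Something like this, roughly (the servlet class and method names are made 
up for illustration; only HttpServletRequest#getUserPrincipal() is real API):

import java.security.Principal;
import javax.servlet.http.HttpServletRequest;

// Hypothetical wrapper around the Avro responder servlet; illustrative only.
public class AvroRpcServlet {

  // Holds the current request for the duration of one RPC on this thread.
  private static final ThreadLocal<HttpServletRequest> CURRENT =
      new ThreadLocal<HttpServletRequest>();

  // Static accessor a service implementation could call mid-request.
  public static Principal getCallerPrincipal() {
    HttpServletRequest req = CURRENT.get();
    return req == null ? null : req.getUserPrincipal();
  }

  // Invoked around each request by the servlet's service() method.
  protected void handle(HttpServletRequest request) {
    CURRENT.set(request);
    try {
      // ... decode the Avro request and dispatch it to the Responder ...
    } finally {
      CURRENT.remove();        // never leak the request across pooled threads
    }
  }
}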

> Maybe 'transceiver' can have an interface that can transfer security 
> information between the transport layer and Avro.

Yes, we could add methods like getPrincipal() to Transceiver, but we'd 
still probably need to use a thread local accessed by a static method to 
get the Transceiver if we continue to use reflection for server 
implementations.  Or we could stray from reflection and make services 
implement an interface through which we can pass them things like the 
principal.
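For example (PrincipalAware is made up, not existing Avro API, and a real 
server would also have to worry about setting this per call rather than 
per instance):

import java.security.Principal;

// Hypothetical callback interface a service implementation could implement.
interface PrincipalAware {
  void setCallerPrincipal(Principal principal);
}

// The server would call setCallerPrincipal() before dispatching each request.
class MyServiceImpl implements PrincipalAware {
  private Principal caller;

  public void setCallerPrincipal(Principal principal) {
    this.caller = principal;
  }

  public String whoAmI() {
    return caller == null ? "anonymous" : caller.getName();
  }
}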

Doug
