hbase-user mailing list archives

From "Gibbon, Robert, VF-Group" <Robert.Gib...@vodafone.com>
Subject RE: Basic question about using C# with Hadoop filesystems
Date Sun, 10 Jan 2010 20:58:26 GMT

http://code.google.com/p/jar2ikvmc/

This tool, referenced from IKVM (which is in turn referenced from the Mono project), claims to
resolve dependencies and convert Java bytecode class files collected in JARs into Mono MSIL DLLs.
You might be able to use it to build a native MSIL .NET port of the HBase client.
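
For illustration only, here is a rough and untested sketch of what driving an ikvmc-converted
HBase client from C# might look like. The jar2ikvmc/ikvmc steps, the DLL name, and the 0.20-era
Java client classes (HTable, Put, Get, Bytes) surfacing as .NET types are all assumptions about
how the conversion would come out, not something I have verified:

// Untested sketch. Assumes you have already converted the HBase client JAR plus its
// dependencies, roughly along these lines:
//   jar2ikvmc run over the HBase/Hadoop jars      (claims to resolve inter-JAR dependencies)
//   ikvmc -target:library -out:hbase-client.dll <hbase and hadoop jars>
// and that the C# project references hbase-client.dll and the IKVM runtime DLLs.
// The classes below are the HBase 0.20-era Java client API as IKVM would expose it.

using org.apache.hadoop.hbase;          // HBaseConfiguration
using org.apache.hadoop.hbase.client;   // HTable, Put, Get, Result
using org.apache.hadoop.hbase.util;     // Bytes

class IkvmHBaseSketch
{
    static void Main()
    {
        // Picks up hbase-site.xml from the classpath IKVM emulates
        HBaseConfiguration conf = new HBaseConfiguration();
        HTable table = new HTable(conf, "test_table");

        // Write one cell, then read it back
        Put put = new Put(Bytes.toBytes("row1"));
        put.add(Bytes.toBytes("cf"), Bytes.toBytes("col"), Bytes.toBytes("value"));
        table.put(put);

        Result result = table.get(new Get(Bytes.toBytes("row1")));
        byte[] value = result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("col"));
        System.Console.WriteLine(Bytes.toString(value));
    }
}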

HTH
R

-----Original Message-----
From: Gibbon, Robert, VF-Group
Sent: Sun 1/10/2010 9:30 PM
To: hbase-user@hadoop.apache.org
Subject: RE: Basic question about using C# with Hadoop filesystems
 
Options for you:

http://www.markhneedham.com/blog/2008/08/29/c-thrift-examples/

Thrift is one of the main ways into HBase and HDFS. The link above is a blog post with C# Thrift examples.
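
Very roughly, and untested: assuming you generate C# bindings from HBase's Hbase.thrift
(thrift --gen csharp Hbase.thrift), reference Thrift.dll, and start the HBase Thrift gateway
on its default port 9090, a minimal client could look something like the sketch below.
Generated class and member names can differ between Thrift and HBase versions.

using System;
using System.Collections.Generic;
using System.Text;
using Thrift.Protocol;
using Thrift.Transport;
// ...plus a using directive for whatever C# namespace the generated Hbase code declares

class ThriftHBaseSketch
{
    static void Main()
    {
        // Plain socket transport to the HBase Thrift gateway
        TTransport transport = new TSocket("localhost", 9090);
        TProtocol protocol = new TBinaryProtocol(transport);
        Hbase.Client client = new Hbase.Client(protocol);   // generated from Hbase.thrift

        transport.Open();
        try
        {
            // List the tables the gateway can see
            foreach (byte[] name in client.getTableNames())
                Console.WriteLine(Encoding.UTF8.GetString(name));

            // Fetch every cell of one row from a hypothetical table "test_table"
            List<TRowResult> rows = client.getRow(
                Encoding.UTF8.GetBytes("test_table"),
                Encoding.UTF8.GetBytes("row1"));
            foreach (TRowResult row in rows)
                foreach (KeyValuePair<byte[], TCell> cell in row.Columns)
                    Console.WriteLine("{0} = {1}",
                        Encoding.UTF8.GetString(cell.Key),
                        Encoding.UTF8.GetString(cell.Value.Value));
        }
        finally
        {
            transport.Close();
        }
    }
}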

http://www.mono-project.com/using/relnotes/1.0-beta1.html

Mono has a bytecode-to-MSIL translator built in. It's not likely to give high performance,
though, and I doubt it will work, TBH.

http://caffeine.berlios.de

Caffeine provides Mono-Java interop via JNI, with no dynamic bytecode-to-MSIL translation. I have
never used it, and the project looks quite dead, but it might still do what you want.



-----Original Message-----
From: Andrew Purtell [mailto:apurtell@apache.org]
Sent: Sun 1/10/2010 8:45 PM
To: hbase-user@hadoop.apache.org
Subject: Re: Basic question about using C# with Hadoop filesystems
 
Just to clarify:

> On Windows especially context switching during I/O like that has a 
> high penalty.

should read

> Context switching during I/O like that has a penalty.

I know we are talking about Mono on Linux here. After all, the subject
is FUSE. I forgot to fix that statement before hitting 'send'. :-)



----- Original Message ----
> From: Andrew Purtell <apurtell@apache.org>
> To: hbase-user@hadoop.apache.org
> Sent: Sun, January 10, 2010 11:30:42 AM
> Subject: Re: Basic question about using C# with Hadoop filesystems
> 
> Bear in mind that hdfs-fuse has something like a 30% performance impact
> when compared with direct access via the Java API. The data path is
> something like:
> 
>     your app -> kernel -> libfuse -> JVM -> kernel -> HDFS
> 
>     HDFS -> kernel-> JVM -> libfuse -> kernel -> your app
> 
> On Windows especially context switching during I/O like that has a 
> high penalty. Maybe it would be better to bind the C libhdfs API
> directly via a C# wrapper (see http://wiki.apache.org/hadoop/LibHDFS).
> But, at that point, you have pulled the Java Virtual Machine into the
> address space of your process and are bridging between Java land and
> C# land over the JNI and the C# equivalent. So, at this point, why not
> just use Java instead of C#? Or, just use C and limit the damage to
> only one native-to-managed interface instead of two?
> 
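
For what it's worth, a rough, untested sketch of what binding libhdfs from C# via P/Invoke could
look like. Signatures follow the hdfs.h shipped with Hadoop 0.20 (tPort is a ushort, tSize an int),
the opaque hdfsFS/hdfsFile handles are marshaled as IntPtr, and CLASSPATH must already point at the
Hadoop jars and conf directory when the process starts, since libhdfs pulls the JVM into your
address space exactly as described above:

using System;
using System.Runtime.InteropServices;
using System.Text;

static class LibHdfsSketch
{
    // "hdfs" resolves to libhdfs.so on Linux under Mono
    [DllImport("hdfs")] static extern IntPtr hdfsConnect(string host, ushort port);
    [DllImport("hdfs")] static extern int    hdfsDisconnect(IntPtr fs);
    [DllImport("hdfs")] static extern IntPtr hdfsOpenFile(IntPtr fs, string path,
        int flags, int bufferSize, short replication, int blocksize);
    [DllImport("hdfs")] static extern int    hdfsWrite(IntPtr fs, IntPtr file,
        byte[] buffer, int length);
    [DllImport("hdfs")] static extern int    hdfsCloseFile(IntPtr fs, IntPtr file);

    const int O_WRONLY = 1;   // open-for-write flag, as in fcntl.h on Linux

    static void Main()
    {
        // "default", 0 tells libhdfs to use fs.default.name from the Hadoop config
        IntPtr fs = hdfsConnect("default", 0);
        if (fs == IntPtr.Zero) throw new Exception("hdfsConnect failed");

        IntPtr file = hdfsOpenFile(fs, "/tmp/csharp-test.txt", O_WRONLY, 0, 0, 0);
        if (file == IntPtr.Zero) throw new Exception("hdfsOpenFile failed");

        byte[] data = Encoding.UTF8.GetBytes("hello from C# over libhdfs\n");
        hdfsWrite(fs, file, data, data.Length);

        hdfsCloseFile(fs, file);
        hdfsDisconnect(fs);
    }
}

Whether that is actually nicer than writing the whole thing in Java or C is, of course, the
point being made above.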
> The situation will change somewhat when/if all HDFS RPC is moved to
> some RPC and serialization scheme which is truly language independent,
> i.e. Avro. I have no idea when or if that will happen. Even if that
> happens, as Ryan said before, the HDFS client is fat. Just talking
> the RPC gets you maybe 25% of the way toward a functional HDFS
> client. 
> 
> The bottom line is the Hadoop software ecosystem has a strong Java
> affinity. 
> 
>    - Andy
> 
> 
> 
> ----- Original Message ----
> > From: Jean-Daniel Cryans 
> > To: hbase-user@hadoop.apache.org
> > Sent: Sun, January 10, 2010 8:57:32 AM
> > Subject: Re: Basic question about using C# with Hadoop filesystems
> > 
> > http://code.google.com/p/hdfs-fuse/
> > 
> > On Sun, Jan 10, 2010 at 7:36 AM, Aram Mkhitaryan wrote:
> > > ah, sorry, forgot to mention, it's in hdfs-user mailing list
> > > hdfs-user@hadoop.apache.org



      



