jackrabbit-users mailing list archives

From Alexander Klimetschek <aklim...@day.com>
Subject Re: Re: Moving to DFS System ..
Date Tue, 10 Feb 2009 14:01:18 GMT
On Tue, Feb 10, 2009 at 2:44 PM, imadhusudhanan
<imadhusudhanan@zohocorp.com> wrote:
> I use the Apache Hadoop project as DFS. Have anyone dealt with the similar JR to DFS
> conversion.. ?? pls explain ...

Still, what do you mean by DFS? Distributed File System? How do you
"use" it (i.e. Apache Hadoop) in your client applications, and what
interface do you use: direct filesystem access, WebDAV, the Hadoop API, etc.?

Jackrabbit mainly provides the JCR API as its interface, but it also
provides a stable WebDAV file-system-like mapping (only
nt:file/nt:folder nodes in the repository) that can be mounted as a
file system. The backend part of Jackrabbit (persistence managers,
data store) is optimized for performance and pure JCR usage; it is an
integral part of Jackrabbit's internal architecture. If you want to
connect existing data sources via JCR, the Jackrabbit SPI interface is
intended to make development of such connectors/adapters simpler.
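To illustrate that file-system mapping, here is a hypothetical sketch (not Jackrabbit code, and not from the original mail): each intermediate path segment corresponds to an nt:folder node, the final segment to an nt:file node, and the file's payload (binary data, MIME type, last-modified date) lives in a mandatory jcr:content child of type nt:resource. In real code you would create these nodes through the JCR API (Node.addNode, Node.setProperty) and then mount the result over Jackrabbit's WebDAV server.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Hypothetical sketch of the node structure behind Jackrabbit's
 * WebDAV file-system mapping: folders become nt:folder nodes and a
 * file becomes an nt:file node with an nt:resource child named
 * jcr:content. The class and method names are illustrative only.
 */
public class NtFileMapping {

    /** Returns "name: nodetype" entries for every node a path implies. */
    static List<String> nodesFor(String path) {
        List<String> nodes = new ArrayList<String>();
        // Strip leading/trailing slashes, then walk the segments.
        String[] segments = path.replaceAll("^/|/$", "").split("/");
        for (int i = 0; i < segments.length; i++) {
            boolean last = (i == segments.length - 1);
            nodes.add(segments[i] + ": " + (last ? "nt:file" : "nt:folder"));
        }
        // The file's content lives in a mandatory jcr:content child node.
        nodes.add("jcr:content: nt:resource");
        return nodes;
    }

    public static void main(String[] args) {
        for (String node : nodesFor("/docs/reports/2009/summary.txt")) {
            System.out.println(node);
        }
        // Prints:
        // docs: nt:folder
        // reports: nt:folder
        // 2009: nt:folder
        // summary.txt: nt:file
        // jcr:content: nt:resource
    }
}
```

Because the mapping only covers nt:file/nt:folder content, repositories that use richer node types are better accessed through the full JCR API rather than the WebDAV view.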


Alexander Klimetschek
