hadoop-hdfs-dev mailing list archives

From Vikas Ashok Patil <vikas...@buffalo.edu>
Subject Re: Integrating Lustre and HDFS
Date Sat, 12 Jun 2010 03:35:33 GMT
Hello Allen,

Thanks for the reply.

You are right that we are trying to run two distributed file systems. The
reason is that certain restrictions in our cluster environment prevent us
from including the local file system in Lustre. Could you tell me how I
would make MapReduce access more than one file system? At least the configs
don't seem to allow it.

Vikas A Patil

On Sat, Jun 12, 2010 at 12:32 AM, Allen Wittenauer <awittenauer@linkedin.com> wrote:

> On Jun 10, 2010, at 8:27 PM, Vikas Ashok Patil wrote:
> > Thanks for the replies.
> >
> > If I have fs.default.name = file://my_lustre_mount_point , then only the
> > lustre filesystem will be used. I would like to have something like
> >
> > fs.default.name=file://my_lustre_mount_point , hdfs://localhost:9123
> >
> > so that both local filesystem and lustre are in use.
> >
> > Kindly correct me if I am missing something here.
> I guess we're all confused as to your use case.  Why do you want to run two
> distributed file systems on the same nodes?  Why can't you use Lustre for
> all your needs?
> As to fs.default.name, you can only have one.  [That's why it is a
> default. *smile*]  If you want to access more than one file system from
> within MapReduce, you'll need to specify it explicitly.
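
For readers of the archive: Allen's "specify it explicitly" amounts to keeping a single default filesystem in the configuration and qualifying every path that lives elsewhere with its full URI. A sketch only, reusing the placeholder port and mount point from this thread:

```xml
<!-- core-site.xml (sketch): only one default filesystem is possible -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:9123</value>
</property>
```

A job can then mix filesystems by fully qualifying its paths, e.g. an input such as file:///my_lustre_mount_point/input (Lustre via the local mount) alongside an output such as hdfs://localhost:9123/output (both paths hypothetical); any unqualified path falls back to fs.default.name.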
