hadoop-common-user mailing list archives

From "Tom White" <tom.e.wh...@gmail.com>
Subject Re: Namenode Exceptions with S3
Date Thu, 17 Jul 2008 18:52:36 GMT
On Thu, Jul 17, 2008 at 6:16 PM, Doug Cutting <cutting@apache.org> wrote:
> Can't one work around this by using a different configuration on the client
> than on the namenodes and datanodes?  The client should be able to set
> fs.default.name to an s3: URI, while the namenode and datanode must have it
> set to an hdfs: URI, no?

Yes, that's a good solution.
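
For example, the client's hadoop-site.xml could override the default
filesystem with an S3 URI (the bucket name, credentials, and hostname
below are placeholders; this sketch assumes the standard fs.s3.*
credential properties):

  <property>
    <name>fs.default.name</name>
    <value>s3://example-bucket</value>
  </property>
  <property>
    <name>fs.s3.awsAccessKeyId</name>
    <value>YOUR_ACCESS_KEY</value>
  </property>
  <property>
    <name>fs.s3.awsSecretAccessKey</name>
    <value>YOUR_SECRET_KEY</value>
  </property>

while the namenode and datanode configurations keep something like:

  <property>
    <name>fs.default.name</name>
    <value>hdfs://namenode.example.com:9000</value>
  </property>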

>> It might be less confusing if the HDFS daemons didn't use
>> fs.default.name to define the namenode host and port. Just like
>> mapred.job.tracker defines the host and port for the jobtracker,
>> dfs.namenode.address (or similar) could define the namenode. Would
>> this be a good change to make?
>
> Probably.  For back-compatibility we could leave it empty by default,
> deferring to fs.default.name; dfs.namenode.address would be used only
> if folks specify a non-empty value.

I've opened https://issues.apache.org/jira/browse/HADOOP-3782 for this.
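
For illustration only (the final property name and semantics are up to
the JIRA discussion), a daemon-side configuration under the proposed
scheme might look like:

  <property>
    <name>dfs.namenode.address</name>
    <value>hdfs://namenode.example.com:9000</value>
    <description>Address of the namenode used by the HDFS daemons.
    Left empty by default, in which case the value of fs.default.name
    is used, preserving back-compatibility.</description>
  </property>

With that in place, a client could set fs.default.name to an s3: URI
without affecting the daemons at all.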

Tom
