ambari-dev mailing list archives

From "Mahadev konar (JIRA)" <>
Subject [jira] [Updated] (AMBARI-8244) Ambari HDP 2.0.6+ stacks do not work with fs.defaultFS not being hdfs
Date Mon, 29 Dec 2014 19:49:14 GMT


Mahadev konar updated AMBARI-8244:
    Assignee: Ivan Mitic

> Ambari HDP 2.0.6+ stacks do not work with fs.defaultFS not being hdfs
> ---------------------------------------------------------------------
>                 Key: AMBARI-8244
>                 URL:
>             Project: Ambari
>          Issue Type: Bug
>          Components: stacks
>    Affects Versions: 2.0.0
>            Reporter: Ivan Mitic
>            Assignee: Ivan Mitic
>              Labels: HDP
>             Fix For: 2.0.0
>         Attachments: AMBARI-8244.2.patch, AMBARI-8244.patch
> Right now, changing the default file system does not work with the HDP 2.0.6+ stacks.
Given that it is common to run HDP against some other file system in the cloud, adding
support for this will be super useful. One alternative is to consider a separate stack definition
for other file systems; however, given that I noticed just 2 minor bugs that need fixing to support
this, I would rather extend the existing code.
> Bugs:
>  - One issue is in the Nagios install scripts, where it is assumed that {{fs.defaultFS}}
contains the namenode port number.
>  - Another issue is in the HDFS install scripts, where the {{hadoop dfsadmin}} command works
only when hdfs is the default file system.
> The fix in both places is to extract the namenode address/port from {{dfs.namenode.rpc-address}}
if it is defined, and to use it instead of relying on {{fs.defaultFS}}.
> I haven't included any tests yet (this is my first Ambari patch and I'm not sure what is appropriate,
so please comment).
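A minimal sketch of the fallback logic the patch describes, assuming hypothetical config dicts and the conventional NameNode RPC port 8020 (this is not the actual Ambari code, just an illustration of preferring {{dfs.namenode.rpc-address}} over {{fs.defaultFS}}):

```python
from urllib.parse import urlparse

def namenode_host_port(hdfs_site, core_site, default_port=8020):
    """Return the (host, port) of the NameNode.

    Prefer dfs.namenode.rpc-address when it is defined; otherwise fall
    back to parsing fs.defaultFS, which may omit the port (or may not
    even be an hdfs:// URI when another default file system is set).
    """
    rpc_address = hdfs_site.get("dfs.namenode.rpc-address")
    if rpc_address:
        host, _, port = rpc_address.partition(":")
        return host, int(port) if port else default_port
    parsed = urlparse(core_site.get("fs.defaultFS", ""))
    return parsed.hostname, parsed.port or default_port
```

With this shape, a cluster whose {{fs.defaultFS}} points at a cloud file system (e.g. a wasb:// URI) still resolves the NameNode correctly as long as {{dfs.namenode.rpc-address}} is set.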

This message was sent by Atlassian JIRA
