ambari-dev mailing list archives

From "Hadoop QA (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (AMBARI-8244) Ambari HDP 2.0.6+ stacks do not work with fs.defaultFS not being hdfs
Date Tue, 11 Nov 2014 20:51:34 GMT

    [ https://issues.apache.org/jira/browse/AMBARI-8244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14207034#comment-14207034 ]

Hadoop QA commented on AMBARI-8244:
-----------------------------------

{color:red}-1 overall{color}.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12680861/AMBARI-8244.2.patch
  against trunk revision .

    {color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/Ambari-trunk-test-patch/606//console

This message is automatically generated.

> Ambari HDP 2.0.6+ stacks do not work with fs.defaultFS not being hdfs
> ---------------------------------------------------------------------
>
>                 Key: AMBARI-8244
>                 URL: https://issues.apache.org/jira/browse/AMBARI-8244
>             Project: Ambari
>          Issue Type: Bug
>          Components: stacks
>    Affects Versions: 2.0.0
>            Reporter: Ivan Mitic
>              Labels: HDP
>         Attachments: AMBARI-8244.2.patch, AMBARI-8244.patch
>
>
> Right now, changing the default file system does not work with the HDP 2.0.6+ stacks.
> Given that it may be common to run HDP against some other file system in the cloud, adding
> support for this would be very useful. One alternative is a separate stack definition for
> other file systems; however, since I noticed only two minor bugs blocking this, I would
> rather extend the existing code.
> Bugs:
>  - One issue is in the Nagios install scripts, which assume that fs.defaultFS includes
> the NameNode port number.
>  - Another issue is in the HDFS install scripts, where the {{hadoop dfsadmin}} command
> only works when HDFS is the default file system.
> The fix for both places is to extract the NameNode address/port from
> {{dfs.namenode.rpc-address}} when it is defined, and use it instead of relying on
> {{fs.defaultFS}}.
> I haven't included any tests yet (this is my first Ambari patch and I'm not sure what is
> appropriate, so please comment).
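The fix described in the quoted issue can be sketched as a small Python helper (a minimal sketch only; the function name, parameter names, and default port are my own for illustration and do not come from the actual patch):

```python
from urllib.parse import urlparse

def namenode_host_port(core_site, hdfs_site, default_port=8020):
    """Return (host, port) of the NameNode RPC endpoint.

    Prefer dfs.namenode.rpc-address when set, since fs.defaultFS may
    point at a non-HDFS file system (e.g. a cloud object store) or
    omit the port number entirely.
    """
    rpc = hdfs_site.get("dfs.namenode.rpc-address")
    if rpc:
        host, _, port = rpc.partition(":")
        return host, int(port) if port else default_port
    # Fall back to fs.defaultFS, e.g. "hdfs://nn.example.com:8020"
    parsed = urlparse(core_site.get("fs.defaultFS", ""))
    return parsed.hostname, parsed.port or default_port
```

With this shape, a cluster whose fs.defaultFS points at a cloud file system still resolves the NameNode correctly as long as dfs.namenode.rpc-address is set in hdfs-site.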



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
