ambari-user mailing list archives

From Siddharth Wagle <swa...@hortonworks.com>
Subject Re: Finding valid properties for config via Ambari 1.5
Date Tue, 08 Apr 2014 00:34:26 GMT
Hi Chris,

The purpose of this API is to discover which default configuration
properties are needed before deploying the cluster.
Such properties are captured in the stack definition for each service
(/ambari-server/src/main/resources/stacks/HDP/2.0.6/services/*/configuration)
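
As an illustration, the property-to-type lookup Chris is after could be built from this API's response. The payload excerpt and field nesting below are assumptions sketched from the `fields=configurations/StackConfigurations/type` query shown later in the thread, not output captured from a live server:

```python
import json

# Hypothetical excerpt of a stacks API response. The real payload from
# /api/v1/stacks/HDP/versions/2.0.6/stackServices/?fields=configurations/StackConfigurations/type
# is much larger; only the nesting relevant to the requested fields is shown.
sample = json.loads("""
{
  "items": [
    {
      "configurations": [
        {"StackConfigurations": {"property_name": "fs.defaultFS", "type": "core-site.xml"}},
        {"StackConfigurations": {"property_name": "hbase.rootdir", "type": "hbase-site.xml"}}
      ]
    }
  ]
}
""")

def property_types(payload):
    """Map each property name to the configuration type it belongs to."""
    mapping = {}
    for service in payload["items"]:
        for conf in service.get("configurations", []):
            sc = conf["StackConfigurations"]
            mapping[sc["property_name"]] = sc["type"]
    return mapping

print(property_types(sample))
```

Note that, per the discussion below, properties with empty default values will not appear in this response at all, so a map built this way is necessarily incomplete.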

The properties that you have listed do not have a value because the
stacks API currently does not return properties with empty values.
The values for such properties are defined by the user. For example,
nagios_web_password cannot have a default value in the stack definition.
However, it needs to be in the Nagios global configuration because
Ambari monitors staleness of properties and suggests which components
require a restart by comparing the actual configuration tag reported by
the agent with the latest configuration expected on the host.

- How does this list get generated? Is it controlled by something defined
in the stack definition?
Currently the web UI adds all the missing values to the
configurations; that will change with the fix for AMBARI-4921.
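
For automation like Chris describes, the same gap can be bridged by merging user-supplied values over the stack defaults before posting a configuration. This is a hypothetical sketch: the property names come from this thread, but the default values and the merge strategy are assumptions for illustration, not documented Ambari behavior:

```python
# Defaults as a stack definition might ship them. The value of
# java64_home here is a hypothetical placeholder.
stack_defaults = {
    "user_group": "hadoop",
    "java64_home": "/usr/jdk64",
}

# Values the stack cannot know and the API omits; these must come from
# the operator. Passwords here are placeholders, not real credentials.
user_supplied = {
    "nagios_web_password": "s3cret",
    "hive_metastore_user_passwd": "s3cret2",
}

# User-supplied values win on conflict, and properties the stack omits
# entirely are added alongside the defaults.
desired_global = {**stack_defaults, **user_supplied}
print(sorted(desired_global))
```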

Best Regards,
Sid


On Mon, Apr 7, 2014 at 4:53 PM, Chris Mildebrandt <chris@woodenrhino.com> wrote:

> Sorry, I guess I was looking for a way to query the properties through a
> REST interface before things are really deployed. I am attempting to
> automate the configuration entirely through the Ambari REST API.
>
> In my example from the first note, that API into the stack provides almost
> all of the configuration parameters. I have the following questions about
> that:
>
> - What's the purpose for this API?
> - Is there a reason the list of parameters is not complete?
> - How does this list get generated? Is it controlled by something defined
> in the stack definition?
> - If it's not in the stack, would it make sense to go in that direction?
>
> Does that make sense?
>
> Thanks,
> -Chris
>
>
> On Mon, Apr 7, 2014 at 2:55 PM, Siddharth Wagle <swagle@hortonworks.com> wrote:
>
>> Hi Chris,
>>
>> If you look at global.xml in the stack definition, you should be able to
>> find most, if not all, of the above properties.
>> These are properties that are required to configure the cluster but do
>> not belong to a stack component; the web UI sets the appropriate values at
>> runtime.
>> The configuration type is "global".
>>
>> I am currently working on a Jira to refactor properties with empty
>> values, which might be of help:
>> https://issues.apache.org/jira/browse/AMBARI-4921
>> Should have a patch for trunk in a couple of days.
>>
>> -Sid
>>
>>
>>
>> On Mon, Apr 7, 2014 at 2:26 PM, Erin Boyd <eboyd@redhat.com> wrote:
>>
>>> Try grepping in /etc/conf.
>>> Erin
>>>
>>>
>>>
>>>
>>>
>>> -----Original Message-----
>>> From: Chris Mildebrandt [chris@woodenrhino.com]
>>> Received: Monday, 07 Apr 2014, 1:23PM
>>> To: ambari-user [ambari-user@incubator.apache.org]
>>> Subject: Finding valid properties for config via Ambari 1.5
>>>
>>>
>>> Hello,
>>>
>>> I'd like to take a property name and match it to a configuration type
>>> (core-site, global-site, etc). I have found I can get a list of properties
>>> with their type here:
>>>
>>>
>>> http://host:8080/api/v1/stacks/HDP/versions/2.0.6/stackServices/?fields=configurations/StackConfigurations/type
>>>
>>> However, I also noticed there are some missing values:
>>>
>>> hive_database
>>> templeton.hive.properties
>>> hive_hostname
>>> hadoop.proxyuser.hive.groups
>>> hive_jdbc_connection_url
>>> hadoop_conf_dir
>>> hbase_tmp_dir
>>> yarn.scheduler.capacity.root.default.acl_administer_queue
>>> hive_jdbc_driver
>>> hadoop.proxyuser.hive.hosts
>>> oozie.service.HadoopAccessorService.jobTracker.whitelist
>>> hive_metastore_user_passwd
>>> fs_checkpoint_size
>>> apache_artifacts_download_url
>>> oozie.service.HadoopAccessorService.nameNode.whitelist
>>> dfs_exclude
>>> hive_database_type
>>> run_dir
>>> hadoop.proxyuser.oozie.groups
>>> smokeuser
>>> hadoop.proxyuser.hcat.hosts
>>> nagios_contact
>>> mapreduce.cluster.local.dir
>>> hregion_memstoreflushsize
>>> oozie_jdbc_connection_url
>>> hcat_conf_dir
>>> oozie_database
>>> yarn.nodemanager.aux-services.mapreduce.shuffle.class
>>> hadoop.proxyuser.oozie.hosts
>>> yarn.scheduler.capacity.resource-calculator
>>> oozie_jdbc_driver
>>> oozie_hostname
>>> hadoop.proxyuser.hcat.groups
>>> dfs.block.local-path-access.user
>>> java64_home
>>> gpl_artifacts_download_url
>>> oozie_database_type
>>> oozie_metastore_user_passwd
>>> user_group
>>> nagios_web_password
>>> hive_database_name
>>>
>>> Is there a way to programmatically know where these parameters should
>>> live, or that they even exist?
>>>
>>> Thanks,
>>> -Chris
>>>
>>>
>>
>> CONFIDENTIALITY NOTICE
>> NOTICE: This message is intended for the use of the individual or entity
>> to which it is addressed and may contain information that is confidential,
>> privileged and exempt from disclosure under applicable law. If the reader
>> of this message is not the intended recipient, you are hereby notified that
>> any printing, copying, dissemination, distribution, disclosure or
>> forwarding of this communication is strictly prohibited. If you have
>> received this communication in error, please contact the sender immediately
>> and delete it from your system. Thank You.
>
>
>
