ambari-user mailing list archives

From: philippe lanckvrind <lanckvrind.phili...@gmail.com>
Subject: Re: Ambari 2.1 / HDP 2.3 & dfs.http.policy = HTTPS_ONLY issue
Date: Mon, 10 Aug 2015 11:40:06 GMT
Dear all,

I've found the cause of the error I was facing with HTTPS_ONLY.

Actually, it's the class *URLStreamProvider.java* that throws the error:
https://github.com/apache/ambari/blob/c74443d9fb593bd0fecc3d64f92b334f7687f6b1/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/URLStreamProvider.java#L283

It happens because the Configuration object was not correctly set in
*ComponentSSLConfiguration.java*:

   - configuration.getTruststorePath(),
   - configuration.getTruststorePassword(),
   - configuration.getTruststoreType());

I found that these variables needed to be set in the file
/etc/ambari-server/conf/ambari.properties:

   - ssl.trustStore.path=/etc/hadoop/conf
   - ssl.trustStore.password=MyTruststorePassword
   - ssl.trustStore.type=jks
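
As a quick sanity check (the truststore file name and password below are
illustrative placeholders, adjust them to whatever your ambari.properties
points at), you can list the truststore contents with keytool to confirm the
NameNode certificate was imported, then restart the server so it re-reads
ambari.properties:

    # placeholder path/password -- match them to the ssl.trustStore.* values above
    keytool -list -keystore /etc/hadoop/conf/truststore.jks -storepass MyTruststorePassword
    ambari-server restart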


After setting dfs.http.policy = HTTPS_ONLY in hdfs-site.xml and restarting the
ambari-server, I was able to restart the HDFS components.
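
For completeness, this is the property as it goes into the standard Hadoop
configuration XML in hdfs-site.xml:

    <property>
      <name>dfs.http.policy</name>
      <value>HTTPS_ONLY</value>
    </property>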

However, I still had the curl error on the namenode:

   - checked_call['curl -sS -L -w '%{http_code}' -X GET 'http://*.*.*.*.*:50070/webhdfs/v1/tmp?op=GETFILESTATUS&user.name=hdfs'']
     {'logoutput': None, 'user': 'hdfs', 'stderr': -1, 'quiet': False}
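
With HTTPS_ONLY in force, the same WebHDFS request has to go over HTTPS
instead; for comparison, a manual check would look roughly like this
(assuming the default dfs.namenode.https-address port 50470; <namenode> is a
placeholder, and -k skips certificate verification for testing only):

    curl -sS -L -k -w '%{http_code}' -X GET \
      'https://<namenode>:50470/webhdfs/v1/tmp?op=GETFILESTATUS&user.name=hdfs'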


I found the issue after checking the script *hdfs_resource.py*:
https://github.com/apache/ambari/blob/0144f793c5dfc1fac54a8bcb39e787b605e676fe/ambari-common/src/main/python/resource_management/libraries/providers/hdfs_resource.py#L133
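
In rough terms, the provider picks the protocol from the legacy
dfs.https.enable flag rather than from dfs.http.policy, so with only the new
property set it still builds an http:// URL. A simplified sketch of that
logic (not the actual Ambari code):

    # Simplified sketch, not the real hdfs_resource.py code: protocol
    # selection keys off the legacy dfs.https.enable flag.
    def namenode_base_url(hdfs_site):
        https_enabled = str(hdfs_site.get('dfs.https.enable', 'false')).lower() == 'true'
        if https_enabled:
            return 'https://' + hdfs_site['dfs.namenode.https-address']
        return 'http://' + hdfs_site['dfs.namenode.http-address']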

The dfs.https.enable flag it relies on is actually deprecated in Hadoop 2.7.1:
https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/SecureMode.html

This can be worked around by adding *dfs.https.enable=true* to
hdfs-site.xml. Now the curl queries are correct.
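
Concretely, that means keeping the legacy flag in hdfs-site.xml alongside
dfs.http.policy:

    <property>
      <name>dfs.https.enable</name>
      <value>true</value>
    </property>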

I hope this will help you improve the API.

Best,

Lanckvrind Philippe.

2015-07-31 19:28 GMT+02:00 Alejandro Fernandez <afernandez@hortonworks.com>:

> Hi Philippe, Ambari is probably trying to access something like NameNode
> JMX.
> You can start the debugging tools on your web browser to figure out what
> is failing. For now, I suggest setting the property to "HTTP_AND_HTTPS"
> until you can get the configs pushed out to all of the HDFS hosts, and
> ensure that HTTPS is indeed working with your certificate. After that, you
> can change the property to "HTTPS_ONLY".
>
> Hope that helps,
> Alejandro
>
> From: philippe lanckvrind <lanckvrind.philippe@gmail.com>
> Date: Thursday, July 30, 2015 at 10:39 PM
> To: Alejandro Fernandez <afernandez@hortonworks.com>
> Cc: "user@ambari.apache.org" <user@ambari.apache.org>, Jing Zhao <
> jing@hortonworks.com>
> Subject: Re: Ambari 2.1 / HDP 2.3 & dfs.http.policy = HTTPS_ONLY issue
>
> thank you for your answer Alejandro.
>
> I'll give more detail about my concern, because even with SASL
> activated, it remains the same.
> Also, I strongly suspect that part of the issue is coming from
> ambari-server.
> Concrete situation:
> All the components are stopped through the Ambari UI.
> I just add the parameter dfs.http.policy = HTTPS_ONLY and save the
> configuration, and then I directly receive an error 400 from the Ambari UI
> related to a resource component. The same goes with YARN when I
> set dfs.http.policy to HTTPS_ONLY.
> And I repeat, I am saving the configuration from the Ambari UI before
> restarting HDFS through the UI.
>
> If you wish, I can create a youtube video and show the steps.
>
> Also, when I set dfs.http.policy to HTTP_AND_HTTPS, everything works
> perfectly: no error from the Ambari UI, and the HTTPS namenode is accessible.
>
> Hope it helps.
>
> Best
>
> 2015-07-30 19:32 GMT+02:00 Alejandro Fernandez <afernandez@hortonworks.com
> >:
>
>> +Jing
>>
>> Hi Philippe,
>>
>> When setting dfs.http.policy to HTTPS_ONLY, you typically have to enable
>> SSL on your cluster.
>>
>> http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.0/bk_hdfs_admin_tools/content/configuring_datanode_sasl.html
>>
>>
>> This question is better suited for the HDFS team.
>>
>> Thanks,
>> Alejandro
>>
>> From: philippe lanckvrind <lanckvrind.philippe@gmail.com>
>> Reply-To: "user@ambari.apache.org" <user@ambari.apache.org>
>> Date: Wednesday, July 29, 2015 at 11:25 PM
>> To: "user@ambari.apache.org" <user@ambari.apache.org>
>> Subject: Fwd: Ambari 2.1 / HDP 2.3 & dfs.http.policy = HTTPS_ONLY issue
>>
>> Dear all,
>>
>> I've noticed a strange issue with Ambari 2.1 when I set the parameter
>> dfs.http.policy to HTTPS_ONLY.
>>
>> issue 1:
>> When the parameter is set, the web-ui popup error codes :
>> 500 status code received on GET method for API:
>> /api/v1/clusters/HDP_CLUSTER/services/HDFS/components/NAMENODE?fields=metrics/dfs/FSNamesystem/CorruptBlocks,metrics/dfs/FSNamesystem/UnderReplicatedBlocks&format=null_padding
>>
>>
>> When I continue, shortly after I can't access the dashboard anymore or
>> any other services on it, with the related error:
>> 500 status code received on GET method for API:
>> /api/v1/clusters/HDP_CLUSTER/components/?ServiceComponentInfo/component_name=FLUME_HANDLER|ServiceComponentInfo/component_name=APP_TIMELINE_SERVER|ServiceComponentInfo/category=MASTER&fields=ServiceComponentInfo/service_name,host_components/HostRoles/host_name,host_components/HostRoles/state,host_components/HostRoles/maintenance_state,host_components/HostRoles/stale_configs,host_components/HostRoles/ha_state,host_components/HostRoles/desired_admin_state,host_components/metrics/jvm/memHeapUsedM,host_components/metrics/jvm/HeapMemoryMax,host_components/metrics/jvm/HeapMemoryUsed,host_components/metrics/jvm/memHeapCommittedM,host_components/metrics/mapred/jobtracker/trackers_decommissioned,host_components/metrics/cpu/cpu_wio,host_components/metrics/rpc/RpcQueueTime_avg_time,host_components/metrics/dfs/FSNamesystem/*,host_components/metrics/dfs/namenode/Version,host_components/metrics/dfs/namenode/LiveNodes,host_components/metrics/dfs/namenode/DeadNodes,host_components/metrics/dfs/namenode/DecomNodes,host_components/metrics/dfs/namenode/TotalFiles,host_components/metrics/dfs/namenode/UpgradeFinalized,host_components/metrics/dfs/namenode/Safemode,host_components/metrics/runtime/StartTime,host_components/processes/HostComponentProcess,host_components/metrics/hbase/master/IsActiveMaster,host_components/metrics/hbase/master/MasterStartTime,host_components/metrics/hbase/master/MasterActiveTime,host_components/metrics/hbase/master/AverageLoad,host_components/metrics/master/AssignmentManger/ritCount,metrics/api/v1/cluster/summary,metrics/api/v1/topology/summary,host_components/metrics/yarn/Queue,host_components/metrics/yarn/ClusterMetrics/NumActiveNMs,host_components/metrics/yarn/ClusterMetrics/NumLostNMs,host_components/metrics/yarn/ClusterMetrics/NumUnhealthyNMs,host_components/metrics/yarn/ClusterMetrics/NumRebootedNMs,host_components/metrics/yarn/ClusterMetrics/NumDecommissionedNMs&minimal_response=true
>>
>>
>> Issue 2 :
>> Before losing control of Ambari, after setting dfs.http.policy to
>> HTTPS_ONLY, when I try to start HDFS, I receive the following error:
>> Connection failed to http://*******:50090 (Execution of 'curl -k
>> --negotiate -u : -b
>> /var/lib/ambari-agent/data/tmp/cookies/275cbc46-ffae-4524-bc29-6896c0b565e5
>> -c
>> /var/lib/ambari-agent/data/tmp/cookies/275cbc46-ffae-4524-bc29-6896c0b565e5
>> -w '%{http_code}' http://*******t:50090 --connect-timeout 5 --max-time 7
>> -o /dev/null' returned 7. curl: (7) couldn't connect to host
>> 000)
>>
>>
>>
>> Configuration testing:
>> Configuration 1
>>
>>    - Docker v1.7
>>    - HDP 2.3
>>    - Ambari 2.1 Hortonworks repo
>>    - centos 6.6
>>
>> Configuration 2
>>
>>    - virtual box v 4.3.10
>>    - HDP 2.3
>>    - Ambari 2.1 Hortonworks repo
>>    - Centos 6.6 server
>>
>> I also noticed that I can manually start the HDFS
>> components without a problem with SSL activated.
>>
>>
>> Thank you in advance for your feedback.
>>
>>
>>
>
