ambari-user mailing list archives

From philippe lanckvrind <lanckvrind.phili...@gmail.com>
Subject Re: Ambari 2.1 / HDP 2.3 & dfs.http.policy = HTTPS_ONLY issue
Date Fri, 31 Jul 2015 05:39:10 GMT
Thank you for your answer, Alejandro.

I'll give more detail about my concern, because even with SASL
activated it remains the same. I also strongly suspect that part of the
issue is coming from ambari-server.
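
To be precise, by "SASL activated" I mean data transfer protection plus
non-privileged DataNode ports, roughly as in the sketch below, applied
with the configs.sh helper that ships with ambari-server (the Ambari
host and the admin credentials are placeholders, not my real values;
the ports are the examples from the HDP SASL guide):

  /var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin \
    set ambari.example.com HDP_CLUSTER hdfs-site \
    dfs.data.transfer.protection integrity
  /var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin \
    set ambari.example.com HDP_CLUSTER hdfs-site \
    dfs.datanode.address 0.0.0.0:10019
  /var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin \
    set ambari.example.com HDP_CLUSTER hdfs-site \
    dfs.datanode.http.address 0.0.0.0:10022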
Concrete situation:
All the components are stopped through the Ambari UI.
I just set the parameter dfs.http.policy to HTTPS_ONLY and save the
configuration, and I immediately receive an error message from the
Ambari UI: error 400, related to a resource component. The same goes
with YARN when I set dfs.http.policy to HTTPS_ONLY.
And I repeat: this happens when saving the configuration from the
Ambari UI, before restarting HDFS through the UI.
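
To rule the web UI in or out, the same change can be pushed straight at
the REST API, e.g. with the same configs.sh helper as above (host and
credentials are again placeholders):

  /var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin \
    set ambari.example.com HDP_CLUSTER hdfs-site \
    dfs.http.policy HTTPS_ONLY

If this call fails as well, the problem would sit on the ambari-server
side rather than in the UI.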

If you wish, I can create a YouTube video showing the steps.

Also, when I set dfs.http.policy to HTTP_AND_HTTPS, everything works
perfectly: no error from the Ambari UI, and the HTTPS NameNode UI is
accessible.
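
For the record, this is how I verify the HTTPS endpoints (hostnames are
placeholders; 50470 and 50091 are the default NameNode and
SecondaryNameNode HTTPS ports in HDP 2.x, and -k skips certificate
verification):

  curl -k -w '%{http_code}\n' -o /dev/null https://namenode.example.com:50470
  curl -k -w '%{http_code}\n' -o /dev/null https://snamenode.example.com:50091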

Hope it helps.

Best

2015-07-30 19:32 GMT+02:00 Alejandro Fernandez <afernandez@hortonworks.com>:

> +Jing
>
> Hi Philippe,
>
> When setting dfs.http.policy to HTTPS_ONLY, you typically have to enable
> SSL on your cluster.
>
> http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.0/bk_hdfs_admin_tools/content/configuring_datanode_sasl.html
>
>
> This question is better suited for the HDFS team.
>
> Thanks,
> Alejandro
>
> From: philippe lanckvrind <lanckvrind.philippe@gmail.com>
> Reply-To: "user@ambari.apache.org" <user@ambari.apache.org>
> Date: Wednesday, July 29, 2015 at 11:25 PM
> To: "user@ambari.apache.org" <user@ambari.apache.org>
> Subject: Fwd: Ambari 2.1 / HDP 2.3 & dfs.http.policy = HTTPS_ONLY issue
>
> Dear all,
>
> I've noticed a strange issue with Ambari 2.1 when I set the parameter
> dfs.http.policy to HTTPS_ONLY.
>
> Issue 1:
> When the parameter is set, the web UI pops up this error:
> 500 status code received on GET method for API:
> /api/v1/clusters/HDP_CLUSTER/services/HDFS/components/NAMENODE?fields=metrics/dfs/FSNamesystem/CorruptBlocks,metrics/dfs/FSNamesystem/UnderReplicatedBlocks&format=null_padding
>
>
> When I continue, shortly after I can't access the dashboard anymore, or
> any other services on it, with the related error:
> 500 status code received on GET method for API:
> /api/v1/clusters/HDP_CLUSTER/components/?ServiceComponentInfo/component_name=FLUME_HANDLER|ServiceComponentInfo/component_name=APP_TIMELINE_SERVER|ServiceComponentInfo/category=MASTER&fields=ServiceComponentInfo/service_name,host_components/HostRoles/host_name,host_components/HostRoles/state,host_components/HostRoles/maintenance_state,host_components/HostRoles/stale_configs,host_components/HostRoles/ha_state,host_components/HostRoles/desired_admin_state,host_components/metrics/jvm/memHeapUsedM,host_components/metrics/jvm/HeapMemoryMax,host_components/metrics/jvm/HeapMemoryUsed,host_components/metrics/jvm/memHeapCommittedM,host_components/metrics/mapred/jobtracker/trackers_decommissioned,host_components/metrics/cpu/cpu_wio,host_components/metrics/rpc/RpcQueueTime_avg_time,host_components/metrics/dfs/FSNamesystem/*,host_components/metrics/dfs/namenode/Version,host_components/metrics/dfs/namenode/LiveNodes,host_components/metrics/dfs/namenode/DeadNodes,host_components/metrics/dfs/namenode/DecomNodes,host_components/metrics/dfs/namenode/TotalFiles,host_components/metrics/dfs/namenode/UpgradeFinalized,host_components/metrics/dfs/namenode/Safemode,host_components/metrics/runtime/StartTime,host_components/processes/HostComponentProcess,host_components/metrics/hbase/master/IsActiveMaster,host_components/metrics/hbase/master/MasterStartTime,host_components/metrics/hbase/master/MasterActiveTime,host_components/metrics/hbase/master/AverageLoad,host_components/metrics/master/AssignmentManger/ritCount,metrics/api/v1/cluster/summary,metrics/api/v1/topology/summary,host_components/metrics/yarn/Queue,host_components/metrics/yarn/ClusterMetrics/NumActiveNMs,host_components/metrics/yarn/ClusterMetrics/NumLostNMs,host_components/metrics/yarn/ClusterMetrics/NumUnhealthyNMs,host_components/metrics/yarn/ClusterMetrics/NumRebootedNMs,host_components/metrics/yarn/ClusterMetrics/NumDecommissionedNMs&minimal_response=true
>
>
> Issue 2:
> Before losing control of Ambari, after setting dfs.http.policy to
> HTTPS_ONLY, when I try to start HDFS, I receive the following error:
> Connection failed to http://*******:50090 (Execution of 'curl -k
> --negotiate -u : -b
> /var/lib/ambari-agent/data/tmp/cookies/275cbc46-ffae-4524-bc29-6896c0b565e5
> -c
> /var/lib/ambari-agent/data/tmp/cookies/275cbc46-ffae-4524-bc29-6896c0b565e5
> -w '%{http_code}' http://*******t:50090 --connect-timeout 5 --max-time 7
> -o /dev/null' returned 7. curl: (7) couldn't connect to host
> 000)
>
>
>
> Configuration testing:
> Configuration 1
>
>    - Docker v1.7
>    - HDP 2.3
>    - Ambari 2.1 Hortonworks repo
>    - CentOS 6.6
>
> Configuration 2
>
>    - VirtualBox v4.3.10
>    - HDP 2.3
>    - Ambari 2.1 Hortonworks repo
>    - CentOS 6.6 server
>
> I also noticed that I can manually start the HDFS components without a
> problem with SSL activated.
>
>
> Thank you in advance for your feedback.
>
>
>
