flink-issues mailing list archives

From "deng (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (FLINK-7881) flink can't deployed on yarn with ha
Date Tue, 31 Oct 2017 06:25:00 GMT

    [ https://issues.apache.org/jira/browse/FLINK-7881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16226328#comment-16226328 ]

deng commented on FLINK-7881:
-----------------------------

I have dug out why this issue happens.

As the following code shows, failoverProxyProvider is null, so the client thinks it is a non-HA case. The YarnConfiguration did not read hdfs-site.xml to pick up the "dfs.client.failover.proxy.provider.startdt" value; only yarn-site.xml and core-site.xml were read (a small sketch below the screenshots illustrates this).

!screenshot-1.png!
!screenshot-2.png!
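
To make the root cause concrete, here is a minimal standalone check (my own example, not Flink code; it assumes a Hadoop client on the classpath, an HA nameservice named "startdt", and that nothing has initialized HdfsConfiguration, which would otherwise register hdfs-site.xml as a default resource):

    import org.apache.hadoop.yarn.conf.YarnConfiguration;

    public class HaKeyCheck {
        public static void main(String[] args) {
            // YarnConfiguration only loads the *-default.xml files plus
            // core-site.xml and yarn-site.xml.
            YarnConfiguration yarnConfig = new YarnConfiguration();
            // Prints null, because hdfs-site.xml was never added as a resource,
            // so the HDFS client treats the logical nameservice as a plain hostname.
            System.out.println(yarnConfig.get("dfs.client.failover.proxy.provider.startdt"));
        }
    }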

I have found a solution as below:

In org.apache.flink.yarn.YarnApplicationMasterRunner.java:
        replace "final YarnConfiguration yarnConfig = new YarnConfiguration();" with "final YarnConfiguration yarnConfig = new YarnConfiguration(HadoopUtils.getHadoopConfiguration());"

   In org.apache.flink.runtime.util.HadoopUtils.java:
        add the below code in getHadoopConfiguration():

            if (new File(possibleHadoopConfPath + "/yarn-site.xml").exists()) {
                retConf.addResource(new org.apache.hadoop.fs.Path(possibleHadoopConfPath + "/yarn-site.xml"));

                if (LOG.isDebugEnabled()) {
                    LOG.debug("Adding " + possibleHadoopConfPath + "/yarn-site.xml to hadoop configuration");
                }
            }
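
For anyone who wants to try the idea outside of Flink first, here is a rough sketch of what the two changes add up to (a hypothetical helper, not the actual Flink method; the class name YarnConfWithHdfs and the env-var handling are just for illustration): build a Hadoop Configuration from HADOOP_CONF_DIR that includes core-site.xml, hdfs-site.xml and yarn-site.xml, then wrap it in a YarnConfiguration so the HDFS HA settings are visible to the application master.

    import java.io.File;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.yarn.conf.YarnConfiguration;

    public class YarnConfWithHdfs {

        // Mirrors the proposed getHadoopConfiguration() change: add every
        // site file that exists in the given conf dir as a resource.
        static YarnConfiguration load(String hadoopConfDir) {
            Configuration conf = new Configuration();
            for (String name : new String[] {"core-site.xml", "hdfs-site.xml", "yarn-site.xml"}) {
                File f = new File(hadoopConfDir, name);
                if (f.exists()) {
                    conf.addResource(new org.apache.hadoop.fs.Path(f.getAbsolutePath()));
                }
            }
            // Mirrors the proposed YarnApplicationMasterRunner change.
            return new YarnConfiguration(conf);
        }

        public static void main(String[] args) {
            // Assumes HADOOP_CONF_DIR is set to the cluster's config directory.
            YarnConfiguration yarnConfig = load(System.getenv("HADOOP_CONF_DIR"));
            // Both keys should now resolve, so the logical HDFS URI no longer
            // falls back to a plain host lookup like master:8020.
            System.out.println(yarnConfig.get("fs.defaultFS"));
            System.out.println(yarnConfig.get("dfs.client.failover.proxy.provider.startdt"));
        }
    }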



> flink can't deployed on yarn with ha
> ------------------------------------
>
>                 Key: FLINK-7881
>                 URL: https://issues.apache.org/jira/browse/FLINK-7881
>             Project: Flink
>          Issue Type: Bug
>          Components: YARN
>    Affects Versions: 1.3.2
>            Reporter: deng
>            Priority: Blocker
>         Attachments: screenshot-1.png, screenshot-2.png
>
>
> I start yarn-session.sh on YARN, but it can't read the HDFS logical URL. It always connects to hdfs://master:8020, but it should be 9000; my HDFS defaultFS is hdfs://master.
> I have configured YARN_CONF_DIR and HADOOP_CONF_DIR, but it didn't work.
> Is it a bug? I use flink-1.3.0-bin-hadoop27-scala_2.10
> 2017-10-20 11:00:05,395 DEBUG org.apache.hadoop.ipc.Client                       - IPC Client (1035144464) connection to startdt/173.16.5.215:8020 from admin: closed
> 2017-10-20 11:00:05,398 ERROR org.apache.flink.yarn.YarnApplicationMasterRunner  - YARN Application Master initialization failed
> java.net.ConnectException: Call From spark3/173.16.5.216 to master:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
> 	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> 	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
> 	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> 	at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
> 	at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
> 	at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:732)
> 	at org.apache.hadoop.ipc.Client.call(Client.java:1479)
> 	at org.apache.hadoop.ipc.Client.call(Client.java:1412)
> 	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
> 	at com.sun.proxy.$Proxy9.getFileInfo(Unknown Source)
> 	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:771)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:606)
> 	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
> 	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
> 	at com.sun.proxy.$Proxy10.getFileInfo(Unknown Source)
> 	at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2108)
> 	at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1305)
> 	at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301)
> 	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
