apex-dev mailing list archives

From Vlad Rozov <v.ro...@datatorrent.com>
Subject Re: FailoverProxyProvider Error on Launch
Date Fri, 03 Mar 2017 21:37:38 GMT
Please check that your application package does not include hadoop jars.
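The AbstractMethodError on ConfiguredFailoverProxyProvider.getProxy() typically points to a Hadoop client version clash: hadoop-* jars bundled inside the application package shadow the cluster's own libraries, which would explain why the problem only appeared after the cluster was reconfigured for HA. As a quick way to check (a minimal sketch in Java; the .apa path is just an example, and it assumes the application package is an ordinary zip archive with its dependency jars under lib/):

    import java.util.zip.ZipEntry;
    import java.util.zip.ZipFile;

    // Lists any hadoop-* jars packaged inside the application archive.
    public class CheckApaForHadoopJars {
        public static void main(String[] args) throws Exception {
            // Hypothetical path to the built application package.
            try (ZipFile apa = new ZipFile("target/myapp-1.0-SNAPSHOT.apa")) {
                apa.stream()
                   .map(ZipEntry::getName)
                   .filter(name -> name.startsWith("lib/") && name.contains("hadoop-"))
                   .forEach(System.out::println);
            }
        }
    }

If that prints anything, the usual remedy is to mark the Hadoop dependencies with provided scope in the application's pom.xml so they are resolved from the cluster at runtime instead of being packaged into the .apa.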

Thank you,

Vlad

Join us at Apex Big Data World - San Jose (http://www.apexbigdata.com/san-jose.html), April 4, 2017
Register: http://www.apexbigdata.com/san-jose-register.html
On 3/3/17 11:37, Ganelin, Ilya wrote:
>
> Hi, all – my application submits to the gateway, is accepted, but then 
> fails with:
>
> 2017-03-03 14:30:50,505 INFO org.apache.hadoop.service.AbstractService: Service com.datatorrent.stram.StreamingAppMasterService failed in state INITED; cause: java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
>
> java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
>         at org.apache.hadoop.fs.AbstractFileSystem.newInstance(AbstractFileSystem.java:129)
>         at org.apache.hadoop.fs.AbstractFileSystem.createFileSystem(AbstractFileSystem.java:155)
>         at org.apache.hadoop.fs.AbstractFileSystem.get(AbstractFileSystem.java:240)
>         at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:332)
>         …
> Caused by: java.lang.AbstractMethodError: org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider.getProxy()Ljava/lang/Object;
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.<init>(RetryInvocationHandler.java:73)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.<init>(RetryInvocationHandler.java:64)
>         at org.apache.hadoop.io.retry.RetryProxy.create(RetryProxy.java:58)
>         at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:181)
>         at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:708)
>         at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:651)
>         at org.apache.hadoop.fs.Hdfs.<init>(Hdfs.java:90)
>         ... 47 more
>
> This seems related to 
> http://stackoverflow.com/questions/38262064/exception-while-running-spark-submit-on-hadoop-cluster-with-highavailability
>
> But I don’t know how this applies in the current situation. Wouldn’t 
> DT retrieve the core-site/hdfs-site from the Hadoop path we provide 
> when setting up the gateway?
>
> Our cluster is now deployed as an HA cluster, which wasn’t the case 
> the last time we ran Apex.
>
> Any ideas on how to resolve this would be much appreciated. Thanks!
>
> - Ilya Ganelin
>
>
>

