falcon-dev mailing list archives

From: Venkat R <verama...@yahoo.com.INVALID>
Subject: Re: Replication Job throws GSSException
Date: Thu, 10 Jul 2014 23:27:16 GMT
Switched to webhdfs, but the coordinator keeps failing with the following exception and thinks
the data on the other side is not present. I am running the Apache version of Oozie (4.0.1).
Any thoughts?

Venkat

ACTION[0000006-140710220847349-oozie-oozi-C@1] Error, java.lang.NoClassDefFoundError: Could not initialize class javax.ws.rs.core.MediaType
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.jsonParse(WebHdfsFileSystem.java:287)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.getResponse(WebHdfsFileSystem.java:630)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:535)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.run(WebHdfsFileSystem.java:424)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getDelegationToken(WebHdfsFileSystem.java:953)
at org.apache.hadoop.hdfs.web.TokenAspect.ensureTokenInitialized(TokenAspect.java:143)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getDelegationToken(WebHdfsFileSystem.java:227)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getAuthParameters(WebHdfsFileSystem.java:381)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.toUrl(WebHdfsFileSystem.java:402)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$FsPathRunner.getUrl(WebHdfsFileSystem.java:652)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.init(WebHdfsFileSystem.java:485)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:531)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.run(WebHdfsFileSystem.java:424)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getHdfsFileStatus(WebHdfsFileSystem.java:678)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getFileStatus(WebHdfsFileSystem.java:689)
at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1399)
at org.apache.oozie.dependency.FSURIHandler.exists(FSURIHandler.java:100)
at org.apache.oozie.command.coord.CoordActionInputCheckXCommand.pathExists(CoordActionInputCheckXCommand.java:484)
at org.apache.oozie.command.coord.CoordActionInputCheckXCommand.checkListOfPaths(CoordActionInputCheckXCommand.java:455)
at org.apache.oozie.command.coord.CoordActionInputCheckXCommand.checkResolvedUris(CoordActionInputCheckXCommand.java:425)
at org.apache.oozie.command.coord.CoordActionInputCheckXCommand.checkInput(CoordActionInputCheckXCommand.java:255)
at org.apache.oozie.command.coord.CoordActionInputCheckXCommand.execute(CoordActionInputCheckXCommand.java:130)
at org.apache.oozie.command.coord.CoordActionInputCheckXCommand.execute(CoordActionInputCheckXCommand.java:65)
at org.apache.oozie.command.XCommand.call(XCommand.java:280)
at org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:326)
at org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:255)
at org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:175)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
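
A side note on the trace: "Could not initialize class javax.ws.rs.core.MediaType" usually means
the JAX-RS API classes (the jsr311-api jar, which the webhdfs client code depends on) are missing
from, or conflicting on, the Oozie server's classpath. Below is a minimal sketch to check where,
if anywhere, that class resolves from; only the class name comes from the trace above, everything
else is illustrative:

    import java.net.URL;

    public class CheckJaxRs {
        public static void main(String[] args) {
            // Resolve the .class resource on the current classpath (run this with the
            // same classpath as the Oozie server) without triggering static initialization.
            URL url = Thread.currentThread().getContextClassLoader()
                    .getResource("javax/ws/rs/core/MediaType.class");
            System.out.println(url == null
                    ? "javax.ws.rs.core.MediaType not on classpath"
                    : "resolves from: " + url);
        }
    }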


On Thursday, July 10, 2014 2:42 PM, Venkat R <veramacha@yahoo.com.INVALID> wrote:
 


OK, will try it now and see.



On Thursday, July 10, 2014 2:37 PM, Arpit Gupta <arpit@hortonworks.com> wrote:



From the stack trace it looks like you are using hftp. We ran into issues when running tests
against secure Hadoop + hftp:

https://issues.apache.org/jira/browse/HDFS-5842

I recommend switching the readonly interface to webhdfs.
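
For example, in each Falcon cluster entity the readonly interface endpoint would point at
webhdfs instead of hftp. A minimal sketch; the hostnames, ports and versions are placeholders,
not taken from this thread:

  <interfaces>
    <!-- readonly endpoint switched from hftp to webhdfs; host/port are placeholders -->
    <interface type="readonly" endpoint="webhdfs://nn.example.com:50070" version="2.2.0"/>
    <interface type="write" endpoint="hdfs://nn.example.com:8020" version="2.2.0"/>
  </interfaces>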

--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/


On Jul 10, 2014, at 2:16 PM, Arpit Gupta <arpit@hortonworks.com> wrote:

> You need to provide the NameNode principal in the cluster.xml for each cluster. The following
> property needs to be provided in each cluster's xml:
> 
> dfs.namenode.kerberos.principal
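> 
> For example, in each Falcon cluster entity's properties section (a minimal sketch; the
> attribute-style property syntax is Falcon's, and the principal value is a placeholder):
> 
>   <properties>
>     <!-- placeholder principal; use the value from each cluster's hdfs-site.xml -->
>     <property name="dfs.namenode.kerberos.principal" value="nn/_HOST@EXAMPLE.COM"/>
>   </properties>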
> --
> Arpit Gupta
> Hortonworks Inc.
> http://hortonworks.com/
> 
> On Jul 10, 2014, at 2:08 PM, Venkat R <veramacha@yahoo.com> wrote:
> 
>> I am using the demo example. There is a replication job that copies a dataset from the source
>> to the target cluster by launching a REPLICATION job on the target cluster's Oozie, but it
>> fails with the following GSSException.
>> 
>> I have added both Oozie servers (one each for the source and target clusters) as proxyuser
>> hosts; both the source-cluster and target-cluster core-site.xml files contain the following:
>> 
>>   <property>
>>     <name>hadoop.proxyuser.oozie.groups</name>
>>     <value>users</value>
>>   </property>
>>   <property>
>>     <name>hadoop.proxyuser.oozie.hosts</name>
>>     <value>eat1-hcl0758.grid.linkedin.com,eat1-hcl0759.grid.linkedin.com</value>
>>   </property>
>> 
>> Appreciate any pointers.
>> Venkat
>> 
>> Failing Oozie Launcher, Main class [org.apache.falcon.latedata.LateDataHandler], main() threw exception, Unable to obtain remote token
>> java.io.IOException: Unable to obtain remote token
>>     at org.apache.hadoop.hdfs.tools.DelegationTokenFetcher.getDTfromRemote(DelegationTokenFetcher.java:249)
>>     at org.apache.hadoop.hdfs.web.HftpFileSystem$2.run(HftpFileSystem.java:251)
>>     at org.apache.hadoop.hdfs.web.HftpFileSystem$2.run(HftpFileSystem.java:246)
>>     at java.security.AccessController.doPrivileged(Native Method)
>>     at javax.security.auth.Subject.doAs(Subject.java:415)
>>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
>>     at org.apache.hadoop.hdfs.web.HftpFileSystem.getDelegationToken(HftpFileSystem.java:246)
>>     at org.apache.hadoop.hdfs.web.TokenAspect.ensureTokenInitialized(TokenAspect.java:143)
>>     at org.apache.hadoop.hdfs.web.HftpFileSystem.addDelegationTokenParam(HftpFileSystem.java:336)
>>     at org.apache.hadoop.hdfs.web.HftpFileSystem.openConnection(HftpFileSystem.java:323)
>>     at org.apache.hadoop.hdfs.web.HftpFileSystem$LsParser.fetchList(HftpFileSystem.java:455)
>>     at org.apache.hadoop.hdfs.web.HftpFileSystem$LsParser.getFileStatus(HftpFileSystem.java:470)
>>     at org.apache.hadoop.hdfs.web.HftpFileSystem.getFileStatus(HftpFileSystem.java:499)
>>     at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:57)
>>     at org.apache.hadoop.fs.Globber.glob(Globber.java:238)
>>     at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1624)
>>     at org.apache.falcon.latedata.LateDataHandler.usage(LateDataHandler.java:269)
>>     at org.apache.falcon.latedata.LateDataHandler.getFileSystemUsageMetric(LateDataHandler.java:252)
>>     at org.apache.falcon.latedata.LateDataHandler.computeStorageMetric(LateDataHandler.java:224)
>>     at org.apache.falcon.latedata.LateDataHandler.computeMetrics(LateDataHandler.java:170)
>>     at org.apache.falcon.latedata.LateDataHandler.run(LateDataHandler.java:147)
>>     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>>     at org.apache.falcon.latedata.LateDataHandler.main(LateDataHandler.java:60)
>>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>     at java.lang.reflect.Method.invoke(Method.java:606)
>>     at org.apache.oozie.action.hadoop.LauncherMapper.map(LauncherMapper.java:226)
>>     at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
>>     at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:430)
>>     at org.apache.hadoop.mapred.MapTask.run(MapTask.java:342)
>>     at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167)
>>     at java.security.AccessController.doPrivileged(Native Method)
>>     at javax.security.auth.Subject.doAs(Subject.java:415)
>>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
>>     at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
>> Caused by: org.apache.hadoop.security.authentication.client.AuthenticationException: GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
>>     at org.apache.hadoop.security.authentication.client.KerberosAuthenticator.doSpnegoSequence(KerberosAuthenticator.java:306)
>>     at org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:196)
>>     at org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:232)
>>     at org.apache.hadoop.hdfs.web.URLConnectionFactory.openConnection(URLConnectionFactory.java:164)
>>     at org.apache.hadoop.hdfs.tools.DelegationTokenFetcher.run(DelegationTokenFetcher.java:371)
>>     at org.apache.hadoop.hdfs.tools.DelegationTokenFetcher.getDTfromRemote(DelegationTokenFetcher.java:238)
>>     ... 35 more
>> Caused by: GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
>>     at sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147)
>>     at sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:121)
>>     at sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:187)
>>     at sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:223)
>>     at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:212)
>>     at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
>>     at org.apache.hadoop.security.authentication.client.KerberosAuthenticator$1.run(KerberosAuthenticator.java:285)
>>     at org.apache.hadoop.security.authentication.client.KerberosAuthenticator$1.run(KerberosAuthenticator.java:261)
>>     at java.security.AccessController.doPrivileged(Native Method)
>>     at javax.security.auth.Subject.doAs(Subject.java:415)
>>     at org.apache.hadoop.security.authentication.client.KerberosAuthenticator.doSpnegoSequence(KerberosAuthenticator.java:261)
>>     ... 40 more
>> 
>> Oozie Launcher failed, finishing Hadoop job gracefully
>> 
> 

