falcon-dev mailing list archives

From Venkat R <verama...@yahoo.com.INVALID>
Subject Re: Regarding Falcon+Oozie+Hcat table replication
Date Sat, 05 Jul 2014 07:27:29 GMT
Hi Venkatesh,

I have made the changes on both Oozie servers (source and target cluster) --
that is, I added the hadoop-conf of the source and target clusters to both
oozie-site.xml files. I see the replication Oozie job get started on the
TARGET Oozie server, but it throws the following exception -- it looks like
it is unable to get the TGT.

Let me know if you have any pointers.
Thanks,
Venkat



Failing Oozie Launcher, Main class [org.apache.falcon.latedata.LateDataHandler], main() threw
exception, Unable to obtain remote token
java.io.IOException: Unable to obtain remote token
        at org.apache.hadoop.hdfs.tools.DelegationTokenFetcher.getDTfromRemote(DelegationTokenFetcher.java:249)
        at org.apache.hadoop.hdfs.web.HftpFileSystem$2.run(HftpFileSystem.java:251)
        at org.apache.hadoop.hdfs.web.HftpFileSystem$2.run(HftpFileSystem.java:246)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
        at org.apache.hadoop.hdfs.web.HftpFileSystem.getDelegationToken(HftpFileSystem.java:246)
        at org.apache.hadoop.hdfs.web.TokenAspect.ensureTokenInitialized(TokenAspect.java:143)
        at org.apache.hadoop.hdfs.web.HftpFileSystem.addDelegationTokenParam(HftpFileSystem.java:336)
        at org.apache.hadoop.hdfs.web.HftpFileSystem.openConnection(HftpFileSystem.java:323)
        at org.apache.hadoop.hdfs.web.HftpFileSystem$LsParser.fetchList(HftpFileSystem.java:455)
        at org.apache.hadoop.hdfs.web.HftpFileSystem$LsParser.getFileStatus(HftpFileSystem.java:470)
        at org.apache.hadoop.hdfs.web.HftpFileSystem.getFileStatus(HftpFileSystem.java:499)
        at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:57)
        at org.apache.hadoop.fs.Globber.glob(Globber.java:238)
        at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1624)
        at org.apache.falcon.latedata.LateDataHandler.usage(LateDataHandler.java:214)
        at org.apache.falcon.latedata.LateDataHandler.getFileSystemUsageMetric(LateDataHandler.java:206)
        at org.apache.falcon.latedata.LateDataHandler.computeStorageMetric(LateDataHandler.java:178)
        at org.apache.falcon.latedata.LateDataHandler.computeMetrics(LateDataHandler.java:124)
        at org.apache.falcon.latedata.LateDataHandler.run(LateDataHandler.java:101)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at org.apache.falcon.latedata.LateDataHandler.main(LateDataHandler.java:60)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.oozie.action.hadoop.LauncherMapper.map(LauncherMapper.java:226)
        at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
        at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:430)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:342)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
        at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
Caused by: org.apache.hadoop.security.authentication.client.AuthenticationException: GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
        at org.apache.hadoop.security.authentication.client.KerberosAuthenticator.doSpnegoSequence(KerberosAuthenticator.java:306)
        at org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:196)
        at org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:232)
        at org.apache.hadoop.hdfs.web.URLConnectionFactory.openConnection(URLConnectionFactory.java:164)
        at org.apache.hadoop.hdfs.tools.DelegationTokenFetcher.run(DelegationTokenFetcher.java:371)
        at org.apache.hadoop.hdfs.tools.DelegationTokenFetcher.getDTfromRemote(DelegationTokenFetcher.java:238)
        ... 35 more
Caused by: GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
        at sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147)
        at sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:121)
        at sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:187)
        at sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:223)
        at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:212)
        at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
        at org.apache.hadoop.security.authentication.client.KerberosAuthenticator$1.run(KerberosAuthenticator.java:285)
        at org.apache.hadoop.security.authentication.client.KerberosAuthenticator$1.run(KerberosAuthenticator.java:261)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.authentication.client.KerberosAuthenticator.doSpnegoSequence(KerberosAuthenticator.java:261)
        ... 40 more
log4j:WARN No appenders could be found for logger (org.apache.hadoop.metrics2.impl.MetricsSystemImpl).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info. 
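For anyone hitting the same "Failed to find any Kerberos tgt" symptom: it generally means the process attempting the SPNEGO handshake has no usable Kerberos credentials. On a secure cluster, the Oozie server is typically given a keytab and principal in oozie-site.xml along these lines (the keytab path and principal below are placeholders, not values taken from this thread):

```xml
<!-- Illustrative oozie-site.xml fragment; keytab path and principal are placeholders. -->
<property>
    <name>oozie.service.HadoopAccessorService.kerberos.enabled</name>
    <value>true</value>
</property>
<property>
    <name>oozie.service.HadoopAccessorService.keytab.file</name>
    <value>/etc/security/keytabs/oozie.service.keytab</value>
</property>
<property>
    <name>oozie.service.HadoopAccessorService.kerberos.principal</name>
    <value>oozie/_HOST@EXAMPLE.COM</value>
</property>
```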


On Thursday, July 3, 2014 11:15 PM, Seetharam Venkatesh <venkatesh@innerzeal.com> wrote:
 


I think I know this issue. Sorry, I did not see the error the first time around.
This is documented, but since 0.5 is not out yet, the documentation is only in
the source. Follow the instructions below and this error should go away.

   * Oozie 4.x with Hadoop-2.x
Replication jobs are submitted to Oozie on the destination cluster. Oozie
runs a table export job on the RM of the source cluster. The Oozie server on
the target cluster must be configured with the source Hadoop configs, else
jobs fail on both secure and non-secure clusters with errors such as:
<verbatim>
org.apache.hadoop.security.token.SecretManager$InvalidToken: Password not
found for ApplicationAttempt appattempt_1395965672651_0010_000002
</verbatim>

Make sure all Oozie servers that Falcon talks to have the Hadoop configs
configured in oozie-site.xml:
<verbatim>
<property>
      <name>oozie.service.HadoopAccessorService.hadoop.configurations</name>

<value>*=/etc/hadoop/conf,arpit-new-falcon-1.cs1cloud.internal:8020=/etc/hadoop-1,arpit-new-falcon-1.cs1cloud.internal:8032=/etc/hadoop-1,arpit-new-falcon-2.cs1cloud.internal:8020=/etc/hadoop-2,arpit-new-falcon-2.cs1cloud.internal:8032=/etc/hadoop-2,arpit-new-falcon-5.cs1cloud.internal:8020=/etc/hadoop-3,arpit-new-falcon-5.cs1cloud.internal:8032=/etc/hadoop-3</value>
      <description>
          Comma-separated AUTHORITY=HADOOP_CONF_DIR pairs, where AUTHORITY is
          the HOST:PORT of the Hadoop service (JobTracker, HDFS). The wildcard
          '*' configuration is used when there is no exact match for an
          authority. The HADOOP_CONF_DIR contains the relevant Hadoop
          *-site.xml files. If the path is relative, it is looked up within
          the Oozie configuration directory; the path can also be absolute
          (i.e. pointing to Hadoop client conf/ directories on the local
          filesystem).
      </description>
    </property>
</verbatim>
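To make the lookup rule in that description concrete, here is a small sketch (not Oozie's actual code; the helper name and sample hosts are illustrative) of how an AUTHORITY=HADOOP_CONF_DIR list with a '*' wildcard resolves:

```python
def resolve_hadoop_conf(configurations, authority):
    """Resolve a HOST:PORT authority to a Hadoop conf dir, the way the
    oozie.service.HadoopAccessorService.hadoop.configurations value is
    described: exact AUTHORITY=HADOOP_CONF_DIR match first, then the
    '*' wildcard entry as a fallback."""
    mapping = dict(entry.split("=", 1) for entry in configurations.split(","))
    return mapping.get(authority, mapping.get("*"))

value = ("*=/etc/hadoop/conf,"
         "arpit-new-falcon-1.cs1cloud.internal:8020=/etc/hadoop-1,"
         "arpit-new-falcon-2.cs1cloud.internal:8032=/etc/hadoop-2")

print(resolve_hadoop_conf(value, "arpit-new-falcon-1.cs1cloud.internal:8020"))  # /etc/hadoop-1
print(resolve_hadoop_conf(value, "unknown-host:8020"))  # /etc/hadoop/conf (wildcard fallback)
```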



On Thu, Jul 3, 2014 at 8:23 AM, John Yu <johnyu0520@gmail.com> wrote:

> Hey Seetharam,
>
> Thanks for your suggestion.  So I have done the following:
>
> 1. I've put hive-site.xml into oozie/conf/hadoop-conf
> 2. I've made sure  hive.metastore.execute.setugi is true (it is true by
> default actually)
> 3. I've copied the source cluster hadoop xml's into oozie/conf/hadoop-conf
> per
>
> https://git-wip-us.apache.org/repos/asf?p=incubator-falcon.git;a=blob;f=docs/src/site/twiki/HiveIntegration.twiki;h=2af5a6b8351c09c60d73531470b959e801f19b97;hb=HEAD
>
> So far it is still complaining with the same error.
>
> I am thinking of running through the Hive integration documentation page
> from scratch again to see if it changes anything.
>
>
> Thanks,
> John
>
>
> 2014-06-30 10:05 GMT-07:00 Seetharam Venkatesh <venkatesh@innerzeal.com>:
>
> > I'm not sure if you have the latest sandbox, but you'd need to add
> > hive-site.xml to Oozie and have hive.metastore.execute.setugi set to true
> > for this to work.
> >
> > This has been fixed in the next patch release.
> >
> >
> > On Fri, Jun 27, 2014 at 11:50 PM, John Yu <johnyu0520@gmail.com> wrote:
> >
> > > Hello Everyone,
> > >
> > > I am trying to get an HCat-based feed replication going between
> > > clusters, and am running into the error below (full stack trace and
> > > related information are provided below):
> > >
> > > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Exception while registering
> > >
> > > org.apache.hadoop.security.token.SecretManager$InvalidToken: Password
> > > not found for ApplicationAttempt appattempt_1403829424249_0002_000001
> > >
> > >
> > > Wondering if someone can provide insight into what might be happening
> > > here, or where to start trying to resolve this issue. Any pointers
> > > would be greatly appreciated.
> > >
> > > Thanks!
> > > John
> > >
> > >
> > > --- System ---
> > > Hortonworks Sandbox 2.1
> > > Hadoop 2.4.0
> > > Hive 0.13.0
> > > Falcon 0.5
> > > Oozie 4.0.0
> > >
> > > The setup is between two HDP 2.1 sandboxes hosted on different boxes
> > >
> > >
> > > --- hive-site.xml ---
> > >
> > > (Are these the only two properties I have to worry about?)
> > >
> > >     <property>
> > >
> > >       <name>hive.security.authorization.enabled</name>
> > >
> > >       <value>false (originally true) </value>
> > >
> > >     </property>
> > >
> > >     <property>
> > >
> > >       <name>hive.server2.enable.doAs</name>
> > >
> > >       <value>true (originally false) </value>
> > >
> > >     </property>
> > >
> > > --- Error stack trace ---
> > >
> > > 2014-06-26 17:52:58,418 WARN [main] org.apache.hadoop.ipc.Client:
> > > Exception encountered while connecting to the server:
> > > org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken):
> > > Password not found for ApplicationAttempt appattempt_1403829424249_0002_000001
> > > 2014-06-26 17:52:58,425 ERROR [main]
> > > org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Exception
> > > while registering
> > > org.apache.hadoop.security.token.SecretManager$InvalidToken: Password
> > > not found for ApplicationAttempt appattempt_1403829424249_0002_000001
> > >         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> > >         at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
> > >         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> > >         at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
> > >         at org.apache.hadoop.yarn.ipc.RPCUtil.instantiateException(RPCUtil.java:53)
> > >         at org.apache.hadoop.yarn.ipc.RPCUtil.unwrapAndThrowException(RPCUtil.java:104)
> > >         at org.apache.hadoop.yarn.api.impl.pb.client.ApplicationMasterProtocolPBClientImpl.registerApplicationMaster(ApplicationMasterProtocolPBClientImpl.java:109)
> > >         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > >         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> > >         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > >         at java.lang.reflect.Method.invoke(Method.java:606)
> > >         at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
> > >         at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
> > >         at com.sun.proxy.$Proxy35.registerApplicationMaster(Unknown Source)
> > >         at org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator.register(RMCommunicator.java:155)
> > >         at org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator.serviceStart(RMCommunicator.java:116)
> > >         at org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator.serviceStart(RMContainerAllocator.java:213)
> > >         at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
> > >         at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter.serviceStart(MRAppMaster.java:817)
> > >         at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
> > >         at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:120)
> > >         at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceStart(MRAppMaster.java:1075)
> > >         at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
> > >         at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$1.run(MRAppMaster.java:1460)
> > >         at java.security.AccessController.doPrivileged(Native Method)
> > >         at javax.security.auth.Subject.doAs(Subject.java:415)
> > >         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1557)
> > >         at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1456)
> > >         at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1389)
> > > Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken):
> > > Password not found for ApplicationAttempt appattempt_1403829424249_0002_000001
> > >         at org.apache.hadoop.ipc.Client.call(Client.java:1410)
> > >         at org.apache.hadoop.ipc.Client.call(Client.java:1363)
> > >         at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
> > >         at com.sun.proxy.$Proxy34.registerApplicationMaster(Unknown Source)
> > >         at org.apache.hadoop.yarn.api.impl.pb.client.ApplicationMasterProtocolPBClientImpl.registerApplicationMaster(ApplicationMasterProtocolPBClientImpl.java:106)
> > >
> > >
> > >
> > > --
> > > 余守中  John Yu (Yu, Shoou-Jong)
> > > Mobile: 650-691-3314
> > >
> >
> >
> >
> > --
> > Regards,
> > Venkatesh
> >
> > “Perfection (in design) is achieved not when there is nothing more to
> add,
> > but rather when there is nothing more to take away.”
> > - Antoine de Saint-Exupéry

> >
>
>
>
> --
> 余守中  John Yu (Yu, Shoou-Jong)
> Mobile: 650-691-3314
>



-- 
Regards,
Venkatesh

“Perfection (in design) is achieved not when there is nothing more to add,
but rather when there is nothing more to take away.”
- Antoine de Saint-Exupéry