falcon-dev mailing list archives

From "ASF GitHub Bot (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (FALCON-1925) Add hadoop classpath to falcon client classpath in falcon-config.sh
Date Fri, 22 Apr 2016 04:47:12 GMT

    [ https://issues.apache.org/jira/browse/FALCON-1925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15253340#comment-15253340 ]

ASF GitHub Bot commented on FALCON-1925:
----------------------------------------

GitHub user vrangan opened a pull request:

    https://github.com/apache/falcon/pull/112

    FALCON-1925: Add hadoop classpath to falcon client classpath in falcon-config.sh

    This is for recipe submission in a secure cluster

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/vrangan/falcon master

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/falcon/pull/112.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #112
    
----
commit bc5a458217b79a56210ac2b10a481c9588dcc407
Author: Venkat Ranganathan <venkat@hortonworks.com>
Date:   2016-03-30T00:57:25Z

    Falcon webUI returns 413 (Full head - Request entity too large) error when TLS is enabled in a secure cluster with AD integration

commit 5edcc8fd69608c714809756838e01e3a7bb85a31
Author: Venkat Ranganathan <venkat@hortonworks.com>
Date:   2016-04-21T07:11:08Z

    Merge remote-tracking branch 'upstream/master'

commit 157a4f78797934a449d6cea27730abd4fae6a1d1
Author: Venkat Ranganathan <venkat@hortonworks.com>
Date:   2016-04-19T05:16:12Z

    Fix for allowing RM principal to be specified in cluster entity

commit ac6dd928c47761f25933444926020dab041f4cce
Author: yzheng-hortonworks <yzheng@hortonworks.com>
Date:   2016-04-21T22:06:05Z

    FALCON-1790 CLI support for instance search
    
    Tested CLI support. Also added documentation for both REST api and CLI.
    
    Author: yzheng-hortonworks <yzheng@hortonworks.com>
    
    Reviewers: "Praveen Adlakha <adlakha.praveen@gmail.com>, Balu Vellanki <balu@apache.org>"
    
    Closes #110 from yzheng-hortonworks/FALCON-1790

commit 9f7fa4bfbb334bd11767f7147086e3d084efb9cd
Author: Sowmya Ramesh <sowmya_kr@apache.org>
Date:   2016-04-21T22:18:43Z

    FALCON-1105 REST API support for Server side extensions artifact repository management
    
    REST API support for Server side extensions artifact repository management.
    Fixed the warnings in the file.
    
    Author: Sowmya Ramesh <sramesh@hortonworks.com>
    
    Reviewers: "Balu<bvellanki@hortonworks.com>, Ying Zheng<yzheng@hortonworks.com>"
    
    Closes #102 from sowmyaramesh/FALCON-1105

commit 4c50d8dc3d0c0f87349e9c070bad615a99604ec8
Author: Praveen Adlakha <adlakha.praveen@gmail.com>
Date:   2016-04-21T10:09:36Z

    FALCON-1749 Instance status does not show instances if entity is dele…
    
    …ted from one of the colos
    
    Author: Praveen Adlakha <adlakha.praveen@gmail.com>
    
    Reviewers: Pallavi Rao <pallavi.rao@inmobi.com>
    
    Closes #108 from PraveenAdlakha/1749 and squashes the following commits:
    
    ef50c74 [Praveen Adlakha] minor fix
    0cedbb8 [Praveen Adlakha] Merge branch '1749' of github.com:PraveenAdlakha/falcon into 1749
    7e25db6 [Praveen Adlakha] FALCON-1749 Instance status does not show instances if entity is deleted from one of the colos

commit 608efeb18bba05431c4b8ef3544084b6678db3c2
Author: bvellanki <bvellanki@hortonworks.com>
Date:   2016-04-21T23:48:16Z

    FALCON-1861 Support HDFS Snapshot based replication in Falcon
    
    Documentation will be added in Jira FALCON-1908
    
    Author: bvellanki <bvellanki@hortonworks.com>
    
    Reviewers: "Sowmya <sramesh@hortonworks.com>, sandeepSamudrala <sandysmdl@gmail.com>, Ying Zheng <yzheng@hortonworks.com>, Venkat Ranganathan <venkat@hortonworks.com>"
    
    Closes #105 from bvellanki/master

commit 4025644153884d36a24b12aa907af50237a368f8
Author: Venkat Ranganathan <venkat@hortonworks.com>
Date:   2016-04-22T04:44:36Z

    FALCON-1925: Add hadoop classpath to falcon client classpath in falcon-config.sh

commit 66907d80e9bedb0739f823bcd808b3bae4cf2c13
Author: Venkat Ranganathan <venkat@hortonworks.com>
Date:   2016-04-22T04:45:47Z

    Merge remote-tracking branch 'upstream/master'

----


> Add hadoop classpath to falcon client classpath in falcon-config.sh
> -------------------------------------------------------------------
>
>                 Key: FALCON-1925
>                 URL: https://issues.apache.org/jira/browse/FALCON-1925
>             Project: Falcon
>          Issue Type: Bug
>          Components: client
>    Affects Versions: 0.7, 0.8, 0.9
>         Environment: secure clusters
>            Reporter: Venkat Ranganathan
>            Assignee: Venkat Ranganathan
>             Fix For: trunk
>
>
> We need the falcon client classpath in falcon-config.sh to be updated to include the hadoop classpath, so that recipe submission can successfully act as a Hive metastore client - otherwise, we will get exceptions such as this:
> {quote}
>  falcon recipe -name hive-disaster-recovery -operation HIVE_DISASTER_RECOVERY -properties hive-disaster-recovery.properties
> Recipe processing failed com.google.common.util.concurrent.UncheckedExecutionException: java.lang.RuntimeException: Unable to instantiate org.apache.hive.hcatalog.common.HiveClientCache$CacheableHiveMetaStoreClient
> com.google.common.util.concurrent.UncheckedExecutionException: java.lang.RuntimeException: Unable to instantiate org.apache.hive.hcatalog.common.HiveClientCache$CacheableHiveMetaStoreClient
> 	at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2263)
> 	at com.google.common.cache.LocalCache.get(LocalCache.java:4000)
> 	at com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4789)
> 	at org.apache.hive.hcatalog.common.HiveClientCache.getOrCreate(HiveClientCache.java:227)
> 	at org.apache.hive.hcatalog.common.HiveClientCache.get(HiveClientCache.java:202)
> 	at org.apache.hive.hcatalog.common.HCatUtil.getHiveMetastoreClient(HCatUtil.java:558)
> 	at org.apache.hive.hcatalog.api.HCatClientHMSImpl.initialize(HCatClientHMSImpl.java:823)
> 	at org.apache.hive.hcatalog.api.HCatClient.create(HCatClient.java:71)
> 	at org.apache.falcon.recipe.HiveReplicationRecipeTool.getHiveMetaStoreClient(HiveReplicationRecipeTool.java:140)
> 	at org.apache.falcon.recipe.HiveReplicationRecipeTool.validate(HiveReplicationRecipeTool.java:52)
> 	at org.apache.falcon.recipe.RecipeTool.run(RecipeTool.java:88)
> 	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> 	at org.apache.falcon.recipe.RecipeTool.main(RecipeTool.java:64)
> 	at org.apache.falcon.client.FalconClient.submitRecipe(FalconClient.java:1094)
> 	at org.apache.falcon.cli.FalconCLI.recipeCommand(FalconCLI.java:1065)
> 	at org.apache.falcon.cli.FalconCLI.run(FalconCLI.java:212)
> 	at org.apache.falcon.cli.FalconCLI.main(FalconCLI.java:156)
> Caused by: java.lang.RuntimeException: Unable to instantiate org.apache.hive.hcatalog.common.HiveClientCache$CacheableHiveMetaStoreClient
> 	at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1533)
> 	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:86)
> 	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:132)
> 	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:118)
> 	at org.apache.hive.hcatalog.common.HiveClientCache$5.call(HiveClientCache.java:230)
> 	at org.apache.hive.hcatalog.common.HiveClientCache$5.call(HiveClientCache.java:227)
> 	at com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4792)
> 	at com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3599)
> 	at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2379)
> 	at com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2342)
> 	at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2257)
> 	... 16 more
> Caused by: java.lang.reflect.InvocationTargetException
> 	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> 	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
> 	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> 	at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
> 	at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1531)
> 	... 26 more
> Caused by: MetaException(message:Could not connect to meta store using any of the URIs provided. Most recent failure: org.apache.thrift.transport.TTransportException: GSS initiate failed
> 	at org.apache.thrift.transport.TSaslTransport.sendAndThrowMessage(TSaslTransport.java:232)
> 	at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:316)
> 	at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37)
> 	at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:52)
> 	at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:49)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:415)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
> 	at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49)
> 	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:426)
> 	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:236)
> 	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:181)
> 	at org.apache.hive.hcatalog.common.HiveClientCache$CacheableHiveMetaStoreClient.<init>(HiveClientCache.java:330)
> 	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> 	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
> 	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> 	at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
> 	at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1531)
> 	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:86)
> 	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:132)
> 	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:118)
> 	at org.apache.hive.hcatalog.common.HiveClientCache$5.call(HiveClientCache.java:230)
> 	at org.apache.hive.hcatalog.common.HiveClientCache$5.call(HiveClientCache.java:227)
> 	at com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4792)
> 	at com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3599)
> 	at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2379)
> 	at com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2342)
> 	at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2257)
> 	at com.google.common.cache.LocalCache.get(LocalCache.java:4000)
> 	at com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4789)
> 	at org.apache.hive.hcatalog.common.HiveClientCache.getOrCreate(HiveClientCache.java:227)
> 	at org.apache.hive.hcatalog.common.HiveClientCache.get(HiveClientCache.java:202)
> 	at org.apache.hive.hcatalog.common.HCatUtil.getHiveMetastoreClient(HCatUtil.java:558)
> 	at org.apache.hive.hcatalog.api.HCatClientHMSImpl.initialize(HCatClientHMSImpl.java:823)
> 	at org.apache.hive.hcatalog.api.HCatClient.create(HCatClient.java:71)
> 	at org.apache.falcon.recipe.HiveReplicationRecipeTool.getHiveMetaStoreClient(HiveReplicationRecipeTool.java:140)
> 	at org.apache.falcon.recipe.HiveReplicationRecipeTool.validate(HiveReplicationRecipeTool.java:52)
> 	at org.apache.falcon.recipe.RecipeTool.run(RecipeTool.java:88)
> 	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> 	at org.apache.falcon.recipe.RecipeTool.main(RecipeTool.java:64)
> 	at org.apache.falcon.client.FalconClient.submitRecipe(FalconClient.java:1094)
> 	at org.apache.falcon.cli.FalconCLI.recipeCommand(FalconCLI.java:1065)
> 	at org.apache.falcon.cli.FalconCLI.run(FalconCLI.java:212)
> 	at org.apache.falcon.cli.FalconCLI.main(FalconCLI.java:156)
> )
> 	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:472)
> 	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:236)
> 	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:181)
> 	at org.apache.hive.hcatalog.common.HiveClientCache$CacheableHiveMetaStoreClient.<init>(HiveClientCache.java:330)
> 	... 31 more
> ERROR: java.lang.RuntimeException: Unable to instantiate org.apache.hive.hcatalog.common.HiveClientCache$CacheableHiveMetaStoreClient
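
The change described by the issue can be sketched as a small addition to falcon-config.sh. This is an illustrative sketch, not the actual FALCON-1925 patch: the variable name `FALCONCPPATH` and the helper function `append_hadoop_classpath` are assumptions, and the snippet guards on the `hadoop` command so hosts without Hadoop installed are unaffected.

```shell
#!/bin/sh
# Hypothetical sketch of the falcon-config.sh change for FALCON-1925:
# append the output of `hadoop classpath` to the Falcon client classpath
# so the client can load Hive metastore / Hadoop security classes.

append_hadoop_classpath() {
  # $1: current client classpath; prints the (possibly) augmented classpath.
  falcon_cp="$1"
  if command -v hadoop >/dev/null 2>&1; then
    # `hadoop classpath` prints the colon-separated Hadoop classpath.
    falcon_cp="${falcon_cp}:$(hadoop classpath)"
  fi
  printf '%s\n' "${falcon_cp}"
}

# Illustrative use inside falcon-config.sh (FALCONCPPATH is an assumed name):
# FALCONCPPATH=$(append_hadoop_classpath "${FALCONCPPATH}")
```

With the Hadoop jars on the client classpath, the `falcon recipe` command can instantiate the Hive metastore client and complete Kerberos (GSS) negotiation instead of failing as shown in the trace above.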



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
