sentry-commits mailing list archives

From "Sravya Tirukkovalur (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (SENTRY-1044) Tables with non-HDFS locations break HMS startup
Date Tue, 02 Feb 2016 03:11:40 GMT

    [ https://issues.apache.org/jira/browse/SENTRY-1044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15127567#comment-15127567 ]

Sravya Tirukkovalur commented on SENTRY-1044:
---------------------------------------------

+1. LGTM. Thanks for the patch [~qwertymaniac]!

> Tables with non-HDFS locations break HMS startup
> ------------------------------------------------
>
>                 Key: SENTRY-1044
>                 URL: https://issues.apache.org/jira/browse/SENTRY-1044
>             Project: Sentry
>          Issue Type: Bug
>          Components: Hdfs Plugin
>    Affects Versions: 1.6.0
>            Reporter: Harsh J
>            Priority: Critical
>         Attachments: SENTRY-1044.patch, SENTRY-1044.patch
>
>
> To reproduce, create any table with a location on s3a, file:///, etc., and enable HDFS Sync.
> This causes the plugin to go into an invalid state with the exception below:
> {code}
> ERROR org.apache.sentry.hdfs.MetastorePlugin: [main]: #### Could not create Initial AuthzPaths or HMSHandler !!
> java.lang.IllegalArgumentException: pathElements cannot be NULL
> at com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
> at org.apache.sentry.hdfs.HMSPaths$Entry.findPrefixEntry(HMSPaths.java:250)
> at org.apache.sentry.hdfs.HMSPaths$Entry.createAuthzObjPath(HMSPaths.java:197)
> at org.apache.sentry.hdfs.HMSPaths.addAuthzObject(HMSPaths.java:354)
> at org.apache.sentry.hdfs.HMSPaths.addPathsToAuthzObject(HMSPaths.java:388)
> at org.apache.sentry.hdfs.UpdateableAuthzPaths.applyPartialUpdate(UpdateableAuthzPaths.java:112)
> at org.apache.sentry.hdfs.UpdateableAuthzPaths.updatePartial(UpdateableAuthzPaths.java:74)
> at org.apache.sentry.hdfs.MetastoreCacheInitializer.createInitialUpdate(MetastoreCacheInitializer.java:245)
> at org.apache.sentry.hdfs.MetastorePlugin$1.run(MetastorePlugin.java:160)
> at org.apache.sentry.hdfs.MetastorePlugin.<init>(MetastorePlugin.java:197)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
> at org.apache.sentry.binding.metastore.SentryMetastorePostEventListener.<init>(SentryMetastorePostEventListener.java:78)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
> at org.apache.hadoop.hive.metastore.MetaStoreUtils.getMetaStoreListeners(MetaStoreUtils.java:1439)
> at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:485)
> at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:78)
> at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:84)
> at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5775)
> at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5770)
> at org.apache.hadoop.hive.metastore.HiveMetaStore.startMetaStore(HiveMetaStore.java:6022)
> at org.apache.hadoop.hive.metastore.HiveMetaStore.main(HiveMetaStore.java:5947)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:497)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> {code}
> We check parsed paths for nulls before adding them to the PathsUpdate object in DbTask
> and PartitionTask, but this check was accidentally omitted in TableTask.
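
To make the missing guard concrete, here is a minimal, hypothetical Java sketch (not the actual Sentry source). The parsePath method below is a stand-in for the real path-parsing helper and is assumed to return null for locations whose scheme is not HDFS; passing that null straight into the authz paths update is what produces the "pathElements cannot be NULL" failure above, while the null check mirrors what DbTask and PartitionTask already do:

{code}
// Hypothetical illustration only -- not the actual Sentry implementation.
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class TableTaskSketch {

  // Stand-in for the real path parser; assumed to return null for
  // non-HDFS schemes such as s3a:// or file:///.
  static List<String> parsePath(String location) {
    if (location == null || !location.startsWith("hdfs://")) {
      return null;
    }
    // Drop the scheme and authority, then split the path into elements.
    String path = location.replaceFirst("^hdfs://[^/]+", "");
    return Arrays.asList(path.replaceAll("^/+", "").split("/"));
  }

  // The guard applied in DbTask/PartitionTask and missing in TableTask:
  // skip tables whose locations do not parse instead of adding null.
  static void addTablePaths(String tableLocation, List<List<String>> update) {
    List<String> pathElements = parsePath(tableLocation);
    if (pathElements != null) {
      update.add(pathElements);
    }
  }

  public static void main(String[] args) {
    List<List<String>> update = new ArrayList<>();
    addTablePaths("hdfs://nn:8020/warehouse/t1", update); // added
    addTablePaths("s3a://bucket/warehouse/t2", update);   // skipped, no NPE-style failure
    System.out.println(update); // [[warehouse, t1]]
  }
}
{code}

With a guard like this in TableTask, tables on non-HDFS locations would presumably just be left out of the initial authz paths update rather than aborting HMSHandler initialization.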



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
