hive-user mailing list archives

From Thejas Nair <thejas.n...@gmail.com>
Subject Re: javax.jdo.JDODataStoreException: Required table missing : "VERSION" in Catalog "" Schema "". DataNucleus requires this table to perform its persistence operations. Either your MetaData is incorrect, or you need to enable "datanucleus.schema.autoCreateTables"
Date Fri, 19 Aug 2016 17:51:43 GMT
As the error message indicates, please run "schematool -initSchema
-dbType .." to create the proper metastore schema in your backing database.
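For anyone hitting this on a fresh install, a minimal sketch of the fix, assuming a MySQL-backed metastore and that HIVE_HOME points at the Hive 2.1.0 install (substitute derby, postgres, mssql, or oracle for -dbType as appropriate for your setup):

```shell
# Initialize the metastore schema in the database named by
# javax.jdo.option.ConnectionURL in hive-site.xml.
# -dbType must match the actual backing database.
$HIVE_HOME/bin/schematool -dbType mysql -initSchema

# Verify the schema (including the VERSION table) now exists.
$HIVE_HOME/bin/schematool -dbType mysql -info
```

As the error text also notes, if the backing database itself does not exist yet, the JDBC connection string can auto-create it on MySQL, e.g. `jdbc:mysql://localhost:3306/metastore?createDatabaseIfNotExist=true` (the database name "metastore" here is only an example).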


On Fri, Aug 19, 2016 at 4:53 AM, شجاع الرحمن بیگ <shujamughal@gmail.com>
wrote:

> Hey,
>
> Could you please help me resolve this error?
> Version: Hive 2.1.0
>
> ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console.
> 13:41:37.848 [main] ERROR hive.ql.metadata.Hive - Cannot initialize metastore due to autoCreate error
> javax.jdo.JDODataStoreException: Required table missing : "VERSION" in Catalog "" Schema "". DataNucleus requires this table to perform its persistence operations. Either your MetaData is incorrect, or you need to enable "datanucleus.schema.autoCreateTables"
>     at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:553) ~[datanucleus-api-jdo-4.2.1.jar:?]
>     at org.datanucleus.api.jdo.JDOPersistenceManager.jdoMakePersistent(JDOPersistenceManager.java:720) ~[datanucleus-api-jdo-4.2.1.jar:?]
>     at org.datanucleus.api.jdo.JDOPersistenceManager.makePersistent(JDOPersistenceManager.java:740) ~[datanucleus-api-jdo-4.2.1.jar:?]
>     at org.apache.hadoop.hive.metastore.ObjectStore.setMetaStoreSchemaVersion(ObjectStore.java:7763) ~[hive-exec-2.1.0.jar:2.1.0]
>     at org.apache.hadoop.hive.metastore.ObjectStore.checkSchema(ObjectStore.java:7657) ~[hive-exec-2.1.0.jar:2.1.0]
>     at org.apache.hadoop.hive.metastore.ObjectStore.verifySchema(ObjectStore.java:7632) ~[hive-exec-2.1.0.jar:2.1.0]
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.7.0_79]
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[?:1.7.0_79]
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.7.0_79]
>     at java.lang.reflect.Method.invoke(Method.java:606) ~[?:1.7.0_79]
>     at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:101) ~[hive-exec-2.1.0.jar:2.1.0]
>     at com.sun.proxy.$Proxy11.verifySchema(Unknown Source) ~[?:?]
>     at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:547) ~[hive-exec-2.1.0.jar:2.1.0]
>     at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:612) ~[hive-exec-2.1.0.jar:2.1.0]
>     at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:398) ~[hive-exec-2.1.0.jar:2.1.0]
>     at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:78) ~[hive-exec-2.1.0.jar:2.1.0]
>     at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:84) ~[hive-exec-2.1.0.jar:2.1.0]
>     at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:6396) ~[hive-exec-2.1.0.jar:2.1.0]
>     at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:236) ~[hive-exec-2.1.0.jar:2.1.0]
>     at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:70) ~[hive-exec-2.1.0.jar:2.1.0]
>     at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:1.7.0_79]
>     at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) ~[?:1.7.0_79]
>     at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:1.7.0_79]
>     at java.lang.reflect.Constructor.newInstance(Constructor.java:526) ~[?:1.7.0_79]
>     at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1625) ~[hive-exec-2.1.0.jar:2.1.0]
>     at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:80) ~[hive-exec-2.1.0.jar:2.1.0]
>     at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:130) ~[hive-exec-2.1.0.jar:2.1.0]
>     at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:101) ~[hive-exec-2.1.0.jar:2.1.0]
>     at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3317) ~[hive-exec-2.1.0.jar:2.1.0]
>     at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3356) [hive-exec-2.1.0.jar:2.1.0]
>     at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3336) [hive-exec-2.1.0.jar:2.1.0]
>     at org.apache.hadoop.hive.ql.metadata.Hive.getAllFunctions(Hive.java:3590) [hive-exec-2.1.0.jar:2.1.0]
>     at org.apache.hadoop.hive.ql.metadata.Hive.reloadFunctions(Hive.java:236) [hive-exec-2.1.0.jar:2.1.0]
>     at org.apache.hadoop.hive.ql.metadata.Hive.registerAllFunctionsOnce(Hive.java:221) [hive-exec-2.1.0.jar:2.1.0]
>     at org.apache.hadoop.hive.ql.metadata.Hive.<init>(Hive.java:366) [hive-exec-2.1.0.jar:2.1.0]
>     at org.apache.hadoop.hive.ql.metadata.Hive.create(Hive.java:310) [hive-exec-2.1.0.jar:2.1.0]
>     at org.apache.hadoop.hive.ql.metadata.Hive.getInternal(Hive.java:290) [hive-exec-2.1.0.jar:2.1.0]
>     at org.apache.hadoop.hive.ql.metadata.Hive.get(Hive.java:266) [hive-exec-2.1.0.jar:2.1.0]
>     at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:545) [hive-exec-2.1.0.jar:2.1.0]
>     at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:513) [hive-exec-2.1.0.jar:2.1.0]
>     at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:116) [spark-assembly-1.6.1-hadoop2.6.0.jar:1.6.1]
>     at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala) [spark-assembly-1.6.1-hadoop2.6.0.jar:1.6.1]
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.7.0_79]
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[?:1.7.0_79]
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.7.0_79]
>     at java.lang.reflect.Method.invoke(Method.java:606) ~[?:1.7.0_79]
>     at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731) [spark-assembly-1.6.1-hadoop2.6.0.jar:1.6.1]
>     at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181) [spark-assembly-1.6.1-hadoop2.6.0.jar:1.6.1]
>     at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206) [spark-assembly-1.6.1-hadoop2.6.0.jar:1.6.1]
>     at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121) [spark-assembly-1.6.1-hadoop2.6.0.jar:1.6.1]
>     at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala) [spark-assembly-1.6.1-hadoop2.6.0.jar:1.6.1]
> Caused by: org.datanucleus.store.rdbms.exceptions.MissingTableException: Required table missing : "VERSION" in Catalog "" Schema "". DataNucleus requires this table to perform its persistence operations. Either your MetaData is incorrect, or you need to enable "datanucleus.schema.autoCreateTables"
>     at org.datanucleus.store.rdbms.table.AbstractTable.exists(AbstractTable.java:606) ~[datanucleus-rdbms-4.1.7.jar:?]
>     at org.datanucleus.store.rdbms.RDBMSStoreManager$ClassAdder.performTablesValidation(RDBMSStoreManager.java:3365) ~[datanucleus-rdbms-4.1.7.jar:?]
>     at org.datanucleus.store.rdbms.RDBMSStoreManager$ClassAdder.run(RDBMSStoreManager.java:2877) ~[datanucleus-rdbms-4.1.7.jar:?]
>     at org.datanucleus.store.rdbms.AbstractSchemaTransaction.execute(AbstractSchemaTransaction.java:119) ~[datanucleus-rdbms-4.1.7.jar:?]
>     at org.datanucleus.store.rdbms.RDBMSStoreManager.manageClasses(RDBMSStoreManager.java:1608) ~[datanucleus-rdbms-4.1.7.jar:?]
>     at org.datanucleus.store.rdbms.RDBMSStoreManager.getDatastoreClass(RDBMSStoreManager.java:671) ~[datanucleus-rdbms-4.1.7.jar:?]
>     at org.datanucleus.store.rdbms.RDBMSStoreManager.getPropertiesForGenerator(RDBMSStoreManager.java:2069) ~[datanucleus-rdbms-4.1.7.jar:?]
>     at org.datanucleus.store.AbstractStoreManager.getStrategyValue(AbstractStoreManager.java:1271) ~[datanucleus-core-4.1.6.jar:?]
>     at org.datanucleus.ExecutionContextImpl.newObjectId(ExecutionContextImpl.java:3759) ~[datanucleus-core-4.1.6.jar:?]
>     at org.datanucleus.state.StateManagerImpl.setIdentity(StateManagerImpl.java:2267) ~[datanucleus-core-4.1.6.jar:?]
>     at org.datanucleus.state.StateManagerImpl.initialiseForPersistentNew(StateManagerImpl.java:484) ~[datanucleus-core-4.1.6.jar:?]
>     at org.datanucleus.state.StateManagerImpl.initialiseForPersistentNew(StateManagerImpl.java:120) ~[datanucleus-core-4.1.6.jar:?]
>     at org.datanucleus.state.ObjectProviderFactoryImpl.newForPersistentNew(ObjectProviderFactoryImpl.java:218) ~[datanucleus-core-4.1.6.jar:?]
>     at org.datanucleus.ExecutionContextImpl.persistObjectInternal(ExecutionContextImpl.java:2078) ~[datanucleus-core-4.1.6.jar:?]
>     at org.datanucleus.ExecutionContextImpl.persistObjectWork(ExecutionContextImpl.java:1922) ~[datanucleus-core-4.1.6.jar:?]
>     at org.datanucleus.ExecutionContextImpl.persistObject(ExecutionContextImpl.java:1777) ~[datanucleus-core-4.1.6.jar:?]
>     at org.datanucleus.ExecutionContextThreadedImpl.persistObject(ExecutionContextThreadedImpl.java:217) ~[datanucleus-core-4.1.6.jar:?]
>     at org.datanucleus.api.jdo.JDOPersistenceManager.jdoMakePersistent(JDOPersistenceManager.java:715) ~[datanucleus-api-jdo-4.2.1.jar:?]
>     ... 49 more
> Exception in thread "main" java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:Hive metastore database is not initialized. Please use schematool (e.g. ./schematool -initSchema -dbType ...) to create the schema. If needed, don't forget to include the option to auto-create the underlying database in your JDBC connection string (e.g. ?createDatabaseIfNotExist=true for mysql))
>     at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:578)
>     at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:513)
>     at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:116)
>     at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:606)
>     at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
>     at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
>     at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
>     at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
>     at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:Hive metastore database is not initialized. Please use schematool (e.g. ./schematool -initSchema -dbType ...) to create the schema. If needed, don't forget to include the option to auto-create the underlying database in your JDBC connection string (e.g. ?createDatabaseIfNotExist=true for mysql))
>     at org.apache.hadoop.hive.ql.metadata.Hive.registerAllFunctionsOnce(Hive.java:226)
>     at org.apache.hadoop.hive.ql.metadata.Hive.<init>(Hive.java:366)
>     at org.apache.hadoop.hive.ql.metadata.Hive.create(Hive.java:310)
>     at org.apache.hadoop.hive.ql.metadata.Hive.getInternal(Hive.java:290)
>     at org.apache.hadoop.hive.ql.metadata.Hive.get(Hive.java:266)
>     at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:545)
>     ... 12 more
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:Hive metastore database is not initialized. Please use schematool (e.g. ./schematool -initSchema -dbType ...) to create the schema. If needed, don't forget to include the option to auto-create the underlying database in your JDBC connection string (e.g. ?createDatabaseIfNotExist=true for mysql))
>     at org.apache.hadoop.hive.ql.metadata.Hive.getAllFunctions(Hive.java:3593)
>     at org.apache.hadoop.hive.ql.metadata.Hive.reloadFunctions(Hive.java:236)
>     at org.apache.hadoop.hive.ql.metadata.Hive.registerAllFunctionsOnce(Hive.java:221)
>     ... 17 more
> Caused by: MetaException(message:Hive metastore database is not initialized. Please use schematool (e.g. ./schematool -initSchema -dbType ...) to create the schema. If needed, don't forget to include the option to auto-create the underlying database in your JDBC connection string (e.g. ?createDatabaseIfNotExist=true for mysql))
>     at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3364)
>     at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3336)
>     at org.apache.hadoop.hive.ql.metadata.Hive.getAllFunctions(Hive.java:3590)
>     ... 19 more
>
>
> Thanks
> Shuja
>
> --
> Regards
> Shuja-ur-Rehman Baig
> <http://pk.linkedin.com/in/shujamughal>
>
