hive-user mailing list archives

From Jamal B <jm151...@gmail.com>
Subject Re: FAILED: Error in semantic analysis: Unable to fetch table test_table
Date Tue, 04 Jun 2013 02:38:40 GMT
It's weird.  If I use tinyint, the following exception is thrown when
attempting to create a table:

NestedThrowablesStackTrace:
org.h2.jdbc.JdbcSQLException: Data conversion error converting "'N' (SDS: IS_COMPRESSED TINYINT NOT NULL)"; SQL statement:
INSERT INTO SDS (SD_ID,INPUT_FORMAT,IS_COMPRESSED,SERDE_ID,OUTPUT_FORMAT,LOCATION,NUM_BUCKETS) VALUES (?,?,?,?,?,?,?) -- (?1, ?2, ?3, ?4, ?5, ?6, ?7) [22018-165]
        at org.h2.message.DbException.getJdbcSQLException(DbException.java:329)
        at org.h2.message.DbException.get(DbException.java:169)
        at org.h2.message.DbException.get(DbException.java:146)
        at org.h2.table.Column.convert(Column.java:147)
        at org.h2.command.dml.Insert.insertRows(Insert.java:112)
        at org.h2.command.dml.Insert.update(Insert.java:84)
        at org.h2.command.CommandContainer.update(CommandContainer.java:73)
        at org.h2.command.Command.executeUpdate(Command.java:226)
        at org.h2.server.TcpServerThread.process(TcpServerThread.java:325)
        at org.h2.server.TcpServerThread.run(TcpServerThread.java:146)
        at java.lang.Thread.run(Thread.java:679)

        at org.h2.engine.SessionRemote.done(SessionRemote.java:565)
        at org.h2.command.CommandRemote.executeUpdate(CommandRemote.java:183)
        at org.h2.jdbc.JdbcPreparedStatement.executeUpdateInternal(JdbcPreparedStatement.java:143)
        at org.h2.jdbc.JdbcPreparedStatement.executeUpdate(JdbcPreparedStatement.java:129)
        at org.apache.commons.dbcp.DelegatingPreparedStatement.executeUpdate(DelegatingPreparedStatement.java:105)
        at org.apache.commons.dbcp.DelegatingPreparedStatement.executeUpdate(DelegatingPreparedStatement.java:105)
        at org.datanucleus.store.rdbms.SQLController.executeStatementUpdate(SQLController.java:396)
        at org.datanucleus.store.rdbms.request.InsertRequest.execute(InsertRequest.java:406)
        at org.datanucleus.store.rdbms.RDBMSPersistenceHandler.insertTable(RDBMSPersistenceHandler.java:146)
        at org.datanucleus.store.rdbms.RDBMSPersistenceHandler.insertObject(RDBMSPersistenceHandler.java:121)
        at org.datanucleus.jdo.state.JDOStateManagerImpl.internalMakePersistent(JDOStateManagerImpl.java:3275)
        at org.datanucleus.jdo.state.JDOStateManagerImpl.makePersistent(JDOStateManagerImpl.java:3249)
        at org.datanucleus.ObjectManagerImpl.persistObjectInternal(ObjectManagerImpl.java:1428)
        at org.datanucleus.store.mapped.mapping.PersistableMapping.setObjectAsValue(PersistableMapping.java:664)
        at org.datanucleus.store.mapped.mapping.PersistableMapping.setObject(PersistableMapping.java:423)
        at org.datanucleus.store.rdbms.fieldmanager.ParameterSetter.storeObjectField(ParameterSetter.java:197)
        at org.datanucleus.state.AbstractStateManager.providedObjectField(AbstractStateManager.java:1023)
        at org.apache.hadoop.hive.metastore.model.MTable.jdoProvideField(MTable.java)
        at org.apache.hadoop.hive.metastore.model.MTable.jdoProvideFields(MTable.java)
        at org.datanucleus.jdo.state.JDOStateManagerImpl.provideFields(JDOStateManagerImpl.java:2803)
        at org.datanucleus.store.rdbms.request.InsertRequest.execute(InsertRequest.java:294)
        at org.datanucleus.store.rdbms.RDBMSPersistenceHandler.insertTable(RDBMSPersistenceHandler.java:146)
        at org.datanucleus.store.rdbms.RDBMSPersistenceHandler.insertObject(RDBMSPersistenceHandler.java:121)
        at org.datanucleus.jdo.state.JDOStateManagerImpl.internalMakePersistent(JDOStateManagerImpl.java:3275)
        at org.datanucleus.jdo.state.JDOStateManagerImpl.makePersistent(JDOStateManagerImpl.java:3249)
        at org.datanucleus.ObjectManagerImpl.persistObjectInternal(ObjectManagerImpl.java:1428)
        at org.datanucleus.ObjectManagerImpl.persistObject(ObjectManagerImpl.java:1241)
        at org.datanucleus.jdo.JDOPersistenceManager.jdoMakePersistent(JDOPersistenceManager.java:655)
        at org.datanucleus.jdo.JDOPersistenceManager.makePersistent(JDOPersistenceManager.java:680)
        at org.apache.hadoop.hive.metastore.ObjectStore.createTable(ObjectStore.java:606)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_core(HiveMetaStore.java:924)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.access$600(HiveMetaStore.java:109)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler$15.run(HiveMetaStore.java:945)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler$15.run(HiveMetaStore.java:942)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.executeWithRetry(HiveMetaStore.java:307)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table(HiveMetaStore.java:942)
        at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$create_table.process(ThriftHiveMetastore.java:5297)
        at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor.process(ThriftHiveMetastore.java:4789)
        at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:253)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:679)



but if I use char(1), which I tried because the error message indicates it's
inserting an 'N' and not a number (0|1), it works like a champ!  Go figure....
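
For the archives, the column change that made the difference in my H2 schema
was along these lines (paraphrased, not my exact DDL):

{code}
-- fails: H2 can't coerce the 'N' that DataNucleus sends into a number
--   IS_COMPRESSED TINYINT NOT NULL
-- works: store the single character as-is
ALTER TABLE SDS ALTER COLUMN IS_COMPRESSED CHAR(1) NOT NULL;
{code}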


Thanks for all your help.



On Mon, Jun 3, 2013 at 9:40 PM, Stephen Sprague <spragues@gmail.com> wrote:

> Well, there lies your answer.  So what you might consider doing is
> altering that table column and changing it to tinyint.  That might be just
> enough to trick it, since I think the Java code is expecting either a 0 or
> 1, not 'false'.
>
> Might be worth a try.
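>
> In H2 that would be something like this (an untested sketch; back up the
> metastore db first):
>
> {code}
> ALTER TABLE SDS ALTER COLUMN IS_COMPRESSED TINYINT NOT NULL;
> {code}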
>
>
> On Mon, Jun 3, 2013 at 5:47 PM, Jamal B <jm15119b@gmail.com> wrote:
>
>> It treats it as a boolean.
>>
>> http://www.h2database.com/html/datatypes.html#boolean_type
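>>
>> A quick way to confirm the aliasing (hypothetical scratch table, not the
>> metastore schema):
>>
>> {code}
>> CREATE TABLE bit_test (c BIT NOT NULL);
>> -- TYPE_NAME should come back as BOOLEAN, per the docs above
>> SELECT TYPE_NAME FROM INFORMATION_SCHEMA.COLUMNS
>>  WHERE TABLE_NAME = 'BIT_TEST';
>> {code}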
>>
>>
>> On Mon, Jun 3, 2013 at 8:16 PM, Stephen Sprague <spragues@gmail.com> wrote:
>>
>>> It does.  So it's a bit datatype.  That's a tad non-standard, I'd say.
>>> What does your backend store (H2) do with that?
>>>
>>> {code}
>>> | SDS   | CREATE TABLE `SDS` (
>>>   `SD_ID` bigint(20) NOT NULL,
>>>   `INPUT_FORMAT` varchar(4000) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
>>>   `IS_COMPRESSED` bit(1) NOT NULL,
>>>   `LOCATION` varchar(4000) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
>>>   `NUM_BUCKETS` int(11) NOT NULL,
>>>   `OUTPUT_FORMAT` varchar(4000) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
>>>   `SERDE_ID` bigint(20) DEFAULT NULL,
>>>   PRIMARY KEY (`SD_ID`),
>>>   KEY `SDS_N49` (`SERDE_ID`),
>>>   CONSTRAINT `SDS_FK1` FOREIGN KEY (`SERDE_ID`) REFERENCES `SERDES` (`SERDE_ID`)
>>> ) ENGINE=InnoDB DEFAULT CHARSET=latin1 |
>>> {code}
>>>
>>>
>>> On Mon, Jun 3, 2013 at 4:57 PM, Jamal B <jm15119b@gmail.com> wrote:
>>>
>>>> I gave that a shot, but it didn't work.  Could you run a describe table
>>>> to see if it matches this schema (assuming MySQL):
>>>> --
>>>> -- Table structure for table `SDS`
>>>> --
>>>>
>>>> /*!40101 SET @saved_cs_client     = @@character_set_client */;
>>>> /*!40101 SET character_set_client = utf8 */;
>>>> CREATE TABLE IF NOT EXISTS `SDS` (
>>>>   `SD_ID` bigint(20) NOT NULL,
>>>>   `INPUT_FORMAT` varchar(4000) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
>>>>   `IS_COMPRESSED` bit(1) NOT NULL,
>>>>   `LOCATION` varchar(4000) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
>>>>   `NUM_BUCKETS` int(11) NOT NULL,
>>>>   `OUTPUT_FORMAT` varchar(4000) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
>>>>   `SERDE_ID` bigint(20) DEFAULT NULL,
>>>>   PRIMARY KEY (`SD_ID`),
>>>>   KEY `SDS_N49` (`SERDE_ID`),
>>>>   CONSTRAINT `SDS_FK1` FOREIGN KEY (`SERDE_ID`) REFERENCES `SERDES` (`SERDE_ID`)
>>>> ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
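>>>>
>>>> For the H2 side of the comparison, the rough equivalent of describe
>>>> should be (going from memory of H2's MySQL-compatibility syntax):
>>>>
>>>> {code}
>>>> SHOW COLUMNS FROM SDS;
>>>> {code}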
>>>>
>>>>
>>>>
>>>> On Mon, Jun 3, 2013 at 4:20 PM, Stephen Sprague <spragues@gmail.com> wrote:
>>>>
>>>>> Not that this is any consolation, but on my MySQL instance, same CDH
>>>>> release, I have the following for IS_COMPRESSED where you have 'false':
>>>>>
>>>>> {code}
>>>>> {metastore@etl1 root 13:13}>select SD_ID, is_compressed from SDS limit 10;
>>>>> +-------+---------------+
>>>>> | SD_ID | is_compressed |
>>>>> +-------+---------------+
>>>>> |     1 |               |
>>>>> |     2 |               |
>>>>> |     6 |               |
>>>>> |    11 |               |
>>>>> |    26 |               |
>>>>> |    31 |               |
>>>>> |    36 |               |
>>>>> |    41 |               |
>>>>> |    46 |               |
>>>>> |    47 |               |
>>>>> +-------+---------------+
>>>>> 10 rows in set (0.00 sec)
>>>>> {code}
>>>>>
>>>>> so *maybe* that 'false' value there is something specific to your
>>>>> backend store?  Might you experiment, change it to a null string, and
>>>>> see if that changes anything?  Something to think about anyway.
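>>>>>
>>>>> i.e. something along these lines (a blind sketch; adjust to whatever
>>>>> type the column actually is, and back up the metastore first):
>>>>>
>>>>> {code}
>>>>> UPDATE SDS SET IS_COMPRESSED = '' WHERE IS_COMPRESSED = 'false';
>>>>> {code}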
>>>>>
>>>>>
>>>>>
>>>>> On Mon, Jun 3, 2013 at 12:49 PM, Jamal B <jm15119b@gmail.com> wrote:
>>>>>
>>>>>> Please forgive the cross post, but I could really use some help.
>>>>>>
>>>>>> I have Hive set up using a remote metastore, backed by H2, and am able
>>>>>> to create tables, load data, and query them without issue.  However,
>>>>>> when I restart the remote metastore, I can no longer query previously
>>>>>> created tables.  'show tables' shows them, but when I perform a simple
>>>>>> select ('select * from test_table limit 5') I receive a "FAILED: Error
>>>>>> in semantic analysis: Unable to fetch table test_table" error.  When I
>>>>>> look at the logs in the metastore, I noticed the following exceptions
>>>>>> repeat until the retry limit is exceeded:
>>>>>>
>>>>>> 13/06/03 19:02:06 INFO HiveMetaStore.audit: ugi=rtws ip=unknown-ip-addr cmd=get_table : db=default tbl=test_table
>>>>>> 13/06/03 19:02:06 INFO DataNucleus.MetaData: Listener found initialisation for persistable class org.apache.hadoop.hive.metastore.model.MSerDeInfo
>>>>>> 13/06/03 19:02:06 INFO DataNucleus.MetaData: Listener found initialisation for persistable class org.apache.hadoop.hive.metastore.model.MStorageDescriptor
>>>>>> 13/06/03 19:02:06 INFO DataNucleus.MetaData: Listener found initialisation for persistable class org.apache.hadoop.hive.metastore.model.MTable
>>>>>> 13/06/03 19:02:06 INFO DataNucleus.JDO: Exception thrown
>>>>>> Illegal null value in column SDS.IS_COMPRESSED
>>>>>> org.datanucleus.exceptions.NucleusDataStoreException: Illegal null value in column SDS.IS_COMPRESSED
>>>>>> at org.datanucleus.store.rdbms.mapping.CharRDBMSMapping.getBoolean(CharRDBMSMapping.java:374)
>>>>>> at org.datanucleus.store.mapped.mapping.SingleFieldMapping.getBoolean(SingleFieldMapping.java:122)
>>>>>> at org.datanucleus.store.rdbms.fieldmanager.ResultSetGetter.fetchBooleanField(ResultSetGetter.java:64)
>>>>>> at org.datanucleus.state.AbstractStateManager.replacingBooleanField(AbstractStateManager.java:1038)
>>>>>> at org.apache.hadoop.hive.metastore.model.MStorageDescriptor.jdoReplaceField(MStorageDescriptor.java)
>>>>>> at org.apache.hadoop.hive.metastore.model.MStorageDescriptor.jdoReplaceFields(MStorageDescriptor.java)
>>>>>> at org.datanucleus.jdo.state.JDOStateManagerImpl.replaceFields(JDOStateManagerImpl.java:2860)
>>>>>> at org.datanucleus.store.rdbms.query.PersistentClassROF$2.fetchFields(PersistentClassROF.java:487)
>>>>>> at org.datanucleus.jdo.state.JDOStateManagerImpl.loadFieldValues(JDOStateManagerImpl.java:858)
>>>>>> at org.datanucleus.jdo.state.JDOStateManagerImpl.initialiseForHollow(JDOStateManagerImpl.java:258)
>>>>>> at org.datanucleus.state.StateManagerFactory.newStateManagerForHollowPopulated(StateManagerFactory.java:87)
>>>>>> at org.datanucleus.ObjectManagerImpl.findObject(ObjectManagerImpl.java:2389)
>>>>>> at org.datanucleus.store.rdbms.query.PersistentClassROF.getObjectForDatastoreId(PersistentClassROF.java:481)
>>>>>> at org.datanucleus.store.rdbms.query.PersistentClassROF.getObject(PersistentClassROF.java:366)
>>>>>> at org.datanucleus.store.rdbms.fieldmanager.ResultSetGetter.fetchObjectField(ResultSetGetter.java:144)
>>>>>> at org.datanucleus.state.AbstractStateManager.replacingObjectField(AbstractStateManager.java:1183)
>>>>>> at org.apache.hadoop.hive.metastore.model.MTable.jdoReplaceField(MTable.java)
>>>>>> at org.apache.hadoop.hive.metastore.model.MTable.jdoReplaceFields(MTable.java)
>>>>>> at org.datanucleus.jdo.state.JDOStateManagerImpl.replaceFields(JDOStateManagerImpl.java:2860)
>>>>>> at org.datanucleus.jdo.state.JDOStateManagerImpl.replaceFields(JDOStateManagerImpl.java:2879)
>>>>>> at org.datanucleus.store.rdbms.request.FetchRequest.execute(FetchRequest.java:335)
>>>>>> at org.datanucleus.store.rdbms.RDBMSPersistenceHandler.fetchObject(RDBMSPersistenceHandler.java:240)
>>>>>> at org.datanucleus.jdo.state.JDOStateManagerImpl.loadFieldsFromDatastore(JDOStateManagerImpl.java:1929)
>>>>>> at org.datanucleus.jdo.state.JDOStateManagerImpl.loadUnloadedFields(JDOStateManagerImpl.java:1597)
>>>>>> at org.datanucleus.jdo.state.Hollow.transitionRetrieve(Hollow.java:168)
>>>>>> at org.datanucleus.state.AbstractStateManager.retrieve(AbstractStateManager.java:470)
>>>>>> at org.datanucleus.ObjectManagerImpl.retrieveObject(ObjectManagerImpl.java:1131)
>>>>>> at org.datanucleus.jdo.JDOPersistenceManager.jdoRetrieve(JDOPersistenceManager.java:534)
>>>>>> at org.datanucleus.jdo.JDOPersistenceManager.retrieve(JDOPersistenceManager.java:551)
>>>>>> at org.datanucleus.jdo.JDOPersistenceManager.retrieve(JDOPersistenceManager.java:560)
>>>>>> at org.apache.hadoop.hive.metastore.ObjectStore.getMTable(ObjectStore.java:776)
>>>>>> at org.apache.hadoop.hive.metastore.ObjectStore.getTable(ObjectStore.java:709)
>>>>>> at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler$17.run(HiveMetaStore.java:1076)
>>>>>> at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler$17.run(HiveMetaStore.java:1073)
>>>>>> at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.executeWithRetry(HiveMetaStore.java:307)
>>>>>> at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_table(HiveMetaStore.java:1073)
>>>>>> at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_table.process(ThriftHiveMetastore.java:5457)
>>>>>> at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor.process(ThriftHiveMetastore.java:4789)
>>>>>> at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:253)
>>>>>> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
>>>>>> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>>>>> at java.lang.Thread.run(Thread.java:679)
>>>>>>
>>>>>> What is odd is that when I look at the SDS table, the row for the
>>>>>> table does not contain a null value:
>>>>>>
>>>>>> SD_ID,INPUT_FORMAT,IS_COMPRESSED,LOCATION,NUM_BUCKETS,OUTPUT_FORMAT,SERDE_ID
>>>>>>
>>>>>> 1,org.apache.hadoop.mapred.TextInputFormat,false,hdfs://namenode/tmp/hivedata/stuff,-1,org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat,1
>>>>>>
>>>>>> So I'm guessing it has something to do with the metastore
>>>>>> initialization code, but I'm not able to figure it out.  Here is the
>>>>>> hive-site config section related to the metastore:
>>>>>> <property>
>>>>>>   <name>javax.jdo.option.ConnectionURL</name>
>>>>>>
>>>>>> <value>jdbc:h2:tcp://metastore:8161/metastrdb;SCHEMA_SEARCH_PATH=METASTORE</value>
>>>>>>   <description>JDBC connect string for a JDBC metastore</description>
>>>>>> </property>
>>>>>>
>>>>>> <property>
>>>>>>   <name>javax.jdo.option.ConnectionDriverName</name>
>>>>>>   <value>org.h2.Driver</value>
>>>>>>   <description>Driver class name for a JDBC metastore</description>
>>>>>> </property>
>>>>>>
>>>>>> <property>
>>>>>>   <name>javax.jdo.option.ConnectionUserName</name>
>>>>>>   <value>hiveuser</value>
>>>>>> </property>
>>>>>>
>>>>>> <property>
>>>>>>   <name>javax.jdo.option.ConnectionPassword</name>
>>>>>>   <value>changeme</value>
>>>>>> </property>
>>>>>>
>>>>>> <property>
>>>>>>   <name>datanucleus.autoCreateSchema</name>
>>>>>>   <value>false</value>
>>>>>> </property>
>>>>>>
>>>>>> <property>
>>>>>>   <name>datanucleus.fixedDatastore</name>
>>>>>>   <value>true</value>
>>>>>> </property>
>>>>>>
>>>>>> Attached is the H2 schema used to populate the metastore.  I
>>>>>> translated it from the MySQL version without changing any table/column
>>>>>> names.
>>>>>>
>>>>>>
>>>>>>
>>>>>> I am using Hive 0.7.1 from the CDH3u4 release.  Any help will be
>>>>>> greatly appreciated.
>>>>>>
>>>>>> Thanks
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>
