incubator-drill-user mailing list archives

From "XUE, Xiaohui" <xiaohui....@sap.com>
Subject RE: show tables results in query failure
Date Mon, 25 Aug 2014 15:51:34 GMT
Hi Venki,

Thanks for the help. I have copied hive-exec-0.13 and this seems to resolve the issue; I'm
now able to show the table list.

However, I still cannot query the tables (rough sketches of the statements I'm running follow the
two points below):
1. When I try to do a SELECT from a table whose data is in a CSV file, I get a
"java.lang.IllegalArgumentException: Wrong FS" exception. The full exception is below. I see the
same kind of error in https://issues.apache.org/jira/browse/DRILL-1172, but that fix seems to be
integrated in 0.4 already?
2. When I try to do a SELECT from a table stored as Parquet, sqlline freezes and I find a
"java.lang.NoSuchFieldError: doubleTypeInfo" error in the log (the full log is also below). The
error comes from a class in the hive-exec jar that I copied from my Hive installation. Since I can
query the same table from Hive, is it possible that the versions are not compatible?
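
The statements look roughly like this (the Parquet table name is only a placeholder; the schema
and CSV table names are the ones that appear in the first trace below):

    USE hive.smartmeter200m;
    -- 1. CSV-backed table: this one fails with the "Wrong FS" exception
    SELECT * FROM weekdays_csv LIMIT 10;
    -- 2. Parquet-backed table (placeholder name): this one freezes sqlline
    SELECT * FROM my_parquet_table LIMIT 10;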

Thanks,
Xiaohui 

2014-08-25 14:31:34,414 [6a0ea065-4573-4947-8957-1a64837c4591:foreman] ERROR o.a.drill.exec.work.foreman.Foreman
- Error d3dc25d6-f21e-4237-bdda-648f5ec76ce9: Failure while setting up Foreman.
java.lang.IllegalArgumentException: Wrong FS: hdfs://dewdflhana1579.emea.global.corp.sap:8020/apps/hive/warehouse/smartmeter200m.db/weekdays_csv,
expected: file:///
	at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:390) ~[hadoop-core-1.2.1.jar:na]
	at org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:55) ~[hadoop-core-1.2.1.jar:na]
	at org.apache.hadoop.fs.LocalFileSystem.pathToFile(LocalFileSystem.java:61) ~[hadoop-core-1.2.1.jar:na]
	at org.apache.hadoop.fs.LocalFileSystem.exists(LocalFileSystem.java:51) ~[hadoop-core-1.2.1.jar:na]
	at org.apache.drill.exec.store.hive.HiveScan.getSplits(HiveScan.java:148) ~[drill-storage-hive-core-0.4.0-incubating-SNAPSHOT.jar:0.4.0-incubating-SNAPSHOT]
	at org.apache.drill.exec.store.hive.HiveScan.<init>(HiveScan.java:111) ~[drill-storage-hive-core-0.4.0-incubating-SNAPSHOT.jar:0.4.0-incubating-SNAPSHOT]
	at org.apache.drill.exec.store.hive.HiveStoragePlugin.getPhysicalScan(HiveStoragePlugin.java:75)
~[drill-storage-hive-core-0.4.0-incubating-SNAPSHOT.jar:0.4.0-incubating-SNAPSHOT]
	at org.apache.drill.exec.store.hive.HiveStoragePlugin.getPhysicalScan(HiveStoragePlugin.java:39)
~[drill-storage-hive-core-0.4.0-incubating-SNAPSHOT.jar:0.4.0-incubating-SNAPSHOT]
	at org.apache.drill.exec.store.AbstractStoragePlugin.getPhysicalScan(AbstractStoragePlugin.java:53)
~[drill-java-exec-0.4.0-incubating-SNAPSHOT-rebuffed.jar:0.4.0-incubating-SNAPSHOT]
	at org.apache.drill.exec.planner.logical.DrillTable.getGroupScan(DrillTable.java:54) ~[drill-java-exec-0.4.0-incubating-SNAPSHOT-rebuffed.jar:0.4.0-incubating-SNAPSHOT]
	at org.apache.drill.exec.planner.logical.DrillPushProjIntoScan.onMatch(DrillPushProjIntoScan.java:53)
~[drill-java-exec-0.4.0-incubating-SNAPSHOT-rebuffed.jar:0.4.0-incubating-SNAPSHOT]
	at org.eigenbase.relopt.volcano.VolcanoRuleCall.onMatch(VolcanoRuleCall.java:223) ~[optiq-core-0.9-20140730.000241-5.jar:na]
	at org.eigenbase.relopt.volcano.VolcanoPlanner.findBestExp(VolcanoPlanner.java:661) ~[optiq-core-0.9-20140730.000241-5.jar:na]
	at net.hydromatic.optiq.tools.Programs$RuleSetProgram.run(Programs.java:165) ~[optiq-core-0.9-20140730.000241-5.jar:na]
	at net.hydromatic.optiq.prepare.PlannerImpl.transform(PlannerImpl.java:273) ~[optiq-core-0.9-20140730.000241-5.jar:na]
	at org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.convertToDrel(DefaultSqlHandler.java:145)
~[drill-java-exec-0.4.0-incubating-SNAPSHOT-rebuffed.jar:0.4.0-incubating-SNAPSHOT]
	at org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.getPlan(DefaultSqlHandler.java:126)
~[drill-java-exec-0.4.0-incubating-SNAPSHOT-rebuffed.jar:0.4.0-incubating-SNAPSHOT]
	at org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan(DrillSqlWorker.java:128) ~[drill-java-exec-0.4.0-incubating-SNAPSHOT-rebuffed.jar:0.4.0-incubating-SNAPSHOT]
	at org.apache.drill.exec.work.foreman.Foreman.runSQL(Foreman.java:403) ~[drill-java-exec-0.4.0-incubating-SNAPSHOT-rebuffed.jar:0.4.0-incubating-SNAPSHOT]
	at org.apache.drill.exec.work.foreman.Foreman.run(Foreman.java:219) ~[drill-java-exec-0.4.0-incubating-SNAPSHOT-rebuffed.jar:0.4.0-incubating-SNAPSHOT]
	at org.apache.drill.exec.work.WorkManager$RunnableWrapper.run(WorkManager.java:250) [drill-java-exec-0.4.0-incubating-SNAPSHOT-rebuffed.jar:0.4.0-incubating-SNAPSHOT]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_60]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_60]
	at java.lang.Thread.run(Thread.java:745) [na:1.7.0_60]
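
(A thought on the "Wrong FS" error above: Drill seems to be resolving the table path against the
local filesystem instead of HDFS. I wonder whether I need to point the Hive storage plugin at the
HDFS namenode explicitly, something like the sketch below in the plugin configuration, with
placeholder host names; the metastore port 9083 is just the usual default:)

    {
      "type": "hive",
      "enabled": true,
      "configProps": {
        "hive.metastore.uris": "thrift://<metastore-host>:9083",
        "hive.metastore.sasl.enabled": "false",
        "fs.default.name": "hdfs://<namenode-host>:8020/"
      }
    }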

2014-08-25 14:33:45,232 [WorkManager-7] ERROR o.apache.drill.exec.work.WorkManager - Failure
while running wrapper [Foreman: f0bd0c86-6404-4ae0-b03e-9e7ac8488ac0]
java.lang.NoSuchFieldError: doubleTypeInfo
	at org.apache.hadoop.hive.ql.io.parquet.serde.ArrayWritableObjectInspector.getObjectInspector(ArrayWritableObjectInspector.java:67)
~[hive-exec-0.13.0.2.1.3.0-563.jar:0.13.0.2.1.3.0-563]
	at org.apache.hadoop.hive.ql.io.parquet.serde.ArrayWritableObjectInspector.<init>(ArrayWritableObjectInspector.java:60)
~[hive-exec-0.13.0.2.1.3.0-563.jar:0.13.0.2.1.3.0-563]
	at org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe.initialize(ParquetHiveSerDe.java:113)
~[hive-exec-0.13.0.2.1.3.0-563.jar:0.13.0.2.1.3.0-563]
	at org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:218)
~[hive-metastore-0.12.0.jar:0.12.0]
	at org.apache.hadoop.hive.ql.metadata.Table.getDeserializerFromMetaStore(Table.java:272)
~[drill-hive-exec-shaded-0.4.0-incubating-SNAPSHOT.jar:0.4.0-incubating-SNAPSHOT]
	at org.apache.hadoop.hive.ql.metadata.Table.getDeserializer(Table.java:265) ~[drill-hive-exec-shaded-0.4.0-incubating-SNAPSHOT.jar:0.4.0-incubating-SNAPSHOT]
	at org.apache.hadoop.hive.ql.metadata.Table.getCols(Table.java:597) ~[drill-hive-exec-shaded-0.4.0-incubating-SNAPSHOT.jar:0.4.0-incubating-SNAPSHOT]
	at org.apache.drill.exec.store.hive.schema.DrillHiveTable.getRowType(DrillHiveTable.java:56)
~[drill-storage-hive-core-0.4.0-incubating-SNAPSHOT.jar:0.4.0-incubating-SNAPSHOT]
	at net.hydromatic.optiq.prepare.OptiqCatalogReader.getTableFrom(OptiqCatalogReader.java:94)
~[optiq-core-0.9-20140730.000241-5.jar:na]
	at net.hydromatic.optiq.prepare.OptiqCatalogReader.getTable(OptiqCatalogReader.java:76) ~[optiq-core-0.9-20140730.000241-5.jar:na]
	at net.hydromatic.optiq.prepare.OptiqCatalogReader.getTable(OptiqCatalogReader.java:42) ~[optiq-core-0.9-20140730.000241-5.jar:na]
	at org.eigenbase.sql.validate.EmptyScope.getTableNamespace(EmptyScope.java:67) ~[optiq-core-0.9-20140730.000241-5.jar:na]
	at org.eigenbase.sql.validate.IdentifierNamespace.validateImpl(IdentifierNamespace.java:75)
~[optiq-core-0.9-20140730.000241-5.jar:na]
	at org.eigenbase.sql.validate.AbstractNamespace.validate(AbstractNamespace.java:85) ~[optiq-core-0.9-20140730.000241-5.jar:na]
	at org.eigenbase.sql.validate.SqlValidatorImpl.validateNamespace(SqlValidatorImpl.java:779)
~[optiq-core-0.9-20140730.000241-5.jar:na]
	at org.eigenbase.sql.validate.SqlValidatorImpl.validateQuery(SqlValidatorImpl.java:768) ~[optiq-core-0.9-20140730.000241-5.jar:na]
	at org.eigenbase.sql.validate.SqlValidatorImpl.validateFrom(SqlValidatorImpl.java:2599) ~[optiq-core-0.9-20140730.000241-5.jar:na]
	at org.eigenbase.sql.validate.SqlValidatorImpl.validateSelect(SqlValidatorImpl.java:2802)
~[optiq-core-0.9-20140730.000241-5.jar:na]
	at org.eigenbase.sql.validate.SelectNamespace.validateImpl(SelectNamespace.java:60) ~[optiq-core-0.9-20140730.000241-5.jar:na]
	at org.eigenbase.sql.validate.AbstractNamespace.validate(AbstractNamespace.java:85) ~[optiq-core-0.9-20140730.000241-5.jar:na]
	at org.eigenbase.sql.validate.SqlValidatorImpl.validateNamespace(SqlValidatorImpl.java:779)
~[optiq-core-0.9-20140730.000241-5.jar:na]
	at org.eigenbase.sql.validate.SqlValidatorImpl.validateQuery(SqlValidatorImpl.java:768) ~[optiq-core-0.9-20140730.000241-5.jar:na]
	at org.eigenbase.sql.SqlSelect.validate(SqlSelect.java:208) ~[optiq-core-0.9-20140730.000241-5.jar:na]
	at org.eigenbase.sql.validate.SqlValidatorImpl.validateScopedExpression(SqlValidatorImpl.java:742)
~[optiq-core-0.9-20140730.000241-5.jar:na]
	at org.eigenbase.sql.validate.SqlValidatorImpl.validate(SqlValidatorImpl.java:458) ~[optiq-core-0.9-20140730.000241-5.jar:na]
	at net.hydromatic.optiq.prepare.PlannerImpl.validate(PlannerImpl.java:173) ~[optiq-core-0.9-20140730.000241-5.jar:na]
	at org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.validateNode(DefaultSqlHandler.java:137)
~[drill-java-exec-0.4.0-incubating-SNAPSHOT-rebuffed.jar:0.4.0-incubating-SNAPSHOT]
	at org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.getPlan(DefaultSqlHandler.java:117)
~[drill-java-exec-0.4.0-incubating-SNAPSHOT-rebuffed.jar:0.4.0-incubating-SNAPSHOT]
	at org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan(DrillSqlWorker.java:128) ~[drill-java-exec-0.4.0-incubating-SNAPSHOT-rebuffed.jar:0.4.0-incubating-SNAPSHOT]
	at org.apache.drill.exec.work.foreman.Foreman.runSQL(Foreman.java:403) ~[drill-java-exec-0.4.0-incubating-SNAPSHOT-rebuffed.jar:0.4.0-incubating-SNAPSHOT]
	at org.apache.drill.exec.work.foreman.Foreman.run(Foreman.java:219) ~[drill-java-exec-0.4.0-incubating-SNAPSHOT-rebuffed.jar:0.4.0-incubating-SNAPSHOT]
	at org.apache.drill.exec.work.WorkManager$RunnableWrapper.run(WorkManager.java:250) ~[drill-java-exec-0.4.0-incubating-SNAPSHOT-rebuffed.jar:0.4.0-incubating-SNAPSHOT]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_60]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_60]
	at java.lang.Thread.run(Thread.java:745) [na:1.7.0_60]

-----Original Message-----
From: Venki Korukanti [mailto:venki.korukanti@gmail.com] 
Sent: Friday, August 8, 2014 19:55
To: drill-user@incubator.apache.org
Subject: Re: show tables results in query failure

Hi,

The Parquet SerDe in Hive 0.12 is not native, so you must have a downloaded
Parquet SerDe jar in the Hive lib directory or elsewhere in the Hive classpath.
Drill is not able to find that class in its own classpath, which is why it
throws the exception. However, for some reason the error message got truncated
(it is missing the ClassNotFoundException part). To resolve this, copy the
Parquet SerDe jar to the Drill lib folder.
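
Something along these lines, with placeholder paths and a placeholder jar name, since I don't know
your exact layout:

    cp /path/to/hive/lib/<parquet-hive-serde>.jar <drill_home>/lib/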

Thanks
Venki


On Fri, Aug 8, 2014 at 8:55 AM, XUE, Xiaohui <xiaohui.xue@sap.com> wrote:

> Hi,
>
> I'm trying to connect to Hive from Apache Drill. I have successfully
> connected to the metastore, as the command "show schemas;" lists all my
> Hive schemas.
>
> I have also successfully changed my schema to "hive.`default`", but the
> following command "show tables;" throws the exception below:
>
> Query failed: Failure while running fragment.
> org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat
> [aceb5ec8-0168-4c63-ac81-fa8fad9aed1d]
>
> Error: exception while executing query: Failure while trying to get next
> result batch. (state=,code=0)
>
> Any hints as to what could cause the issue? I have browsed my metastore and
> all tables seem to be correctly defined. My metastore is MySQL.
>
> Thanks,
> Xiaohui
>