hive-issues mailing list archives

From "Chengxiang Li (JIRA)" <>
Subject [jira] [Commented] (HIVE-10073) Runtime exception when querying HBase with Spark [Spark Branch]
Date Thu, 26 Mar 2015 06:04:52 GMT


Chengxiang Li commented on HIVE-10073:

Hi, [~jxiang], I saw you only call checkOutputSpecs for ReduceWork, but a map-only job may contain a FileSinkOperator as well, so we may need to call checkOutputSpecs for MapWork too. Besides, checkOutputSpecs is currently invoked in SparkRecordHandler::init, which executes once per task. SparkPlanGenerator::generate(BaseWork work) may be a better place for it: we could call checkOutputSpecs between cloning the jobconf and serializing it, so the check runs only once, on the RSC.
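To illustrate the suggestion, here is a minimal, self-contained Java sketch of the intended control flow. The class and method names (OutputSpecSketch, generatePlan, runTask) are hypothetical stand-ins, not Hive code; the point is only that the output-spec check happens once at plan generation, before the jobconf is serialized for tasks, rather than in every task's init.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;

class OutputSpecSketch {
    // Counts how many times the output-spec check actually runs.
    static final AtomicInteger checks = new AtomicInteger();

    // Stand-in for OutputFormat.checkOutputSpecs: fail fast if the
    // output table is missing (the error seen in this issue).
    static void checkOutputSpecs(Map<String, String> jobConf) {
        if (!jobConf.containsKey("hbase.mapred.outputtable")) {
            throw new IllegalArgumentException("Must specify table name");
        }
        checks.incrementAndGet();
    }

    // Stand-in for SparkPlanGenerator::generate, which runs once on the
    // RSC: clone the jobconf, check output specs, then serialize.
    static void generatePlan(Map<String, String> jobConf, int numTasks) {
        Map<String, String> cloned = new HashMap<>(jobConf); // clone jobconf
        checkOutputSpecs(cloned);              // validate once, before serializing
        String serialized = cloned.toString(); // stand-in for serialization
        for (int i = 0; i < numTasks; i++) {
            runTask(serialized);               // tasks reuse the validated conf
        }
    }

    // Stand-in for SparkRecordHandler::init: no longer re-checks specs here.
    static void runTask(String serializedConf) {
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        conf.put("hbase.mapred.outputtable", "t1");
        generatePlan(conf, 4);
        System.out.println("checks=" + checks.get());
    }
}
```

With four tasks, the check still runs exactly once, which is the whole point of moving it out of per-task init.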

> Runtime exception when querying HBase with Spark [Spark Branch]
> ---------------------------------------------------------------
>                 Key: HIVE-10073
>                 URL:
>             Project: Hive
>          Issue Type: Bug
>          Components: Spark
>    Affects Versions: spark-branch
>            Reporter: Jimmy Xiang
>            Assignee: Jimmy Xiang
>             Fix For: spark-branch
>         Attachments: HIVE-10073.1-spark.patch
> When querying HBase with Spark, we got 
> {noformat}
>  Caused by: java.lang.IllegalArgumentException: Must specify table name
> at org.apache.hadoop.hbase.mapreduce.TableOutputFormat.setConf(
> at org.apache.hadoop.util.ReflectionUtils.setConf(
> at org.apache.hadoop.util.ReflectionUtils.newInstance(
> at
> at
> at org.apache.hadoop.hive.ql.exec.FileSinkOperator.initializeOp(
> {noformat}
> But it works fine for MapReduce.

This message was sent by Atlassian JIRA
