hudi-commits mailing list archives

From GitBox <...@apache.org>
Subject [GitHub] [incubator-hudi] haospotai opened a new issue #1284: [SUPPORT]
Date Mon, 27 Jan 2020 09:13:03 GMT
haospotai opened a new issue #1284: [SUPPORT]
URL: https://github.com/apache/incubator-hudi/issues/1284
 
 
   **PySpark client fails to sync table to Hive**

   Writing a DataFrame to a Hudi table from PySpark with Hive sync enabled fails during the Hive sync step with a `MetaStoreFilterHook` class-loading error (full stacktrace below).
   
   ```
   df = self.spark.read.json(data_hdfs)
   df.write.format("org.apache.hudi") \
       .option("hoodie.datasource.write.precombine.field", "uuid") \
       .option("hoodie.table.name", tablename) \
       .option("hoodie.datasource.write.keygenerator.class", "org.apache.hudi.NonpartitionedKeyGenerator") \
       .option("hoodie.datasource.hive_sync.partition_extractor_class", "org.apache.hudi.hive.NonPartitionedExtractor") \
       .option("hoodie.datasource.hive_sync.database", "default") \
       .option("hoodie.datasource.hive_sync.enable", "true") \
       .option("hoodie.datasource.hive_sync.table", tablename) \
       .option("hoodie.datasource.hive_sync.jdbcurl", os.environ['HIVE_JDBC_URL']) \
       .option("hoodie.datasource.hive_sync.username", os.environ['HIVE_USER']) \
       .option("hoodie.datasource.hive_sync.password", os.environ['HIVE_PASSWORD']) \
       .mode("overwrite") \
       .save(self.host + self.hive_base_path)
   ```
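   The report does not show how the SparkSession is created or which Hudi jars are on the classpath, which matters for this error. For context, a minimal sketch of a session pulling the Hudi Spark bundle from Maven (the app name and the use of `spark.jars.packages` are assumptions; the bundle coordinates correspond to the 0.5.0 incubating release, and Kryo is the serializer the Hudi docs recommend for Spark writes):

   ```python
   from pyspark.sql import SparkSession

   # Hypothetical session setup -- not part of the original report.
   spark = (SparkSession.builder
            .appName("hudi-hive-sync")  # assumed name
            .config("spark.jars.packages",
                    "org.apache.hudi:hudi-spark-bundle:0.5.0-incubating")
            .config("spark.serializer",
                    "org.apache.spark.serializer.KryoSerializer")
            .getOrCreate())
   ```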
   
   
   **Expected behavior**
   Use PySpark for ETL and sync the table to Hive without failure.
   
   **Environment Description**
   
   * Hudi version: release-0.5.0
   
   * Running on Docker? (yes/no): yes
   
   **Additional context**
   
   
   **Stacktrace**
   
   ```
   py4j.protocol.Py4JJavaError: An error occurred while calling o64.save.
   : java.lang.RuntimeException: java.lang.RuntimeException: class org.apache.hadoop.hive.metastore.DefaultMetaStoreFilterHookImpl not org.apache.hudi.org.apache.hadoop_hive.metastore.MetaStoreFilterHook
   	at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2227)
   	at org.apache.hudi.org.apache.hadoop_hive.metastore.HiveMetaStoreClient.loadFilterHooks(HiveMetaStoreClient.java:247)
   	at org.apache.hudi.org.apache.hadoop_hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:142)
   	at org.apache.hudi.org.apache.hadoop_hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:128)
   	at org.apache.hudi.hive.HoodieHiveClient.<init>(HoodieHiveClient.java:109)
   	at org.apache.hudi.hive.HiveSyncTool.<init>(HiveSyncTool.java:60)
   	at org.apache.hudi.HoodieSparkSqlWriter$.syncHive(HoodieSparkSqlWriter.scala:235)
   	at org.apache.hudi.HoodieSparkSqlWriter$.write(HoodieSparkSqlWriter.scala:169)
   	at org.apache.hudi.DefaultSource.createRelation(DefaultSource.scala:91)
   	at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:45)
   	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
   	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
   	at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:86)
   	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
   	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
   	at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
   	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
   	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
   	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
   	at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:80)
   	at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:80)
   	at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:654)
   	at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:654)
   	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
   	at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:654)
   	at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:273)
   	at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:267)
   	at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:225)
   	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   	at java.lang.reflect.Method.invoke(Method.java:498)
   	at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
   	at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
   	at py4j.Gateway.invoke(Gateway.java:282)
   	at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
   	at py4j.commands.CallCommand.execute(CallCommand.java:79)
   	at py4j.GatewayConnection.run(GatewayConnection.java:238)
   	at java.lang.Thread.run(Thread.java:748)
   Caused by: java.lang.RuntimeException: class org.apache.hadoop.hive.metastore.DefaultMetaStoreFilterHookImpl not org.apache.hudi.org.apache.hadoop_hive.metastore.MetaStoreFilterHook
   	at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2221)
   	... 38 more
   ```
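   The trace itself points at the failure mode: Hudi's bundled Hive metastore client is shaded (relocated to `org.apache.hudi.org.apache.hadoop_hive.metastore.*`), and Hadoop's `Configuration.getClass` rejects the unshaded `DefaultMetaStoreFilterHookImpl` (likely picked up from `hive.metastore.filter.hook` in the cluster's `hive-site.xml`) because it does not implement the relocated `MetaStoreFilterHook` interface. One way to see which jar each side of the conflict resolves from is to ask the JVM through py4j; a diagnostic sketch (the `jar_of` helper is hypothetical, not from the report):

   ```python
   # Hypothetical diagnostic -- run in the same PySpark session that fails.
   def jar_of(spark, class_name):
       """Return the code-source location (jar path) of a JVM class."""
       clazz = spark._jvm.java.lang.Class.forName(class_name)
       src = clazz.getProtectionDomain().getCodeSource()
       return src.getLocation().toString() if src else "<bootstrap>"

   print(jar_of(spark, "org.apache.hadoop.hive.metastore.DefaultMetaStoreFilterHookImpl"))
   print(jar_of(spark, "org.apache.hudi.org.apache.hadoop_hive.metastore.MetaStoreFilterHook"))
   ```

   If the two classes resolve from jars built against different Hive versions, aligning the cluster's Hive jars with what the Hudi 0.5.0 bundle was compiled against is the usual direction to investigate.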
   
   

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services
