spark-issues mailing list archives

From "xinzhang (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (SPARK-21725) spark thriftserver insert overwrite table partition select
Date Wed, 01 Nov 2017 14:37:00 GMT

    [ https://issues.apache.org/jira/browse/SPARK-21725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16234149#comment-16234149 ]

xinzhang commented on SPARK-21725:
----------------------------------

I can't believe it. I built Hadoop 2.8 last night and the problem still appears. I think the issues here are relevant:
[https://issues.apache.org/jira/browse/SPARK-21067]
[https://stackoverflow.com/questions/44233523/spark-sql-2-1-1-thrift-server-unable-to-move-source-hdfs-to-target]
[https://issues.apache.org/jira/browse/SPARK-11083]

My environment is CentOS 6.5 with JVM 8. And to be honest, I still cannot believe you could not reproduce it!
Now we use the Thrift server from Spark 1.6 and it is OK. I tried all of the 2.x releases. I am curious what the difference is between your environment and mine.
Would you give me some suggestions on what I should check in my environment?

> spark thriftserver insert overwrite table partition select 
> -----------------------------------------------------------
>
>                 Key: SPARK-21725
>                 URL: https://issues.apache.org/jira/browse/SPARK-21725
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 2.1.0
>         Environment: centos 6.7 spark 2.1  jdk8
>            Reporter: xinzhang
>            Priority: Major
>              Labels: spark-sql
>
> Use the Thrift server to create tables with partitions.
> session 1:
>  SET hive.default.fileformat=Parquet;create table tmp_10(count bigint) partitioned by (pt string) stored as parquet;
> --ok
>  !exit
> session 2:
>  SET hive.default.fileformat=Parquet;create table tmp_11(count bigint) partitioned by (pt string) stored as parquet;
> --ok
>  !exit
> session 3:
> --connect the thriftserver
> SET hive.default.fileformat=Parquet;insert overwrite table tmp_10 partition(pt='1') select count(1) count from tmp_11;
> --ok
>  !exit
> session 4(do it again):
> --connect the thriftserver
> SET hive.default.fileformat=Parquet;insert overwrite table tmp_10 partition(pt='1') select count(1) count from tmp_11;
> --error
>  !exit
> -------------------------------------------------------------------------------------
> 17/08/14 18:13:42 ERROR SparkExecuteStatementOperation: Error executing query, currentState RUNNING,
> java.lang.reflect.InvocationTargetException
> ......
> ......
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Unable to move source hdfs://dc-hadoop54:50001/group/user/user1/meta/hive-temp-table/user1.db/tmp_11/.hive-staging_hive_2017-08-14_18-13-39_035_6303339779053512282-2/-ext-10000/part-00000 to destination hdfs://dc-hadoop54:50001/group/user/user1/meta/hive-temp-table/user1.db/tmp_11/pt=1/part-00000
>         at org.apache.hadoop.hive.ql.metadata.Hive.moveFile(Hive.java:2644)
>         at org.apache.hadoop.hive.ql.metadata.Hive.copyFiles(Hive.java:2711)
>         at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1403)
>         at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1324)
>         ... 45 more
> Caused by: java.io.IOException: Filesystem closed
> ....
> -------------------------------------------------------------------------------------
> The documentation describing Parquet tables is here: http://spark.apache.org/docs/latest/sql-programming-guide.html#parquet-files
> Hive metastore Parquet table conversion
> When reading from and writing to Hive metastore Parquet tables, Spark SQL will try to use its own Parquet support instead of Hive SerDe for better performance. This behavior is controlled by the spark.sql.hive.convertMetastoreParquet configuration, and is turned on by default.
> I am confused: the problem appears with partitioned tables but is fine with unpartitioned tables. Does that mean Spark is not using its own Parquet support here?
> Could someone suggest how I could avoid this issue?
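A hedged follow-up to the question above (an editorial sketch, not a confirmed fix): one quick check is to re-run the statement that failed in session 4 in a fresh Thrift server session with the conversion described in the quoted documentation turned off, so the write goes through the Hive SerDe path instead of Spark's built-in Parquet support. The table names are the ones from the reproduction steps.

-- new beeline session against the Thrift server
-- disable Spark's built-in Parquet conversion for this session (sketch, not a confirmed fix)
SET spark.sql.hive.convertMetastoreParquet=false;
-- re-run the statement that failed in session 4
insert overwrite table tmp_10 partition(pt='1') select count(1) count from tmp_11;

If the INSERT OVERWRITE succeeds with the conversion disabled, that would point at the converted-Parquet write path for partitioned tables rather than at the Hive SerDe path.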





