Hi all,

I need to overwrite data in a Hive table and I use the following code to do so:

df = sqlContext.sql(my_spark_sql_statement)
df.write.format("orc").mode("overwrite").saveAsTable("foo")  # I also tried insertInto("foo")

df.count() shows that there are only 452 records in the result, but "select count(*) from foo" (run in beeline) reports 716 records.

So the final table contains more rows than expected.

Does anyone know why this happens, and what the correct way is to overwrite data in a Hive table with Spark SQL?

I'm using Spark 2.2.
