spark-reviews mailing list archives

From robbyki <...@git.apache.org>
Subject [GitHub] spark issue #16209: [SPARK-10849][SQL] Adds option to the JDBC data source w...
Date Thu, 04 Jan 2018 11:43:02 GMT
Github user robbyki commented on the issue:

    https://github.com/apache/spark/pull/16209
  
    Is there a recommended workaround to achieve exactly this in Spark 2.1? I'm going through
    several resources trying to understand how to maintain a schema created outside of Spark,
    truncate my tables from Spark, and then write with a save mode of overwrite. My problem is
    exactly this issue: my database (Netezza) fails when it sees Spark trying to save a TEXT
    data type, so I have to specify VARCHAR(n) in a custom JDBC dialect. That does work, but it
    replaces all of my VARCHAR columns (which have different lengths) with the single length I
    specified in the dialect, which is not what I want. How can I have it save TEXT as VARCHAR
    without specifying a length in the custom dialect?
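    A sketch of the two approaches under discussion, assuming Spark 2.1. The custom-dialect route below illustrates the limitation described: `getJDBCType` only sees the Catalyst `DataType`, not the column name or the target table's definition, so every string column gets the same VARCHAR length. One commonly suggested alternative for an externally managed schema is the JDBC `truncate` write option (SPARK-16463, available since 2.1), which makes `SaveMode.Overwrite` issue TRUNCATE instead of DROP + CREATE, so the existing column types are kept and the dialect mapping is never consulted. The `NetezzaDialect` name, connection URL, and input path here are placeholders, not a tested Netezza setup.

    ```scala
    import java.util.Properties

    import org.apache.spark.sql.{SaveMode, SparkSession}
    import org.apache.spark.sql.jdbc.{JdbcDialect, JdbcDialects, JdbcType}
    import org.apache.spark.sql.types.{DataType, StringType}

    object JdbcWriteSketch {
      // Hypothetical dialect: maps every StringType column to one fixed
      // VARCHAR length. This is exactly the limitation in the comment above,
      // because getJDBCType only sees the DataType, never the column name.
      object NetezzaDialect extends JdbcDialect {
        override def canHandle(url: String): Boolean =
          url.startsWith("jdbc:netezza")

        override def getJDBCType(dt: DataType): Option[JdbcType] = dt match {
          case StringType => Some(JdbcType("VARCHAR(255)", java.sql.Types.VARCHAR))
          case _          => None // fall back to Spark's default mapping
        }
      }

      def main(args: Array[String]): Unit = {
        JdbcDialects.registerDialect(NetezzaDialect)

        val spark = SparkSession.builder().appName("jdbc-sketch").getOrCreate()
        val df = spark.read.parquet("/some/input") // placeholder input

        // With truncate=true, Overwrite truncates the existing table rather
        // than dropping and recreating it, preserving the externally created
        // schema (including per-column VARCHAR lengths).
        df.write
          .mode(SaveMode.Overwrite)
          .option("truncate", "true")
          .jdbc("jdbc:netezza://host:5480/db", "my_table", new Properties())
      }
    }
    ```

    Note the trade-off: the dialect approach changes what Spark generates on CREATE TABLE, while the truncate approach sidesteps CREATE TABLE entirely, which matches the goal of keeping a schema maintained outside Spark.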



---

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org

