phoenix-dev mailing list archives

From "Josh Mahonin (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (PHOENIX-2288) Phoenix-Spark: PDecimal precision and scale aren't carried through to Spark DataFrame
Date Wed, 04 Nov 2015 14:56:27 GMT

    [ https://issues.apache.org/jira/browse/PHOENIX-2288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14989651#comment-14989651 ]

Josh Mahonin commented on PHOENIX-2288:
---------------------------------------

Hmm, [~navis] has an updated commit here:

https://github.com/navis/phoenix/commit/b641b47317c5a0670f553466c49c43e7d0140bf8

It's still comparing on Types, but now within the ColumnInfo class instead of PhoenixRuntime.

What do you think?

> Phoenix-Spark: PDecimal precision and scale aren't carried through to Spark DataFrame
> -------------------------------------------------------------------------------------
>
>                 Key: PHOENIX-2288
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-2288
>             Project: Phoenix
>          Issue Type: Bug
>    Affects Versions: 4.5.2
>            Reporter: Josh Mahonin
>         Attachments: PHOENIX-2288.patch
>
>
> When loading a Spark DataFrame from a Phoenix table with a 'DECIMAL' type, the underlying
> precision and scale aren't carried forward to Spark.
> The Spark catalyst schema converter should load these from the underlying column. These
> appear to be exposed in the ResultSetMetaData, but if there were a way to expose them
> through ColumnInfo, it would be cleaner.
> I'm not sure if Pig has the same issues or not, but I suspect it may.
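
The mapping the issue describes can be sketched as follows. This is a hypothetical illustration, not Phoenix's actual code: the class and method names are made up, and in phoenix-spark the precision and scale would come from ResultSetMetaData.getPrecision(i) / getScale(i) (or, as suggested above, from an extended ColumnInfo) rather than being passed in directly.

```java
// Hypothetical sketch of mapping a JDBC DECIMAL(p, s) column to a
// Spark Catalyst DecimalType, assuming precision/scale were obtained
// from ResultSetMetaData or ColumnInfo. Names here are illustrative.
public class DecimalTypeMapper {

    // Spark SQL's DecimalType supports a maximum precision of 38.
    static final int MAX_PRECISION = 38;

    // Build the Catalyst type name for a DECIMAL(precision, scale) column.
    public static String catalystDecimalType(int precision, int scale) {
        if (precision <= 0 || precision > MAX_PRECISION) {
            // Fall back to an unbounded default when the driver reports
            // no usable precision (the situation this bug describes).
            return "DecimalType(38,18)";
        }
        return "DecimalType(" + precision + "," + scale + ")";
    }

    public static void main(String[] args) {
        // A DECIMAL(10, 2) column keeps its declared precision and scale.
        System.out.println(catalystDecimalType(10, 2));
        // A column with no reported precision falls back to the default.
        System.out.println(catalystDecimalType(0, 0));
    }
}
```

The point of routing this through ColumnInfo rather than comparing raw java.sql.Types values is that precision and scale stay attached to the column description all the way to the schema converter.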



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
