spark-dev mailing list archives

From Reynold Xin <r...@databricks.com>
Subject Re: Dataframe.fillna from 1.3.0
Date Mon, 20 Apr 2015 20:47:56 GMT
Ah, I see. You can do something like:


df.select(coalesce(df("a"), lit(0.0)))
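
For context, that snippet is written against the Scala DataFrame API; a minimal sketch of the same idea, assuming a DataFrame df with a nullable double column "a" and a second column "b" that should pass through untouched:

import org.apache.spark.sql.functions.{coalesce, lit}

// coalesce returns the first non-null argument, so nulls in "a" become 0.0
// while "b" is carried over unchanged, roughly what fillna(0.0) does for
// that column in 1.3.1+.
val filled = df.select(
  coalesce(df("a"), lit(0.0)).as("a"),
  df("b")
)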

On Mon, Apr 20, 2015 at 1:44 PM, Olivier Girardot <
o.girardot@lateral-thoughts.com> wrote:

> From PySpark it looks like fillna relies on Java/Scala code; that's why I
> was wondering.
> Thank you for answering :)
>
> On Mon, Apr 20, 2015 at 10:22 PM, Reynold Xin <rxin@databricks.com> wrote:
>
>> You can just create a fillna function based on the 1.3.1 implementation of
>> fillna, no?
>>
>>
>> On Mon, Apr 20, 2015 at 2:48 AM, Olivier Girardot <
>> o.girardot@lateral-thoughts.com> wrote:
>>
>>> a UDF might be a good idea, no?
>>>
>>> On Mon, Apr 20, 2015 at 11:17 AM, Olivier Girardot <
>>> o.girardot@lateral-thoughts.com> wrote:
>>>
>>> > Hi everyone,
>>> > let's assume I'm stuck on 1.3.0: how can I benefit from the *fillna* API
>>> > in PySpark? Is there any efficient alternative to mapping the records
>>> > myself?
>>> >
>>> > Regards,
>>> >
>>> > Olivier.
>>> >
>>>
>>
>>
