spark-issues mailing list archives

From "Antonio Piccolboni (JIRA)" <j...@apache.org>
Subject [jira] [Comment Edited] (SPARK-6817) DataFrame UDFs in R
Date Wed, 13 Jan 2016 07:41:39 GMT

    [ https://issues.apache.org/jira/browse/SPARK-6817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15095776#comment-15095776 ]

Antonio Piccolboni edited comment on SPARK-6817 at 1/13/16 7:41 AM:
--------------------------------------------------------------------

My question made sense only with respect to the block or vectorized design. If you are implementing
plain-vanilla UDFs in R, my question is moot. The performance implications of calling an R function
for each row are ominous, so I am not sure why you are going down this path. Imagine you want
to add a column with random numbers from a distribution. You can use a regular UDF on each
row or a block UDF on a block of a million rows. That means a single R call versus a million.

> system.time(rnorm(10^6))                                 # one vectorized call producing 10^6 draws
   user  system elapsed 
  0.089   0.002   0.092 
> z <- rep_len(1, 10^6); system.time(sapply(z, rnorm))     # one R call per element, 10^6 calls in total
   user  system elapsed 
  4.272   0.317   4.588 

That's roughly 50 times slower. Plus R is chock-full of vectorized functions, and there are no built-in
scalar types in R. So there are plenty of block UDFs that one can write efficiently in R
(no interpreter loops of any sort).
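
To make the contrast concrete, here is a minimal sketch in plain R (no SparkR API implied; the names
row_udf and block_udf are purely illustrative). The per-row version pays for one interpreter call per
element, while the block version does the same work in a single vectorized call.

# Hypothetical per-row UDF: invoked once per value, one scalar at a time.
row_udf <- function(x) x + rnorm(1)

# Hypothetical block UDF: invoked once per block, on a whole vector.
block_udf <- function(xs) xs + rnorm(length(xs))

x <- runif(10^6)

# ~10^6 R calls: the interpreter loop dominates the cost.
system.time(sapply(x, row_udf))

# A single R call over the whole block: the work stays in vectorized C code.
system.time(block_udf(x))

The same principle carries over to a block UDF that receives a data.frame chunk instead of a vector.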



> DataFrame UDFs in R
> -------------------
>
>                 Key: SPARK-6817
>                 URL: https://issues.apache.org/jira/browse/SPARK-6817
>             Project: Spark
>          Issue Type: New Feature
>          Components: SparkR, SQL
>            Reporter: Shivaram Venkataraman
>         Attachments: SparkR UDF Design Documentation v1.pdf
>
>
> This depends on some internal interface of Spark SQL, should be done after merging into
> Spark.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

