spark-reviews mailing list archives

From falaki <>
Subject [GitHub] spark pull request: [SPARKR][SPARK-8452] expose jobGroup API in Sp...
Date Thu, 18 Jun 2015 22:36:55 GMT
Github user falaki commented on a diff in the pull request:
    --- Diff: R/pkg/R/sparkR.R ---
    @@ -278,3 +278,38 @@ sparkRHive.init <- function(jsc = NULL) {
       assign(".sparkRHivesc", hiveCtx, envir = .sparkREnv)
    +#' Assigns a group ID to all the jobs started by this thread until the group ID is set to a
    +#' different value or cleared.
    +#' @param sc The existing Spark context
    +#' @param groupId the ID to be assigned to job groups
    +#' @param description description for the job group ID
    +#' @param interruptOnCancel flag to indicate if the job is interrupted on job cancellation
    +setJobGroup <- function(groupId, description, interruptOnCancel) {
    +  if (exists(".sparkRjsc", envir = env)) {
    --- End diff ---
    Yes, that is perfectly fine. Just wondering why sparkR.stop() doesn't follow that convention.
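
    For context, a minimal sketch of how the API proposed in this PR might be used from a SparkR session. This assumes the SparkR 1.x initialization API (`sparkR.init`, consistent with the `sparkRHive.init` shown in the diff) and the `setJobGroup(groupId, description, interruptOnCancel)` signature from the diff; the example job itself is illustrative:

    ```r
    library(SparkR)

    # Initialize the Spark context (stored internally in .sparkREnv,
    # which is why setJobGroup can look it up rather than take sc).
    sc <- sparkR.init(master = "local")

    # Tag all jobs started by this thread with a group ID, so they show up
    # together in the Spark UI and can be cancelled as a unit.
    # interruptOnCancel = TRUE asks Spark to interrupt executor threads
    # if the group is cancelled.
    setJobGroup("etl-group", "nightly ETL jobs", interruptOnCancel = TRUE)

    # Any job triggered here runs under "etl-group".
    rdd <- parallelize(sc, 1:100)
    count(rdd)

    sparkR.stop()
    ```

    Cancelling the group from another thread (e.g. via a companion `cancelJobGroup("etl-group")`, if exposed by this PR) would then stop all jobs tagged above.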
