carbondata-issues mailing list archives

From "Ravindra Pesala (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (CARBONDATA-1624) If SORT_SCOPE is non-GLOBAL_SORT with Spark, set 'carbon.number.of.cores.while.loading' dynamically as per the available executor cores
Date Fri, 27 Oct 2017 06:01:17 GMT

    [ https://issues.apache.org/jira/browse/CARBONDATA-1624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16221758#comment-16221758 ]

Ravindra Pesala commented on CARBONDATA-1624:
---------------------------------------------

Welcome to contribute.
We should not use CarbonProperties anymore for these dynamic cores, as it impacts other
loads. First find the available cores that we can allocate for loading per executor before
submitting the job, and then pass the same information to Carbon in the RDD compute.
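
A minimal sketch of that direction, assuming the per-executor core count comes from the
standard 'spark.executor.cores' setting; the object and RDD names below are illustrative,
not the actual CarbonData load classes:

import org.apache.spark.{Partition, SparkContext, TaskContext}
import org.apache.spark.rdd.RDD

// Illustrative: resolve the cores available per executor on the driver, before
// the load job is submitted, instead of reading a global CarbonProperties value.
object ExecutorCoreResolver {
  def coresPerExecutor(sc: SparkContext): Int = {
    // "spark.executor.cores" is typically unset in local mode, so fall back
    // to the cores visible to the driver JVM.
    sc.getConf.getInt("spark.executor.cores", Runtime.getRuntime.availableProcessors())
  }
}

// Illustrative load RDD: the resolved core count travels as a constructor
// argument, so each task sees the value fixed at submission time in compute().
class DataLoadRDD(sc: SparkContext, loadCoresPerExecutor: Int)
  extends RDD[String](sc, Nil) {

  override protected def getPartitions: Array[Partition] =
    Array(new Partition { override def index: Int = 0 })

  override def compute(split: Partition, context: TaskContext): Iterator[String] = {
    // Size the loading thread pool from the passed-in value rather than from
    // carbon.number.of.cores.while.loading.
    Iterator(s"loading partition ${split.index} with $loadCoresPerExecutor cores")
  }
}

The point is that the value is computed once on the driver and carried by the RDD itself,
so it cannot leak into other concurrent loads the way a global property can.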

> If SORT_SCOPE is non-GLOBAL_SORT with Spark, set 'carbon.number.of.cores.while.loading' dynamically as per the available executor cores
> ----------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: CARBONDATA-1624
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-1624
>             Project: CarbonData
>          Issue Type: Improvement
>          Components: data-load, spark-integration
>    Affects Versions: 1.3.0
>            Reporter: Zhichao Zhang
>            Assignee: Zhichao Zhang
>            Priority: Minor
>
> If we are using CarbonData + Spark to load data, we can set
> carbon.number.of.cores.while.loading to the number of executor cores.
> For example, when the number of executor cores is set to 6, there are at
> least 6 cores available per node for loading data, so we can set
> carbon.number.of.cores.while.loading to 6 automatically.
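
For reference, a sketch of the property-based route the description refers to, assuming the
CarbonProperties.getInstance().addProperty API; as the comment above notes, this sets a
global value and can affect other loads running concurrently:

import org.apache.carbondata.core.util.CarbonProperties
import org.apache.spark.SparkContext

object LoadCoreSync {
  // Illustrative: mirror spark.executor.cores into the global Carbon property.
  def syncLoadingCores(sc: SparkContext): Unit = {
    val executorCores = sc.getConf.getInt("spark.executor.cores", 1)
    CarbonProperties.getInstance().addProperty(
      "carbon.number.of.cores.while.loading", executorCores.toString)
  }
}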



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
