carbondata-issues mailing list archives

From "Jacky Li (JIRA)" <j...@apache.org>
Subject [jira] [Resolved] (CARBONDATA-1624) If SORT_SCOPE is non-GLOBAL_SORT with Spark, set 'carbon.number.of.cores.while.loading' dynamically as per the available executor cores
Date Thu, 09 Nov 2017 14:48:07 GMT

     [ https://issues.apache.org/jira/browse/CARBONDATA-1624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jacky Li resolved CARBONDATA-1624.
----------------------------------
       Resolution: Fixed
    Fix Version/s: 1.3.0

> If SORT_SCOPE is non-GLOBAL_SORT with Spark, set 'carbon.number.of.cores.while.loading' dynamically as per the available executor cores
> ----------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: CARBONDATA-1624
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-1624
>             Project: CarbonData
>          Issue Type: Improvement
>          Components: data-load, spark-integration
>    Affects Versions: 1.3.0
>            Reporter: Zhichao  Zhang
>            Assignee: Zhichao  Zhang
>            Priority: Minor
>             Fix For: 1.3.0
>
>          Time Spent: 12h 40m
>  Remaining Estimate: 0h
>
> If we are using CarbonData + Spark to load data, we can set
> carbon.number.of.cores.while.loading to the number of executor cores.
> For example, when the number of executor cores is set to 6, each executor has at
> least 6 cores available for loading data, so we can set
> carbon.number.of.cores.while.loading to 6 automatically.
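
A minimal sketch of the idea, assuming a Spark session is already available: read
spark.executor.cores from the Spark configuration and use that value for
carbon.number.of.cores.while.loading. Only the property key comes from the issue; the
object and method names below are hypothetical and do not reflect the actual patch.

    import org.apache.spark.sql.SparkSession

    object DynamicLoadCoresSketch {

      // Real CarbonData property key from the issue; everything else here is illustrative.
      val CarbonLoadingCores = "carbon.number.of.cores.while.loading"

      // Read the executor core count from the Spark conf, falling back to 2
      // when spark.executor.cores is not set (e.g. some local-mode setups).
      def resolveLoadingCores(spark: SparkSession): Int =
        spark.sparkContext.getConf.getInt("spark.executor.cores", 2)

      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("carbon-loading-cores-sketch")
          .master("local[*]")
          .getOrCreate()

        val cores = resolveLoadingCores(spark)
        // In CarbonData this value would be applied through its properties facility;
        // here we only print what would be set.
        println(s"$CarbonLoadingCores would be set to $cores")

        spark.stop()
      }
    }

With 6 executor cores configured (--executor-cores 6), resolveLoadingCores returns 6, so the
loading-core property would match the cores actually available on each executor.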



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
