carbondata-issues mailing list archives

From xuchuanyin <>
Subject [GitHub] carbondata pull request #1261: [CARBONDATA-1373] Enhance update performance ...
Date Wed, 16 Aug 2017 07:32:45 GMT
GitHub user xuchuanyin opened a pull request:

    [CARBONDATA-1373] Enhance update performance by increasing parallelism

    # Scenario
    Recently I tested the update feature provided in CarbonData and found its performance poor.
    I had a table containing about 14 million records with about 370 columns (no dictionary
columns); the data files were about 3.8 GB in total, and all of them were in one segment.
    I ran an update SQL statement that updates one column for all the records, looking
like `UPDATE myTable SET (col1)=(col1+1000) WHERE TRUE`. In my environment, the update job
failed with 'executor lost' errors, and I found 'spill data' related messages in the container logs.
    # Analysis
    I've read about the implementation of update/delete in CarbonData in ISSUE#440. The update
consists of a delete operation followed by an insert operation, and the error occurred during the insert.
    After studying the code, I found that during the insert, the updated records
are grouped by `segmentId`, which means all the records in one segment are processed
in a single task; this causes task failure when the amount of input data is quite large.
    # Solution
    We should improve the parallelism when updating a segment.
    I append a random suffix to the `segmentId` to increase the partition count before
the insertion stage, then strip the suffix when doing the actual insert.
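
    The salting idea above can be sketched as follows. This is a minimal, framework-free illustration (the function names and the `#` separator are mine, not CarbonData's actual code): records that would all hash to one `segmentId` key are spread across up to `parallelism` salted keys, and the original id is recoverable afterwards.

    ```python
    import random
    from collections import defaultdict

    def salt_key(segment_id: str, parallelism: int) -> str:
        """Append a random suffix so one segment spreads over `parallelism` partitions."""
        return f"{segment_id}#{random.randrange(parallelism)}"

    def unsalt_key(salted: str) -> str:
        """Strip the suffix to recover the original segmentId before the real insert."""
        return salted.rsplit("#", 1)[0]

    def partition(records, parallelism):
        """Group (segmentId, row) pairs by salted key; each bucket maps to one task."""
        buckets = defaultdict(list)
        for seg_id, row in records:
            buckets[salt_key(seg_id, parallelism)].append(row)
        return buckets

    # 1000 rows, all in segment "0": grouping by segmentId alone yields 1 bucket,
    # while salting yields up to 6 buckets that can run as 6 parallel tasks.
    records = [("0", i) for i in range(1000)]
    buckets = partition(records, parallelism=6)
    assert all(unsalt_key(k) == "0" for k in buckets)
    ```

    In Spark terms, this is why `partitionBy` with a custom partitioner over the salted key beats `groupBy` over the raw `segmentId`: the former bounds the data per task, the latter funnels a whole segment into one task.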
    # Modification
    + Increase parallelism when processing one segment during update, achieved by distributing
records across partitions with a custom partitioner.
    + Add a property to configure the parallelism.
    + Clean up local files after update (fixes a previous bug)
    + Remove unused imports
    + Add tests
    + Add related documents
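
    As a sketch of how the new property might be set (the property key below is my assumption of what this PR defines; check the documents added in this PR for the authoritative name and default):

    ```properties
    # carbon.properties: number of partitions used per segment during an update.
    # Key name assumed from this PR's description; verify against the added docs.
    carbon.update.segment.parallelism=6
    ```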
    # Notes
    I have tested this with my example and the job finished successfully in about 13 minutes; the
records were updated as expected.
    Compared with the previous implementation, update performance has improved:

    | Approach | Result |
    | --- | --- |
    | Original (parallelism 1 + groupBy) | Update **FAILED** |
    | Parallelism 6 + groupBy | Update **SUCCEEDED** in **13 min** |
    | Parallelism 1 + partitionBy | Update **SUCCEEDED** in **21 min** |
    | Parallelism 6 + partitionBy | Update **SUCCEEDED** in **5 min** |

You can merge this pull request into a Git repository by running:

    $ git pull enhance_update_perf

Alternatively you can review and apply these changes as the patch at:

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #1261
commit ebfe1ca2a125c0b736e917d8a7956f5e39dedc50
Author: xuchuanyin <>
Date:   2017-08-11T15:00:20Z

    Enhance update performance by increasing parallelism
    + Increase parallelism while processing one segment in update
    + Use partitionBy instead of groupBy
    + Add a property to configure the parallelism
    + Clean up local files after update (previous bugs)
    + Remove useless imports


