carbondata-issues mailing list archives

From "ASF GitHub Bot (JIRA)" <>
Subject [jira] [Commented] (CARBONDATA-241) OOM error during query execution in long run
Date Fri, 16 Sep 2016 18:11:22 GMT


ASF GitHub Bot commented on CARBONDATA-241:

Github user gvramana commented on a diff in the pull request:
    --- Diff: hadoop/src/main/java/org/apache/carbondata/hadoop/ ---
    @@ -101,6 +106,8 @@
       //comma separated list of input segment numbers
       public static final String INPUT_SEGMENT_NUMBERS =
    +  public static final String INVALID_SEGMENT_NUMBERS =
    +      "mapreduce.input.carboninputformat.invalidsegmentnumbers";
    --- End diff ---
    Invalid segment deletion need not go through CarbonInputFormat. When the invalid segment
list is given to the BTree (in both the driver and the executor), it should be able to delete the invalid blocks.

> OOM error during query execution in long run
> --------------------------------------------
>                 Key: CARBONDATA-241
>                 URL:
>             Project: CarbonData
>          Issue Type: Bug
>            Reporter: kumar vishal
>            Assignee: kumar vishal
> **Problem:** During a long run, query execution takes progressively more time and eventually
throws an out-of-memory error.
> **Reason:** During compaction we compact segments, and each segment's metadata is loaded
into memory. After compaction the compacted source segments become invalid, but their metadata
is not removed from memory. This duplicate metadata piles up, consuming more and more memory,
and after a few days query execution throws an OOM error.
> **Solution:** Remove the invalid segments' blocks from memory.
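
The fix described above, evicting block metadata of invalidated segments from the in-memory index, can be sketched roughly as follows. This is an illustrative sketch only, not CarbonData's actual API: the `SegmentMetadataCache` class, its method names, and the use of a plain map as a stand-in for the BTree index are all hypothetical.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Hypothetical sketch of a driver/executor-side segment-metadata cache.
 * A plain map stands in for the BTree block index; after compaction the
 * source segments' entries are evicted so stale metadata cannot pile up
 * and eventually cause an OOM error.
 */
public class SegmentMetadataCache {
    // segmentId -> loaded block metadata for that segment
    private final Map<String, List<String>> blocksBySegment = new ConcurrentHashMap<>();

    /** Load (or reload) a segment's block metadata into the cache. */
    public void loadSegment(String segmentId, List<String> blockMetadata) {
        blocksBySegment.put(segmentId, blockMetadata);
    }

    /** Drop metadata of segments invalidated by compaction. */
    public void removeInvalidSegments(List<String> invalidSegmentIds) {
        for (String segmentId : invalidSegmentIds) {
            blocksBySegment.remove(segmentId);
        }
    }

    public int cachedSegmentCount() {
        return blocksBySegment.size();
    }

    public static void main(String[] args) {
        SegmentMetadataCache cache = new SegmentMetadataCache();
        cache.loadSegment("0", List.of("block-0-a", "block-0-b"));
        cache.loadSegment("1", List.of("block-1-a"));
        // Compaction merges segments 0 and 1 into 0.1 ...
        cache.loadSegment("0.1", List.of("block-0.1-a"));
        // ... so segments 0 and 1 are now invalid and must be evicted.
        cache.removeInvalidSegments(List.of("0", "1"));
        System.out.println(cache.cachedSegmentCount()); // prints 1
    }
}
```

The point of the sketch is the eviction step: without `removeInvalidSegments`, every compaction cycle would leave the old segments' metadata resident alongside the compacted segment's, which matches the leak the issue describes.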

This message was sent by Atlassian JIRA
