kylin-dev mailing list archives

From "lk_hadoop"<>
Subject Re: Re: can't pass step Build Cube In-Mem
Date Thu, 11 Apr 2019 09:48:58 GMT
I don't think that's too many:

Cuboid Distribution
Current Cuboid Distribution
[Cuboid Count: 49] [Row Count: 1117994636]

Recommend Cuboid Distribution
[Cuboid Count: 168] [Row Count: 464893216]
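For scale, the recommendation trades more cuboids (49 → 168) for far fewer total rows; a quick back-of-the-envelope check of the figures above:

```python
current_rows = 1_117_994_636      # current distribution, 49 cuboids
recommended_rows = 464_893_216    # recommended distribution, 168 cuboids

# Fractional row-count reduction if the recommended distribution is adopted
reduction = 1 - recommended_rows / current_rows
print(f"{reduction:.1%}")  # roughly a 58% reduction
```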



From: Na Zhai <>
Sent: 2019-04-11 17:42
Subject: Re: can't pass step Build Cube In-Mem

Hi, lk_hadoop. 

Does Cube Planner recommend too many cuboids? If so, that may cause the OOM.

Sent from Mail for Windows 10

From: lk_hadoop <>
Sent: Tuesday, April 9, 2019 9:21:59 AM
To: dev
Subject: can't pass step Build Cube In-Mem

hi, all:
   I'm using kylin-2.6.1-cdh57, the source row count is 500 million, and I can build the
cube successfully.
   But when I use the Cube Planner, the OPTIMIZE job has a step called Build Cube In-Mem.
   The relevant config in kylin_job_conf_inmem.xml is:


        <value>-Xmx8192m -XX:OnOutOfMemoryError='kill -9 %p'</value> 


        <description>The maximum permissible size of the split metainfo file.
            The JobTracker won't attempt to read split metainfo files bigger than
            the configured value. No limits if set to -1.</description>

        <description>No description</description>
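For context, the snippets above look like fragments of two standard Hadoop properties as they typically appear in kylin_job_conf_inmem.xml. The property names below are my assumption (the original paste omits them), and the `-1` metainfo value is also assumed:

```xml
<!-- Assumed property names; the original message pasted only values/descriptions. -->
<property>
    <name>mapreduce.map.java.opts</name>
    <value>-Xmx8192m -XX:OnOutOfMemoryError='kill -9 %p'</value>
</property>
<property>
    <name>mapreduce.job.split.metainfo.maxsize</name>
    <value>-1</value>
    <description>The maximum permissible size of the split metainfo file.
        The JobTracker won't attempt to read split metainfo files bigger than
        the configured value. No limits if set to -1.</description>
</property>
```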


    Finally the map job is killed by the OnOutOfMemoryError handler, but when I give more
memory to the map job, I get another error: java.nio.BufferOverflowException.

    Why does Kylin run the job in-mem? How can I avoid it?
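One setting related to this question (my suggestion, not from the thread) is Kylin's build-algorithm selection in kylin.properties. By default Kylin auto-chooses between the layer and in-mem cubing algorithms per build; whether this affects the OPTIMIZE job specifically should be verified against the docs for your version:

```properties
# Default is "auto": Kylin picks "layer" (layered MR cubing) or "inmem"
# based on data statistics. Forcing "layer" avoids the in-mem mapper
# for regular cube builds.
kylin.cube.algorithm=layer
```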

