hive-issues mailing list archives

From "Misha Dmitriev (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HIVE-19668) Over 30% of the heap wasted by duplicate org.antlr.runtime.CommonToken's and duplicate strings
Date Thu, 12 Jul 2018 22:25:00 GMT

    [ https://issues.apache.org/jira/browse/HIVE-19668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16542265#comment-16542265 ]

Misha Dmitriev commented on HIVE-19668:
---------------------------------------

Thank you for checking, [~vihangk1] [~aihuaxu] and [~stakiar]. In the end, it turns out that
at least some failures are reproducible locally, and my changes are responsible. Not all {{CommonToken}}s
can be made {{ImmutableToken}}s, because for some of them the type may be rewritten by certain
special operators later. I've already found one such type in the past, and am now eliminating
the others. I will post the updated patch once I am done.
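
For illustration only, here is a minimal sketch of what an immutable ANTLR token could look like (the class name is hypothetical, not the actual HIVE-19668 patch): it rejects mutation, so any operator that later tries to rewrite the type of a shared token fails fast instead of silently affecting other users of that token.

{code:java}
// Hypothetical sketch, not the HIVE-19668 patch: a CommonToken that refuses
// mutation. Sharing such tokens is only safe for token kinds whose type and
// text are never rewritten after creation.
import org.antlr.runtime.CommonToken;

public class ImmutableCommonToken extends CommonToken {

    public ImmutableCommonToken(int type, String text) {
        super(type, text);
    }

    @Override
    public void setType(int type) {
        throw new UnsupportedOperationException("shared immutable token");
    }

    @Override
    public void setText(String text) {
        throw new UnsupportedOperationException("shared immutable token");
    }
}
{code}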

> Over 30% of the heap wasted by duplicate org.antlr.runtime.CommonToken's and duplicate strings
> ----------------------------------------------------------------------------------------------
>
>                 Key: HIVE-19668
>                 URL: https://issues.apache.org/jira/browse/HIVE-19668
>             Project: Hive
>          Issue Type: Improvement
>          Components: HiveServer2
>    Affects Versions: 3.0.0
>            Reporter: Misha Dmitriev
>            Assignee: Misha Dmitriev
>            Priority: Major
>         Attachments: HIVE-19668.01.patch, HIVE-19668.02.patch, image-2018-05-22-17-41-39-572.png
>
>
> I've recently analyzed a HS2 heap dump, obtained when there was a huge memory spike during
> compilation of some big query. The analysis was done with jxray ([http://www.jxray.com]).
> It turns out that more than 90% of the 20G heap was used by data structures associated with
> query parsing ({{org.apache.hadoop.hive.ql.parse.QBExpr}}). There are probably multiple opportunities
> for optimization here. One of them is to stop the code from creating duplicate instances
> of the {{org.antlr.runtime.CommonToken}} class. See a sample of these objects in the attached
> image:
> !image-2018-05-22-17-41-39-572.png|width=879,height=399!
> Looks like these particular {{CommonToken}} objects are constants that don't change
> once created. I see some code, e.g. in {{org.apache.hadoop.hive.ql.parse.CalcitePlanner}},
> where such objects are apparently repeatedly created with e.g. {{new CommonToken(HiveParser.TOK_INSERT,
> "TOK_INSERT")}}. If these 33 token kinds are instead created once and reused, we will save
> more than 1/10th of the heap in this scenario. Plus, since these objects are small but very
> numerous, getting rid of them will remove a great deal of pressure from the GC.
> Another source of waste is duplicate strings, which collectively waste 26.1% of memory.
> Some of them come from {{CommonToken}} objects that have the same text (i.e. for multiple {{CommonToken}}
> objects the contents of their 'text' Strings are the same, but each has its own copy of that
> String). Other duplicate strings come from other sources and are easy enough to fix by adding
> {{String.intern()}} calls.
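
As a sketch of the constant-token reuse suggested in the quoted description (class and method names are hypothetical, not from the patch): each constant token kind is created once and the shared instance is handed out on subsequent requests. This is only safe in combination with tokens that are never mutated, per the comment above.

{code:java}
// Hypothetical sketch of reusing constant tokens instead of allocating a new
// CommonToken for every occurrence of e.g. TOK_INSERT.
import java.util.concurrent.ConcurrentHashMap;
import org.antlr.runtime.CommonToken;

public final class ConstantTokenCache {

    private static final ConcurrentHashMap<Integer, CommonToken> CACHE =
            new ConcurrentHashMap<>();

    private ConstantTokenCache() {}

    // Returns the single shared token for this type, creating it on first use.
    public static CommonToken get(int type, String text) {
        return CACHE.computeIfAbsent(type, t -> new CommonToken(t, text));
    }
}

// Usage, replacing: new CommonToken(HiveParser.TOK_INSERT, "TOK_INSERT")
//   CommonToken tok = ConstantTokenCache.get(HiveParser.TOK_INSERT, "TOK_INSERT");
{code}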
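
And a sketch of the second, string-deduplication fix (helper name is hypothetical): interning token text so that tokens carrying identical text share one canonical String instance.

{code:java}
// Hypothetical helper illustrating String.intern()-based deduplication of
// token text; identical strings then share one canonical copy on the heap.
import org.antlr.runtime.CommonToken;

public final class TokenTextInterner {

    private TokenTextInterner() {}

    public static String canonicalText(CommonToken token) {
        String text = token.getText();
        return (text == null) ? null : text.intern();
    }
}
{code}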



