hive-issues mailing list archives

From "Hive QA (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HIVE-20512) Improve record and memory usage logging in SparkRecordHandler
Date Thu, 25 Oct 2018 01:02:00 GMT

    [ https://issues.apache.org/jira/browse/HIVE-20512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16663077#comment-16663077 ]

Hive QA commented on HIVE-20512:
--------------------------------



Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12945471/HIVE-20512.3.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 15506 tests passed

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/14630/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/14630/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-14630/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12945471 - PreCommit-HIVE-Build

> Improve record and memory usage logging in SparkRecordHandler
> -------------------------------------------------------------
>
>                 Key: HIVE-20512
>                 URL: https://issues.apache.org/jira/browse/HIVE-20512
>             Project: Hive
>          Issue Type: Sub-task
>          Components: Spark
>            Reporter: Sahil Takiar
>            Assignee: Bharathkrishna Guruvayoor Murali
>            Priority: Major
>         Attachments: HIVE-20512.1.patch, HIVE-20512.2.patch, HIVE-20512.3.patch
>
>
> We currently log memory usage and the number of records processed in Spark tasks,
> but we should improve how frequently this information is logged. Currently we use
> the following code:
> {code:java}
> private long getNextLogThreshold(long currentThreshold) {
>   // A very simple counter to keep track of the number of rows processed by
>   // the reducer. It logs every 1 million rows, and more frequently before that.
>   if (currentThreshold >= 1000000) {
>     return currentThreshold + 1000000;
>   }
>   return 10 * currentThreshold;
> }
> {code}
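As a standalone illustration of the logic quoted above (the class and method names here are an editor's sketch, not from the Hive source, and the starting threshold of 1 is assumed), the points at which a log line fires grow as 1, 10, 100, ..., 1,000,000, then by 1,000,000 thereafter:

```java
// Illustrative reimplementation of the quoted threshold logic; not the patch itself.
public class ThresholdDemo {

    // Mirrors getNextLogThreshold: grow 10x until 1M, then add 1M each time.
    static long nextLogThreshold(long currentThreshold) {
        if (currentThreshold >= 1_000_000L) {
            return currentThreshold + 1_000_000L;
        }
        return 10 * currentThreshold;
    }

    public static void main(String[] args) {
        long threshold = 1;
        // Print the first dozen record counts at which a log line would be emitted:
        // 1, 10, 100, ..., 1_000_000, 2_000_000, ..., 6_000_000
        for (int i = 0; i < 12; i++) {
            System.out.println("log at record " + threshold);
            threshold = nextLogThreshold(threshold);
        }
    }
}
```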
> The issue is that the 10x growth factor means that, after a while, you have to
> process a huge number of records before the next log line is triggered.
> A better approach would be to log this information at a fixed time interval. This
> would help in debugging tasks that appear to be hung.
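A minimal sketch of the time-interval approach proposed in the description (an editor's illustration, not the attached patch; the `IntervalLogger` name and its fields are hypothetical): log whenever a configured wall-clock interval has elapsed, regardless of how many records arrived in between.

```java
// Hypothetical sketch of interval-based progress logging; not the actual patch.
public class IntervalLogger {

    private final long logIntervalMs; // how often to log, in wall-clock milliseconds
    private long lastLogTimeMs;
    private long recordCount;

    public IntervalLogger(long logIntervalMs) {
        this.logIntervalMs = logIntervalMs;
        this.lastLogTimeMs = System.currentTimeMillis();
    }

    // Call once per processed record; emits at most one log line per interval,
    // so a slow or hung task still produces periodic progress output.
    public void recordProcessed() {
        recordCount++;
        long now = System.currentTimeMillis();
        if (now - lastLogTimeMs >= logIntervalMs) {
            Runtime rt = Runtime.getRuntime();
            long usedMem = rt.totalMemory() - rt.freeMemory();
            System.out.println("processed " + recordCount
                + " records, used memory = " + usedMem + " bytes");
            lastLogTimeMs = now;
        }
    }

    public long getRecordCount() {
        return recordCount;
    }

    public static void main(String[] args) {
        // Demo: with a 0 ms interval, every record produces a log line.
        IntervalLogger logger = new IntervalLogger(0);
        for (int i = 0; i < 3; i++) {
            logger.recordProcessed();
        }
    }
}
```

Unlike the record-count thresholds above, the cost here is one `System.currentTimeMillis()` call per record, which is cheap relative to typical per-record processing work.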



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
