hadoop-common-dev mailing list archives

From Junping Du <...@hortonworks.com>
Subject Re: Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86
Date Tue, 24 Oct 2017 21:06:54 GMT
     Do we have any solid evidence that the HDFS unit tests going through the roof is due to a serious memory leak in HDFS? Normally, I don't expect memory leaks to be identified by our UTs - mostly, a test JVM going away is just a test or deployment issue.
     Unless there is concrete evidence, my concern about a serious memory leak in HDFS on 2.8 is relatively low, given that some companies (Yahoo, Alibaba, etc.) have deployed 2.8 in large production environments for months. Non-serious memory leaks (like forgetting to close a stream in a non-critical path) and other non-critical bugs always happen here and there, and we have to live with them.



From: Allen Wittenauer <aw@effectivemachines.com>
Sent: Tuesday, October 24, 2017 8:27 AM
To: Hadoop Common
Cc: Hdfs-dev; mapreduce-dev@hadoop.apache.org; yarn-dev@hadoop.apache.org
Subject: Re: Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

> On Oct 23, 2017, at 12:50 PM, Allen Wittenauer <aw@effectivemachines.com> wrote:
> With no other information or access to go on, my current hunch is that one of the HDFS
unit tests is ballooning in memory size.  The easiest way to kill a Linux machine is to eat
all of the RAM, thanks to overcommit and that’s what this “feels” like.
> Someone should verify if 2.8.2 has the same issues before a release goes out …
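
[Editor's note: the overcommit behavior referenced above is the standard Linux `vm.overcommit_memory` sysctl; this is a small diagnostic sketch for checking a build node's setting, not part of the original thread.]

```shell
# With the default heuristic overcommit (mode 0), the kernel hands out more
# virtual memory than it can back, so a test run that balloons its heap can
# take the whole node down via the OOM killer rather than failing a single
# allocation cleanly.
# 0 = heuristic overcommit (default), 1 = always overcommit, 2 = strict accounting
cat /proc/sys/vm/overcommit_memory
```

On most stock build nodes this prints 0, which matches the "easiest way to kill a Linux machine" behavior described in the quoted message.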

        FWIW, I ran 2.8.2 last night and it has the same problems.

        Also: the node didn’t die!  Looking through the workspace (before the next run destroys it), two sets of logs stand out:




        It looks like my hunch is correct:  RAM usage in the HDFS unit tests is going through the roof.  It’s also interesting how MANY log files there are.  Is surefire not picking up that forked jobs are dying?  Maybe not if memory is getting tight.
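
[Editor's note: one hedged way to check the surefire question above is that surefire leaves `*.dumpstream` files behind when a forked test JVM exits abnormally. The helper below is a sketch; the directory you point it at (the Maven build tree) is an assumption about the workspace layout.]

```shell
# count_dumpstreams DIR: count surefire *.dumpstream crash artifacts under DIR.
# A nonzero count after a run is a rough signal that forked test JVMs died
# without surefire reporting a normal test failure.
count_dumpstreams() {
  find "$1" -name '*.dumpstream' | wc -l
}
```

Running it over the job workspace (e.g. `count_dumpstreams hadoop-hdfs-project`) would distinguish "surefire missed the deaths" from "the forks never crashed at all".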

        Anyway, at this point, branch-2.8 and higher are probably fubar’d. Additionally, I’ve filed YETUS-561 so that Yetus-controlled Docker containers can have their RAM limits set in order to prevent more nodes going catatonic.
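
[Editor's note: the Docker-side mechanism such a limit would rely on is the standard memory cgroup flags on `docker run`; the invocation below is illustrative only, with a placeholder image, command, and limit, and is not the actual YETUS-561 change.]

```shell
# Cap the container at 4 GiB of RAM and forbid extra swap, so a runaway
# test run gets OOM-killed inside the container instead of taking the
# host node catatonic. Image name, command, and limit are placeholders.
docker run --memory=4g --memory-swap=4g hadoop-build-env mvn test
```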


To unsubscribe, e-mail: common-dev-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-dev-help@hadoop.apache.org
