hive-user mailing list archives

From Gary Clark <>
Subject RE: Seeing strange limit
Date Wed, 30 Dec 2015 15:02:08 GMT
    <value>-Xmx1024m -XX:-UseGCOverheadLimit</value>

I think this is the limit I need to tweak.
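For context, a `<value>` element like the one quoted above would normally sit inside a `<property>` block in a Hadoop XML config file. The following is only a sketch of what raising it might look like; the property name `mapred.child.java.opts` and the 2048m figure are assumptions, not taken from this thread:

```xml
<!-- Sketch (assumption): if the quoted value belongs to
     mapred.child.java.opts in mapred-site.xml, raising -Xmx here
     would give tasks a larger heap. 2048m is illustrative only. -->
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx2048m -XX:-UseGCOverheadLimit</value>
</property>
```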

From: Gary Clark []
Sent: Wednesday, December 30, 2015 8:59 AM
Subject: RE: Seeing strange limit

Thanks, I currently have the below:


# The following applies to multiple commands (fs, dfs, fsck, distcp etc)


I’m assuming just raising the above would work.
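In a stock Hadoop 2.x hadoop-env.sh, that comment sits directly above the `HADOOP_CLIENT_OPTS` export. A hedged sketch of raising it, assuming the elided line under the comment is that usual export; 2048m is illustrative, not a recommendation from this thread:

```shell
# Sketch, assuming the line under the quoted comment is the standard
# HADOOP_CLIENT_OPTS export from hadoop-env.sh. 2048m is illustrative.
export HADOOP_CLIENT_OPTS="-Xmx2048m $HADOOP_CLIENT_OPTS"
echo "$HADOOP_CLIENT_OPTS"
```

Client-side commands (fs, dfs, fsck, distcp, and the Hive CLI when it inherits these opts) would then start with the larger heap.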

Much Appreciated,
Gary C

From: Edward Capriolo []
Sent: Wednesday, December 30, 2015 8:55 AM
Subject: Re: Seeing strange limit

This message means the garbage collector is running but is unable to free memory after trying for a while.

This can happen for a lot of reasons. With hive it usually happens when a query has a lot
of intermediate data.

For example, imagine a few months ago count(distinct(ip)) returned 20k. Everything worked; then your data changed and suddenly you have issues.

Try tuning, mostly by raising your Xmx.
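Besides raising Xmx, a common HiveQL workaround for this situation (not mentioned in this thread; offered only as a sketch) is to rewrite count(distinct ...) as a two-stage query, so the distinct step is spread across reducers instead of concentrated in one memory-heavy aggregation:

```sql
-- Sketch, not from this thread: a common rewrite that distributes the
-- distinct work. The table name access_logs is hypothetical.
SELECT COUNT(*) FROM (
  SELECT DISTINCT ip FROM access_logs
) t;
```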

On Wednesday, December 30, 2015, Gary Clark <> wrote:

I have a multi-node cluster (hadoop 2.6.0) and am seeing the below message causing the hive
workflow to fail:

Looking at the hadoop logs I see the below:

45417 [main] ERROR org.apache.hadoop.hive.ql.Driver  - FAILED: Execution Error, return code
-101 from GC overhead limit exceeded

I have been running for months without problems. When I removed a large number of the files from the directory I was querying, the query succeeded. It looks like I’m hitting a limit; not sure how to remedy this.

Has anybody else seen this problem?

Gary C

Sorry this was sent from mobile. Will do less grammar and spell check than usual.