hive-user mailing list archives

From Decimus Phostle <>
Subject Re: Setting in Hive script results in MR job getting 'killed' right away
Date Fri, 04 Apr 2014 22:17:17 GMT
Turns out it was just a trivial/inane Hive-ism: the 'value' has to be set in a
particular way. *sigh*. The SO link has details.
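For future readers, a hedged sketch of the likely quoting pitfall. The original message does not name the property being set, so `mapred.child.java.opts` below is an assumption (it is the standard Hadoop 0.20 task-JVM option, but may not be the exact one used here). The gist: Hive's `SET` takes everything after the `=` as a literal string, so surrounding quotes become part of the value and the task JVMs fail to launch, which shows up as jobs 'killed' right away.

```sql
-- Assumption: the property involved is Hadoop's mapred.child.java.opts
-- (not named in the original message).

-- Broken: Hive passes the quotes to the task JVM verbatim.
SET mapred.child.java.opts="-Xmx3G -XX:+UseConcMarkSweepGC";

-- Working: no quotes, even though the value contains spaces.
SET mapred.child.java.opts=-Xmx3G -XX:+UseConcMarkSweepGC;
```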

On Fri, Apr 4, 2014 at 12:41 PM, Decimus Phostle wrote:

> Hello Folks,
> I have been having a few jobs failing due to OutOfMemory and GC overhead
> limit exceeded errors. To counter these I tried setting `SET
>"-Xmx3G -XX:+UseConcMarkSweepGC";` at the start of
> the hive script**.
> Basically any time I add this option to the script, the MR jobs that get
> scheduled (for the first of several queries in the script) are 'killed'
> right away.
> Any thoughts on how to rectify this? Are there any other params that need
> to be tinkered with in conjunction with max heap space (e.g. `io.sort.mb`)?
> Any help would be **most appreciated**.
> FWIW, I am using `hive-0.7.0` with `hadoop-0.20.2`. The default setting
> for max heap size in our cluster is 1200M.
> TIA.
> ** - Some other alternatives that were tried (with comical effect but no
> discernible change in outcome):
> - `SET"-Xmx3G";`
> - `SET"-server -Xmx3072M";`
> - `SET ="-server -Xmx3072M";`
>   `SET ="-server -Xmx3072M";`
> - `SET"-Xmx2G";`
> Update: I am beginning to think that this doesn't even have anything to do
> with the heap size setting. Tinkering with the setting in any way is
> causing the same outcome. For example, setting it thusly:
> `SET "-XX:+UseConcMarkSweepGC";` has the same result
> of MR jobs getting killed right away. Even setting the heap size explicitly
> in the script to the 'cluster default' causes this.
> Note: I have an SO question open if you'd prefer to answer it there.
> Thanks.
