Hi Rekha
 
Yes, the HCatalog server was up and is still running. I can query tables via Pig scripts and also run Hive queries.
 
Before I applied a patch for THRIFT-1468, I had seen my server crash frequently under similar circumstances (OutOfMemory). Since applying the patch
I haven't seen any crashes (just that error once).
 
I took a Java heap dump just after I saw the error and did not see any increase in heap usage. I read in GC tuning docs that if
full GC is taking too long (more than 98% of total time), the JVM may throw an OutOfMemoryError - but I am not really sure (I am using CMS, so I am not sure if that
applies).
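For what it's worth, these are the HotSpot options I'm thinking of adding so the next occurrence leaves more evidence behind (standard Java 6/7 HotSpot flags; the paths and the HADOOP_OPTS variable are assumptions for my setup, adjust for yours):

```shell
# Capture a heap dump automatically on OOM and keep detailed GC logs.
# Paths below are placeholders; HADOOP_OPTS is how my startup script
# passes JVM options through to the metastore/HCatalog server.
export HADOOP_OPTS="$HADOOP_OPTS \
  -XX:+HeapDumpOnOutOfMemoryError \
  -XX:HeapDumpPath=/tmp/hcat-oom.hprof \
  -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps \
  -Xloggc:/tmp/hcat-gc.log"
```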
 
I can check if I get the same error as THRIFT-1205.
 
Isn't HIVE-2715 the same as fixing THRIFT-1468 (at least in terms of its resolution)?
 
Thanks
A
 
 


 
On Tue, Aug 28, 2012 at 2:33 AM, Joshi, Rekha <Rekha_Joshi@intuit.com> wrote:
Hi Agateaa,

Impressive bug description.

Can you confirm the HCat server was up (in spite of the thread dump/GC) and that, for all practical purposes, commands were executing normally for a fairly long time after the GC issues were noticed in the log?
Unless there is a built-in self-healing effect :-) / a timeout after which the error becomes moot / the system is reset / space is reclaimed, the problem should have impacted the system directly, not just shown up when one checks the log.

I do not have the same patched environment as yours, but would you care to unpatch THRIFT-1468 and then check whether your system's behavior is in sync with -
https://issues.apache.org/jira/browse/THRIFT-1205
https://issues.apache.org/jira/browse/THRIFT-1468
https://issues.apache.org/jira/browse/HIVE-2715

Or, especially since you did not enter arbitrary data, can you confirm you get the same behavior if you do provide arbitrary data?

Thanks
Rekha

From: agateaaa <agateaaa@gmail.com>
Reply-To: <hcatalog-user@incubator.apache.org>
Date: Mon, 27 Aug 2012 10:38:01 -0700
To: <hcatalog-user@incubator.apache.org>
Subject: Re: HCatalog Thrift Error

Correction:

I have a fairly small server (VM) with 1 GB RAM and 1 CPU, and I am using HCatalog 0.4, Hive 0.9 (patched for HIVE-3008) with Thrift 0.7 (patched for THRIFT-1468)


On Mon, Aug 27, 2012 at 10:27 AM, agateaaa <agateaaa@gmail.com> wrote:
Hi,

I got this error over the weekend in the hcat.err log file.

I noticed that at approximately the same time, a full GC was happening in the GC logs.

Exception in thread "pool-1-thread-200" java.lang.OutOfMemoryError: Java heap space
        at org.apache.thrift.protocol.TBinaryProtocol.readStringBody(TBinaryProtocol.java:353)
        at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:215)
        at org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:81)
        at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:176)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:662)
Exception in thread "pool-1-thread-201" java.lang.OutOfMemoryError: Java heap space
        at org.apache.thrift.protocol.TBinaryProtocol.readStringBody(TBinaryProtocol.java:353)
        at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:215)
        at org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:81)
        at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:176)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:662)
Exception in thread "pool-1-thread-202" java.lang.OutOfMemoryError: Java heap space
        at org.apache.thrift.protocol.TBinaryProtocol.readStringBody(TBinaryProtocol.java:353)
        at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:215)
        at org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:81)
        at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:176)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:662)
Exception in thread "pool-1-thread-203" java.lang.OutOfMemoryError: Java heap space
        at org.apache.thrift.protocol.TBinaryProtocol.readStringBody(TBinaryProtocol.java:353)
        at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:215)
        at org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:81)
        at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:176)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:662)


I noticed that the HCatalog server had not shut down; I don't see any other abnormality in the logs.


Searching led me to these two Thrift issues:
https://issues.apache.org/jira/browse/THRIFT-601
https://issues.apache.org/jira/browse/THRIFT-1205

The only difference is that in my case the HCatalog server did not crash, and I wasn't trying to send
any arbitrary data to the Thrift server's port.
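For context, my understanding of the failure mode those issues describe is that TBinaryProtocol trusts a length prefix read off the wire, so non-Thrift bytes can turn into a huge allocation. A minimal sketch (simplified for illustration - this is not Thrift's actual code) of that pattern:

```java
import java.nio.ByteBuffer;

// Simplified sketch (NOT Thrift's implementation) of the pattern behind
// THRIFT-601/THRIFT-1205: a length-prefixed read trusts whatever four bytes
// arrive first, so garbage input becomes a huge allocation request.
public class LengthPrefixDemo {
    static byte[] readStringBody(ByteBuffer wire) {
        int size = wire.getInt();  // garbage bytes decode to an arbitrary size
        return new byte[size];     // may throw OutOfMemoryError: Java heap space
    }

    public static void main(String[] args) {
        // "GET " interpreted as a big-endian int is 0x47455420,
        // i.e. a request to allocate roughly 1.2 GB in one shot.
        ByteBuffer wire = ByteBuffer.wrap("GET / HTTP/1.0".getBytes());
        System.out.println(wire.getInt()); // prints 1195725856
    }
}
```

On a 1 GB heap like mine, a single allocation like that would fail immediately with the exact "Java heap space" error above, without the process itself dying.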

I have a fairly small server (VM) 1GB RAM and 1 CPU  and using HCatalog Version 0.4, Hive 0.9 (patched HIVE-3008) with Thrift 0.7 (patched for THRIFT-1438)

Has anyone seen this before?

Thanks
- A