hive-dev mailing list archives

From "apachehadoop (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HIVE-6212) Using Presto-0.56 for sql query,but HiveServer the console print java.lang.OutOfMemoryError: Java heap space
Date Thu, 16 Jan 2014 08:51:21 GMT

     [ https://issues.apache.org/jira/browse/HIVE-6212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]

apachehadoop updated HIVE-6212:
-------------------------------

    Description: 
Hi friends:
Since I can't open the page https://groups.google.com/forum/#!forum/presto-users, I am asking my question here.
I started HiveServer and the Presto server on one machine with the commands below:
hive --service hiveserver -p 9083
./launcher run
When I run the Presto CLI with ./presto --server localhost:9083 --catalog hive --schema default, the console shows the presto:default> prompt, but entering show tables prints Error running command: java.nio.channels.ClosedChannelException,
and the HiveServer console prints the following:
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Exception in thread "pool-1-thread-1" java.lang.OutOfMemoryError: Java heap space
    at org.apache.thrift.protocol.TBinaryProtocol.readStringBody(TBinaryProtocol.java:353)
    at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:215)
    at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:27)
    at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:244)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
    at java.lang.Thread.run(Thread.java:662)
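One possible reading of the readMessageBegin/readStringBody frames above (a hypothesis, not confirmed in this report): the Presto CLI is an HTTP client, so pointing it at the Thrift port 9083 would make HiveServer's TBinaryProtocol interpret the first four bytes of an HTTP request line as a string length and try to allocate a buffer that large. A small sketch of the arithmetic:

```python
import struct

# Hypothetical scenario: an HTTP client connects to a Thrift binary-protocol
# port. Thrift reads the first 4 bytes of the request ("GET ...") as a
# big-endian 32-bit length before allocating a buffer of that size.
first_four_bytes = b"GET "
bogus_length = struct.unpack(">i", first_four_bytes)[0]
print(bogus_length)  # 1195725856 -- about 1.1 GiB, enough for a heap-space OOM
```

If that hypothesis holds, the ClosedChannelException on the CLI side and the OutOfMemoryError on the HiveServer side would be two symptoms of the same mistaken port, and targeting Presto's own HTTP port from config.properties (./presto --server localhost:8080 ...) instead of 9083 might be worth trying.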

My configuration files are below:
node.properties
node.environment=production
node.id=cc4a1bbf-5b98-4935-9fde-2cf1c98e8774
node.data-dir=/home/hadoop/cloudera-5.0.0/presto-0.56/presto/data

config.properties
coordinator=true
datasources=jmx
http-server.http.port=8080
presto-metastore.db.type=h2
presto-metastore.db.filename=/home/hadoop/cloudera-5.0.0/presto-0.56/presto/db/MetaStore
task.max-memory=1GB
discovery-server.enabled=true
discovery.uri=http://slave4:8080

jvm.config
-server
-Xmx16G
-XX:+UseConcMarkSweepGC
-XX:+ExplicitGCInvokesConcurrent
-XX:+CMSClassUnloadingEnabled
-XX:+AggressiveOpts
-XX:+HeapDumpOnOutOfMemoryError
-XX:OnOutOfMemoryError=kill -9 %p
-XX:PermSize=150M
-XX:MaxPermSize=150M
-XX:ReservedCodeCacheSize=150M
-Xbootclasspath/p:/home/hadoop/cloudera-5.0.0/presto-0.56/presto-server-0.56/lib/floatingdecimal-0.1.jar

log.properties
com.facebook.presto=DEBUG

catalog/hive.properties
connector.name=hive-cdh4
hive.metastore.uri=thrift://slave4:9083

Hadoop environment: CDH5 + CDH5-Hive-0.11 + Presto-0.56
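A quick way to see the suspicious part of the setup described above is to compare the ports involved (values copied from this report):

```python
# Ports copied from the commands and config files in this report.
cli_target_port = 9083       # ./presto --server localhost:9083
presto_http_port = 8080      # http-server.http.port in config.properties
hive_metastore_port = 9083   # hive.metastore.uri in catalog/hive.properties

# The CLI target matches the Hive metastore's Thrift port rather than the
# Presto coordinator's HTTP port.
print(cli_target_port == hive_metastore_port)  # True
print(cli_target_port == presto_http_port)     # False
```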

Finally, I increased the Java heap size for the Hive metastore, but it still gives me the same error. Please help me check whether this is a bug in CDH5; I am out of ideas.

Please help me, thanks.


  was: (same text as the updated description, except that hive.metastore.uri in catalog/hive.properties previously pointed at master instead of slave4:)
hive.metastore.uri=thrift://master:9083


> Using Presto-0.56 for sql query,but HiveServer the console print java.lang.OutOfMemoryError: Java heap space
> ------------------------------------------------------------------------------------------------------------
>
>                 Key: HIVE-6212
>                 URL: https://issues.apache.org/jira/browse/HIVE-6212
>             Project: Hive
>          Issue Type: Test
>          Components: HiveServer2
>    Affects Versions: 0.11.0
>         Environment: HADOOP ENVIRONMENT IS CDH5+CDH5-HIVE-0.11+PRESTO-0.56
>            Reporter: apachehadoop
>             Fix For: 0.11.0
>



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
