hive-user mailing list archives

From "Mich Talebzadeh" <m...@peridale.co.uk>
Subject RE: Problem with getting start of Hive on Spark
Date Wed, 02 Dec 2015 09:44:33 GMT
Hi Link,

 

Thanks for the comments.

 

This is what is in my hive-exec-1.2.1.jar file, which I have added to
$SPARK_HOME/lib:

 

hduser@rhes564::/usr/lib/hive/lib> jar tvf hive-exec-1.2.1.jar | grep org.apache.hive.spark.client.Job

  1215 Fri Jun 19 02:04:52 BST 2015 org/apache/hive/spark/client/JobHandle.class
  1188 Fri Jun 19 02:04:52 BST 2015 org/apache/hive/spark/client/JobContext.class
   941 Fri Jun 19 02:04:52 BST 2015 org/apache/hive/spark/client/JobHandle$Listener.class
  1381 Fri Jun 19 02:04:52 BST 2015 org/apache/hive/spark/client/JobHandle$State.class
  3170 Fri Jun 19 02:04:52 BST 2015 org/apache/hive/spark/client/JobContextImpl.class
   724 Fri Jun 19 02:04:52 BST 2015 org/apache/hive/spark/client/Job.class
  1141 Fri Jun 19 02:04:52 BST 2015 org/apache/hive/spark/client/JobHandleImpl$1.class
  7839 Fri Jun 19 02:04:52 BST 2015 org/apache/hive/spark/client/JobHandleImpl.class
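
The same check can be run against the Spark assembly jar that the driver itself loads, to see whether the class is visible on that side as well (a sketch only; the assembly path is the one shown in the SLF4J output later in this thread):

hduser@rhes564::/usr/lib/spark/lib> jar tvf spark-assembly-1.5.2-hadoop2.6.0.jar | grep org.apache.hive.spark.client.Job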

 

But I am still getting:

 

15/12/02 09:36:42 [RPC-Handler-3]: WARN rpc.RpcDispatcher: Received error message: io.netty.handler.codec.DecoderException: java.lang.NoClassDefFoundError: org/apache/hive/spark/client/Job

 

I am no expert on Java, but I gather there must be a workaround for this.
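
One possible workaround (a sketch only, not verified here) would be to put the jar on the Spark driver and executor classpaths explicitly, for example in $SPARK_HOME/conf/spark-defaults.conf, rather than relying on whatever the assembly happens to contain:

spark.driver.extraClassPath     /usr/lib/hive/lib/hive-exec-1.2.1.jar
spark.executor.extraClassPath   /usr/lib/hive/lib/hive-exec-1.2.1.jar

Both properties are standard Spark 1.x settings; the jar path is the one shown above.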

 

Regards,

 

 

Mich Talebzadeh

 

Sybase ASE 15 Gold Medal Award 2008

A Winning Strategy: Running the most Critical Financial Data on ASE 15

http://login.sybase.com/files/Product_Overviews/ASE-Winning-Strategy-091908.pdf

Author of the books "A Practitioner's Guide to Upgrading to Sybase ASE 15",
ISBN 978-0-9563693-0-7. 

co-author "Sybase Transact SQL Guidelines Best Practices", ISBN
978-0-9759693-0-4

Publications due shortly:

Complex Event Processing in Heterogeneous Environments, ISBN:
978-0-9563693-3-8

Oracle and Sybase, Concepts and Contrasts, ISBN: 978-0-9563693-1-4, volume
one out shortly

 

http://talebzadehmich.wordpress.com

 

NOTE: The information in this email is proprietary and confidential. This
message is for the designated recipient only, if you are not the intended
recipient, you should destroy it immediately. Any information in this
message shall not be understood as given or endorsed by Peridale Technology
Ltd, its subsidiaries or their employees, unless expressly so stated. It is
the responsibility of the recipient to ensure that this email is virus free,
therefore neither Peridale Ltd, its subsidiaries nor their employees accept
any responsibility.

 

From: Link Qian [mailto:fastupload@outlook.com] 
Sent: 02 December 2015 06:37
To: user@hive.apache.org
Subject: RE: Problem with getting start of Hive on Spark

 

Mich,

Spark SQL is oriented towards programmers and developers, whereas HQL was designed for data analysts who prefer it. When Spark is set as the Hive execution engine, it can accelerate batch analysis jobs: in the backend the HQL is parsed, translated into a job execution plan, and finally submitted to the Spark cluster for execution.

After quickly reading through your thread, it shows that you have encountered the same situation as mine, with the same log.
I have made sure that the org.apache.hive.spark.client.Job class is in the hive-exec-1.2.1.jar file that is added to the Spark context, but the Spark job log reports that the environment is missing the class. It's strange to me. In another thread, Jimmy said that Hive on Spark only supports Spark 1.3.1.
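
A quick way to check which Spark build is actually in play (a sketch; the paths are assumptions based on this thread) is to compare the assembly jars on the Hive and Spark sides and ask Spark for its version:

ls /usr/lib/hive/lib/spark-assembly-*.jar
ls /usr/lib/spark/lib/spark-assembly-*.jar
/usr/lib/spark/bin/spark-submit --version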

----------------------------------------------------------------------------
----------------------------------------------------------------------------
15/11/30 07:31:36 [main]: INFO status.SparkJobMonitor: state = SENT
15/11/30 07:31:37 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
07:31:37 INFO yarn.Client: Application report for
application_1448886638370_0001 (state: RUNNING)
15/11/30 07:31:37 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
07:31:37 INFO yarn.Client: 
15/11/30 07:31:37 [stderr-redir-1]: INFO client.SparkClientImpl:      client
token: N/A
15/11/30 07:31:37 [stderr-redir-1]: INFO client.SparkClientImpl:
diagnostics: N/A
15/11/30 07:31:37 [stderr-redir-1]: INFO client.SparkClientImpl:
ApplicationMaster host: 192.168.1.12
15/11/30 07:31:37 [stderr-redir-1]: INFO client.SparkClientImpl:
ApplicationMaster RPC port: 0
15/11/30 07:31:37 [stderr-redir-1]: INFO client.SparkClientImpl:      queue:
default
15/11/30 07:31:37 [stderr-redir-1]: INFO client.SparkClientImpl:      start
time: 1448886649489
15/11/30 07:31:37 [stderr-redir-1]: INFO client.SparkClientImpl:      final
status: UNDEFINED
15/11/30 07:31:37 [stderr-redir-1]: INFO client.SparkClientImpl:
tracking URL:
http://namenode.localdomain:8088/proxy/application_1448886638370_0001/
15/11/30 07:31:37 [stderr-redir-1]: INFO client.SparkClientImpl:      user:
hadoop
15/11/30 07:31:37 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
07:31:37 INFO cluster.YarnClientSchedulerBackend: Application
application_1448886638370_0001 has started running.
15/11/30 07:31:37 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
07:31:37 INFO util.Utils: Successfully started service
'org.apache.spark.network.netty.NettyBlockTransferService' on port 51326.
15/11/30 07:31:37 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
07:31:37 INFO netty.NettyBlockTransferService: Server created on 51326
15/11/30 07:31:37 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
07:31:37 INFO storage.BlockManagerMaster: Trying to register BlockManager
15/11/30 07:31:37 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
07:31:37 INFO storage.BlockManagerMasterEndpoint: Registering block manager
192.168.1.10:51326 <http://192.168.1.10:51326>  with 66.8 MB RAM,
BlockManagerId(driver, 192.168.1.10, 51326)
15/11/30 07:31:37 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
07:31:37 INFO storage.BlockManagerMaster: Registered BlockManager
state = SENT
15/11/30 07:31:37 [main]: INFO status.SparkJobMonitor: state = SENT
15/11/30 07:31:37 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
07:31:37 INFO cluster.YarnClientSchedulerBackend: SchedulerBackend is ready
for scheduling beginning after waiting maxRegisteredResourcesWaitingTime:
30000(ms)
15/11/30 07:31:37 [RPC-Handler-3]: DEBUG rpc.KryoMessageCodec: Decoded
message of type org.apache.hive.spark.client.rpc.Rpc$MessageHeader (5 bytes)
15/11/30 07:31:37 [RPC-Handler-3]: DEBUG rpc.KryoMessageCodec: Decoded
message of type java.lang.Integer (2 bytes)
15/11/30 07:31:37 [RPC-Handler-3]: DEBUG rpc.RpcDispatcher: [ClientProtocol]
Received RPC message: type=REPLY id=0 payload=java.lang.Integer
15/11/30 07:31:37 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
07:31:37 INFO spark.SparkContext: Added JAR
file:/home/hadoop/apache-hive-1.2.1-bin/lib/hive-exec-1.2.1.jar at
http://192.168.1.10:41276/jars/hive-exec-1.2.1.jar with timestamp
1448886697575
15/11/30 07:31:37 [RPC-Handler-3]: DEBUG rpc.KryoMessageCodec: Decoded
message of type org.apache.hive.spark.client.rpc.Rpc$MessageHeader (5 bytes)
15/11/30 07:31:37 [RPC-Handler-3]: DEBUG rpc.KryoMessageCodec: Decoded
message of type org.apache.hive.spark.client.rpc.Rpc$NullMessage (2 bytes)
15/11/30 07:31:37 [RPC-Handler-3]: DEBUG rpc.RpcDispatcher: [ClientProtocol]
Received RPC message: type=REPLY id=1
payload=org.apache.hive.spark.client.rpc.Rpc$NullMessage
15/11/30 07:31:37 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
07:31:37 INFO rpc.RpcDispatcher: [DriverProtocol] Closing channel due to
exception in pipeline (java.lang.NoClassDefFoundError:
org/apache/hive/spark/client/Job).
15/11/30 07:31:37 [RPC-Handler-3]: DEBUG rpc.KryoMessageCodec: Decoded
message of type org.apache.hive.spark.client.rpc.Rpc$MessageHeader (5 bytes)
15/11/30 07:31:37 [RPC-Handler-3]: DEBUG rpc.KryoMessageCodec: Decoded
message of type java.lang.String (3720 bytes)
15/11/30 07:31:37 [RPC-Handler-3]: DEBUG rpc.RpcDispatcher: [ClientProtocol]
Received RPC message: type=ERROR id=2 payload=java.lang.String
15/11/30 07:31:37 [RPC-Handler-3]: WARN rpc.RpcDispatcher: Received error message: io.netty.handler.codec.DecoderException: java.lang.NoClassDefFoundError: org/apache/hive/spark/client/Job
    at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:358)
    at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:230)
    at io.netty.handler.codec.ByteToMessageCodec.channelRead(ByteToMessageCodec.java:103)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
    at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
    at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NoClassDefFoundError: org/apache/hive/spark/client/Job
    at java.lang.ClassLoader.defineClass1(Native Method)
    at java.lang.ClassLoader.defineClass(ClassLoader.java:760)
    at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
    at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
    at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:411)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:348)
    at org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:136)
    at org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:115)
    at org.apache.hive.com.esotericsoftware.kryo.Kryo.readClass(Kryo.java:656)
    at org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:99)
    at org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507)
    at org.apache.hive.com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:776)
    at org.apache.hive.spark.client.rpc.KryoMessageCodec.decode(KryoMessageCodec.java:96)
    at io.netty.handler.codec.ByteToMessageCodec$1.decode(ByteToMessageCodec.java:42)
    at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:327)
    ... 15 more
Caused by: java.lang.ClassNotFoundException: org.apache.hive.spark.client.Job
    at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    ... 39 more
.
15/11/30 07:31:37 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
07:31:37 WARN client.RemoteDriver: Shutting down driver because RPC channel
was closed.
15/11/30 07:31:37 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
07:31:37 INFO client.RemoteDriver: Shutting down remote driver.
15/11/30 07:31:37 [RPC-Handler-3]: WARN client.SparkClientImpl: Client RPC
channel closed unexpectedly.



Best Regards,
Link

  _____  

Date: Tue, 1 Dec 2015 21:53:51 -0800
Subject: Re: Problem with getting start of Hive on Spark
From: xzhang@cloudera.com
To: user@hive.apache.org

Mich,

My understanding of Spark is that it's still a batch processing engine, even though under the covers it tries to use as much memory as possible. Spark executors are the task runners, which are launched on demand by a Spark application and will be gone once the application finishes. Therefore, any datasets (such as those for tables) will eventually need to be materialized. Also, the input datasets come from some persistent storage, so they also need to be deserialized. In this sense, it is no different from MR. However, it does more than MR in that multiple shuffles can happen in one job without the need to store temporary data on disk (making much use of memory). Therefore, it's far from an in-memory database. Of course, using memory means faster processing, but the benefits shrink when dealing with larger datasets, as you figured.

The benefit of Hive on Spark is that Hive users can take advantage of Spark without leaving Hive. In fact, users shouldn't even need to know much about Spark. In this way, users get all the features and nice things in Hive. On the other hand, SparkSQL tries to match a given past version of Hive but still provides only a subset of Hive's functionality. It needs Hive's metastore, but users have to leave Hive and learn another tool in order to use it. Lastly, Hive is poised for enterprise use cases, where data and services are secured and shared. This is not the case for SparkSQL, which is more like a client tool and hard to manage in an enterprise deployment.

Anyway, it's up to your use case to determine what you want to use. I hope
the above discussion gives you some sense of the major differences between
the two.

Thanks,

Xuefu

 

 

On Tue, Dec 1, 2015 at 10:58 AM, Mich Talebzadeh <mich@peridale.co.uk> wrote:

Hi Xuefu,

 

I had an incorrect entry for the hostname in the /etc/hosts file. It was using localhost (127.0.0.1) as opposed to the correct IP address for the hostname (50.140.197.217).
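
For reference, the corrected /etc/hosts entries look roughly like this (a sketch based on the hostname and IP quoted in this thread):

127.0.0.1        localhost
50.140.197.217   rhes564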

 

So far so good.

 

There is another wider question if I may:

 

If spark-sql can be used on Hive tables, is there any reason why we need to use Spark as an execution engine for Hive? My understanding is that Spark is an in-memory offering; in other words it behaves like an in-memory database, assuming there is enough memory, but crucially with Hive the table is created as a normal Hive table in an HDFS directory, say /user/hive/warehouse/asehadoop.db/tt. All the queries on that table can then be carried out using Spark, thus taking advantage of faster in-memory computing (as opposed to taking the computing to the blocks/disks where the data is stored, as is the case with MR).

 

The second point is that I guess for very large tables, with a limit on memory, the advantage of Spark will be lost. So one might as well use Hive for all DML activities and use Spark for DQs.

 

If we had Hive running on the Spark execution engine, it seems all that is expected is that the queries are sent to the Spark engine via a Spark client, and that sounds like a lot of RPC calls (like distributed queries in an RDBMS). They will never be efficient.

 

Please correct me if this is a lot of hot air.

 

Thanks,

 

 

Mich Talebzadeh

 


 

From: Xuefu Zhang [mailto:xzhang@cloudera.com]

Sent: 01 December 2015 17:11
To: user@hive.apache.org
Subject: Re: Problem with getting start of Hive on Spark

 

Mich,

I'm glad that you solved the network issue. Would you mind sharing what you
did?

It seems that you have now got the same problem as Link did. I don't think it's a connectivity issue; rather, there is a class loading issue. The Spark application (remote driver) is launched successfully, but when a job is submitted via the remote driver, the request fails to deserialize due to the missing class. Because of this, the RPC channel between Hive and the remote driver is closed. I think the root cause might be a version mismatch: Hive 1.2.1 works with Spark 1.3.1. If you want to work with Spark 1.5, you might want to try Hive 2.0 instead.
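
If you want to try the matching versions first, one way (a sketch only; the 1.3.1 install path below is hypothetical) is to put a Spark 1.3.1 build alongside the existing one and point Hive at it before running the query:

-- hypothetical Spark 1.3.1 install location
set spark.home=/usr/lib/spark-1.3.1;
set hive.execution.engine=spark;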

Thanks,

Xuefu

 

On Tue, Dec 1, 2015 at 8:00 AM, Mich Talebzadeh <mich@peridale.co.uk> wrote:

Hi Xuefu,

 

I resolved the issue of the IP address so that is good news from my side.

 

Now this is the process that causes the full error. I am running Hive in debug mode. Again, if I get the bandwidth I will dig deeper, so to speak. On the face of it, it looks like some intermittent connectivity issue?

 

cat hive_on_spark.hql

set spark.home=/usr/lib/spark;

set hive.execution.engine=spark;

set spark.master=spark://rhes564:7077;

set spark.eventLog.enabled=true;

set spark.eventLog.dir=/usr/lib/spark/logs;

set spark.executor.memory=512m;

set spark.serializer=org.apache.spark.serializer.KryoSerializer;

use asehadoop;

select count(1) from t;

exit;

 

hduser@rhes564::/home/hduser/dba/bin> /usr/lib/hive/bin/hive --hiveconf
hive.root.logger=DEBUG,console -f hive_on_spark.hql

SLF4J: Class path contains multiple SLF4J bindings.

SLF4J: Found binding in
[jar:file:/usr/lib/spark/lib/spark-assembly-1.5.2-hadoop2.6.0.jar!/org/slf4j
/impl/StaticLoggerBinder.class]

SLF4J: Found binding in
[jar:file:/home/hduser/hadoop-2.6.0/share/hadoop/common/lib/slf4j-log4j12-1.
7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
explanation.

SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]

SLF4J: Class path contains multiple SLF4J bindings.

SLF4J: Found binding in
[jar:file:/usr/lib/spark/lib/spark-assembly-1.5.2-hadoop2.6.0.jar!/org/slf4j
/impl/StaticLoggerBinder.class]

SLF4J: Found binding in
[jar:file:/home/hduser/hadoop-2.6.0/share/hadoop/common/lib/slf4j-log4j12-1.
7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
explanation.

SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]

15/11/30 16:57:56 [main]: DEBUG common.LogUtils: Using hive-site.xml found
on CLASSPATH at /usr/lib/hive/conf/hive-site.xml

15/11/30 16:57:57 [main]: DEBUG session.SessionState: SessionState user:
null

 

Logging initialized using configuration in
jar:file:/usr/lib/hive/lib/hive-common-1.2.1.jar!/hive-log4j.properties

15/11/30 16:57:57 [main]: INFO SessionState:

Logging initialized using configuration in
jar:file:/usr/lib/hive/lib/hive-common-1.2.1.jar!/hive-log4j.properties

15/11/30 16:57:57 [main]: DEBUG parse.VariableSubstitution: Substitution is
on: hive

15/11/30 16:57:57 [main]: DEBUG lib.MutableMetricsFactory: field
org.apache.hadoop.metrics2.lib.MutableRate
org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with
annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time,
about=, value=[Rate of successful kerberos logins and latency
(milliseconds)], always=false, type=DEFAULT, sampleName=Ops)

15/11/30 16:57:57 [main]: DEBUG lib.MutableMetricsFactory: field
org.apache.hadoop.metrics2.lib.MutableRate
org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure with
annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time,
about=, value=[Rate of failed kerberos logins and latency (milliseconds)],
always=false, type=DEFAULT, sampleName=Ops)

15/11/30 16:57:57 [main]: DEBUG lib.MutableMetricsFactory: field
org.apache.hadoop.metrics2.lib.MutableRate
org.apache.hadoop.security.UserGroupInformation$UgiMetrics.getGroups with
annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time,
about=, value=[GetGroups], always=false, type=DEFAULT, sampleName=Ops)

15/11/30 16:57:57 [main]: DEBUG impl.MetricsSystemImpl: UgiMetrics, User and
group related metrics

15/11/30 16:57:57 [main]: DEBUG security.Groups:  Creating new Groups object

15/11/30 16:57:57 [main]: DEBUG util.NativeCodeLoader: Trying to load the
custom-built native-hadoop library...

15/11/30 16:57:57 [main]: DEBUG util.NativeCodeLoader: Failed to load
native-hadoop with error: java.lang.UnsatisfiedLinkError:
/home/hduser/hadoop-2.6.0/lib/native/libhadoop.so.1.0.0: /lib64/libc.so.6:
version `GLIBC_2.12' not found (required by
/home/hduser/hadoop-2.6.0/lib/native/libhadoop.so.1.0.0)

15/11/30 16:57:57 [main]: DEBUG util.NativeCodeLoader:
java.library.path=/home/hduser/hadoop-2.6.0/lib/native

15/11/30 16:57:57 [main]: WARN util.NativeCodeLoader: Unable to load
native-hadoop library for your platform... using builtin-java classes where
applicable

15/11/30 16:57:57 [main]: DEBUG util.PerformanceAdvisory: Falling back to
shell based

15/11/30 16:57:57 [main]: DEBUG
security.JniBasedUnixGroupsMappingWithFallback: Group mapping
impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping

15/11/30 16:57:57 [main]: DEBUG security.Groups: Group mapping
impl=org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback;
cacheTimeout=300000; warningDeltaMs=5000

15/11/30 16:57:57 [main]: DEBUG security.UserGroupInformation: hadoop login

15/11/30 16:57:57 [main]: DEBUG security.UserGroupInformation: hadoop login
commit

15/11/30 16:57:57 [main]: DEBUG security.UserGroupInformation: using local
user:UnixPrincipal: hduser

15/11/30 16:57:57 [main]: DEBUG security.UserGroupInformation: Using user:
"UnixPrincipal: hduser" with name hduser

15/11/30 16:57:57 [main]: DEBUG security.UserGroupInformation: User entry:
"hduser"

15/11/30 16:57:57 [main]: DEBUG security.UserGroupInformation: UGI
loginUser:hduser (auth:SIMPLE)

15/11/30 16:57:57 [main]: INFO hive.metastore: Trying to connect to
metastore with URI thrift://localhost:9083

15/11/30 16:57:57 [main]: DEBUG security.Groups: Returning fetched groups
for 'hduser'

15/11/30 16:57:57 [main]: INFO hive.metastore: Connected to metastore.

15/11/30 16:57:57 [main]: DEBUG : address: rhes564/50.140.197.217
<http://50.140.197.217>  isLoopbackAddress: false, with host 50.140.197.217
rhes564

15/11/30 16:57:57 [main]: DEBUG hdfs.BlockReaderLocal:
dfs.client.use.legacy.blockreader.local = false

15/11/30 16:57:57 [main]: DEBUG hdfs.BlockReaderLocal:
dfs.client.read.shortcircuit = false

15/11/30 16:57:57 [main]: DEBUG hdfs.BlockReaderLocal:
dfs.client.domain.socket.data.traffic = false

15/11/30 16:57:57 [main]: DEBUG hdfs.BlockReaderLocal:
dfs.domain.socket.path =

15/11/30 16:57:57 [main]: DEBUG hdfs.DFSClient: No KeyProvider found.

15/11/30 16:57:57 [main]: DEBUG retry.RetryUtils: multipleLinearRandomRetry
= null

15/11/30 16:57:57 [main]: DEBUG ipc.Server: rpcKind=RPC_PROTOCOL_BUFFER,
rpcRequestWrapperClass=class
org.apache.hadoop.ipc.ProtobufRpcEngine$RpcRequestWrapper,
rpcInvoker=org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker
@5ccc0017
<mailto:rpcInvoker=org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRp
cInvoker@5ccc0017> 

15/11/30 16:57:57 [main]: DEBUG ipc.Client: getting client out of cache:
org.apache.hadoop.ipc.Client@15bf218e
<mailto:org.apache.hadoop.ipc.Client@15bf218e> 

15/11/30 16:57:57 [main]: DEBUG util.PerformanceAdvisory: Both short-circuit
local reads and UNIX domain socket are disabled.

15/11/30 16:57:57 [main]: DEBUG sasl.DataTransferSaslUtil:
DataTransferProtocol not using SaslPropertiesResolver, no QOP found in
configuration for dfs.data.transfer.protection

15/11/30 16:57:57 [main]: DEBUG ipc.Client: The ping interval is 60000 ms.

15/11/30 16:57:57 [main]: DEBUG ipc.Client: Connecting to
rhes564/50.140.197.217:9000 <http://50.140.197.217:9000> 

15/11/30 16:57:57 [IPC Client (1997160570) connection to
rhes564/50.140.197.217:9000 <http://50.140.197.217:9000>  from hduser]:
DEBUG ipc.Client: IPC Client (1997160570) connection to
rhes564/50.140.197.217:9000 <http://50.140.197.217:9000>  from hduser:
starting, having connections 1

15/11/30 16:57:57 [IPC Parameter Sending Thread #0]: DEBUG ipc.Client: IPC
Client (1997160570) connection to rhes564/50.140.197.217:9000
<http://50.140.197.217:9000>  from hduser sending #0

15/11/30 16:57:58 [IPC Client (1997160570) connection to
rhes564/50.140.197.217:9000 <http://50.140.197.217:9000>  from hduser]:
DEBUG ipc.Client: IPC Client (1997160570) connection to
rhes564/50.140.197.217:9000 <http://50.140.197.217:9000>  from hduser got
value #0

15/11/30 16:57:58 [main]: DEBUG ipc.ProtobufRpcEngine: Call: getFileInfo
took 44ms

15/11/30 16:57:58 [IPC Parameter Sending Thread #0]: DEBUG ipc.Client: IPC
Client (1997160570) connection to rhes564/50.140.197.217:9000
<http://50.140.197.217:9000>  from hduser sending #1

15/11/30 16:57:58 [IPC Client (1997160570) connection to
rhes564/50.140.197.217:9000 <http://50.140.197.217:9000>  from hduser]:
DEBUG ipc.Client: IPC Client (1997160570) connection to
rhes564/50.140.197.217:9000 <http://50.140.197.217:9000>  from hduser got
value #1

15/11/30 16:57:58 [main]: DEBUG ipc.ProtobufRpcEngine: Call: getFileInfo
took 1ms

15/11/30 16:57:58 [main]: DEBUG session.SessionState: HDFS root scratch dir:
/tmp/hive with schema null, permission: rwx-wx-wx

15/11/30 16:57:58 [IPC Parameter Sending Thread #0]: DEBUG ipc.Client: IPC
Client (1997160570) connection to rhes564/50.140.197.217:9000
<http://50.140.197.217:9000>  from hduser sending #2

15/11/30 16:57:58 [IPC Client (1997160570) connection to
rhes564/50.140.197.217:9000 <http://50.140.197.217:9000>  from hduser]:
DEBUG ipc.Client: IPC Client (1997160570) connection to
rhes564/50.140.197.217:9000 <http://50.140.197.217:9000>  from hduser got
value #2

15/11/30 16:57:58 [main]: DEBUG ipc.ProtobufRpcEngine: Call: getFileInfo
took 1ms

15/11/30 16:57:58 [main]: INFO session.SessionState: Created local
directory: /tmp/hive/dec45671-0b5f-4496-8b3a-66f097d0c6f0_resources

15/11/30 16:57:58 [IPC Parameter Sending Thread #0]: DEBUG ipc.Client: IPC
Client (1997160570) connection to rhes564/50.140.197.217:9000
<http://50.140.197.217:9000>  from hduser sending #3

15/11/30 16:57:58 [IPC Client (1997160570) connection to
rhes564/50.140.197.217:9000 <http://50.140.197.217:9000>  from hduser]:
DEBUG ipc.Client: IPC Client (1997160570) connection to
rhes564/50.140.197.217:9000 <http://50.140.197.217:9000>  from hduser got
value #3

15/11/30 16:57:58 [main]: DEBUG ipc.ProtobufRpcEngine: Call: getFileInfo
took 0ms

15/11/30 16:57:58 [main]: DEBUG hdfs.DFSClient:
/tmp/hive/hduser/dec45671-0b5f-4496-8b3a-66f097d0c6f0: masked=rwx------

15/11/30 16:57:58 [IPC Parameter Sending Thread #0]: DEBUG ipc.Client: IPC
Client (1997160570) connection to rhes564/50.140.197.217:9000
<http://50.140.197.217:9000>  from hduser sending #4

15/11/30 16:57:58 [IPC Client (1997160570) connection to
rhes564/50.140.197.217:9000 <http://50.140.197.217:9000>  from hduser]:
DEBUG ipc.Client: IPC Client (1997160570) connection to
rhes564/50.140.197.217:9000 <http://50.140.197.217:9000>  from hduser got
value #4

15/11/30 16:57:58 [main]: DEBUG ipc.ProtobufRpcEngine: Call: mkdirs took 2ms

15/11/30 16:57:58 [main]: INFO session.SessionState: Created HDFS directory:
/tmp/hive/hduser/dec45671-0b5f-4496-8b3a-66f097d0c6f0

15/11/30 16:57:58 [IPC Parameter Sending Thread #0]: DEBUG ipc.Client: IPC
Client (1997160570) connection to rhes564/50.140.197.217:9000
<http://50.140.197.217:9000>  from hduser sending #5

15/11/30 16:57:58 [IPC Client (1997160570) connection to
rhes564/50.140.197.217:9000 <http://50.140.197.217:9000>  from hduser]:
DEBUG ipc.Client: IPC Client (1997160570) connection to
rhes564/50.140.197.217:9000 <http://50.140.197.217:9000>  from hduser got
value #5

15/11/30 16:57:58 [main]: DEBUG ipc.ProtobufRpcEngine: Call: getFileInfo
took 1ms

15/11/30 16:57:58 [main]: INFO session.SessionState: Created local
directory: /tmp/hive/dec45671-0b5f-4496-8b3a-66f097d0c6f0

15/11/30 16:57:58 [IPC Parameter Sending Thread #0]: DEBUG ipc.Client: IPC
Client (1997160570) connection to rhes564/50.140.197.217:9000
<http://50.140.197.217:9000>  from hduser sending #6

15/11/30 16:57:58 [IPC Client (1997160570) connection to
rhes564/50.140.197.217:9000 <http://50.140.197.217:9000>  from hduser]:
DEBUG ipc.Client: IPC Client (1997160570) connection to
rhes564/50.140.197.217:9000 <http://50.140.197.217:9000>  from hduser got
value #6

15/11/30 16:57:58 [main]: DEBUG ipc.ProtobufRpcEngine: Call: getFileInfo
took 0ms

15/11/30 16:57:58 [main]: DEBUG hdfs.DFSClient:
/tmp/hive/hduser/dec45671-0b5f-4496-8b3a-66f097d0c6f0/_tmp_space.db:
masked=rwx------

15/11/30 16:57:58 [IPC Parameter Sending Thread #0]: DEBUG ipc.Client: IPC
Client (1997160570) connection to rhes564/50.140.197.217:9000
<http://50.140.197.217:9000>  from hduser sending #7

15/11/30 16:57:58 [IPC Client (1997160570) connection to
rhes564/50.140.197.217:9000 <http://50.140.197.217:9000>  from hduser]:
DEBUG ipc.Client: IPC Client (1997160570) connection to
rhes564/50.140.197.217:9000 <http://50.140.197.217:9000>  from hduser got
value #7

15/11/30 16:57:58 [main]: DEBUG ipc.ProtobufRpcEngine: Call: mkdirs took 1ms

15/11/30 16:57:58 [main]: INFO session.SessionState: Created HDFS directory:
/tmp/hive/hduser/dec45671-0b5f-4496-8b3a-66f097d0c6f0/_tmp_space.db

15/11/30 16:57:58 [IPC Parameter Sending Thread #0]: DEBUG ipc.Client: IPC
Client (1997160570) connection to rhes564/50.140.197.217:9000
<http://50.140.197.217:9000>  from hduser sending #8

15/11/30 16:57:58 [IPC Client (1997160570) connection to
rhes564/50.140.197.217:9000 <http://50.140.197.217:9000>  from hduser]:
DEBUG ipc.Client: IPC Client (1997160570) connection to
rhes564/50.140.197.217:9000 <http://50.140.197.217:9000>  from hduser got
value #8

15/11/30 16:57:58 [main]: DEBUG ipc.ProtobufRpcEngine: Call: getFileInfo
took 0ms

15/11/30 16:57:58 [main]: DEBUG CliDriver: CliDriver inited with classpath
/apps/sybase/sqlserver/1570/jConnect-7_0/classes/jconn4.jar:/apps/sybase/sql
server/1570/jConnect-7_0/classes:/home/hduser/hive/javawork/classes:/usr/lib
/hive/conf:/usr/lib/hive/lib/ST4-4.0.4.jar:/usr/lib/hive/lib/accumulo-core-1
.6.0.jar:/usr/lib/hive/lib/accumulo-fate-1.6.0.jar:/usr/lib/hive/lib/accumul
o-start-1.6.0.jar:/usr/lib/hive/lib/accumulo-trace-1.6.0.jar:/usr/lib/hive/l
ib/activation-1.1.jar:/usr/lib/hive/lib/ant-1.9.1.jar:/usr/lib/hive/lib/ant-
launcher-1.9.1.jar:/usr/lib/hive/lib/antlr-2.7.7.jar:/usr/lib/hive/lib/antlr
-runtime-3.4.jar:/usr/lib/hive/lib/apache-log4j-extras-1.2.17.jar:/usr/lib/h
ive/lib/asm-commons-3.1.jar:/usr/lib/hive/lib/asm-tree-3.1.jar:/usr/lib/hive
/lib/avro-1.7.5.jar:/usr/lib/hive/lib/bonecp-0.8.0.RELEASE.jar:/usr/lib/hive
/lib/calcite-avatica-1.2.0-incubating.jar:/usr/lib/hive/lib/calcite-core-1.2
.0-incubating.jar:/usr/lib/hive/lib/calcite-linq4j-1.2.0-incubating.jar:/usr
/lib/hive/lib/commons-beanutils-1.7.0.jar:/usr/lib/hive/lib/commons-beanutil
s-core-1.8.0.jar:/usr/lib/hive/lib/commons-cli-1.2.jar:/usr/lib/hive/lib/com
mons-codec-1.4.jar:/usr/lib/hive/lib/commons-collections-3.2.1.jar:/usr/lib/
hive/lib/commons-compiler-2.7.6.jar:/usr/lib/hive/lib/commons-compress-1.4.1
.jar:/usr/lib/hive/lib/commons-configuration-1.6.jar:/usr/lib/hive/lib/commo
ns-dbcp-1.4.jar:/usr/lib/hive/lib/commons-digester-1.8.jar:/usr/lib/hive/lib
/commons-httpclient-3.0.1.jar:/usr/lib/hive/lib/commons-io-2.4.jar:/usr/lib/
hive/lib/commons-lang-2.6.jar:/usr/lib/hive/lib/commons-logging-1.1.3.jar:/u
sr/lib/hive/lib/commons-math-2.1.jar:/usr/lib/hive/lib/commons-pool-1.5.4.ja
r:/usr/lib/hive/lib/commons-vfs2-2.0.jar:/usr/lib/hive/lib/curator-client-2.
6.0.jar:/usr/lib/hive/lib/curator-framework-2.6.0.jar:/usr/lib/hive/lib/cura
tor-recipes-2.6.0.jar:/usr/lib/hive/lib/datanucleus-api-jdo-3.2.6.jar:/usr/l
ib/hive/lib/datanucleus-core-3.2.10.jar:/usr/lib/hive/lib/datanucleus-rdbms-
3.2.9.jar:/usr/lib/hive/lib/derby-10.10.2.0.jar:/usr/lib/hive/lib/eigenbase-
properties-1.1.5.jar:/usr/lib/hive/lib/geronimo-annotation_1.0_spec-1.1.1.ja
r:/usr/lib/hive/lib/geronimo-jaspic_1.0_spec-1.0.jar:/usr/lib/hive/lib/geron
imo-jta_1.1_spec-1.1.1.jar:/usr/lib/hive/lib/groovy-all-2.1.6.jar:/usr/lib/h
ive/lib/guava-14.0.1.jar:/usr/lib/hive/lib/hamcrest-core-1.1.jar:/usr/lib/hi
ve/lib/hive-accumulo-handler-1.2.1.jar:/usr/lib/hive/lib/hive-ant-1.2.1.jar:
/usr/lib/hive/lib/hive-beeline-1.2.1.jar:/usr/lib/hive/lib/hive-cli-1.2.1.ja
r:/usr/lib/hive/lib/hive-common-1.2.1.jar:/usr/lib/hive/lib/hive-contrib-1.2
.1.jar:/usr/lib/hive/lib/hive-exec-1.2.1.jar:/usr/lib/hive/lib/hive-hbase-ha
ndler-1.2.1.jar:/usr/lib/hive/lib/hive-hwi-1.2.1.jar:/usr/lib/hive/lib/hive-
jdbc-1.2.1-standalone.jar:/usr/lib/hive/lib/hive-jdbc-1.2.1.jar:/usr/lib/hiv
e/lib/hive-metastore-1.2.1.jar:/usr/lib/hive/lib/hive-serde-1.2.1.jar:/usr/l
ib/hive/lib/hive-service-1.2.1.jar:/usr/lib/hive/lib/hive-shims-0.20S-1.2.1.
jar:/usr/lib/hive/lib/hive-shims-0.23-1.2.1.jar:/usr/lib/hive/lib/hive-shims
-1.2.1.jar:/usr/lib/hive/lib/hive-shims-common-1.2.1.jar:/usr/lib/hive/lib/h
ive-shims-scheduler-1.2.1.jar:/usr/lib/hive/lib/hive-testutils-1.2.1.jar:/us
r/lib/hive/lib/httpclient-4.4.jar:/usr/lib/hive/lib/httpcore-4.4.jar:/usr/li
b/hive/lib/ivy-2.4.0.jar:/usr/lib/hive/lib/janino-2.7.6.jar:/usr/lib/hive/li
b/jcommander-1.32.jar:/usr/lib/hive/lib/jconn4.jar:/usr/lib/hive/lib/jdo-api
-3.0.1.jar:/usr/lib/hive/lib/jetty-all-7.6.0.v20120127.jar:/usr/lib/hive/lib
/jetty-all-server-7.6.0.v20120127.jar:/usr/lib/hive/lib/jline-2.12.jar:/usr/
lib/hive/lib/joda-time-2.5.jar:/usr/lib/hive/lib/jpam-1.1.jar:/usr/lib/hive/
lib/json-20090211.jar:/usr/lib/hive/lib/jsr305-3.0.0.jar:/usr/lib/hive/lib/j
ta-1.1.jar:/usr/lib/hive/lib/junit-4.11.jar:/usr/lib/hive/lib/libfb303-0.9.2
.jar:/usr/lib/hive/lib/libthrift-0.9.2.jar:/usr/lib/hive/lib/log4j-1.2.16.ja
r:/usr/lib/hive/lib/mail-1.4.1.jar:/usr/lib/hive/lib/maven-scm-api-1.4.jar:/
usr/lib/hive/lib/maven-scm-provider-svn-commons-1.4.jar:/usr/lib/hive/lib/ma
ven-scm-provider-svnexe-1.4.jar:/usr/lib/hive/lib/netty-3.7.0.Final.jar:/usr
/lib/hive/lib/ojdbc6.jar:/usr/lib/hive/lib/opencsv-2.3.jar:/usr/lib/hive/lib
/oro-2.0.8.jar:/usr/lib/hive/lib/paranamer-2.3.jar:/usr/lib/hive/lib/parquet
-hadoop-bundle-1.6.0.jar:/usr/lib/hive/lib/pentaho-aggdesigner-algorithm-5.1
.5-jhyde.jar:/usr/lib/hive/lib/plexus-utils-1.5.6.jar:/usr/lib/hive/lib/rege
xp-1.3.jar:/usr/lib/hive/lib/servlet-api-2.5.jar:/usr/lib/hive/lib/snappy-ja
va-1.0.5.jar:/usr/lib/hive/lib/spark-assembly-1.5.2-hadoop2.4.0.jar:/usr/lib
/hive/lib/stax-api-1.0.1.jar:/usr/lib/hive/lib/stringtemplate-3.2.1.jar:/usr
/lib/hive/lib/super-csv-2.2.0.jar:/usr/lib/hive/lib/tempus-fugit-1.1.jar:/us
r/lib/hive/lib/velocity-1.5.jar:/usr/lib/hive/lib/xz-1.0.jar:/usr/lib/hive/l
ib/zookeeper-3.4.6.jar:/usr/lib/spark/lib/spark-assembly-1.5.2-hadoop2.6.0.j
ar::/usr/lib/hbase/conf:/usr/lib/hbase/lib/hbase-protocol-1.0.0.jar:/usr/lib
/hbase/lib/hbase-client-1.0.0.jar:/usr/lib/hbase/lib/htrace-core-3.1.0-incub
ating.jar:/usr/lib/hbase/lib/hbase-server-1.0.0.jar:/usr/lib/hbase/lib/hbase
-hadoop-compat-1.0.0.jar:/usr/lib/hbase/lib/hbase-common-1.0.0.jar:/usr/lib/
hbase/lib/netty-all-4.0.23.Final.jar:/home/hduser/hadoop-2.6.0/contrib/capac
ity-scheduler/*.jar:/home/hduser/hadoop-2.6.0/etc/hadoop:/home/hduser/hadoop
-2.6.0/share/hadoop/common/lib/curator-framework-2.6.0.jar:/home/hduser/hado
op-2.6.0/share/hadoop/common/lib/commons-logging-1.1.3.jar:/home/hduser/hado
op-2.6.0/share/hadoop/common/lib/paranamer-2.3.jar:/home/hduser/hadoop-2.6.0
/share/hadoop/common/lib/avro-1.7.4.jar:/home/hduser/hadoop-2.6.0/share/hado
op/common/lib/stax-api-1.0-2.jar:/home/hduser/hadoop-2.6.0/share/hadoop/comm
on/lib/commons-configuration-1.6.jar:/home/hduser/hadoop-2.6.0/share/hadoop/
common/lib/jasper-compiler-5.5.23.jar:/home/hduser/hadoop-2.6.0/share/hadoop
/common/lib/gson-2.2.4.jar:/home/hduser/hadoop-2.6.0/share/hadoop/common/lib
/java-xmlbuilder-0.4.jar:/home/hduser/hadoop-2.6.0/share/hadoop/common/lib/a
pacheds-i18n-2.0.0-M15.jar:/home/hduser/hadoop-2.6.0/share/hadoop/common/lib
/mockito-all-1.8.5.jar:/home/hduser/hadoop-2.6.0/share/hadoop/common/lib/com
mons-net-3.1.jar:/home/hduser/hadoop-2.6.0/share/hadoop/common/lib/xz-1.0.ja
r:/home/hduser/hadoop-2.6.0/share/hadoop/common/lib/commons-cli-1.2.jar:/hom
e/hduser/hadoop-2.6.0/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/home
/hduser/hadoop-2.6.0/share/hadoop/common/lib/commons-el-1.0.jar:/home/hduser
/hadoop-2.6.0/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:
/home/hduser/hadoop-2.6.0/share/hadoop/common/lib/htrace-core-3.0.4.jar:/hom
e/hduser/hadoop-2.6.0/share/hadoop/common/lib/netty-3.6.2.Final.jar:/home/hd
user/hadoop-2.6.0/share/hadoop/common/lib/jersey-core-1.9.jar:/home/hduser/h
adoop-2.6.0/share/hadoop/common/lib/slf4j-api-1.7.5.jar:/home/hduser/hadoop-
2.6.0/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/home/hduser/hadoop-2.6.0/s
hare/hadoop/common/lib/activation-1.1.jar:/home/hduser/hadoop-2.6.0/share/ha
doop/common/lib/api-asn1-api-1.0.0-M20.jar:/home/hduser/hadoop-2.6.0/share/h
adoop/common/lib/snappy-java-1.0.4.1.jar:/home/hduser/hadoop-2.6.0/share/had
oop/common/lib/curator-recipes-2.6.0.jar:/home/hduser/hadoop-2.6.0/share/had
oop/common/lib/jackson-mapper-asl-1.9.13.jar:/home/hduser/hadoop-2.6.0/share
/hadoop/common/lib/jetty-6.1.26.jar:/home/hduser/hadoop-2.6.0/share/hadoop/c
ommon/lib/jasper-runtime-5.5.23.jar:/home/hduser/hadoop-2.6.0/share/hadoop/c
ommon/lib/commons-httpclient-3.1.jar:/home/hduser/hadoop-2.6.0/share/hadoop/
common/lib/zookeeper-3.4.6.jar:/home/hduser/hadoop-2.6.0/share/hadoop/common
/lib/junit-4.11.jar:/home/hduser/hadoop-2.6.0/share/hadoop/common/lib/log4j-
1.2.17.jar:/home/hduser/hadoop-2.6.0/share/hadoop/common/lib/commons-codec-1
.4.jar:/home/hduser/hadoop-2.6.0/share/hadoop/common/lib/jettison-1.1.jar:/h
ome/hduser/hadoop-2.6.0/share/hadoop/common/lib/commons-collections-3.2.1.ja
r:/home/hduser/hadoop-2.6.0/share/hadoop/common/lib/commons-beanutils-1.7.0.
jar:/home/hduser/hadoop-2.6.0/share/hadoop/common/lib/jetty-util-6.1.26.jar:
/home/hduser/hadoop-2.6.0/share/hadoop/common/lib/hamcrest-core-1.3.jar:/hom
e/hduser/hadoop-2.6.0/share/hadoop/common/lib/commons-math3-3.1.1.jar:/home/
hduser/hadoop-2.6.0/share/hadoop/common/lib/commons-digester-1.8.jar:/home/h
duser/hadoop-2.6.0/share/hadoop/common/lib/jsr305-1.3.9.jar:/home/hduser/had
oop-2.6.0/share/hadoop/common/lib/servlet-api-2.5.jar:/home/hduser/hadoop-2.
6.0/share/hadoop/common/lib/jsp-api-2.1.jar:/home/hduser/hadoop-2.6.0/share/
hadoop/common/lib/hadoop-auth-2.6.0.jar:/home/hduser/hadoop-2.6.0/share/hado
op/common/lib/commons-beanutils-core-1.8.0.jar:/home/hduser/hadoop-2.6.0/sha
re/hadoop/common/lib/curator-client-2.6.0.jar:/home/hduser/hadoop-2.6.0/shar
e/hadoop/common/lib/asm-3.2.jar:/home/hduser/hadoop-2.6.0/share/hadoop/commo
n/lib/httpclient-4.2.5.jar:/home/hduser/hadoop-2.6.0/share/hadoop/common/lib
/hadoop-annotations-2.6.0.jar:/home/hduser/hadoop-2.6.0/share/hadoop/common/
lib/api-util-1.0.0-M20.jar:/home/hduser/hadoop-2.6.0/share/hadoop/common/lib
/slf4j-log4j12-1.7.5.jar:/home/hduser/hadoop-2.6.0/share/hadoop/common/lib/p
rotobuf-java-2.5.0.jar:/home/hduser/hadoop-2.6.0/share/hadoop/common/lib/jet
s3t-0.9.0.jar:/home/hduser/hadoop-2.6.0/share/hadoop/common/lib/httpcore-4.2
.5.jar:/home/hduser/hadoop-2.6.0/share/hadoop/common/lib/jersey-server-1.9.j
ar:/home/hduser/hadoop-2.6.0/share/hadoop/common/lib/commons-compress-1.4.1.
jar:/home/hduser/hadoop-2.6.0/share/hadoop/common/lib/jackson-xc-1.9.13.jar:
/home/hduser/hadoop-2.6.0/share/hadoop/common/lib/xmlenc-0.52.jar:/home/hdus
er/hadoop-2.6.0/share/hadoop/common/lib/commons-io-2.4.jar:/home/hduser/hado
op-2.6.0/share/hadoop/common/lib/commons-lang-2.6.jar:/home/hduser/hadoop-2.
6.0/share/hadoop/common/lib/jsch-0.1.42.jar:/home/hduser/hadoop-2.6.0/share/
hadoop/common/lib/jersey-json-1.9.jar:/home/hduser/hadoop-2.6.0/share/hadoop
/common/lib/jackson-core-asl-1.9.13.jar:/home/hduser/hadoop-2.6.0/share/hado
op/common/lib/jaxb-impl-2.2.3-1.jar:/home/hduser/hadoop-2.6.0/share/hadoop/c
ommon/lib/guava-11.0.2.jar:/home/hduser/hadoop-2.6.0/share/hadoop/common/had
oop-nfs-2.6.0.jar:/home/hduser/hadoop-2.6.0/share/hadoop/common/hadoop-commo
n-2.6.0-tests.jar:/home/hduser/hadoop-2.6.0/share/hadoop/common/hadoop-commo
n-2.6.0.jar:/home/hduser/hadoop-2.6.0/share/hadoop/hdfs:/home/hduser/hadoop-
2.6.0/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/home/hduser/hadoop-2.
6.0/share/hadoop/hdfs/lib/xml-apis-1.3.04.jar:/home/hduser/hadoop-2.6.0/shar
e/hadoop/hdfs/lib/commons-cli-1.2.jar:/home/hduser/hadoop-2.6.0/share/hadoop
/hdfs/lib/commons-el-1.0.jar:/home/hduser/hadoop-2.6.0/share/hadoop/hdfs/lib
/htrace-core-3.0.4.jar:/home/hduser/hadoop-2.6.0/share/hadoop/hdfs/lib/netty
-3.6.2.Final.jar:/home/hduser/hadoop-2.6.0/share/hadoop/hdfs/lib/jersey-core
-1.9.jar:/home/hduser/hadoop-2.6.0/share/hadoop/hdfs/lib/jackson-mapper-asl-
1.9.13.jar:/home/hduser/hadoop-2.6.0/share/hadoop/hdfs/lib/jetty-6.1.26.jar:
/home/hduser/hadoop-2.6.0/share/hadoop/hdfs/lib/jasper-runtime-5.5.23.jar:/h
ome/hduser/hadoop-2.6.0/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/home/hduser/
hadoop-2.6.0/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/home/hduser/hadoop
-2.6.0/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/home/hduser/hadoop-2.6.0
/share/hadoop/hdfs/lib/jsr305-1.3.9.jar:/home/hduser/hadoop-2.6.0/share/hado
op/hdfs/lib/servlet-api-2.5.jar:/home/hduser/hadoop-2.6.0/share/hadoop/hdfs/
lib/xercesImpl-2.9.1.jar:/home/hduser/hadoop-2.6.0/share/hadoop/hdfs/lib/jsp
-api-2.1.jar:/home/hduser/hadoop-2.6.0/share/hadoop/hdfs/lib/asm-3.2.jar:/ho
me/hduser/hadoop-2.6.0/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/home/h
duser/hadoop-2.6.0/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/home/hdu
ser/hadoop-2.6.0/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/home/hduser/ha
doop-2.6.0/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/home/hduser/hadoop-2.6.0/s
hare/hadoop/hdfs/lib/commons-io-2.4.jar:/home/hduser/hadoop-2.6.0/share/hado
op/hdfs/lib/commons-lang-2.6.jar:/home/hduser/hadoop-2.6.0/share/hadoop/hdfs
/lib/jackson-core-asl-1.9.13.jar:/home/hduser/hadoop-2.6.0/share/hadoop/hdfs
/lib/guava-11.0.2.jar:/home/hduser/hadoop-2.6.0/share/hadoop/hdfs/hadoop-hdf
s-2.6.0.jar:/home/hduser/hadoop-2.6.0/share/hadoop/hdfs/hadoop-hdfs-nfs-2.6.
0.jar:/home/hduser/hadoop-2.6.0/share/hadoop/hdfs/hadoop-hdfs-2.6.0-tests.ja
r:/home/hduser/hadoop-2.6.0/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:
/home/hduser/hadoop-2.6.0/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/home/hdu
ser/hadoop-2.6.0/share/hadoop/yarn/lib/guice-3.0.jar:/home/hduser/hadoop-2.6
.0/share/hadoop/yarn/lib/xz-1.0.jar:/home/hduser/hadoop-2.6.0/share/hadoop/y
arn/lib/commons-cli-1.2.jar:/home/hduser/hadoop-2.6.0/share/hadoop/yarn/lib/
jackson-jaxrs-1.9.13.jar:/home/hduser/hadoop-2.6.0/share/hadoop/yarn/lib/jav
ax.inject-1.jar:/home/hduser/hadoop-2.6.0/share/hadoop/yarn/lib/netty-3.6.2.
Final.jar:/home/hduser/hadoop-2.6.0/share/hadoop/yarn/lib/jersey-core-1.9.ja
r:/home/hduser/hadoop-2.6.0/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/home/h
duser/hadoop-2.6.0/share/hadoop/yarn/lib/activation-1.1.jar:/home/hduser/had
oop-2.6.0/share/hadoop/yarn/lib/jline-0.9.94.jar:/home/hduser/hadoop-2.6.0/s
hare/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/home/hduser/hadoop-2.6.0
/share/hadoop/yarn/lib/jetty-6.1.26.jar:/home/hduser/hadoop-2.6.0/share/hado
op/yarn/lib/aopalliance-1.0.jar:/home/hduser/hadoop-2.6.0/share/hadoop/yarn/
lib/commons-httpclient-3.1.jar:/home/hduser/hadoop-2.6.0/share/hadoop/yarn/l
ib/zookeeper-3.4.6.jar:/home/hduser/hadoop-2.6.0/share/hadoop/yarn/lib/log4j
-1.2.17.jar:/home/hduser/hadoop-2.6.0/share/hadoop/yarn/lib/commons-codec-1.
4.jar:/home/hduser/hadoop-2.6.0/share/hadoop/yarn/lib/jettison-1.1.jar:/home
/hduser/hadoop-2.6.0/share/hadoop/yarn/lib/commons-collections-3.2.1.jar:/ho
me/hduser/hadoop-2.6.0/share/hadoop/yarn/lib/jersey-client-1.9.jar:/home/hdu
ser/hadoop-2.6.0/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/home/hduser/ha
doop-2.6.0/share/hadoop/yarn/lib/jsr305-1.3.9.jar:/home/hduser/hadoop-2.6.0/
share/hadoop/yarn/lib/servlet-api-2.5.jar:/home/hduser/hadoop-2.6.0/share/ha
doop/yarn/lib/guice-servlet-3.0.jar:/home/hduser/hadoop-2.6.0/share/hadoop/y
arn/lib/leveldbjni-all-1.8.jar:/home/hduser/hadoop-2.6.0/share/hadoop/yarn/l
ib/asm-3.2.jar:/home/hduser/hadoop-2.6.0/share/hadoop/yarn/lib/protobuf-java
-2.5.0.jar:/home/hduser/hadoop-2.6.0/share/hadoop/yarn/lib/jersey-server-1.9
.jar:/home/hduser/hadoop-2.6.0/share/hadoop/yarn/lib/commons-compress-1.4.1.
jar:/home/hduser/hadoop-2.6.0/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/h
ome/hduser/hadoop-2.6.0/share/hadoop/yarn/lib/commons-io-2.4.jar:/home/hduse
r/hadoop-2.6.0/share/hadoop/yarn/lib/commons-lang-2.6.jar:/home/hduser/hadoo
p-2.6.0/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/home/hduser/hadoop-2.6.0
/share/hadoop/yarn/lib/jersey-json-1.9.jar:/home/hduser/hadoop-2.6.0/share/h
adoop/yarn/lib/jackson-core-asl-1.9.13.jar:/home/hduser/hadoop-2.6.0/share/h
adoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/home/hduser/hadoop-2.6.0/share/hadoop/
yarn/lib/guava-11.0.2.jar:/home/hduser/hadoop-2.6.0/share/hadoop/yarn/hadoop
-yarn-server-tests-2.6.0.jar:/home/hduser/hadoop-2.6.0/share/hadoop/yarn/had
oop-yarn-applications-unmanaged-am-launcher-2.6.0.jar:/home/hduser/hadoop-2.
6.0/share/hadoop/yarn/hadoop-yarn-api-2.6.0.jar:/home/hduser/hadoop-2.6.0/sh
are/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.6.0.jar:/home/hd
user/hadoop-2.6.0/share/hadoop/yarn/hadoop-yarn-server-common-2.6.0.jar:/hom
e/hduser/hadoop-2.6.0/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2
.6.0.jar:/home/hduser/hadoop-2.6.0/share/hadoop/yarn/hadoop-yarn-client-2.6.
0.jar:/home/hduser/hadoop-2.6.0/share/hadoop/yarn/hadoop-yarn-server-applica
tionhistoryservice-2.6.0.jar:/home/hduser/hadoop-2.6.0/share/hadoop/yarn/had
oop-yarn-registry-2.6.0.jar:/home/hduser/hadoop-2.6.0/share/hadoop/yarn/hado
op-yarn-server-nodemanager-2.6.0.jar:/home/hduser/hadoop-2.6.0/share/hadoop/
yarn/hadoop-yarn-server-web-proxy-2.6.0.jar:/home/hduser/hadoop-2.6.0/share/
hadoop/yarn/hadoop-yarn-common-2.6.0.jar:/home/hduser/hadoop-2.6.0/share/had
oop/mapreduce/lib/paranamer-2.3.jar:/home/hduser/hadoop-2.6.0/share/hadoop/m
apreduce/lib/avro-1.7.4.jar:/home/hduser/hadoop-2.6.0/share/hadoop/mapreduce
/lib/guice-3.0.jar:/home/hduser/hadoop-2.6.0/share/hadoop/mapreduce/lib/xz-1
.0.jar:/home/hduser/hadoop-2.6.0/share/hadoop/mapreduce/lib/javax.inject-1.j
ar:/home/hduser/hadoop-2.6.0/share/hadoop/mapreduce/lib/netty-3.6.2.Final.ja
r:/home/hduser/hadoop-2.6.0/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/
home/hduser/hadoop-2.6.0/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:
/home/hduser/hadoop-2.6.0/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.
13.jar:/home/hduser/hadoop-2.6.0/share/hadoop/mapreduce/lib/aopalliance-1.0.
jar:/home/hduser/hadoop-2.6.0/share/hadoop/mapreduce/lib/junit-4.11.jar:/hom
e/hduser/hadoop-2.6.0/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/home/hdus
er/hadoop-2.6.0/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/home/hduse
r/hadoop-2.6.0/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/home/hduser
/hadoop-2.6.0/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/home/hduser
/hadoop-2.6.0/share/hadoop/mapreduce/lib/asm-3.2.jar:/home/hduser/hadoop-2.6
.0/share/hadoop/mapreduce/lib/hadoop-annotations-2.6.0.jar:/home/hduser/hado
op-2.6.0/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/home/hduser/had
oop-2.6.0/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/home/hduser/hado
op-2.6.0/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/home/hduser/
hadoop-2.6.0/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/home/hduser/hado
op-2.6.0/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/home/hduser/hadoop
-2.6.0/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/home/hduser/h
adoop-2.6.0/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.6.0-t
ests.jar:/home/hduser/hadoop-2.6.0/share/hadoop/mapreduce/hadoop-mapreduce-c
lient-core-2.6.0.jar:/home/hduser/hadoop-2.6.0/share/hadoop/mapreduce/hadoop
-mapreduce-client-hs-plugins-2.6.0.jar:/home/hduser/hadoop-2.6.0/share/hadoo
p/mapreduce/hadoop-mapreduce-client-jobclient-2.6.0.jar:/home/hduser/hadoop-
2.6.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar:/home/hduse
r/hadoop-2.6.0/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.6.0.j
ar:/home/hduser/hadoop-2.6.0/share/hadoop/mapreduce/hadoop-mapreduce-client-
app-2.6.0.jar:/home/hduser/hadoop-2.6.0/share/hadoop/mapreduce/hadoop-mapred
uce-client-hs-2.6.0.jar:/home/hduser/hadoop-2.6.0/share/hadoop/mapreduce/had
oop-mapreduce-client-shuffle-2.6.0.jar

15/11/30 16:57:58 [main]: DEBUG parse.VariableSubstitution: Substitution is
on: /usr/lib/spark

15/11/30 16:57:58 [main]: DEBUG parse.VariableSubstitution: Substitution is
on: spark

15/11/30 16:57:58 [main]: DEBUG parse.VariableSubstitution: Substitution is
on: spark://rhes564:7077

15/11/30 16:57:58 [main]: DEBUG parse.VariableSubstitution: Substitution is
on: true

15/11/30 16:57:58 [main]: DEBUG parse.VariableSubstitution: Substitution is
on: /usr/lib/spark/logs

15/11/30 16:57:58 [main]: DEBUG parse.VariableSubstitution: Substitution is
on: 512m

15/11/30 16:57:58 [main]: DEBUG parse.VariableSubstitution: Substitution is
on: org.apache.spark.serializer.KryoSerializer

15/11/30 16:57:58 [main]: INFO log.PerfLogger: <PERFLOG method=Driver.run
from=org.apache.hadoop.hive.ql.Driver>

15/11/30 16:57:58 [main]: INFO log.PerfLogger: <PERFLOG method=TimeToSubmit
from=org.apache.hadoop.hive.ql.Driver>

15/11/30 16:57:58 [main]: INFO log.PerfLogger: <PERFLOG method=compile
from=org.apache.hadoop.hive.ql.Driver>

15/11/30 16:57:58 [main]: DEBUG parse.VariableSubstitution: Substitution is
on:

use asehadoop

15/11/30 16:57:58 [main]: INFO log.PerfLogger: <PERFLOG method=parse
from=org.apache.hadoop.hive.ql.Driver>

15/11/30 16:57:58 [main]: INFO parse.ParseDriver: Parsing command:

use asehadoop

15/11/30 16:57:58 [main]: INFO parse.ParseDriver: Parse Completed

15/11/30 16:57:58 [main]: INFO log.PerfLogger: </PERFLOG method=parse
start=1448902678093 end=1448902678719 duration=626
from=org.apache.hadoop.hive.ql.Driver>

15/11/30 16:57:58 [main]: DEBUG ql.Driver: Encoding valid txns info
9223372036854775807:

15/11/30 16:57:58 [main]: INFO log.PerfLogger: <PERFLOG
method=semanticAnalyze from=org.apache.hadoop.hive.ql.Driver>

15/11/30 16:57:58 [main]: INFO ql.Driver: Semantic Analysis Completed

15/11/30 16:57:58 [main]: INFO log.PerfLogger: </PERFLOG
method=semanticAnalyze start=1448902678722 end=1448902678762 duration=40
from=org.apache.hadoop.hive.ql.Driver>

15/11/30 16:57:58 [main]: INFO ql.Driver: Returning Hive schema:
Schema(fieldSchemas:null, properties:null)

15/11/30 16:57:58 [main]: INFO log.PerfLogger: </PERFLOG method=compile
start=1448902678069 end=1448902678769 duration=700
from=org.apache.hadoop.hive.ql.Driver>

15/11/30 16:57:58 [main]: INFO metadata.Hive: Dumping metastore api call
timing information for : compilation phase

15/11/30 16:57:58 [main]: DEBUG metadata.Hive: Total time spent in each
metastore function (ms): {isCompatibleWith_(HiveConf, )=0,
getDatabase_(String, )=6, getAllDatabases_()=17, getFunctions_(String,
String, )=10}

15/11/30 16:57:58 [main]: INFO ql.Driver: Concurrency mode is disabled, not
creating a lock manager

15/11/30 16:57:58 [main]: INFO log.PerfLogger: <PERFLOG
method=Driver.execute from=org.apache.hadoop.hive.ql.Driver>

15/11/30 16:57:58 [main]: INFO ql.Driver: Starting
command(queryId=hduser_20151130165758_735f7d9e-dee9-4f20-ab5c-77af628f079d):

use asehadoop

15/11/30 16:57:58 [main]: INFO log.PerfLogger: </PERFLOG method=TimeToSubmit
start=1448902678069 end=1448902678793 duration=724
from=org.apache.hadoop.hive.ql.Driver>

15/11/30 16:57:58 [main]: INFO log.PerfLogger: <PERFLOG method=runTasks
from=org.apache.hadoop.hive.ql.Driver>

15/11/30 16:57:58 [main]: INFO log.PerfLogger: <PERFLOG
method=task.DDL.Stage-0 from=org.apache.hadoop.hive.ql.Driver>

15/11/30 16:57:58 [main]: INFO ql.Driver: Starting task [Stage-0:DDL] in
serial mode

15/11/30 16:57:58 [main]: INFO log.PerfLogger: </PERFLOG method=runTasks
start=1448902678793 end=1448902678803 duration=10
from=org.apache.hadoop.hive.ql.Driver>

15/11/30 16:57:58 [main]: INFO metadata.Hive: Dumping metastore api call
timing information for : execution phase

15/11/30 16:57:58 [main]: DEBUG metadata.Hive: Total time spent in each
metastore function (ms): {isCompatibleWith_(HiveConf, )=2,
getDatabase_(String, )=4}

15/11/30 16:57:58 [main]: INFO log.PerfLogger: </PERFLOG
method=Driver.execute start=1448902678769 end=1448902678803 duration=34
from=org.apache.hadoop.hive.ql.Driver>

OK

15/11/30 16:57:58 [main]: INFO ql.Driver: OK

15/11/30 16:57:58 [main]: INFO log.PerfLogger: <PERFLOG method=releaseLocks
from=org.apache.hadoop.hive.ql.Driver>

15/11/30 16:57:58 [main]: INFO log.PerfLogger: </PERFLOG method=releaseLocks
start=1448902678803 end=1448902678803 duration=0
from=org.apache.hadoop.hive.ql.Driver>

15/11/30 16:57:58 [main]: INFO log.PerfLogger: </PERFLOG method=Driver.run
start=1448902678069 end=1448902678803 duration=734
from=org.apache.hadoop.hive.ql.Driver>

15/11/30 16:57:58 [main]: DEBUG ql.Driver: Shutting down query

use asehadoop

Time taken: 0.737 seconds

15/11/30 16:57:58 [main]: INFO CliDriver: Time taken: 0.737 seconds

15/11/30 16:57:58 [main]: INFO log.PerfLogger: <PERFLOG method=Driver.run
from=org.apache.hadoop.hive.ql.Driver>

15/11/30 16:57:58 [main]: INFO log.PerfLogger: <PERFLOG method=TimeToSubmit
from=org.apache.hadoop.hive.ql.Driver>

15/11/30 16:57:58 [main]: INFO log.PerfLogger: <PERFLOG method=compile
from=org.apache.hadoop.hive.ql.Driver>

15/11/30 16:57:58 [main]: DEBUG parse.VariableSubstitution: Substitution is
on:

select count(1) from t

15/11/30 16:57:58 [main]: INFO log.PerfLogger: <PERFLOG method=parse
from=org.apache.hadoop.hive.ql.Driver>

15/11/30 16:57:58 [main]: INFO parse.ParseDriver: Parsing command:

select count(1) from t

15/11/30 16:57:58 [main]: INFO parse.ParseDriver: Parse Completed

15/11/30 16:57:58 [main]: INFO log.PerfLogger: </PERFLOG method=parse
start=1448902678805 end=1448902678816 duration=11
from=org.apache.hadoop.hive.ql.Driver>

15/11/30 16:57:58 [main]: DEBUG ql.Driver: Encoding valid txns info
9223372036854775807:

15/11/30 16:57:58 [main]: INFO log.PerfLogger: <PERFLOG
method=semanticAnalyze from=org.apache.hadoop.hive.ql.Driver>

15/11/30 16:57:58 [main]: INFO parse.CalcitePlanner: Starting Semantic
Analysis

15/11/30 16:57:59 [main]: DEBUG exec.FunctionRegistry: Looking up
GenericUDAF: count

15/11/30 16:57:59 [main]: INFO parse.CalcitePlanner: Completed phase 1 of
Semantic Analysis

15/11/30 16:57:59 [main]: INFO parse.CalcitePlanner: Get metadata for source
tables

15/11/30 16:57:59 [main]: INFO parse.CalcitePlanner: Get metadata for
subqueries

15/11/30 16:57:59 [main]: INFO parse.CalcitePlanner: Get metadata for
destination tables

15/11/30 16:57:59 [IPC Parameter Sending Thread #0]: DEBUG ipc.Client: IPC
Client (1997160570) connection to rhes564/50.140.197.217:9000 from hduser sending #9

15/11/30 16:57:59 [IPC Client (1997160570) connection to
rhes564/50.140.197.217:9000 from hduser]: DEBUG ipc.Client: IPC Client
(1997160570) connection to rhes564/50.140.197.217:9000 from hduser got
value #9

15/11/30 16:57:59 [main]: DEBUG ipc.ProtobufRpcEngine: Call: getEZForPath
took 1ms

15/11/30 16:57:59 [main]: DEBUG hdfs.DFSClient:
/tmp/hive/hduser/dec45671-0b5f-4496-8b3a-66f097d0c6f0/hive_2015-11-30_16-57-
58_805_2366423616530225987-1: masked=rwx------

15/11/30 16:57:59 [IPC Parameter Sending Thread #0]: DEBUG ipc.Client: IPC
Client (1997160570) connection to rhes564/50.140.197.217:9000 from hduser sending #10

15/11/30 16:57:59 [IPC Client (1997160570) connection to
rhes564/50.140.197.217:9000 from hduser]: DEBUG ipc.Client: IPC Client
(1997160570) connection to rhes564/50.140.197.217:9000 from hduser got
value #10

15/11/30 16:57:59 [main]: DEBUG ipc.ProtobufRpcEngine: Call: mkdirs took 5ms

15/11/30 16:57:59 [IPC Parameter Sending Thread #0]: DEBUG ipc.Client: IPC
Client (1997160570) connection to rhes564/50.140.197.217:9000 from hduser sending #11

15/11/30 16:57:59 [IPC Client (1997160570) connection to
rhes564/50.140.197.217:9000 from hduser]: DEBUG ipc.Client: IPC Client
(1997160570) connection to rhes564/50.140.197.217:9000 from hduser got
value #11

15/11/30 16:57:59 [main]: DEBUG ipc.ProtobufRpcEngine: Call: getFileInfo
took 1ms

15/11/30 16:57:59 [main]: INFO ql.Context: New scratch dir is
hdfs://rhes564:9000/tmp/hive/hduser/dec45671-0b5f-4496-8b3a-66f097d0c6f0/hiv
e_2015-11-30_16-57-58_805_2366423616530225987-1

15/11/30 16:57:59 [main]: INFO parse.CalcitePlanner: Completed getting
MetaData in Semantic Analysis

15/11/30 16:57:59 [main]: INFO parse.BaseSemanticAnalyzer: Not invoking CBO
because the statement has too few joins

15/11/30 16:57:59 [main]: DEBUG hive.log: DDL: struct t { varchar(30) owner,
varchar(30) object_name, varchar(30) subobject_name, i64 object_id, i64
data_object_id, varchar(19) object_type, timestamp created, timestamp
last_ddl_time, varchar(19) timestamp, varchar(7) status, varchar(1)
temporary2, varchar(1) generated, varchar(1) secondary, i64 namespace,
varchar(30) edition_name, varchar(4000) padding1, varchar(3500) padding2,
varchar(32) attribute, i32 op_type, timestamp op_time}
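
For reference, the struct dumped above corresponds roughly to the following table definition (a sketch only: i64/i32 map to bigint/int, as the row schema further down confirms, and the row format/storage clauses are omitted):

  create table t (
    owner            varchar(30),
    object_name      varchar(30),
    subobject_name   varchar(30),
    object_id        bigint,
    data_object_id   bigint,
    object_type      varchar(19),
    created          timestamp,
    last_ddl_time    timestamp,
    `timestamp`      varchar(19),
    status           varchar(7),
    temporary2       varchar(1),
    generated        varchar(1),
    secondary        varchar(1),
    namespace        bigint,
    edition_name     varchar(30),
    padding1         varchar(4000),
    padding2         varchar(3500),
    attribute        varchar(32),
    op_type          int,
    op_time          timestamp
  );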

15/11/30 16:57:59 [main]: DEBUG parse.CalcitePlanner: Created Table Plan for
t TS[0]

15/11/30 16:57:59 [main]: DEBUG exec.FunctionRegistry: Looking up
GenericUDAF: count

15/11/30 16:57:59 [main]: DEBUG parse.CalcitePlanner: RR before GB
t{(owner,owner: varchar(30))(object_name,object_name:
varchar(30))(subobject_name,subobject_name:
varchar(30))(object_id,object_id: bigint)(data_object_id,data_object_id:
bigint)(object_type,object_type: varchar(19))(created,created:
timestamp)(last_ddl_time,last_ddl_time: timestamp)(timestamp,timestamp:
varchar(19))(status,status: varchar(7))(temporary2,temporary2:
varchar(1))(generated,generated: varchar(1))(secondary,secondary:
varchar(1))(namespace,namespace: bigint)(edition_name,edition_name:
varchar(30))(padding1,padding1: varchar(4000))(padding2,padding2:
varchar(3500))(attribute,attribute: varchar(32))(op_type,op_type:
int)(op_time,op_time:
timestamp)(block__offset__inside__file,BLOCK__OFFSET__INSIDE__FILE:
bigint)(input__file__name,INPUT__FILE__NAME: string)(row__id,ROW__ID:
struct<transactionid:bigint,bucketid:int,rowid:bigint>)}  after GB
{((tok_function count 1),_col0: bigint)}

15/11/30 16:57:59 [main]: DEBUG parse.CalcitePlanner: tree: (TOK_SELECT
(TOK_SELEXPR (TOK_FUNCTION count 1)))

15/11/30 16:57:59 [main]: DEBUG parse.CalcitePlanner: genSelectPlan: input =
{((tok_function count 1),_col0: bigint)}  starRr = t{(owner,owner:
varchar(30))(object_name,object_name:
varchar(30))(subobject_name,subobject_name:
varchar(30))(object_id,object_id: bigint)(data_object_id,data_object_id:
bigint)(object_type,object_type: varchar(19))(created,created:
timestamp)(last_ddl_time,last_ddl_time: timestamp)(timestamp,timestamp:
varchar(19))(status,status: varchar(7))(temporary2,temporary2:
varchar(1))(generated,generated: varchar(1))(secondary,secondary:
varchar(1))(namespace,namespace: bigint)(edition_name,edition_name:
varchar(30))(padding1,padding1: varchar(4000))(padding2,padding2:
varchar(3500))(attribute,attribute: varchar(32))(op_type,op_type:
int)(op_time,op_time:
timestamp)(block__offset__inside__file,BLOCK__OFFSET__INSIDE__FILE:
bigint)(input__file__name,INPUT__FILE__NAME: string)(row__id,ROW__ID:
struct<transactionid:bigint,bucketid:int,rowid:bigint>)}

15/11/30 16:57:59 [main]: DEBUG parse.CalcitePlanner: Created Select Plan
row schema: null{(_c0,_col0: bigint)}

15/11/30 16:57:59 [main]: DEBUG parse.CalcitePlanner: Created Select Plan
for clause: insclause-0

15/11/30 16:57:59 [main]: DEBUG ql.Context: Created staging dir =
hdfs://rhes564:9000/tmp/hive/hduser/dec45671-0b5f-4496-8b3a-66f097d0c6f0/hiv
e_2015-11-30_16-57-58_805_2366423616530225987-1/-mr-10000/.hive-staging_hive
_2015-11-30_16-57-58_805_2366423616530225987-1 for path =
hdfs://rhes564:9000/tmp/hive/hduser/dec45671-0b5f-4496-8b3a-66f097d0c6f0/hiv
e_2015-11-30_16-57-58_805_2366423616530225987-1/-mr-10000

15/11/30 16:57:59 [main]: INFO common.FileUtils: Creating directory if it
doesn't exist:
hdfs://rhes564:9000/tmp/hive/hduser/dec45671-0b5f-4496-8b3a-66f097d0c6f0/hiv
e_2015-11-30_16-57-58_805_2366423616530225987-1/-mr-10000/.hive-staging_hive
_2015-11-30_16-57-58_805_2366423616530225987-1

15/11/30 16:57:59 [IPC Parameter Sending Thread #0]: DEBUG ipc.Client: IPC
Client (1997160570) connection to rhes564/50.140.197.217:9000 from hduser sending #12

15/11/30 16:57:59 [IPC Client (1997160570) connection to
rhes564/50.140.197.217:9000 from hduser]: DEBUG ipc.Client: IPC Client
(1997160570) connection to rhes564/50.140.197.217:9000 from hduser got
value #12

15/11/30 16:57:59 [main]: DEBUG ipc.ProtobufRpcEngine: Call: getFileInfo
took 4ms

15/11/30 16:57:59 [IPC Parameter Sending Thread #0]: DEBUG ipc.Client: IPC
Client (1997160570) connection to rhes564/50.140.197.217:9000 from hduser sending #13

15/11/30 16:57:59 [IPC Client (1997160570) connection to
rhes564/50.140.197.217:9000 from hduser]: DEBUG ipc.Client: IPC Client
(1997160570) connection to rhes564/50.140.197.217:9000 from hduser got
value #13

15/11/30 16:57:59 [main]: DEBUG ipc.ProtobufRpcEngine: Call: getFileInfo
took 1ms

15/11/30 16:57:59 [IPC Parameter Sending Thread #0]: DEBUG ipc.Client: IPC
Client (1997160570) connection to rhes564/50.140.197.217:9000 from hduser sending #14

15/11/30 16:57:59 [IPC Client (1997160570) connection to
rhes564/50.140.197.217:9000 from hduser]: DEBUG ipc.Client: IPC Client
(1997160570) connection to rhes564/50.140.197.217:9000 from hduser got
value #14

15/11/30 16:57:59 [main]: DEBUG ipc.ProtobufRpcEngine: Call: getFileInfo
took 1ms

15/11/30 16:57:59 [IPC Parameter Sending Thread #0]: DEBUG ipc.Client: IPC
Client (1997160570) connection to rhes564/50.140.197.217:9000 from hduser sending #15

15/11/30 16:57:59 [IPC Client (1997160570) connection to
rhes564/50.140.197.217:9000 from hduser]: DEBUG ipc.Client: IPC Client
(1997160570) connection to rhes564/50.140.197.217:9000 from hduser got
value #15

15/11/30 16:57:59 [main]: DEBUG ipc.ProtobufRpcEngine: Call: getFileInfo
took 1ms

15/11/30 16:57:59 [main]: DEBUG hdfs.DFSClient:
/tmp/hive/hduser/dec45671-0b5f-4496-8b3a-66f097d0c6f0/hive_2015-11-30_16-57-
58_805_2366423616530225987-1/-mr-10000/.hive-staging_hive_2015-11-30_16-57-5
8_805_2366423616530225987-1: masked=rwxr-xr-x

15/11/30 16:57:59 [IPC Parameter Sending Thread #0]: DEBUG ipc.Client: IPC
Client (1997160570) connection to rhes564/50.140.197.217:9000 from hduser sending #16

15/11/30 16:57:59 [IPC Client (1997160570) connection to
rhes564/50.140.197.217:9000 from hduser]: DEBUG ipc.Client: IPC Client
(1997160570) connection to rhes564/50.140.197.217:9000 from hduser got
value #16

15/11/30 16:57:59 [main]: DEBUG ipc.ProtobufRpcEngine: Call: mkdirs took 1ms

15/11/30 16:57:59 [IPC Parameter Sending Thread #0]: DEBUG ipc.Client: IPC
Client (1997160570) connection to rhes564/50.140.197.217:9000 from hduser sending #17

15/11/30 16:57:59 [IPC Client (1997160570) connection to
rhes564/50.140.197.217:9000 from hduser]: DEBUG ipc.Client: IPC Client
(1997160570) connection to rhes564/50.140.197.217:9000 from hduser got
value #17

15/11/30 16:57:59 [main]: DEBUG ipc.ProtobufRpcEngine: Call: getFileInfo
took 0ms

15/11/30 16:57:59 [main]: DEBUG shims.HadoopShimsSecure:
{-chgrp,-R,supergroup,hdfs://rhes564:9000/tmp/hive/hduser/dec45671-0b5f-4496
-8b3a-66f097d0c6f0/hive_2015-11-30_16-57-58_805_2366423616530225987-1/-mr-10
000}

15/11/30 16:57:59 [IPC Parameter Sending Thread #0]: DEBUG ipc.Client: IPC
Client (1997160570) connection to rhes564/50.140.197.217:9000 from hduser sending #18

15/11/30 16:57:59 [IPC Client (1997160570) connection to
rhes564/50.140.197.217:9000 from hduser]: DEBUG ipc.Client: IPC Client
(1997160570) connection to rhes564/50.140.197.217:9000 from hduser got
value #18

15/11/30 16:57:59 [main]: DEBUG ipc.ProtobufRpcEngine: Call: getFileInfo
took 1ms

15/11/30 16:57:59 [IPC Parameter Sending Thread #0]: DEBUG ipc.Client: IPC
Client (1997160570) connection to rhes564/50.140.197.217:9000 from hduser sending #19

15/11/30 16:57:59 [IPC Client (1997160570) connection to
rhes564/50.140.197.217:9000 from hduser]: DEBUG ipc.Client: IPC Client
(1997160570) connection to rhes564/50.140.197.217:9000 from hduser got
value #19

15/11/30 16:57:59 [main]: DEBUG ipc.ProtobufRpcEngine: Call: getListing took
0ms

15/11/30 16:57:59 [IPC Parameter Sending Thread #0]: DEBUG ipc.Client: IPC
Client (1997160570) connection to rhes564/50.140.197.217:9000 from hduser sending #20

15/11/30 16:57:59 [IPC Client (1997160570) connection to
rhes564/50.140.197.217:9000 from hduser]: DEBUG ipc.Client: IPC Client
(1997160570) connection to rhes564/50.140.197.217:9000 from hduser got
value #20

15/11/30 16:57:59 [main]: DEBUG ipc.ProtobufRpcEngine: Call: getListing took
1ms

15/11/30 16:57:59 [main]: DEBUG shims.HadoopShimsSecure: Return value is :0

15/11/30 16:57:59 [main]: DEBUG shims.HadoopShimsSecure:
{-chmod,-R,700,hdfs://rhes564:9000/tmp/hive/hduser/dec45671-0b5f-4496-8b3a-6
6f097d0c6f0/hive_2015-11-30_16-57-58_805_2366423616530225987-1/-mr-10000}

15/11/30 16:57:59 [IPC Parameter Sending Thread #0]: DEBUG ipc.Client: IPC
Client (1997160570) connection to rhes564/50.140.197.217:9000 from hduser sending #21

15/11/30 16:57:59 [IPC Client (1997160570) connection to
rhes564/50.140.197.217:9000 from hduser]: DEBUG ipc.Client: IPC Client
(1997160570) connection to rhes564/50.140.197.217:9000 from hduser got
value #21

15/11/30 16:57:59 [main]: DEBUG ipc.ProtobufRpcEngine: Call: getFileInfo
took 1ms

15/11/30 16:57:59 [IPC Parameter Sending Thread #0]: DEBUG ipc.Client: IPC
Client (1997160570) connection to rhes564/50.140.197.217:9000 from hduser sending #22

15/11/30 16:57:59 [IPC Client (1997160570) connection to
rhes564/50.140.197.217:9000 from hduser]: DEBUG ipc.Client: IPC Client
(1997160570) connection to rhes564/50.140.197.217:9000 from hduser got
value #22

15/11/30 16:57:59 [main]: DEBUG ipc.ProtobufRpcEngine: Call: setPermission
took 1ms

15/11/30 16:57:59 [IPC Parameter Sending Thread #0]: DEBUG ipc.Client: IPC
Client (1997160570) connection to rhes564/50.140.197.217:9000 from hduser sending #23

15/11/30 16:57:59 [IPC Client (1997160570) connection to
rhes564/50.140.197.217:9000 from hduser]: DEBUG ipc.Client: IPC Client
(1997160570) connection to rhes564/50.140.197.217:9000 from hduser got
value #23

15/11/30 16:57:59 [main]: DEBUG ipc.ProtobufRpcEngine: Call: getListing took
1ms

15/11/30 16:57:59 [IPC Parameter Sending Thread #0]: DEBUG ipc.Client: IPC
Client (1997160570) connection to rhes564/50.140.197.217:9000 from hduser sending #24

15/11/30 16:57:59 [IPC Client (1997160570) connection to
rhes564/50.140.197.217:9000 from hduser]: DEBUG ipc.Client: IPC Client
(1997160570) connection to rhes564/50.140.197.217:9000 from hduser got
value #24

15/11/30 16:57:59 [main]: DEBUG ipc.ProtobufRpcEngine: Call: setPermission
took 3ms

15/11/30 16:57:59 [IPC Parameter Sending Thread #0]: DEBUG ipc.Client: IPC
Client (1997160570) connection to rhes564/50.140.197.217:9000 from hduser sending #25

15/11/30 16:57:59 [IPC Client (1997160570) connection to
rhes564/50.140.197.217:9000 from hduser]: DEBUG ipc.Client: IPC Client
(1997160570) connection to rhes564/50.140.197.217:9000 from hduser got
value #25

15/11/30 16:57:59 [main]: DEBUG ipc.ProtobufRpcEngine: Call: getListing took
1ms

15/11/30 16:57:59 [main]: DEBUG shims.HadoopShimsSecure: Return value is :0

15/11/30 16:57:59 [IPC Parameter Sending Thread #0]: DEBUG ipc.Client: IPC
Client (1997160570) connection to rhes564/50.140.197.217:9000 from hduser sending #26

15/11/30 16:57:59 [IPC Client (1997160570) connection to
rhes564/50.140.197.217:9000 from hduser]: DEBUG ipc.Client: IPC Client
(1997160570) connection to rhes564/50.140.197.217:9000 from hduser got
value #26

15/11/30 16:57:59 [main]: DEBUG ipc.ProtobufRpcEngine: Call: getFileInfo
took 0ms

15/11/30 16:57:59 [main]: DEBUG shims.HadoopShimsSecure:
FileStatus{path=hdfs://rhes564:9000/tmp/hive/hduser/dec45671-0b5f-4496-8b3a-
66f097d0c6f0/hive_2015-11-30_16-57-58_805_2366423616530225987-1/-mr-10000;
isDirectory=true; modification_time=1448902679395; access_time=0;
owner=hduser; group=supergroup; permission=rwx------; isSymlink=false}

15/11/30 16:57:59 [IPC Parameter Sending Thread #0]: DEBUG ipc.Client: IPC
Client (1997160570) connection to rhes564/50.140.197.217:9000 from hduser sending #27

15/11/30 16:57:59 [IPC Client (1997160570) connection to
rhes564/50.140.197.217:9000 from hduser]: DEBUG ipc.Client: IPC Client
(1997160570) connection to rhes564/50.140.197.217:9000 from hduser got
value #27

15/11/30 16:57:59 [main]: DEBUG ipc.ProtobufRpcEngine: Call: getFileInfo
took 1ms

15/11/30 16:57:59 [main]: DEBUG lazy.LazySimpleSerDe:
org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe initialized with:
columnNames=[_col0] columnTypes=[bigint] separator=[[B@170807fb]
nullstring=\N lastColumnTakesRest=false timestampFormats=null

15/11/30 16:57:59 [main]: DEBUG lazy.LazySimpleSerDe:
org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe initialized with:
columnNames=[_col0] columnTypes=[bigint] separator=[[B@6e30562b]
nullstring=\N lastColumnTakesRest=false timestampFormats=null

15/11/30 16:57:59 [main]: INFO parse.CalcitePlanner: Set stats collection
dir :
hdfs://rhes564:9000/tmp/hive/hduser/dec45671-0b5f-4496-8b3a-66f097d0c6f0/hiv
e_2015-11-30_16-57-58_805_2366423616530225987-1/-mr-10000/.hive-staging_hive
_2015-11-30_16-57-58_805_2366423616530225987-1/-ext-10002

15/11/30 16:57:59 [main]: DEBUG parse.CalcitePlanner: Created FileSink Plan
for clause: insclause-0dest_path:
hdfs://rhes564:9000/tmp/hive/hduser/dec45671-0b5f-4496-8b3a-66f097d0c6f0/hiv
e_2015-11-30_16-57-58_805_2366423616530225987-1/-mr-10000 row schema:
null{(_c0,_col0: bigint)}

15/11/30 16:57:59 [main]: DEBUG parse.CalcitePlanner: Created Body Plan for
Query Block null

15/11/30 16:57:59 [main]: DEBUG parse.CalcitePlanner: Created Plan for Query
Block null

15/11/30 16:57:59 [main]: DEBUG parse.CalcitePlanner: Before logical
optimization

TS[0]-SEL[1]-GBY[2]-RS[3]-GBY[4]-SEL[5]-FS[6]

15/11/30 16:57:59 [main]: DEBUG optimizer.ConstantPropagateProcCtx: Getting
constants of op:TS[0] with rs:(owner: varchar(30)|{t}owner,object_name:
varchar(30)|{t}object_name,subobject_name:
varchar(30)|{t}subobject_name,object_id: bigint|{t}object_id,data_object_id:
bigint|{t}data_object_id,object_type: varchar(19)|{t}object_type,created:
timestamp|{t}created,last_ddl_time: timestamp|{t}last_ddl_time,timestamp:
varchar(19)|{t}timestamp,status: varchar(7)|{t}status,temporary2:
varchar(1)|{t}temporary2,generated: varchar(1)|{t}generated,secondary:
varchar(1)|{t}secondary,namespace: bigint|{t}namespace,edition_name:
varchar(30)|{t}edition_name,padding1: varchar(4000)|{t}padding1,padding2:
varchar(3500)|{t}padding2,attribute: varchar(32)|{t}attribute,op_type:
int|{t}op_type,op_time: timestamp|{t}op_time,BLOCK__OFFSET__INSIDE__FILE:
bigint|{t}block__offset__inside__file,INPUT__FILE__NAME:
string|{t}input__file__name,ROW__ID:
struct<transactionid:bigint,bucketid:int,rowid:bigint>|{t}row__id)

15/11/30 16:57:59 [main]: DEBUG optimizer.ConstantPropagateProcCtx:
Offerring constants [] to operator TS[0]

15/11/30 16:57:59 [main]: DEBUG optimizer.ConstantPropagateProcCtx: Getting
constants of op:SEL[1] with rs:(owner: varchar(30)|{t}owner,object_name:
varchar(30)|{t}object_name,subobject_name:
varchar(30)|{t}subobject_name,object_id: bigint|{t}object_id,data_object_id:
bigint|{t}data_object_id,object_type: varchar(19)|{t}object_type,created:
timestamp|{t}created,last_ddl_time: timestamp|{t}last_ddl_time,timestamp:
varchar(19)|{t}timestamp,status: varchar(7)|{t}status,temporary2:
varchar(1)|{t}temporary2,generated: varchar(1)|{t}generated,secondary:
varchar(1)|{t}secondary,namespace: bigint|{t}namespace,edition_name:
varchar(30)|{t}edition_name,padding1: varchar(4000)|{t}padding1,padding2:
varchar(3500)|{t}padding2,attribute: varchar(32)|{t}attribute,op_type:
int|{t}op_type,op_time: timestamp|{t}op_time,BLOCK__OFFSET__INSIDE__FILE:
bigint|{t}block__offset__inside__file,INPUT__FILE__NAME:
string|{t}input__file__name,ROW__ID:
struct<transactionid:bigint,bucketid:int,rowid:bigint>|{t}row__id)

15/11/30 16:57:59 [main]: DEBUG optimizer.ConstantPropagateProcCtx:
Offerring constants [] to operator SEL[1]

15/11/30 16:57:59 [main]: DEBUG optimizer.ConstantPropagateProcFactory: New
column list:(Column[owner] Column[object_name] Column[subobject_name]
Column[object_id] Column[data_object_id] Column[object_type] Column[created]
Column[last_ddl_time] Column[timestamp] Column[status] Column[temporary2]
Column[generated] Column[secondary] Column[namespace] Column[edition_name]
Column[padding1] Column[padding2] Column[attribute] Column[op_type]
Column[op_time] Column[BLOCK__OFFSET__INSIDE__FILE]
Column[INPUT__FILE__NAME] Column[ROW__ID])

15/11/30 16:57:59 [main]: DEBUG optimizer.ConstantPropagateProcCtx: Getting
constants of op:GBY[2] with rs:(_col0: bigint|{}(tok_function count 1))

15/11/30 16:57:59 [main]: DEBUG optimizer.ConstantPropagateProcCtx:
Offerring constants [] to operator GBY[2]

15/11/30 16:57:59 [main]: DEBUG optimizer.ConstantPropagateProcCtx: Getting
constants of op:RS[3] with rs:(VALUE._col0: bigint|{}(tok_function count 1))

15/11/30 16:57:59 [main]: DEBUG optimizer.ConstantPropagateProcCtx:
Offerring constants [] to operator RS[3]

15/11/30 16:57:59 [main]: DEBUG optimizer.ConstantPropagateProcCtx: Getting
constants of op:GBY[4] with rs:(_col0: bigint|{}(tok_function count 1))

15/11/30 16:57:59 [main]: DEBUG optimizer.ConstantPropagateProcCtx:
Offerring constants [] to operator GBY[4]

15/11/30 16:57:59 [main]: DEBUG optimizer.ConstantPropagateProcCtx: Getting
constants of op:SEL[5] with rs:(_col0: bigint|{null}_c0)

15/11/30 16:57:59 [main]: DEBUG optimizer.ConstantPropagateProcCtx:
Offerring constants [] to operator SEL[5]

15/11/30 16:57:59 [main]: DEBUG optimizer.ConstantPropagateProcFactory: New
column list:(Column[_col0])

15/11/30 16:57:59 [main]: DEBUG optimizer.ConstantPropagateProcCtx: Getting
constants of op:FS[6] with rs:(_col0: bigint|{})

15/11/30 16:57:59 [main]: DEBUG optimizer.ConstantPropagateProcCtx:
Offerring constants [] to operator FS[6]

15/11/30 16:57:59 [main]: INFO ppd.OpProcFactory: Processing for FS(6)

15/11/30 16:57:59 [main]: INFO ppd.OpProcFactory: Processing for SEL(5)

15/11/30 16:57:59 [main]: INFO ppd.OpProcFactory: Processing for GBY(4)

15/11/30 16:57:59 [main]: INFO ppd.OpProcFactory: Processing for RS(3)

15/11/30 16:57:59 [main]: INFO ppd.OpProcFactory: Processing for GBY(2)

15/11/30 16:57:59 [main]: INFO ppd.OpProcFactory: Processing for SEL(1)

15/11/30 16:57:59 [main]: INFO ppd.OpProcFactory: Processing for TS(0)

15/11/30 16:57:59 [main]: DEBUG ppd.PredicatePushDown: After PPD:

TS[0]-SEL[1]-GBY[2]-RS[3]-GBY[4]-SEL[5]-FS[6]

15/11/30 16:57:59 [main]: DEBUG optimizer.ConstantPropagateProcCtx: Getting
constants of op:TS[0] with rs:(owner: varchar(30)|{t}owner,object_name:
varchar(30)|{t}object_name,subobject_name:
varchar(30)|{t}subobject_name,object_id: bigint|{t}object_id,data_object_id:
bigint|{t}data_object_id,object_type: varchar(19)|{t}object_type,created:
timestamp|{t}created,last_ddl_time: timestamp|{t}last_ddl_time,timestamp:
varchar(19)|{t}timestamp,status: varchar(7)|{t}status,temporary2:
varchar(1)|{t}temporary2,generated: varchar(1)|{t}generated,secondary:
varchar(1)|{t}secondary,namespace: bigint|{t}namespace,edition_name:
varchar(30)|{t}edition_name,padding1: varchar(4000)|{t}padding1,padding2:
varchar(3500)|{t}padding2,attribute: varchar(32)|{t}attribute,op_type:
int|{t}op_type,op_time: timestamp|{t}op_time,BLOCK__OFFSET__INSIDE__FILE:
bigint|{t}block__offset__inside__file,INPUT__FILE__NAME:
string|{t}input__file__name,ROW__ID:
struct<transactionid:bigint,bucketid:int,rowid:bigint>|{t}row__id)

15/11/30 16:57:59 [main]: DEBUG optimizer.ConstantPropagateProcCtx:
Offerring constants [] to operator TS[0]

15/11/30 16:57:59 [main]: DEBUG optimizer.ConstantPropagateProcCtx: Getting
constants of op:SEL[1] with rs:(owner: varchar(30)|{t}owner,object_name:
varchar(30)|{t}object_name,subobject_name:
varchar(30)|{t}subobject_name,object_id: bigint|{t}object_id,data_object_id:
bigint|{t}data_object_id,object_type: varchar(19)|{t}object_type,created:
timestamp|{t}created,last_ddl_time: timestamp|{t}last_ddl_time,timestamp:
varchar(19)|{t}timestamp,status: varchar(7)|{t}status,temporary2:
varchar(1)|{t}temporary2,generated: varchar(1)|{t}generated,secondary:
varchar(1)|{t}secondary,namespace: bigint|{t}namespace,edition_name:
varchar(30)|{t}edition_name,padding1: varchar(4000)|{t}padding1,padding2:
varchar(3500)|{t}padding2,attribute: varchar(32)|{t}attribute,op_type:
int|{t}op_type,op_time: timestamp|{t}op_time,BLOCK__OFFSET__INSIDE__FILE:
bigint|{t}block__offset__inside__file,INPUT__FILE__NAME:
string|{t}input__file__name,ROW__ID:
struct<transactionid:bigint,bucketid:int,rowid:bigint>|{t}row__id)

15/11/30 16:57:59 [main]: DEBUG optimizer.ConstantPropagateProcCtx:
Offerring constants [] to operator SEL[1]

15/11/30 16:57:59 [main]: DEBUG optimizer.ConstantPropagateProcFactory: New
column list:(Column[owner] Column[object_name] Column[subobject_name]
Column[object_id] Column[data_object_id] Column[object_type] Column[created]
Column[last_ddl_time] Column[timestamp] Column[status] Column[temporary2]
Column[generated] Column[secondary] Column[namespace] Column[edition_name]
Column[padding1] Column[padding2] Column[attribute] Column[op_type]
Column[op_time] Column[BLOCK__OFFSET__INSIDE__FILE]
Column[INPUT__FILE__NAME] Column[ROW__ID])

15/11/30 16:57:59 [main]: DEBUG optimizer.ConstantPropagateProcCtx: Getting
constants of op:GBY[2] with rs:(_col0: bigint|{}(tok_function count 1))

15/11/30 16:57:59 [main]: DEBUG optimizer.ConstantPropagateProcCtx:
Offerring constants [] to operator GBY[2]

15/11/30 16:57:59 [main]: DEBUG optimizer.ConstantPropagateProcCtx: Getting
constants of op:RS[3] with rs:(VALUE._col0: bigint|{}(tok_function count 1))

15/11/30 16:57:59 [main]: DEBUG optimizer.ConstantPropagateProcCtx:
Offerring constants [] to operator RS[3]

15/11/30 16:57:59 [main]: DEBUG optimizer.ConstantPropagateProcCtx: Getting
constants of op:GBY[4] with rs:(_col0: bigint|{}(tok_function count 1))

15/11/30 16:57:59 [main]: DEBUG optimizer.ConstantPropagateProcCtx:
Offerring constants [] to operator GBY[4]

15/11/30 16:57:59 [main]: DEBUG optimizer.ConstantPropagateProcCtx: Getting
constants of op:SEL[5] with rs:(_col0: bigint|{null}_c0)

15/11/30 16:57:59 [main]: DEBUG optimizer.ConstantPropagateProcCtx:
Offerring constants [] to operator SEL[5]

15/11/30 16:57:59 [main]: DEBUG optimizer.ConstantPropagateProcFactory: New
column list:(Column[_col0])

15/11/30 16:57:59 [main]: DEBUG optimizer.ConstantPropagateProcCtx: Getting
constants of op:FS[6] with rs:(_col0: bigint|{})

15/11/30 16:57:59 [main]: DEBUG optimizer.ConstantPropagateProcCtx:
Offerring constants [] to operator FS[6]

15/11/30 16:57:59 [main]: DEBUG optimizer.ColumnPrunerProcFactory: Reduce
Sink Operator 3 key:[]

15/11/30 16:57:59 [main]: INFO optimizer.ColumnPrunerProcFactory: RS 3
oldColExprMap: {VALUE._col0=Column[_col0]}

15/11/30 16:57:59 [main]: INFO optimizer.ColumnPrunerProcFactory: RS 3
newColExprMap: {VALUE._col0=Column[_col0]}

15/11/30 16:57:59 [main]: DEBUG optimizer.IdentityProjectRemover: Identity
project remover optimization removed : SEL[5]

15/11/30 16:57:59 [main]: INFO log.PerfLogger: <PERFLOG
method=partition-retrieving
from=org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner>

15/11/30 16:57:59 [main]: INFO log.PerfLogger: </PERFLOG
method=partition-retrieving start=1448902679650 end=1448902679653 duration=3
from=org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner>

15/11/30 16:57:59 [main]: DEBUG exec.TableScanOperator: Setting stats (Num
rows: 1767886 Data size: 67670776 Basic stats: COMPLETE Column stats:
COMPLETE) on TS[0]

15/11/30 16:57:59 [main]: DEBUG annotation.StatsRulesProcFactory: [0]
STATS-TS[0] (t):  numRows: 1767886 dataSize: 67670776 basicStatsState:
COMPLETE colStatsState: COMPLETE colStats: {}

15/11/30 16:57:59 [main]: DEBUG exec.SelectOperator: Setting stats (Num
rows: 1767886 Data size: 67670776 Basic stats: COMPLETE Column stats:
COMPLETE) on SEL[1]

15/11/30 16:57:59 [main]: DEBUG annotation.StatsRulesProcFactory: [0]
STATS-SEL[1]:  numRows: 1767886 dataSize: 67670776 basicStatsState: COMPLETE
colStatsState: COMPLETE colStats: {}

15/11/30 16:57:59 [main]: DEBUG annotation.StatsRulesProcFactory:
STATS-GBY[2]: inputSize: 67670776 maxSplitSize: 256000000 parallelism: 1
containsGroupingSet: false sizeOfGroupingSet: 1

15/11/30 16:57:59 [main]: DEBUG annotation.StatsRulesProcFactory:
STATS-GBY[2] hashAgg: true

15/11/30 16:57:59 [main]: DEBUG annotation.StatsRulesProcFactory: [Case 3]
STATS-GBY[2]: cardinality: 1

15/11/30 16:57:59 [main]: DEBUG exec.GroupByOperator: Setting stats (Num
rows: 1 Data size: 8 Basic stats: COMPLETE Column stats: COMPLETE) on GBY[2]

15/11/30 16:57:59 [main]: DEBUG annotation.StatsRulesProcFactory: [0]
STATS-GBY[2]:  numRows: 1 dataSize: 8 basicStatsState: COMPLETE
colStatsState: COMPLETE colStats: {_col0= colName: _col0 colType: bigint
countDistincts: 1 numNulls: 0 avgColLen: 8.0 numTrues: 0 numFalses: 0
isPrimaryKey: false}

15/11/30 16:57:59 [main]: DEBUG exec.ReduceSinkOperator: Setting stats (Num
rows: 1 Data size: 8 Basic stats: COMPLETE Column stats: COMPLETE) on RS[3]

15/11/30 16:57:59 [main]: DEBUG annotation.StatsRulesProcFactory: [0]
STATS-RS[3]:  numRows: 1 dataSize: 8 basicStatsState: COMPLETE
colStatsState: COMPLETE colStats: {VALUE._col0= colName: VALUE._col0
colType: bigint countDistincts: 1 numNulls: 0 avgColLen: 8.0 numTrues: 0
numFalses: 0 isPrimaryKey: false}

15/11/30 16:57:59 [main]: DEBUG annotation.StatsRulesProcFactory:
STATS-GBY[4]: inputSize: 1 maxSplitSize: 256000000 parallelism: 1
containsGroupingSet: false sizeOfGroupingSet: 1

15/11/30 16:57:59 [main]: DEBUG annotation.StatsRulesProcFactory:
STATS-GBY[4] hashAgg: false

15/11/30 16:57:59 [main]: DEBUG annotation.StatsRulesProcFactory: [Case 9]
STATS-GBY[4]: cardinality: 1

15/11/30 16:57:59 [main]: DEBUG exec.GroupByOperator: Setting stats (Num
rows: 1 Data size: 8 Basic stats: COMPLETE Column stats: COMPLETE) on GBY[4]

15/11/30 16:57:59 [main]: DEBUG annotation.StatsRulesProcFactory: [0]
STATS-GBY[4]:  numRows: 1 dataSize: 8 basicStatsState: COMPLETE
colStatsState: COMPLETE colStats: {_col0= colName: _col0 colType: bigint
countDistincts: 1 numNulls: 0 avgColLen: 8.0 numTrues: 0 numFalses: 0
isPrimaryKey: false}

15/11/30 16:57:59 [main]: DEBUG annotation.StatsRulesProcFactory: [0]
STATS-FS[6]:  numRows: 1 dataSize: 8 basicStatsState: COMPLETE
colStatsState: COMPLETE colStats: {_col0= colName: _col0 colType: bigint
countDistincts: 1 numNulls: 0 avgColLen: 8.0 numTrues: 0 numFalses: 0
isPrimaryKey: false}
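
For reference, the COMPLETE basic and column statistics used above come from the metastore; they can be (re)gathered for this table with something like the following sketch:

  analyze table t compute statistics;
  analyze table t compute statistics for columns;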

15/11/30 16:57:59 [IPC Parameter Sending Thread #0]: DEBUG ipc.Client: IPC
Client (1997160570) connection to rhes564/50.140.197.217:9000 from hduser sending #28

15/11/30 16:57:59 [IPC Client (1997160570) connection to
rhes564/50.140.197.217:9000 from hduser]: DEBUG ipc.Client: IPC Client
(1997160570) connection to rhes564/50.140.197.217:9000 from hduser got
value #28

15/11/30 16:57:59 [main]: DEBUG ipc.ProtobufRpcEngine: Call: getListing took
1ms

15/11/30 16:57:59 [main]: DEBUG exec.TableScanOperator: Setting traits
(org.apache.hadoop.hive.ql.plan.OpTraits@74a19718) on TS[0]

15/11/30 16:57:59 [main]: DEBUG exec.SelectOperator: Setting traits
(org.apache.hadoop.hive.ql.plan.OpTraits@1cc6e47a) on SEL[1]

15/11/30 16:57:59 [main]: DEBUG exec.GroupByOperator: Setting traits
(org.apache.hadoop.hive.ql.plan.OpTraits@45c21458) on GBY[2]

15/11/30 16:57:59 [main]: DEBUG exec.ReduceSinkOperator: Setting traits
(org.apache.hadoop.hive.ql.plan.OpTraits@48c5c12f) on RS[3]

15/11/30 16:57:59 [main]: DEBUG exec.GroupByOperator: Setting traits
(org.apache.hadoop.hive.ql.plan.OpTraits@2f0e19fc) on GBY[4]

15/11/30 16:57:59 [main]: DEBUG exec.FileSinkOperator: Setting traits
(org.apache.hadoop.hive.ql.plan.OpTraits@2f0e19fc) on FS[6]

15/11/30 16:57:59 [main]: DEBUG parse.CalcitePlanner: After logical
optimization

TS[0]-SEL[1]-GBY[2]-RS[3]-GBY[4]-FS[6]
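
For reference, the same optimized operator tree can be inspected without actually running the job (a sketch, same session assumed):

  explain select count(1) from t;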

15/11/30 16:57:59 [main]: DEBUG security.Groups: Returning cached groups for
'hduser'

15/11/30 16:57:59 [main]: DEBUG security.Groups: Returning cached groups for
'hduser'

15/11/30 16:57:59 [main]: DEBUG session.SessionState: Session is using
authorization class class
org.apache.hadoop.hive.ql.security.authorization.DefaultHiveAuthorizationPro
vider

15/11/30 16:57:59 [main]: INFO log.PerfLogger: <PERFLOG
method=SparkOptimizeOperatorTree
from=org.apache.hadoop.hive.ql.parse.spark.SparkCompiler>

15/11/30 16:57:59 [main]: INFO spark.SetSparkReducerParallelism: Number of
reducers determined to be: 1

15/11/30 16:57:59 [main]: INFO log.PerfLogger: </PERFLOG
method=SparkOptimizeOperatorTree start=1448902679679 end=1448902679685
duration=6 from=org.apache.hadoop.hive.ql.parse.spark.SparkCompiler>

15/11/30 16:57:59 [main]: INFO log.PerfLogger: <PERFLOG
method=SparkGenerateTaskTree
from=org.apache.hadoop.hive.ql.parse.spark.SparkCompiler>

15/11/30 16:57:59 [main]: DEBUG spark.GenSparkWork: Root operator: TS[0]

15/11/30 16:57:59 [main]: DEBUG spark.GenSparkWork: Leaf operator: RS[3]

15/11/30 16:57:59 [main]: DEBUG spark.GenSparkUtils: Adding map work (Map 1)
for TS[0]

15/11/30 16:57:59 [main]: DEBUG hive.log: DDL: struct t { varchar(30) owner,
varchar(30) object_name, varchar(30) subobject_name, i64 object_id, i64
data_object_id, varchar(19) object_type, timestamp created, timestamp
last_ddl_time, varchar(19) timestamp, varchar(7) status, varchar(1)
temporary2, varchar(1) generated, varchar(1) secondary, i64 namespace,
varchar(30) edition_name, varchar(4000) padding1, varchar(3500) padding2,
varchar(32) attribute, i32 op_type, timestamp op_time}

15/11/30 16:57:59 [main]: DEBUG hive.log: DDL: struct t { varchar(30) owner,
varchar(30) object_name, varchar(30) subobject_name, i64 object_id, i64
data_object_id, varchar(19) object_type, timestamp created, timestamp
last_ddl_time, varchar(19) timestamp, varchar(7) status, varchar(1)
temporary2, varchar(1) generated, varchar(1) secondary, i64 namespace,
varchar(30) edition_name, varchar(4000) padding1, varchar(3500) padding2,
varchar(32) attribute, i32 op_type, timestamp op_time}

15/11/30 16:57:59 [main]: DEBUG hive.log: DDL: struct t { varchar(30) owner,
varchar(30) object_name, varchar(30) subobject_name, i64 object_id, i64
data_object_id, varchar(19) object_type, timestamp created, timestamp
last_ddl_time, varchar(19) timestamp, varchar(7) status, varchar(1)
temporary2, varchar(1) generated, varchar(1) secondary, i64 namespace,
varchar(30) edition_name, varchar(4000) padding1, varchar(3500) padding2,
varchar(32) attribute, i32 op_type, timestamp op_time}

15/11/30 16:57:59 [main]: DEBUG optimizer.GenMapRedUtils: Adding
hdfs://rhes564:9000/user/hive/warehouse/asehadoop.db/t of tablet

15/11/30 16:57:59 [main]: DEBUG optimizer.GenMapRedUtils: Information added
for path hdfs://rhes564:9000/user/hive/warehouse/asehadoop.db/t

15/11/30 16:57:59 [main]: DEBUG spark.GenSparkWork: First pass. Leaf
operator: RS[3]

15/11/30 16:57:59 [main]: DEBUG spark.GenSparkWork: Root operator: GBY[4]

15/11/30 16:57:59 [main]: DEBUG spark.GenSparkWork: Leaf operator: FS[6]

15/11/30 16:57:59 [main]: DEBUG spark.GenSparkUtils: Adding reduce work
(Reducer 2) for GBY[4]

15/11/30 16:57:59 [main]: DEBUG spark.GenSparkUtils: Setting up reduce sink:
RS[3] with following reduce work: Reducer 2

15/11/30 16:57:59 [main]: DEBUG spark.GenSparkWork: Removing RS[3] as parent
from GBY[4]

15/11/30 16:57:59 [main]: DEBUG spark.GenSparkWork: First pass. Leaf
operator: FS[6]

15/11/30 16:57:59 [main]: INFO log.PerfLogger: </PERFLOG
method=SparkGenerateTaskTree start=1448902679686 end=1448902679703
duration=17 from=org.apache.hadoop.hive.ql.parse.spark.SparkCompiler>

15/11/30 16:57:59 [main]: INFO log.PerfLogger: <PERFLOG
method=SparkOptimizeTaskTree
from=org.apache.hadoop.hive.ql.parse.spark.SparkCompiler>

15/11/30 16:57:59 [main]: DEBUG parse.TaskCompiler: Skipping runtime skew
join optimization

15/11/30 16:57:59 [main]: DEBUG physical.NullScanTaskDispatcher: Looking at:
Map 1

15/11/30 16:57:59 [main]: INFO physical.NullScanTaskDispatcher: Looking for
table scans where optimization is applicable

15/11/30 16:57:59 [main]: INFO physical.NullScanTaskDispatcher: Found 0 null
table scans

15/11/30 16:57:59 [main]: DEBUG physical.NullScanTaskDispatcher: Looking at:
Map 1

15/11/30 16:57:59 [main]: INFO physical.NullScanTaskDispatcher: Looking for
table scans where optimization is applicable

15/11/30 16:57:59 [main]: INFO physical.NullScanTaskDispatcher: Found 0 null
table scans

15/11/30 16:57:59 [main]: DEBUG physical.NullScanTaskDispatcher: Looking at:
Map 1

15/11/30 16:57:59 [main]: INFO physical.NullScanTaskDispatcher: Looking for
table scans where optimization is applicable

15/11/30 16:57:59 [main]: INFO physical.NullScanTaskDispatcher: Found 0 null
table scans

15/11/30 16:57:59 [main]: DEBUG parse.TaskCompiler: Skipping vectorization

15/11/30 16:57:59 [main]: DEBUG parse.TaskCompiler: Skipping stage id
rearranger

15/11/30 16:57:59 [main]: INFO log.PerfLogger: </PERFLOG
method=SparkOptimizeTaskTree start=1448902679704 end=1448902679718
duration=14 from=org.apache.hadoop.hive.ql.parse.spark.SparkCompiler>

15/11/30 16:57:59 [main]: INFO parse.CalcitePlanner: Completed plan
generation

15/11/30 16:57:59 [main]: INFO ql.Driver: Semantic Analysis Completed

15/11/30 16:57:59 [main]: DEBUG parse.CalcitePlanner: validation start

15/11/30 16:57:59 [main]: DEBUG parse.CalcitePlanner: not validating
writeEntity, because entity is neither table nor partition

15/11/30 16:57:59 [main]: INFO log.PerfLogger: </PERFLOG
method=semanticAnalyze start=1448902678816 end=1448902679719 duration=903
from=org.apache.hadoop.hive.ql.Driver>

15/11/30 16:57:59 [main]: DEBUG lazy.LazySimpleSerDe:
org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe initialized with:
columnNames=[_col0] columnTypes=[bigint] separator=[[B@51e08af7]
nullstring=\N lastColumnTakesRest=false timestampFormats=null

15/11/30 16:57:59 [main]: INFO exec.ListSinkOperator: Initializing operator
OP[7]

15/11/30 16:57:59 [main]: DEBUG lazy.LazySimpleSerDe:
org.apache.hadoop.hive.serde2.DelimitedJSONSerDe initialized with:
columnNames=[] columnTypes=[] separator=[[B@7bf2b1dc] nullstring=NULL
lastColumnTakesRest=false timestampFormats=null

15/11/30 16:57:59 [main]: INFO exec.ListSinkOperator: Initialization Done 7
OP

15/11/30 16:57:59 [main]: INFO exec.ListSinkOperator: Operator 7 OP
initialized

15/11/30 16:57:59 [main]: INFO ql.Driver: Returning Hive schema:
Schema(fieldSchemas:[FieldSchema(name:_c0, type:bigint, comment:null)],
properties:null)

15/11/30 16:57:59 [main]: INFO log.PerfLogger: </PERFLOG method=compile
start=1448902678804 end=1448902679762 duration=958
from=org.apache.hadoop.hive.ql.Driver>

15/11/30 16:57:59 [main]: INFO metadata.Hive: Dumping metastore api call
timing information for : compilation phase

15/11/30 16:57:59 [main]: DEBUG metadata.Hive: Total time spent in each
metastore function (ms): {isCompatibleWith_(HiveConf, )=1, getTable_(String,
String, )=34}

15/11/30 16:57:59 [main]: INFO ql.Driver: Concurrency mode is disabled, not
creating a lock manager

15/11/30 16:57:59 [main]: INFO log.PerfLogger: <PERFLOG
method=Driver.execute from=org.apache.hadoop.hive.ql.Driver>

15/11/30 16:57:59 [main]: INFO ql.Driver: Starting
command(queryId=hduser_20151130165758_727c5878-4db9-4a29-9416-61a394ac2773):

select count(1) from t

Query ID = hduser_20151130165758_727c5878-4db9-4a29-9416-61a394ac2773

15/11/30 16:57:59 [main]: INFO ql.Driver: Query ID =
hduser_20151130165758_727c5878-4db9-4a29-9416-61a394ac2773

Total jobs = 1

15/11/30 16:57:59 [main]: INFO ql.Driver: Total jobs = 1

15/11/30 16:57:59 [main]: INFO log.PerfLogger: </PERFLOG method=TimeToSubmit
start=1448902678804 end=1448902679763 duration=959
from=org.apache.hadoop.hive.ql.Driver>

15/11/30 16:57:59 [main]: INFO log.PerfLogger: <PERFLOG method=runTasks
from=org.apache.hadoop.hive.ql.Driver>

15/11/30 16:57:59 [main]: INFO log.PerfLogger: <PERFLOG
method=task.SPARK.Stage-1 from=org.apache.hadoop.hive.ql.Driver>

Launching Job 1 out of 1

15/11/30 16:57:59 [main]: INFO ql.Driver: Launching Job 1 out of 1

15/11/30 16:57:59 [main]: INFO ql.Driver: Starting task [Stage-1:MAPRED] in
serial mode

In order to change the average load for a reducer (in bytes):

15/11/30 16:57:59 [main]: INFO spark.SparkTask: In order to change the
average load for a reducer (in bytes):

  set hive.exec.reducers.bytes.per.reducer=<number>

15/11/30 16:57:59 [main]: INFO spark.SparkTask:   set
hive.exec.reducers.bytes.per.reducer=<number>

In order to limit the maximum number of reducers:

15/11/30 16:57:59 [main]: INFO spark.SparkTask: In order to limit the
maximum number of reducers:

  set hive.exec.reducers.max=<number>

15/11/30 16:57:59 [main]: INFO spark.SparkTask:   set
hive.exec.reducers.max=<number>

In order to set a constant number of reducers:

15/11/30 16:57:59 [main]: INFO spark.SparkTask: In order to set a constant
number of reducers:

  set mapreduce.job.reduces=<number>

15/11/30 16:57:59 [main]: INFO spark.SparkTask:   set
mapreduce.job.reduces=<number>
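
These knobs can be set in the session before running the query; a minimal sketch with example values only (not what was used for this run):

  set hive.exec.reducers.bytes.per.reducer=256000000;
  set mapreduce.job.reduces=1;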

15/11/30 16:57:59 [main]: INFO session.SparkSessionManagerImpl: Setting up
the session manager.

15/11/30 16:57:59 [main]: INFO spark.HiveSparkClientFactory: load RPC
property from hive configuration (hive.spark.client.connect.timeout ->
1000).

15/11/30 16:57:59 [main]: INFO spark.HiveSparkClientFactory: load spark
property from hive configuration (spark.serializer ->
org.apache.spark.serializer.KryoSerializer).

15/11/30 16:57:59 [main]: INFO spark.HiveSparkClientFactory: load spark
property from hive configuration (spark.eventLog.enabled -> true).

15/11/30 16:57:59 [main]: INFO spark.HiveSparkClientFactory: load spark
property from hive configuration (spark.eventLog.dir ->
/usr/lib/spark/logs).

15/11/30 16:57:59 [main]: INFO spark.HiveSparkClientFactory: load RPC
property from hive configuration (hive.spark.client.rpc.threads -> 8).

15/11/30 16:57:59 [main]: INFO spark.HiveSparkClientFactory: load RPC
property from hive configuration (hive.spark.client.secret.bits -> 256).

15/11/30 16:57:59 [main]: INFO spark.HiveSparkClientFactory: load spark
property from hive configuration (spark.home -> /usr/lib/spark).

15/11/30 16:57:59 [main]: INFO spark.HiveSparkClientFactory: load RPC
property from hive configuration (hive.spark.client.rpc.max.size ->
52428800).

15/11/30 16:57:59 [main]: INFO spark.HiveSparkClientFactory: load spark
property from hive configuration (spark.master -> spark://rhes564:7077).

15/11/30 16:57:59 [main]: INFO spark.HiveSparkClientFactory: load spark
property from hive configuration (spark.executor.memory -> 512m).

15/11/30 16:57:59 [main]: INFO spark.HiveSparkClientFactory: load RPC
property from hive configuration (hive.spark.client.server.connect.timeout
-> 90000).
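
The spark.* values above are read from the Hive configuration; they can equally be overridden per session, e.g. a sketch using the same values the log reports:

  set spark.master=spark://rhes564:7077;
  set spark.executor.memory=512m;
  set spark.serializer=org.apache.spark.serializer.KryoSerializer;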

15/11/30 16:57:59 [main]: DEBUG logging.InternalLoggerFactory: Using SLF4J
as the default logging framework

15/11/30 16:57:59 [main]: DEBUG channel.MultithreadEventLoopGroup:
-Dio.netty.eventLoopThreads: 24

15/11/30 16:57:59 [main]: DEBUG internal.PlatformDependent0:
java.nio.Buffer.address: available

15/11/30 16:57:59 [main]: DEBUG internal.PlatformDependent0:
sun.misc.Unsafe.theUnsafe: available

15/11/30 16:57:59 [main]: DEBUG internal.PlatformDependent0:
sun.misc.Unsafe.copyMemory: available

15/11/30 16:57:59 [main]: DEBUG internal.PlatformDependent0:
java.nio.Bits.unaligned: true

15/11/30 16:57:59 [main]: DEBUG internal.PlatformDependent: Java version: 7

15/11/30 16:57:59 [main]: DEBUG internal.PlatformDependent:
-Dio.netty.noUnsafe: false

15/11/30 16:57:59 [main]: DEBUG internal.PlatformDependent: sun.misc.Unsafe:
available

15/11/30 16:57:59 [main]: DEBUG internal.PlatformDependent:
-Dio.netty.noJavassist: false

15/11/30 16:57:59 [main]: DEBUG internal.PlatformDependent: Javassist:
unavailable

15/11/30 16:57:59 [main]: DEBUG internal.PlatformDependent: You don't have
Javassist in your class path or you don't have enough permission to load
dynamically generated classes.  Please check the configuration for better
performance.

15/11/30 16:57:59 [main]: DEBUG internal.PlatformDependent:
-Dio.netty.tmpdir: /tmp (java.io.tmpdir)

15/11/30 16:57:59 [main]: DEBUG internal.PlatformDependent:
-Dio.netty.bitMode: 64 (sun.arch.data.model)

15/11/30 16:57:59 [main]: DEBUG internal.PlatformDependent:
-Dio.netty.noPreferDirect: false

15/11/30 16:57:59 [main]: DEBUG nio.NioEventLoop:
-Dio.netty.noKeySetOptimization: false

15/11/30 16:57:59 [main]: DEBUG nio.NioEventLoop:
-Dio.netty.selectorAutoRebuildThreshold: 512

15/11/30 16:57:59 [main]: DEBUG internal.ThreadLocalRandom:
-Dio.netty.initialSeedUniquifier: 0xd8ebcbdc152b0b7b (took 0 ms)

15/11/30 16:57:59 [main]: DEBUG buffer.ByteBufUtil:
-Dio.netty.allocator.type: unpooled

15/11/30 16:57:59 [main]: DEBUG buffer.ByteBufUtil:
-Dio.netty.threadLocalDirectBufferSize: 65536

15/11/30 16:57:59 [main]: DEBUG util.NetUtil: Loopback interface: lo (lo,
127.0.0.1)

15/11/30 16:57:59 [main]: DEBUG util.NetUtil: /proc/sys/net/core/somaxconn:
128

15/11/30 16:57:59 [main]: WARN rpc.RpcConfiguration: Your hostname, rhes564,
resolves to a loopback address, but we couldn't find  any external IP
address!

15/11/30 16:57:59 [main]: WARN rpc.RpcConfiguration: Set
hive.spark.client.server.address if you need to bind to another address.

15/11/30 16:57:59 [main]: INFO spark.HiveSparkClientFactory: load RPC
property from hive configuration (hive.spark.client.connect.timeout ->
1000).

15/11/30 16:57:59 [main]: INFO spark.HiveSparkClientFactory: load spark
property from hive configuration (spark.serializer ->
org.apache.spark.serializer.KryoSerializer).

15/11/30 16:57:59 [main]: INFO spark.HiveSparkClientFactory: load spark
property from hive configuration (spark.eventLog.enabled -> true).

15/11/30 16:57:59 [main]: INFO spark.HiveSparkClientFactory: load spark
property from hive configuration (spark.eventLog.dir ->
/usr/lib/spark/logs).

15/11/30 16:57:59 [main]: INFO spark.HiveSparkClientFactory: load RPC
property from hive configuration (hive.spark.client.rpc.threads -> 8).

15/11/30 16:57:59 [main]: INFO spark.HiveSparkClientFactory: load RPC
property from hive configuration (hive.spark.client.secret.bits -> 256).

15/11/30 16:57:59 [main]: INFO spark.HiveSparkClientFactory: load spark
property from hive configuration (spark.home -> /usr/lib/spark).

15/11/30 16:57:59 [main]: INFO spark.HiveSparkClientFactory: load RPC
property from hive configuration (hive.spark.client.rpc.max.size ->
52428800).

15/11/30 16:57:59 [main]: INFO spark.HiveSparkClientFactory: load spark
property from hive configuration (spark.master -> spark://rhes564:7077).

15/11/30 16:57:59 [main]: INFO spark.HiveSparkClientFactory: load spark
property from hive configuration (spark.executor.memory -> 512m).

15/11/30 16:57:59 [main]: INFO spark.HiveSparkClientFactory: load RPC
property from hive configuration (hive.spark.client.server.connect.timeout
-> 90000).

15/11/30 16:58:00 [main]: INFO client.SparkClientImpl: Running client driver
with argv: /usr/lib/spark/bin/spark-submit --properties-file
/tmp/spark-submit.9131386023383484888.properties --class
org.apache.hive.spark.client.RemoteDriver
/usr/lib/hive/lib/hive-exec-1.2.1.jar --remote-host rhes564 --remote-port
26896 --conf hive.spark.client.connect.timeout=1000 --conf
hive.spark.client.server.connect.timeout=90000 --conf
hive.spark.client.channel.log.level=null --conf
hive.spark.client.rpc.max.size=52428800 --conf
hive.spark.client.rpc.threads=8 --conf hive.spark.client.secret.bits=256
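
Since RemoteDriver is launched with hive-exec-1.2.1.jar as the application jar, one possible workaround for the NoClassDefFoundError further down would be to also put that jar on the remote driver and executor classpaths explicitly; a sketch only, assuming these Spark properties are passed through by this setup, and not verified here:

  set spark.driver.extraClassPath=/usr/lib/hive/lib/hive-exec-1.2.1.jar;
  set spark.executor.extraClassPath=/usr/lib/hive/lib/hive-exec-1.2.1.jar;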

15/11/30 16:58:00 [stderr-redir-1]: INFO client.SparkClientImpl: Warning:
Ignoring non-spark config property:
hive.spark.client.server.connect.timeout=90000

15/11/30 16:58:00 [stderr-redir-1]: INFO client.SparkClientImpl: Warning:
Ignoring non-spark config property: hive.spark.client.rpc.threads=8

15/11/30 16:58:00 [stderr-redir-1]: INFO client.SparkClientImpl: Warning:
Ignoring non-spark config property: hive.spark.client.connect.timeout=1000

15/11/30 16:58:00 [stderr-redir-1]: INFO client.SparkClientImpl: Warning:
Ignoring non-spark config property: hive.spark.client.secret.bits=256

15/11/30 16:58:00 [stderr-redir-1]: INFO client.SparkClientImpl: Warning:
Ignoring non-spark config property: hive.spark.client.rpc.max.size=52428800

15/11/30 16:58:00 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
16:58:00 INFO client.RemoteDriver: Connecting to: rhes564:26896

15/11/30 16:58:01 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
16:58:01 WARN rpc.Rpc: Invalid log level null, reverting to default.

15/11/30 16:58:01 [RPC-Handler-3]: DEBUG util.ResourceLeakDetector:
-Dio.netty.leakDetectionLevel: simple

15/11/30 16:58:01 [RPC-Handler-3]: DEBUG util.Recycler:
-Dio.netty.recycler.maxCapacity.default: 262144

15/11/30 16:58:01 [RPC-Handler-3]: DEBUG rpc.KryoMessageCodec: Decoded
message of type org.apache.hive.spark.client.rpc.Rpc$SaslMessage (41 bytes)

15/11/30 16:58:01 [RPC-Handler-3]: DEBUG internal.Cleaner0:
java.nio.ByteBuffer.cleaner(): available

15/11/30 16:58:01 [RPC-Handler-3]: DEBUG rpc.RpcServer$SaslServerHandler:
Handling SASL challenge message...

15/11/30 16:58:01 [RPC-Handler-3]: DEBUG rpc.RpcServer$SaslServerHandler:
Sending SASL challenge response...

15/11/30 16:58:01 [RPC-Handler-3]: DEBUG rpc.KryoMessageCodec: Encoded
message of type org.apache.hive.spark.client.rpc.Rpc$SaslMessage (98 bytes)

15/11/30 16:58:01 [RPC-Handler-3]: DEBUG rpc.KryoMessageCodec: Decoded
message of type org.apache.hive.spark.client.rpc.Rpc$SaslMessage (275 bytes)

15/11/30 16:58:01 [RPC-Handler-3]: DEBUG rpc.RpcServer$SaslServerHandler:
Handling SASL challenge message...

15/11/30 16:58:01 [RPC-Handler-3]: DEBUG rpc.RpcServer$SaslServerHandler:
Sending SASL challenge response...

15/11/30 16:58:01 [RPC-Handler-3]: DEBUG rpc.KryoMessageCodec: Encoded
message of type org.apache.hive.spark.client.rpc.Rpc$SaslMessage (45 bytes)

15/11/30 16:58:01 [RPC-Handler-3]: DEBUG rpc.RpcServer$SaslServerHandler:
SASL negotiation finished with QOP auth.

15/11/30 16:58:01 [main]: DEBUG session.SparkSessionManagerImpl: New session
(30d627c3-a88d-4126-b616-944a6b7d327b) is created.

15/11/30 16:58:01 [main]: INFO log.PerfLogger: <PERFLOG
method=SparkSubmitJob from=org.apache.hadoop.hive.ql.exec.spark.SparkTask>

15/11/30 16:58:01 [main]: DEBUG rpc.RpcDispatcher: [ClientProtocol]
Registered outstanding rpc 0
(org.apache.hive.spark.client.BaseProtocol$SyncJobRequest).

15/11/30 16:58:01 [RPC-Handler-3]: DEBUG rpc.KryoMessageCodec: Encoded
message of type org.apache.hive.spark.client.rpc.Rpc$MessageHeader (5 bytes)

15/11/30 16:58:01 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
16:58:01 INFO spark.SparkContext: Running Spark version 1.5.2

15/11/30 16:58:01 [RPC-Handler-3]: DEBUG rpc.KryoMessageCodec: Encoded
message of type org.apache.hive.spark.client.BaseProtocol$SyncJobRequest
(159 bytes)

15/11/30 16:58:01 [main]: INFO ql.Context: New scratch dir is
hdfs://rhes564:9000/tmp/hive/hduser/dec45671-0b5f-4496-8b3a-66f097d0c6f0/hiv
e_2015-11-30_16-57-58_805_2366423616530225987-1

15/11/30 16:58:01 [main]: DEBUG hdfs.DFSClient:
/tmp/hive/hduser/dec45671-0b5f-4496-8b3a-66f097d0c6f0/hive_2015-11-30_16-57-
58_805_2366423616530225987-1/-mr-10003: masked=rwxr-xr-x

15/11/30 16:58:01 [IPC Parameter Sending Thread #0]: DEBUG ipc.Client: IPC
Client (1997160570) connection to rhes564/50.140.197.217:9000 from hduser sending #29

15/11/30 16:58:01 [IPC Client (1997160570) connection to
rhes564/50.140.197.217:9000 from hduser]: DEBUG ipc.Client: IPC Client
(1997160570) connection to rhes564/50.140.197.217:9000 from hduser got
value #29

15/11/30 16:58:01 [main]: DEBUG ipc.ProtobufRpcEngine: Call: mkdirs took 3ms

15/11/30 16:58:02 [main]: DEBUG rpc.RpcDispatcher: [ClientProtocol]
Registered outstanding rpc 1
(org.apache.hive.spark.client.BaseProtocol$JobRequest).

15/11/30 16:58:02 [main]: DEBUG client.SparkClientImpl: Send
JobRequest[3d26e448-e721-49db-a667-298b335b3b7a].

15/11/30 16:58:02 [RPC-Handler-3]: DEBUG rpc.KryoMessageCodec: Encoded
message of type org.apache.hive.spark.client.rpc.Rpc$MessageHeader (5 bytes)

15/11/30 16:58:02 [RPC-Handler-3]: DEBUG rpc.KryoMessageCodec: Encoded
message of type org.apache.hive.spark.client.BaseProtocol$JobRequest (126802
bytes)

15/11/30 16:58:02 [main]: INFO log.PerfLogger: </PERFLOG
method=SparkSubmitJob start=1448902681932 end=1448902682056 duration=124
from=org.apache.hadoop.hive.ql.exec.spark.SparkTask>

Starting Spark Job = 3d26e448-e721-49db-a667-298b335b3b7a

15/11/30 16:58:02 [main]: INFO spark.SparkTask: Starting Spark Job =
3d26e448-e721-49db-a667-298b335b3b7a

15/11/30 16:58:02 [main]: INFO log.PerfLogger: <PERFLOG method=SparkRunJob
from=org.apache.hadoop.hive.ql.exec.spark.status.SparkJobMonitor>

15/11/30 16:58:02 [main]: INFO log.PerfLogger: <PERFLOG
method=SparkSubmitToRunning
from=org.apache.hadoop.hive.ql.exec.spark.status.SparkJobMonitor>

state = SENT

15/11/30 16:58:02 [main]: INFO status.SparkJobMonitor: state = SENT

15/11/30 16:58:02 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
16:58:02 WARN util.NativeCodeLoader: Unable to load native-hadoop library
for your platform... using builtin-java classes where applicable

15/11/30 16:58:02 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
16:58:02 INFO spark.SecurityManager: Changing view acls to: hduser

15/11/30 16:58:02 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
16:58:02 INFO spark.SecurityManager: Changing modify acls to: hduser

15/11/30 16:58:02 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
16:58:02 INFO spark.SecurityManager: SecurityManager: authentication
disabled; ui acls disabled; users with view permissions: Set(hduser); users
with modify permissions: Set(hduser)

15/11/30 16:58:02 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
16:58:02 INFO slf4j.Slf4jLogger: Slf4jLogger started

15/11/30 16:58:02 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
16:58:02 INFO Remoting: Starting remoting

15/11/30 16:58:02 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
16:58:02 INFO Remoting: Remoting started; listening on addresses
:[akka.tcp://sparkDriver@50.140.197.217:25977]

15/11/30 16:58:02 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
16:58:02 INFO util.Utils: Successfully started service 'sparkDriver' on port
25977.

15/11/30 16:58:02 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
16:58:02 INFO spark.SparkEnv: Registering MapOutputTracker

15/11/30 16:58:02 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
16:58:02 INFO spark.SparkEnv: Registering BlockManagerMaster

15/11/30 16:58:02 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
16:58:02 INFO storage.DiskBlockManager: Created local directory at
/tmp/blockmgr-9cb5b2ee-6388-4a3a-b661-92fc64abdd46

15/11/30 16:58:02 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
16:58:02 INFO storage.MemoryStore: MemoryStore started with capacity 529.9
MB

15/11/30 16:58:02 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
16:58:02 INFO spark.HttpFileServer: HTTP File server directory is
/tmp/spark-4a493a0a-746b-4db8-afc4-905a5926999a/httpd-302ea1fd-6573-4192-890
7-6657a24e5e22

15/11/30 16:58:02 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
16:58:02 INFO spark.HttpServer: Starting HTTP Server

15/11/30 16:58:03 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
16:58:03 INFO server.Server: jetty-8.y.z-SNAPSHOT

15/11/30 16:58:03 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
16:58:03 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:39801

15/11/30 16:58:03 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
16:58:03 INFO util.Utils: Successfully started service 'HTTP file server' on
port 39801.

15/11/30 16:58:03 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
16:58:03 INFO spark.SparkEnv: Registering OutputCommitCoordinator

state = SENT

15/11/30 16:58:03 [main]: INFO status.SparkJobMonitor: state = SENT

15/11/30 16:58:03 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
16:58:03 INFO server.Server: jetty-8.y.z-SNAPSHOT

15/11/30 16:58:03 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
16:58:03 INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040

15/11/30 16:58:03 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
16:58:03 INFO util.Utils: Successfully started service 'SparkUI' on port
4040.

15/11/30 16:58:03 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
16:58:03 INFO ui.SparkUI: Started SparkUI at http://50.140.197.217:4040

15/11/30 16:58:03 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
16:58:03 INFO spark.SparkContext: Added JAR
file:/usr/lib/hive/lib/hive-exec-1.2.1.jar at
http://50.140.197.217:39801/jars/hive-exec-1.2.1.jar with timestamp
1448902683296

15/11/30 16:58:03 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
16:58:03 WARN metrics.MetricsSystem: Using default name DAGScheduler for
source because spark.app.id is not set.

15/11/30 16:58:03 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
16:58:03 INFO client.AppClient$ClientEndpoint: Connecting to master
spark://rhes564:7077...

15/11/30 16:58:03 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
16:58:03 INFO cluster.SparkDeploySchedulerBackend: Connected to Spark
cluster with app ID app-20151130165803-0001

15/11/30 16:58:03 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
16:58:03 INFO util.Utils: Successfully started service
'org.apache.spark.network.netty.NettyBlockTransferService' on port 24395.

15/11/30 16:58:03 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
16:58:03 INFO netty.NettyBlockTransferService: Server created on 24395

15/11/30 16:58:03 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
16:58:03 INFO storage.BlockManagerMaster: Trying to register BlockManager

15/11/30 16:58:03 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
16:58:03 INFO storage.BlockManagerMasterEndpoint: Registering block manager
50.140.197.217:24395 <http://50.140.197.217:24395>  with 529.9 MB RAM,
BlockManagerId(driver, 50.140.197.217, 24395)

15/11/30 16:58:03 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
16:58:03 INFO storage.BlockManagerMaster: Registered BlockManager

15/11/30 16:58:03 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
16:58:03 INFO scheduler.EventLoggingListener: Logging events to
file:/usr/lib/spark/logs/app-20151130165803-0001

15/11/30 16:58:03 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
16:58:03 INFO cluster.SparkDeploySchedulerBackend: SchedulerBackend is ready
for scheduling beginning after reached minRegisteredResourcesRatio: 0.0

15/11/30 16:58:03 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
16:58:03 INFO spark.SparkContext: Added JAR
file:/usr/lib/hive/lib/hive-exec-1.2.1.jar at
http://50.140.197.217:39801/jars/hive-exec-1.2.1.jar with timestamp
1448902683852

15/11/30 16:58:03 [RPC-Handler-3]: DEBUG rpc.KryoMessageCodec: Decoded
message of type org.apache.hive.spark.client.rpc.Rpc$MessageHeader (5 bytes)

15/11/30 16:58:03 [RPC-Handler-3]: DEBUG rpc.KryoMessageCodec: Decoded
message of type org.apache.hive.spark.client.rpc.Rpc$NullMessage (2 bytes)

15/11/30 16:58:03 [RPC-Handler-3]: DEBUG rpc.RpcDispatcher: [ClientProtocol]
Received RPC message: type=REPLY id=0
payload=org.apache.hive.spark.client.rpc.Rpc$NullMessage

15/11/30 16:58:03 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
16:58:03 INFO rpc.RpcDispatcher: [DriverProtocol] Closing channel due to
exception in pipeline (java.lang.NoClassDefFoundError:
org/apache/hive/spark/client/Job).

15/11/30 16:58:03 [RPC-Handler-3]: DEBUG rpc.KryoMessageCodec: Decoded
message of type org.apache.hive.spark.client.rpc.Rpc$MessageHeader (5 bytes)

15/11/30 16:58:03 [RPC-Handler-3]: DEBUG rpc.KryoMessageCodec: Decoded
message of type java.lang.String (3901 bytes)

15/11/30 16:58:03 [RPC-Handler-3]: DEBUG rpc.RpcDispatcher: [ClientProtocol]
Received RPC message: type=ERROR id=1 payload=java.lang.String

15/11/30 16:58:03 [RPC-Handler-3]: WARN rpc.RpcDispatcher: Received error
message:io.netty.handler.codec.DecoderException:
java.lang.NoClassDefFoundError: org/apache/hive/spark/client/Job

        at
io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.
java:358)

        at
io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder
.java:230)

        at
io.netty.handler.codec.ByteToMessageCodec.channelRead(ByteToMessageCodec.jav
a:103)

        at
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractCha
nnelHandlerContext.java:308)

        at
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChann
elHandlerContext.java:294)

        at
io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHand
lerAdapter.java:86)

        at
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractCha
nnelHandlerContext.java:308)

        at
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChann
elHandlerContext.java:294)

        at
io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeli
ne.java:846)

        at
io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioBy
teChannel.java:131)

        at
io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)

        at
io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.
java:468)

        at
io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)

        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)

        at
io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventEx
ecutor.java:111)

        at java.lang.Thread.run(Thread.java:724)

Caused by: java.lang.NoClassDefFoundError: org/apache/hive/spark/client/Job

        at java.lang.ClassLoader.defineClass1(Native Method)

        at java.lang.ClassLoader.defineClass(ClassLoader.java:792)

        at
java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)

        at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)

        at java.net.URLClassLoader.access$100(URLClassLoader.java:71)

        at java.net.URLClassLoader$1.run(URLClassLoader.java:361)

        at java.net.URLClassLoader$1.run(URLClassLoader.java:355)

        at java.security.AccessController.doPrivileged(Native Method)

        at java.net.URLClassLoader.findClass(URLClassLoader.java:354)

        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)

        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)

        at java.lang.ClassLoader.loadClass(ClassLoader.java:411)

        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)

        at java.lang.Class.forName0(Native Method)

        at java.lang.Class.forName(Class.java:270)

        at
org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readName
(DefaultClassResolver.java:136)

        at
org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readClas
s(DefaultClassResolver.java:115)

        at
org.apache.hive.com.esotericsoftware.kryo.Kryo.readClass(Kryo.java:656)

        at
org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(Objec
tField.java:99)

        at
org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(F
ieldSerializer.java:507)

        at
org.apache.hive.com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:
776)

        at
org.apache.hive.spark.client.rpc.KryoMessageCodec.decode(KryoMessageCodec.ja
va:96)

        at
io.netty.handler.codec.ByteToMessageCodec$1.decode(ByteToMessageCodec.java:4
2)

        at
io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.
java:327)

        ... 15 more

Caused by: java.lang.ClassNotFoundException:
org.apache.hive.spark.client.Job

        at java.net.URLClassLoader$1.run(URLClassLoader.java:366)

        at java.net.URLClassLoader$1.run(URLClassLoader.java:355)

        at java.security.AccessController.doPrivileged(Native Method)

        at java.net.URLClassLoader.findClass(URLClassLoader.java:354)

        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)

        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)

        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)

        ... 39 more

.

15/11/30 16:58:03 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
16:58:03 WARN client.RemoteDriver: Shutting down driver because RPC channel
was closed.

15/11/30 16:58:03 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
16:58:03 INFO client.RemoteDriver: Shutting down remote driver.

15/11/30 16:58:03 [RPC-Handler-3]: WARN client.SparkClientImpl: Client RPC
channel closed unexpectedly.

15/11/30 16:58:03 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
16:58:03 INFO handler.ContextHandler: stopped
o.s.j.s.ServletContextHandler{/metrics/json,null}

15/11/30 16:58:03 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
16:58:03 INFO handler.ContextHandler: stopped
o.s.j.s.ServletContextHandler{/stages/stage/kill,null}

15/11/30 16:58:03 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
16:58:03 INFO handler.ContextHandler: stopped
o.s.j.s.ServletContextHandler{/api,null}

15/11/30 16:58:03 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
16:58:03 INFO handler.ContextHandler: stopped
o.s.j.s.ServletContextHandler{/,null}

15/11/30 16:58:03 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
16:58:03 INFO handler.ContextHandler: stopped
o.s.j.s.ServletContextHandler{/static,null}

15/11/30 16:58:03 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
16:58:03 INFO handler.ContextHandler: stopped
o.s.j.s.ServletContextHandler{/executors/threadDump/json,null}

15/11/30 16:58:03 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
16:58:03 INFO handler.ContextHandler: stopped
o.s.j.s.ServletContextHandler{/executors/threadDump,null}

15/11/30 16:58:03 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
16:58:03 INFO handler.ContextHandler: stopped
o.s.j.s.ServletContextHandler{/executors/json,null}

15/11/30 16:58:03 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
16:58:03 INFO handler.ContextHandler: stopped
o.s.j.s.ServletContextHandler{/executors,null}

15/11/30 16:58:03 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
16:58:03 INFO handler.ContextHandler: stopped
o.s.j.s.ServletContextHandler{/environment/json,null}

15/11/30 16:58:03 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
16:58:03 INFO handler.ContextHandler: stopped
o.s.j.s.ServletContextHandler{/environment,null}

15/11/30 16:58:03 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
16:58:03 INFO handler.ContextHandler: stopped
o.s.j.s.ServletContextHandler{/storage/rdd/json,null}

15/11/30 16:58:03 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
16:58:03 INFO handler.ContextHandler: stopped
o.s.j.s.ServletContextHandler{/storage/rdd,null}

15/11/30 16:58:03 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
16:58:03 INFO handler.ContextHandler: stopped
o.s.j.s.ServletContextHandler{/storage/json,null}

15/11/30 16:58:03 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
16:58:03 INFO handler.ContextHandler: stopped
o.s.j.s.ServletContextHandler{/storage,null}

15/11/30 16:58:03 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
16:58:03 INFO handler.ContextHandler: stopped
o.s.j.s.ServletContextHandler{/stages/pool/json,null}

15/11/30 16:58:03 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
16:58:03 INFO handler.ContextHandler: stopped
o.s.j.s.ServletContextHandler{/stages/pool,null}

15/11/30 16:58:03 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
16:58:03 INFO handler.ContextHandler: stopped
o.s.j.s.ServletContextHandler{/stages/stage/json,null}

15/11/30 16:58:03 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
16:58:03 INFO handler.ContextHandler: stopped
o.s.j.s.ServletContextHandler{/stages/stage,null}

15/11/30 16:58:03 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
16:58:03 INFO handler.ContextHandler: stopped
o.s.j.s.ServletContextHandler{/stages/json,null}

15/11/30 16:58:03 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
16:58:03 INFO handler.ContextHandler: stopped
o.s.j.s.ServletContextHandler{/stages,null}

15/11/30 16:58:03 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
16:58:03 INFO handler.ContextHandler: stopped
o.s.j.s.ServletContextHandler{/jobs/job/json,null}

15/11/30 16:58:03 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
16:58:03 INFO handler.ContextHandler: stopped
o.s.j.s.ServletContextHandler{/jobs/job,null}

15/11/30 16:58:03 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
16:58:03 INFO handler.ContextHandler: stopped
o.s.j.s.ServletContextHandler{/jobs/json,null}

15/11/30 16:58:03 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
16:58:03 INFO handler.ContextHandler: stopped
o.s.j.s.ServletContextHandler{/jobs,null}

15/11/30 16:58:03 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
16:58:03 INFO ui.SparkUI: Stopped Spark web UI at http://50.140.197.217:4040

15/11/30 16:58:03 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
16:58:03 INFO scheduler.DAGScheduler: Stopping DAGScheduler

15/11/30 16:58:03 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
16:58:03 INFO cluster.SparkDeploySchedulerBackend: Shutting down all
executors

15/11/30 16:58:03 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
16:58:03 INFO cluster.SparkDeploySchedulerBackend: Asking each executor to
shut down

15/11/30 16:58:04 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
16:58:04 INFO spark.MapOutputTrackerMasterEndpoint:
MapOutputTrackerMasterEndpoint stopped!

15/11/30 16:58:04 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
16:58:04 INFO storage.MemoryStore: MemoryStore cleared

15/11/30 16:58:04 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
16:58:04 INFO storage.BlockManager: BlockManager stopped

state = SENT

15/11/30 16:58:04 [main]: INFO status.SparkJobMonitor: state = SENT

15/11/30 16:58:04 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
16:58:04 INFO storage.BlockManagerMaster: BlockManagerMaster stopped

15/11/30 16:58:04 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
16:58:04 INFO
scheduler.OutputCommitCoordinator$OutputCommitCoordinatorEndpoint:
OutputCommitCoordinator stopped!

15/11/30 16:58:04 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
16:58:04 INFO spark.SparkContext: Successfully stopped SparkContext

15/11/30 16:58:04 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
16:58:04 INFO util.ShutdownHookManager: Shutdown hook called

15/11/30 16:58:04 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
16:58:04 INFO remote.RemoteActorRefProvider$RemotingTerminator: Shutting
down remote daemon.

15/11/30 16:58:04 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
16:58:04 INFO util.ShutdownHookManager: Deleting directory
/tmp/spark-4a493a0a-746b-4db8-afc4-905a5926999a

state = SENT

15/11/30 16:58:05 [main]: INFO status.SparkJobMonitor: state = SENT

[... status.SparkJobMonitor continues to report state = SENT once per second
from 16:58:05 through 16:59:02; the only other output in this interval is the
IPC Client connection to rhes564/50.140.197.217:9000 from hduser closing at
16:58:11 (stopped, remaining connections 0) ...]

state = SENT

15/11/30 16:59:03 [main]: INFO status.SparkJobMonitor: state = SENT

15/11/30 16:59:03 [main]: INFO status.SparkJobMonitor: Job hasn't been
submitted after 61s. Aborting it.

Status: SENT

15/11/30 16:59:03 [main]: ERROR status.SparkJobMonitor: Status: SENT

15/11/30 16:59:03 [main]: INFO log.PerfLogger: </PERFLOG method=SparkRunJob
start=1448902682057 end=1448902743343 duration=61286
from=org.apache.hadoop.hive.ql.exec.spark.status.SparkJobMonitor>

Failed to execute spark task, with exception
'java.lang.IllegalStateException(RPC channel is closed.)'

15/11/30 16:59:03 [main]: ERROR spark.SparkTask: Failed to execute spark
task, with exception 'java.lang.IllegalStateException(RPC channel is
closed.)'

java.lang.IllegalStateException: RPC channel is closed.

        at
com.google.common.base.Preconditions.checkState(Preconditions.java:149)

        at org.apache.hive.spark.client.rpc.Rpc.call(Rpc.java:272)

        at org.apache.hive.spark.client.rpc.Rpc.call(Rpc.java:259)

        at
org.apache.hive.spark.client.SparkClientImpl$ClientProtocol.cancel(SparkClie
ntImpl.java:499)

        at
org.apache.hive.spark.client.SparkClientImpl.cancel(SparkClientImpl.java:180
)

        at
org.apache.hive.spark.client.JobHandleImpl.cancel(JobHandleImpl.java:62)

        at
org.apache.hadoop.hive.ql.exec.spark.status.impl.RemoteSparkJobRef.cancelJob
(RemoteSparkJobRef.java:54)

        at
org.apache.hadoop.hive.ql.exec.spark.SparkTask.execute(SparkTask.java:122)

        at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)

        at
org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:88)

        at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1653)

        at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1412)

        at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1195)

        at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1059)

        at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1049)

        at
org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:213)

        at
org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:165)

        at
org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)

        at
org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:311)

        at
org.apache.hadoop.hive.cli.CliDriver.processReader(CliDriver.java:409)

        at
org.apache.hadoop.hive.cli.CliDriver.processFile(CliDriver.java:425)

        at
org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:714)

        at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681)

        at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)

        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

        at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57
)

        at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl
.java:43)

        at java.lang.reflect.Method.invoke(Method.java:606)

        at org.apache.hadoop.util.RunJar.run(RunJar.java:221)

        at org.apache.hadoop.util.RunJar.main(RunJar.java:136)

 

15/11/30 16:59:03 [main]: ERROR spark.SparkTask: Failed to execute spark
task, with exception 'java.lang.IllegalStateException(RPC channel is
closed.)'

java.lang.IllegalStateException: RPC channel is closed.

        [... identical stack trace as above, from Preconditions.checkState down
to org.apache.hadoop.util.RunJar.main ...]

15/11/30 16:59:03 [main]: DEBUG ipc.Client: The ping interval is 60000 ms.

15/11/30 16:59:03 [main]: DEBUG ipc.Client: Connecting to
rhes564/50.140.197.217:9000 <http://50.140.197.217:9000> 

15/11/30 16:59:03 [IPC Client (1997160570) connection to
rhes564/50.140.197.217:9000 <http://50.140.197.217:9000>  from hduser]:
DEBUG ipc.Client: IPC Client (1997160570) connection to
rhes564/50.140.197.217:9000 <http://50.140.197.217:9000>  from hduser:
starting, having connections 1

15/11/30 16:59:03 [IPC Parameter Sending Thread #1]: DEBUG ipc.Client: IPC
Client (1997160570) connection to rhes564/50.140.197.217:9000
<http://50.140.197.217:9000>  from hduser sending #30

15/11/30 16:59:03 [IPC Client (1997160570) connection to
rhes564/50.140.197.217:9000 <http://50.140.197.217:9000>  from hduser]:
DEBUG ipc.Client: IPC Client (1997160570) connection to
rhes564/50.140.197.217:9000 <http://50.140.197.217:9000>  from hduser got
value #30

15/11/30 16:59:03 [main]: DEBUG ipc.ProtobufRpcEngine: Call: delete took 3ms

15/11/30 16:59:03 [IPC Parameter Sending Thread #1]: DEBUG ipc.Client: IPC
Client (1997160570) connection to rhes564/50.140.197.217:9000
<http://50.140.197.217:9000>  from hduser sending #31

15/11/30 16:59:03 [IPC Client (1997160570) connection to
rhes564/50.140.197.217:9000 <http://50.140.197.217:9000>  from hduser]:
DEBUG ipc.Client: IPC Client (1997160570) connection to
rhes564/50.140.197.217:9000 <http://50.140.197.217:9000>  from hduser got
value #31

15/11/30 16:59:03 [main]: DEBUG ipc.ProtobufRpcEngine: Call: delete took 1ms

FAILED: Execution Error, return code 1 from
org.apache.hadoop.hive.ql.exec.spark.SparkTask

15/11/30 16:59:03 [main]: ERROR ql.Driver: FAILED: Execution Error, return
code 1 from org.apache.hadoop.hive.ql.exec.spark.SparkTask

15/11/30 16:59:03 [main]: DEBUG ql.Driver: Shutting down query

select count(1) from t

15/11/30 16:59:03 [main]: INFO metadata.Hive: Dumping metastore api call
timing information for : execution phase

15/11/30 16:59:03 [main]: DEBUG metadata.Hive: Total time spent in each
metastore function (ms): {isCompatibleWith_(HiveConf, )=1}

15/11/30 16:59:03 [main]: INFO log.PerfLogger: </PERFLOG
method=Driver.execute start=1448902679762 end=1448902743371 duration=63609
from=org.apache.hadoop.hive.ql.Driver>

15/11/30 16:59:03 [main]: INFO log.PerfLogger: <PERFLOG method=releaseLocks
from=org.apache.hadoop.hive.ql.Driver>

15/11/30 16:59:03 [main]: INFO log.PerfLogger: </PERFLOG method=releaseLocks
start=1448902743371 end=1448902743371 duration=0
from=org.apache.hadoop.hive.ql.Driver>

15/11/30 16:59:03 [main]: INFO exec.ListSinkOperator: 7 finished. closing...

15/11/30 16:59:03 [main]: INFO exec.ListSinkOperator: 7 Close done

15/11/30 16:59:03 [main]: DEBUG ql.Driver: Shutting down query

select count(1) from t

15/11/30 16:59:03 [IPC Parameter Sending Thread #1]: DEBUG ipc.Client: IPC
Client (1997160570) connection to rhes564/50.140.197.217:9000
<http://50.140.197.217:9000>  from hduser sending #32

15/11/30 16:59:03 [IPC Client (1997160570) connection to
rhes564/50.140.197.217:9000 <http://50.140.197.217:9000>  from hduser]:
DEBUG ipc.Client: IPC Client (1997160570) connection to
rhes564/50.140.197.217:9000 <http://50.140.197.217:9000>  from hduser got
value #32

15/11/30 16:59:03 [main]: DEBUG ipc.ProtobufRpcEngine: Call: delete took 3ms

15/11/30 16:59:03 [IPC Parameter Sending Thread #1]: DEBUG ipc.Client: IPC
Client (1997160570) connection to rhes564/50.140.197.217:9000
<http://50.140.197.217:9000>  from hduser sending #33

15/11/30 16:59:03 [IPC Client (1997160570) connection to
rhes564/50.140.197.217:9000 <http://50.140.197.217:9000>  from hduser]:
DEBUG ipc.Client: IPC Client (1997160570) connection to
rhes564/50.140.197.217:9000 <http://50.140.197.217:9000>  from hduser got
value #33

15/11/30 16:59:03 [main]: DEBUG ipc.ProtobufRpcEngine: Call: delete took 1ms

15/11/30 16:59:03 [IPC Parameter Sending Thread #1]: DEBUG ipc.Client: IPC
Client (1997160570) connection to rhes564/50.140.197.217:9000
<http://50.140.197.217:9000>  from hduser sending #34

15/11/30 16:59:03 [IPC Client (1997160570) connection to
rhes564/50.140.197.217:9000 <http://50.140.197.217:9000>  from hduser]:
DEBUG ipc.Client: IPC Client (1997160570) connection to
rhes564/50.140.197.217:9000 <http://50.140.197.217:9000>  from hduser got
value #34

15/11/30 16:59:03 [main]: DEBUG ipc.ProtobufRpcEngine: Call: delete took 1ms

15/11/30 16:59:03 [main]: INFO log.PerfLogger: <PERFLOG method=releaseLocks
from=org.apache.hadoop.hive.ql.Driver>

15/11/30 16:59:03 [main]: INFO log.PerfLogger: </PERFLOG method=releaseLocks
start=1448902743377 end=1448902743377 duration=0
from=org.apache.hadoop.hive.ql.Driver>

15/11/30 16:59:03 [main]: DEBUG session.SessionState: Removing resource dir
/tmp/hive/dec45671-0b5f-4496-8b3a-66f097d0c6f0_resources

15/11/30 16:59:03 [main]: DEBUG session.SparkSessionManagerImpl: Closing
session (30d627c3-a88d-4126-b616-944a6b7d327b).

15/11/30 16:59:03 [IPC Parameter Sending Thread #1]: DEBUG ipc.Client: IPC
Client (1997160570) connection to rhes564/50.140.197.217:9000
<http://50.140.197.217:9000>  from hduser sending #35

15/11/30 16:59:03 [IPC Client (1997160570) connection to
rhes564/50.140.197.217:9000 <http://50.140.197.217:9000>  from hduser]:
DEBUG ipc.Client: IPC Client (1997160570) connection to
rhes564/50.140.197.217:9000 <http://50.140.197.217:9000>  from hduser got
value #35

15/11/30 16:59:03 [main]: DEBUG ipc.ProtobufRpcEngine: Call: delete took 1ms

15/11/30 16:59:03 [Thread-9]: INFO session.SparkSessionManagerImpl: Closing
the session manager.

15/11/30 16:59:03 [IPC Parameter Sending Thread #1]: DEBUG ipc.Client: IPC
Client (1997160570) connection to rhes564/50.140.197.217:9000
<http://50.140.197.217:9000>  from hduser sending #36

15/11/30 16:59:03 [IPC Client (1997160570) connection to
rhes564/50.140.197.217:9000 <http://50.140.197.217:9000>  from hduser]:
DEBUG ipc.Client: IPC Client (1997160570) connection to
rhes564/50.140.197.217:9000 <http://50.140.197.217:9000>  from hduser got
value #36

15/11/30 16:59:03 [Thread-0]: DEBUG ipc.ProtobufRpcEngine: Call: getFileInfo
took 1ms

15/11/30 16:59:03 [IPC Parameter Sending Thread #1]: DEBUG ipc.Client: IPC
Client (1997160570) connection to rhes564/50.140.197.217:9000
<http://50.140.197.217:9000>  from hduser sending #37

15/11/30 16:59:03 [IPC Client (1997160570) connection to
rhes564/50.140.197.217:9000 <http://50.140.197.217:9000>  from hduser]:
DEBUG ipc.Client: IPC Client (1997160570) connection to
rhes564/50.140.197.217:9000 <http://50.140.197.217:9000>  from hduser got
value #37

15/11/30 16:59:03 [Thread-0]: DEBUG ipc.ProtobufRpcEngine: Call: getFileInfo
took 1ms

15/11/30 16:59:03 [IPC Parameter Sending Thread #1]: DEBUG ipc.Client: IPC
Client (1997160570) connection to rhes564/50.140.197.217:9000
<http://50.140.197.217:9000>  from hduser sending #38

15/11/30 16:59:03 [IPC Client (1997160570) connection to
rhes564/50.140.197.217:9000 <http://50.140.197.217:9000>  from hduser]:
DEBUG ipc.Client: IPC Client (1997160570) connection to
rhes564/50.140.197.217:9000 <http://50.140.197.217:9000>  from hduser got
value #38

15/11/30 16:59:03 [Thread-0]: DEBUG ipc.ProtobufRpcEngine: Call: getFileInfo
took 1ms

15/11/30 16:59:03 [IPC Parameter Sending Thread #1]: DEBUG ipc.Client: IPC
Client (1997160570) connection to rhes564/50.140.197.217:9000
<http://50.140.197.217:9000>  from hduser sending #39

15/11/30 16:59:03 [IPC Client (1997160570) connection to
rhes564/50.140.197.217:9000 <http://50.140.197.217:9000>  from hduser]:
DEBUG ipc.Client: IPC Client (1997160570) connection to
rhes564/50.140.197.217:9000 <http://50.140.197.217:9000>  from hduser got
value #39

15/11/30 16:59:03 [Thread-0]: DEBUG ipc.ProtobufRpcEngine: Call: getFileInfo
took 0ms

15/11/30 16:59:03 [Thread-0]: DEBUG ipc.Client: stopping client from cache:
org.apache.hadoop.ipc.Client@15bf218e

15/11/30 16:59:03 [Thread-0]: DEBUG ipc.Client: removing client from cache:
org.apache.hadoop.ipc.Client@15bf218e

15/11/30 16:59:03 [Thread-0]: DEBUG ipc.Client: stopping actual client
because no more references remain: org.apache.hadoop.ipc.Client@15bf218e

15/11/30 16:59:03 [Thread-0]: DEBUG ipc.Client: Stopping client

15/11/30 16:59:03 [IPC Client (1997160570) connection to
rhes564/50.140.197.217:9000 <http://50.140.197.217:9000>  from hduser]:
DEBUG ipc.Client: IPC Client (1997160570) connection to
rhes564/50.140.197.217:9000 <http://50.140.197.217:9000>  from hduser:
closed

15/11/30 16:59:03 [IPC Client (1997160570) connection to
rhes564/50.140.197.217:9000 <http://50.140.197.217:9000>  from hduser]:
DEBUG ipc.Client: IPC Client (1997160570) connection to
rhes564/50.140.197.217:9000 <http://50.140.197.217:9000>  from hduser:
stopped, remaining connections 0

hduser@rhes564::/home/hduser/dba/bin>

 

 

 

 

Mich Talebzadeh

 


 

From: Xuefu Zhang [mailto:xzhang@cloudera.com <mailto:xzhang@cloudera.com> ]

Sent: 01 December 2015 13:50
To: user@hive.apache.org <mailto:user@hive.apache.org> 
Subject: Re: Problem with getting start of Hive on Spark

 

Mich,

As I understand it, you have a problem with Hive on Spark due to dual network
interfaces. I agree that this is something that should be fixed in Hive.
However, saying Hive on Spark doesn't work seems unfair. At Cloudera, we have
many customers who have successfully deployed Hive on Spark on their
clusters.
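
(For what it is worth, on a host with two interfaces one thing that is
sometimes worth trying, purely as a sketch and using the 50.140.197.217
address from the log above as an example, is to pin Spark to a single
address in spark-env.sh:

    # force the driver and block manager to bind to one known interface
    echo 'export SPARK_LOCAL_IP=50.140.197.217' >> $SPARK_HOME/conf/spark-env.sh

so that everything Spark advertises is reachable from the cluster nodes.)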

As discussed in another thread, we don't have all the bandwidth we would like
to answer every user problem. Thus, it's crucial to provide as much
information as possible when reporting a problem. This includes the steps to
reproduce as well as the Hive, Spark, and/or YARN logs.

Thanks,

Xuefu

 

On Tue, Dec 1, 2015 at 1:32 AM, Mich Talebzadeh <mich@peridale.co.uk
<mailto:mich@peridale.co.uk> > wrote:

Hi Link,

 

I am afraid that using Spark as the execution engine for Hive does not seem
to work for me yet. I am still trying to make it work.

 

An alternative is to use Spark against the Hive data set, to be precise
spark-sql. You point Spark at the Hive metastore and then use Hive as the
heavy DML engine. That will give you the ability to use Spark for queries
that can be done in-memory.
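
For illustration only, a minimal sketch of that set-up, assuming a Spark
build with Hive support and a hive-site.xml that already points at the
metastore:

    # let spark-sql see the Hive metastore
    cp $HIVE_HOME/conf/hive-site.xml $SPARK_HOME/conf/

    $SPARK_HOME/bin/spark-sql --master spark://rhes564:7077 -e "select count(1) from t"

The heavy DDL/DML stays in Hive, while spark-sql runs the read queries
in-memory against the same tables.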

 

HTH

 

Mich Talebzadeh

 


 

From: Link Qian [mailto:fastupload@outlook.com
<mailto:fastupload@outlook.com> ] 
Sent: 01 December 2015 00:57


To: user@hive.apache.org <mailto:user@hive.apache.org> 
Subject: RE: Problem with getting start of Hive on Spark

 

Hi Mich,
I set the Hive execution engine to Spark.
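
That is, roughly the following in hive-site.xml or at the start of the
session (the spark.home path below is only an example, not my exact layout):

    set hive.execution.engine=spark;
    set spark.master=yarn-client;
    set spark.home=/home/hadoop/spark-1.5.2-bin-hadoop2.6;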

Link Qian


  _____  


From: mich@peridale.co.uk <mailto:mich@peridale.co.uk> 
To: user@hive.apache.org <mailto:user@hive.apache.org> 
Subject: RE: Problem with getting start of Hive on Spark
Date: Mon, 30 Nov 2015 16:15:31 +0000

To clarify: are you running Hive and using Spark as the execution engine (as
opposed to the default Hive execution engine, MapReduce)?
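
(A quick way to check from the Hive prompt is:

    hive> set hive.execution.engine;

which prints hive.execution.engine=mr by default, or
hive.execution.engine=spark if it has been switched.)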

 

Mich Talebzadeh

 


 

From: Link Qian [mailto:fastupload@outlook.com] 
Sent: 30 November 2015 13:21
To: user@hive.apache.org <mailto:user@hive.apache.org> 
Subject: Problem with getting start of Hive on Spark

 

Hello,

Following the Hive wiki page,
https://cwiki.apache.org/confluence/display/Hive/Hive+on+Spark%3A+Getting+Started,
I got several failures executing HQL with the Spark engine on YARN. I have
hadoop-2.6.2, yarn-2.6.2 and Spark-1.5.2.
The failures occur with both the spark-1.5.2-hadoop2.6 binary distribution
and a spark-1.5.2-without-hive version custom-compiled following the
instructions on that wiki page.
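
The wiring between Hive and Spark follows the wiki, roughly as below, with
example paths rather than my exact ones:

    export SPARK_HOME=/home/hadoop/spark-1.5.2-bin-hadoop2.6
    # put the Spark assembly on Hive's classpath, as the wiki suggests for Hive 1.x
    ln -s $SPARK_HOME/lib/spark-assembly-*.jar /home/hadoop/apache-hive-1.2.1-bin/lib/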

The Hive CLI submits the Spark job; the job runs for a short time and the RM
web UI shows it as successful, but the Hive CLI reports that the job fails.

Here is a snippet of the Hive CLI debug log. Any suggestions?


15/11/30 07:31:36 [main]: INFO status.SparkJobMonitor: state = SENT
15/11/30 07:31:37 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
07:31:37 INFO yarn.Client: Application report for
application_1448886638370_0001 (state: RUNNING)
15/11/30 07:31:37 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
07:31:37 INFO yarn.Client: 
15/11/30 07:31:37 [stderr-redir-1]: INFO client.SparkClientImpl:      client
token: N/A
15/11/30 07:31:37 [stderr-redir-1]: INFO client.SparkClientImpl:
diagnostics: N/A
15/11/30 07:31:37 [stderr-redir-1]: INFO client.SparkClientImpl:
ApplicationMaster host: 192.168.1.12
15/11/30 07:31:37 [stderr-redir-1]: INFO client.SparkClientImpl:
ApplicationMaster RPC port: 0
15/11/30 07:31:37 [stderr-redir-1]: INFO client.SparkClientImpl:      queue:
default
15/11/30 07:31:37 [stderr-redir-1]: INFO client.SparkClientImpl:      start
time: 1448886649489
15/11/30 07:31:37 [stderr-redir-1]: INFO client.SparkClientImpl:      final
status: UNDEFINED
15/11/30 07:31:37 [stderr-redir-1]: INFO client.SparkClientImpl:
tracking URL:
http://namenode.localdomain:8088/proxy/application_1448886638370_0001/
15/11/30 07:31:37 [stderr-redir-1]: INFO client.SparkClientImpl:      user:
hadoop
15/11/30 07:31:37 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
07:31:37 INFO cluster.YarnClientSchedulerBackend: Application
application_1448886638370_0001 has started running.
15/11/30 07:31:37 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
07:31:37 INFO util.Utils: Successfully started service
'org.apache.spark.network.netty.NettyBlockTransferService' on port 51326.
15/11/30 07:31:37 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
07:31:37 INFO netty.NettyBlockTransferService: Server created on 51326
15/11/30 07:31:37 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
07:31:37 INFO storage.BlockManagerMaster: Trying to register BlockManager
15/11/30 07:31:37 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
07:31:37 INFO storage.BlockManagerMasterEndpoint: Registering block manager
192.168.1.10:51326 <http://192.168.1.10:51326>  with 66.8 MB RAM,
BlockManagerId(driver, 192.168.1.10, 51326)
15/11/30 07:31:37 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
07:31:37 INFO storage.BlockManagerMaster: Registered BlockManager
state = SENT
15/11/30 07:31:37 [main]: INFO status.SparkJobMonitor: state = SENT
15/11/30 07:31:37 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
07:31:37 INFO cluster.YarnClientSchedulerBackend: SchedulerBackend is ready
for scheduling beginning after waiting maxRegisteredResourcesWaitingTime:
30000(ms)
15/11/30 07:31:37 [RPC-Handler-3]: DEBUG rpc.KryoMessageCodec: Decoded
message of type org.apache.hive.spark.client.rpc.Rpc$MessageHeader (5 bytes)
15/11/30 07:31:37 [RPC-Handler-3]: DEBUG rpc.KryoMessageCodec: Decoded
message of type java.lang.Integer (2 bytes)
15/11/30 07:31:37 [RPC-Handler-3]: DEBUG rpc.RpcDispatcher: [ClientProtocol]
Received RPC message: type=REPLY id=0 payload=java.lang.Integer
15/11/30 07:31:37 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
07:31:37 INFO spark.SparkContext: Added JAR
file:/home/hadoop/apache-hive-1.2.1-bin/lib/hive-exec-1.2.1.jar at
http://192.168.1.10:41276/jars/hive-exec-1.2.1.jar with timestamp
1448886697575
15/11/30 07:31:37 [RPC-Handler-3]: DEBUG rpc.KryoMessageCodec: Decoded
message of type org.apache.hive.spark.client.rpc.Rpc$MessageHeader (5 bytes)
15/11/30 07:31:37 [RPC-Handler-3]: DEBUG rpc.KryoMessageCodec: Decoded
message of type org.apache.hive.spark.client.rpc.Rpc$NullMessage (2 bytes)
15/11/30 07:31:37 [RPC-Handler-3]: DEBUG rpc.RpcDispatcher: [ClientProtocol]
Received RPC message: type=REPLY id=1
payload=org.apache.hive.spark.client.rpc.Rpc$NullMessage
15/11/30 07:31:37 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
07:31:37 INFO rpc.RpcDispatcher: [DriverProtocol] Closing channel due to
exception in pipeline (java.lang.NoClassDefFoundError:
org/apache/hive/spark/client/Job).
15/11/30 07:31:37 [RPC-Handler-3]: DEBUG rpc.KryoMessageCodec: Decoded
message of type org.apache.hive.spark.client.rpc.Rpc$MessageHeader (5 bytes)
15/11/30 07:31:37 [RPC-Handler-3]: DEBUG rpc.KryoMessageCodec: Decoded
message of type java.lang.String (3720 bytes)
15/11/30 07:31:37 [RPC-Handler-3]: DEBUG rpc.RpcDispatcher: [ClientProtocol]
Received RPC message: type=ERROR id=2 payload=java.lang.String
15/11/30 07:31:37 [RPC-Handler-3]: WARN rpc.RpcDispatcher: Received error
message:io.netty.handler.codec.DecoderException:
java.lang.NoClassDefFoundError: org/apache/hive/spark/client/Job
    at
io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.
java:358)
    at
io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder
.java:230)
    at
io.netty.handler.codec.ByteToMessageCodec.channelRead(ByteToMessageCodec.jav
a:103)
    at
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractCha
nnelHandlerContext.java:308)
    at
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChann
elHandlerContext.java:294)
    at
io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHand
lerAdapter.java:86)
    at
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractCha
nnelHandlerContext.java:308)
    at
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChann
elHandlerContext.java:294)
    at
io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeli
ne.java:846)
    at
io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioBy
teChannel.java:131)
    at
io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
    at
io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.
java:468)
    at
io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
    at
io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventEx
ecutor.java:111)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NoClassDefFoundError: org/apache/hive/spark/client/Job
    at java.lang.ClassLoader.defineClass1(Native Method)
    at java.lang.ClassLoader.defineClass(ClassLoader.java:760)
    at
java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
    at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
    at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:411)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:348)
    at
org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readName
(DefaultClassResolver.java:136)
    at
org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readClas
s(DefaultClassResolver.java:115)
    at
org.apache.hive.com.esotericsoftware.kryo.Kryo.readClass(Kryo.java:656)
    at
org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(Objec
tField.java:99)
    at
org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(F
ieldSerializer.java:507)
    at
org.apache.hive.com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:
776)
    at
org.apache.hive.spark.client.rpc.KryoMessageCodec.decode(KryoMessageCodec.ja
va:96)
    at
io.netty.handler.codec.ByteToMessageCodec$1.decode(ByteToMessageCodec.java:4
2)
    at
io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.
java:327)
    ... 15 more
Caused by: java.lang.ClassNotFoundException:
org.apache.hive.spark.client.Job
    at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    ... 39 more
.
15/11/30 07:31:37 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
07:31:37 WARN client.RemoteDriver: Shutting down driver because RPC channel
was closed.
15/11/30 07:31:37 [stderr-redir-1]: INFO client.SparkClientImpl: 15/11/30
07:31:37 INFO client.RemoteDriver: Shutting down remote driver.
15/11/30 07:31:37 [RPC-Handler-3]: WARN client.SparkClientImpl: Client RPC
channel closed unexpectedly.


best regards,
Link Qian 

 

 

 

