Date: Tue, 14 Mar 2017 11:05:41 +0000 (UTC)
From: "Ravindra Pesala (JIRA)"
To: issues@carbondata.incubator.apache.org
Subject: [jira] [Resolved] (CARBONDATA-732) User unable to execute the select/Load query using thrift server.

     [ https://issues.apache.org/jira/browse/CARBONDATA-732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ravindra Pesala resolved CARBONDATA-732.
----------------------------------------
        Resolution: Fixed
     Fix Version/s: 1.0.1-incubating

> User unable to execute the select/Load query using thrift server.
> ------------------------------------------------------------------
>
>                 Key: CARBONDATA-732
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-732
>             Project: CarbonData
>          Issue Type: Bug
>          Components: sql
>    Affects Versions: 1.0.0-incubating
>         Environment: Spark 2.1
>            Reporter: Vinod Rohilla
>            Assignee: anubhav tarar
>             Fix For: 1.0.1-incubating
>
>         Attachments: LOG_FIle
>
>          Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> No result is returned to the user when a Select/Load query is run through the Thrift server.
> Steps to reproduce:
> 1: Run the query:
> 0: jdbc:hive2://localhost:10000> select * from t4;
> Note: The cursor keeps blinking in beeline (the query hangs).
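
The hang can also be reproduced programmatically. Below is a minimal sketch using the standard Hive JDBC driver (org.apache.hive.jdbc.HiveDriver, the same driver beeline uses); the endpoint jdbc:hive2://localhost:10000 and the table name t4 come from the steps above, while the empty credentials and the class name are assumptions:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    // Hypothetical reproduction sketch; endpoint and table name
    // are taken from the bug report, credentials are assumed empty.
    public class Carbondata732Repro {
        public static void main(String[] args) throws Exception {
            // HiveServer2-compatible JDBC driver, also used by beeline.
            Class.forName("org.apache.hive.jdbc.HiveDriver");
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:hive2://localhost:10000", "", "");
                 Statement stmt = conn.createStatement();
                 // With the bug present, this call blocks indefinitely,
                 // matching the blinking cursor observed in beeline.
                 ResultSet rs = stmt.executeQuery("select * from t4")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1));
                }
            }
        }
    }
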
> 2: Logs on Thrift server:
> Error sending result StreamResponse{streamId=/jars/carbondata_2.11-1.0.0-incubating-SNAPSHOT-shade-hadoop2.2.0.jar, byteCount=19350001, body=FileSegmentManagedBuffer{file=/opt/spark-2.1.0/carbonlib/carbondata_2.11-1.0.0-incubating-SNAPSHOT-shade-hadoop2.2.0.jar, offset=0, length=19350001}} to /192.168.2.179:48291; closing connection
> java.lang.AbstractMethodError
>     at io.netty.util.ReferenceCountUtil.touch(ReferenceCountUtil.java:73)
>     at io.netty.channel.DefaultChannelPipeline.touch(DefaultChannelPipeline.java:107)
>     at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:811)
>     at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:724)
>     at io.netty.handler.codec.MessageToMessageEncoder.write(MessageToMessageEncoder.java:111)
>     at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:739)
>     at io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:731)
>     at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:817)
>     at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:724)
>     at io.netty.handler.timeout.IdleStateHandler.write(IdleStateHandler.java:305)
>     at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:739)
>     at io.netty.channel.AbstractChannelHandlerContext.invokeWriteAndFlush(AbstractChannelHandlerContext.java:802)
>     at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:815)
>     at io.netty.channel.AbstractChannelHandlerContext.writeAndFlush(AbstractChannelHandlerContext.java:795)
>     at io.netty.channel.AbstractChannelHandlerContext.writeAndFlush(AbstractChannelHandlerContext.java:832)
>     at io.netty.channel.DefaultChannelPipeline.writeAndFlush(DefaultChannelPipeline.java:1032)
>     at io.netty.channel.AbstractChannel.writeAndFlush(AbstractChannel.java:296)
>     at org.apache.spark.network.server.TransportRequestHandler.respond(TransportRequestHandler.java:194)
>     at org.apache.spark.network.server.TransportRequestHandler.processStreamRequest(TransportRequestHandler.java:150)
>     at org.apache.spark.network.server.TransportRequestHandler.handle(TransportRequestHandler.java:111)
>     at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:119)
>     at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:51)
>     at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:363)
>     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:349)
>     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:341)
>     at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:287)
>     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:363)
>     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:349)
>     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:341)
>     at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
>     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:363)
>     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:349)
>     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:341)
>     at org.apache.spark.network.util.TransportFrameDecoder.channelRead(TransportFrameDecoder.java:85)
>     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:363)
>     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:349)
>     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:341)
>     at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1334)
>     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:363)
>     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:349)
>     at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:926)
>     at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:129)
>     at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:642)
>     at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:565)
>     at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:479)
>     at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:441)
>     at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
>     at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
>     at java.lang.Thread.run(Thread.java:745)
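
The java.lang.AbstractMethodError inside io.netty.util.ReferenceCountUtil.touch is the kind of failure that typically comes from binary-incompatible Netty versions on the same classpath (touch() exists on Netty 4.1's ReferenceCounted interface but not on classes compiled against 4.0.x). Since the failing transfer is the shaded carbondata jar itself, a conflicting Netty copy shipped in or next to that jar is a plausible suspect, though that diagnosis is an inference from the trace, not something stated in the report. A quick diagnostic sketch (hypothetical class name; run it with the Thrift server's classpath):

    // Hypothetical diagnostic, not part of the report: prints which jar
    // each Netty class was actually loaded from; two different locations
    // would confirm a conflicting Netty mix on the classpath.
    public class NettyClasspathCheck {
        public static void main(String[] args) throws Exception {
            for (String name : new String[] {
                    "io.netty.util.ReferenceCountUtil",
                    "io.netty.channel.DefaultChannelPipeline" }) {
                Class<?> c = Class.forName(name);
                System.out.println(name + " -> "
                        + c.getProtectionDomain().getCodeSource().getLocation());
            }
        }
    }
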
> ERROR 28-02 13:43:44,656 - Still have 1 requests outstanding when connection from /192.168.2.179:58030 is closed
> INFO 28-02 13:46:57,954 - Session disconnected without closing properly, close it now
> ERROR 28-02 13:46:57,958 - Error executing query, currentState CLOSED,
> java.lang.InterruptedException
>     at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:998)
>     at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
>     at scala.concurrent.impl.Promise$DefaultPromise.tryAwait(Promise.scala:202)
>     at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:218)
>     at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:153)
>     at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:619)
>     at org.apache.spark.SparkContext.runJob(SparkContext.scala:1918)
>     at org.apache.spark.SparkContext.runJob(SparkContext.scala:1931)
>     at org.apache.spark.SparkContext.runJob(SparkContext.scala:1944)
>     at org.apache.spark.SparkContext.runJob(SparkContext.scala:1958)
>     at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:935)
>     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
>     at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
>     at org.apache.spark.rdd.RDD.collect(RDD.scala:934)
>     at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:275)
>     at org.apache.spark.sql.Dataset$$anonfun$org$apache$spark$sql$Dataset$$execute$1$1.apply(Dataset.scala:2371)
>     at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:57)
>     at org.apache.spark.sql.Dataset.withNewExecutionId(Dataset.scala:2765)
>     at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$execute$1(Dataset.scala:2370)
>     at org.apache.spark.sql.Dataset$$anonfun$org$apache$spark$sql$Dataset$$collect$1.apply(Dataset.scala:2375)
>     at org.apache.spark.sql.Dataset$$anonfun$org$apache$spark$sql$Dataset$$collect$1.apply(Dataset.scala:2375)
>     at org.apache.spark.sql.Dataset.withCallback(Dataset.scala:2778)
>     at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collect(Dataset.scala:2375)
>     at org.apache.spark.sql.Dataset.collect(Dataset.scala:2351)
>     at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:235)
>     at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:163)
>     at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:160)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:422)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
>     at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1.run(SparkExecuteStatementOperation.scala:173)
>     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>     at java.lang.Thread.run(Thread.java:745)
> ERROR 28-02 13:46:57,959 - Error running hive query:
> org.apache.hive.service.cli.HiveSQLException: Illegal Operation state transition from CLOSED to ERROR
>     at org.apache.hive.service.cli.OperationState.validateTransition(OperationState.java:92)
>     at org.apache.hive.service.cli.OperationState.validateTransition(OperationState.java:98)
>     at org.apache.hive.service.cli.operation.Operation.setState(Operation.java:126)
>     at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:255)
>     at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:163)
>     at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:160)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:422)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
>     at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1.run(SparkExecuteStatementOperation.scala:173)
>     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>     at java.lang.Thread.run(Thread.java:745)
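
The closing HiveSQLException looks like a secondary symptom rather than the root cause: the session was torn down first ("Session disconnected without closing properly"), the running statement was interrupted mid-collect, and when the error handler then tried to move the operation to ERROR, Hive's operation-state machine rejected the transition because CLOSED is terminal. A hypothetical sketch of that guard pattern (illustrative names and exception type, not the actual Hive source):

    // Illustrative sketch of a terminal-state guard like the one behind
    // "Illegal Operation state transition from CLOSED to ERROR".
    public class OperationStateSketch {
        enum OpState { INITIALIZED, RUNNING, FINISHED, ERROR, CLOSED }

        static void validateTransition(OpState from, OpState to) {
            // CLOSED is terminal: once the client disconnects and the
            // operation is closed, even marking it ERROR is rejected.
            if (from == OpState.CLOSED) {
                throw new IllegalStateException(
                        "Illegal Operation state transition from " + from + " to " + to);
            }
        }

        public static void main(String[] args) {
            validateTransition(OpState.CLOSED, OpState.ERROR); // throws, as in the log
        }
    }
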
> Please check the attached log file.
>
> Expected Result: The user should be able to execute Select/Load queries through the Thrift server.

--
This message was sent by Atlassian JIRA
(v6.3.15#6346)