impala-issues mailing list archives

From "Alexander Behm (JIRA)" <j...@apache.org>
Subject [jira] [Resolved] (IMPALA-5117) PlannerTest.testUnion fails in exhaustive release run
Date Thu, 27 Apr 2017 20:56:04 GMT

     [ https://issues.apache.org/jira/browse/IMPALA-5117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alexander Behm resolved IMPALA-5117.
------------------------------------
    Resolution: Duplicate

HDFS/Metadata race again: IMPALA-3887

> PlannerTest.testUnion fails in exhaustive release run
> -----------------------------------------------------
>
>                 Key: IMPALA-5117
>                 URL: https://issues.apache.org/jira/browse/IMPALA-5117
>             Project: IMPALA
>          Issue Type: Bug
>          Components: Frontend
>    Affects Versions: Impala 2.9.0
>            Reporter: Lars Volker
>            Assignee: Taras Bobrovytsky
>            Priority: Blocker
>              Labels: broken-build
>
> [~tarasbob] - I'm assigning this to you thinking it may be related to your change here:
> https://gerrit.cloudera.org/#/c/5816/
> {noformat}
> -------------------------------------------------------------------------------
> Test set: org.apache.impala.planner.PlannerTest
> -------------------------------------------------------------------------------
> Tests run: 49, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 62.191 sec <<< FAILURE! - in org.apache.impala.planner.PlannerTest
> testUnion(org.apache.impala.planner.PlannerTest)  Time elapsed: 2.923 sec  <<< FAILURE!
> java.lang.AssertionError: 
> Section DISTRIBUTEDPLAN of query:
> select id, bigint_col from functional.alltypestiny
> union all
> select sum(int_col), bigint_col from functional.alltypes
>   where year=2009 and month=2
>   group by bigint_col
> union all
> select a.id, a.bigint_col
>   from functional.alltypestiny a inner join functional.alltypestiny b
>   on (a.id = b.id)
> union all
> select 1000, 2000
> Actual does not match expected result:
> PLAN-ROOT SINK
> |
> 10:EXCHANGE [UNPARTITIONED]
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^
> |
> 00:UNION
> |  constant-operands=1
> |
> |--06:HASH JOIN [INNER JOIN, BROADCAST]
> |  |  hash predicates: a.id = b.id
> |  |  runtime filters: RF000 <- b.id
> |  |
> |  |--09:EXCHANGE [BROADCAST]
> |  |  |
> |  |  05:SCAN HDFS [functional.alltypestiny b]
> |  |     partitions=4/4 files=4 size=460B
> |  |
> |  04:SCAN HDFS [functional.alltypestiny a]
> |     partitions=4/4 files=4 size=460B
> |     runtime filters: RF000 -> a.id
> |
> |--08:AGGREGATE [FINALIZE]
> |  |  output: sum:merge(int_col)
> |  |  group by: bigint_col
> |  |
> |  07:EXCHANGE [HASH(bigint_col)]
> |  |
> |  03:AGGREGATE [STREAMING]
> |  |  output: sum(int_col)
> |  |  group by: bigint_col
> |  |
> |  02:SCAN HDFS [functional.alltypes]
> |     partitions=1/24 files=1 size=18.12KB
> |
> 01:SCAN HDFS [functional.alltypestiny]
>    partitions=4/4 files=4 size=460B
> Expected:
> PLAN-ROOT SINK
> |
> 11:EXCHANGE [UNPARTITIONED]
> |
> 00:UNION
> |  constant-operands=1
> |
> |--06:HASH JOIN [INNER JOIN, PARTITIONED]
> |  |  hash predicates: a.id = b.id
> |  |  runtime filters: RF000 <- b.id
> |  |
> |  |--10:EXCHANGE [HASH(b.id)]
> |  |  |
> |  |  05:SCAN HDFS [functional.alltypestiny b]
> |  |     partitions=4/4 files=4 size=460B
> |  |
> |  09:EXCHANGE [HASH(a.id)]
> |  |
> |  04:SCAN HDFS [functional.alltypestiny a]
> |     partitions=4/4 files=4 size=460B
> |     runtime filters: RF000 -> a.id
> |
> |--08:AGGREGATE [FINALIZE]
> |  |  output: sum:merge(int_col)
> |  |  group by: bigint_col
> |  |
> |  07:EXCHANGE [HASH(bigint_col)]
> |  |
> |  03:AGGREGATE [STREAMING]
> |  |  output: sum(int_col)
> |  |  group by: bigint_col
> |  |
> |  02:SCAN HDFS [functional.alltypes]
> |     partitions=1/24 files=1 size=18.12KB
> |
> 01:SCAN HDFS [functional.alltypestiny]
>    partitions=4/4 files=4 size=460B
> Verbose plan:
> F06:PLAN FRAGMENT [UNPARTITIONED]
>   PLAN-ROOT SINK
>   |
>   10:EXCHANGE [UNPARTITIONED]
>      hosts=2 per-host-mem=unavailable
>      tuple-ids=5 row-size=12B cardinality=27
> F05:PLAN FRAGMENT [RANDOM]
>   DATASTREAM SINK [FRAGMENT=F06, EXCHANGE=10, UNPARTITIONED]
>   00:UNION
>   |  constant-operands=1
>   |  hosts=2 per-host-mem=0B
>   |  tuple-ids=5 row-size=12B cardinality=27
>   |
>   |--06:HASH JOIN [INNER JOIN, BROADCAST]
>   |  |  hash predicates: a.id = b.id
>   |  |  runtime filters: RF000 <- b.id
>   |  |  hosts=2 per-host-mem=36B
>   |  |  tuple-ids=3,4 row-size=16B cardinality=8
>   |  |
>   |  |--09:EXCHANGE [BROADCAST]
>   |  |     hosts=2 per-host-mem=0B
>   |  |     tuple-ids=4 row-size=4B cardinality=8
>   |  |
>   |  04:SCAN HDFS [functional.alltypestiny a, RANDOM]
>   |     partitions=4/4 files=4 size=460B
>   |     runtime filters: RF000 -> a.id
>   |     table stats: 8 rows total
>   |     column stats: all
>   |     hosts=2 per-host-mem=48.00MB
>   |     tuple-ids=3 row-size=12B cardinality=8
>   |
>   |--08:AGGREGATE [FINALIZE]
>   |  |  output: sum:merge(int_col)
>   |  |  group by: bigint_col
>   |  |  hosts=1 per-host-mem=10.00MB
>   |  |  tuple-ids=2 row-size=16B cardinality=10
>   |  |
>   |  07:EXCHANGE [HASH(bigint_col)]
>   |     hosts=1 per-host-mem=0B
>   |     tuple-ids=2 row-size=16B cardinality=10
>   |
>   01:SCAN HDFS [functional.alltypestiny, RANDOM]
>      partitions=4/4 files=4 size=460B
>      table stats: 8 rows total
>      column stats: all
>      hosts=2 per-host-mem=48.00MB
>      tuple-ids=0 row-size=12B cardinality=8
> F01:PLAN FRAGMENT [RANDOM]
>   DATASTREAM SINK [FRAGMENT=F05, EXCHANGE=07, HASH(bigint_col)]
>   03:AGGREGATE [STREAMING]
>   |  output: sum(int_col)
>   |  group by: bigint_col
>   |  hosts=1 per-host-mem=10.00MB
>   |  tuple-ids=2 row-size=16B cardinality=10
>   |
>   02:SCAN HDFS [functional.alltypes, RANDOM]
>      partitions=1/24 files=1 size=18.12KB
>      table stats: 7300 rows total
>      column stats: all
>      hosts=1 per-host-mem=32.00MB
>      tuple-ids=1 row-size=12B cardinality=280
> F04:PLAN FRAGMENT [RANDOM]
>   DATASTREAM SINK [FRAGMENT=F05, EXCHANGE=09, BROADCAST]
>   05:SCAN HDFS [functional.alltypestiny b, RANDOM]
>      partitions=4/4 files=4 size=460B
>      table stats: 8 rows total
>      column stats: all
>      hosts=2 per-host-mem=48.00MB
>      tuple-ids=4 row-size=4B cardinality=8
> Section DISTRIBUTEDPLAN of query:
> select count(id), sum(bigint_col) from functional.alltypes
> union all
> select id, bigint_col from functional.alltypessmall order by id limit 10
> union all
> select id, bigint_col from functional.alltypestiny
> union all
> select sum(int_col), bigint_col from functional.alltypes
>   where year=2009 and month=2
>   group by bigint_col
> union all
> select a.id, a.bigint_col
>   from functional.alltypestiny a inner join functional.alltypestiny b
>   on (a.id = b.id)
> union all
> select 1000, 2000
> Actual does not match expected result:
> PLAN-ROOT SINK
> |
> 19:EXCHANGE [UNPARTITIONED]
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^
> |
> 00:UNION
> |  constant-operands=1
> |
> |--10:HASH JOIN [INNER JOIN, BROADCAST]
> |  |  hash predicates: a.id = b.id
> |  |  runtime filters: RF000 <- b.id
> |  |
> |  |--16:EXCHANGE [BROADCAST]
> |  |  |
> |  |  09:SCAN HDFS [functional.alltypestiny b]
> |  |     partitions=4/4 files=4 size=460B
> |  |
> |  08:SCAN HDFS [functional.alltypestiny a]
> |     partitions=4/4 files=4 size=460B
> |     runtime filters: RF000 -> a.id
> |
> |--15:AGGREGATE [FINALIZE]
> |  |  output: sum:merge(int_col)
> |  |  group by: bigint_col
> |  |
> |  14:EXCHANGE [HASH(bigint_col)]
> |  |
> |  07:AGGREGATE [STREAMING]
> |  |  output: sum(int_col)
> |  |  group by: bigint_col
> |  |
> |  06:SCAN HDFS [functional.alltypes]
> |     partitions=1/24 files=1 size=18.12KB
> |
> |--05:SCAN HDFS [functional.alltypestiny]
> |     partitions=4/4 files=4 size=460B
> |
> |--18:EXCHANGE [RANDOM]
> |  |
> |  13:MERGING-EXCHANGE [UNPARTITIONED]
> |  |  order by: id ASC
> |  |  limit: 10
> |  |
> |  04:TOP-N [LIMIT=10]
> |  |  order by: id ASC
> |  |
> |  03:SCAN HDFS [functional.alltypessmall]
> |     partitions=4/4 files=4 size=6.32KB
> |
> 17:EXCHANGE [RANDOM]
> |
> 12:AGGREGATE [FINALIZE]
> |  output: count:merge(id), sum:merge(bigint_col)
> |
> 11:EXCHANGE [UNPARTITIONED]
> |
> 02:AGGREGATE
> |  output: count(id), sum(bigint_col)
> |
> 01:SCAN HDFS [functional.alltypes]
>    partitions=24/24 files=24 size=478.45KB
> Expected:
> PLAN-ROOT SINK
> |
> 20:EXCHANGE [UNPARTITIONED]
> |
> 00:UNION
> |  constant-operands=1
> |
> |--10:HASH JOIN [INNER JOIN, PARTITIONED]
> |  |  hash predicates: a.id = b.id
> |  |  runtime filters: RF000 <- b.id
> |  |
> |  |--17:EXCHANGE [HASH(b.id)]
> |  |  |
> |  |  09:SCAN HDFS [functional.alltypestiny b]
> |  |     partitions=4/4 files=4 size=460B
> |  |
> |  16:EXCHANGE [HASH(a.id)]
> |  |
> |  08:SCAN HDFS [functional.alltypestiny a]
> |     partitions=4/4 files=4 size=460B
> |     runtime filters: RF000 -> a.id
> |
> |--15:AGGREGATE [FINALIZE]
> |  |  output: sum:merge(int_col)
> |  |  group by: bigint_col
> |  |
> |  14:EXCHANGE [HASH(bigint_col)]
> |  |
> |  07:AGGREGATE [STREAMING]
> |  |  output: sum(int_col)
> |  |  group by: bigint_col
> |  |
> |  06:SCAN HDFS [functional.alltypes]
> |     partitions=1/24 files=1 size=18.12KB
> |
> |--05:SCAN HDFS [functional.alltypestiny]
> |     partitions=4/4 files=4 size=460B
> |
> |--19:EXCHANGE [RANDOM]
> |  |
> |  13:MERGING-EXCHANGE [UNPARTITIONED]
> |  |  order by: id ASC
> |  |  limit: 10
> |  |
> |  04:TOP-N [LIMIT=10]
> |  |  order by: id ASC
> |  |
> |  03:SCAN HDFS [functional.alltypessmall]
> |     partitions=4/4 files=4 size=6.32KB
> |
> 18:EXCHANGE [RANDOM]
> |
> 12:AGGREGATE [FINALIZE]
> |  output: count:merge(id), sum:merge(bigint_col)
> |
> 11:EXCHANGE [UNPARTITIONED]
> |
> 02:AGGREGATE
> |  output: count(id), sum(bigint_col)
> |
> 01:SCAN HDFS [functional.alltypes]
>    partitions=24/24 files=24 size=478.45KB
> Verbose plan:
> F10:PLAN FRAGMENT [UNPARTITIONED]
>   PLAN-ROOT SINK
>   |
>   19:EXCHANGE [UNPARTITIONED]
>      hosts=3 per-host-mem=unavailable
>      tuple-ids=9 row-size=16B cardinality=38
> F09:PLAN FRAGMENT [RANDOM]
>   DATASTREAM SINK [FRAGMENT=F10, EXCHANGE=19, UNPARTITIONED]
>   00:UNION
>   |  constant-operands=1
>   |  hosts=3 per-host-mem=0B
>   |  tuple-ids=9 row-size=16B cardinality=38
>   |
>   |--10:HASH JOIN [INNER JOIN, BROADCAST]
>   |  |  hash predicates: a.id = b.id
>   |  |  runtime filters: RF000 <- b.id
>   |  |  hosts=2 per-host-mem=36B
>   |  |  tuple-ids=7,8 row-size=16B cardinality=8
>   |  |
>   |  |--16:EXCHANGE [BROADCAST]
>   |  |     hosts=2 per-host-mem=0B
>   |  |     tuple-ids=8 row-size=4B cardinality=8
>   |  |
>   |  08:SCAN HDFS [functional.alltypestiny a, RANDOM]
>   |     partitions=4/4 files=4 size=460B
>   |     runtime filters: RF000 -> a.id
>   |     table stats: 8 rows total
>   |     column stats: all
>   |     hosts=2 per-host-mem=48.00MB
>   |     tuple-ids=7 row-size=12B cardinality=8
>   |
>   |--15:AGGREGATE [FINALIZE]
>   |  |  output: sum:merge(int_col)
>   |  |  group by: bigint_col
>   |  |  hosts=1 per-host-mem=10.00MB
>   |  |  tuple-ids=6 row-size=16B cardinality=10
>   |  |
>   |  14:EXCHANGE [HASH(bigint_col)]
>   |     hosts=1 per-host-mem=0B
>   |     tuple-ids=6 row-size=16B cardinality=10
>   |
>   |--05:SCAN HDFS [functional.alltypestiny, RANDOM]
>   |     partitions=4/4 files=4 size=460B
>   |     table stats: 8 rows total
>   |     column stats: all
>   |     hosts=2 per-host-mem=48.00MB
>   |     tuple-ids=4 row-size=12B cardinality=8
>   |
>   |--18:EXCHANGE [RANDOM]
>   |     hosts=3 per-host-mem=0B
>   |     tuple-ids=3 row-size=12B cardinality=10
>   |
>   17:EXCHANGE [RANDOM]
>      hosts=3 per-host-mem=0B
>      tuple-ids=1 row-size=16B cardinality=1
> F01:PLAN FRAGMENT [UNPARTITIONED]
>   DATASTREAM SINK [FRAGMENT=F09, EXCHANGE=17, RANDOM]
>   12:AGGREGATE [FINALIZE]
>   |  output: count:merge(id), sum:merge(bigint_col)
>   |  hosts=3 per-host-mem=unavailable
>   |  tuple-ids=1 row-size=16B cardinality=1
>   |
>   11:EXCHANGE [UNPARTITIONED]
>      hosts=3 per-host-mem=unavailable
>      tuple-ids=1 row-size=16B cardinality=1
> F00:PLAN FRAGMENT [RANDOM]
>   DATASTREAM SINK [FRAGMENT=F01, EXCHANGE=11, UNPARTITIONED]
>   02:AGGREGATE
>   |  output: count(id), sum(bigint_col)
>   |  hosts=3 per-host-mem=10.00MB
>   |  tuple-ids=1 row-size=16B cardinality=1
>   |
>   01:SCAN HDFS [functional.alltypes, RANDOM]
>      partitions=24/24 files=24 size=478.45KB
>      table stats: 7300 rows total
>      column stats: all
>      hosts=3 per-host-mem=160.00MB
>      tuple-ids=0 row-size=12B cardinality=7300
> F03:PLAN FRAGMENT [UNPARTITIONED]
>   DATASTREAM SINK [FRAGMENT=F09, EXCHANGE=18, RANDOM]
>   13:MERGING-EXCHANGE [UNPARTITIONED]
>      order by: id ASC
>      limit: 10
>      hosts=3 per-host-mem=unavailable
>      tuple-ids=3 row-size=12B cardinality=10
> F02:PLAN FRAGMENT [RANDOM]
>   DATASTREAM SINK [FRAGMENT=F03, EXCHANGE=13, UNPARTITIONED]
>   04:TOP-N [LIMIT=10]
>   |  order by: id ASC
>   |  hosts=3 per-host-mem=120B
>   |  tuple-ids=3 row-size=12B cardinality=10
>   |
>   03:SCAN HDFS [functional.alltypessmall, RANDOM]
>      partitions=4/4 files=4 size=6.32KB
>      table stats: 100 rows total
>      column stats: all
>      hosts=3 per-host-mem=32.00MB
>      tuple-ids=2 row-size=12B cardinality=100
> F05:PLAN FRAGMENT [RANDOM]
>   DATASTREAM SINK [FRAGMENT=F09, EXCHANGE=14, HASH(bigint_col)]
>   07:AGGREGATE [STREAMING]
>   |  output: sum(int_col)
>   |  group by: bigint_col
>   |  hosts=1 per-host-mem=10.00MB
>   |  tuple-ids=6 row-size=16B cardinality=10
>   |
>   06:SCAN HDFS [functional.alltypes, RANDOM]
>      partitions=1/24 files=1 size=18.12KB
>      table stats: 7300 rows total
>      column stats: all
>      hosts=1 per-host-mem=32.00MB
>      tuple-ids=5 row-size=12B cardinality=280
> F08:PLAN FRAGMENT [RANDOM]
>   DATASTREAM SINK [FRAGMENT=F09, EXCHANGE=16, BROADCAST]
>   09:SCAN HDFS [functional.alltypestiny b, RANDOM]
>      partitions=4/4 files=4 size=460B
>      table stats: 8 rows total
>      column stats: all
>      hosts=2 per-host-mem=48.00MB
>      tuple-ids=8 row-size=4B cardinality=8
> 	at org.junit.Assert.fail(Assert.java:88)
> 	at org.apache.impala.planner.PlannerTestBase.runPlannerTestFile(PlannerTestBase.java:741)
> 	at org.apache.impala.planner.PlannerTestBase.runPlannerTestFile(PlannerTestBase.java:746)
> 	at org.apache.impala.planner.PlannerTest.testUnion(PlannerTest.java:151)
> {noformat}
> [~mjacobs] - The logfiles also show issues with the Kudu client. Are these expected?
> {noformat}
> Mar 24, 2017 8:21:49 PM org.apache.kudu.client.shaded.org.jboss.netty.channel.DefaultChannelPipeline
> WARNING: An exception was thrown by a user handler while handling an exception event ([id: 0x3c465af1, /127.0.0.1:34994 :> impala-boost-static-burst-slave-0b01.vpc.cloudera.com/127.0.0.1:31201] EXCEPTION: java.lang.NullPointerException)
> java.lang.NullPointerException
> 	at org.apache.kudu.client.TabletClient.cleanup(TabletClient.java:640)
> 	at org.apache.kudu.client.TabletClient.exceptionCaught(TabletClient.java:711)
> 	at org.apache.kudu.client.shaded.org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:112)
> 	at org.apache.kudu.client.TabletClient.handleUpstream(TabletClient.java:595)
> 	at org.apache.kudu.client.shaded.org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
> 	at org.apache.kudu.client.shaded.org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
> 	at org.apache.kudu.client.shaded.org.jboss.netty.channel.SimpleChannelUpstreamHandler.exceptionCaught(SimpleChannelUpstreamHandler.java:153)
> 	at org.apache.kudu.client.shaded.org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:112)
> 	at org.apache.kudu.client.shaded.org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
> 	at org.apache.kudu.client.shaded.org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
> 	at org.apache.kudu.client.shaded.org.jboss.netty.handler.codec.oneone.OneToOneDecoder.handleUpstream(OneToOneDecoder.java:60)
> 	at org.apache.kudu.client.shaded.org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
> 	at org.apache.kudu.client.shaded.org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
> 	at org.apache.kudu.client.shaded.org.jboss.netty.handler.codec.frame.FrameDecoder.exceptionCaught(FrameDecoder.java:377)
> 	at org.apache.kudu.client.shaded.org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:112)
> 	at org.apache.kudu.client.shaded.org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
> 	at org.apache.kudu.client.shaded.org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
> 	at org.apache.kudu.client.shaded.org.jboss.netty.channel.Channels.fireExceptionCaught(Channels.java:525)
> 	at org.apache.kudu.client.shaded.org.jboss.netty.channel.AbstractChannelSink.exceptionCaught(AbstractChannelSink.java:48)
> 	at org.apache.kudu.client.shaded.org.jboss.netty.channel.DefaultChannelPipeline.notifyHandlerException(DefaultChannelPipeline.java:658)
> 	at org.apache.kudu.client.shaded.org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:566)
> 	at org.apache.kudu.client.shaded.org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
> 	at org.apache.kudu.client.shaded.org.jboss.netty.handler.timeout.ReadTimeoutHandler.messageReceived(ReadTimeoutHandler.java:184)
> 	at org.apache.kudu.client.shaded.org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
> 	at org.apache.kudu.client.shaded.org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
> 	at org.apache.kudu.client.shaded.org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
> 	at org.apache.kudu.client.shaded.org.jboss.netty.handler.codec.oneone.OneToOneDecoder.handleUpstream(OneToOneDecoder.java:68)
> 	at org.apache.kudu.client.shaded.org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
> 	at org.apache.kudu.client.shaded.org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
> 	at org.apache.kudu.client.shaded.org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:291)
> 	at org.apache.kudu.client.shaded.org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
> 	at org.apache.kudu.client.shaded.org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
> 	at org.apache.kudu.client.shaded.org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
> 	at org.apache.kudu.client.shaded.org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
> 	at org.apache.kudu.client.shaded.org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
> 	at org.apache.kudu.client.Negotiator.finish(Negotiator.java:636)
> 	at org.apache.kudu.client.Negotiator.handleTokenExchangeResponse(Negotiator.java:554)
> 	at org.apache.kudu.client.Negotiator.handleResponse(Negotiator.java:247)
> 	at org.apache.kudu.client.Negotiator.messageReceived(Negotiator.java:229)
> 	at org.apache.kudu.client.shaded.org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
> 	at org.apache.kudu.client.shaded.org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
> 	at org.apache.kudu.client.shaded.org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
> 	at org.apache.kudu.client.shaded.org.jboss.netty.handler.timeout.ReadTimeoutHandler.messageReceived(ReadTimeoutHandler.java:184)
> 	at org.apache.kudu.client.shaded.org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
> 	at org.apache.kudu.client.shaded.org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
> 	at org.apache.kudu.client.shaded.org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
> 	at org.apache.kudu.client.shaded.org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
> 	at org.apache.kudu.client.shaded.org.jboss.netty.handler.codec.oneone.OneToOneDecoder.handleUpstream(OneToOneDecoder.java:70)
> 	at org.apache.kudu.client.shaded.org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
> 	at org.apache.kudu.client.shaded.org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
> 	at org.apache.kudu.client.shaded.org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
> 	at org.apache.kudu.client.shaded.org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
> 	at org.apache.kudu.client.shaded.org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
> 	at org.apache.kudu.client.shaded.org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
> 	at org.apache.kudu.client.shaded.org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
> 	at org.apache.kudu.client.shaded.org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
> 	at org.apache.kudu.client.shaded.org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
> 	at org.apache.kudu.client.shaded.org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
> 	at org.apache.kudu.client.shaded.org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
> 	at org.apache.kudu.client.shaded.org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
> 	at org.apache.kudu.client.shaded.org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
> 	at org.apache.kudu.client.shaded.org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
> 	at org.apache.kudu.client.shaded.org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
> 	at org.apache.kudu.client.shaded.org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
> 	at org.apache.kudu.client.shaded.org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
> 	at org.apache.kudu.client.shaded.org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> 	at java.lang.Thread.run(Thread.java:745)
> Mar 24, 2017 8:21:51 PM org.apache.kudu.client.shaded.org.jboss.netty.channel.socket.nio.AbstractNioSelector
> WARNING: Unexpected exception in the selector loop.
> java.lang.IllegalStateException: cannot be started once stopped
> 	at org.apache.kudu.client.shaded.org.jboss.netty.util.HashedWheelTimer.start(HashedWheelTimer.java:279)
> 	at org.apache.kudu.client.shaded.org.jboss.netty.util.HashedWheelTimer.newTimeout(HashedWheelTimer.java:337)
> 	at org.apache.kudu.client.shaded.org.jboss.netty.channel.socket.nio.NioClientBoss$RegisterTask.run(NioClientBoss.java:185)
> 	at org.apache.kudu.client.shaded.org.jboss.netty.channel.socket.nio.AbstractNioSelector.processTaskQueue(AbstractNioSelector.java:391)
> 	at org.apache.kudu.client.shaded.org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:315)
> 	at org.apache.kudu.client.shaded.org.jboss.netty.channel.socket.nio.NioClientBoss.run(NioClientBoss.java:42)
> 	at org.apache.kudu.client.shaded.org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
> 	at org.apache.kudu.client.shaded.org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> 	at java.lang.Thread.run(Thread.java:745)
> Mar 24, 2017 8:21:52 PM org.apache.kudu.client.shaded.org.jboss.netty.channel.socket.nio.AbstractNioSelector
> WARNING: Unexpected exception in the selector loop.
> java.lang.IllegalStateException: cannot be started once stopped
> 	at org.apache.kudu.client.shaded.org.jboss.netty.util.HashedWheelTimer.start(HashedWheelTimer.java:279)
> 	at org.apache.kudu.client.shaded.org.jboss.netty.util.HashedWheelTimer.newTimeout(HashedWheelTimer.java:337)
> 	at org.apache.kudu.client.shaded.org.jboss.netty.channel.socket.nio.NioClientBoss$RegisterTask.run(NioClientBoss.java:185)
> 	at org.apache.kudu.client.shaded.org.jboss.netty.channel.socket.nio.AbstractNioSelector.processTaskQueue(AbstractNioSelector.java:391)
> 	at org.apache.kudu.client.shaded.org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:315)
> 	at org.apache.kudu.client.shaded.org.jboss.netty.channel.socket.nio.NioClientBoss.run(NioClientBoss.java:42)
> 	at org.apache.kudu.client.shaded.org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
> 	at org.apache.kudu.client.shaded.org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> 	at java.lang.Thread.run(Thread.java:745)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
