kafka-dev mailing list archives

From "Jun Rao (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (KAFKA-435) Keep track of the transient test failure for Kafka-343 on Apache Jenkins
Date Thu, 02 Aug 2012 22:35:02 GMT

    [ https://issues.apache.org/jira/browse/KAFKA-435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13427683#comment-13427683 ]

Jun Rao commented on KAFKA-435:
-------------------------------

Trace of first test failure.

[info] Test Starting: testProduceCorrectlyReceivesResponse(kafka.producer.SyncProducerTest)
[2012-08-01 17:24:52,585] ERROR KafkaApi on Broker 0, error processing ProducerRequest on topic1:0 (kafka.server.KafkaApis:99)
kafka.common.UnknownTopicException: Topic topic1 doesn't exist in the cluster
	at kafka.server.KafkaZooKeeper.ensurePartitionLeaderOnThisBroker(KafkaZooKeeper.scala:93)
	at kafka.server.KafkaApis$$anonfun$produceToLocalLog$2$$anonfun$apply$8.apply(KafkaApis.scala:209)
	at kafka.server.KafkaApis$$anonfun$produceToLocalLog$2$$anonfun$apply$8.apply(KafkaApis.scala:204)
	at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:34)
	at scala.collection.mutable.ArrayOps.foreach(ArrayOps.scala:34)
	at kafka.server.KafkaApis$$anonfun$produceToLocalLog$2.apply(KafkaApis.scala:204)
	at kafka.server.KafkaApis$$anonfun$produceToLocalLog$2.apply(KafkaApis.scala:203)
	at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:34)
	at scala.collection.mutable.ArrayOps.foreach(ArrayOps.scala:34)
	at kafka.server.KafkaApis.produceToLocalLog(KafkaApis.scala:203)
	at kafka.server.KafkaApis.handleProducerRequest(KafkaApis.scala:156)
	at kafka.server.KafkaApis.handle(KafkaApis.scala:58)
	at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:38)
	at java.lang.Thread.run(Thread.java:662)
[2012-08-01 17:24:52,586] ERROR KafkaApi on Broker 0, error processing ProducerRequest on topic2:0 (kafka.server.KafkaApis:99)
kafka.common.UnknownTopicException: Topic topic2 doesn't exist in the cluster
	at kafka.server.KafkaZooKeeper.ensurePartitionLeaderOnThisBroker(KafkaZooKeeper.scala:93)
	at kafka.server.KafkaApis$$anonfun$produceToLocalLog$2$$anonfun$apply$8.apply(KafkaApis.scala:209)
	at kafka.server.KafkaApis$$anonfun$produceToLocalLog$2$$anonfun$apply$8.apply(KafkaApis.scala:204)
	at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:34)
	at scala.collection.mutable.ArrayOps.foreach(ArrayOps.scala:34)
	at kafka.server.KafkaApis$$anonfun$produceToLocalLog$2.apply(KafkaApis.scala:204)
	at kafka.server.KafkaApis$$anonfun$produceToLocalLog$2.apply(KafkaApis.scala:203)
	at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:34)
	at scala.collection.mutable.ArrayOps.foreach(ArrayOps.scala:34)
	at kafka.server.KafkaApis.produceToLocalLog(KafkaApis.scala:203)
	at kafka.server.KafkaApis.handleProducerRequest(KafkaApis.scala:156)
	at kafka.server.KafkaApis.handle(KafkaApis.scala:58)
	at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:38)
	at java.lang.Thread.run(Thread.java:662)
[2012-08-01 17:24:52,587] ERROR KafkaApi on Broker 0, error processing ProducerRequest on topic3:0 (kafka.server.KafkaApis:99)
kafka.common.UnknownTopicException: Topic topic3 doesn't exist in the cluster
	at kafka.server.KafkaZooKeeper.ensurePartitionLeaderOnThisBroker(KafkaZooKeeper.scala:93)
	at kafka.server.KafkaApis$$anonfun$produceToLocalLog$2$$anonfun$apply$8.apply(KafkaApis.scala:209)
	at kafka.server.KafkaApis$$anonfun$produceToLocalLog$2$$anonfun$apply$8.apply(KafkaApis.scala:204)
	at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:34)
	at scala.collection.mutable.ArrayOps.foreach(ArrayOps.scala:34)
	at kafka.server.KafkaApis$$anonfun$produceToLocalLog$2.apply(KafkaApis.scala:204)
	at kafka.server.KafkaApis$$anonfun$produceToLocalLog$2.apply(KafkaApis.scala:203)
	at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:34)
	at scala.collection.mutable.ArrayOps.foreach(ArrayOps.scala:34)
	at kafka.server.KafkaApis.produceToLocalLog(KafkaApis.scala:203)
	at kafka.server.KafkaApis.handleProducerRequest(KafkaApis.scala:156)
	at kafka.server.KafkaApis.handle(KafkaApis.scala:58)
	at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:38)
	at java.lang.Thread.run(Thread.java:662)
[2012-08-01 17:24:54,150] ERROR KafkaApi on Broker 0, error processing ProducerRequest on topic2:0 (kafka.server.KafkaApis:99)
kafka.common.UnknownTopicException: Topic topic2 doesn't exist in the cluster
	at kafka.server.KafkaZooKeeper.ensurePartitionLeaderOnThisBroker(KafkaZooKeeper.scala:93)
	at kafka.server.KafkaApis$$anonfun$produceToLocalLog$2$$anonfun$apply$8.apply(KafkaApis.scala:209)
	at kafka.server.KafkaApis$$anonfun$produceToLocalLog$2$$anonfun$apply$8.apply(KafkaApis.scala:204)
	at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:34)
	at scala.collection.mutable.ArrayOps.foreach(ArrayOps.scala:34)
	at kafka.server.KafkaApis$$anonfun$produceToLocalLog$2.apply(KafkaApis.scala:204)
	at kafka.server.KafkaApis$$anonfun$produceToLocalLog$2.apply(KafkaApis.scala:203)
	at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:34)
	at scala.collection.mutable.ArrayOps.foreach(ArrayOps.scala:34)
	at kafka.server.KafkaApis.produceToLocalLog(KafkaApis.scala:203)
	at kafka.server.KafkaApis.handleProducerRequest(KafkaApis.scala:156)
	at kafka.server.KafkaApis.handle(KafkaApis.scala:58)
	at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:38)
	at java.lang.Thread.run(Thread.java:662)
[error] Test Failed: testProduceCorrectlyReceivesResponse(kafka.producer.SyncProducerTest)
java.net.SocketTimeoutException
	at sun.nio.ch.SocketAdaptor$SocketInputStream.read(SocketAdaptor.java:201)
	at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:86)
	at java.nio.channels.Channels$ReadableByteChannelImpl.read(Channels.java:221)
	at kafka.utils.Utils$.read(Utils.scala:630)
	at kafka.network.BoundedByteBufferReceive.readFrom(BoundedByteBufferReceive.scala:54)
	at kafka.network.Receive$class.readCompletely(Transmission.scala:56)
	at kafka.network.BoundedByteBufferReceive.readCompletely(BoundedByteBufferReceive.scala:29)
	at kafka.network.BlockingChannel.receive(BlockingChannel.scala:92)
	at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:79)
	at kafka.producer.SyncProducer.doSend(SyncProducer.scala:77)
	at kafka.producer.SyncProducer.send(SyncProducer.scala:111)
	at kafka.producer.SyncProducerTest.testProduceCorrectlyReceivesResponse(SyncProducerTest.scala:166)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at junit.framework.TestCase.runTest(TestCase.java:164)
	at junit.framework.TestCase.runBare(TestCase.java:130)
	at junit.framework.TestResult$1.protect(TestResult.java:110)
	at junit.framework.TestResult.runProtected(TestResult.java:128)
	at junit.framework.TestResult.run(TestResult.java:113)
	at junit.framework.TestCase.run(TestCase.java:120)
	at junit.framework.TestSuite.runTest(TestSuite.java:228)
	at junit.framework.TestSuite.run(TestSuite.java:223)
	at junit.framework.TestSuite.runTest(TestSuite.java:228)
	at junit.framework.TestSuite.run(TestSuite.java:223)
	at org.scalatest.junit.JUnit3Suite.run(JUnit3Suite.scala:309)
	at org.scalatest.tools.ScalaTestFramework$ScalaTestRunner.run(ScalaTestFramework.scala:40)
	at sbt.TestRunner.run(TestFramework.scala:53)
	at sbt.TestRunner.runTest$1(TestFramework.scala:67)
	at sbt.TestRunner.run(TestFramework.scala:76)
	at sbt.TestFramework$$anonfun$10$$anonfun$apply$11.runTest$2(TestFramework.scala:194)
	at sbt.TestFramework$$anonfun$10$$anonfun$apply$11$$anonfun$apply$12.apply(TestFramework.scala:205)
	at sbt.TestFramework$$anonfun$10$$anonfun$apply$11$$anonfun$apply$12.apply(TestFramework.scala:205)
	at sbt.NamedTestTask.run(TestFramework.scala:92)
	at sbt.ScalaProject$$anonfun$sbt$ScalaProject$$toTask$1.apply(ScalaProject.scala:193)
	at sbt.ScalaProject$$anonfun$sbt$ScalaProject$$toTask$1.apply(ScalaProject.scala:193)
	at sbt.TaskManager$Task.invoke(TaskManager.scala:62)
	at sbt.impl.RunTask.doRun$1(RunTask.scala:77)
	at sbt.impl.RunTask.runTask(RunTask.scala:85)
	at sbt.impl.RunTask.sbt$impl$RunTask$$runIfNotRoot(RunTask.scala:60)
	at sbt.impl.RunTask$$anonfun$runTasksExceptRoot$2.apply(RunTask.scala:48)
	at sbt.impl.RunTask$$anonfun$runTasksExceptRoot$2.apply(RunTask.scala:48)
	at sbt.Distributor$Run$Worker$$anonfun$2.apply(ParallelRunner.scala:131)
	at sbt.Distributor$Run$Worker$$anonfun$2.apply(ParallelRunner.scala:131)
	at sbt.Control$.trapUnit(Control.scala:19)
	at sbt.Distributor$Run$Worker.run(ParallelRunner.scala:131)
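The broker-side UnknownTopicException entries above suggest the test's producer sent requests before its topics existed on the broker, and the SocketTimeoutException that fails the test is consistent with that race only sometimes resolving in time on a loaded Jenkins machine. As a general illustration of how such transient failures are usually deflaked (this is a generic sketch, not code from the Kafka test suite; all names here are hypothetical), a test can poll a readiness condition against a deadline before producing:

```java
import java.util.function.BooleanSupplier;

public class WaitUtil {
    /**
     * Polls `condition` until it returns true or `timeoutMs` elapses,
     * sleeping `pollMs` between checks. Returns whether the condition
     * was met. Generic wait-until-true pattern for avoiding races in
     * integration tests; illustrative only.
     */
    public static boolean waitUntilTrue(BooleanSupplier condition,
                                        long timeoutMs,
                                        long pollMs) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (condition.getAsBoolean()) {
                return true;
            }
            Thread.sleep(pollMs);
        }
        return condition.getAsBoolean(); // one final check at the deadline
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        // Stand-in for "topic metadata is available on the broker":
        // here the condition simply becomes true after ~50 ms.
        boolean ready = waitUntilTrue(
                () -> System.currentTimeMillis() - start > 50, 2000, 10);
        System.out.println(ready);
    }
}
```

A test using such a helper would block on topic readiness (however the suite checks it) before calling send, so a slow broker delays the test rather than failing it.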

                
> Keep track of the transient test failure for Kafka-343 on Apache Jenkins
> ------------------------------------------------------------------------
>
>                 Key: KAFKA-435
>                 URL: https://issues.apache.org/jira/browse/KAFKA-435
>             Project: Kafka
>          Issue Type: Bug
>            Reporter: Yang Ye
>            Assignee: Yang Ye
>            Priority: Minor
>
> See: http://mail-archives.apache.org/mod_mbox/incubator-kafka-commits/201208.mbox/browser
> Error message:
> ------------------------------------------
> [...truncated 3415 lines...]
> [2012-08-01 17:27:08,432] ERROR KafkaApi on Broker 0, error when processing request (test_topic,0,-1,1048576) (kafka.server.KafkaApis:99)
> kafka.common.OffsetOutOfRangeException: offset -1 is out of range
> 	at kafka.log.Log$.findRange(Log.scala:46)
> 	at kafka.log.Log.read(Log.scala:265)
> 	at kafka.server.KafkaApis.kafka$server$KafkaApis$$readMessageSet(KafkaApis.scala:377)
> 	at kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$readMessageSets$1$$anonfun$apply$21.apply(KafkaApis.scala:333)
> 	at kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$readMessageSets$1$$anonfun$apply$21.apply(KafkaApis.scala:332)
> 	at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:57)
> 	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:43)
> 	at kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$readMessageSets$1.apply(KafkaApis.scala:332)
> 	at kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$readMessageSets$1.apply(KafkaApis.scala:328)
> 	at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:34)
> 	at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:32)
> 	at kafka.server.KafkaApis.kafka$server$KafkaApis$$readMessageSets(KafkaApis.scala:328)
> 	at kafka.server.KafkaApis.handleFetchRequest(KafkaApis.scala:272)
> 	at kafka.server.KafkaApis.handle(KafkaApis.scala:59)
> 	at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:38)
> 	at java.lang.Thread.run(Thread.java:662)
> [2012-08-01 17:27:08,446] ERROR Closing socket for /67.195.138.9 because of error (kafka.network.Processor:99)
> java.io.IOException: Connection reset by peer
> 	at sun.nio.ch.FileDispatcher.read0(Native Method)
> 	at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:21)
> 	at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:198)
> 	at sun.nio.ch.IOUtil.read(IOUtil.java:171)
> 	at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:243)
> 	at kafka.utils.Utils$.read(Utils.scala:630)
> 	at kafka.network.BoundedByteBufferReceive.readFrom(BoundedByteBufferReceive.scala:54)
> 	at kafka.network.Processor.read(SocketServer.scala:296)
> 	at kafka.network.Processor.run(SocketServer.scala:212)
> 	at java.lang.Thread.run(Thread.java:662)
> [info] Test Passed: testResetToEarliestWhenOffsetTooLow(kafka.integration.AutoOffsetResetTest)
> [info] Test Starting: testResetToLatestWhenOffsetTooHigh(kafka.integration.AutoOffsetResetTest)
> [2012-08-01 17:27:09,203] ERROR KafkaApi on Broker 0, error when processing request (test_topic,0,10000,1048576) (kafka.server.KafkaApis:99)
> kafka.common.OffsetOutOfRangeException: offset 10000 is out of range
> 	at kafka.log.Log$.findRange(Log.scala:46)
> 	at kafka.log.Log.read(Log.scala:265)
> 	at kafka.server.KafkaApis.kafka$server$KafkaApis$$readMessageSet(KafkaApis.scala:377)
> 	at kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$readMessageSets$1$$anonfun$apply$21.apply(KafkaApis.scala:333)
> 	at kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$readMessageSets$1$$anonfun$apply$21.apply(KafkaApis.scala:332)
> 	at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:57)
> 	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:43)
> 	at kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$readMessageSets$1.apply(KafkaApis.scala:332)
> 	at kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$readMessageSets$1.apply(KafkaApis.scala:328)
> 	at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:34)
> 	at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:32)
> 	at kafka.server.KafkaApis.kafka$server$KafkaApis$$readMessageSets(KafkaApis.scala:328)
> 	at kafka.server.KafkaApis.handleFetchRequest(KafkaApis.scala:272)
> 	at kafka.server.KafkaApis.handle(KafkaApis.scala:59)
> 	at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:38)
> 	at java.lang.Thread.run(Thread.java:662)
> [2012-08-01 17:27:11,197] ERROR Closing socket for /67.195.138.9 because of error (kafka.network.Processor:99)
> java.io.IOException: Connection reset by peer
> 	at sun.nio.ch.FileDispatcher.read0(Native Method)
> 	at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:21)
> 	at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:198)
> 	at sun.nio.ch.IOUtil.read(IOUtil.java:171)
> 	at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:243)
> 	at kafka.utils.Utils$.read(Utils.scala:630)
> 	at kafka.network.BoundedByteBufferReceive.readFrom(BoundedByteBufferReceive.scala:54)
> 	at kafka.network.Processor.read(SocketServer.scala:296)
> 	at kafka.network.Processor.run(SocketServer.scala:212)
> 	at java.lang.Thread.run(Thread.java:662)
> [info] Test Passed: testResetToLatestWhenOffsetTooHigh(kafka.integration.AutoOffsetResetTest)
> [info] Test Starting: testResetToLatestWhenOffsetTooLow(kafka.integration.AutoOffsetResetTest)
> [2012-08-01 17:27:12,365] ERROR KafkaApi on Broker 0, error when processing request (test_topic,0,-1,1048576) (kafka.server.KafkaApis:99)
> kafka.common.OffsetOutOfRangeException: offset -1 is out of range
> 	at kafka.log.Log$.findRange(Log.scala:46)
> 	at kafka.log.Log.read(Log.scala:265)
> 	at kafka.server.KafkaApis.kafka$server$KafkaApis$$readMessageSet(KafkaApis.scala:377)
> 	at kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$readMessageSets$1$$anonfun$apply$21.apply(KafkaApis.scala:333)
> 	at kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$readMessageSets$1$$anonfun$apply$21.apply(KafkaApis.scala:332)
> 	at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:57)
> 	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:43)
> 	at kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$readMessageSets$1.apply(KafkaApis.scala:332)
> 	at kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$readMessageSets$1.apply(KafkaApis.scala:328)
> 	at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:34)
> 	at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:32)
> 	at kafka.server.KafkaApis.kafka$server$KafkaApis$$readMessageSets(KafkaApis.scala:328)
> 	at kafka.server.KafkaApis.handleFetchRequest(KafkaApis.scala:272)
> 	at kafka.server.KafkaApis.handle(KafkaApis.scala:59)
> 	at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:38)
> 	at java.lang.Thread.run(Thread.java:662)
> [2012-08-01 17:27:13,044] WARN EndOfStreamException: Unable to read additional data from client sessionid 0x138e33a0beb0012, likely client has closed socket (org.apache.zookeeper.server.NIOServerCnxn:634)
> [2012-08-01 17:27:13,246] WARN EndOfStreamException: Unable to read additional data from client sessionid 0x138e33a0beb0016, likely client has closed socket (org.apache.zookeeper.server.NIOServerCnxn:634)
> [2012-08-01 17:27:14,333] ERROR Closing socket for /67.195.138.9 because of error (kafka.network.Processor:99)
> java.io.IOException: Connection reset by peer
> 	at sun.nio.ch.FileDispatcher.read0(Native Method)
> 	at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:21)
> 	at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:198)
> 	at sun.nio.ch.IOUtil.read(IOUtil.java:171)
> 	at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:243)
> 	at kafka.utils.Utils$.read(Utils.scala:630)
> 	at kafka.network.BoundedByteBufferReceive.readFrom(BoundedByteBufferReceive.scala:54)
> 	at kafka.network.Processor.read(SocketServer.scala:296)
> 	at kafka.network.Processor.run(SocketServer.scala:212)
> 	at java.lang.Thread.run(Thread.java:662)
> [2012-08-01 17:27:14,347] WARN EndOfStreamException: Unable to read additional data from client sessionid 0x138e33a167e0004, likely client has closed socket (org.apache.zookeeper.server.NIOServerCnxn:634)
> [info] Test Passed: testResetToLatestWhenOffsetTooLow(kafka.integration.AutoOffsetResetTest)
> [info] == core-kafka / kafka.integration.AutoOffsetResetTest ==
> [info] 
> [info] == core-kafka / kafka.integration.TopicMetadataTest ==
> [info] Test Starting: testTopicMetadataRequest(kafka.integration.TopicMetadataTest)
> [info] Test Passed: testTopicMetadataRequest(kafka.integration.TopicMetadataTest)
> [info] Test Starting: testBasicTopicMetadata(kafka.integration.TopicMetadataTest)
> [info] Test Passed: testBasicTopicMetadata(kafka.integration.TopicMetadataTest)
> [info] Test Starting: testAutoCreateTopic(kafka.integration.TopicMetadataTest)
> [info] Test Passed: testAutoCreateTopic(kafka.integration.TopicMetadataTest)
> [info] == core-kafka / kafka.integration.TopicMetadataTest ==
> [info] 
> [info] == core-kafka / kafka.server.LeaderElectionTest ==
> [info] Test Starting: testLeaderElectionAndEpoch(kafka.server.LeaderElectionTest)
> [2012-08-01 17:27:15,189] ERROR Kafka Log on Broker 1, Cannot truncate log to 0 since the log start offset is 0 and end offset is 0 (kafka.log.Log:93)
> [2012-08-01 17:27:15,694] ERROR Closing socket for /67.195.138.9 because of error (kafka.network.Processor:99)
> java.io.IOException: Connection reset by peer
> 	at sun.nio.ch.FileDispatcher.write0(Native Method)
> 	at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:29)
> 	at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:69)
> 	at sun.nio.ch.IOUtil.write(IOUtil.java:40)
> 	at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:334)
> 	at kafka.api.PartitionDataSend.writeTo(FetchResponse.scala:66)
> 	at kafka.network.MultiSend.writeTo(Transmission.scala:94)
> 	at kafka.network.Send$class.writeCompletely(Transmission.scala:75)
> 	at kafka.network.MultiSend.writeCompletely(Transmission.scala:87)
> 	at kafka.api.TopicDataSend.writeTo(FetchResponse.scala:142)
> 	at kafka.network.MultiSend.writeTo(Transmission.scala:94)
> 	at kafka.network.Send$class.writeCompletely(Transmission.scala:75)
> 	at kafka.network.MultiSend.writeCompletely(Transmission.scala:87)
> 	at kafka.api.FetchResponseSend.writeTo(FetchResponse.scala:219)
> 	at kafka.network.Processor.write(SocketServer.scala:321)
> 	at kafka.network.Processor.run(SocketServer.scala:214)
> 	at java.lang.Thread.run(Thread.java:662)
> [2012-08-01 17:27:15,834] WARN EndOfStreamException: Unable to read additional data from client sessionid 0x138e33a167e0007, likely client has closed socket (org.apache.zookeeper.server.NIOServerCnxn:634)
> [2012-08-01 17:27:15,835] WARN EndOfStreamException: Unable to read additional data from client sessionid 0x138e33a167e0012, likely client has closed socket (org.apache.zookeeper.server.NIOServerCnxn:634)
> [2012-08-01 17:27:17,261] ERROR Kafka Log on Broker 1, Cannot truncate log to 0 since the log start offset is 0 and end offset is 0 (kafka.log.Log:93)
> [2012-08-01 17:27:19,635] WARN EndOfStreamException: Unable to read additional data from client sessionid 0x138e33a252b0014, likely client has closed socket (org.apache.zookeeper.server.NIOServerCnxn:634)
> [2012-08-01 17:27:19,636] WARN EndOfStreamException: Unable to read additional data from client sessionid 0x138e33a252b0015, likely client has closed socket (org.apache.zookeeper.server.NIOServerCnxn:634)
> [2012-08-01 17:27:19,645] WARN EndOfStreamException: Unable to read additional data from client sessionid 0x138e33a252b0006, likely client has closed socket (org.apache.zookeeper.server.NIOServerCnxn:634)
> [2012-08-01 17:27:19,665] WARN EndOfStreamException: Unable to read additional data from client sessionid 0x138e33a252b0018, likely client has closed socket (org.apache.zookeeper.server.NIOServerCnxn:634)
> [2012-08-01 17:27:19,728] ERROR Unexpected Exception:  (org.apache.zookeeper.server.NIOServerCnxn:445)
> java.nio.channels.CancelledKeyException
> 	at sun.nio.ch.SelectionKeyImpl.ensureValid(SelectionKeyImpl.java:55)
> 	at sun.nio.ch.SelectionKeyImpl.interestOps(SelectionKeyImpl.java:59)
> 	at org.apache.zookeeper.server.NIOServerCnxn.sendBuffer(NIOServerCnxn.java:418)
> 	at org.apache.zookeeper.server.NIOServerCnxn.sendResponse(NIOServerCnxn.java:1509)
> 	at org.apache.zookeeper.server.FinalRequestProcessor.processRequest(FinalRequestProcessor.java:171)
> 	at org.apache.zookeeper.server.SyncRequestProcessor.run(SyncRequestProcessor.java:135)
> [2012-08-01 17:27:19,729] ERROR Unexpected Exception:  (org.apache.zookeeper.server.NIOServerCnxn:445)
> java.nio.channels.CancelledKeyException
> 	at sun.nio.ch.SelectionKeyImpl.ensureValid(SelectionKeyImpl.java:55)
> 	at sun.nio.ch.SelectionKeyImpl.interestOps(SelectionKeyImpl.java:59)
> 	at org.apache.zookeeper.server.NIOServerCnxn.sendBuffer(NIOServerCnxn.java:418)
> 	at org.apache.zookeeper.server.NIOServerCnxn.sendResponse(NIOServerCnxn.java:1509)
> 	at org.apache.zookeeper.server.FinalRequestProcessor.processRequest(FinalRequestProcessor.java:171)
> 	at org.apache.zookeeper.server.SyncRequestProcessor.run(SyncRequestProcessor.java:135)
> [2012-08-01 17:27:19,729] ERROR Unexpected Exception:  (org.apache.zookeeper.server.NIOServerCnxn:445)
> java.nio.channels.CancelledKeyException
> 	at sun.nio.ch.SelectionKeyImpl.ensureValid(SelectionKeyImpl.java:55)
> 	at sun.nio.ch.SelectionKeyImpl.interestOps(SelectionKeyImpl.java:59)
> 	at org.apache.zookeeper.server.NIOServerCnxn.sendBuffer(NIOServerCnxn.java:418)
> 	at org.apache.zookeeper.server.NIOServerCnxn.sendResponse(NIOServerCnxn.java:1509)
> 	at org.apache.zookeeper.server.FinalRequestProcessor.processRequest(FinalRequestProcessor.java:171)
> 	at org.apache.zookeeper.server.SyncRequestProcessor.run(SyncRequestProcessor.java:135)
> [2012-08-01 17:27:19,729] ERROR Unexpected Exception:  (org.apache.zookeeper.server.NIOServerCnxn:445)
> java.nio.channels.CancelledKeyException
> 	at sun.nio.ch.SelectionKeyImpl.ensureValid(SelectionKeyImpl.java:55)
> 	at sun.nio.ch.SelectionKeyImpl.interestOps(SelectionKeyImpl.java:59)
> 	at org.apache.zookeeper.server.NIOServerCnxn.sendBuffer(NIOServerCnxn.java:418)
> 	at org.apache.zookeeper.server.NIOServerCnxn.sendResponse(NIOServerCnxn.java:1509)
> 	at org.apache.zookeeper.server.FinalRequestProcessor.processRequest(FinalRequestProcessor.java:171)
> 	at org.apache.zookeeper.server.SyncRequestProcessor.run(SyncRequestProcessor.java:135)
> [info] Test Passed: testLeaderElectionAndEpoch(kafka.server.LeaderElectionTest)
> [info] == core-kafka / kafka.server.LeaderElectionTest ==
> [info] 
> [info] == core-kafka / kafka.log4j.KafkaLog4jAppenderTest ==
> [info] Test Starting: testKafkaLog4jConfigs(kafka.log4j.KafkaLog4jAppenderTest)
> log4j:WARN No appenders could be found for logger (org.I0Itec.zkclient.ZkEventThread).
> log4j:WARN Please initialize the log4j system properly.
> [info] Test Passed: testKafkaLog4jConfigs(kafka.log4j.KafkaLog4jAppenderTest)
> [info] Test Starting: testZkConnectLog4jAppends(kafka.log4j.KafkaLog4jAppenderTest)
> [info] Test Passed: testZkConnectLog4jAppends(kafka.log4j.KafkaLog4jAppenderTest)
> [info] == core-kafka / kafka.log4j.KafkaLog4jAppenderTest ==
> [info] 
> [info] == core-kafka / kafka.javaapi.consumer.ZookeeperConsumerConnectorTest ==
> [info] Test Starting: testBasic(kafka.javaapi.consumer.ZookeeperConsumerConnectorTest)
> [info] Test Passed: testBasic(kafka.javaapi.consumer.ZookeeperConsumerConnectorTest)
> [info] == core-kafka / kafka.javaapi.consumer.ZookeeperConsumerConnectorTest ==
> [info] 
> [info] == core-kafka / Test cleanup 1 ==
> [info] Deleting directory /tmp/sbt_501f0f08
> [info] == core-kafka / Test cleanup 1 ==
> [info] 
> [info] == core-kafka / test-finish ==
> [error] Failed: : Total 136, Failed 3, Errors 0, Passed 133, Skipped 0
> [info] == core-kafka / test-finish ==
> [info] 
> [info] == core-kafka / test-cleanup ==
> [info] == core-kafka / test-cleanup ==
> [info] 
> [info] == java-examples / test-compile ==
> [info]   Source analysis: 0 new/modified, 0 indirectly invalidated, 0 removed.
> [info] Compiling test sources...
> [info] Nothing to compile.
> [info]   Post-analysis: 0 classes.
> [info] == java-examples / test-compile ==
> [info] 
> [info] == hadoop consumer / copy-test-resources ==
> [info] == hadoop consumer / copy-test-resources ==
> [info] 
> [info] == hadoop consumer / copy-resources ==
> [info] == hadoop consumer / copy-resources ==
> [info] 
> [info] == perf / copy-resources ==
> [info] == perf / copy-resources ==
> [info] 
> [info] == java-examples / copy-test-resources ==
> [info] == java-examples / copy-test-resources ==
> [info] 
> [info] == perf / test-compile ==
> [info]   Source analysis: 0 new/modified, 0 indirectly invalidated, 0 removed.
> [info] Compiling test sources...
> [info] Nothing to compile.
> [info]   Post-analysis: 0 classes.
> [info] == perf / test-compile ==
> [info] 
> [info] == hadoop consumer / test-compile ==
> [info]   Source analysis: 0 new/modified, 0 indirectly invalidated, 0 removed.
> [info] Compiling test sources...
> [info] Nothing to compile.
> [info]   Post-analysis: 0 classes.
> [info] == hadoop consumer / test-compile ==
> [info] 
> [info] == perf / copy-test-resources ==
> [info] == perf / copy-test-resources ==
> [info] 
> [info] == hadoop producer / copy-resources ==
> [info] == hadoop producer / copy-resources ==
> [info] 
> [info] == java-examples / copy-resources ==
> [info] == java-examples / copy-resources ==
> [error] Error running kafka.producer.SyncProducerTest: Test FAILED
> [error] Error running kafka.server.LogRecoveryTest: Test FAILED
> [error] Error running kafka.server.ServerShutdownTest: Test FAILED
> [error] Error running test: One or more subtasks failed
> [info] 
> [info] Total time: 229 s, completed Aug 1, 2012 5:27:24 PM
> [info] 
> [info] Total session time: 229 s, completed Aug 1, 2012 5:27:24 PM
> [error] Error during build.
> Build step 'Execute shell' marked build as failure

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira
