Subject: Re: 0.8.0 producer can't connect to cluster?
From: Jay Kreps <jay.kreps@gmail.com>
To: kafka-users@incubator.apache.org
Date: Tue, 27 Nov 2012 10:17:06 -0800

Yes, we had to patch metrics due to a bug. The patched jar is shipped with
the distribution until they get a release out. But logging this only at
debug level is a bug. Would you be willing to file a ticket for this?

Thanks!

-jay

On Tue, Nov 27, 2012 at 10:12 AM, Chris Curtin wrote:

> Okay, I figured out that you need to turn the log4j logger up to DEBUG;
> then you get a NoSuchMethodError around yammer metrics (see below).
>
> I'm running yammer metrics 2.2.0, since that seems to be all I can find
> via Maven. Is there a different version needed?
>
> Thanks,
>
> Chris
>
> 257 [main] INFO kafka.producer.SyncProducer - Connected to
> 10.121.31.55:9094 for producing
> 286 [main] WARN kafka.producer.async.DefaultEventHandler - failed to
> send to broker 3 with data Map([test1,0] ->
> ByteBufferMessageSet(MessageAndOffset(Message(magic = 2, attributes = 0,
> crc = 1906312613, key = null, payload = java.nio.HeapByteBuffer[pos=0
> lim=22 cap=22]),0), ))
> java.lang.NoSuchMethodError: com.yammer.metrics.core.TimerContext.stop()J
>     at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:36)
>     at kafka.producer.SyncProducer.send(SyncProducer.scala:94)
>     at kafka.producer.async.DefaultEventHandler.kafka$producer$async$DefaultEventHandler$$send(DefaultEventHandler.scala:221)
>     at kafka.producer.async.DefaultEventHandler$$anonfun$dispatchSerializedData$1.apply(DefaultEventHandler.scala:87)
>     at kafka.producer.async.DefaultEventHandler$$anonfun$dispatchSerializedData$1.apply(DefaultEventHandler.scala:81)
>     at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:80)
>     at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:80)
>     at scala.collection.Iterator$class.foreach(Iterator.scala:631)
>     at scala.collection.mutable.HashTable$$anon$1.foreach(HashTable.scala:161)
>     at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:194)
>     at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
>     at scala.collection.mutable.HashMap.foreach(HashMap.scala:80)
>     at kafka.producer.async.DefaultEventHandler.dispatchSerializedData(DefaultEventHandler.scala:81)
>     at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:57)
>     at kafka.producer.Producer.send(Producer.scala:75)
>     at kafka.javaapi.producer.Producer.send(Producer.scala:32)
>     at com.silverpop.kafka.playproducer.TestProducer.main(TestProducer.java:40)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>     at java.lang.reflect.Method.invoke(Method.java:597)
>     at com.intellij.rt.execution.application.AppMain.main(AppMain.java:120)
>
>
> On Tue, Nov 27, 2012 at 12:30 PM, Jun Rao wrote:
>
> > When the producer fails to send in 3 retries, it will log an error.
> > Before that, the producer logs the failure of each retry at warn level,
> > and that tells you the cause.
> >
> > Also, did you create the topic first?
> >
> > Thanks,
> >
> > Jun
> >
> > On Tue, Nov 27, 2012 at 8:56 AM, Chris Curtin wrote:
> >
> > > The error when it fails is all I get. Nothing on the broker side and
> > > no other errors or exceptions on the client.
> > >
> > > Where should I be looking for the reasons? Is there a callback I
> > > should be implementing?
> > >
> > > Thanks,
> > >
> > > Chris
> > >
> > > On Tue, Nov 27, 2012 at 11:49 AM, Jun Rao wrote:
> > >
> > > > Do you have the exception that caused each retry to fail?
> > > >
> > > > Thanks,
> > > >
> > > > Jun
> > > >
> > > > On Tue, Nov 27, 2012 at 7:17 AM, Chris Curtin
> > > > <curtin.chris@gmail.com> wrote:
> > > >
> > > > > Hi,
> > > > >
> > > > > I installed 0.8.0 yesterday on 3 physical machines, with 9 brokers
> > > > > running (3 per machine) and 2 topics with 3 replicas each.
> > > > >
> > > > > The console producer/consumer examples work fine.
> > > > >
> > > > > When I run my producer logic I get the following error, whether on
> > > > > the cluster or in my dev environment:
> > > > >
> > > > > Exception in thread "main" kafka.common.FailedToSendMessageException:
> > > > > Failed to send messages after 3 tries.
> > > > >     at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:70)
> > > > >     at kafka.producer.Producer.send(Producer.scala:75)
> > > > >     at kafka.javaapi.producer.Producer.send(Producer.scala:32)
> > > > >     at com.silverpop.kafka.playproducer.TestProducer.main(TestProducer.java:40)
> > > > >     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > > > >     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> > > > >     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> > > > >     at java.lang.reflect.Method.invoke(Method.java:597)
> > > > >     at com.intellij.rt.execution.application.AppMain.main(AppMain.java:120)
> > > > >
> > > > > Code is pretty basic:
> > > > >
> > > > > public class TestProducer {
> > > > >     public static void main(String[] args) {
> > > > >
> > > > >         String zookeeper = args[0];
> > > > >         long events = Long.parseLong(args[1]);
> > > > >         long blocks = Long.parseLong(args[2]);
> > > > >
> > > > >         Random rnd = new Random();
> > > > >
> > > > >         Properties props = new Properties();
> > > > >         props.put("broker.list", "mongodb03.atlnp1:9092");
> > > > >         props.put("serializer.class", "kafka.serializer.StringEncoder");
> > > > >         ProducerConfig config = new ProducerConfig(props);
> > > > >         Producer<String, String> producer =
> > > > >                 new Producer<String, String>(config);
> > > > >
> > > > >         for (long nBlocks = 0; nBlocks < blocks; nBlocks++) {
> > > > >             for (long nEvents = 0; nEvents < events; nEvents++) {
> > > > >                 long runtime = new Date().getTime();
> > > > >                 String msg = runtime + "," + (50 + nBlocks) + "," +
> > > > >                         nEvents + "," + rnd.nextInt(1000);
> > > > >                 KeyedMessage<String, String> data =
> > > > >                         new KeyedMessage<String, String>("test1", msg);
> > > > >                 producer.send(data);
> > > > >             }
> > > > >         }
> > > > >     }
> > > > > }
> > > > >
> > > > > Using ZooKeeper doesn't matter. Changing broker.list to include all
> > > > > 9 brokers doesn't matter. Changing the Producer and KeyedMessage
> > > > > type parameters doesn't matter.
> > > > >
> > > > > Thoughts on what I'm doing wrong?
> > > > >
> > > > > Thanks,
> > > > >
> > > > > Chris
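
A few follow-up sketches for anyone hitting the same problem. None of this is
from the original thread; the class names and paths below are illustrative.

First, the NoSuchMethodError above usually means the stock metrics-core 2.2.0
from Maven is on the classpath instead of the patched metrics jar that, as Jay
notes, ships with the 0.8.0 distribution. A quick way to see which jar the
yammer classes are actually loaded from:

    import com.yammer.metrics.core.TimerContext;

    public class MetricsJarCheck {
        public static void main(String[] args) {
            // Print the jar that TimerContext is loaded from. If this points at
            // metrics-core-2.2.0.jar from the local Maven repository rather than
            // the jar bundled with the Kafka 0.8.0 distribution, the producer
            // fails with the NoSuchMethodError shown in the stack trace above.
            System.out.println(TimerContext.class.getProtectionDomain()
                    .getCodeSource().getLocation());
        }
    }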
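
Second, on Jun's point about the per-retry logging: the cause of each failed
retry is only visible once the kafka loggers are turned up, which is what
Chris did above. Assuming log4j 1.x is on the client classpath (which is what
the 0.8.0 producer logs through), one way is the entry
log4j.logger.kafka=DEBUG in log4j.properties; the programmatic equivalent is:

    // At the top of TestProducer.main(), before the producer is created:
    // raise all kafka.* loggers to DEBUG so each retry's cause is printed.
    org.apache.log4j.Logger.getLogger("kafka")
            .setLevel(org.apache.log4j.Level.DEBUG);

And on creating the topic first: if memory serves, the 0.8.0 distribution
ships an admin script for this, along the lines of
bin/kafka-create-topic.sh --zookeeper <zk-host>:2181 --replica 3 --partition 1
--topic test1 (check bin/ in your install for the exact name and flags).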
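
Finally, a small aside on the TestProducer sketch itself, unrelated to the
error: the producer is never closed, so the broker connections are not
released when the test exits. The javaapi producer has a close() method, so
the send loop can be wrapped like this (the loop body is the one from Chris's
code above):

    Producer<String, String> producer = new Producer<String, String>(config);
    try {
        // ... send loop from the message above ...
    } finally {
        // Release the connections to the brokers even if a send throws.
        producer.close();
    }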