Subject: Re: Is there a way to pull out kafka metadata from zookeeper?
From: hsy541@gmail.com
To: users@kafka.apache.org
Date: Fri, 11 Oct 2013 17:16:16 -0700

That's why I'm asking. I would like to see a Kafka ZooKeeper client API
that returns TopicMetadata, instead of my own hacky way of querying
ZooKeeper.

Thanks!

Best,
Siyuan

On Fri, Oct 11, 2013 at 4:00 PM, Bruno D. Rodrigues
<bruno.rodrigues@litux.org> wrote:

> Why not ask ZooKeeper for the list of brokers and then ask a random
> broker for the metadata (and repeat if that broker is down), even if
> it takes two calls?
>
> Heck, it already makes unnecessary connections. It connects to a
> broker, gets the metadata, disconnects, and then connects again for
> the data. If it's already assumed that a producer or consumer will
> take some seconds until it is ready, what harm is one more call going
> to do to the flow?
>
> Then producers and consumers would be consistently configured. Or
> allow the producers to also go to a broker instead of ZooKeeper.
>
> This way the consumer needs to know and hardcode at least one node.
> That node can fail. It can be changed.
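The retry loop Bruno describes is easy to sketch. In the sketch below, `list_brokers` and `request_metadata` are hypothetical stand-ins for a read of ZooKeeper's /brokers/ids children and a real TopicMetadataRequest round trip; only the failover logic is the point:

```python
import random

def fetch_metadata(list_brokers, request_metadata):
    """Ask ZooKeeper for the broker list, then try brokers in random
    order until one answers the metadata request; raise if all fail.

    list_brokers and request_metadata are hypothetical callables,
    standing in for a real ZooKeeper read and a TopicMetadataRequest.
    """
    brokers = list(list_brokers())   # e.g. children of /brokers/ids
    random.shuffle(brokers)          # spread load across live brokers
    last_err = None
    for host, port in brokers:
        try:
            return request_metadata(host, port)
        except ConnectionError as err:
            last_err = err           # broker down -- try the next one
    raise RuntimeError("no broker reachable") from last_err
```

Since any surviving broker can answer the metadata request, the only way this fails is the all-brokers-down case.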
> I thought ZooKeeper was supposed to abstract away this kind of
> complexity.
>
> --
> Bruno Rodrigues
> Sent from my iPhone
>
> On 11/10/2013, at 22:40, Neha Narkhede wrote:
>
> >>> For each consumer that consumes a different topic/replica I have to
> >>> specify those 20 brokers and go over all of them to know which
> >>> broker is alive. And even worse, what if I dynamically add a new
> >>> broker into the cluster and remove an old one?
> >
> > TopicMetadataRequest is a batch API: you can get metadata for either a
> > specific list of topics or for all topics in the cluster, if you
> > specify an empty list of topics. Adding a broker is not a problem,
> > since the metadata request also returns the list of brokers in the
> > cluster. The reason this is better than reading from ZooKeeper is that
> > the same operation would require multiple ZooKeeper round trips,
> > instead of a single TopicMetadataRequest round trip to some Kafka
> > broker.
> >
> > Thanks,
> > Neha
> >
> >
> >> On Fri, Oct 11, 2013 at 11:30 AM, hsy541@gmail.com wrote:
> >>
> >> Thanks, guys!
> >> But I feel weird about it. Assume I have 20 brokers for 10 different
> >> topics, with 2 partitions and 2 replicas each. For each consumer that
> >> consumes a different topic/replica, I have to specify those 20
> >> brokers and go over all of them to know which broker is alive. And
> >> even worse, what if I dynamically add a new broker into the cluster
> >> and remove an old one? I think it would be nice to have a way to get
> >> metadata from ZooKeeper (a centralized coordinator?) directly.
> >>
> >> Best,
> >> Siyuan
> >>
> >>
> >> On Fri, Oct 11, 2013 at 9:12 AM, Neha Narkhede wrote:
> >>
> >>> If, for some reason, you don't have access to a virtual IP or load
> >>> balancer, you need to round-robin once through all the brokers
> >>> before failing a TopicMetadataRequest. So unless all the brokers in
> >>> your cluster are down, this should not be a problem.
> >>>
> >>> Thanks,
> >>> Neha
> >>>
> >>>
> >>> On Thu, Oct 10, 2013 at 10:50 PM, hsy541@gmail.com wrote:
> >>>
> >>>> Hi guys,
> >>>>
> >>>> I'm trying to maintain a bunch of simple Kafka consumers that
> >>>> consume messages from the brokers. I know there is a way to send a
> >>>> TopicMetadataRequest to a broker and get a response back, but you
> >>>> have to specify the broker list to query, and a broker might not be
> >>>> available because of some failure. My question is: is there any API
> >>>> I can call to query broker metadata for a topic/partition directly
> >>>> from ZooKeeper? I know I can get that information using the
> >>>> ZooKeeper API, but it isn't in a friendly data structure like
> >>>> TopicMetadata/PartitionMetadata. Thank you!
> >>>>
> >>>> Best,
> >>>> Siyuan
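The query-ZooKeeper-yourself route Siyuan describes is mostly JSON handling once the znode payloads are in hand. A minimal sketch, assuming the 0.8-era broker registration format under /brokers/ids/&lt;id&gt; and taking raw znode strings as input rather than a live ZooKeeper session:

```python
import json

def brokers_from_znodes(znodes):
    """Turn raw /brokers/ids/<id> znode payloads into id -> (host, port).

    znodes: dict mapping broker id (string) to the JSON string stored at
    that znode; "host" and "port" fields as written by a 0.8 broker.
    The actual ZooKeeper reads would happen elsewhere (e.g. via a client
    library); this only covers the parsing into a friendlier structure.
    """
    result = {}
    for broker_id, payload in znodes.items():
        info = json.loads(payload)
        result[int(broker_id)] = (info["host"], info["port"])
    return result
```

A similar parse of the per-partition state znodes would recover leader and ISR information, which is the rest of what TopicMetadata carries.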