From: James Stewart <James.Stewart@Optiver.com.au>
To: user@flume.apache.org
Date: Thu, 14 Mar 2013 13:35:35 +1100
Subject: RE: Flume OG Choke Limit Not Working

Hello again,

Further to my previous problem with the Choke decorator under Flume OG, I've tried separating out my 'aggregator' into 3 logical nodes as follows...

OLD:

exec setChokeLimit aggregator.mydomain.com mychoke 150
exec config aggregator.mydomain.com 'collectorSource(35853)' 'batch(100, 1000) gzip choke("mychoke") agentBESink("collector.mydomain.com", 35853)'

NEW:

exec setChokeLimit aggregator.mydomain.com mychoke 150
exec config batcher 'collectorSource(35853)' 'batch(100, 1000) agentBESink("aggregator.mydomain.com", 35854)'
exec config zipper 'collectorSource(35854)' 'gzip agentBESink("aggregator.mydomain.com", 35855)'
exec config choker 'collectorSource(35855)' 'choke("mychoke") agentBESink("collector.mydomain.com", 35853)'

Looking at the stats, it seems that the choke decorator on the 'choker' node is having no effect on the data passing through it. Even when I set 'mychoke' to 10 (which equates to 80 Kbit/sec), I still see regular spikes above 1.5 Mbit/sec.
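(Just to show how I'm arriving at those bit rates - the choke limit is given in KB/sec, so I'm multiplying by 8. This is only my own back-of-the-envelope check, not something from the Flume docs:)

# Back-of-the-envelope conversion of the choke limit (KB/sec) to bit rates.
# Assumes the limit unit really is kilobytes per second, as I read the User Guide.
for limit_kb_per_sec in (150, 10):
    kbit_per_sec = limit_kb_per_sec * 8
    print(f"{limit_kb_per_sec} KB/sec = {kbit_per_sec} Kbit/sec = {kbit_per_sec / 1000} Mbit/sec")
# 150 KB/sec = 1200 Kbit/sec = 1.2 Mbit/sec
# 10 KB/sec = 80 Kbit/sec = 0.08 Mbit/sec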

 

The Flume User Guide says:

"The choke decorator works as follows: when append() is called on the sink to which the choke is attached, the append() call works normally if the amount of data transferred (during a small duration of time) is within the limit assigned to the choke-id corresponding to the choke. If the limit has been exceeded, then append() is blocked for a small duration of time."

Our traffic is quite bursty - could this be a problem for the Choke decorator?
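For what it's worth, here is a rough Python sketch of how I'm reading that paragraph: a byte counter over a small control window, with append() blocking briefly once the window's budget is used up. The class name, window size and accounting are all my own guesses, not the actual Flume OG implementation:

import time

class ChokeSketch:
    """Rough sketch of my reading of the choke decorator; not the real Flume OG code."""

    def __init__(self, limit_kb_per_sec, window_sec=0.1):
        # window_sec is a guess at the "small duration of time" from the User Guide
        self.budget = limit_kb_per_sec * 1024 * window_sec    # bytes allowed per window
        self.window_sec = window_sec
        self.window_start = time.monotonic()
        self.sent = 0                                         # bytes sent in the current window

    def append(self, event_bytes):
        now = time.monotonic()
        if now - self.window_start >= self.window_sec:
            self.window_start, self.sent = now, 0             # start a fresh window
        if self.sent and self.sent + event_bytes > self.budget:
            time.sleep(self.window_sec)                       # over budget: block for a short time
            self.window_start, self.sent = time.monotonic(), 0
        self.sent += event_bytes
        # ...then hand the event on to the wrapped sink...

choke = ChokeSketch(limit_kb_per_sec=150)   # 150 KB/sec, roughly 1.2 Mbit/sec
choke.append(4096)                          # a burst of appends drains the window budget quickly

If that mental model is roughly right, a burst landing at the start of a window could still show up as a brief spike on a coarse traffic graph, but I wouldn't expect the sustained rates I'm seeing.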

 

Any help is much appreciated as I've hit a bit of a wall.

Cheers,

James

From: James Stewart [mailto:James.Stewart@Optiver.com.au]
Sent: Tuesday, 12 March 2013 10:54 AM
To: user@flume.apache.org
Subject: Flume OG Choke Limit Not Working

Hello all,

I'm using Flume OG (unable to upgrade to NG at this stage) and I am having trouble with the choke decorator.

I am aggregating the data flows from several logical nodes at a single 'aggregator' logical node. The data flows should be batched, zipped, choked and then sent on to another 'collector' logical node. I am using the following config to achieve this:

exec setChokeLimit aggregator.mydomain.com mychoke 150
exec config aggregator.mydomain.com 'collectorSource(35853)' 'batch(100, 1000) gzip choke("mychoke") agentBESink("collector.mydomain.com", 35853)'

The choke decorator should limit transfer to 150 KB/sec, which equates to 1.2 Mbit/sec. However I am regularly recording Flume traffic spikes of 5 Mbit/sec and more.

Can anybody suggest what I might be doing wrong? Is it OK to chain the batch, gzip and choke decorators like this, or should they each be in a separate logical node?

 

Thanks,

James

________________________________

Information contained in this communication (including any attachments) is confidential and may be privileged or subject to copyright. If you have received this communication in error you are not authorised to use the information in any way and Optiver requests that you notify the sender by return email, destroy all copies and delete the information from your system. Optiver does not represent, warrant or guarantee that this communication is free from computer viruses or other defects or that the integrity of this communication has been maintained. Any views expressed in this communication are those of the individual sender. Optiver does not accept liability for any loss or damage caused directly or indirectly by this communication or its use.

Please consider the environment before printing this email.
