Subject: Re: [Fixed] CEP sends very large values for gradient and second derivative of load average
From: Nirmal Fernando <nirmal070125@gmail.com>
To: dev@stratos.apache.org
Date: Fri, 7 Nov 2014 21:35:11 +0100

Sorry for the delayed response, Imesh.

On Thu, Nov 6, 2014 at 7:29 PM, Imesh Gunaratne <imesh@apache.org> wrote:
> Thanks for your response Nirmal, please see my thoughts below:
>
> On Thu, Nov 6, 2014 at 7:38 PM, Nirmal Fernando <nirmal070125@gmail.com> wrote:
>> AFAIU if it is statistics, it's all about random data, samples and
>> normalization. You don't use all the values to do estimations. And this
>> is an estimation of the gradient, per se!
>
> True, however the random data needs to be as accurate as possible.

Yes, Imesh, but this was the first step. We were crawling; let's stand up gradually. :-)

>> Well, the statistics we are calculating are for a cluster as a whole,
>> not member-wise, since we autoscale a cluster.
>
> Yes, for autoscaling a cluster the aggregated statistics should be
> calculated against the cluster. However, I do not think that we can mix
> each statistic across members when calculating differences. Different
> members of a cluster might be running at different resource usage levels
> at a given point of time.

In my understanding, members of a cluster are homogeneous from the allocated-resource point of view. It's true that their usage levels differ at a given point of time (which is obvious).

I doubt whether your suggestion is scalable in a system where we have 10s of clusters and 100s of members in each cluster (since we would need to run an execution plan for each member). This would be very costly IMO.

> Therefore aggregation might need to be done at the member level first
> and then on the cluster level. WDYT?
>
> --
> Imesh Gunaratne
> Technical Lead, WSO2
> Committer & PMC Member, Apache Stratos

--
Best Regards,
Nirmal

Nirmal Fernando.
PPMC Member & Committer of Apache Stratos,
Senior Software Engineer, WSO2 Inc.
Blog: http://nirmalfdo.blogspot.com/
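[Editor's illustration, not part of the original thread: the two-level aggregation Imesh proposes — average each member's readings first, then estimate the cluster-level gradient from the aggregated series — could be sketched as below. This is a hypothetical Python sketch, not the actual Stratos CEP execution plan; all names and the least-squares slope choice are illustrative assumptions.]

```python
# Hypothetical sketch of member-level-then-cluster-level aggregation.
# Not the Stratos CEP implementation; names are illustrative.
from collections import defaultdict

def aggregate_per_member(samples):
    """samples: list of (member_id, timestamp, load_average) tuples.
    Averages each member's readings at a timestamp first, then averages
    the per-member means, so one busy member cannot skew differences."""
    per_ts = defaultdict(lambda: defaultdict(list))
    for member, ts, load in samples:
        per_ts[ts][member].append(load)
    cluster_series = {}
    for ts, members in per_ts.items():
        member_means = [sum(v) / len(v) for v in members.values()]
        cluster_series[ts] = sum(member_means) / len(member_means)
    return cluster_series

def gradient(series):
    """Least-squares slope of load vs. time over the window -- an
    estimate of the first derivative that avoids mixing raw samples
    from different members when taking differences."""
    ts = sorted(series)
    n = len(ts)
    mean_t = sum(ts) / n
    mean_y = sum(series[t] for t in ts) / n
    num = sum((t - mean_t) * (series[t] - mean_y) for t in ts)
    den = sum((t - mean_t) ** 2 for t in ts)
    return num / den

samples = [
    ("m1", 0, 0.2), ("m2", 0, 0.4),
    ("m1", 1, 0.3), ("m2", 1, 0.5),
    ("m1", 2, 0.4), ("m2", 2, 0.6),
]
series = aggregate_per_member(samples)
print(round(gradient(series), 3))  # slope of the cluster-level series: 0.1
```

Fitting a slope over the whole window, rather than differencing consecutive raw events, is one way to avoid the very large spurious gradients the thread's subject line refers to when events from different members interleave.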