From: Zhen Zhang
To: user@helix.apache.org
Subject: RE: Bug in offlining of Bucketized resources
Date: Fri, 6 Mar 2015 18:37:37 +0000
Message-ID: <23CA11DC8830BA44A37C6B44B14D013CB9D1F064@ESV4-MB02.linkedin.biz>
Hi Varun,

Thanks for pointing out the bug. I will check it. Meanwhile, you can try disabling the resource to bring all partitions to OFFLINE like this:
helix-admin.sh --zkSvr localhost:2181 --enableResource clusterName resourceName false
Note that this will put all partitions to OFFLINE, but not drop them. If you want to drop them, simply deleting the ideal state will do.
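For reference, the two operations side by side (a sketch assuming the standard helix-admin.sh CLI with ZooKeeper at localhost:2181; the cluster and resource names are placeholders):

```shell
# Disable the resource: the controller moves all of its partitions to
# OFFLINE, but the partitions and the ideal state remain in the cluster.
helix-admin.sh --zkSvr localhost:2181 --enableResource clusterName resourceName false

# Drop the resource entirely: removes its ideal state, after which the
# controller transitions replicas through OFFLINE to DROPPED.
helix-admin.sh --zkSvr localhost:2181 --dropResource clusterName resourceName
```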

Thanks,
Jason

From: Varun Sharma [varun@pinterest.com]
Sent: Friday, March 06, 2015 10:32 AM
To: user@helix.apache.org
Subject: Bug in offlining of Bucketized resources

Hi folks,

I am seeing a bug with how ideal state is set for bucketized resources. I have a state model factory with a default state of OFFLINE and a transition from OFFLINE->DROPPED. To offline a resource, I simply create an ideal state with no partition assignments, and that has worked for me in the past.

However, for bucketized resources, this approach does not seem to work. When I do a set via setResourceIdealState, the ideal state is not updated to the expected value. Is there any other way to gracefully offline a bucketized resource?
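For context, the approach described above can be sketched roughly like this (a sketch assuming Helix's ZKHelixAdmin Java API; the cluster name, resource name, and ZooKeeper address are placeholders):

```java
import org.apache.helix.manager.zk.ZKHelixAdmin;
import org.apache.helix.model.IdealState;

public class OfflineResource {
    public static void main(String[] args) {
        ZKHelixAdmin admin = new ZKHelixAdmin("localhost:2181");

        // Read the current ideal state of the resource.
        IdealState idealState =
            admin.getResourceIdealState("myCluster", "myResource");

        // Clear all partition assignments; with an OFFLINE default state,
        // the controller should then move every replica to OFFLINE.
        idealState.getRecord().getListFields().clear();
        idealState.getRecord().getMapFields().clear();

        // Write the now-empty ideal state back. For bucketized resources
        // (bucketSize > 0), this write reportedly does not take effect,
        // which is the bug discussed in this thread.
        admin.setResourceIdealState("myCluster", "myResource", idealState);
    }
}
```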

Thanks
Varun