Subject: Re: Question about participant dropping a subset of assigned partitions
From: kishore g <g.kishore@gmail.com>
To: user@helix.incubator.apache.org
Date: Mon, 11 Feb 2013 23:56:32 -0800

Hi Abhishek,

Regarding the standalone agent, Santiago has started this thread:
http://helix-dev.markmail.org/message/5h2fogbigexnhb4s. I think it supports
recursion, where one agent can manage another cluster. This is still in the
design phase, and there are multiple use cases that need a similar solution.
Feel free to contribute to the design/implementation on this JIRA:
https://issues.apache.org/jira/browse/HELIX-45.

If you are setting only the ideal state, I suggest you use the
CustomCodeInvoker. You can register for various changes (nodes
starting/stopping, etc.). Take a look at this test:
https://git-wip-us.apache.org/repos/asf?p=incubator-helix.git;a=blob;f=helix-core/src/test/java/org/apache/helix/integration/TestHelixCustomCodeRunner.java;h=9bf79b8b34c14b7ce1e3fc45a45ceb19fdac4874;hb=437eb42e

This has the advantage that you can still benefit from features like
constraint enforcement and throttling that come with the default controller.
And as I mentioned earlier, this will allow you to upgrade to a newer Helix
version without any issues.

thanks,
Kishore G

On Thu, Feb 7, 2013 at 10:02 AM, Abhishek Rai <abhishekrai@gmail.com> wrote:

> Thanks again for the thoughtful feedback Kishore.
>
> On Sat, Feb 2, 2013 at 12:20 AM, kishore g <g.kishore@gmail.com> wrote:
>
>> Thanks Abhishek, you are thinking in the right direction.
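The CustomCodeInvoker approach suggested at the top of this mail could look roughly like the sketch below, modeled on the TestHelixCustomCodeRunner example linked above. The cluster name, instance name, and ZooKeeper address are placeholders, and the callback body is application-specific; this is a sketch, not a definitive recipe.

```java
import org.apache.helix.HelixConstants.ChangeType;
import org.apache.helix.HelixManager;
import org.apache.helix.HelixManagerFactory;
import org.apache.helix.InstanceType;
import org.apache.helix.NotificationContext;
import org.apache.helix.participant.CustomCodeCallbackHandler;
import org.apache.helix.participant.HelixCustomCodeRunner;

public class IdealStateUpdaterProcess {
  public static void main(String[] args) throws Exception {
    // Placeholder cluster/instance/ZK names.
    String zkAddr = "localhost:2181";
    HelixManager manager = HelixManagerFactory.getZKHelixManager(
        "MyCluster", "proxy_12918", InstanceType.PARTICIPANT, zkAddr);
    manager.connect();

    // Application-specific logic: recompute the ideal state whenever
    // the set of live instances changes.
    CustomCodeCallbackHandler callback = new CustomCodeCallbackHandler() {
      @Override
      public void onCallback(NotificationContext context) {
        // e.g. read the live instances, compute a new partition
        // assignment, and write it back via the admin API.
      }
    };

    // Helix elects exactly one live node to run this callback, and fails
    // it over to another node if the current one dies.
    new HelixCustomCodeRunner(manager, zkAddr)
        .invoke(callback)
        .on(ChangeType.LIVE_INSTANCE)
        .usingLeaderStandbyModel("idealStateUpdater")
        .start();
  }
}
```

Run one copy of this alongside each Java proxy; Helix's LeaderStandby election ensures only one of them is actively invoking the callback at a time.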
>>
>> Your point about disable-enable happening too quickly is valid. However,
>> how fast can you detect the C++ process crash and restart it? Is the C++
>> process running in daemon mode so that it is restarted automatically, or
>> will the helix-java-proxy agent be responsible for starting the C++
>> process?
>
> Both options are possible at this time, but I was evaluating the
> disable-partition suggestion for soundness. However, I agree that it's a
> highly unlikely scenario.
>
>> The messaging solution will work, but the alternative of modeling each
>> C++ process as a participant makes sense and is the right thing to do.
>>
>> Can you provide more details on "it's ok if the java-proxy dies"? Why
>> does it not affect correctness? When the agent is restarted, does it
>> re-register the C++ instances? Do you plan to store the C++ pid in Helix
>> so that the agent can remember the processes it had started?
>
> Death or restarts of the Java proxy may result in temporary unavailability
> of some data until the controller rebalances the lost partitions. Death or
> restarts of the C++ DDS process also affect availability, but if the Java
> proxy stays up, then (1) the controller may not notice the unavailability,
> and (2) DDS clients may continue to think that their data is reachable
> when it's not. Sorry, I misused the term "correctness" for describing the
> weaker availability in the latter case.
>
>> The reason I am asking these questions is that there is a similar effort
>> on writing a standalone Helix agent that acts as a proxy for other
>> processes started on the node. In general this approach seems to be
>> quite useful and might have some common functionality that can be
>> leveraged across multiple implementations.
>
> That's great! The proxy agent will be very useful. I wonder if a good
> goal would be to enable recursion in the proxy, such that the participant
> itself can be a controller for another Helix cluster. Thus the
> proxy-participant could delegate its set of assigned resources to another
> set of participants. This may be trivially true.
>
>> As for writing the custom rebalancer, you have two options: 1) as you
>> mentioned, you can write it inside the controller; 2) there is another
>> feature called CustomCodeInvoker. You can basically write your logic to
>> change the ideal state in it and simply run it along with your Java
>> proxy, and Helix will ensure it is actively running on only one node.
>> This has an overhead of around 50-100ms in reacting to failure but is
>> much cleaner. If you are doing 1), you need to be careful not to change
>> existing code but simply to add a new stage in the pipeline. That way
>> you will be able to upgrade Helix to get new features without breaking
>> your functionality.
>
> Thanks for the suggestions. I'm doing 1) but in a slightly different way;
> please let me know if I'm totally off in the wrong direction :-) Or if
> I'm likely to run into problems with future Helix upgrades.
>
> I've subclassed GenericHelixController and implemented all listener
> callbacks. This subclass registers for all events and ensures that the
> GenericHelixController listeners run for each event. Internally, it
> implements the scheduling logic that it needs and applies it via
> ZKHelixAdmin.setResourceIdealState(). Do you see any clear benefits of
> changing this to insert a new stage in GenericHelixController's pipeline?
>
> I'm following a similar scheme in the custom participant, except that it
> directly registers the listeners with Helix without using a
> GenericHelixController. I will take a closer look at CustomCodeInvoker;
> it looks very useful.
>
>> On the topic of "Helix does not have a C++ library", do you think it
>> would make things easy if there was a C++ library? It may not be that
>> hard to write, because the only thing that needs to be written in C++
>> is the participant code, which simply acts on messages from the
>> controller. The majority of the code is in the controller, and that can
>> still run as Java. We are working on a Python agent, and I hope someone
>> will write a C++ agent.
>
> Thanks for the update. Yeah, I am thinking of taking a stab at it in 1-2
> months if it still seems useful.
>
>> One of the good things about modeling each C++ process as an instance
>> is that in the future, if there is a C++ Helix agent, you can easily
>> migrate to it.
>
> Cool.
> Thanks again!
> Abhishek
>
>> Hope this helps.
>>
>> thanks,
>> Kishore G
>>
>> On Fri, Feb 1, 2013 at 9:44 PM, Terence Yim <chtyim@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> What do you mean by the "fake" live instance that you mentioned? I
>>> think the Java proxy could simply create one HelixManager participant
>>> per C++ instance (hence there are N HelixManager instances in the Java
>>> proxy) and disconnect them accordingly based on the liveness of the
>>> C++ process.
>>>
>>> Terence
>>>
>>> On Fri, Feb 1, 2013 at 7:19 PM, Abhishek Rai <abhishekrai@gmail.com> wrote:
>>>
>>>> Thanks for the quick and thoughtful response Kishore! Comments inline.
>>>>
>>>> On Fri, Feb 1, 2013 at 6:33 PM, kishore g <g.kishore@gmail.com> wrote:
>>>>
>>>>> Hi Abhishek,
>>>>>
>>>>> Thanks for the good question. We have two options (listed later in
>>>>> the email) for allowing a participant to drop partitions. However,
>>>>> it works only in two (auto, custom) of the three modes
>>>>> (auto_rebalance, auto, custom) the controller supports. More info
>>>>> about the modes here:
>>>>> http://helix.incubator.apache.org/Features.html
>>>>>
>>>>> Can you let me know which mode you are running it in?
>>>>
>>>> We are planning to use CUSTOM mode since we have some specific
>>>> requirements about (1) the desired state for each partition, and (2)
>>>> scheduling partitions to instances. Our requirements for (1) are not
>>>> expressible in the FSM framework.
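Terence's suggestion earlier in the thread (one HelixManager participant per C++ process inside the Java proxy) could be sketched as below. The cluster name, ZooKeeper address, and the RPC-forwarding state model factory are hypothetical placeholders, so treat this as a shape for the idea rather than working code.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.apache.helix.HelixManager;
import org.apache.helix.HelixManagerFactory;
import org.apache.helix.InstanceType;

public class CppProcessProxy {
  private static final String CLUSTER = "MyCluster";       // placeholder
  private static final String ZK_ADDR = "localhost:2181";  // placeholder

  // One Helix participant connection per managed C++ process.
  private final Map<String, HelixManager> managers =
      new ConcurrentHashMap<String, HelixManager>();

  // Called when the proxy starts (or restarts) a C++ process.
  public void onCppProcessStarted(String instanceName) throws Exception {
    HelixManager manager = HelixManagerFactory.getZKHelixManager(
        CLUSTER, instanceName, InstanceType.PARTICIPANT, ZK_ADDR);
    // RpcForwardingStateModelFactory is a hypothetical factory whose state
    // models forward each transition to the C++ process over RPC.
    manager.getStateMachineEngine().registerStateModelFactory(
        "MasterSlave", new RpcForwardingStateModelFactory(instanceName));
    manager.connect();
    managers.put(instanceName, manager);
  }

  // Called when the proxy detects that a C++ process has died.
  // Disconnecting kills only that instance's ZooKeeper session, so the
  // controller rebalances that process's partitions without touching the
  // other C++ processes the same proxy manages.
  public void onCppProcessDied(String instanceName) {
    HelixManager manager = managers.remove(instanceName);
    if (manager != null) {
      manager.disconnect();
    }
  }
}
```

The appeal of this shape is that process death maps directly onto Helix's existing liveness mechanism, rather than requiring explicit disable/enable calls per partition.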
>>>>
>>>>> Also, is it sufficient if the disabled partitions are re-assigned
>>>>> uniformly to other nodes, or do you want partitions from other nodes
>>>>> to be assigned to this node?
>>>>
>>>> Once a participant disables some partitions, it's alright for the
>>>> default rebalancing logic to kick in.
>>>>
>>>>> Also, it will help us if you can tell us the use case where you need
>>>>> this feature.
>>>>
>>>> Sure, I'm still trying to hash things out, but here is a summary. The
>>>> DDS nodes are C++ processes, which is the crux of the problem. AFAIK
>>>> Helix does not have a C++ library, so I'm planning to use a
>>>> participant written in Java, which runs as a separate process on the
>>>> same node, receives state transitions from the controller, and
>>>> proxies them to the C++ process via an RPC interface. The problem is
>>>> that the C++ process and the Java-Helix proxy can fail independently.
>>>> I'm not worried about the Java-Helix proxy crashing, since that would
>>>> knock off all partitions in the C++ process from Helix's view, which
>>>> does not affect correctness.
>>>>
>>>> But when the C++ process crashes, the Java-Helix proxy needs to let
>>>> the controller know ASAP, so the Helix "external view" can be
>>>> updated, rebalancing can start, etc. One alternative is to invoke
>>>> "manager.disconnect()" from the Helix proxy. But this would knock off
>>>> all partitions managed by the proxy (I want to retain the ability for
>>>> the proxy to manage multiple C++ programs). Hence the question about
>>>> selectively dropping certain partitions, viz., the ones in a crashed
>>>> C++ program.
>>>>
>>>>> To summarize, you can achieve this in AUTO and CUSTOM but not in
>>>>> AUTO_REBALANCE mode, because there the goal of the controller is
>>>>> always to assign the partitions evenly among nodes. But you bring up
>>>>> a good use case; depending on the behavior, we might be able to
>>>>> support it easily.
>>>>>
>>>>> 1. Disable a partition on a given node: disabling a partition on a
>>>>> particular node should automatically trigger rebalancing. This can
>>>>> be done either by the admin using the command line tool:
>>>>>
>>>>> helix-admin.sh --zkSvr <ZookeeperServerAddress(Required)>
>>>>>   --enablePartition <clusterName instanceName resourceName partitionName true/false>
>>>>>
>>>>> or programmatically, if you have access to the manager, you can
>>>>> invoke this:
>>>>>
>>>>> manager.getClusterManagementTool().enablePartition(enabled,
>>>>>     clusterName, instanceName, resourceName, partitionNames);
>>>>>
>>>>> This can be done in auto and custom.
>>>>
>>>> I am not sure this will have the right effect in the scenario
>>>> described above. Specifically, the Java proxy would need to disable
>>>> all the crashed partitions, and then re-enable them when the C++ DDS
>>>> process reboots successfully. If the disable-enable transitions
>>>> happen too quickly, could the controller possibly miss the transition
>>>> for some partition and not do anything?
>>>>
>>>>> 2. The other option is to change the mapping of partition --> node
>>>>> in the ideal state. (You can do this programmatically in custom
>>>>> mode, and somewhat in auto mode as well.) Doing this will send
>>>>> transitions to the node to drop the partitions and reassign them to
>>>>> other nodes.
>>>>
>>>> Yes, this seems like the most logical thing. The Java proxy will
>>>> probably need to send a message to the controller to trigger this
>>>> change in the ideal states of all crashed partitions. The messaging
>>>> API would probably be useful here.
>>>>
>>>> Another alternative I'm considering is for the Java proxy to add a
>>>> "fake" instance for each C++ process that it spawns locally. The
>>>> custom rebalancer (that I'd write inside the controller) would then
>>>> schedule the C++ DDS partitions onto these "fake" live instances.
>>>> When the C++ process crashes, the Java proxy would simply disconnect
>>>> the corresponding fake instance's manager. Does this make sense to
>>>> you? Or do you have any other thoughts?
>>>>
>>>> Thanks again for your thoughtful feedback!
>>>> Abhishek
>>>>
>>>>> Thanks,
>>>>> Kishore G
>>>>>
>>>>> On Fri, Feb 1, 2013 at 5:44 PM, Abhishek Rai <abhishekrai@gmail.com> wrote:
>>>>>
>>>>>> Hi Helix users!
>>>>>>
>>>>>> I'm a Helix newbie and need some advice about a use case. I'm using
>>>>>> Helix to manage a storage system which fits the description of a
>>>>>> DDS ("distributed data service", as defined in the Helix SOCC
>>>>>> paper). Each participant hosts a bunch of partitions of a resource,
>>>>>> as assigned by the controller. The set of partitions assigned to a
>>>>>> participant changes dynamically as the controller rebalances
>>>>>> partitions, nodes join or leave, etc.
>>>>>>
>>>>>> Additionally, I need the ability for a participant to "drop" a
>>>>>> subset of partitions currently assigned to it. When a partition is
>>>>>> dropped by a participant, Helix would remove the partition from the
>>>>>> current state of the instance, update the external view, and make
>>>>>> the partition available for rebalancing by the controller. Does the
>>>>>> Java API provide a way of accomplishing this? If not, are there any
>>>>>> workarounds? Or, was there a design rationale to disallow such
>>>>>> actions from the participant?
>>>>>>
>>>>>> Thanks,
>>>>>> Abhishek
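The two options Kishore lists in the thread (disable the crashed partitions, or rewrite the partition-to-node mapping in the ideal state) can both be driven through the admin API, as in the rough sketch below. The cluster, instance, resource, and partition names are placeholders, and the ideal-state edit is left as application-specific:

```java
import java.util.Arrays;
import java.util.List;

import org.apache.helix.HelixAdmin;
import org.apache.helix.manager.zk.ZKHelixAdmin;
import org.apache.helix.model.IdealState;

public class CrashedPartitionHandler {
  public static void main(String[] args) {
    HelixAdmin admin = new ZKHelixAdmin("localhost:2181");
    List<String> crashed = Arrays.asList("MyDB_0", "MyDB_3");

    // Option 1: disable the crashed partitions on the proxy's instance.
    // The controller reacts by moving their replicas to other nodes.
    admin.enablePartition(false, "MyCluster", "proxy_12918", "MyDB", crashed);
    // ... later, once the C++ process has restarted cleanly:
    admin.enablePartition(true, "MyCluster", "proxy_12918", "MyDB", crashed);

    // Option 2 (CUSTOM mode): rewrite the partition -> node mapping
    // in the ideal state directly.
    IdealState idealState = admin.getResourceIdealState("MyCluster", "MyDB");
    // ... edit idealState's per-partition maps to drop or reassign the
    // crashed partitions (application-specific) ...
    admin.setResourceIdealState("MyCluster", "MyDB", idealState);
  }
}
```

Note that `enablePartition` here matches the `manager.getClusterManagementTool().enablePartition(...)` call quoted in the thread; `HelixAdmin` is the same interface reached through the manager.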