Subject: Re: Some question about the data loading
From: Alexey Goncharuk
To: user@ignite.incubator.apache.org
Date: Tue, 11 Aug 2015 14:30:52 -0700

1. When new nodes join a cluster, the partition-to-node assignment changes. Let's assume you have one backup. When you have just one node, it is responsible for all partitions, so your first node will try to load all of them. When the second node joins the grid, it will rebalance existing data from the first node and at the same time try to load the same set of partitions, since we assumed one backup. When the third node joins, it again rebalances existing data from the first two nodes and tries to load 2/3 of the partitions from the database, and so on.
In this scenario all nodes but the last one will try to load more partitions than necessary. The best approach here is to wait until your topology has enough nodes (you can create an event listener for the node join/leave events) and then call loadCache().

2. loadCache() should not run on client nodes because it does not make sense there; I would be surprised if it did.

3. In the ticket you created, your XML configuration does not specify a cache name, but in the code you do use one. So in your cluster you end up with two caches: one with a cache store (defined in the XML) and one without. It looks like you are calling loadCache() on the cache without the store.

2015-08-11 11:31 GMT-07:00 kcheng.mvp:

> In fact I have another question about the data loading.
>
> 1:
> Right now I am going to implement the data loading in the LifecycleBean
> callback method:
>
>     @Override
>     public void onLifecycleEvent(LifecycleEventType evt) throws IgniteException {
>         if (evt == LifecycleEventType.AFTER_NODE_START) {
>             // do data loading here
>         }
>     }
>
> When I start the nodes one by one (I have four nodes), the cluster
> topology changes as each node starts.
>
> I guess the *partition ID* assignment will change as the topology changes.
> In that case some data would not be loaded into the cache, right?
>
> If *LifecycleBean* is not the ideal place to preload the data, what is the
> best practice for preloading data?
>
> 2: There are two methods that trigger data loading: *IgniteCache.loadCache()*
> and *IgniteCache.localLoadCache()*.
>
> Suppose there are 5 nodes (1 client node + 4 server nodes). When
> *IgniteCache.loadCache()* is called from a server node, the client node
> would not load the data, right?
>
> 3:
> In fact I am running into an issue when trying to load the data, and I filed a bug:
> https://issues.apache.org/jira/browse/IGNITE-1234
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Some-question-about-the-data-loading-tp907p912.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
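The arithmetic in answer 1 can be illustrated with a toy affinity function. This is not Ignite's real rendezvous affinity, just a round-robin sketch, but it shows the same effect: with one backup, every node in an n-node topology owns about 2/n of the partitions, so calling loadCache() on a 1- or 2-node topology makes each node load the entire data set.

```java
// Toy model: partition p's primary copy lives on node (p % n) and its
// single backup copy on the "next" node ((p + 1) % n).  ownedFraction()
// returns the share of partitions that node 0 must load via loadCache().
public class PartitionShare {
    static double ownedFraction(int parts, int n) {
        int owned = 0;
        for (int p = 0; p < parts; p++) {
            int primary = p % n;
            int backup = (p + 1) % n;
            if (primary == 0 || backup == 0) // count what node 0 holds
                owned++;
        }
        return (double) owned / parts;
    }

    public static void main(String[] args) {
        // Prints: nodes=1 -> 1.00, nodes=2 -> 1.00, nodes=3 -> 0.67, nodes=4 -> 0.50
        for (int n = 1; n <= 4; n++)
            System.out.printf("nodes=%d -> node 0 loads %.2f of partitions%n",
                n, ownedFraction(1024, n));
    }
}
```

With 1 or 2 nodes every node owns every partition (one copy plus one backup), which is why loading before the topology is complete wastes database reads.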
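The "wait until the topology has enough nodes, then call loadCache()" approach from answer 1 might be sketched as below. This is only a sketch, not a tested implementation: the config path "example-config.xml", the cache name "myCache", and the 4-server threshold are placeholders, and EVT_NODE_JOINED must be enabled via IgniteConfiguration.setIncludeEventTypes() for the listener to fire.

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.events.Event;
import org.apache.ignite.events.EventType;
import org.apache.ignite.lang.IgnitePredicate;

public class LoadOnTopology {
    public static void main(String[] args) {
        final Ignite ignite = Ignition.start("example-config.xml");

        // Trigger the load once, as soon as 4 server nodes are present.
        ignite.events().localListen(new IgnitePredicate<Event>() {
            @Override public boolean apply(Event evt) {
                if (ignite.cluster().forServers().nodes().size() >= 4) {
                    // null predicate: load everything the cache store returns.
                    ignite.cache("myCache").loadCache(null);
                    return false; // stop listening
                }
                return true; // keep listening for further joins
            }
        }, EventType.EVT_NODE_JOINED);
    }
}
```

Since loadCache() is broadcast to all nodes that hold the cache, it is enough for a single node to detect the complete topology and trigger it.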