From: dbarnes@apache.org
To: commits@geode.incubator.apache.org
Date: Wed, 05 Oct 2016 00:10:27 -0000
Subject: [45/51] [partial] incubator-geode git commit: GEODE-1964: native client documentation (note: contains references to images in the geode-docs directories)

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ff80a931/geode-docs/developing/events/how_events_work.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/events/how_events_work.html.md.erb b/geode-docs/developing/events/how_events_work.html.md.erb
deleted file mode 100644
index 4553582..0000000
--- a/geode-docs/developing/events/how_events_work.html.md.erb
+++ /dev/null
@@ -1,94 +0,0 @@
---
title: How Events Work
---

Members in your Geode distributed system receive cache
updates from other members through cache events. The other members can be peers to the member, clients or servers or other distributed systems. - -## Events Features - -These are the primary features of Geode events: - -- Content-based events -- Asynchronous event notifications with conflation -- Synchronous event notifications for low latency -- High availability through redundant messaging queues -- Event ordering and once and only-once delivery -- Distributed event notifications -- Durable subscriptions -- Continuous querying - -## Types of Events - -There are two categories of events and event handlers. - -- Cache events in the caching API are used by applications with a cache. Cache events provide detail-level notification for changes to your data. Continuous query events are in this category. -- Administrative events in the administration API are used by administrative applications without caches. - -Both kinds of events can be generated by a single member operation. - -**Note:** -You can handle one of these categories of events in a single system member. You cannot handle both cache and administrative events in a single member. - -Because Geode maintains the order of administrative events and the order of cache events separately, using cache events and administrative events in a single process can cause unexpected results. - -## Event Cycle - -The following steps describe the event cycle: - -1. An operation begins, such as data put or a cache close. -2. The operation execution generates these objects: - - An object of type `Operation` that describes the method that triggered the event. - - An event object that describes the event, such as the member and region where the operation originated. - -3. The event handlers that can handle the event are called and passed the event objects. Different event types require different handler types in different locations. If there is no matching event handler, that does not change the effect of the operation, which happens as usual. -4. When the handler receives the event, it triggers the handler’s callback method for this event. The callback method can hand off the event object as input to another method. Depending on the type of event handler, the callbacks can be triggered before or after the operation. The timing depends on the event handler, not on the event itself. - **Note:** - For transactions, after-operation listeners receive the events after the transaction has committed. - -5. If the operation is distributed, so that it causes follow-on operations in other members, those operations generate their own events, which can be handled by their listeners in the same way. - -## Event Objects - -Event objects come in several types, depending on the operation. Some operations generate multiple objects of different types. All event objects contain data describing the event, and each event type carries slightly different kinds of data appropriate to its matching operation. An event object is stable. For example, its content does not change if you pass it off to a method on another thread. - -For cache events, the event object describes the operation performed in the local cache. If the event originated remotely, it describes the local application of the remote entry operation, not the remote operation itself. The only exception is when the local region has an empty data policy; then the event carries the information for the remote (originating) cache operation. 
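
As a rough illustration of the kind of data an event object carries, the following minimal sketch reads a few descriptive fields of an `EntryEvent` from a cache listener callback. The listener class itself is hypothetical, but the accessors shown are part of the `EntryEvent`/`CacheEvent` interfaces.

``` pre
import org.apache.geode.cache.EntryEvent;
import org.apache.geode.cache.util.CacheListenerAdapter;

// Hypothetical listener that only inspects the event object's descriptive fields.
public class LoggingListener extends CacheListenerAdapter<String, String> {
  @Override
  public void afterUpdate(EntryEvent<String, String> event) {
    // The event object is stable, so these values can safely be read here
    // or handed off to a method running on another thread.
    System.out.println("region    = " + event.getRegion().getName());
    System.out.println("operation = " + event.getOperation());   // e.g. UPDATE
    System.out.println("key       = " + event.getKey());
    System.out.println("old value = " + event.getOldValue());
    System.out.println("new value = " + event.getNewValue());
    System.out.println("remote?   = " + event.isOriginRemote()); // true if the change originated in another member
  }
}
```
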
- -## Event Distribution - -After a member processes an event in its local cache, it distributes it to remote caches according to the member's configuration and the configurations of the remote caches. For example, if a client updates its cache, the update is forwarded to the client's server. The server distributes the update to its peers and forwards it to any other clients according to their interest in the data entry. If the server system is part of a multi-site deployment and the data region is configured to use a gateway sender, then the gateway sender also forwards the update to a remote site, where the update is further distributed and propagated. - -## Event Handlers and Region Data Storage - -You can configure a region for no local data storage and still send and receive events for the region. Conversely, if you store data in the region, the cache is updated with data from the event regardless of whether you have any event handlers installed. - -## Multiple Listeners - -When multiple listeners are installed, as can be done with cache listeners, the listeners are invoked sequentially in the order they were added to the region or cache. Listeners are executed one at a time. So, unless you program a listener to pass off processing to another thread, you can use one listener's work in later listeners. - -## Event Ordering - -During a cache operation, event handlers are called at various stages of the operation. Some event handlers are called before a region update and some are called after the region update operation. Depending on the type of event handler being called, the event handler can receive the events in-order or out-of-order in which they are applied on Region. - -- `CacheWriter` and `AsyncEventListener` always receive events in the order in which they are applied on region. -- `CacheListener` and `CqListener` can receive events in a different order than the order in which they were applied on the region. - -**Note:** -An `EntryEvent` contains both the old value and the new value of the entry, which helps to indicate the value that was replaced by the cache operation on a particular key. - -- **[Peer-to-Peer Event Distribution](../../developing/events/how_cache_events_work.html)** - - When a region or entry operation is performed, Geode distributes the associated events in the distributed system according to system and cache configurations. - -- **[Client-to-Server Event Distribution](../../developing/events/how_client_server_distribution_works.html)** - - Clients and servers distribute events according to client activities and according to interest registered by the client in server-side cache changes. - -- **[Multi-Site (WAN) Event Distribution](../../developing/events/how_multisite_distribution_works.html)** - - Geode distributes a subset of cache events between distributed systems, with a minimum impact on each system's performance. Events are distributed only for regions that you configure to use a gateway sender for distribution. - -- **[List of Event Handlers and Events](../../developing/events/list_of_event_handlers_and_events.html)** - - Geode provides many types of events and event handlers to help you manage your different data and application needs. 
- - http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ff80a931/geode-docs/developing/events/how_multisite_distribution_works.html.md.erb ---------------------------------------------------------------------- diff --git a/geode-docs/developing/events/how_multisite_distribution_works.html.md.erb b/geode-docs/developing/events/how_multisite_distribution_works.html.md.erb deleted file mode 100644 index c43d850..0000000 --- a/geode-docs/developing/events/how_multisite_distribution_works.html.md.erb +++ /dev/null @@ -1,51 +0,0 @@ ---- -title: Multi-Site (WAN) Event Distribution ---- - -Geode distributes a subset of cache events between distributed systems, with a minimum impact on each system's performance. Events are distributed only for regions that you configure to use a gateway sender for distribution. - -## Queuing Events for Distribution - -In regions that are configured with one or more gateway senders (`gateway-sender-ids` attribute), events are automatically added to a gateway sender queue for distribution to other sites. Events that are placed in a gateway sender queue are distributed asynchronously to remote sites. For serial gateway queues, the ordering of events sent between sites can be preserved using the `order-policy` attribute. - -If a queue becomes too full, it is overflowed to disk to keep the member from running out of memory. You can optionally configure the queue to be persisted to disk (with the `enable-persistence` `gateway-sender` attribute). With persistence, if the member that manages the queue goes down, the member picks up where it left off after it restarts. - -## Operation Distribution from a Gateway Sender - -The multi-site installation is designed for minimal impact on distributed system performance, so only the farthest-reaching entry operations are distributed between sites. - -These operations are distributed: - -- entry create -- entry put -- entry distributed destroy, providing the operation is not an expiration action - -These operations are not distributed: - -- get -- invalidate -- local destroy -- expiration actions of any kind -- region operations - -## How a Gateway Sender Processes Its Queue - -Each primary gateway sender contains a processor thread that reads messages from the queue, batches them, and distributes the batches to a gateway receiver in a remote site. To process the queue, a gateway sender thread takes the following actions: - -1. Reads messages from the queue -2. Creates a batch of the messages -3. Synchronously distributes the batch to the other site and waits for a reply -4. Removes the batch from the queue after the other site has successfully replied - -Because the batch is not removed from the queue until after the other site has replied, the message cannot get lost. On the other hand, in this mode a message could be processed more than once. If a site goes offline in the middle of processing a batch of messages, then that same batch will be sent again once the site is back online. - -You can configure the batch size for messages as well as the batch time interval settings. A gateway sender processes a batch of messages from the queue when either the batch size or the time interval is reached. In an active network, it is likely that the batch size will be reached before the time interval. In an idle network, the time interval will most likely be reached before the batch size. This may result in some network latency that corresponds to the time interval. 
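
As a sketch of where these knobs live in the API (the sender ID "ny", the remote distributed system ID 2, and the numeric values below are placeholder assumptions, not recommended settings), batch size and batch time interval are set on the gateway sender when it is created:

``` pre
import org.apache.geode.cache.Cache;
import org.apache.geode.cache.CacheFactory;
import org.apache.geode.cache.wan.GatewaySender;
import org.apache.geode.cache.wan.GatewaySenderFactory;

Cache cache = new CacheFactory().create();

GatewaySenderFactory senderFactory = cache.createGatewaySenderFactory();
senderFactory.setParallel(false);            // serial sender, so an order-policy can apply
senderFactory.setBatchSize(100);             // dispatch when 100 events are queued ...
senderFactory.setBatchTimeInterval(1000);    // ... or after 1000 ms, whichever comes first
senderFactory.setPersistenceEnabled(true);   // persist the queue instead of only overflowing it

// "ny" and remote distributed-system-id 2 are example values.
GatewaySender sender = senderFactory.create("ny", 2);
```

The region to be distributed would still reference this sender through its `gateway-sender-ids` attribute, as described above.
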
- -## How a Gateway Sender Handles Batch Processing Failure - -Exceptions can occur at different points during batch processing: - -- The gateway receiver could fail with acknowledgment. If processing fails while the gateway receiver is processing a batch, the receiver replies with a failure acknowledgment that contains the exception, including the identity of the message that failed, and the ID of the last message that it successfully processed. The gateway sender then removes the successfully processed messages and the failed message from the queue and logs an exception with the failed message information. The sender then continues processing the messages remaining in the queue. -- The gateway receiver can fail without acknowledgment. If the gateway receiver does not acknowledge a sent batch, the gateway sender does not know which messages were successfully processed. In this case the gateway sender re-sends the entire batch. -- No gateway receivers may be available for processing. If a batch processing exception occurs because there are no remote gateway receivers available, then the batch remains in the queue. The gateway sender waits for a time, and then attempts to re-send the batch. The time period between attempts is five seconds. The existing server monitor continuously attempts to connect to the gateway receiver, so that a connection can be made and queue processing can continue. Messages build up in the queue and possibly overflow to disk while waiting for the connection. - http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ff80a931/geode-docs/developing/events/implementing_cache_event_handlers.html.md.erb ---------------------------------------------------------------------- diff --git a/geode-docs/developing/events/implementing_cache_event_handlers.html.md.erb b/geode-docs/developing/events/implementing_cache_event_handlers.html.md.erb deleted file mode 100644 index 583fcdc..0000000 --- a/geode-docs/developing/events/implementing_cache_event_handlers.html.md.erb +++ /dev/null @@ -1,136 +0,0 @@ ---- -title: Implementing Cache Event Handlers ---- - -Depending on your installation and configuration, cache events can come from local operations, peers, servers, and remote sites. Event handlers register their interest in one or more events and are notified when the events occur. - - -For each type of handler, Geode provides a convenience class with empty stubs for the interface callback methods. - -**Note:** -Write-behind cache listeners are created by extending the `AsyncEventListener` interface, and they are configured with an `AsyncEventQueue` that you assign to one or more regions. - -**Procedure** - -1. Decide which events your application needs to handle. For each region, decide which events you want to handle. For the cache, decide whether to handle transaction events. -2. For each event, decide which handlers to use. The `*Listener` and `*Adapter` classes in `org.apache.geode.cache.util` show the options. -3. Program each event handler: - - 1. Extend the handler's adapter class. - 2. If you want to declare the handler in the `cache.xml`, implement the `org.apache.geode.cache.Declarable` interface as well. - 3. Implement the handler's callback methods as needed by your application. - - **Note:** - Improperly programmed event handlers can block your distributed system. Cache events are synchronous. 
To modify your cache or perform distributed operations based on events, avoid blocking your system by following the guidelines in [How to Safely Modify the Cache from an Event Handler Callback](writing_callbacks_that_modify_the_cache.html#writing_callbacks_that_modify_the_cache). - - Example: - - ``` pre - package myPackage; - import org.apache.geode.cache.Declarable; - import org.apache.geode.cache.EntryEvent; - import org.apache.geode.cache.util.CacheListenerAdapter; - import java.util.Properties; - - public class MyCacheListener extends CacheListenerAdapter implements Declarable { - /** Processes an afterCreate event. - * @param event The afterCreate EntryEvent received - */ - public void afterCreate(EntryEvent event) { - String eKey = event.getKey(); - String eVal = event.getNewValue(); - ... do work with event info - } - ... process other event types - } - - ``` - -4. Install the event handlers, either through the API or the `cache.xml`. - - XML Region Event Handler Installation: - - ``` pre - - - - - myPackage.MyCacheListener - - - - ``` - - Java Region Event Handler Installation: - - ``` pre - tradesRegion = cache.createRegionFactory(RegionShortcut.PARTITION) - .addCacheListener(new MyCacheListener()) - .create("trades"); - ``` - - XML Transaction Writer and Listener Installation: - - ``` pre - - - - com.company.data.MyTransactionListener - - jdbc:cloudscape:rmi:MyData - - - - . . . - - - com.company.data.MyTransactionWriter - - jdbc:cloudscape:rmi:MyData - - - - - . . . - - ``` - -The event handlers are initialized automatically during region creation when you start the member. - -## Installing Multiple Listeners on a Region - -XML: - -``` pre - - - . . . - - myCacheListener1 - - - myCacheListener2 - - - myCacheListener3 - - - -``` - -API: - -``` pre -CacheListener listener1 = new myCacheListener1(); -CacheListener listener2 = new myCacheListener2(); -CacheListener listener3 = new myCacheListener3(); - -Region nr = cache.createRegionFactory() - .initCacheListeners(new CacheListener[] - {listener1, listener2, listener3}) - .setScope(Scope.DISTRIBUTED_NO_ACK) - .create(name); - -``` http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ff80a931/geode-docs/developing/events/implementing_durable_client_server_messaging.html.md.erb ---------------------------------------------------------------------- diff --git a/geode-docs/developing/events/implementing_durable_client_server_messaging.html.md.erb b/geode-docs/developing/events/implementing_durable_client_server_messaging.html.md.erb deleted file mode 100644 index d60f55a..0000000 --- a/geode-docs/developing/events/implementing_durable_client_server_messaging.html.md.erb +++ /dev/null @@ -1,182 +0,0 @@ ---- -title: Implementing Durable Client/Server Messaging ---- - - -Use durable messaging for subscriptions that you need maintained for your clients even when your clients are down or disconnected. You can configure any of your event subscriptions as durable. Events for durable queries and subscriptions are saved in queue when the client is disconnected and played back when the client reconnects. Other queries and subscriptions are removed from the queue. - -Use durable messaging for client/server installations that use event subscriptions. - -These are the high-level tasks described in this topic: - -1. Configure your client as durable -2. Decide which subscriptions should be durable and configure accordingly -3. 
Program your client to manage durable messaging for disconnect, reconnect, and event handling - -## Configure the Client as Durable - -Use one of the following methods: - -- `gemfire.properties` file: - - ``` pre - durable-client-id=31 - durable-client-timeout=200 - ``` - -- Java: - - ``` pre - Properties props = new Properties(); - props.setProperty("durable-client-id", "31"); - props.setProperty("durable-client-timeout", "" + 200); - CacheFactory cf = new CacheFactory(props); - ``` - - - -The `durable-client-id` indicates that the client is durable and gives the server an identifier to correlate the client to its durable messages. For a non-durable client, this id is an empty string. The ID can be any number that is unique among the clients attached to servers in the same distributed system. - -The `durable-client-timeout` tells the server how long to wait for client reconnect. When this timeout is reached, the server stops storing to the client's message queue and discards any stored messages. The default is 300 seconds. This is a tuning parameter. If you change it, take into account the normal activity of your application, the average size of your messages, and the level of risk you can handle, both in lost messages and in the servers' capacity to store enqueued messages. Assuming that no messages are being removed from the queue, how long can the server run before the queue reaches the maximum capacity? How many durable clients can the server handle? To assist with tuning, use the Geode message queue statistics for durable clients through the disconnect and reconnect cycles. - -## Configure Durable Subscriptions and Continuous Queries - -The register interest and query creation methods all have an optional boolean parameter for indicating durability. By default all are non-durable. - -``` pre -// Durable registration -// Define keySpecification, interestResultPolicy, durability -exampleRegion.registerInterest(keySpecification, interestResultPolicySpecification, true); - -// Durable CQ -// Define cqName, queryString, cqAttributes, durability -CqQuery myCq = queryService.newCq(cqName, queryString, cqAttributes, true); -``` - -Save only critical messages while the client is disconnected by only indicating durability for critical subscriptions and CQs. When the client is connected to its servers, it receives messages for all keys and queries reqistered. When the client is disconnected, non-durable interest registrations and CQs are discontinued but all messages already in the queue for them remain there. - -**Note:** -For a single durable client ID, you must maintain the same durability of your registrations and queries between client runs. - -## Program the Client to Manage Durable Messaging - -Program your durable client to be durable-messaging aware when it disconnects, reconnects, and handles events from the server. - -1. Disconnect with a request to keep your queues active by using `Pool.close` or `ClientCache.close` with the boolean `keepalive` parameter. - - ``` pre - clientCache.close(true); - ``` - - To be retained during client down time, durable continuous queries (CQs) must be executing at the time of disconnect. - -2. Program your durable client's reconnection to: - - 1. If desired, detect whether the previously registered subscription queue is available upon durable client reconnection and the count of pending events in the queue. Based on the results, you can then decide whether to receive the remaining events or close the cache if the number is too large. 
- - For example, for a client with only the default pool created: - - ``` pre - int pendingEvents = cache.getDefaultPool().getPendingEventCount(); - - if (pendingEvents == -2) { // client connected for the first time … // continue - } - else if (pendingEvents == -1) { // client reconnected but after the timeout period - … // handle possible data loss - } - else { // pendingEvents >= 0 - … // decide to invoke readyForEvents() or ClientCache::close(false)/pool.destroy() - } - ``` - - For a client with multiple pools: - - ``` pre - int pendingEvents = 0; - - int pendingEvents1 = PoolManager.find(“pool1”).getPendingEventCount(); - - pendingEvents += (pendingEvents1 > 0) ? pendingEvents1 : 0; - - int pendingEvents2 = PoolManager.find(“pool2”).getPendingEventCount(); - - pendingEvents += (pendingEvents2 > 0) ? pendingEvents2 : 0; - - // process individual pool counts separately. - ``` - - The `getPendingEventCount` API can return the following possible values: - - A value representing a count of events pending at the server. Note that this count is an approximate value based on the time the durable client pool connected or reconnected to the server. Any number of invocations will return the same value. - - A zero value if there are no events pending at server for this client pool - - A negative value indicates that no queue is available at the server for the client pool. - - -1 indicates that the client pool has reconnected to the server after its durable-client-timeout period has elapsed. The pool's subscription queue has been removed possibly causing data loss. - - A value of -2 indicates that this client pool has connected to server for the first time. - - 2. Connect, initialize the client cache, regions, and any cache listeners, and create and execute any durable continuous queries. - 3. Run all interest registration calls. - - **Note:** - Registering interest with `InterestResultPolicy.KEYS_VALUES` initializes the client cache with the *current* values of specified keys. If concurrency checking is enabled for the region, any earlier (older) region events that are replayed to the client are ignored and are not sent to configured listeners. If your client must process all replayed events for a region, register with `InterestResultPolicy.KEYS` or `InterestResultPolicy.NONE` when reconnecting. Or, disable concurrency checking for the region in the client cache. See [Consistency for Region Updates](../distributed_regions/region_entry_versions.html#topic_CF2798D3E12647F182C2CEC4A46E2045). - - 4. Call `ClientCache.readyForEvents` so the server will replay stored events. If the ready message is sent earlier, the client may lose events. - - ``` pre - ClientCache clientCache = ClientCacheFactory.create(); - // Here, create regions, listeners that are not defined in the cache.xml . . . - // Here, run all register interest calls before doing anything else - clientCache.readyForEvents(); - ``` - -3. When you program your durable client `CacheListener`: - 1. Implement the callback methods to behave properly when stored events are replayed. The durable client’s `CacheListener` must be able to handle having events played after the fact. Generally listeners receive events very close to when they happen, but the durable client may receive events that occurred minutes before and are not relevant to current cache state. - 2. Consider whether to use the `CacheListener` callback method, `afterRegionLive`, which is provided specifically for the end of durable event replay. 
You can use it to perform application-specific operations before resuming normal event handling. If you do not wish to use this callback, and your listener is an instance of `CacheListener` (instead of a `CacheListenerAdapter`) implement `afterRegionLive` as an empty method. - -## Initial Operation - -The initial startup of a durable client is similar to the startup of any other client, except that it specifically calls the `ClientCache.readyForEvents` method when all regions and listeners on the client are ready to process messages from the server. - -## Disconnection - -While the client and servers are disconnected, their operation varies depending on the circumstances. - -- **Normal disconnect**. When a client closes its connection, the servers stop sending messages to the client and release its connection. If the client requests it, the servers maintain the queues and durable interest list information until the client reconnects or times out. The non-durable interest lists are discarded. The servers continue to queue up incoming messages for entries on the durable interest list. All messages that were in the queue when the client disconnected remain in the queue. If the client requests not to have its subscriptions maintained, or if there are no durable subscriptions, the servers unregister the client and do the same cleanup as for a non-durable client. -- **Abnormal disconnect**. If the client crashes or loses its connections to all servers, the servers automatically maintain its message queue and durable subscriptions until it reconnects or times out. -- **Client disconnected but operational**. If the client operates while it is disconnected, it gets what data it can from the local client cache. Since updates are not allowed, the data can become stale. An `UnconnectedException` occurs if an update is attempted. -- **Client stays disconnected past timeout period**. The servers track how long to keep a durable subscription queue alive based on the `durable-client-timeout` setting. If the client remains disconnected longer than the timeout, the servers unregister the client and do the same cleanup that is performed for a non-durable client. The servers also log an alert. When a timed-out client reconnects, the servers treat it as a new client making its initial connection. - -## Reconnection - -During initialization, the client cache is not blocked from doing operations, so you might be receiving old stored events from the server at the same time that your client cache is being updated by much more current events. These are the things that can act on the cache concurrently: - -- Results returned by the server in response to the client’s interest registrations. -- Client cache operations by the application. -- Callbacks triggered by replaying old events from the queue - -Geode handles the conflicts between the application and interest registrations so they do not create cache update conflicts. But you must program your event handlers so they don't conflict with current operations. This is true for all event handlers, but it is especially important for those used in durable clients. Your handlers may receive events well after the fact and you must ensure your programming takes that into account. - -This figure shows the three concurrent procedures during the initialization process. 
The application begins operations immediately on the client (step 1), while the client’s cache ready message (also step 1) triggers a series of queue operations on the servers (starting with step 2 on the primary server). At the same time, the client registers interest (step 2 on the client) and receives a response from the server. Message B2 applies to an entry in Region A, so the cache listener handles B2’s event. Because B2 comes before the marker, the client does not apply the update to the cache. - -Durable client reconnection. - -## Durable Event Replay - -When a durable client reconnects before the timeout period, the servers replay the events that were stored while the client was gone and then resume normal event messaging to the client. To avoid overwriting current entries with old data, the stored events are not applied to the client cache. Stored events are distinguished from new normal events by a marker that is sent to the client once all old events are replayed. - -1. All servers with a queue for this client place a marker in their queue when the client reconnects. -2. The primary server sends the queued messages to the client up to the marker. -3. The client receives the messages but does not apply the usual automatic updates to its cache. If cache listeners are installed, they handle the events. -4. The client receives the marker message indicating that all past events have been played back. -5. The server sends the current list of live regions. -6. For every `CacheListener` in each live region on the client, the marker event triggers the `afterRegionLive` callback. After the callback, the client begins normal processing of events from the server and applies the updates to its cache. - -Even when a new client starts up for the first time, the client cache ready markers are inserted in the queues. If messages start coming into the new queues before the servers insert the marker, those messages are considered as having happened while the client was disconnected, and their events are replayed the same as in the reconnect case. - -## Application Operations During Interest Registration - -Application operations take precedence over interest registration responses. The client can perform operations while it is receiving its interest registration responses. When adding register interest responses to the client cache, the following rules are applied: - -- If the entry already exists in the cache with a valid value, it is not updated. -- If the entry is invalid, and the register interest response is valid, the valid value is put into the cache. -- If an entry is marked destroyed, it is not updated. Destroyed entries are removed from the system after the register interest response is completed. -- If the interest response does not contain any results, because all of those keys are absent from the server’s cache, the client’s cache can start out empty. If the queue contains old messages related to those keys, the events are still replayed in the client’s cache. 
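
To tie the pieces of this topic together, here is an abbreviated, hedged sketch of one possible durable client lifecycle; the locator address, region name, key, and property values are placeholders rather than required settings.

``` pre
import org.apache.geode.cache.InterestResultPolicy;
import org.apache.geode.cache.Region;
import org.apache.geode.cache.client.ClientCache;
import org.apache.geode.cache.client.ClientCacheFactory;
import org.apache.geode.cache.client.ClientRegionShortcut;

// 1. Create the cache as durable, with subscriptions enabled.
ClientCache clientCache = new ClientCacheFactory()
    .set("durable-client-id", "31")
    .set("durable-client-timeout", "200")
    .addPoolLocator("localhost", 10334)        // placeholder locator
    .setPoolSubscriptionEnabled(true)
    .create();

// 2. Create regions and listeners, then register durable interest and durable CQs.
Region<String, String> exampleRegion = clientCache
    .<String, String>createClientRegionFactory(ClientRegionShortcut.CACHING_PROXY)
    .create("exampleRegion");
exampleRegion.registerInterest("Key1", InterestResultPolicy.KEYS, true); // durable = true

// 3. Tell the servers to replay any stored events.
clientCache.readyForEvents();

// ... later, disconnect but ask the servers to keep the durable queues.
clientCache.close(true);
```
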
- http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ff80a931/geode-docs/developing/events/implementing_write_behind_event_handler.html.md.erb ---------------------------------------------------------------------- diff --git a/geode-docs/developing/events/implementing_write_behind_event_handler.html.md.erb b/geode-docs/developing/events/implementing_write_behind_event_handler.html.md.erb deleted file mode 100644 index 908b0f5..0000000 --- a/geode-docs/developing/events/implementing_write_behind_event_handler.html.md.erb +++ /dev/null @@ -1,228 +0,0 @@ ---- -title: Implementing an AsyncEventListener for Write-Behind Cache Event Handling ---- - -An `AsyncEventListener` asynchronously processes batches of events after they have been applied to a region. You can use an `AsyncEventListener` implementation as a write-behind cache event handler to synchronize region updates with a database. - -## How an AsyncEventListener Works - -An `AsyncEventListener` instance is serviced by its own dedicated thread in which a callback method is invoked. Events that update a region are placed in an internal `AsyncEventQueue`, and one or more threads dispatch batches of events at a time to the listener implementation. - -You can configure an `AsyncEventQueue` to be either serial or parallel. A serial queue is deployed to one Geode member, and it delivers all of a region's events, in order of occurrence, to a configured `AsyncEventListener` implementation. A parallel queue is deployed to multiple Geode members, and each instance of the queue delivers region events, possibly simultaneously, to a local `AsyncEventListener` implementation. - -While a parallel queue provides the best throughput for writing events, it provides less control for ordering those events. With a parallel queue, you cannot preserve event ordering for a region as a whole because multiple Geode servers queue and deliver the region's events at the same time. However, the ordering of events for a given partition (or for a given queue of a distributed region) can be preserved. - -For both serial and parallel queues, you can control the maximum amount of memory that each queue uses, as well as the batch size and frequency for processing batches in the queue. You can also configure queues to persist to disk (instead of simply overflowing to disk) so that write-behind caching can pick up where it left off when a member shuts down and is later restarted. - -Optionally, a queue can use multiple threads to dispatch queued events. When you configure multiple threads for a serial queue, the logical queue that is hosted on a Geode member is divided into multiple physical queues, each with a dedicated dispatcher thread. You can then configure whether the threads dispatch queued events by key, by thread, or in the same order in which events were added to the queue. When you configure multiple threads for a parallel queue, each queue hosted on a Geode member is processed by dispatcher threads; the total number of queues created depends on the number of members that host the region. - -A `GatewayEventFilter` can be placed on the `AsyncEventQueue` to control whether a particular event is sent to a selected `AsyncEventListener`. For example, events associated with sensitive data could be detected and not queued. For more detail, see the Javadocs for `GatewayEventFilter`. - -A `GatewayEventSubstitutionFilter` can specify whether the event is transmitted in its entirety or in an altered representation. 
For example, to reduce the size of the data being serialized, it might be more efficient to represent a full object by only its key. For more detail, see the Javadocs for `GatewayEventSubstitutionFilter`.

## Operation Distribution from an AsyncEventQueue

An `AsyncEventQueue` distributes these operations:

- Entry create
- Entry put
- Entry distributed destroy, providing the operation is not an expiration action
- Expiration destroy, if the `forward-expiration-destroy` attribute is set to `true`. By default, this attribute is `false`, but you can set it to `true` using `cache.xml` or `gfsh`. To set this attribute in the Java API, use `AsyncEventQueueFactory.setForwardExpirationDestroy()`. See the javadocs for details.

These operations are not distributed:

- Get
- Invalidate
- Local destroy
- Region operations
- Expiration actions
- Expiration destroy, if the `forward-expiration-destroy` attribute is set to `false`. The default value is `false`.

## Guidelines for Using an AsyncEventListener

Review the following guidelines before using an AsyncEventListener:

- If you use an `AsyncEventListener` to implement a write-behind cache listener, your code should check for the possibility that an existing database connection may have been closed due to an earlier exception. For example, check for `Connection.isClosed()` in a catch block and re-create the connection as needed before performing further operations.
- Use a serial `AsyncEventQueue` if you need to preserve the order of region events within a thread when delivering events to your listener implementation. Use parallel queues when the order of events within a thread is not important, and when you require maximum throughput for processing events. In both cases, serial and parallel, the order of operations on a given key is preserved within the scope of the thread.
- You must install the `AsyncEventListener` implementation on a Geode member that hosts the region whose events you want to process.
- If you configure a parallel `AsyncEventQueue`, deploy the queue on each Geode member that hosts the region.
- You can install a listener on more than one member to provide high availability and guarantee delivery for events, in case the member with the active `AsyncEventListener` shuts down. At any given time only one member has an active listener for dispatching events; the listeners on other members remain on standby for redundancy. For best performance and most efficient use of memory, install only one standby listener (redundancy of at most one).
- To preserve pending events through member shutdowns, configure Geode to persist the internal queue of the `AsyncEventListener` to an available disk store. By default, any pending events that reside in the internal queue of an `AsyncEventListener` are lost if the active listener's member shuts down.
- To ensure high availability and reliable delivery of events, configure the event queue to be both persistent and redundant.

## Implementing an AsyncEventListener

To receive region events for processing, you create a class that implements the `AsyncEventListener` interface. The `processEvents` method in your listener receives a list of queued `AsyncEvent` objects in each batch.
- -Each `AsyncEvent` object contains information about a region event, such as the name of the region where the event occurred, the type of region operation, and the affected key and value. - -The basic framework for implementing a write-behind event handler involves iterating through the batch of events and writing each event to a database. For example: - -``` pre -class MyAsyncEventListener implements AsyncEventListener { - - public boolean processEvents(List events) { - - // Process each AsyncEvent - - for(AsyncEvent event: events) { - - // Write the event to a database - - } - } -} -``` - -## Processing AsyncEvents - -Use the [AsyncEventListener.processEvents](/releases/latest/javadoc/org/apache/geode/cache/asyncqueue/AsyncEventListener.html) method to process AsyncEvents. This method is called asynchronously when events are queued to be processed. The size of the list reflects the number of batch events where batch size is defined in the AsyncEventQueueFactory. The `processEvents` method returns a boolean; true if the AsyncEvents are processed correctly, and false if any events fail processing. As long as `processEvents` returns false, Geode continues to re-try processing the events. - -You can use the `getDeserializedValue` method to obtain cache values for entries that have been updated or created. Since the `getDeserializedValue` method will return a null value for destroyed entries, you should use the `getKey` method to obtain references to cache objects that have been destroyed. Here's an example of processing AsyncEvents: - -``` pre -public boolean processEvents(@SuppressWarnings("rawtypes") List list) - { - logger.log (Level.INFO, String.format("Size of List = %s", list.size())); - List newEntries = new ArrayList(); - - List updatedEntries = new ArrayList(); - List destroyedEntries = new ArrayList(); - int possibleDuplicates = 0; - - for (@SuppressWarnings("rawtypes") AsyncEvent ge: list) - { - - if (ge.getPossibleDuplicate()) - possibleDuplicates++; - - if ( ge.getOperation().equals(Operation.UPDATE)) - { - updatedEntries.add((JdbcBatch) ge.getDeserializedValue()); - } - else if ( ge.getOperation().equals(Operation.CREATE)) - { - newEntries.add((JdbcBatch) ge.getDeserializedValue()); - } - else if ( ge.getOperation().equals(Operation.DESTROY)) - { - destroyedEntries.add(ge.getKey().toString()); - } - - } -``` - -## Configuring an AsyncEventListener - -To configure a write-behind cache listener, you first configure an asynchronous queue to dispatch the region events, and then create the queue with your listener implementation. You then assign the queue to a region in order to process that region's events. - -**Procedure** - -1. Configure a unique `AsyncEventQueue` with the name of your listener implementation. You can optionally configure the queue for parallel operation, persistence, batch size, and maximum memory size. See [WAN Configuration](../../reference/topics/elements_ref.html#topic_7B1CABCAD056499AA57AF3CFDBF8ABE3) for more information. - - **gfsh configuration** - - ``` pre - gfsh>create async-event-queue --id=sampleQueue --persistent --disk-store=exampleStore --listener=com.myCompany.MyAsyncEventListener --listener-param=url#jdbc:db2:SAMPLE,username#gfeadmin,password#admin1 - ``` - - The parameters for this command uses the following syntax: - - ``` pre - create async-event-queue --id=value --listener=value [--group=value] [--batch-size=value] - [--persistent(=value)?] 
[--disk-store=value] [--max-queue-memory=value] [--listener-param=value(,value)*] - ``` - - For more information, see [create async-event-queue](../../tools_modules/gfsh/command-pages/create.html#topic_ryz_pb1_dk). - - **cache.xml Configuration** - - ``` pre - - - - MyAsyncEventListener - - jdbc:db2:SAMPLE - - - gfeadmin - - - admin1 - - - - ... - - ``` - - **Java Configuration** - - ``` pre - Cache cache = new CacheFactory().create(); - AsyncEventQueueFactory factory = cache.createAsyncEventQueueFactory(); - factory.setPersistent(true); - factory.setDiskStoreName("exampleStore"); - factory.setParallel(false); - AsyncEventListener listener = new MyAsyncEventListener(); - AsyncEventQueue asyncQueue = factory.create("sampleQueue", listener); - ``` - -2. If you are using a parallel `AsyncEventQueue`, the gfsh example above requires no alteration, as gfsh applies to all members. If using cache.xml or the Java API to configure your `AsyncEventQueue`, repeat the above configuration in each Geode member that will host the region. Use the same ID and configuration settings for each queue configuration. - **Note:** - You can ensure other members use the sample configuration by using the cluster configuration service available in gfsh. See [Overview of the Cluster Configuration Service](../../configuring/cluster_config/gfsh_persist.html). - -3. On each Geode member that hosts the `AsyncEventQueue`, assign the queue to each region that you want to use with the `AsyncEventListener` implementation. - - **gfsh Configuration** - - ``` pre - gfsh>create region --name=Customer --async-event-queue-id=sampleQueue - ``` - - Note that you can specify multiple queues on the command line in a comma-delimited list. - - **cache.xml Configuration** - - ``` pre - - - - - - ... - - ``` - - **Java Configuration** - - ``` pre - RegionFactory rf1 = cache.createRegionFactory(); - rf1.addAsyncEventQueue(sampleQueue); - Region customer = rf1.create("Customer"); - - // Assign the queue to multiple regions as needed - RegionFactory rf2 = cache.createRegionFactory(); - rf2.addAsyncEventQueue(sampleQueue); - Region order = rf2.create("Order"); - ``` - - Using the Java API, you can also add and remove queues to regions that have already been created: - - ``` pre - AttributesMutator mutator = order.getAttributesMutator(); - mutator.addAsyncEventQueueId("sampleQueue"); - ``` - - See the [Geode API documentation](/releases/latest/javadoc/org/apache/geode/cache/AttributesMutator.html) for more information. - -4. Optionally configure persistence and conflation for the queue. - **Note:** - You must configure your AsyncEventQueue to be persistent if you are using persistent data regions. Using a non-persistent queue with a persistent region is not supported. - -5. Optionally configure multiple dispatcher threads and the ordering policy for the queue using the instructions in [Configuring Dispatcher Threads and Order Policy for Event Distribution](configuring_gateway_concurrency_levels.html). - -The `AsyncEventListener` receives events from every region configured with the associated `AsyncEventQueue`. 
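
Earlier in this topic, a `GatewayEventFilter` was mentioned as a way to keep selected events (for example, sensitive data) out of the queue. The following is a hedged sketch of such a filter; the class name and the key-prefix rule are invented for illustration.

``` pre
import org.apache.geode.cache.wan.GatewayEventFilter;
import org.apache.geode.cache.wan.GatewayQueueEvent;

// Hypothetical filter that keeps entries whose keys start with "secret-" out of the queue.
public class SensitiveDataFilter implements GatewayEventFilter {

  @Override
  public boolean beforeEnqueue(GatewayQueueEvent event) {
    // Returning false prevents the event from being queued at all.
    return !event.getKey().toString().startsWith("secret-");
  }

  @Override
  public boolean beforeTransmit(GatewayQueueEvent event) {
    return true;  // no additional filtering at transmit time
  }

  @Override
  public void afterAcknowledgement(GatewayQueueEvent event) {
    // no-op
  }

  @Override
  public void close() {
    // release any resources held by the filter
  }
}
```

A filter like this would be registered on the queue with `AsyncEventQueueFactory.addGatewayEventFilter(new SensitiveDataFilter())` before calling `create`.
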
http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ff80a931/geode-docs/developing/events/limit_server_subscription_queue_size.html.md.erb ---------------------------------------------------------------------- diff --git a/geode-docs/developing/events/limit_server_subscription_queue_size.html.md.erb b/geode-docs/developing/events/limit_server_subscription_queue_size.html.md.erb deleted file mode 100644 index ff571b2..0000000 --- a/geode-docs/developing/events/limit_server_subscription_queue_size.html.md.erb +++ /dev/null @@ -1,57 +0,0 @@ ---- -title: Limit the Server's Subscription Queue Memory Use ---- - - -These are options for limiting the amount of server memory the subscription queues consume. - -- Optional: Conflate the subscription queue messages. -- Optional: Increase the frequency of queue synchronization. This only applies to configurations where server redundancy is used for high availability. Increase the client’s pool configuration, `subscription-ack-interval`. The client periodically sends a batch acknowledgment of messages to the server, rather than acknowledging each message individually. A lower setting speeds message delivery and generally reduces traffic between the server and client. A higher setting helps contain server queue size. Example: - - ``` pre - - - - ... - - ``` - - You might want to lower the interval if you have a very busy system and want to reduce the space required in the servers for the subscription queues. More frequent acknowledgments means fewer events held in the server queues awaiting acknowledgment. - -- Optional: Limit Queue Size. Cap the server queue size using overflow or blocking. These options help avoid out of memory errors on the server in the case of slow clients. A slow client slows the rate that the server can send messages, causing messages to back up in the queue, possibly leading to out of memory on the server. You can use one or the other of these options, but not both: - - Optional: Overflow to Disk. Configure subscription queue overflow by setting the server’s `client-subscription` properties. With overflow, the most recently used (MRU) events are written out to disk, keeping the oldest events, the ones that are next in line to be sent to the client, available in memory. Example: - - ``` pre - - - - - ``` - - - Optional: Block While Queue Full. Set the server’s `maximum-message-count` to the maximum number of event messages allowed in any single subscription queue before incoming messages are blocked. You can only limit the message count, not the size allocated for messages. Examples: - - XML: - - ``` pre - - - ``` - - API: - - ``` pre - Cache cache = ...; - CacheServer cacheServer = cache.addCacheServer(); - cacheServer.setPort(41414); - cacheServer.setMaximumMessageCount(50000); - cacheServer.start(); - ``` - - **Note:** - With this setting, one slow client can slow the server and all of its other clients because this blocks the threads that write to the queues. All operations that add messages to the queue block until the queue size drops to an acceptable level. If the regions feeding these queues are partitioned or have `distributed-ack` or `global` scope, operations on them remain blocked until their event messages can be added to the queue. If you are using this option and see stalling on your server region operations, your queue capacity might be too low for your application behavior. 
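
As a rough API-side counterpart to the XML overflow settings shown earlier in this topic (the capacity, eviction policy, and disk store name below are placeholder values), the same subscription queue limits can be set programmatically on the cache server:

``` pre
import org.apache.geode.cache.Cache;
import org.apache.geode.cache.server.CacheServer;
import org.apache.geode.cache.server.ClientSubscriptionConfig;

Cache cache = ...; // existing server cache
CacheServer cacheServer = cache.addCacheServer();
cacheServer.setPort(41414);

// Overflow the subscription queues to disk once they hold ~100,000 entries.
ClientSubscriptionConfig subscriptionConfig = cacheServer.getClientSubscriptionConfig();
subscriptionConfig.setEvictionPolicy("entry");   // evict by entry count ("mem" evicts by megabytes)
subscriptionConfig.setCapacity(100000);
subscriptionConfig.setDiskStoreName("pogo");     // placeholder disk store name

cacheServer.start();
```
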

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ff80a931/geode-docs/developing/events/list_of_event_handlers_and_events.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/events/list_of_event_handlers_and_events.html.md.erb b/geode-docs/developing/events/list_of_event_handlers_and_events.html.md.erb
deleted file mode 100644
index 5f63db1..0000000
--- a/geode-docs/developing/events/list_of_event_handlers_and_events.html.md.erb
+++ /dev/null
@@ -1,164 +0,0 @@
---
title: List of Event Handlers and Events
---

Geode provides many types of events and event handlers to help you manage your different data and application needs.

## Event Handlers

Use either cache handlers or membership handlers in any single application. Do not use both. The event handlers in this table are cache handlers unless otherwise noted.
Handler APIEvents receivedDescription
AsyncEventListenerAsyncEvent

Tracks changes in a region for write-behind processing. Extends th CacheCallback interface. You install a write-back cache listener to an AsyncEventQueue instance. You can then add the AsyncEventQueue instance to one or more regions for write-behind processing. See [Implementing an AsyncEventListener for Write-Behind Cache Event Handling](implementing_write_behind_event_handler.html#implementing_write_behind_cache_event_handling).

CacheCallback Superinterface of all cache event listeners. Functions only to clean up resources that the callback allocated.
CacheListenerRegionEvent, EntryEventTracks changes to region and its data entries. Responds synchronously. Extends CacheCallback interface. Installed in region. Receives only local cache events. Install one in every member where you want the events handled by this listener. In a partitioned region, the cache listener only fires in the primary data store. Listeners on secondaries are not fired.
CacheWriterRegionEvent, EntryEventReceives events for pending changes to the region and its data entries in this member or one of its peers. Has the ability to abort the operations in question. Extends CacheCallback interface. Installed in region. Receives events from anywhere in the distributed region, so you can install one in one member for the entire distributed region. Receives events only in primary data store in partitioned regions, so install one in every data store.
ClientMembershipListener -

(org.apache.geode.management .membership.ClientMembershipListener)

ClientMembershipEventOne of the interfaces that replaces the deprecated Admin APIs. You can use the ClientMembershipListener to receive membership events only about clients. This listener's callback methods are invoked when this process detects connection changes to clients. Callback methods include memberCrashed, memberJoined, memberLeft (graceful exit).
CqListenerCqEventReceives events from the server cache that satisfy a client-specified query. Extends CacheCallback interface. Installed in the client inside a CqQuery.
GatewayConflictResolverTimestampedEntryEventDecides whether to apply a potentially conflicting event to a region that is distributed over a WAN configuration. This event handler is called only when the distributed system ID of an update event is different from the ID that last updated the region entry.
MembershipListener -

(org.apache.geode.management .membership.MembershipListener)

MembershipEventUse this interface to receive membership events only about peers. This listener's callback methods are invoked when peer members join or leave the Geode distributed system. Callback methods include memberCrashed, memberJoined, and memberLeft (graceful exit).
RegionMembershipListenerRegionEventProvides after-event notification when a region with the same name has been created in another member and when other members hosting the region join or leave the distributed system. Extends CacheCallback and CacheListener. Installed in region as a CacheListener.
TransactionListenerTransactionEvent with embedded list of EntryEventTracks the outcome of transactions and changes to data entries in the transaction. -
-**Note:** -

Multiple transactions on the same cache can cause concurrent invocation of TransactionListener methods, so implement methods that do the appropriate synchronizing of the multiple threads for thread-safe operation.

-
-Extends CacheCallback interface. Installed in cache using transaction manager. Works with region-level listeners if needed.
TransactionWriterTransactionEvent with embedded list of EntryEventReceives events for pending transaction commits. Has the ability to abort the transaction. Extends CacheCallback interface. Installed in cache using transaction manager. At most one writer is called per transaction. Install a writer in every transaction host.
UniversalMembershipListenerAdapter -

(org.apache.geode .management.membership .UniversalMembershipListenerAdapter)

MembershipEvent and ClientMembershipEventOne of the interfaces that replaces the deprecated Admin APIs. Provides a wrapper for MembershipListener and ClientMembershipListener callbacks for both clients and peers.
- -## Events - -The events in this table are cache events unless otherwise noted. - - ----- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| Event | Passed to handler | Description |
|---|---|---|
| AsyncEvent | AsyncEventListener | Provides information about a single event in the cache for asynchronous, write-behind processing. |
| CacheEvent | | Superinterface to RegionEvent and EntryEvent. This defines common event methods, and contains data needed to diagnose the circumstances of the event, including a description of the operation being performed, information about where the event originated, and any callback argument passed to the method that generated this event. |
| ClientMembershipEvent | ClientMembershipListener | An event delivered to a ClientMembershipListener when this process detects connection changes to servers or clients. |
| CqEvent | CqListener | Provides information about a change to the results of a continuous query running on a server on behalf of a client. CqEvents are processed on the client. |
| EntryEvent | CacheListener, CacheWriter, TransactionListener (inside the TransactionEvent) | Extends CacheEvent for entry events. Contains information about an event affecting a data entry in the cache, including the key, the value before this event, and the value after this event. EntryEvent.getNewValue returns the current value of the data entry. EntryEvent.getOldValue returns the value before this event, if available; for a partitioned region, the old value is returned if the local cache holds the primary copy of the entry. EntryEvent provides the Geode transaction ID if available. You can retrieve serialized values from EntryEvent using the getSerialized* methods. This is useful if you get values from one region's events just to put them into a separate cache region. There is no counterpart put function, as the put recognizes that the value is serialized and bypasses the serialization step. (See the sketch following this table.) |
| MembershipEvent (membership event) | MembershipListener | An event that describes the member that originated this event. Instances are delivered to a MembershipListener when a member has joined or left the distributed system. |
| RegionEvent | CacheListener, CacheWriter, RegionMembershipListener | Extends CacheEvent for region events. Provides information about operations that affect the whole region, such as reinitialization of the region after being destroyed. |
| TimestampedEntryEvent | GatewayConflictResolver | Extends EntryEvent to include a timestamp and distributed system ID associated with the event. The conflict resolver can compare the timestamp and ID in the event with the values stored in the entry to decide whether the local system should apply the potentially conflicting event. |
| TransactionEvent | TransactionListener, TransactionWriter | Describes the work done in a transaction. This event may be for a pending or committed transaction, or for the work abandoned by an explicit rollback or failed commit. The work is represented by an ordered list of EntryEvent instances, listed in the order in which the operations were performed in the transaction. As the transaction operations are performed, the entry events are conflated, with only the last event for each entry remaining in the list: if entry A is modified, then entry B, then entry A again, the list contains the event for entry B followed by the second event for entry A. |
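To show how the EntryEvent accessors described in the table are typically consumed, here is a minimal CacheListener sketch. The class name and messages are illustrative; it assumes the `org.apache.geode.cache.util.CacheListenerAdapter` convenience base class, which provides empty implementations of the callbacks you do not override.

``` pre
import org.apache.geode.cache.EntryEvent;
import org.apache.geode.cache.util.CacheListenerAdapter;

public class AuditingCacheListener extends CacheListenerAdapter<String, String> {

  @Override
  public void afterCreate(EntryEvent<String, String> event) {
    // A newly created entry has no old value.
    System.out.println("created " + event.getKey() + " = " + event.getNewValue());
  }

  @Override
  public void afterUpdate(EntryEvent<String, String> event) {
    // getOldValue may be null, for example on a partitioned region when this
    // member does not hold the primary copy of the entry.
    System.out.println("updated " + event.getKey()
        + " from " + event.getOldValue() + " to " + event.getNewValue());
  }
}
```
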

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ff80a931/geode-docs/developing/events/resolving_multisite_conflicts.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/events/resolving_multisite_conflicts.html.md.erb b/geode-docs/developing/events/resolving_multisite_conflicts.html.md.erb
deleted file mode 100644
index 31fe7a3..0000000
--- a/geode-docs/developing/events/resolving_multisite_conflicts.html.md.erb
+++ /dev/null
@@ -1,63 +0,0 @@
---
title: Resolving Conflicting Events
---

You can optionally create a `GatewayConflictResolver` cache plug-in to decide whether a potentially conflicting event that was delivered from another site should be applied to the local cache.

By default, all regions perform consistency checks when a member applies an update received either from another cluster member or from a remote cluster over the WAN. The default consistency checking for WAN events is described in [How Consistency Is Achieved in WAN Deployments](../distributed_regions/how_region_versioning_works_wan.html#topic_fpy_z3h_j5).

You can override the default consistency checking behavior by writing and configuring a custom `GatewayConflictResolver`. The `GatewayConflictResolver` implementation can use the timestamp and distributed system ID included in a WAN update event to determine whether or not to apply the update. For example, you may decide that updates from a particular cluster should always "win" a conflict when the timestamp difference between updates is less than some fixed period of time.

## Implementing a GatewayConflictResolver

**Note:**
A `GatewayConflictResolver` implementation is called only for update events that could cause a conflict in the region. This corresponds to update events that have a different distributed system ID than the distributed system that last updated the region entry. If the same distributed system ID makes consecutive updates to a region entry, no conflict is possible, and the `GatewayConflictResolver` is not called.

**Procedure**

1. Program the event handler:
   1. Create a class that implements the `GatewayConflictResolver` interface.
   2. If you want to declare the handler in `cache.xml`, implement the `org.apache.geode.cache.Declarable` interface as well.
   3. Implement the handler's `onEvent()` method to determine whether the WAN event should be allowed. `onEvent()` receives both a `TimestampedEntryEvent` and a `GatewayConflictHelper` instance. `TimestampedEntryEvent` has methods for obtaining the timestamp and distributed system ID of both the update event and the current region entry. Use methods in the `GatewayConflictHelper` to either disallow the update event (retaining the existing region entry value) or provide an alternate value (a sketch of the disallow case appears at the end of this topic).

      **Example:**

      ``` pre
      public void onEvent(TimestampedEntryEvent event, GatewayConflictHelper helper) {
        if (event.getOperation().isUpdate()) {
          ShoppingCart oldCart = (ShoppingCart)event.getOldValue();
          ShoppingCart newCart = (ShoppingCart)event.getNewValue();
          oldCart.updateFromConflictingState(newCart);
          helper.changeEventValue(oldCart);
        }
      }
      ```

      **Note:**
      In order to maintain consistency in the region, your conflict resolver must always resolve two events in the same way regardless of which event it receives first.

2. Install the conflict resolver for the cache, using either the `cache.xml` file or the Java API.

   **cache.xml**

   ``` pre
   <cache>
     ...
     <gateway-conflict-resolver>
       <class-name>myPackage.MyConflictResolver</class-name>
     </gateway-conflict-resolver>
     ...
   </cache>
   ```

   **Java API**

   ``` pre
   // Create or obtain the cache
   Cache cache = new CacheFactory().create();

   // Create and add a conflict resolver
   cache.setGatewayConflictResolver(new MyConflictResolver());
   ```

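The example above always substitutes a merged value. If instead the resolver decides the incoming event should not be applied at all, the helper's disallow path is used. The following is a minimal sketch under the assumption that `GatewayConflictHelper.disallowEvent()` and the `TimestampedEntryEvent` timestamp accessors shown are available, and that doing nothing lets the event be applied as usual.

``` pre
public void onEvent(TimestampedEntryEvent event, GatewayConflictHelper helper) {
  // Keep the value already stored at this site when the incoming WAN event
  // carries an older timestamp than the current region entry.
  if (event.getNewTimestamp() < event.getOldTimestamp()) {
    helper.disallowEvent();
  }
  // Otherwise, leave the event alone so it is applied normally.
}
```
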
http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ff80a931/geode-docs/developing/events/tune_client_message_tracking_timeout.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/events/tune_client_message_tracking_timeout.html.md.erb b/geode-docs/developing/events/tune_client_message_tracking_timeout.html.md.erb
deleted file mode 100644
index 0874bb7..0000000
--- a/geode-docs/developing/events/tune_client_message_tracking_timeout.html.md.erb
+++ /dev/null
@@ -1,26 +0,0 @@
---
title: Tune the Client's Subscription Message Tracking Timeout
---

If the client pool's `subscription-message-tracking-timeout` is set too low, your client will discard tracking records for live threads, increasing the likelihood of processing duplicate events from those threads.

This setting is especially important in systems where it is vital to avoid or greatly minimize duplicate events. If you detect that duplicate messages are being processed by your clients, increasing the timeout may help. Setting `subscription-message-tracking-timeout` may not completely eliminate duplicate entries, but careful configuration can help minimize occurrences.

Duplicates are monitored by keeping track of message sequence IDs from the source thread where the operation originated. For a long-running system, you would not want to track this information for very long periods, or the information may be kept long enough for a thread ID to be recycled. If this happens, messages from a new thread may be discarded mistakenly as duplicates of messages from an old thread with the same ID. In addition, maintaining this tracking information for old threads uses memory that might be freed up for other things.

To minimize duplicates and reduce the size of the message tracking list, set your client `subscription-message-tracking-timeout` higher than double the sum of these times (a worked example follows the snippet below):

- The longest time your originating threads might wait between operations.
- For redundant servers add:
  - The server's `message-sync-interval`.
  - Total time required for failover (usually 7-10 seconds, including the time to detect failure).

You risk losing live thread tracking records if you set the value lower than this, which could result in your client processing duplicate event messages into its cache for the associated threads. Within that constraint, it is worth working to set the `subscription-message-tracking-timeout` as low as you reasonably can.

``` pre
<pool ... subscription-message-tracking-timeout="...">
  ...
</pool>
```

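As a purely illustrative calculation (the workload numbers are assumptions, not defaults): if your originating client threads may pause up to 120 seconds between operations, the server's `message-sync-interval` is 25 seconds, and failover can take 10 seconds, then 2 * (120 + 25 + 10) = 310 seconds, so a value above that, say 320,000 milliseconds, satisfies the guideline. The sketch below sets it through the programmatic pool API (classes from `org.apache.geode.cache.client`); the locator address and pool name are also assumptions.

``` pre
// 2 * (120s + 25s + 10s) = 310s; use something above that, e.g. 320s.
Pool pool = PoolManager.createFactory()
    .addLocator("localhost", 10334)
    .setSubscriptionEnabled(true)
    .setSubscriptionMessageTrackingTimeout(320000)   // milliseconds
    .create("client");
```
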
http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ff80a931/geode-docs/developing/events/tune_client_server_event_messaging.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/events/tune_client_server_event_messaging.html.md.erb b/geode-docs/developing/events/tune_client_server_event_messaging.html.md.erb
deleted file mode 100644
index 59206bc..0000000
--- a/geode-docs/developing/events/tune_client_server_event_messaging.html.md.erb
+++ /dev/null
@@ -1,20 +0,0 @@
---
title: Tuning Client/Server Event Messaging
---

The server uses an asynchronous messaging queue to send events to its clients. Every event in the queue originates in an operation performed by a thread in a client, a server, or an application in the server's or some other distributed system. The event message has a unique identifier composed of the originating thread's ID combined with its member's distributed system member ID, and the sequential ID of the operation. So the event messages originating in any single thread can be grouped and ordered by time from lowest sequence ID to highest. Servers and clients track the highest sequential ID for each member thread ID.

A single client thread receives and processes messages from the server, tracking received messages to make sure it does not process duplicate sends. It does this using the member and thread IDs and the sequence IDs of the originating operations (an illustrative sketch of this bookkeeping appears at the end of this topic).

The client's message tracking list holds the highest sequence ID of any message received for each originating thread. The list can become quite large in systems where there are many different threads coming and going and doing work on the cache. After a thread dies, its tracking entry is no longer needed. To avoid maintaining tracking information for threads that have died, the client expires entries that have had no activity for more than the `subscription-message-tracking-timeout`.

- **[Conflate the Server Subscription Queue](../../developing/events/conflate_server_subscription_queue.html)**

- **[Limit the Server's Subscription Queue Memory Use](../../developing/events/limit_server_subscription_queue_size.html)**

- **[Tune the Client's Subscription Message Tracking Timeout](../../developing/events/tune_client_message_tracking_timeout.html)**

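The tracking described above amounts to keeping, per originating member and thread, the highest sequence ID seen so far and discarding anything at or below it. The sketch below is an illustrative model of that bookkeeping only; it is not Geode's internal implementation, and all names in it are invented.

``` pre
import java.util.HashMap;
import java.util.Map;

// Illustrative model of duplicate detection; not Geode's internal code.
// The topic above notes that a single client thread processes subscription
// messages, so no synchronization is shown here.
class EventTracker {
  // Key: originating member ID + thread ID; value: highest sequence ID processed.
  private final Map<String, Long> highestSeen = new HashMap<>();

  boolean isNew(String memberId, long threadId, long sequenceId) {
    String key = memberId + ":" + threadId;
    Long prev = highestSeen.get(key);
    if (prev != null && sequenceId <= prev) {
      return false;           // duplicate send; skip processing
    }
    highestSeen.put(key, sequenceId);
    return true;              // first time this operation is seen
  }
}
```
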
http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ff80a931/geode-docs/developing/events/writing_callbacks_that_modify_the_cache.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/events/writing_callbacks_that_modify_the_cache.html.md.erb b/geode-docs/developing/events/writing_callbacks_that_modify_the_cache.html.md.erb
deleted file mode 100644
index 299fd87..0000000
--- a/geode-docs/developing/events/writing_callbacks_that_modify_the_cache.html.md.erb
+++ /dev/null
@@ -1,48 +0,0 @@
---
title: How to Safely Modify the Cache from an Event Handler Callback
---

Event handlers are synchronous. If you need to change the cache or perform any other distributed operation from event handler callbacks, be careful to avoid activities that might block and affect your overall system performance.

## Operations to Avoid in Event Handlers

Do not perform distributed operations of any kind directly from your event handler. Geode is a highly distributed system, and many operations that may seem local invoke distributed operations.

These are common distributed operations that can get you into trouble:

- Calling `Region` methods, on the event's region or any other region.
- Using the Geode `DistributedLockService`.
- Modifying region attributes.
- Executing a function through the Geode `FunctionService`.

To be on the safe side, do not make any calls to the Geode API directly from your event handler. Make all Geode API calls from within a separate thread or executor.

## How to Perform Distributed Operations Based on Events

If you need to use the Geode API from your handlers, make your work asynchronous to the event handler. You can spawn a separate thread or use a solution like the `java.util.concurrent.Executor` interface.

This example shows a serial executor where the callback creates a `Runnable` that can be pulled off a queue and run by another object. This approach preserves the ordering of events (a sketch of one way to set up the executor follows the example).

``` pre
public void afterCreate(EntryEvent event) {
  final Region otherRegion = cache.getRegion("/otherRegion");
  final Object key = event.getKey();
  final Object val = event.getNewValue();

  serialExecutor.execute(new Runnable() {
    public void run() {
      try {
        otherRegion.create(key, val);
      }
      catch (org.apache.geode.cache.RegionDestroyedException e) {
        ...
      }
      catch (org.apache.geode.cache.EntryExistsException e) {
        ...
      }
    }
  });
}
```

For additional information on the `Executor`, see the `SerialExecutor` example on the Oracle Java web site.

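The example refers to a `serialExecutor` field without showing where it comes from. One minimal way to provide it, which is an assumption here rather than part of the original topic, is a single-threaded executor from `java.util.concurrent`; it runs the submitted `Runnable`s one at a time in submission order and therefore preserves event ordering.

``` pre
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// One worker thread drains the queued Runnables in submission order.
private final ExecutorService serialExecutor = Executors.newSingleThreadExecutor();

// Release the worker thread when the handler is closed
// (for example, in the close() method inherited from CacheCallback).
public void close() {
  serialExecutor.shutdown();
}
```
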
http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ff80a931/geode-docs/developing/eviction/chapter_overview.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/eviction/chapter_overview.html.md.erb b/geode-docs/developing/eviction/chapter_overview.html.md.erb
deleted file mode 100644
index 4920294..0000000
--- a/geode-docs/developing/eviction/chapter_overview.html.md.erb
+++ /dev/null
@@ -1,17 +0,0 @@
---
title: Eviction
---

Use eviction to control data region size.

- **[How Eviction Works](../../developing/eviction/how_eviction_works.html)**

  Eviction settings cause Apache Geode to work to keep a region's resource use under a specified level by removing least recently used (LRU) entries to make way for new entries.

- **[Configure Data Eviction](../../developing/eviction/configuring_data_eviction.html)**

  Use eviction controllers to configure the eviction-attributes region attribute settings to keep your region within a specified limit.

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ff80a931/geode-docs/developing/eviction/configuring_data_eviction.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/eviction/configuring_data_eviction.html.md.erb b/geode-docs/developing/eviction/configuring_data_eviction.html.md.erb
deleted file mode 100644
index 42f3dbd..0000000
--- a/geode-docs/developing/eviction/configuring_data_eviction.html.md.erb
+++ /dev/null
@@ -1,71 +0,0 @@
---
title: Configure Data Eviction
---

Use eviction controllers to configure the eviction-attributes region attribute settings to keep your region within a specified limit.

Eviction controllers monitor region and memory use and, when the limit is reached, remove older entries to make way for new data. For heap percentage, the controller used is the Geode resource manager, configured in conjunction with the JVM's garbage collector for optimum performance.

Configure data eviction as follows. You do not need to perform these steps in the sequence shown.

1. Decide whether to evict based on:
   - Entry count (useful if your entry sizes are relatively uniform).
   - Total bytes used. In partitioned regions, this is set using `local-max-memory`. In non-partitioned regions, it is set in `eviction-attributes`.
   - Percentage of application heap used. This uses the Geode resource manager. When the manager determines that eviction is required, the manager orders the eviction controller to start evicting from all regions where the eviction algorithm is set to `lru-heap-percentage`. Eviction continues until the manager calls a halt. Geode evicts the least recently used entry hosted by the member for the region. See [Managing Heap and Off-heap Memory](../../managing/heap_use/heap_management.html#resource_manager).
2. Decide what action to take when the limit is reached:
   - Locally destroy the entry.
   - Overflow the entry data to disk. See [Persistence and Overflow](../storing_data_on_disk/chapter_overview.html).
3. Decide the maximum amount of data to allow in the member for the eviction measurement indicated. This is the maximum for all storage for the region in the member. For partitioned regions, this is the total for all buckets stored in the member for the region, including any secondary buckets used for redundancy.
4. Decide whether to program a custom sizer for your region. If you are able to provide such a class, it might be faster than the standard sizing done by Geode. Your custom class must follow the guidelines for defining custom classes and, additionally, must implement `org.apache.geode.cache.util.ObjectSizer`. See [Requirements for Using Custom Classes in Data Caching](../../basic_config/data_entries_custom_classes/using_custom_classes.html).

**Note:**
You can also configure regions using the gfsh command-line interface; however, you cannot configure `eviction-attributes` using gfsh. See [Region Commands](../../tools_modules/gfsh/quick_ref_commands_by_area.html#topic_EF03119A40EE492984F3B6248596E1DD) and [Disk Store Commands](../../tools_modules/gfsh/quick_ref_commands_by_area.html#topic_1ACC91B493EE446E89EC7DBFBBAE00EA).

Examples:

``` pre
// Create an LRU memory eviction controller with max bytes of 1000 MB
// Use a custom class for measuring the size of each object in the region
<region-attributes>
  <eviction-attributes>
    <lru-memory-size maximum="1000">
      <class-name>com.myLib.MySizer</class-name>
      <parameter name="name">
        <string>Super Sizer</string>
      </parameter>
    </lru-memory-size>
  </eviction-attributes>
</region-attributes>
```

``` pre
// Create a memory eviction controller on a partitioned region with max bytes of 512 MB
<region-attributes>
  <partition-attributes local-max-memory="512"/>
  <eviction-attributes>
    <lru-memory-size>
      <class-name>org.apache.geode.cache.util.ObjectSizer</class-name>
    </lru-memory-size>
  </eviction-attributes>
</region-attributes>
```

``` pre
// Configure a partitioned region for heap LRU eviction. The resource manager controls the limits.
<region-attributes refid="PARTITION_HEAP_LRU">
</region-attributes>
```

``` pre
Region currRegion = cache.createRegionFactory()
    .setEvictionAttributes(EvictionAttributes.createLRUHeapAttributes(EvictionAction.LOCAL_DESTROY))
    .create("root");
```

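The examples above configure memory- and heap-based eviction. For entry-count eviction, the programmatic equivalent looks like the following sketch; the region name, shortcut, and 20000-entry limit are illustrative values, not taken from the original.

``` pre
// Overflow the least recently used entries to disk once the region holds 20000 entries.
Region<String, String> orders = cache.<String, String>createRegionFactory(RegionShortcut.PARTITION)
    .setEvictionAttributes(
        EvictionAttributes.createLRUEntryAttributes(20000, EvictionAction.OVERFLOW_TO_DISK))
    .create("orders");
```
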
http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ff80a931/geode-docs/developing/eviction/how_eviction_works.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/eviction/how_eviction_works.html.md.erb b/geode-docs/developing/eviction/how_eviction_works.html.md.erb
deleted file mode 100644
index 01d87d6..0000000
--- a/geode-docs/developing/eviction/how_eviction_works.html.md.erb
+++ /dev/null
@@ -1,19 +0,0 @@
---
title: How Eviction Works
---

Eviction settings cause Apache Geode to work to keep a region's resource use under a specified level by removing least recently used (LRU) entries to make way for new entries.

You configure eviction based on entry count, percentage of available heap, or absolute memory usage. You also configure what to do when you need to evict: destroy entries or overflow them to disk. See [Persistence and Overflow](../storing_data_on_disk/chapter_overview.html).

When Geode determines that adding or updating an entry would take the region over the specified level, it overflows or removes enough older entries to make room. For entry count eviction, this means a one-to-one trade of an older entry for the newer one. For the memory settings, the number of older entries that need to be removed to make space depends entirely on the relative sizes of the older and newer entries.

## Eviction in Partitioned Regions

In partitioned regions, Geode removes the oldest entry it can find *in the bucket where the new entry operation is being performed*. Geode maintains LRU entry information on a bucket-by-bucket basis, as the cost of maintaining information across the partitioned region would be too great a performance hit.

- For memory and entry count eviction, LRU eviction is done in the bucket where the new entry operation is being performed until the overall size of the combined buckets in the member has dropped enough to perform the operation without going over the limit.
- For heap eviction, each partitioned region bucket is treated as if it were a separate region, with each eviction action considering only the LRU for the bucket, and not the partitioned region as a whole.

Because of this, eviction in partitioned regions may leave older entries for the region in other buckets in the local data store as well as in other stores in the distributed system. It may also leave entries in a primary copy that it evicts from a secondary copy, or vice versa.

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ff80a931/geode-docs/developing/expiration/chapter_overview.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/expiration/chapter_overview.html.md.erb b/geode-docs/developing/expiration/chapter_overview.html.md.erb
deleted file mode 100644
index 31ad4b2..0000000
--- a/geode-docs/developing/expiration/chapter_overview.html.md.erb
+++ /dev/null
@@ -1,15 +0,0 @@
---
title: Expiration
---

Use expiration to keep data current by removing stale entries. You can also use it to remove entries you are not using, so your region uses less space. Expired entries are reloaded the next time they are requested.

- **[How Expiration Works](../../developing/expiration/how_expiration_works.html)**

  Expiration removes old entries and entries that you are not using. You can destroy or invalidate entries.

- **[Configure Data Expiration](../../developing/expiration/configuring_data_expiration.html)**

  Configure the type of expiration and the expiration action to use.

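As a quick orientation before the linked topics, the following sketch shows one common way to declare expiration programmatically; the region name, shortcut, and timeout values are illustrative assumptions, not taken from the original.

``` pre
// Destroy entries 600 seconds after creation or update; invalidate entries
// idle (not accessed) for 300 seconds. Expiration requires region statistics.
Region<String, String> sessionData = cache.<String, String>createRegionFactory(RegionShortcut.REPLICATE)
    .setStatisticsEnabled(true)
    .setEntryTimeToLive(new ExpirationAttributes(600, ExpirationAction.DESTROY))
    .setEntryIdleTimeout(new ExpirationAttributes(300, ExpirationAction.INVALIDATE))
    .create("sessionData");
```
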