ignite-issues mailing list archives

From "Alexander Lapin (JIRA)" <j...@apache.org>
Subject [jira] [Comment Edited] (IGNITE-11287) JDBC Thin: best effort affinity
Date Mon, 22 Apr 2019 09:42:00 GMT

    [ https://issues.apache.org/jira/browse/IGNITE-11287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16803720#comment-16803720 ]

Alexander Lapin edited comment on IGNITE-11287 at 4/22/19 9:41 AM:
-------------------------------------------------------------------

Key points
 * IEP about affinity awareness [https://cwiki.apache.org/confluence/display/IGNITE/IEP-23%3A+Best+Effort+Affinity+for+thin+clients]
 * In the JDBC thin client, affinity awareness is switched off by default. To enable it, add 'affinityAwareness=true' to the connection string: jdbc:ignite:thin://127.0.0.1:10800..10802?affinityAwareness=true
 * JDBC thin affinity awareness is an optimization, so it should almost always be transparent to the user: no new exceptions are expected, etc.
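
To illustrate the second key point, here is a minimal sketch of building such a connection string. The helper class and method are illustrative, not part of the Ignite API:

```java
// Sketch: building a JDBC thin connection string with affinity awareness
// enabled. Affinity awareness is off by default, so the flag must be
// appended explicitly.
public class ThinUrlExample {
    // Builds a jdbc:ignite:thin:// URL for the given address range and
    // appends affinityAwareness=true.
    public static String affinityAwareUrl(String addresses) {
        return "jdbc:ignite:thin://" + addresses + "?affinityAwareness=true";
    }
}
```

With a cluster running, the resulting string can then be passed to java.sql.DriverManager.getConnection(url).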

Test plan draft.
 # Check that requests go to the expected number of nodes for different combinations of conditions
 ** Transactional*
 *** Without params
 **** Select
 ***** Different partition tree options (All/NONE/Group/CONST) produced by different query types.
 **** DML: Update, Delete

 ***** - // -
 *** With params
 **** - // -
 ** Non-Transactional
 *** - // -
 # Check that request/response functionality works fine if the server response lacks a partition result.
 # Check that the partition result is supplied only in the case of the rendezvous affinity function without custom filters.
 # Check that best effort functionality works fine for different partition counts.
 # Check that a change in topology leads to JDBC thin affinity cache invalidation.
 ## Topology changed during partition result retrieval.
 ## Topology changed during cache distribution retrieval.
 ## Topology changed during best-effort-affinity-unrelated query.
 # Check that JDBC thin best effort affinity works fine if the cache is full and new data is still coming. For this case we should probably decrease the cache boundaries.
 # Check that the proper connection is used depending on whether the set of nodes we are connected to and the set of nodes derived from partitions:
 ## Fully intersect;
 ## Partially intersect;
 ## Don't intersect, e.g.
||User Specified||Derived from partitions||
|host:port1 -> UUID1
 host:port2 -> UUID2|partition1 -> UUID3|
No intersection, so a random connection should be used.
 # Check client reconnection after failure.
 # Check that JDBC thin best effort affinity is skipped if it is switched off.

* Please pay attention that in the case of transactions we should use sticky connections.
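
The invalidation behaviour of item 5 can be sketched as a client-side cache keyed by topology version: any response carrying a newer version drops the cached partition distribution. The class and method names below are illustrative, not the driver's actual internals:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Sketch: a partition-distribution cache that is invalidated whenever the
// server reports a newer topology version (item 5 of the test plan).
public class AffinityCacheSketch {
    private long topVer = -1;
    private final Map<Integer, UUID> partToNode = new HashMap<>();

    // Record a partition -> node mapping observed under the given topology.
    public void put(long ver, int part, UUID node) {
        onTopology(ver);
        partToNode.put(part, node);
    }

    // A newer topology version invalidates all cached mappings.
    public void onTopology(long ver) {
        if (ver > topVer) {
            partToNode.clear();
            topVer = ver;
        }
    }

    public UUID node(int part) {
        return partToNode.get(part);
    }

    public int size() {
        return partToNode.size();
    }
}
```

The three sub-cases in item 5 differ only in which request observes the new version; in all of them the cache above ends up cleared.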
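
The selection rule of item 8 can be sketched as follows: if the nodes we hold connections to intersect the nodes derived from the partition mapping, pick one from the intersection; otherwise fall back to a random existing connection. Names are illustrative, not the driver's actual internals:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;
import java.util.Set;
import java.util.UUID;

// Sketch: choosing a connection given the user-specified nodes and the
// nodes derived from partitions (item 8 of the test plan).
public class ConnectionChoiceSketch {
    public static UUID choose(Set<UUID> connected, Set<UUID> derived, Random rnd) {
        List<UUID> common = new ArrayList<>();

        for (UUID node : connected) {
            if (derived.contains(node))
                common.add(node);
        }

        // Full or partial intersection: prefer a node from the intersection.
        if (!common.isEmpty())
            return common.get(rnd.nextInt(common.size()));

        // No intersection: any existing connection is as good as another,
        // so pick a random one.
        List<UUID> all = new ArrayList<>(connected);
        return all.get(rnd.nextInt(all.size()));
    }
}
```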



> JDBC Thin: best effort affinity
> -------------------------------
>
>                 Key: IGNITE-11287
>                 URL: https://issues.apache.org/jira/browse/IGNITE-11287
>             Project: Ignite
>          Issue Type: New Feature
>          Components: jdbc
>            Reporter: Alexander Lapin
>            Assignee: Alexander Lapin
>            Priority: Major
>              Labels: iep-23, iep-24
>             Fix For: 2.8
>
>
> It's an umbrella ticket for implementing [IEP-23|https://cwiki.apache.org/confluence/display/IGNITE/IEP-23%3A+Best+Effort+Affinity+for+thin+clients]
within the scope of JDBC Thin driver.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
