From: jjimeno <jjimeno@omp.com>
To: user@ignite.apache.org
Date: Thu, 14 Jan 2021 09:32:34 -0700 (MST)
Subject: Multithread transactions in a C++ Thin Client

Hello all,

We're developing a multithreaded application that uses a single C++ Thin Client to connect to a cluster with a single server node. The C++ Thin Client is built from "master" as of January 2021.

We have implemented a "lock-and-update" scheme based on the GetAndPut function and PESSIMISTIC + READ_COMMITTED transactions. The idea is to lock a set of cache entries, update them, and commit them atomically.

In our tests we detected a deadlock when the following piece of code is executed by more than one thread in our application:

    ...
    ClientTransactions transactions = client.ClientTransactions();
    ClientTransaction tx = transactions.TxStart(PESSIMISTIC, READ_COMMITTED);

    // This call should atomically get the current value for "key" and put
    // "value" in its place, locking the "key" cache entry at the same time.
    auto oldValue = cache.GetAndPut(key, value);

    // Only the thread that manages to lock "key" should reach this point;
    // the others have to wait for tx.Commit() to complete.
    cache.Put(key, newValue);

    // After this call, another thread waiting in GetAndPut for "key" to be
    // released should be able to continue.
    tx.Commit();
    ...

The thread reaching the cache.Put(key, newValue) call gets blocked there, specifically in the lockGuard object created at the beginning of DataChannel::InternalSyncMessage (data_channel.cpp:108). After debugging, we realized that this lockGuard is owned by a different thread, which is currently waiting on the socket while executing GetAndPut.
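To make the diagnosis concrete, here is a simplified picture of what we think is happening (illustrative stand-ins only, not the actual Ignite source; WaitForServerResponse is a made-up placeholder for the blocking socket read):

    #include <mutex>

    // Illustrative stand-ins only -- not the actual Ignite source.
    std::mutex channelMutex;      // the lockGuard's mutex in
                                  // DataChannel::InternalSyncMessage

    void WaitForServerResponse() { /* blocks on the socket until a reply */ }

    void InternalSyncMessage()
    {
        // Thread A -- the one that owns the lock on "key" and is trying to
        // send cache.Put -- blocks here on the channel mutex...
        std::lock_guard<std::mutex> guard(channelMutex);

        // ...because thread B took the mutex first and is parked here inside
        // its own GetAndPut, waiting for the server to grant it the lock on
        // "key". The server will only release "key" when thread A commits,
        // so neither thread can ever make progress.
        WaitForServerResponse();
    }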
Based on this, my guess is that the data routing in the C++ Thin Client is not multithread friendly.

I ran a test creating a separate C++ Thin Client for each thread and the problem disappeared, but that is something I would like to avoid, since our threads are created and destroyed on the fly.

So, my question is: do I have to create a C++ Thin Client for each thread, or is there a workaround?

Thanks in advance!
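P.S. For context, below is a rough sketch of the kind of client pooling we might fall back to if one client per thread turns out to be the only option. It is only a sketch: ClientPool is our own hypothetical helper, and it assumes the IgniteClient::Start / IgniteClientConfiguration API from the Ignite C++ thin client examples.

    #include <memory>
    #include <mutex>
    #include <vector>

    #include <ignite/thin/ignite_client.h>
    #include <ignite/thin/ignite_client_configuration.h>

    using ignite::thin::IgniteClient;
    using ignite::thin::IgniteClientConfiguration;

    // Hypothetical helper, not part of Ignite: a tiny pool so short-lived
    // worker threads can reuse already-started clients instead of sharing
    // one channel or paying the connect cost every time.
    class ClientPool
    {
    public:
        explicit ClientPool(const IgniteClientConfiguration& cfg) : cfg_(cfg) {}

        // Borrow a client, starting a new one only when the pool is empty.
        std::shared_ptr<IgniteClient> Acquire()
        {
            std::lock_guard<std::mutex> lock(mutex_);

            if (idle_.empty())
                return std::make_shared<IgniteClient>(IgniteClient::Start(cfg_));

            std::shared_ptr<IgniteClient> client = idle_.back();
            idle_.pop_back();
            return client;
        }

        // Hand the client back so another thread can reuse its connection.
        void Release(std::shared_ptr<IgniteClient> client)
        {
            std::lock_guard<std::mutex> lock(mutex_);
            idle_.push_back(std::move(client));
        }

    private:
        IgniteClientConfiguration cfg_;
        std::mutex mutex_;
        std::vector<std::shared_ptr<IgniteClient>> idle_;
    };

Each worker would call Acquire() at the start of a unit of work and Release() at the end, keeping any PESSIMISTIC transaction entirely on the client it was started on.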