Subject: Re: [qpid-proton-cpp] default_container does not implement thread safety at all?
From: Andrew Stitcher
To: users@qpid.apache.org
Date: Tue, 14 Mar 2017 17:13:05 -0400

On Tue, 2017-03-14 at 01:23 +0200, Fj wrote:
> ... (elided somewhat inflammatory comments)
> Am I missing something obvious,

That's difficult to say (obvious is so subjective!)

The code you are looking at implements the multithreaded API, but is not itself multithreaded. Yes, it is unfortunate that it doesn't do a thread-safe inject, but C++03 has no threading support, and the single-threaded code has to compile as C++03. Since it's important for us to minimise dependencies for the proton libraries, this code can't do much better.

In the same code drop you will also find multithreaded implementations of the container API, in the examples/cpp/mt directory; this code relies on external threading capabilities. Note that we use the threading facilities of the C++11 (and later) standard library, so you will need a modern compiler to get multithreaded support.

This does mean that if you want *any* variant of multithreading with the C++ binding (either multiple worker threads for the container, or running the container in a separate thread from the rest of your application), you will need C++11 or later to compile the proton C++ binding library. I don't consider this a real problem at this point: C++11 has been supported well enough for our purposes in all popular C++ compilers for quite some time - I think over five years.

> or is the documentation there, and in other places, and the example,
> chemically pure wishful thinking, like, it would be pretty nice if we
> supported multithreading, and it would work in such and such ways
> then, but unfortunately we don't, as a matter of fact?
>
> If so, maybe you gals and guys should fix the documentation, and not
> just with "experimental" but with "DOESN'T WORK AT ALL" prominent on
> the page?

"Experimental" means that the API is not stable, but is provided so that you can get something done, hopefully let us know how it goes, and give us feedback, so that we can stabilise the API and eventually release it as no longer experimental.

All this code is being replaced as we speak with a proactor-based implementation of the container API; this should support multiple worker threads in the container as long as you have C++11 or later, but only a single thread with C++03.

> On more constructive notes:
>
> 1) do people maybe use some custom containers, similar to the
> epoll_container in the examples (which doesn't support schedule()
> though, I have to point out)? If so, I much desire to see one.

Custom containers essentially go away in the latest version of the API. Instead of suggesting that users implement their own container, we are providing a proactor-based container, with proactor implementations based on libuv, epoll (for Linux) and IOCP (for Windows). It will be possible to write your own proactor if necessary, but the ones we will provide cover a lot of the usual implementation landscape.

> 2) why do you attempt to bolt on your event_loop thing to Connection
> and below when the only thing that has the run() method is Container,
> therefore that's the scope that any worker thread would have? It's
> the Container that should have the inject() method. Underlying C API
> notwithstanding.

I think you may have a misunderstanding here. Although the run loops are owned by the container, the unit of serialisation is the connection. The "bolting on" is to give you, from whichever object you have at hand, access to a thread-safe context into which to inject work. Even if the inject happens at the container level, it's convenient to be able to inject from an event handler based on the object contained in the event, rather than having to navigate up to the container and inject there. You *must* specify the connection that is to be serialised, because *everything* happens per connection, so there is no value in running code in the run loop without some way to safely run code in the connection context.

The container schedule API does run code only in the container context, with no connection serialisation, but it's not safe to use it to refer to any connection (or lower) state without injecting into the relevant connection. The "event_loop" concept indicates the unit of serialisation; in theory this could be any level of the object tree. I agree that "event_loop" is a somewhat confusing name for the serialisation unit, but we couldn't think of a better one - probably we need to make the documentation clearer.
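To make that concrete, here is a rough, untested sketch of the intended pattern in C++11. Treat the names as illustrative: the concrete container class, the messaging_handler callbacks, the address string and the exact spelling of event_loop()->inject() all come from the experimental binding and may differ in your release.

    #include <proton/connection.hpp>
    #include <proton/container.hpp>
    #include <proton/messaging_handler.hpp>

    #include <thread>

    class app_handler : public proton::messaging_handler {
        proton::connection conn_;

        void on_container_start(proton::container& c) override {
            c.connect("amqp://localhost");   // address is illustrative
        }
        void on_connection_open(proton::connection& c) override {
            conn_ = c;   // remember the connection so other threads can target it
        }

      public:
        // May be called from any application thread: the lambda is queued
        // and later runs serialised with the connection's own events.
        void do_work() {
            conn_.event_loop()->inject([this]() {
                // Safe to touch conn_ and its senders/receivers here.
            });
        }
    };

    int main() {
        app_handler h;
        proton::container c(h);
        std::thread runner([&c]() { c.run(); });  // dedicate a thread to the run loop
        h.do_work();  // (a real program would wait for on_connection_open first)
        runner.join();
    }

The point is that the injected lambda runs serialised with that connection's own event handling, so it can safely touch that connection's objects even though it was submitted from a different thread.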
> 3) What's the best way forward if I really have to have a threadsafe
> container that I can inject some event into threadsafely:

For immediate use, the example mt container is probably your best bet. In the near future, the proactor-based container is "the future" (tm).

>   3a) does the C API support thread safety? Like, the whole problem
> is touching the reactor's job queue or whatever it has with taking an
> appropriate lock, and it must take that same lock itself for it to
> work.

I don't think the C code has any locking at all. The only way to run it thread-safely is to dedicate a thread to its run loop and poke it with pn_reactor_wakeup(), which is thread safe. However, we are planning to deprecate the reactor (although not immediately, as the python binding still uses it) and now consider the proactor its replacement.
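Several of these sub-questions circle around the same "stash work, then wake the run loop" model, so here is a minimal sketch of it at the C level, driven from C++11. Only pn_reactor(), pn_reactor_run() and pn_reactor_wakeup() are real calls; the injector type is hypothetical, and hooking drain() into your reactor handler's dispatch is elided.

    #include <proton/reactor.h>

    #include <functional>
    #include <mutex>
    #include <queue>
    #include <thread>

    // Hypothetical helper: a locked job queue plus pn_reactor_wakeup() to
    // nudge the dedicated reactor thread.
    struct injector {
        pn_reactor_t* reactor;
        std::mutex lock;
        std::queue<std::function<void()> > work;

        // Called from any thread.
        void inject(std::function<void()> f) {
            {
                std::lock_guard<std::mutex> g(lock);
                work.push(f);
            }
            pn_reactor_wakeup(reactor);   // the one thread-safe reactor call
        }

        // Run queued jobs; call this from the reactor thread when it wakes
        // (wiring it into the reactor's event handling is elided here).
        void drain() {
            std::lock_guard<std::mutex> g(lock);
            while (!work.empty()) { work.front()(); work.pop(); }
        }
    };

    int main() {
        pn_reactor_t* r = pn_reactor();
        injector inj = { r };
        // Dedicate one thread to the run loop; a real program would keep
        // the reactor alive with connections or scheduled tasks.
        std::thread runner([r]() { pn_reactor_run(r); });
        inj.inject([]() { /* runs on the reactor thread after a wakeup */ });
        runner.join();
    }

The only thing another thread may do is take the lock and call pn_reactor_wakeup(); all other reactor work stays on the reactor's own thread.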
>   3b) I checked out the Python API, they use a mock EventInjector
> pollable thingie instead of reaching inside the reactor and telling
> it add an event.

I think this used the same model as the C code - stash something and then wake the reactor so it can pick the work up on its own thread.

>   3c) epoll example does yet another thing apparently (though I'm not
> sure it works because it's not covered by any tests): just tell the
> reactor to wake up and process the queued job list in your handler.

Hope this clears up some questions.

Andrew

---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@qpid.apache.org
For additional commands, e-mail: users-help@qpid.apache.org