From: Ernest Burghardt
Date: Mon, 18 Sep 2017 13:24:33 -0600
Subject: Re: [Discuss] Investigation of C++ object return types
To: dev@geode.apache.org

+1 to passing in CacheConfig - might be a bit of refactoring (needed) but
complexity level should be low

+1 value approach as well as dumping the Factory

On Mon, Sep 18, 2017 at 11:14 AM, David Kimura wrote:

> +1 value approach (via some implementation from this thread)
>
> I think I like this.
>
> In all BAD cases, it's the user who shot themselves in the foot by using
> std::move unsafely. I expect this behavior is the same behavior as for any
> other object. And if we're ever able to get rid of the Cache/CacheImpl
> circular dependency then we can add a copy constructor.
>
> I also like the idea of passing in a CacheConfig. My concern though is
> that it's piling on another big change.
>
> Thanks,
> David
>
> On Sun, Sep 17, 2017 at 12:02 AM, Jacob Barrett wrote:
>
> > Ok, one more idea.
> > https://gist.github.com/pivotal-jbarrett/beed5f70c1f3a238cef94832b13dab7a
> >
> > The biggest issue with the value model is that we have been using a
> > factory to build the Cache object. We really don't need one, and if we
> > get rid of it things look much better. They aren't perfect, since we
> > still need the back pointer from the impl to the cache for later use.
> > If we didn't need that then we could allow copy construction. As it
> > stands right now this version allows value, shared_ptr, unique_ptr,
> > etc. without any real overhead or RVO issues.
> >
> > The big change is that, rather than have a factory that we set a bunch
> > of values on and then ask it to create the Cache, we create a
> > CacheConfig object and pass that in to the Cache's constructor. Cache
> > passes it to CacheImpl and CacheImpl sets itself up based on the
> > config. If you look at what the current factory model does, it isn't
> > that different. For clarity I added an XmlCacheConfig object that
> > builds up the CacheConfig via XML. You could imagine a YamlCacheConfig
> > object *shiver*. The point is we don't care as long as we get a
> > CacheConfig with all the things we support at "init" time.
> >
> > I know it is a more radical change, but I feel it is more C++ and more
> > testable than the factory model. I also like that it solves some of
> > the issues with the value model we were looking at.
> >
> > -Jake
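For concreteness, here is a minimal sketch of the constructor-injection
shape described above; the class and member names are illustrative guesses,
not taken from the gist:

    #include <memory>
    #include <string>

    // Illustrative names only -- the real CacheConfig in the gist may differ.
    struct CacheConfig {
      std::string name;
      bool pdxReadSerialized = false;
      // ...whatever else we support at "init" time
    };

    class Cache;

    class CacheImpl {
     public:
      CacheImpl(Cache* back, const CacheConfig& config)
          : back_(back), name_(config.name) {}
      Cache& getCache() { return *back_; }  // back pointer kept for later use

     private:
      Cache* back_;
      std::string name_;
    };

    class Cache {
     public:
      explicit Cache(const CacheConfig& config)
          : impl_(std::make_unique<CacheImpl>(this, config)) {}

     private:
      std::unique_ptr<CacheImpl> impl_;
    };

    int main() {
      CacheConfig config;
      config.name = "example";
      Cache cache(config);  // plain value on the caller's stack, no factory
    }

Configuration becomes plain data handed to the constructor, so a test can
build a CacheConfig directly instead of driving a factory, and an
XmlCacheConfig (or YamlCacheConfig) is just another producer of that data.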
> >
> > On Thu, Sep 14, 2017 at 5:16 PM Jacob Barrett wrote:
> >
> > > Y'all, here is an attempt to get the best of both worlds.
> > > https://gist.github.com/pivotal-jbarrett/52ba9ec5de0b494368d1c5282ef188ef
> > >
> > > I thought I would try posting to Gist but so far not impressed, sorry.
> > >
> > > The Gist of it is that we can overcome the third-party or transient
> > > reference back to the public Cache instance by keeping a reference to
> > > it in the implementation instance and updating it whenever the move
> > > constructor is called.
> > >
> > > The downside is that if you run this test it doesn't show RVO kicking
> > > in on the second test, where we move the value into a shared pointer.
> > > There are a couple of pitfalls you can stumble into as well by trying
> > > to use the previous instance to access the cache after the move
> > > operation, as illustrated by the "BAD" commented lines.
> > >
> > > The upside is the choices this gives the user for ownership of their
> > > factory-constructed Cache instance. They can keep it as a value or
> > > move it into a unique or shared pointer.
> > >
> > > Overhead-wise I think we are better off with value as long as there
> > > are no moves, rare I would think, but the moves are pretty cheap at
> > > that point since we only maintain a unique_ptr. After moving into a
> > > shared_ptr it acts identically to the shared_ptr model proposed
> > > earlier.
> > >
> > > -Jake
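To spell out the mechanics being described, here is a rough, self-contained
sketch of the idea (a made-up minimal API, not the code in the gist): the
impl keeps a raw pointer back to its public wrapper, and the move
constructor re-points it.

    #include <memory>
    #include <utility>

    class Cache;

    class CacheImpl {
     public:
      explicit CacheImpl(Cache* back) : back_(back) {}
      void setBack(Cache* back) { back_ = back; }
      Cache& getCache() { return *back_; }

     private:
      Cache* back_;  // reference back to the public wrapper, fixed up on move
    };

    class Cache {
     public:
      Cache() : impl_(std::make_unique<CacheImpl>(this)) {}

      Cache(Cache&& other) noexcept : impl_(std::move(other.impl_)) {
        impl_->setBack(this);  // re-point the impl at its new wrapper
      }

      Cache(const Cache&) = delete;  // copying would leave two wrappers over one impl

      CacheImpl& impl() { return *impl_; }

     private:
      std::unique_ptr<CacheImpl> impl_;
    };

    int main() {
      Cache c;
      auto p = std::make_shared<Cache>(std::move(c));  // keep as value, or move to a pointer
      // c.impl();          // BAD: c was moved from; its impl is gone
      p->impl().getCache();  // fine: the move constructor updated the back pointer
    }

Using the moved-from value afterwards is exactly the "BAD" case: its impl
has been handed off, so there is nothing left to call through.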
> > >
> > > On Thu, Sep 14, 2017 at 3:36 PM Michael Martell wrote:
> > >
> > >> Late to this party.
> > >>
> > >> Confession 1: I had to look up both RVO and copy-elision.
> > >> Confession 2: I don't like using pointers at all. I used to, but I've
> > >> evolved to just use C# and Java :)
> > >>
> > >> Without investing a lot more time, I don't have strong feelings about
> > >> raw vs shared pointers. My only question is: can we return a pointer
> > >> to an abstract class everywhere we return objects? Just thinking of
> > >> mocking, which always wants to mock interfaces.
> > >>
> > >> On Thu, Sep 14, 2017 at 2:25 PM, Michael William Dodge <mdodge@pivotal.io>
> > >> wrote:
> > >>
> > >> > +0 shared pointer
> > >> >
> > >> > > On 14 Sep, 2017, at 14:09, Ernest Burghardt <eburghardt@pivotal.io>
> > >> > > wrote:
> > >> > >
> > >> > > Calling a vote for:
> > >> > >
> > >> > > - Raw pointer
> > >> > > - Shared pointer
> > >> > >
> > >> > > +1 raw pointer, I had to look up RVO and am new to std::move(s)
> > >> > >
> > >> > > On Thu, Sep 14, 2017 at 3:02 PM, Michael William Dodge <
> > >> > > mdodge@pivotal.io> wrote:
> > >> > >
> > >> > >> I generally dig reference-counted pointers for avoiding lifetime
> > >> > >> issues with objects allocated off the heap, but I can live with
> > >> > >> bare pointers, too.
> > >> > >>
> > >> > >> Sarge
> > >> > >>
> > >> > >>> On 13 Sep, 2017, at 16:25, Mark Hanson wrote:
> > >> > >>>
> > >> > >>> Hi All,
> > >> > >>>
> > >> > >>> I favor the "pointer" approach that is identified in the code
> > >> > >>> sample. There is greater clarity and fewer bytes seemingly
> > >> > >>> created and written. We do sacrifice the potential ease of using
> > >> > >>> an object, but all in all, I think the way our code is structured
> > >> > >>> is not conducive to a value approach, from an efficiency
> > >> > >>> standpoint, because of our use of the pimpl model.
> > >> > >>>
> > >> > >>> Thanks,
> > >> > >>> Mark
> > >> > >>>
> > >> > >>>> On Sep 12, 2017, at 11:09 AM, Jacob Barrett <jbarrett@pivotal.io>
> > >> > >>>> wrote:
> > >> > >>>>
> > >> > >>>> My biggest concern with this model is that access to the public
> > >> > >>>> Cache object from other public objects results in additional
> > >> > >>>> allocations of a Cache value. Think about when we are inside a
> > >> > >>>> Serializable object and we access the Cache from DataInput.
> > >> > >>>>
> > >> > >>>> As value:
> > >> > >>>>
> > >> > >>>>   Serializable* MyClass::fromData(DataInput& dataInput) {
> > >> > >>>>     auto cache = dataInput.getCache();
> > >> > >>>>     ...
> > >> > >>>>   }
> > >> > >>>>
> > >> > >>>> In this the value of cache will RVO the allocation of Cache in
> > >> > >>>> the getCache call into the stack of this method, great. The
> > >> > >>>> problem is that Cache must contain a std::shared_ptr<CacheImpl>,
> > >> > >>>> which means that each allocation is 8 bytes (pointer to control
> > >> > >>>> block and pointer to CacheImpl) as well as having to increment
> > >> > >>>> the strong counter in the control block. On exit/descope, the
> > >> > >>>> Cache will have to decrement the control block as well.
> > >> > >>>>
> > >> > >>>> Using the current shared_ptr pimpl model:
> > >> > >>>>
> > >> > >>>>   Serializable* MyClass::fromData(DataInput& dataInput) {
> > >> > >>>>     auto& cache = dataInput.getCache();
> > >> > >>>>     ...
> > >> > >>>>   }
> > >> > >>>>
> > >> > >>>> We only suffer the ref allocation of 4 bytes and no ref count.
> > >> > >>>> Since this function call can't survive past the lifespan of
> > >> > >>>> Cache/CacheImpl they don't need to have shared_ptr and
> > >> > >>>> refcounting.
> > >> > >>>>
> > >> > >>>> Given that this method could be called numerous times, is the
> > >> > >>>> overhead of the value version going to be a significant
> > >> > >>>> performance issue?
> > >> > >>>>
> > >> > >>>> I worry that moves and RVO are just beyond most developers.
> > >> > >>>> Heck, I didn't know anything about it until we started
> > >> > >>>> exploring it.
> > >> > >>>>
> > >> > >>>> -Jake
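To make that overhead concrete, here is a small self-contained comparison
with illustrative stand-in classes (not the real Geode API): the by-value
accessor copies the shared_ptr<CacheImpl> and bumps the strong count on
every call, while the by-reference accessor just aliases the existing
object.

    #include <iostream>
    #include <memory>

    class CacheImpl {};

    // Value model: every Cache handed out by value carries a shared_ptr
    // (two pointers) and touches the strong count on copy and destroy.
    class Cache {
     public:
      explicit Cache(std::shared_ptr<CacheImpl> impl) : impl_(std::move(impl)) {}
      long useCount() const { return impl_.use_count(); }

     private:
      std::shared_ptr<CacheImpl> impl_;
    };

    class DataInput {
     public:
      DataInput(std::shared_ptr<CacheImpl> impl, Cache& cache)
          : impl_(std::move(impl)), cache_(cache) {}

      Cache getCacheByValue() const { return Cache(impl_); }  // copies the shared_ptr, refcount++
      Cache& getCacheByRef() const { return cache_; }         // no allocation, no refcount traffic

     private:
      std::shared_ptr<CacheImpl> impl_;
      Cache& cache_;
    };

    int main() {
      auto impl = std::make_shared<CacheImpl>();
      Cache owner(impl);
      DataInput in(impl, owner);

      auto byValue = in.getCacheByValue();  // extra Cache value, strong count bumped
      auto& byRef = in.getCacheByRef();     // just an alias to the existing Cache
      (void)byRef;
      std::cout << "strong count: " << byValue.useCount() << "\n";
    }

Per call, that is a two-pointer copy plus an atomic increment and decrement,
which is the cost being weighed against the convenience of the value API.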
> > >> > >>>>
> > >> > >>>> On Tue, Sep 12, 2017 at 8:06 AM David Kimura <dkimura@pivotal.io>
> > >> > >>>> wrote:
> > >> > >>>>
> > >> > >>>>> Follow-up of the attached discussion after more investigation.
> > >> > >>>>> I created an example of returning Cache as a shared pointer
> > >> > >>>>> versus a raw value:
> > >> > >>>>>
> > >> > >>>>> https://github.com/dgkimura/geode-native-sandbox
> > >> > >>>>>
> > >> > >>>>> I still like returning by value as it lets the user do what
> > >> > >>>>> they want with their object.
> > >> > >>>>>
> > >> > >>>>>   // Here the user creates the object on their stack.
> > >> > >>>>>   auto c = CacheFactory::createFactory().create();
> > >> > >>>>>
> > >> > >>>>>   // Here the user creates a smart pointer on the heap.
> > >> > >>>>>   auto cptr =
> > >> > >>>>>     std::make_shared<Cache>(CacheFactory::createFactory().create());
> > >> > >>>>>
> > >> > >>>>> Difficulty of implementing this is high due to the circular
> > >> > >>>>> dependencies of Cache/CacheImpl as well as objects hanging off
> > >> > >>>>> CacheImpl that return Cache. We must be extra careful when
> > >> > >>>>> dealing with move/copy semantics of Cache/CacheImpl.
> > >> > >>>>>
> > >> > >>>>> The alternative is to keep things as they are and only permit
> > >> > >>>>> heap allocations from the factory using shared pointers.
> > >> > >>>>>
> > >> > >>>>> Thanks,
> > >> > >>>>> David