From: upthewaterspout
To: dev@geode.apache.org
Reply-To: dev@geode.apache.org
Subject: [GitHub] geode issue #450: GEODE-2632: create ClientCachePutBench
Date: Thu, 20 Apr 2017 21:58:52 +0000 (UTC)

Github user upthewaterspout commented on the issue:

    https://github.com/apache/geode/pull/450

    Regarding putting JMH benchmarks in the core - seems fine. I think I
    originally made geode-benchmarks a separate project so it would be easy
    to share code and compare benchmarks across modules - e.g. comparing
    Lucene queries to OQL queries. But putting the benchmarks in each module
    may well make more sense.

    This does seem to stretch what JMH is designed for, though. JMH is
    targeted at *microbenchmarks*, so launching a separate server process is
    a bit of a stretch. In particular, it's not clear to me whether your
    server is restarted between benchmark iterations. JMH deliberately forks
    and restarts the JVM several times to smooth out run-to-run variation,
    but here it may be that only your client is restarted.

    In general I think we should probably focus JMH on single-VM, smaller
    unit benchmarks - benchmarking a distributed system is better done with
    a different framework and multiple hosts.
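
To make the forking point concrete, here is a minimal sketch (not taken from
the PR) of a client-only JMH put benchmark. It assumes a Geode server and
locator are already running outside JMH; the locator port 10334, region name
"benchRegion", and class name are placeholders.

    import java.util.UUID;
    import java.util.concurrent.TimeUnit;

    import org.apache.geode.cache.Region;
    import org.apache.geode.cache.client.ClientCache;
    import org.apache.geode.cache.client.ClientCacheFactory;
    import org.apache.geode.cache.client.ClientRegionShortcut;
    import org.openjdk.jmh.annotations.Benchmark;
    import org.openjdk.jmh.annotations.BenchmarkMode;
    import org.openjdk.jmh.annotations.Fork;
    import org.openjdk.jmh.annotations.Level;
    import org.openjdk.jmh.annotations.Measurement;
    import org.openjdk.jmh.annotations.Mode;
    import org.openjdk.jmh.annotations.OutputTimeUnit;
    import org.openjdk.jmh.annotations.Scope;
    import org.openjdk.jmh.annotations.Setup;
    import org.openjdk.jmh.annotations.State;
    import org.openjdk.jmh.annotations.TearDown;
    import org.openjdk.jmh.annotations.Warmup;

    @State(Scope.Benchmark)
    @BenchmarkMode(Mode.Throughput)
    @OutputTimeUnit(TimeUnit.SECONDS)
    @Fork(3)                       // JMH restarts THIS (client) JVM 3 times
    @Warmup(iterations = 5)
    @Measurement(iterations = 5)
    public class ClientPutBenchSketch {

      private ClientCache cache;
      private Region<String, String> region;

      @Setup(Level.Trial)
      public void connect() {
        // Runs once per fork: a fresh client connects to the long-lived
        // server. The server process itself is never restarted by JMH.
        cache = new ClientCacheFactory()
            .addPoolLocator("localhost", 10334)
            .create();
        region = cache.<String, String>createClientRegionFactory(ClientRegionShortcut.PROXY)
            .create("benchRegion");
      }

      @TearDown(Level.Trial)
      public void disconnect() {
        cache.close();
      }

      @Benchmark
      public Object put() {
        // Returning the previous value keeps JMH from eliminating the call.
        return region.put(UUID.randomUUID().toString(), "value");
      }
    }

With a layout like this, the @Fork count controls how many times the client
JVM is thrown away and re-created - that is the restart JMH gives you for
free. Restarting the server between forks would have to be done by hand in
@Setup/@TearDown or by an external script, which is part of why a multi-host
framework may be a better fit for distributed benchmarks.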