Date: Fri, 14 Jul 2017 17:54:00 +0000 (UTC)
From: "Stephen Sisk (JIRA)"
To: commits@beam.apache.org
Subject: [jira] [Commented] (BEAM-1799) IO ITs: simplify data loading design pattern

[ https://issues.apache.org/jira/browse/BEAM-1799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16087685#comment-16087685 ]

Stephen Sisk commented on BEAM-1799:
------------------------------------

I realized [~echauchot] and I resolved this discussion on the PR, but that resolution hasn't been reflected here.

My comments
--------------------

We don't validate the data written in this test because the read integration test verifies it immediately after the write test writes it.

I do agree with your question about testing at scale, though. This test currently runs with only 1000 rows, but note that the row count is captured in a few small variables. My thought is that we'll do something like what is captured in https://github.com/ssisk/beam/commit/7f96c844dd1b6181ace87303ac5574cf11cbd78f and have 2 or 3 different test sizes we run - one small (1000 rows) and one large (10/50 million rows). The test will be parameterized, and we'll be able to pass a command line option specifying whether the test run should be small or large.
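A minimal, stdlib-only sketch of how that size parameterization might look. The `testSize` property name, the `rowCountFor` helper, and the concrete row counts are illustrative assumptions for this sketch, not Beam's actual options:

```java
// Sketch: select the row count for an IT run from a command-line
// system property, e.g. -DtestSize=large. The property name and the
// concrete row counts here are illustrative assumptions.
public class TestSizeSketch {

    static long rowCountFor(String size) {
        switch (size) {
            case "large":
                return 10_000_000L; // large perf-test run
            case "small":
            default:
                return 1_000L;      // default small IT run
        }
    }

    public static void main(String[] args) {
        String size = System.getProperty("testSize", "small");
        System.out.println("running with " + rowCountFor(size) + " rows");
    }
}
```

The real test would feed this count into whatever generates and writes the test rows, so small and large runs share one code path.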
--- Etienne seemed to be okay with this plan in the discussion on the PR.

> IO ITs: simplify data loading design pattern
> --------------------------------------------
>
>                 Key: BEAM-1799
>                 URL: https://issues.apache.org/jira/browse/BEAM-1799
>             Project: Beam
>          Issue Type: Improvement
>          Components: sdk-java-extensions
>            Reporter: Stephen Sisk
>            Assignee: Stephen Sisk
>             Fix For: 2.0.0
>
> Problems with the current solution
> =============================
> * The IO IT data loading guidelines [1] are complicated & aren't "native junit" - you end up working around junit rather than working with it (I was a part of defining them [0], so I critique the rules with (heart))
> * Doing data loading using external tools means we have additional dependencies outside of the tests themselves. If we *must* use them, it's worth the time, but I think we have another option. I find it especially amusing since the data loading tools are things like ycsb, which themselves are benchmarking tools ("I heard you like performance benchmarking, so here's a performance benchmarking tool to use before you use your performance benchmarking tool"), and really just solve the problem of "I want to write data in parallel to this data store" - that sounds familiar :)
>
> The current guidelines also don't scale well to performance tests:
> * We want to write medium-sized data for perf tests - doing data loading using external tools means a minimum of 2 reads & writes. For the small-scale ITs, that's not a big deal, but for the large-scale tests, if we assume we're working with a fixed budget, more data transferred/stored ~= fewer tests.
> * If you want to verify that large data sets are correct (or create them), you need to actually read and write those large data sets - currently, the plan is that the data loading/testing infrastructure only runs on one machine, so those operations are going to be slow.
> We aren't working with actual large data sets, so it won't take too long, but it's always nice to have faster tests.
>
> New Proposed Solution
> ===================
> Instead of trying to test read and write separately, the test should be a "write, then read back what you just wrote", all using the IO under test. To support scenarios like "I want to run my read test repeatedly without re-writing the data", tests would add flags for "skipCleanUp" and "useExistingData".
> Check out the example I wrote up [2].
> I didn't want to invest much time on this before I opened a JIRA/talked to others, so I plan on expanding on this a bit more/formalizing it in the testing docs.
>
> A reminder of some context:
> * The ITs & perf tests are *not* intended to be the place where we exercise specific scenarios. Instead, they are tripwires designed to find problems with code *we already believe works* (as proven by the unit tests) when it runs against real data store instances/runners, using multiple nodes of both.
>
> There are some definite disadvantages:
> * There is a class of bugs that you can miss doing this (namely: "I mangled the data on the way into the data store, then reverse-mangled it on the way back out, so it looks fine even though it is bad in the db"). I assume that many of us have tested storage code in the past and have thought about this trade-off. In this particular environment, where it's expensive/tricky to do independent testing of the storage code, I think this is the right trade-off.
> * The data loading scripts cannot be re-used between languages. I think this will be a pretty small relative cost compared to the cost of writing the IO in multiple languages, so it shouldn't matter too much. I think we'll save more time by not needing external tools for loading data.
> * Read-only or write-only data stores - in this case, we'll either need to default to the old plan, or implement data loading or verification using Beam.
> * This assumes the data store supports parallelism - in the case where the read or write cannot be split, we probably should limit the amount of data we process in the tests to what we can reasonably do on a single worker anyway.
> * It's harder to debug when this fails - I agree, and part of what I hope to invest a little time in going forward is making it easier to determine what the actual failure is. Presumably folks debugging a particular IO's failures have tools to look at that IO and will be able to quickly determine whether it's failing on the read or the write.
> * As with the previously accepted proposal, we are relying on JUnit's @AfterClass to do cleanup. I don't have a good answer for this - if it proves to be a problem, we can investigate.
> * This focuses the test exclusively on reading and writing. If we want other types of tests, they could either piggyback on the writeThenRead test, or they should be restricted to smaller data sets, tested independently from this test, and simply write their own data to the data store.
>
> There are some really nice advantages:
> * The test ends up being pretty simple and elegant.
> * We have no external dependencies.
> * Read and write occur the bare minimum number of times.
> * I believe we'll be able to create shared PTransforms for generating test data & validating test data.
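One way the shared generate/validate idea can work without shipping an expected data set around is to derive each row deterministically from its index and validate with an order-insensitive checksum. The stdlib-only sketch below (all names hypothetical, not Beam APIs) shows the core trick that a generate PTransform and a validate PTransform could wrap:

```java
import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;

// Sketch of deterministic test-data generation plus validation.
// Each row's content is a pure function of its index, so the read side
// can recompute the expected checksum without storing the data set.
// All names here are hypothetical, not Beam APIs.
public class DeterministicDataSketch {

    // Deterministic row content for index i.
    static String rowFor(long i) {
        return "TestRow-" + i;
    }

    // Order-insensitive aggregate checksum: sum of per-row CRC32s.
    // Summation commutes, so rows may arrive in any order, as they
    // will when read back in parallel.
    static long checksum(Iterable<String> rows) {
        long sum = 0;
        for (String row : rows) {
            CRC32 crc = new CRC32();
            crc.update(row.getBytes(StandardCharsets.UTF_8));
            sum += crc.getValue();
        }
        return sum;
    }

    // Expected checksum for n generated rows, computed without
    // reading anything back from the store.
    static long expectedChecksum(long n) {
        long sum = 0;
        for (long i = 0; i < n; i++) {
            CRC32 crc = new CRC32();
            crc.update(rowFor(i).getBytes(StandardCharsets.UTF_8));
            sum += crc.getValue();
        }
        return sum;
    }
}
```

The writeThenRead test would write `rowFor(0..n-1)`, read everything back, and compare the combined checksum of the read rows against `expectedChecksum(n)`; only one long travels to the assertion, which matters for the 10/50 million row runs.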
> [0] Past discussion of IT guidelines - https://lists.apache.org/thread.html/a8ea2507aee4a849cbb6cd7f3ae23fc8b47d447bd553fa01d6da6348@%3Cdev.beam.apache.org%3E
> [1] Current data loading IT guidelines - https://docs.google.com/document/d/153J9jPQhMCNi_eBzJfhAg-NprQ7vbf1jNVRgdqeEE8I/edit#heading=h.uj505twpx0m
> [2] Example of a writeThenRead test - https://github.com/ssisk/beam/blob/jdbc-it-perf/sdks/java/io/jdbc/src/test/java/org/apache/beam/sdk/io/jdbc/JdbcIOIT.java#L147

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)