From: Netsanet Gebretsadkan
Date: Sat, 22 Jun 2019 00:12:32 +0200
Subject: Re: Add checkpoint metadata while using HoodieSparkSQLWriter
To: dev@hudi.apache.org

@Vinoth, thanks, that would be great if Balaji could share it.

Kind regards,

On Thu, Jun 20, 2019 at 11:17 PM Vinoth Chandar wrote:

> Hi,
>
> We usually test with our production workloads. However, Balaji recently
> merged a DistributedTestDataSource,
> https://github.com/apache/incubator-hudi/commit/a0d7ab238473f22347e140b0e1e273ab80583eb7#diff-893dced90c18fd2698c6a16475f5536d
> that can generate some random data for testing. Balaji, do you mind
> sharing a command that can be used to kick something off like that?
>
> On Thu, Jun 20, 2019 at 1:54 AM Netsanet Gebretsadkan wrote:
>
> > Dear Vinoth,
> >
> > I want to check out the performance comparison of upsert and bulk
> > insert, but I couldn't find a clean data set larger than 10 GB.
> > Would it be possible to get a data set from the Hudi team? For example,
> > I was using the stocks data that you provided in your demo. Could I get
> > more GBs of that dataset for my experiment?
> >
> > Thanks for your consideration.
> >
> > Kind regards,
> >
> > On Fri, Jun 7, 2019 at 7:59 PM Vinoth Chandar wrote:
> >
> > > https://github.com/apache/incubator-hudi/issues/714#issuecomment-499981159
> > >
> > > Just circling back with the resolution on the mailing list as well.
> > >
> > > On Tue, Jun 4, 2019 at 6:24 AM Netsanet Gebretsadkan <net22geb@gmail.com> wrote:
> > >
> > > > Dear Vinoth,
> > > >
> > > > Thanks for your fast response.
> > > > I have created a new issue, "Performance Comparison of
> > > > HoodieDeltaStreamer and DataSourceAPI #714", with screenshots of the
> > > > Spark UI, which can be found at
> > > > https://github.com/apache/incubator-hudi/issues/714.
> > > > In the UI, it seems that ingestion with the data source API spends
> > > > much of its time in the countByKey of HoodieBloomIndex and in the
> > > > workload profile. Looking forward to receiving insights from you.
> > > >
> > > > Kind regards,
> > > >
> > > > On Tue, Jun 4, 2019 at 6:35 AM Vinoth Chandar wrote:
> > > >
> > > > > Hi,
> > > > >
> > > > > Both the datasource and the deltastreamer use the same APIs
> > > > > underneath, so I'm not sure. If you can grab screenshots of the
> > > > > Spark UI for both and open a ticket, I'd be glad to take a look.
> > > > >
> > > > > On 2: one of the goals of Hudi is to break this dichotomy and
> > > > > enable streaming-style processing (I call it incremental
> > > > > processing) even in a batch job. MOR is in production at Uber. At
> > > > > the moment MOR is lacking just one feature (incremental pull using
> > > > > log files) that Nishith is planning to merge soon. PR #692 enables
> > > > > the Hudi DeltaStreamer to ingest continuously while managing
> > > > > compaction etc. in the same job. I have already knocked off some
> > > > > index performance problems and am working on indexing the log
> > > > > files, which should unlock near-real-time ingest.
> > > > >
> > > > > Putting all these together, within a month or so the near-real-time
> > > > > MOR vision should be very real.
> > > > > Of course, we need community help with dev and testing to speed
> > > > > things up. :)
> > > > >
> > > > > Hope that gives you a clearer picture.
> > > > >
> > > > > Thanks
> > > > > Vinoth
> > > > >
> > > > > On Mon, Jun 3, 2019 at 1:01 AM Netsanet Gebretsadkan <net22geb@gmail.com> wrote:
> > > > >
> > > > > > Thanks, Vinoth
> > > > > >
> > > > > > It's working now, but I have 2 questions:
> > > > > > 1. The ingestion latency of using the DataSource API with
> > > > > > HoodieSparkSQLWriter is high compared to using the delta
> > > > > > streamer. Why is it slow? Are there specific options we could set
> > > > > > to minimize the ingestion latency? For example, when I run the
> > > > > > delta streamer it takes about 1 minute to insert some data; if I
> > > > > > use the DataSource API with HoodieSparkSQLWriter, it takes 5
> > > > > > minutes. How can we optimize this?
> > > > > > 2. Where do we categorize Hudi in general (is it batch processing
> > > > > > or streaming)? I am asking because currently copy-on-write is the
> > > > > > one that is fully working, and since the merge-on-read
> > > > > > functionality, which would enable near-real-time analytics, is
> > > > > > not fully done, can we consider Hudi a batch job?
> > > > > >
> > > > > > Kind regards,
> > > > > >
> > > > > > On Thu, May 30, 2019 at 5:52 PM Vinoth Chandar <vinoth@apache.org> wrote:
> > > > > >
> > > > > > > Hi,
> > > > > > >
> > > > > > > Short answer: by default, any parameter you pass in using
> > > > > > > option(k, v) or options() beginning with "_" will be saved to
> > > > > > > the commit metadata. You can change the "_" prefix to something
> > > > > > > else by using
> > > > > > > DataSourceWriteOptions.COMMIT_METADATA_KEYPREFIX_OPT_KEY().
> > > > > > > The reason you are not seeing checkpointstr inside the commit
> > > > > > > metadata is that it is just supposed to be a prefix for all
> > > > > > > such commit metadata keys.
> > > > > > >
> > > > > > > val metaMap = parameters.filter(kv =>
> > > > > > >   kv._1.startsWith(parameters(COMMIT_METADATA_KEYPREFIX_OPT_KEY)))
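To make the prefix behaviour described above concrete, here is a minimal
sketch of how the checkpoint could be attached so that it lands in the commit
metadata. The key name "_checkpoint_str" is only an illustrative choice (any
key starting with the configured prefix should be picked up), the imports use
the pre-rename com.uber.hoodie packages as in the rest of the thread, and
inputDF, tableName, checkpointstr and basePath are the same values as in the
original snippet below.

  import com.uber.hoodie.DataSourceWriteOptions
  import com.uber.hoodie.config.HoodieWriteConfig
  import org.apache.spark.sql.SaveMode

  inputDF.write
    .format("com.uber.hoodie")
    .option(DataSourceWriteOptions.RECORDKEY_FIELD_OPT_KEY, "_row_key")
    .option(DataSourceWriteOptions.PARTITIONPATH_FIELD_OPT_KEY, "partition")
    .option(DataSourceWriteOptions.PRECOMBINE_FIELD_OPT_KEY, "timestamp")
    .option(HoodieWriteConfig.TABLE_NAME, tableName)
    // The prefix option only tells Hudi which option keys to copy into the
    // commit metadata (default prefix is "_"); it is not the checkpoint itself.
    .option(DataSourceWriteOptions.COMMIT_METADATA_KEYPREFIX_OPT_KEY, "_")
    // The checkpoint value travels as an option whose key starts with that prefix.
    // "_checkpoint_str" is a made-up key name, not a Hudi-defined constant.
    .option("_checkpoint_str", checkpointstr)
    .mode(SaveMode.Append)
    .save(basePath)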
> > > > > > > On Thu, May 30, 2019 at 2:56 AM Netsanet Gebretsadkan <net22geb@gmail.com> wrote:
> > > > > > >
> > > > > > > > I am trying to use the HoodieSparkSQLWriter to upsert data
> > > > > > > > from any dataframe into a Hudi-modeled table. It creates
> > > > > > > > everything correctly, but I also want to save the checkpoint,
> > > > > > > > and I couldn't, even though I am passing it as an argument.
> > > > > > > >
> > > > > > > > inputDF.write()
> > > > > > > >   .format("com.uber.hoodie")
> > > > > > > >   .option(DataSourceWriteOptions.RECORDKEY_FIELD_OPT_KEY(), "_row_key")
> > > > > > > >   .option(DataSourceWriteOptions.PARTITIONPATH_FIELD_OPT_KEY(), "partition")
> > > > > > > >   .option(DataSourceWriteOptions.PRECOMBINE_FIELD_OPT_KEY(), "timestamp")
> > > > > > > >   .option(HoodieWriteConfig.TABLE_NAME, tableName)
> > > > > > > >   .option(DataSourceWriteOptions.COMMIT_METADATA_KEYPREFIX_OPT_KEY(), checkpointstr)
> > > > > > > >   .mode(SaveMode.Append)
> > > > > > > >   .save(basePath);
> > > > > > > >
> > > > > > > > I am using COMMIT_METADATA_KEYPREFIX_OPT_KEY() to insert the
> > > > > > > > checkpoint while using the dataframe writer, but I couldn't
> > > > > > > > add the checkpoint metadata into the .hoodie metadata. Is
> > > > > > > > there a way I can add the checkpoint metadata while using the
> > > > > > > > dataframe writer API?
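For anyone who wants to verify that the checkpoint actually reached the
.hoodie metadata, the sketch below simply prints the most recent commit file
under the table's .hoodie directory; any "_"-prefixed options that were picked
up should appear in that JSON. The file layout and naming (an <instant>.commit
JSON file per copy-on-write commit) are assumptions to check against the
Hudi/Hoodie version in use; basePath is the same variable as in the snippet
above.

  import org.apache.hadoop.fs.Path
  import org.apache.spark.sql.SparkSession

  val spark = SparkSession.builder().getOrCreate()
  val hoodieDir = new Path(basePath, ".hoodie")
  val fs = hoodieDir.getFileSystem(spark.sparkContext.hadoopConfiguration)

  // Pick the most recent completed commit instant file.
  val latestCommit = fs.listStatus(hoodieDir)
    .map(_.getPath)
    .filter(_.getName.endsWith(".commit"))
    .sortBy(_.getName)
    .lastOption

  // Print the raw commit metadata JSON; keys passed via option("_...", ...)
  // should show up in it if they were recorded.
  latestCommit.foreach { p =>
    val in = fs.open(p)
    try {
      println(scala.io.Source.fromInputStream(in).mkString)
    } finally {
      in.close()
    }
  }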