From: Dan Burkert <dan@cloudera.com>
Date: Tue, 14 Jun 2016 18:08:15 -0700
Subject: Re: Spark on Kudu
To: user@kudu.incubator.apache.org

I'm not sure exactly what the semantics will be, but at least one of them will be upsert. These modes come from Spark, and they were really designed for file-backed storage and not table storage. We may want to do append = upsert, and overwrite = truncate + insert. I think that may match the normal Spark semantics more closely.

- Dan
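A minimal sketch of what that proposed mapping would look like from the caller's side, using the write syntax that appears later in this thread. This illustrates the proposal above, not shipped behavior; the import path, `df`, `kuduMaster`, and `tableName` are assumed from the other examples in the thread.

    import org.kududb.spark.kudu._  // assumed package providing the .kudu write syntax

    // Proposed: "append" would upsert each row (insert new keys, update existing ones).
    df.write
      .options(Map("kudu.master" -> kuduMaster, "kudu.table" -> tableName))
      .mode("append")
      .kudu

    // Proposed: "overwrite" would truncate the table, then insert the DataFrame.
    df.write
      .options(Map("kudu.master" -> kuduMaster, "kudu.table" -> tableName))
      .mode("overwrite")
      .kudu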
On Tue, Jun 14, 2016 at 6:00 PM, Benjamin Kim <bbuild11@gmail.com> wrote:

Dan,

Thanks for the information. That would mean both "append" and "overwrite" modes would be combined or not needed in the future.

Cheers,
Ben

On Jun 14, 2016, at 5:57 PM, Dan Burkert <dan@cloudera.com> wrote:

Right now append uses an update Kudu operation, which requires the row to already be present in the table. Overwrite maps to insert. Kudu very recently got upsert support baked in, but it hasn't yet been integrated into the Spark connector. So pretty soon these sharp edges will get a lot better, since upsert is the way to go for most Spark workloads.

- Dan

On Tue, Jun 14, 2016 at 5:41 PM, Benjamin Kim <bbuild11@gmail.com> wrote:

I tried to use the "append" mode, and it worked. Over 3.8 million rows in 64s. I would assume that now I can use the "overwrite" mode on existing data. Now, I have to find answers to these questions. What would happen if I "append" to the data in the Kudu table if the data already exists? What would happen if I "overwrite" existing data when the DataFrame has data in it that does not exist in the Kudu table? I need to evaluate the best way to simulate the UPSERT behavior in HBase because this is what our use case is.

Thanks,
Ben

On Jun 14, 2016, at 5:05 PM, Benjamin Kim <bbuild11@gmail.com> wrote:

Hi,

Now, I'm getting this error when trying to write to the table.

    import scala.collection.JavaConverters._
    val key_seq = Seq("my_id")
    val key_list = List("my_id").asJava
    kuduContext.createTable(tableName, df.schema, key_seq,
      new CreateTableOptions().setNumReplicas(1).addHashPartitions(key_list, 100))

    df.write
      .options(Map("kudu.master" -> kuduMaster, "kudu.table" -> tableName))
      .mode("overwrite")
      .kudu

    java.lang.RuntimeException: failed to write 1000 rows from DataFrame to Kudu;
    sample errors: Not found: key not found (error 0) Not found: key not found (error 0)
    Not found: key not found (error 0) Not found: key not found (error 0)
    Not found: key not found (error 0)

Does the key field need to be first in the DataFrame?

Thanks,
Ben

On Jun 14, 2016, at 4:28 PM, Dan Burkert <dan@cloudera.com> wrote:

On Tue, Jun 14, 2016 at 4:20 PM, Benjamin Kim <bbuild11@gmail.com> wrote:

> Dan,
>
> Thanks! It got further. Now, how do I set the Primary Key to be a
> column(s) in the DataFrame and set the partitioning? Is it like this?
>
>     kuduContext.createTable(tableName, df.schema, Seq("my_id"),
>       new CreateTableOptions().setNumReplicas(1).addHashPartitions("my_id"))
>
>     java.lang.IllegalArgumentException: Table partitioning must be specified
>     using setRangePartitionColumns or addHashPartitions

Yep. The `Seq("my_id")` part of that call is specifying the set of primary key columns, so in this case you have specified the single PK column "my_id". The `addHashPartitions` call adds hash partitioning to the table, in this case over the column "my_id" (which is good; it must be over one or more PK columns, so in this case "my_id" is the one and only valid combination). However, the call to `addHashPartitions` also takes the number of buckets as the second param. You shouldn't get the IllegalArgumentException as long as you are specifying either `addHashPartitions` or `setRangePartitionColumns`.

- Dan

> Thanks,
> Ben

On Jun 14, 2016, at 4:07 PM, Dan Burkert <dan@cloudera.com> wrote:

Looks like we're missing an import statement in that example. Could you try:

    import org.kududb.client._

and try again?

- Dan

On Tue, Jun 14, 2016 at 4:01 PM, Benjamin Kim <bbuild11@gmail.com> wrote:

I encountered an error trying to create a table from a DataFrame based on the documentation.

    <console>:49: error: not found: type CreateTableOptions
           kuduContext.createTable(tableName, df.schema, Seq("key"),
             new CreateTableOptions().setNumReplicas(1))

Is there something I'm missing?

Thanks,
Ben

On Jun 14, 2016, at 3:00 PM, Jean-Daniel Cryans <jdcryans@apache.org> wrote:

It's only in Cloudera's maven repo:
https://repository.cloudera.com/cloudera/cloudera-repos/org/kududb/kudu-spark_2.10/0.9.0/

J-D
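Given that repository layout, one way to pull the connector into spark-shell is to let Spark resolve the artifact itself. The flags below are standard Spark 1.x options; the coordinates are read off the URL above.

    spark-shell \
      --repositories https://repository.cloudera.com/cloudera/cloudera-repos \
      --packages org.kududb:kudu-spark_2.10:0.9.0

    # or, with the jar downloaded from that URL:
    spark-shell --jars kudu-spark_2.10-0.9.0.jar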
On Tue, Jun 14, 2016 at 2:59 PM, Benjamin Kim <bbuild11@gmail.com> wrote:

Hi J-D,

I installed Kudu 0.9.0 using CM, but I can't find the kudu-spark jar for spark-shell to use. Can you show me where to find it?

Thanks,
Ben

On Jun 8, 2016, at 1:19 PM, Jean-Daniel Cryans <jdcryans@apache.org> wrote:

What's in this doc is what's gonna get released:
https://github.com/cloudera/kudu/blob/master/docs/developing.adoc#kudu-integration-with-spark

J-D

On Tue, Jun 7, 2016 at 8:52 PM, Benjamin Kim <bbuild11@gmail.com> wrote:

Will this be documented with examples once 0.9.0 comes out?

Thanks,
Ben

On May 28, 2016, at 3:22 PM, Jean-Daniel Cryans <jdcryans@apache.org> wrote:

It will be in 0.9.0.

J-D

On Sat, May 28, 2016 at 8:31 AM, Benjamin Kim <bbuild11@gmail.com> wrote:

Hi Chris,

Will all this effort be rolled into 0.9.0 and be ready for use?

Thanks,
Ben

On May 18, 2016, at 9:01 AM, Chris George <Christopher.George@rms.com> wrote:

There is some code in review that needs some more refinement. It will allow upsert/insert from a dataframe using the datasource api. It will also allow the creation and deletion of tables from a dataframe.
http://gerrit.cloudera.org:8080/#/c/2992/

Example usages will look something like:
http://gerrit.cloudera.org:8080/#/c/2992/5/docs/developing.adoc

-Chris George
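To make the shape of that concrete, here is a minimal sketch of datasource usage in the style of the draft doc linked above. The `org.kududb.spark.kudu._` import that supplies the `.kudu` syntax is assumed from that patch, and `kuduMaster`, `tableName`, and `otherDf` are placeholders.

    import org.kududb.spark.kudu._  // assumed: provides the .kudu read/write syntax

    // Load a Kudu table as a DataFrame through the datasource...
    val df = sqlContext.read
      .options(Map("kudu.master" -> kuduMaster, "kudu.table" -> tableName))
      .kudu

    // ...and write rows from another DataFrame into the same table.
    otherDf.write
      .options(Map("kudu.master" -> kuduMaster, "kudu.table" -> tableName))
      .mode("append")
      .kudu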
On 5/18/16, 9:45 AM, "Benjamin Kim" <bbuild11@gmail.com> wrote:

Can someone tell me what the state is of this Spark work?

Also, does anyone have any sample code on how to update/insert data in Kudu using DataFrames?

Thanks,
Ben

On Apr 13, 2016, at 8:22 AM, Chris George <Christopher.George@rms.com> wrote:

SparkSQL cannot support these types of statements, but we may be able to implement similar functionality through the api.
-Chris

On 4/12/16, 5:19 PM, "Benjamin Kim" <bbuild11@gmail.com> wrote:

It would be nice to adhere to the SQL:2003 standard for an "upsert" if it were to be implemented.

    MERGE INTO table_name USING table_reference ON (condition)
      WHEN MATCHED THEN
        UPDATE SET column1 = value1 [, column2 = value2 ...]
      WHEN NOT MATCHED THEN
        INSERT (column1 [, column2 ...]) VALUES (value1 [, value2 ...])

Cheers,
Ben

On Apr 11, 2016, at 12:21 PM, Chris George <Christopher.George@rms.com> wrote:

I have a wip kuduRDD that I made a few months ago. I pushed it into gerrit if you want to take a look.
http://gerrit.cloudera.org:8080/#/c/2754/
It does push down predicates, which the existing input-format-based rdd does not.

Within the next two weeks I'm planning to implement a datasource for spark that will have pushdown predicates and insertion/update functionality (need to look more at cassandra and the hbase datasource for the best way to do this). I agree that server-side upsert would be helpful. Having a datasource would give us useful data frames and also make spark sql usable for kudu.

My reasoning for having a spark datasource and not using Impala is:
1. We have had trouble getting impala to run fast with high concurrency when compared to spark.
2. We interact with datasources which do not integrate with impala.
3. We have custom sql query planners for extended sql functionality.

-Chris George

On 4/11/16, 12:22 PM, "Jean-Daniel Cryans" <jdcryans@apache.org> wrote:

You guys make a convincing point, although on the upsert side we'll need more support from the servers. Right now all you can do is an INSERT and then, if you get a dup key, do an UPDATE. I guess we could at least add an API on the client side that would manage it, but it wouldn't be atomic.

J-D
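As a sketch of that non-atomic client-side fallback, using the Java client API from the spark-shell (the `my_id` and `value` columns and the error check are illustrative; `kuduMaster` and `tableName` are as in the earlier examples):

    import org.kududb.client.KuduClient

    val client  = new KuduClient.KuduClientBuilder(kuduMaster).build()
    val table   = client.openTable(tableName)
    val session = client.newSession()  // default AUTO_FLUSH_SYNC: apply() returns a response

    def upsert(id: String, value: Long): Unit = {
      val insert = table.newInsert()
      insert.getRow.addString("my_id", id)
      insert.getRow.addLong("value", value)
      if (session.apply(insert).hasRowError) {  // e.g. the key is already present
        val update = table.newUpdate()          // fall back to an UPDATE of the same row
        update.getRow.addString("my_id", id)
        update.getRow.addLong("value", value)
        session.apply(update)
      }
    }

Another writer can slip in between the INSERT and the UPDATE, which is exactly why this isn't atomic.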
On Mon, Apr 11, 2016 at 9:34 AM, Mark Hamstra <mark@clearstorydata.com> wrote:

It's pretty simple, actually. I need to support versioned datasets in a Spark SQL environment. Instead of a hack on top of a Parquet data store, I'm hoping (among other reasons) to be able to use Kudu's write and timestamp-based read operations to support not only appending data, but also updating existing data, and even some schema migration. The most typical use case is a dataset that is updated periodically (e.g., weekly or monthly) in which the preliminary data in the previous window (week or month) is updated with values that are expected to remain unchanged from then on, and a new set of preliminary values for the current window needs to be added/appended.

Using Kudu's Java API and developing additional functionality on top of what Kudu has to offer isn't too much to ask, but the ease of integration with Spark SQL will gate how quickly we would move to using Kudu and how seriously we'd look at alternatives before making that decision.

On Mon, Apr 11, 2016 at 8:14 AM, Jean-Daniel Cryans <jdcryans@apache.org> wrote:

Mark,

Thanks for taking some time to reply in this thread, glad it caught the attention of other folks!

On Sun, Apr 10, 2016 at 12:33 PM, Mark Hamstra <mark@clearstorydata.com> wrote:

> > Do they care about being able to insert into Kudu with SparkSQL?
>
> I care about insert into Kudu with Spark SQL. I'm currently delaying a
> refactoring of some Spark SQL-oriented insert functionality while trying to
> evaluate what to expect from Kudu. Whether Kudu does a good job supporting
> inserts with Spark SQL will be a key consideration as to whether we adopt Kudu.

I'd like to know more about why SparkSQL inserts are necessary for you. Is it just that you currently do it that way into some database or parquet, so with minimal refactoring you'd be able to use Kudu? Would re-writing those SQL lines into Scala and directly using the Java API's KuduSession be too much work?

Additionally, what do you expect to gain from using Kudu vs. your current solution? If it's not completely clear, I'd love to help you think through it.

On Sun, Apr 10, 2016 at 12:23 PM, Jean-Daniel Cryans <jdcryans@apache.org> wrote:

Yup, starting to get a good idea.

What are your DS folks looking for in terms of functionality related to Spark? A SparkSQL integration that's as fully featured as Impala's? Do they care about being able to insert into Kudu with SparkSQL, or just about being able to query real fast? Anything more specific to Spark that I'm missing?

FWIW the plan is to get to 1.0 in late Summer/early Fall. At Cloudera all our resources are committed to making things happen in time, and a more fully featured Spark integration isn't in our plans during that period. I'm really hoping someone in the community will help with Spark, the same way we got a big contribution for the Flume sink.

J-D
On Sun, Apr 10, 2016 at 11:29 AM, Benjamin Kim <bbuild11@gmail.com> wrote:

Yes, we took Kudu for a test run using the 0.6 and 0.7 versions. But, since it's not "production-ready", upper management doesn't want to fully deploy it yet. They just want to keep an eye on it though. Kudu was so much simpler and easier to use in every aspect compared to HBase. Impala was great for the report writers and analysts to experiment with for the short time it was up. But, once again, the only blocker was the lack of Spark support for our Data Developers/Scientists. So, production-level data population won't happen until then.

I hope this helps you get an idea where I am coming from…

Cheers,
Ben

On Apr 10, 2016, at 11:08 AM, Jean-Daniel Cryans <jdcryans@apache.org> wrote:

On Sun, Apr 10, 2016 at 12:30 AM, Benjamin Kim <bbuild11@gmail.com> wrote:

> J-D,
>
> The main thing I hear is that Cassandra is being used as an updatable hot
> data store to ensure that duplicates are taken care of and idempotency is
> maintained. Whether data was directly retrieved from Cassandra for
> analytics, reports, or searches, it was not clear what its main use was.
> Some also just used it as a staging area to populate downstream tables in
> parquet format. The last thing I heard was that CQL was terrible, so that
> rules out much use of direct queries against it.

I'm no C* expert, but I don't think CQL is meant for real analytics, just ease of use instead of plainly using the APIs. Even then, Kudu should beat it easily on big scans. Same for HBase. We've done benchmarks against the latter, not the former.

> As for our company, we have been looking for an updatable data store for a
> long time that can be quickly queried directly, either using Spark SQL or
> Impala or some other SQL engine, and that can still handle TB or PB of data
> without performance degradation and many configuration headaches. For now,
> we are using HBase to take on this role, with Phoenix as a fast way to
> directly query the data. I can see Kudu as the best way to fill this gap
> easily, especially being the closest thing to other relational databases
> out there in familiarity for the many SQL analytics people in our company.
> The other alternative would be to go with AWS Redshift for the same
> reasons, but it would come at a cost, of course. If we went with either
> solution, Kudu or Redshift, it would get rid of the need to extract from
> HBase to parquet tables or export to PostgreSQL to support more of the SQL
> language used by analysts or the reporting software we use.

Ok, the usual then *smile*. Looks like we're not too far off with Kudu. Have you folks tried Kudu with Impala yet with those use cases?

> I hope this helps.

It does, thanks for the nice reply.

> Cheers,
> Ben

On Apr 9, 2016, at 2:00 PM, Jean-Daniel Cryans <jdcryans@apache.org> wrote:

Ha, first time I'm hearing about SMACK. Inside Cloudera we like to refer to "Impala + Kudu" as Kimpala, but yeah, it's not as sexy. My colleagues who were also there did say that the hype around Spark isn't dying down.

There's definitely an overlap in the use cases that Cassandra, HBase, and Kudu cater to. I wouldn't go as far as saying that C* is just an interim solution for the use case you describe.

Nothing significant happened in Kudu over the past month; it's a storage engine, so things move slowly *smile*. I'd love to see more contributions on the Spark front. I know there's code out there that could be integrated into kudu-spark, it just needs to land in gerrit. I'm sure folks will happily review it.

Do you have relevant experiences you can share? I'd love to learn more about the use cases for which you envision using Kudu as a C* replacement.

Thanks,

J-D

On Fri, Apr 8, 2016 at 12:45 PM, Benjamin Kim <bbuild11@gmail.com> wrote:

Hi J-D,

My colleagues recently came back from Strata in San Jose. They told me that everything was about Spark and there is a big buzz about the SMACK stack (Spark, Mesos, Akka, Cassandra, Kafka). I still think that Cassandra is just an interim solution as a low-latency, easily queried data store. I was wondering if anything significant happened in regards to Kudu, especially on the Spark front. Plus, can you come up with your own proposed stack acronym to promote?

Cheers,
Ben
On Mar 1, 2016, at 12:20 PM, Jean-Daniel Cryans <jdcryans@apache.org> wrote:

Hi Ben,

AFAIK no one in the dev community committed to any timeline. I know of one person on the Kudu Slack who's working on a better RDD, but that's about it.

Regards,

J-D

On Tue, Mar 1, 2016 at 11:00 AM, Benjamin Kim <bkim@amobee.com> wrote:

Hi J-D,

Quick question… Is there an ETA for KUDU-1214? I want to target a version of Kudu to begin real testing of Spark against it for our devs. At least, I can tell them what timeframe to anticipate.

Just curious,

Benjamin Kim
Data Solutions Architect
[a•mo•bee] (n.) the company defining digital marketing.
Mobile: +1 818 635 2900
3250 Ocean Park Blvd, Suite 200 | Santa Monica, CA 90405 | www.amobee.com

On Feb 24, 2016, at 3:51 PM, Jean-Daniel Cryans <jdcryans@apache.org> wrote:

The DStream stuff isn't there at all. I'm not sure if it's needed either.

The kuduRDD is just leveraging the MR input format; ideally we'd use scans directly.

The SparkSQL stuff is there, but it doesn't do any sort of pushdown. It's really basic.

The goal was to provide something for others to contribute to. We have some basic unit tests that others can easily extend. None of us on the team are Spark experts, but we'd be really happy to assist anyone improving the kudu-spark code.

J-D

On Wed, Feb 24, 2016 at 3:41 PM, Benjamin Kim <bbuild11@gmail.com> wrote:

J-D,

It looks like it fulfills most of the basic requirements (kudu RDD, kudu DStream) in KUDU-1214. Am I right? Besides shoring up more Spark SQL functionality (DataFrames) and doing the documentation, what more needs to be done? Optimizations?

I believe that it's a good place to start using Spark with Kudu and compare it to HBase with Spark (not clean).

Thanks,
Ben

On Feb 24, 2016, at 3:10 PM, Jean-Daniel Cryans <jdcryans@apache.org> wrote:

AFAIK no one is working on it, but we did manage to get this in for 0.7.0: https://issues.cloudera.org/browse/KUDU-1321

It's a really simple wrapper, and yes, you can use SparkSQL on Kudu, but it will require a lot more work to make it fast/useful.

Hope this helps,

J-D
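For reference, a minimal sketch of "SparkSQL on Kudu" in that spirit, written against the 0.9.0-era datasource syntax shown earlier in the thread; without pushdown, the WHERE clause is evaluated on the Spark side after scanning.

    import org.kududb.spark.kudu._  // assumed package for the .kudu read syntax

    val df = sqlContext.read
      .options(Map("kudu.master" -> kuduMaster, "kudu.table" -> tableName))
      .kudu
    df.registerTempTable("kudu_tmp")  // temp table name is illustrative
    sqlContext.sql("SELECT my_id FROM kudu_tmp LIMIT 10").show()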
When this is comp= lete, will this >>>>>>>>>>>>>>>>> mean that Spark will be able to work with Kudu both progr= ammatically and as >>>>>>>>>>>>>>>>> a client via Spark SQL? Or is there more work that needs = to be done on the >>>>>>>>>>>>>>>>> Spark side for it to work? >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> Just curious. >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> Cheers, >>>>>>>>>>>>>>>>> Ben >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>> >>>>>>>>> >>>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>> >>>>>> >>>>> >>>>> >>>> >>>> >>> >>> >> >> >> > > --001a113c24caf25852053546c3a8 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable