Mailing-List: contact issues-help@carbondata.apache.org; run by ezmlm
Reply-To: issues@carbondata.apache.org
From: aniketadnaik
To: issues@carbondata.apache.org
Subject: [GitHub] carbondata pull request #1352: [CARBONDATA-1174] Streaming Ingestion - schem...
Message-Id: <20170914071628.834DDF558D@git1-us-west.apache.org>
Date: Thu, 14 Sep 2017 07:16:28 +0000 (UTC)

Github user aniketadnaik commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/1352#discussion_r138813548

--- Diff: examples/spark2/src/main/scala/org/apache/carbondata/examples/streaming/CarbonStreamingIngestFileSourceExample.scala ---
@@ -0,0 +1,132 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.carbondata.examples
+
+import java.io.File
+
+import org.apache.commons.lang.RandomStringUtils
+import org.apache.spark.sql.{SaveMode, SparkSession}
+
+import org.apache.carbondata.core.constants.CarbonCommonConstants
+import org.apache.carbondata.core.util.CarbonProperties
+import org.apache.carbondata.examples.utils.StreamingCleanupUtil
+
+object CarbonStreamingIngestFileSourceExample {
+
+  def main(args: Array[String]) {
+
+    val rootPath = new File(this.getClass.getResource("/").getPath
+      + "../../../..").getCanonicalPath
+    val storeLocation = s"$rootPath/examples/spark2/target/store"
+    val warehouse = s"$rootPath/examples/spark2/target/warehouse"
+    val metastoredb = s"$rootPath/examples/spark2/target"
+    val csvDataDir = s"$rootPath/examples/spark2/resources/csvDataDir"
+    // val csvDataFile = s"$csvDataDir/sampleData.csv"
+    // val csvDataFile = s"$csvDataDir/sample.csv"
+    val streamTableName = s"_carbon_file_stream_table_"
+    val stremTablePath = s"$storeLocation/default/$streamTableName"
+    val ckptLocation = s"$rootPath/examples/spark2/resources/ckptDir"
+
+    CarbonProperties.getInstance()
+      .addProperty(CarbonCommonConstants.CARBON_TIMESTAMP_FORMAT, "yyyy/MM/dd")
+
+    // Clean up any residual files
+    StreamingCleanupUtil.main(Array(csvDataDir, ckptLocation))
+
+    import org.apache.spark.sql.CarbonSession._
+    val spark = SparkSession
+      .builder()
+      .master("local")
+      .appName("CarbonFileStreamingExample")
+      .config("spark.sql.warehouse.dir", warehouse)
+      .getOrCreateCarbonSession(storeLocation, metastoredb)
+
+    spark.sparkContext.setLogLevel("ERROR")
+
+    // Writes DataFrame to CarbonData file:
+    import spark.implicits._
+    import org.apache.spark.sql.types._
+
+    // Generate random data
+    val dataDF = spark.sparkContext.parallelize(1 to 10)
+      .map(id => (id, "name_ABC", "city_XYZ", 10000.00 * id)).
+      toDF("id", "name", "city", "salary")
+
+    // Drop the table if it exists from a previous run
+    spark.sql(s"DROP TABLE IF EXISTS ${streamTableName}")
+
+    // Create the Carbon table by saving the DataFrame to a CarbonData file
+    dataDF.write
+      .format("carbondata")
+      .option("tableName", streamTableName)
+      .option("compress", "true")
+      .option("tempCSV", "false")
+      .mode(SaveMode.Overwrite)
+      .save()
+
+    spark.sql(s""" SELECT * FROM ${streamTableName} """).show()
+
+    // Create a CSV DataFrame
+    val csvDataDF = spark.sparkContext.parallelize(11 to 30)
+      .map(id => (id,
+        s"name_${RandomStringUtils.randomAlphabetic(4).toUpperCase}",
+        s"city_${RandomStringUtils.randomAlphabetic(2).toUpperCase}",
+        10000.00 * id)).toDF("id", "name", "city", "salary")
+
+    // Write data into a CSV file (it will be used as a stream source)
+    csvDataDF.write.
+      format("com.databricks.spark.csv").

--- End diff --

Sure, df.write.csv() will make it more readable since it's native to Spark 2.x.

---
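For reference, the change the reviewer is suggesting could look something like the sketch below. It swaps the external `com.databricks.spark.csv` package for the CSV data source built into Spark 2.x (`DataFrameWriter.csv`). This is a hedged illustration, not the PR's final code; it assumes the `csvDataDF` and `csvDataDir` values defined in the example above, and the `header` option shown is an optional assumption about what the downstream stream reader expects.

```scala
import org.apache.spark.sql.SaveMode

// Sketch of the reviewer's suggestion: use Spark 2.x's built-in CSV writer
// instead of the external com.databricks.spark.csv package.
// `csvDataDF` and `csvDataDir` come from the example above.
csvDataDF.write
  .mode(SaveMode.Overwrite)
  .option("header", "false") // assumed setting; match what the stream reader expects
  .csv(csvDataDir)
```

Note that `.csv(path)` is shorthand for `.format("csv").save(path)`, so no external package or fully qualified format name is needed.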