Date: Mon, 27 Nov 2017 09:38:00 +0000 (UTC)
From: "Chetan Bhat (JIRA)"
To: issues@carbondata.apache.org
Reply-To: dev@carbondata.apache.org
Subject: [jira] [Closed] (CARBONDATA-1726) Carbon1.3.0-Streaming - Null pointer exception is thrown when streaming is started in spark-shell

[ https://issues.apache.org/jira/browse/CARBONDATA-1726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chetan Bhat closed CARBONDATA-1726.
-----------------------------------
    Resolution: Fixed
    Fix Version/s: 1.3.0

Issue is fixed in 1.3.0.B002 build jars.
> Carbon1.3.0-Streaming - Null pointer exception is thrown when streaming is started in spark-shell
> -------------------------------------------------------------------------------------------------
>
>                 Key: CARBONDATA-1726
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-1726
>             Project: CarbonData
>          Issue Type: Bug
>          Components: data-query
>    Affects Versions: 1.3.0
>         Environment: 3 node ant cluster SUSE 11 SP4
>            Reporter: Chetan Bhat
>            Priority: Blocker
>              Labels: Functional
>             Fix For: 1.3.0
>
>
> Steps:
> // prepare csv file for batch loading
> cd /srv/spark2.2Bigdata/install/hadoop/datanode/bin
> // generate streamSample.csv
> 100000001,batch_1,city_1,0.1,school_1:school_11$20
> 100000002,batch_2,city_2,0.2,school_2:school_22$30
> 100000003,batch_3,city_3,0.3,school_3:school_33$40
> 100000004,batch_4,city_4,0.4,school_4:school_44$50
> 100000005,batch_5,city_5,0.5,school_5:school_55$60
> // put to hdfs /tmp/streamSample.csv
> ./hadoop fs -put streamSample.csv /tmp
> // spark-beeline
> cd /srv/spark2.2Bigdata/install/spark/sparkJdbc
> bin/spark-submit --master yarn-client --executor-memory 10G --executor-cores 5 --driver-memory 5G --num-executors 3 --class org.apache.carbondata.spark.thriftserver.CarbonThriftServer /srv/spark2.2Bigdata/install/spark/sparkJdbc/carbonlib/carbondata_2.11-1.3.0-SNAPSHOT-shade-hadoop2.7.2.jar "hdfs://hacluster/user/sparkhive/warehouse"
> bin/beeline -u jdbc:hive2://10.18.98.34:23040
> CREATE TABLE stream_table(
>   id INT,
>   name STRING,
>   city STRING,
>   salary FLOAT
> )
> STORED BY 'carbondata'
> TBLPROPERTIES('streaming'='true', 'sort_columns'='name');
> LOAD DATA LOCAL INPATH 'hdfs://hacluster/chetan/streamSample.csv' INTO TABLE stream_table OPTIONS('HEADER'='false');
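> // The five sample rows can also be generated programmatically instead of
> // typed by hand. A minimal sketch, assuming the file is written to the
> // current working directory before the hadoop fs -put step (row values
> // exactly as listed above):
> import java.io.{File, PrintWriter}
> val out = new PrintWriter(new File("streamSample.csv"))
> try {
>   (1 to 5).foreach { i =>
>     // id, name, city, salary, plus a complex field delimited by ':' and '$'
>     out.println(s"10000000$i,batch_$i,city_$i,0.$i,school_$i:school_$i$i$$${(i + 1) * 10}")
>   }
> } finally {
>   out.close()
> }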
> // spark-shell
> cd /srv/spark2.2Bigdata/install/spark/sparkJdbc
> bin/spark-shell --master yarn-client --executor-memory 10G --executor-cores 5 --driver-memory 5G --num-executors 3 --jars /srv/spark2.2Bigdata/install/spark/sparkJdbc/carbonlib/carbondata_2.11-1.3.0-SNAPSHOT-shade-hadoop2.7.2.jar
> import java.io.{File, PrintWriter}
> import java.net.ServerSocket
> import org.apache.spark.sql.{CarbonEnv, SparkSession}
> import org.apache.spark.sql.hive.CarbonRelation
> import org.apache.spark.sql.streaming.{ProcessingTime, StreamingQuery}
> import org.apache.carbondata.core.constants.CarbonCommonConstants
> import org.apache.carbondata.core.util.CarbonProperties
> import org.apache.carbondata.core.util.path.{CarbonStorePath, CarbonTablePath}
> CarbonProperties.getInstance().addProperty(CarbonCommonConstants.CARBON_TIMESTAMP_FORMAT, "yyyy/MM/dd")
> import org.apache.spark.sql.CarbonSession._
> val carbonSession = SparkSession.
>   builder().
>   appName("StreamExample").
>   config("spark.sql.warehouse.dir", "hdfs://hacluster/user/sparkhive/warehouse").
>   config("javax.jdo.option.ConnectionURL", "jdbc:mysql://10.18.98.34:3306/sparksql?characterEncoding=UTF-8").
>   config("javax.jdo.option.ConnectionDriverName", "com.mysql.jdbc.Driver").
>   config("javax.jdo.option.ConnectionPassword", "huawei").
>   config("javax.jdo.option.ConnectionUserName", "sparksql").
>   getOrCreateCarbonSession()
>
> carbonSession.sparkContext.setLogLevel("ERROR")
> carbonSession.sql("select * from stream_table").show
>
> def writeSocket(serverSocket: ServerSocket): Thread = {
>   val thread = new Thread() {
>     override def run(): Unit = {
>       // wait for a client connection request and accept it
>       val clientSocket = serverSocket.accept()
>       val socketWriter = new PrintWriter(clientSocket.getOutputStream())
>       var index = 0
>       for (_ <- 1 to 1000) {
>         // write 101 records per iteration (0 to 100 inclusive)
>         for (_ <- 0 to 100) {
>           index = index + 1
>           socketWriter.println(index.toString + ",name_" + index
>             + ",city_" + index + "," + (index * 10000.00).toString +
>             ",school_" + index + ":school_" + index + index + "$" + index)
>         }
>         socketWriter.flush()
>         Thread.sleep(2000)
>       }
>       socketWriter.close()
>       System.out.println("Socket closed")
>     }
>   }
>   thread.start()
>   thread
> }
>
> def startStreaming(spark: SparkSession, tablePath: CarbonTablePath): Thread = {
>   val thread = new Thread() {
>     override def run(): Unit = {
>       var qry: StreamingQuery = null
>       try {
>         val readSocketDF = spark.readStream
>           .format("socket")
>           .option("host", "10.18.98.34")
>           .option("port", 7071)
>           .load()
>         // Write data from socket stream to carbondata file
>         qry = readSocketDF.writeStream
>           .format("carbondata")
>           .trigger(ProcessingTime("5 seconds"))
>           .option("checkpointLocation", tablePath.getStreamingCheckpointDir)
>           .option("tablePath", tablePath.getPath)
>           .start()
>         qry.awaitTermination()
>       } catch {
>         case _: InterruptedException =>
>           println("Done reading and writing streaming data")
>       } finally {
>         qry.stop()
>       }
>     }
>   }
>   thread.start()
>   thread
> }
>
> val streamTableName = s"stream_table"
> val carbonTable = CarbonEnv.getInstance(carbonSession).carbonMetastore.
>   lookupRelation(Some("default"), streamTableName)(carbonSession).asInstanceOf[CarbonRelation].
>   tableMeta.carbonTable
> val tablePath = CarbonStorePath.getCarbonTablePath(carbonTable.getAbsoluteTableIdentifier)
> val serverSocket = new ServerSocket(7071)
> val socketThread = writeSocket(serverSocket)
> val streamingThread = startStreaming(carbonSession, tablePath)
>
> *Issue: There is a null pointer exception when streaming is started.*
> The issue still occurs even when the executor and driver cores and memory are increased while launching the spark-shell.
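> // One detail worth noting in the reproduction code itself: if
> // writeStream...start() throws, qry is still null when the finally block
> // runs, so qry.stop() would itself raise a NullPointerException inside the
> // anonymous thread, which would match the Thread-104 trace in the transcript
> // below. A minimal defensive variant of that finally block (a sketch for
> // illustration, not the fix shipped in 1.3.0.B002):
> } finally {
>   // qry remains null if start() threw before the query was created,
>   // so guard stop() to avoid masking the original failure with an NPE
>   if (qry != null) {
>     qry.stop()
>   }
> }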
> scala> import java.io.{File, PrintWriter}
> import java.io.{File, PrintWriter}
>
> scala> import java.net.ServerSocket
> import java.net.ServerSocket
>
> scala> import org.apache.spark.sql.{CarbonEnv, SparkSession}
> import org.apache.spark.sql.{CarbonEnv, SparkSession}
>
> scala> import org.apache.spark.sql.hive.CarbonRelation
> import org.apache.spark.sql.hive.CarbonRelation
>
> scala> import org.apache.spark.sql.streaming.{ProcessingTime, StreamingQuery}
> import org.apache.spark.sql.streaming.{ProcessingTime, StreamingQuery}
>
> scala> import org.apache.carbondata.core.constants.CarbonCommonConstants
> import org.apache.carbondata.core.constants.CarbonCommonConstants
>
> scala> import org.apache.carbondata.core.util.CarbonProperties
> import org.apache.carbondata.core.util.CarbonProperties
>
> scala> import org.apache.carbondata.core.util.path.{CarbonStorePath, CarbonTablePath}
> import org.apache.carbondata.core.util.path.{CarbonStorePath, CarbonTablePath}
>
> scala> CarbonProperties.getInstance().addProperty(CarbonCommonConstants.CARBON_TIMESTAMP_FORMAT, "yyyy/MM/dd")
> res0: org.apache.carbondata.core.util.CarbonProperties = org.apache.carbondata.core.util.CarbonProperties@7212b28e
>
> scala> import org.apache.spark.sql.CarbonSession._
> import org.apache.spark.sql.CarbonSession._
>
> scala> val carbonSession = SparkSession.
>      | builder().
>      | appName("StreamExample").
>      | config("spark.sql.warehouse.dir", "hdfs://hacluster/user/sparkhive/warehouse").
>      | config("javax.jdo.option.ConnectionURL", "jdbc:mysql://10.18.98.34:3306/sparksql?characterEncoding=UTF-8").
>      | config("javax.jdo.option.ConnectionDriverName", "com.mysql.jdbc.Driver").
>      | config("javax.jdo.option.ConnectionPassword", "huawei").
>      | config("javax.jdo.option.ConnectionUserName", "sparksql").
>      | getOrCreateCarbonSession()
> carbonSession: org.apache.spark.sql.SparkSession = org.apache.spark.sql.CarbonSession@7593716d
>
> scala>
>      | carbonSession.sparkContext.setLogLevel("ERROR")
>
> scala> carbonSession.sql("select * from stream_table").show
> +---------+-------+------+------+
> |       id|   name|  city|salary|
> +---------+-------+------+------+
> |100000001|batch_1|city_1|   0.1|
> |100000002|batch_2|city_2|   0.2|
> |100000003|batch_3|city_3|   0.3|
> |100000004|batch_4|city_4|   0.4|
> |100000005|batch_5|city_5|   0.5|
> +---------+-------+------+------+
>
> scala> def writeSocket(serverSocket: ServerSocket): Thread = {
>      |   val thread = new Thread() {
>      |     override def run(): Unit = {
>      |       // wait for a client connection request and accept it
>      |       val clientSocket = serverSocket.accept()
>      |       val socketWriter = new PrintWriter(clientSocket.getOutputStream())
>      |       var index = 0
>      |       for (_ <- 1 to 1000) {
>      |         // write 101 records per iteration (0 to 100 inclusive)
>      |         for (_ <- 0 to 100) {
>      |           index = index + 1
>      |           socketWriter.println(index.toString + ",name_" + index
>      |             + ",city_" + index + "," + (index * 10000.00).toString +
>      |             ",school_" + index + ":school_" + index + index + "$" + index)
>      |         }
>      |         socketWriter.flush()
>      |         Thread.sleep(2000)
>      |       }
>      |       socketWriter.close()
>      |       System.out.println("Socket closed")
>      |     }
>      |   }
>      |   thread.start()
>      |   thread
>      | }
> writeSocket: (serverSocket: java.net.ServerSocket)Thread
>
> scala> def startStreaming(spark: SparkSession, tablePath: CarbonTablePath): Thread = {
>      |   val thread = new Thread() {
>      |     override def run(): Unit = {
>      |       var qry: StreamingQuery = null
>      |       try {
>      |         val readSocketDF = spark.readStream
>      |           .format("socket")
>      |           .option("host", "10.18.98.34")
>      |           .option("port", 7071)
>      |           .load()
>      |
>      |         // Write data from socket stream to carbondata file
>      |         qry = readSocketDF.writeStream
>      |           .format("carbondata")
>      |           .trigger(ProcessingTime("5 seconds"))
>      |           .option("checkpointLocation", tablePath.getStreamingCheckpointDir)
>      |           .option("tablePath", tablePath.getPath)
>      |           .start()
>      |
>      |         qry.awaitTermination()
>      |       } catch {
>      |         case _: InterruptedException =>
>      |           println("Done reading and writing streaming data")
>      |       } finally {
>      |         qry.stop()
>      |       }
>      |     }
>      |   }
>      |   thread.start()
>      |   thread
>      | }
> startStreaming: (spark: org.apache.spark.sql.SparkSession, tablePath: org.apache.carbondata.core.util.path.CarbonTablePath)Thread
>
> scala> val streamTableName = s"stream_table"
> streamTableName: String = stream_table
>
> scala> val carbonTable = CarbonEnv.getInstance(carbonSession).carbonMetastore.
>      |   lookupRelation(Some("default"), streamTableName)(carbonSession).asInstanceOf[CarbonRelation].
>      |   tableMeta.carbonTable
> carbonTable: org.apache.carbondata.core.metadata.schema.table.CarbonTable = org.apache.carbondata.core.metadata.schema.table.CarbonTable@62cf8fda
>
> scala> val tablePath = CarbonStorePath.getCarbonTablePath(carbonTable.getAbsoluteTableIdentifier)
> tablePath: org.apache.carbondata.core.util.path.CarbonTablePath = hdfs://hacluster/user/hive/warehouse/carbon.store/default/stream_table
>
> scala> val serverSocket = new ServerSocket(7071)
> serverSocket: java.net.ServerSocket = ServerSocket[addr=0.0.0.0/0.0.0.0,localport=7071]
>
> scala> val socketThread = writeSocket(serverSocket)
> socketThread: Thread = Thread[Thread-103,5,main]
>
> scala> val streamingThread = startStreaming(carbonSession, tablePath)
> streamingThread: Thread = Thread[Thread-104,5,main]
>
> *scala> Exception in thread "Thread-104" java.lang.NullPointerException*
> *        at $line29.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$anon$1.run(<console>:59)*
>
> Expected: startStreaming should not throw an exception and streaming should start successfully.

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)