Subject: [Error] When writing to Phoenix 4.4
From: Divya Gehlot
To: "user @spark", user@hadoop.apache.org
Date: Tue, 12 Apr 2016 11:00:26 +0800

Hi,

I am getting an error when I try to write data to Phoenix.

*Software Configuration:*
Spark 1.5.2
Phoenix 4.4
HBase 1.1

*Spark Scala Script:*

import org.apache.spark.sql.SaveMode
import org.apache.spark.sql.functions.lit

val dfLCR = readTable(sqlContext, "", "TEST")
val schemaL = dfLCR.schema
val lcrReportPath = "/TestDivya/Spark/Results/TestData/"
val dfReadReport = sqlContext.read.format("com.databricks.spark.csv").option("header", "true").schema(schemaL).load(lcrReportPath)
dfReadReport.show()
val dfWidCol = dfReadReport.withColumn("RPT_DATE", lit("2015-01-01"))
val dfSelect = dfWidCol.select("RPT_DATE")
dfSelect.write.format("org.apache.phoenix.spark").mode(SaveMode.Overwrite).options(collection.immutable.Map(
        "zkUrl" -> "localhost",
        "table" -> "TEST")).save()

*Command Line to Run the Script:*

spark-shell \
  --conf "spark.driver.extraClassPath=/usr/hdp/2.3.4.0-3485/phoenix/phoenix-client.jar" \
  --conf "spark.executor.extraClassPath=/usr/hdp/2.3.4.0-3485/phoenix/phoenix-client.jar" \
  --properties-file /TestDivya/Spark/Phoenix.properties \
  --jars /usr/hdp/2.3.4.0-3485/phoenix/lib/phoenix-spark-4.4.0.2.3.4.0-3485.jar,/usr/hdp/2.3.4.0-3485/phoenix/phoenix-client.jar \
  --driver-class-path /usr/hdp/2.3.4.0-3485/phoenix/lib/phoenix-spark-4.4.0.2.3.4.0-3485.jar,/usr/hdp/2.3.4.0-3485/hbase/lib/phoenix-client-4.4.0.jar \
  --packages com.databricks:spark-csv_2.10:1.4.0 \
  --master yarn-client \
  -i /TestDivya/Spark/WriteToPheonix.scala

*Error Stack Trace:*

16/04/12 02:53:59 INFO YarnScheduler: Removed TaskSet 3.0, whose tasks have all completed, from pool
org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 3.0 failed 4 times, most recent failure: Lost task 1.3 in stage 3.0 (TID 410, ip-172-31-22-135.ap-southeast-1.compute.internal): java.lang.RuntimeException: java.sql.SQLException: No suitable driver found for jdbc:phoenix:localhost:2181:/hbase-unsecure;
        at org.apache.phoenix.mapreduce.PhoenixOutputFormat.getRecordWriter(PhoenixOutputFormat.java:58)
        at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1$$anonfun$12.apply(PairRDDFunctions.scala:1030)
        at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1$$anonfun$12.apply(PairRDDFunctions.scala:1014)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
        at org.apache.spark.scheduler.Task.run(Task.scala:88)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.sql.SQLException: No suitable driver found for jdbc:phoenix:localhost:2181:/hbase-unsecure;
        at java.sql.DriverManager.getConnection(DriverManager.java:596)
        at java.sql.DriverManager.getConnection(DriverManager.java:187)
        at org.apache.phoenix.mapreduce.util.ConnectionUtil.getConnection(ConnectionUtil.java:99)
        at org.apache.phoenix.mapreduce.util.ConnectionUtil.getOutputConnection(ConnectionUtil.java:82)
        at org.apache.phoenix.mapreduce.util.ConnectionUtil.getOutputConnection(ConnectionUtil.java:70)
        at org.apache.phoenix.mapreduce.PhoenixRecordWriter.<init>(PhoenixRecordWriter.java:49)
        at org.apache.phoenix.mapreduce.PhoenixOutputFormat.getRecordWriter(PhoenixOutputFormat.java:55)
        ... 8 more

Driver stacktrace:
        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1283)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1271)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1270)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
        at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1270)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
        at scala.Option.foreach(Option.scala:236)
        at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:697)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1496)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1458)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1447)
        at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
        at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:567)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:1824)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:1837)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:1914)
        at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply$mcV$sp(PairRDDFunctions.scala:1055)
        at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply(PairRDDFunctions.scala:998)
        at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply(PairRDDFunctions.scala:998)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108)
        at org.apache.spark.rdd.RDD.withScope(RDD.scala:310)
        at org.apache.spark.rdd.PairRDDFunctions.saveAsNewAPIHadoopDataset(PairRDDFunctions.scala:998)
        at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopFile$2.apply$mcV$sp(PairRDDFunctions.scala:938)
        at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopFile$2.apply(PairRDDFunctions.scala:930)
        at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopFile$2.apply(PairRDDFunctions.scala:930)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108)
        at org.apache.spark.rdd.RDD.withScope(RDD.scala:310)
        at org.apache.spark.rdd.PairRDDFunctions.saveAsNewAPIHadoopFile(PairRDDFunctions.scala:930)
        at org.apache.phoenix.spark.DataFrameFunctions.saveToPhoenix(DataFrameFunctions.scala:43)
        at org.apache.phoenix.spark.DefaultSource.createRelation(DefaultSource.scala:47)
        at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:170)

Could somebody help me figure out the missing properties/configurations, or point me to a relevant link? I would really appreciate the help.

Thanks,
Divya
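
P.S. A quick sanity check I plan to run from the same spark-shell session, just to confirm whether the Phoenix JDBC driver is visible on the driver classpath at all. This is only a sketch: the connect string is the one reported in the error above, and even if this succeeds the executors would still need the client jar on their own classpath.

import java.sql.DriverManager

// Throws ClassNotFoundException if phoenix-client.jar is not on the driver classpath.
Class.forName("org.apache.phoenix.jdbc.PhoenixDriver")

// Uses the same connect string reported in the error; throws
// "No suitable driver found" if the driver never registered with DriverManager.
val conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181:/hbase-unsecure")
conn.close()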