cassandra-user mailing list archives

From "Tiwari, Tarun" <Tarun.Tiw...@Kronos.com>
Subject Getting NoClassDefFoundError for com/datastax/spark/connector/mapper/ColumnMapper
Date Tue, 31 Mar 2015 14:42:47 GMT
Hi Experts,

I am getting java.lang.NoClassDefFoundError: com/datastax/spark/connector/mapper/ColumnMapper while running an app that loads data into a Cassandra table using the DataStax Spark connector.

Is there something else I need to import in the program, or add to the dependencies?

RUNTIME ERROR:  Exception in thread "main" java.lang.NoClassDefFoundError: com/datastax/spark/connector/mapper/ColumnMapper
at ldCassandraTable.main(ld_Cassandra_tbl_Job.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
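
As a quick sanity check (just a sketch on my side, not something taken from the failing job), I can test whether the driver JVM sees the connector classes at all with a standalone snippet like this:

// Throws ClassNotFoundException if the connector is missing from the classpath.
object ClasspathCheck {
  def main(args: Array[String]): Unit = {
    Class.forName("com.datastax.spark.connector.mapper.ColumnMapper")
    println("ColumnMapper found on the classpath")
  }
}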

Below is my Scala program:

/*** ld_Cassandra_Table.scala ***/
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf
import com.datastax.spark.connector._

object ldCassandraTable {
  def main(args: Array[String]) {
    val fileName = args(0)
    val tblName = args(1)
    val keyspace = "<KEYSPACE>"

    val conf = new SparkConf(true)
      .set("spark.cassandra.connection.host", "<MASTER HOST>")
      .setMaster("<MASTER URL>")
      .setAppName("LoadCassandraTableApp")
    val sc = new SparkContext(conf)

    // Ship the connector assembly jar to the executors.
    sc.addJar("/home/analytics/Installers/spark-cassandra-connector-1.1.1/spark-cassandra-connector/target/scala-2.10/spark-cassandra-connector-assembly-1.1.1.jar")

    // Split each '|'-delimited line into its 22 fields and write them to Cassandra.
    val normalfill = sc.textFile(fileName).map(line => line.split('|'))
    normalfill.map(line => (line(0), line(1), line(2), line(3), line(4), line(5), line(6),
      line(7), line(8), line(9), line(10), line(11), line(12), line(13), line(14), line(15),
      line(16), line(17), line(18), line(19), line(20), line(21)))
      .saveToCassandra(keyspace, tblName, SomeColumns("wfctotalid", "timesheetitemid",
        "employeeid", "durationsecsqty", "wageamt", "moneyamt", "applydtm", "laboracctid",
        "paycodeid", "startdtm", "stimezoneid", "adjstartdtm", "adjapplydtm", "enddtm",
        "homeaccountsw", "notpaidsw", "wfcjoborgid", "unapprovedsw", "durationdaysqty",
        "updatedtm", "totaledversion", "acctapprovalnum"))

    println("Records loaded to %s".format(tblName))
    Thread.sleep(500)
    sc.stop()
  }
}

Below is the sbt file:

name:= "POC"
version := "0.0.1"

scalaVersion := "2.10.4"

// additional libraries
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "1.1.1" % "provided",
  "org.apache.spark" %% "spark-sql" % "1.1.1" % "provided",
  "com.datastax.spark" %% "spark-cassandra-connector" % "1.1.1" % "provided"
)
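
One thing I am not sure about: the connector is marked "provided" here and is only shipped via sc.addJar at runtime, so it is not packaged into my application jar. In case that is the problem, a variant I could try (just a sketch on my side, not something I have verified) would be to declare the connector as a regular compile-scope dependency so it ends up on the application classpath:

// additional libraries (connector no longer "provided" -- untested sketch)
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "1.1.1" % "provided",
  "org.apache.spark" %% "spark-sql" % "1.1.1" % "provided",
  "com.datastax.spark" %% "spark-cassandra-connector" % "1.1.1"
)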

Regards,
Tarun Tiwari | Workforce Analytics-ETL | Kronos India
M: +91 9540 28 27 77 | Tel: +91 120 4015200
Kronos | Time & Attendance * Scheduling * Absence Management * HR & Payroll * Hiring * Labor Analytics
Join Kronos on: kronos.com<http://www.kronos.com/> | Facebook<http://www.kronos.com/facebook> | Twitter<http://www.kronos.com/twitter> | LinkedIn<http://www.kronos.com/linkedin> | YouTube<http://www.kronos.com/youtube>

