spark-user mailing list archives

From Muthu Jayakumar <bablo...@gmail.com>
Subject Re: Spark job on dataproc failing with Exception in thread "main" java.lang.NoSuchMethodError: com.googl
Date Thu, 20 Dec 2018 13:48:09 GMT
The error indicates that Preconditions.checkArgument() is being called with a
parameter signature that does not exist in the Guava version found at runtime.
Could you check how many jars (before building the uber jar) actually
contain this method signature?
I smell a jar version conflict or something similar.

Thanks
Muthu
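To act on the suggestion above, one way to check which jars bundle their own copy of Guava's Preconditions class is to scan the dependency jars directly. A minimal sketch (the class name and directory argument are illustrative, not part of the original thread):

```java
// Sketch: list every jar under a directory that bundles Guava's
// Preconditions class, to spot duplicate/conflicting Guava copies.
import java.io.IOException;
import java.nio.file.*;
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;
import java.util.zip.ZipFile;

public class FindGuavaJars {
    static final String GUAVA_CLASS = "com/google/common/base/Preconditions.class";

    /** Return every jar under dir that contains the Guava Preconditions class. */
    public static List<Path> jarsContaining(Path dir) throws IOException {
        List<Path> hits = new ArrayList<>();
        List<Path> jars;
        try (Stream<Path> walk = Files.walk(dir)) {
            jars = walk.filter(p -> p.toString().endsWith(".jar"))
                       .collect(Collectors.toList());
        }
        for (Path jar : jars) {
            try (ZipFile zf = new ZipFile(jar.toFile())) {
                if (zf.getEntry(GUAVA_CLASS) != null) hits.add(jar);
            } catch (IOException skip) {
                // unreadable or not a zip file; ignore and continue
            }
        }
        return hits;
    }

    public static void main(String[] args) throws IOException {
        // The directory is an assumption: point it at your ivy/coursier cache
        // or at the unpacked uber jar's dependency directory.
        Path dir = Paths.get(args.length > 0 ? args[0] : ".");
        jarsContaining(dir).forEach(System.out::println);
    }
}
```

If more than one jar prints, two different Guava copies can end up on the classpath and whichever loads first wins.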

On Thu, Dec 20, 2018, 02:40 Mich Talebzadeh <mich.talebzadeh@gmail.com>
wrote:

> Anyone in Spark user group seen this error in case?
>
> Thanks
>
> Dr Mich Talebzadeh
>
>
>
> LinkedIn: https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>
>
>
> http://talebzadehmich.wordpress.com
>
>
> *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> loss, damage or destruction of data or any other property which may arise
> from relying on this email's technical content is explicitly disclaimed.
> The author will in no case be liable for any monetary damages arising from
> such loss, damage or destruction.
>
>
>
>
> On Thu, 20 Dec 2018 at 09:38, <mich.talebzadeh@gmail.com> wrote:
>
>> Hi,
>>
>> I am trying a basic Spark job in a Scala program. I compile it with SBT
>> using the following dependencies:
>>
>> libraryDependencies += "org.apache.spark" %% "spark-core" % "2.0.0" %
>> "provided"
>> libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.0.0" %
>> "provided"
>> libraryDependencies += "org.apache.spark" %% "spark-hive" % "2.0.0" %
>> "provided"
>> libraryDependencies += "org.apache.spark" %% "spark-streaming" % "2.0.0"
>> % "provided"
>> libraryDependencies += "org.apache.spark" %% "spark-streaming-kafka" %
>> "1.6.1" % "provided"
>> libraryDependencies += "org.apache.phoenix" % "phoenix-spark" %
>> "4.6.0-HBase-1.0"
>> libraryDependencies += "org.apache.hbase" % "hbase" % "1.2.6"
>> libraryDependencies += "org.apache.hbase" % "hbase-client" % "1.2.6"
>> libraryDependencies += "org.apache.hbase" % "hbase-common" % "1.2.6"
>> libraryDependencies += "org.apache.hbase" % "hbase-server" % "1.2.6"
>> libraryDependencies += "org.mongodb.spark" %% "mongo-spark-connector" %
>> "2.2.0"
>> libraryDependencies += "org.mongodb" % "mongo-java-driver" % "3.8.1"
>> libraryDependencies += "org.apache.spark" %% "spark-streaming-twitter" %
>> "1.6.3"
>> libraryDependencies += "com.google.cloud.bigdataoss" %
>> "bigquery-connector" % "0.13.4-hadoop3"
>> libraryDependencies += "com.google.cloud.bigdataoss" % "gcs-connector" %
>> "1.9.4-hadoop3"
>> libraryDependencies += "com.google.code.gson" % "gson" % "2.8.5"
>> libraryDependencies += "com.google.guava" % "guava" % "27.0.1-jre"
>> libraryDependencies += "org.apache.httpcomponents" % "httpcore" % "4.4.8"
>>
>> It compiles fine and creates the uber jar file, but when I run it I get
>> the following error:
>>
>> Exception in thread "main" java.lang.NoSuchMethodError:
>> com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;Ljava/lang/Object;)V
>> at
>> com.google.cloud.hadoop.io.bigquery.BigQueryStrings.parseTableReference(BigQueryStrings.java:68)
>> at
>> com.google.cloud.hadoop.io.bigquery.BigQueryConfiguration.configureBigQueryInput(BigQueryConfiguration.java:260)
>> at simple$.main(simple.scala:150)
>> at simple.main(simple.scala)
>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>> at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>> at java.lang.reflect.Method.invoke(Method.java:498)
>> at
>> org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
>> at
>> org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:894)
>> at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:198)
>> at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:228)
>> at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:137)
>> at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
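The string after `checkArgument` in the error above is a JVM method descriptor. A small helper (hypothetical, added here only for illustration) decodes it into readable types, which shows the BigQuery connector is looking for `checkArgument(boolean, String, Object, Object)` — an overload that the older Guava releases bundled by Hadoop distributions do not provide:

```java
// Sketch: decode a JVM method descriptor such as
// "(ZLjava/lang/String;Ljava/lang/Object;Ljava/lang/Object;)V"
// into human-readable parameter and return types.
public class DescriptorDecoder {

    /** Decode a full method descriptor, e.g. "(ZLjava/lang/String;)V". */
    public static String decode(String desc) {
        int close = desc.indexOf(')');
        java.util.List<String> params = new java.util.ArrayList<>();
        int[] pos = {1};
        while (pos[0] < close) {
            params.add(readType(desc, pos));
        }
        pos[0] = close + 1;
        String ret = readType(desc, pos);
        return "(" + String.join(", ", params) + ") -> " + ret;
    }

    // Read one type descriptor starting at pos[0] and advance past it.
    private static String readType(String desc, int[] pos) {
        StringBuilder dims = new StringBuilder();
        while (desc.charAt(pos[0]) == '[') { dims.append("[]"); pos[0]++; }
        char c = desc.charAt(pos[0]);
        if (c == 'L') {                       // object type: Lfully/qualified/Name;
            int semi = desc.indexOf(';', pos[0]);
            String name = desc.substring(pos[0] + 1, semi).replace('/', '.');
            pos[0] = semi + 1;
            return name + dims;
        }
        pos[0]++;
        switch (c) {                          // primitive types
            case 'Z': return "boolean" + dims;
            case 'B': return "byte" + dims;
            case 'C': return "char" + dims;
            case 'S': return "short" + dims;
            case 'I': return "int" + dims;
            case 'J': return "long" + dims;
            case 'F': return "float" + dims;
            case 'D': return "double" + dims;
            case 'V': return "void";
            default: throw new IllegalArgumentException("bad descriptor char: " + c);
        }
    }

    public static void main(String[] args) {
        System.out.println(decode("(ZLjava/lang/String;Ljava/lang/Object;Ljava/lang/Object;)V"));
        // prints: (boolean, java.lang.String, java.lang.Object, java.lang.Object) -> void
    }
}
```

So the connector was compiled against a newer Guava than the one resolved at runtime, which is exactly what NoSuchMethodError signals.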
>>
>> Sounds like there is an incompatibility in Guava versions between compile
>> time and runtime? These are the versions that are used:
>>
>>
>>    - Java openjdk version "1.8.0_181"
>>    - Spark version 2.3.2
>>    - Scala version 2.11.8 (OpenJDK 64-Bit Server VM, Java 1.8.0_181)
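A common fix for this class of error, assuming the uber jar is built with the sbt-assembly plugin, is to shade (relocate) Guava inside the assembly so the application's Guava 27 classes cannot collide with the older Guava that the Hadoop/Dataproc runtime puts on the classpath. A sketch to adapt to the build above (the relocated package prefix is arbitrary):

```scala
// build.sbt (sketch): requires the sbt-assembly plugin in project/plugins.sbt.
// Relocate Guava's packages inside the uber jar so the application's copy
// no longer clashes with the cluster-provided Guava.
assemblyShadeRules in assembly := Seq(
  ShadeRule.rename("com.google.common.**" -> "repackaged.com.google.common.@1").inAll
)
```

Alternatively, pinning Guava to the version the cluster actually ships, and inspecting the resolved dependency graph for conflicting Guava versions (e.g. with the sbt-dependency-graph plugin), can narrow down which library drags in which copy.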
>>
>>
>> Appreciate any feedback.
>>
>> Thanks,
>>
>> Mich
>>
>
>>
>
