spark-issues mailing list archives

From "Alexandru Barbulescu (JIRA)" <>
Subject [jira] [Created] (SPARK-27623) Provider org.apache.spark.sql.avro.AvroFileFormat could not be instantiated
Date Thu, 02 May 2019 14:16:00 GMT
Alexandru Barbulescu created SPARK-27623:

             Summary: Provider org.apache.spark.sql.avro.AvroFileFormat could not be instantiated
                 Key: SPARK-27623
             Project: Spark
          Issue Type: Bug
          Components: PySpark
    Affects Versions: 2.4.2
            Reporter: Alexandru Barbulescu

After updating to Spark 2.4.2, when using the `.read.format(...).options(...).load()` chain of methods, regardless of what parameter is passed to `format`, we get the following error related to Avro:

.options(**load_options)
File "/opt/spark/python/lib/", line 172, in load
File "/opt/spark/python/lib/", line 1257, in __call__
File "/opt/spark/python/lib/", line 63, in deco
File "/opt/spark/python/lib/", line 328, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o69.load.
: java.util.ServiceConfigurationError: org.apache.spark.sql.sources.DataSourceRegister: Provider org.apache.spark.sql.avro.AvroFileFormat could not be instantiated
    at
    at java.util.ServiceLoader.access$100(
    at java.util.ServiceLoader$LazyIterator.nextService(
    at java.util.ServiceLoader$
    at java.util.ServiceLoader$
    at scala.collection.convert.Wrappers$
    at scala.collection.Iterator.foreach(Iterator.scala:941)
    at scala.collection.Iterator.foreach$(Iterator.scala:941)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
    at scala.collection.IterableLike.foreach(IterableLike.scala:74)
    at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
    at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
    at scala.collection.TraversableLike.filterImpl(TraversableLike.scala:250)
    at scala.collection.TraversableLike.filterImpl$(TraversableLike.scala:248)
    at scala.collection.AbstractTraversable.filterImpl(Traversable.scala:108)
    at scala.collection.TraversableLike.filter(TraversableLike.scala:262)
    at scala.collection.TraversableLike.filter$(TraversableLike.scala:262)
    at scala.collection.AbstractTraversable.filter(Traversable.scala:108)
    at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSource(DataSource.scala:630)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:194)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:167)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(
    at java.lang.reflect.Method.invoke(
    at py4j.reflection.MethodInvoker.invoke(
    at py4j.reflection.ReflectionEngine.invoke(
    at py4j.Gateway.invoke(
    at py4j.commands.AbstractCommand.invokeMethod(
    at py4j.commands.CallCommand.execute(
    at
    at
Caused by: java.lang.NoClassDefFoundError: org/apache/spark/sql/execution/datasources/FileFormat$class
    at org.apache.spark.sql.avro.AvroFileFormat.<init>(AvroFileFormat.scala:44)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(
    at java.lang.reflect.Constructor.newInstance(
    at java.lang.Class.newInstance(
    at java.util.ServiceLoader$LazyIterator.nextService(
    ... 29 more
Caused by: java.lang.ClassNotFoundException: org.apache.spark.sql.execution.datasources.FileFormat$class
    at
    at java.lang.ClassLoader.loadClass(
    at sun.misc.Launcher$AppClassLoader.loadClass(
    at java.lang.ClassLoader.loadClass(
    ... 36 more


The code we run looks like this:

spark_session = (
    SparkSession.builder
    .config('', SERVER_IP_ADDRESS)
    .config('spark.cassandra.auth.username', CASSANDRA_USERNAME)
    .config('spark.cassandra.auth.password', CASSANDRA_PASSWORD)
    .config('spark.sql.shuffle.partitions', 16)
    .config('parquet.enable.summary-metadata', 'true')
    .getOrCreate()
)

load_options = {
    'table': TABLE_NAME,
    'spark.cassandra.input.fetch.size_in_rows': '150'
}

df = (
    spark_session.read
    .format('org.apache.spark.sql.cassandra')
    .options(**load_options)
    .load()
)

We get the exact same error when trying to read a local .avro file instead of from Cassandra.

Up to now we have included the .jar file for spark-avro via the spark-submit --jars option. The spark-avro version we used, which worked with Spark 2.4.1, was 2.4.0.

In an attempt to fix this problem we tried updating the .jar file version. We also tried using
the --packages option, with different version combinations, but none of these solutions worked.
The same error shows up every time. 
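One hedged observation on the version combinations: the missing class in the trace, `org.apache.spark.sql.execution.datasources.FileFormat$class`, is the kind of synthetic `$class` implementation class that the Scala 2.11 compiler generates for traits, so a spark-avro jar built against Scala 2.11 could fail exactly this way on a Spark distribution built with Scala 2.12 (which the 2.4.2 release binaries switched to by default). A sketch of matching the artifact's Scala suffix to the cluster's Scala build, assuming the standard `org.apache.spark:spark-avro_<scala>:<spark version>` Maven coordinates (`my_job.py` is a placeholder for the application script):

```shell
# If the Spark 2.4.2 distribution was built with Scala 2.12
# (the default for the 2.4.2 convenience binaries):
spark-submit --packages org.apache.spark:spark-avro_2.12:2.4.2 my_job.py

# If the distribution was instead built with Scala 2.11:
spark-submit --packages org.apache.spark:spark-avro_2.11:2.4.2 my_job.py
```

The Scala version a given Spark build uses can be checked from `spark-shell`, which prints it in its startup banner.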

When rolling back to Spark 2.4.1 with the exact same setup and code, the error doesn't show
up and everything works fine. 

Any ideas on what could be causing this?



This message was sent by Atlassian JIRA
