spark-dev mailing list archives

From Russell Spitzer <russell.spit...@gmail.com>
Subject Re: spark cassandra issue
Date Sun, 04 Sep 2016 16:31:24 GMT
https://github.com/datastax/spark-cassandra-connector/blob/v1.3.1/doc/14_data_frames.md
In Spark 1.3 it was illegal to use "table" as a key in Spark SQL, so in that
version of Spark the connector needed to use the option key "c_table" instead:

val df = sqlContext.read.
  format("org.apache.spark.sql.cassandra").
  options(Map("c_table" -> "****", "keyspace" -> "***")).
  load()
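
For comparison, a minimal sketch of the two forms side by side. The table and keyspace names ("my_table", "my_keyspace") are placeholders, and the plain "table" key applies to connector releases paired with Spark 1.4 and later; check the docs for your exact version.

```scala
// Sketch only; "my_table" / "my_keyspace" are placeholder names.
// Spark 1.3 + connector 1.3.x: the table name must be passed as "c_table".
val df13 = sqlContext.read.
  format("org.apache.spark.sql.cassandra").
  options(Map("c_table" -> "my_table", "keyspace" -> "my_keyspace")).
  load()

// Spark 1.4+ with a matching connector release: the key "table" is accepted.
val df14 = sqlContext.read.
  format("org.apache.spark.sql.cassandra").
  options(Map("table" -> "my_table", "keyspace" -> "my_keyspace")).
  load()
```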


On Sun, Sep 4, 2016 at 8:32 AM Mich Talebzadeh <mich.talebzadeh@gmail.com>
wrote:

> and your Cassandra table is there etc?
>
>
>
> Dr Mich Talebzadeh
>
>
>
> LinkedIn * https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
> <https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw>*
>
>
>
> http://talebzadehmich.wordpress.com
>
>
> *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> loss, damage or destruction of data or any other property which may arise
> from relying on this email's technical content is explicitly disclaimed.
> The author will in no case be liable for any monetary damages arising from
> such loss, damage or destruction.
>
>
>
> On 4 September 2016 at 16:20, Selvam Raman <selmna@gmail.com> wrote:
>
>> Hey Mich,
>>
>> I am using the same one right now. Thanks for the reply.
>> import org.apache.spark.sql.cassandra._
>> import com.datastax.spark.connector._ //Loads implicit functions
>> sc.cassandraTable("keyspace name", "table name")
>>
>>
>> On Sun, Sep 4, 2016 at 8:48 PM, Mich Talebzadeh <
>> mich.talebzadeh@gmail.com> wrote:
>>
>>> Hi Selvam,
>>>
>>> I don't deal with Cassandra but have you tried other options as
>>> described here
>>>
>>>
>>> https://github.com/datastax/spark-cassandra-connector/blob/master/doc/2_loading.md
>>>
>>> To get a Spark RDD that represents a Cassandra table, call the
>>> cassandraTable method on the SparkContext object.
>>>
>>> import com.datastax.spark.connector._ //Loads implicit functions
>>> sc.cassandraTable("keyspace name", "table name")
>>>
>>>
>>>
>>> HTH
>>>
>>>
>>> Dr Mich Talebzadeh
>>>
>>>
>>>
>>> On 4 September 2016 at 15:52, Selvam Raman <selmna@gmail.com> wrote:
>>>
>>>> It's very urgent. Please help me, guys.
>>>>
>>>> On Sun, Sep 4, 2016 at 8:05 PM, Selvam Raman <selmna@gmail.com> wrote:
>>>>
>>>>> Please help me to solve the issue.
>>>>>
>>>>> spark-shell --packages
>>>>> com.datastax.spark:spark-cassandra-connector_2.10:1.3.0 --conf
>>>>> spark.cassandra.connection.host=******
>>>>>
>>>>> val df = sqlContext.read.
>>>>>      | format("org.apache.spark.sql.cassandra").
>>>>>      | options(Map( "table" -> "****", "keyspace" -> "***")).
>>>>>      | load()
>>>>> java.util.NoSuchElementException: key not found: c_table
>>>>>         at scala.collection.MapLike$class.default(MapLike.scala:228)
>>>>>         at
>>>>> org.apache.spark.sql.execution.datasources.CaseInsensitiveMap.default(ddl.scala:151)
>>>>>         at scala.collection.MapLike$class.apply(MapLike.scala:141)
>>>>>         at
>>>>> org.apache.spark.sql.execution.datasources.CaseInsensitiveMap.apply(ddl.scala:151)
>>>>>         at
>>>>> org.apache.spark.sql.cassandra.DefaultSource$.TableRefAndOptions(DefaultSource.scala:120)
>>>>>         at
>>>>> org.apache.spark.sql.cassandra.DefaultSource.createRelation(DefaultSource.scala:56)
>>>>>         at
>>>>> org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:125)
>>>>>         a
>>>>>
>>>>> --
>>>>> Selvam Raman
>>>>> "Shun bribes and hold your head high"
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Selvam Raman
>>>> "Shun bribes and hold your head high"
>>>>
>>>
>>>
>>
>>
>> --
>> Selvam Raman
>> "Shun bribes and hold your head high"
>>
>
>
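
The stack trace quoted above follows directly from this: a minimal, self-contained sketch (not the connector's actual code) of how looking up the missing "c_table" key in a plain Scala Map raises exactly that exception:

```scala
// Sketch of the failure mode, not the connector's real implementation:
// the 1.3.x connector resolves the table name with options("c_table"),
// so an options map built with the key "table" has no "c_table" entry
// and Scala's Map.apply throws NoSuchElementException.
val opts = Map("table" -> "my_table", "keyspace" -> "my_keyspace")
try {
  opts("c_table")
} catch {
  case e: java.util.NoSuchElementException =>
    println(e.getMessage) // prints "key not found: c_table"
}
```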
