hbase-user mailing list archives

From Nick Dimiduk <ndimi...@gmail.com>
Subject Re: Q regarding thrift server to expose RDD via SQL
Date Mon, 16 Feb 2015 18:09:52 GMT
Using TableInputFormat directly will have better scalability than HS2.
Better still, use TableSnapshotInputFormat to work from a snapshot (since
RDDs are immutable anyway).
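Roughly, something like the following (a minimal sketch, assuming a SparkContext named sc and the HBase client/server jars on the Spark classpath; the table name, snapshot name, and restore directory are placeholders for whatever exists on your cluster):

    import org.apache.hadoop.fs.Path
    import org.apache.hadoop.hbase.HBaseConfiguration
    import org.apache.hadoop.hbase.client.Result
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable
    import org.apache.hadoop.hbase.mapreduce.{TableInputFormat, TableSnapshotInputFormat}
    import org.apache.hadoop.mapreduce.Job

    // Scan a live table through the region servers with TableInputFormat.
    val conf = HBaseConfiguration.create()
    conf.set(TableInputFormat.INPUT_TABLE, "events")  // placeholder table name
    val eventsRdd = sc.newAPIHadoopRDD(
      conf,
      classOf[TableInputFormat],
      classOf[ImmutableBytesWritable],
      classOf[Result])

    // Or read a snapshot's HFiles directly from HDFS, bypassing the region servers.
    val job = Job.getInstance(HBaseConfiguration.create())
    TableSnapshotInputFormat.setInput(job,
      "events_snapshot",                    // placeholder snapshot name
      new Path("/tmp/snapshot-restore"))    // scratch dir used to restore the snapshot
    val snapshotRdd = sc.newAPIHadoopRDD(
      job.getConfiguration,
      classOf[TableSnapshotInputFormat],
      classOf[ImmutableBytesWritable],
      classOf[Result])

From either RDD you can map the Result values into Rows, apply a schema with your sqlContext, and register the temp table exactly as you're doing now.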

-n

On Monday, February 16, 2015, Marco <marco.frg@gmail.com> wrote:

> Hi,
>
> I've played with the feature to expose an RDD via Thrift to enable JDBC
> access (Spark 1.2).
>
>
> val eventsView = sqlContext.createSchemaRDD(eventSchemaRdd)
> eventsView.registerTempTable("Events")
>
> HiveThriftServer2.startWithContext(sqlContext)
>
>
> This all works fine.
>
> Now, my understanding is that you can't deploy this to a yarn-cluster. Is
> this correct, or what are my options here? My major concern is scalability
> (e.g. handling a lot of SQL requests, which may also not be trivial).
>
> Thanks,
> Marco
>
