hive-user mailing list archives

From Mich Talebzadeh <>
Subject Re: Hive transactional table with delta files, Spark cannot read and sends error
Date Mon, 01 Aug 2016 22:34:34 GMT
Thanks Gopal.

I am on Spark 1.6.1 and am getting the following error:

scala> var conn = LlapContext.newInstance(sc, hs2_url);
<console>:28: error: not found: value LlapContext
         var conn = LlapContext.newInstance(sc, hs2_url);
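The "not found: value LlapContext" error suggests the class simply is not on the shell's classpath: LlapContext does not ship with stock Spark 1.6.1 but comes from the separate spark-llap connector. A minimal sketch of what the session would need, assuming the connector jar is available (the jar name, import path, and HiveServer2 URL below are all illustrative assumptions, not confirmed values):

```scala
// Hypothetical sketch, assuming the spark-llap connector is installed.
// Start the shell with the connector on the classpath, e.g.:
//   spark-shell --jars spark-llap-assembly.jar   (artifact name is illustrative)

// The package path is an assumption; check the connector's own docs.
import org.apache.hadoop.hive.llap.LlapContext

val hs2_url = "jdbc:hive2://hs2-host:10000"      // placeholder HiveServer2 URL
val conn = LlapContext.newInstance(sc, hs2_url)  // sc: the shell's SparkContext
val df = conn.sql("select * from payees").persist()
```

Without the connector jar, the import (and hence the `LlapContext` symbol) cannot resolve, which matches the error above.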

Dr Mich Talebzadeh

LinkedIn

Disclaimer: Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.

On 1 August 2016 at 22:53, Gopal Vijayaraghavan <> wrote:

> > Spark fails reading this table. What options do I have here?
> Would your issue be the same as
> LLAPContext in Spark can read those tables with ACID semantics (as in
> delete/updates will work right).
> var conn = LlapContext.newInstance(sc, hs2_url);
> var df: DataFrame = conn.sql("select * from payees").persist();
> Please be aware that's entirely in auto-commit mode, so you will be
> getting lazy snapshot isolation (hence, persist is a good idea).
> Even though "payees" is a placeholder, this approach is intended for
> tables like it that have multiple consumers. The practical reason to use
> this pathway is to apply masking/filtering specific to the accessing user
> (e.g. hide amounts, or bucket amounts into ranges like 0-99, 100-999,
> etc. instead of actual values, for compliance audits without creating
> complete copies).
> Cheers,
> Gopal
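As a concrete illustration of the masking idea Gopal describes, the query handed to the LLAP connection can bucket amounts into coarse bands so Spark never sees the raw values. This is a hypothetical sketch: the table name, column names, and the `conn.sql` API shape are assumptions carried over from the snippet above, not a confirmed implementation.

```scala
// Hypothetical sketch of range-masking for audit consumers: raw amounts are
// replaced by coarse bands inside the query itself, so only the banded
// values reach the Spark side. Table/column names are illustrative.
val banded = conn.sql(
  """select payee_id,
    |       case when amount between 0 and 99    then '0-99'
    |            when amount between 100 and 999 then '100-999'
    |            else '1000+'
    |       end as amount_band
    |from payees""".stripMargin).persist()
```

Because the masking happens in the SQL pushed through the connection, per-user policies can vary the bands (or hide the column entirely) without creating complete copies of the table.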
