Does anyone have advice on the best way to deploy a Hive UDF for use with the
Spark SQL Thriftserver, where the client is Tableau connecting through the
Simba Spark SQL ODBC driver?
I have seen the Hive documentation that gives an example of creating the
function from a Hive client, i.e.:

    CREATE FUNCTION myfunc AS 'myclass' USING JAR 'hdfs:///path/to/jar';
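
For reference, the kind of class 'myclass' refers to is a plain Hive UDF,
something like the following simplified Scala sketch (the class name and
logic here are just placeholders):

    import org.apache.hadoop.hive.ql.exec.UDF

    // Minimal Hive UDF: upper-cases its string argument.
    class MyFunc extends UDF {
      def evaluate(s: String): String =
        if (s == null) null else s.toUpperCase
    }

This gets compiled and packaged into the jar named in the USING JAR clause.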
However, from Tableau I can't run this CREATE FUNCTION statement to register
my UDF. Ideally there would be a configuration setting that loads my UDF jar
and registers the function when the Thriftserver starts up.
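
The only workaround I can see so far is to register it from a separate client
such as beeline once the Thriftserver is up, for example (host, port and
paths below are placeholders):

    # start the Thriftserver with the UDF jar available to driver and executors
    sbin/start-thriftserver.sh --jars /path/to/my-udf.jar

    # register the function once via beeline
    beeline -u jdbc:hive2://localhost:10000 \
      -e "CREATE FUNCTION myfunc AS 'myclass' USING JAR 'hdfs:///path/to/jar'"

but I'd prefer something the Thriftserver picks up automatically at start-up.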
Can anyone tell me what the best option is, if this is possible at all?