Date: Tue, 7 Jun 2016 15:12:21 +0000 (UTC)
From: "Apache Spark (JIRA)"
To: issues@spark.apache.org
Subject: [jira] [Assigned] (SPARK-15730) [Spark SQL] the value of the 'hiveconf' parameter in the spark-sql CLI does not take effect in the spark-sql session

     [ https://issues.apache.org/jira/browse/SPARK-15730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Apache Spark reassigned SPARK-15730:
------------------------------------

    Assignee: Apache Spark

> [Spark SQL] the value of the 'hiveconf' parameter in the spark-sql CLI does not take effect in the spark-sql session
> ---------------------------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-15730
>                 URL: https://issues.apache.org/jira/browse/SPARK-15730
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.0.0
>            Reporter: Yi Zhou
>            Assignee: Apache Spark
>            Priority: Critical
>
> /usr/lib/spark/bin/spark-sql -v --driver-memory 4g --executor-memory 7g --executor-cores 5 --num-executors 31 --master yarn-client --conf spark.yarn.executor.memoryOverhead=1024 --hiveconf RESULT_TABLE=test_result01
> spark-sql> use test;
> 16/06/02 21:36:15 INFO execution.SparkSqlParser: Parsing command: use test
> 16/06/02 21:36:15 INFO spark.SparkContext: Starting job: processCmd at CliDriver.java:376
> 16/06/02 21:36:15 INFO scheduler.DAGScheduler: Got job 2 (processCmd at CliDriver.java:376) with 1 output partitions
> 16/06/02 21:36:15 INFO scheduler.DAGScheduler: Final stage: ResultStage 2 (processCmd at CliDriver.java:376)
> 16/06/02 21:36:15 INFO scheduler.DAGScheduler: Parents of final stage: List()
> 16/06/02 21:36:15 INFO scheduler.DAGScheduler: Missing parents: List()
> 16/06/02 21:36:15 INFO scheduler.DAGScheduler: Submitting ResultStage 2 (MapPartitionsRDD[8] at processCmd at CliDriver.java:376), which has no missing parents
> 16/06/02 21:36:15 INFO memory.MemoryStore: Block broadcast_2 stored as values in memory (estimated size 3.2 KB, free 2.4 GB)
> 16/06/02 21:36:15 INFO memory.MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 1964.0 B, free 2.4 GB)
> 16/06/02 21:36:15 INFO storage.BlockManagerInfo: Added broadcast_2_piece0 in memory on 192.168.3.11:36189 (size: 1964.0 B, free: 2.4 GB)
> 16/06/02 21:36:15 INFO spark.SparkContext: Created broadcast 2 from broadcast at DAGScheduler.scala:1012
> 16/06/02 21:36:15 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 2 (MapPartitionsRDD[8] at processCmd at CliDriver.java:376)
> 16/06/02 21:36:15 INFO cluster.YarnScheduler: Adding task set 2.0 with 1 tasks
> 16/06/02 21:36:15 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 2.0 (TID 2, 192.168.3.13, partition 0, PROCESS_LOCAL, 5362 bytes)
> 16/06/02 21:36:15 INFO cluster.YarnClientSchedulerBackend: Launching task 2 on executor id: 10 hostname: 192.168.3.13.
> 16/06/02 21:36:16 INFO storage.BlockManagerInfo: Added broadcast_2_piece0 in memory on hw-node3:45924 (size: 1964.0 B, free: 4.4 GB)
> 16/06/02 21:36:17 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 2.0 (TID 2) in 1934 ms on 192.168.3.13 (1/1)
> 16/06/02 21:36:17 INFO cluster.YarnScheduler: Removed TaskSet 2.0, whose tasks have all completed, from pool
> 16/06/02 21:36:17 INFO scheduler.DAGScheduler: ResultStage 2 (processCmd at CliDriver.java:376) finished in 1.937 s
> 16/06/02 21:36:17 INFO scheduler.DAGScheduler: Job 2 finished: processCmd at CliDriver.java:376, took 1.962631 s
> Time taken: 2.027 seconds
> 16/06/02 21:36:17 INFO CliDriver: Time taken: 2.027 seconds
> spark-sql> DROP TABLE IF EXISTS ${hiveconf:RESULT_TABLE};
> 16/06/02 21:36:36 INFO execution.SparkSqlParser: Parsing command: DROP TABLE IF EXISTS ${hiveconf:RESULT_TABLE}
> Error in query:
> mismatched input '$' expecting {'ADD', 'AS', 'ALL', 'GROUP', 'BY', 'GROUPING', 'SETS', 'CUBE', 'ROLLUP', 'ORDER', 'LIMIT', 'AT', 'IN', 'NO', 'EXISTS', 'BETWEEN', 'LIKE', RLIKE, 'IS', 'NULL', 'TRUE', 'FALSE', 'NULLS', 'ASC', 'DESC', 'FOR', 'OUTER', 'LATERAL', 'WINDOW', 'OVER', 'PARTITION', 'RANGE', 'ROWS', 'PRECEDING', 'FOLLOWING', 'CURRENT', 'ROW', 'WITH', 'VALUES', 'CREATE', 'TABLE', 'VIEW', 'REPLACE', 'INSERT', 'DELETE', 'INTO', 'DESCRIBE', 'EXPLAIN', 'FORMAT', 'LOGICAL', 'CODEGEN', 'SHOW', 'TABLES', 'COLUMNS', 'COLUMN', 'USE', 'PARTITIONS', 'FUNCTIONS', 'DROP', 'TO', 'TABLESAMPLE', 'ALTER', 'RENAME', 'ARRAY', 'MAP', 'STRUCT', 'COMMENT', 'SET', 'RESET', 'DATA', 'START', 'TRANSACTION', 'COMMIT', 'ROLLBACK', 'IF', 'PERCENT', 'BUCKET', 'OUT', 'OF', 'SORT', 'CLUSTER', 'DISTRIBUTE', 'OVERWRITE', 'TRANSFORM', 'REDUCE', 'USING', 'SERDE', 'SERDEPROPERTIES', 'RECORDREADER', 'RECORDWRITER', 'DELIMITED', 'FIELDS', 'TERMINATED', 'COLLECTION', 'ITEMS', 'KEYS', 'ESCAPED', 'LINES', 'SEPARATED', 'EXTENDED', 'REFRESH', 'CLEAR', 'CACHE', 'UNCACHE', 'LAZY', 'FORMATTED', TEMPORARY, 'OPTIONS', 'UNSET', 'TBLPROPERTIES', 'DBPROPERTIES', 'BUCKETS', 'SKEWED', 'STORED', 'DIRECTORIES', 'LOCATION', 'EXCHANGE', 'ARCHIVE', 'UNARCHIVE', 'FILEFORMAT', 'TOUCH', 'COMPACT', 'CONCATENATE', 'CHANGE', 'CASCADE', 'RESTRICT', 'CLUSTERED', 'SORTED', 'PURGE', 'INPUTFORMAT', 'OUTPUTFORMAT', DATABASES, 'DFS', 'TRUNCATE', 'ANALYZE', 'COMPUTE', 'LIST', 'STATISTICS', 'PARTITIONED', 'EXTERNAL', 'DEFINED', 'REVOKE', 'GRANT', 'LOCK', 'UNLOCK', 'MSCK', 'REPAIR', 'EXPORT', 'IMPORT', 'LOAD', 'ROLE', 'ROLES', 'COMPACTIONS', 'PRINCIPALS', 'TRANSACTIONS', 'INDEX', 'INDEXES', 'LOCKS', 'OPTION', 'LOCAL', 'INPATH', IDENTIFIER, BACKQUOTED_IDENTIFIER}(line 1, pos 21)
>
> == SQL ==
> DROP TABLE IF EXISTS ${hiveconf:RESULT_TABLE}
> ---------------------^^^
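
For context, Hive-style variable substitution is supposed to replace ${hiveconf:RESULT_TABLE} with the value supplied via --hiveconf before the statement ever reaches the SQL parser; the error above shows the literal ${...} text being parsed instead. Below is a minimal sketch in Scala of what that pre-parse substitution step looks like. It is an illustration only, not Spark's actual implementation (Spark handles this through its internal VariableSubstitution helper, gated by the spark.sql.variable.substitute setting); the hiveconf map and the substituteHiveConf function are hypothetical names introduced here for the example.

    import scala.util.matching.Regex

    object HiveConfSubstitution {
      // Matches ${hiveconf:NAME} occurrences in a SQL string.
      private val HiveConfVar: Regex = """\$\{hiveconf:([A-Za-z0-9_.]+)\}""".r

      // Hypothetical pre-parse step: replace each ${hiveconf:NAME} with the value
      // given on the command line (e.g. --hiveconf RESULT_TABLE=test_result01).
      // Unknown variables are left untouched so a later parser error still points
      // at the original text.
      def substituteHiveConf(sql: String, hiveconf: Map[String, String]): String =
        HiveConfVar.replaceAllIn(sql, (m: Regex.Match) =>
          Regex.quoteReplacement(hiveconf.getOrElse(m.group(1), m.matched)))

      def main(args: Array[String]): Unit = {
        val hiveconf = Map("RESULT_TABLE" -> "test_result01")
        val raw = "DROP TABLE IF EXISTS ${hiveconf:RESULT_TABLE}"
        println(substituteHiveConf(raw, hiveconf)) // prints: DROP TABLE IF EXISTS test_result01
      }
    }

Run through such a step, the failing statement becomes DROP TABLE IF EXISTS test_result01, which the parser accepts; the report above is essentially that no equivalent substitution runs between the CLI's --hiveconf option and SparkSqlParser in this build.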