spark-issues mailing list archives

From "Xiao Li (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (SPARK-15396) [Spark] [SQL] It can't connect hive metastore database
Date Thu, 19 May 2016 02:37:12 GMT

    [ https://issues.apache.org/jira/browse/SPARK-15396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15290318#comment-15290318 ]

Xiao Li commented on SPARK-15396:
---------------------------------

Obviously, that is out of date. We need to update the related content; actually, many parts
need an update. [~rxin][~yhuai]

> [Spark] [SQL] It can't connect hive metastore database
> ------------------------------------------------------
>
>                 Key: SPARK-15396
>                 URL: https://issues.apache.org/jira/browse/SPARK-15396
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.0.0
>            Reporter: Yi Zhou
>            Priority: Critical
>
> I am trying to run Spark SQL using bin/spark-sql with Spark 2.0 master code (commit ba181c0c7a32b0e81bbcdbe5eed94fc97b58c83e)
but ran across an issue: it always connects to the local Derby database and can't connect to my existing
hive metastore database. Could you help me check the root cause? What configuration is
specifically needed for integration with the hive metastore in Spark 2.0? BTW, this case works in Spark
1.6. Thanks in advance!
> Build package command:
> ./dev/make-distribution.sh --tgz -Pyarn -Phadoop-2.6 -Dhadoop.version=2.6.0-cdh5.5.1
-Phive -Phive-thriftserver -DskipTests
> Key configurations in spark-defaults.conf:
> {code}
> spark.sql.hive.metastore.version=1.1.0
> spark.sql.hive.metastore.jars=/usr/lib/hive/lib/*:/usr/lib/hadoop/client/*
> spark.executor.extraClassPath=/etc/hive/conf
> spark.driver.extraClassPath=/etc/hive/conf
> spark.yarn.jars=local:/usr/lib/spark/jars/*
> {code}
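(For context on the configuration above: in Spark 2.x the external metastore connection is usually driven by a hive-site.xml visible on the classpath, e.g. under the /etc/hive/conf directory listed in extraClassPath. A minimal sketch of such a file, assuming a remote Thrift metastore; the host and port below are placeholders, not values taken from this report:)

```xml
<!-- hive-site.xml: minimal sketch for a remote metastore.
     thrift://metastore-host:9083 is a placeholder URI, not from this report. -->
<configuration>
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://metastore-host:9083</value>
  </property>
</configuration>
```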
> There is an existing hive metastore database named "test_sparksql". I always get the error
"metastore.ObjectStore: Failed to get database test_sparksql, returning NoSuchObjectException"
after issuing 'use test_sparksql'. Please see the steps below for details.
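(Aside: the key=value lines in spark-defaults.conf above can be sanity-checked programmatically before launching. This is a hypothetical helper, not a Spark API; it sketches how Spark-style "key value" / "key=value" lines can be read into a dict:)

```python
# Hypothetical helper (not part of Spark): parse spark-defaults.conf-style
# lines, which may separate key and value with '=' or whitespace.
def parse_spark_defaults(text):
    conf = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        # Use '=' as the separator when it comes before any space.
        if "=" in line and (" " not in line or line.index("=") < line.index(" ")):
            key, _, value = line.partition("=")
        else:
            key, _, value = line.partition(" ")
        conf[key.strip()] = value.strip()
    return conf

sample = """
spark.sql.hive.metastore.version=1.1.0
spark.sql.hive.metastore.jars=/usr/lib/hive/lib/*:/usr/lib/hadoop/client/*
"""
conf = parse_spark_defaults(sample)
print(conf["spark.sql.hive.metastore.version"])  # 1.1.0
```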
>  
> $ /usr/lib/spark/bin/spark-sql --master yarn --deploy-mode client
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in [jar:file:/usr/lib/spark/jars/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in [jar:file:/usr/lib/avro/avro-tools-1.7.6-cdh5.5.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in [jar:file:/usr/lib/zookeeper/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> 16/05/12 22:23:28 WARN conf.HiveConf: HiveConf of name hive.enable.spark.execution.engine
does not exist
> 16/05/12 22:23:30 INFO metastore.HiveMetaStore: 0: Opening raw store with implemenation
class:org.apache.hadoop.hive.metastore.ObjectStore
> 16/05/12 22:23:30 INFO metastore.ObjectStore: ObjectStore, initialize called
> 16/05/12 22:23:30 WARN DataNucleus.General: Plugin (Bundle) "org.datanucleus.store.rdbms"
is already registered. Ensure you dont have multiple JAR versions of the same plugin in the
classpath. The URL "file:/usr/lib/hive/lib/datanucleus-rdbms-3.2.9.jar" is already registered,
and you are trying to register an identical plugin located at URL "file:/usr/lib/spark/jars/datanucleus-rdbms-3.2.9.jar."
> 16/05/12 22:23:30 WARN DataNucleus.General: Plugin (Bundle) "org.datanucleus" is already
registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath.
The URL "file:/usr/lib/hive/lib/datanucleus-core-3.2.10.jar" is already registered, and you
are trying to register an identical plugin located at URL "file:/usr/lib/spark/jars/datanucleus-core-3.2.10.jar."
> 16/05/12 22:23:30 WARN DataNucleus.General: Plugin (Bundle) "org.datanucleus.api.jdo"
is already registered. Ensure you dont have multiple JAR versions of the same plugin in the
classpath. The URL "file:/usr/lib/spark/jars/datanucleus-api-jdo-3.2.6.jar" is already registered,
and you are trying to register an identical plugin located at URL "file:/usr/lib/hive/lib/datanucleus-api-jdo-3.2.6.jar."
> 16/05/12 22:23:30 INFO DataNucleus.Persistence: Property datanucleus.cache.level2 unknown
- will be ignored
> 16/05/12 22:23:30 INFO DataNucleus.Persistence: Property hive.metastore.integral.jdo.pushdown
unknown - will be ignored
> 16/05/12 22:23:31 WARN conf.HiveConf: HiveConf of name hive.enable.spark.execution.engine
does not exist
> 16/05/12 22:23:31 INFO metastore.ObjectStore: Setting MetaStore object pin classes with
hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
> 16/05/12 22:23:32 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema"
is tagged as "embedded-only" so does not have its own datastore table.
> 16/05/12 22:23:32 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder"
is tagged as "embedded-only" so does not have its own datastore table.
> 16/05/12 22:23:33 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema"
is tagged as "embedded-only" so does not have its own datastore table.
> 16/05/12 22:23:33 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder"
is tagged as "embedded-only" so does not have its own datastore table.
> 16/05/12 22:23:33 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB
is DERBY
> 16/05/12 22:23:33 INFO metastore.ObjectStore: Initialized ObjectStore
> 16/05/12 22:23:33 WARN metastore.ObjectStore: Version information not found in metastore.
hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0
> 16/05/12 22:23:33 WARN metastore.ObjectStore: Failed to get database default, returning
NoSuchObjectException
> 16/05/12 22:23:34 INFO metastore.HiveMetaStore: Added admin role in metastore
> 16/05/12 22:23:34 INFO metastore.HiveMetaStore: Added public role in metastore
> 16/05/12 22:23:34 INFO metastore.HiveMetaStore: No user is added in admin role, since
config is empty
> 16/05/12 22:23:34 INFO metastore.HiveMetaStore: 0: get_all_databases
> 16/05/12 22:23:34 INFO HiveMetaStore.audit: ugi=root    ip=unknown-ip-addr      cmd=get_all_databases
> 16/05/12 22:23:34 INFO metastore.HiveMetaStore: 0: get_functions: db=default pat=*
> 16/05/12 22:23:34 INFO HiveMetaStore.audit: ugi=root    ip=unknown-ip-addr      cmd=get_functions:
db=default pat=*
> 16/05/12 22:23:34 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MResourceUri"
is tagged as "embedded-only" so does not have its own datastore table.
> 16/05/12 22:23:34 INFO session.SessionState: Created local directory: /tmp/4e7ccc40-e10b-455c-b51d-ed225be85ffe_resources
> 16/05/12 22:23:34 INFO session.SessionState: Created HDFS directory: /tmp/hive/root/4e7ccc40-e10b-455c-b51d-ed225be85ffe
> 16/05/12 22:23:34 INFO session.SessionState: Created local directory: /tmp/root/4e7ccc40-e10b-455c-b51d-ed225be85ffe
> 16/05/12 22:23:34 INFO session.SessionState: Created HDFS directory: /tmp/hive/root/4e7ccc40-e10b-455c-b51d-ed225be85ffe/_tmp_space.db
> 16/05/12 22:23:34 INFO spark.SparkContext: Running Spark version 2.0.0-SNAPSHOT
> 16/05/12 22:23:34 INFO spark.SecurityManager: Changing view acls to: root
> 16/05/12 22:23:34 INFO spark.SecurityManager: Changing modify acls to: root
> 16/05/12 22:23:34 INFO spark.SecurityManager: Changing view acls groups to:
> 16/05/12 22:23:34 INFO spark.SecurityManager: Changing modify acls groups to:
> 16/05/12 22:23:34 INFO spark.SecurityManager: SecurityManager: authentication disabled;
ui acls disabled; users  with view permissions: Set(root); groups with view permissions: Set();
users  with modify permissions: Set(root); groups with modify permissions: Set()
> 16/05/12 22:23:35 INFO util.Utils: Successfully started service 'sparkDriver' on port
37223.
> 16/05/12 22:23:35 INFO spark.SparkEnv: Registering MapOutputTracker
> 16/05/12 22:23:35 INFO spark.SparkEnv: Registering BlockManagerMaster
> 16/05/12 22:23:35 INFO storage.DiskBlockManager: Created local directory at /tmp/blockmgr-5a30adbe-4f9a-4b34-b52f-b61671f8b06d
> 16/05/12 22:23:35 INFO memory.MemoryStore: MemoryStore started with capacity 511.1 MB
> 16/05/12 22:23:35 INFO spark.SparkEnv: Registering OutputCommitCoordinator
> 16/05/12 22:23:35 INFO server.Server: jetty-8.y.z-SNAPSHOT
> 16/05/12 22:23:35 INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040
> 16/05/12 22:23:35 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
> 16/05/12 22:23:35 INFO ui.SparkUI: Bound SparkUI to 0.0.0.0, and started at http://192.168.3.11:4040
> 16/05/12 22:23:35 INFO client.RMProxy: Connecting to ResourceManager at hw-node2/192.168.3.12:8032
> 16/05/12 22:23:35 INFO yarn.Client: Requesting a new application from cluster with 4
NodeManagers
> 16/05/12 22:23:35 INFO yarn.Client: Verifying our application has not requested more
than the maximum memory capability of the cluster (196608 MB per container)
> 16/05/12 22:23:35 INFO yarn.Client: Will allocate AM container, with 896 MB memory including
384 MB overhead
> 16/05/12 22:23:35 INFO yarn.Client: Setting up container launch context for our AM
> 16/05/12 22:23:35 INFO yarn.Client: Setting up the launch environment for our AM container
> 16/05/12 22:23:35 INFO yarn.Client: Preparing resources for our AM container
> 16/05/12 22:23:35 INFO yarn.Client: Uploading resource file:/tmp/spark-a712ffb6-a0d0-48db-99b4-ee6a41b3f132/__spark_conf__7597761027449817951.zip
-> hdfs://hw-node2:8020/user/root/.sparkStaging/application_1463053929123_0006/__spark_conf__.zip
> 16/05/12 22:23:36 INFO yarn.Client: Uploading resource file:/tmp/spark-a712ffb6-a0d0-48db-99b4-ee6a41b3f132/__spark_conf__9093112552235548615.zip
-> hdfs://hw-node2:8020/user/root/.sparkStaging/application_1463053929123_0006/__spark_conf__9093112552235548615.zip
> 16/05/12 22:23:36 INFO spark.SecurityManager: Changing view acls to: root
> 16/05/12 22:23:36 INFO spark.SecurityManager: Changing modify acls to: root
> 16/05/12 22:23:36 INFO spark.SecurityManager: Changing view acls groups to:
> 16/05/12 22:23:36 INFO spark.SecurityManager: Changing modify acls groups to:
> 16/05/12 22:23:36 INFO spark.SecurityManager: SecurityManager: authentication disabled;
ui acls disabled; users  with view permissions: Set(root); groups with view permissions: Set();
users  with modify permissions: Set(root); groups with modify permissions: Set()
> 16/05/12 22:23:36 INFO yarn.Client: Submitting application application_1463053929123_0006
to ResourceManager
> 16/05/12 22:23:36 INFO impl.YarnClientImpl: Submitted application application_1463053929123_0006
> 16/05/12 22:23:36 INFO cluster.SchedulerExtensionServices: Starting Yarn extension services
with app application_1463053929123_0006 and attemptId None
> 16/05/12 22:23:37 INFO yarn.Client: Application report for application_1463053929123_0006
(state: ACCEPTED)
> 16/05/12 22:23:37 INFO yarn.Client:
>          client token: N/A
>          diagnostics: N/A
>          ApplicationMaster host: N/A
>          ApplicationMaster RPC port: -1
>          queue: root.root
>          start time: 1463063016173
>          final status: UNDEFINED
>          tracking URL: http://hw-node2:8088/proxy/application_1463053929123_0006/
>          user: root
> 16/05/12 22:23:38 INFO yarn.Client: Application report for application_1463053929123_0006
(state: ACCEPTED)
> 16/05/12 22:23:38 INFO cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster
registered as NettyRpcEndpointRef(null)
> 16/05/12 22:23:38 INFO cluster.YarnClientSchedulerBackend: Add WebUI Filter. org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter,
Map(PROXY_HOSTS -> hw-node2, PROXY_URI_BASES -> http://hw-node2:8088/proxy/application_1463053929123_0006),
/proxy/application_1463053929123_0006
> 16/05/12 22:23:38 INFO ui.JettyUtils: Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
> 16/05/12 22:23:39 INFO yarn.Client: Application report for application_1463053929123_0006
(state: RUNNING)
> 16/05/12 22:23:39 INFO yarn.Client:
>          client token: N/A
>          diagnostics: N/A
>          ApplicationMaster host: 192.168.3.16
>          ApplicationMaster RPC port: 0
>          queue: root.root
>          start time: 1463063016173
>          final status: UNDEFINED
>          tracking URL: http://hw-node2:8088/proxy/application_1463053929123_0006/
>          user: root
> 16/05/12 22:23:39 INFO cluster.YarnClientSchedulerBackend: Application application_1463053929123_0006
has started running.
> 16/05/12 22:23:39 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService'
on port 45022.
> 16/05/12 22:23:39 INFO netty.NettyBlockTransferService: Server created on 192.168.3.11:45022
> 16/05/12 22:23:39 INFO storage.BlockManager: external shuffle service port = 7337
> 16/05/12 22:23:39 INFO storage.BlockManagerMaster: Trying to register BlockManager
> 16/05/12 22:23:39 INFO storage.BlockManagerMasterEndpoint: Registering block manager
192.168.3.11:45022 with 511.1 MB RAM, BlockManagerId(driver, 192.168.3.11, 45022)
> 16/05/12 22:23:39 INFO storage.BlockManagerMaster: Registered BlockManager
> 16/05/12 22:23:39 INFO scheduler.EventLoggingListener: Logging events to hdfs://hw-node2:8020/user/spark/applicationHistory/application_1463053929123_0006
> 16/05/12 22:23:39 INFO cluster.YarnClientSchedulerBackend: SchedulerBackend is ready
for scheduling beginning after reached minRegisteredResourcesRatio: 0.8
> 16/05/12 22:23:39 INFO hive.HiveSharedState: Setting Hive metastore warehouse path to
'/root/spark-warehouse'
> 16/05/12 22:23:39 INFO hive.HiveUtils: Initializing HiveMetastoreConnection version 1.1.0
using file:/usr/lib/hive/lib/httpcore-4.2.5.jar:file:/usr/lib/hive/lib/hive-contrib.jar:file:/usr/lib/hive/lib/oro-2.0.8.jar:file:/usr/lib/hive/lib/accumulo-start-1.6.0.jar:file:/usr/lib/hive/lib/groovy-all-2.4.4.jar:file:/usr/lib/hive/lib/hive-metastore.jar:file:/usr/lib/hive/lib/hive-beeline.jar:file:/usr/lib/hive/lib/datanucleus-core-3.2.10.jar:file:/usr/lib/hive/lib/jackson-core-2.2.2.jar:file:/usr/lib/hive/lib/velocity-1.5.jar:file:/usr/lib/hive/lib/hive-serde.jar:file:/usr/lib/hive/lib/hive-metastore-1.1.0-cdh5.5.1.jar:file:/usr/lib/hive/lib/commons-beanutils-core-1.8.0.jar:file:/usr/lib/hive/lib/hamcrest-core-1.1.jar:file:/usr/lib/hive/lib/jta-1.1.jar:file:/usr/lib/hive/lib/hive-shims-0.23-1.1.0-cdh5.5.1.jar:file:/usr/lib/hive/lib/antlr-2.7.7.jar:file:/usr/lib/hive/lib/hive-exec-1.1.0-cdh5.5.1.jar:file:/usr/lib/hive/lib/geronimo-jta_1.1_spec-1.1.1.jar:file:/usr/lib/hive/lib/accumulo-fate-1.6.0.jar:file:/usr/lib/hive/lib/hive-accumulo-handler.jar:file:/usr/lib/hive/lib/snappy-java-1.0.4.1.jar:file:/usr/lib/hive/lib/tempus-fugit-1.1.jar:file:/usr/lib/hive/lib/maven-scm-provider-svn-commons-1.4.jar:file:/usr/lib/hive/lib/libfb303-0.9.2.jar:file:/usr/lib/hive/lib/datanucleus-rdbms-3.2.9.jar:file:/usr/lib/hive/lib/xz-1.0.jar:file:/usr/lib/hive/lib/hbase-common.jar:file:/usr/lib/hive/lib/activation-1.1.jar:file:/usr/lib/hive/lib/hive-ant-1.1.0-cdh5.5.1.jar:file:/usr/lib/hive/lib/accumulo-trace-1.6.0.jar:file:/usr/lib/hive/lib/hive-serde-1.1.0-cdh5.5.1.jar:file:/usr/lib/hive/lib/commons-compress-1.4.1.jar:file:/usr/lib/hive/lib/hbase-hadoop2-compat.jar:file:/usr/lib/hive/lib/commons-configuration-1.6.jar:file:/usr/lib/hive/lib/servlet-api-2.5.jar:file:/usr/lib/hive/lib/libthrift-0.9.2.jar:file:/usr/lib/hive/lib/stax-api-1.0.1.jar:file:/usr/lib/hive/lib/hive-testutils.jar:file:/usr/lib/hive/lib/hive-shims-scheduler-1.1.0-cdh5.5.1.jar:file:/usr/lib/hive/lib/hive-testutils-1.1.0-cdh5.5.1.jar:file:/usr/lib/hive/lib/junit-4.11.jar:file:/usr/lib/hive/lib/jackson-annotatio
ns-2.2.2.jar:file:/usr/lib/hive/lib/stringtemplate-3.2.1.jar:file:/usr/lib/hive/lib/super-csv-2.2.0.jar:file:/usr/lib/hive/lib/hive-hwi-1.1.0-cdh5.5.1.jar:file:/usr/lib/hive/lib/log4j-1.2.16.jar:file:/usr/lib/hive/lib/geronimo-jaspic_1.0_spec-1.0.jar:file:/usr/lib/hive/lib/accumulo-core-1.6.0.jar:file:/usr/lib/hive/lib/hive-hbase-handler.jar:file:/usr/lib/hive/lib/high-scale-lib-1.1.1.jar:file:/usr/lib/hive/lib/hbase-protocol.jar:file:/usr/lib/hive/lib/hive-common-1.1.0-cdh5.5.1.jar:file:/usr/lib/hive/lib/hive-jdbc.jar:file:/usr/lib/hive/lib/commons-logging-1.1.3.jar:file:/usr/lib/hive/lib/derby-10.11.1.1.jar:file:/usr/lib/hive/lib/hive-jdbc-1.1.0-cdh5.5.1.jar:file:/usr/lib/hive/lib/hive-shims-scheduler.jar:file:/usr/lib/hive/lib/asm-commons-3.1.jar:file:/usr/lib/hive/lib/hive-jdbc-standalone.jar:file:/usr/lib/hive/lib/maven-scm-api-1.4.jar:file:/usr/lib/hive/lib/janino-2.7.6.jar:file:/usr/lib/hive/lib/hive-cli.jar:file:/usr/lib/hive/lib/maven-scm-provider-svnexe-1.4.jar:file:/usr/lib/hive/lib/bonecp-0.8.0.RELEASE.jar:file:/usr/lib/hive/lib/zookeeper.jar:file:/usr/lib/hive/lib/jline-2.12.jar:file:/usr/lib/hive/lib/asm-3.2.jar:file:/usr/lib/hive/lib/logredactor-1.0.3.jar:file:/usr/lib/hive/lib/hive-ant.jar:file:/usr/lib/hive/lib/hive-shims-1.1.0-cdh5.5.1.jar:file:/usr/lib/hive/lib/ant-launcher-1.9.1.jar:file:/usr/lib/hive/lib/hive-cli-1.1.0-cdh5.5.1.jar:file:/usr/lib/hive/lib/gson-2.2.4.jar:file:/usr/lib/hive/lib/avro.jar:file:/usr/lib/hive/lib/parquet-hadoop-bundle.jar:file:/usr/lib/hive/lib/commons-beanutils-1.7.0.jar:file:/usr/lib/hive/lib/commons-digester-1.8.jar:file:/usr/lib/hive/lib/apache-log4j-extras-1.2.17.jar:file:/usr/lib/hive/lib/calcite-core-1.0.0-incubating.jar:file:/usr/lib/hive/lib/metrics-json-3.0.2.jar:file:/usr/lib/hive/lib/hive-jdbc-1.1.0-cdh5.5.1-standalone.jar:file:/usr/lib/hive/lib/jackson-databind-2.2.2.jar:file:/usr/lib/hive/lib/hive-exec.jar:file:/usr/lib/hive/lib/jersey-server-1.14.jar:file:/usr/lib/hive/lib/asm-tree-3.1.jar:file:/usr/lib/
hive/lib/jdo-api-3.0.1.jar:file:/usr/lib/hive/lib/geronimo-annotation_1.0_spec-1.1.1.jar:file:/usr/lib/hive/lib/metrics-core-3.0.2.jar:file:/usr/lib/hive/lib/commons-dbcp-1.4.jar:file:/usr/lib/hive/lib/mail-1.4.1.jar:file:/usr/lib/hive/lib/metrics-jvm-3.0.2.jar:file:/usr/lib/hive/lib/paranamer-2.3.jar:file:/usr/lib/hive/lib/commons-lang-2.6.jar:file:/usr/lib/hive/lib/commons-compiler-2.7.6.jar:file:/usr/lib/hive/lib/commons-codec-1.4.jar:file:/usr/lib/hive/lib/guava-14.0.1.jar:file:/usr/lib/hive/lib/hive-service-1.1.0-cdh5.5.1.jar:file:/usr/lib/hive/lib/jersey-servlet-1.14.jar:file:/usr/lib/hive/lib/regexp-1.3.jar:file:/usr/lib/hive/lib/jpam-1.1.jar:file:/usr/lib/hive/lib/calcite-linq4j-1.0.0-incubating.jar:file:/usr/lib/hive/lib/hive-accumulo-handler-1.1.0-cdh5.5.1.jar:file:/usr/lib/hive/lib/hbase-server.jar:file:/usr/lib/hive/lib/eigenbase-properties-1.1.4.jar:file:/usr/lib/hive/lib/commons-pool-1.5.4.jar:file:/usr/lib/hive/lib/commons-vfs2-2.0.jar:file:/usr/lib/hive/lib/jackson-jaxrs-1.9.2.jar:file:/usr/lib/hive/lib/hive-hbase-handler-1.1.0-cdh5.5.1.jar:file:/usr/lib/hive/lib/commons-math-2.1.jar:file:/usr/lib/hive/lib/commons-cli-1.2.jar:file:/usr/lib/hive/lib/commons-io-2.4.jar:file:/usr/lib/hive/lib/ant-1.9.1.jar:file:/usr/lib/hive/lib/ST4-4.0.4.jar:file:/usr/lib/hive/lib/hive-shims-common-1.1.0-cdh5.5.1.jar:file:/usr/lib/hive/lib/hive-common.jar:file:/usr/lib/hive/lib/jetty-all-server-7.6.0.v20120127.jar:file:/usr/lib/hive/lib/hive-service.jar:file:/usr/lib/hive/lib/hbase-hadoop-compat.jar:file:/usr/lib/hive/lib/hive-shims-0.23.jar:file:/usr/lib/hive/lib/hive-contrib-1.1.0-cdh5.5.1.jar:file:/usr/lib/hive/lib/curator-client-2.6.0.jar:file:/usr/lib/hive/lib/commons-httpclient-3.0.1.jar:file:/usr/lib/hive/lib/plexus-utils-1.5.6.jar:file:/usr/lib/hive/lib/pentaho-aggdesigner-algorithm-5.1.5-jhyde.jar:file:/usr/lib/hive/lib/jetty-all-7.6.0.v20120127.jar:file:/usr/lib/hive/lib/hive-shims.jar:file:/usr/lib/hive/lib/datanucleus-api-jdo-3.2.6.jar:file:/usr/lib/hive/li
b/htrace-core.jar:file:/usr/lib/hive/lib/httpclient-4.2.5.jar:file:/usr/lib/hive/lib/jcommander-1.32.jar:file:/usr/lib/hive/lib/antlr-runtime-3.4.jar:file:/usr/lib/hive/lib/opencsv-2.3.jar:file:/usr/lib/hive/lib/jsr305-3.0.0.jar:file:/usr/lib/hive/lib/jackson-xc-1.9.2.jar:file:/usr/lib/hive/lib/hive-shims-common.jar:file:/usr/lib/hive/lib/curator-framework-2.6.0.jar:file:/usr/lib/hive/lib/calcite-avatica-1.0.0-incubating.jar:file:/usr/lib/hive/lib/hive-beeline-1.1.0-cdh5.5.1.jar:file:/usr/lib/hive/lib/hive-hwi.jar:file:/usr/lib/hive/lib/hbase-client.jar:file:/usr/lib/hadoop/client/httpcore-4.2.5.jar:file:/usr/lib/hadoop/client/hadoop-hdfs.jar:file:/usr/lib/hadoop/client/apacheds-i18n-2.0.0-M15.jar:file:/usr/lib/hadoop/client/apacheds-kerberos-codec.jar:file:/usr/lib/hadoop/client/slf4j-api-1.7.5.jar:file:/usr/lib/hadoop/client/commons-net.jar:file:/usr/lib/hadoop/client/commons-beanutils-core-1.8.0.jar:file:/usr/lib/hadoop/client/jackson-annotations-2.2.3.jar:file:/usr/lib/hadoop/client/commons-logging.jar:file:/usr/lib/hadoop/client/curator-recipes-2.7.1.jar:file:/usr/lib/hadoop/client/hadoop-aws-2.6.0-cdh5.5.1.jar:file:/usr/lib/hadoop/client/snappy-java.jar:file:/usr/lib/hadoop/client/leveldbjni-all.jar:file:/usr/lib/hadoop/client/jackson-databind.jar:file:/usr/lib/hadoop/client/commons-lang.jar:file:/usr/lib/hadoop/client/xmlenc-0.52.jar:file:/usr/lib/hadoop/client/snappy-java-1.0.4.1.jar:file:/usr/lib/hadoop/client/commons-httpclient.jar:file:/usr/lib/hadoop/client/hadoop-yarn-server-common.jar:file:/usr/lib/hadoop/client/jackson-databind-2.2.3.jar:file:/usr/lib/hadoop/client/guava-11.0.2.jar:file:/usr/lib/hadoop/client/xz-1.0.jar:file:/usr/lib/hadoop/client/hadoop-yarn-api-2.6.0-cdh5.5.1.jar:file:/usr/lib/hadoop/client/hadoop-mapreduce-client-jobclient.jar:file:/usr/lib/hadoop/client/hadoop-mapreduce-client-app.jar:file:/usr/lib/hadoop/client/activation-1.1.jar:file:/usr/lib/hadoop/client/jaxb-api.jar:file:/usr/lib/hadoop/client/commons-compress-1.4.1.jar:file:
/usr/lib/hadoop/client/commons-configuration-1.6.jar:file:/usr/lib/hadoop/client/jackson-xc.jar:file:/usr/lib/hadoop/client/servlet-api-2.5.jar:file:/usr/lib/hadoop/client/xmlenc.jar:file:/usr/lib/hadoop/client/jackson-jaxrs.jar:file:/usr/lib/hadoop/client/jackson-xc-1.8.8.jar:file:/usr/lib/hadoop/client/hadoop-common-2.6.0-cdh5.5.1.jar:file:/usr/lib/hadoop/client/apacheds-kerberos-codec-2.0.0-M15.jar:file:/usr/lib/hadoop/client/commons-cli.jar:file:/usr/lib/hadoop/client/hadoop-mapreduce-client-app-2.6.0-cdh5.5.1.jar:file:/usr/lib/hadoop/client/aws-java-sdk-1.7.4.jar:file:/usr/lib/hadoop/client/netty.jar:file:/usr/lib/hadoop/client/protobuf-java.jar:file:/usr/lib/hadoop/client/jaxb-api-2.2.2.jar:file:/usr/lib/hadoop/client/commons-logging-1.1.3.jar:file:/usr/lib/hadoop/client/commons-net-3.1.jar:file:/usr/lib/hadoop/client/hadoop-annotations.jar:file:/usr/lib/hadoop/client/hadoop-hdfs-2.6.0-cdh5.5.1.jar:file:/usr/lib/hadoop/client/jersey-core.jar:file:/usr/lib/hadoop/client/hadoop-yarn-client-2.6.0-cdh5.5.1.jar:file:/usr/lib/hadoop/client/hadoop-auth-2.6.0-cdh5.5.1.jar:file:/usr/lib/hadoop/client/zookeeper.jar:file:/usr/lib/hadoop/client/commons-collections-3.2.2.jar:file:/usr/lib/hadoop/client/servlet-api.jar:file:/usr/lib/hadoop/client/guava.jar:file:/usr/lib/hadoop/client/hadoop-yarn-api.jar:file:/usr/lib/hadoop/client/commons-math3.jar:file:/usr/lib/hadoop/client/slf4j-api.jar:file:/usr/lib/hadoop/client/stax-api.jar:file:/usr/lib/hadoop/client/hadoop-auth.jar:file:/usr/lib/hadoop/client/commons-io.jar:file:/usr/lib/hadoop/client/commons-digester.jar:file:/usr/lib/hadoop/client/hadoop-annotations-2.6.0-cdh5.5.1.jar:file:/usr/lib/hadoop/client/gson-2.2.4.jar:file:/usr/lib/hadoop/client/hadoop-mapreduce-client-core-2.6.0-cdh5.5.1.jar:file:/usr/lib/hadoop/client/avro.jar:file:/usr/lib/hadoop/client/activation.jar:file:/usr/lib/hadoop/client/apacheds-i18n.jar:file:/usr/lib/hadoop/client/hadoop-yarn-common.jar:file:/usr/lib/hadoop/client/commons-beanutils-1.7.0.jar:
file:/usr/lib/hadoop/client/hadoop-mapreduce-client-common.jar:file:/usr/lib/hadoop/client/commons-digester-1.8.jar:file:/usr/lib/hadoop/client/jetty-util.jar:file:/usr/lib/hadoop/client/jackson-core-asl-1.8.8.jar:file:/usr/lib/hadoop/client/jetty-util-6.1.26.cloudera.4.jar:file:/usr/lib/hadoop/client/httpcore.jar:file:/usr/lib/hadoop/client/curator-client.jar:file:/usr/lib/hadoop/client/netty-3.6.2.Final.jar:file:/usr/lib/hadoop/client/jackson-mapper-asl.jar:file:/usr/lib/hadoop/client/commons-beanutils-core.jar:file:/usr/lib/hadoop/client/jackson-jaxrs-1.8.8.jar:file:/usr/lib/hadoop/client/xz.jar:file:/usr/lib/hadoop/client/paranamer-2.3.jar:file:/usr/lib/hadoop/client/commons-lang-2.6.jar:file:/usr/lib/hadoop/client/jackson-annotations.jar:file:/usr/lib/hadoop/client/commons-codec-1.4.jar:file:/usr/lib/hadoop/client/jersey-core-1.9.jar:file:/usr/lib/hadoop/client/api-asn1-api-1.0.0-M20.jar:file:/usr/lib/hadoop/client/commons-collections.jar:file:/usr/lib/hadoop/client/hadoop-mapreduce-client-jobclient-2.6.0-cdh5.5.1.jar:file:/usr/lib/hadoop/client/api-util.jar:file:/usr/lib/hadoop/client/jsr305.jar:file:/usr/lib/hadoop/client/httpclient.jar:file:/usr/lib/hadoop/client/xml-apis-1.3.04.jar:file:/usr/lib/hadoop/client/hadoop-mapreduce-client-shuffle.jar:file:/usr/lib/hadoop/client/hadoop-mapreduce-client-core.jar:file:/usr/lib/hadoop/client/curator-client-2.7.1.jar:file:/usr/lib/hadoop/client/commons-httpclient-3.1.jar:file:/usr/lib/hadoop/client/hadoop-yarn-common-2.6.0-cdh5.5.1.jar:file:/usr/lib/hadoop/client/commons-cli-1.2.jar:file:/usr/lib/hadoop/client/commons-io-2.4.jar:file:/usr/lib/hadoop/client/curator-framework.jar:file:/usr/lib/hadoop/client/stax-api-1.0-2.jar:file:/usr/lib/hadoop/client/htrace-core4.jar:file:/usr/lib/hadoop/client/jackson-core-2.2.3.jar:file:/usr/lib/hadoop/client/jackson-core-asl.jar:file:/usr/lib/hadoop/client/commons-configuration.jar:file:/usr/lib/hadoop/client/commons-compress.jar:file:/usr/lib/hadoop/client/hadoop-mapreduce-client
-common-2.6.0-cdh5.5.1.jar:file:/usr/lib/hadoop/client/xercesImpl-2.9.1.jar:file:/usr/lib/hadoop/client/jersey-client-1.9.jar:file:/usr/lib/hadoop/client/log4j.jar:file:/usr/lib/hadoop/client/jackson-mapper-asl-1.8.8.jar:file:/usr/lib/hadoop/client/leveldbjni-all-1.8.jar:file:/usr/lib/hadoop/client/api-util-1.0.0-M20.jar:file:/usr/lib/hadoop/client/curator-framework-2.7.1.jar:file:/usr/lib/hadoop/client/commons-codec.jar:file:/usr/lib/hadoop/client/xml-apis.jar:file:/usr/lib/hadoop/client/jersey-client.jar:file:/usr/lib/hadoop/client/hadoop-yarn-client.jar:file:/usr/lib/hadoop/client/aws-java-sdk.jar:file:/usr/lib/hadoop/client/paranamer.jar:file:/usr/lib/hadoop/client/hadoop-aws.jar:file:/usr/lib/hadoop/client/hadoop-yarn-server-common-2.6.0-cdh5.5.1.jar:file:/usr/lib/hadoop/client/commons-math3-3.1.1.jar:file:/usr/lib/hadoop/client/hadoop-mapreduce-client-shuffle-2.6.0-cdh5.5.1.jar:file:/usr/lib/hadoop/client/httpclient-4.2.5.jar:file:/usr/lib/hadoop/client/commons-beanutils.jar:file:/usr/lib/hadoop/client/hadoop-common.jar:file:/usr/lib/hadoop/client/api-asn1-api.jar:file:/usr/lib/hadoop/client/htrace-core4-4.0.1-incubating.jar:file:/usr/lib/hadoop/client/log4j-1.2.17.jar:file:/usr/lib/hadoop/client/jsr305-3.0.0.jar:file:/usr/lib/hadoop/client/curator-recipes.jar:file:/usr/lib/hadoop/client/slf4j-log4j12.jar:file:/usr/lib/hadoop/client/jackson-core.jar:file:/usr/lib/hadoop/client/protobuf-java-2.5.0.jar:file:/usr/lib/hadoop/client/xercesImpl.jar:file:/usr/lib/hadoop/client/gson.jar
> 16/05/12 22:23:40 INFO metastore.HiveMetaStore: 0: Opening raw store with implemenation
class:org.apache.hadoop.hive.metastore.ObjectStore
> 16/05/12 22:23:40 INFO metastore.ObjectStore: ObjectStore, initialize called
> 16/05/12 22:23:40 INFO DataNucleus.Persistence: Property datanucleus.cache.level2 unknown
- will be ignored
> 16/05/12 22:23:40 INFO DataNucleus.Persistence: Property hive.metastore.integral.jdo.pushdown
unknown - will be ignored
> 16/05/12 22:23:41 INFO metastore.ObjectStore: Setting MetaStore object pin classes with
hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
> 16/05/12 22:23:42 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema"
is tagged as "embedded-only" so does not have its own datastore table.
> 16/05/12 22:23:42 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder"
is tagged as "embedded-only" so does not have its own datastore table.
> 16/05/12 22:23:42 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema"
is tagged as "embedded-only" so does not have its own datastore table.
> 16/05/12 22:23:42 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder"
is tagged as "embedded-only" so does not have its own datastore table.
> 16/05/12 22:23:42 INFO DataNucleus.Query: Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0"
since the connection used is closing
> 16/05/12 22:23:42 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB
is DERBY
> 16/05/12 22:23:42 INFO metastore.ObjectStore: Initialized ObjectStore
> 16/05/12 22:23:42 INFO metastore.HiveMetaStore: Added admin role in metastore
> 16/05/12 22:23:42 INFO metastore.HiveMetaStore: Added public role in metastore
> 16/05/12 22:23:42 INFO metastore.HiveMetaStore: No user is added in admin role, since
config is empty
> 16/05/12 22:23:42 INFO log.PerfLogger: <PERFLOG method=get_all_functions from=org.apache.hadoop.hive.metastore.RetryingHMSHandler>
> 16/05/12 22:23:42 INFO metastore.HiveMetaStore: 0: get_all_functions
> 16/05/12 22:23:42 INFO HiveMetaStore.audit: ugi=root    ip=unknown-ip-addr      cmd=get_all_functions
> 16/05/12 22:23:42 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MResourceUri"
is tagged as "embedded-only" so does not have its own datastore table.
> 16/05/12 22:23:42 INFO log.PerfLogger: </PERFLOG method=get_all_functions start=1463063022896
end=1463063022941 duration=45 from=org.apache.hadoop.hive.metastore.RetryingHMSHandler threadId=0
retryCount=0 error=false>
> 16/05/12 22:23:42 INFO session.SessionState: Created local directory: /tmp/f1ff20d6-3eac-4df0-adbd-64f7e73f35e8_resources
> 16/05/12 22:23:42 INFO session.SessionState: Created HDFS directory: /tmp/hive/root/f1ff20d6-3eac-4df0-adbd-64f7e73f35e8
> 16/05/12 22:23:42 INFO session.SessionState: Created local directory: /tmp/root/f1ff20d6-3eac-4df0-adbd-64f7e73f35e8
> 16/05/12 22:23:42 INFO session.SessionState: Created HDFS directory: /tmp/hive/root/f1ff20d6-3eac-4df0-adbd-64f7e73f35e8/_tmp_space.db
> 16/05/12 22:23:42 INFO session.SessionState: No Tez session required at this point. hive.execution.engine=mr.
> 16/05/12 22:23:42 INFO client.HiveClientImpl: Warehouse location for Hive client (version
1.1.0) is /root/spark-warehouse
> 16/05/12 22:23:43 INFO session.SessionState: Created local directory: /tmp/4f466b18-e85b-4fa5-9b3a-2a1a67118851_resources
> 16/05/12 22:23:43 INFO session.SessionState: Created HDFS directory: /tmp/hive/root/4f466b18-e85b-4fa5-9b3a-2a1a67118851
> 16/05/12 22:23:43 INFO session.SessionState: Created local directory: /tmp/root/4f466b18-e85b-4fa5-9b3a-2a1a67118851
> 16/05/12 22:23:43 INFO session.SessionState: Created HDFS directory: /tmp/hive/root/4f466b18-e85b-4fa5-9b3a-2a1a67118851/_tmp_space.db
> 16/05/12 22:23:43 INFO session.SessionState: No Tez session required at this point. hive.execution.engine=mr.
> 16/05/12 22:23:43 INFO client.HiveClientImpl: Warehouse location for Hive client (version
1.1.0) is /root/spark-warehouse
> spark-sql> use test_sparksql;
> 16/05/12 22:25:06 INFO execution.SparkSqlParser: Parsing command: use test_sparksql
> 16/05/12 22:25:06 INFO log.PerfLogger: <PERFLOG method=create_database from=org.apache.hadoop.hive.metastore.RetryingHMSHandler>
> 16/05/12 22:25:06 INFO metastore.HiveMetaStore: 0: create_database: Database(name:default, description:default database, locationUri:hdfs://hw-node2:8020/root/spark-warehouse, parameters:{})
> 16/05/12 22:25:06 INFO HiveMetaStore.audit: ugi=root    ip=unknown-ip-addr      cmd=create_database: Database(name:default, description:default database, locationUri:hdfs://hw-node2:8020/root/spark-warehouse, parameters:{})
> 16/05/12 22:25:06 ERROR metastore.RetryingHMSHandler: AlreadyExistsException(message:Database default already exists)
>         at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_database(HiveMetaStore.java:898)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:606)
>         at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:133)
>         at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:99)
>         at com.sun.proxy.$Proxy34.create_database(Unknown Source)
>         at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createDatabase(HiveMetaStoreClient.java:645)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:606)
>         at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:91)
>         at com.sun.proxy.$Proxy35.createDatabase(Unknown Source)
>         at org.apache.hadoop.hive.ql.metadata.Hive.createDatabase(Hive.java:341)
>         at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$createDatabase$1.apply$mcV$sp(HiveClientImpl.scala:292)
>         at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$createDatabase$1.apply(HiveClientImpl.scala:292)
>         at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$createDatabase$1.apply(HiveClientImpl.scala:292)
>         at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$withHiveState$1.apply(HiveClientImpl.scala:263)
>         at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:210)
>         at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:209)
>         at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:252)
>         at org.apache.spark.sql.hive.client.HiveClientImpl.createDatabase(HiveClientImpl.scala:291)
>         at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$createDatabase$1.apply$mcV$sp(HiveExternalCatalog.scala:94)
>         at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$createDatabase$1.apply(HiveExternalCatalog.scala:94)
>         at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$createDatabase$1.apply(HiveExternalCatalog.scala:94)
>         at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:68)
>         at org.apache.spark.sql.hive.HiveExternalCatalog.createDatabase(HiveExternalCatalog.scala:93)
>         at org.apache.spark.sql.catalyst.catalog.SessionCatalog.createDatabase(SessionCatalog.scala:142)
>         at org.apache.spark.sql.catalyst.catalog.SessionCatalog.<init>(SessionCatalog.scala:84)
>         at org.apache.spark.sql.hive.HiveSessionCatalog.<init>(HiveSessionCatalog.scala:50)
>         at org.apache.spark.sql.hive.HiveSessionState.catalog$lzycompute(HiveSessionState.scala:49)
>         at org.apache.spark.sql.hive.HiveSessionState.catalog(HiveSessionState.scala:48)
>         at org.apache.spark.sql.hive.HiveSessionState$$anon$1.<init>(HiveSessionState.scala:63)
>         at org.apache.spark.sql.hive.HiveSessionState.analyzer$lzycompute(HiveSessionState.scala:63)
>         at org.apache.spark.sql.hive.HiveSessionState.analyzer(HiveSessionState.scala:62)
>         at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:48)
>         at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:61)
>         at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:541)
>         at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:671)
>         at org.apache.spark.sql.hive.thriftserver.SparkSQLDriver.run(SparkSQLDriver.scala:62)
>         at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:325)
>         at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
>         at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:240)
>         at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:606)
>         at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:724)
>         at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
>         at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
>         at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:119)
>         at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
> 16/05/12 22:25:06 INFO log.PerfLogger: </PERFLOG method=create_database start=1463063106660 end=1463063106665 duration=5 from=org.apache.hadoop.hive.metastore.RetryingHMSHandler threadId=0 retryCount=-1 error=true>
> 16/05/12 22:25:06 INFO log.PerfLogger: <PERFLOG method=get_database from=org.apache.hadoop.hive.metastore.RetryingHMSHandler>
> 16/05/12 22:25:06 INFO metastore.HiveMetaStore: 0: get_database: test_sparksql
> 16/05/12 22:25:06 INFO HiveMetaStore.audit: ugi=root    ip=unknown-ip-addr      cmd=get_database: test_sparksql
> 16/05/12 22:25:06 WARN metastore.ObjectStore: Failed to get database test_sparksql, returning NoSuchObjectException
> 16/05/12 22:25:06 INFO log.PerfLogger: </PERFLOG method=get_database start=1463063106947 end=1463063106950 duration=3 from=org.apache.hadoop.hive.metastore.RetryingHMSHandler threadId=0 retryCount=-1 error=true>
> Error in query: Database 'test_sparksql' not found; 
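The log pattern above (a fresh `create_database: default` followed by `NoSuchObjectException` for `test_sparksql`) is consistent with the client talking to a new local Derby metastore rather than the existing remote one. One quick way to sanity-check which backend a Hive client would use is to inspect the effective `javax.jdo.option.ConnectionURL` in `hive-site.xml`: a `jdbc:derby:` URL means the embedded local metastore. The sketch below is illustrative only; the sample XML is hypothetical and not taken from the reporter's cluster.

```python
# Sketch: read javax.jdo.option.ConnectionURL out of a hive-site.xml to see
# whether the client would use an embedded Derby metastore (local fallback)
# or an external metastore database. Sample config below is hypothetical.
import xml.etree.ElementTree as ET

def metastore_connection_url(hive_site_xml: str) -> str:
    """Return the value of javax.jdo.option.ConnectionURL, or '' if absent."""
    root = ET.fromstring(hive_site_xml)
    for prop in root.findall("property"):
        if prop.findtext("name") == "javax.jdo.option.ConnectionURL":
            return prop.findtext("value", default="")
    return ""

sample = """<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:derby:;databaseName=metastore_db;create=true</value>
  </property>
</configuration>"""

url = metastore_connection_url(sample)
# A jdbc:derby: URL indicates the local embedded metastore is in use.
print("embedded Derby metastore" if url.startswith("jdbc:derby:") else "external metastore")
```

If the URL resolved from the directories on `spark.sql.hive.metastore.jars` / the extra classpath turns out to be Derby, the client never saw the intended `hive-site.xml`, which would explain the behavior in this report.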


