ambari-user mailing list archives

From Ravi Mutyala <rmuty...@gmail.com>
Subject Re: HDP 2.1.7 can't start hive metastore service
Date Wed, 05 Nov 2014 08:21:10 GMT
Try logging in with the username that is configured for Hive, most likely
hive if you used the default. The password is the one that you entered in Ambari.

I still think that user does not have access to the hive database.
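A quick way to test that theory is to connect as the hive user from the metastore host and, if the login fails, grant it access. This is a sketch, not the exact commands from this thread: the hostname lix1.bh.com and the user/database names come from the messages below, but the GRANT statements and the `<hive-password>` placeholder are illustrative and must match what you entered in Ambari. These need a live MySQL server, so run them manually:

```shell
# Connect as the hive user (not root) against the metastore host;
# -p prompts for the password configured in Ambari.
mysql -h lix1.bh.com -u hive -p hive

# If that fails with "Access denied", grant the hive user access
# as the MySQL root user, then retry the connection above.
mysql -u root -p -e "
  GRANT ALL PRIVILEGES ON hive.* TO 'hive'@'%' IDENTIFIED BY '<hive-password>';
  GRANT ALL PRIVILEGES ON hive.* TO 'hive'@'lix1.bh.com' IDENTIFIED BY '<hive-password>';
  FLUSH PRIVILEGES;"
```

Note that connecting successfully as root (as in the session below) does not prove the hive user can connect; MySQL grants are per user and per source host.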

On Wed, Nov 5, 2014 at 1:11 AM, guxiaobo1982 <guxiaobo1982@qq.com> wrote:

> [root@lix1 xiaobogu]# mysql -h lix1.bh.com
> Welcome to the MySQL monitor.  Commands end with ; or \g.
> Your MySQL connection id is 5
> Server version: 5.1.73 Source distribution
>
> Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.
> Oracle is a registered trademark of Oracle Corporation and/or its
> affiliates. Other names may be trademarks of their respective owners.
>
> Type 'help;' or '\h' for help. Type '\c' to clear the current input
> statement.
>
> mysql> show databases;
> +--------------------+
> | Database           |
> +--------------------+
> | information_schema |
> | hive               |
> | mysql              |
> | test               |
> +--------------------+
> 4 rows in set (0.00 sec)
>
> mysql> use hive;
> Reading table information for completion of table and column names
> You can turn off this feature to get a quicker startup with -A
>
> Database changed
> mysql>
>
>
> ------------------ Original ------------------
> *From: * "Ravi Mutyala";<rmutyala@gmail.com>;
> *Send time:* Wednesday, Nov 5, 2014 12:51 PM
> *To:* "user"<user@ambari.apache.org>;
> *Subject: * Re: HDP 2.1.7 can't start hive metastore service
>
> This is most likely an error authenticating with or connecting to the
> metastore. Did this node have an existing MySQL installation? You can try
> connecting with the mysql client using -h <hostname>, so it does not use
> localhost, and see if it connects.
>
> On Tue, Nov 4, 2014 at 10:03 PM, guxiaobo1982 <guxiaobo1982@qq.com> wrote:
>
>> This is the output
>>
>> [xiaobogu@lix1 ~]$ su root
>>
>> Password:
>>
>> [root@lix1 xiaobogu]# export HIVE_CONF_DIR=/etc/hive/conf.server ;
>> /usr/lib/hive/bin/schematool -initSchema -dbType mysql -userName hive
>> -passWord hive
>>
>> Metastore connection URL: jdbc:mysql://
>> lix1.bh.com/hive?createDatabaseIfNotExist=true
>>
>> Metastore Connection Driver : com.mysql.jdbc.Driver
>>
>> Metastore connection User: hive
>>
>> org.apache.hadoop.hive.metastore.HiveMetaException: Failed to get schema
>> version.
>>
>> *** schemaTool failed ***
>>
>> [root@lix1 xiaobogu]# ping lix1.bh.com
>>
>> PING lix1.bh.com (192.168.100.3) 56(84) bytes of data.
>>
>> 64 bytes from lix1.bh.com (192.168.100.3): icmp_seq=1 ttl=64 time=0.058
>> ms
>>
>> 64 bytes from lix1.bh.com (192.168.100.3): icmp_seq=2 ttl=64 time=0.036
>> ms
>>
>>
>>
>>
>> ------------------ Original ------------------
>> *From: * "Yusaku Sako";<yusaku@hortonworks.com>;
>> *Send time:* Wednesday, Nov 5, 2014 10:46 AM
>> *To:* "user@ambari.apache.org"<user@ambari.apache.org>;
>> *Subject: * Re: HDP 2.1.7 can't start hive metastore service
>>
>> Hi,
>>
>> I've tried installing HDP 2.1.7 using Ambari 1.6.1 on CentOS 6.4 today
>> and I did not run into the Hive issue you mentioned.
>> I selected "New MySQL Database" for Hive.
>> You mentioned that it's a single-node cluster.
>>
>> 1. If you run "export HIVE_CONF_DIR=/etc/hive/conf.server ;
>> /usr/lib/hive/bin/schematool -initSchema -dbType mysql -userName hive
>> -passWord <password>" from the command line, does that work?
>> 2. schematool is trying to connect via jdbc:mysql://
>> lix1.bh.com/hive?createDatabaseIfNotExist=true  Does lix1.bh.com
>> <http://lix1.bh.com/hive?createDatabaseIfNotExist=true> resolve
>> properly?
>>
>> Yusaku
>>
>> On Tue, Nov 4, 2014 at 1:00 AM, guxiaobo1982 <guxiaobo1982@qq.com> wrote:
>>
>>>
>>> Hi,
>>>
>>> I used Ambari 1.6.1 to install a single-node cluster. I can see that
>>> Ambari installed the latest version of HDP, 2.1.7, but the Hive service
>>> failed to start with the following messages:
>>>
>>>
>>> stderr:   /var/lib/ambari-agent/data/errors-56.txt
>>>
>>> 2014-11-04 16:46:08,931 - Error while executing command 'start':
>>> Traceback (most recent call last):
>>>   File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
line 111, in execute
>>>     method(env)
>>>   File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/HIVE/package/scripts/hive_metastore.py",
line 42, in start
>>>     self.configure(env) # FOR SECURITY
>>>   File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/HIVE/package/scripts/hive_metastore.py",
line 37, in configure
>>>     hive(name='metastore')
>>>   File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/HIVE/package/scripts/hive.py",
line 108, in hive
>>>     not_if = check_schema_created_cmd
>>>   File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line
148, in __init__
>>>     self.env.run()
>>>   File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py",
line 149, in run
>>>     self.run_action(resource, action)
>>>   File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py",
line 115, in run_action
>>>     provider_action()
>>>   File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py",
line 239, in action_run
>>>     raise ex
>>> Fail: Execution of 'export HIVE_CONF_DIR=/etc/hive/conf.server ; /usr/lib/hive/bin/schematool
-initSchema -dbType mysql -userName hive -passWord [PROTECTED]' returned 1. Metastore connection
URL:	 jdbc:mysql://lix1.bh.com/hive?createDatabaseIfNotExist=true
>>> Metastore Connection Driver :	 com.mysql.jdbc.Driver
>>> Metastore connection User:	 hive
>>> org.apache.hadoop.hive.metastore.HiveMetaException: Failed to get schema version.
>>> *** schemaTool failed ***
>>>
>>> stdout:   /var/lib/ambari-agent/data/output-56.txt
>>>
>>> 2014-11-04 16:45:55,983 - Execute['mkdir -p /tmp/HDP-artifacts/;     curl -kf
-x "" --retry 10     http://ambari.bh.com:8080/resources//UnlimitedJCEPolicyJDK7.zip -o /tmp/HDP-artifacts//UnlimitedJCEPolicyJDK7.zip']
{'environment': ..., 'not_if': 'test -e /tmp/HDP-artifacts//UnlimitedJCEPolicyJDK7.zip', 'ignore_failures':
True, 'path': ['/bin', '/usr/bin/']}
>>> 2014-11-04 16:45:56,001 - Skipping Execute['mkdir -p /tmp/HDP-artifacts/;   
 curl -kf -x "" --retry 10     http://ambari.bh.com:8080/resources//UnlimitedJCEPolicyJDK7.zip
-o /tmp/HDP-artifacts//UnlimitedJCEPolicyJDK7.zip'] due to not_if
>>> 2014-11-04 16:45:56,115 - Directory['/etc/hadoop/conf.empty'] {'owner': 'root',
'group': 'root', 'recursive': True}
>>> 2014-11-04 16:45:56,116 - Link['/etc/hadoop/conf'] {'not_if': 'ls /etc/hadoop/conf',
'to': '/etc/hadoop/conf.empty'}
>>> 2014-11-04 16:45:56,137 - Skipping Link['/etc/hadoop/conf'] due to not_if
>>> 2014-11-04 16:45:56,152 - File['/etc/hadoop/conf/hadoop-env.sh'] {'content':
Template('hadoop-env.sh.j2'), 'owner': 'hdfs'}
>>> 2014-11-04 16:45:56,153 - XmlConfig['core-site.xml'] {'owner': 'hdfs', 'group':
'hadoop', 'conf_dir': '/etc/hadoop/conf', 'configurations': ...}
>>> 2014-11-04 16:45:56,159 - Generating config: /etc/hadoop/conf/core-site.xml
>>> 2014-11-04 16:45:56,160 - File['/etc/hadoop/conf/core-site.xml'] {'owner': 'hdfs',
'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None}
>>> 2014-11-04 16:45:56,160 - Writing File['/etc/hadoop/conf/core-site.xml'] because
contents don't match
>>> 2014-11-04 16:45:56,177 - Execute['/bin/echo 0 > /selinux/enforce'] {'only_if':
'test -f /selinux/enforce'}
>>> 2014-11-04 16:45:56,216 - Execute['mkdir -p /usr/lib/hadoop/lib/native/Linux-i386-32;
ln -sf /usr/lib/libsnappy.so /usr/lib/hadoop/lib/native/Linux-i386-32/libsnappy.so'] {}
>>> 2014-11-04 16:45:56,241 - Execute['mkdir -p /usr/lib/hadoop/lib/native/Linux-amd64-64;
ln -sf /usr/lib64/libsnappy.so /usr/lib/hadoop/lib/native/Linux-amd64-64/libsnappy.so'] {}
>>> 2014-11-04 16:45:56,262 - Directory['/var/log/hadoop'] {'owner': 'root', 'group':
'root', 'recursive': True}
>>> 2014-11-04 16:45:56,263 - Directory['/var/run/hadoop'] {'owner': 'root', 'group':
'root', 'recursive': True}
>>> 2014-11-04 16:45:56,265 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'recursive':
True}
>>> 2014-11-04 16:45:56,274 - File['/etc/hadoop/conf/commons-logging.properties']
{'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
>>> 2014-11-04 16:45:56,276 - File['/etc/hadoop/conf/health_check'] {'content': Template('health_check-v2.j2'),
'owner': 'hdfs'}
>>> 2014-11-04 16:45:56,278 - File['/etc/hadoop/conf/log4j.properties'] {'content':
'...', 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
>>> 2014-11-04 16:45:56,282 - File['/etc/hadoop/conf/hadoop-metrics2.properties']
{'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs'}
>>> 2014-11-04 16:45:56,283 - File['/etc/hadoop/conf/task-log4j.properties'] {'content':
StaticFile('task-log4j.properties'), 'mode': 0755}
>>> 2014-11-04 16:45:56,284 - File['/etc/hadoop/conf/configuration.xsl'] {'owner':
'hdfs', 'group': 'hadoop'}
>>> 2014-11-04 16:45:56,457 - Execute['hive mkdir -p /tmp/HDP-artifacts/ ; cp /usr/share/java/mysql-connector-java.jar
/usr/lib/hive/lib//mysql-connector-java.jar'] {'creates': '/usr/lib/hive/lib//mysql-connector-java.jar',
'path': ['/bin', '/usr/bin/'], 'not_if': 'test -f /usr/lib/hive/lib//mysql-connector-java.jar'}
>>> 2014-11-04 16:46:00,436 - Directory['/etc/hive/conf.server'] {'owner': 'hive',
'group': 'hadoop', 'recursive': True}
>>> 2014-11-04 16:46:00,437 - Creating directory Directory['/etc/hive/conf.server']
>>> 2014-11-04 16:46:00,440 - Changing owner for /etc/hive/conf.server from 0 to
hive
>>> 2014-11-04 16:46:00,440 - Changing group for /etc/hive/conf.server from 0 to
hadoop
>>> 2014-11-04 16:46:00,440 - XmlConfig['mapred-site.xml'] {'owner': 'hive', 'group':
'hadoop', 'mode': 0600, 'conf_dir': '/etc/hive/conf.server', 'configurations': ...}
>>> 2014-11-04 16:46:00,447 - Generating config: /etc/hive/conf.server/mapred-site.xml
>>> 2014-11-04 16:46:00,447 - File['/etc/hive/conf.server/mapred-site.xml'] {'owner':
'hive', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0600}
>>> 2014-11-04 16:46:00,448 - Writing File['/etc/hive/conf.server/mapred-site.xml']
because it doesn't exist
>>> 2014-11-04 16:46:00,448 - Changing permission for /etc/hive/conf.server/mapred-site.xml
from 644 to 600
>>> 2014-11-04 16:46:00,448 - Changing owner for /etc/hive/conf.server/mapred-site.xml
from 0 to hive
>>> 2014-11-04 16:46:00,448 - Changing group for /etc/hive/conf.server/mapred-site.xml
from 0 to hadoop
>>> 2014-11-04 16:46:00,449 - XmlConfig['hive-site.xml'] {'owner': 'hive', 'group':
'hadoop', 'mode': 0600, 'conf_dir': '/etc/hive/conf.server', 'configurations': ...}
>>> 2014-11-04 16:46:00,451 - Generating config: /etc/hive/conf.server/hive-site.xml
>>> 2014-11-04 16:46:00,451 - File['/etc/hive/conf.server/hive-site.xml'] {'owner':
'hive', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0600}
>>> 2014-11-04 16:46:00,452 - Writing File['/etc/hive/conf.server/hive-site.xml']
because it doesn't exist
>>> 2014-11-04 16:46:00,452 - Changing permission for /etc/hive/conf.server/hive-site.xml
from 644 to 600
>>> 2014-11-04 16:46:00,452 - Changing owner for /etc/hive/conf.server/hive-site.xml
from 0 to hive
>>> 2014-11-04 16:46:00,452 - Changing group for /etc/hive/conf.server/hive-site.xml
from 0 to hadoop
>>> 2014-11-04 16:46:00,452 - Execute['/bin/sh -c 'cd /usr/lib/ambari-agent/ &&
curl -kf -x "" --retry 5 http://ambari.bh.com:8080/resources/DBConnectionVerification.jar
-o DBConnectionVerification.jar''] {'environment': ..., 'not_if': '[ -f DBConnectionVerification.jar]'}
>>> 2014-11-04 16:46:00,513 - File['/etc/hive/conf.server/hive-env.sh'] {'content':
Template('hive-env.sh.j2'), 'owner': 'hive', 'group': 'hadoop'}
>>> 2014-11-04 16:46:00,513 - Writing File['/etc/hive/conf.server/hive-env.sh'] because
it doesn't exist
>>> 2014-11-04 16:46:00,514 - Changing owner for /etc/hive/conf.server/hive-env.sh
from 0 to hive
>>> 2014-11-04 16:46:00,514 - Changing group for /etc/hive/conf.server/hive-env.sh
from 0 to hadoop
>>> 2014-11-04 16:46:00,514 - File['/tmp/start_metastore_script'] {'content': StaticFile('startMetastore.sh'),
'mode': 0755}
>>> 2014-11-04 16:46:00,515 - Execute['export HIVE_CONF_DIR=/etc/hive/conf.server
; /usr/lib/hive/bin/schematool -initSchema -dbType mysql -userName hive -passWord [PROTECTED]']
{'not_if': 'export HIVE_CONF_DIR=/etc/hive/conf.server ; /usr/lib/hive/bin/schematool -info
-dbType mysql -userName hive -passWord [PROTECTED]'}
>>> 2014-11-04 16:46:08,931 - Error while executing command 'start':
>>> Traceback (most recent call last):
>>>   File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
line 111, in execute
>>>     method(env)
>>>   File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/HIVE/package/scripts/hive_metastore.py",
line 42, in start
>>>     self.configure(env) # FOR SECURITY
>>>   File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/HIVE/package/scripts/hive_metastore.py",
line 37, in configure
>>>     hive(name='metastore')
>>>   File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/HIVE/package/scripts/hive.py",
line 108, in hive
>>>     not_if = check_schema_created_cmd
>>>   File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line
148, in __init__
>>>     self.env.run()
>>>   File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py",
line 149, in run
>>>     self.run_action(resource, action)
>>>   File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py",
line 115, in run_action
>>>     provider_action()
>>>   File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py",
line 239, in action_run
>>>     raise ex
>>> Fail: Execution of 'export HIVE_CONF_DIR=/etc/hive/conf.server ; /usr/lib/hive/bin/schematool
-initSchema -dbType mysql -userName hive -passWord [PROTECTED]' returned 1. Metastore connection
URL:	 jdbc:mysql://lix1.bh.com/hive?createDatabaseIfNotExist=true
>>> Metastore Connection Driver :	 com.mysql.jdbc.Driver
>>> Metastore connection User:	 hive
>>> org.apache.hadoop.hive.metastore.HiveMetaException: Failed to get schema version.
>>> *** schemaTool failed ***
>>>
>>>
>>
>
>
