ambari-dev mailing list archives

From "Hadoop QA (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (AMBARI-11545) Hive Metastore Upgrade Fails During Upgrade Schema
Date Fri, 29 May 2015 22:09:18 GMT

    [ https://issues.apache.org/jira/browse/AMBARI-11545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14565534#comment-14565534 ]

Hadoop QA commented on AMBARI-11545:
------------------------------------

{color:red}-1 overall{color}.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12736227/AMBARI-11545.patch
  against trunk revision .

    {color:green}+1 @author{color}.  The patch does not contain any @author tags.

    {color:red}-1 tests included{color}.  The patch doesn't appear to include any new or modified tests.
                        Please justify why no new tests are needed for this patch.
                        Also please list what manual steps were performed to verify this patch.

    {color:green}+1 javac{color}.  The applied patch does not increase the total number of javac compiler warnings.

    {color:green}+1 release audit{color}.  The applied patch does not increase the total number of release audit warnings.

    {color:red}-1 core tests{color}.  The patch failed these unit tests in ambari-server:

                  org.apache.ambari.server.api.services.PersistServiceTest

Test results: https://builds.apache.org/job/Ambari-trunk-test-patch/2928//testReport/
Console output: https://builds.apache.org/job/Ambari-trunk-test-patch/2928//console

This message is automatically generated.

> Hive Metastore Upgrade Fails During Upgrade Schema
> --------------------------------------------------
>
>                 Key: AMBARI-11545
>                 URL: https://issues.apache.org/jira/browse/AMBARI-11545
>             Project: Ambari
>          Issue Type: Bug
>          Components: ambari-server
>    Affects Versions: 2.1.0
>            Reporter: Jonathan Hurley
>            Assignee: Jonathan Hurley
>            Priority: Critical
>             Fix For: 2.1.0
>
>         Attachments: AMBARI-11545.patch
>
>
> When upgrading Hive from HDP 2.2 to HDP 2.3, the metastore schema upgrade fails:
> {code}
> 2015-05-29 13:37:48,328 - hive-metastore is currently at version 2.2.7.0-2808
> 2015-05-29 13:37:48,348 - hive-metastore is currently at version 2.2.7.0-2808
> 2015-05-29 13:37:48,368 - hive-metastore is currently at version 2.2.7.0-2808
> 2015-05-29 13:37:48,387 - call['conf-select set-conf-dir --package hadoop --stack-version 2.3.0.0-2162 --conf-version 0'] {'logoutput': False, 'quiet': False}
> 2015-05-29 13:37:48,406 - call returned (0, '/usr/hdp/2.3.0.0-2162/hadoop/conf -> /etc/hadoop/2.3.0.0-2162/0\r')
> 2015-05-29 13:37:48,568 - call['conf-select set-conf-dir --package hadoop --stack-version 2.3.0.0-2162 --conf-version 0'] {'logoutput': False, 'quiet': False}
> 2015-05-29 13:37:48,586 - call returned (0, '/usr/hdp/2.3.0.0-2162/hadoop/conf -> /etc/hadoop/2.3.0.0-2162/0\r')
> 2015-05-29 13:37:48,607 - hive-metastore is currently at version 2.2.7.0-2808
> 2015-05-29 13:37:48,610 - Directory['/var/lib/ambari-agent/data/tmp/AMBARI-artifacts/'] {'recursive': True}
> 2015-05-29 13:37:48,611 - File['/var/lib/ambari-agent/data/tmp/AMBARI-artifacts//jce_policy-8.zip'] {'content': DownloadSource('http://jhurley-hdp22-ru-1.c.pramod-thangali.internal:8080/resources//jce_policy-8.zip')}
> 2015-05-29 13:37:48,611 - Not downloading the file from http://jhurley-hdp22-ru-1.c.pramod-thangali.internal:8080/resources//jce_policy-8.zip, because /var/lib/ambari-agent/data/tmp/jce_policy-8.zip already exists
> 2015-05-29 13:37:48,612 - Group['spark'] {'ignore_failures': False}
> 2015-05-29 13:37:48,612 - Group['hadoop'] {'ignore_failures': False}
> 2015-05-29 13:37:48,613 - Group['users'] {'ignore_failures': False}
> 2015-05-29 13:37:48,613 - User['hive'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
> 2015-05-29 13:37:48,614 - User['storm'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
> 2015-05-29 13:37:48,615 - User['zookeeper'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
> 2015-05-29 13:37:48,615 - User['oozie'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'users']}
> 2015-05-29 13:37:48,616 - User['falcon'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
> 2015-05-29 13:37:48,617 - User['tez'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'users']}
> 2015-05-29 13:37:48,618 - User['accumulo'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
> 2015-05-29 13:37:48,618 - User['spark'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
> 2015-05-29 13:37:48,619 - User['ambari-qa'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'users']}
> 2015-05-29 13:37:48,619 - User['flume'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
> 2015-05-29 13:37:48,620 - User['kafka'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
> 2015-05-29 13:37:48,620 - User['hdfs'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
> 2015-05-29 13:37:48,621 - User['sqoop'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
> 2015-05-29 13:37:48,622 - User['yarn'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
> 2015-05-29 13:37:48,623 - User['mapred'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
> 2015-05-29 13:37:48,623 - User['hbase'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
> 2015-05-29 13:37:48,624 - User['hcat'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
> 2015-05-29 13:37:48,624 - File['/var/lib/ambari-agent/data/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
> 2015-05-29 13:37:48,625 - Execute['/var/lib/ambari-agent/data/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
> 2015-05-29 13:37:48,629 - Skipping Execute['/var/lib/ambari-agent/data/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
> 2015-05-29 13:37:48,629 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'recursive': True, 'mode': 0775, 'cd_access': 'a'}
> 2015-05-29 13:37:48,630 - File['/var/lib/ambari-agent/data/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
> 2015-05-29 13:37:48,631 - Execute['/var/lib/ambari-agent/data/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
> 2015-05-29 13:37:48,635 - Skipping Execute['/var/lib/ambari-agent/data/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if
> 2015-05-29 13:37:48,635 - Group['hdfs'] {'ignore_failures': False}
> 2015-05-29 13:37:48,636 - User['hdfs'] {'ignore_failures': False, 'groups': [u'hadoop', u'hdfs']}
> 2015-05-29 13:37:48,636 - Directory['/etc/hadoop'] {'mode': 0755}
> 2015-05-29 13:37:48,649 - File['/usr/hdp/2.3.0.0-2162/hadoop/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
> 2015-05-29 13:37:48,658 - Execute['('setenforce', '0')'] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
> 2015-05-29 13:37:48,675 - Directory['/var/log/hadoop'] {'owner': 'root', 'mode': 0775, 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'}
> 2015-05-29 13:37:48,676 - Directory['/var/run/hadoop'] {'owner': 'root', 'group': 'root', 'recursive': True, 'cd_access': 'a'}
> 2015-05-29 13:37:48,677 - Changing owner for /var/run/hadoop from 511 to root
> 2015-05-29 13:37:48,677 - Changing group for /var/run/hadoop from 501 to root
> 2015-05-29 13:37:48,677 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'recursive': True, 'cd_access': 'a'}
> 2015-05-29 13:37:48,681 - File['/usr/hdp/2.3.0.0-2162/hadoop/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
> 2015-05-29 13:37:48,683 - File['/usr/hdp/2.3.0.0-2162/hadoop/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
> 2015-05-29 13:37:48,683 - File['/usr/hdp/2.3.0.0-2162/hadoop/conf/log4j.properties'] {'content': '...', 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
> 2015-05-29 13:37:48,692 - File['/usr/hdp/2.3.0.0-2162/hadoop/conf/hadoop-metrics2.properties'] {'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs'}
> 2015-05-29 13:37:48,692 - File['/usr/hdp/2.3.0.0-2162/hadoop/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
> 2015-05-29 13:37:48,924 - call['conf-select set-conf-dir --package hadoop --stack-version 2.3.0.0-2162 --conf-version 0'] {'logoutput': False, 'quiet': False}
> 2015-05-29 13:37:48,943 - call returned (0, '/usr/hdp/2.3.0.0-2162/hadoop/conf -> /etc/hadoop/2.3.0.0-2162/0\r')
> 2015-05-29 13:37:48,964 - hive-metastore is currently at version 2.2.7.0-2808
> 2015-05-29 13:37:48,984 - hive-metastore is currently at version 2.2.7.0-2808
> 2015-05-29 13:37:49,011 - Execute['ambari-sudo.sh kill `cat /var/run/hive/hive.pid`'] {'not_if': '! (ls /var/run/hive/hive.pid >/dev/null 2>&1 && ps -p `cat /var/run/hive/hive.pid` >/dev/null 2>&1)'}
> 2015-05-29 13:37:49,032 - Execute['ambari-sudo.sh kill -9 `cat /var/run/hive/hive.pid`'] {'not_if': '! (ls /var/run/hive/hive.pid >/dev/null 2>&1 && ps -p `cat /var/run/hive/hive.pid` >/dev/null 2>&1) || ( sleep 5 && ! (ls /var/run/hive/hive.pid >/dev/null 2>&1 && ps -p `cat /var/run/hive/hive.pid` >/dev/null 2>&1) )'}
> 2015-05-29 13:37:54,052 - Skipping Execute['ambari-sudo.sh kill -9 `cat /var/run/hive/hive.pid`'] due to not_if
> 2015-05-29 13:37:54,053 - Execute['! (ls /var/run/hive/hive.pid >/dev/null 2>&1 && ps -p `cat /var/run/hive/hive.pid` >/dev/null 2>&1)'] {'tries': 20, 'try_sleep': 3}
> 2015-05-29 13:37:54,062 - File['/var/run/hive/hive.pid'] {'action': ['delete']}
> 2015-05-29 13:37:54,062 - Deleting File['/var/run/hive/hive.pid']
> 2015-05-29 13:37:54,062 - Executing Metastore Rolling Upgrade pre-restart
> 2015-05-29 13:37:54,064 - Upgrading Hive Metastore
> 2015-05-29 13:37:54,065 - Execute['/usr/hdp/2.3.0.0-2162/hive/bin/schematool -dbType mysql -upgradeSchema'] {'logoutput': True, 'environment': {'HIVE_CONF_DIR': '/usr/hdp/current/hive-metastore/conf/conf.server'}, 'tries': 1, 'user': 'hive'}
> WARNING: Use "yarn jar" to launch YARN applications.
> org.apache.hadoop.hive.metastore.HiveMetaException: Failed to get schema version.
> *** schemaTool failed ***
> {code}
> It appears as though conf-select has not yet been called, and therefore the configuration directory does not exist:
> {noformat}
> [root@jhurley-hdp22-ru-4 conf.server]# ll /usr/hdp/current/hive-metastore/conf/conf.server
> ls: cannot access /usr/hdp/current/hive-metastore/conf/conf.server: No such file or directory
> [root@jhurley-hdp22-ru-4 conf.server]# ll /usr/hdp/current/hive-metastore/conf
> lrwxrwxrwx. 1 root root 14 May 29 03:31 /usr/hdp/current/hive-metastore/conf -> /etc/hive/conf
> [root@jhurley-hdp22-ru-4 conf.server]# ll /usr/hdp/current | grep hive
> lrwxrwxrwx. 1 root root 26 May 29 03:59 hive-client -> /usr/hdp/2.2.7.0-2808/hive
> lrwxrwxrwx. 1 root root 26 May 29 03:31 hive-metastore -> /usr/hdp/2.2.7.0-2808/hive
> lrwxrwxrwx. 1 root root 26 May 29 03:31 hive-server2 -> /usr/hdp/2.2.7.0-2808/hive
> lrwxrwxrwx. 1 root root 35 May 29 03:31 hive-webhcat -> /usr/hdp/2.2.7.0-2808/hive-hcatalog
> [root@jhurley-hdp22-ru-4 conf.server]# ll /etc/hive
> drwxr-xr-x. 2 hive hadoop 4096 May 29 03:35 conf
> drwxr-xr-x. 2 hive hadoop 4096 May 29 03:37 conf.server
> {noformat}
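
For illustration only, here is a minimal Python sketch of the kind of guard the diagnosis above suggests: verify that the conf.server directory exists, and invoke conf-select for it before running schematool. The paths, stack version, and schematool arguments are taken from the log above; the --package hive invocation, the helper name, and the overall structure are assumptions made for the sketch and are not taken from the attached AMBARI-11545.patch.

{code}
import os
import subprocess

# Values taken from the log above; the guard itself is a hypothetical sketch,
# not the change shipped in AMBARI-11545.patch.
STACK_VERSION = "2.3.0.0-2162"
HIVE_CONF_SERVER = "/usr/hdp/current/hive-metastore/conf/conf.server"
SCHEMATOOL = "/usr/hdp/%s/hive/bin/schematool" % STACK_VERSION


def upgrade_metastore_schema(db_type="mysql"):
    """Run schematool only once the server-side Hive config directory exists."""
    if not os.path.isdir(HIVE_CONF_SERVER):
        # Assumption: conf-select accepts '--package hive' the same way the log
        # shows it being called with '--package hadoop'.
        subprocess.check_call([
            "conf-select", "set-conf-dir",
            "--package", "hive",
            "--stack-version", STACK_VERSION,
            "--conf-version", "0",
        ])

    # Mirror the environment used by the failing Execute[] call in the log.
    env = dict(os.environ, HIVE_CONF_DIR=HIVE_CONF_SERVER)
    subprocess.check_call([SCHEMATOOL, "-dbType", db_type, "-upgradeSchema"], env=env)


if __name__ == "__main__":
    upgrade_metastore_schema()
{code}

In Ambari itself this step runs through the resource_management Execute resource (as the log shows), so an actual fix would live in the Hive service scripts rather than in a standalone helper like this.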



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
