Date: Wed, 20 Aug 2014 18:08:27 +0000 (UTC)
From: "Vitaly Brodetskyi (JIRA)"
To: dev@ambari.apache.org
Subject: [jira] [Created] (AMBARI-6952) Schema upgrade failed during upgrade from BWM20 with default Postgres DB

Vitaly Brodetskyi created AMBARI-6952:
-----------------------------------------

             Summary: Schema upgrade failed during upgrade from BWM20 with default Postgres DB
                 Key: AMBARI-6952
                 URL: https://issues.apache.org/jira/browse/AMBARI-6952
             Project: Ambari
          Issue Type: Bug
          Components: agent
    Affects Versions: 1.7.0
            Reporter: Vitaly Brodetskyi
            Assignee: Vitaly Brodetskyi
            Priority: Blocker
             Fix For: 1.7.0


*STR:*
1) Install the Ambari server (BWM20) and set it up with the defaults (http://public-repo-1.hortonworks.com/ambari/centos6/1.x/updates/1.4.4.23/ambari.repo)
2) Deploy a cluster
3) Perform an Ambari-only upgrade to 1.7.0
4) Run the schema upgrade

*Result:* The schema upgrade failed.
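Before re-running the upgrade it is worth checking whether the old schema actually contains the relations the 1.7.0 DML step touches; the log excerpts below show it does not. A minimal pre-flight sketch, not part of Ambari (class name is illustrative, and the connection URL and credentials are the embedded-Postgres defaults, which may differ in your environment):

{noformat}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public final class UpgradePreflight {
    public static void main(String[] args) throws Exception {
        // Assumed defaults for the embedded Postgres setup; adjust as needed.
        try (Connection c = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/ambari", "ambari", "bigdata")) {
            // relkind = 'S' selects sequences; clusters_cluster_id_seq is the
            // relation the failing upgrade query reads from (see log below).
            try (PreparedStatement ps = c.prepareStatement(
                    "SELECT 1 FROM pg_class WHERE relkind = 'S' AND relname = ?")) {
                ps.setString(1, "clusters_cluster_id_seq");
                try (ResultSet rs = ps.executeQuery()) {
                    System.out.println(rs.next()
                        ? "clusters_cluster_id_seq exists"
                        : "clusters_cluster_id_seq is missing; schema upgrade will fail");
                }
            }
        }
    }
}
{noformat}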
ambari-server.log:
{noformat}
org.postgresql.util.PSQLException: ERROR: relation "clusters_cluster_id_seq" does not exist
  Position: 87
    at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2161)
    at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1890)
    at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:255)
    at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:559)
    at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:403)
    at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:395)
    at org.apache.ambari.server.orm.DBAccessorImpl.executeQuery(DBAccessorImpl.java:499)
    at org.apache.ambari.server.orm.DBAccessorImpl.executeQuery(DBAccessorImpl.java:485)
    at org.apache.ambari.server.upgrade.UpgradeCatalog150.executeDMLUpdates(UpgradeCatalog150.java:443)
    at org.apache.ambari.server.upgrade.AbstractUpgradeCatalog.upgradeData(AbstractUpgradeCatalog.java:272)
    at org.apache.ambari.server.upgrade.SchemaUpgradeHelper.executeDMLUpdates(SchemaUpgradeHelper.java:194)
    at org.apache.ambari.server.upgrade.SchemaUpgradeHelper.main(SchemaUpgradeHelper.java:243)
03:15:30,101 INFO [main] DBAccessorImpl:496 - Executing query: INSERT INTO ambari_sequences(sequence_name, "value") VALUES('configgroup_id_seq', 1)
03:15:30,102 WARN [main] DBAccessorImpl:505 - Error executing query: INSERT INTO ambari_sequences(sequence_name, "value") VALUES('configgroup_id_seq', 1), errorCode = 0, message = ERROR: duplicate key value violates unique constraint "ambari_sequences_pkey"
03:15:30,102 INFO [main] DBAccessorImpl:496 - Executing query: INSERT INTO ambari_sequences(sequence_name, "value") VALUES('requestschedule_id_seq', 1)
03:15:30,124 INFO [main] DBAccessorImpl:496 - Executing query: INSERT INTO ambari_sequences(sequence_name, "value") VALUES('resourcefilter_id_seq', 1)
03:15:30,790 INFO [main] StackExtensionHelper:467 - No services defined for stack: HDP-1.3.3
03:15:31,662 INFO [Stack Version Loading Thread] LatestRepoCallable:72 - Loading latest URL info from http://s3.amazonaws.com/dev.hortonworks.com/HDP/hdp_urlinfo.json
03:15:31,900 INFO [Stack Version Loading Thread] LatestRepoCallable:72 - Loading latest URL info from http://s3.amazonaws.com/dev.hortonworks.com/HDP/hdp_urlinfo.json
03:15:32,090 INFO [Stack Version Loading Thread] LatestRepoCallable:72 - Loading latest URL info from http://s3.amazonaws.com/dev.hortonworks.com/HDP/hdp_urlinfo.json
03:15:32,263 INFO [Stack Version Loading Thread] LatestRepoCallable:72 - Loading latest URL info from http://s3.amazonaws.com/dev.hortonworks.com/HDP/hdp_urlinfo.json
03:15:32,432 INFO [main] ActionDefinitionManager:124 - Added custom action definition for nagios_update_ignore
03:15:32,433 INFO [main] ActionDefinitionManager:124 - Added custom action definition for check_host
03:15:32,433 INFO [main] ActionDefinitionManager:124 - Added custom action definition for validate_configs
{noformat}
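Two separate problems are visible in the excerpt above. First, the DML step in UpgradeCatalog150 reads from the Postgres sequence clusters_cluster_id_seq, which does not exist on a schema created by the 1.4.4 DDL. Second, the seeding of ambari_sequences is not idempotent: the configgroup_id_seq row is already present, so a plain INSERT trips the ambari_sequences_pkey constraint. As a hedged sketch of an idempotent seed (plain JDBC rather than Ambari's DBAccessor API; class and method names are illustrative):

{noformat}
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public final class SequenceSeeder {
    // Sketch only: insert the sequence row if and only if it is absent,
    // so re-running the DML step cannot violate ambari_sequences_pkey.
    // Uses INSERT ... SELECT ... WHERE NOT EXISTS instead of ON CONFLICT,
    // which the Postgres versions of that era did not have.
    static void seedSequence(Connection conn, String name, long value)
            throws SQLException {
        String sql = "INSERT INTO ambari_sequences(sequence_name, \"value\") "
            + "SELECT ?, ? WHERE NOT EXISTS "
            + "(SELECT 1 FROM ambari_sequences WHERE sequence_name = ?)";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, name);
            ps.setLong(2, value);
            ps.setString(3, name);
            ps.executeUpdate();
        }
    }
}
{noformat}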
{noformat}
03:15:32,543 ERROR [main] AbstractUpgradeCatalog:150 - Error in transaction
javax.persistence.PersistenceException: Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.4.0.v20120608-r11652): org.eclipse.persistence.exceptions.DatabaseException
Internal Exception: java.sql.BatchUpdateException: Batch entry 0 INSERT INTO clusterconfig (config_id, config_attributes, config_data, version_tag, create_timestamp, type_name, version, cluster_id) VALUES (13, NULL, E'{"content":"\\n# Licensed to the Apache Software Foundation (ASF) under one\\n# or more contributor license agreements. See the NOTICE file\\n# distributed with this work for additional information\\n# regarding copyright ownership. The ASF licenses this file\\n# to you under the Apache License, Version 2.0 (the\\n# \\"License\\"); you may not use this file except in compliance\\n# with the License. You may obtain a copy of the License at\\n#\\n#     http://www.apache.org/licenses/LICENSE-2.0\\n#\\n# Unless required by applicable law or agreed to in writing, software\\n# distributed under the License is distributed on an \\"AS IS\\" BASIS,\\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\\n# See the License for the specific language governing permissions and\\n# limitations under the License.\\n\\n\\n# Define some default values that can be overridden by system properties\\nhbase.root.logger\\u003dINFO,console\\nhbase.security.logger\\u003dINFO,console\\nhbase.log.dir\\u003d.\\nhbase.log.file\\u003dhbase.log\\n\\n# Define the root logger to the system property \\"hbase.root.logger\\".\\nlog4j.rootLogger\\u003d${hbase.root.logger}\\n\\n# Logging Threshold\\nlog4j.threshold\\u003dALL\\n\\n#\\n# Daily Rolling File Appender\\n#\\nlog4j.appender.DRFA\\u003dorg.apache.log4j.DailyRollingFileAppender\\nlog4j.appender.DRFA.File\\u003d${hbase.log.dir}/${hbase.log.file}\\n\\n# Rollver at midnight\\nlog4j.appender.DRFA.DatePattern\\u003d.yyyy-MM-dd\\n\\n# 30-day backup\\n#log4j.appender.DRFA.MaxBackupIndex\\u003d30\\nlog4j.appender.DRFA.layout\\u003dorg.apache.log4j.PatternLayout\\n\\n# Pattern format: Date LogLevel LoggerName LogMessage\\nlog4j.appender.DRFA.layout.ConversionPattern\\u003d%d{ISO8601} %-5p [%t] %c{2}: %m%n\\n\\n# Rolling File Appender properties\\nhbase.log.maxfilesize\\u003d256MB\\nhbase.log.maxbackupindex\\u003d20\\n\\n# Rolling File Appender\\nlog4j.appender.RFA\\u003dorg.apache.log4j.RollingFileAppender\\nlog4j.appender.RFA.File\\u003d${hbase.log.dir}/${hbase.log.file}\\n\\nlog4j.appender.RFA.MaxFileSize\\u003d${hbase.log.maxfilesize}\\nlog4j.appender.RFA.MaxBackupIndex\\u003d${hbase.log.maxbackupindex}\\n\\nlog4j.appender.RFA.layout\\u003dorg.apache.log4j.PatternLayout\\nlog4j.appender.RFA.layout.ConversionPattern\\u003d%d{ISO8601} %-5p [%t] %c{2}: %m%n\\n\\n#\\n# Security audit appender\\n#\\nhbase.security.log.file\\u003dSecurityAuth.audit\\nhbase.security.log.maxfilesize\\u003d256MB\\nhbase.security.log.maxbackupindex\\u003d20\\nlog4j.appender.RFAS\\u003dorg.apache.log4j.RollingFileAppender\\nlog4j.appender.RFAS.File\\u003d${hbase.log.dir}/${hbase.security.log.file}\\nlog4j.appender.RFAS.MaxFileSize\\u003d${hbase.security.log.maxfilesize}\\nlog4j.appender.RFAS.MaxBackupIndex\\u003d${hbase.security.log.maxbackupindex}\\nlog4j.appender.RFAS.layout\\u003dorg.apache.log4j.PatternLayout\\nlog4j.appender.RFAS.layout.ConversionPattern\\u003d%d{ISO8601} %p %c: %m%n\\nlog4j.category.SecurityLogger\\u003d${hbase.security.logger}\\nlog4j.additivity.SecurityLogger\\u003dfalse\\n#log4j.logger.SecurityLogger.org.apache.hadoop.hbase.security.access.AccessController\\u003dTRACE\\n\\n#\\n# Null Appender\\n#\\nlog4j.appender.NullAppender\\u003dorg.apache.log4j.varia.NullAppender\\n\\n#\\n# console\\n# Add \\"console\\" to rootlogger above if you want to use this\\n#\\nlog4j.appender.console\\u003dorg.apache.log4j.ConsoleAppender\\nlog4j.appender.console.target\\u003dSystem.err\\nlog4j.appender.console.layout\\u003dorg.apache.log4j.PatternLayout\\nlog4j.appender.console.layout.ConversionPattern\\u003d%d{ISO8601} %-5p [%t] %c{2}: %m%n\\n\\n# Custom Logging levels\\n\\nlog4j.logger.org.apache.zookeeper\\u003dINFO\\n#log4j.logger.org.apache.hadoop.fs.FSNamesystem\\u003dDEBUG\\nlog4j.logger.org.apache.hadoop.hbase\\u003dDEBUG\\n# Make these two classes INFO-level. Make them DEBUG to see more zk debug.\\nlog4j.logger.org.apache.hadoop.hbase.zookeeper.ZKUtil\\u003dINFO\\nlog4j.logger.org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher\\u003dINFO\\n#log4j.logger.org.apache.hadoop.dfs\\u003dDEBUG\\n# Set this class to log INFO only otherwise its OTT\\n# Enable this to get detailed connection error/retry logging.\\n# log4j.logger.org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation\\u003dTRACE\\n\\n\\n# Uncomment this line to enable tracing on _every_ RPC call (this can be a lot of output)\\n#log4j.logger.org.apache.hadoop.ipc.HBaseServer.trace\\u003dDEBUG\\n\\n# Uncomment the below if you want to remove logging of client region caching\\u0027\\n# and scan of .META. messages\\n# log4j.logger.org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation\\u003dINFO\\n# log4j.logger.org.apache.hadoop.hbase.client.MetaScanner\\u003dINFO\\n\\n "}', 'version1', 1408443332466, 'hbase-log4j', NULL, 2) was aborted. Call getNextException to see the cause.
Error Code: 0
Call: INSERT INTO clusterconfig (config_id, config_attributes, config_data, version_tag, create_timestamp, type_name, version, cluster_id) VALUES (?, ?, ?, ?, ?, ?, ?, ?)
    bind => [8 parameters bound]
Query: InsertObjectQuery(org.apache.ambari.server.orm.entities.ClusterConfigMappingEntity@45e881b6)
{noformat}
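The BatchUpdateException above only reports that batch entry 0 was aborted; as the message itself says, the real Postgres error is chained and reachable via getNextException(). When reproducing, a small helper like the following (standard JDBC, nothing Ambari-specific; class name is illustrative) surfaces the underlying error:

{noformat}
import java.sql.SQLException;

final class SqlChainLogger {
    // Sketch: walk the SQLException chain to expose the error hidden
    // behind "Batch entry 0 ... was aborted".
    static void logChain(SQLException e) {
        for (SQLException cur = e; cur != null; cur = cur.getNextException()) {
            System.err.println("SQLState=" + cur.getSQLState()
                + " message=" + cur.getMessage());
        }
    }
}
{noformat}

The remainder of the stack trace: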
{noformat}
    at org.eclipse.persistence.internal.jpa.EntityManagerImpl.flush(EntityManagerImpl.java:804)
    at org.eclipse.persistence.internal.jpa.QueryImpl.performPreQueryFlush(QueryImpl.java:857)
    at org.eclipse.persistence.internal.jpa.QueryImpl.executeReadQuery(QueryImpl.java:180)
    at org.eclipse.persistence.internal.jpa.QueryImpl.getSingleResult(QueryImpl.java:442)
    at org.eclipse.persistence.internal.jpa.EJBQueryImpl.getSingleResult(EJBQueryImpl.java:382)
    at org.apache.ambari.server.orm.dao.DaoUtils.selectOne(DaoUtils.java:70)
    at org.apache.ambari.server.orm.dao.ClusterDAO.findConfig(ClusterDAO.java:96)
    at org.apache.ambari.server.orm.AmbariLocalSessionInterceptor.invoke(AmbariLocalSessionInterceptor.java:53)
    at org.apache.ambari.server.upgrade.UpgradeCatalog150.addMissingLog4jConfigs(UpgradeCatalog150.java:699)
    at org.apache.ambari.server.upgrade.UpgradeCatalog150$5.run(UpgradeCatalog150.java:555)
    at org.apache.ambari.server.upgrade.AbstractUpgradeCatalog.executeInTransaction(AbstractUpgradeCatalog.java:147)
    at org.apache.ambari.server.upgrade.UpgradeCatalog150.executeDMLUpdates(UpgradeCatalog150.java:552)
    at org.apache.ambari.server.upgrade.AbstractUpgradeCatalog.upgradeData(AbstractUpgradeCatalog.java:272)
    at org.apache.ambari.server.upgrade.SchemaUpgradeHelper.executeDMLUpdates(SchemaUpgradeHelper.java:194)
    at org.apache.ambari.server.upgrade.SchemaUpgradeHelper.main(SchemaUpgradeHelper.java:243)
{noformat}

--
This message was sent by Atlassian JIRA
(v6.2#6252)