ambari-dev mailing list archives

From "Vitaly Brodetskyi (JIRA)" <j...@apache.org>
Subject [jira] [Created] (AMBARI-6952) Schema upgrade failed during upgrade from BWM20 with default Postgres DB
Date Wed, 20 Aug 2014 18:08:27 GMT
Vitaly Brodetskyi created AMBARI-6952:
-----------------------------------------

             Summary: Schema upgrade failed during upgrade from BWM20 with default Postgres DB
                 Key: AMBARI-6952
                 URL: https://issues.apache.org/jira/browse/AMBARI-6952
             Project: Ambari
          Issue Type: Bug
          Components: agent
    Affects Versions: 1.7.0
            Reporter: Vitaly Brodetskyi
            Assignee: Vitaly Brodetskyi
            Priority: Blocker
             Fix For: 1.7.0


*STR:*
1) Install the Ambari server (BWM20) and set it up with defaults (http://public-repo-1.hortonworks.com/ambari/centos6/1.x/updates/1.4.4.23/ambari.repo)
2) Deploy a cluster
3) Perform an Ambari-only upgrade to 1.7.0
4) Run the schema upgrade (a rough command sketch follows this list)

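For reference, a rough command sketch of the steps above, assuming a CentOS 6 host and the stock yum workflow; the 1.7.0 repo location is deliberately left as a placeholder, since it is not given in this report:
{noformat}
# 1) Install and set up Ambari server 1.4.4.23 (BWM20) with defaults
wget -nv http://public-repo-1.hortonworks.com/ambari/centos6/1.x/updates/1.4.4.23/ambari.repo \
     -O /etc/yum.repos.d/ambari.repo
yum install -y ambari-server
ambari-server setup -s     # -s accepts all defaults, incl. the embedded Postgres DB

# 2) Deploy a cluster through the web UI, then stop the server before upgrading
ambari-server stop

# 3) Ambari-only upgrade: point /etc/yum.repos.d/ambari.repo at the 1.7.0 repo
#    (URL elided here), then upgrade the package
yum upgrade -y ambari-server

# 4) Run the schema upgrade -- this is the step that fails below
ambari-server upgrade
{noformat}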
*Result:* the schema upgrade fails. From ambari-server.log:
{noformat}
org.postgresql.util.PSQLException: ERROR: relation "clusters_cluster_id_seq" does not exist
  Position: 87
	at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2161)
	at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1890)
	at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:255)
	at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:559)
	at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:403)
	at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:395)
	at org.apache.ambari.server.orm.DBAccessorImpl.executeQuery(DBAccessorImpl.java:499)
	at org.apache.ambari.server.orm.DBAccessorImpl.executeQuery(DBAccessorImpl.java:485)
	at org.apache.ambari.server.upgrade.UpgradeCatalog150.executeDMLUpdates(UpgradeCatalog150.java:443)
	at org.apache.ambari.server.upgrade.AbstractUpgradeCatalog.upgradeData(AbstractUpgradeCatalog.java:272)
	at org.apache.ambari.server.upgrade.SchemaUpgradeHelper.executeDMLUpdates(SchemaUpgradeHelper.java:194)
	at org.apache.ambari.server.upgrade.SchemaUpgradeHelper.main(SchemaUpgradeHelper.java:243)
03:15:30,101  INFO [main] DBAccessorImpl:496 - Executing query: INSERT INTO ambari_sequences(sequence_name,
"value") VALUES('configgroup_id_seq', 1)
03:15:30,102  WARN [main] DBAccessorImpl:505 - Error executing query: INSERT INTO ambari_sequences(sequence_name,
"value") VALUES('configgroup_id_seq', 1), errorCode = 0, message = ERROR: duplicate key value
violates unique constraint "ambari_sequences_pkey"
03:15:30,102  INFO [main] DBAccessorImpl:496 - Executing query: INSERT INTO ambari_sequences(sequence_name,
"value") VALUES('requestschedule_id_seq', 1)
03:15:30,124  INFO [main] DBAccessorImpl:496 - Executing query: INSERT INTO ambari_sequences(sequence_name,
"value") VALUES('resourcefilter_id_seq', 1)
03:15:30,790  INFO [main] StackExtensionHelper:467 - No services defined for stack: HDP-1.3.3
03:15:31,662  INFO [Stack Version Loading Thread] LatestRepoCallable:72 - Loading latest URL
info from http://s3.amazonaws.com/dev.hortonworks.com/HDP/hdp_urlinfo.json
03:15:31,900  INFO [Stack Version Loading Thread] LatestRepoCallable:72 - Loading latest URL
info from http://s3.amazonaws.com/dev.hortonworks.com/HDP/hdp_urlinfo.json
03:15:32,090  INFO [Stack Version Loading Thread] LatestRepoCallable:72 - Loading latest URL
info from http://s3.amazonaws.com/dev.hortonworks.com/HDP/hdp_urlinfo.json
03:15:32,263  INFO [Stack Version Loading Thread] LatestRepoCallable:72 - Loading latest URL
info from http://s3.amazonaws.com/dev.hortonworks.com/HDP/hdp_urlinfo.json
03:15:32,432  INFO [main] ActionDefinitionManager:124 - Added custom action definition for
nagios_update_ignore
03:15:32,433  INFO [main] ActionDefinitionManager:124 - Added custom action definition for
check_host
03:15:32,433  INFO [main] ActionDefinitionManager:124 - Added custom action definition for
validate_configs
03:15:32,543 ERROR [main] AbstractUpgradeCatalog:150 - Error in transaction 
javax.persistence.PersistenceException: Exception [EclipseLink-4002] (Eclipse Persistence
Services - 2.4.0.v20120608-r11652): org.eclipse.persistence.exceptions.DatabaseException
Internal Exception: java.sql.BatchUpdateException: Batch entry 0 INSERT INTO clusterconfig
(config_id, config_attributes, config_data, version_tag, create_timestamp, type_name, version,
cluster_id) VALUES (13, NULL, E'{"content":"\\n# Licensed to the Apache Software Foundation
(ASF) under one\\n# or more contributor license agreements.  See the NOTICE file\\n# distributed
with this work for additional information\\n# regarding copyright ownership.  The ASF licenses
this file\\n# to you under the Apache License, Version 2.0 (the\\n# \\"License\\"); you may
not use this file except in compliance\\n# with the License.  You may obtain a copy of the
License at\\n#\\n#     http://www.apache.org/licenses/LICENSE-2.0\\n#\\n# Unless required
by applicable law or agreed to in writing, software\\n# distributed under the License is distributed
on an \\"AS IS\\" BASIS,\\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
or implied.\\n# See the License for the specific language governing permissions and\\n# limitations
under the License.\\n\\n\\n# Define some default values that can be overridden by system properties\\nhbase.root.logger\\u003dINFO,console\\nhbase.security.logger\\u003dINFO,console\\nhbase.log.dir\\u003d.\\nhbase.log.file\\u003dhbase.log\\n\\n#
Define the root logger to the system property \\"hbase.root.logger\\".\\nlog4j.rootLogger\\u003d${hbase.root.logger}\\n\\n#
Logging Threshold\\nlog4j.threshold\\u003dALL\\n\\n#\\n# Daily Rolling File Appender\\n#\\nlog4j.appender.DRFA\\u003dorg.apache.log4j.DailyRollingFileAppender\\nlog4j.appender.DRFA.File\\u003d${hbase.log.dir}/${hbase.log.file}\\n\\n#
Rollver at midnight\\nlog4j.appender.DRFA.DatePattern\\u003d.yyyy-MM-dd\\n\\n# 30-day backup\\n#log4j.appender.DRFA.MaxBackupIndex\\u003d30\\nlog4j.appender.DRFA.layout\\u003dorg.apache.log4j.PatternLayout\\n\\n#
Pattern format: Date LogLevel LoggerName LogMessage\\nlog4j.appender.DRFA.layout.ConversionPattern\\u003d%d{ISO8601}
%-5p [%t] %c{2}: %m%n\\n\\n# Rolling File Appender properties\\nhbase.log.maxfilesize\\u003d256MB\\nhbase.log.maxbackupindex\\u003d20\\n\\n#
Rolling File Appender\\nlog4j.appender.RFA\\u003dorg.apache.log4j.RollingFileAppender\\nlog4j.appender.RFA.File\\u003d${hbase.log.dir}/${hbase.log.file}\\n\\nlog4j.appender.RFA.MaxFileSize\\u003d${hbase.log.maxfilesize}\\nlog4j.appender.RFA.MaxBackupIndex\\u003d${hbase.log.maxbackupindex}\\n\\nlog4j.appender.RFA.layout\\u003dorg.apache.log4j.PatternLayout\\nlog4j.appender.RFA.layout.ConversionPattern\\u003d%d{ISO8601}
%-5p [%t] %c{2}: %m%n\\n\\n#\\n# Security audit appender\\n#\\nhbase.security.log.file\\u003dSecurityAuth.audit\\nhbase.security.log.maxfilesize\\u003d256MB\\nhbase.security.log.maxbackupindex\\u003d20\\nlog4j.appender.RFAS\\u003dorg.apache.log4j.RollingFileAppender\\nlog4j.appender.RFAS.File\\u003d${hbase.log.dir}/${hbase.security.log.file}\\nlog4j.appender.RFAS.MaxFileSize\\u003d${hbase.security.log.maxfilesize}\\nlog4j.appender.RFAS.MaxBackupIndex\\u003d${hbase.security.log.maxbackupindex}\\nlog4j.appender.RFAS.layout\\u003dorg.apache.log4j.PatternLayout\\nlog4j.appender.RFAS.layout.ConversionPattern\\u003d%d{ISO8601}
%p %c: %m%n\\nlog4j.category.SecurityLogger\\u003d${hbase.security.logger}\\nlog4j.additivity.SecurityLogger\\u003dfalse\\n#log4j.logger.SecurityLogger.org.apache.hadoop.hbase.security.access.AccessController\\u003dTRACE\\n\\n#\\n#
Null Appender\\n#\\nlog4j.appender.NullAppender\\u003dorg.apache.log4j.varia.NullAppender\\n\\n#\\n#
console\\n# Add \\"console\\" to rootlogger above if you want to use this\\n#\\nlog4j.appender.console\\u003dorg.apache.log4j.ConsoleAppender\\nlog4j.appender.console.target\\u003dSystem.err\\nlog4j.appender.console.layout\\u003dorg.apache.log4j.PatternLayout\\nlog4j.appender.console.layout.ConversionPattern\\u003d%d{ISO8601}
%-5p [%t] %c{2}: %m%n\\n\\n# Custom Logging levels\\n\\nlog4j.logger.org.apache.zookeeper\\u003dINFO\\n#log4j.logger.org.apache.hadoop.fs.FSNamesystem\\u003dDEBUG\\nlog4j.logger.org.apache.hadoop.hbase\\u003dDEBUG\\n#
Make these two classes INFO-level. Make them DEBUG to see more zk debug.\\nlog4j.logger.org.apache.hadoop.hbase.zookeeper.ZKUtil\\u003dINFO\\nlog4j.logger.org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher\\u003dINFO\\n#log4j.logger.org.apache.hadoop.dfs\\u003dDEBUG\\n#
Set this class to log INFO only otherwise its OTT\\n# Enable this to get detailed connection
error/retry logging.\\n# log4j.logger.org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation\\u003dTRACE\\n\\n\\n#
Uncomment this line to enable tracing on _every_ RPC call (this can be a lot of output)\\n#log4j.logger.org.apache.hadoop.ipc.HBaseServer.trace\\u003dDEBUG\\n\\n#
Uncomment the below if you want to remove logging of client region caching\\u0027\\n# and
scan of .META. messages\\n# log4j.logger.org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation\\u003dINFO\\n#
log4j.logger.org.apache.hadoop.hbase.client.MetaScanner\\u003dINFO\\n\\n    "}', 'version1',
1408443332466, 'hbase-log4j', NULL, 2) was aborted.  Call getNextException to see the cause.
Error Code: 0
Call: INSERT INTO clusterconfig (config_id, config_attributes, config_data, version_tag, create_timestamp,
type_name, version, cluster_id) VALUES (?, ?, ?, ?, ?, ?, ?, ?)
	bind => [8 parameters bound]
Query: InsertObjectQuery(org.apache.ambari.server.orm.entities.ClusterConfigMappingEntity@45e881b6)
	at org.eclipse.persistence.internal.jpa.EntityManagerImpl.flush(EntityManagerImpl.java:804)
	at org.eclipse.persistence.internal.jpa.QueryImpl.performPreQueryFlush(QueryImpl.java:857)
	at org.eclipse.persistence.internal.jpa.QueryImpl.executeReadQuery(QueryImpl.java:180)
	at org.eclipse.persistence.internal.jpa.QueryImpl.getSingleResult(QueryImpl.java:442)
	at org.eclipse.persistence.internal.jpa.EJBQueryImpl.getSingleResult(EJBQueryImpl.java:382)
	at org.apache.ambari.server.orm.dao.DaoUtils.selectOne(DaoUtils.java:70)
	at org.apache.ambari.server.orm.dao.ClusterDAO.findConfig(ClusterDAO.java:96)
	at org.apache.ambari.server.orm.AmbariLocalSessionInterceptor.invoke(AmbariLocalSessionInterceptor.java:53)
	at org.apache.ambari.server.upgrade.UpgradeCatalog150.addMissingLog4jConfigs(UpgradeCatalog150.java:699)
	at org.apache.ambari.server.upgrade.UpgradeCatalog150$5.run(UpgradeCatalog150.java:555)
	at org.apache.ambari.server.upgrade.AbstractUpgradeCatalog.executeInTransaction(AbstractUpgradeCatalog.java:147)
	at org.apache.ambari.server.upgrade.UpgradeCatalog150.executeDMLUpdates(UpgradeCatalog150.java:552)
	at org.apache.ambari.server.upgrade.AbstractUpgradeCatalog.upgradeData(AbstractUpgradeCatalog.java:272)
	at org.apache.ambari.server.upgrade.SchemaUpgradeHelper.executeDMLUpdates(SchemaUpgradeHelper.java:194)
	at org.apache.ambari.server.upgrade.SchemaUpgradeHelper.main(SchemaUpgradeHelper.java:243)
{noformat}
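
Two distinct failures show up in the log: the sequence relation {{clusters_cluster_id_seq}} is missing from the default Postgres schema, and the unconditional seed INSERTs into {{ambari_sequences}} collide with rows that already exist. As an illustration only (a sketch, not the committed fix), the seed inserts could be made idempotent with a guarded INSERT, and the sequence's existence could be checked before it is referenced:
{noformat}
-- Sketch only: seed a sequence row only when it is absent, so re-running the
-- upgrade cannot violate the "ambari_sequences_pkey" unique constraint.
INSERT INTO ambari_sequences (sequence_name, "value")
SELECT 'configgroup_id_seq', 1
WHERE NOT EXISTS (
    SELECT 1 FROM ambari_sequences WHERE sequence_name = 'configgroup_id_seq'
);

-- Likewise, test for the Postgres sequence up front instead of failing with
-- ERROR: relation "clusters_cluster_id_seq" does not exist.
SELECT 1 FROM pg_class WHERE relkind = 'S' AND relname = 'clusters_cluster_id_seq';
{noformat}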



--
This message was sent by Atlassian JIRA
(v6.2#6252)
