Date: Thu, 6 Nov 2014 04:37:34 +0000 (UTC)
From: "Brock Noland (JIRA)"
To: hive-dev@hadoop.apache.org
Reply-To: dev@hive.apache.org
Subject: [jira] [Commented] (HIVE-8744) hbase_stats3.q test fails when paths stored at JDBCStatsUtils.getIdColumnName() are too large

    [ https://issues.apache.org/jira/browse/HIVE-8744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14199770#comment-14199770 ]

Brock Noland commented on HIVE-8744:
------------------------------------

That's a pretty old database, and it will be even older by the time we release 0.15. I think we should move ahead...

> hbase_stats3.q test fails when paths stored at JDBCStatsUtils.getIdColumnName() are too large
> ----------------------------------------------------------------------------------------------
>
>                 Key: HIVE-8744
>                 URL: https://issues.apache.org/jira/browse/HIVE-8744
>             Project: Hive
>          Issue Type: Bug
>    Affects Versions: 0.15.0
>            Reporter: Sergio Peña
>            Assignee: Sergio Peña
>         Attachments: HIVE-8744.1.patch
>
>
> This test failure is related to HIVE-8065, where I am trying to support HDFS encryption. One of the enhancements needed to support it is to create a .hive-staging directory under the table directory location where the query is executed.
> Now, when the hbase_stats3.q test runs from a temporary directory with a long path, the new path (a combination of the table location, .hive-staging, and random temporary subdirectories) is too long to fit into the statistics table, so the path is truncated.
> This causes the following error:
> {noformat}
> 2014-11-04 08:57:36,680 ERROR [LocalJobRunner Map Task Executor #0]: jdbc.JDBCStatsPublisher (JDBCStatsPublisher.java:publishStat(199)) - Error during publishing statistics.
> java.sql.SQLDataException: A truncation error was encountered trying to shrink VARCHAR 'pfile:/home/hiveptest/hive-ptest-cloudera-slaves-ee9-24.vpc.&' to length 255.
> 	at org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown Source)
> 	at org.apache.derby.impl.jdbc.Util.generateCsSQLException(Unknown Source)
> 	at org.apache.derby.impl.jdbc.TransactionResourceImpl.wrapInSQLException(Unknown Source)
> 	at org.apache.derby.impl.jdbc.TransactionResourceImpl.handleException(Unknown Source)
> 	at org.apache.derby.impl.jdbc.EmbedConnection.handleException(Unknown Source)
> 	at org.apache.derby.impl.jdbc.ConnectionChild.handleException(Unknown Source)
> 	at org.apache.derby.impl.jdbc.EmbedStatement.executeStatement(Unknown Source)
> 	at org.apache.derby.impl.jdbc.EmbedPreparedStatement.executeStatement(Unknown Source)
> 	at org.apache.derby.impl.jdbc.EmbedPreparedStatement.executeLargeUpdate(Unknown Source)
> 	at org.apache.derby.impl.jdbc.EmbedPreparedStatement.executeUpdate(Unknown Source)
> 	at org.apache.hadoop.hive.ql.stats.jdbc.JDBCStatsPublisher$2.run(JDBCStatsPublisher.java:148)
> 	at org.apache.hadoop.hive.ql.stats.jdbc.JDBCStatsPublisher$2.run(JDBCStatsPublisher.java:145)
> 	at org.apache.hadoop.hive.ql.exec.Utilities.executeWithRetry(Utilities.java:2667)
> 	at org.apache.hadoop.hive.ql.stats.jdbc.JDBCStatsPublisher.publishStat(JDBCStatsPublisher.java:161)
> 	at org.apache.hadoop.hive.ql.exec.FileSinkOperator.publishStats(FileSinkOperator.java:1031)
> 	at org.apache.hadoop.hive.ql.exec.FileSinkOperator.closeOp(FileSinkOperator.java:870)
> 	at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:579)
> 	at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:591)
> 	at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:591)
> 	at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:591)
> 	at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.close(ExecMapper.java:227)
> 	at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:61)
> 	at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:450)
> 	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
> 	at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243)
> 	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> 	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> 	at java.lang.Thread.run(Thread.java:744)
> Caused by: java.sql.SQLException: A truncation error was encountered trying to shrink VARCHAR 'pfile:/home/hiveptest/hive-ptest-cloudera-slaves-ee9-24.vpc.&' to length 255.
> 	at org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source)
> 	at org.apache.derby.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(Unknown Source)
> 	... 30 more
> Caused by: ERROR 22001: A truncation error was encountered trying to shrink VARCHAR 'pfile:/home/hiveptest/hive-ptest-cloudera-slaves-ee9-24.vpc.&' to length 255.
> 	at org.apache.derby.iapi.error.StandardException.newException(Unknown Source)
> 	at org.apache.derby.iapi.types.SQLChar.hasNonBlankChars(Unknown Source)
> 	at org.apache.derby.iapi.types.SQLVarchar.normalize(Unknown Source)
> 	at org.apache.derby.iapi.types.SQLVarchar.normalize(Unknown Source)
> 	at org.apache.derby.iapi.types.DataTypeDescriptor.normalize(Unknown Source)
> 	at org.apache.derby.impl.sql.execute.NormalizeResultSet.normalizeColumn(Unknown Source)
> 	at org.apache.derby.impl.sql.execute.NormalizeResultSet.normalizeRow(Unknown Source)
> 	at org.apache.derby.impl.sql.execute.NormalizeResultSet.getNextRowCore(Unknown Source)
> 	at org.apache.derby.impl.sql.execute.DMLWriteResultSet.getNextRowCore(Unknown Source)
> 	at org.apache.derby.impl.sql.execute.InsertResultSet.open(Unknown Source)
> 	at org.apache.derby.impl.sql.GenericPreparedStatement.executeStmt(Unknown Source)
> 	at org.apache.derby.impl.sql.GenericPreparedStatement.execute(Unknown Source)
> 	... 24 more
> {noformat}
> We should increase the size of the VARCHAR datatype to fix this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
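For illustration only, here is a minimal, self-contained sketch against an embedded in-memory Derby database that reproduces the 22001 truncation error for a key longer than 255 characters and shows that a wider VARCHAR column accepts the same key. The table and column names (DEMO_STATS_255, DEMO_STATS_4000, ID, ROW_CNT) are hypothetical stand-ins, not the schema actually created by JDBCStatsPublisher, and the 4000-character width is just an example, not necessarily the value chosen in HIVE-8744.1.patch.

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Statement;

// Standalone demo; needs only derby.jar on the classpath.
public class VarcharTruncationDemo {
  public static void main(String[] args) throws Exception {
    Class.forName("org.apache.derby.jdbc.EmbeddedDriver"); // explicit load for older setups
    Connection conn =
        DriverManager.getConnection("jdbc:derby:memory:statsdemo;create=true");

    try (Statement s = conn.createStatement()) {
      // Hypothetical stand-ins for the stats table: one with the current
      // 255-character key column and one with a wider column.
      s.execute("CREATE TABLE DEMO_STATS_255 (ID VARCHAR(255), ROW_CNT BIGINT)");
      s.execute("CREATE TABLE DEMO_STATS_4000 (ID VARCHAR(4000), ROW_CNT BIGINT)");
    }

    // Build an ID longer than 255 characters, mimicking
    // table location + .hive-staging + random temporary subdirectories.
    StringBuilder id = new StringBuilder("pfile:/home/hiveptest/some-very-long-workspace-path");
    while (id.length() <= 300) {
      id.append("/.hive-staging_hive_2014-11-04_tmp");
    }

    insert(conn, "DEMO_STATS_255", id.toString());   // prints Derby's 22001 truncation error
    insert(conn, "DEMO_STATS_4000", id.toString());  // succeeds with the wider column
  }

  private static void insert(Connection conn, String table, String id) {
    String sql = "INSERT INTO " + table + " (ID, ROW_CNT) VALUES (?, ?)";
    try (PreparedStatement ps = conn.prepareStatement(sql)) {
      ps.setString(1, id);
      ps.setLong(2, 42L);
      ps.executeUpdate();
      System.out.println(table + ": stored ID of length " + id.length());
    } catch (SQLException e) {
      System.out.println(table + ": " + e.getMessage());
    }
  }
}
{code}

Run with derby.jar on the classpath: the first insert prints the same "A truncation error was encountered trying to shrink VARCHAR ... to length 255" message seen in the log above, while the second succeeds once the column is wide enough to hold the full staging path.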