From: Fernando Agudo <fagudo@pragsis.com>
To: dev@hive.apache.org
Date: Tue, 10 Jun 2014 10:46:24 +0200
Subject: Possible bug loading data in Hive.

Hello,

I'm working with Hive 0.9.0 on CDH4.1. I have a process that loads data into Hive every minute, creating the partition if necessary.

I have been monitoring this process for three days and noticed that there is one method (*listStorageDescriptorsWithCD*) whose execution time keeps increasing. On the first execution this method took about 15 milliseconds; by the end (three days later) it was taking more than 3 seconds. After that, Hive throws an exception and then starts working normally again.

I have checked this method but I haven't found anything suspicious. Could it be a bug?
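In case it helps, the loader basically issues a statement like the one below every minute (the table name, input path and partition values here are illustrative, not the real ones), which is the kind of load that produces the MoveTask seen in the trace below:

    -- Illustrative per-minute load (hypothetical table/path/partition values).
    -- Hive creates the partition on the fly if it does not exist yet.
    LOAD DATA INPATH '/staging/events/current_batch'
    INTO TABLE events
    PARTITION (dt='2014-06-08', hm='2015');

These are the relevant log lines and the exception: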
*2014-06-05 09:58:20,921* DEBUG metastore.ObjectStore (ObjectStore.java:listStorageDescriptorsWithCD(2036)) - Executing listStorageDescriptorsWithCD
*2014-06-05 09:58:20,928* DEBUG metastore.ObjectStore (ObjectStore.java:listStorageDescriptorsWithCD(2045)) - Done executing query for listStorageDescriptorsWithCD

*2014-06-08 20:15:33,867* DEBUG metastore.ObjectStore (ObjectStore.java:listStorageDescriptorsWithCD(2036)) - Executing listStorageDescriptorsWithCD
*2014-06-08 20:15:36,134* DEBUG metastore.ObjectStore (ObjectStore.java:listStorageDescriptorsWithCD(2045)) - Done executing query for listStorageDescriptorsWithCD

2014-06-08 20:16:34,600 DEBUG metastore.ObjectStore (ObjectStore.java:removeUnusedColumnDescriptor(1989)) - execute removeUnusedColumnDescriptor
*2014-06-08 20:16:34,600 DEBUG metastore.ObjectStore (ObjectStore.java:listStorageDescriptorsWithCD(2036)) - Executing listStorageDescriptorsWithCD*
2014-06-08 20:16:34,805 ERROR metadata.Hive (Hive.java:getPartition(1453)) - org.apache.hadoop.hive.ql.metadata.HiveException: Unable to alter partition.
    at org.apache.hadoop.hive.ql.metadata.Hive.alterPartition(Hive.java:429)
    at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:1446)
    at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1158)
    at org.apache.hadoop.hive.ql.exec.MoveTask.execute(MoveTask.java:304)
    at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:153)
    at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:57)
    at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1331)
    at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1117)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:950)
    at org.apache.hadoop.hive.service.HiveServer$HiveServerHandler.execute(HiveServer.java:191)
    at org.apache.hadoop.hive.service.ThriftHive$Processor$execute.getResult(ThriftHive.java:630)
    at org.apache.hadoop.hive.service.ThriftHive$Processor$execute.getResult(ThriftHive.java:618)
    at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:32)
    at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:34)
    at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:176)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:662)
Caused by: MetaException(message:The transaction for alter partition did not commit successfully.)
    at org.apache.hadoop.hive.metastore.ObjectStore.alterPartition(ObjectStore.java:1927)
    at sun.reflect.GeneratedMethodAccessor16.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hive.metastore.RetryingRawStore.invoke(RetryingRawStore.java:111)
    at $Proxy0.alterPartition(Unknown Source)
    at org.apache.hadoop.hive.metastore.HiveAlterHandler.alterPartition(HiveAlterHandler.java:254)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.rename_partition(HiveMetaStore.java:1816)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.rename_partition(HiveMetaStore.java:1788)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.alter_partition(HiveMetaStore.java:1771)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.alter_partition(HiveMetaStoreClient.java:834)
    at org.apache.hadoop.hive.ql.metadata.Hive.alterPartition(Hive.java:425)
    ... 17 more
2014-06-08 20:16:34,827 ERROR exec.Task (SessionState.java:printError(403)) - Failed with exception org.apache.hadoop.hive.ql.metadata.HiveException: Unable to alter partition.
org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.hadoop.hive.ql.metadata.HiveException: Unable to alter partition.
    at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:1454)
    at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1158)
    at org.apache.hadoop.hive.ql.exec.MoveTask.execute(MoveTask.java:304)
    at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:153)
    at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:57)
    at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1331)
    at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1117)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:950)
    at org.apache.hadoop.hive.service.HiveServer$HiveServerHandler.execute(HiveServer.java:191)
    at org.apache.hadoop.hive.service.ThriftHive$Processor$execute.getResult(ThriftHive.java:630)
    at org.apache.hadoop.hive.service.ThriftHive$Processor$execute.getResult(ThriftHive.java:618)
    at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:32)
    at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:34)
    at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:176)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:662)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Unable to alter partition.
    at org.apache.hadoop.hive.ql.metadata.Hive.alterPartition(Hive.java:429)
    at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:1446)
    ... 16 more
Caused by: MetaException(message:The transaction for alter partition did not commit successfully.)
    at org.apache.hadoop.hive.metastore.ObjectStore.alterPartition(ObjectStore.java:1927)
    at sun.reflect.GeneratedMethodAccessor16.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hive.metastore.RetryingRawStore.invoke(RetryingRawStore.java:111)
    at $Proxy0.alterPartition(Unknown Source)
    at org.apache.hadoop.hive.metastore.HiveAlterHandler.alterPartition(HiveAlterHandler.java:254)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.rename_partition(HiveMetaStore.java:1816)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.rename_partition(HiveMetaStore.java:1788)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.alter_partition(HiveMetaStore.java:1771)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.alter_partition(HiveMetaStoreClient.java:834)
    at org.apache.hadoop.hive.ql.metadata.Hive.alterPartition(Hive.java:425)
    ... 17 more
2014-06-08 20:16:34,852 ERROR ql.Driver (SessionState.java:printError(403)) - FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask

--
*Fernando Agudo Tarancón*
/Big Data Software Engineer/

Telf.: +34 917 680 490
Fax: +34 913 833 301
C/ Manuel Tovar, 49-53 - 28034 Madrid - Spain
_http://www.bidoop.es_