From: "Karan Mehta (JIRA)"
To: dev@phoenix.apache.org
Date: Wed, 25 Jul 2018 05:01:00 +0000 (UTC)
Subject: [jira] [Resolved] (PHOENIX-4797) file not found or file exist exception when create global index use -snapshot option

     [ https://issues.apache.org/jira/browse/PHOENIX-4797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Karan Mehta resolved PHOENIX-4797.
----------------------------------
    Resolution: Fixed

> file not found or file exist exception when create global index use -snapshot option
> -------------------------------------------------------------------------------------
>
>                 Key: PHOENIX-4797
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-4797
>             Project: Phoenix
>          Issue Type: Bug
>    Affects Versions: 4.13.2-cdh5.11.2
>            Reporter: sailingYang
>            Priority: Major
>
> When the IndexTool is run with the -snapshot option and the MapReduce job spawns multiple mappers, the job fails with an HDFS "file not found" or "file already exists" exception, because every mapper restores the snapshot into the same working directory. As a result the MapReduce task is bound to fail.
> {code:java}
> Error: java.io.IOException: java.util.concurrent.ExecutionException: java.io.IOException: The specified region already exists on disk: hdfs://m12v1.mlamp.cn:8020/tmp/index-snapshot-dir/restore-dir/e738c85b-2394-43fc-b9de-b8280bc329ca/data/default/SCOPA.CETUS_EVENT_ZY_SCOPA_31_0516_TRAIN_EVENT/2ab2c1d73d2e31bb5a5e2b394da564f8
> at org.apache.hadoop.hbase.util.ModifyRegionUtils.createRegions(ModifyRegionUtils.java:186)
> at org.apache.hadoop.hbase.snapshot.RestoreSnapshotHelper.cloneHdfsRegions(RestoreSnapshotHelper.java:578)
> at org.apache.hadoop.hbase.snapshot.RestoreSnapshotHelper.restoreHdfsRegions(RestoreSnapshotHelper.java:249)
> at org.apache.hadoop.hbase.snapshot.RestoreSnapshotHelper.restoreHdfsRegions(RestoreSnapshotHelper.java:171)
> at org.apache.hadoop.hbase.snapshot.RestoreSnapshotHelper.copySnapshotForScanner(RestoreSnapshotHelper.java:814)
> at org.apache.phoenix.iterate.TableSnapshotResultIterator.init(TableSnapshotResultIterator.java:77)
> at org.apache.phoenix.iterate.TableSnapshotResultIterator.<init>(TableSnapshotResultIterator.java:73)
> at org.apache.phoenix.mapreduce.PhoenixRecordReader.initialize(PhoenixRecordReader.java:126)
> at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.initialize(MapTask.java:548)
> at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:786)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> Caused by: java.util.concurrent.ExecutionException: java.io.IOException: The specified region already exists on disk: hdfs://m12v1.mlamp.cn:8020/tmp/index-snapshot-dir/restore-dir/e738c85b-2394-43fc-b9de-b8280bc329ca/data/default/SCOPA.CETUS_EVENT_ZY_SCOPA_31_0516_TRAIN_EVENT/2ab2c1d73d2e31bb5a5e2b394da564f8
> at java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at java.util.concurrent.FutureTask.get(FutureTask.java:188)
> at org.apache.hadoop.hbase.util.ModifyRegionUtils.createRegions(ModifyRegionUtils.java:180)
> ... 15 more
> Caused by: java.io.IOException: The specified region already exists on disk: hdfs://m12v1.mlamp.cn:8020/tmp/index-snapshot-dir/restore-dir/e738c85b-2394-43fc-b9de-b8280bc329ca/data/default/SCOPA.CETUS_EVENT_ZY_SCOPA_31_0516_TRAIN_EVENT/2ab2c1d73d2e31bb5a5e2b394da564f8
> at org.apache.hadoop.hbase.regionserver.HRegionFileSystem.createRegionOnFileSystem(HRegionFileSystem.java:877)
> at org.apache.hadoop.hbase.regionserver.HRegion.createHRegion(HRegion.java:6252)
> at org.apache.hadoop.hbase.util.ModifyRegionUtils.createRegion(ModifyRegionUtils.java:205)
> at org.apache.hadoop.hbase.util.ModifyRegionUtils$1.call(ModifyRegionUtils.java:173)
> at org.apache.hadoop.hbase.util.ModifyRegionUtils$1.call(ModifyRegionUtils.java:170)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> 2018-06-28 15:01:55 70909 [main] INFO org.apache.hadoop.mapreduce.Job - Task Id : attempt_1530004808977_0011_m_000001_0, Status : FAILED
> Error: java.io.IOException: java.util.concurrent.ExecutionException: java.io.IOException: The specified region already exists on disk: hdfs://m12v1.mlamp.cn:8020/tmp/index-snapshot-dir/restore-dir/e738c85b-2394-43fc-b9de-b8280bc329ca/data/default/SCOPA.CETUS_EVENT_ZY_SCOPA_31_0516_TRAIN_EVENT/2ab2c1d73d2e31bb5a5e2b394da564f8
> at org.apache.hadoop.hbase.util.ModifyRegionUtils.createRegions(ModifyRegionUtils.java:186)
> at org.apache.hadoop.hbase.snapshot.RestoreSnapshotHelper.cloneHdfsRegions(RestoreSnapshotHelper.java:578)
> at org.apache.hadoop.hbase.snapshot.RestoreSnapshotHelper.restoreHdfsRegions(RestoreSnapshotHelper.java:249)
> at org.apache.hadoop.hbase.snapshot.RestoreSnapshotHelper.restoreHdfsRegions(RestoreSnapshotHelper.java:171)
> at org.apache.hadoop.hbase.snapshot.RestoreSnapshotHelper.copySnapshotForScanner(RestoreSnapshotHelper.java:814)
> at org.apache.phoenix.iterate.TableSnapshotResultIterator.init(TableSnapshotResultIterator.java:77)
> at org.apache.phoenix.iterate.TableSnapshotResultIterator.<init>(TableSnapshotResultIterator.java:73)
> at org.apache.phoenix.mapreduce.PhoenixRecordReader.initialize(PhoenixRecordReader.java:126)
> at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.initialize(MapTask.java:548)
> at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:786)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> Caused by: java.util.concurrent.ExecutionException: java.io.IOException: The specified region already exists on disk: hdfs://m12v1.mlamp.cn:8020/tmp/index-snapshot-dir/restore-dir/e738c85b-2394-43fc-b9de-b8280bc329ca/data/default/SCOPA.CETUS_EVENT_ZY_SCOPA_31_0516_TRAIN_EVENT/2ab2c1d73d2e31bb5a5e2b394da564f8
> at java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at java.util.concurrent.FutureTask.get(FutureTask.java:188)
> at org.apache.hadoop.hbase.util.ModifyRegionUtils.createRegions(ModifyRegionUtils.java:180)
> ... 15 more
> Caused by: java.io.IOException: The specified region already exists on disk: hdfs://m12v1.mlamp.cn:8020/tmp/index-snapshot-dir/restore-dir/e738c85b-2394-43fc-b9de-b8280bc329ca/data/default/SCOPA.CETUS_EVENT_ZY_SCOPA_31_0516_TRAIN_EVENT/2ab2c1d73d2e31bb5a5e2b394da564f8
> at org.apache.hadoop.hbase.regionserver.HRegionFileSystem.createRegionOnFileSystem(HRegionFileSystem.java:877)
> at org.apache.hadoop.hbase.regionserver.HRegion.createHRegion(HRegion.java:6252)
> at org.apache.hadoop.hbase.util.ModifyRegionUtils.createRegion(ModifyRegionUtils.java:205)
> at org.apache.hadoop.hbase.util.ModifyRegionUtils$1.call(ModifyRegionUtils.java:173)
> at org.apache.hadoop.hbase.util.ModifyRegionUtils$1.call(ModifyRegionUtils.java:170)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> 2018-06-28 15:01:55 70946 [main] INFO org.apache.hadoop.mapreduce.Job - Task Id : attempt_1530004808977_0011_m_000003_0, Status : FAILED
> Error: java.io.IOException: java.util.concurrent.ExecutionException: java.io.IOException: The specified region already exists on disk: hdfs://m12v1.mlamp.cn:8020/tmp/index-snapshot-dir/restore-dir/e738c85b-2394-43fc-b9de-b8280bc329ca/data/default/SCOPA.CETUS_EVENT_ZY_SCOPA_31_0516_TRAIN_EVENT/6032d65488e6e7eba1255c20c1047cdc
> at org.apache.hadoop.hbase.util.ModifyRegionUtils.createRegions(ModifyRegionUtils.java:186)
> at org.apache.hadoop.hbase.snapshot.RestoreSnapshotHelper.cloneHdfsRegions(RestoreSnapshotHelper.java:578)
> at org.apache.hadoop.hbase.snapshot.RestoreSnapshotHelper.restoreHdfsRegions(RestoreSnapshotHelper.java:249)
> at org.apache.hadoop.hbase.snapshot.RestoreSnapshotHelper.restoreHdfsRegions(RestoreSnapshotHelper.java:171)
> at org.apache.hadoop.hbase.snapshot.RestoreSnapshotHelper.copySnapshotForScanner(RestoreSnapshotHelper.java:814)
> at org.apache.phoenix.iterate.TableSnapshotResultIterator.init(TableSnapshotResultIterator.java:77)
> at org.apache.phoenix.iterate.TableSnapshotResultIterator.<init>(TableSnapshotResultIterator.java:73)
> at org.apache.phoenix.mapreduce.PhoenixRecordReader.initialize(PhoenixRecordReader.java:126)
> at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.initialize(MapTask.java:548)
> at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:786)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> Caused by: java.util.concurrent.ExecutionException: java.io.IOException: The specified region already exists on disk: hdfs://m12v1.mlamp.cn:8020/tmp/index-snapshot-dir/restore-dir/e738c85b-2394-43fc-b9de-b8280bc329ca/data/default/SCOPA.CETUS_EVENT_ZY_SCOPA_31_0516_TRAIN_EVENT/6032d65488e6e7eba1255c20c1047cdc
> at java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at java.util.concurrent.FutureTask.get(FutureTask.java:188)
> at org.apache.hadoop.hbase.util.ModifyRegionUtils.createRegions(ModifyRegionUtils.java:180)
> ... 15 more
> Caused by: java.io.IOException: The specified region already exists on disk: hdfs://m12v1.mlamp.cn:8020/tmp/index-snapshot-dir/restore-dir/e738c85b-2394-43fc-b9de-b8280bc329ca/data/default/SCOPA.CETUS_EVENT_ZY_SCOPA_31_0516_TRAIN_EVENT/6032d65488e6e7eba1255c20c1047cdc
> at org.apache.hadoop.hbase.regionserver.HRegionFileSystem.createRegionOnFileSystem(HRegionFileSystem.java:877)
> at org.apache.hadoop.hbase.regionserver.HRegion.createHRegion(HRegion.java:6252)
> at org.apache.hadoop.hbase.util.ModifyRegionUtils.createRegion(ModifyRegionUtils.java:205)
> at org.apache.hadoop.hbase.util.ModifyRegionUtils$1.call(ModifyRegionUtils.java:173)
> at org.apache.hadoop.hbase.util.ModifyRegionUtils$1.call(ModifyRegionUtils.java:170)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> 2018-06-28 15:01:56 71957 [main] INFO org.apache.hadoop.mapreduce.Job - Task Id : attempt_1530004808977_0011_m_000000_0, Status : FAILED
> Error: java.lang.NullPointerException
> at org.apache.phoenix.iterate.TableSnapshotResultIterator.init(TableSnapshotResultIterator.java:82)
> at org.apache.phoenix.iterate.TableSnapshotResultIterator.<init>(TableSnapshotResultIterator.java:73)
> at org.apache.phoenix.mapreduce.PhoenixRecordReader.initialize(PhoenixRecordReader.java:126)
> at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.initialize(MapTask.java:548)
> at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:786)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> {code}

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
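The failure mode reported above is that every mapper passes the same restore working directory to the snapshot restore, so the second mapper finds region directories already created by the first. The remedy the report implies is to give each map task its own restore directory. The following is a minimal, hypothetical sketch of that idea in plain Java (the class `UniqueRestoreDir` and method `restoreDirFor` are illustrative names, not Phoenix's actual fix, and no HBase APIs are used):

```java
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.UUID;

// Hypothetical sketch: derive a per-task-attempt restore directory instead of
// letting every mapper share one, so concurrent snapshot restores cannot
// collide on the same on-disk region paths.
public class UniqueRestoreDir {

    // attemptId is assumed to come from the MapReduce TaskAttemptID;
    // a random UUID is appended so even a re-run of the same attempt
    // gets a fresh directory.
    static Path restoreDirFor(Path baseDir, String attemptId) {
        // e.g. /tmp/index-snapshot-dir/restore-dir/<attemptId>-<uuid>
        return baseDir.resolve(attemptId + "-" + UUID.randomUUID());
    }

    public static void main(String[] args) {
        Path base = Paths.get("/tmp/index-snapshot-dir/restore-dir");
        Path m0 = restoreDirFor(base, "attempt_1530004808977_0011_m_000000_0");
        Path m1 = restoreDirFor(base, "attempt_1530004808977_0011_m_000001_0");
        // Distinct directories: the restore of one mapper can no longer see
        // a region directory created by another mapper's restore.
        System.out.println(!m0.equals(m1)); // prints "true"
    }
}
```

With distinct directories, the "specified region already exists on disk" check in the HBase restore path no longer trips over another mapper's output; cleanup of the per-attempt directories after the job would still be needed.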