From: "stack (JIRA)"
To: hbase-dev@hadoop.apache.org
Date: Mon, 21 Apr 2008 21:39:21 -0700 (PDT)
Subject: [jira] Commented: (HBASE-594) tables are being reassigned instead of deleted
Message-ID: <1564596857.1208839161767.JavaMail.jira@brutus>
In-Reply-To: <1479190770.1208560102002.JavaMail.jira@brutus>

    [ https://issues.apache.org/jira/browse/HBASE-594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12591196#action_12591196 ]

stack commented on HBASE-594:
-----------------------------

I just tried this on a 4-node cluster (not a 15-node cluster).

{code}
--> create table 'x' ('x');
Table created successfully.
hql >
--> show tables;
+--------------------------------------+--------------------------------------+
| Name                                 | Descriptor                           |
+--------------------------------------+--------------------------------------+
| x                                    | name: x, families: {x:={name: x, max |
|                                      | versions: 3, compression: NONE, in me|
|                                      | mory: false, block cache enabled: fal|
|                                      | se, max length: 2147483647, bloom fil|
|                                      | ter: none}}                          |
+--------------------------------------+--------------------------------------+
1 table(s) in set. (0.02 sec)
hql > drop table 'x';
1 table(s) dropped successfully. (10.19 sec)
hql > show tables;
No tables found.
hql > select * from .META.;
0 row(s) in set. (0.05 sec)
{code}

What do you think the difference is? (A rough client-API equivalent of the shell session above is sketched at the end of this message.)

> tables are being reassigned instead of deleted
> ----------------------------------------------
>
>                 Key: HBASE-594
>                 URL: https://issues.apache.org/jira/browse/HBASE-594
>             Project: Hadoop HBase
>          Issue Type: Bug
>          Components: master, regionserver
>    Affects Versions: 0.2.0
>         Environment: Linux CentOS 5.1 x86_64 / JDK 1.6.0_03
>            Reporter: Andrew Purtell
>
> We are running HBase TRUNK (updated yesterday) and Hadoop TRUNK (updated a few days ago) on a 15-node cluster. One node doubles as master and region server; the remainder are region servers.
> I have been trying to use the HBase shell to drop tables for quite a few minutes now.
> The master schedules the table for deletion and the region server processes the deletion:
>
> 08/04/18 16:57:29 INFO master.HMaster: deleted table: content.20b16c29
> 08/04/18 16:57:34 INFO master.ServerManager: 10.30.94.35:60020 no longer serving regionname: content.20b16c29,,1208549961323, startKey: <>, endKey: <>, encodedName: 385178593, tableDesc: {name: content.20b16c29, families: {content:={name: content, max versions: 1, compression: RECORD, in memory: false, block cache enabled: true, max length: 2147483647, bloom filter: none}, info:={name: info, max versions: 1, compression: NONE, in memory: false, block cache enabled: true, max length: 2147483647, bloom filter: none}}}
> 08/04/18 16:57:34 INFO master.ProcessRegionClose$1: region closed: content.20b16c29,,1208549961323
>
> but then a META scan happens and the table is reassigned to another server, where it lives on as a zombie:
>
> 08/04/18 16:57:48 INFO master.BaseScanner: RegionManager.metaScanner scanning meta region {regionname: .META.,,1, startKey: <>, server: 10.30.94.37:60020}
> 08/04/18 16:57:48 INFO master.BaseScanner: RegionManager.metaScanner scan of meta region {regionname: .META.,,1, startKey: <>, server: 10.30.94.37:60020} complete
> 08/04/18 16:57:48 INFO master.BaseScanner: all meta regions scanned
> 08/04/18 16:57:49 INFO master.RegionManager: assigning region content.20b16c29,,1208549961323 to server 10.30.94.39:60020
> 08/04/18 16:57:52 INFO master.BaseScanner: RegionManager.rootScanner scanning meta region {regionname: -ROOT-,,0, startKey: <>, server: 10.30.94.31:60020}
> 08/04/18 16:57:52 INFO master.BaseScanner: RegionManager.rootScanner scan of meta region {regionname: -ROOT-,,0, startKey: <>, server: 10.30.94.31:60020} complete
> 08/04/18 16:57:52 INFO master.ServerManager: 10.30.94.39:60020 serving content.20b16c29,,1208549961323
> 08/04/18 16:57:52 INFO master.ProcessRegionOpen$1: regionname: content.20b16c29,,1208549961323, startKey: <>, endKey: <>, encodedName: 385178593, tableDesc: {name: content.20b16c29, families: {content:={name: content, max versions: 1, compression: RECORD, in memory: false, block cache enabled: true, max length: 2147483647, bloom filter: none}, info:={name: info, max versions: 1, compression: NONE, in memory: false, block cache enabled: true, max length: 2147483647, bloom filter: none}}} open on 10.30.94.39:60020
> 08/04/18 16:57:52 INFO master.ProcessRegionOpen$1: updating row content.20b16c29,,1208549961323 in table .META.,,1 with startcode 1208552149355 and server 10.30.94.39:60020
>
> Approximately 50 META region scans then happen, and then the following occurs and recurs over many subsequent META scans:
>
> 08/04/18 17:26:48 INFO master.BaseScanner: RegionManager.metaScanner scanning meta region {regionname: .META.,,1, startKey: <>, server: 10.30.94.37:60020}
> 08/04/18 17:26:48 WARN master.HMaster: info:regioninfo is empty for row: content.20b16c29,,1208549961323; has keys: [info:server, info:serverstartcode]
> 08/04/18 17:26:48 WARN master.BaseScanner: Found 1 rows with empty HRegionInfo while scanning meta region .META.,,1
> 08/04/18 17:26:48 WARN master.HMaster: Removed region: content.20b16c29,,1208549961323 from meta region: .META.,,1 because HRegionInfo was empty
> 08/04/18 17:26:48 INFO master.BaseScanner: RegionManager.metaScanner scan of meta region {regionname: .META.,,1, startKey: <>, server: 10.30.94.37:60020} complete
> 08/04/18 17:26:48 INFO master.BaseScanner: all meta regions scanned
>
> yet finally the table disappears, for a reason that does not appear in the logs... at least for this particular example. There is another table that is simply refusing to die...
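For comparison with the shell session above, here is a rough client-side equivalent of the same create/drop/verify sequence. This is only a sketch against what I believe the 0.2-era Java client looks like (HBaseAdmin, HTableDescriptor, HColumnDescriptor); constructors and method signatures have moved around between revisions, and the explicit disableTable call may be redundant if deleteTable already disables the table on this branch.

{code}
// Sketch only: assumes the 0.2-era client classes; package locations and
// signatures may differ on the exact TRUNK revision under test.
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class CreateDropCheck {
  public static void main(String[] args) throws Exception {
    HBaseConfiguration conf = new HBaseConfiguration();
    HBaseAdmin admin = new HBaseAdmin(conf);

    // Equivalent of: create table 'x' ('x');
    HTableDescriptor desc = new HTableDescriptor("x");
    // Older APIs expect the trailing ':' on family names; drop it if rejected.
    desc.addFamily(new HColumnDescriptor("x:"));
    admin.createTable(desc);

    // Equivalent of: drop table 'x';
    // Disabling first is the safe ordering even if deleteTable does it itself.
    admin.disableTable("x");
    admin.deleteTable("x");

    // Equivalent of: show tables; -- 'x' should no longer be listed.
    for (HTableDescriptor t : admin.listTables()) {
      System.out.println(t);
    }
  }
}
{code}

A full "select * from .META.;" afterwards, as in the session above, is still the quickest check for a leftover row (one carrying info:server and info:serverstartcode but no info:regioninfo, as in the quoted logs above).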
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.