Subject: Re: hadoop2 and Hbase0.94
From: lei liu <liulei412@gmail.com>
To: user@hadoop.apache.org
Date: Wed, 28 Aug 2013 16:49:46 +0800

The exception occurs in the org.apache.hadoop.hbase.coprocessor.TestMasterObserver unit test.

2013/8/28 lei liu <liulei412@gmail.com>
> The exception is thrown when I run the HBase unit tests.
>
> 2013/8/28 Harsh J <harsh@cloudera.com>
>> Moving to user@hbase.apache.org.
>>
>> Please share your hbase-site.xml and core-site.xml. Was this HBase
>> cluster previously running in standalone local-filesystem mode?
>>
>> On Wed, Aug 28, 2013 at 2:06 PM, lei liu <liulei412@gmail.com> wrote:
>> > I use hadoop2 and HBase 0.94, but I get the exception below:
>> >
>> > 2013-08-28 11:36:12,922 ERROR
>> > [MASTER_TABLE_OPERATIONS-dw74.kgb.sqa.cm4,13646,1377660964832-0]
>> > executor.EventHandler(172): Caught throwable while processing
>> > event C_M_DELETE_TABLE
>> > java.lang.IllegalArgumentException: Wrong FS:
>> > file:/tmp/hbase-shenxiu.cx/hbase/observed_table/47b334989065a8ac84873e6d07c1de62,
>> > expected: hdfs://localhost.localdomain:35974
>> >         at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:590)
>> >         at org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:172)
>> >         at org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:402)
>> >         at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1427)
>> >         at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1467)
>> >         at org.apache.hadoop.hbase.util.FSUtils.listStatus(FSUtils.java:1052)
>> >         at org.apache.hadoop.hbase.backup.HFileArchiver.archiveRegion(HFileArchiver.java:123)
>> >         at org.apache.hadoop.hbase.backup.HFileArchiver.archiveRegion(HFileArchiver.java:72)
>> >         at org.apache.hadoop.hbase.master.MasterFileSystem.deleteRegion(MasterFileSystem.java:444)
>> >         at org.apache.hadoop.hbase.master.handler.DeleteTableHandler.handleTableOperation(DeleteTableHandler.java:73)
>> >         at org.apache.hadoop.hbase.master.handler.TableEventHandler.process(TableEventHandler.java:96)
>> >         at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:169)
>> >         at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>> >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>> >         at java.lang.Thread.run(Thread.java:662)
>> > 2013-08-28 11:37:05,653 INFO
>> > [Master:0;dw74.kgb.sqa.cm4,13646,1377660964832.archivedHFileCleaner]
>> > util.FSUtils(1055): hdfs://localhost.localdomain:35974/use
>>
>> --
>> Harsh J
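
A "Wrong FS: file:/... expected: hdfs://..." error generally means HBase is resolving its root directory against the local filesystem while the FileSystem actually in use is HDFS, i.e. hbase.rootdir and fs.defaultFS disagree; that is why Harsh asks for hbase-site.xml and core-site.xml and whether the cluster previously ran in standalone (local-filesystem) mode. Below is a minimal sketch of consistent settings, assuming the mini-cluster HDFS URI that appears in the log above and a hypothetical /hbase root path; it is an illustration, not the poster's actual configuration.

  <!-- hbase-site.xml (sketch) -->
  <configuration>
    <property>
      <name>hbase.rootdir</name>
      <!-- must use the same hdfs:// authority as the NameNode, not a file:/ path -->
      <value>hdfs://localhost.localdomain:35974/hbase</value>
    </property>
  </configuration>

  <!-- core-site.xml (sketch) -->
  <configuration>
    <property>
      <name>fs.defaultFS</name>
      <value>hdfs://localhost.localdomain:35974</value>
    </property>
  </configuration>

The file:/tmp/hbase-shenxiu.cx/hbase/... path in the stack trace looks like the usual standalone default rootdir (file:/tmp/hbase-${user.name}/hbase), which would fit an earlier local-filesystem run whose setting was carried over.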