Subject: Re: Import HBase snapshots possible?
From: Jignesh Patel <jigneshmpatel@gmail.com>
To: user@hbase.apache.org
Date: Sat, 3 Aug 2013 05:55:14 -0400

We have two requirements:
1. Change the name of table1.
2. Modify the primary key (row key) of table2.

Table1 and table2 are not interlinked (they belong to different databases).
Can we use the above-mentioned export and import functionality in both cases?
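For the first requirement, a snapshot plus clone_snapshot can serve as a
rename without copying any data. The second requirement cannot be done with
snapshots alone: changing the row key means rewriting every row under its
new key (for example with a MapReduce or client-side copy job). A minimal
hbase shell sketch for the rename case, assuming a hypothetical source table
'table1' and a hypothetical new name 'table1_new':

    hbase> disable 'table1'
    hbase> snapshot 'table1', 'table1_snap'
    hbase> clone_snapshot 'table1_snap', 'table1_new'
    hbase> delete_snapshot 'table1_snap'
    hbase> drop 'table1'    # only after verifying the clone

The clone only references the snapshot's files, so no data is duplicated by
this sequence.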
On Fri, Aug 2, 2013 at 3:34 AM, Siddharth Karandikar
<siddharth.karandikar@gmail.com> wrote:
> Hi Matteo,
>
> Thanks a lot for all your help and the detailed explanation. I could
> finally get it running.
>
> I am now running it like:
>
>   ./bin/hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot
>       -Dfs.default.name=hdfs://10.209.17.88:9000/
>       -Dhbase.rootdir=hdfs://10.209.17.88:9000/hbase-ss/
>       -snapshot s1
>       -copy-to /hbase/
>
> My HBase is running with hbase.rootdir=hdfs://10.209.17.88:9000/hbase.
>
> Note how I am specifying fs.default.name now. Without it, this used to
> fail on the source location with:
>
>   IllegalArgumentException: Wrong FS:
>   hdfs://10.209.17.88:9000/hbase-ss/s2/.hbase-snapshot/s2/.snapshotinfo
>   expected: file:///
>
> I am still working out how the Configuration, the FS type and checkPath
> interact underneath, but for now things are working for me.
>
> Thanks again.
>
> Siddharth
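For context on the "Wrong FS ... expected: file:///" failure mentioned
above: that error comes from FileSystem.checkPath when an hdfs:// path is
handed to the local filesystem, which is why pointing fs.default.name at the
HDFS namenode (or using fully qualified URIs everywhere) makes it go away.
A sketch of the resulting invocation pattern, with hypothetical namenode
hosts and names (the -D, -snapshot and -copy-to flags are the ones used in
this thread):

    # read the snapshot from the source rootdir and copy it into the
    # destination cluster's rootdir
    ./bin/hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot \
        -Dfs.default.name=hdfs://source-nn:9000/ \
        -Dhbase.rootdir=hdfs://source-nn:9000/hbase \
        -snapshot my_snapshot \
        -copy-to hdfs://dest-nn:9000/hbase

Fully qualifying the -copy-to URI (scheme, host and port) avoids the same
ambiguity on the destination side.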
> On Thu, Aug 1, 2013 at 8:14 PM, Matteo Bertozzi wrote:
> > You can't just copy the .snapshot folder... so now you have the
> > RegionServers failing, since the files for the cloned table are not
> > available.
> >
> > When you specify hbase.rootdir, you have to specify the rootdir from
> > /etc/hbase-site.xml, which does not contain the name of the
> > snapshot/table that you want to export (e.g.
> > hdfs://10.209.17.88:9000/hbase, not hdfs://10.209.17.88:9000/hbase/s2).
> >
> > Matteo
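In other words, the rootdir handed to the tool should match the
cluster-level value from hbase-site.xml, and the snapshot itself is selected
only by the -snapshot flag. A small contrast, sketch only, reusing the paths
from this thread with a hypothetical destination rootdir:

    # wrong: rootdir points at the snapshot/table subdirectory
    ./bin/hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot \
        -Dhbase.rootdir=hdfs://10.209.17.88:9000/hbase/s2 \
        -snapshot s2 -copy-to hdfs://dest-nn:9000/hbase

    # right: rootdir is the cluster root; the snapshot is named separately
    ./bin/hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot \
        -Dhbase.rootdir=hdfs://10.209.17.88:9000/hbase \
        -snapshot s2 -copy-to hdfs://dest-nn:9000/hbase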
> > On Thu, Aug 1, 2013 at 3:25 PM, Siddharth Karandikar
> > <siddharth.karandikar@gmail.com> wrote:
> >> It's failing with '//' as well as '///'. The error suggests that it
> >> expects the local fs.
> >>
> >> With 3 slashes (///):
> >>
> >> ssk01:~/siddharth/tools/hbase-0.95.1-hadoop1 # ./bin/hbase
> >> org.apache.hadoop.hbase.snapshot.ExportSnapshot
> >> -Dhbase.rootdir=hdfs:///10.209.17.88:9000/hbase/s2 -snapshot s2
> >> -copy-to /root/siddharth/tools/hbase-0.95.1-hadoop1/data/
> >> Exception in thread "main" java.io.IOException: Incomplete HDFS URI,
> >> no host: hdfs:///10.209.17.88:9000/hbase/s2
> >>     at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:85)
> >>     at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1386)
> >>     at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
> >>     at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1404)
> >>     at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:254)
> >>     at org.apache.hadoop.fs.Path.getFileSystem(Path.java:187)
> >>     at org.apache.hadoop.hbase.util.FSUtils.getRootDir(FSUtils.java:860)
> >>     at org.apache.hadoop.hbase.snapshot.ExportSnapshot.run(ExportSnapshot.java:594)
> >>     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
> >>     at org.apache.hadoop.hbase.snapshot.ExportSnapshot.innerMain(ExportSnapshot.java:690)
> >>     at org.apache.hadoop.hbase.snapshot.ExportSnapshot.main(ExportSnapshot.java:694)
> >>
> >> With 2 slashes (//):
> >>
> >> ssk01:~/siddharth/tools/hbase-0.95.1-hadoop1 # ./bin/hbase
> >> org.apache.hadoop.hbase.snapshot.ExportSnapshot
> >> -Dhbase.rootdir=hdfs://10.209.17.88:9000/hbase/s2 -snapshot s2
> >> -copy-to /root/siddharth/tools/hbase-0.95.1-hadoop1/data/
> >> Exception in thread "main" java.lang.IllegalArgumentException: Wrong
> >> FS: hdfs://10.209.17.88:9000/hbase/s2/.hbase-snapshot/s2/.snapshotinfo,
> >> expected: file:///
> >>     at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:381)
> >>     at org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:55)
> >>     at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:393)
> >>     at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:251)
> >>     at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.<init>(ChecksumFileSystem.java:125)
> >>     at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:283)
> >>     at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:427)
> >>     at org.apache.hadoop.hbase.snapshot.SnapshotDescriptionUtils.readSnapshotInfo(SnapshotDescriptionUtils.java:296)
> >>     at org.apache.hadoop.hbase.snapshot.ExportSnapshot.getSnapshotFiles(ExportSnapshot.java:371)
> >>     at org.apache.hadoop.hbase.snapshot.ExportSnapshot.run(ExportSnapshot.java:618)
> >>     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
> >>     at org.apache.hadoop.hbase.snapshot.ExportSnapshot.innerMain(ExportSnapshot.java:690)
> >>     at org.apache.hadoop.hbase.snapshot.ExportSnapshot.main(ExportSnapshot.java:694)
> >>
> >> Btw, I tried one more thing. From my HDFS location, I just did a copy
> >> like:
> >>
> >> ssk01:~/siddharth/tools/hadoop-1.1.2 # ./bin/hadoop fs -copyToLocal
> >> hdfs://10.209.17.88:9000/hbase/s1/.hbase-snapshot/s1
> >> /root/siddharth/tools/hbase-0.95.1-hadoop1/data/.hbase-snapshot/
> >>
> >> After doing this, I am able to see s1 in 'list_snapshots'. But it is
> >> failing at 'clone_snapshot':
> >>
> >> hbase(main):014:0> clone_snapshot 's1', 'ts1'
> >>
> >> ERROR: java.io.IOException: Table 'ts1' not yet enabled, after 199617ms.
> >>
> >> Here is some help for this command:
> >> Create a new table by cloning the snapshot content.
> >> There're no copies of data involved.
> >> And writing on the newly created table will not influence the snapshot
> >> data.
> >>
> >> Examples:
> >>   hbase> clone_snapshot 'snapshotName', 'tableName'
> >>
> >> On Thu, Aug 1, 2013 at 7:44 PM, Matteo Bertozzi
> >> <theo.bertozzi@gmail.com> wrote:
> >> > You have to use 3 slashes, otherwise it is interpreted as a local
> >> > file-system path:
> >> > -Dhbase.rootdir=hdfs:///10.209.17.88:9000/hbase
> >> >
> >> > Matteo
> >> >
> >> > On Thu, Aug 1, 2013 at 3:09 PM, Siddharth Karandikar
> >> > <siddharth.karandikar@gmail.com> wrote:
> >> >> Tried what you suggested. Here is what I get:
> >> >>
> >> >> ssk01:~/siddharth/tools/hbase-0.95.1-hadoop1 # ./bin/hbase
> >> >> org.apache.hadoop.hbase.snapshot.ExportSnapshot
> >> >> -Dhbase.rootdir=hdfs://10.209.17.88:9000/hbase -snapshot s1 -copy-to
> >> >> /root/siddharth/tools/hbase-0.95.1-hadoop1/data/
> >> >> Exception in thread "main" java.lang.IllegalArgumentException: Wrong
> >> >> FS: hdfs://10.209.17.88:9000/hbase/.hbase-snapshot/s1/.snapshotinfo,
> >> >> expected: file:///
> >> >>     at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:381)
> >> >>     at org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:55)
> >> >>     at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:393)
> >> >>     at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:251)
> >> >>     at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.<init>(ChecksumFileSystem.java:125)
> >> >>     at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:283)
> >> >>     at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:427)
> >> >>     at org.apache.hadoop.hbase.snapshot.SnapshotDescriptionUtils.readSnapshotInfo(SnapshotDescriptionUtils.java:296)
> >> >>     at org.apache.hadoop.hbase.snapshot.ExportSnapshot.getSnapshotFiles(ExportSnapshot.java:371)
> >> >>     at org.apache.hadoop.hbase.snapshot.ExportSnapshot.run(ExportSnapshot.java:618)
> >> >>     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
> >> >>     at org.apache.hadoop.hbase.snapshot.ExportSnapshot.innerMain(ExportSnapshot.java:690)
> >> >>     at org.apache.hadoop.hbase.snapshot.ExportSnapshot.main(ExportSnapshot.java:694)
> >> >>
> >> >> Am I missing something?
> >> >>
> >> >> Thanks,
> >> >> Siddharth
> >> >>
> >> >> On Thu, Aug 1, 2013 at 7:31 PM, Matteo Bertozzi
> >> >> <theo.bertozzi@gmail.com> wrote:
> >> >> > Ok, so to export a snapshot from your HBase cluster, you can do:
> >> >> >
> >> >> > $ bin/hbase class org.apache.hadoop.hbase.snapshot.tool.ExportSnapshot
> >> >> >   -snapshot MySnapshot -copy-to hdfs:///srv2:8082/my-backup-dir
> >> >> >
> >> >> > Now on cluster2, hdfs:///srv2:8082, you have your my-backup-dir that
> >> >> > contains the exported snapshot (note that the snapshot is under the
> >> >> > hidden dirs .snapshots and .archive).
> >> >> >
> >> >> > Now if you want to restore the snapshot, you have to export it back
> >> >> > to an HBase cluster. So on cluster2, you can do:
> >> >> >
> >> >> > $ bin/hbase class org.apache.hadoop.hbase.snapshot.tool.ExportSnapshot
> >> >> >   -D hbase.rootdir=hdfs:///srv2:8082/my-backup-dir -snapshot MySnapshot
> >> >> >   -copy-to hdfs:///hbaseSrv:8082/hbase
> >> >> >
> >> >> > So, to recap:
> >> >> > - You take a snapshot.
> >> >> > - You export the snapshot from HBase Cluster-1 to a simple HDFS dir
> >> >> >   in Cluster-2.
> >> >> > - Then you want to restore.
> >> >> > - You export the snapshot from the HDFS dir in Cluster-2 to an HBase
> >> >> >   cluster (it can be a different one from the original).
> >> >> > - From the hbase shell you can just clone_snapshot 'snapshotName',
> >> >> >   'newTableName' if the table does not exist, or use restore_snapshot
> >> >> >   'snapshotName' if there's a table with the same name.
> >> >> >
> >> >> > Matteo
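Putting that recap together as one sequence, sketch only: the class name and
the fully qualified hdfs://host:port/ URIs below follow the invocations that
actually ran earlier in this thread, while the hosts, table name and backup
directory are hypothetical.

    # 1. on cluster-1, from the hbase shell: take the snapshot
    hbase> snapshot 'myTable', 'MySnapshot'

    # 2. export the snapshot to a plain HDFS directory on cluster-2
    $ ./bin/hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot \
          -snapshot MySnapshot \
          -copy-to hdfs://cluster2-nn:9000/my-backup-dir

    # 3. to restore later, export it back from that directory into an
    #    HBase rootdir (the original cluster's or any other)
    $ ./bin/hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot \
          -Dhbase.rootdir=hdfs://cluster2-nn:9000/my-backup-dir \
          -snapshot MySnapshot \
          -copy-to hdfs://hbase-nn:9000/hbase

    # 4. on the destination cluster, materialize a table from the snapshot
    hbase> clone_snapshot 'MySnapshot', 'myTableRestored'

As discussed near the top of the thread, step 3 may also need
-Dfs.default.name=hdfs://cluster2-nn:9000/ when the machine running the tool
does not have that HDFS instance as its default filesystem.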
> >> >> > On Thu, Aug 1, 2013 at 2:54 PM, Siddharth Karandikar
> >> >> > <siddharth.karandikar@gmail.com> wrote:
> >> >> >> Yeah, that's right. But the issue is, the hdfs that I am exporting
> >> >> >> to is not under HBase.
> >> >> >> Can you please provide some example command to do this...
> >> >> >>
> >> >> >> Thanks,
> >> >> >> Siddharth
> >> >> >>
> >> >> >> On Thu, Aug 1, 2013 at 7:17 PM, Matteo Bertozzi
> >> >> >> <theo.bertozzi@gmail.com> wrote:
> >> >> >> > Yes, the export takes an HDFS path:
> >> >> >> >
> >> >> >> > $ bin/hbase class org.apache.hadoop.hbase.snapshot.tool.ExportSnapshot
> >> >> >> >   -snapshot MySnapshot -copy-to hdfs:///srv2:8082/hbase
> >> >> >> >
> >> >> >> > So you can export to some /my-backup-dir on your HDFS, and then
> >> >> >> > you have to export back to an hbase cluster when you want to
> >> >> >> > restore it.
> >> >> >> >
> >> >> >> > Matteo
> >> >> >> >
> >> >> >> > On Thu, Aug 1, 2013 at 2:45 PM, Siddharth Karandikar
> >> >> >> > <siddharth.karandikar@gmail.com> wrote:
> >> >> >> >> Can't I export it to plain HDFS? I think that would be very
> >> >> >> >> useful.
> >> >> >> >>
> >> >> >> >> On Thu, Aug 1, 2013 at 7:08 PM, Matteo Bertozzi
> >> >> >> >> <theo.bertozzi@gmail.com> wrote:
> >> >> >> >> > The ExportSnapshot will export the snapshot data+metadata, in
> >> >> >> >> > theory, to another hbase cluster.
> >> >> >> >> > So on the second cluster you'll now be able to do
> >> >> >> >> > "list_snapshots" from the shell and see the exported snapshot.
> >> >> >> >> > Now you can simply do clone_snapshot "snapshot_name",
> >> >> >> >> > "new_table_name" and you're restoring a snapshot on the second
> >> >> >> >> > cluster.
> >> >> >> >> >
> >> >> >> >> > Assuming that you have removed the snapshot from cluster1 and
> >> >> >> >> > you want to export back your snapshot: you just use
> >> >> >> >> > ExportSnapshot again to move the snapshot from cluster2 to
> >> >> >> >> > cluster1, and same as before you do a clone_snapshot to
> >> >> >> >> > restore it.
> >> >> >> >> >
> >> >> >> >> > Matteo
> >> >> >> >> >
> >> >> >> >> > On Thu, Aug 1, 2013 at 2:35 PM, Siddharth Karandikar
> >> >> >> >> > <siddharth.karandikar@gmail.com> wrote:
> >> >> >> >> >> Hi there,
> >> >> >> >> >>
> >> >> >> >> >> I am testing out the newly added snapshot capability,
> >> >> >> >> >> ExportSnapshot in particular.
> >> >> >> >> >> It's working fine for me. I am able to run ExportSnapshot
> >> >> >> >> >> properly.
> >> >> >> >> >>
> >> >> >> >> >> But the biggest (noob) issue is: once exported, is there any
> >> >> >> >> >> way to import those snapshots back into hbase? I don't see
> >> >> >> >> >> any ImportSnapshot util there.
> >> >> >> >> >>
> >> >> >> >> >> Thanks,
> >> >> >> >> >> Siddharth