hbase-commits mailing list archives

From te...@apache.org
Subject hbase git commit: HBASE-16574 Book updates for backup and restore
Date Thu, 06 Oct 2016 18:29:07 GMT
Repository: hbase
Updated Branches:
  refs/heads/HBASE-7912 b14e2ab1c -> a072f6f49


HBASE-16574 Book updates for backup and restore


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/a072f6f4
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/a072f6f4
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/a072f6f4

Branch: refs/heads/HBASE-7912
Commit: a072f6f49a26a7259ff2aaef6cb56d85eb592482
Parents: b14e2ab
Author: Frank Welsch <fwelsch@jps.net>
Authored: Fri Sep 23 18:00:42 2016 -0400
Committer: tedyu <yuzhihong@gmail.com>
Committed: Thu Oct 6 11:26:51 2016 -0700

----------------------------------------------------------------------
 src/main/asciidoc/_chapters/backup_restore.adoc | 521 +++++++++++++++++++
 src/main/asciidoc/book.adoc                     |   5 +-
 .../resources/images/backup-app-components.png  | Bin 0 -> 24366 bytes
 .../resources/images/backup-cloud-appliance.png | Bin 0 -> 30114 bytes
 .../images/backup-dedicated-cluster.png         | Bin 0 -> 24950 bytes
 .../resources/images/backup-intra-cluster.png   | Bin 0 -> 19348 bytes
 6 files changed, 523 insertions(+), 3 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hbase/blob/a072f6f4/src/main/asciidoc/_chapters/backup_restore.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/backup_restore.adoc b/src/main/asciidoc/_chapters/backup_restore.adoc
new file mode 100644
index 0000000..f362ed0
--- /dev/null
+++ b/src/main/asciidoc/_chapters/backup_restore.adoc
@@ -0,0 +1,521 @@
+////
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+////
+
+[[backuprestore]]
+= Backup and Restore
+:doctype: book
+:numbered:
+:toc: left
+:icons: font
+:experimental:
+
+[[br.overview]]
+== Overview
+
+Backup-and-restore is a standard set of operations for many databases. An effective backup-and-restore
+strategy helps ensure that you can recover data in case of data loss or failures. The HBase backup-and-restore
+utility helps ensure that enterprises using HBase as a data repository can recover from these types of
+incidents. Another important feature of the backup-and-restore utility is the ability to restore the
+database to a particular point in time, commonly referred to as a snapshot.
+
+The HBase backup-and-restore utility features both full backups and incremental backups. A full backup
+is required at least once. The full backup is the foundation on which incremental backups are applied
+to build iterative snapshots. Incremental backups can be run on a schedule to capture changes over time,
+for example by using a Cron job. Incremental backup is more cost-effective because it only captures the
+changes. It also enables you to restore the database to any incremental backup version. Furthermore, the
+utility also enables table-level data backup-and-recovery if you do not want to restore the entire dataset
+of the backup.
+
+[[br.planning]]
+== Planning
+
+There are a few strategies you can use to implement backup-and-restore in your environment. The following
+sections show how they are implemented and identify potential tradeoffs.
+
+WARNING: These backup and restore tools have not been tested on Transparent Data Encryption (TDE) enabled
+HDFS clusters. This is related to the open issue link:https://issues.apache.org/jira/browse/HBASE-16178[HBASE-16178].
+
+[[br.intracluster.backup]]
+=== Backup within a cluster
+
+Backup-and-restore within the same cluster is only appropriate for testing. This strategy is not suitable
+for production unless the underlying HDFS layer is backed up and is reliably recoverable.
+
+.Intra-Cluster Backup
+image::backup-intra-cluster.png[]
+
+[[br.dedicated.cluster.backup]]
+=== Backup using a dedicated cluster
+
+This strategy provides greater fault tolerance and provides a path towards disaster recovery. In this
+setting, you will store the backup on a separate HDFS cluster by supplying the backup destination cluster’s
+HDFS URL to the backup utility. You should consider backing up to a different physical location, such as
+a different data center.
+
+Typically, a backup-dedicated HDFS cluster uses a more economical hardware profile.
+
+.Dedicated HDFS Cluster Backup
+image::backup-dedicated-cluster.png[]
+
+[[br.cloud.or.vendor.backup]]
+=== Backup to the Cloud or a storage vendor appliance
+
+Another approach to safeguarding HBase incremental backups is to store the data on provisioned, secure
+servers that belong to third-party vendors and that are located off-site. The vendor can be a public
+cloud provider or a storage vendor who uses a Hadoop-compatible file system, such as S3 and other
+HDFS-compatible destinations.
+
+.Backup to Cloud or Vendor Storage Solutions
+image::backup-cloud-appliance.png[]
+
+NOTE: The HBase backup utility does not support backup to multiple destinations. A workaround is to
+manually create copies of the backup files from HDFS or S3.
+
+[[br.best.practices]]
+== Best Practices
+
+_Formulate a restore strategy and test it._
+
+Before you rely on a backup-and-restore strategy for your production environment, identify how backups
+must be performed, and more importantly, how restores must be performed. Test the plan to ensure that it
+is workable. At a minimum, store backup data from a production cluster on a different cluster or server.
+To further safeguard the data, use a backup location that is at a different site.
+
+If you have an unrecoverable loss of data on your primary production cluster as a result of computer
+system issues, you may be able to restore the data from a different cluster or server at the same site.
+However, a disaster that destroys the whole site renders locally stored backups useless. Consider storing
+the backup data and necessary resources (both computing capacity and operator expertise) to restore the
+data at a site sufficiently remote from the production site. In the case of a catastrophe at the whole
+primary site (fire, earthquake, etc.), the remote backup site can be very valuable.
+
+_Secure a full backup image first._
+
+As a baseline, you must complete a full backup of HBase data at least once before you can rely on
+incremental backups. The full backup should be stored outside of the source cluster. To ensure complete
+dataset recovery, you must run the restore utility with the option to restore the baseline full backup.
+The full backup is the foundation of your dataset. Incremental backup data is applied on top of the full
+backup during the restore operation to return you to the point in time when the backup was last taken.
+
+_Define and use backup sets for groups of tables that are logical subsets of the entire dataset._
+
+You can group tables into an object called a backup set. A backup set can save time when you have a
+particular group of tables that you expect to repeatedly back up or restore.
+
+When you create a backup set, you type table names to include in the group. The backup set includes not
+only groups of related tables, but also retains the HBase backup metadata. Afterwards, you can invoke the
+backup set name to indicate what tables apply to the command execution instead of entering all the table
+names individually.
+
+_Document the backup-and-restore strategy, and ideally log information about each backup._
+
+Document the whole process so that the knowledge base can transfer to new administrators after employee
+turnover. As an extra safety precaution, also log the calendar date, time, and other relevant details
+about the data of each backup. This metadata can potentially help locate a particular dataset in case of
+source cluster failure or primary site disaster. Maintain duplicate copies of all documentation: one copy
+at the production cluster site and another at the backup location or wherever it can be accessed by an
+administrator remotely from the production cluster.
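+
+The logging advice above can be scripted. The sketch below is hypothetical and not part of the backup
+utility: it assumes you extract the backup ID from the `hbase backup create` output yourself, and the
+default log path is illustrative only.

```shell
# Append a backup ID with a UTC timestamp to a history log.
# $1 = backup ID reported by `hbase backup create`, e.g. backup_1467823988425
# BACKUP_LOG may be overridden; the default path is illustrative only.
record_backup() {
  local log="${BACKUP_LOG:-./backup-history.log}"
  printf '%s %s\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$1" >> "$log"
}
```

+Keep a copy of the resulting log at the backup site as well, so the backup IDs survive a loss of the
+production cluster.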
+
+[[br.running.utility]]
+== Running the Backup and Restore Utility
+
+This section details the commands and their arguments of the backup-and-restore utility, as well as
+example usage based on task.
+
+WARNING: The YARN *container-executor.cfg* configuration file must have the following property setting:
+_allowed.system.users=hbase_. No spaces are allowed in entries of this configuration file.
+
+*Example of a valid container-executor.cfg file for backup and restore:*
+
+[source]
+----
+yarn.nodemanager.log-dirs=/var/log/hadoop/mapred
+yarn.nodemanager.linux-container-executor.group=yarn
+banned.users=hdfs,yarn,mapred,bin
+allowed.system.users=hbase
+min.user.id=500
+----
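+
+Because a single stray space in an entry can break this requirement, a quick check can save debugging
+time. The function below is a hypothetical helper (not part of Hadoop or HBase); pass it the path to your
+distribution's *container-executor.cfg*.

```shell
# Return non-zero if any entry in the given container-executor.cfg contains whitespace.
check_cfg() {
  if grep -q '[[:space:]]' "$1"; then
    echo "WARNING: whitespace found in $1" >&2
    return 1
  fi
  echo "OK: $1 contains no whitespace"
}
```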
+
+NOTE: Run the command `hbase backup help <command>` to access the online help that provides basic
+information about a command and its options.
+
+[[br.creating.complete.backup]]
+== Creating and Maintaining a Complete Backup Image
+
+[NOTE]
+====
+For HBase clusters also using Apache Phoenix: include the SQL system catalog tables in the backup. In
+the event that you need to restore the HBase backup, access to the system catalog tables enables you to
+resume Phoenix interoperability with the restored data.
+====
+
+The first step in running the backup-and-restore utilities is to perform a full backup and to store the
+data in a separate image from the source. At a minimum, you must do this to get a baseline before you can
+rely on incremental backups.
+
+Run the following command as HBase superuser:
+
+[source]
+----
+hbase backup create
+	{{ full | incremental }
+	 {backup_root_path}
+	 {[tables] | [-set backup_set_name]}}
+
+	[[-silent] |
+	 [-w number_of_workers] |
+	 [-b bandwidth_per_worker]]
+----
+
+After the command finishes running, the console prints a SUCCESS or FAILURE status message. The SUCCESS
+message includes a _backup ID_. The backup ID is the Unix time (also known as Epoch time) that the HBase
+master received the backup request from the client.
+
+[TIP]
+====
+Record the backup ID that appears at the end of a successful backup. In case the source cluster fails
+and you need to recover the dataset with a restore operation, having the backup ID readily available can
+save time.
+====
+
+[[br.create.required.cli.arguments]]
+=== Required Command-Line Arguments
+
+_full_ or _incremental_::
+  Using the _full_ argument creates a full backup image. The _incremental_ argument directs the command to
+  create an incremental backup that has an image of data changes since the immediately preceding backup,
+  either the full backup or the previous incremental backup.
+
+_backup_root_path_::
+  The _backup_root_path_ argument specifies the full filesystem URI of where to store the backup image.
+  Valid prefixes are _hdfs:_, _webhdfs:_, _gpfs:_, and _s3fs:_.
+
+[[br.create.optional.cli.arguments]]
+=== Optional Command-Line Arguments
+
+_tables_::
+  Table or tables to back up. If no table is specified, all tables are backed up. The values for this
+  argument must be entered directly after the _backup_root_path_ argument. Specify tables in a
+  comma-separated list. Namespace wildcards are not supported yet, so to back up a namespace you must
+  enter the full list of tables in the namespace.
+
+_-set <backup_set_name>_::
+  The _-set_ option invokes an existing backup set in the command. See <<br.using.backup.sets,Using Backup Sets>>
+  for the purpose and usage of backup sets.
+
+_-silent_::
+  Directs the command to not display progress and to complete execution without manual interaction.
+
+_-w <number>_::
+  Specifies the number of parallel workers to copy data to the backup destination (for example, the
+  number of map tasks in a MapReduce job).
+
+_-b <bandwidth_per_worker>_::
+  Specifies the bandwidth of each worker in MB per second.
+
+[[br.usage.examples]]
+=== Example usage
+
+[source]
+----
+$ hbase backup create full hdfs://host5:8020/data/backup SALES2,SALES3 -w 3
+----
+
+This command creates a full backup image of two tables, SALES2 and SALES3, in the HDFS instance whose
+NameNode is host5:8020, in the path _/data/backup_. The _-w_ option specifies that no more than three
+parallel workers complete the operation.
+
+
+[[br.managing.backup.progress]]
+== Managing Backup Progress
+
+You can monitor a running backup by running the `hbase backup progress` command and specifying the
+backup ID as an argument.
+
+For example, run the following command as hbase superuser to view the progress of a backup:
+
+[source]
+----
+$ hbase backup progress {backupId}
+----
+
+[[br.progress.required.cli.arguments]]
+=== Required Command-Line Arguments
+
+_backupId_::
+  Specifies the backup whose progress you want to monitor. The backupId is case-sensitive.
+
+[[br.progress.example]]
+=== Example usage
+
+[source]
+----
+hbase backup progress backupId_1467823988425
+----
+
+[[br.using.backup.sets]]
+== Using Backup Sets
+
+Backup sets can ease the administration of HBase data backups and restores by reducing the amount of
+repetitive input of table names. You can group tables into a named backup set with the `hbase backup set add`
+command. You can then use the _-set_ option to invoke the name of a backup set in the `hbase backup create`
+or `hbase backup restore` commands rather than list every table in the group individually. You can have
+multiple backup sets.
+
+NOTE: Note the differentiation between the `hbase backup set add` command and the _-set_ option. The
+`hbase backup set add` command must be run before the `-set` option can be used in a different command,
+because backup sets must be named and defined before they can be used as a shortcut.
+
+If you run the `hbase backup set add` command and specify a backup set name that does not yet exist on
+your system, a new set is created. If you run the command with the name of an existing backup set, then
+the tables that you specify are added to the set.
+
+In this command, the backup set name is case-sensitive.
+
+NOTE: The metadata of backup sets is stored within HBase. If you do not have access to the original HBase
+cluster with the backup set metadata, then you must specify individual table names to restore the data.
+
+To create a backup set, run the following command as the HBase superuser:
+
+[source]
+----
+$ hbase backup set {add, remove, list, describe, delete} <backup_set_name> tables
+----
+
+[[br.using.subcommands]]
+=== Subcommands
+
+The following list details the subcommands of the `hbase backup set` command.
+
+NOTE: You must enter one (and no more than one) of the following subcommands after `hbase backup set` to
+complete an operation. Also, the backup set name is case-sensitive in the command-line utility.
+
+_add_::
+  Adds tables to a backup set. Specify a _backup_set_name_ value after this argument to create a backup
+  set.
+
+_remove_::
+  Removes tables from the set. Specify the tables to remove in the tables argument.
+
+_list_::
+  Lists all backup sets.
+
+_describe_::
+  Use this subcommand to display on the screen a description of a backup set. The information includes
+  whether the set has full or incremental backups, the start and end times of the backups, and a list of
+  the tables in the set. This subcommand must precede a valid _backup_set_name_ value.
+
+_delete_::
+  Deletes a backup set. Enter the value for the _backup_set_name_ option directly after the
+  `hbase backup set delete` command.
+
+[[br.using.optional.cli.arguments]]
+=== Optional Command-Line Arguments
+
+_backup_set_name_::
+  Use to assign or invoke a backup set name. The backup set name must contain only printable characters
+  and cannot have any spaces.
+
+_tables_::
+  List of tables (or a single table) to include in the backup set. Enter the table names as a
+  comma-separated list. If no tables are specified, all tables are included in the set.
+
+TIP: As part of your backup strategy, maintain a log or other record of the case-sensitive backup set
+names and the corresponding tables in each set on a separate or remote cluster. This information can help
+you in case of failure on the primary cluster.
+
+[[br.using.usage]]
+=== Example of Usage
+
+[source]
+----
+$ hbase backup set add Q1Data TEAM_3,TEAM_4
+----
+
+Depending on the environment, this command results in _one_ of the following actions:
+
+* If the `Q1Data` backup set does not exist, a backup set containing tables `TEAM_3` and `TEAM_4` is created.
+* If the `Q1Data` backup set exists already, the tables `TEAM_3` and `TEAM_4` are added to the `Q1Data`
+backup set.
+
+[[br.restoring.backup]]
+== Restoring a Backup Image
+
+Run the following command as HBase superuser. You can only restore on a live HBase cluster because the
+data must be redistributed to complete the restore operation successfully.
+
+[source]
+----
+hbase restore {[-set backup_set_name] | [backup_root_path] | [backupId] | [tables]} [[table_mapping] | [-overwrite] | [-check]]
+----
+
+[[br.restore.required.args]]
+=== Required Command-Line Arguments
+
+_-set <backup_set_name>_::
+  The _-set_ option here directs the utility to restore the backup set that you specify in the
+  _backup_set_name_ argument.
+
+_backup_root_path_::
+  The _backup_root_path_ argument specifies the parent location of the stored backup image.
+
+_backupId_::
+  The backup ID that uniquely identifies the backup image to be restored.
+
+_tables_::
+  Table(s) to restore. The values for this argument must be entered directly after the _backupId_
+  argument. Specify tables in a comma-separated list.
+
+[[br.restore.optional.args]]
+=== Optional Command-Line Arguments
+
+_table_mapping_::
+  Directs the utility to restore data in the tables that are specified in the _tables_ option. Each table
+  must be mapped prior to running the command. Enter tables as a comma-separated list.
+
+_-overwrite_::
+  Truncates one or more tables in the target restore location and loads data from the backup image. The
+  existing table must be online before the `hbase restore` command is run in order to successfully
+  overwrite the data in the table. Compaction is not required for the data restore operation when you use
+  the _-overwrite_ argument.
+
+_-check_::
+  Verifies that the restore sequence and dependencies are in working order without actually executing a
+  data restore.
+
+[[br.restore.usage]]
+=== Example of Usage
+
+[source]
+----
+hbase restore /tmp/backup_incremental backupId_1467823988425 mytable1,mytable2 -overwrite
+----
+
+This command restores two tables of an incremental backup image. In this example:
+
+* `/tmp/backup_incremental` is the path to the directory containing the backup image.
+* `backupId_1467823988425` is the backup ID.
+* `mytable1` and `mytable2` are the names of the tables in the backup image to be restored.
+* `-overwrite` is an argument that indicates the restored tables overwrite all existing data in the
+versions of `mytable1` and `mytable2` that exist in the target destination of the restore operation.
+
+[[br.administration]]
+== Administration of Backup Images
+
+The `hbase backup` command has several subcommands that help with administering backup images as they
+accumulate. Most production environments require recurring backups, so it is necessary to have utilities
+to help manage the data of the backup repository. Some subcommands enable you to find information that
+can help identify backups that are relevant in a search for particular data. You can also delete backup
+images.
+
+The following list details each `hbase backup` subcommand that can help administer backups. Run the full
+command-subcommand line as the HBase superuser.
+
+`hbase backup history [-n number_of_backups]`::
+  Displays a log of backup sessions. The information for each session includes the backup ID, the type
+  (full or incremental), the tables in the backup, the status, and the start and end time. Specify the
+  number of backup sessions to display with the optional _-n_ argument. If no number is specified, the
+  command displays a log of 10 backup sessions.
+
+`hbase backup describe {backupId}`::
+  Lists the backup image content, the time when the backup was taken, whether the backup is full or
+  incremental, all tables in the backup, and the backup status. The `backupId` option is required.
+
+`hbase backup delete {backupId}`::
+  Deletes the specified backup image from the system. The `backupId` option is required.
+
+[[br.technical.details]]
+== Technical Details of Incremental Backup and Restore
+
+HBase incremental backups enable more efficient capture of HBase table images than previous attempts at
+serial backup-and-restore solutions, such as those that only used the HBase Export and Import APIs.
+Incremental backups use Write Ahead Logs (WALs) to capture the data changes since the previous backup was
+created. A log roll is executed across all RegionServers to track the WALs that need to be in the backup.
+
+After the incremental backup image is created, the source backup files are usually on the same nodes as
+the data source. A process similar to the DistCp (distributed copy) tool is used to move the source
+backup files to the target file system. When a table restore operation starts, a two-step process is
+initiated. First, the full backup is restored from the full backup image. Second, all WAL files from
+incremental backups between the last full backup and the incremental backup being restored are converted
+to HFiles, which the HBase Bulk Load utility automatically imports as restored data in the table.
+
+You can only restore on a live HBase cluster because the data must be redistributed to complete the
+restore operation successfully.
+
+[[br.s3.backup.scenario]]
+== Scenario: Safeguarding Application Datasets on Amazon S3
+
+This scenario describes how a hypothetical retail business uses backups to safeguard application data and
+then restores the dataset after failure.
+
+The HBase administration team uses backup sets to store data from a group of tables that have interrelated
+information for an application called _green_. In this example, one table contains transaction records and
+the other contains customer details. The two tables need to be backed up and be recoverable as a group.
+
+The admin team also wants to ensure daily backups occur automatically.
+
+.Tables Composing The Backup Set
+image::backup-app-components.png[]
+
+The following is an outline of the steps and examples of commands that are used to back up the data for
+the _green_ application and to recover the data later. All commands are run when logged in as HBase
+superuser.
+
+1. A backup set called _green_set_ is created as an alias for both the transactions table and the customer
+table. The backup set can be used for all operations to avoid typing each table name. The backup set name
+is case-sensitive and should be formed with only printable characters and without spaces.
+
+[source]
+----
+$ hbase backup set add green_set transactions
+$ hbase backup set add green_set customer
+----
+
+2. The first backup of green_set data must be a full backup. The following command example shows how
+credentials are passed to Amazon S3 and specifies the file system with the s3a: prefix.
+
+[source]
+----
+$ ACCESS_KEY=ABCDEFGHIJKLMNOPQRST
+$ SECRET_KEY=123456789abcdefghijklmnopqrstuvwxyzABCD
+$ sudo -u hbase hbase backup create full \
+  s3a://$ACCESS_KEY:$SECRET_KEY@prodhbasebackups/backups -set green_set
+----
+
+3. Incremental backups should be run according to a schedule that ensures essential data recovery in the
+event of a catastrophe. At this retail company, the HBase admin team decides that automated daily backups
+secure the data sufficiently. The team decides that they can implement this by modifying an existing Cron
+job that is defined in `/etc/crontab`. Consequently, IT modifies the Cron job by adding the following line:
+
+[source]
+----
+@daily hbase hbase backup create incremental s3a://$ACCESS_KEY:$SECRET_KEY@prodhbasebackups/backups -set green_set
+----
+
+4. A catastrophic IT incident disables the production cluster that the green application uses. An HBase
+system administrator of the backup cluster must restore the _green_set_ dataset to the point in time
+closest to the recovery objective.
+
+NOTE: If the administrator of the backup HBase cluster has the backup ID with relevant details in
+accessible records, the following search with the `hdfs dfs -ls` command and manually scanning the backup
+ID list can be bypassed. Consider continuously maintaining and protecting a detailed log of backup IDs
+outside the production cluster in your environment.
+
+The HBase administrator runs the following command on the directory where backups are stored to print
+the list of successful backup IDs on the console:
+
+`hdfs dfs -ls -t /prodhbasebackups/backups`
+
+5. The admin scans the list to see which backup was created at a date and time closest to the recovery
+objective. To do this, the admin converts the calendar timestamp of the recovery point in time to Unix
+time because backup IDs are uniquely identified with Unix time. The backup IDs are listed in reverse
+chronological order, meaning the most recent successful backup appears first.
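+
+The timestamp conversion in this step can be done with standard tools. The sketch below assumes GNU
+`date` and that the numeric suffix of the backup ID is a Unix time in milliseconds, as the ID in this
+example suggests:

```shell
# Convert the numeric part of a backup ID (epoch milliseconds) to a UTC date.
backup_id=backup_1467823988425
epoch_ms=${backup_id#backup_}
date -u -d "@$((epoch_ms / 1000))" +'%Y-%m-%d %H:%M:%S UTC'
# → 2016-07-06 16:53:08 UTC
```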
+
+The admin notices that the following line in the command output corresponds with the _green_set_ backup
+that needs to be restored:
+
+`/prodhbasebackups/backups/backup_1467823988425`
+
+6. The admin restores green_set invoking the backup ID and the _-overwrite_ option. The _-overwrite_
+option truncates all existing data in the destination and populates the tables with data from the backup
+dataset. Without this flag, the backup data is appended to the existing data in the destination. In this
+case, the admin decides to overwrite the data because it is corrupted.
+
+[source]
+----
+$ sudo -u hbase hbase restore -set green_set \
+  s3a://$ACCESS_KEY:$SECRET_KEY@prodhbasebackups/backups backup_1467823988425 -overwrite
+----
+
+[[br.limitations]]
+== Limitations of the Backup and Restore Utility
+
+* Only one active backup session is supported. link:https://issues.apache.org/jira/browse/HBASE-16391[HBASE-16391]
+will introduce support for multiple backup sessions.
+* Neither backup nor restore can be canceled while in progress (link:https://issues.apache.org/jira/browse/HBASE-15997[HBASE-15997],
+link:https://issues.apache.org/jira/browse/HBASE-15998[HBASE-15998]).
+* Bulk-loaded data is not supported (link:https://issues.apache.org/jira/browse/HBASE-14417[HBASE-14417]).
+* Only a single backup destination is supported. link:https://issues.apache.org/jira/browse/HBASE-15476[HBASE-15476]
+will introduce support for multiple backup destinations.
+* There is no merge for incremental images (link:https://issues.apache.org/jira/browse/HBASE-14135[HBASE-14135]).
+This can increase restore time. Users will need to periodically execute full backups to be able to
+restore data faster.
+* Only the superuser (hbase) is allowed to perform backup and restore, which can be a problem for
+security (link:https://issues.apache.org/jira/browse/HBASE-14138[HBASE-14138]).
+* During an incremental backup, ALL WAL data is copied over to the backup destination, including data
+from tables that are not being backed up. This is a performance and security limitation.
+link:https://issues.apache.org/jira/browse/HBASE-14141[HBASE-14141] will introduce a more granular WAL
+copy implementation.

http://git-wip-us.apache.org/repos/asf/hbase/blob/a072f6f4/src/main/asciidoc/book.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/book.adoc b/src/main/asciidoc/book.adoc
index 2209b4f..f5089e9 100644
--- a/src/main/asciidoc/book.adoc
+++ b/src/main/asciidoc/book.adoc
@@ -19,7 +19,7 @@
  */
 ////
 
-= Apache HBase (TM) Reference Guide 
+= Apache HBase (TM) Reference Guide
 :Author: Apache HBase Team
 :Email: <hbase-dev@lists.apache.org>
 :doctype: book
@@ -62,6 +62,7 @@ include::_chapters/mapreduce.adoc[]
 include::_chapters/security.adoc[]
 include::_chapters/architecture.adoc[]
 include::_chapters/hbase_mob.adoc[]
+include::_chapters/backup_restore.adoc[]
 include::_chapters/hbase_apis.adoc[]
 include::_chapters/external_apis.adoc[]
 include::_chapters/thrift_filter_language.adoc[]
@@ -92,5 +93,3 @@ include::_chapters/asf.adoc[]
 include::_chapters/orca.adoc[]
 include::_chapters/tracing.adoc[]
 include::_chapters/rpc.adoc[]
-
-

http://git-wip-us.apache.org/repos/asf/hbase/blob/a072f6f4/src/main/site/resources/images/backup-app-components.png
----------------------------------------------------------------------
diff --git a/src/main/site/resources/images/backup-app-components.png b/src/main/site/resources/images/backup-app-components.png
new file mode 100644
index 0000000..5e403e2
Binary files /dev/null and b/src/main/site/resources/images/backup-app-components.png differ

http://git-wip-us.apache.org/repos/asf/hbase/blob/a072f6f4/src/main/site/resources/images/backup-cloud-appliance.png
----------------------------------------------------------------------
diff --git a/src/main/site/resources/images/backup-cloud-appliance.png b/src/main/site/resources/images/backup-cloud-appliance.png
new file mode 100644
index 0000000..76b6d5a
Binary files /dev/null and b/src/main/site/resources/images/backup-cloud-appliance.png differ

http://git-wip-us.apache.org/repos/asf/hbase/blob/a072f6f4/src/main/site/resources/images/backup-dedicated-cluster.png
----------------------------------------------------------------------
diff --git a/src/main/site/resources/images/backup-dedicated-cluster.png b/src/main/site/resources/images/backup-dedicated-cluster.png
new file mode 100644
index 0000000..bca282d
Binary files /dev/null and b/src/main/site/resources/images/backup-dedicated-cluster.png differ

http://git-wip-us.apache.org/repos/asf/hbase/blob/a072f6f4/src/main/site/resources/images/backup-intra-cluster.png
----------------------------------------------------------------------
diff --git a/src/main/site/resources/images/backup-intra-cluster.png b/src/main/site/resources/images/backup-intra-cluster.png
new file mode 100644
index 0000000..113c577
Binary files /dev/null and b/src/main/site/resources/images/backup-intra-cluster.png differ

