hawq-commits mailing list archives

From r...@apache.org
Subject [2/3] incubator-hawq git commit: HAWQ-158. Remove legacy command line tools and help.
Date Sat, 14 Nov 2015 07:41:40 GMT
http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/3691f236/tools/doc/gp_dump_help
----------------------------------------------------------------------
diff --git a/tools/doc/gp_dump_help b/tools/doc/gp_dump_help
deleted file mode 100755
index d622ffc..0000000
--- a/tools/doc/gp_dump_help
+++ /dev/null
@@ -1,316 +0,0 @@
-COMMAND NAME: gp_dump
-
-Writes out a Greenplum database to SQL script files, which can 
-then be used to restore the database using gp_restore.
-
-*****************************************************
-SYNOPSIS
-*****************************************************
-
-gp_dump [-a | -s] [-c] [-d] [-D] [-n <schema>] [-o] [-O] 
-[-t <table_name>] [-T table_name] 
-[-x] [-h <hostname>] 
-[-p <port>] [-U <username>] [-W] [-i] [-v] 
-[--gp-c] [--gp-d=<backup_directory>] 
-[--gp-r=<reportfile>] [--gp-s=<dbid>] <database_name> 
-
-gp_dump -? | --help
-
-gp_dump --version
-
-
-*****************************************************
-DESCRIPTION
-*****************************************************
-
-The gp_dump utility dumps the contents of a Greenplum database 
-into SQL script files, which can then be used to restore the database 
-schema and user data at a later time using gp_restore. During a dump 
-operation, users will still have full access to the database. 
-The functionality of gp_dump is analogous to PostgreSQL's pg_dump utility, 
-which writes out (or dumps) the contents of a database into a script file. 
-The script file contains SQL commands that can be used to restore the 
-database, its data, and global objects such as users, groups, and access 
-permissions. 
-
-The functionality of gp_dump is modified to accommodate the 
-distributed nature of a Greenplum database. Keep in mind that a database 
-in Greenplum Database actually comprises several PostgreSQL instances 
-(the master and all segments), each of which must be dumped individually. 
-The gp_dump utility takes care of dumping all of the individual instances 
-across the system.
-
-The gp_dump utility performs the following actions and produces the 
-following dump files by default:
-
-ON THE MASTER HOST
-
-* Dumps CREATE DATABASE SQL statements into a file in the master data 
-  directory. The default naming convention of this file is 
-  gp_cdatabase_1_<dbid>_<timestamp>. This file can be run on 
-  the master instance to recreate the user database(s).
-
-* Dumps the user database schema(s) into a SQL file in the master 
-  data directory. The default naming convention of this file is 
-  gp_dump_1_<dbid>_<timestamp>. This file is used by gp_restore 
-  to recreate the database schema(s).
-
-* Creates a dump file in the master data directory named 
-  gp_dump_1_<dbid>_<timestamp>_post_data that contains commands to 
-  rebuild objects associated with the tables.
-  When the database is restored with gp_restore, the schema 
-  and data are restored first, and then this dump file is used to rebuild 
-  the other objects associated with the tables.
-
-* Creates a log file in the master data directory named 
-  gp_dump_status_1_<dbid>_<timestamp>.
-
-* gp_dump launches a gp_dump_agent for each segment instance to be 
-  backed up. gp_dump_agent processes run on the segment hosts and report 
-  status back to the gp_dump process running on the master host. 
-
-
-ON THE SEGMENT HOSTS
-
-* Dumps the user data for each segment instance into a SQL file in the segment 
-  instance's data directory. By default, only primary (or active) segment 
-  instances are backed up. The default naming convention of this file is 
-  gp_dump_0_<dbid>_<timestamp>. This file is used by gp_restore to recreate 
-  that particular segment of user data.
-
-* Creates a log file in each segment instance's data directory named 
-  gp_dump_status_0_<dbid>_<timestamp>. 
-
-Note that the 14-digit timestamp is the number that uniquely identifies the 
-backup job, and is part of the filename for each dump file created by a
-gp_dump operation. This timestamp must be passed to the gp_restore utility 
-when restoring a Greenplum database.
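As a sketch of the naming convention above: a run with a hypothetical
timestamp key of 20051031124530, a master dbid of 1, and a single segment
with dbid 2 would produce files named as follows. The snippet only prints
the names; the timestamp, dbids, and single-segment layout are all
illustrative, not taken from a real gp_dump run.

```shell
# Print the dump file names described above for a hypothetical run.
# All identifiers here are made up for illustration; a real gp_dump
# run generates its own 14-digit timestamp key.
ts=20051031124530
echo "gp_cdatabase_1_1_${ts}"        # CREATE DATABASE statements (master)
echo "gp_dump_1_1_${ts}"             # schema dump (master)
echo "gp_dump_1_1_${ts}_post_data"   # post-data objects (master)
echo "gp_dump_status_1_1_${ts}"      # status log (master)
echo "gp_dump_0_2_${ts}"             # user data for segment dbid 2
echo "gp_dump_status_0_2_${ts}"      # status log for segment dbid 2
```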
-
-*****************************************************
-OPTIONS
-*****************************************************
-
--a
---data-only
-
-
-Dump only the data, not the schema (data definitions).
-
-
--s
---schema-only
-
-Dump only the object definitions (schema), not data.
-
-
--c
---clean
-
-Output commands to clean (drop) database objects prior to the 
-commands that create them.
-
-
--d
---inserts
-
-Dump data as INSERT commands (rather than COPY). This will make restoration 
-very slow; it is mainly useful for making dumps that can be loaded into 
-non-PostgreSQL based databases. Note that the restore may fail altogether 
-if you have rearranged column order. The -D option is safer, though slower. 
-
-
--D
---column-inserts
-
-Dump data as INSERT commands with explicit column names 
-(INSERT INTO table (column, ...) VALUES ...). This will make restoration 
-very slow; it is mainly useful for making dumps that can be loaded into 
-non-PostgreSQL based databases.
-
-
--n <schema>
---schema=<schema>
-
-Dumps the contents of the named schema only. If this option is not 
-specified, all non-system schemas in the target database will be dumped.
-Caution: In this mode, gp_dump makes no attempt to dump any other 
-database objects that objects in the selected schema may depend upon. 
-Therefore, there is no guarantee that the results of a single-schema 
-dump can be successfully restored by themselves into a clean database.
-You cannot backup system catalog schemas (such as pg_catalog) with gp_dump.
-
-
--o
---oids
-
-Dump object identifiers (OIDs) as part of the data for every table. 
-Use of OIDs is not recommended in Greenplum, so this option should 
-not be used if restoring data to another Greenplum Database installation.
-
-
--O
---no-owner
-
-Do not output commands to set ownership of objects to match the 
-original database. By default, gp_dump issues ALTER OWNER or 
-SET SESSION AUTHORIZATION statements to set ownership of created 
-database objects. These statements will fail when the script is 
-run unless it is started by a superuser (or the same user that 
-owns all of the objects in the script). To make a script that can 
-be restored by any user, but will give that user ownership of all 
-the objects, specify -O.
-
-
--t table | --table=table
-
-Dump only tables (or views or sequences) matching the table pattern. 
-Multiple tables can be selected by writing multiple -t switches. 
-Also, the table parameter is interpreted as a pattern according to the 
-same rules used by psql's \d commands, so multiple tables can also be 
-selected by writing wildcard characters in the pattern. When using 
-wildcards, be careful to quote the pattern if needed to prevent the 
-shell from expanding the wildcards. The -n and -N switches have no 
-effect when -t is used, because tables selected by -t will be dumped 
-regardless of those switches, and non-table objects will not be dumped.
-
-Note: When -t is specified, gp_dump makes no attempt to dump any other 
-database objects that the selected table(s) may depend upon. 
-Therefore, there is no guarantee that the results of a specific-table 
-dump can be successfully restored by themselves into a clean database.
-
-Note: -t cannot be used to specify a child table partition. To dump a 
-partitioned table, you must specify the parent table name.
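The quoting caution above can be seen directly in the shell: a quoted
pattern reaches the utility intact, while an unquoted one may first be
expanded by the shell against local file names. A minimal illustration
(the pattern name is made up):

```shell
# A single-quoted pattern is passed through literally, exactly as the
# dump utility needs to receive it for its own pattern matching.
set -- 'sales_*'
echo "$1"   # prints: sales_*
```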
-
-
--T table | --exclude-table=table
-
-Do not dump any tables matching the table pattern. The pattern is 
-interpreted according to the same rules as for -t. -T can be given 
-more than once to exclude tables matching any of several patterns. 
-When both -t and -T are given, the behavior is to dump just the tables 
-that match at least one -t switch but no -T switches. If -T appears 
-without -t, then tables matching -T are excluded from what is otherwise 
-a normal dump.
-
-
--x
---no-privileges
---no-acl
-
-Prevents the dumping of access privileges (GRANT/REVOKE commands).
-
-
--h <hostname>
---host=<hostname>
-
-The host name of the master host. If not provided, the value of 
-$PGHOST or the local host is used.
-
-
--p <port>
---port=<port>
-
-The master port. If not provided, the value of $PGPORT or the 
-port number provided at compile time is used.
-
-
--U <username>
---username=<user>
-
-The database superuser account name, for example gpadmin. 
-If not provided, the value of $PGUSER or the current OS 
-user name is used.
-
-
--W
-
-Forces a password prompt. This will happen automatically if 
-the server requires password authentication.
-
-
--i
---ignore-version
-
-Ignores a version mismatch between gp_dump and the database server.
-
-
--v
---verbose
-
-
-Specifies verbose mode. This will cause gp_dump to output detailed 
-object comments and start/stop times to the dump file, and progress 
-messages to standard error.
-
-
---gp-c
-
-Use gzip for inline compression.
-
-
---gp-d=<directoryname>
-
-Specifies the relative or absolute path where the backup files 
-will be placed on each host. If this is a relative path, it is 
-considered to be relative to the data directory. If the path does 
-not exist, it will be created, if possible. If not specified, 
-defaults to the data directory of each instance to be backed up. 
-Using this option may be desirable if each segment host has multiple 
-segment instances, since it creates the dump files in a centralized location.
-
-
---gp-r=<reportfile>
-
-Specifies the full path name where the backup job report file will be 
-placed on the master host. If not specified, defaults to the master 
-data directory or the current directory if running remotely.
-
-
---gp-s=<dbid> (backup certain segments)
-
-Specifies the set of active segment instances to back up as a 
-comma-separated list of segment dbids. The default is to 
-back up all active segment instances.
-
-
-<database_name>
-
-Required unless $PGDATABASE is set, in which case that value is used. 
-The name of the database you want to dump. The database name must be 
-stated last, after all other options have been specified.
-
-
--? | --help
-
-Displays the online help.
-
-
---version
-
-Displays the version of this script.
-
-
-*****************************************************
-EXAMPLES
-*****************************************************
-
-Back up a database:
-
-gp_dump gpdb
-
-
-Back up a database, and create dump files in a centralized 
-location on all hosts:
-
-gp_dump --gp-d=/home/gpadmin/backups gpdb
-
-
-Back up the specified schema only:
-
-gp_dump -n myschema mydatabase
-
-
-*****************************************************
-SEE ALSO
-*****************************************************
-
-gp_restore, gprebuildsystem, gprebuildseg

http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/3691f236/tools/doc/gp_restore_help
----------------------------------------------------------------------
diff --git a/tools/doc/gp_restore_help b/tools/doc/gp_restore_help
deleted file mode 100755
index 2081d09..0000000
--- a/tools/doc/gp_restore_help
+++ /dev/null
@@ -1,218 +0,0 @@
-COMMAND NAME: gp_restore
-
-Restores Greenplum Database databases that were backed up using gp_dump.
-
-*****************************************************
-SYNOPSIS
-*****************************************************
-
-gp_restore --gp-k=<timestamp_key> -d <database_name> [-a | -s] [-i] 
-[-v] [-c] [-h <hostname>] [-p <port>] [-U <username>] [-W] 
-[--gp-c] [--gp-i] [--gp-d=<directoryname>] [--gp-r=<reportfile>] 
-[--gp-l=a|p]
-
-gp_restore -? | -h | --help 
-
-gp_restore --version
-
-
-*****************************************************
-DESCRIPTION
-*****************************************************
-
-
-The gp_restore utility recreates the data definitions (schema) and 
-user data in a Greenplum database using the script files created 
-by a gp_dump operation. The use of this script assumes:
-
-1. You have backup files created by a gp_dump operation.
-
-2. Your Greenplum Database system is up and running.
-
-3. Your Greenplum Database system has the exact same number of segment 
-   instances as the system that was backed up using gp_dump. 
-
-4. (optional) The gp_restore script uses the information in 
-   the Greenplum system catalog tables to determine the hosts, ports, 
-   and data directories for the segment instances it is restoring. If 
-   you want to change any of this information (for example, move the 
-   system to a different array of hosts) you must use the gprebuildsystem 
-   and gprebuildseg scripts to reconfigure your array before restoring.
-
-5. The databases you are restoring have been created in the system.
-
-The functionality of gp_restore is analogous to PostgreSQL's pg_restore 
-utility, which restores a database from files created by the database 
-backup process. It issues the commands necessary to reconstruct the database 
-to the state it was in at the time it was saved.
-
-The functionality of gp_restore is modified to accommodate the
-distributed nature of a Greenplum database, and to use files 
-created by a gp_dump operation. Keep in mind that a database in 
-Greenplum actually comprises several PostgreSQL instances (the master 
-and all segments), each of which must be restored individually. 
-The gp_restore utility takes care of populating each segment in the 
-system with its own distinct portion of data.
-
-The gp_restore utility performs the following actions:
-
-ON THE MASTER HOST
-
-* Creates the user database schema(s) using the 
-  gp_dump_1_<dbid>_<timestamp> SQL file created by gp_dump.
-
-* Creates a log file in the master data directory named 
-  gp_restore_status_1_<dbid>_<timestamp>.
-
-* gp_restore launches a gp_restore_agent for each segment instance 
-  to be restored. gp_restore_agent processes run on the segment hosts 
-  and report status back to the gp_restore process running on the 
-  master host. 
-
-ON THE SEGMENT HOSTS
-
-* Restores the user data for each segment instance using the 
-  gp_dump_0_<dbid>_<timestamp> files created by gp_dump. Each 
-  segment instance on a host (primary and mirror instances) is restored.
-
-* Creates a log file for each segment instance named 
-  gp_restore_status_0_<dbid>_<timestamp>.
-
-Note that the 14-digit timestamp is the number that uniquely identifies 
-the backup job to be restored, and is part of the filename for each 
-dump file created by a gp_dump operation. This timestamp must be passed 
-to the gp_restore utility when restoring a Greenplum Database database.
-
-
-
-*****************************************************
-OPTIONS
-*****************************************************
-
-
---gp-k=<timestamp_key>
-
-Required. The 14-digit timestamp key that uniquely identifies the 
-backup set of data to restore. This timestamp can be found in the gp_dump 
-log file output, as well as at the end of the dump files created by a 
-gp_dump operation. It is of the form YYYYMMDDHHMMSS.
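A quick shape check for the key can be done in the shell before invoking
gp_restore. This is only an illustrative sanity check, not part of the
utility itself, and the key value is hypothetical:

```shell
# Verify a --gp-k value looks like a 14-digit YYYYMMDDHHMMSS key.
key=20051031124530
if printf '%s' "$key" | grep -Eq '^[0-9]{14}$'; then
  echo "key has the expected 14-digit form"
else
  echo "key is not a 14-digit timestamp"
fi
```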
-
-
--d <database_name>
---dbname=<dbname>
-
-Required. The name of the database to connect to in order to restore 
-the user data. The database(s) you are restoring must already exist; 
-gp_restore does not create them.
- 
-
--i
---ignore-version
-
-Ignores a version mismatch between gp_restore and the database server.
-
-
--v
---verbose
-
-Specifies verbose mode.
-
-
--a
---data-only
-
-Restore only the data, not the schema (data definitions).
-
-
--c
---clean
-
-Clean (drop) database objects before recreating them.
-
-
--s
---schema-only
-
-Restores only the schema (data definitions), no user data is restored.
-
-
--h <hostname>
---host=<hostname>
-
-The host name of the master host. If not provided, the value of PGHOST 
-or the local host is used.
-
-
--p <port>
---port=<port>
-
-The master port. If not provided, the value of PGPORT or 
-the port number provided at compile time is used.
-
-
--U <username>
---username=<username>
-
-The database superuser account name, for example gpadmin. If not 
-provided, the value of PGUSER or the current OS user name is used.
-
-
--W
-
-Forces a password prompt. This will happen automatically if the 
-server requires password authentication.
-
-
---gp-c
-
-Use gunzip for inline decompression.
-
-
---gp-i
-
-Specifies that processing should ignore any errors that occur. Use 
-this option to continue restore processing on errors.
-
-
---gp-d=<directoryname>
-
-Specifies the relative or absolute path to backup files on the hosts. 
-If this is a relative path, it is considered to be relative to the data 
-directory. If not specified, defaults to the data directory of each instance 
-being restored. Use this option if you created your backup files in an 
-alternate location when running gp_dump.
-
-
---gp-r=<reportfile>
-
-Specifies the full path name where the restore job report file will 
-be placed on the master host. If not specified, defaults to the 
-master data directory.
-
-
---gp-l={a|p}
-
-Specifies whether to check for backup files on (a)ll segment instances 
-or only on (p)rimary segment instances. The default is to check for 
-primary segment backup files only, and then recreate the corresponding 
-mirrors.
-
-
-*****************************************************
-EXAMPLES
-*****************************************************
-
-
-Restore a Greenplum database using backup files created by gp_dump:
-
-gp_restore --gp-k=20051031124530 -d gpdb
-
-
-*****************************************************
-SEE ALSO
-*****************************************************
-
-
-gp_dump, gprebuildsystem, gprebuildseg
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/3691f236/tools/doc/gpaddmirrors_help
----------------------------------------------------------------------
diff --git a/tools/doc/gpaddmirrors_help b/tools/doc/gpaddmirrors_help
deleted file mode 100755
index 5ee2089..0000000
--- a/tools/doc/gpaddmirrors_help
+++ /dev/null
@@ -1,253 +0,0 @@
-COMMAND NAME: gpaddmirrors
-
-Adds mirror segments to a Greenplum Database system that was 
-initially configured without mirroring.
-
-
-*****************************************************
-SYNOPSIS
-*****************************************************
-
-
-gpaddmirrors [-p <port_offset>] [-m <datadir_config_file> [-a]] [-s] 
-             [-d <master_data_directory>] [-B <parallel_processes>] 
-             [-l <logfile_directory>] [-v]
-
-gpaddmirrors -i <mirror_config_file> [-s] [-a] 
-             [-d <master_data_directory>] [-B <parallel_processes>] 
-             [-l <logfile_directory>] [-v]
-
-gpaddmirrors -o <output_sample_mirror_config> [-m <datadir_config_file>]
-
-gpaddmirrors -? 
-
-gpaddmirrors --version
-
-
-*****************************************************
-DESCRIPTION
-*****************************************************
-The gpaddmirrors utility configures mirror segment instances for an 
-existing Greenplum Database system that was initially configured with 
-primary segment instances only. The utility will create the mirror 
-instances and begin the online replication process between the primary 
-and mirror segment instances. Once all mirrors are synchronized with 
-their primaries, your Greenplum Database system is fully data redundant.
-
-By default, the utility will prompt you for the file system location(s) 
-where it will create the mirror segment data directories. If you do not 
-want to be prompted, you can pass in a file containing the file system 
-locations using the -m option.
-
-The mirror locations and ports must be different from your primary 
-segment data locations and ports. If you have created additional filespaces, 
-you will also be prompted for mirror locations for each of your filespaces.
-
-The utility will create a unique data directory for each mirror segment 
-instance in the specified location using the predefined naming convention. 
-There must be the same number of file system locations declared for mirror 
-segment instances as for primary segment instances. It is OK to specify 
-the same directory name multiple times if you want your mirror data 
-directories created in the same location, or you can enter a different 
-data location for each mirror. Enter the absolute path. For example:
-
-Enter mirror segment data directory location 1 of 2 > /gpdb/mirror
-Enter mirror segment data directory location 2 of 2 > /gpdb/mirror
-OR
-Enter mirror segment data directory location 1 of 2 > /gpdb/m1
-Enter mirror segment data directory location 2 of 2 > /gpdb/m2
-
-Alternatively, you can run the gpaddmirrors utility and supply a 
-detailed configuration file using the -i option. This is useful if 
-you want your mirror segments on a completely different set of hosts 
-than your primary segments. The format of the mirror configuration file is:
-
-filespaceOrder=[<filespace1_fsname>[:<filespace2_fsname>:...]]
-mirror<content>=<content>:<address>:<port>:<mir_replication_port>:<pri_replication_port>:<fselocation>[:<fselocation>:...]
-
-For example (if you do not have additional filespaces configured 
-besides the default pg_system filespace):
-
-filespaceOrder=
-mirror0=0:sdw1-1:60000:61000:62000:/gpdata/mir1/gp0
-mirror1=1:sdw1-1:60001:61001:62001:/gpdata/mir2/gp1
-
-The gp_segment_configuration, pg_filespace, and pg_filespace_entry 
-system catalog tables can help you determine your current primary 
-segment configuration so that you can plan your mirror segment 
-configuration. For example, run the following query:
-
-=# SELECT dbid, content, address as host_address, port, 
-   replication_port, fselocation as datadir 
-   FROM gp_segment_configuration, pg_filespace_entry 
-   WHERE dbid=fsedbid 
-   ORDER BY dbid;
-
-If creating your mirrors on alternate mirror hosts, the new 
-mirror segment hosts must be pre-installed with the Greenplum 
-Database software and configured exactly the same as the 
-existing primary segment hosts. 
-
-You must make sure that the user who runs gpaddmirrors (the 
-gpadmin user) has permissions to write to the data directory 
-locations specified. You may want to create these directories 
-on the segment hosts and chown them to the appropriate user 
-before running gpaddmirrors.
-
-
-*****************************************************
-OPTIONS
-*****************************************************
-
--a (do not prompt)
-
- Run in quiet mode - do not prompt for information. Must supply 
- a configuration file with either -m or -i if this option is used.
-
-
--B <parallel_processes>
-
- The number of mirror setup processes to start in parallel. If 
- not specified, the utility will start up to 10 parallel processes 
- depending on how many mirror segment instances it needs to set up.
-
-
--d <master_data_directory>
-
- The master data directory. If not specified, the value set for 
- $MASTER_DATA_DIRECTORY will be used.
-
-
--i <mirror_config_file>
-
- A configuration file containing one line for each mirror segment 
- you want to create. You must have one mirror segment listed for 
- each primary segment in the system. The format of this file is as 
- follows (as per attributes in the gp_segment_configuration, 
- pg_filespace, and pg_filespace_entry catalog tables):
-
-   filespaceOrder=[<filespace1_fsname>[:<filespace2_fsname>:...]]
-   mirror<content>=<content>:<address>:<port>:<mir_replication_port>:<pri_replication_port>:<fselocation>[:<fselocation>:...]
-
- Note that you only need to list filespace names on the filespaceOrder 
- line if your system has multiple filespaces configured. If your system 
- does not have additional filespaces configured besides the default 
- pg_system filespace, this file will only have one location per segment 
- (for the default data directory filespace, pg_system). pg_system does 
- not need to be listed in the filespaceOrder line. It will always be 
- the first <fselocation> listed after <pri_replication_port>.
-
-
--l <logfile_directory>
-
- The directory to write the log file. Defaults to ~/gpAdminLogs.
-
-
--m <datadir_config_file>
-
- A configuration file containing a list of file system locations where 
- the mirror data directories will be created. If not supplied, the 
- utility will prompt you for locations. Each line in the file specifies 
- a mirror data directory location. For example:
-   /gpdata/m1
-   /gpdata/m2
-   /gpdata/m3
-   /gpdata/m4
- If your system has additional filespaces configured in addition to the 
- default pg_system filespace, you must also list file system locations 
- for each filespace as follows:
-    filespace filespace1
-    /gpfs1/m1
-    /gpfs1/m2
-    /gpfs1/m3
-    /gpfs1/m4
-
-
--o <output_sample_mirror_config>
-
- If you are not sure how to lay out the mirror configuration file 
- used by the -i option, you can run gpaddmirrors with this option 
- to generate a sample mirror configuration file based on your 
- primary segment configuration. The utility will prompt you for 
- your mirror segment data directory locations (unless you provide 
- these in a file using -m). You can then edit this file to change 
- the host names to alternate mirror hosts if necessary.
-
-
--p <port_offset>
-
- Optional. This number is used to calculate the database ports 
- and replication ports used for mirror segments. The default offset 
- is 1000. Mirror port assignments are calculated as follows: 
-	primary port + offset = mirror database port
-	primary port + (2 * offset) = mirror replication port
-	primary port + (3 * offset) = primary replication port
- For example, if a primary segment has port 50001, then its mirror 
- will use a database port of 51001, a mirror replication port of 
- 52001, and a primary replication port of 53001 by default.
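The default calculation above can be sketched as shell arithmetic. The
offset and primary port values match the example; they are otherwise
arbitrary:

```shell
# Default gpaddmirrors port arithmetic with offset 1000 and a
# hypothetical primary segment on port 50001.
offset=1000
primary=50001
echo "mirror database port:     $((primary + offset))"       # 51001
echo "mirror replication port:  $((primary + 2 * offset))"   # 52001
echo "primary replication port: $((primary + 3 * offset))"   # 53001
```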
-
-
--s (spread mirrors)
-
- Spreads the mirror segments across the available hosts. The 
- default is to group a set of mirror segments together on an 
- alternate host from their primary segment set. Mirror spreading 
- will place each mirror on a different host within the Greenplum 
- Database array. Spreading is only allowed if there is a sufficient 
- number of hosts in the array (number of hosts is greater than 
- or equal to the number of segment instances per host).
-
-
--v (verbose)
-
- Sets logging output to verbose.
-
-
---version (show utility version)
-
- Displays the version of this utility.
-
-
--? (help)
-
- Displays the online help.
-
-
-*****************************************************
-EXAMPLES
-*****************************************************
-
-Add mirroring to an existing Greenplum Database system using 
-the same set of hosts as your primary data. Calculate the mirror 
-database and replication ports by adding 100 to the current 
-primary segment port numbers:
-
-  $ gpaddmirrors -p 100
-
-
-Add mirroring to an existing Greenplum Database system using a 
-different set of hosts from your primary data:
-
-$ gpaddmirrors -i mirror_config_file
-
-Where the mirror_config_file looks something like this (if you do not 
-have additional filespaces configured besides the default pg_system 
-filespace):
-
-filespaceOrder=
-mirror0=0:sdw1-1:52001:53001:54001:/gpdata/mir1/gp0
-mirror1=1:sdw1-2:52002:53002:54002:/gpdata/mir2/gp1
-mirror2=2:sdw2-1:52001:53001:54001:/gpdata/mir1/gp2
-mirror3=3:sdw2-2:52002:53002:54002:/gpdata/mir2/gp3
-
-
-Output a sample mirror configuration file to use with gpaddmirrors -i:
-
-  $ gpaddmirrors -o /home/gpadmin/sample_mirror_config
-
-
-*****************************************************
-SEE ALSO
-*****************************************************
-
-gpinitsystem, gpinitstandby, gpactivatestandby

http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/3691f236/tools/doc/gpbitmapreindex_help
----------------------------------------------------------------------
diff --git a/tools/doc/gpbitmapreindex_help b/tools/doc/gpbitmapreindex_help
deleted file mode 100644
index c81311e..0000000
--- a/tools/doc/gpbitmapreindex_help
+++ /dev/null
@@ -1,111 +0,0 @@
-COMMAND NAME: gpbitmapreindex
-
-Rebuilds bitmap indexes after a 3.3.x to 4.0.x upgrade.
-
-
-*****************************************************
-SYNOPSIS
-*****************************************************
-
-gpbitmapreindex -m { r | d | {l [-o <output_sql_file>]} }
-                [-h <master_host>] [-p <master_port>] 
-                [-n <number_of_processes>] [-v]
-
-gpbitmapreindex --version
-
-gpbitmapreindex --help | -?
-
-
-*****************************************************
-DESCRIPTION
-*****************************************************
-
-The on-disk format of bitmap indexes has changed from release 
-3.3.x to 4.0.x. Users who upgrade must rebuild all bitmap indexes 
-after upgrading to 4.0. The gpbitmapreindex utility facilitates the 
-upgrade of bitmap indexes by either running the REINDEX command to 
-reindex them, or running the DROP INDEX command to simply remove them. 
-If you decide to drop your bitmap indexes rather than reindex them, 
-run gpbitmapreindex in list mode with --outfile first to output a SQL 
-file that you can use to recreate the indexes later. You must be the 
-Greenplum Database superuser (gpadmin) in order to run gpbitmapreindex.
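The drop-then-recreate workflow described above can be sketched as a dry
run. The commands are echoed rather than executed, and the output file
path and database name are illustrative:

```shell
# Echo the command sequence for the drop path: save recreate SQL in
# list mode, drop the indexes, then replay the SQL after the upgrade.
for cmd in \
  "gpbitmapreindex -m l -o /home/gpadmin/bmp_ix.sql" \
  "gpbitmapreindex -m d" \
  "psql mydb -f /home/gpadmin/bmp_ix.sql"
do
  echo "$cmd"
done
```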
-
-
-*****************************************************
-OPTIONS
-*****************************************************
-
--h <host> | --host <host>
-
-Specifies the host name of the machine on which the Greenplum 
-master database server is running. If not specified, reads from 
-the environment variable PGHOST or defaults to localhost.
-
-
--m {r|d|l} | --mode {reindex|drop|list}
-
-Required. The bitmap index upgrade mode: either reindex, drop, 
-or list all bitmap indexes in the system.
-
-
--n <number_of_processes> | --parallel <number_of_processes>
-
-The number of bitmap indexes to reindex or drop in parallel. 
-Valid values are 1-16. The default is 1.
-
-
--o <output_sql_file> | --outfile <output_sql_file>
-
-When used with list mode, outputs a SQL file that can be 
-used to recreate the bitmap indexes.
-
-
--p <port> | --port <port>
-
-Specifies the TCP port on which the Greenplum master database 
-server is listening for connections. If not specified, reads from 
-the environment variable PGPORT or defaults to 5432.
-
-
--v | --verbose
-
-Show verbose output.
-
-
---version
-
-Displays the version of this utility. 
-
-
--? | --help
-
-Displays the online help.
-
-
-*****************************************************
-EXAMPLES
-*****************************************************
-
-Reindex all bitmap indexes:
-
-     gpbitmapreindex -m r
-
-
-Output a file of SQL commands that can be used to recreate all 
-bitmap indexes:
-
-     gpbitmapreindex -m list --outfile /home/gpadmin/bmp_ix.sql
-
-
-Drop all bitmap indexes and run in verbose mode:
-
-     gpbitmapreindex -m d -v
-
-
-*****************************************************
-SEE ALSO
-*****************************************************
-
-REINDEX, DROP INDEX, CREATE INDEX
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/3691f236/tools/doc/gpcheckos.xml
----------------------------------------------------------------------
diff --git a/tools/doc/gpcheckos.xml b/tools/doc/gpcheckos.xml
deleted file mode 100644
index ecdd752..0000000
--- a/tools/doc/gpcheckos.xml
+++ /dev/null
@@ -1,91 +0,0 @@
-<?xml version="1.0"?>
-<gpcheckosxml>
-<osParm>
- 	<sysctlConf>
-        	<param>net.ipv4.ip_forward</param>
-        	<value>0</value>
- 	</sysctlConf>
- 	<sysctlConf>
-		<param>net.ipv4.tcp_tw_recycle</param>
-        	<value>1</value>
- 	</sysctlConf>
- 	<sysctlConf>
-		<param>kernel.sem</param>
-        	<value>250  64000  100  512</value>
- 	</sysctlConf>
- 	<sysctlConf>
-		<param>kernel.shmall</param>
-        	<value>4000000000</value>
- 	</sysctlConf>
- 	<sysctlConf>
-		<param>kernel.shmmni</param>
-        	<value>4096</value>
- 	</sysctlConf>
- 	<sysctlConf>
-		<param>kernel.shmmax</param>
-		<value>500000000</value>
- 	</sysctlConf>
- 	<sysctlConf>
-		<param>kernel.msgmax</param>
-		<value>65536</value>
- 	</sysctlConf>
- 	<sysctlConf>
-	  	<param>kernel.msgmnb</param>
-		<value>65536</value>
- 	</sysctlConf>
- 	<sysctlConf>
-		<param>net.ipv4.tcp_syncookies</param>
-        	<value>1</value>
- 	</sysctlConf>
- 	<sysctlConf>
-		<param>kernel.core_uses_pid</param>
-        	<value>1</value>
- 	</sysctlConf>
- 	<sysctlConf>
-        	<param>net.ipv4.conf.default.accept_source_route</param>
-        	<value>0</value>
- 	</sysctlConf>
- 	<sysctlConf>
-		<param>net.ipv4.tcp_max_syn_backlog</param>
-        	<value>1</value>
- 	</sysctlConf>
- 	<sysctlConf>
-		<param>net.core.netdev_max_backlog</param>
-        	<value>10000</value>
- 	</sysctlConf>
- 	<sysctlConf>
-		<param>vm.overcommit_memory</param>
-        	<value>2</value>
- 	</sysctlConf>
- 	<sysctlConf>
-		<param>kernel.sysrq</param>
-        	<value>0</value>
- 	</sysctlConf>
-        <limitsConf>
- 		<limit>nofile</limit>
-		<softValue>* soft nofile 65536</softValue>
-		<hardValue>* hard  nofile 65536</hardValue>
-	</limitsConf>
-        <limitsConf>
- 		<limit>nproc</limit>
-		<softValue>* soft  nproc 131072</softValue>
-		<hardValue>* hard  nproc 131072</hardValue>
-	</limitsConf>
-        <blockDev>
-            	<target>/dev/sd?</target> 
-                <operation>setra</operation>
-		<opValue>16384</opValue>
-        </blockDev>
-        <grub>
- 		<appendValue>elevator=deadline</appendValue>
-        </grub>
-
-</osParm>
-<refPlatform>
-  	<Dell>
-             <model>PowerEdge R710</model>
-  	</Dell>
-  	<hp>
-             <model>ProLiant DL185</model>
-             <ctrlUtil>/usr/sbin/hpacucli</ctrlUtil>
-  	</hp>
-</refPlatform>
-</gpcheckosxml>

http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/3691f236/tools/doc/gpcheckos_help
----------------------------------------------------------------------
diff --git a/tools/doc/gpcheckos_help b/tools/doc/gpcheckos_help
deleted file mode 100755
index 039c474..0000000
--- a/tools/doc/gpcheckos_help
+++ /dev/null
@@ -1,3 +0,0 @@
-COMMAND NAME: gpcheckos
-
-THIS UTILITY IS DEPRECATED - USE gpcheck INSTEAD.

http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/3691f236/tools/doc/gpcrondump_help
----------------------------------------------------------------------
diff --git a/tools/doc/gpcrondump_help b/tools/doc/gpcrondump_help
deleted file mode 100755
index f3bf009..0000000
--- a/tools/doc/gpcrondump_help
+++ /dev/null
@@ -1,330 +0,0 @@
-COMMAND NAME: gpcrondump
-
-A wrapper utility for gp_dump, which can be called directly or 
-from a crontab entry.
-
-*****************************************************
-SYNOPSIS
-*****************************************************
-
-gpcrondump -x <database_name> 
-     [-s <schema> | -t <schema>.<table> | -T <schema>.<table>] 
-     [--table-file="<filename>" | --exclude-table-file="<filename>"] 
-     [-u <backup_directory>] [-R <post_dump_script>] 
-     [-c] [-z] [-r] [-f <free_space_percent>] [-b] [-h] [-j | -k] 
-     [-g] [-G] [-C] [-d <master_data_directory>] [-B <parallel_processes>] 
-     [-a] [-q] [-y <reportfile>] [-l <logfile_directory>] [-v]
-     [-E <encoding>] [--inserts | --column-inserts] [--oids] 
-     [--no-owner | --use-set-session-authorization] 
-     [--no-privileges] [--rsyncable]
-     
-gpcrondump -o
-
-gpcrondump -? 
-
-gpcrondump --version
-
-
-*****************************************************
-DESCRIPTION
-*****************************************************
-
-gpcrondump is a wrapper utility for gp_dump. By default, dump files are 
-created in their respective master and segment data directories in a 
-directory named db_dumps/YYYYMMDD. The data dump files are compressed 
-by default using gzip.
-
-gpcrondump allows you to schedule routine backups of a Greenplum database 
-using cron (a scheduling utility for UNIX operating systems). Cron jobs 
-that call gpcrondump should be scheduled on the master host.
-
-gpcrondump is used to schedule Data Domain Boost backup and restore 
-operations. gpcrondump is also used to set or remove one-time 
-credentials for Data Domain Boost.
-
-**********************
-Return Codes
-**********************
-
-The following is a list of the codes that gpcrondump returns.
-   0 - Dump completed with no problems
-   1 - Dump completed, but one or more warnings were generated
-   2 - Dump failed with a fatal error
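-A wrapper script can branch on these codes directly. The sketch below stubs gpcrondump with a shell function so the exit-code handling can be shown on its own; in practice the real utility on the PATH would be called instead, and the database name is illustrative:

```shell
# Stub standing in for the real gpcrondump binary (assumption: in
# production the real utility is invoked instead). The stub pretends
# the dump finished but generated warnings (return code 1).
gpcrondump() { return 1; }

gpcrondump -x sales -a -q
rc=$?

# Map the documented return codes to a status message.
case "$rc" in
  0) msg="dump completed with no problems" ;;
  1) msg="dump completed with warnings" ;;
  *) msg="dump failed" ;;
esac
echo "$msg"
```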
-
-**********************
-EMAIL NOTIFICATIONS
-**********************
-To have gpcrondump send out status email notifications, you must place 
-a file named mail_contacts in the home directory of the Greenplum 
-superuser (gpadmin) or in the same directory as the gpcrondump utility 
-($GPHOME/bin). This file should contain one email address per line. 
-gpcrondump will issue a warning if it cannot locate a mail_contacts file 
-in either location. If both locations have a mail_contacts file, then 
-the one in $HOME takes precedence.
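-A minimal way to create the file is shown below. A temporary directory stands in for gpadmin's home directory, and the addresses are placeholders:

```shell
# Write a mail_contacts file with one email address per line.
# In practice this file lives in gpadmin's $HOME or in $GPHOME/bin.
home=$(mktemp -d)
cat > "$home/mail_contacts" <<'EOF'
dba-primary@example.com
dba-oncall@example.com
EOF

# Confirm the file contains one address per line.
n=$(wc -l < "$home/mail_contacts")
echo "$n addresses configured"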
-
-
-*****************************************************
-OPTIONS
-*****************************************************
-
--a (do not prompt)
-
- Do not prompt the user for confirmation.
-
-
--b (bypass disk space check)
-
- Bypass disk space check. The default is to check for available disk space.
-
- Note: Bypassing the disk space check generates a warning message. 
- With a warning message, the return code for gpcrondump is 1 if the 
- dump is successful. (If the dump fails, the return code is 2, in all cases.)
-
-
--B <parallel_processes>
-
- The number of segments to check in parallel for pre/post-dump validation. 
- If not specified, the utility will start up to 60 parallel processes 
- depending on how many segment instances it needs to dump.
-
-
--c (clear old dump files first)
-
- Clear out old dump files before doing the dump. The default is not to 
- clear out old dump files. This will remove all old dump directories in 
- the db_dumps directory, except for the dump directory of the current date.
-
-
--C (clean old catalog dumps)
-
- Clean out old catalog schema dump files prior to create.
-
-
---column-inserts
- 
- Dump data as INSERT commands with column names.
-
-
--d <master_data_directory>
-
- The master host data directory. If not specified, the value 
- set for $MASTER_DATA_DIRECTORY will be used.
-
--E encoding
-
- Character set encoding of dumped data. Defaults to the encoding of 
- the database being dumped.
-
-
--f <free_space_percent>
-
- When doing the check to ensure that there is enough free disk space to 
- create the dump files, specifies a percentage of free disk space that 
- should remain after the dump completes. The default is 10 percent.
-
--g (copy config files)
-
- Secure a copy of the master and segment configuration files 
- postgresql.conf, pg_ident.conf, and pg_hba.conf. These 
- configuration files are dumped in the master or segment data 
- directory to db_dumps/YYYYMMDD/config_files_<timestamp>.tar
-
--G (dump global objects)
-
- Use pg_dumpall to dump global objects such as roles and tablespaces. 
- Global objects are dumped in the master data directory to 
- db_dumps/YYYYMMDD/gp_global_1_1_<timestamp>.
-
--h (record dump details)
-
-  Records details of the database dump in the table
-  public.gpcrondump_history in the database supplied via the
-  -x option. The utility creates the table if it does not
-  already exist.
-
-
---inserts
-
- Dump data as INSERT, rather than COPY commands.
-
-
--j (vacuum before dump)
-
- Run VACUUM before the dump starts.
-
-
--k (vacuum after dump)
-
- Run VACUUM after the dump has completed successfully.
-
-
--l <logfile_directory>
-
- The directory to write the log file. Defaults to ~/gpAdminLogs.
-
-
---no-owner
-
- Do not output commands to set object ownership.
-
-
---no-privileges
-
- Do not output commands to set object privileges (GRANT/REVOKE commands).
-
-
--o (clear old dump files only)
-
- Clear out old dump files only, but do not run a dump. This will remove 
- the oldest dump directory except the current date's dump directory. 
- All dump sets within that directory will be removed.
-
-
---oids
-
- Include object identifiers (oid) in dump data.
-
-
--q (no screen output)
-
- Run in quiet mode. Command output is not displayed on the screen, 
- but is still written to the log file.
-
-
--r (rollback on failure)
-
- Rollback the dump files (delete a partial dump) if a failure 
- is detected. The default is to not rollback.
-
-
--R <post_dump_script>
-
- The absolute path of a script to run after a successful dump operation. 
- For example, you might want a script that moves completed dump files 
- to a backup host. This script must reside in the same location on 
- the master and all segment hosts.
-
-
---rsyncable
-
- Passes the --rsyncable flag to the gzip utility to synchronize
- the output occasionally, based on the input during compression.
- This synchronization increases the file size by less than 1% in
- most cases. When this flag is passed, the rsync(1) program can
- synchronize compressed files much more efficiently. The gunzip
- utility cannot differentiate between a compressed file created
- with this option, and one created without it.  
-
- 
--s <schema_name>
-
- Dump only the named schema in the named database.
-
-
--t <schema>.<table_name>
-
- Dump only the named table in this database.
- The -t option can be specified multiple times.
-
-
--T <schema>.<table_name>
-
- A table name to exclude from the database dump. 
- The -T option can be specified multiple times.
-
---exclude-table-file="<filename>"
-
-  Exclude all tables listed in <filename> from the 
-  database dump. The file <filename> contains any 
-  number of tables, listed one per line.
-
---table-file="<filename>"
-
-  Dump only the tables listed in <filename>.
-  The file <filename> contains any 
-  number of tables, listed one per line.
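-The table file is plain text with one schema-qualified table per line. A sketch of building one (the table names and database are illustrative only):

```shell
# Build a table list file for --table-file= (one schema-qualified
# table per line); names here are placeholders.
dir=$(mktemp -d)
printf '%s\n' public.orders public.customers sales.q1_facts \
    > "$dir/tables.txt"

count=$(wc -l < "$dir/tables.txt")
echo "$count tables listed"

# In production this would then be passed as:
#   gpcrondump -x mydb --table-file="$dir/tables.txt"
```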
-
--u <backup_directory>
-
- Specifies the absolute path where the backup files will be 
- placed on each host. If the path does not exist, it will be 
- created, if possible. If not specified, defaults to the data 
- directory of each instance to be backed up. Using this option 
- may be desirable if each segment host has multiple segment 
- instances as it will create the dump files in a centralized 
- location rather than the segment data directories.
-
-
---use-set-session-authorization
-
- Use SET SESSION AUTHORIZATION commands instead of ALTER OWNER 
- commands to set object ownership.
-
-
--v | --verbose
-
- Specifies verbose mode.
-
-
---version (show utility version)
-
- Displays the version of this utility.
-
-
--x <database_name>
-
- Required. The name of the Greenplum database to dump.
- Multiple databases can be specified in a comma-separated list.
-
-
--y <reportfile>
-
- Specifies the full path name where the backup job log file will 
- be placed on the master host. If not specified, defaults to the 
- master data directory or if running remotely, the current working 
- directory.
-
-
--z (no compression)
-
- Do not use compression. Default is to compress the dump files 
- using gzip.
-
- We recommend using this option for NFS and Data Domain 
- Boost backups.
-
-
--? (help)
-
- Displays the online help.
-
-
-*****************************************************
-EXAMPLES
-*****************************************************
-
-Call gpcrondump directly and dump mydatabase (and global objects):
-
-  gpcrondump -x mydatabase -c -g -G
-
-A crontab entry that runs a backup of the sales database 
-(and global objects) nightly at one past midnight:
-
-  01 0 * * * /home/gpadmin/gpdump.sh >> gpdump.log
-
-The content of dump script gpdump.sh is:
-
-  #!/bin/bash
-  export GPHOME=/usr/local/greenplum-db
-  export MASTER_DATA_DIRECTORY=/data/gpdb_p1/gp-1
-  . $GPHOME/greenplum_path.sh  
-  gpcrondump -x sales -c -g -G -a -q 
-
-
-*****************************************************
-SEE ALSO
-*****************************************************
-
-gp_dump, gpdbrestore
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/3691f236/tools/doc/gpdbrestore_help
----------------------------------------------------------------------
diff --git a/tools/doc/gpdbrestore_help b/tools/doc/gpdbrestore_help
deleted file mode 100644
index 7cfd7d1..0000000
--- a/tools/doc/gpdbrestore_help
+++ /dev/null
@@ -1,203 +0,0 @@
-COMMAND NAME: gpdbrestore
-
-A wrapper utility around gp_restore. Restores a database from 
-a set of dump files generated by gpcrondump.
-
-*****************************************************
-SYNOPSIS
-*****************************************************
-
-
-gpdbrestore { -t <timestamp_key> [-L] 
-              | -b YYYYMMDD 
-              | -R <hostname>:<path_to_dumpset> 
-              | -s <database_name> } 
-     [-T <schema>.<table> [,...]] [-e] [-G] [-B <parallel_processes>] 
-     [-d <master_data_directory>] [-a] [-q] [-l <logfile_directory>] 
-     [-v]
-
-gpdbrestore -? 
-
-gpdbrestore --version
-
-
-*****************************************************
-DESCRIPTION
-*****************************************************
-
-gpdbrestore is a wrapper around gp_restore, which provides some 
-convenience and flexibility in restoring from a set of backup 
-files created by gpcrondump. This utility provides the following 
-additional functionality on top of gp_restore:
-
-* Automatically reconfigures for compression. 
-
-* Validates that the number of dump files is correct (for primary 
-  only, mirror only, primary and mirror, or a subset consisting 
-  of some mirror and primary segment dump files). 
-
-* If a failed segment is detected, restores to active segment instances.
-
-* You do not need to know the complete timestamp key (-t) of the 
-  backup set to restore. Additional options let you instead give 
-  just a date (-b), a backup set directory location (-R), or a 
-  database name (-s) to restore.
-
-* The -R option allows you to restore from a backup set 
-  located on a host outside of the Greenplum Database array 
-  (archive host), and ensures that the correct dump file goes 
-  to the correct segment instance.
-
-* Identifies the database name automatically from the backup set.
-
-* Allows you to restore particular tables only (-T option) instead 
-  of the entire database. Note that single tables are not automatically 
-  dropped or truncated prior to restore.
-
-* Can restore global objects such as roles and tablespaces (-G option).
-
-* Detects if the backup set is primary segments only or primary 
-  and mirror segments and passes the appropriate options to gp_restore.
-
-* Allows you to drop the target database before a restore in a 
-  single operation. 
-
-Error Reporting
-
-gpdbrestore does not report errors automatically. After the restore 
-is completed, check the report status files to verify that there 
-are no errors. The restore status files are stored in the 
-db_dumps/<date>/ directory by default. 
-
-
-*****************************************************
-OPTIONS
-*****************************************************
-
--a (do not prompt)
-
- Do not prompt the user for confirmation.
-
-
--b YYYYMMDD
-
- Looks for dump files in the segment data directories on the 
- Greenplum Database array of hosts in db_dumps/YYYYMMDD.
-
--B <parallel_processes>
-
- The number of segments to check in parallel for pre/post-restore 
- validation. If not specified, the utility will start up to 60 
- parallel processes depending on how many segment instances it 
- needs to restore.
-
-
--d <master_data_directory>
-
- Optional. The master host data directory. If not specified, the 
- value set for $MASTER_DATA_DIRECTORY will be used.
-
-
--e (drop target database before restore)
-
- Drops the target database before doing the restore and then recreates it.
-
-
--G (restore global objects)
-
- Restores global objects such as roles and tablespaces if the global 
- object dump file db_dumps/<date>/gp_global_1_1_<timestamp> is found 
- in the master data directory.
-
-
--l <logfile_directory>
-
- The directory to write the log file. Defaults to ~/gpAdminLogs.
-
-
--L (list tablenames in backup set)
-
- When used with the -t option, lists the table names that exist in 
- the named backup set and exits. Does not do a restore.
-
-
--q (no screen output)
-
- Run in quiet mode. Command output is not displayed on the screen, 
- but is still written to the log file.
-
-
--R <hostname>:<path_to_dumpset>
-
- Allows you to provide a hostname and full path to a set of dump 
- files. The host does not have to be in the Greenplum Database array 
- of hosts, but must be accessible from the Greenplum master.
-
-
--s <database_name>
-
- Looks for the latest set of dump files for the given database name 
- in the db_dumps directory within the segment data directories on 
- the Greenplum Database array of hosts.
-
-
--t <timestamp_key>
-
- The 14-digit timestamp key that uniquely identifies a backup set 
- of data to restore. It is of the form YYYYMMDDHHMMSS. Looks for 
- dump files matching this timestamp key in the db_dumps directory 
- within the segment data directories on the Greenplum Database 
- array of hosts.
-
-
--T <schema>.<table_name>
-
- A comma-separated list of specific table names to restore. The 
- named table(s) must exist in the backup set of the database being 
- restored. Existing tables are not automatically truncated before 
- data is restored from backup. If your intention is to replace 
- existing data in the table from backup, truncate the table prior 
- to running gpdbrestore -T.
-
-
--v | --verbose
-  Specifies verbose mode.
-
-
---version (show utility version)
-
- Displays the version of this utility.
-
-
--? (help)
-
- Displays the online help.
-
-
-*****************************************************
-EXAMPLES
-*****************************************************
-
-Restore the sales database from the latest backup files generated 
-by gpcrondump (assumes backup files are in the segment data 
-directories in db_dumps):
-
-  gpdbrestore -s sales
-
-
-Restore a database from backup files that reside on an archive 
-host outside the Greenplum Database array (command issued on the 
-Greenplum master host):
-
-  gpdbrestore -R archivehostname:/data_p1/db_dumps/20080214
-
-
-Restore global objects only (roles and tablespaces):
-
-  gpdbrestore -G
-
-
-*****************************************************
-SEE ALSO
-*****************************************************
-
-gpcrondump, gp_restore

http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/3691f236/tools/doc/gpdeletesystem_help
----------------------------------------------------------------------
diff --git a/tools/doc/gpdeletesystem_help b/tools/doc/gpdeletesystem_help
deleted file mode 100755
index afb19ad..0000000
--- a/tools/doc/gpdeletesystem_help
+++ /dev/null
@@ -1,97 +0,0 @@
-COMMAND NAME: gpdeletesystem
-
-Deletes a Greenplum Database system that was initialized 
-using gpinitsystem.
-
-
-*****************************************************
-SYNOPSIS
-*****************************************************
-
-gpdeletesystem -d <master_data_directory> [-B <parallel_processes>] 
-[-f] [-l <logfile_directory>] [-D] 
-
-gpdeletesystem -? 
-
-gpdeletesystem -v
-
-
-*****************************************************
-DESCRIPTION
-*****************************************************
-
-The gpdeletesystem script will perform the following actions:
-
-* Stop all postmaster processes (the segment instances and master instance).
-
-* Delete all data directories.
-
-Before running this script, you should move any backup files 
-(created by gp_dump) out of the master and segment data directories.
-
-This script will not uninstall the Greenplum Database software.
-
-
-*****************************************************
-OPTIONS
-*****************************************************
-
--d <master_data_directory>
-
-Required. The master host data directory.
-
-
--B <parallel_processes>
-
-The number of segments to delete in parallel. If not specified, the 
-script will start up to 60 parallel processes depending on how many 
-segment instances it needs to delete.
-
-
--f (force)
-
-Force a delete even if backup files are found in the data directories. 
-The default is to not delete Greenplum Database instances if backup 
-files (created by gp_dump) are present.
-
-
--l <logfile_directory>
-
-The directory to write the log file. Defaults to ~/gpAdminLogs.
-
-
--D (debug)
-
-Sets logging level to debug.
-
-
--? (help)
-
-Displays the online help.
-
-
--v (show script version)
-
-Displays the version, status, last updated date, and check sum of this script.
-
-
-*****************************************************
-EXAMPLES
-*****************************************************
-
-Delete a Greenplum Database system:
-
-gpdeletesystem -d /gpdata/gp-1
-
-
-Delete a Greenplum Database system even if backup files are present:
-
-gpdeletesystem -d /gpdata/gp-1 -f
-
-
-*****************************************************
-SEE ALSO
-*****************************************************
-
-gpinitsystem, gp_dump
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/3691f236/tools/doc/gpdetective_help
----------------------------------------------------------------------
diff --git a/tools/doc/gpdetective_help b/tools/doc/gpdetective_help
deleted file mode 100644
index 2b8b2f9..0000000
--- a/tools/doc/gpdetective_help
+++ /dev/null
@@ -1,187 +0,0 @@
-COMMAND NAME: gpdetective
-
-Collects diagnostic information from a running HAWQ system.
-
-
-*****************************************************
-SYNOPSIS
-*****************************************************
-
-gpdetective [-h <hostname>] [-p <port>] [-U <username>] [-P <password>] 
-            [--start_date <number_of_days> | <YYYY-MM-DD>] 
-            [--end_date <YYYY-MM-DD>]
-            [--diagnostics a|n|s|o|c] 
-            [--logs a|n|<dbid>[,<dbid>,... | -<dbid>]] 
-            [--cores t|f]
-            [--pg_dumpall t|f] [--pg_dump_options <option>[,...]] 
-            [--tempdir <temp_dir>] 
-            [--connect t|f]
-
-gpdetective -?
-
-gpdetective -v
-
-
-*****************************************************
-DESCRIPTION
-*****************************************************
-
-The gpdetective utility collects information from a running HAWQ 
-system and creates a bzip2-compressed tar output file. This 
-output file can then be sent to Greenplum Customer Support to help with 
-the diagnosis of HAWQ errors or system failures. The 
-gpdetective utility runs the following diagnostic tests:
-
-  * gpstate to check the system status
-  * gpcheckos to verify the recommended OS settings on all hosts
-  * gpcheckcat and gpcheckdb to check the system catalog tables 
-    for inconsistencies
-
-gpdetective captures the following files and HAWQ system information:
-
-  * postgresql.conf configuration files
-  * log files (master and segments)
-  * HAWQ system configuration information
-  * (optional) Core files
-  * (optional) Schema DDL dumps for all databases and global objects 
-    
-A bzip2-compressed tar output file containing this information is created 
-in the current directory with a file name of gpdetective<timestamp>.tar.bz2.
-
-
-*****************************************************
-OPTIONS
-*****************************************************
-
---connect t|f
-
-  Specifies if gpdetective should connect to the database to obtain 
-  system information. The default is true (t). If false (f), 
-  gpdetective only gathers information it can obtain without making 
-  a connection to the database. This information includes (from the
-  master host):
-
-  * Log files
-  * The <master_data_directory>/postgresql.conf file
-  * The ~/gpAdminLogs directory
-  * gpcheckos output
-  * Core files
-
-
---cores t|f 
-
-  Determines whether or not the utility retrieves core files. The 
-  default is true (t).
-
-
---diagnostics a|n|s|o|c
- 
-  Specifies the diagnostic tests to run: all (a), none (n), 
-  operating system (o) diagnostics, or catalog (c) diagnostics. 
-  The default is all (a).
-
-
---end_date YYYY-MM-DD
- 
-  Sets the end date for the diagnostic information collected. The 
-  collected information ends at 00:00:00 of the specified date. 
-
-
--h hostname
-
-  The host name of the machine on which the HAWQ master 
-  database server is running. If not specified, reads from the 
-  environment variable PGHOST or defaults to localhost.
-
-
---logs a|n|dbid_list
-
-  Specifies which log file(s) to retrieve: all (a), none (n), a 
-  comma separated list of segment dbid numbers, or a range of dbid 
-  numbers divided by a dash (-) (for example, 3-6 retrieves logs 
-  from segments 3, 4, 5, and 6). The default is all (a).
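-The dash form expands to an inclusive list of dbids. The snippet below is only a sketch of that expansion, not the utility's actual code:

```shell
# Expand a dbid range of the form "3-6" into the equivalent
# comma-separated list "3,4,5,6".
range="3-6"
start=${range%-*}   # text before the dash -> "3"
end=${range#*-}     # text after the dash  -> "6"

dbids=$(seq "$start" "$end" | tr '\n' ',' | sed 's/,$//')
echo "$dbids"
```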
-
-
--P password
-
-  If HAWQ is configured to use password authentication, 
-  you must also supply the database superuser password. If not specified, 
-  reads from ~/.pgpass if it exists.
-
-
---pg_dumpall t|f
-
-  Determines whether or not the utility runs pg_dumpall to collect 
-  schema DDL for all databases and global objects. The default is true (t).
-
-
---pg_dump_options option[,...]
- 
-  If --pg_dumpall is true, specifies a comma separated list of dump 
-  options to use when the pg_dumpall utility is called. See pg_dumpall 
-  for a valid list of dump options.
-
-
--p port
-
-  The TCP port on which the HAWQ master server is listening 
-  for connections. If not specified, reads from the environment variable 
-  PGPORT or defaults to 5432.
-
-
---start_date number_of_days | YYYY-MM-DD
-
-  Sets the start date for the diagnostic information collected. Specify 
-  either the number of days prior, or an explicit past date.
-
-
---tempdir temp_dir
- 
-  Specifies the temporary directory used by gpdetective. The default 
-  value is determined by the $TEMP, $TMP and $TMPDIR environment variables.
-
-
--U gp_superuser
-
-  The HAWQ superuser role name to connect as (typically gpadmin). 
-  If not specified, reads from the environment variable PGUSER or 
-  defaults to the current system user name.
-
-
--v (show utility version)
-
-  Displays the version of this utility.
-
-
--? (help)
-
-  Displays the utility usage and syntax.
-
-
-*****************************************************
-EXAMPLES
-*****************************************************
-
-Collect all diagnostic information for a HAWQ system 
-and supply the required connection information for the master host:
-
-  gpdetective -h mdw -p 54320 -U gpadmin -P mypassword
-
-
-Run diagnostics and collect all logs and system information for the 
-past two days:
-
-  gpdetective --start_date 2
-
-
-To collect the log files of the master and segment without 
-diagnostic tests or schema dumps:
-
-  gpdetective --diagnostics n --logs -1,3 --pg_dumpall f
-
-
-*****************************************************
-SEE ALSO
-*****************************************************
-
-gpstate, gpcheckos, pg_dumpall
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/3691f236/tools/doc/gpinitstandby_help
----------------------------------------------------------------------
diff --git a/tools/doc/gpinitstandby_help b/tools/doc/gpinitstandby_help
deleted file mode 100755
index 739e0fb..0000000
--- a/tools/doc/gpinitstandby_help
+++ /dev/null
@@ -1,160 +0,0 @@
-COMMAND NAME: gpinitstandby
-
-Adds and/or initializes a standby master host for a Greenplum Database system.
-
-
-*****************************************************
-SYNOPSIS
-*****************************************************
-
-gpinitstandby { -s <standby_hostname> | -r | -n } 
-              [-M smart | -M fast] [-a] [-q] [-D] [-L]
-              [-l <logfile_directory>]
-
- 
-gpinitstandby -? | -v
-
-
-*****************************************************
-DESCRIPTION
-*****************************************************
-
-The gpinitstandby utility adds a backup master host to your 
-Greenplum Database system. If your system has an existing backup 
-master host configured, use the -r option to remove it before adding 
-the new standby master host. 
-
-Before running this utility, make sure 
-that the Greenplum Database software is installed on the backup master 
-host and that you have exchanged SSH keys between hosts. Also make sure 
-that the master port is set to the same port number on the master host 
-and the backup master host. This utility should be run on the currently 
-active primary master host.
- 
-The utility will perform the following steps:
-
-* Shutdown your Greenplum Database system
-* Update the Greenplum Database system catalog to remove the 
-  existing backup master host information (if the -r option is supplied) 
-* Update the Greenplum Database system catalog to add the new backup 
-  master host information (use the -n option to skip this step)
-* Edit the pg_hba.conf files of the segment instances to allow access 
-  from the newly added standby master.
-* Setup the backup master instance on the alternate master host
-* Start the synchronization process
-* Restart your Greenplum Database system
-
-A backup master host serves as a 'warm standby' in the event of the 
-primary master host becoming inoperable. The backup master is kept 
-up to date by a transaction log replication process (gpsyncagent), 
-which runs on the backup master host and keeps the data between the 
-primary and backup master hosts synchronized. If the primary master 
-fails, the log replication process is shutdown, and the backup master 
-can be activated in its place by using the gpactivatestandby utility. 
-Upon activation of the backup master, the replicated logs are used to 
-reconstruct the state of the master host at the time of the last 
-successfully committed transaction.
-
-
-*****************************************************
-OPTIONS
-*****************************************************
-
--s <standby_hostname>
-
-The host name of the standby master host.
-
-
--r (remove standby master)
-
-Removes the currently configured standby master host from your 
-Greenplum Database system.
-
-
--n (resynchronize)
-
-Use this option if you already have a standby master configured, 
-and just want to resynchronize the data between the primary and 
-backup master host. The Greenplum system catalog tables will not 
-be updated.
-
-
--M fast (fast shutdown - rollback)
-
-Use fast shut down when stopping Greenplum Database at the beginning
-of the standby initialization process. Any transactions in progress 
-are interrupted and rolled back.
-
-
--M smart (smart shutdown - warn)
-
-Use smart shut down when stopping Greenplum Database at the beginning
-of the standby initialization process. If there are active connections, 
-this command fails with a warning. This is the default shutdown mode.
-
-
--L (leave database stopped)
-
-Leave Greenplum Database in a stopped state after removing the warm 
-standby master.
-
--a (do not prompt)
-
-Do not prompt the user for confirmation.
-
-
--q (no screen output)
-
-Run in quiet mode. Command output is not displayed on the screen, 
-but is still written to the log file.
-
-
--l <logfile_directory>
-
-The directory to write the log file. Defaults to ~/gpAdminLogs.
-
-
--D (debug)
-
-Sets logging level to debug.
-
-
--? (help)
-
-Displays the online help.
-
-
--v (show script version)
-
-Displays the version of this utility.
-
-
-*****************************************************
-EXAMPLES
-*****************************************************
-
-Add a backup master host to your Greenplum Database system and 
-start the synchronization process:
-
-gpinitstandby -s host09
-
-
-Remove the existing backup master from your Greenplum system configuration:
-
-gpinitstandby -r
-
-
-Start an existing backup master host and synchronize the data 
-with the primary master host - do not add a new Greenplum backup 
-master host to the system catalog:
-
-gpinitstandby -n
-
-Note: Do not specify the -n and -s options in the same command.
-
-
-*****************************************************
-SEE ALSO
-*****************************************************
-
-gpinitsystem, gpaddmirrors, gpactivatestandby

http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/3691f236/tools/doc/gpkill_help
----------------------------------------------------------------------
diff --git a/tools/doc/gpkill_help b/tools/doc/gpkill_help
deleted file mode 100644
index 4e6db2d..0000000
--- a/tools/doc/gpkill_help
+++ /dev/null
@@ -1,77 +0,0 @@
-COMMAND NAME: gpkill
-
-Checks or terminates a Greenplum Database process. 
-Users other than the superuser can only use gpkill 
-to terminate their own processes.
-
-*****************************************************
-SYNOPSIS
-*****************************************************
-
-gpkill [options] pid
-gpkill --version
-gpkill -? | -h | --help
-
-*****************************************************
-DESCRIPTION
-*****************************************************
-
-
-This utility checks or terminates a Greenplum process. 
-If the process is a critical Greenplum Database process 
-or a system process that is not part of Greenplum, gpkill 
-does not terminate it.
-
-After gpkill verifies that the specified process can be 
-terminated safely, it prompts for confirmation. Prior to 
-terminating a process, gpkill attempts to capture 
-troubleshooting information, if the user has appropriate 
-operating system privileges.
-
-* The troubleshooting information is captured, even if 
-  the user does not confirm killing the process.
-
-* Failure to capture troubleshooting information does not 
-  stop gpkill from proceeding.
-
-
-*****************************************************
-OPTIONS
-*****************************************************
-
-pid 
-The process ID to check or terminate.
-
---check 
-Checks the specified process ID to verify that it is a 
-Greenplum process and can safely be killed, but does not 
-terminate it.
-
--v 
-Displays verbose debugging information. 
-
--q
-Enables quiet mode. Informational messages are suppressed.
-
-NOTE: Choosing both the -v and -q options sends the verbose 
-debugging information to the system log, but does not display 
-informational messages on stdout.
-
--? | -h | --help (help)
-Displays the online help.
-
-
-*****************************************************
-EXAMPLES
-*****************************************************
-
-
-Kill process 27893
-
-     gpkill 27893
-
-Check process 27893 to see if it can be killed. Send 
-debugging information to the system log, but do not 
-display informational messages.
-
-     gpkill -q -v --check 27893
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/3691f236/tools/doc/gpload_help
----------------------------------------------------------------------
diff --git a/tools/doc/gpload_help b/tools/doc/gpload_help
index 225b2ff..de62428 100644
--- a/tools/doc/gpload_help
+++ b/tools/doc/gpload_help
@@ -69,7 +69,7 @@ OPTIONS
 -l log_file
  
   Specifies where to write the log file. Defaults to 
-  ~/gpAdminLogs/hawq_load_YYYYMMDD. 
+  ~/hawqAdminLogs/hawq_load_YYYYMMDD. 
 
 -v (verbose mode) 
 

http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/3691f236/tools/doc/gpperfmon_install_help
----------------------------------------------------------------------
diff --git a/tools/doc/gpperfmon_install_help b/tools/doc/gpperfmon_install_help
deleted file mode 100644
index 3a802d2..0000000
--- a/tools/doc/gpperfmon_install_help
+++ /dev/null
@@ -1,147 +0,0 @@
-COMMAND NAME: gpperfmon_install
-
-Installs the gpperfmon database and optionally enables the data 
-collection agents.
-
-
-*****************************************************
-SYNOPSIS
-*****************************************************
-
-gpperfmon_install 
-      [--enable --password <gpmon_password> --port <gpdb_port>] 
-      [--pgpass <path_to_file>] 
-      [--verbose]
-
-
-gpperfmon_install --help | -h | -?
-
-
-*****************************************************
-DESCRIPTION
-*****************************************************
-
-The gpperfmon_install utility automates the steps required 
-to enable the performance monitor data collection agents. You 
-must be the Greenplum Database administrative user (gpadmin) 
-in order to run this utility. If using the --enable option, 
-Greenplum Database must be restarted after the utility completes.
-
-When run without any options, the utility will just create the 
-gpperfmon database (the database used to store performance monitor 
-data). When run with the --enable option, the utility will also 
-run the following additional tasks necessary to enable the 
-performance monitor data collection agents:
-
-1. Creates the gpmon superuser role in Greenplum Database. 
-   The performance monitor data collection agents require this 
-   role to connect to the database and write their data. The 
-   gpmon superuser role uses MD5-encrypted password authentication 
-   by default. Use the --password option to set the gpmon superuser's
-   password. Use the --port option to supply the port of the 
-   Greenplum Database master instance.
-
-2. Updates the $MASTER_DATA_DIRECTORY/pg_hba.conf file. The 
-   utility will add the following line to the host-based 
-   authentication file (pg_hba.conf). This allows the gpmon 
-   user to locally connect to any database using MD5-encrypted 
-   password authentication:
-        local     all    gpmon    md5
-
-3. Updates the password file (.pgpass). In order to allow the 
-   data collection agents to connect as the gpmon role without 
-   a password prompt, you must have a password file that has an 
-   entry for the gpmon user. The utility adds the following entry
-   to your password file (if the file does not exist, the utility 
-   will create it):
-         *:5432:gpperfmon:gpmon:gpmon_password
-   If your password file is not located in the default location 
-   (~/.pgpass), use the --pgpass option to specify the file location.
-
-4. Sets the server configuration parameters for performance monitor. 
-   The following parameters must be enabled in order for the data 
-   collection agents to begin collecting data. The utility will set 
-   the following parameters in the Greenplum Database postgresql.conf 
-   configuration files:
-      gp_enable_gpperfmon=on (in all postgresql.conf files)
-      gpperfmon_port=8888 (in the master postgresql.conf file)
-      gp_external_enable_exec=on (in the master postgresql.conf file)
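The .pgpass update in step 3 can be sketched in Python. This is a hypothetical stand-in for what gpperfmon_install does internally, not its actual implementation; the entry format and the 0600 permission requirement come from libpq:

```python
import os

def add_pgpass_entry(path, port, password, user="gpmon", db="gpperfmon"):
    # Append a gpmon entry in libpq's host:port:db:user:password format.
    entry = "*:%s:%s:%s:%s" % (port, db, user, password)
    with open(path, "a") as f:
        f.write(entry + "\n")
    # libpq ignores a .pgpass file with group/world access,
    # so make sure it is user-only.
    os.chmod(path, 0o600)
    return entry
```

The chmod is essential: a .pgpass file with looser permissions is silently skipped, and the data collection agents would then be prompted for a password.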
-
-
-*****************************************************
-OPTIONS
-*****************************************************
-
---enable
-
-  In addition to creating the gpperfmon database, performs the 
-  additional steps required to enable the performance monitor 
-  data collection agents. When --enable is specified the utility 
-  will also create and configure the gpmon superuser account and 
-  set the performance monitor server configuration parameters in 
-  the postgresql.conf files.
-
-
---password <gpmon_password>
-
-  Required if --enable is specified. Sets the password of the 
-  gpmon superuser.
-
-
-
---port <gpdb_port>
-
-  Required if --enable is specified. Specifies the connection port 
-  of the Greenplum Database master.
-
-
-
---pgpass <path_to_file>
-
-  Optional if --enable is specified. If the password file is not 
-  in the default location of ~/.pgpass, specifies the location of 
-  the password file.
-
-
-
---verbose
-
-  Sets the logging level to verbose.
-
-
-
---help | -h | -?
-
-  Displays the online help.
-
-
-
-*****************************************************
-EXAMPLES
-*****************************************************
-
-Create the gpperfmon database only:
-
-  $ su - gpadmin
-
-  $ gpperfmon_install
-
-
-Create the gpperfmon database, create the gpmon superuser, 
-and enable the performance monitor agents:
-
-  $ su - gpadmin
-
-  $ gpperfmon_install --enable --password p@$$word --port 5432
-
-  $ gpstop -r
-
-
-
-*****************************************************
-SEE ALSO
-*****************************************************
-
-gpstop
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/3691f236/tools/doc/gpsnmpd_help
----------------------------------------------------------------------
diff --git a/tools/doc/gpsnmpd_help b/tools/doc/gpsnmpd_help
deleted file mode 100644
index 2043d55..0000000
--- a/tools/doc/gpsnmpd_help
+++ /dev/null
@@ -1,147 +0,0 @@
-COMMAND NAME: gpsnmpd
-
-Reports on the health and state of a Greenplum Database system 
-through SNMP.
-
-
-*****************************************************
-SYNOPSIS
-*****************************************************
-
-gpsnmpd -s -C connect_string [-b] [-g] [-m MIB:...] 
-	[-M directory:...]
-
-gpsnmpd -c FILE -C connect_string [-x address:port] [-b] [-g] 
-	[-m MIB:...] [-M directory:...]
-
-gpsnmpd -?
-
-gpsnmpd --version
-
-
-*****************************************************
-DESCRIPTION
-*****************************************************
-
-Greenplum's gpsnmpd agent is an SNMP (Simple Network Management Protocol)
-daemon that reports on the health and state of a Greenplum Database 
-system by using a set of MIBs (Management Information Bases). MIBs are a 
-collection of objects that describe an SNMP-manageable entity; in this 
-case, a Greenplum Database system. In a typical environment, gpsnmpd is 
-polled by a network monitor and returns information on a Greenplum 
-Database system. It currently supports the generic RDBMS MIB and 
-typically operates on the master host.
-
-gpsnmpd works in conjunction with the SNMP support that (normally) 
-already exists on the Greenplum Database system. gpsnmpd does not 
-replace the system snmpd agent that monitors items such as hardware, 
-processor, memory, and network functions. However, you can run the 
-Greenplum SNMP agent as a stand-alone agent if required.
-
-As a standalone SNMP agent, gpsnmpd listens (on a network socket) for 
-SNMP queries, and requires the same extensive configuration as the 
-system SNMP agent. 
-
-Greenplum recommends that you run gpsnmpd as a sub-agent to the system 
-agent. When it starts, the gpsnmpd sub-agent registers itself with the 
-system-level SNMP agent, and communicates to the system agent the parts 
-of the MIB of which it is aware. The system agent communicates with the 
-SNMP client/network monitoring application and forwards requests for 
-particular sections of the MIB to the gpsnmpd sub-agent. Note that the 
-gpsnmpd sub-agent communicates with the system agent through UNIX sockets; 
-it does not listen on network sockets when used as a sub-agent.
-
-
-*****************************************************
-OPTIONS
-*****************************************************
-
--s (sub-agent)
-
-Run gpsnmpd as an AgentX sub-agent to the system snmpd process. You do not 
-need to use the -x option when using this option.
-
--b (background)
-
-Run gpsnmpd as a background process.
-
-
--c (configuration file)
-
-Specify the SNMP configuration file to use when starting gpsnmpd as a 
-stand-alone agent. Note that you can specify any configuration file to 
-run gpsnmpd as a stand-alone agent; you do not have to use the 
-/etc/snmp/snmpd.conf file (/etc/sma/snmp/ on Solaris platforms). The 
-configuration file you use must include a value for rocommunity.
-
-
--g (use syslog)
-
-Logs gpsnmpd error messages to syslog
-
-
--C (libpq connection string) 
-
-The libpq connection string to connect to Greenplum Database. Note that 
-you can run gpsnmpd from a remote system. Depending on your network 
-configuration, the gpsnmpd agent can establish a connection and monitor 
-a remote Greenplum Database database instance. The configuration string 
-can contain the database name, the host name, the port number, the 
-username, the password, and other information if required. 
-
-Greenplum recommends using the postgres database in the connection string 
-(dbname=postgres). This is the default database if you do not use the -C
-option.
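A libpq keyword/value string like the one passed to -C can be split into a dict; the following is a minimal sketch (it ignores libpq's quoting and escaping rules) that also mirrors the documented postgres default:

```python
def parse_conninfo(conninfo):
    # Split a simple libpq keyword/value string into a dict.
    # Sketch only: does not handle quoted values or escapes.
    params = {}
    for token in conninfo.split():
        key, _, value = token.partition("=")
        params[key] = value
    # mirror the documented default database when dbname is absent
    params.setdefault("dbname", "postgres")
    return params
```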
-
- 
--x (address:port of a network interface)
-
-Specify an IP address for a network interface card on the host system, 
-and specify a port other than the default SNMP port of 161. This enables 
-you to run gpsnmpd without root permissions (you must have root permissions 
-to use ports 1024 and lower).
-
-You do not need to specify this option if you are running gpsnmpd as an 
-AgentX sub-agent (-s).
-
-
--m (MIB:...) 
-
-Loads one or more MIBs when starting gpsnmpd. Use a colon (:) to separate 
-the MIBs. Enter ALL to load all MIBs. If you do not specify -m,
-gpsnmpd loads a default set of MIBs.
-
-
--M (directory:...) 
-
-Loads all MIBs from one or more directories when starting gpsnmpd. Use a
-colon (:) to separate the directories. Enter the full path to each
-directory you specify for this option. If you do not specify -M,
-gpsnmpd loads a default set of MIBs.
-
-
--? (help)
-
-Displays the online help.
-
-
--V
-
-Displays the version of this utility.
-
-
-*****************************************************
-EXAMPLES
-*****************************************************
-
-Start gpsnmpd as an agentx subagent:
-
-# gpsnmpd -s -b -m ALL -C "host=gpmaster dbname=template1 \
-	user=gpadmin password=secret"
-
-
-Start gpsnmpd as a stand-alone agent:
-
-# gpsnmpd -b -c /etc/snmp/snmpd.conf -x 192.168.100.12:10161 \
-	-M /usr/mibs/mymibs -C "host=gpmaster dbname=template1 \
-	user=gpadmin password=secret"

http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/3691f236/tools/doc/gpsuspend_help
----------------------------------------------------------------------
diff --git a/tools/doc/gpsuspend_help b/tools/doc/gpsuspend_help
deleted file mode 100644
index cf67ff6..0000000
--- a/tools/doc/gpsuspend_help
+++ /dev/null
@@ -1,123 +0,0 @@
-COMMAND NAME:  gpsuspend
-
-Pause and resume a running Greenplum Database
-
-******************************************************
-SYNOPSIS
-******************************************************
-
-gpsuspend --pause [--batchsize batchsize] [--noninteractive]
-
-gpsuspend --resume --pausefile pausefile_name [--batchsize batchsize]
-
-gpsuspend -? | -h | --help
-
-Prerequisites:
-
-* You are logged in as the Greenplum Database superuser (gpadmin).
-
-* You are on the machine that is running the master database.
-
-* You are not running --pause on an already paused database.
-
-
-*******************************************************
-DESCRIPTION
-*******************************************************
-
-The gpsuspend utility can pause a running instance of Greenplum Database.
-
-The utility is first run in 'pause' mode, which pauses the database.
-On success, 'pause' mode prints the location of a generated pausefile
-that can later be used to restore the system state.
-
-In 'resume' mode you must pass the location of that pausefile, which
-lists the segment hosts in a Greenplum database and is used to resume
-the paused system.
-
-By default the utility runs in interactive mode. In interactive mode,
-the utility stops after pausing the database and waits for user input.
-At this point the database is paused. When the administrator is ready
-to resume the database, they can enter 'resume' at the prompt. To
-disable interactive mode and run 'pause' and 'resume' independently,
-use the --noninteractive option with --pause.
-
-The utility pauses the database using the UNIX signals STOP and CONT.
-To confirm that the database is paused, you can use gpssh and enter:
-     ps ax | grep postgres | grep -v grep
-This lists all postgres processes on your cluster and their run state.
-All processes should be in a STOP state.
-Note that the order in which the processes are paused and resumed is
-important: the master postgres instance is paused first, then the
-segments. Within a postgres instance, the postmaster process is paused
-first, then its children.
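The parent-before-children ordering described above can be sketched as a walk over a hypothetical {parent_pid: [child_pids]} map. This is a toy illustration of the ordering, not gpsuspend's actual implementation; the `kill` parameter is injectable so the sketch can be exercised without real processes:

```python
import os
import signal

def signal_order(tree, root):
    # Breadth-first walk: the root (postmaster) comes first, then its
    # children, matching the parent-before-children pause ordering.
    order, queue = [], [root]
    while queue:
        pid = queue.pop(0)
        order.append(pid)
        queue.extend(tree.get(pid, []))
    return order

def pause_tree(tree, root, kill=os.kill):
    # Send SIGSTOP parent-first down the process tree.
    for pid in signal_order(tree, root):
        kill(pid, signal.SIGSTOP)
```

Stopping the postmaster before its children prevents it from reacting (for example, by restarting backends) while the children are being frozen.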
-
-
-********************************************************
-OPTIONS
-********************************************************
-
-
--h (help)
-
-Displays the online help.
-
---pause
-
-Sets the utility into 'pause' mode
-
---resume
-
-Sets the utility into 'resume' mode
-
---pausefile <pausefilename>
-
-This option is used in 'resume' mode to tell the utility the
-location of the segments while the database is paused and inaccessible.
-The file is generated in the GPHOME directory during 'pause' mode.
-
---noninteractive
-
-Disables the default interactive mode.
-
--B <batch_size>
-
-The number of worker threads for connecting to segment hosts.
-A higher number opens more parallel ssh connections and
-completes the job faster.
-
---verbose | -v (verbose) 
-
-Verbose debugging output.
-
--? | -h (help)
-
-Displays the online help.
-
-
-*********************************************************
-EXAMPLES
-*********************************************************
-
-Pause a running Greenplum database:
-
-$ gpsuspend --pause --noninteractive
-
-
-Resume a paused Greenplum database using a pausefile:
-
-$ gpsuspend --resume --pausefile /home/gpadmin/greenplum-db/./gp_pause.20091113.2158.dat
-
-Running in interactive mode:
-
-$ gpsuspend --pause
-Database is paused. When you are ready, type a command below to resume or quit.
- quit|resume (default=quit):
-$ resume
---done--
-
-
-**********************************************************
-SEE ALSO
-**********************************************************
-
-gpstart, gpstop

http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/3691f236/tools/sbin/gpaddconfig.py
----------------------------------------------------------------------
diff --git a/tools/sbin/gpaddconfig.py b/tools/sbin/gpaddconfig.py
deleted file mode 100755
index 5811291..0000000
--- a/tools/sbin/gpaddconfig.py
+++ /dev/null
@@ -1,98 +0,0 @@
-#!/usr/bin/env python
-#
-# Copyright (c) Greenplum Inc 2009. All Rights Reserved. 
-#
-# This is a private script to be called by gpaddconfig
-# The script is executed on a single machine and gets a list of data directories to modify from STDIN
-# With the script you can either change the value of a setting (and comment out all other entries for that setting)
-# or you can do remove only, to comment out all entries of a setting and not add an entry.
-#
-try:
-    
-    import sys, os
-    from optparse import Option, OptionParser
-    from gppylib.gpparseopts import OptParser, OptChecker
-    from gppylib.gparray import *
-    from gppylib.commands.gp import *
-    from gppylib.db import dbconn
-    from gppylib.gpcoverage import GpCoverage
-
-except ImportError, e:    
-    sys.exit('Cannot import modules.  Please check that you have sourced greenplum_path.sh.  Detail: ' + str(e))
-
-_help = [""""""
-
-]
-
-def parseargs():
-
-    parser = OptParser(option_class=OptChecker)
-
-    parser.setHelp(_help)
-
-    parser.remove_option('-h')
-    parser.add_option('-h', '-?', '--help', action='help', help='show this help message and exit')
-
-    parser.add_option('--entry', type='string')
-    parser.add_option('--value', type='string')
-    parser.add_option('--removeonly', action='store_true')
-    parser.set_defaults(removeonly=False)
-
-    # Parse the command line arguments
-    (options, args) = parser.parse_args()
-
-    # sanity check 
-    if not options.entry:
-        print "--entry is required"
-        sys.exit(1)
-
-    if (not options.value) and (not options.removeonly):
-        print "Select either --value or --removeonly"
-        sys.exit(1)
-
-    return options
-
-
-#------------------------------- Mainline --------------------------------
-coverage = GpCoverage()
-coverage.start()
-
-try:
-    options = parseargs()
-     
-    files = list()
-    
-    # get the files to edit from STDIN
-    line = sys.stdin.readline()
-    while line:
-    
-        directory = line.rstrip()
-        
-        filename = directory + "/postgresql.conf"
-        if not os.path.exists( filename ):
-            raise Exception("path does not exist: " + filename)
-    
-        files.append(filename)
-    
-        line = sys.stdin.readline()
-    
-    
-    fromString = "(^\s*" + options.entry + "\s*=.*$)"
-    toString="#$1"
-    name = "mycmd"
-    
-    # update all the files
-    for f in files:
-    
-        # comment out any existing entries for this setting
-        cmd=InlinePerlReplace(name, fromString, toString, f)
-        cmd.run(validateAfter=True)
-    
-        if options.removeonly:
-            continue
-    
-        cmd = GpAppendGucToFile(name, f, options.entry, options.value)
-        cmd.run(validateAfter=True)
-finally:
-    coverage.stop()
-    coverage.generate_report()
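The per-file transform gpaddconfig.py applies can be expressed as a pure function. This sketch is a stand-in for the InlinePerlReplace/GpAppendGucToFile pair (whose real behavior may differ); it mirrors the script's Perl replacement `(^\s*entry\s*=.*$)` → `#$1` and the append step:

```python
import re

def set_guc(conf_text, entry, value=None):
    # Comment out every existing line that sets `entry`, then append
    # the new setting. Appending is skipped when value is None,
    # i.e. the --removeonly mode.
    pattern = re.compile(r"(^\s*%s\s*=.*$)" % re.escape(entry), re.MULTILINE)
    out = pattern.sub(r"#\1", conf_text)
    if value is not None:
        if out and not out.endswith("\n"):
            out += "\n"
        out += "%s=%s\n" % (entry, value)
    return out
```

Commenting out old entries rather than deleting them preserves a record of previous values in postgresql.conf, while the appended line wins because PostgreSQL honors the last occurrence of a setting.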

http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/3691f236/tools/sbin/gpchangeuserpassword
----------------------------------------------------------------------
diff --git a/tools/sbin/gpchangeuserpassword b/tools/sbin/gpchangeuserpassword
deleted file mode 100755
index 0279330..0000000
--- a/tools/sbin/gpchangeuserpassword
+++ /dev/null
@@ -1,110 +0,0 @@
-#!/usr/bin/env python
-''' 
-USAGE:  gpchangeuserpassword --user USER --password PASSWORD
-        where USER is the user for whom the password is being changed
-        where PASSWORD is the new password
-'''
-
-import os, sys, getpass, crypt
-from subprocess import Popen
-sys.path.append(sys.path[0] + '/../bin/lib')
-
-try:
-    import pexpect
-    from optparse import Option, OptionParser 
-    from gppylib.gpparseopts import OptParser, OptChecker
-    from gppylib.commands.unix import SYSTEM
-except ImportError, e:    
-    sys.exit('Cannot import modules.  Please check that you have sourced greenplum_path.sh.  Detail: ' + str(e))
-
-options = None
-
-parser = OptParser(option_class=OptChecker)
-parser.remove_option('-h')
-parser.add_option('-h', '-?', '--help', action='store_true')
-parser.add_option('-u', '--user', type='string')
-parser.add_option('-p', '--password', type='string')
-(options, args) = parser.parse_args()
-
-global gphome
-gphome = os.environ.get('GPHOME')
-if not gphome:
-    sys.stderr.write("GPHOME not set\n")
-    sys.exit(1)
-
-if options.help:
-    sys.stderr.write(__doc__)
-    sys.exit(0)
-
-if not options.user:
-    sys.stderr.write("--user must be specified\n")
-    sys.exit(1)
-
-if not options.password:
-    sys.stderr.write("--password must be specified\n")
-    sys.exit(1)
-
-if options.user == "root":
-    sys.stderr.write("root password can not be changed with this utility\n")
-    sys.exit(1)
-
-if getpass.getuser() != "root":
-    sys.stderr.write("this utility must be run as root\n")
-    sys.exit(1)
-
-####################################################################################################
-if SYSTEM.getName() == "linux":
-
-    cmdstr = 'usermod -p "%s" %s' % (crypt.crypt(options.password, options.password), options.user)
-    p = Popen(cmdstr, shell=True, executable="/bin/bash")
-    sts = os.waitpid(p.pid, 0)[1]
-    if sts:
-        sys.stderr.write("error on cmd: %s\n" % cmdstr)
-        sys.exit(1)
-    else:
-        sys.exit(0)
-####################################################################################################
-
-if SYSTEM.getName() != "sunos":
-    sys.stderr.write("linux and solaris are the only operating systems supported by this utility\n")
-    sys.exit(1)
-
-# SOLARIS password change
-# New Password:
-# Re-enter new Password:
-# passwd: They don't match.
-# passwd: password successfully changed for ivanfoo
-
-done = False
-child = None
-
-try:
-    cmdstr = "passwd %s" % options.user
-    child = pexpect.spawn(cmdstr)
-
-    index = 0
-    while 1:
-        index = child.expect(["match", "success", "Password", pexpect.EOF, pexpect.TIMEOUT])
-        if index == 0:
-            sys.stderr.write("passwords did not match\n")
-            sys.exit(1)
-        elif index == 1:
-            child.close()
-            sys.exit(0)
-        elif index == 2:
-            child.sendline(options.password)
-            continue
-        elif index == 3:
-            sys.stderr.write("error calling passwd\n")
-            sys.exit(1)
-        elif index == 4:
-            sys.stderr.write("timeout calling passwd\n")
-            sys.exit(1)
-        else:
-            sys.stderr.write("error2 calling passwd\n")
-            sys.exit(1)
-
-except Exception, e:
-    sys.stderr.write("Exception running cmd: %s\n" % cmdstr)
-    sys.stderr.write("%s\n" % e.__str__())
-    sys.exit(1)


