hawq-commits mailing list archives

From r...@apache.org
Subject [3/8] incubator-hawq git commit: HAWQ-121. Remove legacy command line tools.
Date Thu, 05 Nov 2015 03:09:59 GMT
http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/9932786b/tools/doc/gpfilespace_help
----------------------------------------------------------------------
diff --git a/tools/doc/gpfilespace_help b/tools/doc/gpfilespace_help
deleted file mode 100644
index 3b97bb3..0000000
--- a/tools/doc/gpfilespace_help
+++ /dev/null
@@ -1,196 +0,0 @@
-COMMAND NAME: gpfilespace
-
-Creates a filespace using a configuration file that defines 
-per-segment file system locations. Filespaces describe the 
-physical file system resources to be used by a tablespace.
-
-
-*****************************************************
-SYNOPSIS
-*****************************************************
-
-gpfilespace [<connection_option> ...] [-l <logfile_directory>] 
-            [-o [<output_fs_config_file>]]
-
-gpfilespace [<connection_option> ...] [-l <logfile_directory>] 
-            -c <fs_config_file>
-
-gpfilespace --movefilespace=<FILESPACE_NAME | default> --location=<TARGET_LOCATION>
-
-gpfilespace -v | -?
-
-
-*****************************************************
-DESCRIPTION
-*****************************************************
-
-A tablespace requires a file system location to store its database 
-files. In HAWQ, the master and each segment need their own distinct
-storage locations. This collection of file system locations for all
-components in a HAWQ system is referred to as a filespace.
-Once a filespace is defined, it can be used by one or more
-tablespaces.
-
-When used with the -o option, the gpfilespace utility looks up your 
-system configuration information in the HAWQ catalog tables and
-prompts you for the appropriate file system locations needed to
-create the filespace. It then outputs a configuration file that
-can be used to create a filespace. If a file name is not
-specified, a gpfilespace_config_<#> file will be created in the
-current directory by default.  
-
-Once you have a configuration file, you can run gpfilespace with 
-the -c option to create the filespace in HAWQ.
-
-*****************************************************
-OPTIONS
-*****************************************************
-
--c | --config <fs_config_file>
-
- A configuration file containing:
- * An initial line denoting the new filespace name. For example:
-   filespace:myfs
- * One line for the master and one for each primary segment. Each
-   line describes the file system location that a particular segment
-   database instance should use as its data directory location
-   to store database files associated with a tablespace. Each line
-   has the format:
-   <hostname>:<dbid>:/<filesystem_dir>/<seg_datadir_name>
-   
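The per-segment line format above can be illustrated with a small sketch. The helper names are hypothetical, for illustration only; they are not part of gpfilespace:

```python
# Sketch: building and parsing config lines of the form
# <hostname>:<dbid>:/<filesystem_dir>/<seg_datadir_name>
# (illustrative helpers, not gpfilespace code).

def make_config_line(hostname, dbid, datadir):
    # datadir is the absolute /<filesystem_dir>/<seg_datadir_name> path
    return "%s:%d:%s" % (hostname, dbid, datadir)

def parse_config_line(line):
    # split only on the first two colons; the rest is the location
    hostname, dbid, datadir = line.split(":", 2)
    return hostname, int(dbid), datadir

line = make_config_line("sdw1", 1, "/data1/master/hdfs_b/gpseg-1")
```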
--l | --logdir <logfile_directory>
-
- The directory to write the log file. Defaults to ~/gpAdminLogs.
-
-
--o | --output <output_file_name>
-
- The directory location and file name to output the generated 
- filespace configuration file. You will be prompted to enter a 
- name for the filespace, a master file system location, the 
- primary segment file system locations, and the mirror segment 
- file system locations. For example, if your configuration has 
- 2 primary and 2 mirror segments per host, you will be prompted 
- for a total of 5 locations (including the master). The file 
- system locations must exist on all hosts in your system prior 
- to running the gpfilespace utility. The utility will designate 
- segment-specific data directories within the location(s) you 
- specify, so it is possible to use the same location for multiple 
- segments. However, primaries and mirrors cannot use the same 
- location. After the utility creates the configuration file, you 
- can manually edit the file to make any required changes to the 
- filespace layout before creating the filespace in HAWQ.
-
--v | --version (show utility version)
-
- Displays the version of this utility.
-
-
--? | --help (help)
-
- Displays the utility usage and syntax.
-
-
-****************************
-CONNECTION OPTIONS
-****************************
-
--h host | --host host
-
- The host name of the machine on which the HAWQ master 
- database server is running. If not specified, reads from 
- the environment variable PGHOST or defaults to localhost.
-
-
--p port | --port port
-
- The TCP port on which the HAWQ master database server 
- is listening for connections. If not specified, reads from 
- the environment variable PGPORT or defaults to 5432.
-
-
--U username | --username superuser_name
-
- The database superuser role name to connect as. If not 
- specified, reads from the environment variable PGUSER or 
- defaults to the current system user name. Only database 
- superusers are allowed to create filespaces.
-
-
--W | --password
-
- Force a password prompt.
-
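The fallback order described above (explicit command-line value, then the PGHOST/PGPORT/PGUSER environment variable, then a built-in default) can be sketched as follows. resolve() is a hypothetical helper shown only to illustrate the precedence:

```python
import os

# Sketch of the documented connection-option fallback:
# command-line value -> environment variable -> built-in default.
# resolve() is illustrative, not part of gpfilespace.

def resolve(cli_value, env_var, default):
    if cli_value is not None:
        return cli_value
    return os.environ.get(env_var, default)

# e.g.  host = resolve(cli_host, "PGHOST", "localhost")
#       port = int(resolve(cli_port, "PGPORT", "5432"))
```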
-Note: gpfilespace, showfilespace, showtempfilespace, 
-movetransfilespace, showtransfilespace, movetempfilespace 
-are not supported.
-
-*****************************************************
-EXAMPLES
-*****************************************************
-
-Create a filespace configuration file. You will be prompted to 
-enter a name for the filespace, choose a file system name, a file
-replica number, and a DFS URL for storing data.
-
- $ gpfilespace -o .
- Enter a name for this filespace
- > example_hdfs
-
- Available filesystem name:
- filesystem: hdfs
- Choose filesystem name for this filespace
-
- > hdfs
-
- Enter replica num for filespace. If 0, default replica num is used (default=3)
- >3 
-
- Checking your configuration:
- Your system has 1 hosts with 2 primary segments per host.
-
- Configuring hosts: [sdw1]
-
- Please specify the DFS location for the segments (for example: localhost:9000/fs)
- location> 127.0.0.1:9000/hdfs
-
- ***************************************
- Example filespace configuration file:
-
- filespace:example_hdfs
- fsysname:hdfs
- fsreplica:3
- sdw1:1:/data1/master/hdfs_b/gpseg-1
- sdw1:2:[127.0.0.1:9000/hdfs/gpseg0]
- sdw1:3:[127.0.0.1:9000/hdfs/gpseg1]
-
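Judging from the example file above, locations wrapped in [ ] are DFS paths while the unbracketed master line is a local path. A hypothetical parser sketch of that distinction (illustrative only, not gpfilespace code):

```python
# Sketch: interpreting example filespace config entries. Bracketed
# locations appear to be DFS paths; the unbracketed master entry is a
# local path. classify_entry() is a hypothetical helper.

def classify_entry(line):
    host, dbid, location = line.split(":", 2)
    if location.startswith("[") and location.endswith("]"):
        return host, int(dbid), "dfs", location[1:-1]
    return host, int(dbid), "local", location
```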
-
-Execute the configuration file to create the filespace 
-in HAWQ:
-
- $ gpfilespace -c gpfilespace_config_1
-
-*****************************************************
-MOVE FILESPACE
-*****************************************************
-
-Move the filespace to a new location on the distributed file system:
-
-$ gpfilespace --movefilespace=example_filespace_name --location=hdfs://host:port/new/location
-
-This command moves filespace "example_filespace_name" to the new location "hdfs://host:port/new/location".
-
-Note:
- 1) The value of --location must be a valid URL.
- 2) No data is actually moved; only the catalog is updated. The user must move the data manually.
- 3) Shut down the database and back up the master data directory first.
- 4) The master's data cannot be moved with this command and should be backed up before running it. 
-    Otherwise the metadata may be left in an inconsistent state and data may be lost if the command fails.
- 5) If a standby master is configured, it should be removed and initialized again after this command completes successfully.
- 
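Note 1 above (a valid URL for --location) can be sketched as a minimal shape check for an hdfs:// target. This is illustrative only and is not the utility's actual validation logic:

```python
from urllib.parse import urlparse

# Sketch: a minimal check that a --movefilespace target looks like a
# valid DFS URL (scheme hdfs, a host[:port] netloc, an absolute path).
# Illustrative only, not gpfilespace's validation.

def looks_like_dfs_url(location):
    parts = urlparse(location)
    return (parts.scheme == "hdfs"
            and bool(parts.netloc)
            and parts.path.startswith("/"))
```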
-
-*****************************************************
-SEE ALSO
-*****************************************************
-
-CREATE FILESPACE, CREATE TABLESPACE

http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/9932786b/tools/doc/gpmigrator_help
----------------------------------------------------------------------
diff --git a/tools/doc/gpmigrator_help b/tools/doc/gpmigrator_help
deleted file mode 100644
index 591dd0f..0000000
--- a/tools/doc/gpmigrator_help
+++ /dev/null
@@ -1,145 +0,0 @@
-COMMAND NAME: gpmigrator
-
-Upgrades an existing Greenplum Database 4.1.x system 
-without mirrors to 4.2.x.
-
-Use gpmigrator_mirror to upgrade a 4.1.x system that 
-has mirrors.
-
-Note: Using gpmigrator on a system with mirrors causes 
-      an error.
-
-*****************************************************
-SYNOPSIS
-*****************************************************
-
-gpmigrator <old_GPHOME_path> <new_GPHOME_path>
-           [-d <master_data_directory>] 
-           [-l <logfile_directory>] [-q] 
-           [--check-only] [--debug] [-R]
-
-gpmigrator --version | -v
-
-gpmigrator --help | -h
-
-
-*****************************************************
-PREREQUISITES
-*****************************************************
-
-The following tasks should be performed prior to executing an upgrade:
-
-* Make sure you are logged in to the master host as the Greenplum Database 
-  superuser (gpadmin).
-* Install the Greenplum Database 4.2 binaries on all Greenplum hosts.
-* Copy any custom modules you use into your 4.2 installation.  Make sure 
-  you obtain the latest version of any custom modules and that they are 
-  compatible with Greenplum Database 4.2.
-* Copy or preserve any additional folders or files (such as backup folders) 
-  that you have added in the Greenplum data directories or $GPHOME directory. 
-  Only files or folders strictly related to Greenplum Database operations are 
-  preserved by the migration utility.
-* (Optional) Run VACUUM on all databases, and remove old server log files 
-  from pg_log in your master and segment data directories. This is not required, 
-  but will reduce the size of Greenplum Database files to be backed up and migrated.
-* Check for and recover any failed segments in your current Greenplum Database 
-  system (gpstate, gprecoverseg).
-* (Optional, but highly recommended) Back up your current databases (gpcrondump 
-   or ZFS snapshots). If you find any issues when testing your upgraded system, 
-   you can restore this backup.
-* Remove the standby master from your system configuration (gpinitstandby -r).
-* Do a clean shutdown of your current system (gpstop).
-* Update your environment to source the 4.2 installation.
-* Inform all database users of the upgrade and lockout time frame. Once the 
-  upgrade is in process, users will not be allowed on the system until the 
-  upgrade is complete.
-
-
-*****************************************************
-DESCRIPTION
-*****************************************************
-
-The gpmigrator utility upgrades an existing Greenplum Database 4.1.x.x 
-system without mirrors to 4.2. This utility updates the system catalog 
-and internal version number, but not the actual software binaries. 
-During the migration process, all client connections to Greenplum 
-Database will be locked out.
-
-
-*****************************************************
-OPTIONS
-*****************************************************
-
-<old_GPHOME_path>
-
- Required. The absolute path to the current version of Greenplum 
- Database software you want to migrate away from.
-
-
-<new_GPHOME_path>
-
- Required. The absolute path to the new version of Greenplum Database 
- software you want to migrate to.
-
-
--d <master_data_directory>
-
- Optional. The current master host data directory. If not specified, 
- the value set for $MASTER_DATA_DIRECTORY will be used.
-
-
--l <logfile_directory>
-
- The directory to write the log file. Defaults to ~/gpAdminLogs.
-
-
--q (quiet mode)
-
- Run in quiet mode. Command output is not displayed on the screen, but is 
- still written to the log file.
-
-
--R (revert)
- 
- In the event of an error during upgrade, reverts all changes made by gpmigrator.
-
-
---check-only
-
- Runs pre-migrate checks to verify that your database is healthy.
- Checks include: 
-  * Check catalog health
-  * Check that the Greenplum Database binaries on each segment match 
-    those on the master
-  * Check for a minimum amount of free disk space
-
-
---help | -h
-Displays the online help.
-
-
---debug
-Sets logging level to debug.
-
-
---version | -v
-Displays the version of this utility. 
-
-
-*****************************************************
-EXAMPLE
-*****************************************************
-
-Upgrade to version 4.2 from version 4.1.1.3 without mirrors 
-(make sure you are using the 4.2 version of gpmigrator):
-
-/usr/local/greenplum-db-4.2.x.x/bin/gpmigrator \
-  /usr/local/greenplum-db-4.1.1.3 \
-  /usr/local/greenplum-db-4.2.x.x
-
-
-*****************************************************
-SEE ALSO
-*****************************************************
-
-gpmigrator_mirror, gpstop, gpstate, gprecoverseg, gpcrondump
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/9932786b/tools/doc/gpmigrator_mirror_help
----------------------------------------------------------------------
diff --git a/tools/doc/gpmigrator_mirror_help b/tools/doc/gpmigrator_mirror_help
deleted file mode 100644
index bef5fd5..0000000
--- a/tools/doc/gpmigrator_mirror_help
+++ /dev/null
@@ -1,141 +0,0 @@
-COMMAND NAME: gpmigrator_mirror
-
-Upgrades an existing Greenplum Database 4.1.x system 
-with mirrors to 4.2.x.
-
-Use gpmigrator to upgrade a 4.1.x system that does not
-have mirrors.
-
-Note: Using gpmigrator_mirror on a system without mirrors
-      causes an error.
-
-*****************************************************
-SYNOPSIS
-*****************************************************
-
-gpmigrator_mirror <old_GPHOME_path> <new_GPHOME_path>
-                  [-d <master_data_directory>] 
-                  [-l <logfile_directory>] [-q] 
-                  [--check-only] [--debug]
-
-gpmigrator_mirror  --version | -v
-
-gpmigrator_mirror  --help | -h
-
-
-*****************************************************
-PREREQUISITES
-*****************************************************
-
-The following tasks should be performed prior to executing an upgrade:
-
-* Make sure you are logged in to the master host as the Greenplum Database 
-  superuser (gpadmin).
-* Install the Greenplum Database 4.2 binaries on all Greenplum hosts.
-* Copy any custom modules you use into your 4.2 installation.  Make sure 
-  you obtain the latest version of any custom modules and that they are 
-  compatible with Greenplum Database 4.2.
-* Copy or preserve any additional folders or files (such as backup folders) 
-  that you have added in the Greenplum data directories or $GPHOME directory. 
-  Only files or folders strictly related to Greenplum Database operations are 
-  preserved by the migration utility.
-* (Optional) Run VACUUM on all databases, and remove old server log files 
-  from pg_log in your master and segment data directories. This is not required, 
-  but will reduce the size of Greenplum Database files to be backed up and migrated.
-* Check for and recover any failed segments in your current Greenplum Database 
-  system (gpstate, gprecoverseg).
-* (Optional, but highly recommended) Back up your current databases (gpcrondump 
-   or ZFS snapshots). If you find any issues when testing your upgraded system, 
-   you can restore this backup.
-* Remove the standby master from your system configuration (gpinitstandby -r).
-* Do a clean shutdown of your current system (gpstop).
-* Update your environment to source the 4.2 installation.
-* Inform all database users of the upgrade and lockout time frame. Once the 
-  upgrade is in process, users will not be allowed on the system until the 
-  upgrade is complete.
-
-
-*****************************************************
-DESCRIPTION
-*****************************************************
-
-The gpmigrator_mirror utility upgrades an existing Greenplum Database 4.1.x.x
-system with mirrors to 4.2. This utility updates the system catalog 
-and internal version number, but not the actual software binaries. 
-During the migration process, all client connections to Greenplum 
-Database will be locked out.
-
-
-*****************************************************
-OPTIONS
-*****************************************************
-
-<old_GPHOME_path>
-
- Required. The absolute path to the current version of Greenplum 
- Database software you want to migrate away from.
-
-
-<new_GPHOME_path>
-
- Required. The absolute path to the new version of Greenplum Database 
- software you want to migrate to.
-
-
--d <master_data_directory>
-
- Optional. The current master host data directory. If not specified, 
- the value set for $MASTER_DATA_DIRECTORY will be used.
-
-
--l <logfile_directory>
-
- The directory to write the log file. Defaults to ~/gpAdminLogs.
-
-
--q (quiet mode)
-
- Run in quiet mode. Command output is not displayed on the screen, but is 
- still written to the log file.
-
-
---check-only
-
- Runs pre-migrate checks to verify that your database is healthy.
- Checks include: 
-  * Check catalog health
-  * Check that the Greenplum Database binaries on each segment match 
-    those on the master
-  * Check for a minimum amount of free disk space
-
-
---help | -h
-
-Displays the online help.
-
-
---debug
-Sets logging level to debug.
-
-
---version | -v
-Displays the version of this utility. 
-
-
-*****************************************************
-EXAMPLE
-*****************************************************
-
-Upgrade to version 4.2 from version 4.1.1.3 with mirrors 
-(make sure you are using the 4.2 version of gpmigrator_mirror):
-
-/usr/local/greenplum-db-4.2.x.x/bin/gpmigrator_mirror \
-  /usr/local/greenplum-db-4.1.1.3 \
-  /usr/local/greenplum-db-4.2.x.x
-
-
-*****************************************************
-SEE ALSO
-*****************************************************
-
-gpmigrator, gpstop, gpstate, gprecoverseg, gpcrondump
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/9932786b/tools/doc/gprecoverseg_help
----------------------------------------------------------------------
diff --git a/tools/doc/gprecoverseg_help b/tools/doc/gprecoverseg_help
deleted file mode 100755
index f81fc59..0000000
--- a/tools/doc/gprecoverseg_help
+++ /dev/null
@@ -1,157 +0,0 @@
-COMMAND NAME: gprecoverseg
-
-Recovers a segment instance that has been marked as down.
-
-******************************************************
-Synopsis
-******************************************************
-
-gprecoverseg [-p <new_recover_host>[,...]]
-             [-d <master_data_directory>] [-B <parallel_processes>] 
-             [-F] [-a] [-q] [-l <logfile_directory>]
-
-gprecoverseg -? 
-
-gprecoverseg --version
-
-******************************************************
-DESCRIPTION
-******************************************************
-
-The gprecoverseg utility reactivates failed segment instances.
-Once gprecoverseg completes this process, the system will be recovered.
-
-A segment instance can fail for several reasons, such as a host failure, 
-network failure, or disk failure. When a segment instance fails, its 
-status is marked as down in the HAWQ Database system catalog, 
-and the master will randomly pick a segment to process queries for a session.
-In order to bring the failed segment instance back into operation,
-you must first correct the problem that made it fail in the first place, 
-and then recover the segment instance in HAWQ Database using gprecoverseg.
-
-Segment recovery using gprecoverseg requires that you have at least one
-live segment to recover from. For systems that do not have a live
-segment, do a system restart to bring the segments back online (gpstop -r).
-
-By default, a failed segment is restarted in place, meaning that 
-the system brings the segment back online on the same host and data 
-directory location on which it was originally configured. 
-
-If the data directory was removed or damaged, gprecoverseg can
-recover the data directory (using -F). This requires that you have
-at least one live segment to recover from. 
-
-In some cases, the above method may not be possible (for example, if a
-host was physically damaged and cannot be recovered). In this situation, 
-gprecoverseg allows you to recover failed segments to a completely 
-new host (using -p). In this scenario, to prevent an imbalanced workload
-in HAWQ, all the segments on the failed host should be moved to the new
-host. You must manually kill any other live segments left on the failed
-host before you run gprecoverseg.
-
-The new recovery segment host must be pre-installed with the HAWQ 
-Database software and configured exactly the same as the existing 
-segment hosts. A spare data directory location must exist on all 
-currently configured segment hosts and have enough disk space to 
-accommodate the failed segments.
-
-The recovery process marks the segment as up again in the HAWQ 
-Database system catalog. Use the following command to check the
-recovery result.
-
- $ gpstate
-
-******************************************************
-OPTIONS
-******************************************************
-
--a (do not prompt)
-
-Do not prompt the user for confirmation.
-
-
--B <parallel_processes>
-
-The number of segments to recover in parallel. If not specified, 
-the utility will start up to four parallel processes depending 
-on how many segment instances it needs to recover.
-
-
--d <master_data_directory>
-
-Optional. The master host data directory. If not specified, 
-the value set for $MASTER_DATA_DIRECTORY will be used.
-
-
--F (full recovery)
-
-Optional. Perform a full copy of the active segment instance 
-in order to recover the failed segment. The default is to 
-only restart the failed segment in-place.
-
-
--l <logfile_directory>
-
-The directory to write the log file. Defaults to ~/gpAdminLogs.
-
-
--p <new_recover_host>[,...]
-
-Specifies a spare host outside of the currently configured 
-HAWQ Database array on which to recover invalid segments. In 
-the case of multiple failed segment hosts, you can specify a 
-comma-separated list. The spare host must have the HAWQ Database 
-software installed and configured, and have the same hardware and OS 
-configuration as the current segment hosts (same OS version, locales, 
-gpadmin user account, data directory locations created, ssh keys 
-exchanged, number of network interfaces, network interface naming 
-convention, and so on.). 
-
-When this option is used and the number of failed hosts is N, you must
-specify N new hosts, and make sure that all the segments on the failed
-hosts are marked 'down' before running the command. If some segments on
-the failed hosts are still alive, kill those segments first or shut
-down the failed hosts.
-
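The one-to-one requirement above (N failed hosts need N new hosts) can be sketched as a pairing check. plan_recovery() is a hypothetical helper shown only to illustrate the rule; gprecoverseg performs this internally:

```python
# Sketch of the documented requirement: exactly one replacement host
# per failed host. plan_recovery() is illustrative, not gprecoverseg
# code.

def plan_recovery(failed_hosts, new_hosts):
    if len(failed_hosts) != len(new_hosts):
        raise ValueError("specify exactly one new host per failed host")
    return dict(zip(failed_hosts, new_hosts))
```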
-
--q (no screen output)
-
-Run in quiet mode. Command output is not displayed on 
-the screen, but is still written to the log file.
-
-
--v (verbose)
-
-Sets logging output to verbose.
-
-
---version (version)
-
-Displays the version of this utility.
-
-
--? (help)
-Displays the online help.
-
-
-******************************************************
-EXAMPLES
-******************************************************
-
-Recover any failed segment instances in place:
-
- $ gprecoverseg
-
-Recreate any failed segment instances in place:
-
- $ gprecoverseg -F
-
-Replace any failed hosts with a set of new hosts:
-
- $ gprecoverseg -p new1,new2 
-
-******************************************************
-SEE ALSO
-******************************************************
-
-gpstart, gpstop, gpstate

http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/9932786b/tools/doc/gpstart_help
----------------------------------------------------------------------
diff --git a/tools/doc/gpstart_help b/tools/doc/gpstart_help
deleted file mode 100755
index 18c13e0..0000000
--- a/tools/doc/gpstart_help
+++ /dev/null
@@ -1,155 +0,0 @@
-COMMAND NAME: gpstart
-
-Starts a HAWQ system.
-
-
-*****************************************************
-SYNOPSIS
-*****************************************************
-
-gpstart [-d <master_data_directory>] [-B <parallel_processes>] 
-        [-R] [-m] [-y] [-a] [-t <timeout_seconds>] 
-        [-l logfile_directory] [-v | -q]
-
-gpstart -? | -h | --help
-
-gpstart --version
-
-
-*****************************************************
-DESCRIPTION
-*****************************************************
-
-The gpstart utility is used to start the HAWQ server 
-processes. When you start a HAWQ system, you are 
-actually starting several postgres database server listener processes 
-at once (the master and all of the segment instances). The gpstart utility 
-handles the startup of the individual instances. Each instance is started 
-in parallel.
-
-The first time an administrator runs gpstart, the utility creates a hosts 
-cache file named .gphostcache in the user's home directory. Subsequently, 
-the utility uses this list of hosts to start the system more efficiently. 
-If new hosts are added to the system, you must manually remove this file 
-from the gpadmin user's home directory. The utility will create a new hosts 
-cache file at the next startup.
-
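The hosts-cache behavior described above (reuse the cached host list if the file exists, otherwise discover hosts and cache them) can be sketched as follows. The real file is ~/.gphostcache; the path handling and helper below are simplified for illustration:

```python
import os

# Sketch of the .gphostcache pattern: read a cached host list when the
# file exists, otherwise discover hosts, write the cache, and return
# them. Illustrative only, not gpstart's code.

def load_host_cache(cache_path, discover_hosts):
    if os.path.exists(cache_path):
        with open(cache_path) as f:
            return [line.strip() for line in f if line.strip()]
    hosts = discover_hosts()
    with open(cache_path, "w") as f:
        f.write("\n".join(hosts) + "\n")
    return hosts
```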
-Before you can start a HAWQ system, you must first initialize 
-the system using gpinitsystem.
-
-
-*****************************************************
-OPTIONS
-*****************************************************
-
--a (do not prompt)
-
-Do not prompt the user for confirmation.
-
-
--B <parallel_processes>
-
-The number of segments to start in parallel. If not specified, 
-the utility will start up to 60 parallel processes depending on 
-how many segment instances it needs to start.
-
-
--d <master_data_directory>
-
-Optional. The master host data directory. If not specified, 
-the value set for $MASTER_DATA_DIRECTORY will be used.
-
-
--l <logfile_directory>
-
-The directory to write the log file. Defaults to ~/gpAdminLogs.
-
-
--m (master only)
-
-Optional. Starts the master instance only, which may be necessary 
-for maintenance tasks. This mode only allows connections to the master 
-in utility mode. For example:
-
-PGOPTIONS='-c gp_session_role=utility' psql
-
-Note that starting the system in master-only mode is only advisable
-under supervision of Greenplum support.  Improper use of this option
-may lead to a split-brain condition and possible data loss.
-
-
--q (no screen output)
-
-Run in quiet mode. Command output is not displayed on the screen, 
-but is still written to the log file.
-
-
--R (restricted mode)
-
-Starts Greenplum Database in restricted mode (only database superusers 
-are allowed to connect).
-
-
--t | --timeout <number_of_seconds>
-
-Specifies a timeout in seconds to wait for a segment instance to 
-start up. If a segment instance was shutdown abnormally (due to 
-power failure or killing its postgres database listener process, 
-for example), it may take longer to start up due to the database 
-recovery and validation process. If not specified, the default timeout 
-is 60 seconds.
-
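The wait-with-timeout behavior that -t configures can be sketched as a polling loop: check readiness repeatedly until it succeeds or the timeout (default 60 seconds) elapses. wait_until_ready() is a hypothetical helper, not gpstart's code:

```python
import time

# Sketch of a startup wait with timeout: poll a readiness check until
# it succeeds or the deadline passes. Illustrative only.

def wait_until_ready(is_ready, timeout_seconds=60, poll_interval=0.1):
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        if is_ready():
            return True
        time.sleep(poll_interval)
    return False
```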
-
--v (verbose output)
-
-Displays detailed status, progress and error messages output by the utility.
-
-
--y (do not start standby master)
-
-Optional. Do not start the standby master host. The default is to start 
-the standby master host and synchronization process.
-
-
--? | -h | --help (help)
-
-Displays the online help.
-
-
---version (show utility version)
-
-Displays the version of this utility.
-
-
-*****************************************************
-EXAMPLES
-*****************************************************
-
-Start a HAWQ system:
-
-gpstart
-
-
-Start a HAWQ system in restricted mode 
-(only allow superuser connections):
-
-gpstart -R
-
-
-Start the HAWQ master instance only and connect in utility mode:
-
-gpstart -m
-
-PGOPTIONS='-c gp_session_role=utility' psql
-
-
-Display the online help for the gpstart utility:
-
-gpstart -?
-
-
-*****************************************************
-SEE ALSO
-*****************************************************
-
-gpinitsystem, gpstop

http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/9932786b/tools/doc/gpstate_help
----------------------------------------------------------------------
diff --git a/tools/doc/gpstate_help b/tools/doc/gpstate_help
deleted file mode 100755
index 8ffc66e..0000000
--- a/tools/doc/gpstate_help
+++ /dev/null
@@ -1,203 +0,0 @@
-COMMAND NAME: gpstate
-
-Shows the status of a running Greenplum Database system.
-
-
-*****************************************************
-SYNOPSIS
-*****************************************************
-
-gpstate [-d <master_data_directory>] [-B <parallel_processes>] 
-        [-s | -b | -Q] [-p] [-i] [-f] 
-        [-v | -q] [-l <log_directory>]
-
-
-gpstate -? | -h | --help
-
-*****************************************************
-DESCRIPTION
-*****************************************************
-
-The gpstate utility displays information about a running 
-Greenplum Database instance. Because a Greenplum Database system 
-comprises multiple PostgreSQL database instances (segments) 
-spanning multiple machines, there is additional status information 
-you may want to know. The gpstate utility provides such 
-information for a Greenplum Database system, for example:
-* Which segments are down.
-* Master and segment configuration information (hosts, 
-  data directories, etc.).
-* The ports used by the system.
-
-*****************************************************
-OPTIONS
-*****************************************************
-
--b (brief status)
-
-  Optional. Display a brief summary of the state of the 
-  Greenplum Database system. This is the default option.
-
-
--B <parallel_processes>
-
-  The number of segments to check in parallel. If not specified, 
-  the utility will start up to 60 parallel processes depending on 
-  how many segment instances it needs to check.
-
-
--d <master_data_directory>
-
-  Optional. The master data directory. If not specified, the 
-  value set for $MASTER_DATA_DIRECTORY will be used.
-
-
--f (show standby master details)
-
-  Display details of the standby master host if configured.
-
-
--i (show Greenplum Database version)
-  
-  Display the Greenplum Database software version information 
-  for each instance.
-
-
--l <logfile_directory>
-
-  The directory to write the log file. Defaults to ~/gpAdminLogs.
-
-
--p (show ports)
-
-  List the port numbers used throughout the Greenplum Database 
-  system.
-
-
--q (no screen output)
-
-  Optional. Run in quiet mode. Except for warning messages, command 
-  output is not displayed on the screen. However, this information 
-  is still written to the log file.
-
-
--Q (quick status)
-
-  Optional. Checks segment status in the system catalog on 
-  the master host. Does not poll the segments for status.
-
-
--s (detailed status)
-
-  Optional. Displays detailed status information for the 
-  Greenplum Database system.
-
-
--v (verbose output)
-
-  Optional. Displays error messages and outputs detailed status 
-  and progress information.
-
-
--? | -h | --help (help)
-
-  Displays the online help.
-
-
-*****************************************************
-OUTPUT DEFINITIONS FOR DETAIL VIEW
-*****************************************************
-+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
-MASTER OUTPUT DATA
-+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
-
-* Master host - host name of the master
-
-* Master postgres process ID - PID of the master postgres database 
-                               listener process
-
-* Master data directory - file system location of the master data directory
-
-* Master port - port of the master database listener process
-
-* Master current role - dispatch = regular operating mode 
-                        utility = maintenance mode 
-
-* Greenplum array configuration type - Standard = one NIC per host 
-                                       Multi-Home = multiple NICs per host
-
-* Greenplum initsystem version - version of Greenplum Database when 
-                                 system was first initialized
-
-* Greenplum current version - current version of Greenplum Database
-
-* Postgres version - version of PostgreSQL that Greenplum Database 
-                     is based on
-
-* Master standby - host name of the standby master
-
-* Standby master state - status of the standby master: active or passive
-
-+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
-SEGMENT OUTPUT DATA
-+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
-
-* Hostname - system-configured host name
-
-* Address - network address host name (NIC name)
-
-* Datadir - file system location of segment data directory
-
-* Port - port number of segment postgres database listener process
-
-* Current Role - current role of a segment: Primary 
-
-* Preferred Role - role at system initialization time: Primary
-
-* File postmaster.pid - status of postmaster.pid lock file: Found or Missing
-
-* PID from postmaster.pid file - PID found in the postmaster.pid file
-
-* Lock files in /tmp - a segment port lock file for its postgres process is 
-                       created in /tmp (file is removed when a segment shuts down)
-
-* Active PID - active process ID of a segment
-
-* Master reports status as - segment status as reported in the system catalog: 
-                           Up or Down
-
-* Database status - status of Greenplum Database to incoming requests: 
-                    Up, Down, or Suspended. A Suspended state means database 
-                    activity is temporarily paused while a segment transitions 
-                    from one state to another.
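The "Lock files in /tmp" entry above refers to the socket lock file that a segment's postgres process creates. For a stock PostgreSQL-based segment this file is conventionally named `.s.PGSQL.<port>.lock` (an assumption here; the help text does not name it). A minimal sketch of the check:

```python
import os

def segment_lock_file(port, socket_dir="/tmp"):
    """Return the conventional socket lock file path for a segment port."""
    return os.path.join(socket_dir, ".s.PGSQL.%d.lock" % port)

def lock_file_present(port, socket_dir="/tmp"):
    """True if the segment's lock file exists (segment is up, or crashed
    without cleaning up; the file is removed on a clean shutdown)."""
    return os.path.exists(segment_lock_file(port, socket_dir))
```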
-
-*****************************************************
-EXAMPLES
-*****************************************************
-
-Show detailed status information of a Greenplum Database system:
-
-   gpstate -s
-
-
-Do a quick check for down segments in the master host system catalog:
-
-   gpstate -Q
-
-
-Show information about the standby master configuration:
-
-   gpstate -f
-
-
-Display the Greenplum software version information:
-
-   gpstate -i
-
-
-*****************************************************
-SEE ALSO
-*****************************************************
-
-gpstart, gplogfilter

http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/9932786b/tools/doc/gpstop_help
----------------------------------------------------------------------
diff --git a/tools/doc/gpstop_help b/tools/doc/gpstop_help
deleted file mode 100755
index 04ee8e6..0000000
--- a/tools/doc/gpstop_help
+++ /dev/null
@@ -1,189 +0,0 @@
-COMMAND NAME: gpstop
-
-Stops or restarts a HAWQ system.
-
-
-*****************************************************
-SYNOPSIS
-*****************************************************
-
-gpstop [-d <master_data_directory>] [-B <parallel_processes>] 
-       [-M smart | fast | immediate] [-t <timeout_seconds>]
-       [-r] [-y] [-a] [-l <logfile_directory>] [-v | -q]
-
-gpstop -m [-d <master_data_directory>] [-y] [-l <logfile_directory>] 
-       [-v | -q]
-
-gpstop -u [-d <master_data_directory>] [-l <logfile_directory>] 
-          [-v | -q] 
-
-gpstop --version
-
-gpstop -? | -h | --help
-
-
-*****************************************************
-DESCRIPTION
-*****************************************************
-
-The gpstop utility is used to stop the database servers that 
-comprise a HAWQ system. When you stop a HAWQ system, you 
-are actually stopping several postgres database server 
-processes at once (the master and all of the segment instances).
-The gpstop utility handles the shutdown of the individual 
-instances; the instances are shut down in parallel. 
-
-By default, you are not allowed to shut down HAWQ if there 
-are any client connections to the database; if there are 
-transactions in progress, the default behavior is to wait 
-for them to commit before shutting down. Use the -M fast 
-option to roll back all in-progress transactions and 
-terminate any connections before shutting down.
-
-With the -u option, the utility reloads changes made to the 
-master pg_hba.conf file or to runtime configuration parameters 
-in the master postgresql.conf file without interrupting 
-service. Note that any active sessions will not pick up the 
-changes until they reconnect to the database.
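The reload path described above can be sketched as: read the postmaster PID from the data directory and signal it with SIGHUP. This is a hypothetical helper, not gpstop's actual implementation; it assumes the stock PostgreSQL convention that the first line of postmaster.pid holds the PID.

```python
import os
import signal

def read_postmaster_pid(datadir):
    """Parse the postmaster PID from the first line of postmaster.pid."""
    with open(os.path.join(datadir, "postmaster.pid")) as f:
        return int(f.readline().strip())

def reload_config(datadir):
    """Ask a running postmaster to re-read pg_hba.conf / postgresql.conf
    without interrupting service (only 'runtime' parameters take effect)."""
    os.kill(read_postmaster_pid(datadir), signal.SIGHUP)
```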
-
-*****************************************************
-OPTIONS
-*****************************************************
-
--a (do not prompt)
-
- Do not prompt the user for confirmation.
-
-
--B <parallel_processes>
-
- The number of segments to stop in parallel. If not specified, 
- the utility will start up to 60 parallel processes depending 
- on how many segment instances it needs to stop.
-
-
--d <master_data_directory>
-
- Optional. The master host data directory. If not specified, 
- the value set for $MASTER_DATA_DIRECTORY will be used.
-
-
--l <logfile_directory>
-
- The directory to write the log file. Defaults to ~/gpAdminLogs.
-
-
--m (master only)
-
- Optional. Shuts down a HAWQ master instance that was 
- started in maintenance mode.
-
-
--M fast (fast shutdown - rollback)
-
- Fast shut down. Any transactions in progress are interrupted 
- and rolled back. 
-
-
--M immediate (immediate shutdown - abort)
-
- Immediate shut down. Any transactions in progress are aborted. 
- This shutdown mode is not recommended. This mode kills all postgres 
- processes without allowing the database server to complete transaction 
- processing or clean up any temporary or in-process work files. 
- 
-
--M smart (smart shutdown - warn)
- 
- Smart shut down. If there are active connections, this command 
- fails with a warning. This is the default shutdown mode.
-
-
--q (no screen output)
-
- Run in quiet mode. Command output is not displayed on the 
- screen, but is still written to the log file.
-
-
--r (restart)
-
- Restart after shutdown is complete.
-
--t <timeout_seconds>
-
- Specifies a timeout threshold (in seconds) to wait for a 
- segment instance to shut down. If a segment instance does not 
- shut down in the specified number of seconds, gpstop displays 
- a message indicating that one or more segments are still in 
- the process of shutting down and that you cannot restart 
- HAWQ until the segment instance(s) are stopped. 
- This option is useful in situations where gpstop is executed 
- and there are very large transactions that need to roll back. 
- These large transactions can take over a minute to roll back 
- and can exceed the default timeout period of 600 seconds.
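The timeout behavior described for -t amounts to a poll-until-deadline loop. A generic sketch (not gpstop's actual code):

```python
import time

def wait_for(predicate, timeout_seconds=600, interval=1.0):
    """Poll predicate() until it returns True or the timeout elapses.

    Returns True if the condition was met in time, False otherwise,
    mirroring how a stop utility might wait for a segment to shut down
    before reporting that it is still stopping.
    """
    deadline = time.time() + timeout_seconds
    while time.time() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return predicate()  # one final check at the deadline
```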
-
-
--u (reload pg_hba.conf and postgresql.conf files only)
-
- This option reloads the pg_hba.conf files of the master and 
- segments and the runtime parameters of the postgresql.conf files 
- but does not shut down the HAWQ array. Use this 
- option to make new configuration settings active after editing 
- postgresql.conf or pg_hba.conf. Note that this only applies to 
- configuration parameters that are designated as runtime 
- parameters. In HAWQ, this option cannot be used if any 
- segments have failed.
-
-
--v (verbose output)
-
- Displays detailed status, progress and error messages output 
- by the utility.
-
-
---version (show utility version)
-
- Displays the version of this utility.
-
-
--y (do not stop standby master)
-
- Do not stop the standby master process. The default is to stop 
- the standby master.
-
-
--? | -h | --help (help)
-
- Displays the online help.
-
-
-*****************************************************
-EXAMPLES
-*****************************************************
-
-Stop a HAWQ system in smart mode:
-
-  gpstop
-
-Stop a HAWQ system in fast mode:
-
-  gpstop -M fast
-
-
-Stop all segment instances and then restart the system:
-
-  gpstop -r
-
-
-Stop a master instance that was started in maintenance mode:
-
-  gpstop -m
-
-
-Reload the postgresql.conf and pg_hba.conf files after 
-making runtime configuration parameter changes but do not 
-shutdown the HAWQ array:
-
-  gpstop -u
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/9932786b/tools/sbin/gprepairmirrorseg.py
----------------------------------------------------------------------
diff --git a/tools/sbin/gprepairmirrorseg.py b/tools/sbin/gprepairmirrorseg.py
deleted file mode 100755
index 856ec5c..0000000
--- a/tools/sbin/gprepairmirrorseg.py
+++ /dev/null
@@ -1,539 +0,0 @@
-#!/usr/bin/env python
-#
-# Copyright (c) Greenplum Inc 2010. All Rights Reserved.
-#
-
-from gppylib.commands.base import *
-from gppylib.commands.gp import *
-from optparse import Option, OptionGroup, OptionParser, OptionValueError, SUPPRESS_USAGE
-from time import strftime, sleep
-import bisect
-import copy
-import datetime
-import os
-import pprint
-import signal
-import sys
-import tempfile
-import threading
-import traceback
-
-
-
-sys.path.append(os.path.join(os.path.dirname(__file__), "../bin"))
-sys.path.append(os.path.join(os.path.dirname(__file__), "../bin/lib"))
-
-
-try:
-    from pysync import *
-    from gppylib.commands.unix import *
-    from gppylib.commands.gp import *
-    from gppylib.commands.pg import PgControlData
-    from gppylib.gparray import GpArray, get_host_interface
-    from gppylib.gpparseopts import OptParser, OptChecker
-    from gppylib.gplog import *
-    from gppylib.db import dbconn
-    from gppylib.db import catalog
-    from gppylib.userinput import *
-    from pygresql.pgdb import DatabaseError
-    from pygresql import pg
-except ImportError, e:
-    sys.exit('ERROR: Cannot import modules.  Please check that you have sourced greenplum_path.sh.  Detail: ' + str(e))
-
-#
-# Constants
-#
-
-EXECNAME = os.path.split(__file__)[-1]
-FULL_EXECNAME = os.path.abspath( __file__ )
-
-HOME_DIRECTORY = os.path.expanduser("~")
-
-GPADMINLOGS_DIRECTORY = HOME_DIRECTORY + "/gpAdminLogs"
-
-PID_FILE = GPADMINLOGS_DIRECTORY + '/gprepairmirrorseg.pid'
-
-DESCRIPTION = ("""Repair utility to re-sync primary and mirror files.""")
-
-_help  = [""" TODO add help """]
-
-_usage = """ TODO add usage """
-
-TEN_MEG = 10485760
-ONE_GIG = 128  * TEN_MEG
-TEN_GIG = 1024 * TEN_MEG
-
-
-# Keep the value of the name/value dictionary entry the same length for printing.
-categoryAction = {}
-
-COPY       = 'COPY     '
-DELETE     = 'DELETE   '
-NO_ACTION  = 'NO ACTION'
-
-categoryAction['ao:']              = COPY
-categoryAction['heap:']            = COPY
-categoryAction['btree:']           = COPY
-categoryAction['extra_p:']         = COPY
-categoryAction['extra_m:']         = DELETE
-categoryAction['missing_topdir_p:'] = COPY
-categoryAction['missing_topdir_m:'] = COPY
-categoryAction['unknown:']         = NO_ACTION
-
-
-#-------------------------------------------------------------------------------
-def prettyPrintFiles(resyncFile):
-  
-  fileIndex = 0
-  fileListLength = len(resyncFile.fileList)
-  
-  for fileIndex in range(fileListLength):
-      category = resyncFile.getCategory(fileIndex)
-      action   = categoryAction[category]
-      logger.info("")
-      logger.info("  Resync file number %d" % (fileIndex + 1))
-      if action == COPY:
-         logger.info("    Copy source to target" )
-         logger.info("    " + "Source Host = " + resyncFile.getSourceHost(fileIndex))
-         logger.info("    " + "Source File = " + resyncFile.getSourceFile(fileIndex))
-         logger.info("    " + "Target Host = " + resyncFile.getTargetHost(fileIndex))
-         logger.info("    " + "Target File = " + resyncFile.getTargetFile(fileIndex))
-         logger.info("")
-      if action == DELETE:
-         logger.info("    Delete file")
-         logger.info("    " + "Host = " + resyncFile.getSourceHost(fileIndex))
-         logger.info("    " + "File = " + resyncFile.getSourceFile(fileIndex))
-      if action == NO_ACTION:
-         logger.info("    Unknown file type. No action will be taken.")
-         logger.info("    " + "Source Host = " + resyncFile.getSourceHost(fileIndex))
-         logger.info("    " + "Source File = " + resyncFile.getSourceFile(fileIndex))
-         logger.info("    " + "Target Host = " + resyncFile.getTargetHost(fileIndex))
-         logger.info("    " + "Target File = " + resyncFile.getTargetFile(fileIndex))
-  
-  logger.info("")
-
-
-#-------------------------------------------------------------------------------
-def sshBusy(cmd):
-    """ 
-      Check the results of a Command to see whether ssh was too busy
-      to make the connection.  Returns True if the command should be
-      retried; False if the command completed, successfully or not,
-      and a retry is not possible or necessary.
-    """
-    results = cmd.get_results()
-    resultStr = results.printResult()
-    return (results.rc != 0 and
-            resultStr.find("ssh_exchange_identification: Connection closed by remote host") != -1)
-
-
-
-#-------------------------------------------------------------------------------
-def runAndCheckCommandComplete(cmd):
-    """ 
-      Run a Command and return False if ssh was too busy to make the
-      connection (the caller should retry).  Returns True if the
-      command completed, successfully or not.
-    """
-    cmd.run(validateAfter = False)
-    if sshBusy(cmd):
-       # Couldn't make the connection; log, delay briefly, and let the caller retry.
-       logger.debug("gprepairmirrorseg ssh is busy... need to retry the command: " + str(cmd))
-       sleep(1)
-       return False
-    return True
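Callers of runAndCheckCommandComplete are expected to retry while it returns False. A hypothetical wrapper illustrating that contract (standalone; run_once stands in for any callable with the same True/False semantics):

```python
def run_with_retry(run_once, max_attempts=5):
    """Re-run a command whose transport (e.g. ssh) may be transiently busy.

    run_once() follows the runAndCheckCommandComplete contract: it returns
    False when the connection could not be made and the call should be
    retried, and True once the command actually ran (successfully or not).
    Gives up after max_attempts and returns False.
    """
    for _ in range(max_attempts):
        if run_once():
            return True
    return False
```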
-
-
-#-------------------------------------------------------------------------------                                                             
-def parseargs():
-    parser = OptParser( option_class = OptChecker
-                      , description  = ' '.join(DESCRIPTION.split())
-                      , version      = '%prog version $Revision: #12 $'
-                      )
-    parser.setHelp(_help)
-    parser.set_usage('%prog ' + _usage)
-    parser.remove_option('-h')
-
-    parser.add_option('-f', '--file', default='',
-                      help='the name of a file containing the re-sync file list.')
-    parser.add_option('-v','--verbose', action='store_true',
-                      help='debug output.', default=False)
-    parser.add_option('-h', '-?', '--help', action='help',
-                        help='show this help message and exit.', default=False)
-    parser.add_option('--usage', action="briefhelp")    
-    parser.add_option('-d', '--master_data_directory', type='string',
-                        dest="masterDataDirectory",
-                        metavar="<master data directory>",
-                        help="Optional. The master host data directory. If not specified, the value set for $MASTER_DATA_DIRECTORY will be used.",
-                        default=get_masterdatadir()
-                     )
-    parser.add_option('-a', help='don\'t ask to confirm repairs',
-                      dest='confirm', default=True, action='store_false')
-
-
-
-    # Parse the command line arguments
-    (options, args) = parser.parse_args()
-
-    if len(args) > 0:
-        logger.error('Unknown argument %s' % args[0])
-        parser.exit()
-
-    return options, args
-
-#-------------------------------------------------------------------------------
-def sig_handler(sig, arg):
-    print "Handling signal..."
-    signal.signal(signal.SIGTERM, signal.SIG_DFL)
-    signal.signal(signal.SIGHUP, signal.SIG_DFL)
-
-    # raise sig
-    os.kill(os.getpid(), sig)
-
-
-#-------------------------------------------------------------------------------
-def create_pid_file():
-    """Creates the gprepairmirrorseg pid file"""
-    fp = None
-    try:
-        fp = open(PID_FILE, 'w')
-        fp.write(str(os.getpid()))
-    finally:
-        if fp: fp.close()
-
-
-#-------------------------------------------------------------------------------
-def remove_pid_file():
-    """Removes the gprepairmirrorseg pid file"""
-    try:
-        os.unlink(PID_FILE)
-    except:
-        pass
-
-
-#-------------------------------------------------------------------------------
-def is_gprepairmirrorseg_running():
-    """Checks if there is another instance of gprepairmirrorseg running"""
-    is_running = False
-    try:
-        fp = open(PID_FILE, 'r')
-        pid = int(fp.readline().strip())
-        fp.close()
-        is_running = check_pid(pid)
-    except IOError:
-        pass
-    except Exception, msg:
-        raise
-
-    return is_running
-
-#-------------------------------------------------------------------------------
-def check_master_running():
-    logger.debug("Check if Master is running...")
-    if os.path.exists(options.masterDataDirectory + '/postmaster.pid'):
-       logger.warning("postmaster.pid file exists on Master")
-       logger.warning("The database must not be running during gprepairmirrorseg.")
-
-       # Would be nice to check the standby master as well, but if the system is down, we can't find it.
-       raise Exception("Unable to continue gprepairmirrorseg")
-
-
-#-------------------------------------------------------------------------------
-#-------------------------------------------------------------------------------
-class InvalidStatusError(Exception): pass
-
-
-#-------------------------------------------------------------------------------
-#-------------------------------------------------------------------------------
-class ValidationError(Exception): pass
-
-
-#-------------------------------------------------------------------------------
-#-------------------------------------------------------------------------------
-class GPResyncFile:
-    """ 
-      This class represents the information stored in the resync file.
-      The expected file format is: 
-      
-        <category> <good-segment-host>:<good-segment-file> <bad segment host>:<bad segment file>
-
-      where <category> is one of:
-
-        ao               - an append-only table file
-        heap             - a heap table file
-        btree            - a btree index file
-        unknown          - an unknown file type
-        extra_p          - an extra file on the primary (described by the <good-segment> fields)
-        extra_m          - an extra file on the mirror (described by the <bad-segment> fields)
-        missing_topdir_p - a missing top-level directory on the primary
-        missing_topdir_m - a missing top-level directory on the mirror
-       
-    """
-    
-
-    def __init__(self, filename):
-        self.filename = filename
-        self.fileList = []
-        self.readfile()
-
-    #-------------------------------------------------------------------------------
-    def __str__(self):
-       tempStr = "self.filename = " + str(self.filename) + '\n'
-       return tempStr
-
-    #-------------------------------------------------------------------------------                                                     
-    def readfile(self):
-        try:
-            file = None
-            file = open(self.filename, 'r')
-
-            for line in file:
-                line = line.strip()
-                (category, goodseg, badseg) = line.split()
-                (goodseghost, goodsegfile) = goodseg.split(":")
-                (badseghost, badsegfile) = badseg.split(":")
-                self.fileList.append([category, goodseghost, goodsegfile, badseghost, badsegfile])
-        except IOError, ioe:
-            logger.error("Can not read file %s. Exception: %s" % (self.filename, str(ioe)))
-            raise Exception("Unable to read file: %s" % self.filename)
-        finally:
-            if file != None:
-               file.close()
-
-    #-------------------------------------------------------------------------------
-    def getEntry(self, index):
-        logger.debug("Entering getEntry, index = %s" % str(index))
-        return self.fileList[index]
-
-    #-------------------------------------------------------------------------------                                                     
-    def getCategory(self, index):
-        return self.fileList[index][0]
-
-    #-------------------------------------------------------------------------------
-    def getSourceHost(self, index):
-        return str(self.fileList[index][1])
-
-    #-------------------------------------------------------------------------------
-    def getSourceFile(self, index):
-        return str(self.fileList[index][2])
-    
-    #-------------------------------------------------------------------------------
-    def getTargetHost(self, index):
-        return str(self.fileList[index][3])    
-
-    #-------------------------------------------------------------------------------
-    def getTargetFile(self, index):
-        return str(self.fileList[index][4])
-
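The line format documented in the GPResyncFile docstring can be illustrated with a small standalone parser (a re-implementation for illustration only; note that in practice the category token carries a trailing colon, matching the categoryAction keys above):

```python
def parse_resync_line(line):
    """Parse one '<category> <host>:<file> <host>:<file>' resync entry
    into the same 5-element list shape GPResyncFile.fileList uses."""
    category, goodseg, badseg = line.strip().split()
    # split on the first colon only, so file paths keep any later colons
    good_host, good_file = goodseg.split(":", 1)
    bad_host, bad_file = badseg.split(":", 1)
    return [category, good_host, good_file, bad_host, bad_file]
```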
-
-#------------------------------------------------------------------------
-#-------------------------------------------------------------------------                    
-class RemoteCopyPreserve(Command):
-    def __init__( self
-                , name
-                , srcDirectory
-                , dstHost
-                , dstDirectory
-                , ctxt = LOCAL
-                , remoteHost = None
-                ):
-        self.srcDirectory = srcDirectory
-        self.dstHost = dstHost
-        self.dstDirectory = dstDirectory        
-        cmdStr="%s -rp %s %s:%s" % (findCmdInPath('scp'),srcDirectory,dstHost,dstDirectory)
-        Command.__init__(self,name,cmdStr,ctxt,remoteHost)
-
-
-#-------------------------------------------------------------------------
-#-------------------------------------------------------------------------                                                       
-class MoveDirectoryContents(Command):
-    """ This class moves the contents of a local directory."""
-
-    def __init__( self
-                , name
-                , srcDirectory
-                , dstDirectory
-                , ctxt = LOCAL
-                , remoteHost = None
-                ):
-        self.srcDirectory = srcDirectory
-        self.srcDirectoryFiles = self.srcDirectory + "." + 'dirfilelist' 
-        self.dstDirectory = dstDirectory
-        ls = findCmdInPath("ls")
-        cat = findCmdInPath("cat")
-        xargs = findCmdInPath("xargs")
-        mv = findCmdInPath("mv")
-        cmdStr = "%s -1 %s > %s" % (ls, self.srcDirectory, self.srcDirectoryFiles)
-        cmdStr = cmdStr + ";%s %s" % (cat, self.srcDirectoryFiles)
-        cmdStr = cmdStr + " | %s -I xxx %s %s/xxx %s" % (xargs, mv, self.srcDirectory, self.dstDirectory)
-        cmdStr = cmdStr + "; rm %s" % (self.srcDirectoryFiles)
-        Command.__init__(self,name,cmdStr,ctxt,remoteHost)
-        
-
-#-------------------------------------------------------------------------
-#-------------------------------------------------------------------------
-class PySyncPlus(PySync):
-    """
-    This class is really just PySync but it records all the parameters passed in.
-    """
-
-    def __init__( self
-                , name
-                , srcDir
-                , dstHost
-                , dstDir
-                , ctxt = LOCAL
-                , remoteHost = None
-                , options = None
-                ):
-        self.namePlus = name
-        self.srcDirPlus = srcDir
-        self.dstHostPlus = dstHost
-        self.dstDirPlus = dstDir
-        self.ctxtPlus = ctxt
-        self.remoteHostPlus = remoteHost
-        self.optionsPlus = options
-        PySync.__init__( self
-                       , name = name
-                       , srcDir = srcDir
-                       , dstHost = dstHost
-                       , dstDir = dstDir
-                       , ctxt = ctxt
-                       , remoteHost = remoteHost
-                       , options = options 
-                       )
-        self.destinationHost = dstHost
-
-
-#-------------------------------------------------------------------------------
-#--------------------------------- Main ----------------------------------------
-#-------------------------------------------------------------------------------
-""" 
-   This is the main body of code for gprepairmirrorseg.
-"""
-
-
-
-try:
-  # setup signal handlers so we can clean up correctly
-  signal.signal(signal.SIGTERM, sig_handler)
-  signal.signal(signal.SIGHUP, sig_handler)
-  
-  logger = get_default_logger()
-  applicationName = EXECNAME
-  setup_tool_logging( appName = applicationName
-                    , hostname = getLocalHostname()
-                    , userName = getUserName()
-                    )
-
-  options, args = parseargs()
-  
-  check_master_running()
-
-  if options.file == None or len(options.file) == 0:
-     logger.error('The --file command line argument is required')
-     raise Exception("Unable to continue")
-
-  if options.verbose:
-     enable_verbose_logging()
-
-  remove_pid = True
-  if is_gprepairmirrorseg_running():
-     logger.error('gprepairmirrorseg is already running.  Only one instance')
-     logger.error('of gprepairmirrorseg is allowed at a time.')
-     remove_pid = False
-     sys.exit(1)
-  else:
-     create_pid_file()
-
-  resyncFiles = GPResyncFile(options.file)  
-  logger.info("gprepairmirrorseg will attempt to repair the following files:")
-  prettyPrintFiles(resyncFiles)
-
-  if options.confirm == True:
-     msg = "Do you wish to continue with re-sync of these files"
-     ans = ask_yesno(None,msg,'N')
-     if not ans:
-        logger.info("User abort requested, Exiting...")
-        sys.exit(4)
-     
-     msg = "Are you sure you wish to continue"
-     ans = ask_yesno(None,msg,'N')
-     if not ans:
-        logger.info("User abort requested, Exiting...")
-        sys.exit(4)
-
-  syncListLength = len(resyncFiles.fileList)
-  
-  for index in range(syncListLength):
-      category = resyncFiles.getCategory(index)
-      action   = categoryAction[category]
-      sourceHost = resyncFiles.getSourceHost(index)
-      sourceFile = resyncFiles.getSourceFile(index)
-      sourceDir, sf = os.path.split(sourceFile)
-      targetHost = resyncFiles.getTargetHost(index)
-      targetFile = resyncFiles.getTargetFile(index)
-      targetDir, tf = os.path.split(targetFile)
-      
-      syncOptions = " -i " + sf
-      
-      if action == COPY:
-         cmd = PySyncPlus( name = "gprepairsegment sync %s:%s to %s:%s" % (sourceHost, sourceFile, targetHost, targetFile)
-                         , srcDir = sourceDir
-                         , dstHost = targetHost
-                         , dstDir = targetDir
-                         , ctxt = REMOTE
-                         , remoteHost = sourceHost
-                         , options = syncOptions
-                         )
-         cmd.run(validateAfter = True)
-         logger.info(str(cmd))
-      if action == DELETE:
-         cmd = RemoveFiles(name = "gprepairmirrorseg remove extra file", directory = sourceFile, ctxt = LOCAL, remoteHost = sourceHost)
-         cmd.run(validateAfter = True)
-         logger.info(str(cmd))
-      if action == NO_ACTION:
-         logger.warn("No action will be taken for %s:%s and %s:%s" % (sourceHost, sourceFile, targetHost, targetFile))
-
-  sys.exit(0)
-
-except Exception,e:
-    logger.error("gprepairmirrorseg failed: %s \n\nExiting..." % str(e) )
-    traceback.print_exc()
-    sys.exit(3)
-
-except KeyboardInterrupt:
-    # Disable SIGINT while we shutdown.
-    signal.signal(signal.SIGINT,signal.SIG_IGN)
-
-    # Re-enable SIGINT
-    signal.signal(signal.SIGINT,signal.default_int_handler)
-
-    sys.exit('\nUser Interrupted')
-
-
-
-finally:
-    try:
-        if remove_pid:
-            remove_pid_file()
-    except Exception:
-        pass
-    logger.info("gprepairmirrorseg exit")
-
-
-    

http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/9932786b/tools/sbin/gpsegstart.py
----------------------------------------------------------------------
diff --git a/tools/sbin/gpsegstart.py b/tools/sbin/gpsegstart.py
deleted file mode 100755
index 667785d..0000000
--- a/tools/sbin/gpsegstart.py
+++ /dev/null
@@ -1,555 +0,0 @@
-#!/usr/bin/env python
-# Line too long - pylint: disable=C0301
-# Invalid name  - pylint: disable=C0103
-#
-# Copyright (c) EMC/Greenplum Inc 2011. All Rights Reserved.
-# Copyright (c) Greenplum Inc 2008. All Rights Reserved. 
-#
-"""
-Internal Use Function.
-"""
-
-# THIS IMPORT MUST COME FIRST
-from gppylib.mainUtils import simple_main, addStandardLoggingAndHelpOptions
-
-import os, pickle, base64
-
-from gppylib.gpparseopts import OptParser, OptChecker
-from gppylib import gparray, gplog
-from gppylib.commands import base, gp
-from gppylib.utils import parseKeyColonValueLines
-
-logger = gplog.get_default_logger()
-
-
-DESCRIPTION = """
-This utility is NOT SUPPORTED and is for internal-use only.
-Starts a set of one or more segment databases.
-"""
-
-HELP = ["""
-Utility should only be used by other GP utilities.  
-
-Return codes:
-  0 - All segments started successfully
-  1 - At least one segment didn't start successfully
-
-"""]
-
-
-
-
-class StartResult:
-    """
-    Recorded result information from an attempt to start one segment.
-    """
-
-    def __init__(self, datadir, started, reason, reasoncode):
-        """
-        @param datadir
-        @param started
-        @param reason
-        @param reasoncode one of the gp.SEGSTART_* values
-        """
-        self.datadir    = datadir
-        self.started    = started
-        self.reason     = reason
-        self.reasoncode = reasoncode
-    
-    def __str__(self):
-        return "".join([
-                "STATUS", 
-                "--DIR:", str(self.datadir),
-                "--STARTED:", str(self.started),
-                "--REASONCODE:", str(self.reasoncode),
-                "--REASON:", str(self.reason)
-                ])
-
-
-class OverallStatus:
-    """
-    Mapping and segment status information for all segments on this host.
-    """
-
-    def __init__(self, dblist):
-        """
-        Build the datadir->segment mapping and remember the original size.
-        Since segments which fail to start will be removed from the mapping, 
-        we later test the size of the map against the original size when
-        returning the appropriate status code to the caller.
-        """
-        self.dirmap          = dict([(seg.getSegmentDataDirectory(), seg) for seg in dblist])
-        self.original_length = len(self.dirmap)
-        self.results         = []
-        self.logger          = logger
-
-
-    def mark_failed(self, datadir, msg, reasoncode):
-        """
-        Mark a segment as failed during some startup process.
-        Remove the entry for the segment from dirmap.
-
-        @param datadir
-        @param msg
-        @param reasoncode one of the gp.SEGSTART_* constant values
-        """
-        self.logger.info("Marking failed %s, %s, %s" % (datadir, msg, reasoncode))
-        self.results.append( StartResult(datadir=datadir, started=False, reason=msg, reasoncode=reasoncode) )
-        del self.dirmap[datadir]
-
-
-    def remaining_items_succeeded(self):
-        """
-        Add results for all remaining items in our datadir->segment map.
-        """
-        for datadir in self.dirmap.keys():
-            self.results.append( StartResult(datadir=datadir, started=True, reason="Start Succeeded", reasoncode=gp.SEGSTART_SUCCESS ) )
-
-
-    def log_results(self):
-        """
-        Log info messages with our results
-        """
-        status = '\nCOMMAND RESULTS\n' + "\n".join([str(result) for result in self.results])
-        self.logger.info(status)
-
-
-    def exit_code(self):
-        """
-        Return an appropriate exit code: 0 if no failures, 1 if some segments failed to start.
-        """
-        if len(self.dirmap) != self.original_length:
-            return 1
-        return 0
-
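The bookkeeping pattern used by OverallStatus (build a datadir map, delete entries that fail, compare sizes at the end) can be sketched in miniature as a hypothetical standalone function:

```python
def summarize_starts(datadirs, failed):
    """Return (results, exit_code) using the same delete-from-map pattern.

    datadirs: list of segment data directories attempted.
    failed:   dict mapping datadir -> failure reason.
    Exit code is 0 only if every segment started, 1 otherwise.
    """
    dirmap = dict((d, None) for d in datadirs)
    results = []
    for d, reason in failed.items():
        results.append((d, False, reason))
        del dirmap[d]          # failures drop out of the map
    for d in dirmap:
        results.append((d, True, "Start Succeeded"))
    exit_code = 0 if len(dirmap) == len(datadirs) else 1
    return results, exit_code
```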
-
-
-class GpSegStart:
-    """
-    Logic to start segment servers on this host.
-    """
-
-    def __init__(self, dblist, gpversion, collation, mirroringMode, num_cids, era, 
-                 timeout, pickledTransitionData, specialMode, wrapper, wrapper_args):
-
-        # validate/store arguments
-        #
-        self.dblist                = map(gparray.GpDB.initFromString, dblist)
-
-        expected_gpversion         = gpversion
-        actual_gpversion           = gp.GpVersion.local('local GP software version check', os.path.abspath(os.pardir))
-        if actual_gpversion != expected_gpversion:
-            raise Exception("Local Software Version does not match what is expected.\n"
-                            "The local software version is: '%s'\n"
-                            "But we were expecting it to be: '%s'\n"
-                            "Please review and correct" % (actual_gpversion, expected_gpversion))
-
-        collation_strings          = collation.split(':')
-        if len(collation_strings) != 3:
-            raise Exception("Invalid collation string specified!")
-        (self.expected_lc_collate, self.expected_lc_monetary, self.expected_lc_numeric) = collation_strings
-
-        self.mirroringMode         = mirroringMode
-        self.num_cids              = num_cids
-        self.era                   = era
-        self.timeout               = timeout
-        self.pickledTransitionData = pickledTransitionData
-
-        assert(specialMode in [None, 'upgrade', 'maintenance'])
-        self.specialMode           = specialMode
-
-        self.wrapper               = wrapper
-        self.wrapper_args          = wrapper_args
-
-        # initialize state
-        #
-        self.pool                  = base.WorkerPool(numWorkers=len(dblist))
-        self.logger                = logger
-        self.overall_status        = None
-
-
-    def __checkPostmasters(self, must_be_running):
-        """
-        Check that segment postmasters have been started.
-        @param must_be_running True if postmasters must be running by now.
-        """
-        self.logger.info("Checking segment postmasters... (must_be_running %s)" % must_be_running)
-
-        for datadir in self.overall_status.dirmap.keys():
-            pid     = gp.read_postmaster_pidfile(datadir)
-            running = gp.check_pid(pid)
-            msg     = "Postmaster %s %srunning (pid %d)" % (datadir, "is " if running else "NOT ", pid)
-            self.logger.info(msg)
-
-            if must_be_running and not running:
-                reasoncode = gp.SEGSTART_ERROR_PG_CTL_FAILED
-                self.overall_status.mark_failed(datadir, msg, reasoncode)
-
-
-    def __validateDirectoriesAndSetupRecoveryStartup(self):
-        """
-        validate that the directories all exist and run recovery startup if needed
-        """
-        self.logger.info("Validating directories...")
-
-        for datadir in self.overall_status.dirmap.keys():
-            self.logger.info("Validating directory: %s" % datadir)
-
-            if os.path.isdir(datadir):
-                #
-                # segment datadir exists
-                #
-                pg_log = os.path.join(datadir, 'pg_log')
-                if not os.path.exists(pg_log):
-                    os.mkdir(pg_log)
-                    
-                postmaster_pid = os.path.join(datadir, 'postmaster.pid')
-                if os.path.exists(postmaster_pid):
-                    self.logger.warning("postmaster.pid file exists, checking if recovery startup required")
-
-                    msg = gp.recovery_startup(datadir)
-                    if msg:
-                        reasoncode = gp.SEGSTART_ERROR_STOP_RUNNING_SEGMENT_FAILED
-                        self.overall_status.mark_failed(datadir, msg, reasoncode)
-
-            else:
-                #
-                # segment datadir does not exist
-                #
-                msg = "Segment data directory does not exist for: '%s'" % datadir
-                self.logger.warning(msg)
-
-                reasoncode = gp.SEGSTART_ERROR_DATA_DIRECTORY_DOES_NOT_EXIST
-                self.overall_status.mark_failed(datadir, msg, reasoncode)
-
-
-    def __startSegments(self):
-        """
-        Start the segments themselves 
-        """
-        self.logger.info("Starting segments... (mirroringMode %s)" % self.mirroringMode)
-
-        for datadir, seg in self.overall_status.dirmap.items():
-            cmd = gp.SegmentStart("Starting seg at dir %s" % datadir, 
-                                  seg,
-                                  self.num_cids,
-                                  self.era,
-                                  self.mirroringMode,
-                                  noWait=(self.mirroringMode == 'quiescent'),
-                                  timeout=self.timeout,
-                                  specialMode=self.specialMode,
-                                  wrapper=self.wrapper,
-                                  wrapper_args=self.wrapper_args)
-            self.pool.addCommand(cmd)
-
-        self.pool.join()
-
-        for cmd in self.pool.getCompletedItems():
-            res = cmd.get_results()
-            if res.rc != 0:
-
-                # we should also read in last entries in startup.log here
-                
-                datadir    = cmd.segment.getSegmentDataDirectory()
-                msg        = "PG_CTL failed.\nstdout:%s\nstderr:%s\n" % (res.stdout, res.stderr)
-                reasoncode = gp.SEGSTART_ERROR_PG_CTL_FAILED
-                self.overall_status.mark_failed(datadir, msg, reasoncode)
-
-        self.pool.empty_completed_items()
-
-
-
-    def __convertSegments(self):
-        """
-        Inform segments of their role
-        """
-        if self.mirroringMode != 'quiescent':
-            self.logger.info("Not transitioning segments, mirroringMode is %s..." % self.mirroringMode)
-            return
-
-        self.logger.info("Transitioning segments, mirroringMode is %s..."  % self.mirroringMode)
-
-        transitionData = None
-        if self.pickledTransitionData is not None:
-            transitionData = pickle.loads(base64.urlsafe_b64decode(self.pickledTransitionData))
-
-        # send transition messages to the segments
-        #
-        for datadir, seg in self.overall_status.dirmap.items():
-            #
-            # This cmd will deliver a message to the postmaster using gp_primarymirror
-            # (look for the protocol message type PRIMARY_MIRROR_TRANSITION_REQUEST_CODE )
-            #
-            port = seg.getSegmentPort()
-            cmd  = gp.SendFilerepTransitionMessage.buildTransitionMessageCommand(transitionData, datadir, port)
-
-            self.pool.addCommand(cmd)
-        self.pool.join()
-
-
-        # examine the results from the segments
-        #
-        segments     = self.overall_status.dirmap.values()
-        dataDirToSeg = gparray.GpArray.getSegmentsGroupedByValue(segments, gparray.GpDB.getSegmentDataDirectory)
-        toStop       = []
-        cmds         = self.pool.getCompletedItems()
-
-        for cmd in cmds:
-            res = cmd.get_results()
-            if res.rc == 0:
-                continue
-
-            # some form of failure
-            #
-            stdoutFromFailure = res.stdout.replace("\n", " ").strip()
-            stderrFromFailure = res.stderr.replace("\n", " ").strip()
-            shouldStop = False
-
-            if res.rc == gp.SendFilerepTransitionMessage.TRANSITION_ERRCODE_ERROR_SERVER_DID_NOT_RETURN_DATA:
-                msg        = "Segment did not respond to startup request; check segment logfile"
-                reasoncode = gp.SEGSTART_ERROR_SERVER_DID_NOT_RESPOND
-
-                # server crashed when sending response, should ensure it's stopped completely!
-                shouldStop = True
-
-            elif stderrFromFailure.endswith("failure: Error: MirroringFailure"):
-                msg        = "Failure in segment mirroring; check segment logfile"
-                reasoncode = gp.SEGSTART_ERROR_MIRRORING_FAILURE
-
-            elif stderrFromFailure.endswith("failure: Error: PostmasterDied"):
-                msg        = "Segment postmaster has exited; check segment logfile"
-                reasoncode = gp.SEGSTART_ERROR_POSTMASTER_DIED
-
-            elif stderrFromFailure.endswith("failure: Error: InvalidStateTransition"):
-                msg        = "Not a valid operation at this time; check segment logfile"
-                reasoncode = gp.SEGSTART_ERROR_INVALID_STATE_TRANSITION
-
-                # This should never happen, but if it does then we will ensure process is gone
-                shouldStop = True
-
-            elif stderrFromFailure.endswith("failure: Error: ServerIsInShutdown"):
-                msg        = "System is shutting down"
-                reasoncode = gp.SEGSTART_ERROR_SERVER_IS_IN_SHUTDOWN
-
-            else:
-                if res.rc == gp.SendFilerepTransitionMessage.TRANSITION_ERRCODE_ERROR_SOCKET:
-
-                    # Couldn't connect to server to do transition or got another problem
-                    # communicating, must make sure it's halted!
-                    shouldStop = True
-
-                msg        = "Start failed; check segment logfile.  \"%s%s\"" % (stdoutFromFailure, stderrFromFailure)
-                reasoncode = gp.SEGSTART_ERROR_OTHER
-
-            self.overall_status.mark_failed(cmd.dataDir, msg, reasoncode)
-
-            if shouldStop:
-                assert len(dataDirToSeg[cmd.dataDir]) == 1, "Multiple segments with dir %s" % cmd.dataDir
-                toStop.append( dataDirToSeg[cmd.dataDir][0] )
-
-
-        # ensure segments in a bad state are stopped
-        # 
-        for seg in toStop:
-            datadir, port = (seg.getSegmentDataDirectory(), seg.getSegmentPort())
-            
-            msg = "Stopping segment %s, %s because of failure sending transition" % (datadir, port)
-            self.logger.info(msg)
-
-            cmd = gp.SegmentStop('stop segment', datadir, mode="immediate")
-            cmd.run(validateAfter=False)
-            res = cmd.get_results()
-
-            if res.rc == 0:
-                self.logger.info("Stop of segment succeeded")
-            else:
-                stdoutFromFailure = res.stdout.replace("\n", " ").strip()
-                stderrFromFailure = res.stderr.replace("\n", " ").strip()
-                self.logger.info("Stop of segment failed: rc: %s\nstdout:%s\nstderr:%s" % \
-                                (res.rc, stdoutFromFailure, stderrFromFailure))
-            
-
-
-    def __checkLocaleAndConnect(self):
-        """
-        Check locale information of primaries.
-        """
-        self.logger.info("Validating segment locales...")
-
-        # ask each primary for its locale details
-        #
-        dataDirToCmd = {}
-        for datadir, seg in self.overall_status.dirmap.items():
-            if seg.isSegmentPrimary(True):
-
-                # we CANNOT validate using a psql connection because this may hang (see MPP-9974).
-                #    so we validate these items using a postmaster 'transition' message
-                #
-                name      = "Check Status"
-                statusmsg = "getCollationAndDataDirSettings"
-                port      = seg.getSegmentPort()
-
-                self.logger.info("Checking %s, port %s" % (datadir, port))
-                cmd       = gp.SendFilerepTransitionStatusMessage(name, statusmsg, datadir, port)
-
-                dataDirToCmd[datadir] = cmd
-                self.pool.addCommand(cmd)
-
-        self.pool.join()
-
-
-        # examine results from the primaries
-        #
-        for datadir, cmd in dataDirToCmd.items():
-            self.logger.info("Reviewing %s" % datadir)
-
-            cmd.get_results()
-            line = cmd.unpackSuccessLine()
-            if line is None:
-
-                msg        = "Unable to connect to server"
-                reasoncode = gp.SEGSTART_ERROR_CHECKING_CONNECTION_AND_LOCALE_FAILED
-                self.overall_status.mark_failed(datadir, msg, reasoncode)
-                continue
-
-            dict_ = parseKeyColonValueLines(line)
-
-            # verify the line was parsed and that we got all needed data
-            if dict_ is None or \
-                [s for s in ["datadir", "lc_collate", "lc_monetary", "lc_numeric"] if s not in dict_]:
-
-                msg        = "Invalid response from server"
-                reasoncode = gp.SEGSTART_ERROR_CHECKING_CONNECTION_AND_LOCALE_FAILED
-                self.overall_status.mark_failed(datadir, msg, reasoncode)
-                continue
-
-            msg = ""
-            if dict_["lc_collate"] != self.expected_lc_collate:
-                msg += "".join(["Segment's value of lc_collate does not match the master.\n",
-                                " Master had value: '", str(self.expected_lc_collate), 
-                                "' while this segment has: '", str(dict_["lc_collate"]), "'\n"])
-
-            if dict_["lc_monetary"] != self.expected_lc_monetary:
-                msg += "".join(["Segment's value of lc_monetary does not match the master.\n",
-                                " Master had value: '", str(self.expected_lc_monetary), 
-                                "' while this segment has: '", str(dict_["lc_monetary"]), "'\n"])
-
-            if dict_["lc_numeric"] != self.expected_lc_numeric:
-                msg += "".join(["Segment's value of lc_numeric does not match the master.\n",
-                                " Master had value: '", str(self.expected_lc_numeric), 
-                                "' while this segment has: '", str(dict_["lc_numeric"]), "'\n"])
-
-            if not os.path.samefile(dict_["datadir"], datadir):
-                msg += "".join(["Segment's data directory does not match. ",
-                                " Expected value: '", str(datadir), 
-                                "' Actual value: '", str(dict_["datadir"]), "'\n"])
-
-            if len(msg) > 0:
-                reasoncode = gp.SEGSTART_ERROR_CHECKING_CONNECTION_AND_LOCALE_FAILED
-                self.overall_status.mark_failed(datadir, msg, reasoncode)
-                            
-
-
-    def run(self):
-        """
-        Logic to start the segments.
-        """
-
-        # we initialize an overall status object which maintains a mapping 
-        # from each segment's data directory to the segment object as well 
-        # as a list of specific success/failure results.
-        #
-        self.overall_status = OverallStatus(self.dblist)
-
-        # Each of the next four steps executes operations which may cause segment
-        # details to be removed from the mapping and recorded as failures.
-        #
-        self.__validateDirectoriesAndSetupRecoveryStartup()
-        self.__startSegments()
-
-        # Being paranoid, we frequently check for postmaster failures.
-        # The postmasters should be running by now unless we're in quiescent mode
-        #
-        must_be_running = (self.mirroringMode != 'quiescent')
-        self.__checkPostmasters(must_be_running)
-
-        self.__convertSegments()
-        self.__checkPostmasters(must_be_running=True)
-
-        self.__checkLocaleAndConnect()
-        self.__checkPostmasters(must_be_running=True)
-
-        # At this point any segments remaining in the mapping are assumed to
-        # have successfully started.
-        #
-        self.overall_status.remaining_items_succeeded()
-        self.overall_status.log_results()
-        return self.overall_status.exit_code()
-
-    
-
-    def cleanup(self):
-        """
-        Cleanup worker pool resources
-        """
-        if self.pool:
-            self.pool.haltWork()
-    
-
-    @staticmethod
-    def createParser():
-        """
-        Create parser expected by simple_main
-        """
-
-        parser = OptParser(option_class=OptChecker,
-                           description=' '.join(DESCRIPTION.split()),
-                           version='%prog version main build dev')
-        parser.setHelp(HELP)
-
-        #
-        # Note that this mirroringmode parameter should only be either mirrorless or quiescent.
-        #   If quiescent then it is implied that there is pickled transition data that will be
-        #   provided (using -p) to immediately convert to a primary or a mirror.
-        #
-        addStandardLoggingAndHelpOptions(parser, includeNonInteractiveOption=False)
-
-        parser.add_option("-C", "--collation", type="string",
-                            help="values for lc_collate, lc_monetary, lc_numeric separated by :")
-        parser.add_option("-D", "--dblist", dest="dblist", action="append", type="string")
-        parser.add_option("-M", "--mirroringmode", dest="mirroringMode", type="string")
-        parser.add_option("-p", "--pickledTransitionData", dest="pickledTransitionData", type="string")
-        parser.add_option("-V", "--gp-version", dest="gpversion", metavar="GP_VERSION", help="expected software version")
-        parser.add_option("-n", "--numsegments", dest="num_cids", help="number of distinct content ids in cluster")
-        parser.add_option("", "--era", dest="era", help="master era")
-        parser.add_option("-t", "--timeout", dest="timeout", type="int", default=gp.SEGMENT_TIMEOUT_DEFAULT,
-                          help="seconds to wait")
-        parser.add_option('-U', '--specialMode', type='choice', choices=['upgrade', 'maintenance'],
-                           metavar='upgrade|maintenance', action='store', default=None,
-                           help='start the instance in upgrade or maintenance mode')
-        parser.add_option('', '--wrapper', dest="wrapper", default=None, type='string')
-        parser.add_option('', '--wrapper-args', dest="wrapper_args", default=None, type='string')
-        
-        return parser
-
-    @staticmethod
-    def createProgram(options, args):
-        """
-        Create program expected by simple_main
-        """
-        return GpSegStart(options.dblist,
-                          options.gpversion,
-                          options.collation,
-                          options.mirroringMode,
-                          options.num_cids,
-                          options.era,
-                          options.timeout,
-                          options.pickledTransitionData,
-                          options.specialMode,
-                          options.wrapper,
-                          options.wrapper_args)
-
-#------------------------------------------------------------------------- 
-if __name__ == '__main__':
-    mainOptions = { 'setNonuserOnToolLogger':True}
-    simple_main( GpSegStart.createParser, GpSegStart.createProgram, mainOptions )

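The OverallStatus logic in the deleted gpsegstart.py above follows a simple bookkeeping pattern: each segment starts in a pending map, failures are recorded and removed from the map, and whatever survives every check is declared a success; the exit code falls out of comparing the map's final size to its original size. A minimal standalone sketch of that pattern (class and method names here are illustrative, not part of the original module):

```python
class StatusTracker:
    """Track pending items; failures are removed, survivors succeed."""

    def __init__(self, items):
        self.pending = dict((item, True) for item in items)
        self.original_length = len(self.pending)
        self.results = []

    def mark_failed(self, item, reason):
        # Record the failure and drop the item from the pending map,
        # mirroring OverallStatus.mark_failed in the code above.
        self.results.append((item, False, reason))
        del self.pending[item]

    def finish(self):
        # Anything still pending after all checks is treated as a success,
        # as in remaining_items_succeeded().
        for item in self.pending:
            self.results.append((item, True, "Succeeded"))
        # Exit code 1 if any item was removed along the way, else 0.
        return 1 if len(self.pending) != self.original_length else 0
```

The appeal of the pattern is that each validation phase (directory checks, pg_ctl, transition messages, locale checks) can independently call `mark_failed` without coordinating with the others.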
http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/9932786b/tools/sbin/gpsegstop.py
----------------------------------------------------------------------
diff --git a/tools/sbin/gpsegstop.py b/tools/sbin/gpsegstop.py
deleted file mode 100755
index 3ed5f66..0000000
--- a/tools/sbin/gpsegstop.py
+++ /dev/null
@@ -1,155 +0,0 @@
-#!/usr/bin/env python
-#
-# Copyright (c) Greenplum Inc 2008. All Rights Reserved. 
-#
-#
-# Internal Use Function.
-#
-#
-#
-# THIS IMPORT MUST COME FIRST
-#
-# import mainUtils FIRST to get python version check
-from gppylib.mainUtils import *
-
-import os, sys, time, signal
-
-from optparse import Option, OptionGroup, OptionParser, OptionValueError, SUPPRESS_USAGE
-
-from gppylib.gpparseopts import OptParser, OptChecker
-from gppylib import gplog
-from gppylib.commands import base
-from gppylib.commands import unix
-from gppylib.commands import gp
-from gppylib.commands.gp import SEGMENT_TIMEOUT_DEFAULT
-from gppylib.commands import pg
-from gppylib.db import catalog
-from gppylib.db import dbconn
-from gppylib import pgconf
-from gppylib.gpcoverage import GpCoverage
-
-description = ("""
-This utility is NOT SUPPORTED and is for internal-use only.
-
-Stops a set of one or more segment databases.
-""")
-
-logger = gplog.get_default_logger()
-
-#-------------------------------------------------------------------------
-class SegStopStatus:
-    def __init__(self,datadir,stopped,reason):
-        self.datadir=datadir
-        self.stopped=stopped
-        self.reason=reason
-    
-    def __str__(self):
-        return "STATUS--DIR:%s--STOPPED:%s--REASON:%s" % (self.datadir,self.stopped,self.reason)
-
-    
-#-------------------------------------------------------------------------    
-class GpSegStop:
-    ######
-    def __init__(self,dblist,mode,gpversion,timeout=SEGMENT_TIMEOUT_DEFAULT):
-        self.dblist=dblist
-        self.mode=mode
-        self.expected_gpversion=gpversion
-        self.timeout=timeout
-        self.gphome=os.path.abspath(os.pardir)
-        self.actual_gpversion=gp.GpVersion.local('local GP software version check',self.gphome)
-        if self.actual_gpversion != self.expected_gpversion:
-            raise Exception("Local Software Version does not match what is expected.\n"
-                            "The local software version is: '%s'\n"
-                            "But we were expecting it to be: '%s'\n"
-                            "Please review and correct" % (self.actual_gpversion,self.expected_gpversion))                
-        self.logger = logger
-    
-    ######
-    def run(self):
-        results  = []
-        failures = []
-        
-        self.logger.info("Issuing shutdown commands to local segments...")
-        for db in self.dblist:
-            datadir, port = db.split(':')[0:2]
-
-            cmd = gp.SegmentStop('segment shutdown', datadir, mode=self.mode, timeout=self.timeout)
-            cmd.run()
-            res = cmd.get_results()
-            if res.rc == 0:
-
-                # MPP-15208
-                #
-                cmd2 = gp.SegmentIsShutDown('check if shutdown', datadir)
-                cmd2.run()
-                if cmd2.is_shutdown():
-                    status = SegStopStatus(datadir, True, "Shutdown Succeeded")
-                    results.append(status)                
-                    continue
-
-                # MPP-16171
-                # 
-                if self.mode == 'immediate':
-                    status = SegStopStatus(datadir, True, "Shutdown Immediate")
-                    results.append(status)
-                    continue
-
-            # read pid and datadir from /tmp/.s.PGSQL.<port>.lock file
-            name = "failed segment '%s'" % db
-            (succeeded, mypid, file_datadir) = pg.ReadPostmasterTempFile.local(name,port).getResults()
-            if succeeded and file_datadir == datadir:
-
-                # now try to terminate the process, first trying with
-                # SIGTERM and working our way up to SIGABRT sleeping
-                # in between to give the process a moment to exit
-                #
-                unix.kill_sequence(mypid)
-
-                if not unix.check_pid(mypid):
-                    lockfile = "/tmp/.s.PGSQL.%s" % port    
-                    if os.path.exists(lockfile):
-                        self.logger.info("Clearing segment instance lock files")        
-                        os.remove(lockfile)
-            
-            status = SegStopStatus(datadir,False,"Shutdown failed: rc: %d stdout: %s stderr: %s" % (res.rc,res.stdout,res.stderr))
-            failures.append(status)
-            results.append(status)
-        
-        #Log the results!
-        status = '\nCOMMAND RESULTS\n'
-        for result in results:
-            status += str(result) + "\n"
-        
-        self.logger.info(status)
-        return 1 if failures else 0
-    
-    ######
-    def cleanup(self):
-        pass
-        
-    @staticmethod
-    def createParser():
-        parser = OptParser(option_class=OptChecker,
-                    description=' '.join(description.split()),
-                    version='%prog version $Revision: #12 $')
-        parser.setHelp([])
-
-        addStandardLoggingAndHelpOptions(parser, includeNonInteractiveOption=False)
-
-        parser.add_option("-D","--db",dest="dblist", action="append", type="string")
-        parser.add_option("-V", "--gp-version", dest="gpversion",metavar="GP_VERSION",
-                          help="expected software version")
-        parser.add_option("-m", "--mode", dest="mode",metavar="<MODE>",
-                          help="how to shut down; modes are smart, fast, or immediate")
-        parser.add_option("-t", "--timeout", dest="timeout", type="int", default=SEGMENT_TIMEOUT_DEFAULT,
-                          help="seconds to wait")
-        return parser
-
-    @staticmethod
-    def createProgram(options, args):
-        return GpSegStop(options.dblist,options.mode,options.gpversion,options.timeout)
-
-#-------------------------------------------------------------------------
-if __name__ == '__main__':
-    mainOptions = { 'setNonuserOnToolLogger':True}
-    simple_main( GpSegStop.createParser, GpSegStop.createProgram, mainOptions)

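When a clean `pg_ctl` stop fails, the deleted gpsegstop.py above falls back to `unix.kill_sequence`, escalating from SIGTERM toward harder signals with pauses in between. A hedged sketch of such an escalation loop (this is an illustration of the technique, not the gppylib implementation, and the signal list is an assumption):

```python
import os
import signal
import time

def kill_with_escalation(pid,
                         signals=(signal.SIGTERM, signal.SIGQUIT, signal.SIGABRT),
                         wait=1.0):
    """Send increasingly forceful signals, pausing between attempts.

    Returns True if the process appears to be gone afterwards.
    """
    for sig in signals:
        try:
            os.kill(pid, sig)
        except OSError:
            return True          # no such process: already gone
        time.sleep(wait)         # give it a moment to exit
        try:
            os.kill(pid, 0)      # signal 0 probes existence only
        except OSError:
            return True
    return False
```

Note the `os.kill(pid, 0)` probe only checks that the pid exists; a zombie child not yet reaped by its parent would still answer the probe, which is one reason the original code also checks and clears the `/tmp/.s.PGSQL.<port>` lock file separately.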


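The locale check in the deleted gpsegstart.py parses a status line from each primary with `parseKeyColonValueLines`, then defensively checks for `None` and for missing keys before comparing lc_* settings against the master. The real helper lives in gppylib and its exact wire format is not shown here; the following is a hypothetical stand-in that assumes newline-separated `key: value` fields:

```python
def parse_key_colon_value_lines(text):
    """Parse newline-separated 'key: value' lines into a dict.

    Returns None if any non-empty line lacks a colon, mimicking the
    defensive None check done by the caller in __checkLocaleAndConnect.
    """
    result = {}
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if ':' not in line:
            return None
        key, _, value = line.partition(':')
        result[key.strip()] = value.strip()
    return result
```

Returning `None` rather than raising keeps the caller's error path uniform: a malformed response and an unreachable server both funnel into the same `mark_failed` call.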