Return-Path: X-Original-To: archive-asf-public-internal@cust-asf2.ponee.io Delivered-To: archive-asf-public-internal@cust-asf2.ponee.io Received: from cust-asf.ponee.io (cust-asf.ponee.io [163.172.22.183]) by cust-asf2.ponee.io (Postfix) with ESMTP id C4535200BF4 for ; Fri, 6 Jan 2017 18:32:39 +0100 (CET) Received: by cust-asf.ponee.io (Postfix) id C2BA1160B4F; Fri, 6 Jan 2017 17:32:39 +0000 (UTC) Delivered-To: archive-asf-public@cust-asf.ponee.io Received: from mail.apache.org (hermes.apache.org [140.211.11.3]) by cust-asf.ponee.io (Postfix) with SMTP id 76A59160B4C for ; Fri, 6 Jan 2017 18:32:37 +0100 (CET) Received: (qmail 90260 invoked by uid 500); 6 Jan 2017 17:32:36 -0000 Mailing-List: contact commits-help@hawq.incubator.apache.org; run by ezmlm Precedence: bulk List-Help: List-Unsubscribe: List-Post: List-Id: Reply-To: dev@hawq.incubator.apache.org Delivered-To: mailing list commits@hawq.incubator.apache.org Received: (qmail 90251 invoked by uid 99); 6 Jan 2017 17:32:36 -0000 Received: from pnap-us-west-generic-nat.apache.org (HELO spamd1-us-west.apache.org) (209.188.14.142) by apache.org (qpsmtpd/0.29) with ESMTP; Fri, 06 Jan 2017 17:32:36 +0000 Received: from localhost (localhost [127.0.0.1]) by spamd1-us-west.apache.org (ASF Mail Server at spamd1-us-west.apache.org) with ESMTP id F00ABC07FE for ; Fri, 6 Jan 2017 17:32:35 +0000 (UTC) X-Virus-Scanned: Debian amavisd-new at spamd1-us-west.apache.org X-Spam-Flag: NO X-Spam-Score: -6.219 X-Spam-Level: X-Spam-Status: No, score=-6.219 tagged_above=-999 required=6.31 tests=[KAM_ASCII_DIVIDERS=0.8, KAM_LAZY_DOMAIN_SECURITY=1, RCVD_IN_DNSWL_HI=-5, RCVD_IN_MSPIKE_H3=-0.01, RCVD_IN_MSPIKE_WL=-0.01, RP_MATCHES_RCVD=-2.999] autolearn=disabled Received: from mx1-lw-eu.apache.org ([10.40.0.8]) by localhost (spamd1-us-west.apache.org [10.40.0.7]) (amavisd-new, port 10024) with ESMTP id sA5fvklQgz-p for ; Fri, 6 Jan 2017 17:32:28 +0000 (UTC) Received: from mail.apache.org (hermes.apache.org [140.211.11.3]) by mx1-lw-eu.apache.org (ASF Mail Server at mx1-lw-eu.apache.org) with SMTP id 758FA5FDEE for ; Fri, 6 Jan 2017 17:32:18 +0000 (UTC) Received: (qmail 88868 invoked by uid 99); 6 Jan 2017 17:32:17 -0000 Received: from git1-us-west.apache.org (HELO git1-us-west.apache.org) (140.211.11.23) by apache.org (qpsmtpd/0.29) with ESMTP; Fri, 06 Jan 2017 17:32:17 +0000 Received: by git1-us-west.apache.org (ASF Mail Server at git1-us-west.apache.org, from userid 33) id 553E8DFCE5; Fri, 6 Jan 2017 17:32:17 +0000 (UTC) Content-Type: text/plain; charset="us-ascii" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit From: yozie@apache.org To: commits@hawq.incubator.apache.org Date: Fri, 06 Jan 2017 17:32:37 -0000 Message-Id: <481635c200fc46e8999c9a0b285d104b@git.apache.org> In-Reply-To: <1b89d95838ac4b02a160900d44818c4f@git.apache.org> References: <1b89d95838ac4b02a160900d44818c4f@git.apache.org> X-Mailer: ASF-Git Admin Mailer Subject: [22/51] [partial] incubator-hawq-docs git commit: HAWQ-1254 Fix/remove book branching on incubator-hawq-docs archived-at: Fri, 06 Jan 2017 17:32:39 -0000 http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/cli/admin_utilities/hawqfilespace.html.md.erb ---------------------------------------------------------------------- diff --git a/markdown/reference/cli/admin_utilities/hawqfilespace.html.md.erb b/markdown/reference/cli/admin_utilities/hawqfilespace.html.md.erb new file mode 100644 index 0000000..eeb7b39 --- /dev/null +++ b/markdown/reference/cli/admin_utilities/hawqfilespace.html.md.erb @@ 
-0,0 +1,147 @@ +--- +title: hawq filespace +--- + +Creates a filespace using a configuration file that defines a file system location. Filespaces describe the physical file system resources to be used by a tablespace. + +## Synopsis + +``` pre +hawq filespace [] + -o | --output + [-l | --logdir ] + +hawq filespace [ | --config + [-l | --logdir ] + +hawq filespace [] + --movefilespace --location + [-l | --logdir ] + +hawq filespace -v | --version + +hawq filespace -? | --help +``` +where: + +``` pre + = + [-h | --host ] + [-p | -- port ] + [-U | --username ] + [-W | --password] +``` + +## Description + +A tablespace requires a file system location to store its database files. This file system location for all components in a HAWQ system is referred to as a *filespace*. Once a filespace is defined, it can be used by one or more tablespaces. + +The `--movefilespace` option allows you to relocate a filespace and its components within a dfs file system. + +When used with the `-o` option, the `hawq filespace` utility looks up your system configuration information in the system catalog tables and prompts you for the appropriate file system location needed to create the filespace. It then outputs a configuration file that can be used to create a filespace. If a file name is not specified, a `hawqfilespace_config_`*\#* file will be created in the current directory by default. + +Once you have a configuration file, you can run `hawq filespace` with the `-c` option to create the filespace in HAWQ system. + +**Note:** If segments are down due to a power or nic failure, you may see inconsistencies during filespace creation. You may not be able to bring up the cluster. + +## Options + +
-o, -\\\-output <output\_directory\_name>
+
The directory location and file name to output the generated filespace configuration file. You will be prompted to enter a name for the filespace and file system location. The file system locations must exist on all hosts in your system prior to running the `hawq filespace` command. You will specify the number of replicas to create. The default is 3 replicas. After the utility creates the configuration file, you can manually edit the file to make any required changes to the filespace layout before creating the filespace in HAWQ.
+ +
-c, -\\\-config <fs\_config\_file>
+
A configuration file containing: + +- An initial line denoting the new filespace name. For example: + + filespace:<myfs> +
+ +
-\\\-movefilespace <filespace>
+
Create the filespace in a new location on a distributed file system. Updates the dfs url in the HAWQ database, so that data in the original location can be moved or deleted. Data in the original location is not affected by this command.
+ +
-\\\-location <dfslocation>
+
Specifies the new URL location to which a dfs file system should be moved.
+ +
-l, -\\\-logdir <logfile\_directory>
+
The directory to write the log file. Defaults to `~/hawqAdminLogs`.
+ +
-v, -\\\-version (show utility version)
+
Displays the version of this utility.
+ +
-?, -\\\-help (help)
+
Displays the command usage and syntax.
+ +**<connection_options>** + +
-h, -\\\-host <hostname>
+
The host name of the machine on which the HAWQ master database server is running. If not specified, reads from the environment variable `PGHOST` or defaults to localhost.
+ +
-p, -\\\-port <port>
+
The TCP port on which the HAWQ master database server is listening for connections. If not specified, reads from the environment variable `PGPORT` or defaults to 5432.
+ +
-U, -\\\-username <superuser\_name>
+
The database superuser role name to connect as. If not specified, reads from the environment variable `PGUSER` or defaults to the current system user name. Only database superusers are allowed to create filespaces.
+ +
-W, -\\\-password
+
Force a password prompt.
+ +## Example 1 + +Create a filespace configuration file. Depending on your system setup, you may need to specify the host and port. You will be prompted to enter a name for the filespace and a replica number. You will then be asked for the DFS location. The file system locations must exist on all hosts in your system prior to running the `hawq filespace` command: + +``` shell +$ hawq filespace -o . +``` + +``` pre +Enter a name for this filespace +> fastdisk +Enter replica num for filespace. If 0, default replica num is used (default=3) +0 +Please specify the DFS location for the filespace (for example: localhost:9000/fs) +location> localhost:9000/hawqfs + +20160203:11:35:42:272716 hawqfilespace:localhost:gpadmin-[INFO]:-[created] +20160203:11:35:42:272716 hawqfilespace:localhost:gpadmin-[INFO]:- +To add this filespace to the database please run the command: + hawqfilespace --config ./hawqfilespace_config_20160203_112711 +Checking your configuration: + +Your system has 1 hosts with 2 primary segments +per host. + +Configuring hosts: [sdw1, sdw2] + +Enter a file system location for the master: +master location> /hawq_master_filespc +``` + +Example filespace configuration file: + +``` pre +filespace:fastdisk +mdw:1:/hawq_master_filespc/gp-1 +sdw1:2:/hawq_pri_filespc/gp0 +sdw2:3:/hawq_pri_filespc/gp1 +``` + +Execute the configuration file to create the filespace: + +``` shell +$ hawq filespace --config hawq_filespace_config_1 +``` + +## Example 2 + +Create the filespace at `cdbfast_fs_a` and move an hdfs filesystem to it: + +``` shell +$ hawq filespace --movefilespace=cdbfast_fs_a + --location=hdfs://gphd-cluster/cdbfast_fs_a/ +``` + +## See Also + +[CREATE TABLESPACE](../../sql/CREATE-TABLESPACE.html) http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/cli/admin_utilities/hawqinit.html.md.erb ---------------------------------------------------------------------- diff --git a/markdown/reference/cli/admin_utilities/hawqinit.html.md.erb b/markdown/reference/cli/admin_utilities/hawqinit.html.md.erb new file mode 100644 index 0000000..de45ef3 --- /dev/null +++ b/markdown/reference/cli/admin_utilities/hawqinit.html.md.erb @@ -0,0 +1,156 @@ +--- +title: hawq init +--- + +The `hawq init cluster` command initializes a HAWQ system and starts it. + +Use the `hawq init master` and `hawq init segment` commands to individually initialize the master or segment nodes, respectively. Specify any format options at this time. The `hawq init standby` command initializes a standby master host for a HAWQ system. + +Use the `hawq init --standby-host` option to define the host for a standby at initialization. + +## Synopsis + +``` pre +hawq init [--options] + +hawq init standby | cluster + [--standby-host ] + [] + +hawq init -? | --help +``` +where: + +``` pre + = cluster | master | segment | standby + + = +  [-a] [-l ] [-q] [-v] [-t] + [-n] + [--locale=] [--lc-collate=] +  [--lc-ctype=] [--lc-messages=] +  [--lc-monetary=] [--lc-numeric=] +  [--lc-time=] +  [--bucket_number ] +  [--max_connections ]   +  [--shared_buffers ] +``` + +## Description + +The `hawq init ` utility creates a HAWQ instance using configuration parameters defined in `$GPHOME/etc/hawq-site.xml`. Before running this utility, verify that you have installed the HAWQ software on all the hosts in the array. + +In a HAWQ DBMS, each database instance (the master and all segments) must be initialized across all of the hosts in the system in a way that allows them to work together as a unified DBMS. 
The `hawq init cluster` utility initializes the HAWQ master and each segment instance, and configures the system as a whole. When `hawq init cluster` is run, the cluster comes online automatically without needing to explicitly start it. You can start a single node cluster without any user-defined changes to the default `hawq-site.xml` file. For larger clusters, use the template-hawq-site.xml file to specify the configuration. + +To use the template for initializing a new cluster configuration, replace the items contained within the % markers. For example, replace `value%master.host%value` and `%master.host%` with the master host name. After modification, rename the file to the name of the default configuration file: `hawq-site.xml`. + + +- Before initializing HAWQ, set the `$GPHOME` environment variable to point to the location of your HAWQ installation on the master host and exchange SSH keys between all host addresses in the array, using `hawq ssh-exkeys`. +- To initialize and start a HAWQ cluster, enter the following command on the master host: + + ```shell + $ hawq init cluster + ``` + +This utility performs the following tasks: + +- Verifies that the parameters in the configuration file are correct. +- Ensures that a connection can be established to each host address. If a host address cannot be reached, the utility will exit. +- Verifies the locale settings. +- Initializes the master instance. +- Initializes the standby master instance (if specified). +- Initializes the segment instances. +- Configures the HAWQ system and checks for errors. +- Starts the HAWQ system. + +The `hawq init standby` utility can be run on either the currently active *primary* master host or on the standby node. + +`hawq init standby` performs the following steps: + +- Updates the HAWQ system catalog to add the new standby master host information +- Edits the `pg_hba.conf` file of the HAWQ master to allow access from the newly added standby master. +- Sets up the standby master instance on the alternate master host +- Starts the synchronization process + +A backup, standby master host serves as a 'warm standby' in the event of the primary master host becoming non-operational. The standby master is kept up to date by transaction log replication processes (the `walsender` and `walreceiver`), which run on the primary master and standby master hosts and keep the data between the primary and standby master hosts synchronized. To add a standby master to the system, use the command `hawq init standby`, for example: `init standby host09`. To configure the standby hostname at initialization without needing to run hawq config by defining it, use the --standby-host option. To create the standby above, you would specify `hawq init standby --standby-host=host09` or `hawq init cluster --standby-host=host09`. + +If the primary master fails, the log replication process is shut down. Run the `hawq activate standby` utility to activate the standby master in its place; upon activation of the standby master, the replicated logs are used to reconstruct the state of the master host at the time of the last successfully committed transaction. + +## Objects + +
cluster
+
Start a HAWQ cluster.
+ +
master
+
Start HAWQ master.
+ +
segment
+
Start a local segment node.
+ +
standby
+
Start a HAWQ standby master.
+ +## Options + +
-a, (do not prompt)
+
Do not prompt the user for confirmation.
+ + +
-l, -\\\-logdir \
+
The directory to write the log file. Defaults to `~/hawq/AdminLogs`.
+ +
-q, -\\\-quiet (no screen output)
+
Run in quiet mode. Command output is not displayed on the screen, but is still written to the log file.
+ +
-v, -\\\-verbose
+
Displays detailed status, progress and error messages and writes them to the log files.
+ +
-t, -\\\-timeout
+
Sets timeout value in seconds. The default is 60 seconds.
+ +
-n, -\\\-no-update
+
Resync the standby with the master, but do not update system catalog tables.
+ +
-\\\-locale=\
+
Sets the default locale used by HAWQ. If not specified, the `LC_ALL`, `LC_COLLATE`, or `LANG` environment variable of the master host determines the locale. If these are not set, the default locale is `C` (`POSIX`). A locale identifier consists of a language identifier and a region identifier, and optionally a character set encoding. For example, `sv_SE` is Swedish as spoken in Sweden, `en_US` is U.S. English, and `fr_CA` is French Canadian. If more than one character set can be useful for a locale, then the specifications look like this: `en_US.UTF-8` (locale specification and character set encoding). On most systems, the command `locale` will show the locale environment settings and `locale -a` will show a list of all available locales.
+ +
-\\\-lc-collate=\
+
Similar to `--locale`, but sets the locale used for collation (sorting data). The sort order cannot be changed after HAWQ is initialized, so it is important to choose a collation locale that is compatible with the character set encodings that you plan to use for your data. There is a special collation name of `C` or `POSIX` (byte-order sorting as opposed to dictionary-order sorting). The `C` collation can be used with any character encoding.
+ +
-\\\-lc-ctype=\
+
Similar to `--locale`, but sets the locale used for character classification (what character sequences are valid and how they are interpreted). This cannot be changed after HAWQ is initialized, so it is important to choose a character classification locale that is compatible with the data you plan to store in HAWQ.
+ +
-\\\-lc-messages=\
+
Similar to `--locale`, but sets the locale used for messages output by HAWQ. The current version of HAWQ does not support multiple locales for output messages (all messages are in English), so changing this setting will not have any effect.
+ +
-\\\-lc-monetary=\
+
Similar to `--locale`, but sets the locale used for formatting currency amounts.
+ +
-\\\-lc-numeric=\
+
Similar to `--locale`, but sets the locale used for formatting numbers.
+ +
-\\\-lc-time=\
+
Similar to `--locale`, but sets the locale used for formatting dates and times.
+ +
-\\\-bucket\_number=\
+
Sets value of `default_hash_table_bucket_number`, which sets the default number of hash buckets for creating virtual segments. This parameter overrides the default value of `default_hash_table_bucket_number` set in `hawq-site.xml` by an Ambari install. If not specified, `hawq init` will use the value in `hawq-site.xml`.
+ +
-\\\-max\_connections=\
+
Sets the number of client connections allowed to the master. The default is 250.
+ +
-\\\-shared\_buffers \
+
Sets the number of shared\_buffers to be used when initializing HAWQ.
+ +
-s, -\\\-standby-host \
+
Adds a standby host name to hawq-site.xml and syncs it to all the nodes. If a standby host name was already defined in hawq-site.xml, using this option will overwrite the existing value.
+ +
-?, -\\\-help
+
Displays the online help.
+ +## Examples + +Initialize a HAWQ array with an optional standby master host: + +``` shell +$ hawq init standby +``` http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/cli/admin_utilities/hawqload.html.md.erb ---------------------------------------------------------------------- diff --git a/markdown/reference/cli/admin_utilities/hawqload.html.md.erb b/markdown/reference/cli/admin_utilities/hawqload.html.md.erb new file mode 100644 index 0000000..b9fe441 --- /dev/null +++ b/markdown/reference/cli/admin_utilities/hawqload.html.md.erb @@ -0,0 +1,420 @@ +--- +title: hawq load +--- + +Acts as an interface to the external table parallel loading feature. Executes a load specification defined in a YAML-formatted control file to invoke the HAWQ parallel file server (`gpfdist`). + +## Synopsis + +``` pre +hawq load -f [-l ] + [--gpfdist_timeout ] + [[-v | -V] + [-q]] + [-D] + [] + +hawq load -? + +hawq load --version +``` +where: + +``` pre + = + [-h ] + [-p ] + [-U ] + [-d ] + [-W] +``` + +## Prerequisites + +The client machine where `hawq load` is executed must have the following: + +- Python 2.6.2 or later, `pygresql` (the Python interface to PostgreSQL), and `pyyaml`. Note that Python and the required Python libraries are included with the HAWQ server installation, so if you have HAWQ installed on the machine where `hawq load` is running, you do not need a separate Python installation. + **Note:** HAWQ Loaders for Windows supports only Python 2.5 (available from [www.python.org](http://python.org)). + +- The [gpfdist](gpfdist.html#topic1) parallel file distribution program installed and in your `$PATH`. This program is located in `$GPHOME/bin` of your HAWQ server installation. +- Network access to and from all hosts in your HAWQ array (master and segments). +- Network access to and from the hosts where the data to be loaded resides (ETL servers). + +## Description + +`hawq load` is a data loading utility that acts as an interface to HAWQ's external table parallel loading feature. Using a load specification defined in a YAML formatted control file, `hawq load` executes a load by invoking the HAWQ parallel file server ([gpfdist](gpfdist.html#topic1)), creating an external table definition based on the source data defined, and executing an `INSERT` operation to load the source data into the target table in the database. + +The operation, including any SQL commands specified in the `SQL` collection of the YAML control file (see [Control File Format](#topic1__section7)), are performed as a single transaction to prevent inconsistent data when performing multiple, simultaneous load operations on a target table. + +## Arguments + +
-f <control\_file>
+
A YAML file that contains the load specification details. See [Control File Format](#topic1__section7).
+ +## Options + +
-\\\-gpfdist\_timeout <seconds>
+
Sets the timeout for the `gpfdist` parallel file distribution program to send a response. Enter a value from `0` to `30` seconds (entering "`0`" to disables timeouts). Note that you might need to increase this value when operating on high-traffic networks.
+ +
-l <log\_file>
+
Specifies where to write the log file. Defaults to `~/hawq/Adminlogs/hawq_load_YYYYMMDD`. For more information about the log file, see [Log File Format](#topic1__section9).
+ +
-q (no screen output)
+
Run in quiet mode. Command output is not displayed on the screen, but is still written to the log file.
+ +
-D (debug mode)
+
Check for error conditions, but do not execute the load.
+ +
-v (verbose mode)
+
Show verbose output of the load steps as they are executed.
+ +
-V (very verbose mode)
+
Shows very verbose output.
+ +
-? (show help)
+
Show help, then exit.
+ +
-\\\-version
+
Show the version of this utility, then exit.
+ +**Connection Options** + +
-d <database>
+
The database to load into. If not specified, reads from the load control file, the environment variable `$PGDATABASE` or defaults to the current system user name.
+ +
-h <hostname>
+
Specifies the host name of the machine on which the HAWQ master database server is running. If not specified, reads from the load control file, the environment variable `$PGHOST` or defaults to `localhost`.
+ +
-p <port>
+
Specifies the TCP port on which the HAWQ master database server is listening for connections. If not specified, reads from the load control file, the environment variable `$PGPORT` or defaults to 5432.
+ +
-U <username>
+
The database role name to connect as. If not specified, reads from the load control file, the environment variable `$PGUSER` or defaults to the current system user name.
+ +
-W (force password prompt)
+
Force a password prompt. If not specified, reads the password from the environment variable `$PGPASSWORD` or from a password file specified by `$PGPASSFILE` or in `~/.pgpass`. If these are not set, then `hawq load` will prompt for a password even if `-W` is not supplied.
+ +## Control File Format + +The `hawq load` control file uses the [YAML 1.1](http://yaml.org/spec/1.1/) document format and then implements its own schema for defining the various steps of a HAWQ load operation. The control file must be a valid YAML document. + +The `hawq load` program processes the control file document in order and uses indentation (spaces) to determine the document hierarchy and the relationships of the sections to one another. The use of white space is significant. White space should not be used simply for formatting purposes, and tabs should not be used at all. + +The basic structure of a load control file is: + +``` pre +--- +VERSION: 1.0.0.1 +DATABASE: db_name +USER: db_username +HOST: master_hostname +PORT: master_port +GPLOAD: + INPUT: + - SOURCE: +         LOCAL_HOSTNAME: +           - hostname_or_ip +         PORT: http_port +       | PORT_RANGE: [start_port_range, end_port_range] +         FILE: +           - /path/to/input_file +         SSL: true | false +         CERTIFICATES_PATH: /path/to/certificates + - COLUMNS: +           - field_name: data_type + - TRANSFORM: 'transformation' +    - TRANSFORM_CONFIG: 'configuration-file-path' +    - MAX_LINE_LENGTH: integer +    - FORMAT: text | csv +    - DELIMITER: 'delimiter_character' +    - ESCAPE: 'escape_character' | 'OFF' +    - NULL_AS: 'null_string' +    - FORCE_NOT_NULL: true | false +    - QUOTE: 'csv_quote_character' +    - HEADER: true | false +    - ENCODING: database_encoding + - ERROR_LIMIT: integer + - ERROR_TABLE: schema.table_name + OUTPUT: + - TABLE: schema.table_name + - MODE: insert | update | merge + - MATCH_COLUMNS: +           - target_column_name + - UPDATE_COLUMNS: +           - target_column_name + - UPDATE_CONDITION: 'boolean_condition' + - MAPPING: +            target_column_name: source_column_name | 'expression' + PRELOAD: + - TRUNCATE: true | false + - REUSE_TABLES: true | false + SQL: + - BEFORE: "sql_command" + - AFTER: "sql_command" +``` + +**Control File Schema Elements** + +The control file contains the schema elements for: + +- Version +- Database +- User +- Host +- Port +- GPLOAD file + +
VERSION
+
Optional. The version of the `hawq load` control file schema, for example: 1.0.0.1.
+ +
DATABASE
+
Optional. Specifies which database in HAWQ to connect to. If not specified, defaults to `$PGDATABASE` if set or the current system user name. You can also specify the database on the command line using the `-d` option.
+ +
USER
+
Optional. Specifies which database role to use to connect. If not specified, defaults to the current user or `$PGUSER` if set. You can also specify the database role on the command line using the `-U` option. + +If the user running `hawq load` is not a HAWQ superuser, then the server configuration parameter `gp_external_grant_privileges` must be set to `on` for the load to be processed.
+ +
HOST
+
Optional. Specifies HAWQ master host name. If not specified, defaults to localhost or `$PGHOST` if set. You can also specify the master host name on the command line using the `-h` option.
+ +
PORT
+
Optional. Specifies HAWQ master port. If not specified, defaults to 5432 or `$PGPORT` if set. You can also specify the master port on the command line using the `-p` option.
+ +
GPLOAD
+
Required. Begins the load specification section. A `GPLOAD` specification must have an `INPUT` and an `OUTPUT` section defined.
+ +
INPUT
+
Required element. Defines the location and the format of the input data to be loaded. `hawq load` will start one or more instances of the [gpfdist](gpfdist.html#topic1) file distribution program on the current host and create the required external table definition(s) in HAWQ that point to the source data. Note that the host from which you run `hawq load` must be accessible over the network by all HAWQ hosts (master and segments).
+ +
SOURCE
+
Required. The `SOURCE` block of an `INPUT` specification defines the location of a source file. An `INPUT` section can have more than one `SOURCE` block defined. Each `SOURCE` block defined corresponds to one instance of the [gpfdist](gpfdist.html#topic1) file distribution program that will be started on the local machine. Each `SOURCE` block defined must have a `FILE` specification.
+ +
LOCAL\_HOSTNAME
+
Optional. Specifies the host name or IP address of the local machine on which `hawq load` is running. If this machine is configured with multiple network interface cards (NICs), you can specify the host name or IP of each individual NIC to allow network traffic to use all NICs simultaneously. The default is to use the local machine's primary host name or IP only.
+ +
PORT
+
Optional. Specifies the specific port number that the [gpfdist](gpfdist.html#topic1) file distribution program should use. You can also supply a `PORT_RANGE` to select an available port from the specified range. If both `PORT` and `PORT_RANGE` are defined, then `PORT` takes precedence. If neither `PORT` or `PORT_RANGE` are defined, the default is to select an available port between 8000 and 9000. + +If multiple host names are declared in `LOCAL_HOSTNAME`, this port number is used for all hosts. This configuration is desired if you want to use all NICs to load the same file or set of files in a given directory location.
+ +
PORT\_RANGE
+
Optional. Can be used instead of `PORT` to supply a range of port numbers from which `hawq load` can choose an available port for this instance of the [gpfdist](gpfdist.html#topic1) file distribution program.
+ +
FILE
+
Required. Specifies the location of a file, named pipe, or directory location on the local file system that contains data to be loaded. You can declare more than one file so long as the data is of the same format in all files specified. + +If the files are compressed using `gzip` or `bzip2` (have a `.gz` or `.bz2` file extension), the files will be uncompressed automatically (provided that `gunzip` or `bunzip2` is in your path). + +When specifying which source files to load, you can use the wildcard character (`*`) or other C-style pattern matching to denote multiple files. The files specified are assumed to be relative to the current directory from which `hawq load` is executed (or you can declare an absolute path).
+ +
SSL
+
Optional. Specifies usage of SSL encryption.
+ +
CERTIFICATES\_PATH
+
Required when SSL is `true`; cannot be specified when SSL is `false` or unspecified. The location specified in `CERTIFICATES_PATH` must contain the following files: + +- The server certificate file, `server.crt` +- The server private key file, `server.key` +- The trusted certificate authorities, `root.crt` + +The root directory (`/`) cannot be specified as `CERTIFICATES_PATH`.
+ +
COLUMNS
+
Optional. Specifies the schema of the source data file(s) in the format of `field_name:data_type`. The `DELIMITER` character in the source file is what separates two data value fields (columns). A row is determined by a line feed character (`0x0a`). + +If the input `COLUMNS` are not specified, then the schema of the output `TABLE` is implied, meaning that the source data must have the same column order, number of columns, and data format as the target table. + +The default source-to-target mapping is based on a match of column names as defined in this section and the column names in the target `TABLE`. This default mapping can be overridden using the `MAPPING` section.
+ +
TRANSFORM
+
Optional. Specifies the name of the input XML transformation passed to `hawq load`. For more information about XML transformations, see ["Loading and Unloading Data."](../../../datamgmt/load/g-loading-and-unloading-data.html#topic1).
+ +
TRANSFORM\_CONFIG
+
Optional. Specifies the location of the XML transformation configuration file that is specified in the `TRANSFORM` parameter, above.
+ +
MAX\_LINE\_LENGTH
+
Optional. An integer that specifies the maximum length of a line in the XML transformation data passed to `hawq load`.
+ +
FORMAT
+
Optional. Specifies the format of the source data file(s) - either plain text (`TEXT`) or comma separated values (`CSV`) format. Defaults to `TEXT` if not specified. For more information about the format of the source data, see ["Loading and Unloading Data"](../../../datamgmt/load/g-loading-and-unloading-data.html#topic1) .
+ +
DELIMITER
+
Optional. Specifies a single ASCII character that separates columns within each row (line) of data. The default is a tab character in TEXT mode, a comma in CSV mode.You can also specify a non-printable ASCII character via an escape sequence\\ using the decimal representation of the ASCII character. For example, `\014` represents the shift out character..
+ +
ESCAPE
+
Specifies the single character that is used for C escape sequences (such as `\n`, `\t`, `\100`, and so on) and for escaping data characters that might otherwise be taken as row or column delimiters. Make sure to choose an escape character that is not used anywhere in your actual column data. The default escape character is a \\ (backslash) for text-formatted files and a `"` (double quote) for csv-formatted files, however it is possible to specify another character to represent an escape. It is also possible to disable escaping in text-formatted files by specifying the value `'OFF'` as the escape value. This is very useful for data such as text-formatted web log data that has many embedded backslashes that are not intended to be escapes.
+ +
NULL\_AS
+
Optional. Specifies the string that represents a null value. The default is `\N` (backslash-N) in `TEXT` mode, and an empty value with no quotations in `CSV` mode. You might prefer an empty string even in `TEXT` mode for cases where you do not want to distinguish nulls from empty strings. Any source data item that matches this string will be considered a null value.
+ +
FORCE\_NOT\_NULL
+
Optional. In CSV mode, processes each specified column as though it were quoted and hence not a NULL value. For the default null string in CSV mode (nothing between two delimiters), this causes missing values to be evaluated as zero-length strings.
+ +
QUOTE
+
Required when `FORMAT` is `CSV`. Specifies the quotation character for `CSV` mode. The default is double-quote (`"`).
+ +
HEADER
+
Optional. Specifies that the first line in the data file(s) is a header row (contains the names of the columns) and should not be included as data to be loaded. If using multiple data source files, all files must have a header row. The default is to assume that the input files do not have a header row.
+ +
ENCODING
+
Optional. Character set encoding of the source data. Specify a string constant (such as `'SQL_ASCII'`), an integer encoding number, or `'DEFAULT'` to use the default client encoding. If not specified, the default client encoding is used.
+ +
ERROR\_LIMIT
+
Optional. Sets the error limit count for HAWQ segment instances during input processing. Error rows will be written to the table specified in `ERROR_TABLE`. The value of ERROR\_LIMIT must be 2 or greater.
+ +
ERROR\_TABLE
+
Optional when `ERROR_LIMIT` is declared. Specifies an error table where rows with formatting errors will be logged when running in single row error isolation mode. You can then examine this error table to see error rows that were not loaded (if any). If the `ERROR_TABLE` specified already exists, it will be used. If it does not exist, it will be automatically generated. + +For more information about handling load errors, see "[Loading and Unloading Data](../../../datamgmt/load/g-loading-and-unloading-data.html#topic1)".
+ +
OUTPUT
+
Required element. Defines the target table and final data column values that are to be loaded into the database.
+ +
TABLE
+
Required. The name of the target table to load into.
+ +
MODE
+
Optional. Defaults to `INSERT` if not specified. There are three available load modes:
+ +
INSERT
+
Loads data into the target table using the following method: + +``` pre +INSERT INTO target_table SELECT * FROM input_data; +``` +
+ +
UPDATE
+
Updates the `UPDATE_COLUMNS` of the target table where the rows have `MATCH_COLUMNS` attribute values equal to those of the input data, and the optional `UPDATE_CONDITION` is true.
+ +
MERGE
+
Inserts new rows and updates the `UPDATE_COLUMNS` of existing rows where `MATCH_COLUMNS` attribute values are equal to those of the input data, and the optional `UPDATE_CONDITION` is true. New rows are identified when the `MATCH_COLUMNS` value in the source data does not have a corresponding value in the existing data of the target table. In those cases, the **entire row** from the source file is inserted, not only the `MATCH` and `UPDATE` columns. If there are multiple new `MATCH_COLUMNS` values that are the same, only one new row for that value will be inserted. Use `UPDATE_CONDITION` to filter out the rows to discard.
+ +
MATCH\_COLUMNS
+
Required if `MODE` is `UPDATE` or `MERGE`. Specifies the column(s) to use as the join condition for the update. The attribute value in the specified target column(s) must be equal to that of the corresponding source data column(s) in order for the row to be updated in the target table.
+ +
UPDATE\_COLUMNS
+
Required if `MODE` is `UPDATE` or `MERGE`. Specifies the column(s) to update for the rows that meet the `MATCH_COLUMNS` criteria and the optional `UPDATE_CONDITION`.
+ +
UPDATE\_CONDITION
+
Optional. Specifies a Boolean condition (similar to what you would declare in a `WHERE` clause) that must be met for a row in the target table to be updated (or inserted in the case of a `MERGE`).
+ +
MAPPING
+
Optional. If a mapping is specified, it overrides the default source-to-target column mapping. The default source-to-target mapping is based on a match of column names as defined in the source `COLUMNS` section and the column names of the target `TABLE`. A mapping is specified as either: + +`target_column_name: source_column_name` + +or + +`target_column_name: 'expression'` + +Where <expression> is any expression that you would specify in the `SELECT` list of a query, such as a constant value, a column reference, an operator invocation, a function call, and so on.
+ +
PRELOAD
+
Optional. Specifies operations to run prior to the load operation. Currently, the only preload operation is `TRUNCATE`.
+ +
TRUNCATE
+
Optional. If set to true, `hawq load` will remove all rows in the target table prior to loading it.
+ +
REUSE\_TABLES
+
Optional. If set to true, `hawq load` will not drop the external table objects and staging table objects it creates. These objects will be reused for future load operations that use the same load specifications. Reusing objects improves performance of trickle loads (ongoing small loads to the same target table).
+ +
SQL
+
Optional. Defines SQL commands to run before and/or after the load operation. Commands that contain spaces or special characters must be enclosed in quotes. You can specify multiple `BEFORE` and/or `AFTER` commands. List commands in the desired order of execution.
+ +
BEFORE
+
Optional. A SQL command to run before the load operation starts. Enclose commands in quotes.
+ +
AFTER
+
Optional. A SQL command to run after the load operation completes. Enclose commands in quotes.
+ +## Notes + +If your database object names were created using a double-quoted identifier (delimited identifier), you must specify the delimited name within single quotes in the `hawq load` control file. For example, if you create a table as follows: + +``` sql +CREATE TABLE "MyTable" ("MyColumn" text); +``` + +Your YAML-formatted `hawq load` control file would refer to the above table and column names as follows: + +``` pre +- COLUMNS: + - '"MyColumn"': text +OUTPUT: + - TABLE: public.'"MyTable"' +``` + +## Log File Format + +Log files output by `hawq load` have the following format: + +``` pre +timestamp|level|message +``` + +Where <timestamp> takes the form: `YYYY-MM-DD HH:MM:SS`, <level> is one of `DEBUG`, `LOG`, `INFO`, `ERROR`, and <message> is a normal text message. + +Some `INFO` messages that may be of interest in the log files are (where *\#* corresponds to the actual number of seconds, units of data, or failed rows): + +``` pre +INFO|running time: #.## seconds +INFO|transferred #.# kB of #.# kB. +INFO|hawq load succeeded +INFO|hawq load succeeded with warnings +INFO|hawq load failed +INFO|1 bad row +INFO|# bad rows +``` + +## Examples + +Run a load job as defined in `my_load.yml`: + +``` shell +$ hawq load -f my_load.yml +``` + +Example load control file: + +``` pre +--- +VERSION: 1.0.0.1 +DATABASE: ops +USER: gpadmin +HOST: mdw-1 +PORT: 5432 +GPLOAD: + INPUT: + - SOURCE: + LOCAL_HOSTNAME: + - etl1-1 + - etl1-2 +           - etl1-3 + - etl1-4 + PORT: 8081 + FILE: +           - /var/load/data/* + - COLUMNS: + - name: text +           - amount: float4 +           - category: text +           - desc: text + - date: date + - FORMAT: text + - DELIMITER: '|' +    - ERROR_LIMIT: 25 + - ERROR_TABLE: payables.err_expenses + OUTPUT: + - TABLE: payables.expenses + - MODE: INSERT +   SQL: +   - BEFORE: "INSERT INTO audit VALUES('start', current_timestamp)" +   - AFTER: "INSERT INTO audit VALUES('end', +current_timestamp)" +``` + +## See Also + +[gpfdist](gpfdist.html#topic1), [CREATE EXTERNAL TABLE](../../sql/CREATE-EXTERNAL-TABLE.html#topic1) http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/cli/admin_utilities/hawqregister.html.md.erb ---------------------------------------------------------------------- diff --git a/markdown/reference/cli/admin_utilities/hawqregister.html.md.erb b/markdown/reference/cli/admin_utilities/hawqregister.html.md.erb new file mode 100644 index 0000000..c230d6d --- /dev/null +++ b/markdown/reference/cli/admin_utilities/hawqregister.html.md.erb @@ -0,0 +1,254 @@ +--- +title: hawq register +--- + +Loads and registers AO or Parquet-formatted tables in HDFS into a corresponding table in HAWQ. + +## Synopsis + +``` pre +Usage 1: +hawq register [] [-f ] [-e ] + +Usage 2: +hawq register [] [-c ][-F] + +Connection Options: + [-h | --host ] + [-p | --port ] + [-U | --user ] + [-d | --database ] + +Misc. Options: + [-f | --filepath ] + [-e | --eof] + [-F | --force ] + [-c | --config ] +hawq register help | -? +hawq register --version +``` + +## Prerequisites + +The client machine where `hawq register` is executed must meet the following conditions: + +- All hosts in your HAWQ cluster (master and segments) must have network access between them and the hosts containing the data to be loaded. +- The Hadoop client must be configured and the hdfs filepath specified. +- The files to be registered and the HAWQ table must be located in the same HDFS cluster. +- The target table DDL is configured with the correct data type mapping. 
+ +## Description + +`hawq register` is a utility that loads and registers existing data files or folders in HDFS into HAWQ internal tables, allowing HAWQ to directly read the data and use internal table processing for operations such as transactions and high perforance, without needing to load or copy it. Data from the file or directory specified by \ is loaded into the appropriate HAWQ table directory in HDFS and the utility updates the corresponding HAWQ metadata for the files. + +You can use `hawq register` to: + +- Load and register external Parquet-formatted file data generated by an external system such as Hive or Spark. +- Recover cluster data from a backup cluster. + +Two usage models are available. + +###Usage Model 1: Register file data to an existing table. + +`hawq register [-h hostname] [-p port] [-U username] [-d databasename] [-f filepath] [-e eof]` + +Metadata for the Parquet file(s) and the destination table must be consistent. Different data types are used by HAWQ tables and Parquet files, so the data is mapped. Refer to the section [Data Type Mapping](hawqregister.html#topic1__section7) below. You must verify that the structure of the Parquet files and the HAWQ table are compatible before running `hawq register`. + +####Limitations + +Only HAWQ or Hive-generated Parquet tables are supported. +Hash tables and partitioned tables are not supported in this use model. + +###Usage Model 2: Use information from a YAML configuration file to register data + +`hawq register [-h hostname] [-p port] [-U username] [-d databasename] [-c configfile] [--force] ` + +Files generated by the `hawq extract` command are registered through use of metadata in a YAML configuration file. Both AO and Parquet tables can be registered. Tables need not exist in HAWQ before being registered. + +The register process behaves differently, according to different conditions. + +- Existing tables have files appended to the existing HAWQ table. +- If a table does not exist, it is created and registered into HAWQ. +- If the -\\\-force option is used, the data in existing catalog tables is erased and re-registered. + + +###Limitations for Registering Hive Tables to HAWQ +The currently-supported data types for generating Hive tables into HAWQ tables are: boolean, int, smallint, tinyint, bigint, float, double, string, binary, char, and varchar. + +The following HIVE data types cannot be converted to HAWQ equivalents: timestamp, decimal, array, struct, map, and union. + +Only single-level partitioned tables are supported. + +###Data Type Mapping + +HAWQ and Parquet tables and HIVE and HAWQ tables use different data types. Mapping must be used for compatibility. You are responsible for making sure your implementation is mapped to the appropriate data type before running `hawq register`. The tables below show equivalent data types, if available. + +Table 1. 
HAWQ to Parquet Mapping + +|HAWQ Data Type | Parquet Data Type | +| :------------| :---------------| +| bool | boolean | +| int2/int4/date | int32 | +| int8/money | int64 | +| time/timestamptz/timestamp | int64 | +| float4 | float | +|float8 | double | +|bit/varbit/bytea/numeric | Byte array | +|char/bpchar/varchar/name| Byte array | +| text/xml/interval/timetz | Byte array | +| macaddr/inet/cidr | Byte array | + +**Additional HAWQ-to-Parquet Mapping** + +**point**: + +``` +group { + required int x; + required int y; +} +``` + +**circle:** + +``` +group { + required int x; + required int y; + required int r; +} +``` + +**box:** + +``` +group { + required int x1; + required int y1; + required int x2; + required int y2; +} +``` + +**iseg:** + + +``` +group { + required int x1; + required int y1; + required int x2; + required int y2; +} +``` + +**path**: + +``` +group { + repeated group { + required int x; + required int y; + } +} +``` + + +Table 2. HIVE to HAWQ Mapping + +|HIVE Data Type | HAWQ Data Type | +| :------------| :---------------| +| boolean | bool | +| tinyint | int2 | +| smallint | int2/smallint | +| int | int4 / int | +| bigint | int8 / bigint | +| float | float4 | +| double | float8 | +| string | varchar | +| binary | bytea | +| char | char | +| varchar | varchar | + + +## Options + +**General Options** + +
-? (show help)
+
Show help, then exit. + +
-\\\-version
+
Show the version of this utility, then exit.
+ + +**Connection Options** + +
-h , -\\\-host \
+
Specifies the host name of the machine on which the HAWQ master database server is running. If not specified, reads from the environment variable `$PGHOST` or defaults to `localhost`.
+ +
-p , -\\\-port \
+
Specifies the TCP port on which the HAWQ master database server is listening for connections. If not specified, reads from the environment variable `$PGPORT` or defaults to 5432.
+ +
-U , -\\\-user \
+
The database role name to connect as. If not specified, reads from the environment variable `$PGUSER` or defaults to the current system user name.
+ +
-d , -\\\-database \
+
The database to register the Parquet HDFS data into. The default is `postgres`
+ +
-f , -\\\-filepath \
+
The path of the file or directory in HDFS containing the files to be registered.
+ +
\
+
The HAWQ table that will store the data to be registered. If the --config option is not supplied, the table cannot use hash distribution. Random table distribution is strongly preferred. If hash distribution must be used, make sure that the distribution policy for the data files described in the YAML file is consistent with the table being registered into.
+ +####Miscellaneous Options + +The following options are used with specific use models. + +
-e , -\\\-eof \
+
Specify the end of the file to be registered. \ represents the valid content length of the file, in bytes to be used, a value between 0 the actual size of the file. If this option is not included, the actual file size, or size of files within a folder, is used. Used with Use Model 1.
+ +
-F , -\\\-force
+
Used for disaster recovery of a cluster. Clears all HDFS-related catalog contents in `pg_aoseg.pg_paqseg_$relid `and re-registers files to a specified table. The HDFS files are not removed or modified. To use this option for recovery, data is assumed to be periodically imported to the cluster to be recovered. Used with Usage Model 2.
+ +
-c , -\\\-config \
+
Registers files specified by YAML-format configuration files into HAWQ. Used with Usage Model 2.
+ + +## Example: Usage Model 2 + +This example shows how to register files using a YAML configuration file. This file is usually generated by the `hawq extract` command. + +Create a table and insert data into the table: + +``` +=> CREATE TABLE paq1(a int, b varchar(10))with(appendonly=true, orientation=parquet);` +=> INSERT INTO paq1 values(generate_series(1,1000), 'abcde'); +``` + +Extract the table's metadata. + +``` +hawq extract -o paq1.yml paq1 +``` + +Use the YAML file to register the new table paq2: + +``` +hawq register --config paq1.yml paq2 +``` + +Select the new table to determine if the content has already been registered: + +``` +=> SELECT count(*) FROM paq2; +``` +The result should return 1000. + +## See Also + +[hawq extract](hawqextract.html#topic1) + + + http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/cli/admin_utilities/hawqrestart.html.md.erb ---------------------------------------------------------------------- diff --git a/markdown/reference/cli/admin_utilities/hawqrestart.html.md.erb b/markdown/reference/cli/admin_utilities/hawqrestart.html.md.erb new file mode 100644 index 0000000..6d80e90 --- /dev/null +++ b/markdown/reference/cli/admin_utilities/hawqrestart.html.md.erb @@ -0,0 +1,112 @@ +--- +title: hawq restart +--- + +Shuts down and then restarts a HAWQ system after shutdown is complete. + +## Synopsis + +``` pre +hawq restart [-l|--logdir ] [-q|--quiet] [-v|--verbose] + [-M|--mode smart | fast | immediate] [-u|--reload] [-m|--masteronly] [-R|--restrict] + [-t|--timeout ] [-U | --special-mode maintenance] + [--ignore-bad-hosts cluster | allsegments] + +``` + +``` pre +hawq restart -? | -h | --help + +hawq restart --version +``` + +## Description + +The `hawq restart` utility is used to shut down and restart the HAWQ server processes. It is essentially equivalent to performing a `hawq stop -M smart` operation followed by `hawq start`. + +The \ in the command specifies what entity should be started: e.g. a cluster, a segment, the master node, standby node, or all segments in the cluster. + +When the `hawq restart` command runs, the utility uploads changes made to the master `pg_hba.conf` file or to the runtime configuration parameters in the master `hawq-site.xml` file without interruption of service. Note that any active sessions will not pick up the changes until they reconnect to the database. + +## Objects + +
cluster
+
Restart a HAWQ cluster.
+ +
master
+
Restart HAWQ master.
+ +
segment
+
Restart a local segment node.
+ +
standby
+
Restart a HAWQ standby.
+ +
allsegments
+
Restart all segments.
+ +## Options + +
-a (do not prompt)
+
Do not prompt the user for confirmation.
+ +
-l, -\\\-logdir \
+
Specifies the log directory for logs of the management tools. The default is `~/hawq/Adminlogs/`.
+ +
-q, -\\\-quiet
+
Run in quiet mode. Command output is not displayed on the screen, but is still written to the log file.
+ +
-v, -\\\-verbose
+
Displays detailed status, progress and error messages output by the utility.
+ +
-t, -\\\-timeout \
+
Specifies a timeout in seconds to wait for a segment instance to start up. If a segment instance was shutdown abnormally (due to power failure or killing its `postgres` database listener process, for example), it may take longer to start up due to the database recovery and validation process. If not specified, the default timeout is 60 seconds.
+ +
-M, -\\\-mode smart | fast | immediate
+
Smart shutdown is the default. Shutdown fails with a warning message, if active connections are found. + +Fast shut down interrupts and rolls back any transactions currently in progress . + +Immediate shutdown aborts transactions in progress and kills all `postgres` processes without allowing the database server to complete transaction processing or clean up any temporary or in-process work files. Because of this, immediate shutdown is not recommended. In some instances, it can cause database corruption that requires manual recovery.
+ +
-u, -\\\-reload
+
Utility mode. This mode runs on the master, only, and only allows incoming sessions that specify gp\_session\_role=utility. It allows bash scripts to reload the parameter values and connect but protects the system from normal clients who might be trying to connect to the system during startup.
+ +
-R, -\\\-restrict
+
Starts HAWQ in restricted mode (only database superusers are allowed to connect).
+ +
-U, -\\\-special-mode maintenance
+
(Superuser only) Start HAWQ in \[maintenance | upgrade\] mode. In maintenance mode, the `gp_maintenance_conn` parameter is set.
+ +
-\\\-ignore\-bad\-hosts cluster | allsegments
+
Overrides copying configuration files to a host on which SSH validation fails. If ssh to a skipped host is reestablished, make sure the configuration files are re-synched once it is reachable.
+ +
-? , -h , -\\\-help (help)
+
Displays the online help.
+ +
-\\\-version (show utility version)
+
Displays the version of this utility.
+ +## Examples + +Restart a HAWQ cluster: + +``` shell +$ hawq restart cluster +``` + +Restart a HAWQ system in restricted mode (only allow superuser connections): + +``` shell +$ hawq restart cluster -R +``` + +Start the HAWQ master instance only and connect in utility mode: + +``` shell +$ hawq start master -m PGOPTIONS='-c gp_session_role=utility' psql +``` + +## See Also + +[hawq stop](hawqstop.html#topic1), [hawq start](hawqstart.html#topic1) http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/cli/admin_utilities/hawqscp.html.md.erb ---------------------------------------------------------------------- diff --git a/markdown/reference/cli/admin_utilities/hawqscp.html.md.erb b/markdown/reference/cli/admin_utilities/hawqscp.html.md.erb new file mode 100644 index 0000000..77f64a8 --- /dev/null +++ b/markdown/reference/cli/admin_utilities/hawqscp.html.md.erb @@ -0,0 +1,95 @@ +--- +title: hawq scp +--- + +Copies files between multiple hosts at once. + +## Synopsis + +``` pre +hawq scp -f | -h [-h ...] + [--ignore-bad-hosts] [-J ] [-r] [-v] + [[@]:] [...] + [[@]:] + +hawq scp -? + +hawq scp --version +``` + +## Description + +The `hawq scp` utility allows you to copy one or more files from the specified hosts to other specified hosts in one command using SCP (secure copy). For example, you can copy a file from the HAWQ master host to all of the segment hosts at the same time. + +To specify the hosts involved in the SCP session, use the `-f` option to specify a file containing a list of host names, or use the `-h` option to name single host names on the command-line. At least one host name (`-h`) or a host file (`-f`) is required. The `-J` option allows you to specify a single character to substitute for the *hostname* in the `` and `` destination strings. If `-J` is not specified, the default substitution character is an equal sign (`=`). For example, the following command will copy `.bashrc` from the local host to `/home/gpadmin` on all hosts named in `hostfile_gpssh`: + +``` shell +$ hawq scp -f hostfile_hawqssh .bashrc =:/home/gpadmin +``` + +If a user name is not specified in the host list or with *user*`@` in the file path, `hawq scp` will copy files as the currently logged in user. To determine the currently logged in user, invoke the `whoami` command. By default, `hawq scp` copies to `$HOME` of the session user on the remote hosts after login. To ensure the file is copied to the correct location on the remote hosts, use absolute paths. + +Before using `hawq scp`, you must have a trusted host setup between the hosts involved in the SCP session. You can use the utility `hawq ssh-exkeys` to update the known host files and exchange public keys between hosts if you have not done so already. + +## Arguments +
-f \
+
Specifies the name of a file that contains a list of hosts that will participate in this SCP session. The syntax of the host file is one host per line as follows: + +``` pre + +``` +
+ +
-h \
+
Specifies a single host name that will participate in this SCP session. You can use the `-h` option multiple times to specify multiple host names.
+ +
\
+
The name (or absolute path) of a file or directory that you want to copy to other hosts (or file locations). This can be either a file on the local host or on another named host.
+ +
\
+
The path where you want the file(s) to be copied on the named hosts. If an absolute path is not used, the file will be copied relative to `$HOME` of the session user. You can also use the equal sign '`=`' (or another character that you specify with the `-J` option) in place of a \. This will then substitute in each host name as specified in the supplied host file (`-f`) or with the `-h` option.
+ +## Options + +
+-\\\-ignore-bad-hosts +
+
+Overrides copying configuration files to a host on which SSH validation fails. If SSH to a skipped host is reestablished, make sure the files are re-synched once it is reachable. +
+ +
-J \
+
The `-J` option allows you to specify a single character to substitute for the \ in the `` and `` destination strings. If `-J` is not specified, the default substitution character is an equal sign (`=`).
+ + +
-v (verbose mode)
+
Reports additional messages in addition to the SCP command output.
+ +
-r (recursive mode)
+
If \<file_to_copy\> is a directory, copies the contents of \<file_to_copy\> and all subdirectories.
+ +
-? (help)
+
Displays the online help.
+ +
-\\\-version
+
Displays the version of this utility.
## Examples

Copy the file named `installer.tar` to `/` on all the hosts in the file `hostfile_hawqssh`:

``` shell
$ hawq scp -f hostfile_hawqssh installer.tar =:/
```

Copy the file named `myfuncs.so` to the specified location on the hosts named `sdw1` and `sdw2`:

``` shell
$ hawq scp -h sdw1 -h sdw2 myfuncs.so =:/usr/local/hawq/lib
```

## See Also

[hawq ssh](hawqssh.html#topic1), [hawq ssh-exkeys](hawqssh-exkeys.html#topic1)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/cli/admin_utilities/hawqssh-exkeys.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/cli/admin_utilities/hawqssh-exkeys.html.md.erb b/markdown/reference/cli/admin_utilities/hawqssh-exkeys.html.md.erb
new file mode 100644
index 0000000..2567faf
--- /dev/null
+++ b/markdown/reference/cli/admin_utilities/hawqssh-exkeys.html.md.erb
@@ -0,0 +1,105 @@
---
title: hawq ssh-exkeys
---

Exchanges SSH public keys between hosts.

## Synopsis

``` pre
hawq ssh-exkeys -f <hostfile_exkeys> | -h <hostname> [-h <hostname> ...] [-p <password>]

hawq ssh-exkeys -e <hostfile_exkeys> -x <hostfile_hawqexpand> [-p <password>]

hawq ssh-exkeys --version

hawq ssh-exkeys [-? | --help]
```

## Description

The `hawq ssh-exkeys` utility exchanges SSH keys between the specified host names (or host addresses). This allows SSH connections between HAWQ hosts and network interfaces without a password prompt. The utility is used to initially prepare a HAWQ system for password-free SSH access, and also to add additional SSH keys when expanding a HAWQ system.

To specify the hosts involved in an initial SSH key exchange, use the `-f` option to specify a file containing a list of host names (recommended), or use the `-h` option to name individual hosts on the command line. At least one host name (`-h`) or a host file is required. Note that the local host is included in the key exchange by default.

To specify new expansion hosts to be added to an existing HAWQ system, use the `-e` and `-x` options. The `-e` option specifies a file containing a list of existing hosts in the system that already have SSH keys. The `-x` option specifies a file containing a list of new hosts that need to participate in the SSH key exchange.

Keys are exchanged as the currently logged in user. A good practice is to perform the key exchange process twice: once as `root` and once as the `gpadmin` user (the designated owner of your HAWQ installation). The HAWQ management utilities require that the same non-root user be created on all hosts in the HAWQ system, and the utilities must be able to connect as that user to all hosts without a password prompt.

The `hawq ssh-exkeys` utility performs key exchange using the following steps:

- Creates an RSA identification key pair for the current user if one does not already exist. The public key of this pair is added to the `authorized_keys` file of the current user.
- Updates the `known_hosts` file of the current user with the host key of each host specified using the `-h`, `-f`, `-e`, and `-x` options.
- Connects to each host using `ssh` and obtains the `authorized_keys`, `known_hosts`, and `id_rsa.pub` files to set up password-free access.
- Adds keys from the `id_rsa.pub` files obtained from each host to the `authorized_keys` file of the current user.
- Updates the `authorized_keys`, `known_hosts`, and `id_rsa.pub` files on all hosts with new host information (if any).

## Options
-e \<hostfile_exkeys\>
+
When doing a system expansion, this is the name and location of a file containing all configured host names and host addresses (interface names) for each host in your *current* HAWQ system (master, standby master and segments), one name per line without blank lines or extra spaces. Hosts specified in this file cannot be specified in the host file used with `-x`.
+ +
-f \<hostfile_exkeys\>
+
Specifies the name and location of a file containing all configured host names and host addresses (interface names) for each host in your HAWQ system (master, standby master and segments), one name per line without blank lines or extra spaces.
+ +
-h \<hostname\>
+
Specifies a single host name (or host address) that will participate in the SSH key exchange. You can use the `-h` option multiple times to specify multiple host names and host addresses.
+ +
-p \<password\>
+
Specifies the password used to log in to the hosts. The hosts should share the same password. This option is useful when invoking `hawq ssh-exkeys` in a script.
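For example, a sketch of scripted use; the password value is a placeholder you would supply yourself:

``` shell
$ hawq ssh-exkeys -f hostfile_exkeys -p 'changeme'
```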
+ +
-\\\-version
+
Displays the version of this utility.
+ +
-x \<hostfile_hawqexpand\>
+
When doing a system expansion, this is the name and location of a file containing all configured host names and host addresses (interface names) for each new segment host you are adding to your HAWQ system, one name per line without blank lines or extra spaces. Hosts specified in this file cannot be specified in the host file used with `-e`.
+ +
-?, -\\\-help (help)
+
Displays the online help.
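For context, the following is a rough manual equivalent of a single exchange, shown only to illustrate what the utility automates (the user and host names are hypothetical):

``` shell
$ ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa   # create a key pair (skip if ~/.ssh/id_rsa exists)
$ ssh-copy-id gpadmin@sdw4                   # append the public key to the remote authorized_keys
$ ssh gpadmin@sdw4 true                      # verify password-free login
```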
## Examples

Exchange SSH keys between all host names and addresses listed in the file `hostfile_exkeys`:

``` shell
$ hawq ssh-exkeys -f hostfile_exkeys
```

Exchange SSH keys between the hosts `sdw1`, `sdw2`, and `sdw3`:

``` shell
$ hawq ssh-exkeys -h sdw1 -h sdw2 -h sdw3
```

Exchange SSH keys between existing hosts `sdw1`, `sdw2`, and `sdw3`, and new hosts `sdw4` and `sdw5` as part of a system expansion operation:

``` shell
$ cat hostfile_exkeys
mdw
mdw-1
mdw-2
smdw
smdw-1
smdw-2
sdw1
sdw1-1
sdw1-2
sdw2
sdw2-1
sdw2-2
sdw3
sdw3-1
sdw3-2
$ cat hostfile_hawqexpand
sdw4
sdw4-1
sdw4-2
sdw5
sdw5-1
sdw5-2
$ hawq ssh-exkeys -e hostfile_exkeys -x hostfile_hawqexpand
```

## See Also

[hawq ssh](hawqssh.html#topic1), [hawq scp](hawqscp.html#topic1)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/cli/admin_utilities/hawqssh.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/cli/admin_utilities/hawqssh.html.md.erb b/markdown/reference/cli/admin_utilities/hawqssh.html.md.erb
new file mode 100644
index 0000000..ee31308
--- /dev/null
+++ b/markdown/reference/cli/admin_utilities/hawqssh.html.md.erb
@@ -0,0 +1,105 @@
---
title: hawq ssh
---

Provides SSH access to multiple hosts at once.

## Synopsis

``` pre
hawq ssh (-f <hostfile_hawqssh> | -h <hostname> [-h <hostname> ...])
    [-e]
    [-u <username>]
    [-v]
    [<bash_command>]

hawq ssh [-? | --help]

hawq ssh --version
```

## Description

The `hawq ssh` utility allows you to run bash shell commands on multiple hosts at once using SSH (secure shell). You can execute a single command by specifying it on the command line, or omit the command to enter an interactive command-line session.

To specify the hosts involved in the SSH session, use the `-f` option to specify a file containing a list of host names, or use the `-h` option to name individual hosts on the command line. At least one host name (`-h`) or a host file (`-f`) is required. Note that the current host is ***not*** included in the session by default. To include the local host, you must explicitly declare it in the list of hosts involved in the session.

Before using `hawq ssh`, you must have a trusted host setup between the hosts involved in the SSH session. You can use the utility `hawq ssh-exkeys` to update the known host files and exchange public keys between hosts if you have not done so already.

If you do not specify a command on the command line, `hawq ssh` goes into interactive mode. At the `hawq ssh` command prompt (`=>`), you can enter a command as you would in a regular bash terminal command line, and the command is executed on all hosts involved in the session. To end an interactive session, press `CTRL`+`D` on the keyboard or type `exit` or `quit`.

If a user name is not specified in the host file or via the `-u` option, `hawq ssh` executes commands as the currently logged in user. To determine the currently logged in user, run the `whoami` command. By default, `hawq ssh` goes to `$HOME` of the session user on the remote hosts after login. To ensure commands are executed correctly on all remote hosts, you should always enter absolute paths.

## Arguments
-f \<hostfile_hawqssh\>
+
Specifies the name of a file that contains a list of hosts that will participate in this SSH session. The host name is required, and you can optionally specify an alternate user name and/or SSH port number per host. The syntax of the host file is one host per line, as follows:

``` pre
[username@]hostname[:ssh_port]
```
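For illustration, a hypothetical host file that uses the optional user name and SSH port fields:

``` pre
sdw1
gpadmin@sdw2
gpadmin@sdw3:2222
```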
+ +
-h \<hostname\>
+
Specifies a single host name that will participate in this SSH session. You can use the `-h` option multiple times to specify multiple host names.
+ + +## Options + +
\<bash_command\>
+
A bash shell command to execute on all hosts involved in this session (optionally enclosed in quotes). If not specified, `hawq ssh` will start an interactive session.
+ +
-e (echo)
+
Optional. Echoes the commands passed to each host and their resulting output while running in non-interactive mode.
+ +
-u \
+
Specifies the user name for the SSH session.
+ +
-v (verbose mode)
+
Reports additional messages in addition to the command output when running in non-interactive mode.
+ +
-\\\-version
+
Displays the version of this utility.
+ +
-?, -\\\-help
+
Displays the online help.
## Examples

Start an interactive group SSH session with all hosts listed in the file `hostfile_hawqssh`:

``` shell
$ hawq ssh -f hostfile_hawqssh
```

At the `hawq ssh` interactive command prompt, run a shell command on all the hosts involved in this session:

``` pre
=> ls -a /data/path-to-masterdd/*
```

Exit an interactive session with `exit` or `quit`:

``` pre
=> exit
```

Start a non-interactive group SSH session with the hosts named `sdw1` and `sdw2` and pass a file named `command_file`, containing several commands, to `hawq ssh`:

``` shell
$ hawq ssh -h sdw1 -h sdw2 -v -e < command_file
```

Execute single commands in non-interactive mode on hosts `sdw2` and `localhost`:

``` shell
$ hawq ssh -h sdw2 -h localhost -v -e 'ls -a /data/primary/*'
$ hawq ssh -h sdw2 -h localhost -v -e 'echo $GPHOME'
$ hawq ssh -h sdw2 -h localhost -v -e 'ls -1 | wc -l'
```

## See Also

[hawq ssh-exkeys](hawqssh-exkeys.html#topic1), [hawq scp](hawqscp.html#topic1)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/cli/admin_utilities/hawqstart.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/cli/admin_utilities/hawqstart.html.md.erb b/markdown/reference/cli/admin_utilities/hawqstart.html.md.erb
new file mode 100644
index 0000000..ff7b427
--- /dev/null
+++ b/markdown/reference/cli/admin_utilities/hawqstart.html.md.erb
@@ -0,0 +1,119 @@
---
title: hawq start
---

Starts a HAWQ system.

## Synopsis

``` pre
hawq start <object> [-l | --logdir <logfile_directory>] [-q | --quiet]
    [-v | --verbose] [-m | --masteronly] [-t | --timeout <timeout_seconds>]
    [-R | --restrict] [-U | --special-mode maintenance]
    [--ignore-bad-hosts cluster | allsegments]
```

``` pre
hawq start -? | -h | --help

hawq start --version
```

## Description

The `hawq start` utility is used to start the HAWQ server processes. When you start a HAWQ system, you are actually starting several `postgres` database server listener processes at once (the master and all of the segment instances). The `hawq start` utility handles the startup of the individual instances. Each instance is started in parallel.

The \<object\> in the command specifies what entity should be started: for example, a cluster, a segment, the master node, the standby node, or all segments in the cluster.

The first time an administrator runs `hawq start cluster`, the utility creates a static hosts cache file named `$GPHOME/etc/slaves` to store the segment host names. Subsequently, the utility uses this list of hosts to start the system more efficiently. The utility creates a new hosts cache file at each startup.

The `hawq start master` command starts only the HAWQ master, without segment or standby nodes. These can be started later, using `hawq start segment` and/or `hawq start standby`.

**Note:** Typically you should always use `hawq start cluster` or `hawq restart cluster` to start the cluster. If you do end up using `hawq start standby|master|segment` to start nodes individually, make sure you always start the standby before the active master (see the example following the object list below). Otherwise, the standby can become unsynchronized with the active master.

Before you can start a HAWQ system, you must have initialized the system or node by using `hawq init <object>` first.

## Objects
cluster
+
Start a HAWQ cluster.
+ +
master
+
Start HAWQ master.
+ +
segment
+
Start a local segment node.
+ +
standby
+
Start a HAWQ standby.
+ +
allsegments
+
Start all segments.
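As a sketch of the individual-object startup ordering described in the Description note (assuming a standby master is configured):

``` shell
$ hawq start standby
$ hawq start master
$ hawq start allsegments
```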
+ +## Options + +
-l, -\\\-logdir \<logfile_directory\>
+
Specifies the log directory for logs of the management tools. The default is `~/hawq/Adminlogs/`.
+ +
-q, -\\\-quiet
+
Run in quiet mode. Command output is not displayed on the screen, but is still written to the log file.
+ +
-v, -\\\-verbose
+
Displays detailed status, progress and error messages output by the utility.
+ +
-m, -\\\-masteronly
+
Optional. Starts the HAWQ master instance only, in utility mode, which may be useful for maintenance tasks. Only utility-mode connections to the master are allowed. For example:

``` shell
$ PGOPTIONS='-c gp_session_role=utility' psql
```
+ +
-R, -\\\-restrict (restricted mode)
+
Starts HAWQ in restricted mode (only database superusers are allowed to connect).
+ +
-t, -\\\-timeout \<timeout_seconds\>
+
Specifies a timeout in seconds to wait for a segment instance to start up. If a segment instance was shut down abnormally (due to power failure or killing its `postgres` database listener process, for example), it may take longer to start up due to the database recovery and validation process. If not specified, the default timeout is 60 seconds.
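For example, to allow segments recovering from an abnormal shutdown up to 180 seconds to start (the value is illustrative):

``` shell
$ hawq start cluster -t 180
```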
+ +
-U, -\\\-special-mode maintenance
+
(Superuser only) Start HAWQ in maintenance or upgrade mode. In maintenance mode, the `gp_maintenance_conn` parameter is set.
+ +
-\\\-ignore-bad-hosts cluster | allsegments
+
Overrides copying configuration files to a host on which SSH validation fails. If SSH to a skipped host is reestablished, make sure the configuration files are resynchronized once the host is reachable.
+ +
-?, -h, -\\\-help (help)
+
Displays the online help.
+ +
-\\\-version (show utility version)
+
Displays the version of this utility.
## Examples

Start a HAWQ system:

``` shell
$ hawq start cluster
```

Start a HAWQ master in maintenance mode:

``` shell
$ hawq start master -m
```

Start a HAWQ system in restricted mode (only allow superuser connections):

``` shell
$ hawq start cluster -R
```

Start the HAWQ master instance only and connect in utility mode:

``` shell
$ hawq start master -m
$ PGOPTIONS='-c gp_session_role=utility' psql
```

## See Also

[hawq stop](hawqstop.html#topic1), [hawq init](hawqinit.html#topic1)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/cli/admin_utilities/hawqstate.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/cli/admin_utilities/hawqstate.html.md.erb b/markdown/reference/cli/admin_utilities/hawqstate.html.md.erb
new file mode 100644
index 0000000..3927442
--- /dev/null
+++ b/markdown/reference/cli/admin_utilities/hawqstate.html.md.erb
@@ -0,0 +1,65 @@
---
title: hawq state
---

Shows the status of a running HAWQ system.

## Synopsis

``` pre
hawq state
    [-b]
    [-l <logfile_directory> | --logdir <logfile_directory>]
    [(-v | --verbose) | (-q | --quiet)]

hawq state [-h | --help]
```

## Description

The `hawq state` utility displays information about a running HAWQ instance. A HAWQ system comprises multiple PostgreSQL database instances (segments) spanning multiple machines, and the `hawq state` utility can provide additional status information, such as:

- Total segment count.
- Which segments are down.
- Master and segment configuration information (hosts, data directories, etc.).
- The ports used by the system.
- Whether a standby master is present, and if it is active.

## Options
-b (brief status)
+
Display a brief summary of the state of the HAWQ system. This is the default mode.
+ +
-l, -\\\-logdir \<logfile_directory\>
+
Specifies the directory to check for log files. The default is `$GPHOME/hawqAdminLogs`.

Log files within the directory are named according to the command being invoked and the date, for example: `hawq_config_<date>.log`, `hawq_state_<date>.log`, etc.
+ +
-q, -\\\-quiet
+
Run in quiet mode. Except for warning messages, command output is not displayed on the screen. However, this information is still written to the log file.
+ +
-v, -\\\-verbose
+
Displays error messages and outputs detailed status and progress information.
+ +
-h, -\\\-help (help)
+
Displays the online help.
## Examples

Show brief status information of a HAWQ system:

``` shell
$ hawq state -b
```

Run `hawq state` with `TodaysLogs` as the log directory instead of the default, then list that directory:

``` shell
$ hawq state -l TodaysLogs
$ ls TodaysLogs
hawq_config_20160707.log  hawq_init_20160707.log  master.initdb
```

## See Also

[hawq start](hawqstart.html#topic1), [gplogfilter](gplogfilter.html#topic1)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/cli/admin_utilities/hawqstop.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/cli/admin_utilities/hawqstop.html.md.erb b/markdown/reference/cli/admin_utilities/hawqstop.html.md.erb
new file mode 100644
index 0000000..dd54156
--- /dev/null
+++ b/markdown/reference/cli/admin_utilities/hawqstop.html.md.erb
@@ -0,0 +1,104 @@
---
title: hawq stop
---

Stops or restarts a HAWQ system.

## Synopsis

``` pre
hawq stop <object> [-a | --prompt]
    [-M (smart|fast|immediate) | --mode (smart|fast|immediate)]
    [-t <timeout_seconds> | --timeout <timeout_seconds>]
    [-u | --reload]
    [-l <logfile_directory> | --logdir <logfile_directory>]
    [(-v | --verbose) | (-q | --quiet)]

hawq stop [-? | -h | --help]
```

## Description

The `hawq stop` utility is used to stop the database servers that comprise a HAWQ system. When you stop a HAWQ system, you are actually stopping several `postgres` database server processes at once (the master and all of the segment instances). The `hawq stop` utility handles the shutdown of the individual instances. Each instance is shut down in parallel.

By default, you are not allowed to shut down HAWQ if there are any client connections to the database. Use the `-M fast` option to roll back all in-progress transactions and terminate any connections before shutting down. If there are any transactions in progress, the default behavior is to wait for them to commit before shutting down.

With the `-u` option, the utility reloads changes made to the master `pg_hba.conf` file or to *runtime* configuration parameters in the master `hawq-site.xml` file without interruption of service. Note that any active sessions will not pick up the changes until they reconnect to the database. If the HAWQ cluster has active connections, use the command `hawq stop cluster -u -M fast` to ensure that changes to the parameters are reloaded.

## Objects
cluster
+
Stop a HAWQ cluster.
+ +
master
+
Shuts down a HAWQ master instance that was started in maintenance mode.
+ +
segment
+
Stop a local segment node.
+ +
standby
+
Stop the HAWQ standby master process.
+ +
allsegments
+
Stop all segments.
+ +## Options + +
-a, -\\\-prompt
+
Do not prompt the user for confirmation before executing.
+ +
-l, -\\\-logdir \<logfile_directory\>
+
The directory to write the log file. The default is `~/hawq/Adminlogs/`.
+ +
-M, -\\\-mode (smart | fast | immediate)
+
Smart shutdown is the default. Shutdown fails with a warning message if active connections are found.

Fast shutdown interrupts and rolls back any transactions currently in progress.

Immediate shutdown aborts transactions in progress and kills all `postgres` processes without allowing the database server to complete transaction processing or clean up any temporary or in-process work files. Because of this, immediate shutdown is not recommended. In some instances, it can cause database corruption that requires manual recovery.
+ +
-q, -\\\-quiet
+
Run in quiet mode. Command output is not displayed on the screen, but is still written to the log file.
+ +
-t, -\\\-timeout \<timeout_seconds\>
+
Specifies a timeout threshold (in seconds) to wait for a segment instance to shut down. If a segment instance does not shut down in the specified number of seconds, `hawq stop` displays a message indicating that one or more segments are still in the process of shutting down and that you cannot restart HAWQ until the segment instance(s) are stopped. This option is useful in situations where `hawq stop` is executed and there are very large transactions that need to roll back. These large transactions can take over a minute to roll back and surpass the default timeout period of 600 seconds.
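For example, to give large rollbacks up to 900 seconds to complete before the utility reports segments as still shutting down (the value is illustrative):

``` shell
$ hawq stop cluster -M fast -t 900
```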
+ +
-u, -\\\-reload
+
This option reloads configuration parameter values without restarting the HAWQ cluster.
+ +
-v, -\\\-verbose
+
Displays detailed status, progress and error messages output by the utility.
+ +
-?, -h, -\\\-help (help)
+
Displays the online help.
## Examples

Stop a HAWQ system in smart mode:

``` shell
$ hawq stop cluster -M smart
```

Stop a HAWQ system in fast mode:

``` shell
$ hawq stop cluster -M fast
```

Stop a master instance that was started in maintenance mode:

``` shell
$ hawq stop master
```

Reload the `hawq-site.xml` and `pg_hba.conf` files after making configuration changes, but do not shut down the HAWQ cluster:

``` shell
$ hawq stop cluster -u
```

## See Also

[hawq start](hawqstart.html#topic1)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/cli/client_utilities/createdb.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/cli/client_utilities/createdb.html.md.erb b/markdown/reference/cli/client_utilities/createdb.html.md.erb
new file mode 100644
index 0000000..31b0c80
--- /dev/null
+++ b/markdown/reference/cli/client_utilities/createdb.html.md.erb
@@ -0,0 +1,105 @@
---
title: createdb
---

Creates a new database.

## Synopsis

``` pre
createdb [<connection_options>] [<database_options>] [-e | --echo] [<dbname> ['<description>']]

createdb --help

createdb --version
```

where:

``` pre
<connection_options> =
    [-h <host> | --host <host>]
    [-p <port> | --port <port>]
    [-U <username> | --username <username>]
    [-W | --password]

<database_options> =
    [-D <tablespace> | --tablespace <tablespace>]
    [-E <encoding> | --encoding <encoding>]
    [-O <owner> | --owner <owner>]
    [-T