hawq-commits mailing list archives

From yo...@apache.org
Subject [05/50] [abbrv] incubator-hawq-docs git commit: MASTER_DATA_DIRECTORY clarifications - HAWQ-1031
Date Thu, 29 Sep 2016 17:22:19 GMT
MASTER_DATA_DIRECTORY clarifications - HAWQ-1031


Project: http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/commit/1a7efeff
Tree: http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/tree/1a7efeff
Diff: http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/diff/1a7efeff

Branch: refs/heads/master
Commit: 1a7efeffc1e1a5e010f7aa4cba4622f9da1357bb
Parents: bec5e3e
Author: Lisa Owen <lowen@pivotal.io>
Authored: Mon Aug 29 13:23:01 2016 -0700
Committer: Lisa Owen <lowen@pivotal.io>
Committed: Mon Aug 29 13:23:01 2016 -0700

----------------------------------------------------------------------
 ...acesandHighAvailabilityEnabledHDFS.html.md.erb | 18 ++++++++----------
 clientaccess/client_auth.html.md.erb              | 14 +++++++-------
 clientaccess/disable-kerberos.html.md.erb         | 11 +++--------
 clientaccess/kerberos.html.md.erb                 | 18 +++++++++---------
 reference/HAWQEnvironmentVariables.html.md.erb    |  8 --------
 5 files changed, 27 insertions(+), 42 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/1a7efeff/admin/HAWQFilespacesandHighAvailabilityEnabledHDFS.html.md.erb
----------------------------------------------------------------------
diff --git a/admin/HAWQFilespacesandHighAvailabilityEnabledHDFS.html.md.erb b/admin/HAWQFilespacesandHighAvailabilityEnabledHDFS.html.md.erb
index 54cec32..3147033 100644
--- a/admin/HAWQFilespacesandHighAvailabilityEnabledHDFS.html.md.erb
+++ b/admin/HAWQFilespacesandHighAvailabilityEnabledHDFS.html.md.erb
@@ -12,7 +12,7 @@ To enable the HDFS NameNode HA feature for use with HAWQ, you need to perform th
 1. Collect information about the target filespace.
 1. Stop the HAWQ cluster and back up the catalog (**Note:** Ambari users must perform this manual step.)
 1. Move the filespace location using the command line tool (**Note:** Ambari users must perform this manual step.)
-1. Reconfigure $\{GPHOME\}/etc/hdfs-client.xml and $\{GPHOME\}/etc/hawq-site.xml files. Then, synchronize updated configuration files to all HAWQ nodes.
+1. Reconfigure `${GPHOME}/etc/hdfs-client.xml` and `${GPHOME}/etc/hawq-site.xml` files. Then, synchronize updated configuration files to all HAWQ nodes.
 1. Start the HAWQ cluster and resynchronize the standby master after moving the filespace.
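
The command-line move in step 3 uses `hawq filespace`; a minimal sketch, assuming the default filespace and a placeholder HA nameservice `hdpcluster`:

```shell
$ hawq filespace --movefilespace default --location=hdfs://hdpcluster/hawq_default
```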
 
 
@@ -76,26 +76,24 @@ To move the filespace location to a HA-enabled HDFS location, you must move the
 When you enable HA HDFS, you are changing the HAWQ catalog and persistent tables. You cannot perform transactions while persistent tables are being updated. Therefore, before you move the filespace location, back up the catalog. This is to ensure that you do not lose data due to a hardware failure or during an operation \(such as killing the HAWQ process\).
 
 
-1. If you defined a custom port for HAWQ master, export the PGPORT environment variable. For example:
+1. If you defined a custom port for HAWQ master, export the `PGPORT` environment variable. For example:
 
 	```shell
 	export PGPORT=9000
 	```
 
-1. If you have not configured it already, export the MASTER\_DATA\_DIRECTORY environment variable.
+1. Save the HAWQ master data directory, found in the `hawq_master_directory` property value in `hawq-site.xml`, to an environment variable.
  
 	```bash
-	export MASTER_DATA_DIRECTORY=/path/to/master/catalog
+	export MDATA_DIR=/path/to/hawq_master_directory
 	```
 
-	See [Environment Variables](/20/reference/HAWQEnvironmentVariables.html) for more information on environment variables.
-
 1.  Disconnect all workload connections. Check for active connections with:
 
     ```shell
     $ psql -p ${PGPORT} -c "select * from pg_catalog.pg_stat_activity" -d template1
     ```
-    where $\{PGPORT\} corresponds to the port number you optionally customized for HAWQ master.
+    where `${PGPORT}` corresponds to the port number you optionally customized for HAWQ master.
     
 
 2.  Issue a checkpoint: 
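
    The checkpoint command itself is elided by this hunk; following the `psql` pattern above, it is presumably along these lines:

    ```shell
    $ psql -p ${PGPORT} -d template1 -c "CHECKPOINT"
    ```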
@@ -113,7 +111,7 @@ When you enable HA HDFS, you are changing the HAWQ catalog and persistent table
 4.  Copy the master data directory to a backup location:
 
     ```shell
-    $ cp -r ${MASTER_DATA_DIRECTORY} /catalog/backup/location
+    $ cp -r ${MDATA_DIR} /catalog/backup/location
     ```
 	The master data directory contains the catalog. Fatal errors can occur due to hardware failure or if you fail to kill a HAWQ process before attempting a filespace location change. Make sure you back this directory up.
 
@@ -123,7 +121,7 @@ When you enable HA HDFS, you are changing the HAWQ catalog and persistent table
 
 HAWQ provides the command line tool, `hawq filespace`, to move the location of the filespace.
 
-1. If you defined a custom port for HAWQ master, export the PGPORT environment variable. For example:
+1. If you defined a custom port for HAWQ master, export the `PGPORT` environment variable. For example:
 
 	```shell
 	export PGPORT=9000
@@ -139,7 +137,7 @@ HAWQ provides the command line tool, `hawq filespace`, to move the location of t
 
 Non-fatal errors can occur if you provide invalid input or if you have not stopped HAWQ before attempting a filespace location change. Check that you have followed the instructions from the beginning, or correct the input error before you re-run `hawq filespace`.
 
-Fatal errors can occur due to hardware failure or if you fail to kill a HAWQ process before attempting a filespace location change. When a fatal error occurs, you will see the message, "PLEASE RESTORE MASTER DATA DIRECTORY" in the output. If this occurs, shut down the database and restore the `${MASTER_DATA_DIRECTORY}` that you backed up in Step 4.
+Fatal errors can occur due to hardware failure or if you fail to kill a HAWQ process before attempting a filespace location change. When a fatal error occurs, you will see the message, "PLEASE RESTORE MASTER DATA DIRECTORY" in the output. If this occurs, shut down the database and restore the `${MDATA_DIR}` that you backed up in Step 4.
 
 ### <a id="configuregphomeetchdfsclientxml"></a>Step 5: Update HAWQ to Use NameNode HA by Reconfiguring hdfs-client.xml and hawq-site.xml
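
For context, the reconfiguration this step names typically swaps a single NameNode host:port for an HA nameservice in `hdfs-client.xml`, using the standard HDFS HA client properties; a sketch, with `hdpcluster` and the hostnames as placeholders:

```xml
<!-- logical nameservice that replaces the single NameNode address -->
<property>
  <name>dfs.nameservices</name>
  <value>hdpcluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.hdpcluster</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.hdpcluster.nn1</name>
  <value>namenode1.example.com:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.hdpcluster.nn2</name>
  <value>namenode2.example.com:8020</value>
</property>
<!-- client-side failover between the two NameNodes -->
<property>
  <name>dfs.client.failover.proxy.provider.hdpcluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
```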
 

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/1a7efeff/clientaccess/client_auth.html.md.erb
----------------------------------------------------------------------
diff --git a/clientaccess/client_auth.html.md.erb b/clientaccess/client_auth.html.md.erb
index 1c06d42..9173aed 100644
--- a/clientaccess/client_auth.html.md.erb
+++ b/clientaccess/client_auth.html.md.erb
@@ -6,11 +6,11 @@ When a HAWQ system is first initialized, the system contains one predefined *sup
 
 ## <a id="topic2"></a>Allowing Connections to HAWQ 
 
-Client access and authentication is controlled by the standard PostgreSQL host-based authentication file, pg\_hba.conf. In HAWQ, the pg\_hba.conf file of the master instance controls client access and authentication to your HAWQ system. HAWQ segments have pg\_hba.conf files that are configured to allow only client connections from the master host and never accept client connections. Do not alter the pg\_hba.conf file on your segments.
+Client access and authentication are controlled by the standard PostgreSQL host-based authentication file, `pg_hba.conf`. In HAWQ, the `pg_hba.conf` file of the master instance controls client access and authentication to your HAWQ system. HAWQ segments have `pg_hba.conf` files that are configured to allow only client connections from the master host and never accept client connections. Do not alter the `pg_hba.conf` file on your segments.
 
 See [The pg\_hba.conf File](http://www.postgresql.org/docs/9.0/interactive/auth-pg-hba-conf.html) in the PostgreSQL documentation for more information.
 
-The general format of the pg\_hba.conf file is a set of records, one per line. HAWQ ignores blank lines and any text after the `#` comment character. A record consists of a number of fields that are separated by spaces and/or tabs. Fields can contain white space if the field value is quoted. Records cannot be continued across lines. Each remote client access record has the following format:
+The general format of the `pg_hba.conf` file is a set of records, one per line. HAWQ ignores blank lines and any text after the `#` comment character. A record consists of a number of fields that are separated by spaces and/or tabs. Fields can contain white space if the field value is quoted. Records cannot be continued across lines. Each remote client access record has the following format:
 
 ```
 *host*   *database*   *role*   *CIDR-address*   *authentication-method*
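# e.g. (sketch) allow any role, any database, from one subnet, with md5 passwords:
host   all   all   192.168.1.0/24   md5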
@@ -38,13 +38,13 @@ The following table describes the meaning of each field.
 
 ### <a id="topic3"></a>Editing the pg\_hba.conf File 
 
-This example shows how to edit the pg\_hba.conf file of the master to allow remote client access to all databases from all roles using encrypted password authentication.
+This example shows how to edit the `pg_hba.conf` file of the master to allow remote client access to all databases from all roles using encrypted password authentication.
 
-**Note:** For a more secure system, consider removing all connections that use trust authentication from your master pg\_hba.conf. Trust authentication means the role is granted access without any authentication, therefore bypassing all security. Replace trust entries with ident authentication if your system has an ident service available.
+**Note:** For a more secure system, consider removing all connections that use trust authentication from your master `pg_hba.conf`. Trust authentication means the role is granted access without any authentication, thereby bypassing all security. Replace trust entries with ident authentication if your system has an ident service available.
 
 #### <a id="ip144328"></a>Editing pg\_hba.conf 
 
-1.  Open the file $MASTER\_DATA\_DIRECTORY/pg\_hba.conf in a text editor.
+1.  Obtain the master data directory from the `hawq_master_directory` property value in `hawq-site.xml` and use a text editor to open the `pg_hba.conf` file in this directory.
 2.  Add a line to the file for each type of connection you want to allow. Records are read sequentially, so the order of the records is significant. Typically, earlier records will have tight connection match parameters and weaker authentication methods, while later records will have looser match parameters and stronger authentication methods. For example:
 
     ```
@@ -68,7 +68,7 @@ This example shows how to edit the pg\_hba.conf file of the master to allow remo
     ```
 
 3.  Save and close the file.
-4.  Reload the pg\_hba.conf configuration file for your changes to take effect:
+4.  Reload the `pg_hba.conf` configuration file for your changes to take effect:
 
     ``` bash
     $ hawq stop -u
@@ -104,7 +104,7 @@ The following steps set the parameter values with the HAWQ utility `hawq config`
 
 ### <a id="ip142411"></a>To change the number of allowed connections 
 
-1.  Log into the HAWQ master host as the HAWQ administrator and source the file `$GPHOME/greenplum_path.sh`.
+1.  Log into the HAWQ master host as the HAWQ administrator and source the file `/usr/local/hawq/greenplum_path.sh`.
 2.  Set the value of the `max_connections` parameter. This `hawq config` command sets the value to 100 on all HAWQ instances.
 
     ``` bash
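    # sketch of the elided command; hawq config's -c/-v flags set a parameter cluster-wide
    $ hawq config -c max_connections -v 100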

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/1a7efeff/clientaccess/disable-kerberos.html.md.erb
----------------------------------------------------------------------
diff --git a/clientaccess/disable-kerberos.html.md.erb b/clientaccess/disable-kerberos.html.md.erb
index 8986b25..b5d7eeb 100644
--- a/clientaccess/disable-kerberos.html.md.erb
+++ b/clientaccess/disable-kerberos.html.md.erb
@@ -14,15 +14,12 @@ Follow these steps to disable Kerberos security for HAWQ and PXF for manual inst
         $ ssh hawq_master_fqdn
         ```
 
-    2.  Run the following commands to set environment variables:
+    2.  Run the following command to set up HAWQ environment variables:
 
         ``` bash
         $ source /usr/local/hawq/greenplum_path.sh
-        $ export MASTER_DATA_DIRECTORY = /gpsql
         ```
 
-        **Note:** Substitute the correct value of MASTER\_DATA\_DIRECTORY for your configuration.
-
     3.  Start HAWQ if necessary:
 
         ``` bash
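        # the start command is elided by this hunk; presumably:
        $ hawq start cluster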
@@ -35,15 +32,13 @@ Follow these steps to disable Kerberos security for HAWQ and PXF for manual inst
        $ hawq config --masteronly -c enable_secure_filesystem -v "off"
         ```
 
-        **Note:** Substitute the correct value of MASTER\_DATA\_DIRECTORY for your configuration.
-
     5.  Change the permission of the HAWQ HDFS data directory:
 
         ``` bash
         $ sudo -u hdfs hdfs dfs -chown -R gpadmin:gpadmin /hawq_data
         ```
 
-    6.  On the HAWQ master node and on all segment server nodes, edit the /usr/local/hawq/etc/hdfs-client.xml file to disable kerberos security. Comment or remove the following properties in each file:
+    6.  On the HAWQ master node and on all segment server nodes, edit the `/usr/local/hawq/etc/hdfs-client.xml` file to disable Kerberos security. Comment or remove the following properties in each file:
 
         ``` xml
         <!--
@@ -66,7 +61,7 @@ Follow these steps to disable Kerberos security for HAWQ and PXF for manual inst
         ```
 
 3.  Disable security for PXF:
-    1.  On each PXF node, edit the /etc/gphd/pxf/conf/pxf-site.xml to comment or remove the properties:
+    1.  On each PXF node, edit the `/etc/gphd/pxf/conf/pxf-site.xml` file to comment or remove the properties:
 
         ``` xml
         <!--

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/1a7efeff/clientaccess/kerberos.html.md.erb
----------------------------------------------------------------------
diff --git a/clientaccess/kerberos.html.md.erb b/clientaccess/kerberos.html.md.erb
index 5db0f3b..77d6998 100644
--- a/clientaccess/kerberos.html.md.erb
+++ b/clientaccess/kerberos.html.md.erb
@@ -4,9 +4,9 @@ title: Using Kerberos Authentication
 
 You can control access to HAWQ with a Kerberos authentication server.
 
-HAWQ supports the Generic Security Service Application Program Interface \(GSSAPI\) with Kerberos authentication. GSSAPI provides automatic authentication \(single sign-on\) for systems that support it. You specify the HAWQ users \(roles\) that require Kerberos authentication in the HAWQ configuration file pg\_hba.conf. The login fails if Kerberos authentication is not available when a role attempts to log in to HAWQ.
+HAWQ supports the Generic Security Service Application Program Interface \(GSSAPI\) with Kerberos authentication. GSSAPI provides automatic authentication \(single sign-on\) for systems that support it. You specify the HAWQ users \(roles\) that require Kerberos authentication in the HAWQ configuration file `pg_hba.conf`. The login fails if Kerberos authentication is not available when a role attempts to log in to HAWQ.
 
-Kerberos provides a secure, encrypted authentication service. It does not encrypt data exchanged between the client and database and provides no authorization services. To encrypt data exchanged over the network, you must use an SSL connection. To manage authorization for access to HAWQ databases and objects such as schemas and tables, you use settings in the pg\_hba.conf file and privileges given to HAWQ users and roles within the database. For information about managing authorization privileges, see [Managing Roles and Privileges](roles_privs.html).
+Kerberos provides a secure, encrypted authentication service. It does not encrypt data exchanged between the client and database and provides no authorization services. To encrypt data exchanged over the network, you must use an SSL connection. To manage authorization for access to HAWQ databases and objects such as schemas and tables, you use settings in the `pg_hba.conf` file and privileges given to HAWQ users and roles within the database. For information about managing authorization privileges, see [Managing Roles and Privileges](roles_privs.html).
 
 For more information about Kerberos, see [http://web.mit.edu/kerberos/](http://web.mit.edu/kerberos/).
 
@@ -54,7 +54,7 @@ Follow these steps to install and configure a Kerberos Key Distribution Center \
     sudo yum install krb5-libs krb5-server krb5-workstation
     ```
 
-2.  Edit the /etc/krb5.conf configuration file. The following example shows a Kerberos server with a default `KRB.EXAMPLE.COM` realm.
+2.  Edit the `/etc/krb5.conf` configuration file. The following example shows a Kerberos server with a default `KRB.EXAMPLE.COM` realm.
 
     ```
     [logging]
@@ -104,7 +104,7 @@ Follow these steps to install and configure a Kerberos Key Distribution Center \
     kdb5_util create -s
     ```
 
-    The `kdb5_util`create option creates the database to store keys for the Kerberos realms that are managed by this KDC server. The -s option creates a stash file. Without the stash file, every time the KDC server starts it requests a password.
+    The `kdb5_util` `create` option creates the database to store keys for the Kerberos realms that are managed by this KDC server. The `-s` option creates a stash file. Without the stash file, every time the KDC server starts it requests a password.
 
 4.  Add an administrative user to the KDC database with the `kadmin.local` utility. Because it does not itself depend on Kerberos authentication, the `kadmin.local` utility allows you to add an initial administrative user to the local Kerberos server. To add the user `gpadmin` as an administrative user to the KDC database, run the following command:
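
    The command itself falls outside this hunk; with standard `kadmin.local` syntax it is presumably:

    ```
    kadmin.local -q "addprinc gpadmin/admin"
    ```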
 
@@ -114,7 +114,7 @@ Follow these steps to install and configure a Kerberos Key Distribution Center \
 
     Most users do not need administrative access to the Kerberos server. They can use `kadmin` to manage their own principals \(for example, to change their own password\). For information about `kadmin`, see the [Kerberos documentation](http://web.mit.edu/kerberos/krb5-latest/doc/).
 
-5.  If needed, edit the /var/kerberos/krb5kdc/kadm5.acl file to grant the appropriate permissions to `gpadmin`.
+5.  If needed, edit the `/var/kerberos/krb5kdc/kadm5.acl` file to grant the appropriate permissions to `gpadmin`.
 6.  Start the Kerberos daemons:
 
     ```
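    # (sketch) on RHEL/CentOS, the KDC daemons are typically started with:
    /sbin/service krb5kdc start
    /sbin/service kadmin start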
@@ -183,7 +183,7 @@ Install the Kerberos client libraries on the HAWQ master and configure the Kerbe
     sudo kdestroy
     ```
 
-5.  Use the Kerberos utility `kinit` to request a ticket using the keytab file on the HAWQ master for `gpadmin/kerberos-gpdb@KRB.EXAMPLE.COM`. The -t option specifies the keytab file on the HAWQ master.
+5.  Use the Kerberos utility `kinit` to request a ticket using the keytab file on the HAWQ master for `gpadmin/kerberos-gpdb@KRB.EXAMPLE.COM`. The `-t` option specifies the keytab file on the HAWQ master.
 
     ```
     # kinit -k -t gpdb-kerberos.keytab gpadmin/kerberos-gpdb@KRB.EXAMPLE.COM
@@ -246,13 +246,13 @@ After you have set up Kerberos on the HAWQ master, you can configure HAWQ to use
     $ psql -U "adminuser/mdw.proddb" -h mdw.proddb
     ```
 
-    If the default user is `adminuser`, the pg\_ident.conf file and the pg\_hba.conf file can be configured so that the `adminuser` can log in to the database as the Kerberos principal `adminuser/mdw.proddb` without specifying the `-U` option:
+    If the default user is `adminuser`, the `pg_ident.conf` file and the `pg_hba.conf` file can be configured so that the `adminuser` can log in to the database as the Kerberos principal `adminuser/mdw.proddb` without specifying the `-U` option:
 
     ``` bash
     $ psql -h mdw.proddb
     ```
 
-    The following username map is defined in the HAWQ file `$MASTER_DATA_DIRECTORY/pg_ident.conf`:
+    The `pg_ident.conf` file defines the username map. This file is located in the HAWQ master data directory (identified by the `hawq_master_directory` property value in `hawq-site.xml`):
 
     ```
     # MAPNAME   SYSTEM-USERNAME        GP-USERNAME
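    # (sketch) a literal map entry letting the principal log in as adminuser:
    mymap   adminuser/mdw.proddb   adminuser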
@@ -283,7 +283,7 @@ Enable Kerberos-authenticated JDBC access to HAWQ.
 You can configure HAWQ to use Kerberos to run user-defined Java functions.
 
 1.  Ensure that Kerberos is installed and configured on the HAWQ master. See [Install and Configure the Kerberos Client](#topic6).
-2.  Create the file .java.login.config in the folder /home/gpadmin and add the following text to the file:
+2.  Create the file `.java.login.config` in the folder `/home/gpadmin` and add the following text to the file:
 
     ```
     pgjdbc {

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/1a7efeff/reference/HAWQEnvironmentVariables.html.md.erb
----------------------------------------------------------------------
diff --git a/reference/HAWQEnvironmentVariables.html.md.erb b/reference/HAWQEnvironmentVariables.html.md.erb
index beb28d3..8061781 100644
--- a/reference/HAWQEnvironmentVariables.html.md.erb
+++ b/reference/HAWQEnvironmentVariables.html.md.erb
@@ -41,14 +41,6 @@ export LD_LIBRARY_PATH
 
 The following are HAWQ environment variables. You may want to add the connection-related environment variables to your profile, for convenience. That way, you do not have to type so many options on the command line for client connections. Note that these environment variables should be set on the HAWQ master host only.
 
-### <a id="master_data_directory"></a>MASTER\_DATA\_DIRECTORY
-
-This variable is only needed for legacy compatibility. The master data directory is now set in hawq-site.xml, by using the `hawq config` command. If used, this variable should point to the directory created by the `hawq init` utility in the master data directory location. For example:
-
-``` pre
-MASTER_DATA_DIRECTORY=/data/master
-export MASTER_DATA_DIRECTORY
-```
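
With the variable gone, the master data directory is read from `hawq-site.xml` instead; a quick way to look it up (a sketch, assuming `hawq config -s` prints a single property):

```shell
$ hawq config -s hawq_master_directory
```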
 
 ### <a id="pgappname"></a>PGAPPNAME
 

