hawq-commits mailing list archives

From yo...@apache.org
Subject [36/57] [abbrv] [partial] incubator-hawq-docs git commit: HAWQ-1254 Fix/remove book branching on incubator-hawq-docs
Date Tue, 10 Jan 2017 23:54:27 GMT
http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/ddl/ddl-table.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/ddl/ddl-table.html.md.erb b/markdown/ddl/ddl-table.html.md.erb
new file mode 100644
index 0000000..bc4f0c4
--- /dev/null
+++ b/markdown/ddl/ddl-table.html.md.erb
@@ -0,0 +1,149 @@
+---
+title: Creating and Managing Tables
+---
+
+HAWQ tables are similar to tables in any relational database, except that table rows are distributed across the different segments in the system. When you create a table, you specify the table's distribution policy.
+
+## <a id="topic26"></a>Creating a Table 
+
+The `CREATE TABLE` command creates a table and defines its structure. When you create a table, you define:
+
+-   The columns of the table and their associated data types. See [Choosing Column Data Types](#topic27).
+-   Any table constraints to limit the data that a column or table can contain. See [Setting Table Constraints](#topic28).
+-   The distribution policy of the table, which determines how HAWQ divides data across the segments. See [Choosing the Table Distribution Policy](#topic34).
+-   The way the table is stored on disk.
+-   The table partitioning strategy for large tables, which specifies how the data should be divided. See [Partitioning Large Tables](../ddl/ddl-partition.html).
+
+### <a id="topic27"></a>Choosing Column Data Types 
+
+The data type of a column determines the types of data values the column can contain. Choose the data type that uses the least possible space but can still accommodate your data and that best constrains the data. For example, use character data types for strings, date or timestamp data types for dates, and numeric data types for numbers.
+
+There are no performance differences among the character data types `CHAR`, `VARCHAR`, and `TEXT` apart from the increased storage size when you use the blank-padded type. In most situations, use `TEXT` or `VARCHAR` rather than `CHAR`.
+
+Use the smallest numeric data type that will accommodate your numeric data and allow for future expansion. For example, using `BIGINT` for data that fits in `INT` or `SMALLINT` wastes storage space. If you expect that your data values will expand over time, consider that changing from a smaller datatype to a larger datatype after loading large amounts of data is costly. For example, if your current data values fit in a `SMALLINT` but it is likely that the values will expand, `INT` is the better long-term choice.
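+
+For example, a hypothetical table applying these guidelines (the table and column names are illustrative, not from the original text):
+
+``` sql
+=> CREATE TABLE customers
+     ( customer_id int,          -- INT rather than BIGINT when ids fit
+       name text,                -- TEXT instead of blank-padded CHAR
+       signup_date date,         -- a date type, not a character string
+       balance numeric(10,2) );  -- exact numeric for monetary values
+```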
+
+Use the same data types for columns that you plan to use in cross-table joins. When the data types are different, the database must convert one of them so that the data values can be compared correctly, which adds unnecessary overhead.
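+
+As a sketch (table and column names are hypothetical), declaring the join key with the same type in both tables avoids the conversion:
+
+``` sql
+=> CREATE TABLE invoices ( invoice_id int, customer_id int );
+=> CREATE TABLE payments ( payment_id int, customer_id int );  -- same type as invoices.customer_id
+-- the join compares int to int, so no implicit conversion is needed
+=> SELECT i.invoice_id, p.payment_id
+   FROM invoices i JOIN payments p ON i.customer_id = p.customer_id;
+```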
+
+HAWQ supports the Parquet columnar storage format, which can increase performance on large queries. Use Parquet tables for HAWQ internal tables.
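+
+For example, a Parquet table can be created with the append-only storage options (the table name is hypothetical; the storage options follow the HAWQ `CREATE TABLE` `WITH` clause):
+
+``` sql
+=> CREATE TABLE metrics (id int, value float)
+     WITH (APPENDONLY=true, ORIENTATION=parquet);
+```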
+
+### <a id="topic28"></a>Setting Table Constraints 
+
+You can define constraints to restrict the data in your tables. HAWQ support for constraints is the same as in PostgreSQL, with some limitations:
+
+-   `CHECK` constraints can refer only to the table on which they are defined.
+-   `FOREIGN KEY` constraints are allowed, but not enforced.
+-   Constraints that you define on partitioned tables apply to the partitioned table as a whole. You cannot define constraints on the individual parts of the table.
+
+#### <a id="topic29"></a>Check Constraints 
+
+Check constraints allow you to specify that the value in a certain column must satisfy a Boolean \(truth-value\) expression. For example, to require positive product prices:
+
+``` sql
+=> CREATE TABLE products
+     ( product_no integer,
+       name text,
+       price numeric CHECK (price > 0) );
+```
+
+#### <a id="topic30"></a>Not-Null Constraints 
+
+Not-null constraints specify that a column must not assume the null value. A not-null constraint is always written as a column constraint. For example:
+
+``` sql
+=> CREATE TABLE products
+     ( product_no integer NOT NULL,
+       name text NOT NULL,
+       price numeric );
+```
+
+#### <a id="topic33"></a>Foreign Keys 
+
+Foreign keys are not supported. You can declare them, but referential integrity is not enforced.
+
+Foreign key constraints specify that the values in a column or a group of columns must match the values appearing in some row of another table to maintain referential integrity between two related tables. Referential integrity checks cannot be enforced between the distributed table segments of a HAWQ database.
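+
+For example, a foreign key referencing the `products` table above is accepted syntactically, but HAWQ does not enforce it (the `orders` table is illustrative):
+
+``` sql
+=> CREATE TABLE orders
+     ( order_id integer,
+       product_no integer REFERENCES products (product_no),
+       quantity integer );
+-- rows whose product_no has no match in products are NOT rejected
+```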
+
+### <a id="topic34"></a>Choosing the Table Distribution Policy 
+
+All HAWQ tables are distributed. The default distribution policy is `DISTRIBUTED RANDOMLY` \(round-robin distribution\). However, when you create or alter a table, you can optionally specify `DISTRIBUTED BY` to distribute data according to a hash-based policy. In this case, the `bucketnum` attribute sets the number of hash buckets used by a hash-distributed table. Columns of geometric or user-defined data types are not eligible as HAWQ distribution key columns.
+
+Randomly distributed tables have benefits over hash-distributed tables. For example, after a cluster expansion, HAWQ's elasticity feature lets it automatically use the additional resources without redistributing the data; for extremely large tables, redistribution is very expensive. Data locality is also better for randomly distributed tables, especially after the underlying HDFS rebalances its data or recovers from DataNode failures, which is common in large clusters.
+
+However, hash-distributed tables can be faster than randomly distributed tables; for example, they can improve performance for TPC-H queries. Choose the distribution policy that best suits your application scenario. When you `CREATE TABLE`, you can also specify the `bucketnum` option, which determines the number of hash buckets used in creating a hash-distributed table or for PXF external table intermediate processing. The number of buckets also affects how many virtual segments are created when processing the data. The bucket number of a gpfdist external table is the number of gpfdist locations, and the bucket number of a command external table is set by `ON #num`. PXF external tables use the `default_hash_table_bucket_number` parameter to control virtual segments.
+
+HAWQ's elastic execution runtime is based on virtual segments, which are allocated on demand, based on the cost of the query. Each node uses one physical segment and a number of dynamically allocated virtual segments distributed to different hosts, thus simplifying performance tuning. Large queries use large numbers of virtual segments, while smaller queries use fewer virtual segments. Tables do not need to be redistributed when nodes are added or removed.
+
+In general, the more virtual segments are used, the faster the query executes. You can tune the `default_hash_table_bucket_number` and `hawq_rm_nvseg_perquery_limit` parameters to adjust performance by controlling the number of virtual segments used for a query. Be aware, however, that changing `default_hash_table_bucket_number` requires redistributing data, which can be costly, so it is better to set `default_hash_table_bucket_number` up front if you expect to need a larger number of virtual segments. You might still need to adjust the value after a cluster expansion, but take care not to exceed the number of virtual segments per query set in `hawq_rm_nvseg_perquery_limit`. Refer to the recommended guidelines for setting the value of `default_hash_table_bucket_number` later in this section.
+
+For random or gpfdist external tables, as well as user-defined functions, the value set in the `hawq_rm_nvseg_perquery_perseg_limit` parameter limits the number of virtual segments that are used for one segment for one query, to optimize query resources. Resetting this parameter is not recommended.
+
+Consider the following points when deciding on a table distribution policy.
+
+-   **Even Data Distribution** — For the best possible performance, all segments should contain equal portions of data. If the data is unbalanced or skewed, the segments with more data must work harder to perform their portion of the query processing.
+-   **Local and Distributed Operations** — Local operations are faster than distributed operations. Query processing is fastest if the work associated with join, sort, or aggregation operations is done locally, at the segment level. Work done at the system level requires distributing tuples across the segments, which is less efficient. When tables share a common distribution key, the work of joining or sorting on their shared distribution key columns is done locally. With a random distribution policy, local join operations are not an option.
+-   **Even Query Processing** — For best performance, all segments should handle an equal share of the query workload. Query workload can be skewed if a table's data distribution policy and the query predicates are not well matched. For example, suppose that a sales transactions table is distributed based on a column that contains corporate names \(the distribution key\), and the hashing algorithm distributes the data based on those values. If a predicate in a query references a single value from the distribution key, query processing runs on only one segment. This works if your query predicates usually select data on a criteria other than corporation name. For queries that use corporation name in their predicates, it's possible that only one segment instance will handle the query workload.
+
+HAWQ utilizes dynamic parallelism, which can affect the performance of a query execution significantly. Performance depends on the following factors:
+
+-   The size of a randomly distributed table.
+-   The `bucketnum` of a hash distributed table.
+-   Data locality.
+-   The values of `default_hash_table_bucket_number`, and `hawq_rm_nvseg_perquery_limit` \(including defaults and user-defined values\).
+
+For any specific query, the first three factors are fixed values, while the configuration parameters in the last item can be used to tune performance of the query execution. In querying a random table, the query resource load is related to the data size of the table, usually one virtual segment per HDFS block. As a result, querying a large table could use a large number of resources.
+
+The `bucketnum` for a hash table specifies the number of hash buckets to be used in creating virtual segments. A hash-distributed table is created with `default_hash_table_bucket_number` buckets. The default bucket number can be changed at the session level or in the `CREATE TABLE` DDL by using the `bucketnum` storage parameter.
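+
+A minimal sketch of both approaches (table names are hypothetical):
+
+``` sql
+-- session level: subsequent hash tables default to 16 buckets
+=> SET default_hash_table_bucket_number = 16;
+=> CREATE TABLE t1 (id int) DISTRIBUTED BY (id);
+
+-- per table: override the default in the DDL
+=> CREATE TABLE t2 (id int) WITH (bucketnum=4) DISTRIBUTED BY (id);
+```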
+
+In an Ambari-managed HAWQ cluster, the default bucket number \(`default_hash_table_bucket_number`\) is derived from the number of segment nodes. In command-line-managed HAWQ environments, you can use the `--bucket_number` option of `hawq init` to explicitly set `default_hash_table_bucket_number` during cluster initialization.
+
+**Note:** For best performance with large tables, the number of buckets should not exceed the value of the `default_hash_table_bucket_number` parameter. Small tables can use one segment node, `WITH bucketnum=1`. For larger tables, the `bucketnum` is set to a multiple of the number of segment nodes, for the best load balancing on different segment nodes. The elastic runtime will attempt to find the optimal number of buckets for the number of nodes being processed. Larger tables need more virtual segments, and hence use larger numbers of buckets.
+
+The following statement creates a table `sales` with 8 buckets, which would be similar to a hash-distributed table on 8 segments.
+
+``` sql
+=> CREATE TABLE sales(id int, profit float)  WITH (bucketnum=8) DISTRIBUTED BY (id);
+```
+
+There are four ways to create a new table from an origin table; the syntax for each is listed below.
+
+<table>
+  <tr>
+    <th></th>
+    <th>Syntax</th>
+  </tr>
+  <tr><td>INHERITS</td><td><pre><code>CREATE TABLE new_table INHERITS (origintable) [WITH(bucketnum=x)] <br/>[DISTRIBUTED BY col]</code></pre></td></tr>
+  <tr><td>LIKE</td><td><pre><code>CREATE TABLE new_table (LIKE origintable) [WITH(bucketnum=x)] <br/>[DISTRIBUTED BY col]</code></pre></td></tr>
+  <tr><td>AS</td><td><pre><code>CREATE TABLE new_table [WITH(bucketnum=x)] AS SUBQUERY [DISTRIBUTED BY col]</code></pre></td></tr>
+  <tr><td>SELECT INTO</td><td><pre><code>CREATE TABLE origintable [WITH(bucketnum=x)] [DISTRIBUTED BY col]; SELECT * <br/>INTO new_table FROM origintable;</code></pre></td></tr>
+</table>
+
+The optional `INHERITS` clause specifies a list of tables from which the new table automatically inherits all columns. Hash tables inherit the bucket number from their origin table unless otherwise specified: if the `WITH` clause specifies `bucketnum`, that value is used; if the distribution is specified by column, the new table inherits it; otherwise, the table uses the default distribution from `default_hash_table_bucket_number`.
+
+The `LIKE` clause specifies a table from which the new table automatically copies all column names, data types, not-null constraints, and distribution policy. If a `bucketnum` is specified, it will be copied. Otherwise, the table will use default distribution.
+
+For hash tables, `SELECT INTO` always uses random distribution.
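+
+As a sketch using the `sales` table created earlier, the `LIKE` form copies columns and distribution policy unless overridden:
+
+``` sql
+=> CREATE TABLE sales_history (LIKE sales);   -- copies columns and distribution
+=> CREATE TABLE sales_new (LIKE sales) WITH (bucketnum=4) DISTRIBUTED BY (id);
+```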
+
+#### <a id="topic_kjg_tqm_gv"></a>Declaring Distribution Keys 
+
+`CREATE TABLE`'s optional clause `DISTRIBUTED BY` specifies the distribution policy for a table. The default is a random distribution policy. You can also choose to distribute data according to a hash-based policy, where the `bucketnum` attribute sets the number of hash buckets used by a hash-distributed table. Hash-distributed tables are created with the number of hash buckets specified by the `default_hash_table_bucket_number` parameter.
+
+Policies for different application scenarios can be specified to optimize performance. The number of virtual segments used for query execution can be tuned using the `hawq_rm_nvseg_perquery_limit` and `hawq_rm_nvseg_perquery_perseg_limit` parameters, in connection with the `default_hash_table_bucket_number` parameter, which sets the default `bucketnum`. For more information, see the guidelines for virtual segments in the next section and in [Query Performance](../query/query-performance.html#topic38).
+
+#### <a id="topic_wff_mqm_gv"></a>Performance Tuning 
+
+Adjusting the values of the configuration parameters `default_hash_table_bucket_number` and `hawq_rm_nvseg_perquery_limit` can tune performance by controlling the number of virtual segments being used. In most circumstances, HAWQ's elastic runtime will dynamically allocate virtual segments to optimize performance, so further tuning should not be needed.
+
+Hash tables are created using the value specified in `default_hash_table_bucket_number`. Queries for hash tables use a fixed number of buckets, regardless of the amount of data present. Explicitly setting `default_hash_table_bucket_number` can be useful in managing resources. If you desire a larger or smaller number of hash buckets, set this value before you create tables. Resources are dynamically allocated to a multiple of the number of nodes. If you use `hawq init --bucket_number` to set the value of `default_hash_table_bucket_number` during cluster initialization or expansion, the value should not exceed the value of `hawq_rm_nvseg_perquery_limit`. This server parameter defines the maximum number of virtual segments that can be used for a query \(default = 512, with a maximum of 65535\). Modifying the value to greater than 1000 segments is not recommended.
+
+The following per-node guidelines apply to values for `default_hash_table_bucket_number`.
+
+|Number of Nodes|default\_hash\_table\_bucket\_number value|
+|---------------|------------------------------------------|
+|<= 85|6 \* \#nodes|
+|\> 85 and <= 102|5 \* \#nodes|
+|\> 102 and <= 128|4 \* \#nodes|
+|\> 128 and <= 170|3 \* \#nodes|
+|\> 170 and <= 256|2 \* \#nodes|
+|\> 256 and <= 512|1 \* \#nodes|
+|\> 512|512|
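+
+For example, on a hypothetical 100-node cluster, the guideline gives 5 \* 100 = 500 buckets, which could be set at initialization (the command form is a sketch based on the `--bucket_number` option of `hawq init` described above):
+
+``` shell
+$ hawq init cluster --bucket_number 500
+```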
+
+Reducing the value of `hawq_rm_nvseg_perquery_perseg_limit` can improve concurrency, and increasing it can increase the degree of parallelism. However, for some queries, increasing the degree of parallelism will not improve performance if the query has reached the limits set by the hardware; therefore, increasing the value of `hawq_rm_nvseg_perquery_perseg_limit` above the default is not recommended. Also, changing the value of `default_hash_table_bucket_number` after initializing a cluster means the hash table data must be redistributed. If you are expanding a cluster, you might wish to change this value, but be aware that retuning could adversely affect performance.

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/ddl/ddl-tablespace.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/ddl/ddl-tablespace.html.md.erb b/markdown/ddl/ddl-tablespace.html.md.erb
new file mode 100644
index 0000000..8720665
--- /dev/null
+++ b/markdown/ddl/ddl-tablespace.html.md.erb
@@ -0,0 +1,154 @@
+---
+title: Creating and Managing Tablespaces
+---
+
+Tablespaces allow database administrators to have multiple file systems per machine and decide how to best use physical storage to store database objects. They are named locations within a filespace in which you can create objects. Tablespaces allow you to assign different storage for frequently and infrequently used database objects or to control the I/O performance on certain database objects. For example, place frequently-used tables on file systems that use high performance solid-state drives \(SSD\), and place other tables on standard hard drives.
+
+A tablespace requires a file system location to store its database files. In HAWQ, the master and each segment require a distinct storage location. The collection of file system locations for all components in a HAWQ system is a *filespace*. Filespaces can be used by one or more tablespaces.
+
+## <a id="topic10"></a>Creating a Filespace 
+
+A filespace sets aside storage for your HAWQ system. A filespace is a symbolic storage identifier that maps onto a set of locations in your HAWQ hosts' file systems. To create a filespace, prepare the logical file systems on all of your HAWQ hosts, then use the `hawq filespace` utility to define the filespace. You must be a database superuser to create a filespace.
+
+**Note:** HAWQ is not directly aware of the file system boundaries on your underlying systems. It stores files in the directories that you tell it to use. You cannot control the location on disk of individual files within a logical file system.
+
+### <a id="im178954"></a>To create a filespace using hawq filespace 
+
+1.  Log in to the HAWQ master as the `gpadmin` user.
+
+    ``` shell
+    $ su - gpadmin
+    ```
+
+2.  Create a filespace configuration file:
+
+    ``` shell
+    $ hawq filespace -o hawqfilespace_config
+    ```
+
+3.  At the prompt, enter a name for the filespace, a master file system location, and the primary segment file system locations. For example:
+
+    ``` shell
+    $ hawq filespace -o hawqfilespace_config
+    ```
+    ``` pre
+    Enter a name for this filespace
+    > testfs
+    Enter replica num for filespace. If 0, default replica num is used (default=3)
+    > 
+
+    Please specify the DFS location for the filespace (for example: localhost:9000/fs)
+    location> localhost:8020/fs        
+    20160409:16:53:25:028082 hawqfilespace:gpadmin:gpadmin-[INFO]:-[created]
+    20160409:16:53:25:028082 hawqfilespace:gpadmin:gpadmin-[INFO]:-
+    To add this filespace to the database please run the command:
+       hawqfilespace --config /Users/gpadmin/curwork/git/hawq/hawqfilespace_config
+    ```
+       
+    ``` shell
+    $ cat /Users/gpadmin/curwork/git/hawq/hawqfilespace_config
+    ```
+    ``` pre
+    filespace:testfs
+    fsreplica:3
+    dfs_url::localhost:8020/fs
+    ```
+    ``` shell
+    $ hawq filespace --config /Users/gpadmin/curwork/git/hawq/hawqfilespace_config
+    ```
+    ``` pre
+    Reading Configuration file: '/Users/gpadmin/curwork/git/hawq/hawqfilespace_config'
+
+    CREATE FILESPACE testfs ON hdfs 
+    ('localhost:8020/fs/testfs') WITH (NUMREPLICA = 3);
+    20160409:16:57:56:028104 hawqfilespace:gpadmin:gpadmin-[INFO]:-Connecting to database
+    20160409:16:57:56:028104 hawqfilespace:gpadmin:gpadmin-[INFO]:-Filespace "testfs" successfully created
+
+    ```
+
+
+4.  `hawq filespace` creates a configuration file. Examine the file to verify that the hawq filespace configuration is correct. The following is a sample configuration file:
+
+    ```
+    filespace:fastdisk
+    mdw:1:/hawq_master_filespc/gp-1
+    sdw1:2:/hawq_pri_filespc/gp0
+    sdw2:3:/hawq_pri_filespc/gp1
+    ```
+
+5.  Run hawq filespace again to create the filespace based on the configuration file:
+
+    ``` shell
+    $ hawq filespace -c hawqfilespace_config
+    ```
+
+
+## <a id="topic13"></a>Creating a Tablespace 
+
+After you create a filespace, use the `CREATE TABLESPACE` command to define a tablespace that uses that filespace. For example:
+
+``` sql
+=# CREATE TABLESPACE fastspace FILESPACE fastdisk;
+```
+
+Database superusers define tablespaces and grant access to database users with the `GRANT CREATE` command. For example:
+
+``` sql
+=# GRANT CREATE ON TABLESPACE fastspace TO admin;
+```
+
+## <a id="topic14"></a>Using a Tablespace to Store Database Objects 
+
+Users with the `CREATE` privilege on a tablespace can create database objects in that tablespace, such as tables, indexes, and databases. The command is:
+
+``` sql
+CREATE TABLE tablename(options) TABLESPACE spacename
+```
+
+For example, the following command creates a table in the tablespace *space1*:
+
+``` sql
+CREATE TABLE foo(i int) TABLESPACE space1;
+```
+
+You can also use the `default_tablespace` parameter to specify the default tablespace for `CREATE TABLE` and `CREATE INDEX` commands that do not specify a tablespace:
+
+``` sql
+SET default_tablespace = space1;
+CREATE TABLE foo(i int);
+```
+
+The tablespace associated with a database stores that database's system catalogs and temporary files created by server processes using that database, and it is the default tablespace for tables and indexes created within the database when no `TABLESPACE` is specified. If you do not specify a tablespace when you create a database, the database uses the same tablespace as its template database.
+
+You can use a tablespace from any database if you have appropriate privileges.
+
+## <a id="topic15"></a>Viewing Existing Tablespaces and Filespaces 
+
+Every HAWQ system has the following default tablespaces.
+
+-   `pg_global` for shared system catalogs.
+-   `pg_default`, the default tablespace. Used by the *template1* and *template0* databases.
+
+These tablespaces use the system default filespace, `pg_system`, the data directory location created at system initialization.
+
+To see filespace information, look in the *pg\_filespace* and *pg\_filespace\_entry* catalog tables. You can join these tables with *pg\_tablespace* to see the full definition of a tablespace. For example:
+
+``` sql
+=# SELECT spcname AS tblspc, fsname AS filespc,
+          fsedbid AS seg_dbid, fselocation AS datadir
+   FROM   pg_tablespace pgts, pg_filespace pgfs,
+          pg_filespace_entry pgfse
+   WHERE  pgts.spcfsoid=pgfse.fsefsoid
+          AND pgfse.fsefsoid=pgfs.oid
+   ORDER BY tblspc, seg_dbid;
+```
+
+## <a id="topic16"></a>Dropping Tablespaces and Filespaces 
+
+To drop a tablespace, you must be the tablespace owner or a superuser. You cannot drop a tablespace until all objects in all databases using the tablespace are removed.
+
+Only a superuser can drop a filespace. A filespace cannot be dropped until all tablespaces using that filespace are removed.
+
+The `DROP TABLESPACE` command removes an empty tablespace.
+
+The `DROP FILESPACE` command removes an empty filespace.
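+
+Continuing the earlier examples, remove the tablespace first and then its filespace:
+
+``` sql
+=# DROP TABLESPACE fastspace;
+=# DROP FILESPACE fastdisk;
+```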

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/ddl/ddl-view.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/ddl/ddl-view.html.md.erb b/markdown/ddl/ddl-view.html.md.erb
new file mode 100644
index 0000000..35da41e
--- /dev/null
+++ b/markdown/ddl/ddl-view.html.md.erb
@@ -0,0 +1,25 @@
+---
+title: Creating and Managing Views
+---
+
+Views enable you to save frequently used or complex queries, then access them in a `SELECT` statement as if they were a table. A view is not physically materialized on disk: the query runs as a subquery when you access the view.
+
+If a subquery is associated with a single query, consider using the `WITH` clause of the `SELECT` command instead of creating a seldom-used view.
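+
+For example, the query used for the `comedies` view in the next example can instead be expressed as a one-off `WITH` clause:
+
+``` sql
+WITH comedies AS (
+    SELECT * FROM films WHERE kind = 'comedy'
+)
+SELECT * FROM comedies;
+```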
+
+## <a id="topic101"></a>Creating Views 
+
+The `CREATE VIEW` command defines a view of a query. For example:
+
+``` sql
+CREATE VIEW comedies AS SELECT * FROM films WHERE kind = 'comedy';
+```
+
+Views ignore `ORDER BY` and `SORT` operations stored in the view definition.
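+
+To get ordered results, apply `ORDER BY` when you query the view (this sketch assumes `films`, and therefore `comedies`, has a `title` column; the column name is illustrative):
+
+``` sql
+SELECT * FROM comedies ORDER BY title;
+```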
+
+## <a id="topic102"></a>Dropping Views 
+
+The `DROP VIEW` command removes a view. For example:
+
+``` sql
+DROP VIEW topten;
+```

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/ddl/ddl.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/ddl/ddl.html.md.erb b/markdown/ddl/ddl.html.md.erb
new file mode 100644
index 0000000..7873fe7
--- /dev/null
+++ b/markdown/ddl/ddl.html.md.erb
@@ -0,0 +1,19 @@
+---
+title: Defining Database Objects
+---
+
+This section covers data definition language \(DDL\) in HAWQ and how to create and manage database objects.
+
+Creating objects in a HAWQ database involves making up-front choices about data distribution, storage options, data loading, and other HAWQ features that will affect the ongoing performance of your database system. Understanding the available options and how the database will be used will help you make the right decisions.
+
+Most of the advanced HAWQ features are enabled with extensions to the SQL `CREATE` DDL statements.
+
+This section contains the topics:
+
+*  <a class="subnav" href="./ddl-database.html">Creating and Managing Databases</a>
+*  <a class="subnav" href="./ddl-tablespace.html">Creating and Managing Tablespaces</a>
+*  <a class="subnav" href="./ddl-schema.html">Creating and Managing Schemas</a>
+*  <a class="subnav" href="./ddl-table.html">Creating and Managing Tables</a>
+*  <a class="subnav" href="./ddl-storage.html">Table Storage Model and Distribution Policy</a>
+*  <a class="subnav" href="./ddl-partition.html">Partitioning Large Tables</a>
+*  <a class="subnav" href="./ddl-view.html">Creating and Managing Views</a>

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/images/02-pipeline.png
----------------------------------------------------------------------
diff --git a/markdown/images/02-pipeline.png b/markdown/images/02-pipeline.png
new file mode 100644
index 0000000..26fec1b
Binary files /dev/null and b/markdown/images/02-pipeline.png differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/images/03-gpload-files.jpg
----------------------------------------------------------------------
diff --git a/markdown/images/03-gpload-files.jpg b/markdown/images/03-gpload-files.jpg
new file mode 100644
index 0000000..d50435f
Binary files /dev/null and b/markdown/images/03-gpload-files.jpg differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/images/basic_query_flow.png
----------------------------------------------------------------------
diff --git a/markdown/images/basic_query_flow.png b/markdown/images/basic_query_flow.png
new file mode 100644
index 0000000..59172a2
Binary files /dev/null and b/markdown/images/basic_query_flow.png differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/images/ext-tables-xml.png
----------------------------------------------------------------------
diff --git a/markdown/images/ext-tables-xml.png b/markdown/images/ext-tables-xml.png
new file mode 100644
index 0000000..f208828
Binary files /dev/null and b/markdown/images/ext-tables-xml.png differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/images/ext_tables.jpg
----------------------------------------------------------------------
diff --git a/markdown/images/ext_tables.jpg b/markdown/images/ext_tables.jpg
new file mode 100644
index 0000000..d5a0940
Binary files /dev/null and b/markdown/images/ext_tables.jpg differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/images/ext_tables_multinic.jpg
----------------------------------------------------------------------
diff --git a/markdown/images/ext_tables_multinic.jpg b/markdown/images/ext_tables_multinic.jpg
new file mode 100644
index 0000000..fcf09c4
Binary files /dev/null and b/markdown/images/ext_tables_multinic.jpg differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/images/gangs.jpg
----------------------------------------------------------------------
diff --git a/markdown/images/gangs.jpg b/markdown/images/gangs.jpg
new file mode 100644
index 0000000..0d14585
Binary files /dev/null and b/markdown/images/gangs.jpg differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/images/gporca.png
----------------------------------------------------------------------
diff --git a/markdown/images/gporca.png b/markdown/images/gporca.png
new file mode 100644
index 0000000..2909443
Binary files /dev/null and b/markdown/images/gporca.png differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/images/hawq_hcatalog.png
----------------------------------------------------------------------
diff --git a/markdown/images/hawq_hcatalog.png b/markdown/images/hawq_hcatalog.png
new file mode 100644
index 0000000..35b74c3
Binary files /dev/null and b/markdown/images/hawq_hcatalog.png differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/images/slice_plan.jpg
----------------------------------------------------------------------
diff --git a/markdown/images/slice_plan.jpg b/markdown/images/slice_plan.jpg
new file mode 100644
index 0000000..ad8da83
Binary files /dev/null and b/markdown/images/slice_plan.jpg differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/install/aws-config.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/install/aws-config.html.md.erb b/markdown/install/aws-config.html.md.erb
new file mode 100644
index 0000000..21cadf5
--- /dev/null
+++ b/markdown/install/aws-config.html.md.erb
@@ -0,0 +1,123 @@
+---
+title: Amazon EC2 Configuration
+---
+
+Amazon Elastic Compute Cloud (EC2) is a service provided by Amazon Web Services (AWS).  You can install and configure HAWQ on virtual servers provided by Amazon EC2. The following information describes some considerations when deploying a HAWQ cluster in an Amazon EC2 environment.
+
+## <a id="topic_wqv_yfx_y5"></a>About Amazon EC2 
+
+Amazon EC2 can be used to launch as many virtual servers as you need, configure security and networking, and manage storage. An EC2 *instance* is a virtual server in the AWS cloud virtual computing environment.
+
+EC2 instances are managed by AWS. AWS isolates your EC2 instances from other users in a virtual private cloud (VPC) and lets you control access to the instances. You can configure instance features such as operating system, network connectivity (network ports and protocols, IP addresses), access to the Internet, and size and type of disk storage. 
+
+For information about Amazon EC2, see the [EC2 User Guide](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html).
+
+## <a id="topic_nhk_df4_2v"></a>Create and Launch HAWQ Instances
+
+Use the *Amazon EC2 Console* to launch instances and configure, start, stop, and terminate (delete) virtual servers. When you launch a HAWQ instance, you select and configure key attributes via the EC2 Console.
+
+
+### <a id="topic_amitype"></a>Choose AMI Type
+
+An Amazon Machine Image (AMI) is a template that contains a software configuration including the operating system, application server, and applications that best suit your purpose. When configuring a HAWQ virtual instance, we recommend you use a *hardware virtualized* AMI running 64-bit Red Hat Enterprise Linux version 6.4 or 6.5, or 64-bit CentOS 6.4 or 6.5. Obtain the licenses and instances directly from the OS provider.
+
+### <a id="topic_selcfgstorage"></a>Consider Storage
+EC2 instances can be launched as either Elastic Block Store (EBS)-backed or instance store-backed.  
+
+Instance store-backed storage generally performs better than EBS and is recommended for HAWQ's large data workloads. SSD (solid state) instance store is preferred over magnetic drives.
+
+**Note:** EC2 *instance store* provides temporary block-level storage. This storage is located on disks that are physically attached to the host computer. While instance store provides high performance, powering off the instance causes data loss. Soft reboots preserve instance store data. 
+     
+Virtual devices for instance store volumes on HAWQ EC2 instances are named `ephemeralN` (where *N* varies by instance type). CentOS instance store block devices are named `/dev/xvdletter` (where *letter* is a lowercase letter of the alphabet).
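+
+A minimal sketch of preparing an instance store volume for HAWQ data directories (assumptions: the volume appears as `/dev/xvdb` and `/data1` is the desired mount point; actual device names vary by instance type and AMI, so list devices first):
+
+```shell
+# List attached block devices to identify the instance store volumes
+lsblk
+
+# Format and mount one instance store volume (run as root)
+mkfs.ext4 /dev/xvdb
+mkdir -p /data1
+mount /dev/xvdb /data1
+```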
+
+### <a id="topic_cfgplacegrp"></a>Configure Placement Group 
+
+A placement group is a logical grouping of instances within a single availability zone that together participate in a low-latency, 10 Gbps network.  Your HAWQ master and segment cluster instances should support enhanced networking and reside in a single placement group (and subnet) for optimal network performance.  
+
+If your Ambari node is not a DataNode, locating the Ambari node instance in a subnet separate from the HAWQ master/segment placement group enables you to manage multiple HAWQ clusters from the single Ambari instance.
+
+Amazon recommends that you use the same instance type for all instances in the placement group and that you launch all instances within the placement group at the same time.
+
+Membership in a placement group has implications for your HAWQ cluster. Specifically, growing the cluster beyond its current capacity may require shutting down all HAWQ instances in the current placement group and restarting them in a new placement group. Instance store volumes are lost in this scenario.
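+
+For example, a placement group can be created and used at launch with the AWS CLI (a sketch; the group name, AMI ID, key name, and subnet ID are placeholders):
+
+```shell
+# Create a cluster placement group for the HAWQ nodes
+aws ec2 create-placement-group --group-name hawq-cluster --strategy cluster
+
+# Launch an instance into the placement group
+aws ec2 run-instances --image-id ami-12345678 --instance-type d2.4xlarge \
+    --key-name my-test --subnet-id subnet-12345678 \
+    --placement GroupName=hawq-cluster
+```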
+
+### <a id="topic_selinsttype"></a>Select EC2 Instance Type
+
+An EC2 instance type is a specific combination of CPU, memory, default storage, and networking capacity.  
+
+Several instance store-backed EC2 instance types have shown acceptable performance for HAWQ nodes in development and production environments: 
+
+| Instance Type  | Env | vCPUs | Memory (GB) | Disk Capacity (GB) | Storage Type |
+|-------|-----|------|--------|----------|--------|
+| cc2.8xlarge  | Dev | 32 | 60.5 | 4 x 840 | HDD |
+| d2.2xlarge  | Dev | 8 | 60 | 6 x 2000 | HDD |
+| d2.4xlarge  | Dev/QA | 16 | 122 | 12 x 2000 | HDD |
+| i2.8xlarge  | Prod | 32 | 244 | 8 x 800 | SSD |
+| hs1.8xlarge  | Prod | 16 | 117 | 24 x 2000 | HDD |
+| d2.8xlarge  | Prod | 36 | 244 | 24 x 2000 | HDD |
+ 
+For optimal network performance, the chosen HAWQ instance type should support EC2 enhanced networking. Enhanced networking results in higher performance, lower latency, and lower jitter. Refer to [Enhanced Networking on Linux Instances](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking.html) for detailed information on enabling enhanced networking in your instances.
+
+All instance types identified in the table above support enhanced networking.
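+
+You can verify whether enhanced networking is enabled on a running instance with the AWS CLI (a sketch; the instance ID is a placeholder, and the relevant attribute depends on which enhanced networking driver the instance type uses):
+
+```shell
+# Check SR-IOV (ixgbevf) enhanced networking support for the instance
+aws ec2 describe-instance-attribute --instance-id i-1234567890abcdef0 \
+    --attribute sriovNetSupport
+```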
+
+### <a id="topic_cfgnetw"></a>Configure Networking 
+
+Your HAWQ cluster instances should be in a single VPC and on the same subnet. Instances are always assigned a VPC internal IP address. This internal IP address should be used for HAWQ communication between hosts. You can also use the internal IP address to access an instance from another instance within the HAWQ VPC.
+
+You may choose to locate your Ambari node on a separate subnet in the VPC. Both a public IP address for the instance and an Internet gateway configured for the EC2 VPC are required to access the Ambari instance from an external source and for the instance to access the Internet. 
+
+Ensure your Ambari and HAWQ master instances are each assigned a public IP address for external and internet access. We recommend you also assign an Elastic IP Address to the HAWQ master instance.
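+
+An Elastic IP address can be allocated and associated with the HAWQ master instance via the AWS CLI (a sketch; the instance ID and allocation ID are placeholders):
+
+```shell
+# Allocate a VPC Elastic IP address and note the returned AllocationId
+aws ec2 allocate-address --domain vpc
+
+# Associate the address with the HAWQ master instance
+aws ec2 associate-address --instance-id i-1234567890abcdef0 \
+    --allocation-id eipalloc-12345678
+```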
+
+
+### <a id="topic_cfgsecgrp"></a>Configure Security Groups 
+
+A security group is a set of rules that control network traffic to and from your HAWQ instance.  One or more rules may be associated with a security group, and one or more security groups may be associated with an instance.
+
+To configure HAWQ communication between nodes in the HAWQ cluster, include and open the following ports in the appropriate security group for the HAWQ master and segment nodes:
+
+| Port  | Application |
+|-------|-------------------------------------|
+| 22    | ssh - secure connect to other hosts |
+
+To allow access to/from a source external to the Ambari management node, include and open the following ports in an appropriate security group for your Ambari node:
+
+| Port  | Application |
+|-------|-------------------------------------|
+| 22    | ssh - secure connect to other hosts |
+| 8080  | Ambari - HAWQ admin/config web console |  
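+
+The rules above can be added with the AWS CLI (a sketch; the security group IDs and CIDR ranges are placeholders that you should restrict to your own networks):
+
+```shell
+# Allow ssh between cluster nodes (intra-VPC traffic only)
+aws ec2 authorize-security-group-ingress --group-id sg-12345678 \
+    --protocol tcp --port 22 --cidr 10.0.0.0/16
+
+# Allow external access to the Ambari web console
+aws ec2 authorize-security-group-ingress --group-id sg-87654321 \
+    --protocol tcp --port 8080 --cidr 203.0.113.0/24
+```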
+
+
+### <a id="topic_cfgkeypair"></a>Generate Key Pair 
+AWS uses public-key cryptography to secure the login information for your instance. You use the EC2 console to generate and name a key pair when you launch your instance.  
+
+A key pair for an EC2 instance consists of a *public key* that AWS stores, and a *private key file* that you maintain. Together, they allow you to connect to your instance securely. The private key file name typically has a `.pem` suffix.
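+
+A key pair can also be generated with the AWS CLI rather than the EC2 console (a sketch; the key name is a placeholder):
+
+```shell
+# Create the key pair and save the private key locally
+aws ec2 create-key-pair --key-name my-test \
+    --query 'KeyMaterial' --output text > my-test.pem
+
+# Restrict permissions so ssh will accept the key file
+chmod 400 my-test.pem
+```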
+
+This example logs in to an EC2 instance from an external location as user `user1`, using the private key file `my-test.pem`. In this example, the instance is configured with the public IP address `192.0.2.0`, and the private key file resides in the current directory.
+
+```shell
+$ ssh -i my-test.pem user1@192.0.2.0
+```
+
+## <a id="topic_mj4_524_2v"></a>Additional HAWQ Considerations 
+
+After launching your HAWQ instance, you will connect to and configure the instance. The *Instances* page of the EC2 Console lists the running instances and their associated network access information.
+
+Before installing HAWQ, set up the EC2 instances as you would local host server machines. Configure the host operating system, configure host network information (for example, update the `/etc/hosts` file), set operating system parameters, and install operating system packages. For information about how to prepare your operating system environment for HAWQ, see [Apache HAWQ System Requirements](../requirements/system-requirements.html) and [Select HAWQ Host Machines](../install/select-hosts.html).
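+
+For example, each node's `/etc/hosts` file might map the VPC internal IP addresses of all cluster hosts to their host names (a hypothetical sketch; the addresses and host names are placeholders):
+
+```shell
+# Append cluster host entries (run as root on every node)
+cat >> /etc/hosts <<'EOF'
+10.0.1.10   hawq-master
+10.0.1.11   hawq-standby
+10.0.1.12   hawq-segment1
+10.0.1.13   hawq-segment2
+EOF
+```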
+
+### <a id="topic_pwdlessssh_cc"></a>Passwordless SSH Configuration 
+
+HAWQ hosts will be configured during the installation process to use passwordless SSH for intra-cluster communications. Temporary password-based authentication must be enabled on each HAWQ host in preparation for this configuration. Password authentication is typically disabled by default in cloud images. Update the cloud configuration in `/etc/cloud/cloud.cfg` to enable password authentication in your AMI(s). Set `ssh_pwauth: True` in this file. If desired, disable password authentication after HAWQ installation by setting the property back to `False`.
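+
+A minimal sketch of the edit (assumes the file already contains an `ssh_pwauth` line; run as root on each host):
+
+```shell
+# Enable password authentication in the cloud-init configuration
+sed -i 's/^ssh_pwauth:.*/ssh_pwauth: True/' /etc/cloud/cloud.cfg
+
+# Confirm the change
+grep '^ssh_pwauth:' /etc/cloud/cloud.cfg
+```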
+  
+## <a id="topic_hgz_zwy_bv"></a>References 
+
+Links to related Amazon Web Services and EC2 features and information.
+
+- [Amazon Web Services](https://aws.amazon.com)
+- [Amazon Machine Image \(AMI\)](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html)
+- [EC2 Instance Store](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html)
+- [Elastic Block Store](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSOptimized.html)
+- [EC2 Key Pairs](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html)
+- [Elastic IP Address](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/elastic-ip-addresses-eip.html)
+- [Enhanced Networking on Linux Instances](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking.html)
+- [Internet Gateways](http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Internet_Gateway.html)
+- [Subnet Public IP Addressing](http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-ip-addressing.html#subnet-public-ip)
+- [Virtual Private Cloud](http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Introduction.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/install/select-hosts.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/install/select-hosts.html.md.erb b/markdown/install/select-hosts.html.md.erb
new file mode 100644
index 0000000..ecbe0b5
--- /dev/null
+++ b/markdown/install/select-hosts.html.md.erb
@@ -0,0 +1,19 @@
+---
+title: Select HAWQ Host Machines
+---
+
+Before you begin to install HAWQ, follow these steps to select and prepare the host machines.
+
+Complete this procedure for all HAWQ deployments:
+
+1.  **Choose the host machines that will host a HAWQ segment.** Keep in mind these restrictions and requirements:
+    -   Each host must meet the system requirements for the version of HAWQ you are installing.
+    -   Each HAWQ segment must be co-located on a host that runs an HDFS DataNode.
+    -   The HAWQ master segment and standby master segment must be hosted on separate machines.
+2.  **Choose the host machines that will run PXF.** Keep in mind these restrictions and requirements:
+    -   PXF must be installed on the HDFS NameNode *and* on all HDFS DataNodes.
+    -   If you have configured Hadoop with high availability, PXF must also be installed on all HDFS nodes, including every node that runs a NameNode service.
+    -   If you want to use PXF with HBase or Hive, you must first install the HBase client \(hbase-client\) and/or Hive client \(hive-client\) on each machine where you intend to install PXF. See the [HDP installation documentation](https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.0/index.html) for more information.
+3.  **Verify that required ports on all machines are unused.** By default, a HAWQ master or standby master service configuration uses port 5432. Hosts that run other PostgreSQL instances cannot be used to run a default HAWQ master or standby service configuration because the default PostgreSQL port \(5432\) conflicts with the default HAWQ port. You must either change the default port configuration of the running PostgreSQL instance or change the HAWQ master port setting during the HAWQ service installation to avoid port conflicts.
+    
+    **Note:** The Ambari server node uses PostgreSQL as the default metadata database. The Hive Metastore uses MySQL as the default metadata database.
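+
+A quick way to check whether a host's port 5432 is already in use (a sketch; assumes the iproute2 `ss` utility is available — on older systems, use `netstat -lnt` instead):
+
+```shell
+# List TCP listeners on port 5432; any output row indicates a conflicting
+# PostgreSQL (or other) service already bound to the default HAWQ port
+ss -lnt '( sport = :5432 )'
+```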
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/mdimages/02-pipeline.png
----------------------------------------------------------------------
diff --git a/markdown/mdimages/02-pipeline.png b/markdown/mdimages/02-pipeline.png
new file mode 100644
index 0000000..26fec1b
Binary files /dev/null and b/markdown/mdimages/02-pipeline.png differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/mdimages/03-gpload-files.jpg
----------------------------------------------------------------------
diff --git a/markdown/mdimages/03-gpload-files.jpg b/markdown/mdimages/03-gpload-files.jpg
new file mode 100644
index 0000000..d50435f
Binary files /dev/null and b/markdown/mdimages/03-gpload-files.jpg differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/mdimages/1-assign-masters.tiff
----------------------------------------------------------------------
diff --git a/markdown/mdimages/1-assign-masters.tiff b/markdown/mdimages/1-assign-masters.tiff
new file mode 100644
index 0000000..b5c4cb4
Binary files /dev/null and b/markdown/mdimages/1-assign-masters.tiff differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/mdimages/1-choose-services.tiff
----------------------------------------------------------------------
diff --git a/markdown/mdimages/1-choose-services.tiff b/markdown/mdimages/1-choose-services.tiff
new file mode 100644
index 0000000..d21b706
Binary files /dev/null and b/markdown/mdimages/1-choose-services.tiff differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/mdimages/3-assign-slaves-and-clients.tiff
----------------------------------------------------------------------
diff --git a/markdown/mdimages/3-assign-slaves-and-clients.tiff b/markdown/mdimages/3-assign-slaves-and-clients.tiff
new file mode 100644
index 0000000..93ea3bd
Binary files /dev/null and b/markdown/mdimages/3-assign-slaves-and-clients.tiff differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/mdimages/4-customize-services-hawq.tiff
----------------------------------------------------------------------
diff --git a/markdown/mdimages/4-customize-services-hawq.tiff b/markdown/mdimages/4-customize-services-hawq.tiff
new file mode 100644
index 0000000..c6bfee8
Binary files /dev/null and b/markdown/mdimages/4-customize-services-hawq.tiff differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/mdimages/5-customize-services-pxf.tiff
----------------------------------------------------------------------
diff --git a/markdown/mdimages/5-customize-services-pxf.tiff b/markdown/mdimages/5-customize-services-pxf.tiff
new file mode 100644
index 0000000..3812aa1
Binary files /dev/null and b/markdown/mdimages/5-customize-services-pxf.tiff differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/mdimages/6-review.tiff
----------------------------------------------------------------------
diff --git a/markdown/mdimages/6-review.tiff b/markdown/mdimages/6-review.tiff
new file mode 100644
index 0000000..be7debb
Binary files /dev/null and b/markdown/mdimages/6-review.tiff differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/mdimages/7-install-start-test.tiff
----------------------------------------------------------------------
diff --git a/markdown/mdimages/7-install-start-test.tiff b/markdown/mdimages/7-install-start-test.tiff
new file mode 100644
index 0000000..b556e9a
Binary files /dev/null and b/markdown/mdimages/7-install-start-test.tiff differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/mdimages/ext-tables-xml.png
----------------------------------------------------------------------
diff --git a/markdown/mdimages/ext-tables-xml.png b/markdown/mdimages/ext-tables-xml.png
new file mode 100644
index 0000000..f208828
Binary files /dev/null and b/markdown/mdimages/ext-tables-xml.png differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/mdimages/ext_tables.jpg
----------------------------------------------------------------------
diff --git a/markdown/mdimages/ext_tables.jpg b/markdown/mdimages/ext_tables.jpg
new file mode 100644
index 0000000..d5a0940
Binary files /dev/null and b/markdown/mdimages/ext_tables.jpg differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/mdimages/ext_tables_multinic.jpg
----------------------------------------------------------------------
diff --git a/markdown/mdimages/ext_tables_multinic.jpg b/markdown/mdimages/ext_tables_multinic.jpg
new file mode 100644
index 0000000..fcf09c4
Binary files /dev/null and b/markdown/mdimages/ext_tables_multinic.jpg differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/mdimages/gangs.jpg
----------------------------------------------------------------------
diff --git a/markdown/mdimages/gangs.jpg b/markdown/mdimages/gangs.jpg
new file mode 100644
index 0000000..0d14585
Binary files /dev/null and b/markdown/mdimages/gangs.jpg differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/mdimages/gp_orca_fallback.png
----------------------------------------------------------------------
diff --git a/markdown/mdimages/gp_orca_fallback.png b/markdown/mdimages/gp_orca_fallback.png
new file mode 100644
index 0000000..000a6af
Binary files /dev/null and b/markdown/mdimages/gp_orca_fallback.png differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/mdimages/gpfdist_instances.png
----------------------------------------------------------------------
diff --git a/markdown/mdimages/gpfdist_instances.png b/markdown/mdimages/gpfdist_instances.png
new file mode 100644
index 0000000..6fae2d4
Binary files /dev/null and b/markdown/mdimages/gpfdist_instances.png differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/mdimages/gpfdist_instances_backup.png
----------------------------------------------------------------------
diff --git a/markdown/mdimages/gpfdist_instances_backup.png b/markdown/mdimages/gpfdist_instances_backup.png
new file mode 100644
index 0000000..7cd3e1a
Binary files /dev/null and b/markdown/mdimages/gpfdist_instances_backup.png differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/mdimages/gporca.png
----------------------------------------------------------------------
diff --git a/markdown/mdimages/gporca.png b/markdown/mdimages/gporca.png
new file mode 100644
index 0000000..2909443
Binary files /dev/null and b/markdown/mdimages/gporca.png differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/mdimages/hawq_architecture_components.png
----------------------------------------------------------------------
diff --git a/markdown/mdimages/hawq_architecture_components.png b/markdown/mdimages/hawq_architecture_components.png
new file mode 100644
index 0000000..cea50b0
Binary files /dev/null and b/markdown/mdimages/hawq_architecture_components.png differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/mdimages/hawq_hcatalog.png
----------------------------------------------------------------------
diff --git a/markdown/mdimages/hawq_hcatalog.png b/markdown/mdimages/hawq_hcatalog.png
new file mode 100644
index 0000000..35b74c3
Binary files /dev/null and b/markdown/mdimages/hawq_hcatalog.png differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/mdimages/hawq_high_level_architecture.png
----------------------------------------------------------------------
diff --git a/markdown/mdimages/hawq_high_level_architecture.png b/markdown/mdimages/hawq_high_level_architecture.png
new file mode 100644
index 0000000..d88bf7a
Binary files /dev/null and b/markdown/mdimages/hawq_high_level_architecture.png differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/mdimages/partitions.jpg
----------------------------------------------------------------------
diff --git a/markdown/mdimages/partitions.jpg b/markdown/mdimages/partitions.jpg
new file mode 100644
index 0000000..d366e21
Binary files /dev/null and b/markdown/mdimages/partitions.jpg differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/mdimages/piv-opt.png
----------------------------------------------------------------------
diff --git a/markdown/mdimages/piv-opt.png b/markdown/mdimages/piv-opt.png
new file mode 100644
index 0000000..f8f192b
Binary files /dev/null and b/markdown/mdimages/piv-opt.png differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/mdimages/resource_queues.jpg
----------------------------------------------------------------------
diff --git a/markdown/mdimages/resource_queues.jpg b/markdown/mdimages/resource_queues.jpg
new file mode 100644
index 0000000..7f5a54c
Binary files /dev/null and b/markdown/mdimages/resource_queues.jpg differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/mdimages/slice_plan.jpg
----------------------------------------------------------------------
diff --git a/markdown/mdimages/slice_plan.jpg b/markdown/mdimages/slice_plan.jpg
new file mode 100644
index 0000000..ad8da83
Binary files /dev/null and b/markdown/mdimages/slice_plan.jpg differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/mdimages/source/gporca.graffle
----------------------------------------------------------------------
diff --git a/markdown/mdimages/source/gporca.graffle b/markdown/mdimages/source/gporca.graffle
new file mode 100644
index 0000000..fb835d5
Binary files /dev/null and b/markdown/mdimages/source/gporca.graffle differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/mdimages/source/hawq_hcatalog.graffle
----------------------------------------------------------------------
diff --git a/markdown/mdimages/source/hawq_hcatalog.graffle b/markdown/mdimages/source/hawq_hcatalog.graffle
new file mode 100644
index 0000000..f46bfb2
Binary files /dev/null and b/markdown/mdimages/source/hawq_hcatalog.graffle differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/mdimages/standby_master.jpg
----------------------------------------------------------------------
diff --git a/markdown/mdimages/standby_master.jpg b/markdown/mdimages/standby_master.jpg
new file mode 100644
index 0000000..ef195ab
Binary files /dev/null and b/markdown/mdimages/standby_master.jpg differ

