hawq-commits mailing list archives

From: yo...@apache.org
Subject: [2/3] incubator-hawq-docs git commit: Links to resource queue sections.
Date: Fri, 14 Oct 2016 23:28:18 GMT
Links to resource queue sections.


Project: http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/commit/5edd907d
Tree: http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/tree/5edd907d
Diff: http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/diff/5edd907d

Branch: refs/heads/develop
Commit: 5edd907de2b353b6e13e954524fbca9d4a3a7787
Parents: bf16a06
Author: Jane Beckman <jbeckman@pivotal.io>
Authored: Fri Oct 14 14:31:41 2016 -0700
Committer: Jane Beckman <jbeckman@pivotal.io>
Committed: Fri Oct 14 14:31:41 2016 -0700

----------------------------------------------------------------------
 bestpractices/querying_data_bestpractices.html.md.erb | 5 +++--
 query/query-profiling.html.md.erb                     | 2 +-
 2 files changed, 4 insertions(+), 3 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/5edd907d/bestpractices/querying_data_bestpractices.html.md.erb
----------------------------------------------------------------------
diff --git a/bestpractices/querying_data_bestpractices.html.md.erb b/bestpractices/querying_data_bestpractices.html.md.erb
index 7a86176..15fecfe 100644
--- a/bestpractices/querying_data_bestpractices.html.md.erb
+++ b/bestpractices/querying_data_bestpractices.html.md.erb
@@ -16,12 +16,13 @@ If a query performs poorly, examine its query plan and ask the following questio
     If the plan is not choosing the optimal join order, set `join_collapse_limit=1` and use explicit `JOIN` syntax in your SQL statement to force the legacy query optimizer (planner) to the specified join order. You can also collect more statistics on the relevant join columns.
 
 -   **Does the optimizer selectively scan partitioned tables?** If you use table partitioning, is the optimizer selectively scanning only the child tables required to satisfy the query predicates? Scans of the parent tables should return 0 rows since the parent tables do not contain any data. See [Verifying Your Partition Strategy](../ddl/ddl-partition.html#topic74) for an example of a query plan that shows a selective partition scan.
--   **Does the optimizer choose hash aggregate and hash join operations where applicable?** Hash operations are typically much faster than other types of joins or aggregations. Row comparison and sorting is done in memory rather than reading/writing from disk. To enable the query optimizer to choose hash operations, there must be sufficient memory available to hold the estimated number of rows. Try increasing work memory to improve performance for a query. If possible, run an `EXPLAIN             ANALYZE` for the query to show which plan operations spilled to disk, how much work memory they used, and how much memory was required to avoid spilling to disk. For example:
+-   **Does the optimizer choose hash aggregate and hash join operations where applicable?** Hash operations are typically much faster than other types of joins or aggregations. Row comparison and sorting is done in memory rather than reading/writing from disk. To enable the query optimizer to choose hash operations, there must be sufficient memory available to hold the estimated number of rows. Try increasing work memory to improve performance for a query. If possible, run an `EXPLAIN  ANALYZE` for the query to show which plan operations spilled to disk, how much work memory they used, and how much memory was required to avoid spilling to disk. For example:
 
     `Work_mem used: 23430K bytes avg, 23430K bytes max (seg0). Work_mem               wanted: 33649K bytes avg, 33649K bytes max (seg0) to lessen workfile I/O affecting 2              workers.`
 
 **Note**
-The *work\_mem* property is not configurable. Use resource queues to manage memory use.
+The *work\_mem* property is not configurable. Use resource queues to manage memory use. For more information on resource queues, see [Configuring Resource Management](../resourcemgmt/ConfigureResourceManagement.html) and [Working with Hierarchical Resource Queues](../resourcemgmt/ResourceQueues.html).
+

     The "bytes wanted" message from `EXPLAIN               ANALYZE` is based on the amount of data written to work files and is not exact. The minimum `work_mem` needed can differ from the suggested value.
 
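As context for the join-order and work-memory guidance in the hunk above: the documented workflow is to pin the join order with `join_collapse_limit=1` plus explicit `JOIN` syntax, then run `EXPLAIN ANALYZE` to see how much work memory each operation used and whether it spilled to disk. A minimal sketch of that workflow, using hypothetical tables `customers` and `orders` that are not part of this commit:

```sql
-- With join_collapse_limit=1, the legacy query optimizer (planner) keeps the
-- joins in the order they are written rather than reordering them.
SET join_collapse_limit = 1;

-- EXPLAIN ANALYZE runs the query and reports, per operation, the work memory
-- used and (if it spilled) the work memory wanted to avoid workfile I/O.
EXPLAIN ANALYZE
SELECT c.region, COUNT(*)
FROM customers c
JOIN orders o ON o.customer_id = c.id
GROUP BY c.region;
```

A `Work_mem wanted` line in the output is the cue, per the note above, to revisit resource queue memory settings rather than a `work_mem` setting.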

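The partition bullet in the same hunk can likewise be checked directly from a query plan. A hedged sketch, assuming a hypothetical table `sales` partitioned by `sale_date` (names are illustrative only):

```sql
-- With a selective predicate on the partition key, the plan should scan only
-- the child partitions that can satisfy the predicate; the parent table holds
-- no data and should contribute 0 rows.
EXPLAIN
SELECT sale_id, amount
FROM sales
WHERE sale_date BETWEEN '2016-10-01' AND '2016-10-31';
```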
http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/5edd907d/query/query-profiling.html.md.erb
----------------------------------------------------------------------
diff --git a/query/query-profiling.html.md.erb b/query/query-profiling.html.md.erb
index 8d770d3..b3139cf 100644
--- a/query/query-profiling.html.md.erb
+++ b/query/query-profiling.html.md.erb
@@ -89,7 +89,7 @@ The estimated startup cost for this plan is `00.00` (no cost) and a total cost o
     workfile I/O affecting 2 workers.
     ```
 **Note**
-The *work\_mem* property is not configurable. Use resource queues to manage memory use.
+The *work\_mem* property is not configurable. Use resource queues to manage memory use. For more information on resource queues, see [Configuring Resource Management](../resourcemgmt/ConfigureResourceManagement.html) and [Working with Hierarchical Resource Queues](../resourcemgmt/ResourceQueues.html).
 
 -   The time (in milliseconds) in which the segment that produced the most rows retrieved the first row, and the time taken for that segment to retrieve all rows. The result may omit *&lt;time&gt; to first row* if it is the same as the *&lt;time&gt; to end*.
 


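Both edited notes direct readers to resource queues for managing query memory. A brief illustrative sketch of the kind of setup those pages describe; the queue name, role, and limit values below are placeholders, and the exact attribute list should be taken from the linked Configuring Resource Management and Working with Hierarchical Resource Queues pages:

```sql
-- Hypothetical child queue under the root queue; values are placeholders,
-- not recommendations.
CREATE RESOURCE QUEUE reports_queue WITH (
    PARENT='pg_root',
    ACTIVE_STATEMENTS=10,
    MEMORY_LIMIT_CLUSTER=30%,
    CORE_LIMIT_CLUSTER=30%
);

-- Route a role's queries through the new queue so its memory is governed there.
ALTER ROLE report_user RESOURCE QUEUE reports_queue;
```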