hadoop-common-commits mailing list archives

From Apache Wiki <wikidi...@apache.org>
Subject [Hadoop Wiki] Trivial Update of "Hive/AdminManual/Configuration" by AshishThusoo
Date Thu, 22 Jan 2009 01:07:56 GMT
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Hadoop Wiki" for change notification.

The following page has been changed by AshishThusoo:

  A number of configuration variables in Hive can be used by the administrator to change the
behavior for their installations and user sessions. These variables can be configured in any
of the following ways, shown in the order of preference:
   * Using the set command in the cli for setting session-level values for a configuration
variable for all statements subsequent to the set command. e.g.
   set hive.exec.scratchdir=/tmp/mydir
    sets the scratch directory (which is used by Hive to store temporary output and plans)
to /tmp/mydir for all subsequent statements.
   * Using the -hiveconf option on the cli for the entire session. e.g.
   bin/hive -hiveconf hive.exec.scratchdir=/tmp/mydir
   * In hive-site.xml. This is used for setting values for the entire Hive configuration.
   <property>
     <name>hive.exec.scratchdir</name>
     <value>/tmp/mydir</value>
     <description>Scratch space for Hive jobs</description>
   </property>
   * hive-default.xml - This configuration file contains the default values for the various
configuration variables that come prepackaged with a Hive distribution. These should not be
changed by the administrator. To override any of the values, create hive-site.xml instead and
set the value in that file as shown above.
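The precedence order above amounts to a layered lookup: a session-level set command wins over a -hiveconf option, which wins over hive-site.xml, which wins over hive-default.xml. A minimal sketch of that resolution logic (the dictionaries and function below are illustrative stand-ins, not Hive internals):

```python
# Illustrative sketch of Hive's configuration precedence:
#   set command > -hiveconf > hive-site.xml > hive-default.xml
# The dictionaries below are made-up stand-ins, not real Hive internals.

def resolve(name, set_cmd, hiveconf, site, default):
    """Return the effective value of a configuration variable,
    consulting the layers from highest to lowest precedence."""
    for layer in (set_cmd, hiveconf, site, default):
        if name in layer:
            return layer[name]
    return None

default = {"hive.exec.scratchdir": "/tmp/<user.name>/hive"}
site = {"hive.exec.scratchdir": "/data/hive/scratch"}   # hypothetical site override
hiveconf = {}
set_cmd = {"hive.exec.scratchdir": "/tmp/mydir"}        # session-level set command

# The session-level set command wins over all lower layers:
print(resolve("hive.exec.scratchdir", set_cmd, hiveconf, site, default))
# prints "/tmp/mydir"
```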

  === Hive Configuration Variables ===
  ||'''Variable Name'''||'''Description'''||'''Default Value'''||
 ||hive.exec.script.wrapper||Wrapper around invocations of the script operator, e.g. if this
is set to python, the script passed to the script operator is invoked as {{{python <script
command>}}}. If the value is null or not set, the script is invoked as {{{<script command>}}}||null||
 ||hive.exec.scratchdir||This directory is used by Hive to store the plans for the different
map/reduce stages of the query, as well as to store the intermediate outputs of these stages||/tmp/<user.name>/hive||
 ||hive.exec.submitviachild||Determines whether the map/reduce jobs should be submitted through
a separate jvm in non-local mode||false - by default jobs are submitted through the same
jvm as the compiler||
 ||hive.exec.script.maxerrsize||Maximum number of serialization errors allowed in a user
script invoked through {{{TRANSFORM}}}, {{{MAP}}}, or {{{REDUCE}}} constructs||100000||
 ||hive.exec.compress.output||Determines whether the output of the final map/reduce job in
a query is compressed||false||
 ||hive.exec.compress.intermediate||Determines whether the output of the intermediate map/reduce
jobs in a query is compressed||false||
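As a sketch of how a table entry translates into configuration, an administrator who wants the final query output compressed could set hive.exec.compress.output in hive-site.xml (the property name is from the table above; enabling it here is only an illustration, not a recommended default):

```xml
<property>
  <name>hive.exec.compress.output</name>
  <value>true</value>
  <description>Compress the output of the final map/reduce job</description>
</property>
```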
