hadoop-common-commits mailing list archives

From Apache Wiki <wikidi...@apache.org>
Subject [Hadoop Wiki] Trivial Update of "Hive/AdminManual/Configuration" by AshishThusoo
Date Thu, 22 Jan 2009 01:37:46 GMT
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Hadoop Wiki" for change notification.

The following page has been changed by AshishThusoo:
http://wiki.apache.org/hadoop/Hive/AdminManual/Configuration

------------------------------------------------------------------------------
  === Hive Configuration Variables ===
  ||'''Variable Name'''||'''Description'''||'''Default Value'''||
  ||hive.exec.script.wrapper||Wrapper around any invocations of the script operator, e.g. if this
is set to python, the script passed to the script operator is invoked as {{{python <script
command>}}}. If the value is null or not set, the script is invoked as {{{<script command>}}}.||null||
- ||hive.exec.plan||||null||
+ ||hive.exec.plan|| ||null||
  ||hive.exec.scratchdir||This directory is used by Hive to store the plans for the different
map/reduce stages of the query as well as to store the intermediate outputs of these stages.||/tmp/<user.name>/hive||
  ||hive.exec.submitviachild||Determines whether the map/reduce jobs should be submitted through
a separate JVM in the non-local mode.||false - By default jobs are submitted through the same
JVM as the compiler||
  ||hive.exec.script.maxerrsize||Maximum number of serialization errors allowed in a user
script invoked through {{{TRANSFORM}}} or {{{MAP}}} or {{{REDUCE}}} constructs.||100000||
@@ -37, +37 @@

  ||hive.aux.jars.path||The location of the plugin jars that contain implementations of user
defined functions and serdes.||||
  ||hive.partition.pruning||If this variable is set to strict, the compiler throws an error
when no partition predicate is provided on a partitioned table. This protects against a user
inadvertently issuing a query against all the partitions of the table.||nonstrict||
  ||hive.map.aggr||Determines whether map-side aggregation is enabled.||false||
- ||hive.join.emit.interval||||1000||
+ ||hive.join.emit.interval|| ||1000||
- ||hive.map.aggr.hash.percentmemory||||(float)0.5||
+ ||hive.map.aggr.hash.percentmemory|| ||(float)0.5||
  ||hive.default.fileformat||||TextFile||
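
 The variables above can be set globally in a hive-site.xml file or per session from the Hive CLI. A minimal sketch of a hive-site.xml fragment is shown below; the property names come from the table above, but the values (the scratch path in particular) are purely illustrative:

 {{{
<?xml version="1.0"?>
<configuration>
  <!-- scratch directory for plans and intermediate stage output (illustrative path) -->
  <property>
    <name>hive.exec.scratchdir</name>
    <value>/tmp/hive-scratch</value>
  </property>
  <!-- enable map-side aggregation -->
  <property>
    <name>hive.map.aggr</name>
    <value>true</value>
  </property>
</configuration>
 }}}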
  
  === Hive MetaStore Configuration Variables ===
  ||'''Variable Name'''||'''Description'''||'''Default Value'''||
- ||hive.metastore.metadb.dir||||||
+ ||hive.metastore.metadb.dir|| || ||
- ||hive.metastore.warehouse.dir||||||
+ ||hive.metastore.warehouse.dir|| || ||
- ||hive.metastore.uris||||||
+ ||hive.metastore.uris|| || ||
- ||hive.metastore.usefilestore||||||
+ ||hive.metastore.usefilestore|| || ||
- ||hive.metastore.rawstore.impl||||||
+ ||hive.metastore.rawstore.impl|| || ||
- ||hive.metastore.local||||||
+ ||hive.metastore.local|| || ||
- ||javax.jdo.option.ConnectionURL||||||
+ ||javax.jdo.option.ConnectionURL|| || ||
- ||javax.jdo.option.ConnectionDriverName||||||
+ ||javax.jdo.option.ConnectionDriverName|| || ||
- ||javax.jdo.option.ConnectionUserName||||||
+ ||javax.jdo.option.ConnectionUserName|| || ||
- ||javax.jdo.option.ConnectionPassword||||||
+ ||javax.jdo.option.ConnectionPassword|| || ||
- ||org.jpox.autoCreateSchema||||||
+ ||org.jpox.autoCreateSchema|| || ||
- ||org.jpox.fixedDatastore||||||
+ ||org.jpox.fixedDatastore|| || ||
- ||hive.metastore.checkForDefaultDb||||||
+ ||hive.metastore.checkForDefaultDb|| || ||
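
 The javax.jdo.* and org.jpox.* variables above configure the JDO store behind the metastore. A minimal sketch of a hive-site.xml fragment for an embedded Derby metastore follows; the Derby URL and driver class are common-case assumptions, not defaults taken from this page:

 {{{
<configuration>
  <!-- JDO connection for an embedded Derby metastore (illustrative values) -->
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:derby:;databaseName=metastore_db;create=true</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>org.apache.derby.jdbc.EmbeddedDriver</value>
  </property>
  <!-- run the metastore in-process rather than talking to a remote server -->
  <property>
    <name>hive.metastore.local</name>
    <value>true</value>
  </property>
</configuration>
 }}}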
  
  === Hive Configuration Variables used to interact with Hadoop ===
  ||'''Variable Name'''||'''Description'''||'''Default Value'''||
  ||hadoop.bin.path||The location of the hadoop script, which is used to submit jobs to Hadoop
when submitting through a separate JVM.||$HADOOP_HOME/bin/hadoop||
  ||hadoop.config.dir||The location of the configuration directory of the Hadoop installation.||$HADOOP_HOME/conf||
- ||fs.default.name||||file:///||
+ ||fs.default.name|| ||file:///||
- ||map.input.file||||null||
+ ||map.input.file|| ||null||
  ||mapred.job.tracker||The URL of the jobtracker. If this is set to local, map/reduce jobs
are run in local mode.||local||
  ||mapred.reduce.tasks||The number of reducers for each map/reduce stage in the query plan.||1||
  ||mapred.job.name||The name of the map/reduce job.||null||
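
 These Hadoop-side variables can also be overridden per session from the Hive CLI with the set command. The sketch below shows two illustrative overrides using variable names from the table above:

 {{{
-- run map/reduce jobs locally instead of on the jobtracker
set mapred.job.tracker=local;
-- use 4 reducers for each map/reduce stage in the query plan
set mapred.reduce.tasks=4;
 }}}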
  
  === Hive Variables used to pass run time information ===
  ||'''Variable Name'''||'''Description'''||'''Default Value'''||
- ||hive.session.id||The id of the Hive Session.||||
+ ||hive.session.id||The id of the Hive Session.|| ||
- ||hive.query.string||The query string passed to the map/reduce job.||||
+ ||hive.query.string||The query string passed to the map/reduce job.|| ||
- ||hive.query.planid||The id of the plan for the map/reduce stage.||||
+ ||hive.query.planid||The id of the plan for the map/reduce stage.|| ||
  ||hive.jobname.length||The maximum length of the jobname.||50||
- ||hive.table.name||The name of the hive table. This is passed to the user scripts through
the script operator.||||
+ ||hive.table.name||The name of the hive table. This is passed to the user scripts through
the script operator.|| ||
- ||hive.partition.name||The name of the hive partition. This is passed to the user scripts
through the script operator.||||
+ ||hive.partition.name||The name of the hive partition. This is passed to the user scripts
through the script operator.|| ||
- ||hive.alias||The alias being processed. This is also passed to the user scripts through
the script operator.||||
+ ||hive.alias||The alias being processed. This is also passed to the user scripts through
the script operator.|| ||
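
 Variables such as hive.table.name and hive.partition.name are passed to user scripts run through the script operator. Assuming they arrive as environment variables with dots mapped to underscores (an assumption about the delivery mechanism, not something stated on this page), a {{{TRANSFORM}}} script might read them like this:

```python
import os
import sys


def hive_var(name, default=""):
    """Look up a Hive runtime variable from the environment.

    Assumes Hive exports it with '.' replaced by '_'
    (e.g. hive.table.name -> hive_table_name).
    """
    return os.environ.get(name.replace(".", "_"), default)


def main():
    table = hive_var("hive.table.name", "unknown_table")
    # echo each input row, tagged with the table it came from
    for line in sys.stdin:
        print("%s\t%s" % (table, line.rstrip("\n")))


if __name__ == "__main__":
    main()
```

 The script itself is a plain stdin-to-stdout filter, which is all the script operator requires; the environment lookup is the only Hive-specific part.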
  
