From: Apache Wiki
To: core-commits@hadoop.apache.org
Date: Thu, 22 Jan 2009 00:44:19 -0000
Subject: [Hadoop Wiki] Update of "Hive/LanguageManual/LanguageManual/Cli" by suresh antony

Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Hadoop Wiki" for change notification.
The following page has been changed by suresh antony:
http://wiki.apache.org/hadoop/Hive/LanguageManual/LanguageManual/Cli

New page:

= Hive Cli =

$HIVE_HOME/bin/hive is an interactive shell utility. Use it to run Hive queries. Use ";" (semicolon) to separate commands in the shell.

== Hive Command Line Options ==

Usage:

{{{
hive [-hiveconf x=y]* [<-f filename>|<-e query-string>] [-S]
}}}

 * -e 'quoted query string': SQL from the command line
 * -f <filename>: SQL from a file
 * -S: silent mode in the interactive shell
 * -hiveconf x=y: use this to set Hive/Hadoop configuration variables. Hive variables are documented [[here]]

-e and -f cannot be specified together. In the absence of these options, the interactive shell is started.

 * Example of running a query from the command line:
{{{
$HIVE_HOME/bin/hive -e 'select a.col from tab1 a'
}}}
 * Example of setting Hive configuration variables:
{{{
$HIVE_HOME/bin/hive -e 'select a.col from tab1 a' -hiveconf hive.exec.scratchdir=/home/my/hive_scratch -hiveconf mapred.reduce.tasks=32
}}}

== Hive Interactive Shell Commands ==

When $HIVE_HOME/bin/hive is run without the -e or -f option, it enters interactive shell mode.

 * hive> quit;
   * Use quit or exit to leave the interactive shell.
 * hive> set;
   * Prints the list of configuration variables overridden by the user or Hive.
 * hive> set -v;
   * Prints all possible Hadoop/Hive configuration variables.
 * hive> set <key>=<value>;
   * Use this to set the value of a particular configuration variable. Note that if you misspell the variable name, the CLI will not show an error.
 * hive> set <key>;
   * Use this to check the value of a particular variable.
 * hive> add FILE <filepath>;
   * Adds a file to the list of resources.
 * hive> list FILE;
   * Lists all the resources already added.
 * hive> list FILE <filepath>*;
   * Checks whether the given resources are already added.
 * hive> ! <command>;
   * Executes a shell command from the Hive shell.
 * hive> dfs <dfs command>;
   * Executes a dfs command from the Hive shell.
 * hive> <query string>;
   * Executes a Hive query and prints results to stdout.
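The option rules above can be sketched in code. This is a toy model, not part of Hive: it only illustrates the documented semantics that -hiveconf may repeat, and that -e and -f are mutually exclusive.

```python
from typing import Dict, List, Optional


def build_hive_argv(query: Optional[str] = None,
                    filename: Optional[str] = None,
                    silent: bool = False,
                    hiveconf: Optional[Dict[str, str]] = None) -> List[str]:
    """Build an argv list for $HIVE_HOME/bin/hive from the documented options."""
    if query is not None and filename is not None:
        # The wiki states -e and -f cannot be specified together.
        raise ValueError("-e and -f cannot be specified together")
    argv = ["hive"]
    # -hiveconf may be given any number of times.
    for key, value in (hiveconf or {}).items():
        argv += ["-hiveconf", f"{key}={value}"]
    if query is not None:
        argv += ["-e", query]
    elif filename is not None:
        argv += ["-f", filename]
    # With neither -e nor -f, the real CLI would start the interactive shell.
    if silent:
        argv.append("-S")
    return argv
```

For example, `build_hive_argv(query="select a.col from tab1 a", silent=True)` yields `["hive", "-e", "select a.col from tab1 a", "-S"]`.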
=== Hive Resources ===

You can add a file to the list of resources using 'add FILE <filepath>'. This can be a local file or an NFS file. Once a file is added to the list of resources, a Hive query can access it from anywhere in the cluster. Otherwise, the location of the file must be accessible to all machines in the cluster.

Example:

{{{
hive> add FILE /tmp/tt.py
hive> from networks a MAP a.networkid USING 'python tt.py' as nn where a.ds = '2009-01-04' limit 10;
}}}
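The actual contents of /tmp/tt.py are not shown on this page, so the following is only an assumed sketch of what such a transform script might look like. Hive streams each input row to the script on stdin as tab-separated columns and reads tab-separated output rows back from stdout.

```python
# Hypothetical sketch of a Hive MAP/TRANSFORM script like tt.py
# (the real tt.py is not given on this page).
import sys


def transform(line: str) -> str:
    """Example transform: pass the first input column (a.networkid) through."""
    columns = line.rstrip("\n").split("\t")
    return columns[0]


if __name__ == "__main__":
    # Hive feeds one tab-separated row per line on stdin;
    # each printed line becomes an output row (column 'nn' in the query above).
    for row in sys.stdin:
        print(transform(row))
```

Because the script is shipped to every worker via 'add FILE', the relative invocation 'python tt.py' works on all nodes in the cluster.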