Date: Thu, 17 Apr 2014 12:42:13 +0800
Subject: Re: question about hive under hadoop
From: Shengjun Xin <sxin@gopivotal.com>
To: user@hadoop.apache.org

For the first problem, you need to check hive.log for the details.

On Thu, Apr 17, 2014 at 11:06 AM, EdwardKing wrote:
> I use hive-0.11.0 under hadoop 2.2.0, as follows:
> [hadoop@node1 software]$ hive
> 14/04/16 19:11:02 INFO Configuration.deprecation: mapred.input.dir.recursive is deprecated. Instead, use mapreduce.input.fileinputformat.input.dir.recursive
> 14/04/16 19:11:02 INFO Configuration.deprecation: mapred.max.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.maxsize
> 14/04/16 19:11:02 INFO Configuration.deprecation: mapred.min.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize
> 14/04/16 19:11:02 INFO Configuration.deprecation: mapred.min.split.size.per.rack is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.rack
> 14/04/16 19:11:02 INFO Configuration.deprecation: mapred.min.split.size.per.node is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.node
> 14/04/16 19:11:02 INFO Configuration.deprecation: mapred.reduce.tasks is deprecated.
> Instead, use mapreduce.job.reduces
> 14/04/16 19:11:02 INFO Configuration.deprecation: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative
> 14/04/16 19:11:03 WARN conf.Configuration: org.apache.hadoop.hive.conf.LoopingByteArrayInputStream@17a9eb9: an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
> 14/04/16 19:11:03 WARN conf.Configuration: org.apache.hadoop.hive.conf.LoopingByteArrayInputStream@17a9eb9: an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
> Logging initialized using configuration in jar:file:/home/software/hive-0.11.0/lib/hive-common-0.11.0.jar!/hive-log4j.properties
> Hive history file=/tmp/hadoop/hive_job_log_hadoop_4933@node1_201404161911_2112956781.txt
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in [jar:file:/home/software/hadoop-2.2.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in [jar:file:/home/software/hive-0.11.0/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
>
> Then I create a table named ufodata, as follows:
> hive> CREATE TABLE ufodata(sighted STRING, reported STRING,
>     > sighting_location STRING, shape STRING, duration STRING,
>     > description STRING COMMENT 'Free text description')
>     > COMMENT 'The UFO data set.';
> OK
> Time taken: 1.588 seconds
> hive> LOAD DATA INPATH '/tmp/ufo.tsv' OVERWRITE INTO TABLE ufodata;
> Loading data to table default.ufodata
> rmr: DEPRECATED: Please use 'rm -r' instead.
> Deleted /user/hive/warehouse/ufodata
> Table default.ufodata stats: [num_partitions: 0, num_files: 1, num_rows: 0, total_size: 75342464, raw_data_size: 0]
> OK
> Time taken: 1.483 seconds
>
> Then I want to count the table ufodata, as follows:
>
> hive> select count(*) from ufodata;
> Total MapReduce jobs = 1
> Launching Job 1 out of 1
> Number of reduce tasks determined at compile time: 1
> In order to change the average load for a reducer (in bytes):
>   set hive.exec.reducers.bytes.per.reducer=<number>
> In order to limit the maximum number of reducers:
>   set hive.exec.reducers.max=<number>
> In order to set a constant number of reducers:
>   set mapred.reduce.tasks=<number>
> Starting Job = job_1397699833108_0002, Tracking URL = http://master:8088/proxy/application_1397699833108_0002/
> Kill Command = /home/software/hadoop-2.2.0/bin/hadoop job -kill job_1397699833108_0002
>
> I have two questions:
> 1. Why did the above command fail? Where is it wrong, and how can I solve it?
> 2. When I use the following commands to quit hive and reboot the computer:
> hive> quit;
> $ reboot
>
> Then I use the following command under hive:
> hive> describe ufodata;
> Table not found 'ufodata'
>
> Where is my table? I am puzzled by it. How can I resolve the above two questions?
>
> Thanks
>
> ---------------------------------------------------------------------------------------------------
> Confidentiality Notice: The information contained in this e-mail and any accompanying attachment(s) is intended only for the use of the intended recipient and may be confidential and/or privileged of Neusoft Corporation, its subsidiaries and/or its affiliates. If any reader of this communication is not the intended recipient, unauthorized use, forwarding, printing, storing, disclosure or copying is strictly prohibited, and may be unlawful. If you have received this communication in error, please immediately notify the sender by return e-mail, and delete the original message and all copies from your system. Thank you.
> ---------------------------------------------------------------------------------------------------

--
Regards
Shengjun
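The thread itself never answers the second question. One common cause, offered here as an assumption rather than anything the participants confirmed: with Hive 0.11's default embedded Derby metastore, the ConnectionURL uses a relative databaseName, so the metastore_db directory is created in whichever working directory hive was launched from, and starting hive from a different directory after the reboot shows an empty metastore ("Table not found"). A hedged hive-site.xml fragment that pins the metastore to an absolute path (the /home/hadoop location below is a hypothetical example):

```xml
<!-- Sketch for hive-site.xml: fix the embedded Derby metastore to an
     absolute location so table definitions survive a change of working
     directory. The databaseName path is illustrative, not from the thread. -->
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:derby:;databaseName=/home/hadoop/hive_metastore_db;create=true</value>
</property>
```

A quick way to test the hypothesis without changing configuration is to cd back to the directory hive was originally started from and run `describe ufodata;` again.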
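The advice above for the first problem is to check hive.log. A minimal sketch of where to look and what to grep for, assuming Hive 0.11's default hive-log4j.properties (which writes the client log to ${java.io.tmpdir}/${user.name}/hive.log, i.e. /tmp/hadoop/hive.log for this session); the sample log lines below are fabricated purely so the grep pattern has something to match:

```shell
# Path assumes Hive 0.11's default hive-log4j.properties settings
# (hive.log.dir=${java.io.tmpdir}/${user.name}); adjust if overridden.
HIVE_LOG="/tmp/hadoop/hive.log"

# Fabricated sample lines standing in for a real hive.log:
SAMPLE_LOG="/tmp/sample_hive.log"
cat > "$SAMPLE_LOG" <<'EOF'
2014-04-16 19:12:01,001 INFO  exec.Task: Starting Job = job_1397699833108_0002
2014-04-16 19:12:05,777 ERROR exec.Task: Ended Job = job_1397699833108_0002 with errors
EOF

# On a real cluster, point this at "$HIVE_LOG" instead of the sample.
# Prints each failing line with its line number (here: line 2).
grep -n "ERROR\|Exception" "$SAMPLE_LOG"
```

If nothing turns up in the client log, the per-attempt container logs reachable from the Tracking URL printed by the job are the next place to look.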