hadoop-common-user mailing list archives

From Shengjun Xin <s...@gopivotal.com>
Subject Re: question about hive under hadoop
Date Thu, 17 Apr 2014 05:33:47 GMT
Maybe /tmp/$username/hive.log; you can check the parameter
'hive.log.dir' in hive-log4j.properties.
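
For reference, a rough sketch of what the stock hive-log4j.properties shipped with Hive 0.11 usually contains (these are the default values and may differ in your install):

    hive.log.dir=${java.io.tmpdir}/${user.name}
    hive.log.file=hive.log

Your session loaded that file from hive-common-0.11.0.jar, so one way to confirm the actual values and then watch the log while re-running the failing query is something like the following (the jar path is taken from your output; /tmp/hadoop/hive.log is an assumption based on the defaults above and the fact that you run as the 'hadoop' user):

    unzip -p /home/software/hive-0.11.0/lib/hive-common-0.11.0.jar hive-log4j.properties | grep '^hive.log'
    tail -f /tmp/hadoop/hive.log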


On Thu, Apr 17, 2014 at 1:18 PM, EdwardKing <zhangsc@neusoft.com> wrote:

>  Where is hive.log?  Thanks.
>
> ----- Original Message -----
> *From:* Shengjun Xin <sxin@gopivotal.com>
> *To:* user@hadoop.apache.org
> *Sent:* Thursday, April 17, 2014 12:42 PM
> *Subject:* Re: question about hive under hadoop
>
> For the first problem, you need to check the hive.log for the details
>
>
> On Thu, Apr 17, 2014 at 11:06 AM, EdwardKing <zhangsc@neusoft.com> wrote:
>
>>  I use hive-0.11.0 under Hadoop 2.2.0, as follows:
>> [hadoop@node1 software]$ hive
>> 14/04/16 19:11:02 INFO Configuration.deprecation:
>> mapred.input.dir.recursive is deprecated. Instead, use
>> mapreduce.input.fileinputformat.input.dir.recursive
>> 14/04/16 19:11:02 INFO Configuration.deprecation: mapred.max.split.size
>> is deprecated. Instead, use mapreduce.input.fileinputformat.split.maxsize
>> 14/04/16 19:11:02 INFO Configuration.deprecation: mapred.min.split.size
>> is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize
>> 14/04/16 19:11:02 INFO Configuration.deprecation:
>> mapred.min.split.size.per.rack is deprecated. Instead, use
>> mapreduce.input.fileinputformat.split.minsize.per.rack
>> 14/04/16 19:11:02 INFO Configuration.deprecation:
>> mapred.min.split.size.per.node is deprecated. Instead, use
>> mapreduce.input.fileinputformat.split.minsize.per.node
>> 14/04/16 19:11:02 INFO Configuration.deprecation: mapred.reduce.tasks is
>> deprecated. Instead, use mapreduce.job.reduces
>> 14/04/16 19:11:02 INFO Configuration.deprecation:
>> mapred.reduce.tasks.speculative.execution is deprecated. Instead, use
>> mapreduce.reduce.speculative
>> 14/04/16 19:11:03 WARN conf.Configuration:
>> org.apache.hadoop.hive.conf.LoopingByteArrayInputStream@17a9eb9: an attempt
>> to override final parameter:
>> mapreduce.job.end-notification.max.retry.interval;  Ignoring.
>> 14/04/16 19:11:03 WARN conf.Configuration:
>> org.apache.hadoop.hive.conf.LoopingByteArrayInputStream@17a9eb9: an attempt
>> to override final parameter:
>> mapreduce.job.end-notification.max.attempts;  Ignoring.
>> Logging initialized using configuration in
>> jar:file:/home/software/hive-0.11.0/lib/hive-common-0.11.0.jar!/hive-log4j.properties
>> Hive history
>> file=/tmp/hadoop/hive_job_log_hadoop_4933@node1_201404161911_2112956781.txt
>> SLF4J: Class path contains multiple SLF4J bindings.
>> SLF4J: Found binding in
>> [jar:file:/home/software/hadoop-2.2.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> SLF4J: Found binding in
>> [jar:file:/home/software/hive-0.11.0/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
>> explanation.
>> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
>>
>>
>> Then I create a table named ufodata, as follows:
>> hive> CREATE TABLE ufodata(sighted STRING, reported STRING,
>>     > sighting_location STRING,shape STRING, duration STRING,
>>     > description STRING COMMENT 'Free text description')
>>     > COMMENT 'The UFO data set.' ;
>> OK
>> Time taken: 1.588 seconds
>> hive> LOAD DATA INPATH '/tmp/ufo.tsv' OVERWRITE INTO TABLE ufodata;
>> Loading data to table default.ufodata
>> rmr: DEPRECATED: Please use 'rm -r' instead.
>> Deleted /user/hive/warehouse/ufodata
>> Table default.ufodata stats: [num_partitions: 0, num_files: 1, num_rows:
>> 0, total_size: 75342464, raw_data_size: 0]
>> OK
>> Time taken: 1.483 seconds
>>
>> Then I want to count the rows in table ufodata, as follows:
>>
>> hive> select count(*) from ufodata;
>> Total MapReduce jobs = 1
>> Launching Job 1 out of 1
>> Number of reduce tasks determined at compile time: 1
>> In order to change the average load for a reducer (in bytes):
>>   set hive.exec.reducers.bytes.per.reducer=<number>
>> In order to limit the maximum number of reducers:
>>   set hive.exec.reducers.max=<number>
>> In order to set a constant number of reducers:
>>   set mapred.reduce.tasks=<number>
>> Starting Job = job_1397699833108_0002, Tracking URL =
>> http://master:8088/proxy/application_1397699833108_0002/
>> Kill Command = /home/software/hadoop-2.2.0/bin/hadoop job  -kill
>> job_1397699833108_0002
>>
>> I have two questions:
>> 1. Why did the above command fail? Where did it go wrong, and how can I solve it?
>> 2. When I use the following command to quit Hive and then reboot the computer:
>> hive>quit;
>> $reboot
>>
>> then I run the following command under Hive:
>> hive>describe ufodata;
>> Table not found 'ufodata'
>>
>> Where is my table? I am puzzled by this. How can I resolve the above two questions?
>>
>> Thanks
>>
>>
>
>
>
> --
>  Regards
> Shengjun
>
>
>



-- 
Regards
Shengjun
