hive-user mailing list archives

From Shengjun Xin <s...@gopivotal.com>
Subject Re: question about hive sql
Date Tue, 22 Apr 2014 03:08:37 GMT
You need to check the container log for the details.
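
For example (a rough sketch, not verified against your cluster: the aggregated-log command assumes yarn.log-aggregation-enable=true, and the local log path depends on yarn.nodemanager.log-dirs in your yarn-site.xml):

# Aggregated logs for the failed application, using the id from your output
yarn logs -applicationId application_1398132272370_0001

# If log aggregation is disabled, look at the container's stderr on the
# NodeManager host instead, e.g.
cat $HADOOP_HOME/logs/userlogs/application_1398132272370_0001/container_*/stderr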


On Tue, Apr 22, 2014 at 10:27 AM, EdwardKing <zhangsc@neusoft.com> wrote:

>  I use Hive under Hadoop 2.2.0. First I start Hive:
> [hadoop@master sbin]$ hive
> 14/04/21 19:06:32 INFO Configuration.deprecation:
> mapred.input.dir.recursive is deprecated. Instead, use
> mapreduce.input.fileinputformat.input.dir.recursive
> 14/04/21 19:06:32 INFO Configuration.deprecation: mapred.max.split.size is
> deprecated. Instead, use mapreduce.input.fileinputformat.split.maxsize
> 14/04/21 19:06:32 INFO Configuration.deprecation: mapred.min.split.size is
> deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize
> 14/04/21 19:06:32 INFO Configuration.deprecation:
> mapred.min.split.size.per.rack is deprecated. Instead, use
> mapreduce.input.fileinputformat.split.minsize.per.rack
> 14/04/21 19:06:32 INFO Configuration.deprecation:
> mapred.min.split.size.per.node is deprecated. Instead, use
> mapreduce.input.fileinputformat.split.minsize.per.node
> 14/04/21 19:06:32 INFO Configuration.deprecation: mapred.reduce.tasks is
> deprecated. Instead, use mapreduce.job.reduces
> 14/04/21 19:06:32 INFO Configuration.deprecation:
> mapred.reduce.tasks.speculative.execution is deprecated. Instead, use
> mapreduce.reduce.speculative
> 14/04/21 19:06:32 WARN conf.Configuration:
> org.apache.hadoop.hive.conf.LoopingByteArrayInputStream@2128d0:an attempt
> to override final parameter:
> mapreduce.job.end-notification.max.retry.interval;  Ignoring.
> 14/04/21 19:06:32 WARN conf.Configuration:
> org.apache.hadoop.hive.conf.LoopingByteArrayInputStream@2128d0:an attempt
> to override final parameter: mapreduce.job.end-notification.max.attempts;
> Ignoring.
> Logging initialized using configuration in
> jar:file:/home/software/hive-0.11.0/lib/hive-common-0.11.0.jar!/hive-log4j.properties
> Hive history
> file=/tmp/hadoop/hive_job_log_hadoop_7623@master_201404211906_2069310090.txt
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in
> [jar:file:/home/software/hadoop-2.2.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in
> [jar:file:/home/software/hive-0.11.0/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> Then I create a table:
> hive> create table test(id STRING);
> OK
> Time taken: 17.277 seconds
> Then I insert some data into test:
> hive> load data inpath 'a.txt' overwrite into table test;
> Loading data to table default.test
> rmr: DEPRECATED: Please use 'rm -r' instead.
> Deleted /user/hive/warehouse/test
> Table default.test stats: [num_partitions: 0, num_files: 1, num_rows: 0,
> total_size: 19, raw_data_size: 0]
> OK
> Time taken: 1.855 seconds
>
> hive> select * from test;
> OK
> China
> US
> Australia
> Time taken: 0.526 seconds, Fetched: 3 row(s)
> Now I run a count query. I expected the result to be 3, but it fails.
> Why? What is wrong? I have been puzzled by this for several days.
> Could anyone tell me how to correct it?
> hive> select count(*) from test;
> Total MapReduce jobs = 1
> Launching Job 1 out of 1
> Number of reduce tasks determined at compile time: 1
> In order to change the average load for a reducer (in bytes):
>   set hive.exec.reducers.bytes.per.reducer=<number>
> In order to limit the maximum number of reducers:
>   set hive.exec.reducers.max=<number>
> In order to set a constant number of reducers:
>   set mapred.reduce.tasks=<number>
> Starting Job = job_1398132272370_0001, Tracking URL =
> http://master:8088/proxy/application_1398132272370_0001/
> Kill Command = /home/software/hadoop-2.2.0/bin/hadoop job  -kill
> job_1398132272370_0001
> Hadoop job information for Stage-1: number of mappers: 0; number of
> reducers: 0
> 2014-04-21 19:15:56,684 Stage-1 map = 0%,  reduce = 0%
> Ended Job = job_1398132272370_0001 with errors
> Error during job, obtaining debugging information...
> FAILED: Execution Error, return code 2 from
> org.apache.hadoop.hive.ql.exec.MapRedTask
> MapReduce Jobs Launched:
> Job 0:  HDFS Read: 0 HDFS Write: 0 FAIL
> Total MapReduce CPU Time Spent: 0 msec
> hive>
>
> Error information at
> http://172.11.12.6:8088/cluster/app/application_1398132272370_0001:
> User:  hadoop
> Name:  select count(*) from test(Stage-1)
> Application Type:  MAPREDUCE
> State:  FAILED
> FinalStatus:  FAILED
> Started:  21-Apr-2014 19:14:55
> Elapsed:  57sec
> Tracking URL:  History
> Diagnostics:
> Application application_1398132272370_0001 failed 2 times due to AM
> Container for appattempt_1398132272370_0001_000002 exited with exitCode: 1
> due to: Exception from container-launch:
> org.apache.hadoop.util.Shell$ExitCodeException:
> at org.apache.hadoop.util.Shell.runCommand(Shell.java:464)
> at org.apache.hadoop.util.Shell.run(Shell.java:379)
> at
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)
> at
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
> at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)
> at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)
> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
> at java.util.concurrent.FutureTask.run(FutureTask.java:166)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
> at java.lang.Thread.run(Thread.java:722)
> .Failing this attempt.. Failing the application
>
>



-- 
Regards
Shengjun
