hive-user mailing list archives

From "Bejoy KS" <bejoy...@yahoo.com>
Subject Re: hive mapred problem
Date Tue, 09 Oct 2012 12:27:55 GMT
Hi Ajit

Is the Oracle JDK used on your data nodes? If it is OpenJDK, then I guess jps won't work. Also,
have you configured Java correctly on the DNs for the user that issues the jps command?
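As background on why this usually happens: jps ships with the JDK (not the JRE), so "command not found" typically means the JDK's bin directory is missing from that user's PATH. A minimal sketch of the fix; the install path below is illustrative, not taken from this thread:

```shell
# jps lives in $JDK_HOME/bin; prepend it to PATH for the hadoop user.
JDK_HOME="/usr/java/default"      # hypothetical Oracle JDK install path
PATH="$JDK_HOME/bin:$PATH"
# Confirm the JDK bin directory is now searched:
case ":$PATH:" in
  *":$JDK_HOME/bin:"*) echo "JDK bin on PATH" ;;
  *)                   echo "JDK bin missing from PATH" ;;
esac
```

Putting the same two lines in the hadoop user's ~/.bashrc on each DN makes the change persistent.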

The 'Getting Started' page on the Hive wiki is a good starting point. It tells you which properties
to override in your hive-site.xml.
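Since the warnings later in this thread all involve mapred.reduce.tasks being treated as final, a minimal hive-site.xml override sketch may help; the paths and values below are illustrative only, not taken from Ajit's cluster:

```xml
<?xml version="1.0"?>
<!-- Minimal hive-site.xml sketch; paths and values are illustrative. -->
<configuration>
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/user/hive/warehouse</value>
  </property>
  <property>
    <!-- Let Hive pick the reducer count per query. Avoid marking
         mapred.* properties as <final> in Hadoop's own config files,
         or Hive's per-query overrides will be ignored. -->
    <name>mapred.reduce.tasks</name>
    <value>-1</value>
  </property>
</configuration>
```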


Regards
Bejoy KS

Sent from handheld, please excuse typos.

-----Original Message-----
From: Ajit Kumar Shreevastava <Ajit.Shreevastava@hcl.com>
Date: Tue, 9 Oct 2012 16:59:20 
To: user@hive.apache.org<user@hive.apache.org>
Reply-To: user@hive.apache.org
Subject: RE: hive mapred problem

Hi Nitin,

Thanks for your reply.

I want the Hive configuration file for a distributed Hadoop setup.

Also, my jps command fails on the data node.

When I run the command on the master node:

[hadoop@NHCLT-PC44-2 hadoop]$ jps
5694 NameNode
5992 SecondaryNameNode
5843 DataNode
6224 TaskTracker
6085 JobTracker
6701 Jps
4785 RunJar

But when I run the same command on the slave node:

[hadoop@NHCLT-PC44-4 hadoop]$ jps
-bash: jps: command not found

Can you please help me in this regard?





-----Original Message-----
From: Nitin Pawar [mailto:nitinpawar432@gmail.com]
Sent: Tuesday, October 09, 2012 4:03 PM
To: user@hive.apache.org
Subject: Re: hive mapred problem



I am not sure what you mean by distributed Hive.

But you can set up a multinode Hadoop cluster using the following link:

http://ankitasblogger.blogspot.in/2011/01/hadoop-cluster-setup.html

If you run into hiccups, feel free to reach me on my email (I don't
want the Hive user list to be bugged with Hadoop installation questions).









On Tue, Oct 9, 2012 at 3:54 PM, Ajit Kumar Shreevastava
<Ajit.Shreevastava@hcl.com> wrote:
> Hi Nitin
>
> Sorry Nitin, I actually mean fully distributed mode (Hadoop on multiple nodes).
> I want configuration files for both Hadoop and Hive.
>
> -----Original Message-----
> From: Nitin Pawar [mailto:nitinpawar432@gmail.com]
> Sent: Tuesday, October 09, 2012 2:46 PM
> To: user@hive.apache.org
> Subject: Re: hive mapred problem
>
> I did not get the distributed mode for Hadoop and Hive question. Can
> you explain exactly what you want to achieve?
>
> Thanks,
> Nitin
>
> On Tue, Oct 9, 2012 at 2:42 PM, Ajit Kumar Shreevastava
> <Ajit.Shreevastava@hcl.com> wrote:

>> Hi Nitin,
>>
>> Thanks for your reply.
>>
>> Now my query is running, but the output is:
>>
>> hive> select count(1) from pokes;
>> Total MapReduce jobs = 1
>> Launching Job 1 out of 1
>> Number of reduce tasks determined at compile time: 1
>> In order to change the average load for a reducer (in bytes):
>>   set hive.exec.reducers.bytes.per.reducer=<number>
>> In order to limit the maximum number of reducers:
>>   set hive.exec.reducers.max=<number>
>> In order to set a constant number of reducers:
>>   set mapred.reduce.tasks=<number>
>> Starting Job = job_201210091435_0001, Tracking URL =
>> http://NHCLT-PC44-2:50030/jobdetails.jsp?jobid=job_201210091435_0001
>> Kill Command = /home/hadoop/hadoop-1.0.3/bin/hadoop job  -kill job_201210091435_0001
>> Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 2
>> 2012-10-09 14:37:14,587 Stage-1 map = 0%,  reduce = 0%
>> 2012-10-09 14:37:20,609 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.47 sec
>> 2012-10-09 14:37:21,613 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.47 sec
>> 2012-10-09 14:37:22,620 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.47 sec
>> 2012-10-09 14:37:23,625 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.47 sec
>> 2012-10-09 14:37:24,630 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.47 sec
>> 2012-10-09 14:37:25,634 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.47 sec
>> 2012-10-09 14:37:26,638 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.47 sec
>> 2012-10-09 14:37:27,642 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.47 sec
>> 2012-10-09 14:37:28,650 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.47 sec
>> 2012-10-09 14:37:29,654 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.47 sec
>> 2012-10-09 14:37:30,658 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.47 sec
>> 2012-10-09 14:37:31,662 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.47 sec
>> 2012-10-09 14:37:32,667 Stage-1 map = 100%,  reduce = 50%, Cumulative CPU 1.66 sec
>> 2012-10-09 14:37:33,672 Stage-1 map = 100%,  reduce = 50%, Cumulative CPU 1.66 sec
>> 2012-10-09 14:37:34,678 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 3.0 sec
>> 2012-10-09 14:37:35,682 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 3.0 sec
>> 2012-10-09 14:37:36,686 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 3.0 sec
>> 2012-10-09 14:37:37,690 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 3.0 sec
>> 2012-10-09 14:37:38,694 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 3.0 sec
>> 2012-10-09 14:37:39,698 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 3.0 sec
>> 2012-10-09 14:37:40,702 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 3.0 sec
>> MapReduce Total cumulative CPU time: 3 seconds 0 msec
>> Ended Job = job_201210091435_0001
>> MapReduce Jobs Launched:
>> Job 0: Map: 1  Reduce: 2   Cumulative CPU: 3.0 sec   HDFS Read: 6034 HDFS Write: 6 SUCCESS
>> Total MapReduce CPU Time Spent: 3 seconds 0 msec
>> OK
>> 500
>> 0
>> Time taken: 35.161 seconds
>>
>> Can you do me a favor? I want configuration file templates for
>> distributed mode for both Hadoop and Hive.
>>
>> Regards
>> Ajit
>>

>> -----Original Message-----
>> From: Nitin Pawar [mailto:nitinpawar432@gmail.com]
>> Sent: Monday, October 08, 2012 5:52 PM
>> To: user@hive.apache.org
>> Subject: Re: hive mapred problem
>>
>> From the error it looks like you have some incorrect Hive settings
>> which are failing the job initialization.
>>
>> This is the error:
>>
>>> java.io.IOException: Number of maps in JobConf doesn't match number of
>>> recieved splits for job job_201210051717_0015! numMapTasks=10
>>
>> Can you tell us if you are setting any Hive variables before firing up
>> the query? Something like split size or number of maps, etc.
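The variables Nitin mentions can be checked from the Hive CLI before submitting the query. A sketch using standard Hadoop 1.x property names; the diagnosis in the comments is one plausible reading of the error, not something confirmed in this thread:

```sql
-- From the Hive CLI: print the effective values before submitting the job.
set mapred.map.tasks;        -- a forced map count (e.g. 10) can conflict
set mapred.min.split.size;   -- with the number of splits actually computed
set mapred.max.split.size;
-- If mapred.map.tasks is pinned (for instance marked <final> in
-- mapred-site.xml), the JobConf map count and the computed split count
-- can disagree, producing this IOException at job initialization.
```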

>>
>> On Mon, Oct 8, 2012 at 5:30 PM, Ajit Kumar Shreevastava
>> <Ajit.Shreevastava@hcl.com> wrote:

>>

>>> Hi Nitin,
>>>
>>> Job tracker log:
>>>
>>> 2012-10-08 15:43:29,331 WARN org.apache.hadoop.conf.Configuration:
>>> /usr/local/hadoopstorage/mapred/local/jobTracker/job_201210051717_0015.xml:a
>>> attempt to override final parameter: mapred.reduce.tasks;  Ignoring.
>>> 2012-10-08 15:43:29,331 WARN org.apache.hadoop.conf.Configuration:
>>> /usr/local/hadoopstorage/mapred/local/jobTracker/job_201210051717_0015.xml:a
>>> attempt to override final parameter: mapred.map.tasks;  Ignoring.
>>> 2012-10-08 15:43:29,332 INFO org.apache.hadoop.mapred.JobInProgress:
>>> job_201210051717_0015: nMaps=10 nReduces=2 max=-1
>>> 2012-10-08 15:43:29,332 INFO org.apache.hadoop.mapred.JobTracker: Job
>>> job_201210051717_0015 added successfully for user 'hadoop' to queue 'default'
>>> 2012-10-08 15:43:29,332 INFO org.apache.hadoop.mapred.AuditLogger:
>>> USER=hadoop  IP=10.99.42.9   OPERATION=SUBMIT_JOB TARGET=job_201210051717_0015    RESULT=SUCCESS
>>> 2012-10-08 15:43:29,332 INFO org.apache.hadoop.mapred.JobTracker: Initializing job_201210051717_0015
>>> 2012-10-08 15:43:29,332 INFO org.apache.hadoop.mapred.JobInProgress: Initializing job_201210051717_0015
>>> 2012-10-08 15:43:29,350 INFO org.apache.hadoop.mapred.JobInProgress:
>>> jobToken generated and stored with users keys in
>>> /usr/local/hadoopstorage/mapred/system/job_201210051717_0015/jobToken
>>> 2012-10-08 15:43:29,352 ERROR org.apache.hadoop.mapred.JobTracker: Job initialization failed:
>>> java.io.IOException: Number of maps in JobConf doesn't match number of
>>> recieved splits for job job_201210051717_0015! numMapTasks=10, #splits=1
>>>         at org.apache.hadoop.mapred.JobInProgress.initTasks(JobInProgress.java:703)
>>>         at org.apache.hadoop.mapred.JobTracker.initJob(JobTracker.java:4207)
>>>         at org.apache.hadoop.mapred.EagerTaskInitializationListener$InitJob.run(EagerTaskInitializationListener.java:79)
>>>         at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>         at java.lang.Thread.run(Thread.java:662)
>>> 2012-10-08 15:43:29,352 INFO org.apache.hadoop.mapred.JobTracker: Failing job job_201210051717_0015
>>> 2012-10-08 15:43:29,352 INFO org.apache.hadoop.mapred.JobInProgress$JobSummary:
>>> jobId=job_201210051717_0015,submitTime=1349691209316,launchTime=0,,finishTime=1349691209352,numMaps=0,numSlotsPerMap=1,numReduces=0,numSlotsPerReduce=1,user=hadoop,queue=default,status=FAILED,mapSlotSeconds=0,reduceSlotsSeconds=0,clusterMapCapacity=14,clusterReduceCapacity=14,jobName=select count(1) from pokes(Stage-1)
>>> 2012-10-08 15:43:29,353 INFO org.apache.hadoop.mapred.JobHistory: Moving
>>> file:/var/log/hadoop/history/job_201210051717_0015_1349691209316_hadoop_select+count%281%29+from+pokes%28Stage-1%29 to
>>> file:/var/log/hadoop/history/done/version-1/NHCLT-PC44-2_1349437647390_/2012/10/08/000000
>>> 2012-10-08 15:43:29,356 INFO org.apache.hadoop.mapred.JobHistory: Moving
>>> file:/var/log/hadoop/history/job_201210051717_0015_conf.xml to
>>> file:/var/log/hadoop/history/done/version-1/NHCLT-PC44-2_1349437647390_/2012/10/08/000000
>>> 2012-10-08 15:43:30,362 INFO org.apache.hadoop.mapred.JobTracker: Killing job job_201210051717_0015
>>> 2012-10-08 17:17:35,641 INFO org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
>>> Updating the current master key for generating delegation tokens
>>

>>

>>> and hive log:
>>>
>>> 2012-10-08 15:43:08,597 WARN  conf.Configuration (Configuration.java:loadResource(1245)) - file:/tmp/hive-default-1260952199191868105.xml:a attempt to override final parameter: mapred.reduce.tasks;  Ignoring.
>>> 2012-10-08 15:43:08,606 WARN  conf.Configuration (Configuration.java:loadResource(1245)) - file:/home/hadoop/Hive/conf/hive-site.xml:a attempt to override final parameter: mapred.reduce.tasks;  Ignoring.
>>> 2012-10-08 15:43:08,606 WARN  conf.Configuration (Configuration.java:loadResource(1245)) - file:/home/hadoop/Hive/conf/hive-site.xml:a attempt to override final parameter: mapred.reduce.tasks;  Ignoring.
>>> 2012-10-08 15:43:09,000 WARN  conf.Configuration (Configuration.java:loadResource(1245)) - file:/tmp/hive-default-1260952199191868105.xml:a attempt to override final parameter: mapred.reduce.tasks;  Ignoring.
>>> 2012-10-08 15:43:09,004 WARN  conf.Configuration (Configuration.java:loadResource(1245)) - file:/home/hadoop/Hive/conf/hive-site.xml:a attempt to override final parameter: mapred.reduce.tasks;  Ignoring.
>>> 2012-10-08 15:43:09,004 WARN  conf.Configuration (Configuration.java:loadResource(1245)) - file:/home/hadoop/Hive/conf/hive-site.xml:a attempt to override final parameter: mapred.reduce.tasks;  Ignoring.
>>> 2012-10-08 15:43:09,014 WARN  conf.Configuration (Configuration.java:loadResource(1245)) - file:/tmp/hive-default-1260952199191868105.xml:a attempt to override final parameter: mapred.reduce.tasks;  Ignoring.
>>> 2012-10-08 15:43:09,018 WARN  conf.Configuration (Configuration.java:loadResource(1245)) - file:/home/hadoop/Hive/conf/hive-site.xml:a attempt to override final parameter: mapred.reduce.tasks;  Ignoring.
>>> 2012-10-08 15:43:09,018 WARN  conf.Configuration (Configuration.java:loadResource(1245)) - file:/home/hadoop/Hive/conf/hive-site.xml:a attempt to override final parameter: mapred.reduce.tasks;  Ignoring.
>>> 2012-10-08 15:43:17,309 WARN  conf.Configuration (Configuration.java:loadResource(1245)) - file:/tmp/hive-default-7458429923592584125.xml:a attempt to override final parameter: mapred.reduce.tasks;  Ignoring.
>>> 2012-10-08 15:43:17,319 WARN  conf.Configuration (Configuration.java:loadResource(1245)) - file:/home/hadoop/Hive/conf/hive-site.xml:a attempt to override final parameter: mapred.reduce.tasks;  Ignoring.
>>> 2012-10-08 15:43:17,320 WARN  conf.Configuration (Configuration.java:loadResource(1245)) - file:/home/hadoop/Hive/conf/hive-site.xml:a attempt to override final parameter: mapred.reduce.tasks;  Ignoring.
>>> 2012-10-08 15:43:17,759 WARN  conf.Configuration (Configuration.java:loadResource(1245)) - file:/tmp/hive-default-7458429923592584125.xml:a attempt to override final parameter: mapred.reduce.tasks;  Ignoring.
>>> 2012-10-08 15:43:17,766 WARN  conf.Configuration (Configuration.java:loadResource(1245)) - file:/home/hadoop/Hive/conf/hive-site.xml:a attempt to override final parameter: mapred.reduce.tasks;  Ignoring.
>>> 2012-10-08 15:43:17,766 WARN  conf.Configuration (Configuration.java:loadResource(1245)) - file:/home/hadoop/Hive/conf/hive-site.xml:a attempt to override final parameter: mapred.reduce.tasks;  Ignoring.
>>> 2012-10-08 15:43:17,783 WARN  conf.Configuration (Configuration.java:loadResource(1245)) - file:/tmp/hive-default-7458429923592584125.xml:a attempt to override final parameter: mapred.reduce.tasks;  Ignoring.
>>> 2012-10-08 15:43:17,786 WARN  conf.Configuration (Configuration.java:loadResource(1245)) - file:/home/hadoop/Hive/conf/hive-site.xml:a attempt to override final parameter: mapred.reduce.tasks;  Ignoring.
>>> 2012-10-08 15:43:17,787 WARN  conf.Configuration (Configuration.java:loadResource(1245)) - file:/home/hadoop/Hive/conf/hive-site.xml:a attempt to override final parameter: mapred.reduce.tasks;  Ignoring.
>>> 2012-10-08 15:43:24,119 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.core.resources" but it cannot be resolved.
>>> 2012-10-08 15:43:24,119 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.core.resources" but it cannot be resolved.
>>> 2012-10-08 15:43:24,119 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.core.runtime" but it cannot be resolved.
>>> 2012-10-08 15:43:24,119 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.core.runtime" but it cannot be resolved.
>>> 2012-10-08 15:43:24,120 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.text" but it cannot be resolved.
>>> 2012-10-08 15:43:24,120 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.text" but it cannot be resolved.
>>> 2012-10-08 15:43:24,734 WARN  conf.Configuration (Configuration.java:loadResource(1245)) - file:/tmp/hive-default-7458429923592584125.xml:a attempt to override final parameter: mapred.reduce.tasks;  Ignoring.
>>> 2012-10-08 15:43:24,750 WARN  conf.Configuration (Configuration.java:loadResource(1245)) - file:/home/hadoop/Hive/conf/hive-site.xml:a attempt to override final parameter: mapred.reduce.tasks;  Ignoring.
>>> 2012-10-08 15:43:24,751 WARN  conf.Configuration (Configuration.java:loadResource(1245)) - file:/home/hadoop/Hive/conf/hive-site.xml:a attempt to override final parameter: mapred.reduce.tasks;  Ignoring.
>>> 2012-10-08 15:43:28,535 WARN  conf.Configuration (Configuration.java:loadResource(1245)) - file:/tmp/hive-default-7458429923592584125.xml:a attempt to override final parameter: mapred.reduce.tasks;  Ignoring.
>>> 2012-10-08 15:43:28,537 WARN  conf.Configuration (Configuration.java:loadResource(1245)) - file:/home/hadoop/Hive/conf/hive-site.xml:a attempt to override final parameter: mapred.reduce.tasks;  Ignoring.
>>> 2012-10-08 15:43:28,537 WARN  conf.Configuration (Configuration.java:loadResource(1245)) - file:/home/hadoop/Hive/conf/hive-site.xml:a attempt to override final parameter: mapred.reduce.tasks;  Ignoring.
>>> 2012-10-08 15:43:29,113 WARN  mapred.JobClient (JobClient.java:copyAndConfigureFiles(667)) - Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
>>> 2012-10-08 15:43:29,243 WARN  snappy.LoadSnappy (LoadSnappy.java:<clinit>(46)) - Snappy native library not loaded
>>> 2012-10-08 15:43:30,354 ERROR exec.Task (SessionState.java:printError(400)) - Ended Job = job_201210051717_0015 with errors
>>> 2012-10-08 15:43:30,356 ERROR exec.Task (SessionState.java:printError(400)) - Error during job, obtaining debugging information...
>>> 2012-10-08 15:43:30,369 ERROR ql.Driver (SessionState.java:printError(400)) - FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask
>>>
>>> Regards,
>>> Ajit
>>

>>> -----Original Message-----
>>> From: Nitin Pawar [mailto:nitinpawar432@gmail.com]
>>> Sent: Monday, October 08, 2012 4:19 PM
>>> To: user@hive.apache.org
>>> Subject: Re: hive mapred problem
>>>
>>> Can you provide the job logs and failed task logs?
>>>
>>> On Mon, Oct 8, 2012 at 4:13 PM, Ajit Kumar Shreevastava
>>> <Ajit.Shreevastava@hcl.com> wrote:

>>

>>>

>>

>>>> Hi,
>>>>
>>>> When I run the query "select count(1) from pokes;" it fails with the
>>>> message as below.
>>>>
>>>> hive> select count(1) from pokes;
>>>> Total MapReduce jobs = 1
>>>> Launching Job 1 out of 1
>>>> Number of reduce tasks determined at compile time: 1
>>>> In order to change the average load for a reducer (in bytes):
>>>>   set hive.exec.reducers.bytes.per.reducer=<number>
>>>> In order to limit the maximum number of reducers:
>>>>   set hive.exec.reducers.max=<number>
>>>> In order to set a constant number of reducers:
>>>>   set mapred.reduce.tasks=<number>
>>>> Starting Job = job_201210051717_0015, Tracking URL =
>>>> http://NHCLT-PC44-2:50030/jobdetails.jsp?jobid=job_201210051717_0015
>>>> Kill Command = /home/hadoop/hadoop-1.0.3/bin/hadoop job  -kill job_201210051717_0015
>>>> Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 0
>>>> 2012-10-08 15:43:30,351 Stage-1 map = 100%,  reduce = 100%
>>>> Ended Job = job_201210051717_0015 with errors
>>>> Error during job, obtaining debugging information...
>>>> FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask
>>>> MapReduce Jobs Launched:
>>>> Job 0:  HDFS Read: 0 HDFS Write: 0 FAIL
>>>> Total MapReduce CPU Time Spent: 0 msec
>>>>
>>>> Thanks and Regards
>>>> Ajit Kumar Shreevastava
>>>> ADCOE (App Development Center Of Excellence)
>>>> Mobile: 9717775634


>>
>>> --
>>> Nitin Pawar
>>
>> --
>> Nitin Pawar
>
> --
> Nitin Pawar

--
Nitin Pawar
