hadoop-common-user mailing list archives

From anil gupta <anilgupt...@gmail.com>
Subject Re:
Date Sun, 29 Jul 2012 22:47:30 GMT
Seems like you are stuck in the same problem as I am... I am going to
work on changing my conf tomorrow to fix this. How much memory does your
node have?
Check the logs of the NodeManagers. At the bottom of the log file you will
see that the NM is stopping some components (sorry, I can't recall the
exact name).
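Something like this is what I have in mind (a sketch only -- the log path
below is my guess at the packaged-CDH4 default, and the file name includes
the local hostname, so adjust both for your boxes):

```shell
# Sketch: NodeManager log location assumed from a packaged CDH4 install.
NM_LOG="${NM_LOG:-/var/log/hadoop-yarn/yarn-yarn-nodemanager-$(hostname).log}"

if [ -f "$NM_LOG" ]; then
  # The components the NM shuts down are logged near the end of the file.
  tail -n 200 "$NM_LOG" | grep -iE "stopping|shutdown|error" || true
else
  echo "NodeManager log not found at $NM_LOG"
fi
```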

~Anil

On Sun, Jul 29, 2012 at 1:08 PM, abhiTowson cal
<abhishek.dodda1@gmail.com> wrote:

> Hi anil,
>
> Thanks for the reply. Same as in your case, my pi job is halted and
> there is no progress.
>
> Regards
> Abhishek
>
> On Sun, Jul 29, 2012 at 3:31 PM, anil gupta <anilgupta84@gmail.com> wrote:
> > Hi Abhishek,
> >
> > I didn't mean to ask whether it returns a result or not. I meant that
> > you should check that the classpath is correct. It should have the
> > directories where YARN is installed.
> >
> > ~Anil
> >
> > On Sun, Jul 29, 2012 at 12:23 PM, abhiTowson cal
> > <abhishek.dodda1@gmail.com> wrote:
> >
> >> Hi Anil,
> >>
> >> Thanks. The Hadoop classpath is also working fine.
> >>
> >> Regards
> >> Abhishek
> >>
> >> On Sun, Jul 29, 2012 at 3:20 PM, abhiTowson cal
> >> <abhishek.dodda1@gmail.com> wrote:
> >> > Hi Anil,
> >> >           I am using CDH4 with YARN.
> >> >
> >> > On Sun, Jul 29, 2012 at 3:17 PM, Anil Gupta <anilgupta84@gmail.com>
> >> wrote:
> >> >> Are you using CDH4? In your cluster, are you using YARN or MR1?
> >> >> Check the classpath of Hadoop with the hadoop classpath command.
> >> >>
> >> >> Best Regards,
> >> >> Anil
> >> >>
> >> >> On Jul 29, 2012, at 12:12 PM, abhiTowson cal <
> abhishek.dodda1@gmail.com>
> >> wrote:
> >> >>
> >> >>> HI Anil,
> >> >>>
> >> >>> I have already tried this, but the issue could not be resolved.
> >> >>>
> >> >>> Regards
> >> >>> Abhishek
> >> >>>
> >> >>> On Sun, Jul 29, 2012 at 3:05 PM, anil gupta <anilgupta84@gmail.com>
> >> wrote:
> >> >>>> Hi Abhishek,
> >> >>>>
> >> >>>> Once you have made sure that whatever Harsh said in the previous
> >> >>>> email is present in the cluster, and the job still runs in Local
> >> >>>> Mode, then try running the job with the hadoop --config option.
> >> >>>> Refer to this discussion for more detail:
> >> >>>> https://groups.google.com/a/cloudera.org/forum/#!topic/cdh-user/4tMGfvJFzrg
> >> >>>>
> >> >>>> HTH,
> >> >>>> Anil
> >> >>>>
> >> >>>> On Sun, Jul 29, 2012 at 11:43 AM, Harsh J <harsh@cloudera.com>
> wrote:
> >> >>>>
> >> >>>>> For a job to get submitted to a cluster, you will need proper
> >> >>>>> client configurations. Have you configured your mapred-site.xml
> >> >>>>> and yarn-site.xml properly inside /etc/hadoop/conf/mapred-site.xml
> >> >>>>> and /etc/hadoop/conf/yarn-site.xml at the client node?
> >> >>>>>
> >> >>>>> On Mon, Jul 30, 2012 at 12:00 AM, abhiTowson cal
> >> >>>>> <abhishek.dodda1@gmail.com> wrote:
> >> >>>>>> Hi All,
> >> >>>>>>
> >> >>>>>> I am facing a problem: the job is running in the LocalJobRunner
> >> >>>>>> rather than in the cluster environment. Also, when I run the
> >> >>>>>> job, I am not able to see the job id in the ResourceManager UI.
> >> >>>>>>
> >> >>>>>> Can you please go through the issue and let me know ASAP.
> >> >>>>>>
> >> >>>>>> sudo -u hdfs hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar teragen 1000000 /benchmark/teragen/input
> >> >>>>>> 12/07/29 13:35:59 WARN conf.Configuration: session.id is deprecated. Instead, use dfs.metrics.session-id
> >> >>>>>> 12/07/29 13:35:59 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
> >> >>>>>> 12/07/29 13:35:59 INFO util.NativeCodeLoader: Loaded the native-hadoop library
> >> >>>>>> 12/07/29 13:35:59 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
> >> >>>>>> Generating 1000000 using 1 maps with step of 1000000
> >> >>>>>> 12/07/29 13:35:59 INFO mapred.JobClient: Running job: job_local_0001
> >> >>>>>> 12/07/29 13:35:59 INFO mapred.LocalJobRunner: OutputCommitter set in config null
> >> >>>>>> 12/07/29 13:35:59 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapred.FileOutputCommitter
> >> >>>>>> 12/07/29 13:35:59 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
> >> >>>>>> 12/07/29 13:35:59 INFO util.ProcessTree: setsid exited with exit code 0
> >> >>>>>> 12/07/29 13:35:59 INFO mapred.Task:  Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@47c297a3
> >> >>>>>> 12/07/29 13:36:00 WARN mapreduce.Counters: Counter name MAP_INPUT_BYTES is deprecated. Use FileInputFormatCounters as group name and BYTES_READ as counter name instead
> >> >>>>>> 12/07/29 13:36:00 INFO mapred.MapTask: numReduceTasks: 0
> >> >>>>>> 12/07/29 13:36:00 INFO mapred.JobClient:  map 0% reduce 0%
> >> >>>>>> 12/07/29 13:36:01 INFO mapred.Task: Task:attempt_local_0001_m_000000_0 is done. And is in the process of commiting
> >> >>>>>> 12/07/29 13:36:01 INFO mapred.LocalJobRunner:
> >> >>>>>> 12/07/29 13:36:01 INFO mapred.Task: Task attempt_local_0001_m_000000_0 is allowed to commit now
> >> >>>>>> 12/07/29 13:36:01 INFO mapred.FileOutputCommitter: Saved output of task 'attempt_local_0001_m_000000_0' to hdfs://hadoop-master-1/benchmark/teragen/input
> >> >>>>>> 12/07/29 13:36:01 INFO mapred.LocalJobRunner:
> >> >>>>>> 12/07/29 13:36:01 INFO mapred.Task: Task 'attempt_local_0001_m_000000_0' done.
> >> >>>>>> 12/07/29 13:36:02 INFO mapred.JobClient:  map 100% reduce 0%
> >> >>>>>> 12/07/29 13:36:02 INFO mapred.JobClient: Job complete: job_local_0001
> >> >>>>>> 12/07/29 13:36:02 INFO mapred.JobClient: Counters: 19
> >> >>>>>> 12/07/29 13:36:02 INFO mapred.JobClient:   File System Counters
> >> >>>>>> 12/07/29 13:36:02 INFO mapred.JobClient:     FILE: Number of bytes read=142686
> >> >>>>>> 12/07/29 13:36:02 INFO mapred.JobClient:     FILE: Number of bytes written=220956
> >> >>>>>> 12/07/29 13:36:02 INFO mapred.JobClient:     FILE: Number of read operations=0
> >> >>>>>> 12/07/29 13:36:02 INFO mapred.JobClient:     FILE: Number of large read operations=0
> >> >>>>>> 12/07/29 13:36:02 INFO mapred.JobClient:     FILE: Number of write operations=0
> >> >>>>>> 12/07/29 13:36:02 INFO mapred.JobClient:     HDFS: Number of bytes read=0
> >> >>>>>> 12/07/29 13:36:02 INFO mapred.JobClient:     HDFS: Number of bytes written=100000000
> >> >>>>>> 12/07/29 13:36:02 INFO mapred.JobClient:     HDFS: Number of read operations=1
> >> >>>>>> 12/07/29 13:36:02 INFO mapred.JobClient:     HDFS: Number of large read operations=0
> >> >>>>>> 12/07/29 13:36:02 INFO mapred.JobClient:     HDFS: Number of write operations=2
> >> >>>>>> 12/07/29 13:36:02 INFO mapred.JobClient:   Map-Reduce Framework
> >> >>>>>> 12/07/29 13:36:02 INFO mapred.JobClient:     Map input records=1000000
> >> >>>>>> 12/07/29 13:36:02 INFO mapred.JobClient:     Map output records=1000000
> >> >>>>>> 12/07/29 13:36:02 INFO mapred.JobClient:     Input split bytes=82
> >> >>>>>> 12/07/29 13:36:02 INFO mapred.JobClient:     Spilled Records=0
> >> >>>>>> 12/07/29 13:36:02 INFO mapred.JobClient:     CPU time spent (ms)=0
> >> >>>>>> 12/07/29 13:36:02 INFO mapred.JobClient:     Physical memory (bytes) snapshot=0
> >> >>>>>> 12/07/29 13:36:02 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=0
> >> >>>>>> 12/07/29 13:36:02 INFO mapred.JobClient:     Total committed heap usage (bytes)=124715008
> >> >>>>>> 12/07/29 13:36:02 INFO mapred.JobClient: org.apache.hadoop.mapreduce.lib.input.FileInputFormatCounter
> >> >>>>>>
> >> >>>>>> Regards
> >> >>>>>> Abhishek
> >> >>>>>
> >> >>>>>
> >> >>>>>
> >> >>>>> --
> >> >>>>> Harsh J
> >> >>>>>
> >> >>>>
> >> >>>>
> >> >>>>
> >> >>>> --
> >> >>>> Thanks & Regards,
> >> >>>> Anil Gupta
> >>
> >
> >
> >
> > --
> > Thanks & Regards,
> > Anil Gupta
>
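
For reference, and speaking only from my own CDH4 setup (so treat this as
a sketch, not gospel): the property that decides local vs. cluster
execution is mapreduce.framework.name. If it is absent from the client's
/etc/hadoop/conf/mapred-site.xml, the JobClient falls back to the
LocalJobRunner, which matches the job_local_0001 in your output. A minimal
client-side mapred-site.xml would look like:

```xml
<?xml version="1.0"?>
<configuration>
  <!-- Without this set to "yarn", the client defaults to the local runner. -->
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
```

yarn-site.xml on the same client should likewise point
yarn.resourcemanager.address at your ResourceManager host (hostname and
port depend on your cluster) so that submissions actually reach the
cluster.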



-- 
Thanks & Regards,
Anil Gupta
