From: Shekhar Sharma <shekhar2581@gmail.com>
Date: Tue, 17 Dec 2013 20:43:17 +0530
Subject: Re: XmlInputFormat Hadoop - Mapreduce
To: user@hadoop.apache.org

Hello Ranjini,

PFA the source code for the XML input format. Also find the output and the
input which I have used.

ATTACHED FILES DESCRIPTION:

(1) emp.xml --> input data for testing
(2) emp_op.tar.gz --> output: the results of the map-only job (I have set
the number of reducers = 0)
(3) src.tar --> the source files (please create a project in Eclipse and
paste the files in; the code is written with the appropriate package and
source folder)

RUNNING THE JOB:

hadoop jar xml.jar com.xg.hadoop.training.mr.MyDriver -D START_TAG=\<Employee\> -D END_TAG=\</Employee\> emp op

Explanation of the above command:

(1) xml.jar is the jar name, which we create either through Eclipse or
Maven or Ant.
(2) com.xg.hadoop.training.mr.MyDriver is the fully qualified driver class
name.
It means that MyDriver resides under the package com.xg.hadoop.training.mr.
(3) A plain -D START_TAG=<Employee> will not work, because it will treat
Employee as an input directory, which is not the case. Therefore you need
to escape the tags, and that is why it is written as
-D START_TAG=\<Employee\>; you can very well see that the two angular
brackets are escaped. The same explanation goes for -D END_TAG.
(4) emp is the input data, which is present on HDFS.
(5) op is the output directory, which will be created as part of the
MapReduce job.

NOTE: The number of reducers is explicitly set to ZERO, so this MapReduce
job will always run zero reduce tasks. You need to change the driver code
if you want reducers.

Hope this helps and you will be able to solve your problem. In case you
face any difficulty, please feel free to contact me.
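For reference, the driver wiring that this command implies can be sketched
as below. This is only an illustration, not the contents of the attached
src.tar: it assumes the driver implements Tool (so that GenericOptionsParser
consumes the -D options), forwards START_TAG/END_TAG into the
xmlinput.start/xmlinput.end configuration keys, and reuses the
XmlInputFormatNew/MyParserMapper class names from the code quoted further
below for concreteness.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class MyDriver extends Configured implements Tool {

    @Override
    public int run(String[] args) throws Exception {
        // ToolRunner/GenericOptionsParser has already folded the
        // -D START_TAG=... and -D END_TAG=... options into the Configuration,
        // so they can be read back here (this sketch assumes both were supplied).
        Configuration conf = getConf();
        conf.set("xmlinput.start", conf.get("START_TAG"));
        conf.set("xmlinput.end", conf.get("END_TAG"));

        Job job = new Job(conf, "xml parsing");
        job.setJarByClass(MyDriver.class);
        job.setMapperClass(MyParserMapper.class);
        job.setNumReduceTasks(0);                    // map-only job, as noted above
        job.setInputFormatClass(XmlInputFormatNew.class);
        job.setOutputKeyClass(NullWritable.class);
        job.setOutputValueClass(Text.class);
        FileInputFormat.setInputPaths(job, new Path(args[0]));   // emp
        FileOutputFormat.setOutputPath(job, new Path(args[1]));  // op
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new Configuration(), new MyDriver(), args));
    }
}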
Regards,
Som Shekhar Sharma
+91-8197243810


On Tue, Dec 17, 2013 at 5:42 PM, Ranjini Rathinam wrote:
> Hi,
>
> I have attached the code. Please verify.
>
> Please suggest. I am using the hadoop 0.20 version.
>
> import java.io.IOException;
> import java.util.logging.Level;
> import java.util.logging.Logger;
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
> import org.apache.hadoop.io.NullWritable;
> import org.apache.hadoop.io.Text;
> import org.apache.hadoop.mapreduce.Job;
> import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
> import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
> import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
> //import org.apache.hadoop.mapreduce.lib.input.XmlInputFormat;
>
> public class ParserDriverMain {
>
>     public static void main(String[] args) {
>         try {
>             runJob(args[0], args[1]);
>         } catch (IOException ex) {
>             Logger.getLogger(ParserDriverMain.class.getName()).log(Level.SEVERE, null, ex);
>         }
>     }
>
>     // The code is mostly self-explanatory. You need to define the starting
>     // and ending tag to split a record from the xml file, and they can be
>     // defined in the following lines:
>     //
>     // conf.set("xmlinput.start", "");
>     // conf.set("xmlinput.end", "");
>
>     public static void runJob(String input, String output) throws IOException {
>
>         Configuration conf = new Configuration();
>
>         conf.set("xmlinput.start", "");
>         conf.set("xmlinput.end", "");
>         conf.set("io.serializations", "org.apache.hadoop.io.serializer.JavaSerialization,org.apache.hadoop.io.serializer.WritableSerialization");
>
>         Job job = new Job(conf, "jobName");
>
>         input = "/user/hduser/Ran/";
>         output = "/user/task/Sales/";
>         FileInputFormat.setInputPaths(job, input);
>         job.setJarByClass(ParserDriverMain.class);
>         job.setMapperClass(MyParserMapper.class);
>         job.setNumReduceTasks(1);
>         job.setInputFormatClass(XmlInputFormatNew.class);
>         job.setOutputKeyClass(NullWritable.class);
>         job.setOutputValueClass(Text.class);
>         Path outPath = new Path(output);
>         FileOutputFormat.setOutputPath(job, outPath);
>         FileSystem dfs = FileSystem.get(outPath.toUri(), conf);
>         if (dfs.exists(outPath)) {
>             dfs.delete(outPath, true);
>         }
>
>         try {
>             job.waitForCompletion(true);
>         } catch (InterruptedException ex) {
>             Logger.getLogger(ParserDriverMain.class.getName()).log(Level.SEVERE, null, ex);
>         } catch (ClassNotFoundException ex) {
>             Logger.getLogger(ParserDriverMain.class.getName()).log(Level.SEVERE, null, ex);
>         }
>     }
> }
>
>
> import java.io.IOException;
> import java.util.logging.Level;
> import java.util.logging.Logger;
> import org.apache.hadoop.io.LongWritable;
> import org.apache.hadoop.io.NullWritable;
> import org.apache.hadoop.io.Text;
> import org.apache.hadoop.mapreduce.Mapper;
> import org.jdom.Document;
> import org.jdom.Element;
> import org.jdom.JDOMException;
> import org.jdom.input.SAXBuilder;
> import java.io.Reader;
> import java.io.StringReader;
>
> /**
>  * @author root
>  */
> public class MyParserMapper extends Mapper<LongWritable, Text, NullWritable, Text> {
>
>     @Override
>     public void map(LongWritable key, Text value1, Context context)
>             throws IOException, InterruptedException {
>
>         String xmlString = value1.toString();
>         System.out.println("xmlString====" + xmlString);
>         SAXBuilder builder = new SAXBuilder();
>         Reader in = new StringReader(xmlString);
>         String value = "";
>         try {
>             Document doc = builder.build(in);
>             Element root = doc.getRootElement();
>
>             // String tag1 = root.getChild("tag").getChild("tag1").getTextTrim();
>             // String tag2 = root.getChild("tag").getChild("tag1").getChild("tag2").getTextTrim();
>             value = root.getChild("id").getChild("ename").getChild("dept").getChild("sal").getChild("location").getTextTrim();
>             context.write(NullWritable.get(), new Text(value));
>         } catch (JDOMException ex) {
>             Logger.getLogger(MyParserMapper.class.getName()).log(Level.SEVERE, null, ex);
>         } catch (IOException ex) {
>             Logger.getLogger(MyParserMapper.class.getName()).log(Level.SEVERE, null, ex);
>         }
>     }
> }
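One detail worth flagging in the mapper above: the chained
root.getChild("id").getChild("ename")... call only works if each element is
nested inside the previous one. If the fields are siblings directly under
the record root, as they are in the attached emp.xml, getChild on a sibling
returns null and the chain throws a NullPointerException; each child then
has to be read off the root separately. A minimal JDOM 1.x sketch of the
sibling case, using the emp.xml field names (the class name JdomSiblingDemo
is illustrative, not part of the posted code):

import java.io.StringReader;
import org.jdom.Element;
import org.jdom.input.SAXBuilder;

public class JdomSiblingDemo {
    public static void main(String[] args) throws Exception {
        // One record as carved out by the XML input format: the fields are
        // siblings of the root element, not nested inside each other.
        String record = "<Employee><fname>som</fname><lname>shekhar</lname></Employee>";

        Element root = new SAXBuilder()
                .build(new StringReader(record))
                .getRootElement();

        // Read each field directly off the root; getChildText returns null
        // when the element is absent, so guard optional fields.
        String fname = root.getChildText("fname");
        String mname = root.getChildText("mname"); // optional, may be null
        String lname = root.getChildText("lname");

        System.out.println(fname + " " + (mname == null ? "" : mname + " ") + lname);
    }
}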
>
> import java.io.IOException;
> import org.apache.hadoop.fs.FSDataInputStream;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
> import org.apache.hadoop.io.DataOutputBuffer;
> import org.apache.hadoop.io.LongWritable;
> import org.apache.hadoop.io.Text;
> import org.apache.hadoop.mapreduce.Job;
> import org.apache.hadoop.mapreduce.Mapper;
> import org.apache.hadoop.mapreduce.Reducer;
> import org.apache.hadoop.mapreduce.InputSplit;
> import org.apache.hadoop.mapreduce.RecordReader;
> import org.apache.hadoop.mapreduce.TaskAttemptContext;
> import org.apache.hadoop.mapreduce.TaskAttemptID;
> import org.apache.hadoop.mapreduce.lib.input.FileSplit;
> import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
> import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
> import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
> import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
>
> /**
>  * Reads records that are delimited by a specific begin/end tag.
>  */
> public class XmlInputFormatNew extends TextInputFormat {
>
>     public static final String START_TAG_KEY = "xmlinput.start";
>     public static final String END_TAG_KEY = "xmlinput.end";
>
>     @Override
>     public RecordReader<LongWritable, Text> createRecordReader(InputSplit is, TaskAttemptContext tac) {
>         return new XmlRecordReader();
>     }
>
>     public static class XmlRecordReader extends RecordReader<LongWritable, Text> {
>         private byte[] startTag;
>         private byte[] endTag;
>         private long start;
>         private long end;
>         private FSDataInputStream fsin;
>         private DataOutputBuffer buffer = new DataOutputBuffer();
>         private LongWritable key = new LongWritable();
>         private Text value = new Text();
>
>         @Override
>         public void initialize(InputSplit is, TaskAttemptContext tac) throws IOException, InterruptedException {
>             FileSplit fileSplit = (FileSplit) is;
>             startTag = tac.getConfiguration().get(START_TAG_KEY).getBytes("utf-8");
>             endTag = tac.getConfiguration().get(END_TAG_KEY).getBytes("utf-8");
>
>             start = fileSplit.getStart();
>             end = start + fileSplit.getLength();
>             Path file = fileSplit.getPath();
>
>             FileSystem fs = file.getFileSystem(tac.getConfiguration());
>             fsin = fs.open(fileSplit.getPath());
>             fsin.seek(start);
>         }
>
>         @Override
>         public boolean nextKeyValue() throws IOException, InterruptedException {
>             if (fsin.getPos() < end) {
>                 if (readUntilMatch(startTag, false)) {
>                     try {
>                         buffer.write(startTag);
>                         if (readUntilMatch(endTag, true)) {
>                             value.set(buffer.getData(), 0, buffer.getLength());
>                             key.set(fsin.getPos());
>                             return true;
>                         }
>                     } finally {
>                         buffer.reset();
>                     }
>                 }
>             }
>             return false;
>         }
>
>         @Override
>         public LongWritable getCurrentKey() throws IOException, InterruptedException {
>             return key;
>         }
>
>         @Override
>         public Text getCurrentValue() throws IOException, InterruptedException {
>             return value;
>         }
>
>         @Override
>         public float getProgress() throws IOException, InterruptedException {
>             return (fsin.getPos() - start) / (float) (end - start);
>         }
>
>         @Override
>         public void close() throws IOException {
>             fsin.close();
>         }
>
>         private boolean readUntilMatch(byte[] match, boolean withinBlock) throws IOException {
>             int i = 0;
>             while (true) {
>                 int b = fsin.read();
>                 // end of file:
>                 if (b == -1) return false;
>                 // save to buffer:
>                 if (withinBlock) buffer.write(b);
>                 // check if we're matching:
>                 if (b == match[i]) {
>                     i++;
>                     if (i >= match.length) return true;
>                 } else i = 0;
>                 // see if we've passed the stop point:
>                 if (!withinBlock && i == 0 && fsin.getPos() >= end) return false;
>             }
>         }
>     }
> }
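To make the readUntilMatch loop above easier to trace, here is a
self-contained sketch of the same byte-at-a-time tag matching over an
in-memory stream, using the <Employee> tags from the attached emp.xml.
TagScanDemo and its helper are illustrative names, not part of the posted
class; the matching logic mirrors the reader above:

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class TagScanDemo {

    // Same idea as XmlRecordReader.readUntilMatch: consume bytes until the
    // "match" sequence has been seen in full; optionally copy every byte read.
    static boolean readUntilMatch(InputStream in, byte[] match,
                                  ByteArrayOutputStream buffer) throws IOException {
        int i = 0;
        int b;
        while ((b = in.read()) != -1) {
            if (buffer != null) buffer.write(b);
            if (b == match[i]) {
                if (++i >= match.length) return true; // full tag matched
            } else {
                i = 0; // mismatch: restart the tag match from scratch
            }
        }
        return false; // stream ended before the tag completed
    }

    public static void main(String[] args) throws IOException {
        byte[] data = "junk<Employee><fname>som</fname></Employee>junk".getBytes("utf-8");
        InputStream in = new ByteArrayInputStream(data);
        byte[] startTag = "<Employee>".getBytes("utf-8");
        byte[] endTag = "</Employee>".getBytes("utf-8");

        if (readUntilMatch(in, startTag, null)) {          // skip ahead to the start tag
            ByteArrayOutputStream record = new ByteArrayOutputStream();
            record.write(startTag);                        // start tag was consumed, re-add it
            if (readUntilMatch(in, endTag, record)) {      // copy bytes up to and including the end tag
                System.out.println(record.toString("utf-8"));
                // prints: <Employee><fname>som</fname></Employee>
            }
        }
    }
}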
>
> Then also the following error has occurred, please help.
>
> hduser@localhost:~$ hadoop jar xml.jar ParserDriverMain Ran Sales
> 13/12/17 15:02:01 WARN mapred.JobClient: Use GenericOptionsParser for
> parsing the arguments. Applications should implement Tool for the same.
> 13/12/17 15:02:01 INFO input.FileInputFormat: Total input paths to process : 1
> 13/12/17 15:02:01 INFO mapred.JobClient: Running job: job_201312161706_0021
> 13/12/17 15:02:02 INFO mapred.JobClient:  map 0% reduce 0%
> 13/12/17 15:02:12 INFO mapred.JobClient: Task Id : attempt_201312161706_0021_m_000000_0, Status : FAILED
> Error: Found class org.apache.hadoop.mapreduce.TaskAttemptContext, but interface was expected
> 13/12/17 15:02:18 INFO mapred.JobClient: Task Id : attempt_201312161706_0021_m_000000_1, Status : FAILED
> Error: Found class org.apache.hadoop.mapreduce.TaskAttemptContext, but interface was expected
> 13/12/17 15:02:24 INFO mapred.JobClient: Task Id : attempt_201312161706_0021_m_000000_2, Status : FAILED
> Error: Found class org.apache.hadoop.mapreduce.TaskAttemptContext, but interface was expected
> 13/12/17 15:02:33 INFO mapred.JobClient: Job complete: job_201312161706_0021
> 13/12/17 15:02:33 INFO mapred.JobClient: Counters: 3
> 13/12/17 15:02:33 INFO mapred.JobClient:   Job Counters
> 13/12/17 15:02:33 INFO mapred.JobClient:     Launched map tasks=4
> 13/12/17 15:02:33 INFO mapred.JobClient:     Data-local map tasks=4
> 13/12/17 15:02:33 INFO mapred.JobClient:     Failed map tasks=1
> hduser@localhost:~$
>
> Regards
> Ranjini R
>
> On Tue, Dec 17, 2013 at 3:20 PM, unmesha sreeveni wrote:
>>
>> Mine is working properly.
>> Output:
>> 13/12/17 15:18:12 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
>> 13/12/17 15:18:13 WARN conf.Configuration: session.id is deprecated. Instead, use dfs.metrics.session-id
>> 13/12/17 15:18:13 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
>> 13/12/17 15:18:13 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
>> 13/12/17 15:18:13 WARN mapred.JobClient: No job jar file set. User classes may not be found. See JobConf(Class) or JobConf#setJar(String).
>> 13/12/17 15:18:13 INFO input.FileInputFormat: Total input paths to process : 1
>> 13/12/17 15:18:13 INFO mapred.LocalJobRunner: OutputCommitter set in config null
>> 13/12/17 15:18:13 INFO mapred.JobClient: Running job: job_local2063093851_0001
>> 13/12/17 15:18:13 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
>> 13/12/17 15:18:13 INFO mapred.LocalJobRunner: Waiting for map tasks
>> 13/12/17 15:18:13 INFO mapred.LocalJobRunner: Starting task: attempt_local2063093851_0001_m_000000_0
>> 13/12/17 15:18:13 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated.
>> Use org.apache.hadoop.mapreduce.TaskCounter instead
>> 13/12/17 15:18:13 INFO util.ProcessTree: setsid exited with exit code 0
>> 13/12/17 15:18:13 INFO mapred.Task:  Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@109c4289
>> 13/12/17 15:18:13 INFO mapred.MapTask: Processing split: file:/home/sreeveni/myfiles/xml/conf:0+217
>> 13/12/17 15:18:13 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
>> 13/12/17 15:18:13 INFO mapred.MapTask: io.sort.mb = 100
>> 13/12/17 15:18:13 INFO mapred.MapTask: data buffer = 79691776/99614720
>> 13/12/17 15:18:13 INFO mapred.MapTask: record buffer = 262144/327680
>> <property>
>> <name>dfs.replication</name>
>> <value>1</value>
>> </property>
>> <property>
>> <name>dfs</name>
>> <value>2</value>
>> </property>
>> 13/12/17 15:18:13 INFO mapred.LocalJobRunner:
>> 13/12/17 15:18:13 INFO mapred.MapTask: Starting flush of map output
>> 13/12/17 15:18:13 INFO mapred.MapTask: Finished spill 0
>> 13/12/17 15:18:13 INFO mapred.Task: Task:attempt_local2063093851_0001_m_000000_0 is done. And is in the process of commiting
>> 13/12/17 15:18:13 INFO mapred.LocalJobRunner:
>> 13/12/17 15:18:13 INFO mapred.Task: Task 'attempt_local2063093851_0001_m_000000_0' done.
>> 13/12/17 15:18:13 INFO mapred.LocalJobRunner: Finishing task: attempt_local2063093851_0001_m_000000_0
>> 13/12/17 15:18:13 INFO mapred.LocalJobRunner: Map task executor complete.
>> 13/12/17 15:18:13 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
>> 13/12/17 15:18:13 INFO mapred.Task:  Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@1bf54903
>> 13/12/17 15:18:13 INFO mapred.LocalJobRunner:
>> 13/12/17 15:18:13 INFO mapred.Merger: Merging 1 sorted segments
>> 13/12/17 15:18:13 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 30 bytes
>> 13/12/17 15:18:13 INFO mapred.LocalJobRunner:
>> 13/12/17 15:18:13 INFO mapred.Task: Task:attempt_local2063093851_0001_r_000000_0 is done. And is in the process of commiting
>> 13/12/17 15:18:13 INFO mapred.LocalJobRunner:
>> 13/12/17 15:18:13 INFO mapred.Task: Task attempt_local2063093851_0001_r_000000_0 is allowed to commit now
>> 13/12/17 15:18:13 INFO output.FileOutputCommitter: Saved output of task 'attempt_local2063093851_0001_r_000000_0' to /home/sreeveni/myfiles/xmlOut
>> 13/12/17 15:18:13 INFO mapred.LocalJobRunner: reduce > reduce
>> 13/12/17 15:18:13 INFO mapred.Task: Task 'attempt_local2063093851_0001_r_000000_0' done.
>> 13/12/17 15:18:14 INFO mapred.JobClient:  map 100% reduce 100%
>> 13/12/17 15:18:14 INFO mapred.JobClient: Job complete: job_local2063093851_0001
>> 13/12/17 15:18:14 INFO mapred.JobClient: Counters: 20
>> 13/12/17 15:18:14 INFO mapred.JobClient:   File System Counters
>> 13/12/17 15:18:14 INFO mapred.JobClient:     FILE: Number of bytes read=780
>> 13/12/17 15:18:14 INFO mapred.JobClient:     FILE: Number of bytes written=185261
>> 13/12/17 15:18:14 INFO mapred.JobClient:     FILE: Number of read operations=0
>> 13/12/17 15:18:14 INFO mapred.JobClient:     FILE: Number of large read operations=0
>> 13/12/17 15:18:14 INFO mapred.JobClient:     FILE: Number of write operations=0
>> 13/12/17 15:18:14 INFO mapred.JobClient:   Map-Reduce Framework
>> 13/12/17 15:18:14 INFO mapred.JobClient:     Map input records=2
>> 13/12/17 15:18:14 INFO mapred.JobClient:     Map output records=2
>> 13/12/17 15:18:14 INFO mapred.JobClient:     Map output bytes=24
>> 13/12/17 15:18:14 INFO mapred.JobClient:     Input split bytes=101
>> 13/12/17 15:18:14 INFO mapred.JobClient:     Combine input records=0
>> 13/12/17 15:18:14 INFO mapred.JobClient:     Combine output records=0
>> 13/12/17 15:18:14 INFO mapred.JobClient:     Reduce input groups=2
>> 13/12/17 15:18:14 INFO mapred.JobClient:     Reduce shuffle bytes=0
>> 13/12/17 15:18:14 INFO mapred.JobClient:     Reduce input records=2
>> 13/12/17 15:18:14 INFO mapred.JobClient:     Reduce output records=4
>> 13/12/17 15:18:14 INFO mapred.JobClient:     Spilled Records=4
>> 13/12/17 15:18:14 INFO mapred.JobClient:     CPU time spent (ms)=0
>> 13/12/17 15:18:14 INFO mapred.JobClient:     Physical memory (bytes) snapshot=0
>> 13/12/17 15:18:14 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=0
>> 13/12/17 15:18:14 INFO mapred.JobClient:     Total committed heap usage (bytes)=446431232
>>
>> Can u list ur jar files?
>>
>> On Tue, Dec 17, 2013 at 3:11 PM, unmesha sreeveni wrote:
>>>
>>> wait let me check out :)
>>>
>>> On Tue, Dec 17, 2013 at 3:09 PM, Ranjini Rathinam wrote:
>>>>
>>>> Hi,
>>>>
>>>> I am trying to process XML via MapReduce, and the output should be in
>>>> text format.
>>>>
>>>> I am using hadoop 0.20.
>>>>
>>>> The following error has occurred with the code from the link provided:
>>>> https://github.com/studhadoop/xmlparsing-hadoop/blob/master/XmlParser11.java
>>>>
>>>> I have used the package org.apache.hadoop.mapreduce.lib only.
>>>>
>>>> Then also the following error has occurred, please help.
>>>>
>>>> hduser@localhost:~$ hadoop jar xml.jar ParserDriverMain Ran Sales
>>>> 13/12/17 15:02:01 WARN mapred.JobClient: Use GenericOptionsParser for
>>>> parsing the arguments. Applications should implement Tool for the same.
>>>> 13/12/17 15:02:01 INFO input.FileInputFormat: Total input paths to process : 1
>>>> 13/12/17 15:02:01 INFO mapred.JobClient: Running job: job_201312161706_0021
>>>> 13/12/17 15:02:02 INFO mapred.JobClient:  map 0% reduce 0%
>>>> 13/12/17 15:02:12 INFO mapred.JobClient: Task Id : attempt_201312161706_0021_m_000000_0, Status : FAILED
>>>> Error: Found class org.apache.hadoop.mapreduce.TaskAttemptContext, but interface was expected
>>>> 13/12/17 15:02:18 INFO mapred.JobClient: Task Id : attempt_201312161706_0021_m_000000_1, Status : FAILED
>>>> Error: Found class org.apache.hadoop.mapreduce.TaskAttemptContext, but interface was expected
>>>> 13/12/17 15:02:24 INFO mapred.JobClient: Task Id : attempt_201312161706_0021_m_000000_2, Status : FAILED
>>>> Error: Found class org.apache.hadoop.mapreduce.TaskAttemptContext, but interface was expected
>>>> 13/12/17 15:02:33 INFO mapred.JobClient: Job complete: job_201312161706_0021
>>>> 13/12/17 15:02:33 INFO mapred.JobClient: Counters: 3
>>>> 13/12/17 15:02:33 INFO mapred.JobClient:   Job Counters
>>>> 13/12/17 15:02:33 INFO mapred.JobClient:     Launched map tasks=4
>>>> 13/12/17 15:02:33 INFO mapred.JobClient:     Data-local map tasks=4
>>>> 13/12/17 15:02:33 INFO mapred.JobClient:     Failed map tasks=1
>>>> hduser@localhost:~$
>>>>
>>>> thanks in advance.
>>>>
>>>> Ranjini
>>>
>>> --
>>> Thanks & Regards
>>> Unmesha Sreeveni U.B
>>> Junior Developer
>>
>> --
>> Thanks & Regards
>> Unmesha Sreeveni U.B
>> Junior Developer


Attachment: emp.xml

<Employee>
<fname>som</fname>
<lname>shekhar</lname>
</Employee>

<Employee>
<fname>som</fname>
<mname>sharma</mname>
<lname>shekhar</lname>
</Employee>

<Employee>
<fname>som</fname>
<mname>sharma</mname>
<lname>shekhar</lname>
<dob>25081981</dob>
</Employee>

<Employee>
<fname>som</fname>
<mname>sharma</mname>
<lname>shekhar</lname>
<dob>25081981</dob>
<address>Bangalore</address>
</Employee>

<Employee>
<fname>som</fname>
<mname>sharma</mname>
<lname>shekhar</lname>
<dob>25081981</dob>
<number>8197243810</number>
</Employee>

Attachment: emp_op.tar.gz (binary attachment)

Attachment: src.tar XML_INPUT_FORMAT.gz (binary attachment)