hadoop-hdfs-user mailing list archives

From Chhaya Vishwakarma <Chhaya.Vishwaka...@lntinfotech.com>
Subject RE: XML parsing in Hadoop
Date Thu, 28 Nov 2013 11:40:24 GMT
Thank you all, I am finally able to do it.

Chhaya Vishwakarma

From: Sofia Georgiakaki [mailto:geosofie_tuc@yahoo.com]
Sent: Thursday, November 28, 2013 2:58 PM
To: user@hadoop.apache.org
Subject: Re: XML parsing in Hadoop

Hello Chhaya,

I'm not sure why the job launches 4 map tasks, since your input file's size is 2 MB, which
is less than 1 HDFS block (64 MB by default) - I would expect only 1 mapper to be initialized,
unless you have changed the default HDFS block size.

As I see in the code, you use TextInputFormat.class to read your input file. This means that
your map function will be executed once per line of your input. However, inside your map function
you still read the whole input split:
FileSplit fileSplit = (FileSplit)context.getInputSplit();
This means that if you have many lines in your input (I guess you do), you read the same
input split multiple times, which I suspect is wrong.
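One way to avoid that cost is to do the whole-split parse only on the first map() call and remember that it has been done. A minimal standalone sketch of the guard (the Hadoop types are elided so it runs on its own; the class and method names here are hypothetical, not part of the Hadoop API):

```java
// Sketch: do the expensive whole-split parse once per mapper instance,
// not once per input line. Hadoop types elided; names are hypothetical.
public class XmlSplitParser {
    private boolean parsed = false;
    private int parseCount = 0;

    // Stands in for Mapper.map(LongWritable key, Text value, Context context):
    // TextInputFormat calls this once per line of the split.
    public void map(long byteOffset, String line) {
        if (!parsed) {            // guard: parse the split only once
            parseWholeSplit();
            parsed = true;
        }
        // ... per-line work, if any ...
    }

    private void parseWholeSplit() {
        parseCount++;             // real code would open and parse the file here
    }

    public int parses() {
        return parseCount;
    }
}
```

With this guard, feeding the mapper a thousand lines still triggers exactly one parse instead of a thousand.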
Moreover, you might want to revise the line
if ( colvalue.toString().equalsIgnoreCase(null) ) .
Do you mean
if ( colvalue==null) ?
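The difference can be checked directly: String.equalsIgnoreCase(null) returns false for any non-null receiver, so the original condition can never be true. A tiny demonstration:

```java
public class NullCheckDemo {
    public static void main(String[] args) {
        // equalsIgnoreCase(null) always returns false on a non-null String,
        // so a test like colvalue.toString().equalsIgnoreCase(null) never fires.
        System.out.println("value".equalsIgnoreCase(null));  // false
        // The intended test is an ordinary null comparison:
        String firstChildValue = null;
        System.out.println(firstChildValue == null);         // true
    }
}
```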

I think it would be helpful to read once more about the MapReduce programming model, in order to
better understand when each map & reduce function is executed and how. You can use this
link http://developer.yahoo.com/hadoop/tutorial/module4.html , or the official Apache Hadoop
documentation. This will help you fit your algorithm into the MapReduce paradigm more easily.
If you need further clarifications, I would be happy to help!


On Thursday, November 28, 2013 11:03 AM, Chhaya Vishwakarma <Chhaya.Vishwakarma@lntinfotech.com> wrote:
2 MB file

From: unmesha sreeveni [mailto:unmeshabiju@gmail.com]
Sent: Thursday, November 28, 2013 2:23 PM
To: User Hadoop
Subject: Re: XML parsing in Hadoop

What is the size of your input file?

On Thu, Nov 28, 2013 at 2:17 PM, Chhaya Vishwakarma <Chhaya.Vishwakarma@lntinfotech.com> wrote:

Yes, I have run it without MR and it takes a few seconds to run, so I think it's an MR issue only.
I have a single-node cluster and it's launching 4 map tasks. I am trying with only one file.

Chhaya Vishwakarma

From: Mirko Kämpf [mailto:mirko.kaempf@gmail.com]
Sent: Thursday, November 28, 2013 12:53 PM
To: user@hadoop.apache.org
Subject: Re: XML parsing in Hadoop


Did you run the same code in standalone mode, without the MapReduce framework?
How long does the code in your map() function take standalone?
Compare those two times (t_0 MR mode, t_1 standalone mode) to find out
whether it is an MR issue or something that comes from the XML-parser logic or the data ...
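This timing comparison can be sketched with a small standalone harness that exercises the same JAXP/XPath calls the job uses (a sketch only; the class name is made up, and you would feed it your real 2 MB file instead of the inline sample):

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;

import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

public class StandaloneParseTimer {

    // Same DOM + XPath pipeline as the MapReduce job, minus Hadoop.
    public static long parseAndCount(byte[] xml) throws Exception {
        DocumentBuilder db = DocumentBuilderFactory.newInstance().newDocumentBuilder();
        Document dom = db.parse(new ByteArrayInputStream(xml));
        XPath xpath = XPathFactory.newInstance().newXPath();
        NodeList nodes = (NodeList) xpath.compile("//*").evaluate(dom, XPathConstants.NODESET);
        return nodes.getLength();
    }

    public static void main(String[] args) throws Exception {
        // Placeholder input; replace with the bytes of the actual 2 MB file.
        byte[] xml = "<root><a>1</a><b>2</b></root>".getBytes(StandardCharsets.UTF_8);
        long t0 = System.nanoTime();
        long n = parseAndCount(xml);
        long t1 = System.nanoTime();
        System.out.println(n + " elements parsed in " + (t1 - t0) / 1_000_000 + " ms");
    }
}
```

If this finishes in seconds while the job takes hours, the parser is not the bottleneck and the problem is in how the job invokes it.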

Usually it should not be that slow. But what cluster do you have, how many mappers/reducers,
and how many of these 2 MB files do you have?

Best wishes

2013/11/28 Chhaya Vishwakarma <Chhaya.Vishwakarma@lntinfotech.com>

The code below parses an XML file. The output of the code is correct, but the job takes a long
time to complete: it took 20 hours to parse a 2 MB file.
Kindly suggest what changes could be made to improve the performance.

package xml;

import java.io.IOException;

import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.parsers.ParserConfigurationException;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathExpressionException;
import javax.xml.xpath.XPathFactory;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

import org.apache.log4j.Logger;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;
import org.xml.sax.SAXException;

public class ReadXmlMR {

    static Logger log = Logger.getLogger(ReadXmlMR.class.getName());
    public static String fileName = new String();
    public static Document dom;

    // Old-API configure(); not invoked by the new-API Mapper below.
    public void configure(JobConf job) {
        fileName = job.get("map.input.file");
    }

    public static class Map extends Mapper<LongWritable, Text, Text, Text> {

        public void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            try {
                // Runs once per input line, and re-opens and re-parses the
                // whole split each time.
                FileSplit fileSplit = (FileSplit) context.getInputSplit();
                Configuration conf = context.getConfiguration();

                DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();

                Path file = fileSplit.getPath();
                FileSystem fs = file.getFileSystem(conf);
                FSDataInputStream fstream1 = fs.open(file);

                DocumentBuilder db = dbf.newDocumentBuilder();
                dom = db.parse(fstream1);
                Element docEle = dom.getDocumentElement();

                XPath xpath = XPathFactory.newInstance().newXPath();
                Object result = xpath.compile("//*").evaluate(dom, XPathConstants.NODESET);
                NodeList nodes = (NodeList) result;

                for (int n = 2; n < nodes.getLength(); n++) {
                    Text colvalue = new Text("");
                    Text nodename = new Text(nodes.item(n).getNodeName());
                    try {
                        colvalue = new Text(nodes.item(n).getFirstChild().getNodeValue());
                    } catch (Exception e) {}
                    if (colvalue.toString().equalsIgnoreCase(null)) {
                        continue;
                    }
                    context.write(nodename, colvalue);
                }
            } catch (ParserConfigurationException e) {
                // TODO Auto-generated catch block
            } catch (SAXException e) {
                // TODO Auto-generated catch block
            } catch (XPathExpressionException e) {
                // TODO Auto-generated catch block
            }
        }
    }

    public static void main(String[] args) throws Exception {

        Configuration conf = new Configuration();
        Job job = new Job(conf, "XmlParsing");
        job.setJarByClass(ReadXmlMR.class);
        job.setMapperClass(Map.class);
        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        job.waitForCompletion(true);
    }
}

Chhaya Vishwakarma


Thanks & Regards

Unmesha Sreeveni U.B
Junior Developer
