Subject: Re: java.lang.OutOfMemoryError: Java heap space
From: Cheolsoo Park <piaozhexiu@gmail.com>
To: user@hadoop.apache.org
Cc: user@pig.apache.org
Date: Fri, 7 Feb 2014 11:37:01 -0500

Hi Prav,

You're thinking correctly, and it's true that Pig bags are spillable. However, spilling is no magic, meaning you can still run into OOM with huge bags like you have here. Pig runs the Spillable Memory Manager (SMM) in a separate thread. When spilling is triggered, the SMM locks the bags it's trying to spill to disk. After the spilling is finished, GC frees up the memory. The problem is that more bags may be loaded into memory while the spilling is still in progress. The JVM then triggers GC, but GC cannot free up memory because the SMM is still holding locks on those bags, resulting in an OOM error. This happens quite often.

It sounds like you do the group-by to reduce the number of rows before the join and don't immediately run any aggregation function on the grouped bags. If that's the case, can you compress those bags? For example, you could add a foreach after the group-by and run a UDF that compresses each bag and returns it as a bytearray. From there, you're moving around small blobs rather than big bags. Of course, you will need to decompress them when you pull the data back out of those bags at some point.
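As a rough illustration only (not from this thread), such a UDF could serialize the bag with Pig's InterSedes machinery, the same serializer that shows up in the stack trace below, and gzip the result into a single bytearray. The class name CompressBag is made up, the use of InterSedesFactory is an assumption about the available API, and a matching decompression UDF would be needed on the other side:

    import java.io.ByteArrayOutputStream;
    import java.io.DataOutputStream;
    import java.io.IOException;
    import java.util.zip.GZIPOutputStream;

    import org.apache.pig.EvalFunc;
    import org.apache.pig.data.DataBag;
    import org.apache.pig.data.DataByteArray;
    import org.apache.pig.data.InterSedes;
    import org.apache.pig.data.InterSedesFactory;
    import org.apache.pig.data.Tuple;

    // Hypothetical sketch: serialize a bag and gzip it into one bytearray so that
    // downstream operators move a compact blob instead of a large in-memory bag.
    public class CompressBag extends EvalFunc<DataByteArray> {
        private static final InterSedes SEDES = InterSedesFactory.getInterSedesInstance();

        @Override
        public DataByteArray exec(Tuple input) throws IOException {
            if (input == null || input.size() == 0 || input.get(0) == null) {
                return null;
            }
            DataBag bag = (DataBag) input.get(0);
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            // Closing the stream finishes the gzip trailer before toByteArray() is called.
            try (DataOutputStream out = new DataOutputStream(new GZIPOutputStream(bytes))) {
                SEDES.writeDatum(out, bag);
            }
            return new DataByteArray(bytes.toByteArray());
        }
    }

In the script, this would sit directly after the group-by (a foreach generating the group key plus the compressed bag), so only the small blob flows into the join, and the inverse UDF would be applied wherever you need the rows back.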
This trick saved me several times in the past, particularly when I dealt with bags of large chararrays.

Just a thought. Hope this is helpful.

Thanks,
Cheolsoo


On Fri, Feb 7, 2014 at 7:37 AM, praveenesh kumar <praveenesh@gmail.com> wrote:
> Thanks Park for sharing the above configs.
>
> But I am wondering if the above config changes would make any huge difference in my case. As per my logs, I am very worried about this line:
>
>     INFO org.apache.hadoop.mapred.MapTask: Record too large for in-memory buffer: 644245358 bytes
>
> If I am understanding it properly, one of my records is too large to fit into memory, which is causing the issue. Any of the above changes wouldn't make any huge impact; please correct me if I am taking this totally wrong.
>
> (Adding the hadoop user group here as well, to gather some valuable inputs on the above question.)
>
> Since I am doing a join on a grouped bag, do you think that might be the case? But if that is the issue, then as far as I understand bags in Pig are spillable, so it shouldn't have given this issue.
>
> I can't get rid of the group by; grouping first should ideally improve my join. But if this is the root cause, and I am understanding it correctly, do you think I should get rid of the group-by? My question in that case would be: what would happen if I do the group by later, after the join? It would result in a much bigger bag, because there would be more records after the join.
>
> Am I thinking here correctly?
>
> Regards
> Prav
>
> On Fri, Feb 7, 2014 at 3:11 AM, Cheolsoo Park <piaozhexiu@gmail.com> wrote:
>
>> Looks like you're running out of space in MapOutputBuffer. Two suggestions:
>>
>> 1) You said that io.sort.mb is already set to 768 MB, but did you try to lower io.sort.spill.percent in order to spill earlier and more often?
>>
>> Page 12:
>> http://www.slideshare.net/Hadoop_Summit/optimizing-mapreduce-job-performance
>>
>> 2) Can't you increase the parallelism of the mappers so that each mapper handles a smaller amount of data? Pig determines the number of mappers by total input size / pig.maxCombinedSplitSize (128 MB by default), so you can try to lower pig.maxCombinedSplitSize.
>>
>> But I admit Pig's internal data types are not memory-efficient, and that is an optimization opportunity. Contribute!
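To make the two properties suggested above concrete, here is a sketch with placeholder values only (the class name SpillTuning and the specific numbers are illustrative, not recommendations from the thread); the same keys could be set from a driver-side Hadoop Configuration, or passed through the Pig script's set command:

    import org.apache.hadoop.conf.Configuration;

    // Illustrative values only: spill earlier and more often, and shrink combined
    // splits so more mappers each handle less data. Tune to the actual heap and input.
    public class SpillTuning {
        public static Configuration tuned() {
            Configuration conf = new Configuration();
            conf.setFloat("io.sort.spill.percent", 0.60f);        // default is 0.80
            conf.setLong("pig.maxCombinedSplitSize", 64L << 20);  // 64 MB instead of the 128 MB default
            return conf;
        }
    }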
>>
>> On Thu, Feb 6, 2014 at 2:54 PM, praveenesh kumar <praveenesh@gmail.com> wrote:
>>
>> > It's a normal join. I can't use a replicated join, as the data is very large.
>> >
>> > Regards
>> > Prav
>> >
>> > On Thu, Feb 6, 2014 at 7:52 PM, abhishek <abhishek.dodda1@gmail.com> wrote:
>> >
>> > > Hi Praveenesh,
>> > >
>> > > Did you use a "replicated join" in your pig script, or is it a regular join?
>> > >
>> > > Regards
>> > > Abhishek
>> > >
>> > > Sent from my iPhone
>> > >
>> > > > On Feb 6, 2014, at 11:25 AM, praveenesh kumar <praveenesh@gmail.com> wrote:
>> > > >
>> > > > Hi all,
>> > > >
>> > > > I am running a Pig script which runs fine for small data, but when I scale up the data, I get the following error at my map stage. Please refer to the map logs below.
>> > > >
>> > > > My Pig script does a group by first, followed by a join on the grouped data.
>> > > >
>> > > > Any clues on where I should look or how I should deal with this situation? I don't want to just go with increasing the heap space. My map JVM heap space is already 3 GB with io.sort.mb = 768 MB.
>> > > >
>> > > > 2014-02-06 19:15:12,243 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
>> > > > 2014-02-06 19:15:15,025 INFO org.apache.hadoop.util.ProcessTree: setsid exited with exit code 0
>> > > > 2014-02-06 19:15:15,123 INFO org.apache.hadoop.mapred.Task: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@2bd9e282
>> > > > 2014-02-06 19:15:15,546 INFO org.apache.hadoop.mapred.MapTask: io.sort.mb = 768
>> > > > 2014-02-06 19:15:19,846 INFO org.apache.hadoop.mapred.MapTask: data buffer = 612032832/644245088
>> > > > 2014-02-06 19:15:19,846 INFO org.apache.hadoop.mapred.MapTask: record buffer = 9563013/10066330
>> > > > 2014-02-06 19:15:20,037 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor
>> > > > 2014-02-06 19:15:21,083 INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader: Created input record counter: Input records from _1_tmp1327641329
>> > > > 2014-02-06 19:15:52,894 INFO org.apache.hadoop.mapred.MapTask: Spilling map output: buffer full = true
>> > > > 2014-02-06 19:15:52,895 INFO org.apache.hadoop.mapred.MapTask: bufstart = 0; bufend = 611949600; bufvoid = 644245088
>> > > > 2014-02-06 19:15:52,895 INFO org.apache.hadoop.mapred.MapTask: kvstart = 0; kvend = 576; length = 10066330
>> > > > 2014-02-06 19:16:06,182 INFO org.apache.hadoop.mapred.MapTask: Finished spill 0
>> > > > 2014-02-06 19:16:16,169 INFO org.apache.pig.impl.util.SpillableMemoryManager: first memory handler call - Collection threshold init = 328728576(321024K) used = 1175055104(1147514K) committed = 1770848256(1729344K) max = 2097152000(2048000K)
>> > > > 2014-02-06 19:16:20,446 INFO org.apache.pig.impl.util.SpillableMemoryManager: Spilled an estimate of 308540402 bytes from 1 objects. init = 328728576(321024K) used = 1175055104(1147514K) committed = 1770848256(1729344K) max = 2097152000(2048000K)
>> > > > 2014-02-06 19:17:22,246 INFO org.apache.pig.impl.util.SpillableMemoryManager: first memory handler call - Usage threshold init = 328728576(321024K) used = 1768466512(1727018K) committed = 1770848256(1729344K) max = 2097152000(2048000K)
>> > > > 2014-02-06 19:17:35,597 INFO org.apache.pig.impl.util.SpillableMemoryManager: Spilled an estimate of 1073462600 bytes from 1 objects. init = 328728576(321024K) used = 1768466512(1727018K) committed = 1770848256(1729344K) max = 2097152000(2048000K)
>> > > > 2014-02-06 19:18:01,276 INFO org.apache.hadoop.mapred.MapTask: Spilling map output: buffer full = true
>> > > > 2014-02-06 19:18:01,288 INFO org.apache.hadoop.mapred.MapTask: bufstart = 611949600; bufend = 52332788; bufvoid = 644245088
>> > > > 2014-02-06 19:18:01,288 INFO org.apache.hadoop.mapred.MapTask: kvstart = 576; kvend = 777; length = 10066330
>> > > > 2014-02-06 19:18:03,377 INFO org.apache.hadoop.mapred.MapTask: Finished spill 1
>> > > > 2014-02-06 19:18:05,494 INFO org.apache.hadoop.mapred.MapTask: Record too large for in-memory buffer: 644246693 bytes
>> > > > 2014-02-06 19:18:36,008 INFO org.apache.pig.impl.util.SpillableMemoryManager: Spilled an estimate of 306271368 bytes from 1 objects. init = 328728576(321024K) used = 1449267128(1415299K) committed = 2097152000(2048000K) max = 2097152000(2048000K)
>> > > > 2014-02-06 19:18:44,448 INFO org.apache.hadoop.mapred.TaskLogsTruncater: Initializing logs' truncater with mapRetainSize=-1 and reduceRetainSize=-1
>> > > > 2014-02-06 19:18:44,780 FATAL org.apache.hadoop.mapred.Child: Error running child : java.lang.OutOfMemoryError: Java heap space
>> > > >   at java.util.Arrays.copyOf(Arrays.java:2786)
>> > > >   at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:94)
>> > > >   at java.io.DataOutputStream.write(DataOutputStream.java:90)
>> > > >   at java.io.DataOutputStream.writeUTF(DataOutputStream.java:384)
>> > > >   at java.io.DataOutputStream.writeUTF(DataOutputStream.java:306)
>> > > >   at org.apache.pig.data.BinInterSedes.writeDatum(BinInterSedes.java:454)
>> > > >   at org.apache.pig.data.BinInterSedes.writeTuple(BinInterSedes.java:542)
>> > > >   at org.apache.pig.data.BinInterSedes.writeBag(BinInterSedes.java:523)
>> > > >   at org.apache.pig.data.BinInterSedes.writeDatum(BinInterSedes.java:361)
>> > > >   at org.apache.pig.data.BinInterSedes.writeTuple(BinInterSedes.java:542)
>> > > >   at org.apache.pig.data.BinInterSedes.writeDatum(BinInterSedes.java:357)
>> > > >   at org.apache.pig.data.BinSedesTuple.write(BinSedesTuple.java:57)
>> > > >   at org.apache.pig.impl.io.PigNullableWritable.write(PigNullableWritable.java:123)
>> > > >   at org.apache.hadoop.io.serializer.WritableSerialization$WritableSerializer.serialize(WritableSerialization.java:90)
>> > > >   at org.apache.hadoop.io.serializer.WritableSerialization$WritableSerializer.serialize(WritableSerialization.java:77)
>> > > >   at org.apache.hadoop.mapred.IFile$Writer.append(IFile.java:179)
>> > > >   at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.spillSingleRecord(MapTask.java:1501)
>> > > >   at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.collect(MapTask.java:1091)
>> > > >   at org.apache.hadoop.mapred.MapTask$NewOutputCollector.write(MapTask.java:691)
>> > > >   at org.apache.hadoop.mapreduce.TaskInputOutputContext.write(TaskInputOutputContext.java:80)
>> > > >   at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Map.collect(PigGenericMapReduce.java:128)
>> > > >   at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.runPipeline(PigGenericMapBase.java:269)
>> > > >   at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:262)
>> > > >   at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:64)
>> > > >   at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
>> > > >   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
>> > > >   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
>> > > >   at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
>> > > >   at java.security.AccessController.doPrivileged(Native Method)
>> > > >   at javax.security.auth.Subject.doAs(Subject.java:396)
>> > > >   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
>> > > >   at org.apache.hadoop.mapred.Child.main(Child.java:249)