Subject: Re: java.lang.OutOfMemoryError: Java heap space
From: praveenesh kumar <praveenesh@gmail.com>
To: user@pig.apache.org
Cc: user@hadoop.apache.org
Date: Fri, 7 Feb 2014 16:45:05 +0000

Hi Park,

Your explanation makes perfect sense in my case. Thanks for explaining what
is happening behind the scenes. I am wondering whether you used plain Java
compression/decompression, whether there is a UDF already available to do
this, or whether there is some kind of property we need to enable to tell
Pig to compress bags before spilling.

Regards
Prav


On Fri, Feb 7, 2014 at 4:37 PM, Cheolsoo Park wrote:

> Hi Prav,
>
> You're thinking correctly, and it's true that Pig bags are spillable.
>
> However, spilling is no magic, meaning you can still run into OOM with
> huge bags like you have here. Pig runs the Spillable Memory Manager (SMM)
> in a separate thread. When spilling is triggered, the SMM locks the bags
> it is trying to spill to disk. After the spilling is finished, GC frees
> up the memory. The problem is that it's possible that more bags are
> loaded into memory while the spilling is in progress.
> Now the JVM triggers GC, but GC cannot free up memory because the SMM is
> locking the bags, resulting in an OOM error. This happens quite often.
>
> It sounds like you do the group-by to reduce the number of rows before the
> join and don't immediately run any aggregation function on the grouped
> bags. If that's the case, can you compress those bags? For example, you
> could add a foreach after the group-by and run a UDF that compresses a bag
> and returns it as a bytearray. From there, you're moving around small
> blobs rather than big bags. Of course, you will need to decompress them
> when you restore the data out of those bags at some point. This trick
> saved me several times in the past, particularly when I dealt with bags of
> large chararrays.
>
> Just a thought. Hope this is helpful.
>
> Thanks,
> Cheolsoo
>
>
> On Fri, Feb 7, 2014 at 7:37 AM, praveenesh kumar wrote:
>
> > Thanks Park for sharing the above configs.
> >
> > But I am wondering whether the above config changes would make any real
> > difference in my case. As per my logs, I am very worried about this
> > line:
> >
> > INFO org.apache.hadoop.mapred.MapTask: Record too large for in-memory
> > buffer: 644245358 bytes
> >
> > If I am understanding it properly, one of my records is too large to
> > fit into memory, and that is what is causing the issue. None of the
> > above changes would make a big impact in that case; please correct me
> > if I am taking it totally wrong.
> >
> > - Adding the hadoop user group here as well, to gather some valuable
> > input on the question above.
> >
> > Since I am doing a join on a grouped bag, do you think that might be
> > the cause? But if that is the issue, then as far as I understand bags
> > in Pig are spillable, so it shouldn't have caused this problem.
> >
> > I can't get rid of the group by; grouping first should ideally improve
> > my join. But if this is the root cause, and I am understanding it
> > correctly, do you think I should get rid of the group-by?
> >
> > My question in that case would be: what happens if I do the group by
> > later, after the join? It would result in a much bigger bag, because
> > there would be more records after the join.
> >
> > Am I thinking about this correctly?
> >
> > Regards
> > Prav
> >
> >
> > On Fri, Feb 7, 2014 at 3:11 AM, Cheolsoo Park wrote:
> >
> >> Looks like you're running out of space in MapOutputBuffer. Two
> >> suggestions:
> >>
> >> 1) You said that io.sort.mb is already set to 768 MB, but did you try
> >> to lower io.sort.spill.percent in order to spill earlier and more
> >> often?
> >>
> >> Page 12:
> >> http://www.slideshare.net/Hadoop_Summit/optimizing-mapreduce-job-performance
> >>
> >> 2) Can't you increase the parallelism of mappers so that each mapper
> >> handles a smaller amount of data? Pig determines the number of mappers
> >> as total input size / pig.maxCombinedSplitSize (128 MB by default), so
> >> you can try to lower pig.maxCombinedSplitSize.
> >>
> >> But I admit Pig internal data types are not memory-efficient, and that
> >> is an optimization opportunity. Contribute!
> >>
> >>
> >> On Thu, Feb 6, 2014 at 2:54 PM, praveenesh kumar wrote:
> >>
> >> > It's a normal join. I can't use a replicated join, as the data is
> >> > very large.
> >> >
> >> > Regards
> >> > Prav
> >> >
> >> >
> >> > On Thu, Feb 6, 2014 at 7:52 PM, abhishek wrote:
> >> >
> >> > > Hi Praveenesh,
> >> > >
> >> > > Did you use a "replicated join" in your pig script or is it a
> >> > > regular join?
> >> > >
> >> > > Regards
> >> > > Abhishek
> >> > >
> >> > > Sent from my iPhone
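A minimal Pig Latin sketch of the suggestions quoted above: compress each
grouped bag into a small blob before the join, and lower the two tuning
knobs mentioned. No ready-made UDF is named in the thread, so CompressBag
and DecompressBag, the jar, and all relation/field names below are
hypothetical and would have to be written by the user; the property values
are examples only.

    -- Tuning knobs mentioned in the quoted replies (example values):
    set pig.maxCombinedSplitSize 67108864;   -- 64 MB splits: more mappers, less data per mapper
    set io.sort.spill.percent 0.60;          -- spill the map output buffer earlier and more often

    -- Hypothetical UDFs: compress a bag to a bytearray and back again.
    register my-bag-udfs.jar;
    define CompressBag   com.example.pig.CompressBag();
    define DecompressBag com.example.pig.DecompressBag();

    A = load 'left_input'  as (key:chararray, val:chararray);
    B = load 'right_input' as (key:chararray, attr:chararray);

    -- Group first, then shrink each bag into a small blob before it is shuffled for the join.
    G  = group A by key;
    GC = foreach G generate group as key, CompressBag(A) as packed;

    J = join GC by key, B by key;

    -- Restore the bags only when they are actually needed downstream.
    R = foreach J generate GC::key, DecompressBag(GC::packed) as vals, B::attr;

Whether this helps depends on how compressible the bag contents are; in the
reply above it worked best for bags of large chararrays.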
> >> > > > On Feb 6, 2014, at 11:25 AM, praveenesh kumar <praveenesh@gmail.com> wrote:
> >> > > >
> >> > > > Hi all,
> >> > > >
> >> > > > I am running a Pig script which runs fine for small data, but when
> >> > > > I scale the data up, I get the following error at my map stage.
> >> > > > Please refer to the map logs below.
> >> > > >
> >> > > > My Pig script is doing a group by first, followed by a join on the
> >> > > > grouped data.
> >> > > >
> >> > > > Any clues on where I should look or how I should deal with this
> >> > > > situation? I don't want to just keep increasing the heap space; my
> >> > > > map JVM heap space is already 3 GB with io.sort.mb = 768 MB.
> >> > > >
> >> > > > 2014-02-06 19:15:12,243 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> >> > > > 2014-02-06 19:15:15,025 INFO org.apache.hadoop.util.ProcessTree: setsid exited with exit code 0
> >> > > > 2014-02-06 19:15:15,123 INFO org.apache.hadoop.mapred.Task: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@2bd9e282
> >> > > > 2014-02-06 19:15:15,546 INFO org.apache.hadoop.mapred.MapTask: io.sort.mb = 768
> >> > > > 2014-02-06 19:15:19,846 INFO org.apache.hadoop.mapred.MapTask: data buffer = 612032832/644245088
> >> > > > 2014-02-06 19:15:19,846 INFO org.apache.hadoop.mapred.MapTask: record buffer = 9563013/10066330
> >> > > > 2014-02-06 19:15:20,037 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor
> >> > > > 2014-02-06 19:15:21,083 INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader: Created input record counter: Input records from _1_tmp1327641329
> >> > > > 2014-02-06 19:15:52,894 INFO org.apache.hadoop.mapred.MapTask: Spilling map output: buffer full= true
> >> > > > 2014-02-06 19:15:52,895 INFO org.apache.hadoop.mapred.MapTask: bufstart = 0; bufend = 611949600; bufvoid = 644245088
> >> > > > 2014-02-06 19:15:52,895 INFO org.apache.hadoop.mapred.MapTask: kvstart = 0; kvend = 576; length = 10066330
> >> > > > 2014-02-06 19:16:06,182 INFO org.apache.hadoop.mapred.MapTask: Finished spill 0
> >> > > > 2014-02-06 19:16:16,169 INFO org.apache.pig.impl.util.SpillableMemoryManager: first memory handler call - Collection threshold init = 328728576(321024K) used = 1175055104(1147514K) committed = 1770848256(1729344K) max = 2097152000(2048000K)
> >> > > > 2014-02-06 19:16:20,446 INFO org.apache.pig.impl.util.SpillableMemoryManager: Spilled an estimate of 308540402 bytes from 1 objects. init = 328728576(321024K) used = 1175055104(1147514K) committed = 1770848256(1729344K) max = 2097152000(2048000K)
> >> > > > 2014-02-06 19:17:22,246 INFO org.apache.pig.impl.util.SpillableMemoryManager: first memory handler call- Usage threshold init = 328728576(321024K) used = 1768466512(1727018K) committed = 1770848256(1729344K) max = 2097152000(2048000K)
> >> > > > 2014-02-06 19:17:35,597 INFO org.apache.pig.impl.util.SpillableMemoryManager: Spilled an estimate of 1073462600 bytes from 1 objects. init = 328728576(321024K) used = 1768466512(1727018K) committed = 1770848256(1729344K) max = 2097152000(2048000K)
> >> > > > 2014-02-06 19:18:01,276 INFO org.apache.hadoop.mapred.MapTask: Spilling map output: buffer full= true
> >> > > > 2014-02-06 19:18:01,288 INFO org.apache.hadoop.mapred.MapTask: bufstart = 611949600; bufend = 52332788; bufvoid = 644245088
> >> > > > 2014-02-06 19:18:01,288 INFO org.apache.hadoop.mapred.MapTask: kvstart = 576; kvend = 777; length = 10066330
> >> > > > 2014-02-06 19:18:03,377 INFO org.apache.hadoop.mapred.MapTask: Finished spill 1
> >> > > > 2014-02-06 19:18:05,494 INFO org.apache.hadoop.mapred.MapTask: Record too large for in-memory buffer: 644246693 bytes
> >> > > > 2014-02-06 19:18:36,008 INFO org.apache.pig.impl.util.SpillableMemoryManager: Spilled an estimate of 306271368 bytes from 1 objects. init = 328728576(321024K) used = 1449267128(1415299K) committed = 2097152000(2048000K) max = 2097152000(2048000K)
> >> > > > 2014-02-06 19:18:44,448 INFO org.apache.hadoop.mapred.TaskLogsTruncater: Initializing logs' truncater with mapRetainSize=-1 and reduceRetainSize=-1
> >> > > > 2014-02-06 19:18:44,780 FATAL org.apache.hadoop.mapred.Child: Error running child : java.lang.OutOfMemoryError: Java heap space
> >> > > >     at java.util.Arrays.copyOf(Arrays.java:2786)
> >> > > >     at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:94)
> >> > > >     at java.io.DataOutputStream.write(DataOutputStream.java:90)
> >> > > >     at java.io.DataOutputStream.writeUTF(DataOutputStream.java:384)
> >> > > >     at java.io.DataOutputStream.writeUTF(DataOutputStream.java:306)
> >> > > >     at org.apache.pig.data.BinInterSedes.writeDatum(BinInterSedes.java:454)
> >> > > >     at org.apache.pig.data.BinInterSedes.writeTuple(BinInterSedes.java:542)
> >> > > >     at org.apache.pig.data.BinInterSedes.writeBag(BinInterSedes.java:523)
> >> > > >     at org.apache.pig.data.BinInterSedes.writeDatum(BinInterSedes.java:361)
> >> > > >     at org.apache.pig.data.BinInterSedes.writeTuple(BinInterSedes.java:542)
> >> > > >     at org.apache.pig.data.BinInterSedes.writeDatum(BinInterSedes.java:357)
> >> > > >     at org.apache.pig.data.BinSedesTuple.write(BinSedesTuple.java:57)
> >> > > >     at org.apache.pig.impl.io.PigNullableWritable.write(PigNullableWritable.java:123)
> >> > > >     at org.apache.hadoop.io.serializer.WritableSerialization$WritableSerializer.serialize(WritableSerialization.java:90)
> >> > > >     at org.apache.hadoop.io.serializer.WritableSerialization$WritableSerializer.serialize(WritableSerialization.java:77)
> >> > > >     at org.apache.hadoop.mapred.IFile$Writer.append(IFile.java:179)
> >> > > >     at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.spillSingleRecord(MapTask.java:1501)
> >> > > >     at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.collect(MapTask.java:1091)
> >> > > >     at org.apache.hadoop.mapred.MapTask$NewOutputCollector.write(MapTask.java:691)
> >> > > >     at org.apache.hadoop.mapreduce.TaskInputOutputContext.write(TaskInputOutputContext.java:80)
> >> > > >     at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Map.collect(PigGenericMapReduce.java:128)
> >> > > >     at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.runPipeline(PigGenericMapBase.java:269)
> >> > > >     at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:262)
> >> > > >     at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:64)
> >> > > >     at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
> >> > > >     at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
> >> > > >     at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
> >> > > >     at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
> >> > > >     at java.security.AccessController.doPrivileged(Native Method)
> >> > > >     at javax.security.auth.Subject.doAs(Subject.java:396)
> >> > > >     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
> >> > > >     at org.apache.hadoop.mapred.Child.main(Child.java:249)
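For readers skimming the logs above, the original post describes a script
that groups first and then joins on the grouped data. A hypothetical
reconstruction of that shape is sketched below; the actual script is not
shown in the thread, so the relation, field, and path names are made up.
The comments mark the step where each grouped bag travels as a single
map-output record, which is what the "Record too large for in-memory
buffer" line and the spillSingleRecord frame in the stack trace refer to.

    -- Hypothetical reconstruction of the "group by first, then join" shape described above.
    events = load 'events'    as (id:chararray, payload:chararray);
    dims   = load 'dimension' as (id:chararray, attr:chararray);

    grouped = group events by id;                  -- one (possibly huge) bag per key
    joined  = join grouped by group, dims by id;   -- each bag is serialized as ONE map-output record here

    -- A single bag larger than the in-memory map output buffer (io.sort.mb = 768 MB in the logs)
    -- is handed to spillSingleRecord(), and the OutOfMemoryError occurs while serializing it.
    store joined into 'output';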