Date: Wed, 29 Jan 2014 16:12:10 +0000 (UTC)
From: "Rajesh Balamohan (JIRA)"
To: pig-dev@hadoop.apache.org
Subject: [jira] [Updated] (PIG-3730) Performance issue in SelfSpillBag

    [ https://issues.apache.org/jira/browse/PIG-3730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Rajesh Balamohan updated PIG-3730:
----------------------------------

    Attachment: PIG-3730-trunk-v1.patch

v1 patch for trunk

> Performance issue in SelfSpillBag
> ---------------------------------
>
>                 Key: PIG-3730
>                 URL: https://issues.apache.org/jira/browse/PIG-3730
>             Project: Pig
>          Issue Type: Bug
>          Components: impl
>    Affects Versions: 0.11
>        Environment: Pig 0.11 with MR-V1
>            Reporter: Rajesh Balamohan
>        Attachments: PIG-3730-trunk-v1.patch
>
>
> We have a bunch of joins in our Pig scripts (joining 5 to 15 datasets together). Pig creates a number of REPLICATED and HASH_JOIN operations, and we observed heavy performance degradation in one of the launched M/R jobs, specifically on the reducer side.
> Taking multiple thread dumps revealed the following stack. Successive dumps showed the reducer's main thread in the same place; the second captured trace, identical apart from the lock address, is omitted here.
> "main" prio=10 tid=0x00007fbaa801c000 nid=0x1464 runnable [0x00007fbaaee76000]
>    java.lang.Thread.State: RUNNABLE
>         at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:1781)
>         - locked <0x00000000b5316370> (a org.apache.hadoop.mapred.JobConf)
>         at org.apache.hadoop.conf.Configuration.get(Configuration.java:712)
>         at org.apache.pig.data.SelfSpillBag$MemoryLimits.init(SelfSpillBag.java:73)
>         at org.apache.pig.data.SelfSpillBag$MemoryLimits.<init>(SelfSpillBag.java:65)
>         at org.apache.pig.data.SelfSpillBag.<init>(SelfSpillBag.java:39)
>         at org.apache.pig.data.InternalCachedBag.<init>(InternalCachedBag.java:63)
>         at org.apache.pig.data.InternalCachedBag.<init>(InternalCachedBag.java:59)
>         at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POJoinPackage.getNext(POJoinPackage.java:146)
>         at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.processOnePackageOutput(PigGenericMapReduce.java:422)
>         at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.reduce(PigGenericMapReduce.java:405)
>         at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.reduce(PigGenericMapReduce.java:257)
>         at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:164)
>         at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:610)
>         at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:444)
>         at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>         at org.apache.hadoop.mapred.Child.main(Child.java:262)
> In certain corner cases (where pig.cachedbag.type is not "default"), InternalCachedBag is initialized in POJoinPackage.
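> Every dump sits inside Configuration.getProps(), which is synchronized on the JobConf, and Configuration.get() does non-trivial work (property lookup plus variable substitution) on every call. A simplified sketch of the shape of this hot path, using hypothetical stand-in types rather than Pig or Hadoop source:
>
>     import java.util.Properties;
>
>     // Hypothetical stand-ins (not Pig/Hadoop code) for the pattern in the dump:
>     // every get() funnels through a synchronized getProps(), and a bag is
>     // constructed for every reduce key.
>     class ConfLike {
>         private Properties props;                 // loaded lazily, as in Hadoop's Configuration
>         synchronized Properties getProps() {      // the monitor seen held in the dump
>             if (props == null) props = new Properties();
>             return props;
>         }
>         String get(String key) { return getProps().getProperty(key); }
>     }
>
>     class BagLike {
>         final float memUsage;
>         BagLike(ConfLike conf) {                  // runs once per reduce key
>             String v = conf.get("pig.cachedbag.memusage");
>             memUsage = (v == null) ? 0.2f : Float.parseFloat(v);
>         }
>     }
>
> One such lookup is cheap; repeated once per reduce key, it adds up.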
> The InternalCachedBag constructor runs through SelfSpillBag --> MemoryLimits --> PigMapReduce.sJobConfInternal.get().get(PigConfiguration.PROP_CACHEDBAG_MEMUSAGE);
> Since this happens once per reduce key, the cumulative cost of Configuration.get() itself becomes significant and causes the degradation: with the ~4.3 million reduce input groups shown below, the lookup runs millions of times in a single reducer. A sketch of the kind of fix involved follows the counters.
> Counter snippet from one of the reducers:
>     FILE: Number of bytes read                57,762,717
>     FILE: Number of bytes written             25,256,417
>     HDFS: Number of bytes read                         0
>     HDFS: Number of bytes written              2,521,311
>     HDFS: Number of read operations                    0
>     HDFS: Number of large read operations              0
>     HDFS: Number of write operations                   1
>     Reduce input groups                        4,282,722
>     Reduce shuffle bytes                      26,858,192
>     Reduce input records                       4,912,881
>     Reduce output records                        630,159
>     Spilled Records                            4,912,881
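> The attached PIG-3730-trunk-v1.patch addresses the repeated lookup. As a rough illustration only, here is a minimal sketch of one way to cache the value, assuming the property cannot change during the task's lifetime; the actual patch may take a different approach:
>
>     import org.apache.hadoop.conf.Configuration;
>
>     // Hypothetical helper, not the attached patch: read pig.cachedbag.memusage
>     // once per JVM and reuse it, so constructing a bag per reduce key no
>     // longer touches the JobConf at all.
>     final class MemUsageCache {
>         private static volatile float cached = -1f;
>
>         static float memUsage(Configuration conf) {
>             float v = cached;
>             if (v < 0) {                  // only the first call pays Configuration.get()
>                 v = conf.getFloat("pig.cachedbag.memusage", 0.2f);  // 0.2 is Pig's documented default
>                 cached = v;               // benign race: all threads compute the same value
>             }
>             return v;
>         }
>     }
>
> With the value cached, bag construction per reduce key becomes a plain field read instead of a synchronized property lookup.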