hadoop-pig-dev mailing list archives

From "Hadoop QA (JIRA)" <j...@apache.org>
Subject [jira] Commented: (PIG-1144) set default_parallelism construct does not set the number of reducers correctly
Date Wed, 16 Dec 2009 01:30:18 GMT

    [ https://issues.apache.org/jira/browse/PIG-1144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12791112#action_12791112 ]

Hadoop QA commented on PIG-1144:

+1 overall.  Here are the results of testing the latest attachment 
  against trunk revision 890596.

    +1 @author.  The patch does not contain any @author tags.

    +1 tests included.  The patch appears to include 3 new or modified tests.

    +1 javadoc.  The javadoc tool did not generate any warning messages.

    +1 javac.  The applied patch does not increase the total number of javac compiler warnings.

    +1 findbugs.  The patch does not introduce any new Findbugs warnings.

    +1 release audit.  The applied patch does not increase the total number of release audit warnings.

    +1 core tests.  The patch passed core unit tests.

    +1 contrib tests.  The patch passed contrib unit tests.

Test results: http://hudson.zones.apache.org/hudson/job/Pig-Patch-h8.grid.sp2.yahoo.net/125/testReport/
Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Pig-Patch-h8.grid.sp2.yahoo.net/125/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Console output: http://hudson.zones.apache.org/hudson/job/Pig-Patch-h8.grid.sp2.yahoo.net/125/console

This message is automatically generated.

> set default_parallelism construct does not set the number of reducers correctly
> -------------------------------------------------------------------------------
>                 Key: PIG-1144
>                 URL: https://issues.apache.org/jira/browse/PIG-1144
>             Project: Pig
>          Issue Type: Bug
>          Components: impl
>    Affects Versions: 0.6.0
>         Environment: Hadoop 20 cluster with multi-node installation
>            Reporter: Viraj Bhat
>            Assignee: Daniel Dai
>             Fix For: 0.6.0
>         Attachments: brokenparallel.out, genericscript_broken_parallel.pig, PIG-1144-1.patch, PIG-1144-2.patch, PIG-1144-3.patch
> Hi all,
>  I have a Pig script where I set the parallelism using the following set construct:
> "set default_parallel 100". I modified "MRPrinter.java" to print out the parallelism:
> {code}
> ...
> public void visitMROp(MapReduceOper mr) {
>     mStream.println("MapReduce node " + mr.getOperatorKey().toString()
>         + " Parallelism " + mr.getRequestedParallelism());
> }
> ...
> {code}
> When I run an explain on the script, I see that the last job, which does the actual sort,
> runs as a single-reducer job. This can be corrected by adding the PARALLEL keyword to
> the ORDER BY statement.
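> For illustration, a minimal script showing the construct and the workaround might look like
> the following (the actual script is in the attached genericscript_broken_parallel.pig; the
> relation and field names here are hypothetical):
> {code}
> set default_parallel 100;
> A = LOAD 'input' AS (name:chararray, cnt:int);
> -- Without an explicit PARALLEL clause, the final sort job runs with one reducer:
> B = ORDER A BY cnt;
> -- Workaround: state the parallelism explicitly on the ORDER BY:
> B = ORDER A BY cnt PARALLEL 100;
> STORE B INTO 'output';
> {code}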
> Attaching the script and the explain output
> Viraj

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
