hadoop-pig-dev mailing list archives

From "Sriranjan Manjunath (JIRA)" <j...@apache.org>
Subject [jira] Commented: (PIG-1102) Collect number of spills per job
Date Fri, 18 Dec 2009 22:40:18 GMT

    [ https://issues.apache.org/jira/browse/PIG-1102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12792680#action_12792680 ]

Sriranjan Manjunath commented on PIG-1102:
------------------------------------------

I ran the test again on my local machine, and it passes. The test failed because of "too many
open file descriptors". Is this a Hudson-related issue?

> Collect number of spills per job
> --------------------------------
>
>                 Key: PIG-1102
>                 URL: https://issues.apache.org/jira/browse/PIG-1102
>             Project: Pig
>          Issue Type: Improvement
>            Reporter: Olga Natkovich
>            Assignee: Sriranjan Manjunath
>             Fix For: 0.7.0
>
>         Attachments: PIG_1102.patch
>
>
> Memory shortage is one of the main performance issues in Pig. Knowing when we spill to
disk is useful for understanding query performance and also for seeing how certain changes
in Pig affect it.
> Other interesting stats to collect would be average CPU usage and max memory usage, but I
am not sure if this information is easily retrievable.
> Using Hadoop counters for this would make sense.
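
The counter approach suggested above can be sketched roughly as follows. The class and method names here (SpillCounter, recordSpill) are hypothetical, and a plain AtomicLong stands in for Hadoop's counter API so the sketch stays self-contained; in an actual Pig task the increments would go through the job's Hadoop counters instead.

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch of per-job spill accounting. In real Pig code the
// increments would be forwarded to Hadoop counters so they aggregate
// across tasks; AtomicLong is used here only to keep the example runnable.
class SpillCounter {
    private final AtomicLong spills = new AtomicLong();
    private final AtomicLong spilledRecords = new AtomicLong();

    // Called each time an in-memory bag is flushed to disk.
    void recordSpill(long records) {
        spills.incrementAndGet();
        spilledRecords.addAndGet(records);
    }

    long spillCount() { return spills.get(); }
    long spilledRecordCount() { return spilledRecords.get(); }
}

public class SpillCounterDemo {
    public static void main(String[] args) {
        SpillCounter counter = new SpillCounter();
        counter.recordSpill(1000);  // first spill: 1000 records
        counter.recordSpill(500);   // second spill: 500 records
        System.out.println(counter.spillCount());         // 2
        System.out.println(counter.spilledRecordCount()); // 1500
    }
}
```

Keeping the record count alongside the spill count would also make it easy to compare spill volume across runs, not just spill frequency.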

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

