pig-dev mailing list archives

From "Dmitriy V. Ryaboy (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (PIG-1891) Enable StoreFunc to make intelligent decision based on job success or failure
Date Thu, 27 Sep 2012 03:57:07 GMT

    [ https://issues.apache.org/jira/browse/PIG-1891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464398#comment-13464398 ]

Dmitriy V. Ryaboy commented on PIG-1891:
----------------------------------------

I'm just grouchy because I can't move a class of jobs until we fix a StoreFunc in
Elephant-Bird :). I appreciate you doing the work; this is a good feature! It's actually
documented as an incompatible change (in the release notes on this ticket, and by having the
patch listed under "incompatible changes" in CHANGES.txt), so procedurally speaking, it's fine.

We can probably guard against this by checking whether the class has a declared
"cleanupOnSuccess" method and only calling the hook when it does, which would restore
backwards compatibility.
                
> Enable StoreFunc to make intelligent decision based on job success or failure
> -----------------------------------------------------------------------------
>
>                 Key: PIG-1891
>                 URL: https://issues.apache.org/jira/browse/PIG-1891
>             Project: Pig
>          Issue Type: New Feature
>    Affects Versions: 0.10.0
>            Reporter: Alex Rovner
>            Assignee: Eli Reisman
>            Priority: Minor
>              Labels: patch
>             Fix For: 0.11
>
>         Attachments: PIG-1891-1.patch, PIG-1891-2.patch, PIG-1891-3.patch
>
>
> We are in the process of using Pig for various data processing and component integration.
> Here is where we feel Pig storage funcs fall short:
> They are not aware of whether the overall job has succeeded. This creates a problem for
> storage funcs which need to "upload" results into another system:
> DB, FTP, another file system, etc.
> I looked at the DBStorage in the piggybank (http://svn.apache.org/viewvc/pig/trunk/contrib/piggybank/java/src/main/java/org/apache/pig/piggybank/storage/DBStorage.java?view=markup)
> and what I see is essentially a mechanism which, for each task, does the following:
> 1. Creates a RecordWriter (in this case, opens a connection to the DB).
> 2. Opens a transaction.
> 3. Writes records into a batch.
> 4. Executes a commit or rollback depending on whether the task was successful.
> While this approach works great at the task level, it does not work at all at the job level.
> If certain tasks succeed but the overall job fails, partial records will still get uploaded
> into the DB.
> Any ideas on the workaround? 
> Our current workaround is fairly ugly: we created a Java wrapper that launches Pig jobs
> and then uploads to the DBs once the Pig job is successful. While the approach works, it's
> not really integrated into Pig.
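
To make the description above concrete: with the job-level hook from this ticket, the
DB-upload case could look roughly like the sketch below — tasks write to a staging location
as usual, and the upload into the external system happens only once the whole job has
succeeded. This is only an illustrative sketch, not code from the attached patches; it
assumes the cleanupOnSuccess(String, Job) / cleanupOnFailure(String, Job) signatures from
those patches, uses PigStorage as a convenient base, and uploadStagedOutputToDb is a
hypothetical helper.

{code:java}
// Illustrative sketch only: PigStorage is used as a convenient base, and
// uploadStagedOutputToDb is a hypothetical placeholder for the real upload logic.
import java.io.IOException;

import org.apache.hadoop.mapreduce.Job;
import org.apache.pig.builtin.PigStorage;

public class DbUploadStorage extends PigStorage {

    // Tasks write ordinary PigStorage output to `location` (a staging directory).

    @Override
    public void cleanupOnSuccess(String location, Job job) throws IOException {
        // Runs once, on the front end, only after the whole job has succeeded,
        // so no partial task output ever reaches the external system.
        uploadStagedOutputToDb(location, job);
    }

    @Override
    public void cleanupOnFailure(String location, Job job) throws IOException {
        // Keep the default behaviour: remove the staged output so nothing is uploaded.
        super.cleanupOnFailure(location, job);
    }

    // Hypothetical helper: bulk-load the staged files into the DB/FTP/etc.,
    // ideally in a single transaction so a failed upload can also be rolled back.
    private void uploadStagedOutputToDb(String location, Job job) throws IOException {
        // ... read the files under `location` and push them to the target system ...
    }
}
{code}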

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
