hadoop-pig-dev mailing list archives

From "Pi Song (JIRA)" <j...@apache.org>
Subject [jira] Commented: (PIG-143) Proposal for refactoring of parsing logic in Pig
Date Wed, 12 Mar 2008 14:26:46 GMT

    [ https://issues.apache.org/jira/browse/PIG-143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12577857#action_12577857 ]

Pi Song commented on PIG-143:
-----------------------------

Wow... You've done very good work, Alan!!

As far as I can see here, the basic types in org.apache.pig.data are much cleaner than before.

I am also very glad that you're modelling operations and relations as graphs. This gives Pig+Hadoop the ability to execute data flows of the same complexity that Dryad can!! (though it might not be as fast)

By the way, I've got a couple of questions and issues here:- 
- You mentioned in the last comment that you're using a "generic graph", but what I see in the OperatorPlan class is a "directed graph". In order to model data flows, this should be a DAG, so I believe you are planning to add cycle detection logic (see the sketch after this list).
- Yes, dependency-order traversal seems to work well for type checking, and also +we may use this mechanism to do parallelism at the data flow level+.
- You are using Java 6 in development. Please switch back to Java 5.
- A few unit tests are still failing.
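
To make the graph point concrete, here is the kind of cycle check plus dependency-order traversal I have in mind. This is only a rough sketch: none of the names below come from your patch (it is a generic placeholder, not the real OperatorPlan), and it simply uses Kahn's algorithm, which produces a dependency order and fails exactly when the graph is not a DAG.

{code:java}
// Generic placeholder, not the real OperatorPlan API: a directed graph
// with a dependency-order walk that doubles as a cycle (non-DAG) check.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.LinkedList;
import java.util.List;
import java.util.Map;

public class PlanGraphSketch<N> {
    private final Map<N, List<N>> successors = new HashMap<N, List<N>>();

    public void add(N node) {
        if (!successors.containsKey(node)) {
            successors.put(node, new ArrayList<N>());
        }
    }

    public void connect(N from, N to) {
        add(from);
        add(to);
        successors.get(from).add(to);
    }

    // Kahn's algorithm: repeatedly emit nodes whose predecessors have all
    // been emitted. If some nodes can never be emitted, they sit on a
    // cycle and the plan is not a valid DAG.
    public List<N> dependencyOrder() {
        Map<N, Integer> inDegree = new HashMap<N, Integer>();
        for (N node : successors.keySet()) {
            inDegree.put(node, 0);
        }
        for (List<N> outs : successors.values()) {
            for (N to : outs) {
                inDegree.put(to, inDegree.get(to) + 1);
            }
        }
        LinkedList<N> ready = new LinkedList<N>();
        for (N node : successors.keySet()) {
            if (inDegree.get(node) == 0) {
                ready.add(node);
            }
        }
        List<N> order = new ArrayList<N>();
        while (!ready.isEmpty()) {
            N node = ready.removeFirst();
            order.add(node);
            for (N to : successors.get(node)) {
                int remaining = inDegree.get(to) - 1;
                inDegree.put(to, remaining);
                if (remaining == 0) {
                    ready.add(to);
                }
            }
        }
        if (order.size() != successors.size()) {
            throw new IllegalStateException("plan contains a cycle, not a DAG");
        }
        return order;
    }
}
{code}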

In general, everything looks very good. I'm looking forward to your new logicalOp implementation!!

> Proposal for refactoring of parsing logic in Pig
> ------------------------------------------------
>
>                 Key: PIG-143
>                 URL: https://issues.apache.org/jira/browse/PIG-143
>             Project: Pig
>          Issue Type: Improvement
>          Components: impl
>            Reporter: Pi Song
>            Assignee: Pi Song
>         Attachments: ParserDrawing.png
>
>
> h2. Pig Script Parser Refactor Proposal 
> This is my initial proposal for the Pig script parser refactor work. Please note that I need your opinions for improvements.
> *Problem*
> The basic concept is around the fact that we currently do validation logic in the parsing stage (for example, file existence checking), which I think is not clean and makes it difficult to add new validation rules. In the future, we will need to add more and more validation logic to improve usability.
> *My proposal:-*  (see [^ParserDrawing.png])
> - Only keep parsing logic in the parser and leave the output of the parser as unchecked logical plans. (Therefore the parser only does syntactic checking)
> - Create a new class called LogicalPlanValidationManager which is responsible for validating the AST from the parser.
> - A new validation will be added by subclassing LogicalPlanValidator
> - We can chain a set of LogicalPlanValidators inside LogicalPlanValidationManager to do the validation work. This allows a new LogicalPlanValidator to be added easily, like a plug-in (a rough sketch follows this list).
> - This greatly promotes modularity of the validation logic, which is +particularly good when we have a lot of people working on different things+ (e.g. streaming may require special validation logic)
> - We can set the execution order of validators
> - There might be some backend-specific validations needed when we implement new execution engines (for example, a logical operation that one backend can do but others can't). We can plug in this kind of validation on the fly based on the backend in use.
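> To illustrate the chaining idea, a rough sketch (only the names LogicalPlanValidator and LogicalPlanValidationManager are part of this proposal; the LogicalPlan placeholder, the validate() signature and the error list are invented for illustration):
> {code:java}
> import java.util.ArrayList;
> import java.util.List;
>
> class LogicalPlan { /* placeholder for the parser output */ }
>
> abstract class LogicalPlanValidator {
>     // Each validator inspects the plan and appends any problems it finds.
>     public abstract void validate(LogicalPlan plan, List<String> errors);
> }
>
> class LogicalPlanValidationManager {
>     private final List<LogicalPlanValidator> validators =
>             new ArrayList<LogicalPlanValidator>();
>
>     // Validators run in the order they are registered, so the execution
>     // order of the chain is explicit and new validators plug in easily.
>     public void register(LogicalPlanValidator validator) {
>         validators.add(validator);
>     }
>
>     public List<String> validate(LogicalPlan plan) {
>         List<String> errors = new ArrayList<String>();
>         for (LogicalPlanValidator v : validators) {
>             v.validate(plan, errors);
>         }
>         return errors;
>     }
> }
> {code}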
> *List of LogicalPlanValidators extracted from the current parser logic:-*
> - File existence validator
> - Alias existence validator
> *Logic that may be added in the very near future:-*
> - Streaming script test execution
> - Type checking + casting promotion + type inference
> - Untyped plan test execution
> - Logic to prevent reading and writing from/to the same file
> The common way to implement a LogicalPlanValidator will be based on the Visitor pattern.
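> For example, a file existence check could look roughly like this (the operator classes, the visitor interface and the file-name field below are placeholders invented for illustration, not Pig's actual classes):
> {code:java}
> import java.io.File;
> import java.util.ArrayList;
> import java.util.List;
>
> interface PlanVisitor {
>     void visitLoad(LoadOperator load);
>     // ... one visit method per logical operator type
> }
>
> abstract class PlanOperator {
>     public abstract void accept(PlanVisitor visitor);
> }
>
> class LoadOperator extends PlanOperator {
>     final String fileName;
>     LoadOperator(String fileName) { this.fileName = fileName; }
>     public void accept(PlanVisitor visitor) { visitor.visitLoad(this); }
> }
>
> // The file existence validator is then just a visitor that walks the
> // plan's operators and records a message for every file it cannot find.
> class FileExistenceCheck implements PlanVisitor {
>     final List<String> errors = new ArrayList<String>();
>
>     public void visitLoad(LoadOperator load) {
>         if (!new File(load.fileName).exists()) {
>             errors.add("Input file does not exist: " + load.fileName);
>         }
>     }
> }
> {code}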

> *Cons:-*
>  - Because every validation traverses the AST from the root node every time, there is a performance hit. However, I think this is negligible given that Pig is very expressive and queries normally aren't too big (99% of queries contain no more than 1000 AST nodes).
> *Next Step:-*
> LogicalPlanFinalizer, which is also a pipeline, except that each stage can modify the input AST. This component will generally perform a kind of global optimization.
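> Roughly (LogicalPlanFinalizer is the only name from this proposal; the stage interface and the LogicalPlan placeholder below are made up for illustration):
> {code:java}
> import java.util.ArrayList;
> import java.util.List;
>
> class LogicalPlan { /* placeholder for the validated plan */ }
>
> interface FinalizerStage {
>     // Unlike validators, a stage may return a rewritten plan.
>     LogicalPlan transform(LogicalPlan plan);
> }
>
> class LogicalPlanFinalizer {
>     private final List<FinalizerStage> stages = new ArrayList<FinalizerStage>();
>
>     public void addStage(FinalizerStage stage) { stages.add(stage); }
>
>     // Each stage sees the output of the previous one, so global
>     // optimizations can be layered in a fixed order.
>     public LogicalPlan run(LogicalPlan plan) {
>         for (FinalizerStage stage : stages) {
>             plan = stage.transform(plan);
>         }
>         return plan;
>     }
> }
> {code}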
> *Further ideas:-*
> - A composite visitor could make validation more efficient in some cases, but I don't think we need it.
> - ASTs within the pipeline never change (read-only), so validations could be done in parallel to improve responsiveness. But again, I don't think we need this unless we have many I/O-bound validations.
> - The same pipeline concept can also be applied in physical plan validation/optimization.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

