crunch-dev mailing list archives

From "Josh Wills (JIRA)" <>
Subject [jira] [Commented] (CRUNCH-294) Cost-based job planning
Date Sat, 16 Nov 2013 19:29:20 GMT


Josh Wills commented on CRUNCH-294:

Agreed with your take on cpuCost() and memCost(), with simple rules based on them to decide
on splits and DoFn isolation for now.
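A minimal sketch of what such simple rules might look like. All of the class, method, and threshold names here are hypothetical, chosen only to illustrate the idea; they are not Crunch's actual planner API:

```java
// Hypothetical sketch: threshold rules that use per-DoFn cost estimates
// to decide whether a DoFn should be isolated into its own stage.
// The thresholds and method names are illustrative assumptions.
public class SplitRules {
  static final double MEM_BUDGET_BYTES = 512L * 1024 * 1024; // assumed per-task budget
  static final double CPU_ISOLATION_THRESHOLD = 0.8;         // assumed CPU-share cutoff

  /** Isolate a DoFn if it would blow the memory budget or dominates stage CPU. */
  static boolean shouldIsolate(double fnCpuCost, double stageCpuCost, double fnMemCost) {
    if (fnMemCost > MEM_BUDGET_BYTES) {
      return true; // risk of OOM: run it alone with more headroom
    }
    return fnCpuCost / stageCpuCost > CPU_ISOLATION_THRESHOLD;
  }
}
```

The appeal of rules like these is that they only need relative cost estimates, not the full runtime model described below.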

IMO, the thing we're always trying to minimize is overall pipeline runtime, with a constraint
that we want to avoid failures due to e.g. running out of memory. In order to do true cost-based
planning, we need some rules that combine a few different things:

a) A good estimate of the amount of data written, the number of tasks that will be doing the
writing, and an estimate of the amount of data that can be written to HDFS per second on the
cluster
b) An estimate of the processing time required for each record in each DoFn along with an
estimate of the number of records each function will process (or maybe, to keep the units
consistent, cpuTime per input byte)
c) An estimate of the memory consumption for each DoFn and an overall memory budget for each
stage. (So maybe memUsage vs. memCost)
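The three estimates above could be combined into a per-stage runtime cost along these lines. This is a hedged sketch under simplifying assumptions (even distribution of data across tasks, write and CPU time simply additive); every name and formula here is illustrative, not the actual planner:

```java
// Hypothetical sketch combining estimates (a), (b), and (c) above.
public class StageCostModel {
  /**
   * Estimated wall-clock seconds for a stage: HDFS write time plus CPU time,
   * divided evenly across tasks (a simplifying assumption).
   */
  static double estimateSeconds(long inputBytes, double bytesWritten, int numTasks,
                                double hdfsWriteBytesPerSec, double cpuSecPerInputByte) {
    double writeSec = (bytesWritten / numTasks) / hdfsWriteBytesPerSec; // from (a)
    double cpuSec = (inputBytes * cpuSecPerInputByte) / numTasks;       // from (b)
    return writeSec + cpuSec;
  }

  /** From (c): reject stages whose DoFns together exceed the per-task memory budget. */
  static boolean fitsMemoryBudget(double[] memUsagePerFn, double budgetBytes) {
    double total = 0;
    for (double m : memUsagePerFn) {
      total += m;
    }
    return total <= budgetBytes;
  }
}
```

Under this framing, the planner would pick the split that minimizes the sum of `estimateSeconds` over stages, subject to `fitsMemoryBudget` holding for each stage.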

I think we've talked a few times about tracking stats during the run on the actual data processed
at each phase, which the planner could use on subsequent runs to optimize execution; that is
where a lot of these estimates would come from.

> Cost-based job planning
> -----------------------
>                 Key: CRUNCH-294
>                 URL:
>             Project: Crunch
>          Issue Type: Improvement
>          Components: Core
>            Reporter: Josh Wills
>            Assignee: Josh Wills
>         Attachments: CRUNCH-294.patch, jobplan-default-new.png, jobplan-default-old.png,
jobplan-large_s2_s3.png, jobplan-lopsided.png
> A bug report on the user list drove me to revisit some of the core planning logic, particularly
around how we decide where to split up DoFns between two dependent MapReduce jobs.
> I found an old TODO about using a DoFn's scale factor to decide where to split up the nodes
between dependent GBKs. So I implemented a new version of the split algorithm that takes
advantage of our support for multiple outputs on both the map and reduce sides of a job; it
performs finer-grained splits and uses the scaleFactor calculations to make smarter split
decisions.
> One high-level change along with this: I changed the default scaleFactor() value in DoFn
to 0.99f to slightly prefer writes that occur later in a pipeline flow by default.
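The scaleFactor-driven split idea in the description can be illustrated with a small sketch. This is an editorial illustration, not the actual planner code; the class and method names are invented, and the only assumption taken from the source is that each DoFn scales its input size by scaleFactor() and that the planner wants to minimize intermediate bytes written:

```java
// Hypothetical illustration: in a chain of DoFns between two GBKs,
// split after the position where the data flowing through is smallest,
// so the fewest intermediate bytes get written.
public class SplitChooser {
  /** Bytes flowing out of each position in a chain of DoFns, given the input size. */
  static double[] sizesAfterEachFn(double inputBytes, float[] scaleFactors) {
    double[] sizes = new double[scaleFactors.length];
    double size = inputBytes;
    for (int i = 0; i < scaleFactors.length; i++) {
      size *= scaleFactors[i];
      sizes[i] = size;
    }
    return sizes;
  }

  /** Index of the position after which splitting writes the fewest bytes. */
  static int bestSplit(double inputBytes, float[] scaleFactors) {
    double[] sizes = sizesAfterEachFn(inputBytes, scaleFactors);
    int best = 0;
    for (int i = 1; i < sizes.length; i++) {
      if (sizes[i] < sizes[best]) {
        best = i;
      }
    }
    return best;
  }
}
```

With the 0.99f default, a chain of DoFns with unchanged scale factors shrinks slightly at every step, so the minimum lands at the last position: exactly the "slightly prefer writes that occur later" behavior the description mentions.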

This message was sent by Atlassian JIRA