airflow-dev mailing list archives

From "Wang, Larry" <Larry.L.W...@dell.com>
Subject RE: Qs on airflow
Date Tue, 26 Sep 2017 13:16:31 GMT
Thanks, Gerard, for the answer. We are planning to use Superset for data visualization of the Airflow tasks; have you ever tried that?

From: Gerard Toonstra [mailto:gtoonstra@gmail.com]
Sent: Monday, September 25, 2017 11:56 PM
To: Wang, Larry <Larry.Wang@emc.com>
Cc: dev@airflow.incubator.apache.org; Wang, Ming <Ming.Wang@emc.com>; Ren, Xiaoyu <Xiaoyu.Ren@emc.com>
Subject: Re: Qs on airflow

Hi Larry,

1st: Unfortunately, this is a bit of an issue. You should think of an execution interval (dagrun) in Airflow as a piece of work that should complete within that window. Most of your questions try to bypass this design and mix different interval settings in the same dagrun. I'm starting to think you're setting yourself up for a lot of complication that way. The best thing to do is to come up with a sensible diagram of what you'd expect to run within a single window, and then use retries and retry intervals in a sensible way.
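
For what it's worth, a minimal sketch of what I mean, assuming a weekly window and 1.8-era import paths (the dag and task names are made up, untested):

from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash_operator import BashOperator

# One dagrun = one weekly window; transient failures are retried
# within that same window rather than spilling into the next one.
default_args = {
    'owner': 'airflow',
    'start_date': datetime(2017, 9, 4),
    'retries': 6,                        # six retries...
    'retry_delay': timedelta(hours=4),   # ...four hours apart
}

dag = DAG('weekly_window', default_args=default_args,
          schedule_interval='@weekly')

run_workload = BashOperator(
    task_id='run_workload',
    bash_command='echo "process the week starting {{ ds }}"',
    dag=dag)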

If one single interval "window" definition (1 week) creates more problems, you could look into the TriggerDagRunOperator, which can also help to solve "2". You'd split the DAG up into separate DAGs that get run by your controller DAG. Then it becomes easier to control how and when each task gets run, and for which kind of interval.
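
Roughly along these lines (a sketch against the 1.8-era API; the dag ids are made up, and the callable can return None to skip triggering that run):

from datetime import datetime

from airflow import DAG
from airflow.operators.dagrun_operator import TriggerDagRunOperator

def conditionally_trigger(context, dag_run_obj):
    # Return the dag_run_obj to fire the target DAG, or None to skip.
    dag_run_obj.payload = {'window_start': context['ds']}
    return dag_run_obj

controller = DAG('bbt_controller', start_date=datetime(2017, 9, 4),
                 schedule_interval='@weekly')

trigger = TriggerDagRunOperator(
    task_id='trigger_daily_bbts',
    trigger_dag_id='bbt_daily',  # the split-out DAG; give it schedule_interval=None
    python_callable=conditionally_trigger,
    dag=controller)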

Yes, the sensor will continue to execute and will duplicate itself for each dagrun. The LatestOnlyOperator wouldn't be helpful there, because execution would already have passed it. I'm not sure if anything has changed in the sensor logic since I last checked, but these tasks keep running on a worker, poking at a configured interval. An alternative is to create your own operator that fails immediately instead of blocking, and is then retried until it either exhausts its retry count or succeeds. That way it doesn't tie up a worker between checks.
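
Something like this sketch (names are made up; check_callable is whatever verifies your condition):

from airflow.exceptions import AirflowException
from airflow.models import BaseOperator
from airflow.utils.decorators import apply_defaults

class FailFastCheckOperator(BaseOperator):
    """Checks a condition once and fails immediately when it isn't met,
    so the scheduler retries it later instead of a sensor occupying a
    worker slot between pokes."""

    @apply_defaults
    def __init__(self, check_callable, *args, **kwargs):
        super(FailFastCheckOperator, self).__init__(*args, **kwargs)
        self.check_callable = check_callable

    def execute(self, context):
        if not self.check_callable(context):
            raise AirflowException('condition not met yet; will retry')

# Used with e.g. retries=24 and retry_delay=timedelta(hours=1), the task
# either succeeds within a day or exhausts its retries and fails.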


2nd: I'm not aware of any filters for this, but see 1: perhaps split it out into a different DAG altogether. I never used subdags myself and have heard some scary stories about them.


3rd: Did you see the "landing times" UI screen, available from the top menu? You can also look into the Airflow database and run your own queries there. Some crude charting can be put together with a bit of Python, queries on that database, and the graphviz tool, specifically dot: you generate a very simple ASCII file and convert it with "dot -Tpng -osomefile.png <mydotfile.dot>". Dot has a rich syntax for adding text to edges, etc.
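
A crude sketch (assuming the 1.8-era task_instance schema and a SQLite metadata DB; adjust the connection for your setup):

import sqlite3

conn = sqlite3.connect('airflow.db')  # path to your metadata database
rows = conn.execute(
    "SELECT task_id, COUNT(*), SUM(duration) "
    "FROM task_instance WHERE dag_id = ? GROUP BY task_id",
    ('my_dag',)).fetchall()

# Emit one labelled node per task; edges could be added the same way.
with open('mydotfile.dot', 'w') as f:
    f.write('digraph tasks {\n')
    for task_id, runs, total_secs in rows:
        label = '%s\\n%d runs, %.1f h' % (task_id, runs, (total_secs or 0) / 3600.0)
        f.write('  "%s" [label="%s"];\n' % (task_id, label))
    f.write('}\n')

# then: dot -Tpng -osomefile.png mydotfile.dot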

If you want to do critical path analysis, Gantt charts could be helpful. You can look into plotly for that:

       https://plot.ly/python/gantt/
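
For instance, a minimal offline sketch with plotly's figure factory (task names taken from your example; the timestamps are made up):

import plotly.figure_factory as ff
from plotly.offline import plot

tasks = [
    dict(Task='bbt_low_loginvsi', Start='2017-09-25 00:00:00', Finish='2017-09-25 18:00:00'),
    dict(Task='bbt_opt_1', Start='2017-09-25 18:00:00', Finish='2017-09-26 02:00:00'),
]

fig = ff.create_gantt(tasks)
plot(fig, filename='dag_gantt.html')  # writes a standalone HTML chart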

Rgds,

Gerard



On Mon, Sep 25, 2017 at 11:50 AM, Wang, Larry <Larry.L.Wang@dell.com> wrote:
Dear experts,

We figured out how to do this using a sensor and the ShortCircuitOperator; the code is attached for details. We ran into several issues, listed below.

1st, it seems m_sensor is also involved in the scheduling and keeps using up task instances; how should we avoid that? Can we make it run only once?

[inline screenshot of the tree view omitted]

2nd, how can we hide the tasks without the bbt prefix in the tree view? Is there any filter for this?

3rd, we are seeking a way to get an execution summary of each run, instead of the tree view, and to draw it in a chart. Is there any way to do this?

E.g. bbt_low_loginvsi ran once and consumed 18 hours;
bbt_opt_1 ran 8 times and consumed 70 hours in total.

Thanks and best regards
Larry Wang



From: Wang, Larry
Sent: Monday, September 25, 2017 2:40 PM
To: 'Gerard Toonstra' <gtoonstra@gmail.com>; 'dev@airflow.incubator.apache.org' <dev@airflow.incubator.apache.org>; Wang, Ming <Ming.Wang@emc.com>; Ren, Xiaoyu <Xiaoyu.Ren@emc.com>
Subject: RE: Qs on airflow

Thanks so much for your valuable comments.

We are using the Jenkins operator to orchestrate Jenkins builds from Airflow.

The thing is that our baseline workloads start in 3 phases to build our Jenkins jobs and never stop. For example, the phase 1 workload starts on day 1, while the phase 2 workload starts on day 3 without stopping the day 1 workload (and they must be running at the same time after day 3). We will use XCom and a sensor, as you mentioned, and see if that works.

Some other questions are listed below; please take a look.

1st, can we set different schedules for a DAG and its subDAGs, e.g. the DAG triggered weekly but the subDAGs triggered hourly?
2nd, is there any way to export the Graph view/Tree view from the Airflow UI?
3rd, how can we add a delay between tasks in one DAG, other than with a sensor?

Thanks and best regards
Larry Wang

From: Wang, Larry
Sent: Monday, September 25, 2017 2:23 PM
To: 'Gerard Toonstra' <gtoonstra@gmail.com>; 'dev@airflow.incubator.apache.org' <dev@airflow.incubator.apache.org>; Wang, Ming <Ming.Wang@emc.com>; Ren, Xiaoyu <Xiaoyu.Ren@emc.com>
Subject: RE: Qs on airflow



From: Gerard Toonstra [mailto:gtoonstra@gmail.com]
Sent: Monday, September 25, 2017 2:01 PM
To: dev@airflow.incubator.apache.org; Wang, Larry <Larry.Wang@emc.com>
Subject: Re: Qs on airflow

Hi Larry,

The important thing to question is what kind of interface that other system has. It is a little bit unusual in the sense that this DAG processes across multiple days. The potential issue I foresee here is that you don't mention a consistent start date for the DAG and you expect it to run in an ad-hoc manner. Most DAGs process "windows" of activity, and you may get some issues with the time always resetting to the defined scheduled start of the DAG.

What most DAGs do to enable this is to have sensor tasks in the DAG. A Hadoop job, for example, executes asynchronously from the originating request. You'd have one task kick off the job and save the job id, then another task fetch that job id through XCom and keep polling with a sensor to verify whether the job finished (either failed or succeeded). Then you allow the DAG to continue or fail.
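
Sketched roughly (my_job_client is a hypothetical wrapper around your job interface; imports are the 1.8-era paths):

from datetime import datetime

from airflow import DAG
from airflow.exceptions import AirflowException
from airflow.operators.python_operator import PythonOperator
from airflow.operators.sensors import BaseSensorOperator

import my_job_client  # hypothetical: submit() and status() against your system

def submit_job(**context):
    return my_job_client.submit('phase1_workload')  # return value lands in XCom

class JobDoneSensor(BaseSensorOperator):
    def poke(self, context):
        job_id = context['ti'].xcom_pull(task_ids='submit_job')
        state = my_job_client.status(job_id)
        if state == 'FAILED':
            raise AirflowException('job %s failed' % job_id)
        return state == 'FINISHED'

dag = DAG('job_poll', start_date=datetime(2017, 9, 4), schedule_interval='@weekly')

submit = PythonOperator(task_id='submit_job', python_callable=submit_job,
                        provide_context=True, dag=dag)
wait = JobDoneSensor(task_id='wait_for_job', poke_interval=300, dag=dag)
submit >> wait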


So this is where the job interface question comes into play. It depends on what you have available to verify the status of jobs; you'd probably write some operators around that job interface. If these jobs never run longer than a week, you could define a week-long interval, so you never cross those boundaries.

Then look into, for example, the LatestOnlyOperator for how to get the left-most execution date (datetime) at which the dagrun was started. There should be other ways to get the exact start datetime of the task you're interested in (when the job was started); from that, figure out the total processing time you need, and then run that sensor every hour as a retry or something similar.
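
The elapsed-time check itself can be as small as this (a sketch; plug it into a ShortCircuitOperator or a sensor's poke method):

from datetime import datetime, timedelta

def enough_time_elapsed(**context):
    # execution_date is the left-most boundary of the dagrun's window
    started = context['execution_date']
    return datetime.utcnow() - started >= timedelta(days=3)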


Alternatives are to look at what these tasks produce. For example, if you drop files into S3 at the end of a process, look for those artifacts as a means of identifying whether the task succeeded or failed. Or, perhaps even easier, have each workload write control files that you can check for in Airflow, which can be easier than implementing a full job control interface.
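
For the S3 case there's already a sensor for that; a sketch (the bucket and key layout are made up, with each workload writing a _DONE control file when it finishes):

from datetime import datetime

from airflow import DAG
from airflow.operators.sensors import S3KeySensor

dag = DAG('control_files', start_date=datetime(2017, 9, 4),
          schedule_interval='@daily')

wait_for_phase1 = S3KeySensor(
    task_id='wait_for_phase1_done',
    bucket_name='my-workload-bucket',
    bucket_key='phase1/{{ ds }}/_DONE',
    poke_interval=600,
    timeout=60 * 60 * 24,  # give up after a day
    dag=dag)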


You could also start the DAG and rely on Airflow's 'retry' functionality: calculate what interval size and how many retries you need to reach 3 days in total, after which the task fails.
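
Back-of-the-envelope: one initial attempt plus 71 retries, an hour apart, spans roughly 72 hours, i.e. 3 days. A sketch (job_finished is a hypothetical status check):

from datetime import datetime, timedelta

from airflow import DAG
from airflow.exceptions import AirflowException
from airflow.operators.python_operator import PythonOperator

def check_done(**context):
    if not job_finished():  # hypothetical: however you test the job's state
        raise AirflowException('not done yet; retrying in an hour')

dag = DAG('retry_window', start_date=datetime(2017, 9, 4),
          schedule_interval='@weekly')

poll = PythonOperator(
    task_id='poll_up_to_three_days',
    python_callable=check_done,
    provide_context=True,
    retries=71,                      # 1 + 71 attempts, hourly...
    retry_delay=timedelta(hours=1),  # ...= ~72 hours = 3 days, then fail
    dag=dag)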


Rgds,

Gerard


On Mon, Sep 25, 2017 at 3:41 AM, Wang, Larry <Larry.L.Wang@dell.com> wrote:
Any updates on this?

We basically want to build the following DAG; the group of BBTs in the rectangle (starting with snap) should be triggered on a daily basis.




From: Wang, Larry
Sent: Sunday, September 24, 2017 11:23 PM
To: 'dev@airflow.apache.org' <dev@airflow.apache.org>
Subject: Qs on airflow

Hi experts,

I am new to Airflow and want to ask some questions to see if it is possible to leverage this tool in our daily work; please see below.

1st, I am implementing a system with 3 levels of workloads. The 1st workload is triggered on day 1; the 2nd workload is triggered on day 3, but only if the 1st job has managed to run for 3 days; and the last workload is triggered on day 5 if both previous workloads are still running. Can this be mapped onto Airflow DAGs?

2nd, given that the 1st workload warms up and keeps consuming certain system resources, a bunch of operations will be kicked off in a queue; is that possible?

Thanks and best regards
Larry Wang


