hadoop-yarn-issues mailing list archives

From "Sangjin Lee (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (YARN-3901) Populate flow run data in the flow_run & flow activity tables
Date Tue, 15 Sep 2015 21:44:47 GMT

    [ https://issues.apache.org/jira/browse/YARN-3901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14746305#comment-14746305 ]

Sangjin Lee commented on YARN-3901:

I can answer and clarify some of [~gtCarrera9]'s questions.

bq. Any special considerations on not directly using getAndAdd (fetch-and-increment) here?

It's essentially using AtomicLong.compareAndSet(). The few lines around it are mostly to keep
pace with the current time. I hope that makes sense.
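For readers unfamiliar with the pattern, a compare-and-set loop that both issues unique values and keeps pace with the wall clock might look like the following. This is a hypothetical sketch in plain Java, not the actual timeline service code; the class and method names are illustrative.

```java
import java.util.concurrent.atomic.AtomicLong;

public class UniqueTimestampGenerator {
  private final AtomicLong lastTs = new AtomicLong(0);

  // Returns a unique, strictly increasing timestamp that never falls
  // behind the current wall-clock time.
  public long getUniqueTimestamp() {
    while (true) {
      long last = lastTs.get();
      // Keep pace with the current time: take the larger of
      // (last issued + 1) and "now".
      long next = Math.max(last + 1, System.currentTimeMillis());
      if (lastTs.compareAndSet(last, next)) {
        return next;
      }
      // Another thread won the race; retry with the fresh value.
    }
  }
}
```

The reason for the loop rather than a plain getAndAdd is the Math.max against the current time: the counter never lags the clock, and the CAS retry handles concurrent writers.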

bq. It's fine to directly read data out in the UT now. We may want to switch this to use readers
sometime later?

I'm actually adding more unit tests that use the reader in YARN-4074.

bq. I noticed we never remove or disable anything in this table, so do we do this by setting
the ttl of the table? Or we create different rows for the same application to differentiate
the day an entity was posted? I think we're using the latter but would like to confirm. 

We're doing the latter. We create a new record in this table any time a new activity is done
for a given day for a flow.
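As a sketch of that per-day record, a simplified row key builder could invert the top-of-the-day timestamp so that more recent days sort first. This is hypothetical and string-based for readability; the real schema stores binary-encoded key components per the YARN-3815 proposal.

```java
public class FlowActivityRowKeySketch {
  static final long MILLIS_PER_DAY = 24L * 60 * 60 * 1000;

  // Truncate a timestamp to the top of its day (UTC).
  static long topOfDay(long ts) {
    return ts - (ts % MILLIS_PER_DAY);
  }

  // Invert so that more recent days sort first in ascending key order.
  static long invert(long ts) {
    return Long.MAX_VALUE - ts;
  }

  // One row per (cluster, day, user, flow): repeated activity on the
  // same day lands on the same row; a new day creates a new row.
  static String rowKey(String cluster, String user, String flow, long ts) {
    return cluster + "!" + invert(topOfDay(ts)) + "!" + user + "!" + flow;
  }
}
```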

bq. Also, with the current design, when the reader tries to get latest activities on a cluster,
it can craft a row key prefix cluster!inv(currTime-24h) and scan from the very beginning to
this record, right? (I'm trying to connect the two pieces together.)

If you're interested in the latest only, then it's even simpler. The prefix is just {{cluster!}}
and we can grab from the beginning. What you mention would get activities for today only,
which is slightly different.
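To illustrate the difference, a prefix scan over sorted row keys can be simulated in memory. This is a hypothetical sketch; a real reader would issue an HBase Scan with a row-prefix filter against the table, but the seek-then-stop behavior is the same.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.NavigableSet;

public class PrefixScanSketch {
  // Simulates a prefix scan over lexicographically sorted row keys.
  static List<String> scanPrefix(NavigableSet<String> sortedRows,
                                 String prefix, int limit) {
    List<String> results = new ArrayList<>();
    // tailSet jumps to the first key >= prefix, like seeking a scanner.
    for (String key : sortedRows.tailSet(prefix, true)) {
      if (!key.startsWith(prefix) || results.size() >= limit) {
        break;
      }
      results.add(key);
    }
    return results;
  }
}
```

Because the day timestamp in the key is inverted, a scan with the bare {{cluster!}} prefix starts at the most recent day and can stop as soon as it has enough rows.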

> Populate flow run data in the flow_run & flow activity tables
> -------------------------------------------------------------
>                 Key: YARN-3901
>                 URL: https://issues.apache.org/jira/browse/YARN-3901
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>          Components: timelineserver
>            Reporter: Vrushali C
>            Assignee: Vrushali C
>         Attachments: YARN-3901-YARN-2928.1.patch, YARN-3901-YARN-2928.2.patch, YARN-3901-YARN-2928.3.patch,
YARN-3901-YARN-2928.4.patch, YARN-3901-YARN-2928.5.patch, YARN-3901-YARN-2928.6.patch, YARN-3901-YARN-2928.7.patch,
> As per the schema proposed in YARN-3815 in https://issues.apache.org/jira/secure/attachment/12743391/hbase-schema-proposal-for-aggregation.pdf
> filing jira to track creation and population of data in the flow run table. 
> Some points that are being considered:
> - Stores per-flow-run information aggregated across applications, and the flow version
> - The RM’s collector writes to it on app creation and app completion
> - The per-app collector writes to it for metric updates, at a slower frequency than the metric updates to the application table
> - Primary key: cluster ! user ! flow ! flow run id
> - Only the latest version of flow-level aggregated metrics will be kept, even if the entity and application levels keep a timeseries.
> - The running_apps column will be incremented on app creation, and decremented on app completion.
> - For min_start_time, the RM writer will simply write a value with a tag for the applicationId. A coprocessor will return the min value of all written values.
> - Upon flushes and compactions, the min value among all the cells of this column will be written to a cell without any tag (empty tag), and all the other cells will be discarded.
> - Ditto for the max_end_time, but then the max will be kept.
> - Tags are represented as #type:value. The type can be not set (0), or can indicate running (1) or complete (2). In those cases (for metrics), only complete-app metrics are collapsed on compaction.
> - The m! values are aggregated (summed) upon read. Only when applications are completed (indicated by tag type 2) can the values be collapsed.
> - The application ids that have completed and been aggregated into the flow numbers are retained in a separate column for historical tracking: we don’t want to re-aggregate those upon replay.
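The min_start_time collapse described above can be sketched in plain Java. This is a hypothetical in-memory model of the rule, not the real HBase coprocessor API; cell tags are modeled as map keys.

```java
import java.util.Collections;
import java.util.Map;
import java.util.NoSuchElementException;

public class MinStartTimeCollapse {
  // At flush/compaction time, keep only the minimum across all tagged
  // cells, rewritten under the empty tag; all other cells are discarded.
  static Map<String, Long> collapseMin(Map<String, Long> taggedCells) {
    long min = Long.MAX_VALUE;
    boolean found = false;
    for (long v : taggedCells.values()) {
      if (v < min) {
        min = v;
      }
      found = true;
    }
    if (!found) {
      throw new NoSuchElementException("no cells to collapse");
    }
    return Collections.singletonMap("", min); // "" models the empty tag
  }
}
```

The max_end_time case is symmetric, keeping the maximum instead.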

This message was sent by Atlassian JIRA
