hadoop-yarn-issues mailing list archives

From "Sangjin Lee (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (YARN-3901) Populate flow run data in the flow_run & flow activity tables
Date Sat, 05 Sep 2015 00:01:50 GMT

    [ https://issues.apache.org/jira/browse/YARN-3901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14731633#comment-14731633 ]

Sangjin Lee commented on YARN-3901:

I went through the code by myself, and also with [~vrushalic] and [~jrottinghuis]. The following
is feedback I have in addition to what's already brought up.

- l.147: At minimum, we should write flow metrics only if the entity is an application; currently
it would trigger for any entity write, which I don't think is correct.
- l.180: extra space before the semicolon
- l.228: use isApplicationEntity(te)
- l.246: this check is unnecessary because you come here only if you're writing an application
- l.304: I think we should check if the metrics is null before getting the row key and the
- l.316: I think it's fine not to check if metric is null as this is a private method and
in both cases it's already checked
- some javadoc and/or comments would be helpful
- l.116: as we discussed offline, the cell timestamp issue will need to be addressed to avoid
collision of puts
- l.144: as we discussed in YARN-4074, we need to fix how cells are added so that they
are stored in the right order
- l.228-229: as we discussed offline, to be consistent we should use GenericObjectMapper.read()
- l.249-251: as we discussed offline, to be consistent we should use GenericObjectMapper.read()

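The guard suggested for l.147 and l.228 can be sketched as follows. This is a minimal, illustrative model only: the Entity class and APPLICATION_TYPE constant are hypothetical stand-ins for the actual TimelineEntity type and its type constants, not the real timeline service API.

```java
// Minimal sketch of the application-only guard for flow metric writes.
// Entity and APPLICATION_TYPE are hypothetical stand-ins; the real code
// would compare a TimelineEntity against its application type constant.
class FlowGuardSketch {
  static class Entity {
    final String type;
    Entity(String type) { this.type = type; }
  }

  static final String APPLICATION_TYPE = "YARN_APPLICATION";

  // Centralized helper so call sites read isApplicationEntity(te)
  // instead of repeating the comparison (see the l.228 comment).
  static boolean isApplicationEntity(Entity te) {
    return APPLICATION_TYPE.equals(te.type);
  }

  // Flow-level metrics should be written only for application entities
  // (the l.147 comment); any other entity type skips the flow run table.
  static boolean shouldWriteFlowMetrics(Entity te) {
    return isApplicationEntity(te);
  }

  public static void main(String[] args) {
    System.out.println(shouldWriteFlowMetrics(new Entity("YARN_APPLICATION"))); // true
    System.out.println(shouldWriteFlowMetrics(new Entity("YARN_CONTAINER")));   // false
  }
}
```
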
I'll add some more comments on details of some of these issues we observed down below.

> Populate flow run data in the flow_run & flow activity tables
> -------------------------------------------------------------
>                 Key: YARN-3901
>                 URL: https://issues.apache.org/jira/browse/YARN-3901
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>          Components: timelineserver
>            Reporter: Vrushali C
>            Assignee: Vrushali C
>         Attachments: YARN-3901-YARN-2928.1.patch, YARN-3901-YARN-2928.2.patch, YARN-3901-YARN-2928.3.patch,
YARN-3901-YARN-2928.WIP.2.patch, YARN-3901-YARN-2928.WIP.patch
> As per the schema proposed in YARN-3815 in https://issues.apache.org/jira/secure/attachment/12743391/hbase-schema-proposal-for-aggregation.pdf
> filing jira to track creation and population of data in the flow run table. 
> Some points that are being considered:
> - Stores per flow run information aggregated across applications, flow version
> - RM’s collector writes to it on app creation and app completion
> - Per-app collector writes to it for metric updates, at a slower frequency than the metric updates to the application table
> - primary key: cluster ! user ! flow ! flow run id
> - Only the latest version of flow-level aggregated metrics will be kept, even if the entity and application levels keep a timeseries.
> - The running_apps column will be incremented on app creation, and decremented on app completion.
> - For min_start_time the RM writer will simply write a value with the tag for the applicationId. A coprocessor will return the min value of all written values.
> - Upon flush and compactions, the min value between all the cells of this column will be written to the cell without any tag (empty tag) and all the other cells will be discarded.
> - Ditto for the max_end_time, but then the max will be kept.
> - Tags are represented as #type:value. The type can be not set (0), or can indicate running (1) or complete (2). In those cases (for metrics) only complete app metrics are collapsed on compaction.
> - The m! values are aggregated (summed) upon read. Only when applications are completed (indicated by tag type 2) can the values be collapsed.
> - The application ids that have completed and been aggregated into the flow numbers are retained in a separate column for historical tracking: we don’t want to re-aggregate for those upon replay.
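
The row key layout and the tagged-cell min collapse described in the quoted proposal can be sketched outside HBase. This is a simplified illustrative model only; the names here (Cell, collapseMin, flowRunRowKey) are hypothetical, and the real logic lives in the HBase schema and a coprocessor:

```java
import java.util.ArrayList;
import java.util.List;

// Simplified model of the flow run table behavior: row keys joined with
// '!', and min_start_time written per application as a tagged cell, then
// collapsed on flush/compaction to a single untagged min value.
class FlowRunSketch {

  // primary key: cluster ! user ! flow ! flow run id
  static String flowRunRowKey(String cluster, String user, String flow,
      long runId) {
    return cluster + "!" + user + "!" + flow + "!" + runId;
  }

  // One min_start_time cell: a value tagged with the applicationId
  // that wrote it.
  static class Cell {
    final String appIdTag;
    final long value;
    Cell(String appIdTag, long value) {
      this.appIdTag = appIdTag;
      this.value = value;
    }
  }

  // On flush/compaction, keep only the min across all tagged cells,
  // written back as a single cell with an empty tag; the per-app
  // tagged cells are discarded.
  static Cell collapseMin(List<Cell> cells) {
    long min = Long.MAX_VALUE;
    for (Cell c : cells) {
      min = Math.min(min, c.value);
    }
    return new Cell("", min);
  }

  public static void main(String[] args) {
    List<Cell> cells = new ArrayList<>();
    cells.add(new Cell("application_1_0001", 1500L));
    cells.add(new Cell("application_1_0002", 1200L));
    cells.add(new Cell("application_1_0003", 1800L));
    Cell collapsed = collapseMin(cells);
    System.out.println(flowRunRowKey("cluster1", "user1", "flowA", 98765L)
        + " min_start_time=" + collapsed.value);
  }
}
```

The same collapse with max instead of min would model max_end_time, per the "ditto" point above.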

This message was sent by Atlassian JIRA
