hadoop-yarn-issues mailing list archives

Site index · List index
Message view « Date » · « Thread »
Top « Date » · « Thread »
From "Zhijie Shen (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (YARN-3051) [Storage abstraction] Create backing storage read interface for ATS readers
Date Wed, 17 Jun 2015 17:26:01 GMT

    [ https://issues.apache.org/jira/browse/YARN-3051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14590133#comment-14590133
] 

Zhijie Shen commented on YARN-3051:
-----------------------------------

[~sjlee0], thanks for your chiming in. Varun, Li and I recently have a offline discussion.
In general, we agreed on focusing on storage-oriented interface (raw data query) together
with a FS implementation of it on this jira, but spinning off change about the user-oriented
interface, web front wire up, and single reader daemon setup and dealing with them separately.
The rationale is to roll out the reader interface fast, and we can work the HBase/Phoenix
implement and web front wireup on a commonly agreed interface in parallel. How do you think
about the plan?

bq.  It's already doing that to some extent, and we should push that some more. For instance,
it might be helpful to create Context. 

Context is useful. Instead of creating a new one, maybe we can reuse the existing Context,
which hosts more content than reader needs. So we just need to let reader put/get the required
information to/from it.

bq. In essence, one way to look at it is that a query onto the storage is really (context)
+ (predicate/filters) + (contents to retrieve). Then we could consolidate arguments into these
coarse-grained things.

+1 LGTM, but I think it's for the query of searching a set of qualified entities, right. For
fetching a single entity, the query may look like (context) + (entity identifier) + (contents
to retrieve)

Another issue I want to raise is that after our performance evaluation, we agreed on using
HBase for raw data and Phoenix for aggregated data. It implies that we need to use HBase to
implement the APIs for the raw entities, while use Phoenix to implement the APIs for the aggregated
data.

> [Storage abstraction] Create backing storage read interface for ATS readers
> ---------------------------------------------------------------------------
>
>                 Key: YARN-3051
>                 URL: https://issues.apache.org/jira/browse/YARN-3051
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>          Components: timelineserver
>    Affects Versions: YARN-2928
>            Reporter: Sangjin Lee
>            Assignee: Varun Saxena
>         Attachments: YARN-3051-YARN-2928.003.patch, YARN-3051-YARN-2928.03.patch, YARN-3051-YARN-2928.04.patch,
YARN-3051.wip.02.YARN-2928.patch, YARN-3051.wip.patch, YARN-3051_temp.patch
>
>
> Per design in YARN-2928, create backing storage read interface that can be implemented
by multiple backing storage implementations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

Mime
View raw message