hadoop-yarn-issues mailing list archives

From "Junping Du (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (YARN-4025) Deal with byte representations of Longs in writer code
Date Tue, 18 Aug 2015 14:12:46 GMT

    [ https://issues.apache.org/jira/browse/YARN-4025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14701309#comment-14701309 ]

Junping Du commented on YARN-4025:

Thanks [~sjlee0] for updating the patch. The 003 patch looks good to me in general. Some minor comments:
In ColumnHelper.java,
+   * @param columnPrefixBytes
+   *          optional prefix to limit columns. If null all columns are
+   *          returned.
+   */
+  public Map<byte[][], Object> readResultsHavingCompoundColumnQualifiers(
Do we handle a null columnPrefixBytes here, as the Javadoc comment says? I saw that we handle
this case explicitly in readResults(), but I didn't see it here. Let me know if I missed
something. In addition, it looks like the existing readResults() only handles the case where
the CQs are all Strings. I think we should rename that method to something like
readResultsWithAllStringColumnQualifiers() to avoid possible confusion. Last but not least,
for the case where the result is null, do we need to log a warning message as in the other
cases in this patch?
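To illustrate the null-prefix convention I mean, here is a minimal sketch. It is not the actual ColumnHelper code; PrefixFilterSketch, filterByPrefix, and the map standing in for the HBase Result cells are all hypothetical, but the guard mirrors what readResults() does: a null prefix returns all columns.

```java
import java.util.Map;
import java.util.NavigableMap;
import java.util.TreeMap;

public class PrefixFilterSketch {
  /**
   * Returns only the columns whose qualifier starts with columnPrefix.
   * A null prefix means no filtering: all columns are returned, matching
   * the behavior documented in the readResults() javadoc.
   */
  public static NavigableMap<String, Object> filterByPrefix(
      NavigableMap<String, Object> columns, String columnPrefix) {
    NavigableMap<String, Object> results = new TreeMap<>();
    for (Map.Entry<String, Object> entry : columns.entrySet()) {
      // Explicit null check so the "if null all columns are returned"
      // contract from the javadoc actually holds.
      if (columnPrefix == null || entry.getKey().startsWith(columnPrefix)) {
        results.put(entry.getKey(), entry.getValue());
      }
    }
    return results;
  }
}
```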

+              byte[][] columnQualifierParts = Separator.VALUES.split(
+                  columnNameParts[1], -1);
Checking the javadoc in Separator and TimelineWriterUtils - "a negative value indicates no
limit on number of segments." - can we define a constant like NO_LIMIT to replace the -1
here? Actually, looking at the implementation in TimelineWriterUtils, 0 also indicates the
same thing (no limit). It also looks like we don't have any tests in
TestTimelineWriterUtils.java; we may want to improve this in the future.
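A minimal sketch of what I have in mind: a named NO_LIMIT constant replacing the bare -1. SeparatorSketch and its split() are hypothetical stand-ins, not the real Separator/TimelineWriterUtils code; the example just leans on the standard String.split(regex, limit) semantics, where a negative limit splits without restriction.

```java
import java.util.regex.Pattern;

public class SeparatorSketch {
  /**
   * Named constant for the split limit: a negative value means no limit
   * on the number of segments, per the Separator/TimelineWriterUtils javadoc.
   */
  public static final int NO_LIMIT = -1;

  /** Splits value on the literal separator, honoring the given limit. */
  public static String[] split(String value, String separator, int limit) {
    // Pattern.quote so the separator is treated literally, not as a regex.
    return value.split(Pattern.quote(separator), limit);
  }
}
```

A call site such as split(columnNameParts[1], "=", SeparatorSketch.NO_LIMIT) then reads unambiguously, instead of the reader having to know what -1 means.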

In ApplicationTable,
- * |            | e!eventId?timestamp?infoKey: |              |              |
+ * |            | e!eventId=timestamp=infoKey: |              |              |
I think we should do the same thing for the corresponding javadoc examples in EntityTable.java.

Everything else looks fine to me.

> Deal with byte representations of Longs in writer code
> ------------------------------------------------------
>                 Key: YARN-4025
>                 URL: https://issues.apache.org/jira/browse/YARN-4025
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>          Components: timelineserver
>            Reporter: Vrushali C
>            Assignee: Sangjin Lee
>         Attachments: YARN-4025-YARN-2928.001.patch, YARN-4025-YARN-2928.002.patch, YARN-4025-YARN-2928.003.patch
> Timestamps are being stored as Longs in hbase by the HBaseTimelineWriterImpl code. There
> seem to be some places in the code where there are conversions between Long to byte[] to
> String for easier argument passing between function calls. Then these values end up being
> converted back to byte[] while storing in hbase.
> It would be better to pass around byte[] or the Longs themselves as applicable.
> This may result in some api changes (store function) as well in adding a few more function
> calls like getColumnQualifier which accepts a pre-encoded byte array. It will be in addition
> to the existing api which accepts a String and the ColumnHelper to return a byte[] column
> name instead of a String one.
> Filing jira to track these changes.

This message was sent by Atlassian JIRA
