pirk-commits mailing list archives

From "ASF GitHub Bot (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (PIRK-4) Add Streaming Implementation for Apache Storm
Date Fri, 19 Aug 2016 19:18:20 GMT

    [ https://issues.apache.org/jira/browse/PIRK-4?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15428679#comment-15428679
] 

ASF GitHub Bot commented on PIRK-4:
-----------------------------------

Github user smarthi commented on a diff in the pull request:

    https://github.com/apache/incubator-pirk/pull/74#discussion_r75537551
  
    --- Diff: src/main/java/org/apache/pirk/query/wideskies/QueryInfo.java ---
    @@ -96,6 +98,42 @@ public QueryInfo(UUID identifierInput, int numSelectorsInput, int hashBitSizeInp
         printQueryInfo();
       }
     
    +  public QueryInfo(Map queryInfoMap)
    +  {
    +    // The Storm Config appears to serialize the map as JSON and read it back in
    +    // with numeric values as longs, so each value is cast to Long and read via
    +    // .intValue(). However, this didn't work in the PirkHashScheme, so the plain
    +    // int casts are tried as well.
    +    try
    +    {
    +      identifier = UUID.fromString((String) queryInfoMap.get("uuid"));
    +      queryType = (String) queryInfoMap.get("queryType");
    +      numSelectors = ((Long) queryInfoMap.get("numSelectors")).intValue();
    +      hashBitSize = ((Long) queryInfoMap.get("hashBitSize")).intValue();
    +      hashKey = (String) queryInfoMap.get("hashKey");
    +      numBitsPerDataElement = ((Long) queryInfoMap.get("numBitsPerDataElement")).intValue();
    +      numPartitionsPerDataElement = ((Long) queryInfoMap.get("numPartitionsPerDataElement")).intValue();
    +      dataPartitionBitSize = ((Long) queryInfoMap.get("dataPartitionsBitSize")).intValue();
    +      useExpLookupTable = (boolean) queryInfoMap.get("useExpLookupTable");
    +      useHDFSExpLookupTable = (boolean) queryInfoMap.get("useHDFSExpLookupTable");
    +      embedSelector = (boolean) queryInfoMap.get("embedSelector");
    +    } catch (ClassCastException e)
    +    {
    +      identifier = UUID.fromString((String) queryInfoMap.get("uuid"));
    +      queryType = (String) queryInfoMap.get("queryType");
    +      numSelectors = (int) queryInfoMap.get("numSelectors");
    +      hashBitSize = (int) queryInfoMap.get("hashBitSize");
    +      hashKey = (String) queryInfoMap.get("hashKey");
    +      numBitsPerDataElement = (int) queryInfoMap.get("numBitsPerDataElement");
    +      numPartitionsPerDataElement = (int) queryInfoMap.get("numPartitionsPerDataElement");
    +      dataPartitionBitSize = (int) queryInfoMap.get("dataPartitionsBitSize");
    +      useExpLookupTable = (boolean) queryInfoMap.get("useExpLookupTable");
    +      useHDFSExpLookupTable = (boolean) queryInfoMap.get("useHDFSExpLookupTable");
    +      embedSelector = (boolean) queryInfoMap.get("embedSelector");
    +
    +    }
    --- End diff --
    
    Why are we doing the same thing in both the try and catch clauses?
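
    One way the duplicated assignments could be collapsed is by casting each numeric value through `java.lang.Number`, which both `Integer` and `Long` extend, so the same code path handles a JSON-round-tripped map and a directly built map. This is only a sketch of the idea; the class and helper names below are hypothetical and not part of the PR:

    ```java
    import java.util.HashMap;
    import java.util.Map;

    public class NumberCastSketch
    {
      // Reads a map value as an int whether the deserializer produced
      // an Integer (direct construction) or a Long (JSON round-trip).
      public static int asInt(Map<String, Object> map, String key)
      {
        return ((Number) map.get(key)).intValue();
      }

      public static void main(String[] args)
      {
        Map<String, Object> fromJson = new HashMap<>();
        fromJson.put("numSelectors", 100L); // JSON round-trip yields Long

        Map<String, Object> direct = new HashMap<>();
        direct.put("numSelectors", 100);    // direct construction yields Integer

        // Both read back as the same int, with no try/catch needed.
        System.out.println(asInt(fromJson, "numSelectors"));
        System.out.println(asInt(direct, "numSelectors"));
      }
    }
    ```

    With this approach the `ClassCastException` fallback becomes unnecessary, since `Number.intValue()` is defined for every boxed numeric type.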


> Add Streaming Implementation for Apache Storm
> ---------------------------------------------
>
>                 Key: PIRK-4
>                 URL: https://issues.apache.org/jira/browse/PIRK-4
>             Project: PIRK
>          Issue Type: Task
>          Components: Responder
>            Reporter: Chris Harris
>            Assignee: Chris Harris
>
> Per the Pirk Roadmap, this is a feature to add support for Apache Storm



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
