hama-dev mailing list archives

From "JongYoon Lim (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HAMA-983) Hama runner for DataFlow
Date Wed, 07 Dec 2016 22:08:58 GMT

    [ https://issues.apache.org/jira/browse/HAMA-983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15730085#comment-15730085 ]

JongYoon Lim commented on HAMA-983:

Hi, Edward. First of all, sorry for the long delay. 

This is the process for testing the beam-hama-runner. 
1. Define a test ParDo, for example, as below. 
    PCollection<KV<Text, LongWritable>> output = input.apply("test",
        ParDo.of(new DoFn<KV<Text, LongWritable>, KV<Text, LongWritable>>() {
          @ProcessElement
          public void processElement(ProcessContext c) {
            for (String word : c.element().toString().split("[^a-zA-Z']+")) {
              if (!word.isEmpty()) {
                c.output(KV.of(new Text(word), new LongWritable(11)));
              }
            }
          }
        }));
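As a side note, the tokenizing regex in the test DoFn can be sanity-checked standalone (plain Java, no Beam or Hama dependencies needed):

```java
import java.util.Arrays;
import java.util.List;

// Standalone check of the tokenizing regex used in the test DoFn.
public class TokenizeCheck {
  static List<String> tokenize(String line) {
    return Arrays.asList(line.split("[^a-zA-Z']+"));
  }

  public static void main(String[] args) {
    // Punctuation and whitespace act as separators; apostrophes are kept.
    System.out.println(tokenize("Don't split, Beam on Hama!"));
    // A leading separator yields an empty first token, which is
    // exactly why the DoFn filters out empty words.
    System.out.println(tokenize(",leading separator"));
  }
}
```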
2. To translate the ParDo, I can pass it to DoFnFunction, which is a subclass of Superstep
and holds an OldDoFn.ProcessContext. Here, I'd like to create the DoFn instance in the Hama
cluster after all translation is finished, and I'm not sure how to do that easily... 
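One possible way to get the DoFn instantiated on the cluster after translation (just a sketch with hypothetical stand-in types, not the real Beam/Hama classes): since Beam DoFns are Serializable, the translator could serialize the fn to bytes, ship them through the job configuration, and deserialize them in the superstep's setup on the worker side. Roughly:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.Base64;

// Sketch: ship a user fn from translation time to the cluster via
// serialization. MyFn is a stand-in for the real OldDoFn.
public class DoFnShippingSketch {
  static class MyFn implements Serializable {
    String process(String input) { return input.toUpperCase(); }
  }

  // Translation side: encode the fn as a string for the job configuration.
  static String serializeFn(Serializable fn) throws IOException {
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
      oos.writeObject(fn);
    }
    return Base64.getEncoder().encodeToString(bos.toByteArray());
  }

  // Cluster side: rebuild the fn inside the superstep's setup.
  static Object deserializeFn(String encoded) throws Exception {
    byte[] bytes = Base64.getDecoder().decode(encoded);
    try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
      return ois.readObject();
    }
  }

  public static void main(String[] args) throws Exception {
    String conf = serializeFn(new MyFn());       // at translation time
    MyFn fn = (MyFn) deserializeFn(conf);        // inside the superstep
    System.out.println(fn.process("hello"));     // prints HELLO
  }
}
```

This mirrors how other distributed runners move user code to workers; the job-configuration key and the setup hook where deserialization happens would of course be Hama-specific.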
  private static <InputT, OutputT> TransformTranslator<ParDo.Bound<InputT, OutputT>> parDo() {
    return new TransformTranslator<ParDo.Bound<InputT, OutputT>>() {
      public void translate(final ParDo.Bound<InputT, OutputT> transform,
          TranslationContext context) {
//        context.addSuperstep(TestSuperStep.class);
        DoFnFunction dofn = new DoFnFunction((OldDoFn<KV, KV>) transform.getFn());
//        context.addSuperstep(dofn.getClass());
      }
    };
  }
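For context on how a parDo() translator factory like the one above usually gets invoked: runners typically keep a registry from transform class to translator and look it up while walking the pipeline graph. A minimal standalone sketch of that dispatch pattern (all type names here are simplified stand-ins, not the actual Beam or runner classes):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of translator dispatch: one translator registered per
// transform type, looked up during pipeline translation.
public class TranslatorRegistrySketch {
  interface TransformTranslator<T> {
    String translate(T transform);
  }

  // Stand-ins for two pipeline transforms.
  static class ParDoTransform { String name = "ParDo"; }
  static class GroupByKeyTransform { String name = "GroupByKey"; }

  // Registry: transform class -> translator.
  static final Map<Class<?>, TransformTranslator<?>> TRANSLATORS = new HashMap<>();
  static {
    TRANSLATORS.put(ParDoTransform.class,
        (TransformTranslator<ParDoTransform>) t -> "superstep:" + t.name);
    TRANSLATORS.put(GroupByKeyTransform.class,
        (TransformTranslator<GroupByKeyTransform>) t -> "shuffle:" + t.name);
  }

  @SuppressWarnings("unchecked")
  static <T> String translate(T transform) {
    TransformTranslator<T> tr = (TransformTranslator<T>) TRANSLATORS.get(transform.getClass());
    return tr.translate(transform);
  }

  public static void main(String[] args) {
    System.out.println(translate(new ParDoTransform()));  // prints superstep:ParDo
  }
}
```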

> Hama runner for DataFlow
> ------------------------
>                 Key: HAMA-983
>                 URL: https://issues.apache.org/jira/browse/HAMA-983
>             Project: Hama
>          Issue Type: Bug
>            Reporter: Edward J. Yoon
>              Labels: gsoc2016
> As you already know, Apache Beam provides a unified programming model for both batch and
streaming inputs.
> The APIs are generally associated with data filtering and transforming, so we'll need
to implement a data processing runner like https://github.com/dapurv5/MapReduce-BSP-Adapter/blob/master/src/main/java/org/apache/hama/mapreduce/examples/WordCount.java
> Also, implementing similarity join could be fun. According to http://www.ruizhang.info/publications/TPDS2015-Heads_Join.pdf,
Apache Hama is the clear winner among Apache Hadoop and Apache Spark.
> Since it consists of transformation, aggregation, and partition computations, I think
it can be implemented using the Apache Beam APIs.

This message was sent by Atlassian JIRA
