apex-dev mailing list archives

From "ASF GitHub Bot (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (APEXMALHAR-2013) HDFS output module for file copy
Date Fri, 18 Mar 2016 09:08:33 GMT

    [ https://issues.apache.org/jira/browse/APEXMALHAR-2013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15201217#comment-15201217 ]

ASF GitHub Bot commented on APEXMALHAR-2013:
--------------------------------------------

Github user DT-Priyanka commented on a diff in the pull request:

    https://github.com/apache/incubator-apex-malhar/pull/216#discussion_r56628057
  
    --- Diff: library/src/main/java/com/datatorrent/lib/io/fs/HDFSFileCopyModule.java ---
    @@ -0,0 +1,123 @@
    +/**
    + * Licensed to the Apache Software Foundation (ASF) under one
    + * or more contributor license agreements.  See the NOTICE file
    + * distributed with this work for additional information
    + * regarding copyright ownership.  The ASF licenses this file
    + * to you under the Apache License, Version 2.0 (the
    + * "License"); you may not use this file except in compliance
    + * with the License.  You may obtain a copy of the License at
    + *
    + *   http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing,
    + * software distributed under the License is distributed on an
    + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
    + * KIND, either express or implied.  See the License for the
    + * specific language governing permissions and limitations
    + * under the License.
    + */
    +
    +package com.datatorrent.lib.io.fs;
    +
    +import javax.validation.constraints.NotNull;
    +
    +import org.apache.hadoop.conf.Configuration;
    +
    +import com.datatorrent.api.Context.PortContext;
    +import com.datatorrent.api.DAG;
    +import com.datatorrent.api.Module;
    +import com.datatorrent.lib.io.block.AbstractBlockReader.ReaderRecord;
    +import com.datatorrent.lib.io.block.BlockMetadata;
    +import com.datatorrent.lib.io.fs.AbstractFileSplitter.FileMetadata;
    +import com.datatorrent.netlet.util.Slice;
    +
    +/**
    + * HDFS file copy module can be used in conjunction with file input modules to
    + * copy files from any file system to HDFS. This module supports parallel writes
    + * to multiple blocks of the same file and then stitches those blocks back
    + * together in the original sequence.
    + * 
    + * The essential operators are wrapped into a single component using the Module API.
    + * 
    + */
    +public class HDFSFileCopyModule implements Module
    +{
    +
    +  /**
    +   * Path of the output directory. The relative paths of copied files will be
    +   * maintained w.r.t. the source directory and the output directory.
    +   */
    +  @NotNull
    +  protected String outputDirectoryPath;
    +
    +  /**
    +   * Flag to control whether an existing file with the same name should be overwritten
    +   */
    +  private boolean overwriteOnConflict;
    +
    +  /**
    +   * Input port for files metadata.
    +   */
    +  public final transient ProxyInputPort<FileMetadata> filesMetadataInput = new ProxyInputPort<FileMetadata>();
    +
    +  /**
    +   * Input port for blocks metadata
    +   */
    +  public final transient ProxyInputPort<BlockMetadata.FileBlockMetadata> blocksMetadataInput = new ProxyInputPort<BlockMetadata.FileBlockMetadata>();
    +
    +  /**
    +   * Input port for blocks data
    +   */
    +  public final transient ProxyInputPort<ReaderRecord<Slice>> blockData = new ProxyInputPort<ReaderRecord<Slice>>();
    +
    +  @Override
    +  public void populateDAG(DAG dag, Configuration conf)
    +  {
    +
    +    //Defining DAG
    --- End diff ---
    
    As per best practices, we should not write comments unless they're really required.
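
For context on the module shown in the diff above, below is a minimal sketch of how it might be wired into an Apex application's populateDAG. This is not part of the patch: the input module class FSInputModule (from APEXMALHAR-2008), its port names (filesMetadataOutput, blocksMetadataOutput, messages) and the setter methods used here are assumptions for illustration only.

    import org.apache.hadoop.conf.Configuration;

    import com.datatorrent.api.DAG;
    import com.datatorrent.api.StreamingApplication;
    import com.datatorrent.lib.io.fs.FSInputModule;
    import com.datatorrent.lib.io.fs.HDFSFileCopyModule;

    public class HDFSFileCopyApp implements StreamingApplication
    {
      @Override
      public void populateDAG(DAG dag, Configuration conf)
      {
        // Input module reading files block-by-block (APEXMALHAR-2008);
        // class name, port names and setFiles() are assumed here.
        FSInputModule inputModule = dag.addModule("HDFSInputModule", new FSInputModule());
        inputModule.setFiles("/user/appuser/input");

        // Copy module from this pull request; setOutputDirectoryPath() is an
        // assumed setter for the outputDirectoryPath property.
        HDFSFileCopyModule copyModule = dag.addModule("HDFSFileCopyModule", new HDFSFileCopyModule());
        copyModule.setOutputDirectoryPath("/user/appuser/output");

        // Connect file metadata, block metadata and block data to the copy
        // module's proxy input ports.
        dag.addStream("FileMetadata", inputModule.filesMetadataOutput, copyModule.filesMetadataInput);
        dag.addStream("BlocksMetadata", inputModule.blocksMetadataOutput, copyModule.blocksMetadataInput);
        dag.addStream("BlocksData", inputModule.messages, copyModule.blockData);
      }
    }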


> HDFS output module for file copy
> --------------------------------
>
>                 Key: APEXMALHAR-2013
>                 URL: https://issues.apache.org/jira/browse/APEXMALHAR-2013
>             Project: Apache Apex Malhar
>          Issue Type: Task
>            Reporter: Yogi Devendra
>            Assignee: Yogi Devendra
>
> To write files to HDFS using a block-by-block approach.
> The main use case is copying files; thus, the original sequence of blocks has to be maintained.

> To achieve this goal, this module would use the information emitted by the HDFS input module
> (APEXMALHAR-2008), viz. FileMetaData, BlockMetaData, and BlockData.
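
To make the block-by-block copy described above concrete, the sketch below illustrates the "stitching" idea: block part files written in parallel are merged back into the target file in their original sequence using plain Hadoop FileSystem APIs. This is a conceptual illustration only; the class and method names are hypothetical and this is not necessarily how the pull request implements the merge.

    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.util.List;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IOUtils;

    public class BlockStitcherSketch
    {
      /**
       * Appends the given block files to the target file in their original
       * order, then deletes the temporary block files.
       */
      public static void stitch(Configuration conf, List<Path> blocksInOrder, Path target) throws IOException
      {
        FileSystem fs = FileSystem.get(conf);
        try (OutputStream out = fs.create(target, true)) {
          for (Path block : blocksInOrder) {
            try (InputStream in = fs.open(block)) {
              // Copy one block's bytes; keep the target stream open across blocks.
              IOUtils.copyBytes(in, out, conf, false);
            }
          }
        }
        for (Path block : blocksInOrder) {
          fs.delete(block, false);
        }
      }
    }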



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
