ambari-dev mailing list archives

From Régis GARMY (JIRA) <j...@apache.org>
Subject [jira] [Commented] (AMBARI-13773) Ambari views file upload corruption
Date Mon, 14 Dec 2015 17:14:46 GMT

    [ https://issues.apache.org/jira/browse/AMBARI-13773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15056302#comment-15056302
] 

Régis GARMY commented on AMBARI-13773:
--------------------------------------

Yes, I think this is the issue.
You may consider keeping the read/write by block.
But the last 1024-byte read block should not be written in full.
The last block written should contain only the bytes from the last read that are required
to complete the transferred file, regardless of its size.
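The fix described above could be sketched as follows. This is a hypothetical illustration of the copy-loop pattern, not the actual Ambari patch; class and method names are my own:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class BlockCopy {

    // Copy the input stream to the output stream in 1024-byte blocks.
    // The key point: write only the number of bytes actually read,
    // so a final partial block is never padded with stale buffer data.
    public static long copy(InputStream in, OutputStream out) throws IOException {
        byte[] buffer = new byte[1024];
        long total = 0;
        int bytesRead;
        while ((bytesRead = in.read(buffer)) != -1) {
            // Writing out.write(buffer) here would emit the full 1024 bytes
            // even on a short final read -- that is the corruption described.
            out.write(buffer, 0, bytesRead);
            total += bytesRead;
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        // 1500 bytes is not a multiple of 1024, so the last read is partial.
        byte[] data = new byte[1500];
        java.util.Arrays.fill(data, (byte) 'x');
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        long copied = copy(new ByteArrayInputStream(data), out);
        System.out.println(copied + " bytes copied, output size " + out.size());
        // prints: 1500 bytes copied, output size 1500
    }
}
```

With the buggy variant (writing the whole buffer each time), the 1500-byte input would come out as 2048 bytes, matching the "file is bigger" symptom in the report.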

> Ambari views file upload corruption
> -----------------------------------
>
>                 Key: AMBARI-13773
>                 URL: https://issues.apache.org/jira/browse/AMBARI-13773
>             Project: Ambari
>          Issue Type: Bug
>          Components: ambari-views
>    Affects Versions: 2.1.0
>         Environment: HDP 2.3.0.0-2557 sandbox loaded by VirtualBox 5 hosted by Windows
7 Pro.
> 4 cores, 8 GB RAM.
>            Reporter: Régis GARMY
>
> When uploading text files (csv), the files are corrupted.
> It affects both the Ambari HDFS Files view and the Ambari Local Files view.
> When I upload a 100-row csv file located on an NTFS Windows 7 filesystem, the file is
transferred without error. But the file is bigger on both HDFS and CentOS storage: it has
102 rows, and the end of the file has been padded with extra data. It looks like data-block
padding.
> The same file is uploaded correctly with the Hue file manager.
> The file size reported by the HDFS web UI is always an integer when uploading via Ambari
(61 KB), but fractional when using Hue (60.04 KB).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
