hadoop-hdfs-issues mailing list archives

From "Eli Collins (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HDFS-466) hdfs_write infinite loop when dfs fails and cannot write files > 2 GB
Date Sat, 03 Apr 2010 01:28:27 GMT

    [ https://issues.apache.org/jira/browse/HDFS-466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12853048#action_12853048 ]

Eli Collins commented on HDFS-466:
----------------------------------

lgtm. 

Tested the latest patch:
* Ran it on trunk with {{hdfs_write temp $((1024*1024*1024*3)) $((1024*1024))}} and confirmed that it created a 3 GB file
* Ran the unit tests on trunk, though they're blocked by HDFS-940 (the libhdfs test uses UnixUserGroupInformation)
* Applied the patch to branch-20 and ran the unit tests there, which passed



> hdfs_write infinite loop when dfs fails and cannot write files > 2 GB
> ---------------------------------------------------------------------
>
>                 Key: HDFS-466
>                 URL: https://issues.apache.org/jira/browse/HDFS-466
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Pete Wyckoff
>            Assignee: Pete Wyckoff
>         Attachments: HADOOP-4619.txt, HADOOP-4619.txt, HADOOP-4619.txt, HADOOP-4619.txt, HADOOP-4619.txt, HADOOP-4619.txt, HADOOP-4619.txt
>
>
> 1. hdfs_write does not check the hdfsWrite return code, so a -1 return code is ignored.
> 2. hdfs_write uses an int for the overall file length.
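
For context, a minimal sketch of the kind of write loop the two points above call for, assuming the standard libhdfs API from {{hdfs.h}} ({{hdfsConnect}}, {{hdfsOpenFile}}, {{hdfsWrite}}); the variable names and buffer size are illustrative, not taken from the attached patch:

{code}
/* Sketch only: check hdfsWrite's return code (point 1) and use a 64-bit
 * tOffset for the overall file length (point 2). */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include "hdfs.h"

int main(int argc, char **argv) {
    if (argc != 3) {
        fprintf(stderr, "usage: %s <path> <total_bytes>\n", argv[0]);
        return 1;
    }
    /* 64-bit total length, so files > 2 GB are representable. */
    tOffset total = strtoll(argv[2], NULL, 10);

    hdfsFS fs = hdfsConnect("default", 0);
    if (!fs) { fprintf(stderr, "hdfsConnect failed\n"); return 1; }

    hdfsFile f = hdfsOpenFile(fs, argv[1], O_WRONLY, 0, 0, 0);
    if (!f) { fprintf(stderr, "hdfsOpenFile failed\n"); hdfsDisconnect(fs); return 1; }

    static char buf[1 << 20];          /* 1 MB zero-filled write buffer */
    tOffset written = 0;
    while (written < total) {
        tOffset remaining = total - written;
        tSize chunk = (tSize)(remaining < (tOffset)sizeof(buf)
                              ? remaining : (tOffset)sizeof(buf));
        tSize ret = hdfsWrite(fs, f, buf, chunk);
        /* Bail out on error instead of looping forever. */
        if (ret == -1) {
            fprintf(stderr, "hdfsWrite failed\n");
            hdfsCloseFile(fs, f);
            hdfsDisconnect(fs);
            return 1;
        }
        written += ret;
    }
    hdfsCloseFile(fs, f);
    hdfsDisconnect(fs);
    return 0;
}
{code}

Because the loop advances by the actual number of bytes reported written and exits on -1, a DFS failure surfaces as an error instead of an infinite loop, and the 64-bit counter avoids the 2 GB wraparound.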

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

