hadoop-hdfs-issues mailing list archives

From "Eli Collins (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HDFS-727) bug setting block size hdfsOpenFile
Date Tue, 17 Nov 2009 04:23:39 GMT

    [ https://issues.apache.org/jira/browse/HDFS-727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12778713#action_12778713 ]

Eli Collins commented on HDFS-727:

Hey Dhruba,

Now that I can run the libhdfs test on trunk (HDFS-756), I ran the libhdfs test without the patch
from this jira and confirmed that, on an Ubuntu 9.10 64-bit host, the test fails due to this bug.
{{fprintf(stderr, "jBlockSize=%lld\n", jBlockSize);}} in hdfsOpenFile shows the corrupt value
in the test output, and the failure ("could only be replicated to 0 nodes") is the same failure
I saw before (no datanode will accept such a large block size).

     [exec] jBlockSize 47403621154816
     [exec] 09/11/16 20:08:06 WARN hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException:
     File /tmp/testfile.txt could only be replicated to 0 nodes, instead of 1

The patch still applies against trunk and 20.1 and 20.2.


> bug setting block size hdfsOpenFile 
> ------------------------------------
>                 Key: HDFS-727
>                 URL: https://issues.apache.org/jira/browse/HDFS-727
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Eli Collins
>            Assignee: Eli Collins
>             Fix For: 0.20.2, 0.21.0
>         Attachments: hdfs727.patch
> In hdfsOpenFile in libhdfs, invokeMethod needs to cast the block size argument to a jlong
> so a full 8 bytes are passed (rather than 4 plus some garbage, which causes writes to fail
> due to a bogus block size).

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
