hadoop-hdfs-issues mailing list archives

From "Colin Patrick McCabe (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-9446) tSize of libhdfs in hadoop-2.7.1 is still int32_t
Date Mon, 30 Nov 2015 23:07:10 GMT

    [ https://issues.apache.org/jira/browse/HDFS-9446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15032684#comment-15032684 ]

Colin Patrick McCabe commented on HDFS-9446:

Please do not change the type of {{tSize}}.  It would silently break everyone using libhdfs,
causing crashes and memory corruption.  Instead, we are probably going to add a new API for
creating files that takes a block size wider than 32 bits.  The other uses of {{tSize}} are
all places where 31 bits is enough (reading into and out of buffers, whose sizes can't
exceed 31 bits anyway).

> tSize of libhdfs in hadoop-2.7.1 is still int32_t
> -------------------------------------------------
>                 Key: HDFS-9446
>                 URL: https://issues.apache.org/jira/browse/HDFS-9446
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Glen Cao
> HDFS-466 (https://issues.apache.org/jira/browse/HDFS-466) claims that the issue I mention
> in the title was fixed. However, in the hadoop-2.7.1 source (hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.h)
> I find that tSize is still typedef-ed as int32_t, and I can't find any compilation option
> that changes it.
> In hdfs.h:
> 75     typedef int32_t   tSize; /// size of data for read/write io ops

This message was sent by Atlassian JIRA
