hadoop-hdfs-dev mailing list archives

From "Colin Patrick McCabe (JIRA)" <j...@apache.org>
Subject [jira] [Resolved] (HDFS-8949) hdfsOpenFile() in HDFS C API does not support block sizes larger than 2GB
Date Fri, 22 Jan 2016 19:24:39 GMT

     [ https://issues.apache.org/jira/browse/HDFS-8949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Colin Patrick McCabe resolved HDFS-8949.
----------------------------------------
    Resolution: Duplicate

Duplicate of HDFS-9541

> hdfsOpenFile() in HDFS C API does not support block sizes larger than 2GB
> -------------------------------------------------------------------------
>
>                 Key: HDFS-8949
>                 URL: https://issues.apache.org/jira/browse/HDFS-8949
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 2.7.1
>            Reporter: Vlad Berindei
>
> hdfsOpenFile() has an int32 blocksize parameter which restricts the size of the blocks to 2GB, while FileSystem.create accepts a long blockSize parameter.
> https://github.com/apache/hadoop/blob/c1d50a91f7c05e4aaf4655380c8dcd11703ff158/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.h#L395 - int32 blocksize
> https://hadoop.apache.org/docs/current/api/org/apache/hadoop/fs/FileSystem.html#create(org.apache.hadoop.fs.Path, boolean, int, short, long) - long blockSize



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
