hadoop-hdfs-issues mailing list archives

From "dhruba borthakur (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HDFS-578) Support for using server default values for blockSize and replication when creating a file
Date Tue, 01 Sep 2009 00:51:32 GMT

    [ https://issues.apache.org/jira/browse/HDFS-578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12749717#action_12749717 ]

dhruba borthakur commented on HDFS-578:
---------------------------------------

> My current thinking is if the client chooses to use server default for blockSize, it
> should use server defaults for io.bytes.per.checksum and dfs.write.packet.size at the
> same time. What do you think?

Sounds good to me.
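
For concreteness, here is a minimal sketch of the direction being discussed: the client asks the NameNode for its defaults instead of trusting client-side config files. FsServerDefaults and FileSystem#getServerDefaults() are the shape being proposed under HADOOP-4952/HDFS-578, so treat the exact names, signatures, and the file path below as illustrative rather than final.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FsServerDefaults;
import org.apache.hadoop.fs.Path;

public class CreateWithServerDefaults {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());

    // Ask the server (NameNode) what its defaults are, rather than
    // using whatever happens to be in the client's config files.
    FsServerDefaults d = fs.getServerDefaults();

    FSDataOutputStream out = fs.create(
        new Path("/tmp/server-defaults.dat"), // illustrative path
        true,                                 // overwrite
        d.getFileBufferSize(),                // server's io buffer size
        d.getReplication(),                   // server default replication
        d.getBlockSize());                    // server default block size
    out.writeBytes("created with server default blockSize/replication\n");
    out.close();
  }
}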

> The bytesPerChecksum and packetSize are always SS

The default value of 512 is suitable for random reads, isn't it? If an application knows
that it does not need random-read support for a file, it can specify the bytesPerChecksum
to be larger than the default. Don't we want to allow that flexibility?
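
As a sketch of that flexibility (for illustration only: io.bytes.per.checksum is the existing client-side key, but the 4096 value and the path are made up):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SequentialOnlyWrite {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // The 512-byte default keeps random reads cheap, since a read only
    // has to verify the small checksum chunks it actually touches. A file
    // that will only ever be read sequentially can afford a larger chunk
    // (4096 here is an illustrative choice, not a recommendation).
    conf.setInt("io.bytes.per.checksum", 4096);

    FileSystem fs = FileSystem.get(conf);
    FSDataOutputStream out = fs.create(new Path("/tmp/sequential-only.dat"));
    out.writeBytes("written with a 4096-byte checksum chunk\n");
    out.close();
  }
}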

> Support for using server default values for blockSize and replication when creating a file
> -------------------------------------------------------------------------------------------
>
>                 Key: HDFS-578
>                 URL: https://issues.apache.org/jira/browse/HDFS-578
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: hdfs client, name-node
>            Reporter: Kan Zhang
>            Assignee: Kan Zhang
>
> This is a sub-task of HADOOP-4952. This improvement makes it possible for a client to
> specify that it wants to use the server default values for the blockSize and replication
> params when creating a file.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

