hadoop-common-issues mailing list archives

From "Steve Loughran (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HADOOP-11128) abstracting out the scale tests for FileSystem Contract tests
Date Thu, 25 Sep 2014 09:31:34 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-11128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14147575#comment-14147575 ]

Steve Loughran commented on HADOOP-11128:
-----------------------------------------

# The sizes and even the test timeouts can be configured by properties in the contract-test files.
# Some of the >5GB tests, which exercise specific situations in the object stores, may need to skip themselves explicitly if the test machine isn't configured for the big file size (see the sketch after this list).
# One test I don't think anyone has written yet (I may have done some of it in openstack) is the wide & deep tree; that may exhibit different problems from the wide directory and the deep path.
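
As a minimal sketch of point 2: a huge-file test could read its target size from the contract-test configuration and skip itself via JUnit's Assume when the machine isn't opted in. The property key {{fs.contract.scale.test.huge.filesize}} and the class name here are illustrative assumptions, not existing keys:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.junit.Assume;
import org.junit.Test;

public class TestHugeFileUpload {

  // Hypothetical property key; the real contract-test key may differ.
  private static final String KEY_HUGE_FILESIZE =
      "fs.contract.scale.test.huge.filesize";
  private static final long FIVE_GB = 5L * 1024 * 1024 * 1024;

  @Test
  public void testUploadOver5GB() throws Exception {
    Configuration conf = new Configuration();
    // Size defaults to 0, i.e. "not opted in".
    long size = conf.getLong(KEY_HUGE_FILESIZE, 0L);
    // Skip (rather than fail) when the tester hasn't enabled big files.
    Assume.assumeTrue(
        "configured size " + size + " is below 5GB; skipping",
        size >= FIVE_GB);
    // ... create a file of `size` bytes against the target filesystem ...
  }
}
{code}

Skipping via Assume keeps the run green on machines without the bandwidth or disk for multi-gigabyte uploads, while still running the test wherever the property opts in.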

> abstracting out the scale tests for FileSystem Contract tests
> -------------------------------------------------------------
>
>                 Key: HADOOP-11128
>                 URL: https://issues.apache.org/jira/browse/HADOOP-11128
>             Project: Hadoop Common
>          Issue Type: Improvement
>            Reporter: Juan Yu
>
> Currently we have some scale tests for openstack and s3a. For now we'll just trust HDFS to handle files >5GB and to delete thousands of files in a directory properly.
> We should abstract out the scale tests so it can be applied to all FileSystems.
> A few things to consider for scale tests:
> Scale tests rely on the tester having good/stable upload bandwidth and might need large disk space; they need to be configurable or optional.
> Scale tests might take a long time to finish; consider making the test timeout configurable if possible (a sketch follows below).
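
A minimal sketch of the configurable-timeout idea, assuming a hypothetical property {{fs.contract.test.timeout}} in seconds; the real key, default, and base class would be chosen by whoever implements this, not by this issue:

{code:java}
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.conf.Configuration;
import org.junit.Rule;
import org.junit.rules.Timeout;

public abstract class AbstractScaleTestBase {

  // Hypothetical key and default; actual names are up to the patch.
  private static final String KEY_TEST_TIMEOUT = "fs.contract.test.timeout";
  private static final long DEFAULT_TIMEOUT_SECONDS = 600;

  // JUnit 4 rule: every test method gets a timeout read from configuration,
  // so testers on slow links can raise it without editing test source.
  @Rule
  public Timeout testTimeout = new Timeout(
      new Configuration().getLong(KEY_TEST_TIMEOUT, DEFAULT_TIMEOUT_SECONDS),
      TimeUnit.SECONDS);
}
{code}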



