hadoop-hdfs-issues mailing list archives

From "Kai Zheng (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-12398) Use JUnit Parameterized test suite in TestWriteReadStripedFile
Date Sat, 09 Sep 2017 07:30:02 GMT

    [ https://issues.apache.org/jira/browse/HDFS-12398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16159810#comment-16159810
] 

Kai Zheng commented on HDFS-12398:
----------------------------------

Hi [~HuafengWang],

Thanks for working on the tests.  The idea seems good, and after the refactoring there is
much less code. However, the benefit of this refactoring doesn't look very obvious to me,
and I'd prefer the current way for a few reasons:

1. The current way of having many test methods is much more readable;
2. It's also easier to debug when some of them fail;
3. More importantly, every test case (contained in a test method) needs a brand new cluster
to start with;
4. The timeout can be fine-tuned for each test method in the current way. 

I'm not completely sure these are all correct, since I don't fully understand the new approach. 

Note that in the Jenkins build, the changed {{TestWriteReadStripedFile}} times out.

===The current way, having many test methods===
{code}
  @Test
  public void testFileSmallerThanOneStripe2() throws Exception {
    testOneFileUsingDFSStripedInputStream("/ec/SmallerThanOneStripe",
        cellSize + 123);
    testOneFileUsingDFSStripedInputStream("/ec/SmallerThanOneStripe2",
        cellSize + 123, true);
  }

  @Test
  public void testFileEqualsWithOneStripe() throws Exception {
    testOneFileUsingDFSStripedInputStream("/ec/EqualsWithOneStripe",
        cellSize * dataBlocks);
    testOneFileUsingDFSStripedInputStream("/ec/EqualsWithOneStripe2",
        cellSize * dataBlocks, true);
  }

  @Test
  public void testFileMoreThanOneStripe1() throws Exception {
    testOneFileUsingDFSStripedInputStream("/ec/MoreThanOneStripe1",
        cellSize * dataBlocks + 123);
    testOneFileUsingDFSStripedInputStream("/ec/MoreThanOneStripe12",
        cellSize * dataBlocks + 123, true);
  }

...

{code}
===After the refactoring===
{code}
+  /**
+   * Tests for different size of striped files.
+   */
+  @RunWith(Parameterized.class)
+  public static class TestReadStripedFiles extends StripedFileIOTestBase {
+    @Parameterized.Parameters(name = "{index}: {0}")
+    public static Iterable<Object[]> data() {
+      return ImmutableList.<Object[]>builder()
+          .add(new Object[]{"/ec/EmptyFile", 0})
+          .add(new Object[]{"/ec/SmallerThanOneCell", 1})
+          .add(new Object[]{"/ec/SmallerThanOneCell2", CELL_SIZE - 1})
+          .add(new Object[]{"/ec/EqualsWithOneCell", CELL_SIZE})
+          .add(new Object[]{"/ec/SmallerThanOneStripe", CELL_SIZE + 123})
+          .add(new Object[]{"/ec/SmallerThanOneStripe2", STRIPE_SIZE - 1})
+          .add(new Object[]{"/ec/EqualsWithOneStripe", STRIPE_SIZE})
+          .add(new Object[]{"/ec/MoreThanOneStripe", STRIPE_SIZE + 123})
+          .add(new Object[]{"/ec/MoreThanOneStripe2", STRIPE_SIZE * 2 + 123})
+          .add(new Object[]{"/ec/LessThanFullBlockGroup",
+              STRIPE_SIZE * (STRIPES_PER_BLOCK - 1) + CELL_SIZE})
+          .add(new Object[]{"/ec/FullBlockGroup", BLOCK_SIZE * DATA_BLOCKS})
+          .add(new Object[]{"/ec/MoreThanABlockGroup",
+              BLOCK_SIZE * DATA_BLOCKS + 123})
+          .add(new Object[]{"/ec/MoreThanABlockGroup2",
+              BLOCK_SIZE * DATA_BLOCKS + CELL_SIZE + 123})
+          .add(new Object[]{"/ec/MoreThanABlockGroup3",
+              BLOCK_SIZE * DATA_BLOCKS * 3 + STRIPE_SIZE + CELL_SIZE + 123})
+          .build();
+    }
{code}
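For illustration, the cross product the issue description mentions (file size x datanode-failure flag) could be generated in one place and handed to the Parameterized runner's {{@Parameters}} method, instead of listing each size twice per test method. A minimal sketch; the class name, paths, and constant values below are hypothetical, not taken from the patch:

```java
import java.util.ArrayList;
import java.util.List;

public class CrossProductDemo {
  // Hypothetical constants mirroring the ones used in the patch.
  static final int CELL_SIZE = 64 * 1024;
  static final int DATA_BLOCKS = 6;
  static final int STRIPE_SIZE = CELL_SIZE * DATA_BLOCKS;

  /**
   * Builds {path, fileSize, withDataNodeFailure} tuples, i.e. the full
   * cross product of file sizes and the failure flag, suitable for a
   * Parameterized runner's @Parameters method.
   */
  static List<Object[]> data() {
    int[] sizes = {0, 1, CELL_SIZE - 1, CELL_SIZE, CELL_SIZE + 123,
        STRIPE_SIZE - 1, STRIPE_SIZE, STRIPE_SIZE + 123};
    List<Object[]> params = new ArrayList<>();
    for (int size : sizes) {
      for (boolean failure : new boolean[]{false, true}) {
        params.add(new Object[]{"/ec/size" + size, size, failure});
      }
    }
    return params;
  }

  public static void main(String[] args) {
    // 8 sizes x 2 failure modes = 16 cases.
    System.out.println(data().size());
  }
}
```

This would address the duplication, though it doesn't by itself answer the per-method timeout and debuggability concerns above.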

> Use JUnit Parameterized test suite in TestWriteReadStripedFile
> --------------------------------------------------------------
>
>                 Key: HDFS-12398
>                 URL: https://issues.apache.org/jira/browse/HDFS-12398
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: test
>            Reporter: Huafeng Wang
>            Assignee: Huafeng Wang
>            Priority: Trivial
>         Attachments: HDFS-12398.001.patch, HDFS-12398.002.patch
>
>
> The TestWriteReadStripedFile test is basically doing the full cross product of file size with
data node failure or not. It's better to use a JUnit Parameterized test suite.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
