flink-issues mailing list archives

From GJL <...@git.apache.org>
Subject [GitHub] flink pull request #4896: [FLINK-7909] Unify Flink test bases
Date Thu, 21 Dec 2017 11:53:53 GMT
Github user GJL commented on a diff in the pull request:

    https://github.com/apache/flink/pull/4896#discussion_r158259149
  
    --- Diff: flink-connectors/flink-connector-filesystem/src/test/java/org/apache/flink/streaming/connectors/fs/RollingSinkSecuredITCase.java
---
    @@ -215,23 +211,15 @@ private static void startSecureFlinkClusterWithRecoveryModeEnabled()
{
     			dfs.mkdirs(new Path("/flink/checkpoints"));
     			dfs.mkdirs(new Path("/flink/recovery"));
     
    -			org.apache.flink.configuration.Configuration config = new org.apache.flink.configuration.Configuration();
    -
    -			config.setInteger(ConfigConstants.LOCAL_NUMBER_TASK_MANAGER, 1);
    -			config.setInteger(ConfigConstants.TASK_MANAGER_NUM_TASK_SLOTS, DEFAULT_PARALLELISM);
    -			config.setBoolean(ConfigConstants.LOCAL_START_WEBSERVER, false);
    -			config.setInteger(ConfigConstants.LOCAL_NUMBER_JOB_MANAGER, 3);
    -			config.setString(HighAvailabilityOptions.HA_MODE, "zookeeper");
    -			config.setString(CoreOptions.STATE_BACKEND, "filesystem");
    -			config.setString(HighAvailabilityOptions.HA_ZOOKEEPER_CHECKPOINTS_PATH, hdfsURI +
"/flink/checkpoints");
    -			config.setString(HighAvailabilityOptions.HA_STORAGE_PATH, hdfsURI + "/flink/recovery");
    -			config.setString("state.backend.fs.checkpointdir", hdfsURI + "/flink/checkpoints");
    -
    -			SecureTestEnvironment.populateFlinkSecureConfigurations(config);
    -
    -			cluster = TestBaseUtils.startCluster(config, false);
    -			TestStreamEnvironment.setAsContext(cluster, DEFAULT_PARALLELISM);
    +			MINICLUSTER_CONFIGURATION.setBoolean(ConfigConstants.LOCAL_START_WEBSERVER, false);
    --- End diff --
    
    Because of this:
    ```
     private static void skipIfHadoopVersionIsNotAppropriate() {
     	// Skips all tests if the Hadoop version doesn't match
     	String hadoopVersionString = VersionInfo.getVersion();
     	String[] split = hadoopVersionString.split("\\.");
     	if (split.length != 3) {
     		throw new IllegalStateException("Hadoop version was not of format 'X.X.X': " + hadoopVersionString);
     	}
     	Assume.assumeTrue(
     		// check whether we're running Hadoop version >= 3.x.x
     		Integer.parseInt(split[0]) >= 3
     	);
     }
    ```
    This check skips the test unless the Hadoop major version is at least 3, so I assume the test never actually runs.
    
    I wonder if the test has ever worked correctly.
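
    To illustrate the skip behavior, here is a minimal, self-contained sketch (the class and method names are hypothetical, not from the PR) of the version gate quoted above. In the real test the boolean below is passed to JUnit's `Assume.assumeTrue(...)`, which marks the test as skipped (not failed) when it is false, so on any Hadoop 2.x.x dependency the whole test class is silently skipped:
    ```java
    public class HadoopVersionGateSketch {

    	// Mirrors the parsing in skipIfHadoopVersionIsNotAppropriate():
    	// returns true only when the major version component is >= 3.
    	static boolean runsOnHadoop(String version) {
    		String[] split = version.split("\\.");
    		if (split.length != 3) {
    			throw new IllegalStateException(
    				"Hadoop version was not of format 'X.X.X': " + version);
    		}
    		// The real test feeds this result to Assume.assumeTrue(...),
    		// which skips the test when the result is false.
    		return Integer.parseInt(split[0]) >= 3;
    	}

    	public static void main(String[] args) {
    		System.out.println(runsOnHadoop("2.8.3")); // false: test skipped
    		System.out.println(runsOnHadoop("3.1.1")); // true: test would run
    	}
    }
    ```
    If the build resolves a Hadoop 2.x.x artifact, the assumption fails on every run, which is what makes the configuration in this diff effectively dead code.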


---
