flink-issues mailing list archives

From "ASF GitHub Bot (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (FLINK-1753) Add more tests for Kafka Connectors
Date Mon, 13 Apr 2015 10:09:12 GMT

    [ https://issues.apache.org/jira/browse/FLINK-1753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14492224#comment-14492224 ]

ASF GitHub Bot commented on FLINK-1753:
---------------------------------------

Github user rmetzger commented on a diff in the pull request:

    https://github.com/apache/flink/pull/589#discussion_r28227177
  
    --- Diff: flink-staging/flink-streaming/flink-streaming-connectors/src/main/java/org/apache/flink/streaming/connectors/kafka/api/simple/KafkaTopicUtils.java
---
    @@ -65,36 +75,145 @@ public void createTopic(String topicName, int numOfPartitions, int replicationFa
     				LOG.warn("Kafka topic \"{}\" already exists. Returning without action.", topicName);
     			}
     		} else {
    +			LOG.info("Connecting zookeeper");
    +
    +			initZkClient();
     			AdminUtils.createTopic(zkClient, topicName, numOfPartitions, replicationFactor, topicConfig);
    +			closeZkClient();
    +		}
    +	}
    +
    +	public String getBrokerList(String topicName) {
    +		return getBrokerAddressList(getBrokerAddresses(topicName));
    +	}
    +
    +	public String getBrokerList(String topicName, int partitionId) {
    +		return getBrokerAddressList(getBrokerAddresses(topicName, partitionId));
    +	}
    +
    +	public Set<String> getBrokerAddresses(String topicName) {
    +		int numOfPartitions = getNumberOfPartitions(topicName);
    +
    +		HashSet<String> brokers = new HashSet<String>();
    +		for (int i = 0; i < numOfPartitions; i++) {
    +			brokers.addAll(getBrokerAddresses(topicName, i));
     		}
    +		return brokers;
    +	}
    +
    +	public Set<String> getBrokerAddresses(String topicName, int partitionId) {
    +		PartitionMetadata partitionMetadata = waitAndGetPartitionMetadata(topicName, partitionId);
    +		Collection<Broker> inSyncReplicas = JavaConversions.asJavaCollection(partitionMetadata.isr());
    +
    +		HashSet<String> addresses = new HashSet<String>();
    +		for (Broker broker : inSyncReplicas) {
    +			addresses.add(broker.connectionString());
    +		}
    +		return addresses;
    +	}
    +
    +	private static String getBrokerAddressList(Set<String> brokerAddresses) {
    +		StringBuilder brokerAddressList = new StringBuilder("");
    +		for (String broker : brokerAddresses) {
    +			brokerAddressList.append(broker);
    +			brokerAddressList.append(',');
    +		}
    +		brokerAddressList.deleteCharAt(brokerAddressList.length() - 1);
    +
    +		return brokerAddressList.toString();
     	}
     
     	public int getNumberOfPartitions(String topicName) {
    -		Seq<PartitionMetadata> partitionMetadataSeq = getTopicInfo(topicName).partitionsMetadata();
    +		Seq<PartitionMetadata> partitionMetadataSeq = getTopicMetadata(topicName).partitionsMetadata();
     		return JavaConversions.asJavaCollection(partitionMetadataSeq).size();
     	}
     
    -	public String getLeaderBrokerAddressForTopic(String topicName) {
    -		TopicMetadata topicInfo = getTopicInfo(topicName);
    +	public PartitionMetadata waitAndGetPartitionMetadata(String topicName, int partitionId) {
    +		PartitionMetadata partitionMetadata;
    +		while (true) {
    +			try {
    +				partitionMetadata = getPartitionMetadata(topicName, partitionId);
    +				return partitionMetadata;
    +			} catch (LeaderNotAvailableException e) {
    +				// try fetching metadata again
    --- End diff --
    
    I would suggest LOG.debug-ing the exception here, so the reason for the retry is not silently swallowed
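
    The suggestion above, as a minimal self-contained sketch: the Kafka types (`LeaderNotAvailableException`, the metadata fetch) are stubbed for illustration, and `System.err` stands in for the real `LOG.debug` call, since neither Kafka nor SLF4J is on the classpath here.

    ```java
    // Sketch of the retry loop with the reviewer's suggested debug logging.
    // All Kafka-specific names below are stand-ins, not the real API.
    public class RetryWithDebugLog {

        // Stand-in for kafka.common.LeaderNotAvailableException (assumption).
        static class LeaderNotAvailableException extends RuntimeException {
            LeaderNotAvailableException(String msg) { super(msg); }
        }

        private static int attempts = 0;

        // Simulated metadata fetch: fails twice, then succeeds, mimicking a
        // leader election that has not finished yet.
        static String getPartitionMetadata(String topic, int partition) {
            if (++attempts < 3) {
                throw new LeaderNotAvailableException("no leader elected yet");
            }
            return topic + "-" + partition;
        }

        static String waitAndGetPartitionMetadata(String topic, int partition) {
            while (true) {
                try {
                    return getPartitionMetadata(topic, partition);
                } catch (LeaderNotAvailableException e) {
                    // Reviewer's suggestion: log the cause of each retry at
                    // debug level instead of swallowing it (LOG.debug(...) in
                    // the real code; System.err keeps this sketch runnable).
                    System.err.println("DEBUG: leader not available, retrying: "
                            + e.getMessage());
                }
            }
        }

        public static void main(String[] args) {
            System.out.println(waitAndGetPartitionMetadata("testTopic", 0));
        }
    }
    ```

    The loop still retries indefinitely; only the visibility of the swallowed exception changes, which is what the review comment asks for.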


> Add more tests for Kafka Connectors
> -----------------------------------
>
>                 Key: FLINK-1753
>                 URL: https://issues.apache.org/jira/browse/FLINK-1753
>             Project: Flink
>          Issue Type: Improvement
>          Components: Streaming
>    Affects Versions: 0.9
>            Reporter: Robert Metzger
>            Assignee: Gábor Hermann
>
> The current {{KafkaITCase}} is only doing a single test.
> We need to refactor that test so that it brings up a Kafka/ZooKeeper server and then performs various tests:
> Tests to include:
> - A topology with non-string types MERGED IN 359b39c3
> - A topology with a custom Kafka Partitioning class MERGED IN 359b39c3
> - A topology testing the regular {{KafkaSource}}. MERGED IN 359b39c3
> - Kafka broker failure.
> - Flink TaskManager failure



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
