manifoldcf-dev mailing list archives

From "Karl Wright (JIRA)" <>
Subject [jira] [Commented] (CONNECTORS-1162) Apache Kafka Output Connector
Date Wed, 22 Jul 2015 00:03:04 GMT


Karl Wright commented on CONNECTORS-1162:

Hmm, I still don't see proper setup in this code.

Notice the corresponding code in AlfrescoConnectorTest:

  @Mock
  private AlfrescoClient client;

  private AlfrescoConnector connector;

  @Before
  public void setup() throws Exception {
    connector = new AlfrescoConnector();
    connector.setClient(client);

    when(client.fetchNodes(anyInt(), anyInt(), Mockito.any(AlfrescoFilters.class)))
            .thenReturn(new AlfrescoResponse(
                    0, 0, "", "", Collections.<Map<String, Object>>emptyList()));
  }

Here, "client" corresponds to your "producer" object.  Your connector needs a protected
method, for testing, called "setProducer()", which corresponds to the "setClient()" used
here, and which I know you had before.

The @Before-annotated method runs before each test, and should create both the KafkaProducer
object and the connector object.  Be sure to annotate the KafkaProducer field with @Mock so
that Mockito tracks it.  Calling a connector method, like addOrReplaceDocument(), should then
result in call(s) to your mocked producer object, so "when().thenReturn()" should work, with
"verify()" after that.
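The flow described above — inject a mock producer, call the connector method, then check the
interaction — can be sketched roughly as follows. This is a hand-rolled, self-contained sketch
with hypothetical names (FakeProducer, setProducer(), a String-based addOrReplaceDocument());
the real test would use Mockito's @Mock, when().thenReturn(), and verify() against the actual
KafkaProducer and connector APIs instead.

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in for the Kafka producer; records calls the way a Mockito mock would.
class FakeProducer {
    final List<String> sentDocuments = new ArrayList<>();

    void send(String document) {
        sentDocuments.add(document);
    }
}

// Minimal connector sketch mirroring the AlfrescoConnector pattern (hypothetical names).
class KafkaOutputConnector {
    private FakeProducer producer;

    // Protected setter so tests can inject a mock, like setClient() in AlfrescoConnector.
    protected void setProducer(FakeProducer producer) {
        this.producer = producer;
    }

    public void addOrReplaceDocument(String document) {
        producer.send(document);
    }
}

public class KafkaConnectorTestSketch {
    public static void main(String[] args) {
        // @Before equivalent: create the mock and the connector, and inject the mock.
        FakeProducer producer = new FakeProducer();
        KafkaOutputConnector connector = new KafkaOutputConnector();
        connector.setProducer(producer);

        // Exercise the connector; the call should reach the mocked producer.
        connector.addOrReplaceDocument("doc-1");

        // verify() equivalent: assert the producer saw exactly one send of "doc-1".
        if (producer.sentDocuments.size() != 1 || !producer.sentDocuments.get(0).equals("doc-1"))
            throw new AssertionError("producer did not receive the document");
        System.out.println("ok");
    }
}
```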

Hope this helps.

> Apache Kafka Output Connector
> -----------------------------
>                 Key: CONNECTORS-1162
>                 URL:
>             Project: ManifoldCF
>          Issue Type: Wish
>    Affects Versions: ManifoldCF 1.8.1, ManifoldCF 2.0.1
>            Reporter: Rafa Haro
>            Assignee: Karl Wright
>              Labels: gsoc, gsoc2015
>             Fix For: ManifoldCF 1.10, ManifoldCF 2.2
>         Attachments: 1.JPG, 2.JPG
> Kafka is a distributed, partitioned, replicated commit log service. It provides the
> functionality of a messaging system, but with a unique design. A single Kafka broker can
> handle hundreds of megabytes of reads and writes per second from thousands of clients.
> Apache Kafka is used for a number of use cases. One of them is to use Kafka as a feeding
> system for streaming BigData processes, in both Apache Spark and Hadoop environments.
> A Kafka output connector could be used for streaming or dispatching crawled documents or
> metadata, putting them into a BigData processing pipeline.

This message was sent by Atlassian JIRA
