flink-issues mailing list archives

From pnowojski <...@git.apache.org>
Subject [GitHub] flink issue #4310: [misc] Commit read offsets in Kafka integration tests
Date Wed, 19 Jul 2017 07:22:20 GMT
Github user pnowojski commented on the issue:

    For the consumer or mapper side it is natural to use that kind of validating mapper,
because you can just add it at the end of your pipeline.
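    For illustration, a minimal sketch of such a validating mapper, assuming the pipeline
produces sequential `Long` values (the class name and the sequence check are illustrative
assumptions, not code from this PR):

```java
import org.apache.flink.api.common.functions.MapFunction;

// Appended at the end of the pipeline under test, e.g.:
//   stream.map(new ValidatingMapper())
// Throwing here fails the Flink job, and therefore the test.
public class ValidatingMapper implements MapFunction<Long, Long> {
    private long expected = 0;

    @Override
    public Long map(Long value) throws Exception {
        if (value != expected) {
            throw new AssertionError("Expected " + expected + " but got " + value);
        }
        expected++;
        return value;
    }
}
```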
    For producer tests it isn't, because you would need to spawn an additional Flink job for
that purpose, which seems unnatural to me. It would also add a test dependency on the consumer
code (a bug in the consumer would, or at least could, break the producer tests, making the
error messages very confusing). Furthermore, using a second Flink job would be significantly
heavier and more time- and resource-consuming: this second job would need to execute exactly
the same code as those methods, but wrapped in an additional layer (a Flink application).
Lastly, this wrapping would add complexity that could make these tests more prone to
intermittent failures and timeouts.
    If you have the data written somewhere, why wouldn't you want to read it directly? One
more bonus reason for doing it this way: it makes it possible to test producers without
spawning any Flink job at all in some mini IT cases (which is what I'm doing in the tests
for `Kafka011`, where I test `FlinkKafkaProducer` directly).
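    For example, a hedged sketch of reading the produced records back with a plain
`KafkaConsumer`, without any Flink job (the topic name, deserializers, and helper name are
assumptions for illustration; `poll(long)` is the 0.11-era consumer API):

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ProducerOutputValidator {

    /** Polls `topic` until `expectedCount` records arrive or `timeoutMs` elapses. */
    public static int readRecords(String bootstrapServers, String topic,
                                  int expectedCount, long timeoutMs) {
        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrapServers);
        props.put("group.id", "producer-test-validator");
        props.put("auto.offset.reset", "earliest");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        int received = 0;
        try (KafkaConsumer<byte[], String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList(topic));
            long deadline = System.currentTimeMillis() + timeoutMs;
            while (received < expectedCount && System.currentTimeMillis() < deadline) {
                ConsumerRecords<byte[], String> records = consumer.poll(500);
                for (ConsumerRecord<byte[], String> record : records) {
                    // A real test would also assert on record.value() here.
                    received++;
                }
            }
        }
        return received;
    }
}
```

    A test would then compare the returned count (and the record contents) against what the
producer was supposed to write.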

