Return-Path: 
X-Original-To: archive-asf-public-internal@cust-asf2.ponee.io
Delivered-To: archive-asf-public-internal@cust-asf2.ponee.io
Received: from cust-asf.ponee.io (cust-asf.ponee.io [163.172.22.183])
	by cust-asf2.ponee.io (Postfix) with ESMTP id 8AEDA200B99
	for ; Wed, 5 Oct 2016 12:58:23 +0200 (CEST)
Received: by cust-asf.ponee.io (Postfix)
	id 89B6C160ADB; Wed, 5 Oct 2016 10:58:23 +0000 (UTC)
Delivered-To: archive-asf-public@cust-asf.ponee.io
Received: from mail.apache.org (hermes.apache.org [140.211.11.3])
	by cust-asf.ponee.io (Postfix) with SMTP id 5D959160AF3
	for ; Wed, 5 Oct 2016 12:58:22 +0200 (CEST)
Received: (qmail 71513 invoked by uid 500); 5 Oct 2016 10:58:21 -0000
Mailing-List: contact issues-help@flink.apache.org; run by ezmlm
Precedence: bulk
List-Help: 
List-Unsubscribe: 
List-Post: 
List-Id: 
Reply-To: dev@flink.apache.org
Delivered-To: mailing list issues@flink.apache.org
Received: (qmail 71281 invoked by uid 99); 5 Oct 2016 10:58:21 -0000
Received: from arcas.apache.org (HELO arcas) (140.211.11.28)
	by apache.org (qpsmtpd/0.29) with ESMTP; Wed, 05 Oct 2016 10:58:21 +0000
Received: from arcas.apache.org (localhost [127.0.0.1])
	by arcas (Postfix) with ESMTP id 05ADE2C2A6C
	for ; Wed, 5 Oct 2016 10:58:21 +0000 (UTC)
Date: Wed, 5 Oct 2016 10:58:21 +0000 (UTC)
From: "ASF GitHub Bot (JIRA)" 
To: issues@flink.apache.org
Message-ID: 
In-Reply-To: 
References: 
Subject: [jira] [Commented] (FLINK-4329) Fix Streaming File Source Timestamps/Watermarks Handling
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
X-JIRA-FingerPrint: 30527f35849b9dde25b450d4833f0394
archived-at: Wed, 05 Oct 2016 10:58:23 -0000

    [ https://issues.apache.org/jira/browse/FLINK-4329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15548387#comment-15548387 ]

ASF GitHub Bot commented on FLINK-4329:
---------------------------------------

Github user mxm commented on a diff in the pull request:
    https://github.com/apache/flink/pull/2593#discussion_r81944479

    --- Diff: flink-fs-tests/src/test/java/org/apache/flink/hdfstests/ContinuousFileMonitoringTest.java ---
    @@ -106,6 +107,155 @@ public static void destroyHDFS() {
     	// TESTS
     
     	@Test
    +	public void testFileReadingOperatorWithIngestionTime() throws Exception {
    +		Set filesCreated = new HashSet<>();
    +		Map expectedFileContents = new HashMap<>();
    +
    +		for(int i = 0; i < NO_OF_FILES; i++) {
    +			Tuple2 file = fillWithData(hdfsURI, "file", i, "This is test line.");
    +			filesCreated.add(file.f0);
    +			expectedFileContents.put(i, file.f1);
    +		}
    +
    +		TextInputFormat format = new TextInputFormat(new Path(hdfsURI));
    +		TypeInformation typeInfo = TypeExtractor.getInputFormatTypes(format);
    +
    +		final long watermarkInterval = 10;
    +		ExecutionConfig executionConfig = new ExecutionConfig();
    +		executionConfig.setAutoWatermarkInterval(watermarkInterval);
    +
    +		ContinuousFileReaderOperator reader = new ContinuousFileReaderOperator<>(format);
    +		reader.setOutputType(typeInfo, executionConfig);
    +
    +		final TestTimeServiceProvider timeServiceProvider = new TestTimeServiceProvider();
    +		final OneInputStreamOperatorTestHarness tester =
    +			new OneInputStreamOperatorTestHarness<>(reader, executionConfig, timeServiceProvider);
    +		tester.setTimeCharacteristic(TimeCharacteristic.IngestionTime);
    +		tester.open();
    +
    +		Assert.assertEquals(TimeCharacteristic.IngestionTime, tester.getTimeCharacteristic());
    +
    +		// test that watermarks are correctly emitted
    +
    +		timeServiceProvider.setCurrentTime(201);
    +		timeServiceProvider.setCurrentTime(301);
    +		timeServiceProvider.setCurrentTime(401);
    +		timeServiceProvider.setCurrentTime(501);
    +
    +		int i = 0;
    +		for(Object line: tester.getOutput()) {
    +			if (!(line instanceof Watermark)) {
    +				Assert.fail("Only watermarks are expected here ");
    +			}
    +			Watermark w = (Watermark) line;
    +			Assert.assertEquals(200 + (i * 100), w.getTimestamp());
    +			i++;
    +		}
    +
    +		// clear the output to get the elements only and the final watermark
    +		tester.getOutput().clear();
    +		Assert.assertEquals(0, tester.getOutput().size());
    +
    +		// create the necessary splits for the test
    +		FileInputSplit[] splits = format.createInputSplits(
    +			reader.getRuntimeContext().getNumberOfParallelSubtasks());
    +
    +		// and feed them to the operator
    +		Map> actualFileContents = new HashMap<>();
    +
    +		long lastSeenWatermark = Long.MIN_VALUE;
    +		int lineCounter = 0;	// counter for the lines read from the splits
    +		int watermarkCounter = 0;
    +
    +		for(FileInputSplit split: splits) {
    +
    +			// set the next "current processing time".
    +			long nextTimestamp = timeServiceProvider.getCurrentProcessingTime() + watermarkInterval;
    +			timeServiceProvider.setCurrentTime(nextTimestamp);
    +
    +			// send the next split to be read and wait until it is fully read.
    +			tester.processElement(new StreamRecord<>(split));
    +			synchronized (tester.getCheckpointLock()) {
    +				while (tester.getOutput().isEmpty() || tester.getOutput().size() != (LINES_PER_FILE + 1)) {
    +					tester.getCheckpointLock().wait(10);
    +				}
    +			}
    +
    +			// verify that the results are the expected
    +			for(Object line: tester.getOutput()) {
    +				if (line instanceof StreamRecord) {
    +					StreamRecord element = (StreamRecord) line;
    +					lineCounter++;
    +
    +					Assert.assertEquals(nextTimestamp, element.getTimestamp());
    +
    +					int fileIdx = Character.getNumericValue(element.getValue().charAt(0));
    +					List content = actualFileContents.get(fileIdx);
    +					if (content == null) {
    +						content = new ArrayList<>();
    +						actualFileContents.put(fileIdx, content);
    +					}
    +					content.add(element.getValue() + "\n");
    +				} else if (line instanceof Watermark) {
    +					long watermark = ((Watermark) line).getTimestamp();
    +
    +					Assert.assertEquals(nextTimestamp - (nextTimestamp % watermarkInterval), watermark);
    +					Assert.assertTrue(watermark > lastSeenWatermark);
    +					watermarkCounter++;
    +
    +					lastSeenWatermark = watermark;
    +				} else {
    +					Assert.fail("Unknown element in the list.");
    +				}
    +			}
    +
    +			// clean the output to be ready for the next split
    +			tester.getOutput().clear();
    +		}
    +
    +		// now we are processing one split after the other,
    +		// so all the elements must be here by now.
    +		Assert.assertEquals(NO_OF_FILES * LINES_PER_FILE, lineCounter);
    +
    +		// because we expect one watermark per split.
    +		Assert.assertEquals(NO_OF_FILES, watermarkCounter);
    +
    +		// then close the reader gracefully so that the Long.MAX watermark is emitted
    +		synchronized (tester.getCheckpointLock()) {
    +			tester.close();
    +		}
    +
    +		for(org.apache.hadoop.fs.Path file: filesCreated) {
    +			hdfs.delete(file, false);
    +		}
    +
    +		// check if the last element is the LongMax watermark (by now this must be the only element)
    +		Assert.assertEquals(1, tester.getOutput().size());
    +		Assert.assertTrue(tester.getOutput().peek() instanceof Watermark);
    +		Assert.assertEquals(Long.MAX_VALUE, ((Watermark) tester.getOutput().poll()).getTimestamp());
    +
    +		// check if the elements are the expected ones.
    +		Assert.assertEquals(expectedFileContents.size(), actualFileContents.size());
    +		for (Integer fileIdx: expectedFileContents.keySet()) {
    +			Assert.assertTrue("file" + fileIdx + " not found", actualFileContents.keySet().contains(fileIdx));
    +
    +			List cntnt = actualFileContents.get(fileIdx);
    +			Collections.sort(cntnt, new Comparator() {
    --- End diff --
    
    Sorting here wouldn't be necessary if you immediately compared the output of the split after reading it.

> Fix Streaming File Source Timestamps/Watermarks Handling
> --------------------------------------------------------
>
>                 Key: FLINK-4329
>                 URL: https://issues.apache.org/jira/browse/FLINK-4329
>             Project: Flink
>          Issue Type: Bug
>          Components: Streaming Connectors
>    Affects Versions: 1.1.0
>            Reporter: Aljoscha Krettek
>            Assignee: Kostas Kloudas
>             Fix For: 1.2.0, 1.1.3
>
>
> The {{ContinuousFileReaderOperator}} does not correctly deal with watermarks, i.e. they are just passed through.
> This means that when the {{ContinuousFileMonitoringFunction}} closes and emits a {{Long.MAX_VALUE}} watermark, that watermark can "overtake" the records that are still to be emitted by the {{ContinuousFileReaderOperator}}. Together with the new "allowed lateness" setting in the window operator this can lead to elements being dropped as late.
> Also, {{ContinuousFileReaderOperator}} does not correctly assign ingestion timestamps since it is not technically a source but looks like one to the user.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
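[Editor's note for readers of this archived thread: the assertion `nextTimestamp - (nextTimestamp % watermarkInterval)` in the test above floors the current processing time down to the most recent watermark-interval boundary, which is what a periodic ingestion-time watermark should carry. A minimal standalone sketch of that arithmetic follows; the class and method names are illustrative only and are not part of the Flink codebase.]

```java
public class WatermarkFloorSketch {

	// Floor a timestamp to the most recent multiple of the watermark
	// emission interval, mirroring the expected-watermark computation
	// asserted in the test above. Assumes timestamp >= 0 and interval > 0.
	static long floorToInterval(long timestamp, long interval) {
		return timestamp - (timestamp % interval);
	}

	public static void main(String[] args) {
		final long watermarkInterval = 10; // same interval the test configures

		// 501 and 509 both floor to 500; exact multiples are unchanged
		System.out.println(floorToInterval(501, watermarkInterval)); // prints 500
		System.out.println(floorToInterval(500, watermarkInterval)); // prints 500
		System.out.println(floorToInterval(509, watermarkInterval)); // prints 500
	}
}
```

This is also why the test can assert strictly increasing watermarks per split: each split advances processing time by a full `watermarkInterval`, so consecutive floored values never repeat.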