From: Arnaud BOS
Date: Tue, 14 Nov 2017 15:51:09 +0000
Subject: Parallelism, field grouping and windowing
To: user@storm.apache.org

Hi all,

Suppose I have a WindowedBolt subscribing to a KafkaSpout stream.
The WindowedBolt uses a sliding window of size 2 with a sliding interval of 1.
The stream from KafkaSpout to WindowedBolt is partitioned by field (fields grouping).
To keep things simple, let's assume the stream gets partitioned into two groups only, A and B, in the context of two supervisors with one worker process each (setNumWorkers(2)).
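For concreteness, here is roughly the wiring I have in mind. This is only a sketch: MyWindowedBolt stands in for my actual bolt (it extends BaseWindowedBolt), "key" for my actual grouping field, and the KafkaSpoutConfig details are omitted.

import org.apache.storm.Config;
import org.apache.storm.StormSubmitter;
import org.apache.storm.kafka.spout.KafkaSpout;
import org.apache.storm.topology.TopologyBuilder;
import org.apache.storm.topology.base.BaseWindowedBolt.Count;
import org.apache.storm.tuple.Fields;

TopologyBuilder builder = new TopologyBuilder();
// spoutConfig is a KafkaSpoutConfig built elsewhere (details omitted)
builder.setSpout("kafka-spout", new KafkaSpout<>(spoutConfig), 2);

// Sliding window of 2 tuples, sliding every 1 tuple,
// with the incoming stream partitioned on "key".
builder.setBolt("windowed-bolt",
        new MyWindowedBolt().withWindow(new Count(2), new Count(1)),
        2) // parallelism hint: 2 executors
       .fieldsGrouping("kafka-spout", new Fields("key"));

Config conf = new Config();
conf.setNumWorkers(2); // one worker on each of the two supervisors
StormSubmitter.submitTopology("my-topology", conf, builder.createTopology());

Given that setup: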

By default the number of tasks of a spout/bolt is equal to the parallelism hint, correct?

If the WindowedBolt has a parallelism hint of 2 and 2 tasks (setNumTasks(2) or not specified), one thread (executor) running exactly 1 task will execute on each worker/supervisor, correct?
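Put differently, I assume these two declarations are equivalent (bolt and component names are the placeholders from the sketch above):

// parallelism hint 2; tasks default to 2
builder.setBolt("windowed-bolt", bolt, 2)
       .fieldsGrouping("kafka-spout", new Fields("key"));

// explicitly the same thing, if tasks really default to the hint
builder.setBolt("windowed-bolt", bolt, 2)
       .setNumTasks(2)
       .fieldsGrouping("kafka-spout", new Fields("key"));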

Each executor/task will receive tuples from only one of the partitions, for instance executor/task 1 gets tuples from substream A and executor/task 2 gets tuples from substream B, correct?
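My mental model of fields grouping is roughly the following, which is a simplification of what I understand Storm does internally, not actual Storm code:

import java.util.Arrays;
import java.util.List;

// The same grouping-field values always hash to the same task index,
// so substream A sticks to one task and substream B to the other.
static int chooseTaskIndex(List<Object> groupingValues, int numTasks) {
    return Math.floorMod(Arrays.deepHashCode(groupingValues.toArray()), numTasks);
}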

Assuming the latter is correct, if a node/supervisor fails, will the remaining task begin to receive tuples from the other half of the stream partition? That is, if executor/task 2 (on supervisor 2) disappears, will the tuples from A and B be interleaved in the sliding window of executor/task 1? Or will substream B just "hang" until another task is started to handle it?

All of this makes sense and my use case might look counterproductive, but I need to be sure of what's happening: my workflow is inherently sequential, and I don't want the "streams" to interleave.

Thanks!