edgent-dev mailing list archives

From Dale LaBossiere <dml.apa...@gmail.com>
Subject Re: Buffering on the edge (network issues)
Date Tue, 02 Jan 2018 17:38:36 GMT
Hi,

An Edgent stream is very lightweight; it has no inherent buffering per se.  In the
simple model, the next tuple isn’t processed until processing of the current
tuple is complete — i.e., it has been accepted by all of its downstream streams, and ultimately by a “sink”.
In other words, downstream streams exert back pressure on upstream processing.
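To make the back-pressure point concrete, here is a rough plain-Java sketch (not Edgent code; the class and method names are hypothetical): because each tuple is pushed synchronously through every downstream consumer before `submit()` returns, a slow sink naturally throttles the source.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class SyncStream<T> {
    private final List<Consumer<T>> downstream = new ArrayList<>();

    // Register a downstream consumer (further processing or a "sink").
    public void connect(Consumer<T> c) { downstream.add(c); }

    // Submitting a tuple does not return until all downstream work is done,
    // so the caller (the "source") cannot get ahead of its consumers.
    public void submit(T tuple) {
        for (Consumer<T> c : downstream) c.accept(tuple);
    }

    public static void main(String[] args) {
        SyncStream<Integer> stream = new SyncStream<>();
        List<Integer> sink = new ArrayList<>();
        stream.connect(sink::add);
        // The source is blocked for the duration of each tuple's processing.
        for (int i = 0; i < 3; i++) stream.submit(i);
        System.out.println(sink);
    }
}
```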

Window streams (used for aggregation) include buffering.
PlumbingStreams.pressureReliever() [1] can be used to isolate upstream processing from downstream
streams (by adding a buffer).
Use of Topology.events() [2], directly or, more typically, via a connector, also adds a buffer.
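Conceptually, a pressure reliever inserts a bounded buffer between upstream and downstream so the upstream side never blocks.  A rough plain-Java sketch of that idea (not Edgent’s implementation; names are hypothetical) — when the buffer is full, the oldest tuple is evicted:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class PressureRelievingBuffer<T> {
    private final Deque<T> buffer = new ArrayDeque<>();
    private final int capacity;

    public PressureRelievingBuffer(int capacity) { this.capacity = capacity; }

    // Upstream side: never blocks; evicts the oldest tuple when full.
    public synchronized void offer(T tuple) {
        if (buffer.size() == capacity) buffer.pollFirst();
        buffer.addLast(tuple);
    }

    // Downstream side: drains at its own pace (returns null when empty).
    public synchronized T poll() { return buffer.pollFirst(); }

    public static void main(String[] args) {
        PressureRelievingBuffer<Integer> b = new PressureRelievingBuffer<>(3);
        for (int i = 0; i < 5; i++) b.offer(i);   // tuples 0 and 1 are evicted
        System.out.println(b.poll() + " " + b.poll() + " " + b.poll());
    }
}
```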

Individual connectors (typically via the underlying 3rd-party client library, e.g., MQTT, Kafka)
influence the behavior depending on whether that library provides any internal buffering.  For
example, Edgent’s MQTT connector exposes the MQTT quality-of-service and persistence-provider controls.
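To address the original question directly, the “buffer while the sink is unreachable” behavior you’d get from such a client library can be sketched in plain Java like this (hypothetical names, not a real Edgent connector): tuples accumulate in an in-memory backlog while disconnected and are flushed, in order, on reconnect.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Consumer;

public class BufferingSink<T> implements Consumer<T> {
    private final Consumer<T> transport;          // the real delivery path
    private final Deque<T> backlog = new ArrayDeque<>();
    private boolean connected = true;

    public BufferingSink(Consumer<T> transport) { this.transport = transport; }

    // Simulates the transport going down / coming back up.
    public void setConnected(boolean up) {
        connected = up;
        if (up) flush();
    }

    private void flush() {
        while (!backlog.isEmpty()) transport.accept(backlog.pollFirst());
    }

    @Override
    public void accept(T tuple) {
        if (connected) {
            flush();                  // deliver any backlog first, in order
            transport.accept(tuple);
        } else {
            backlog.addLast(tuple);   // hold until reconnect
        }
    }
}
```

A real connector would also bound the backlog (count, bytes, or age) and decide an eviction policy; in the MQTT case this is largely delegated to the client library’s persistence provider.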

[1] http://edgent.apache.org/javadoc/latest/org/apache/edgent/topology/plumbing/PlumbingStreams.html#pressureReliever-org.apache.edgent.topology.TStream-org.apache.edgent.function.Function-int-
[2] http://edgent.apache.org/javadoc/latest/org/apache/edgent/topology/Topology.html#events-org.apache.edgent.function.Consumer-

Hope that helps,
— Dale

> On Dec 29, 2017, at 4:41 PM, Otis Gospodnetić <otis.gospodnetic@gmail.com> wrote:
> 
> Hi,
> 
> I tried looking through the API and examples to see whether Edgent has
> any built-in buffering capabilities for when the sink cannot be reached for
> whatever reason.  I couldn't find anything like that. Is that something
> that one would have to write for each (new or custom) connector or is that
> a part of the framework and I just missed it?
> 
> If I missed it, how is buffering implemented?
> In-memory or on persistent media or some hybrid?
> Can one define the max buffer capacity in terms of either number of items
> or bytes?
> If so, are the oldest items in the buffer dropped when the max capacity (or
> their age) is reached?
> 
> Thanks,
> Otis
> --
> Monitoring - Log Management - Alerting - Anomaly Detection
> Solr & Elasticsearch Consulting Support Training - http://sematext.com/

