From: Kyriakos Zarifis
Date: Tue, 17 Jan 2017 11:52:11 -0800
Subject: Re: HTTP/2 frame prioritization not honored
To: dev@httpd.apache.org

Hi Stefan,

Sorry for the delay, I just got back from traveling. I just tried your new patch and indeed it gets rid of the 100ms delay: the server now serves the high-priority object only ~5-20ms (across a few runs) after it receives its request, sending only 5-6 lower-prio frames in between!

That's a dramatic improvement compared to what I was observing in the first experiments (~500ms delay), and I think it affects not only the scenario I was testing, but any scenario where objects of different priorities conflict. To verify this, I also tested another simple scenario in which I aggressively Server-Push several big objects when the server gets the request for the base HTML file. Without the patch, objects embedded in the HTML (requested normally) are backed up behind a large fraction of the pushed payload and delayed considerably (~500ms). With the patch this is avoided: embedded objects are served within a few ms after their request arrives, preempting the pushed objects.
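(For context: pushes like these can be triggered in httpd by emitting preload Link headers on the HTML response while H2Push is enabled. The snippet below is only a sketch with made-up paths, not necessarily the configuration used in this test, and it assumes mod_headers is loaded.)

    # sketch only; /base.html and the image paths are placeholders
    H2Push on
    <Location "/base.html">
        # each Link ...;rel=preload response header becomes a push candidate
        Header add Link "</img/big1.jpg>;rel=preload"
        Header add Link "</img/big2.jpg>;rel=preload"
    </Location>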

If you are interested, I have logs comparing the v1.8.8 performance to the baseline for both scenarios (1: "prefetched" objects triggered at the end of a page load delaying normal objects from the next navigation, and 2: "server-pushed" objects conflicting with embedded objects on the current page).

Would this patch eventually make it upstream? I'd be very interested in some details on what was causing this and how you resolved it.

On Fri, Jan 13, 2017 at 8:43 AM, Stefan Eissing <stefan.eissing@greenbytes.de> wrote:
Hi Kyriakos,

maybe you can give https://github.com/icing/mod_h2/releases/tag/v1.8.8 a try in your setup? I would be interested if it gets rid of the 100ms delay in response processing. Thanks!

Cheers,

Stefan

> On 04.01.2017 at 19:27, Kyriakos Zarifis <kyr.zarifis@gmail.com> wrote:
>
> Hi Stefan,
>
> Yes, this is making a big, observable difference!
>
> Specifically, in all 3 repeats, the high-priority stream is now served 100ms after it was received, with ~100 frames (~1.6MB) of the currently served, lower-priority stream written in the meantime. (was: 500ms, 500 frames (~7.5MB))
>
> In more detail, after the high-prio request is received, 20 more low-prio frames are served before the h2_task for it logs that it opens the output for the new stream. Then, another 80 low-prio frames are served before the high-prio reply is written. (relevant logs below)
>
> This already has an observable impact on the transition to the next page the moment I click on the link (it goes from 1.5s to less than 500ms), which I think is great because this tweak is relevant not just to this scenario, but to any higher-priority stream that begins while lower ones are being served, even within a single page.
>
> I'm wondering if the change you made can be pushed harder to make the switch to the new stream even faster, e.g. avoiding even those 100 frames?
>
>
> Thanks,
> Kyriakos
>
>
>
> [Wed Jan 04 10:14:48.577687 2017] [http2:debug] [pid 24864] h2_stream.c(213): [client] AH03082: h2_stream(0-19): opened
>
> [Wed Jan 04 10:14:48.577758 2017] [http2:debug] [pid 24864] h2_session.c(452): [client] AH03066: h2_session(0): recv FRAME[HEADERS[length=39, hend=1, stream=19, eos=1]], frames=13/1486 (r/s)
>
> 20 x lower-prio frames:
>
> [Wed Jan 04 10:14:48.577864 2017] [http2:debug] [pid 24864] h2_session.c(685): [client] AH03068: h2_session(0): sent FRAME[DATA[length=16275, flags=0, stream=5, padlen=0]], frames=16/1486 (r/s)
>
> [Wed Jan 04 10:14:48.578775 2017] [http2:debug] [pid 24864] h2_task.c(106): [client] AH03348: h2_task(0-19): open output to GET 204.57.7.200 /preposition/nextnav.html
>
> 80 x lower-prio frames:
> [Wed Jan 04 10:14:48.578790 2017] [http2:debug] [pid 24864] h2_session.c(685): [client] AH03068: h2_session(0): sent FRAME[DATA[length=16275, flags=0, stream=5, padlen=0]], frames=16/1504 (r/s)
>
> [Wed Jan 04 10:14:48.682168 2017] [http2:debug] [pid 24864] h2_session.c(685): [client] AH03068: h2_session(0): sent FRAME[HEADERS[length=87, hend=1, stream=19, eos=0]], frames=16/1587 (r/s)
>
>
> [Wed Jan 04 10:14:48.682186 2017] [http2:debug] [pid 24864] h2_session.c(685): [client] AH03068: h2_session(0): sent FRAME[DATA[length=456, flags=1, stream=19, padlen=0]], frames=16/1588 (r/s)
>
>
> On Wed, Jan 4, 2017 at 9:28 AM, Stefan Eissing <stefan.eissing@greenbytes.de> wrote:
> Hi Kyriakos,
>
> sorry for not replying earlier. I was able to find the issue you ran into, namely that mod_http2 is obsessed with the streams it already has and does not submit ready responses - until the existing streams are done or paused.
>
> I hope that the new release works much more nicely for you. You can find it at https://github.com/icing/mod_h2/releases/tag/v1.8.7
>
> Thanks,
>
> Stefan
>
> > On 02.01.2017 at 23:33, Kyriakos Zarifis <kyr.zarifis@gmail.com> wrote:
> >
> > Thanks Stefan!
> >
> > I just tried the tweaked version. I think I am seeing similar behavior, i.e. the higher-prio HTML reply is sent ~500ms after its request is received, writing ~500 lower-prio DATA frames (~7.5MB) in the meantime.
> >
> > Before drawing any conclusions, I wanted to make sure I compiled/used the tweaked module properly with my existing Apache/2.4.25 on Ubuntu, since I haven't done the process before: I couldn't find details on the right way to swap module versions in/out, so I ended up compiling v1.8.6 and pointing to the created mod_http2.so in "/etc/apache2/mods-enabled/http2.load", but I'm really not sure that's the right way. The only way I verified it was seeing this in /var/log/apache2/error.log:
> >
> > "[http2:info] [pid 24935] AH03090: mod_http2 (v1.8.6-git, feats=CHPRIO+SHA256+INVHD, nghttp2 1.17.0), initializing..."
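(For reference, one common way to swap in a locally built mod_http2 on Debian/Ubuntu is sketched below. The module path and the use of a2enmod are assumptions about a stock Ubuntu layout, not necessarily the exact steps used here.)

    # copy the freshly built module over the packaged one
    sudo cp mod_http2.so /usr/lib/apache2/modules/mod_http2.so
    # /etc/apache2/mods-available/http2.load should contain:
    #   LoadModule http2_module /usr/lib/apache2/modules/mod_http2.so
    sudo a2enmod http2
    sudo systemctl restart apache2
    # confirm which build is loaded:
    grep AH03090 /var/log/apache2/error.log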
> >
> >
> > Assuming this is an acceptable way to use the tweaked version of the module (please let me know if not), where should I share two Apache log files (one trace for each module version) so you could verify what I see?
> >
> >
> >
> >
> > A few relevant lines from the v1.8.6 run (similar to the stable module, AFAICT):
> >
> > [Mon Jan 02 13:59:59.636519 2017] [http2:debug] [pid 26718] h2_session.c(439): [client ] AH03066: h2_session(0): recv FRAME[HEADERS[length=39, hend=1, stream=19, eos=1]], frames=13/1721 (r/s)
> > [Mon Jan 02 13:59:59.637099 2017] [http2:debug] [pid 26718] h2_task.c(106): [client ] AH03348: h2_task(0-19): open output to GET /preposition/nextnav.html
> >
> > [ ... continue sending ~500 DATA frames for streams 7-11 ...]
> >
> > [Mon Jan 02 14:00:00.177350 2017] [http2:debug] [pid 26718] h2_session.c(661): [client ] AH03068: h2_session(0): sent FRAME[HEADERS[length=87, hend=1, stream=19, eos=0]], frames=16/2209 (r/s)
> > [Mon Jan 02 14:00:00.177366 2017] [http2:debug] [pid 26718] h2_session.c(661): [client ] AH03068: h2_session(0): sent FRAME[DATA[length=456, flags=1, stream=19, padlen=0]], frames=16/2210 (r/s)
> >
> > [ ... continue sending streams 11 onwards ...]
> >
> > Thanks!
> >
> > On Sat, Dec 31, 2016 at 5:43 AM, Stefan Eissing <stefan.eissing@greenbytes.de> wrote:
> > Hi Kyriakos,
> >
> > have a look at https://github.com/icing/mod_h2/releases/tag/v1.8.6
> >
> > That version flushes when at least 2 TLS records are ready to send. Also, frame sizes are now aligned to TLS record sizes. So they are influenced by the H2TLSWarmUpSize and H2TLSCoolDownSecs settings.
> >
> > Additionally, and highly experimental, I added H2TLSFlushCount to configure the number of records to flush. You may play around with it (default is 2) in your scenarios.
> >
> > I hope that this reduces buffering and makes the server more responsive (agile) to stream changes. Please let me know if that had any effect on your tests.
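(For reference, these directives go into the httpd configuration of the TLS vhost. The values below are purely illustrative; H2TLSFlushCount is the experimental directive from this release, not a stock 2.4 directive.)

    # illustrative values only
    H2TLSWarmUpSize   1048576   # bytes sent in small TLS records before ramping up
    H2TLSCoolDownSecs 1         # idle seconds before record sizes drop back down
    H2TLSFlushCount   2         # experimental: TLS records ready before a flush (default 2)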
> >
> > Thanks,
> >
> > Stefan
> >
> > > On 29.12.2016 at 12:40, Kyriakos Zarifis <kyr.zarifis@gmail.com> wrote:
> > >
> > > That means the images should get a minimum of ~30% of the available bandwidth as long as they have data. My reading.
> > >
> > > Right. Makes sense.
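(For context: HTTP/2 priorities split bandwidth among sibling streams in proportion to their weights, so a share like that follows directly from the priority tree. With purely illustrative numbers, not the exact tree discussed earlier in the thread: a sibling with weight 12 competing against siblings whose weights sum to 28 gets 12 / (12 + 28) = 30% of the connection for as long as it has data to send.)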
> >
> > Stefan Eissing
> >
> > <green/>bytes GmbH
> > Hafenstrasse 16
> > 48155 Münster
> > www.greenbytes.de
> >
> >
>
> Stefan Eissing
>
> <green/>bytes GmbH
> Hafenstrasse 16
> 48155 Münster
> www.greenbytes.de
>
>

Stefan Eissing

<green/>bytes GmbH
Hafenstrasse 16
48155 Münster
www.greenbytes.de

