flex-users mailing list archives

From Javier Guerrero García <javi...@gmail.com>
Subject Re: Workers and Speed
Date Sun, 31 Jul 2016 09:18:44 GMT
Alex / Justin:

Just a general question, related to this thread: why are we just assuming
that a worker implementation should always be faster than a single-threaded
model? Isn't that heavily dependent on the underlying hardware (and Flash's
capability to properly use it)? I mean:

In a single-threaded implementation, the full CPU time is available for the
calculation alone, even forgetting to update the UI. The CPU's only task is
to finish the job, all resources are committed to that job, and everything
else is left "on hold" until the job is done (even the OS sometimes .... :)

In a multiple-worker implementation, BESIDES THE OBVIOUS APPARENT
RESPONSIVENESS IMPROVEMENT, that same CPU and underlying hardware, in
addition to "getting the job done", also has to take care of:

   - First, prepare the "context" for each worker (data partitioning,
   variable replication, ...) and initialize each worker.
   - Handle task switching between the workers.
   - After each task switch, rebuild the branch prediction caches
   (invalidating all previous predictions along the way) for each one of
   the threads / loops.
   - While at all that, also refresh and update the UI from time to time.
   - Loop the main thread, listening for worker-finished signals, platform
   events, mouse events, ..., and handle them.
   - After all workers finish, combine the results to produce the final
   result.
All that using the same underlying hardware and CPU (the benefits would be
obvious if we multithreaded using several different CPUs, one for each
worker, but that's not the case).
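The setup and join overhead above can be sketched with the flash.system
Worker API (a sketch only; workerBytes and myDataSlice are assumed to exist
elsewhere in the application):

```actionscript
import flash.events.Event;
import flash.system.MessageChannel;
import flash.system.Worker;
import flash.system.WorkerDomain;

// Context preparation: every worker needs its own SWF bytes and channels.
var bgWorker:Worker = WorkerDomain.current.createWorker(workerBytes);
var toWorker:MessageChannel = Worker.current.createMessageChannel(bgWorker);
var fromWorker:MessageChannel = bgWorker.createMessageChannel(Worker.current);
bgWorker.setSharedProperty("toWorker", toWorker);
bgWorker.setSharedProperty("fromWorker", fromWorker);

// Variable replication: anything passed across is copied (serialised),
// not shared, unless it is a shareable ByteArray.
bgWorker.setSharedProperty("partition", myDataSlice);

// Joining: the main thread has to keep listening and combine the results.
fromWorker.addEventListener(Event.CHANNEL_MESSAGE, function(e:Event):void {
    var partial:Object = fromWorker.receive();
    // ... merge partial into the final result ...
});

bgWorker.start(); // worker startup is itself asynchronous
```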

So, *IF, and ONLY IF*:

   - the underlying hardware has pretty good (and true :) multicore
   support, keeping separate branch prediction caches for each core
   - all contexts "fit" into each core's memory space, so no memory
   paging is necessary
   - you don't run more workers than cores
   - the Flash runtime can actually access and use those cores
   - and the splitting/joining tasks are fast

it could run faster than a single threaded model, right?
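As a back-of-the-envelope model of that claim (all numbers below are made-up
assumptions, not measurements):

```actionscript
// Rough cost model: workers beyond the physical core count just time-slice.
function estimatedWorkerTime(workMs:Number, cores:int, workers:int,
                             setupPerWorkerMs:Number, joinMs:Number):Number {
    var effectiveCores:int = Math.min(workers, cores);
    return workers * setupPerWorkerMs   // context prep + data replication
         + workMs / effectiveCores      // best case: perfect partitioning
         + joinMs;                      // combining the partial results
}

// 1000ms of work, 4 cores, 4 workers, 50ms setup each, 30ms join:
// 4*50 + 1000/4 + 30 = 480ms -- a win over 1000ms single-threaded.
// The same split on only 80ms of work: 200 + 20 + 30 = 250ms -- a loss.
```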

Besides being a more "elegant" solution (or at least a more "by the book"
one), am I missing something? Why is everyone so sure that a multithreaded
solution would run faster, hands down? (keeping "computational fashion"
aside :)

On Sun, Jul 31, 2016 at 9:10 AM, Justin Mclean <justin@classsoftware.com>
wrote:
> Hi,
>
> > It is bindable.  I will try turning off updates.
>
> I would guess it's going to be an order of magnitude or two faster,
> depending on the changes you make to items in that array collection.
>
> > How do you go about accessing/manipulating an array inside of an
> > array collection?
>
> myAC.source
>
> > I always just address the properties that I need by
> > accessing the ArrayCollection itself, i.e.
> >
> > myArrayCollection[1]["someProperty"]
>
> I think lookups like that are going to be slower than using the
> myAC[1].someProp form.
>
> > Should I just declare the variables outside of the loop and
> > just reuse them instead of instantiating them every single time inside of
> > the loops?
>
> Certainly worth a try (the allocation and garbage collection cost of
> 38Kx38KxX vars is likely to be significant), but I'd give Scout a try
> first to see where the time is being taken up, so you know what to
> optimise.
>
> Thanks,
> Justin
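For what it's worth, Justin's suggestions above look roughly like this in
code (a sketch; myAC is the [Bindable] ArrayCollection from the thread and
compute() is a placeholder for the per-item work):

```actionscript
import mx.collections.ArrayCollection;

myAC.disableAutoUpdate();          // suppress per-change binding/UI updates
try {
    var src:Array = myAC.source;   // work on the backing Array directly
    var item:Object;               // hoisted: declared once, reused per pass
    for (var i:int = 0; i < src.length; i++) {
        item = src[i];
        item.someProp = compute(item); // dot access beats ["someProp"] lookup
    }
} finally {
    myAC.enableAutoUpdate();       // one batched update instead of N
    myAC.refresh();                // re-apply any sort/filter once
}
```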
