incubator-flex-dev mailing list archives

From Frank Wienberg <fr...@jangaroo.net>
Subject Re: [FalconJx] New JavaScript runtime format prototype
Date Fri, 21 Dec 2012 01:29:32 GMT
On Fri, Dec 21, 2012 at 12:20 AM, Erik de Bruin <erik@ixsoftware.nl> wrote:

> > This is the overall impression I have with the GCC approach: it introduced
> > complexity only to optimize it later on.
> > I think we'd better avoid complexity in the first place!
>
> Well, it introduces "complexity" in order to facilitate BETTER
> optimisation later on. So, by avoiding that "complexity" - simply
> because we like "simplicity" - we'd end up with an inferior product.
>

That's not what I was saying. I said that if we can avoid complexity in the
first place, we should do so instead of introducing overhead and then
optimizing it away.
E.g. for a super call, what could be simpler *and* more efficient than a
direct invocation of the super method? How could GCC possibly optimize that
call (other than by inlining the super function, which is another story)?
The only thing it can optimize is its own bloated implementation of
goog.base().
Don't get me wrong, GCC does a great job of optimizing JavaScript code, but
at the same time it provides a domain-specific language for some constructs
where we don't need one, since we are talking about generated, not
hand-written, JS code. All the advantages of goog.base() mentioned in this
essay <http://bolinfest.com/essays/googbase.html> do not apply to us.
Quoting from that essay:

> Although at first this may seem like a small victory, consider the
> advantages:




   - Fully qualified class names in the Closure Library are often long, so
   it is considerably easier to type goog.base(this, 'disposeInternal') than
   it is to type goog.ui.Component.superClass_.disposeInternal.call(this).
   - When renaming a class, superclass method invocations do not need to be
   updated if goog.base() is used.
   - When changing the superclass, the call to the superclass constructor
   function does not need to be updated if goog.base() is used (unless the
   arguments to the new constructor function are different).

Using AMD, you never have fully qualified names in your code; instead, class
references are assigned to local variables (define-callback parameters). And
since the code is generated, any changes in the AS3 source code are
reflected in the generated JS code automatically.
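To make this concrete, here is a sketch of what generated AMD output could look like (all module and class names are invented for illustration, and the three-line define() stand-in exists only so the sketch runs without a real loader): the superclass arrives as a local define-callback parameter, and the super call is a direct method invocation.

```javascript
// Minimal stand-in for an AMD loader so this sketch runs standalone;
// in a real application, RequireJS would provide define().
var modules = {};
function define(name, deps, factory) {
  modules[name] = factory.apply(null, deps.map(function (d) {
    return modules[d];
  }));
}

// Hypothetical generated output for AS3 classes acme.A and acme.B:
define("acme/A", [], function () {
  function A() {}
  A.prototype.foo = function () { return 41; };
  return A;
});

define("acme/B", ["acme/A"], function (A) {
  // The superclass is a local callback parameter -- no fully
  // qualified names anywhere in the module body.
  function B() {
    A.call(this);                          // direct super constructor call
  }
  B.prototype = Object.create(A.prototype);
  B.prototype.constructor = B;
  B.prototype.foo = function () {
    return A.prototype.foo.call(this) + 1; // direct super method call
  };
  return B;
});
```

Renaming a class or changing the superclass in the AS3 source simply regenerates these local references; there is nothing for goog.base() to save us from.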

The other argument is that GCC's idea of a concept does not necessarily
match ActionScript's idea of the same concept. For example, the way GCC
implements class inheritance in JavaScript does not take into account
that classes may need to be initialized so that their static (initializer)
code is executed. If we really want to resemble ActionScript semantics, we
have to add such features anyway, so we end up with a mix of Closure
concepts and our own.
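To illustrate the static-initializer point, here is one conceivable scheme (entirely my own sketch, not Closure's, and not necessarily what we would generate) that runs a class's static code lazily, the first time the class is actually used:

```javascript
// One conceivable scheme (a sketch, names invented) to mimic AS3 class
// initialization: run the class's static initializer lazily on first use.
function makeLazyClass(setup) {
  var clazz = null;
  return function () {        // accessor that initializes on first access
    if (!clazz) {
      clazz = setup();
    }
    return clazz;
  };
}

var staticInitRuns = 0;       // counts how often the static code executed

var getGreeter = makeLazyClass(function () {
  function Greeter() {}
  // The AS3 static initializer code would be translated to run here,
  // exactly once, before the class is first used:
  staticInitRuns++;
  Greeter.DEFAULT_GREETING = "hello";
  return Greeter;
});
```

Whatever the exact mechanism, this is a semantic requirement goog.inherits() knows nothing about, so we would be layering our own machinery on top of Closure's anyway.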


> > For goog.provide and goog.require, RequireJS provides alternatives that
> > better fit our needs: asynchronous module loading much better reflects
> > the nature of script loading in the browser. Synchronous require calls
> > were made for Node.js or other environments that allow scripts to be
> > loaded synchronously. RequireJS concentrates on exactly that task and
> > does nothing else.
>
> The point of the Closure Builder is that there are no modules. It's
> all one file (and a small one at that). There is no need to manage the
> loading of any modules (with the added overhead required to make it
> work) at all.
>

If you don't want any dynamic module loading, James Burke also provides a
minimal AMD implementation for that use case:
https://github.com/jrburke/requirejs/wiki/AMD-API-Shims
But I see separate module loading as a feature, not a problem. I never
liked the monolithic approach of GWT (Flash is not much better, but at
least it supports SWCs).
Why do you think it is most efficient to load all code at once? What about
all the code that is not always needed, e.g. browser-specific code (like
the polyfills in my prototype), or code that is only needed when the user
invokes a rare expert feature? What about loading common libraries from a
CDN, so that chances are the users already have them in their browser
cache? For example, if we implemented the core Flash API that is usually
part of the Flash Player, it would not make sense to statically link it
into each and every FlashJS application.
Another, similar example is large modular applications of which only some
modules are updated. If you concatenate all modules into one big JS file,
every user has to download the new version of the big file (= the whole
application) instead of only the updated code.
A last argument is that modern browsers are quite good at loading static
resources in parallel requests, which is effectively prevented by serving
only one big file.
Ah, one more: the runtime overhead RequireJS adds to make module loading
happen is 15k (minified, not even gzipped). C'mon, 15k.
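The on-demand pattern I have in mind is roughly this (module name invented; the asyncRequire() stub stands in for RequireJS's asynchronous require so the sketch is self-contained, and is named asyncRequire only to avoid clashing with Node's require when run standalone):

```javascript
// Stub standing in for RequireJS's asynchronous require(); the real
// loader would fetch the script over HTTP instead of reading a registry.
var registry = {
  "acme/expertFeature": { run: function () { return "expert feature ran"; } }
};
var requested = [];           // records which modules were actually fetched
function asyncRequire(deps, callback) {
  setTimeout(function () {    // asynchronous, like real script loading
    requested.push(deps[0]);
    callback(registry[deps[0]]);
  }, 0);
}

// The expert feature's module is only requested once the user invokes it;
// users who never touch the feature never download its code.
function onExpertFeatureInvoked(done) {
  asyncRequire(["acme/expertFeature"], function (expertFeature) {
    done(expertFeature.run());
  });
}
```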


>
> > goog.inherits and goog.base are IMHO not needed at all. In JS code
> > generated from ActionScript, what goog.base does can be expressed by a
> > direct function call (see my prototype). goog.inherits is replaced by my
> > defineClass() utility, which consists of just 60 lines of code and
> > supports many more AS3 language features than just inheritance. I also
> > had an alternative "VanillaJS" version that did not even use a helper
> > method for inheritance (since it is so easy using the ES5 API), but when
> > adding the other language features, I found that this introduced too
> > much repetitive boilerplate code.
>
> Again: personal preference, but I prefer the readability and
> consistency that comes from using "goog" for this. And I don't see
> (other than stepping into, instead of over the lines with "goog") any
> real difference in using one utility method over another. Let's
> revisit this when I have had the chance to mock up some more code so
> we can compare "intermediate" JS formats.
>

My take is to use no utility method at all for super calls (VanillaJS).
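For clarity, this is the VanillaJS shape I mean (class names invented), using nothing but the ES5 API:

```javascript
// "VanillaJS" inheritance and super calls with nothing but the ES5 API --
// no goog.inherits, no goog.base, no helper of any kind.
function Shape() { this.kind = "shape"; }
Shape.prototype.describe = function () { return "a " + this.kind; };

function Circle() {
  Shape.call(this);            // super constructor call
  this.kind = "circle";
}
Circle.prototype = Object.create(Shape.prototype);
Circle.prototype.constructor = Circle;
Circle.prototype.describe = function () {
  // super method call: one plain, debuggable line -- stepping into it
  // lands you directly in Shape.prototype.describe.
  return Shape.prototype.describe.call(this) + " (round)";
};
```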


>
> > One thing I'd like to know: can you use the Closure Tools *completely
> > without* using the goog library in your JS code and still achieve
> > ADVANCED_OPTIMIZATIONS? If not, I think that is exactly where the
> > vendor-lock-in comes into play.
>
> Yes.
>

Then I'd like to create a variant of my prototype using Closure instead of
RequireJS to directly compare the two approaches. Would you help out, since
my GCC / goog knowledge is lacking?



>
> > See above. When talking about 1500 lines of code, I don't worry about
> code
> > size, but about complexity that shines through during debugging.
>
> Step over?
>

Yes, but what if you want to step into the super call? goog.base() hides
the actual call to the super method deep in its implementation code (inside
a for-loop!).
And don't forget the second argument, stack traces. Unlike built-in
functions like Function#call() and #apply(), goog.base() is visible in a
stack trace, so multi-level super calls look like this:

  A.foo()
  goog.base()
  B.foo()
  goog.base()
  C.foo()
...

Again, unnecessary bloat.


>
> >> *tool/vendor dependency:* The Google Closure Tools are Open Source,
> >> available under the Apache License 2.0.
> >>
> >
> > I know that. Still, it is Google's own interpretation of modules,
> > inheritance, ... for which there are at least partial standards in
> > JavaScript (ECMAScript 5, AMD).
>
> Well, all code is someone's interpretation of a requirement. And I
> think that for suppliers of frameworks (like Google and ourselves),
> that "partially" that you so casually throw in there really makes a
> difference. I'm sure no-one with a corporate client is willing to
> ditch IE 8 support "just yet". As long as Windows XP still holds at
> least a third of the total OS market share, that browser isn't going
> anywhere soon.
>

That's why, in the prototype, I only used ES5 features that can be
"polyfilled" in IE < 9, except for getter/setter functions. And if Falcon
can distinguish a getter access from a property access (someone asked that
around here, I just can't remember who), we could even rewrite these to
become ES3 compatible (goog also cannot magically implement getters/setters
in IE < 9).
I hope you agree that using Object.create() is more future-proof and less
vendor-specific than goog.inherits().
I concede that "AMD is more of a standard than goog.require()" is not such
a strong point. The more important argument is that asynchronous require
calls better fit the Web (in contrast to a server environment like Node.js,
where you can load scripts synchronously) and thus enable dynamic module
loading, see the discussion above.


>
> > To make the UI framework run cross-browser, I have no objections against
> > using an existing JavaScript framework. But this is my point when talking
> > about "separations of concerns". I wouldn't hard-wire the pure language
> > translation to "goog" just because it is a good base for a cross browser
> UI
> > framework. And any UI framework should be compatible with VanillaJS!
>
> A difference in approach, I guess. You focus on a single requirement
> and select the best tool for that, ending up with lots of tools when
> the number of requirements increases. I tend to look at the larger
> picture and select a tool that is best for the overall job... And
> knowing the kind of products Google has produced and maintained -
> using large teams, many different coders on one project - with that
> tool, I'm pretty confident it will serve this project very well
> indeed.
>

Yes, that's right, different approaches. I rather get a bad feeling when
one tool wants to do too many things, as I think there will always be some
strong parts and some weaker parts, and with a "Swiss Army knife" solution,
you probably can't mix in another tool so easily. I also like making
decisions when I feel ready for them, so having to decide which UI
framework to use while still trying to simulate a language feels awkward to
me.
I am not familiar with the goog library and haven't (knowingly) seen a web
UI built with it. At CoreMedia, we have been using Ext JS for quite a
while, and I'm still impressed by the results of using their framework.
With Sencha Touch, they are also strong in the mobile market.



>
> > What makes a difference to me is that to link with RequireJS, you do
> > not have to use a RequireJS-specific syntax, but an AMD-specific one.
> > They separate syntax and implementation, which GCC apparently doesn't.
>
> GCC does not need specific syntax. It offers you the ability to add
> annotation (but doesn't require it!) in order to help it do a better
> job, but it will do an excellent job on vanilla JS as well.
>

But how do you specify module dependencies then? Unfortunately, this is one
of the (few) things that are not possible in VanillaJS, and I think that
AMD is the "most vanilla" JavaScript syntax for modules one can imagine.
When we implemented the Jangaroo Runtime, there was no AMD / CommonJS etc.
back then, so we created our own dependency manager and script loader. I
estimate this to be 50% of the whole runtime code, and I'm glad the new
runtime will no longer have to care about this problem. The only question
left to discuss, for me, is whether to use synchronous require() (like the
goog one) or asynchronous require (like AMD / RequireJS), and I guess you
know my opinion on that by now.
The problem I have with synchronous require() is that for a Web
application, it is a compiler/linker instruction rather than a function
call. That's why I think AMD is closer to VanillaJS. To really load a
script synchronously in the browser, you'd have to do a synchronous XHR
(evil!) and then eval() the result (even more evil!). Btw., this is what
Ext JS 4's require mechanism actually does if you do not specify a
dependency in AMD fashion in advance, so it supports both synchronous and
asynchronous dependencies during development.
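To spell out the "evil" variant (module name invented; the transport is stubbed here so the sketch is runnable standalone, whereas in a browser it would be an XMLHttpRequest opened with async=false, blocking the UI):

```javascript
// What a truly synchronous require() must do in a browser: fetch the
// script source synchronously, then eval() it. The "server" is a stub.
var fakeServer = {
  "acme/util.js":
    "exportsByPath['acme/util.js'] = { double: function (x) { return 2 * x; } };"
};
var exportsByPath = {};

function syncRequire(path) {
  // Browser version (blocking!):
  //   var xhr = new XMLHttpRequest();
  //   xhr.open('GET', path + '.js', false);  // async=false -- evil
  //   xhr.send();
  //   var src = xhr.responseText;
  var src = fakeServer[path + ".js"];
  eval(src);                  // even more evil: evaluate the fetched source
  return exportsByPath[path + ".js"];
}
```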



>
> > Yes, that's my point. I argue that it does *too much* in one tool, so we
> > don't have fine-grained control over the code generation process.
>
> Actually, I'm talking about the complete set of Closure Tools: JSDoc
> annotations, Library, Compiler and Builder. That's 4 tools tailored to
> work together, but at least the Library and the Compiler are
> separately and independently usable. However, used together they
> complement each other in a way no other combination of tools I know of
> does.
>

So what, concretely, is missing from the tool combination (RequireJS +
shim plugin) that I propose?



>
> > Yes, but the code is still generated (even if only in-memory) and parsed,
> > which is an unnecessary overhead in any case.
>
> So you suggest we build our own optimiser/minifier (not an easy task,
> it seems to me) and lose the ability to output "intermediate" code
> just because we want to shave off (maybe, at most) a couple of seconds
> from the compile time?
>
>
No. I suggest that the Falcon compiler eventually have compiler switches
that control whether debuggable or optimized JavaScript code is generated
(these may even be fine-grained, to enable/disable certain optimizations).
Using your favorite build tool, you could then set up the compiler to run
twice, producing both types of output, if you like.
And concerning the complexity, I think our starting point is actually much
better for optimization than Closure's, as we already have a complete model
of all relevant code during compilation, and will, for example, need means
to map ActionScript identifiers to different JavaScript identifiers anyway.
Without any need for annotations, we also know things about the code like
"this is a static ActionScript method, so it will never disappear, be
re-assigned or overwritten -- we can just inline the code!".
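A toy before/after illustration of that static-method argument (names invented): because the compiler knows an AS3 static can never be re-assigned or overridden, it may rewrite the call site without any annotation.

```javascript
// Before: the generated call goes through the class object.
var MathUtil = {
  half: function (x) { return x / 2; }
};
function areaBefore(width) {
  return MathUtil.half(width * width);
}

// After: the compiler has inlined the static method's body at the call
// site -- safe, because AS3 statics cannot be re-assigned or overridden.
function areaAfter(width) {
  return (width * width) / 2;
}
```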


> > I'm talking about a new feature, available so far in Chrome only, to
> > let the browser debugger map the JavaScript code to the original source
> > code, see for example
> > http://www.html5rocks.com/en/tutorials/developertools/sourcemaps/
> > If this feature were present in all major browsers, there would no
> > longer be a need for a debuggable JavaScript code format if we
> > generated source maps. Firefox has published an implementation plan,
> > but it may take a while. IE 10 will probably follow, now that Microsoft
> > pushes TypeScript.
>
> Waiting for a non-standard feature to be implemented across browser
> and JS VMs might take "a while" :-P
>

I know, that's why I keep on evangelizing that we have to have a mode in
which we generate debuggable JavaScript code!
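For the record, the wiring on the generated side is tiny: a pragma appended to each generated file, plus a map file next to it (file names invented for illustration; the "mappings" string is elided here, as real maps encode positions in Base64 VLQ):

```javascript
// Last line of the generated Foo.js -- points the debugger at the map:
//@ sourceMappingURL=Foo.js.map

// Foo.js.map, served next to the generated file (source map v3 fields):
// {"version":3,"file":"Foo.js","sources":["Foo.as"],"names":[],"mappings":"..."}
```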

Thanks, Erik, it is really illuminating discussing these things with you!

Greetings,
-Frank-


>
> EdB
>
>
>
> --
> Ix Multimedia Software
>
> Jan Luykenstraat 27
> 3521 VB Utrecht
>
> T. 06-51952295
> I. www.ixsoftware.nl
>
