commons-dev mailing list archives

From Jacob Beard <jbea...@cs.mcgill.ca>
Subject Re: [all] preparing initial commit
Date Tue, 18 May 2010 22:13:47 GMT
Hi,

Thanks to everyone for the feedback. In this email, I'm going to
describe the structure of my project, the current build system, and
try to propose solutions to some of the issues that have been raised.

The overall goal of SCXMLcgf/js is to construct an SCXML-to-JavaScript
compiler. The compiler should take one or more SCXML source files, and
produce JavaScript code that may be executed in a web browser, or in a
shell environment using Mozilla Rhino. Additionally, the compiler
itself should be portable, so that it may be run in a web browser or
from the command line using Rhino. The code I have produced so far
only runs from the command line, but decisions that are made going
forward should attempt to ensure portability across both environments.

The project is currently composed of the following artifacts:
* the compiler: six classes, each in its own .js file
* unit tests: a test SCXML file and a unit test script
* performance tests: an SCXML file, a script that measures the
performance of compiled JavaScript code, and a script that analyzes
the captured performance data

There is also an automation script, written in JavaScript and
designed to be run under Rhino, which automates unit and performance
testing. The compiler is written entirely in JavaScript, so a
compilation step is not required. Nevertheless, the automation script
does a lot. To illustrate this, here is a page from my notes showing
the tasks performed in order to run performance tests:

http://jacobbeard.net/Note1.pdf

What this note says is that each test case can have many performance
tests. Each test case is run through the compiler once for each
backend to create JavaScript implementations. Each generated
JavaScript implementation is combined with each performance test to
create test HTML files. Each test HTML file is then opened in every
browser available on the system, and data is captured in order to
derive the final performance results. These results are then passed
to the analysis script to produce a summary.

Unit testing follows a similar process.

I have never used Maven, and so did not originally consider it as an
option, but I did consider using Ant. I have used Rhino in conjunction
with Ant on previous projects, but I decided against it for this
project, because I felt I would be able to develop this nontrivial
build logic more quickly by writing it in JavaScript. This allowed me
to simply pass around native JavaScript data structures as I wired
things together, and made things like looping straightforward.
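
For example, the kind of thing I mean looks roughly like this (a
simplified sketch with made-up backend and file names, not a fragment
of the actual script; compile() stands in for the real entry point):

    // Plain JavaScript data structures describe what to build...
    var backends  = ["stateTable", "switchStatement"];  // hypothetical backend names
    var testCases = ["test1.scxml", "test2.scxml"];     // hypothetical test files

    // ...and ordinary loops wire the steps together under the Rhino shell.
    for (var i = 0; i < backends.length; i++) {
        for (var j = 0; j < testCases.length; j++) {
            compile(testCases[j], backends[i], "build/");
        }
    }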

Here is a summary of the tasks currently performed by the automation script:

	- clean: deletes the build directory
	- init: creates the build directory
	- genJavaScript: generates JavaScript from unit and performance test SCXML
	- genPerformanceTestHtml: generates an HTML file for each performance
test script and each target JavaScript file
	- runPerformanceTestsWithSelenium: uses Selenium to open generated
performance test HTML files in browsers that are available on the
system, and capture benchmark data from them
	- doPerformanceAnalysis: sends captured benchmark data to the
performance analysis script
	- genUnitTestHtml: generates an HTML file for each unit test
script and each target JavaScript file
	- runUnitTestsWithSelenium: uses Selenium to open generated unit test
HTML files in browsers that are available on the system, and capture
unit test results from them
	- runUnitTestsWithRhino: runs unit tests with Rhino.
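
A rough sketch of how these tasks are wired together (the function
bodies are placeholders and the script name is made up):

    var tasks = {
        clean         : function () { /* delete the build directory */ },
        init          : function () { /* create the build directory */ },
        genJavaScript : function () { /* run the compiler on the test SCXML */ }
        // ...remaining tasks omitted
    };

    // The Rhino shell exposes command-line arguments as a global "arguments"
    // array, so the script can be invoked as, e.g.:
    //   java -jar js.jar build.js clean init genJavaScript
    for (var i = 0; i < arguments.length; i++) {
        tasks[arguments[i]]();
    }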


So, that concludes the introduction to my project and its build
system. I'll now attempt to answer some of the questions that have
been raised.

First, what role, if any, should Maven play in this project?

I have never worked with Maven, so my perspective on this is based on
an afternoon's research. I feel that Maven would, in theory, be useful
to this project, but it would take a lot of work in order to get it to
a point where it could be used, as plugins would need to be written to
interact with the JavaScript front-end through stdin. Also, many
portions of Maven's default lifecycle, such as compilation and
packaging, are not really relevant to the project at this point. The
main things that require automation in the project right now are
unit and performance testing, and, potentially, downloading of library
dependencies. I'm not sure if Maven adds value when it comes to
downloading external libraries, but it's not difficult for me to think
about how this could be achieved using Rhino and a custom build
script. My preference would be to continue using the project's current
build system.
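
For instance, a dependency could be fetched with Rhino's Java interop
along these lines (the URL and target path below are placeholders):

    var url    = new java.net.URL("http://example.org/lib/some-library.js");
    var input  = url.openStream();
    var output = new java.io.FileOutputStream("lib/some-library.js");

    // Copy the stream in 4 KB chunks using a Java byte[] buffer
    var buffer = java.lang.reflect.Array.newInstance(java.lang.Byte.TYPE, 4096);
    var n;
    while ((n = input.read(buffer)) !== -1) {
        output.write(buffer, 0, n);
    }
    input.close();
    output.close();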

The next question is: should external libraries be checked into the
main source tree, should they be downloaded as part of a build
script, or should they be pulled in via svn:externals?

It sounds like it would be desirable to download external libraries,
rather than checking them in, and I think this is fine. It should be
noted, however, that this decision will have an impact on the other
technology that will be used in the project. Specifically, this
decision will impact selection of a library for module loading.

Here's just a bit of background on this: because JavaScript does not
have native language features for module loading, it relies on
libraries to provide this functionality. So far, in the project, I
have been using the load() function provided by Rhino to define
dependencies between modules. Unfortunately, this approach has two
disadvantages which make it inappropriate going forward: it is not
portable to the browser environment, and a loaded script does not
know its own location in the filesystem, which makes loading
resources via relative paths awkward.
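
Concretely, the current code wires dependencies like this
(illustrative file names):

    // load() is a Rhino shell built-in; it is not defined in browsers, and the
    // path is resolved relative to the current working directory rather than
    // relative to the file that calls load(), so running the compiler from a
    // different directory breaks the relative paths.
    load("util.js");
    load("scxmlParser.js");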

I have investigated two libraries for module loading: Dojo, and
RequireJS. I initially used Dojo, because I was already familiar with
it, and am already using part of Dojo for unit testing. Unfortunately,
the Dojo module system has a limitation that would make the
downloading of external libraries difficult: for a JavaScript module
to be loaded by Dojo, the file must contain a
"dojo.provide(<modulename>)" call. For the majority of external
JavaScript libraries, this means the file would have to be modified
to include this call. Dojo would therefore require JavaScript
libraries to be modified and checked in.

RequireJS, on the other hand, does not impose this constraint, and is
better-suited to the purposes of this project in other ways, which I
can describe later if anyone is interested.
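
To make the contrast concrete (module and file names below are made up):

    // Dojo: the loaded file itself must declare the module, so an unmodified
    // third-party library cannot be pulled in this way.
    dojo.provide("scxml.SomeLibrary");

    // RequireJS: the caller declares the dependency, so the library file can
    // be used as-is.
    require(["lib/someLibrary.js"], function () {
        // code that needs the library runs after it has loaded
    });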

To summarize, my feeling on this is that it would be preferable to use
RequireJS as opposed to Dojo for module loading, and that external
libraries should be downloaded by a build script, or via
svn:externals, as opposed to being checked in.

If the above decisions seem reasonable (that Rhino be used for the
build system, RequireJS as the module loader library, and external
dependencies downloaded by the build script), then my next questions
would be about the project structure. Right now, almost all
artifacts are located in a single directory (as I said, Rhino's load
function makes relative paths tricky), so there is a great deal of
flexibility as to how this could be structured. Would it make sense
to use something similar to a standard m2 project layout? Would this
help or hinder developer comprehension if Maven is not being used?
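
For example, one possibility, loosely adapted from the m2 conventions
(just a sketch, not a proposal for a fixed standard):

    src/main/javascript/    compiler sources
    src/test/javascript/    unit test scripts
    src/test/resources/     test SCXML documents
    lib/                    downloaded external libraries
    build/                  generated JavaScript, HTML, and benchmark data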

Thanks again for the feedback. I look forward to hearing what you think,

Jake


On Tue, May 18, 2010 at 1:38 AM, Henri Yandell <flamefew@gmail.com> wrote:
> On Mon, May 17, 2010 at 11:21 AM, Rahul Akolkar <rahul.akolkar@gmail.com> wrote:
>> On Sun, May 16, 2010 at 11:25 PM, Jacob Beard <jbeard4@cs.mcgill.ca> wrote:
>>> Hi,
>>>
>>> I'm currently working to prepare the initial commit for my Google
>>> Summer of Code project. I have a quick question regarding external
>>> libraries. My project currently uses a few libraries which are
>>> licensed under liberal, non-copyleft licenses. A list of these
>>> libraries and their associated licenses is as follows:
>>>
>>> Mozilla Rhino - MPL
>>> Dojo JavaScript toolkit - BSD/AFL dual license
>>> Selenium - Apache 2.0 License
>>> js-beautify - something a bit non-standard:
>>> http://github.com/einars/js-beautify/blob/master/license.txt
>>> json2 - Public Domain
>>>
>> <snip/>
>>
>> I'll first address the licenses and then get to the how to include the
>> libraries bit.
>>
>> You may have seen the following categorization of licenses we use
>> (Category A is generally easier to incorporate than B):
>>
>>  http://www.apache.org/legal/3party.html#category-a
>>
>>  http://www.apache.org/legal/3party.html#category-b
>>
>> Given that, the above list you have looks OK:
>>  * Assuming we'll only depend on Rhino binaries
>>  * The js-beautify license seems reasonable, we may have to ping the
>> ASF Legal Affairs committee for a definite answer
>
> js-beautify license == MIT.
>
> Hen
>

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@commons.apache.org
For additional commands, e-mail: dev-help@commons.apache.org

