JSCover: code coverage in Javascript



This article expresses the author's opinion at the time of writing. There are no guarantees of correctness, originality or current relevance. Copying the whole article is forbidden. Transcription of selected parts is allowed, provided that the author and this source are mentioned.

As you probably know, the HP-12C emulator and all its siblings are written in Javascript.

It has a fair body of unit testing routines built over the years, but it was missing code coverage (CC) statistics. I took a look at many Javascript CC products, and ended up choosing JSCover.

JSCover's operation is clever: it runs a local Web server which automatically instruments the Javascript files served to the Web client, e.g. a browser. Therefore, JSCover does not execute the code itself; it allows you to run it in whatever environment you want.

Example of command that starts such a server:

java -jar JSCover-all.jar -ws --document-root=data \
	--report-dir=report

The data/ folder should contain HTML files that refer to Javascript code. The report/ folder will have CC data if it is explicitly stored. Files in data/ may be symlinks to other folders.

There is a page (http://localhost:8080/jscoverage.html) that gives access to CC statistics. At this point, the CC data is still at browser side (and report/ is empty). It needs to be stored explicitly. Storage can be triggered programmatically as well, by calling jscoverage_store().
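As a sketch of that programmatic trigger: jscoverage_store() itself comes from JSCover, but the wrapper function below is my own illustration, so a test runner could end with something like it.

```javascript
// Hypothetical helper that a test runner could call when the suite ends.
// jscoverage_store() is injected by JSCover's server into instrumented
// pages; the guard keeps the code harmless outside that environment.
function storeCoverage() {
    if (typeof jscoverage_store === 'function') {
        jscoverage_store();   // writes the CC data into report/
        return true;
    }
    return false;             // not running under JSCover
}
```

The guard also means the same test page can still be loaded directly, without going through the JSCover server.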

The CC instrumentation is pretty clever and conceptually simple: before every line of the original code, another line like _$jscoverage['foo.js'].lineData[123]++ is added. Much simpler and more portable than modifying a specific engine or using engine-specific debug hooks.
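To illustrate the idea (this is a conceptual sketch of the transformation, not JSCover's exact output), an instrumented one-line function looks roughly like:

```javascript
// Counter structure that the instrumentation prepends to each file
var _$jscoverage = { 'foo.js': { lineData: [] } };
_$jscoverage['foo.js'].lineData[1] = 0;

// Original line 1 of foo.js was: function add(a, b) { return a + b; }
function add(a, b) {
    _$jscoverage['foo.js'].lineData[1]++;  // hit counter for line 1
    return a + b;
}

add(2, 3);
add(4, 5);
// _$jscoverage['foo.js'].lineData[1] is now 2: line 1 executed twice
```

The coverage report then just reads the counters: a line with count zero was never executed.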

The main objective is clear: you can run your Web-based application naturally, using it in your favorite browser, and get coverage data.

The first problem I ran into was how my unit tests were structured. I have this NIH habit of not using unit test frameworks, and I used Rhino-specific tricks like Thread.sleep() to test time-sensitive operations (e.g. display "blinks" to mimic a real calculator, meaning that display contents do not immediately reflect the result of an operation).

Since I could not use Rhino (it cannot execute an HTML file), I had to make my unit test code truly asynchronous. Things like sleep(100) gave way to after(100, function () {...}). At least the browser offers setTimeout() by itself, while in Rhino I had to simulate it using threads. If I were using Jasmine or similar, I would probably have been spared these efforts.
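The after() helper itself is trivial on top of setTimeout(); this is my reconstruction of the idea, not the article's actual code:

```javascript
// Run fn once, ms milliseconds from now (thin wrapper over setTimeout)
function after(ms, fn) {
    setTimeout(fn, ms);
}

// Instead of blocking with sleep(100) and then asserting, the check
// moves into a deferred callback that runs after the "blink" interval:
var display = "blinking";
after(100, function () {
    display = "settled";
    console.log("display checked after 100 ms");
});
```

The cost of this style is that every test that used to sleep now has to be chained through callbacks, which is exactly the restructuring described above.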

At this point, my workflow was: open jscoverage.html on browser; from there, open hp11c.html, hp12c.html, etc. to run all unit tests, collect CC data and then store it. After storage, convert it to a more permanent and more familiar LCOV report:

java -cp JSCover-all.jar jscover.report.Main \
	--format=LCOV report/ data/
cd report
genhtml jscover.lcov
open index.html

I guess JSCover has portions written in Javascript, so it depends on Rhino to function. The JSCover-all.jar file in the distribution contains the specific Rhino version that it needs to run (*) (**).

I found a bug in JSCover related to JS files without any executable code (either empty or just with comments). It was promptly fixed by the author, and I tested the VCS version to check the fix. You should get version 1.0.1 or better to avoid this bug (or just make sure that no JS file is empty).

Having a way to generate a coverage report was great, but it would be even better if it were 100% automatic.


If I had the option, I would have used Rhino as the JS engine, but it was not an option, so I had to find another way.

I found PhantomJS, a complete "headless" Webkit-based browser. It can do anything that a browser does, but renders offscreen. And it can be driven by a JS script, making it perfect for unit testing.

PhantomJS distributes "static" binaries for major operating systems, so you don't have to install anything or mess with folders and libraries. I just added the binary to my VCS and that's it.

With the help of PhantomJS, I ended up with 100% automatic code coverage report:

rm -rf report/*
java -jar JSCover-all.jar -ws --document-root=data \
		--report-dir=report --include-unloaded-js &
PID=$!
sleep 1

ERROR=0
if ./phantomjs phantom.js http://localhost:8080/hp11c.html &&
   ./phantomjs phantom.js http://localhost:8080/hp12c.html &&
   ./phantomjs phantom.js http://localhost:8080/hp16c.html; then
	echo "All tests run"
	sleep 1
else
	echo "Error running tests"
	ERROR=1
	sleep 1
fi

kill $PID

if [ -e report/error* ]; then
	echo "Error log in report file, some error in JSCover?"
	exit 1
fi

if [ "$ERROR" = "0" ]; then
	sleep 1
	java -cp JSCover-all.jar jscover.report.Main \
			--format=LCOV report/ data/ &&
		cd report &&
		genhtml jscover.lcov &&
		open index.html
fi

The script that drives PhantomJS (phantom.js):

var system = require('system');

if (system.args.length !== 2) {
    console.log('Usage: phantomjs script URL');
    phantom.exit(1);
}

var page = require('webpage').create();

page.onConsoleMessage = function(msg) {
    console.log(msg);
};

page.onError = function (msg, trace) {
    console.log(msg);
    trace.forEach(function(item) {
        console.log('  ', item.file, ':', item.line);
    });
};

var check_done = function () {
    var res = page.evaluate(function () {
        return [ut_done, ut_exit];
    });
    if (! res[0]) {
        console.log("PhantomJS: not done yet");
        setTimeout(check_done, 1000);
    } else {
        console.log("PhantomJS done, exit code " + res[1]);
        phantom.exit(res[1]);
    }
};

page.open(system.args[1], function (status) {
    if (status !== "success") {
        console.log("Unable to load page (server out?)");
        phantom.exit(1);
    } else {
        setTimeout(check_done, 0);
    }
});

Some explanations are due here. First, if you use Jasmine or other major unit test framework, you won't need to put together such a script, since both PhantomJS and JSCover supply ready-to-use examples. I had to write one because my unit test "framework" is homebrew.

The Javascript code that runs inside a browser session (created by page.open() above) has no access to the outer environment (in which the driver script above is run) and has no access to the phantom object. For example, the unit test routine cannot notify directly that it is done, and cannot request PhantomJS to exit.

In order to exchange data, you need to run page.evaluate(function). That function is run in browser context. It should not be called in a busy-waiting loop; otherwise it would severely slow down the browser context.

In the example above, I use page.evaluate() to retrieve ut_done and ut_exit variables once a second. When my unit testing is done, ut_done is set to 1, the driver script eventually detects this and calls phantom.exit().
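On the browser side, the matching harness code can be as simple as this sketch (ut_done and ut_exit are the flag names used above; the finish_tests() hook is my assumption, not the article's actual code):

```javascript
// Flags polled by the PhantomJS driver via page.evaluate()
var ut_done = 0;   // becomes 1 when the whole suite has finished
var ut_exit = 0;   // process-style exit code: 0 = all tests passed

// Hypothetical hook called once, by the last asynchronous test
function finish_tests(failures) {
    ut_exit = failures > 0 ? 1 : 0;
    ut_done = 1;
}

finish_tests(0);
// ut_done is now 1; the driver's next poll sees it and calls
// phantom.exit(ut_exit)
```

Because check_done() polls only once a second, the browser context stays responsive while the asynchronous tests run.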

Also, note that I added handlers for console messages and errors; these events happen completely within browser context and would be invisible otherwise. (I just saw I should have added phantom.exit(1) in the error handler.)


(*) I am beginning to like the JAR format. Selling my soul to yet another devil :)

(**) Rhino 1.7R4 has a bug related to JavaAdapter, which affects e.g. the ability to use java.util.TimerTask from Javascript (which I currently use to emulate setTimeout(), instead of sleeping threads). Many products that depend on Rhino, JSCover included, bundle 1.7R5 compiled from VCS. I got a fixed js.jar from one of them (too lazy to compile it!).
