http://nedbatchelder.com/code/modules/coverage.html
I had written a basic test routine for a now-defunct project, and I felt the need for coverage statistics, to see how effective the tests written so far actually were.
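To illustrate the kind of gap coverage statistics reveal, here is a made-up example (the function and test are my own invention, not code from that project): a function with two branches and a test suite that only ever exercises one of them.

```python
# Hypothetical example: a function with two branches.
def classify(n):
    if n >= 0:
        return "non-negative"
    else:
        # A test suite that only passes non-negative numbers
        # never executes this line.
        return "negative"

# A minimal "test" that exercises only one branch:
assert classify(5) == "non-negative"
# classify() is never called with a negative number, so the
# "negative" branch would show up as untested in a coverage report.
```

The test passes, yet half the function is untested; that is exactly what a coverage tool makes visible.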
Googling around, I found the URL above, which points to a very nice tool. It is a simple Python script (coverage.py) that you run like this:
coverage.py -x your_script.py
and it collects execution data in a hidden .coverage file. When you're done, you can print a statistics report with
coverage.py -r
The report will include the standard Python modules in the list, whose coverage you are probably not testing, so try
coverage.py -r -o /usr/lib/python2.4
and only "your" modules will be reported. The -o parameter should probably have been passed to the first command along with -x, something like
coverage.py -x -o /usr/lib/python2.4 your_script.py
so that no data would be collected for the standard libraries at all, making execution faster. If you want detailed information about which code was or was not covered, do
coverage.py -a my_python_script.py
This will create a my_python_script.py,cover file annotating which lines were covered. An example of that report:
> try:
>     db_row = self.table.get(id)
>     row = self.db_to_row(db_row)
>     intermediate = self.row_to_intermediate(row)
> except notfound, e:
>     err = self.sqlnotfound(e)
! except modelerror, e:
!     err = e.args[0]
! except sqlerror, e:
!     err = self.sqlerr(e)
The lines prefixed with ">" (the main path and the "notfound" handler) were executed by the tests, while the lines prefixed with "!" (the modelerror and sqlerror handlers) were never exercised.
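A report like that points directly at what to test next. Sketching with stand-in exception classes and a stub table (the real notfound/modelerror/sqlerror classes belong to the defunct project, so everything here is an assumption, and I use modern "except ... as e" syntax), the fix is a test that forces each untouched except branch to run:

```python
# Stand-in exception classes; the originals are project-specific.
class notfound(Exception): pass
class modelerror(Exception): pass
class sqlerror(Exception): pass

class FakeTable:
    """A stub whose get() raises whichever error we want to exercise."""
    def __init__(self, exc):
        self.exc = exc
    def get(self, id):
        raise self.exc

def fetch(table, id):
    # Mirrors the try/except structure of the annotated snippet above.
    try:
        row = table.get(id)
        return ("ok", row)
    except notfound as e:
        return ("err", "not found")
    except modelerror as e:
        return ("err", e.args[0])
    except sqlerror as e:
        return ("err", "sql error")

# Tests that drive each previously untested branch (the '!' lines):
assert fetch(FakeTable(modelerror("boom")), 1) == ("err", "boom")
assert fetch(FakeTable(sqlerror()), 1) == ("err", "sql error")
```

Re-running coverage.py -x and then -a after adding such tests would flip the "!" markers on those handlers to ">".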