I have just finished writing my first NUnit test. I had not written any test software before, and I had certainly never delved into 'nant test'. So I thought I would write up my 'experiences'.
'nant test' runs all the tests that have been entered into the nant build file. Some are very low level functional tests that need neither client nor server. Others are server tests while a few are client tests. Running the server, if required, is all handled automatically.
The first time I ran 'nant test' I found pretty quickly that I got a BUILD FAILED message. It came from one of the initial low-level functional tests, and it was immediately apparent that the cause was a change I had made to the way CSV files are parsed. I had wanted to make the CSV import rules more 'flexible' and more in keeping with the generally accepted principle that a CSV data file may include optional spaces after each comma, which should be stripped during import. I had made the code work that way, had modified the Ict.Common test that checked the CSV behaviour, and assumed that was enough. However, running 'nant test' for the first time made me realise that CSV parsing is used far more widely in Open Petra than for simple file I/O, so tests were now failing in other places.
The way to solve this was to change my new implementation of file import so that, rather than modifying the existing behaviour, I extended it. That way all the old code continued to work exactly as before, my new behaviour was available alongside it, and I then wrote a few additional tests to check my new extensions.
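To give a flavour of what those additional tests look like (the parser below is just an illustrative stand-in, not the actual Ict.Common class, and all the names are made up for this sketch):

using NUnit.Framework;

[TestFixture]
public class TestCsvImportWithOptionalSpaces
{
    // Hypothetical helper standing in for the extended CSV parser;
    // the real Ict.Common class and method names differ.
    private static string[] ParseCsvLine(string ALine, bool ATrimSpaces)
    {
        string[] values = ALine.Split(',');

        if (ATrimSpaces)
        {
            for (int i = 0; i < values.Length; i++)
            {
                values[i] = values[i].Trim();
            }
        }

        return values;
    }

    [Test]
    public void OldBehaviourIsUnchanged()
    {
        // Without the new option, spaces after the comma are preserved as before
        string[] result = ParseCsvLine("a, b,c", false);
        Assert.AreEqual(" b", result[1]);
    }

    [Test]
    public void OptionalSpacesAreRemovedWhenRequested()
    {
        // With the new option, spaces after the comma are stripped on import
        string[] result = ParseCsvLine("a, b, c", true);
        Assert.AreEqual("b", result[1]);
        Assert.AreEqual("c", result[2]);
    }
}

The point of keeping both tests is that the first one proves the existing behaviour has not changed, while the second one covers the new extension.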
Conclusion #1: This showed me how useful it is to have tests at all. By having tests that show how the software has been designed to work, it quickly becomes apparent when a later 'enhancement' has upset what has gone before. So tests are good! But you knew that already!
Having fixed my new code and new tests, I was in a position to run 'nant test' again... This time I again got BUILD FAILED, and inspection of the nant output showed the following:
Thursday, 21-Feb-2013, 20:41:56 : Error executing non-query SQL statement.
The SQL Statement was:
UPDATE public.a_batch SET a_batch_description_c = ?, a_batch_debit_total_n = ?, a_batch_credit_total_n = ?, a_date_effective_d = ?, a_last_journal_i = ?, a_gift_batch_number_i = ?, s_modification_id_t = NOW(), s_modified_by_c = ?, s_date_modified_d = ? WHERE a_ledger_number_i = ? AND a_batch_number_i = ? AND s_modification_id_t = ?
Parameter: 1 Gift Batch 1 System.String VarChar 160
Parameter: 2 20.0000000000 System.Decimal Decimal 24
Parameter: 3 20.0000000000 System.Decimal Decimal 24
Parameter: 4 01/01/2013 00:00:00 System.DateTime Date 0
Parameter: 5 1 System.Int32 Int 0
Parameter: 6 1 System.Int32 Int 0
Parameter: 7 DEMO System.String VarChar 20
Parameter: 8 21/02/2013 20:41:56 System.DateTime Date 0
Parameter: 9 43 System.Int32 Int 0
Parameter: 10 1 System.Int32 Int 0
Parameter: 11 21/02/2013 20:41:56 System.DateTime DateTime 0
Possible cause: Npgsql.NpgsqlException:
column "a_gift_batch_number_i" of relation "a_batch" does not exist
Severity: ERROR
Code: 42703
It appeared that my "a_batch" table was missing a column. Since at this point I already knew I had the latest code after a complete merge from trunk, I ran a 'patchDatabase' command and noticed that one patch was indeed applied (I think to 0.2.23) - but this made no difference. It was only when I deleted the database and completely re-created it that 'nant test' would go any further.
Conclusion #2: It appears that either the patch in trunk is missing at least this required upgrade, or we do not yet have a patch in trunk that brings the database fully up-to-date. This will need to be fixed. (There may have been other column schema changes to the a_batch table that did not show up in the message.)
So, now that I had a database that let that test complete, I could continue again... This time I got a BUILD FAILED message, but all the preceding messages were BUILD SUCCEEDED and there was nothing in the output to help me locate the error. I was at least able to work out from the nant output which test was being run when it failed: Ict.Testing.lib.MFinance.GL.dll. For the time being I commented out that test in the Test.build file to see if I could carry on.
Conclusion #3: It is apparent that something must be different on my computer from the test build server - even though I am supposed to have the latest code from trunk - because apparently the test server does not find any errors.
Having decided to comment out the particular test and investigate later, I carried on again... This time the tests all ran to completion and I got my BUILD SUCCEEDED!
However - all was not finished. By now I had examined the Test.build file and realised that any test you write has to be added to the build file explicitly. I noticed that many of the tests we have written are not part of the test build, so they do not get run regularly. Also, some tests within the DLLs have been marked with the [Ignore] attribute, presumably because they were found to no longer pass(?)
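For anyone who has not come across it, an ignored test looks like the example below; NUnit reports it as skipped rather than failed, so it is easy to forget it is there at all (the fixture and test names here are invented for illustration):

using NUnit.Framework;

[TestFixture]
public class TestSomeFinanceFeature
{
    [Test, Ignore("Stopped passing after the batch table changes; needs investigation")]
    public void TestPostBatch()
    {
        // NUnit skips this test entirely and reports it as Ignored,
        // so a BUILD SUCCEEDED does not mean this code path was exercised.
        Assert.Fail("This assertion never runs while the test is ignored.");
    }
}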
Conclusion #4: I don't know whether you assumed that all our tests were running every night - they are not! We should update the build file so that all tests are run every time, and if tests fail we should make it a priority to fix the test or the main code.
Finally, I added my own new tests to the build file and – success!
As of now, and including my new server and client tests, we run 17 test DLLs - but we have 33 of them in total. I am going to see how many more of these I can add (and still have everything pass), but other people may need to be involved in providing help and advice.
Later On...
I investigated the cause of the failure of Ict.Testing.lib.MFinance.GL.dll. It turned out to be caused by a bug in server\lib\MFinance\setup\GL.setup.cs. Wolfgang had also found that this was wrong and committed the fix the following day (22 Feb 2013) - so the test server must also have found that the test build failed, but I was not aware of it.
My last conclusion is really an observation. When you run 'nant test' it uses the demo database and completely resets its content. I can understand that this is sometimes necessary - and indeed it is good to run tests on an 'empty' database to prove that nothing fails when tables have no records. But it is also important to run tests on databases that do have content - and to make sure that test actions do not 'mess with' existing data, or fall over due to unhandled duplicate-record errors. The tests that I wrote can handle both a blank table and a table with prior data. I was able to create a working test ledger for my tests, which could be deleted entirely at the end of the run.
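To show the pattern I mean (the TestDataHelper class below is a hypothetical stand-in for the real server calls, included only so the sketch compiles), the fixture creates its own ledger before the tests run and removes it afterwards, so the tests behave the same whether the database is empty or already populated:

using NUnit.Framework;

// Hypothetical in-memory stand-in for the server-side calls, just to make
// the pattern self-contained; the real tests call the Open Petra server instead.
public static class TestDataHelper
{
    private static int NextLedger = 1;
    public static int CreateLedger(string ADescription) { return NextLedger++; }
    public static void DeleteLedger(int ALedgerNumber) { }
    public static int CreateBatch(int ALedgerNumber, string ADescription) { return 1; }
}

[TestFixture]
public class TestBatchesOnTestLedger
{
    private int FTestLedgerNumber;

    [TestFixtureSetUp]
    public void CreateTestLedger()
    {
        // Create a dedicated ledger so the tests never touch existing data.
        FTestLedgerNumber = TestDataHelper.CreateLedger("NUnit test ledger");
    }

    [TestFixtureTearDown]
    public void DeleteTestLedger()
    {
        // Remove everything the tests created, leaving the database as it was found.
        TestDataHelper.DeleteLedger(FTestLedgerNumber);
    }

    [Test]
    public void TestCreateBatchInTestLedger()
    {
        int batchNumber = TestDataHelper.CreateBatch(FTestLedgerNumber, "Test batch");
        Assert.Greater(batchNumber, 0, "A new batch should get a positive batch number");
    }
}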
Conclusion #5: We should consider making our tests prove functionality on both empty and populated tables.
Here is a summary of the conclusions again…
Conclusion #1: Tests are really useful!
Conclusion #2: We do not have a patch that upgrades a database to the current schema.
Conclusion #3: Tests should have been failing on the test server while there was an error in the code - and maybe they were.
Conclusion #4: Not all test DLLs are actually run on the test server.
Conclusion #5: We need to think about whether tests should run on both an empty database and a populated one.