Developer Aspirations

YAPB - Yet Another Programming Blog



November 2009

Be Lazy, Automate Testing

by Colin Miller

I work on a product that manages test cases, test plans, and results of software testing. QA engineers create test cases in our system and submit results, usually through some sort of automation, and we keep track of those test results and generate useful reports.

When I first started this job, the product already existed. We were in more of a maintenance mode, adding features and fixing bugs. Ironically, while we provide a product to help out QA engineers, our own team has no QA team. There weren't even any unit tests on the main product, and all testing was done sporadically and manually.

This has been improving with a recent push to put all products under Continuous Integration with full unit, smoke, and regression test suites. However, writing tests and executing them takes time. Currently, many of the developers are working part-time as QA before we launch a new release. Unfortunately, many of them forget to apply their development tools to the QA process: namely, automation.

I can manually run through a test case in maybe 5 to 10 minutes, which isn't too bad. But with 70-80 test cases to run through before a release, that adds up to about 2 days of work. On a monthly release cycle, that means every 30 days we lose 2 days to manual testing. Worse, every release adds more tests, which have to be performed along with all the tests from previous releases.

Since we write a web app, a lot of our testing is done by clicking on things in the web UI. Front-end testing can be painful, but with the help of Selenium, we can automate this process. Creating the Selenium script can take a while, but once it's functional, later runs are fairly quick and can be executed on an automation server. So while each test takes longer to write up front, because you have to combine the manual testing pass with script creation, the end result is a steadily growing suite of tests that will run automatically whenever you wish.
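To give a feel for what one of these scripted runs looks like, here's a minimal sketch using the Selenium RC Java client. It assumes a Selenium server is already running on localhost:4444; the application URL, element locators, and expected page text are hypothetical stand-ins for your own app's.

```java
import com.thoughtworks.selenium.DefaultSelenium;
import com.thoughtworks.selenium.Selenium;

public class LoginSmokeTest {
    public static void main(String[] args) {
        // Assumes a Selenium RC server on localhost:4444 and an app
        // at localhost:8080 -- both hypothetical values for this sketch.
        Selenium selenium = new DefaultSelenium(
                "localhost", 4444, "*firefox", "http://localhost:8080/");
        selenium.start();
        selenium.open("/login");
        selenium.type("id=username", "qa-user");   // hypothetical locators
        selenium.type("id=password", "secret");
        selenium.click("id=submit");
        selenium.waitForPageToLoad("30000");       // timeout in milliseconds
        if (!selenium.isTextPresent("Dashboard")) {
            throw new AssertionError("login did not reach the dashboard");
        }
        selenium.stop();
    }
}
```

Once a script like this exists, re-running it costs nothing but machine time, which is what makes the up-front investment pay off.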

To tie our scripts together, we use TestNG inside a Maven project. Selenium provides Java bindings that let us launch it straight from our TestNG tests. While TestNG is designed for running unit tests, it works quite well for our regression tests. I was even able to build a TestNG listener that automatically inserts the result of each test into our product, so our product is keeping track of the results of its own tests.
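A listener like that might be sketched as follows. TestListenerAdapter and ITestResult are real TestNG classes; ResultsClient is a hypothetical stand-in for whatever API your results-tracking product exposes.

```java
import org.testng.ITestResult;
import org.testng.TestListenerAdapter;

// Pushes each test outcome to an external results store as it finishes.
// ResultsClient is a hypothetical placeholder, not a real library class.
public class ResultReportingListener extends TestListenerAdapter {
    private final ResultsClient client = new ResultsClient();

    @Override
    public void onTestSuccess(ITestResult result) {
        client.submit(result.getName(), "PASS");
    }

    @Override
    public void onTestFailure(ITestResult result) {
        client.submit(result.getName(), "FAIL");
    }

    @Override
    public void onTestSkipped(ITestResult result) {
        client.submit(result.getName(), "SKIPPED");
    }
}
```

The listener can then be registered in testng.xml (a `<listeners>` element) or on the command line, so every run reports itself without any extra effort from the test author.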

By spending the time to create these automated tests and packaging them in an easy-to-execute automation suite (we run the Maven tests from Hudson), we've greatly extended our product's test coverage. This helps ensure that when we add new features or fix bugs in one place, we're less likely to break something in another. It also saves us time each month by allowing us to spend it creating new automation for new tests rather than manually running all of the old ones along with the new ones.
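The Maven wiring itself is small: the Surefire plugin can be pointed at a TestNG suite file, and Hudson then only needs to invoke `mvn test`. A sketch of the relevant pom.xml fragment, with a hypothetical suite file name:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <suiteXmlFiles>
      <!-- hypothetical suite file listing the regression test classes -->
      <suiteXmlFile>src/test/resources/regression-suite.xml</suiteXmlFile>
    </suiteXmlFiles>
  </configuration>
</plugin>
```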

Later we plan to deploy the product automatically on each check-in via Hudson. Once we have that set up, we can continuously run our automated front-end tests against every check-in, finding problems faster.
