What Mistakes Can You Make in Designing Your Automated Tests


We subtitled this session "Mistaeks I Hav Made" in Nat Pryce's honour. Facilitated by Paul O'Keeffe.

The Laundry List

Here's the list of gotchas we discussed:

  • Too much mocking.
  • Unit tests that are too tight (over-specified), so they break whenever the implementation changes.
  • Not refactoring test code as much as production code!
    • Copying and pasting test code.
  • Commenting out broken tests.
  • Testing at the wrong level.
  • Too many tests, so the suite takes too long to run.
  • No cleanup after tests.
  • Not starting with a clean test data setup.
  • Tests that don't respect the domain.
  • Only having big integration tests.
  • Not having team ownership of tests.
  • No test independence (see the fixture sketch after this list).
  • Tests that work by coincidence.
  • Time- or timing-dependent tests (see the clock sketch after this list).
  • Not starting with a broken test.
  • Testing things that don't matter (for example, minor details of HTML).
  • Not testing exception cases.
  • Tests that spit out noise.
  • Tests that are just plain wrong.
  • Tests that don't check anything.
  • Not using data-driven tests (see the parameterised test sketch after this list).
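
The cleanup, clean-setup and independence points share one common remedy: rebuild a known fixture before every test and tear it down afterwards, rather than letting tests inherit leftover state from each other. Here is a minimal JUnit 4 sketch; Account and AccountRepository are hypothetical classes invented for illustration, defined inline so the example is self-contained.

import java.util.HashMap;
import java.util.Map;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical classes under test, inlined to keep the sketch self-contained.
class Account {
    private int balance;
    Account(int balance) { this.balance = balance; }
    int balance() { return balance; }
    void withdraw(int amount) { balance -= amount; }
}

class AccountRepository {
    private final Map<String, Account> accounts = new HashMap<String, Account>();
    void add(String name, Account account) { accounts.put(name, account); }
    Account find(String name) { return accounts.get(name); }
    void deleteAll() { accounts.clear(); }
}

public class AccountRepositoryTest {
    private AccountRepository repository;

    @Before
    public void createCleanFixture() {
        // Every test starts from the same known state, so each test
        // passes or fails on its own merits and can run in any order.
        repository = new AccountRepository();
        repository.add("alice", new Account(100));
    }

    @After
    public void cleanUp() {
        // Release whatever the fixture acquired (rows, files, connections).
        repository.deleteAll();
    }

    @Test
    public void withdrawalReducesBalance() {
        repository.find("alice").withdraw(40);
        assertEquals(60, repository.find("alice").balance());
    }
}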
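Timing dependence usually creeps in through direct calls to the system clock or through sleeps. One common fix is to inject a clock the test controls. A minimal sketch follows; the Clock interface and Invoice class are hypothetical, invented for illustration, not any particular library's API.

import org.junit.Test;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

public class OverdueInvoiceTest {

    // Hypothetical abstraction over the system clock.
    interface Clock {
        long now(); // milliseconds since the epoch
    }

    // Hypothetical class under test: it asks the injected clock for the
    // time instead of calling System.currentTimeMillis() directly.
    static class Invoice {
        private final long dueAt;
        private final Clock clock;

        Invoice(long dueAt, Clock clock) {
            this.dueAt = dueAt;
            this.clock = clock;
        }

        boolean isOverdue() {
            return clock.now() > dueAt;
        }
    }

    @Test
    public void invoiceIsOverdueOnceTheDeadlinePasses() {
        // A fake clock the test fully controls: no sleeps, no flakiness.
        final long[] time = { 1000L };
        Clock fakeClock = new Clock() {
            public long now() { return time[0]; }
        };

        Invoice invoice = new Invoice(2000L, fakeClock);
        assertFalse(invoice.isOverdue());

        time[0] = 3000L; // "advance" time deterministically
        assertTrue(invoice.isOverdue());
    }
}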
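For the data-driven point, JUnit 4's Parameterized runner lets one test body run over a whole table of cases instead of copy-pasted near-identical tests. A minimal sketch; the leap-year rule is just an illustrative function, inlined to keep the example self-contained.

import java.util.Arrays;
import java.util.Collection;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;
import static org.junit.Assert.assertEquals;

@RunWith(Parameterized.class)
public class LeapYearTest {

    @Parameters
    public static Collection<Object[]> data() {
        // Each row is one test case: { year, expected }.
        return Arrays.asList(new Object[][] {
            { 2000, true },   // divisible by 400
            { 1900, false },  // divisible by 100 but not 400
            { 1996, true },   // divisible by 4
            { 1997, false },  // not divisible by 4
        });
    }

    private final int year;
    private final boolean expected;

    // The runner constructs the test once per row, passing that row's values.
    public LeapYearTest(int year, boolean expected) {
        this.year = year;
        this.expected = expected;
    }

    // Hypothetical function under test, inlined for the sketch.
    static boolean isLeapYear(int year) {
        return (year % 4 == 0 && year % 100 != 0) || year % 400 == 0;
    }

    @Test
    public void classifiesYear() {
        assertEquals(expected, isLeapYear(year));
    }
}

Adding a new case then means adding one row of data, not one more copied test method.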