= Chris Gough =
[http://www.citconf.com/wiki/index.php?title=CITCONAsiaPacific2007Registrants Attendee, CITCON Asia Pacific 2007]

I'm a CI newbie, and these are the sessions I attended:

* [http://www.citconf.com/wiki/index.php?title=CI_Fundamentals CI Fundamentals] - The right place for me to start.
* [http://www.citconf.com/wiki/index.php?title=Simplifying_Mock_Object_Testing Simplifying Mock Object Testing] - I learned what mock objects were in the hallway outside this meeting, so much of the discussion went over my head. However, I did leave the room with an idea of what mock objects are, why to use them, and how. Monkey see, monkey do. The next step on that journey will probably be to sniff out the mockery in the Rails test framework (I'm a Rails newbie too).
* [http://www.citconf.com/wiki/index.php?title=What_is_the_right_mix_of_practices_and_tools_for_introducing_CI What is the right mix of practices and tools for introducing CI] - Highly relevant for me, and a lot of information for one session, especially since the nature of the topic guarantees differing opinions, and "every tool that somebody in the room likes or found useful" is clearly not the right mix to start with. I left with lots of web links to look up, which is the most I could ever hope for from a session like this.
* [http://www.citconf.com/wiki/index.php?title=Clover2 Clover2] - For me, a random look at a sample tool (a test coverage metric generator). Interesting; I can see how it would be useful.
* [http://www.citconf.com/wiki/index.php?title=Using_Dynamic_Languages_for_Writing_Tests Using Dynamic Languages for Writing Tests] - I had secretly hoped to see an example of a Python rig testing a J2EE app in a CI context; I saw a Groovy one instead... [http://citconf.com/wiki/index.php?title=Tomjadams Tom Adams'] demonstration of a domain-specific language in a testing context will change the way I create software.

I had a great CITCON experience, definitely better than an old-fashioned conference.

= Jason Sankey =
== Me ==

I am a founder of [http://zutubi.com/ Zutubi], makers of the Pulse continuous integration server.

== Environments ==

Pulse is primarily Java, so that is where I spend my time these days. Previously I have worked in various languages, most notably C (Linux device drivers and an accompanying SDK) and C++ (a virtual machine). I was lucky enough to be exposed to an environment with automated testing and builds on checkin from the start of my professional career.

== Topics ==

My interest in CITCON comes from both ends: as a vendor, and as a developer always looking to improve our own processes. Topics of interest to me include:

* What sucks about CI? That is, how does it cost you time or effort that it shouldn't, or how is it not saving you effort that it could?
* The role of SCMs in CI, and how next generation SCMs can influence and/or advance CI.
* How different are the skills required for a great QA engineer as opposed to a great developer?
* Testing and prototyping.

= How to Start Testing A Large Legacy Application =
Facilitated by Paul O'Keeffe.

== What Happened ==

This session morphed into a discussion at the other end of the spectrum, about achieving '''100% test coverage''': what that means exactly, and whether attempting such a thing is even wise.

== A Tighter Definition of Code Coverage Numbers ==

What does 100% test coverage actually mean? It depends on:

* Type of coverage - line, branch or path (the sketch after this list illustrates the difference).
* Tests being run - unit, integration, functional or some combination of these.
* Code being covered - all production code, perhaps with some acceptable exclusions.
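
As a rough illustration of why the type of coverage matters (this example is invented, not from the session), a single test call can reach 100% line coverage of the method below while exercising only half its branches:

<pre>
public class Pricer {
    // A single call such as priceFor(100, true) executes every line
    // of this method, giving 100% line coverage.  But the false arm
    // of the if is never taken, so branch coverage is only 50%, and
    // path coverage would further distinguish the two routes through.
    public int priceFor(int base, boolean member) {
        int price = base;
        if (member) {
            price = base - 10;
        }
        return price;
    }
}
</pre>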

== Shooting for 100% ==

We discussed one greenfields Java project which achieved 100% branch coverage of all production code except for a thin wrapping layer around external third party libraries, running only fairly tight unit tests. Integration and functional tests were not counted towards coverage. This was done in an attempt to enforce test driving of production code, since it would be nearly impossible to achieve this result without having done so. It succeeded in this respect, but at the cost of a reasonably large amount of fairly brittle test code to maintain, due to the tight coupling between the tests and implementation details within the production classes.

== Hurdles ==

Achieving this level of coverage was made more difficult when the code needed to call through to external library code which was not designed for testability - which describes the vast majority of third party libraries, and the JDK in particular! Language constructs which make achieving full test coverage more difficult include:

* Referencing concrete classes instead of interfaces.
* Direct use of constructors, rather than factories.
* Final classes.
* Static methods.

Generally, any construct that makes it harder to replace real dependencies with test versions makes testing tricky.
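
A small invented example (SmtpConnection and AuditLog are hypothetical stand-ins for third party code) showing how these constructs combine to block unit testing:

<pre>
// Hypothetical third party code, not designed for testability.
final class SmtpConnection {                          // final class
    SmtpConnection(String host) { /* connect... */ }  // constructor only
    void deliver(String report) { /* send... */ }
}

final class AuditLog {
    static void record(String event) { /* write... */ }  // static method
}

// Production code that is now hard to unit test: the concrete final
// class and the static call cannot be replaced with test versions,
// so every test run really connects and really writes to the log.
public class ReportSender {
    public void send(String report) {
        SmtpConnection connection = new SmtpConnection("mail.example.com");
        AuditLog.record("sending report");
        connection.deliver(report);
    }
}
</pre>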

== Jumping the Hurdles ==

The project discussed wrapped all third party APIs that used untestable constructs in a thin proxy layer, which translated them as follows (a sketch of such a wrapper appears after the list):

* Concrete/final classes -> interfaces.
* Constructors -> choice of factory classes or methods.
* Static methods -> instance methods.
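
As an invented sketch of what such a hand coded wrapper might look like (the interface names are made up, with java.io.File standing in for an arbitrary third party class; each type would live in its own file):

<pre>
// Concrete class -> interface: tests can supply any implementation.
public interface FileHandle {
    boolean exists();
    long length();
}

// Constructor -> factory method, and an instance method rather than
// anything static: tests can supply a fake FileSystem.
public interface FileSystem {
    FileHandle open(String path);
}

// Thin production implementation delegating to the real JDK class.
public class RealFileSystem implements FileSystem {
    public FileHandle open(String path) {
        final java.io.File file = new java.io.File(path);
        return new FileHandle() {
            public boolean exists() { return file.exists(); }
            public long length() { return file.length(); }
        };
    }
}
</pre>

Production code then depends only on the interfaces, and only the thin RealFileSystem layer touches the untestable constructs.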

Initially this layer was hand coded and was itself excluded from the code being measured for test coverage. However, inconsistencies and untested logic began to creep into this layer. To solve this, the wrappers were instead generated at runtime using dynamic proxies implementing hand coded interfaces for the desired APIs. This later evolved into the [http://code.google.com/p/proxymatic/wiki/AutoBoundary AutoBoundary] module of the [http://code.google.com/p/proxymatic/ Proxymatic] open source project.
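
The AutoBoundary code itself lives at the link above; purely as an invented illustration of the underlying JDK mechanism, a dynamic proxy can forward calls from a hand coded interface to the matching methods of the real class via reflection:

<pre>
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

// Invented sketch, not the actual AutoBoundary implementation:
// generate a wrapper for an interface at runtime that delegates
// each call to the same-named method on the wrapped object.
public class BoundaryFactory {
    @SuppressWarnings("unchecked")
    public static <T> T wrap(Class<T> iface, final Object target) {
        return (T) Proxy.newProxyInstance(
                iface.getClassLoader(),
                new Class<?>[] { iface },
                new InvocationHandler() {
                    public Object invoke(Object proxy, Method method,
                                         Object[] args) throws Throwable {
                        // Look up the matching method on the real class
                        // and delegate, so the wrapper can never drift
                        // out of step with the wrapped API.
                        Method real = target.getClass().getMethod(
                                method.getName(), method.getParameterTypes());
                        return real.invoke(target, args);
                    }
                });
    }
}
</pre>

With the hypothetical interfaces above, BoundaryFactory.wrap(FileHandle.class, new java.io.File("/tmp/report.txt")) would return a FileHandle whose exists() and length() calls land on the real File, with no hand written delegation code to maintain.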

With such a layer in place, it is possible to replace or mock all third party code for the purposes of testing. This makes it easy to reproduce all behaviours, including hard-to-test exception conditions, which in turn makes 100% coverage of the remaining production code possible.
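
For example (ReportLoader is invented, reusing the hypothetical FileSystem boundary from earlier), a failure that would be awkward to reproduce against the real file system becomes trivial:

<pre>
// Invented production class under test.
public class ReportLoader {
    private final FileSystem fileSystem;

    public ReportLoader(FileSystem fileSystem) {
        this.fileSystem = fileSystem;
    }

    public boolean tryLoad(String path) {
        try {
            return fileSystem.open(path).exists();
        } catch (RuntimeException e) {
            return false;  // report the failure rather than crashing
        }
    }
}

// Invented JUnit 3 style test: the boundary makes the failure case
// a couple of lines, where the real file system would need careful
// (and unreliable) environmental setup.
public class ReportLoaderTest extends junit.framework.TestCase {
    public void testReportsFailureWhenFileCannotBeOpened() {
        FileSystem failing = new FileSystem() {
            public FileHandle open(String path) {
                throw new RuntimeException("simulated disk failure");
            }
        };
        assertFalse(new ReportLoader(failing).tryLoad("/reports/q3.txt"));
    }
}
</pre>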

== How To Apply This To Legacy Application Testing ==

With all this in mind, the discussion returned to the question of testing legacy applications. Surprisingly, it turned out that the 100% coverage approach and tools could be applied in legacy situations, if you think of the existing code as third party code. We figured you could start by test driving all new code and wrapping all existing code in the proxy layer, then gradually move old code over to the new approach piece by piece. Untangling concrete/static/final dependencies would be aided by interposing the wrapper layer at the appropriate points (a sketch of this migration path follows).
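
As an invented sketch of that migration path (LegacyTaxUtil stands in for existing code with a static entry point; all names are made up):

<pre>
// Existing legacy code, treated as "third party": untouched for now.
final class LegacyTaxUtil {
    static double calculate(double amount) { return amount * 0.1; }
}

// Step 1: interpose an interface in front of the legacy entry point.
interface TaxCalculator {
    double taxFor(double amount);
}

// Step 2: a thin wrapper becomes the only code touching the legacy API.
class LegacyTaxCalculator implements TaxCalculator {
    public double taxFor(double amount) {
        return LegacyTaxUtil.calculate(amount);
    }
}

// Step 3: new code is test driven against the interface alone, so it
// can be fully covered with a fake TaxCalculator, while the legacy
// implementation is rewritten behind the interface piece by piece.
class Invoice {
    private final TaxCalculator tax;

    Invoice(TaxCalculator tax) { this.tax = tax; }

    double total(double net) { return net + tax.taxFor(net); }
}
</pre>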