Performance testing, automating
From CitconWiki
Latest revision as of 14:31, 24 August 2013
Using TPCH for evaluating a database engine
- Improving an analytics platform
- An unnamed database vendor: data, scale, and queries against that data
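The flavor of this kind of evaluation can be sketched in a few lines: load some data, run an aggregate query, and time it. The sketch below uses SQLite and a simplified stand-in for one TPC-H-style aggregate; it is not the actual benchmark schema or query set.

```python
# Sketch of a TPC-H-style timing run against SQLite; the table and query
# are simplified stand-ins for the real benchmark's schema and 22 queries.
import random
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE lineitem (l_quantity REAL, l_extendedprice REAL, l_discount REAL)"
)
rows = [
    (random.randint(1, 50), random.uniform(1, 1000), random.uniform(0, 0.1))
    for _ in range(100_000)
]
conn.executemany("INSERT INTO lineitem VALUES (?, ?, ?)", rows)

start = time.perf_counter()
(revenue,) = conn.execute(
    "SELECT SUM(l_extendedprice * (1 - l_discount)) FROM lineitem"
).fetchone()
elapsed = time.perf_counter() - start
print(f"revenue={revenue:.2f} elapsed={elapsed:.4f}s")
```

Scaling the row count up or down is the knob that makes this a database-engine evaluation rather than a microbenchmark.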
Write performance stories
- We have a speculation, and that speculation becomes a story
- Some teams are unable to estimate a performance story
- The PO should give performance acceptance criteria
- Should each story allow for x time to find performance issues?
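One way to make a performance acceptance criterion concrete is to state it as an executable assertion, e.g. "p95 latency stays under 200 ms". The threshold and the operation under test below are invented for illustration.

```python
# Hypothetical acceptance test: the 200 ms p95 threshold and the
# operation under test are invented for illustration.
import statistics
import time

def operation_under_test():
    time.sleep(0.001)  # stand-in for the real work being accepted

latencies = []
for _ in range(100):
    start = time.perf_counter()
    operation_under_test()
    latencies.append(time.perf_counter() - start)

p95 = statistics.quantiles(latencies, n=20)[-1]  # 95th percentile cut point
assert p95 < 0.2, f"p95 latency {p95:.4f}s exceeds the 200 ms criterion"
print(f"p95={p95 * 1000:.1f} ms")
```

Stated this way, the PO's criterion is something the team can run on every build rather than argue about after release.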
Make it work
- Make it work well
- Make it work fast
JMeter
- How does a process scale with the number of users and with throughput?
- Finding the limits of the system: is a 20-second response, for example, too slow, or is that a subjective metric?
- Definition of
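What a JMeter thread-group ramp measures can be sketched in plain Python: drive the same operation with an increasing number of concurrent users and watch how throughput responds. The simulated request and the user counts below are invented; a real run would hit an HTTP endpoint.

```python
# Sketch of a user-scaling measurement in the spirit of a JMeter
# thread-group ramp; the simulated request stands in for a real HTTP call.
import time
from concurrent.futures import ThreadPoolExecutor

def simulated_request(_):
    time.sleep(0.005)  # pretend network/service latency

for users in (1, 5, 10, 20):
    requests = users * 20
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=users) as pool:
        list(pool.map(simulated_request, range(requests)))
    elapsed = time.perf_counter() - start
    print(f"{users:2d} users: {requests / elapsed:7.1f} req/s")
```

The point where adding users stops increasing req/s (or starts pushing individual responses toward that 20-second mark) is the limit the session was asking about.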
Solution:
- Set baselines for metrics
- At each release, we want to find out whether we slowed down or sped up.
- How do we allow tradeoffs to interact, when thread-lock contention makes threads slow each other down?
- Set up a CI system that always runs performance tests; no commit shall decrease performance.
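A minimal sketch of the baseline-and-gate idea (the baseline file name, the 20% tolerance, and the timed workload are all invented choices): record a timing baseline once, then have CI fail any build whose measurement regresses past the tolerance.

```python
# Sketch of a CI performance gate: compare a fresh measurement against a
# stored baseline. File name, tolerance, and workload are invented choices.
import json
import time
from pathlib import Path

BASELINE_FILE = Path("perf_baseline.json")
TOLERANCE = 1.20  # fail if more than 20% slower than the baseline

def measure() -> float:
    start = time.perf_counter()
    sum(i * i for i in range(200_000))  # stand-in for the real workload
    return time.perf_counter() - start

current = min(measure() for _ in range(5))  # best-of-5 reduces noise

if BASELINE_FILE.exists():
    baseline = json.loads(BASELINE_FILE.read_text())["seconds"]
    if current > baseline * TOLERANCE:
        raise SystemExit(
            f"regression: {current:.4f}s vs baseline {baseline:.4f}s"
        )
    print(f"ok: {current:.4f}s is within {TOLERANCE:.0%} of {baseline:.4f}s")
else:
    BASELINE_FILE.write_text(json.dumps({"seconds": current}))
    print(f"baseline recorded: {current:.4f}s")
```

The tolerance is what absorbs run-to-run noise; "no commit shall decrease performance" in practice means "no commit shall decrease it beyond the agreed tolerance".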
When do we evaluate performance?
Performance evaluation requires a different skillset than other development.
- Does this require a different set of eyes? A "Performance QA" agent who can watch file I/O, network I/O, the GUI, thread locking, operating systems, L1 cache hit/miss rates, etc.?
- How does one learn these skills?
  - College? These skills are not always taught well in college, or with a focus on performance.
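One concrete example of the low-level effects such a role watches for: the two loops below do identical work on the same matrix, but the traversal order changes memory-access locality, so the row-major version typically runs faster even though both return the same sum.

```python
# Demonstration of traversal order affecting locality: both loops sum the
# same 1000x1000 matrix, but row-major order walks memory sequentially.
import time

N = 1000
matrix = [[1] * N for _ in range(N)]

def row_major():
    total = 0
    for i in range(N):
        for j in range(N):
            total += matrix[i][j]
    return total

def col_major():
    total = 0
    for j in range(N):
        for i in range(N):
            total += matrix[i][j]
    return total

for fn in (row_major, col_major):
    start = time.perf_counter()
    result = fn()
    print(f"{fn.__name__}: sum={result} in {time.perf_counter() - start:.3f}s")
```

Nothing in the application code "looks wrong" in the slow version, which is exactly why this kind of issue needs a different set of eyes (or tooling) than functional QA.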