I think of software verification, during development and prior to production deployment, in terms of the following testing activities:
- Development testing (10-15% of your testing time) – unit testing, white-box testing
- Software testing (65-80% of your testing time, primarily black box) – functional, load, stress, performance (across various operating systems, platforms, etc.), security, and regression testing
- Pre-production testing (10-20% of your testing time, black box only) – user acceptance and business acceptance testing (closer to software validation)
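To make stage #1 concrete, here is a minimal sketch of a development-time unit test. The function under test, `apply_discount`, is hypothetical – a stand-in for any small unit you would cover with white-box tests before handing the build to the software testing stage.

```python
import unittest

# Hypothetical function under test -- a stand-in for any small unit
# exercised during development testing (stage #1).
def apply_discount(price, percent):
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(19.99, 0), 19.99)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    # exit=False so the test run does not terminate the interpreter.
    unittest.main(exit=False, argv=["discount_test"])
```

Tests like these run in seconds on a developer's machine, which is exactly why this bucket deserves only 10-15% of the total testing time: the feedback loop is cheap and fast.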
Here are a few of my observations with regard to the three testing buckets above:
- Stages #2 and #3 would benefit immensely from production-like environments (or, in the case of ISVs, real-world customer environments), if not exact replicas – the test results become more credible and you can place more confidence in them. "Production-like" means the entire test environment mirrors the production setup: OS platforms, software stack and versions, the other software products in the mix, network topology, and so on.
- The earlier a bug is detected across the stages above, the cheaper it is to fix, in both dollars and resource cycles.
- Virtualized environments help with #2 and #3 – so long as virtual test environments are easy to set up and tear down. VMLogix LabManager has advanced automation capabilities that let users automate setting up a test-bed environment (the OS platform and software stack on each test machine, synchronized multi-machine boot-up, etc.). A production-like environment matters most for #3.
What are your specific observations?