Most software testers focus on executing functional, stress, load, performance, unit, and similar types of tests. These predominantly exercise the software being built, which is certified/tested against specific hardware and base software stacks and platforms (configurations). There is a set of pre-defined (pristine) test environments (e.g., an OS, database, browser, network connectivity) into which the developed software is installed, and the tests are executed and evaluated there.
However, as software testers, do you spend time tweaking the test environment setup (e.g., exploratory testing via client browser settings, multiple versions of a database co-existing on the server, network settings, previous versions of the software already installed and running) and checking whether ‘stuff works’ with the developed software? How important do you think it is to test software under various (uncharted) system configurations and setups?
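To make the idea concrete, here is a minimal sketch of enumerating a configuration matrix and running a check against each combination. All axis names and the `run_smoke_test` stub are hypothetical illustrations, not anything from the post; a real harness would provision each environment before running the suite.

```python
from itertools import product

# Hypothetical configuration axes -- purely illustrative values.
browsers = ["chrome", "firefox"]
databases = ["postgres-13", "postgres-15"]
prior_install = [False, True]  # a previous version already on the machine?

def configurations():
    """Yield every combination of the environment axes (full matrix)."""
    for browser, db, upgraded in product(browsers, databases, prior_install):
        yield {"browser": browser, "database": db, "prior_install": upgraded}

def run_smoke_test(config):
    # Placeholder: a real harness would set up the environment described
    # by `config` (install the browser/database, restore the old version,
    # etc.) and then run the actual test suite against it.
    return True

results = {tuple(c.items()): run_smoke_test(c) for c in configurations()}
print(f"{len(results)} configurations exercised")  # 2 * 2 * 2 = 8
```

Even this toy matrix shows why the full product grows quickly; in practice teams often prune it (e.g., pairwise/combinatorial selection) rather than run every combination.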
[Update] — I have posted this question on the Software Testing Club as well (including some clarifications on the question). Other testing experts are chipping in and talking about it here.
[Update] — You might be interested in these two videos that I’ve posted recently — Demo: Automating the Creation of a Multi-Machine Test Environment and The Benefit of Network (IP) Zoning in Executing Test Environments.