Mock test objects
Jun. 17th, 2017 07:40 am

At work we have a number of server-side unit tests. I find some of these annoying. Server-side we offer various services which to some extent use each other internally. For unit tests there is not a real server running, so when a test tries executing some server-side code there must be mock objects that fake appropriate responses from the other parts of the server that they attempt to use.
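As a minimal sketch of the pattern being described — with hypothetical service names, since the post does not name the real ones — Python's standard `unittest.mock` lets a test hand a component a fake dependency whose responses are scripted in advance:

```python
from unittest.mock import Mock

# Hypothetical server-side component that depends on another internal service.
class BillingService:
    def __init__(self, user_service):
        self.user_service = user_service  # normally another live service

    def invoice_total(self, user_id):
        user = self.user_service.get_user(user_id)
        return 0 if user["exempt"] else 100

# In a unit test there is no real server, so the dependency is a mock
# object faking an appropriate response.
fake_users = Mock()
fake_users.get_user.return_value = {"id": 42, "exempt": True}

billing = BillingService(fake_users)
assert billing.invoice_total(42) == 0
fake_users.get_user.assert_called_once_with(42)
```

The scripted `return_value` is exactly the "fake appropriate response" in question: the test never touches a running server.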
On the one hand, such unit tests typically run quickly and easily enough that they can be placed quite early in our code quality checks: a problematically failing test can be discovered long before the culprit is merged into a running server and the integration test suite run against it. (The integration tests use a real running server.)
On the other hand, these faked responses are an inferior substitute for the real thing, so the unit tests may not be testing a server that properly corresponds to reality. Moreover, as somebody who works on the server internals, I find these unit tests a maintenance headache: if I change something about how the server works, then I must fix the affected unit tests to fake new values in a new way. That is, I effectively have to adjust the fake server's sequences of behavior to match.
Times may be changing in relevant ways. Perhaps the computing cost of running integration tests was much larger.
At my last job we had a less manual approach to providing data for tests.
I should clarify that I am fine with unit tests in general: I have written various new ones for our codebase, but mine do not exercise internal server dependencies enough to require faking many return values from mock objects.
Following on from my previous comments about contemporary code quality, surprise test failures suggest the code was not thought through well.