Mock test objects
Jun. 17th, 2017 07:40 am
At work we have a number of server-side unit tests. I find some of these annoying. Server-side we offer various services which to some extent use each other internally. For unit tests there is no real server running, so when a test executes some server-side code there must be mock objects that fake appropriate responses from the other parts of the server that the code under test attempts to use.
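For concreteness, here is a minimal sketch of that pattern using Python's unittest.mock; the OrderService and InventoryService names are hypothetical stand-ins, not anything from our codebase.

```python
from unittest.mock import MagicMock
import unittest


class OrderService:
    """Hypothetical server-side service that depends on another service."""

    def __init__(self, inventory_service):
        self.inventory_service = inventory_service

    def can_fulfil(self, item_id, quantity):
        # Internally calls another part of the server.
        available = self.inventory_service.stock_level(item_id)
        return available >= quantity


class OrderServiceTest(unittest.TestCase):
    def test_can_fulfil_when_stock_is_sufficient(self):
        # No real server is running, so the dependency is a mock object
        # that fakes an appropriate response.
        fake_inventory = MagicMock()
        fake_inventory.stock_level.return_value = 10

        service = OrderService(fake_inventory)

        self.assertTrue(service.can_fulfil("widget", quantity=3))
        fake_inventory.stock_level.assert_called_once_with("widget")


if __name__ == "__main__":
    unittest.main()
```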
On the one hand, such unit tests typically run quickly and easily enough that they can be placed quite early in our code quality checks: a problematically failing test can be discovered long before the culprit is merged into a running server and the integration test suite run against it. (The integration tests use a real running server.)
On the other hand, these faked responses are an inferior substitute for the real thing, meaning that the unit tests are perhaps not testing a server that properly corresponds to reality. Worse, as somebody who works on the server internals, I find these unit tests a maintenance headache: if I change something about how the server works then I must fix the affected unit tests to fake new values in a new way. That is, I effectively have to adjust the fake server's sequences of behavior to match.
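To illustrate that headache, again with hypothetical names: suppose the real stock_level() method is changed to return a richer report object instead of a bare integer. The behavior under test has not really changed, yet every test that stubbed the old return shape has to be rewritten to stub the new one.

```python
from dataclasses import dataclass
from unittest.mock import MagicMock


@dataclass
class StockReport:
    """Hypothetical richer return type introduced by a server change."""
    available: int
    reserved: int


# Before the server change, tests stubbed a plain number:
old_style_mock = MagicMock()
old_style_mock.stock_level.return_value = 10

# After the change, the same tests must fake the new return shape,
# even though the code paths they exercise are otherwise unchanged:
new_style_mock = MagicMock()
new_style_mock.stock_level.return_value = StockReport(available=10, reserved=2)
```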
Times may be changing in relevant ways. Perhaps our unit tests are as they are because it was previously more expensive to deploy a live server and run integration tests against it than it now is. Perhaps computing power will soon be so cheap that it will be easy to run integration tests for each separate body of work from each software developer we have contributing. Also, we are starting a move to microservices which may allow for a testing approach that is more lightweight.
At my last job we had an interesting approach to setting up mock test data for server state: we generated it from real server activity. We had code that used a live server to set up for the tests. Then, we dumped its database out to a form ready for populating both Oracle and PostgreSQL. Subsequently, we populated from the database dumps before running the test suite so that tests could start up from or assume that prior server state. I wonder if one could similarly instrument a real server to capture return values from service methods to automatically set up mock objects that return real data.
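A rough sketch of what that instrumentation might look like, again in Python and with everything here hypothetical: a proxy delegates calls to the real service while recording the return values, and the recording is later replayed through a mock.

```python
from unittest.mock import MagicMock


class RecordingProxy:
    """Delegates method calls to a real service while recording return values."""

    def __init__(self, real_service):
        self._real_service = real_service
        self.recorded = {}  # method name -> list of captured return values

    def __getattr__(self, name):
        real_method = getattr(self._real_service, name)

        def wrapper(*args, **kwargs):
            result = real_method(*args, **kwargs)
            self.recorded.setdefault(name, []).append(result)
            return result

        return wrapper


def mock_from_recording(recorded):
    """Builds a mock whose methods replay the captured return values in order."""
    mock = MagicMock()
    for method_name, results in recorded.items():
        getattr(mock, method_name).side_effect = list(results)
    return mock
```

Against a live server one would wrap the real service in the proxy, exercise the code paths of interest, and then feed proxy.recorded to mock_from_recording() when wiring up the unit tests.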
I should clarify that I am fine with unit tests in general: I have written various new ones into our codebase but mine do not exercise internal server dependencies enough to require many return values from mock objects to be faked.
Following on from my previous comments about contemporary code quality, I personally think that if one's code causes associated tests to fail unexpectedly then maybe one did not write it cleanly enough in the first place. Worse, if it causes only a couple of tests to fail then the test suite's coverage is probably too poor as well. I have thus not been comfortable lately with a couple of my non-trivial bodies of work where, while in progress, deficiencies were revealed by only a couple of failing tests. In particular, the code I mentioned some time ago for effecting node state transitions in directed model object graphs is challenging even my strong ability to think through algorithms' behavior.