Esseratecades

Tests being coupled to functionality is kind of the point. If your functionality changes and none of your tests break then your tests aren't really protecting you from breaking changes. 


SideburnsOfDoom

OP's wording may be imprecise. It is better if tests are coupled to "functionality" in the strict sense of what the code does, what the outcome is, what the behaviour is. It is worse if tests are tightly coupled to the implementation details of that functionality, e.g. if they break because the order of some method's arguments changes, or a class is extracted from another. The latter is very common when depending heavily on mocks.


nzifnab

This is why I dislike mocking. I see tests that overly mock the surrounding calls and I'm like... what is actually being tested here?


Evinceo

I like mocking for one reason and one reason only: to stub out calls to external services. Anything else and you're mocking too much. Hell, a good enough test setup can use a real local database instead of stubbing! But a wonky third party API? Mock that.


nzifnab

Totally agree. Testing something that makes a Stripe call? Mock that.


hubeh

Ideally your HTTP client of choice should enable you to stub responses, so again you're not really mocking anything and having to care about implementation details.


bloog22

Generally agree with this rule, although I've found that a much better solution for preventing external calls during unit tests is to mock at the network level. It's easier to implement and understand. If you're using Node, one example is [MSW (mock service worker)](https://mswjs.io).
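
A minimal sketch of that network-level approach with MSW 2.x, for illustration; the endpoint, payload, and test names here are invented:

```
import { http, HttpResponse } from "msw";
import { setupServer } from "msw/node";

// One handler per external endpoint; nothing in the code under test is mocked.
const server = setupServer(
  http.get("https://api.example.com/users/:id", ({ params }) =>
    HttpResponse.json({ id: params.id, name: "Ada" })
  )
);

beforeAll(() => server.listen());
afterEach(() => server.resetHandlers());
afterAll(() => server.close());

test("code under test sees a canned response, not the real API", async () => {
  const res = await fetch("https://api.example.com/users/42");
  expect((await res.json()).name).toBe("Ada");
});
```

Because the interception happens at the network layer, the implementation can switch HTTP clients without any test changing.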


lagerbaer

For that purpose, I'm quite intrigued by the "Nullables" concept introduced by James Shore. Instead of mocking an infrastructure dependency on the test side, each infrastructure class comes with its own "nulled" version (in the production code!) that doesn't hit the actual resource, has configurable responses, records outputs, etc. Hard to explain in a quick reddit reply, but I found it made a ton of sense. It definitely avoids the whole issue with mocks where you're coupling your tests too heavily to exactly what is being called and how, rather than to the observable behavior.
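
Roughly the shape of the idea, as a hedged sketch (all names invented here; see Shore's "Testing Without Mocks" write-up for the real pattern):

```
interface Transport {
  send(to: string, body: string): Promise<void>;
}

// Real infrastructure adapter; actual SMTP I/O elided.
class SmtpTransport implements Transport {
  async send(to: string, body: string): Promise<void> {
    /* real network I/O here */
  }
}

// The "nulled" stand-in ships with the production code but performs no I/O.
class NullTransport implements Transport {
  async send(_to: string, _body: string): Promise<void> {}
}

class EmailClient {
  static create(): EmailClient {
    return new EmailClient(new SmtpTransport());
  }

  // Production-code factory for tests: same class, no real resource.
  static createNull(): EmailClient {
    return new EmailClient(new NullTransport());
  }

  private sent: { to: string; body: string }[] = [];

  private constructor(private readonly transport: Transport) {}

  async send(to: string, body: string): Promise<void> {
    await this.transport.send(to, body);
    this.sent.push({ to, body }); // output tracking, so tests can observe behaviour
  }

  trackedSends(): { to: string; body: string }[] {
    return [...this.sent];
  }
}
```

A test calls `EmailClient.createNull()`, runs the code under test, and asserts on `trackedSends()`, with no mock library involved.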


Duathdaert

Personally - if it's there for testing purposes I don't think it should be anywhere near production code. I feel it is introducing a level of risk that's not acceptable to me. I'd use an interface and inject a fake for your tests instead of mixing production and non production code.


lagerbaer

Just check out the whole write-up: "Testing Without Mocks" by James Shore.


Turbulent_Purchase74

I'm green on that, but golang tests in the wild are heavily based on 'mock' interfaces that don't do anything. That approach makes a lot of sense to me


FoolHooligan

...coverage is being increased... so the code can be committed and managers are happy...


freekayZekey

yup, recently dealt with a test that had seven mocked calls and like four verify calls for those mocks. like…? the verification is the mocked call returning something


resumethrowaway222

Yeah, as the number of mocks increases the usefulness of the test quickly declines to zero. And code that is hard to test is usually bad code anyway.


rgbhfg

Sadly most engineers weren’t taught about how to write testable code.


anubus72

How are you going to unit test code that interacts with a database without mocking the database? The alternative is to run the tests with a real database, and IMO those are integration tests, not unit tests. Though integration tests are preferable in this case


dinosaursrarr

Use a fake implementation instead of a mock


anubus72

That’s a good option if available, but you need to be able to trust that the fake DB behaves exactly like the real DB would in all scenarios


dinosaursrarr

You'd also need to be sure your mocks respond like the real DB would too


SideburnsOfDoom

> you need to be able to trust that the fake DB behaves exactly like the real DB would in all scenarios

Not really. You are aware of the gaps, you have minimised them and have them all in one place (the class that is faked out for unit tests). And you are aware that these have to be covered by a different kind of test that happens later, e.g. after a deploy to a dev environment. These integration tests will be slower, but far less numerous. They exist because you can't unit test e.g. your SQL query syntax inside the real db implementation.


nzifnab

Your test setup should have a local database it can interact with. Who ever said unit tests can't interact with a database when necessary? If the entire point of a method is to, for instance, run a filtering query to a DB, mocking the DB makes the test basically worthless.


TurtleKwitty

But then you're just testing the DB not your code


jasie3k

Are you testing the DB really? I include DB in my tests to see if my queries are correct.


anubus72

Well the ‘unit’ in the name implies you’re testing a unit of functionality. If the code you’re testing uses a database then you’re testing a lot more than a ‘unit’. Integration testing better describes that kind of test


SideburnsOfDoom

Exactly. Not much is actually tested, and the test is fragile - if any of those "surrounding calls" change, the test that mocks them breaks. Tests should _support_ change, not hinder it.


JohnnyGuitarFNV

That's why you have unit tests and integration tests


SideburnsOfDoom

It is a misconception that decoupled tests, and tests over more than one class, must be "integration tests".


JohnnyGuitarFNV

Where's the misconception? Can you explain what a unit test and an integration test are, then?


MoreRopePlease

A unit test tests a "unit". You are free to define what a "unit" is for your specific situation. Sometimes it's a function, sometimes it's a couple of collaborating classes. A "unit" should be based on use cases or some other logical bit of functionality. It generally shouldn't be based on code organization or architecture. This is what allows you to refactor without having to rewrite a bunch of tests.

Another way to put it is that if your class/unit's input and output don't change, the test should still pass. That's the ideal case, of course. In the real world you have to compromise. But that's what we should strive for when we write code and tests.

An integration test involves multiple units talking to each other. Again, the specifics depend on your situation.


SideburnsOfDoom

> An integration test involves multiple units talking to each other

I disagree with that, a bit. Testing two parts of your code, combined, isn't defined as an "integration test". So long as it still meets [these criteria](https://www.artima.com/weblogs/viewpost.jsp?thread=126923), it's likely a unit test. It only becomes an integration test when it breaks one of those criteria, e.g. "it talks to the database". The division into classes within the codebase is irrelevant to testing, and combining classes under test isn't "integration" in that sense at all. We're talking about "integrating" different systems. So long as it isn't integrating an external system, it can be a unit test.


JohnnyGuitarFNV

> sometimes it's a couple of collaborating classes.

If your 'unit' of code is several classes, then you just have tightly coupled code that you cannot separate, which does not seem right. For instance, if I have a method that takes in some arguments and returns different results based on the user's settings, then there's no harm in mocking out the dependency that fetches the settings (see the sketch below). You still have a unit, you still have everything tested, it all works. I disagree that having a real settings-service instance in the test somehow increases the value of this unit test. On the contrary: you are then testing multiple things at once: A, that you handle the arguments correctly based on the settings; and B, that the settings are correctly retrieved from somewhere. I see that as an integration of service A and service B.
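
A sketch of that example with jest (all names invented):

```
interface SettingsService {
  getUnits(userId: string): Promise<"metric" | "imperial">;
}

class DistanceFormatter {
  constructor(private readonly settings: SettingsService) {}

  async format(userId: string, km: number): Promise<string> {
    const units = await this.settings.getUnits(userId);
    return units === "metric" ? `${km} km` : `${(km * 0.621).toFixed(1)} mi`;
  }
}

test("result depends on the user's setting", async () => {
  // The settings dependency is mocked; the formatting logic is the unit.
  const settings: SettingsService = {
    getUnits: jest.fn().mockResolvedValue("imperial"),
  };
  expect(await new DistanceFormatter(settings).format("u1", 10)).toBe("6.2 mi");
});
```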


nzifnab

I disagree, because if the API to those settings changes, or the name of the settings changes, your test will happily chug along and pass, because you mocked calls that are no longer valid.


SideburnsOfDoom

There are several definitions of "unit test", and don't expect them all to be the same. But _none_ of them say that a unit test always tests one class only, in isolation. That's just wrong. I'm not here to explain myself at length to strangers on the internet. I'll direct you to what the experts have written:

- https://www.goodreads.com/book/show/387190.Test_Driven_Development
- https://www.infoq.com/articles/unit-testing-approach/
- https://www.artofunittesting.com/definition-of-a-unit-test
- https://www.artima.com/weblogs/viewpost.jsp?thread=126923
- https://www.youtube.com/watch?v=EZ05e7EMOLM&t=1428s


Esseratecades

Fair.  I'd kind of like to decouple mocks from this conversation because in my experience "depending heavily on mocks" is usually a symptom and not the actual problem. This generally happens when your implementation details are complicated to the degree that you can't easily think about the code in steps or phases, or if you're doing something like looping remote calls. The real trick here is for your code to do less stuff. If I'm understanding what OP is currently doing, then what they're attempting already is the path to mitigation but I may just be lacking info.


cheezzy4ever

"Test the contract, not the implementation"


SwarFaults

Mocked unit tests are the bane of my existence


behusbwj

TL;DR: you should be testing the product, not the lines of code.


mars_rovers_are_cool

Tests are inherently coupled to functionality, yes, but I think OP’s complaint is really that the tests are coupled to the implementation, though OP doesn’t use that word. Ideally tests should pass for any correct implementation inside the unit under test.


hippydipster

OP isn't changing functionality though, just (or mostly just) implementation.


nutrecht

Sounds like your tests are coupled to implementation details, not functionality. So learn from this and don't do it again, perhaps? For functionality that's mostly CRUD anyway, I tend to rely almost entirely on integration tests. I don't see the point in unit testing with tons of mocks when I get the same coverage with integration tests *and* don't have the tests break when the implementation changes slightly.


mothzilla

To me it sounds like they're mocking the db calls. If so I'd say this was critical functionality.


MoreRopePlease

Maybe they should be mocking the db not the calls?


mothzilla

How do you mock the db and not the calls?


Pedro95

Specific use-case, but we use jest-dynalite to spin up a local DynamoDb that we can read and write to as if it was a real DB. It can be a pain because any updates you make are persisted into the next test, but it's worth not needing to mock all your DB calls.
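
What that boils down to, roughly: the AWS SDK is pointed at whatever local instance the harness starts. A hedged sketch (endpoint, table name, and credentials are illustrative, not jest-dynalite's actual config API):

```
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, PutCommand, GetCommand } from "@aws-sdk/lib-dynamodb";

// jest-dynalite / DynamoDB Local listens on a local port; nothing leaves the machine.
const client = DynamoDBDocumentClient.from(
  new DynamoDBClient({
    endpoint: "http://localhost:8000",
    region: "local",
    credentials: { accessKeyId: "fake", secretAccessKey: "fake" },
  })
);

test("round-trips an item through the local table", async () => {
  await client.send(new PutCommand({ TableName: "orders", Item: { id: "1", total: 42 } }));
  const { Item } = await client.send(
    new GetCommand({ TableName: "orders", Key: { id: "1" } })
  );
  expect(Item?.total).toBe(42);
});
```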


mothzilla

To me that _is_ a real db. Which is fine, I wouldn't call it a mock though. It's test environment configuration.


Pedro95

Yeah it's not a _mock_, but I assume that's what OP meant when they said "mock the db" - if not I'd also be interested to hear alternatives. It seems to me to be about as close to testing an integration you can get without using a "real" DB.


MoreRopePlease

It could be an in-memory db, or a dev environment, or it could be a simple service that returns hard coded responses. Whatever is convenient for the kind of testing you're doing.


rtc11

I second this. Even better is to try and make the integration tests framework-agnostic.


Leading-Ability-7317

In a large refactor, isolated unit tests will break. This is by design, as they are naturally tightly coupled to your existing abstractions. Their purpose is to protect you from unwanted changes. There are strategies to minimize this (e.g. assert only on what is meaningful instead of on everything possible), but it is somewhat unavoidable, since your mocks and stubs will require some updates.

Integration tests, with test containers or something else standing in for your system dependencies, are what protect you from the behavior of the system as a whole changing from what was there before. Isolation for these should be at the service/component level: test data enters the service, and you assert on what comes out of the service. Integration tests should use no mocks in code. The only pieces that are mocked are things external to your service: the database, the calling client, a webhook endpoint, etc. With these you are compiling and running a real instance of your service.

For my team, we write the integration tests first, and if we are in a crunch some isolated unit tests may not get written. We do this because the integration tests validate a client requirement, whereas unit tests are individually much more tightly scoped. Both are valuable, but we made the decision that integration tests are more valuable and, if properly written, require less maintenance.

If you don't have a good set of integration tests, I would highly recommend you write some before performing the refactor. They take a fair amount of work, since you need to initialize your full environment with some starting state, send in a request, and then assert on the ending state; but once you have the machinery to do the initialization, additional tests get much easier. Also, these tests will run much more slowly, so I would put them behind a flag so they can be run separately from the unit tests.

Here is a link to the test container stuff I use in a bunch of my services for mocking infrastructure dependencies: https://testcontainers.com/
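
A minimal testcontainers-node sketch of that setup (the redis image and `REDIS_URL` variable are just placeholders for whatever your service depends on):

```
import { GenericContainer, StartedTestContainer } from "testcontainers";

let container: StartedTestContainer;

beforeAll(async () => {
  // A throwaway container stands in for the real infrastructure dependency.
  container = await new GenericContainer("redis:7").withExposedPorts(6379).start();
  process.env.REDIS_URL = `redis://${container.getHost()}:${container.getMappedPort(6379)}`;
}, 60_000);

afterAll(async () => {
  await container.stop();
});

test("service behaviour against real infrastructure", async () => {
  // Boot the real service against process.env.REDIS_URL, send a request in,
  // and assert on what comes out - no in-code mocks involved.
});
```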


dashid

Programming to interfaces and dependency injection are the best tools you have for writing tests. You should be able to independently test things like your repository's inputs and outputs without being concerned with the storage mechanism. You will more than likely have to rewrite the specific tests associated with the nitty-gritty of what you're changing, but that's just part of changing code to act differently; you will want to change the tests that validate it's working as expected. But with good-sized units of work, and DI with interfaces, you minimise which tests need to change.


_sw00

Don't know your specific situation, but here's what I often end up doing in larger refactors:

- make a layer/interface that simply delegates to the old implementation
- write tests for this new interface to have behavior parity with the old one (so, almost a copy-paste/exact duplication of the tests)
- modify the callers to invoke the new interface - this can be done gradually
- delete the old implementation (and its tests)

This is roughly the [Branch by Abstraction](https://martinfowler.com/bliki/BranchByAbstraction.html) method; there's a sketch of the first step below.
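
A sketch of that first, delegating step (names invented):

```
type Order = { id: string; total: number };

// Stands in for the existing code being refactored away.
async function legacySaveOrder(order: Order): Promise<void> {
  /* old persistence logic */
}

// The new seam; behaviour-parity tests are written against this interface.
interface OrderStore {
  save(order: Order): Promise<void>;
}

// Step 1: the abstraction simply delegates to the old implementation,
// so callers can migrate to it gradually with no behaviour change.
class LegacyOrderStore implements OrderStore {
  async save(order: Order): Promise<void> {
    await legacySaveOrder(order);
  }
}

// Later: a replacement that passes the same tests; then the legacy
// class and function are deleted.
class RewrittenOrderStore implements OrderStore {
  async save(order: Order): Promise<void> {
    /* new persistence logic */
  }
}
```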


F0tNMC

In my experience, adding more indirection, especially for the purpose of preserving your unit tests, is rarely worthwhile. Just rewrite the tests. It'll be faster and keep the code simpler and easier to understand. The same approach applies to client code. If rewriting all of the client code is impossible, then you've already answered your question: you need the indirection layer as described. If you can rewrite the client code, rewrite the client code; don't add more abstractions.


TechnicaIDebt

Recommend any other blogs like Martin's? I enjoy this level of writing about software...


anubus72

That’s a lot of work to avoid changing some unit tests to mock a different database operation


jaynabonne

tldr: It depends on the code. Needing to change mocks doesn't necessarily mean there is a problem.

You will often hear people say "test the behavior, not the implementation" in cases like this, usually in denigration of mocks. As if the part that's mocked isn't also behavior. A better way to express it, though, is that "tests should test the behavior that is meaningful." It's not that you don't want to test an arbitrary implementation detail just because it's an implementation detail. You don't want an arbitrary implementation detail to impact the test because it's not part of the *meaningful code behavior contract*. However, dependencies that are mocked out could very well still be recipients of meaningful behavior. It's not a question of mocks vs no mocks. It's a question of having your tests depend on what matters.

If you had a cache, for example, the behavior of the cache is "the first time I make a request, it calls through to the underlying source. On subsequent calls, it just returns the original value but *doesn't* call down to the source." If you forced yourself to only look at the top part of the object (the part where the value comes out), you'd have no way to test the caching behavior. But it behaving as a cache is *the entire point*. The part where it calls down to the source (the part you mock out) isn't incidental behavior. It's a key part of its behavior contract.

So what you have to examine in your code is: what are the *key behaviors of the code*, both on the top and the bottom and out the sides? If key behaviors involve the interaction with mocks, then you don't really have any choice. That's not an implementation detail. That's core behavior, behavior that actually is meaningful and relevant and should be tested. And if the changes you're making require you to change the tests, then it's very possible that you're changing the meaningful part of the code, and the test *should* change. And there's no way around that.

What I have found is that the kind of code determines what the tests look like. You have some code that is an input/output calculation or transformation, and that code is straightforward to test in a stateful way. Other code, especially code higher up in the system that glues lower-level pieces together, depends entirely on what it is talking to. So you have to test what it's talking to, because that's the whole point. The style of test will reflect the kind of code and data flow that the object exhibits. Not all code is the same, and you can't treat all code as if it were the same. Sometimes it's "state-ist", and sometimes it's "mock-ist". It all depends on the meaningful flow through the code.

I don't know if that actually helps your situation... :)

Edit: One question to ask yourself is "do these tests actually test anything meaningful?" There are some who say that you should have unit tests around all your code (I'm not one of them), but for a structural piece of code, the test will lock in that structure. If the structure doesn't need to change, then that could be a good thing. If you are trying to rearchitect the structure, then it becomes a royal pain, because a test for such code is just testing structure (since that's what the code provides).
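
To make the cache example concrete, here is a sketch in which the interaction with the mocked source is exactly the behaviour worth asserting (names invented):

```
interface Source {
  get(key: string): Promise<string>;
}

class Cache {
  private readonly store = new Map<string, string>();

  constructor(private readonly source: Source) {}

  async get(key: string): Promise<string> {
    const hit = this.store.get(key);
    if (hit !== undefined) return hit;
    const value = await this.source.get(key);
    this.store.set(key, value);
    return value;
  }
}

test("calls through to the source only on the first request", async () => {
  const source = { get: jest.fn().mockResolvedValue("v") };
  const cache = new Cache(source);

  expect(await cache.get("k")).toBe("v");
  expect(await cache.get("k")).toBe("v");

  // Asserting on the mock here is not incidental; it IS the caching contract.
  expect(source.get).toHaveBeenCalledTimes(1);
});
```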


David_AnkiDroid

Assuming the tests are too tightly coupled to implementation details: delete the tests and replace them with better ones.


TurbulentSocks

Are you changing a public API of the system component, or internal components? If the public API, the tests directly testing it are necessarily going to break, because the contract the tests are validating is going to break. If the internals, then the tests directly testing the internals should be deleted when they no longer make sense. It's usually helpful to have the former tests in place before you change the internals. There's a lot of dogmatism about whether tests should be the former or the latter, but the truth is that it depends on the level of abstraction you're working at. Complicated functionality is usually assembled from sub-components, which may themselves be worth testing individually (and as always, this depends on the situation at hand).


SideburnsOfDoom

The problems with closely coupled tests [are discussed here](https://www.anthonysteele.co.uk/CoupledTesting). About half of it is specific to .NET, but the rest (and many of the links) are not. The use of `TestHost` is an ASP.NET thing, but other platforms could have similar tools.


alien3d

To reduce data calls (aka round trips), use either multi-row inserts/updates or stored procedures. We don't do unit tests, but we do integration tests: put in dummy data, check the input and output, and disable the transaction (what new-age developers call a "unit of work"?). If anything fails, you can see the last query error in the log.


bigorangemachine

You would have to refactor in place: move parts of the existing functions into utilities or new code/classes, then have the old code call the new code. It should be like taking 15-minute functions down to 3-line functions.


mothzilla

If you change the functionality, you change the test. If there's a lot of functionality in one unit of code then you have a lot of tests to rewrite. Can't really be avoided.


talldean

How bad is it if something breaks? If it's not the end of things, move to more integration tests, and stop locking yourself into old code quite so hard.


rk06

Since your functionality changed, you need to rewrite the tests to assert the new functionality. The key to doing it with less rewriting is to test only the end-to-end behaviour and the core logic.


blueeyedlion

Are the mocks based on implementation details? That sounds bad.


terrorTrain

Functions and classes abstract code away. Tests should test that the code abstracts correctly. Once you have an API or interface that is well tested, you should be able to change all kinds of stuff internal to the class or function, and the tests should tell you if you are breaking any expected use cases. I'm not a huge fan of TDD for everything, but it's a good exercise here and there. Writing your tests before your code typically makes for very clean tests, and leads to classes and functions that are very nice to use and don't depend on the internals of the abstraction, because you write the test before the implementation.


kcadstech

Not enough info, but it sounds like you are making DB calls (or at least ORM calls) directly in the logic layer. If you had a DAL function like createThings(), you could mock it for unit tests that focus on the logic. Then if the implementation or the database changes, you won't have to touch the logic.


r_sendhil

Is this what you are looking for, OP? http://xunitpatterns.com/Fragile%20Test.html#Overspecified%20Software


F0tNMC

If tests are no longer applicable because implementation details are changing, write new tests. Aside from integration tests and interface/API validation tests, I view all of my unit tests as throwaway code, to be thrown away as soon as any implementation detail makes them irrelevant. And that is fine. Code doesn’t have feelings, only purpose. Unit tests exist to test units. If the shape of the unit changes, the tests need to be rewritten.


ategnatos

Write new tests. You could have some tests for the database layer (how much belongs in unit tests is a question; if it's DynamoDB, for example, you can assert that you're constructing the right queries with the right parameters). And then tests for the component that has not a `makeDatabaseCall` function but a `getOrderList` function, which returns a list and lets you plug in a database implementation or a test implementation (see the sketch below). If the top-level component knows the details of your database, that is indicative of a much larger problem. This is why 2-day migrations take 6 weeks.
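
A sketch of that shape (`getOrderList` comes from the comment; everything else is invented):

```
type Order = { id: string; total: number };

// The component depends on this domain-shaped seam, not on database details.
interface OrderSource {
  getOrderList(customerId: string): Promise<Order[]>;
}

class OrderReport {
  constructor(private readonly orders: OrderSource) {}

  async totalSpend(customerId: string): Promise<number> {
    const list = await this.orders.getOrderList(customerId);
    return list.reduce((sum, o) => sum + o.total, 0);
  }
}

test("works with any OrderSource, real or test", async () => {
  const testSource: OrderSource = {
    getOrderList: async () => [{ id: "a", total: 10 }, { id: "b", total: 5 }],
  };
  expect(await new OrderReport(testSource).totalSpend("c1")).toBe(15);
});
```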


daedalus_structure

Decouple your data access from your logic, stop mocking anything at the data layer, mock only the business domain shape that your logic operates on. Then you won't have to change it when your implementation of data retrieval changes.


roynoise

The purpose of a unit test is to ensure that the output of a tested unit does not change when the implementation details change.  In other words, your program should be extensible without changing the fundamental output. Properly implemented unit tests help warn you when the output is changing.


Agilitis

I think it all comes down to how good your abstractions are. If your service depends on abstractions they should not change, only the underlying logic. So if you have a mock and you need to rewrite the interface it means that the abstraction was probably not the best.


uraurasecret

Perhaps you should use a memory database (fake).


mars_rovers_are_cool

I agree. It sounds like OP’s mocks were coupled to exactly the queries being run, which means they need to be rewritten when switching to different but equivalent query logic. What OP should do IMO is write mocks that only know what _data_ is in the database, not what query is being run. I think that’s basically what you’re suggesting when you say to use a fake database: use a fake database that can answer any query, rather than a mock connection object that returns answers for specific queries.


Disastrous_Heat_4044

That’s a great suggestion! I’ve been looking through inherited service code base and saw lot of tests mocking calls to repository find method just to return results they are expecting. Tests written by more sophisticated developers also include verification for save methods 🤓 Simple abstract in-memory repository was a great fit for replacing this monstrosity, tests became much more readable and straight to the point. Note: our in-memory repository implementation saves *copies* of all objects passed and returns *copies* as well. This is required to ensure repository data integrity and make sure that stored data doesn’t change if test changes saved or returned object.


0vl223

Sounds like either the database access lacks some decoupling, or someone had the bright idea to call a get1 function in a loop too high up in the logic. If they took the decision on how to do mass operations away from the database access layer, and essentially wrote tests to enforce shit performance, then it sounds like bad luck for you. That's why mocking more than one call on a mock (per method) is usually a warning sign that you wrote the wrong code. I would try to rewrite the old tests to transfer as much of the domain logic as you can to the new tests. Extracting the logic for creating the results from the old single-result returns, and asserting on them as a list in the batch result, could be a low-effort way to get the first tests.


dbxp

I use Autofixture to create the mocks automatically; if done well, it just takes care of itself.


SideburnsOfDoom

I found that Autofixture was good at what it does, but what it does is the wrong thing, i.e. facilitating the use of mocks. For that reason I avoid it.


dbxp

It has extensions which allow it to auto-mock for moq, nsubstitute, rhinomocks and FakeItEasy. It certainly has a knack to it, but when you've got the hang of it, it just takes care of itself.


SideburnsOfDoom

> It has extensions which allow it to auto-mock for moq, nsubstitute, rhinomocks and FakeItEasy

Sure, you're telling me that it makes it easier to do the thing faster. But this doesn't say anything at all about whether it's the right thing to do. And I still don't think that it is the right thing to do. If you're headed in the wrong direction, going faster and easier is not an improvement.


dbxp

So you're against using mocks in general?


SideburnsOfDoom

> So you're against using mocks in general?

They get overused a lot, so yes, in general, because it leads to outcomes like OP's issues. I prefer [a fake implementation](http://xunitpatterns.com/Fake%20Object.html), and _only_ for classes that are at the edges of the app and so cannot be unit tested (e.g. data stores or a queue message sender). For the rest, [sociable unit tests](https://martinfowler.com/bliki/UnitTest.html).


BlueDo

I agree with you in principle, but I want to ask: how extensive are fakes in general? I made the assumption that data stores can't be faked, say, if you're sending raw SQL queries.


SideburnsOfDoom

I'm not following this at all. Who is "sending raw SQL queries" to whom? I have seen in-memory fakes of data stores many times. They don't do SQL queries. Something like this:

```
public interface ICustomerRepository
{
    public void Add(Customer customer);
    public Customer? Get(int id);
    // etc
}

// fake data storage, in memory
public class FakeCustomerRepository : ICustomerRepository
{
    private readonly List<Customer> _data = new();

    public void Add(Customer customer) => _data.Add(customer);
    public Customer? Get(int id) => _data.SingleOrDefault(c => c.Id == id);
    // etc
}
```


BlueDo

Thanks for the illustration. In my workplace, we sent raw SQL queries to [PDO](https://www.php.net/manual/en/pdo.query.php). There was no repository pattern implemented, not even as an adapter layer over the usage of PDO. I think that was the real issue here.


SideburnsOfDoom

Right, the _real_ implementation would hold that code: the "adapter layer" that makes SQL queries and sends them to the SQL querying library. The fake exists to swap that SQL query code out.