
A Tech Lead's Responsibility in Testing
Testing is hard, and what it means to write a good test isn't always well understood. Without a strong foundation, every form of testing, including unit and functional tests, can become an immense burden. As a lead engineer I have a responsibility to shape the testing infrastructure and culture of my team, and to provide the foundation that makes it easy to write good tests. That foundation comes through (but is not limited to) tooling, design patterns, an alignment of testing philosophy, and a shared understanding of your team's testing goals.
Testing requires you to become someone else
Testing the code you write demands that you step outside of your current self. Writing tests for the code I just wrote means I'm now trying to uncover things I missed in planning. Even with TDD, you need to come back later and look for the things you missed at first. You need to become your own adversary and try to break things. You need to become skeptical and play the role of someone who doesn't trust you, because someone who trusts you will often be less rigorous. Stepping into someone else's shoes is generally a useful skill when writing code. The tests you write will directly guard your teammates' and future developers' changes, and if one of your tests blocks a teammate, they will need the tools to easily resolve the issue.
Testing is already hard; we often make it harder
Becoming someone else doesn't come easily. On top of that, several human-made factors make testing even harder:
- We often overlook testing in training and learning
- Tests are often saved until last, but by then you're tired and may be up against a deadline. I think TDD helps a lot on this point.
- The expectation that tests should be written is clear, but this expectation says nothing about how the tests should behave
- Writing great tests often earns less appreciation than the code that ships and runs in production
- Not all code is easily testable in a meaningful way. It's easy to footgun yourself.
Should you test your own code?
I don't remember where I first heard this thought experiment, so if you know, please reach out. It goes: if testing is harder than writing the application code, and you write the application code to the very limit of your ability, then by definition you are unqualified to write the tests.
Testing definitions get debated a lot
As engineers we debate the topic of testing a lot, and we often don't agree on the best approach. Engineers at every level of experience and expertise will say Test-Driven Development (TDD) is the way, some will say Behavior-Driven Development (BDD), and others will advocate for not writing unit tests at all.
There is also a lot of debate about definitions. What is a unit test? One with no dependencies? Only library dependencies? Only application dependencies? What is an integration test? Anything that tests more than one unit?
Rather than splitting tests up by unit vs. integration, many companies break them up by size (a small sketch of tagging tests this way follows the list below).
- small - microseconds
- medium - tens of milliseconds
- large - hundreds of milliseconds
- stupid large (yes, an actual term in use) - seconds or more
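As a sketch of what this can look like in practice (assuming a pytest suite; the marker names, thresholds, and `apply_discount` function are my own illustration, not a standard), size becomes something you can tag and filter on:

```python
# A rough sketch, assuming pytest. Marker names and thresholds are illustrative.

# conftest.py
import pytest

def pytest_configure(config):
    # Register the hypothetical "size" markers so pytest doesn't warn about them.
    config.addinivalue_line("markers", "small: microseconds, pure in-memory logic")
    config.addinivalue_line("markers", "medium: tens of milliseconds, in-process dependencies")
    config.addinivalue_line("markers", "large: hundreds of milliseconds or more")

# test_discounts.py
def apply_discount(price: float, rate: float) -> float:
    """Invented stand-in for real application code."""
    return round(price * (1 - rate), 2)

@pytest.mark.small
def test_discount_is_applied():
    # A "small" test: no IO, no setup, runs in microseconds.
    assert apply_discount(100.0, 0.10) == 90.0
```

Locally you might run only the fast tier with `pytest -m small` and let CI run everything.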
What are properties of good tests?
From Kent Beck's Test Desiderata, the properties include:
- Isolated - not dependent on order
- Fast - should run quickly
- Inspiring - passing tests should inspire confidence
- Writeable - cheap to write
- Readable - easily be able to understand the purpose/intention from the test code
- Behavioral - sensitive to changes in behavior
- Structure-insensitive - results don’t change when code is refactored
- Automated - triggered at various stages without needing intervention
- Specific - cause of failures should be obvious
- Deterministic - output can be easily determined based on input - never changes if code is not changed
- Predictive - tests passing locally should tell you how the code will run in production
There are additional properties to consider such as: maintainable, reusable, scalable, feature based, reportable, etc.
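To make a few of these concrete, here is a small, hypothetical pytest example that aims at Isolated, Deterministic, Readable, and Specific; the `normalize_email` function under test is invented purely for illustration:

```python
import pytest

def normalize_email(raw: str) -> str:
    """Invented stand-in for application code: trim whitespace and lowercase."""
    cleaned = raw.strip().lower()
    if not cleaned:
        raise ValueError("email must not be empty")
    return cleaned

def test_normalize_email_lowercases_and_trims():
    # Deterministic: fixed input, fixed expected output, no clock/network/randomness.
    # Isolated: the test builds its own data, so ordering doesn't matter.
    assert normalize_email("  Alice@Example.COM ") == "alice@example.com"

def test_normalize_email_rejects_empty_input():
    # Specific: when this fails, the cause points at exactly one behavior.
    with pytest.raises(ValueError):
        normalize_email("   ")
```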
Which of these are you currently doing well or not well?
My Additional Philosophies
I think Kent Beck's properties offer a great guideline for how to think about tests. I especially like the Predictive and Inspiring properties; inspiration is a powerful thing. Below are additional philosophies that have worked for me.
Developer Experience is a Top Priority
Again, as a lead engineer I feel a duty to provide the framework and tooling teammates need to easily write good tests. There should be a very low barrier to setting up a test, asserting common business logic, and so on. If I want tests to be isolated, can I write a tool that creates randomized data? If I want tests to run fast, do I have a framework to track which tests are slow? Have I defined what a test should look like? With a great developer experience, writing good tests comes much more naturally. While providing tooling, it's also important to pay attention to what teammates do or don't like and to adapt.
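One concrete form that tooling can take (a sketch; the `make_user` factory and `User` model below are invented for illustration) is a small builder that produces valid, randomized test data, so tests stay isolated from each other and only spell out the details they actually care about:

```python
# A hypothetical test-data factory: randomized defaults keep tests isolated
# (no shared fixtures, no colliding hard-coded IDs), while overrides keep each
# test readable about the one detail it depends on.
import random
import string
from dataclasses import dataclass

@dataclass
class User:
    """Invented stand-in for a real application model."""
    id: str
    email: str
    is_active: bool

def make_user(**overrides) -> User:
    """Build a valid User with randomized defaults; override only what the test cares about."""
    suffix = "".join(random.choices(string.ascii_lowercase, k=8))
    defaults = {
        "id": f"user-{suffix}",
        "email": f"{suffix}@example.test",
        "is_active": True,
    }
    defaults.update(overrides)
    return User(**defaults)

def test_inactive_users_are_flagged():
    user = make_user(is_active=False)  # the only detail this test depends on
    assert user.is_active is False
```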
Minimal Mocking
Only mock what you have to. We have to mock IO dependencies that would normally reach other servers, but beyond that, mocking only what you absolutely must helps create the following (see the sketch after this list):
- realistic testing: by minimizing mocks, tests interact with actual implementations. This leads to more realistic scenarios that may uncover issues that would otherwise never surface. It also builds confidence: if my code ran correctly in the test, I know it will behave very similarly in production.
- design feedback: minimal mocking encourages you to design code with clear, well-defined boundaries and dependencies. The result is often more modular, testable code, which can also lead to better code quality and architecture. I have uncovered circular dependencies simply by rewriting a mock-heavy test this way.
- maintainability: tests that rely less on mocks are often easier to maintain. When the internal behavior of a server changes, you won't need to go update the mocks. This reduces the risk of brittle tests, since mocks tightly couple a test to a specific implementation.
- refactoring confidence: with fewer mocks, our tests better reflect production behavior. When a repository's test suite passes, I should be confident that acceptance and regression tests won't be affected by my change.
- reduced overhead: mocks require setup and configuration, which adds overhead to the testing process. By limiting mocks, we make our tests easier to write and to understand.
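Here is the sketch mentioned above of what "mock only the IO boundary" can look like (the `CheckoutService` and `PaymentGateway` names are invented): the real business logic runs untouched, and only the dependency that would make a network call is replaced with an in-memory fake.

```python
# Hypothetical example: only the outbound-IO boundary is replaced.
from dataclasses import dataclass, field
from typing import List

class PaymentGateway:
    """The real implementation would call another server over the network."""
    def charge(self, amount_cents: int) -> bool:
        raise NotImplementedError("network call, not exercised in tests")

class CheckoutService:
    """Real business logic under test; nothing in here is mocked."""
    def __init__(self, gateway: PaymentGateway):
        self.gateway = gateway

    def checkout(self, item_prices_cents: List[int]) -> bool:
        total = sum(item_prices_cents)
        if total <= 0:
            return False
        return self.gateway.charge(total)

@dataclass
class FakeGateway(PaymentGateway):
    """In-memory stand-in for the IO boundary only."""
    charges: List[int] = field(default_factory=list)

    def charge(self, amount_cents: int) -> bool:
        self.charges.append(amount_cents)
        return True

def test_checkout_charges_the_full_total():
    gateway = FakeGateway()
    service = CheckoutService(gateway)  # real logic, fake IO
    assert service.checkout([500, 250]) is True
    assert gateway.charges == [750]
```

Because the fake lives only at the boundary, a refactor inside CheckoutService doesn't require touching the test at all.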
1000, 100, 10 rule
The 1000, 100, 10 rule is a heuristic for maintaining a balanced, efficient test suite. It suggests that your test strategy should roughly follow these proportions (a small sketch of tracking the balance follows the list):
- 1000 Repository Level Tests: These form the foundation. Tests like unit tests are fast, isolated, and plentiful, designed to validate individual functions or components. Their abundance provides quick feedback and confidence that each small piece of your application behaves as expected. When I say unit here, I mean a test that lives and runs entirely within the repository, not necessarily one with zero dependencies.
- 100 Integration Tests: While fewer in number, these tests verify that different parts of your application work together correctly. They're less granular and test behavior across service boundaries.
- 10 End-to-End Tests: These simulate real-world scenarios to ensure the entire system functions as intended. Although they are slower and more complex, a smaller number of comprehensive end-to-end tests is enough to validate key user flows without bogging down your feedback loop.
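The sketch mentioned above: one lightweight way to keep an eye on these proportions, assuming tests are tagged with hypothetical pytest markers named `repo`, `integration`, and `e2e`, is a tiny conftest hook that counts tests per tier during collection.

```python
# conftest.py -- a hypothetical sketch: report how many tests exist in each tier
# so the 1000/100/10 balance stays visible. Assumes tests are tagged with
# markers named "repo", "integration", and "e2e" (names are illustrative).
from collections import Counter

TIERS = ("repo", "integration", "e2e")

def pytest_collection_modifyitems(config, items):
    counts = Counter()
    for item in items:
        for tier in TIERS:
            if item.get_closest_marker(tier) is not None:
                counts[tier] += 1
    summary = ", ".join(f"{tier}: {counts[tier]}" for tier in TIERS)
    print(f"\ntest pyramid -> {summary}")
```

Running `pytest --collect-only` should print the counts without executing anything, which is enough to notice when the pyramid starts to invert.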
This ratio isn’t a strict rule but a guideline meant to promote efficiency, maintainability, and realistic coverage. If you find yourself deviating significantly—whether by over-relying on mocks or missing integration details—it might be time to reassess your test strategy. Ultimately, this balance supports a robust testing culture that enhances developer experience, refactoring confidence, and overall code quality.