How I Get Value Out of Automated Testing
I've seen a ton of posts lately on Twitter, Hacker News, etc., all effectively saying "yolo more". Yea that's cool and all, but honestly I find true yolo to be much slower and lower leverage than just having a decent test harness. Before I start - I'm not advocating for TDD, nor am I advocating for really trivial/low-value unit tests.
Faster iteration, less logging
My quick rule of thumb: if I'm adding log statements and trying to hit a code path repeatedly, that's usually an indicator that I could gain some speed from writing a test. Combining this with real debugging tools, I can generally shave 30-60 seconds off each iteration. The alternative I fall into when this stuff doesn't work properly is just refreshing the page constantly or restarting a server. When you have a good test harness you'll lean on it more, and while you're at it you'll be in a high-leverage position to prevent something from breaking in the future.
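Concretely, the loop becomes "re-run one test under a debugger" instead of "reload, click around, read logs". A minimal sketch with Jest - `applyPromoCode`, its module path, and the numbers are all made up for illustration:

```ts
// Hypothetical example: drive the code path directly instead of
// reloading the page and reading console.log output.
import { applyPromoCode } from "../src/checkout";

test("promo code adjusts the total", () => {
  const cart = { items: [{ price: 40 }, { price: 10 }], promoCode: "SAVE10" };

  // Drop a breakpoint (or a `debugger;` statement) here and re-run
  // just this test while you poke at the logic.
  const total = applyPromoCode(cart);

  expect(total).toBe(45);
});
```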
Note - hot reloading and automatic server restarts are often an alternative to this approach. You'll want both to keep yourself moving fast. It's not zero-sum - both paradigms are useful.
Bigger tests are fine
I don't define unit vs integration vs acceptance vs smoke. I don't care about any of that. I want to know that some significant end-to-end code path works, and I want to write tests fast. Tools like Selenium can be a bit too slow and heavy-handed for me. I tend to use unit testing tools like Jest or Mocha and make nearly every test feel like an "integration test". This keeps tests fast enough and meaningful enough. On the UI side I'll test multiple actions running and the outcome of those paths in stores. On the API side I'll bounce between starting up a server and testing full request/responses and just testing individual handlers - whatever is faster and makes a better test in the moment.
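On the API side, a "big" test can be as simple as this sketch - it assumes an Express app exported from ./app, backed by an in-memory store, and uses supertest; the route and payload are illustrative, not from a real project:

```ts
// Full request/response test with Jest + supertest against an assumed Express app.
import request from "supertest";
import { app } from "./app";

test("creating and fetching a todo works end to end", async () => {
  const created = await request(app)
    .post("/todos")
    .send({ title: "write fewer, bigger tests" })
    .expect(201);

  const fetched = await request(app)
    .get(`/todos/${created.body.id}`)
    .expect(200);

  expect(fetched.body.title).toBe("write fewer, bigger tests");
});
```

One test like this exercises routing, validation, the handler, and the store in a single pass, which is usually the assurance I actually want.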
Don't test your framework
Testing UI components is *ok*. It kind of depends what they're doing, but soooo much UI testing is just "oh ya it showed up" or "I clicked it and it fired a thing". That seems useless - the code is generally already declarative, and it either exists or it doesn't. In a past life I was a bit more strict here, but in modern times I've found these tests to generally be low-ish value. My exception for UI testing is design system components that have higher levels of complexity or will have many, many dependencies.
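The kind of UI test that does earn its keep looks roughly like this sketch - Jest plus React Testing Library, where the `Autocomplete` component and its props are hypothetical stand-ins for a behavior-heavy design system piece:

```tsx
// Testing actual behavior (filtering logic), not "it rendered".
import { render, screen } from "@testing-library/react";
import userEvent from "@testing-library/user-event";
import { Autocomplete } from "./Autocomplete";

test("filters options as the user types", async () => {
  const user = userEvent.setup();
  render(<Autocomplete options={["apple", "apricot", "banana"]} />);

  await user.type(screen.getByRole("combobox"), "ap");

  expect(screen.getAllByRole("option")).toHaveLength(2);
});
```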
BUT But but....
In response to what I'm about to hear:
- It's not going to catch anything significant - well, that's up to you to make your tests high leverage, and if something breaks, write a test for it
- It takes too long to make good tests - take shortcuts, copy-paste swathes of code; trashy/messy/copy-pasted tests that work are better than perfect DRY tests with perfect abstractions
- We're not ready, we're constantly redesigning/changing everything - are you truly changing everything all the time? During the lifetime of that code it should be easily refactorable, and you should have at least some assurance that you're not breaking meaningful behavior
- But irrelevant tests fail and we just delete them - yea that sounds correct, good job :)
- But it's just my side project, should I really write tests? - I've found tests are more important during side projects because, given large gaps in time, I can't fully remember all the logic/behaviors I've built up. Especially for long-running side projects.