Tests are your spec. You write them first because that is the stage when you are still figuring out what you need to write.
Although, TDD says that you should write only one test before implementing it, which encourages spec writing to be an iterative process.
Writing the spec after implementation means that you are likely to have forgotten the nuance that went into what you created. That is why specs are written first: the nuance is captured up front, as it comes to mind.
Tests are not any more or any less of a spec than the code. If you are implementing an HTTP server, for instance, RFC 7231 is your spec, not your tests, not your code.
I would say that which comes first between specs and code depends on the context. If you are implementing a standard, the standard's spec obviously comes first, but if you are iterating, maybe on a user interface, it can make sense to start with the code so that you have working prototypes. You can then write formal documents and tests later, once you are done prototyping, for regression control.
But I think that leaning on tests is not always a good idea. For example, let's continue with the HTTP server. You write a test suite, but there is a bug in your tests: say, you confuse error 404 and 403. Then you write your code, correctly, run the tests, and see that one of your tests fails, telling you that you returned 404 and not 403. You don't think much about it, after all "the tests are the specs", and change the code. Congratulations, you are now making sure your code is wrong.
Of course, the opposite can and does happen: writing the code wrong and making the test pass without thinking about what you are actually testing, and I believe that's why people came up with the idea of TDD. But for me, test-first flips the problem without solving it. I'd say the only advantage, if it is one, is that it prevents taking a shortcut and releasing untested code by moving tests out of the critical path.
But outside of that, I'd rather focus on the code, so if something is to be "the spec", that's it. It is the most important artifact because it is the actual product; everything else is secondary. I don't mean unimportant, I mean that from the point of view of users, it is better for the test suite to be broken than for the code to be broken.
It is more like a meta spec. You still have to write a final spec that applies to your particular technical constraints, business needs, etc. RFC 7231 specifies the minimum amount necessary to interface with the world, but an actual program to be deployed into the wild requires much, much more consideration.
And for that, since you have the full picture that is not available to a meta spec, logically you will write it in a language that both humans and computers can understand. For the best results, that means something like Lean, Rocq, etc. However, in the real world you likely have to deal with middling developers straight out of learn-to-code bootcamps, so tests are the practical middle ground.
> I don't know, you confuse error 404 and 403.
Just like you would when writing RFC 7231? But that's what the RFC process is for. You don't have to skip the RFC process just because the spec also happens to be machine readable. If you are trying to shortcut the process, then you're going to have this problem no matter what.
But even when shortcutting the process, it is still worthwhile to have written your spec in a machine-readable format, as that means any change to the spec automatically identifies all the places you need to change in the implementation.
> writing the code wrong and making passing test without thinking about what you actually testing
The much more likely scenario is that the code is right, but a mistake in the test leads it to not test anything. Then, years down the road, after everyone has forgotten or moved on, when someone needs to do some refactoring, there is no specification to define what the original code was actually supposed to do. Writing the test first means that you have proven it can fail. That's not the only reason TDD suggests writing a test first, but it is certainly one of them.
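A sketch of such a vacuously passing test, using a made-up deduplication function:

```python
def unique_emails(rows):
    """Code under test: deduplicate addresses, preserving order."""
    seen, out = set(), []
    for addr in rows:
        if addr not in seen:
            seen.add(addr)
            out.append(addr)
    return out

def test_unique_emails():
    # Mistake: the fixture is empty, so the loop body never runs
    # and the test passes without asserting anything at all.
    for addr in unique_emails([]):
        assert "@" in addr
```

This test would keep passing even against an implementation that always returns an empty list. Writing it first and watching it fail at least once is precisely what rules this out.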
> It is the most important, because it is the actual product
Nah. The specification is the actual product; it is what lives for the lifetime of the product. It defines the contract with the user. Implementation is throwaway. You can change the implementation code all day long and as long as the user contract remains satisfied the visible product will remain exactly the same.
> The much more likely scenario is that the code is right, but a mistake in the test leads it to not test anything.
What I usually do to prevent this situation is write a passing test, then modify the code to make the test fail, then revert the change. It also gives me an occasion to read the code again, kind of like a review.
I have never seen this practice formalized, though. Good for me: this is the kind of thing I do because I care, and turning it into a process with Jira and such is a good way to make me stop caring.
Thank you, I wasn't aware of this. It is the kind of thing I wish people were more aware of, kind of like fuzzing, but for tests.
About fuzzing: I have about 20 years of experience in development and I have never seen fuzzing done as part of a documented process in a project I worked on, not even once. Many people working in validation don't even know it exists! The only field where fuzzing seems to be mainstream is cybersecurity, and most fuzzing tools are "security oriented", which is nice, but it doesn't mean that security is the only field where fuzzing is useful.
Anyway, what I do is a bit different in that it is not random like fuzzing; it is more like reverse TDD. TDD starts with a failing test; then you write code to pass the test, and once done, you consider the code to be correct. Here you start with a passing test; then you write code to fail the test, and once done, you consider the test to be correct.
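One round of this reverse-TDD loop might look like the following sketch (toy function, made-up names); the "break the code" step is shown as a comment because in practice you apply it, watch the suite go red, then revert:

```python
def is_leap_year(year):
    """Code under test."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def test_is_leap_year():
    # Step 1: the test passes against the current code.
    assert is_leap_year(2000)
    assert not is_leap_year(1900)
    assert is_leap_year(2024)
    assert not is_leap_year(2023)

# Step 2: temporarily break the code, e.g. change `year % 400 == 0`
# to `year % 400 != 0`, and confirm the test now fails (2000 and
# 1900 both flip). Step 3: revert. The test has proven it can fail.
```

Mutation-testing tools automate this same idea by applying many such small code changes and checking that the suite notices each one.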
> I have never seen fuzzing being done as part of a documented process in a project I worked in
Fuzzing, while useful in the right places, is a bit niche. Its close cousin, property-based testing, is something that ideally shows up often in a spec.
However, it starts treading toward the same kind of mindset required to write Lean, Rocq, etc. I am not sure the bootcamp grad can handle writing those kinds of tests, at least not once you move beyond the simple identity(x) == x case.
Also, if you find after implementation that the spec wasn't specific enough, go ahead and refresh the spec and have the LLM redo the code, from scratch if necessary. Writing code is so cheap right now, it takes a different mindset in general.