While automated testing is essential, whether to write tests first depends on the problem being solved.
If the problem is precisely specified, then TDD makes perfect sense. It makes even more sense if you can convince your clients or decision makers to write the tests for you (e.g. Gherkin-based acceptance tests). In these cases, writing tests before rather than after makes little difference to the total time taken in the long run, and writing code against a spec helps you catch bugs early.
Even if you defer the actual automated verification of the Cucumber specs to a later stage, having the spec in front of you helps you organize and prioritize your work, since it gives you a granular checklist.
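As a minimal sketch of writing code against such a spec (hypothetical `discountedTotal` function and threshold, plain assertions rather than a real Cucumber binding):

```typescript
// Hypothetical acceptance criterion: "A basket totalling more than
// 100.00 gets a 10% discount." The assertions below are written first,
// straight from the spec line, and the implementation follows them.
function discountedTotal(totalCents: number): number {
  // Work in integer cents to avoid floating-point surprises.
  return totalCents > 10_000 ? Math.round(totalCents * 0.9) : totalCents;
}

// Each assertion maps to one clause of the acceptance criterion.
console.assert(discountedTotal(20_000) === 18_000, "10% off above threshold");
console.assert(discountedTotal(5_000) === 5_000, "no discount at or below");
```

Even this tiny checklist forces the threshold ("more than 100", not "100 or more") to be decided before any code exists.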
Another area where TDD makes perfect sense is when you are introducing small, well-defined modifications to a large or complex enterprise system.
However, TDD is not suitable for exploratory programming. In these use cases the problem is almost never well defined. You may have a dataset whose facets you want to explore in a Jupyter notebook, or different models you want to try fitting to it. Perhaps you want to visualize the dataset in several ways to figure out which one conveys its meaning best. Similarly, you may have some APIs and are just exploring whether they can be integrated. For these use cases it is almost always better to write tests later, as a safeguard against future regressions.
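A test written after the fact is typically a characterization test: once exploration settles on a behaviour, you pin it down so later refactors cannot silently change it. A sketch, with a hypothetical `normalizeLabel` helper standing in for whatever the exploration produced:

```typescript
// A data-cleaning step whose exact behaviour was discovered
// interactively (hypothetical helper, not from the original post).
function normalizeLabel(raw: string): string {
  return raw.trim().toLowerCase().replace(/\s+/g, "_");
}

// Written AFTER the behaviour was judged correct during exploration:
// the test records the observed output as the expected output.
console.assert(normalizeLabel("  Gross Income ") === "gross_income");
```

The test adds no design pressure up front, but it turns an ad-hoc notebook result into a regression guard.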
Also, in strongly typed languages like F#, OCaml, and Haskell, you can model many of your domain's constraints through type relations. Experts in this approach can model types to fit the domain to such an extent that if the code compiles, it is almost guaranteed to be correct, because reliance on run-time checks is minimized. Conventional test cases are needed only as a secondary measure, for things the compiler cannot check, and so can be written later.
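The languages named above express this with sum types; a rough TypeScript analogue is a discriminated union (illustrative names, not from the original post):

```typescript
// Making an illegal state unrepresentable: a request is loading,
// succeeded with data, or failed with an error. "Succeeded but data
// is missing" simply cannot be constructed, so downstream code needs
// no run-time null checks.
type RequestState =
  | { kind: "loading" }
  | { kind: "success"; data: string }
  | { kind: "failure"; error: string };

function describe(state: RequestState): string {
  // The compiler checks this switch is exhaustive over all three kinds.
  switch (state.kind) {
    case "loading":
      return "still loading";
    case "success":
      return `got ${state.data.length} bytes`;
    case "failure":
      return `failed: ${state.error}`;
  }
}
```

What remains for tests is only the behaviour the type cannot capture (here, the wording of each message), which is why they can safely come later.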