Hey Ipseeta, thanks for your question.
It's a bit hard to single out something in particular since I'm learning every day. But here are a few things, in no particular order:
Being in a good team with a good manager is worth more than working on particular projects. Projects come and go, succeed and fail. It's the people you work with who make these years of your life special, and you can only do your best work when there is an atmosphere of honesty, mutual support, and respect.
Big rewrites often fail. If you want to replace some old code, you need to have an incremental adoption strategy for it. Blindly rewriting something from scratch in the hope of solving some problems often creates many more of them, especially if you didn't write the original version.
Big rewrites can succeed. The previous point is often repeated as a law, but it's not. You can replace a large chunk of code with a complete reimplementation (like we did with React 16) if you have a good integration test suite, and a way to ship the new implementation to a subset of users for a subset of the product surface. We're going to publish a blog post on the Facebook Engineering Blog about this soon.
For libraries, tests should ideally be written against the public APIs. This goes against the common mantra of unit testing, but in our experience this helps both ensure that the right thing is being tested, and that it is easy to replace the underlying implementation. It does, however, make debugging test failures harder.
For very complex code, fuzz testing may work better than unit tests. If the code you're testing has to handle many unexpected cases that are hard to predict, it might be worth writing tests in a non-deterministic manner that randomly generates inputs and asserts that the outputs satisfy certain conditions.
Think about code in time. Don't stop at thinking about the code as it is now. Think about how it evolves. How a pattern you introduce to the codebase might get copied and pasted all over the place. How the code you're spending time prettying up will be deleted anyway. How the hack you added will slow down the team in the long run. How the hack you added will speed up the team in the short term. These are tradeoffs, not rules. We operate on tradeoffs all the time and we must always use our heads. Both clean and dirty code can help you reach your goals or get in the way of them.
We're not supposed to be super smart. Unless you're building a spaceship, that is. But for normal software development I think it's true. I used to worry when I didn't understand something in the code and thought I was stupid. Now I still think I'm stupid but I know it's not my fault. It means that the code is structured in a weird way, or that the test coverage is not good enough, or that I haven't put console logs in the right place to visualize what's happening. We don't need to know everything. But we need to know when to build tools to advance our understanding, and learn to explore the code we don't understand. Even when it's our own code.
Use binary search and the scientific method. Not literally in the code, but in how you approach it. Have a bug somewhere between callsites A and B? Put a log in the middle. It's between A and the midpoint? Put another log in the middle. Something's wrong with some input? Eliminate half the input. It's working? Try the other half. Etc. Debugging can feel very arbitrary but it's straightforward when you do it mechanically. Observe, form a hypothesis, come up with a way to test it, repeat. And cut things in half when there are too many.
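The "eliminate half the input" step above can be written down as a tiny routine. This is a toy version of the idea (sometimes called delta debugging); the names are made up, and it assumes the failure is caused by a single element and that any subset containing that element still fails.

```javascript
// Given a failing input and a predicate that says whether a subset
// still reproduces the failure, bisect down to one offending element.
function findBadElement(input, fails) {
  let candidates = input;
  while (candidates.length > 1) {
    const mid = Math.floor(candidates.length / 2);
    const firstHalf = candidates.slice(0, mid);
    // If the first half still fails, narrow the search there;
    // otherwise the culprit must be in the second half.
    candidates = fails(firstHalf) ? firstHalf : candidates.slice(mid);
  }
  return candidates[0];
}

// Usage: suppose processing blows up whenever the input contains NaN.
const data = [3, 1, 4, NaN, 5, 9];
const culprit = findBadElement(data, (xs) => xs.some(Number.isNaN));
// culprit is NaN, found in O(log n) probes instead of n
```

Each iteration halves the search space, which is exactly the log-in-the-middle trick applied to data instead of code.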
Hope this helps!