This article is a bit different from what we normally post, so be sure to give your opinion in the comments if you want to see more of this (or not). You will most often find these types of software development rants on a programmer's personal blog, but I find them interesting, so here we go.
There are two camps in software development regarding optimization, each with diehard advocates. One side argues that software should be strictly designed, with every decision coherent and performance-minded. The other holds that optimization should wait until after profiling, because you could spend weeks making a fairly useless chunk of code purr like a kitten while ignoring the turkey that's eating 99.99% of your resources.
Both sides can point to situations that validate their opinion. The latter, "don't prematurely optimize" crowd can show examples where time was wasted because the engineer didn't look before they leaped. One such story comes from Chandler Carruth of Google. One of his first tasks at the company was to review code from Ken Thompson, the "very senior engineer" who created Unix and co-designed UTF-8. Thompson's code solved a rule-matching problem with about a 20-fold performance increase over what they were currently using. But when Carruth went to integrate the change, his colleague mentioned, "Yeah, turns out our use-case overwhelmingly hits a single rule, so I just check it first. It's now 100x faster."
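The "check the common case first" idea in that story can be sketched in a few lines. The rules and names below are purely illustrative (the real system's rules aren't public): if profiling shows one rule matches the overwhelming majority of inputs, testing it before running the general matcher skips most of the work.

```python
import re

# Hypothetical rules for illustration only.
RULES = [re.compile(p) for p in (r"^GET /index", r"^POST /api", r"^GET /static")]
COMMON_RULE = RULES[2]  # suppose profiling showed this one matches ~99% of inputs

def match_general(line):
    """The general path: scan every rule in order."""
    for i, rule in enumerate(RULES):
        if rule.search(line):
            return i
    return -1

def match_fast_path(line):
    """Check the overwhelmingly common rule first, then fall back."""
    if COMMON_RULE.search(line):
        return 2
    return match_general(line)
```

The point isn't the regex machinery; it's that a one-line measurement-driven shortcut beat weeks of clever engineering on the general path.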
The other crowd says that even if you can find exactly where the poop stinks, you're still polishing a turd. One issue commonly pointed to is garbage collection. In languages with managed memory, this process scans your application's heap to reclaim chunks that are no longer referenced. Its goal is to prevent memory leaks without programmers needing to carefully manage allocation themselves. The problem is that many collectors pause essentially every thread while they run ("stop the world"), and a collection can take several frames' worth of time to complete. So in a real-time application you can either live with the resulting stutter, or you can carefully design the application to avoid generating garbage in the first place. Taking the time to design and architect up front lets you either choose a framework without garbage collection, or reduce (sometimes eliminate) how often it triggers.
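One classic "design around the collector" tactic is object pooling: instead of allocating a fresh buffer every frame (creating garbage the collector must later sweep), you reuse a fixed set of pre-allocated buffers. The sketch below is illustrative; the class and method names are made up, not a real library API.

```python
class BufferPool:
    """Reuse a fixed set of pre-allocated buffers to avoid per-frame garbage."""

    def __init__(self, count, size):
        # All allocation happens once, up front.
        self._free = [bytearray(size) for _ in range(count)]

    def acquire(self):
        # Hand out a pre-allocated buffer; no new allocation, no new garbage.
        return self._free.pop() if self._free else None

    def release(self, buf):
        # Return the buffer for reuse instead of letting it become garbage.
        self._free.append(buf)

pool = BufferPool(count=4, size=1024)
buf = pool.acquire()   # reused every frame instead of re-allocated
# ... fill and use buf during the frame ...
pool.release(buf)
```

Since nothing is allocated inside the frame loop, the collector has far less to do, and in some runtimes it may rarely trigger at all.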
So the argument is over-thinking that wastes time versus under-planning that paints software into corners. As should be obvious, both sides are correct. It's a bad idea to blindly charge into development, and it's good to think about the consequences of what you're doing. At the same time, what you think means nothing if it differs from what you measure, so you need to back up your thoughts with experimentation.
The challenge is to steer a middle course that captures the benefits of both, without falling into the traps on either side.