Here's a PSA: your code still runs on a very fast abacus. This is a real machine, with hard limits.
“You don't have to be an engineer to be a racing driver, but you do have to have mechanical sympathy.” (Jackie Stewart, racing driver)
Every developer - whether the racing driver or the engineer - needs to understand mechanical sympathy.
Mechanical sympathy describes a feeling - an intuition - for how a machine operates. You may not know how it works at a component level, but you understand how the machine will respond to different inputs, to being prodded a certain way.
We can cut it down to an assembly line of ideas: your code asks the computer to do "things". These "things" require a physical action to be done by your computer. These physical actions take time. Not all physical actions take the same amount of time. In this context, mechanical sympathy is an intuition for what's slow, what's fast, what's efficient, what's unreliable.
We're living in a wild time, playing a tug-of-war between what we want and what is possible. Algorithms are more complex, more code is freely available, datasets are larger, more users are online and [we hope] are using our products. This all comes saddled with cost. Cost is complexity, cost is slower applications, cost is lost users.
We also live in a world of finite resources. There's a limit to how much you can fit on an SSD, a limit to how many bytes you can push through a wire. You also compete against other applications, all jostling for the same resources. An application that is in harmony with its environment is predictable, and it's happier. Both have a direct correlation with your users' happiness, and with yours.
The lion's share of developers are never forced to understand these ideas, and so, naturally, they don't. They skate by; computers are relatively fast now, and you can build a full-featured application while ignoring all of this. But at some point, as the codebase, the user base, and the complexity grow, you will be punished for not having at least a rudimentary understanding.
A lot of developers will bring up the idea that premature optimisation is bad, and I totally agree. I don't view mechanical sympathy as a set of tasks to complete, or a way to achieve perfect performance, but as a set of guards against doing anything stupid.
So, as you're writing code:
What happens when you have 1,000,000 items running through the algorithm? Big O notation is a useful concept here.
Are you hitting the network? What're the constraints - latency, bandwidth, reliability?
Are you reading/writing to disk? How big and how frequent will these be?
What're you storing in memory? Does it need to be there?
Do you need to be doing this now? Do you need to be doing this at all?
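As a sketch of that first question: at 1,000,000 items, a membership test on a Python list scans elements one by one (O(n)), while a set hashes straight to the answer (O(1) on average). The exact numbers will vary by machine; the size of the gap is the point.

```python
import timeit

items_list = list(range(1_000_000))
items_set = set(items_list)

# Searching for a value near the end forces the list to scan almost
# every element; the set finds it in roughly constant time.
list_time = timeit.timeit(lambda: 999_999 in items_list, number=100)
set_time = timeit.timeit(lambda: 999_999 in items_set, number=100)

print(f"list membership: {list_time:.4f}s, set membership: {set_time:.6f}s")
```

Same question, same answer, wildly different physical work - that's the intuition Big O is trying to hand you.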
Understanding these has real diminishing returns, so avoid going too deep. But you do need to build your intuition. Learn the relative speeds of different pieces of hardware. While you're building things, benchmark them. Keep those benchmarks on hand, spend time experimenting, playing. Remove things, add things, watch.
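One way to start benchmarking is a tiny helper you keep around and point at anything you're curious about. Here's a sketch using Python's standard timeit module; the names bench and manual_sum are just illustrative. It compares the built-in sum (a loop running in C) against the same loop written in Python bytecode.

```python
import timeit

def bench(label, fn, number=200):
    # Run fn() `number` times and report the mean cost per call.
    seconds = timeit.timeit(fn, number=number)
    print(f"{label:>15}: {seconds / number * 1e6:10.2f} us/op")
    return seconds

def manual_sum(xs):
    total = 0
    for x in xs:
        total += x
    return total

data = list(range(100_000))
t_builtin = bench("sum() builtin", lambda: sum(data))
t_loop = bench("manual loop", lambda: manual_sum(data))
```

Both compute the same answer; watching the timings diverge as you change the data size is exactly the kind of prodding that builds the intuition.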
The key thing to remember: everything has a cost. You can ignore it, but you will pay either way. It's better to understand some of these hairy relationships up front, to avoid even harder problems down the road.