I think that while this may be true in many areas, there are a number of exceptions. In scientific computing, for example, one can run into situations where a trade-off has to be made between resolution/accuracy and memory. Another area is the processing of huge text/image/video datasets: Tesla collecting 1.3 billion miles of driving data, training an image classifier on the >100 GB ImageNet dataset, or natural language processing on large text corpora. Industrial IoT data can also be pretty huge. There's only so much that can be squeezed onto a single compute node (or GPU), so memory-efficient algorithms certainly help. An inefficient algorithm will also cost more if the processing is done in the cloud (e.g. on AWS). Lastly, there's the internet of things (think "smart devices", "smart grids", "smart homes", "smart cities", etc.), where storage and RAM limitations are pretty important. How does one get a "smart device" to capture as much data (e.g. streaming, time-series data) as possible into its limited memory, while at the same time keeping the device secure?
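
One classic answer to that last question is reservoir sampling: keep a fixed-size, uniform random sample of a stream of unknown length in O(k) memory. This is just an illustrative sketch in Python (the function name and the synthetic sensor stream are my own, not anything from a specific device SDK):

```python
import random

def reservoir_sample(stream, k, rng=None):
    """Keep a uniform random sample of k items from a stream of
    unknown length, using O(k) memory no matter how many items arrive."""
    rng = rng or random.Random()
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            # Fill the reservoir with the first k items.
            reservoir.append(item)
        else:
            # Replace a stored item with probability k / (i + 1),
            # which keeps every item seen so far equally likely to survive.
            j = rng.randrange(i + 1)
            if j < k:
                reservoir[j] = item
    return reservoir

# e.g. down-sample a long time-series stream to 100 readings
sample = reservoir_sample(range(1_000_000), 100)
```

On a real device this would run over incoming sensor readings rather than a range; the point is that memory use stays constant while the stream can be arbitrarily long.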