Summary of what follows:
"Ah, remember when computers were 1000x worse?"
Bear with me, I have an explanation. To save time, let's start with a simple scale:
Young: You think modern machines are shit and slow.
Old: You remember when things that were approximately 1000x worse* in almost every measure were the incredible, and expensive, new hotness of the day. (1995)
Incredibly freakin' old: You remember when owning a machine that was approximately 1,000,000x worse** in almost every measure made you one of the elite, or maybe just very sad, few who owned a computer of any sort. (1980)
* No I am not making those numbers up
** No I really am NOT making those numbers up
I was super sad recently and did a bit of calculation to see just how well Moore's Law - or at least, the popular perception of it (i.e. electronics get 2x better every 18 months, rather than the original "twice as many transistors on a single chip", which works out roughly the same) - had been adhered to over the past few decades. Turns out that for the past THIRTY years it's held up pretty well, when averaged.
What this means is that in fifteen years you have ten doubling generations, or approximately a thousandfold improvement, thanks to geometric progression. Over thirty years it's twenty generations - and a one-million-fold improvement. Broken down further, it's roughly 59% year on year, or a tenfold improvement every five years... with a few disruptive lumps, of course. Which is why you need to average it.
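If you don't fancy taking my word for the arithmetic, it only takes a few lines to check (a quick Python sketch; the only input is the popular 18-month doubling assumption):

    # Popular reading of Moore's Law: capability doubles every 18 months.
    DOUBLING_PERIOD_YEARS = 1.5

    def improvement(years):
        # Total improvement factor over a span of that many years.
        return 2 ** (years / DOUBLING_PERIOD_YEARS)

    print(improvement(1))   # ~1.59      -> roughly 59% better year on year
    print(improvement(5))   # ~10.08     -> tenfold every five years
    print(improvement(15))  # ~1024      -> a thousandfold per fifteen years
    print(improvement(30))  # ~1,048,576 -> a million-fold over thirty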
CPUs, display quality, remote comms and memory capacity (both RAM and "backing store") follow this rule pretty damn well, if you allow just a little bit of fudging. A high-end multi-core CPU (say, 8-way with hyperthreading and SSE at 3-4GHz) is about 1000x more potent than a 486 DX2/66, which is itself about 1000x better than a 2.5MHz Z80 or 1MHz Motorola 6800 (likely chips in a 1980 desktop computer). 8KB would have been a pretty good amount of RAM in 1980, and so would 1MB of user-file hard storage (with commercially available software taking up 600KB on a tape at best). For 1995, think 8MB and 1GB, and 600MB CDs. In 2010, an 8GB PC is a little lavish, but not unusual, nor is 1TB of disk, and you can easily burn through 600GB of useful internet data a year (Google Maps, for example, is a massive data hog, some Wikipedia pages are pretty bad... and then we have all those cloud-based apps...).
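The memory and storage figures in particular line up almost suspiciously neatly; here's a trivial sanity check, using the numbers straight from the paragraph above:

    # RAM and disk snapshots from the text, in bytes.
    KB, MB, GB, TB = 2**10, 2**20, 2**30, 2**40

    ram  = {1980: 8 * KB, 1995: 8 * MB, 2010: 8 * GB}
    disk = {1980: 1 * MB, 1995: 1 * GB, 2010: 1 * TB}

    for year in (1995, 2010):
        print(year, ram[year] // ram[year - 15], disk[year] // disk[year - 15])
    # Both ratios come out as 1024: the "1000x per fifteen years" rule, near enough.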
To go back even further, by another 1000x step... 1965 gives us the earliest electronic calculators (no actual CPU, and a few tens or hundreds of ops per second are sufficient...) and examples of oscilloscope games (purely analogue), but even industrial computers are still pretty bad.
Then...
1980 is Pac-Man, Space Invaders, the Atari VCS, the very earliest BBSes and email.
1995 is the PlayStation and the start of the mainstream web, with Final Fantasy VII and Gran Turismo arriving shortly after.
2010 is, well, if you don't know, you've got bigger problems than me not spelling it out.
Those of you who, like me, first cut your teeth on the generation immediately after 1980 (the Spectrum, C64 and other Z80A and 6502/6510 etc. systems, plus 80s arcade games), matured with the interim 16-bits, and joined the mainstream when PCs finally properly outgrew their expensive business roots (a 10x power increase each time, let's say) - how old does thinking about all THAT make you feel?
And also - how awesome do we think the future's going to be, if by 2026 we may have machines 1000x more potent than what we currently enjoy? Granted, in real terms we've probably only got 10x the actual utility out of these computing devices with each 15-year generation, most of it being wasted on prettiness and bloat - after all, there was eventually a (monochrome) graphical web browser for my 1985 Atari, alongside crude audio and video digitisers, and you could probably come up with some kind of tolerable YouTube interface for it - but 10x better will still be amazing.
Singularity? Bring it on. I can't wait to see what kind of sound-driven kaleidoscope THAT can put on screen. The 1990-95 versions were awesome enough.
(((BTW, the problem we are having is one of internal transfer speeds. Hard disks haven't accelerated anywhere near as much as the rest of the system - in fact, thanks to the simple physics of it all, their speed has increased by roughly the square root of their capacity (because storage space is a function of areal density, but read speed is a function of linear density). So in the time it's taken them to grow 1,000,000-fold (5MB to 5TB, say... while the price dropped from £10,000 to £100, too!), their transfer speed has only improved by about 1000x (a couple of hundred KB/sec to a couple of hundred MB/sec, or 25MB/s if on USB2) and access time by a mere 100x (a few hundred ms to a few ms). SSDs are closing the gap, but still can't make up the difference: even the most extreme home desktop tests on overclocking sites have trouble breaking 3GB/s and some tens of microseconds of access time.

Memory and peripheral buses suffer from the same problem, which is why AGP and dumb-PCI graphics cards have gone the way of the dodo, replaced by cards that are basically dedicated supercomputers - they receive simple commands and chunks of texture/vertex data from the main CPU and work everything out for themselves - and by cheaper display systems that are integrated not into the motherboard but into the CPU itself.
(Well, memory access ALMOST manages to keep pace, but you have to fudge the figures VERY hard - being massively pessimistic about the 1980 performance (assuming DRAM instead of the more common SRAM), and very optimistic about what can be squeezed out of our very fastest gaming-rig memory systems, with the true performance of a 1995 PC as the touchstone. It still hasn't kept up with CPU development or memory size.)))
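And to put a number on that square-root relationship, a back-of-envelope sketch using the example figures from the aside above (nothing assumed here beyond the areal-vs-linear-density argument):

    import math

    # Capacity scales with areal density (bits per unit of platter AREA),
    # but sequential transfer rate scales with linear density (bits per
    # unit of track LENGTH passing under the head) - i.e. with the square
    # root of areal density, at a fixed spindle speed.
    capacity_growth = (5 * 2**40) / (5 * 2**20)       # 5MB -> 5TB: a million-fold
    expected_speed_growth = math.sqrt(capacity_growth)

    print(f"{capacity_growth:,.0f}")        # 1,048,576
    print(f"{expected_speed_growth:,.0f}")  # 1,024 -> matches the observed ~1000x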