Almost everything on computers is perceptually slower than it was in 1983 (twitter.com/gravislizard)
67 points by zeveb on Nov 7, 2017 | 23 comments


Ironic that he had to type out the entire thing in 140-character segments, each of which bounced all over the world, rather than sharing a simple text-file write-up.


From a purely UI perspective that may be true (neglecting things like incompatible file systems, the lack of networking outside large institutions, and the difficulty of doing simple things like sending email attachments), but it would be true irrespective of the adware. Text-based UIs are more efficient than GUIs, and mobile GUIs are a study in inefficiency.

On the other hand, it could take weeks to learn a nontrivial UI well enough to make it fast.


This was already posted and had a large discussion here earlier today: https://news.ycombinator.com/item?id=15643663


There's probably already a phrase for this, but I wonder how much of this is due to powerful machines making more room for bad programmers?

This guy is beating on GUIs, but I'm not sure that's the problem. I've used rock-solid GUIs on machines that ran at 1 MHz with less than 1 MB of RAM, that absolutely screamed and were a joy to work with.


> making more room for bad programmers?

That reminded me of

https://en.wikipedia.org/wiki/Braess%27s_paradox

and

https://en.wikipedia.org/wiki/Wirth%27s_law


An easier-to-read version is here: https://tttthreads.com/thread/927593460642615296

(also, for some reason Twitter thinks that Firefox on Linux is a mobile browser … why Twitter why?)


That didn't help. It's still unreadable, but the problem isn't the formatting.

I guess it depends on what we mean by faster. Certain kinds of interfaces were more responsive when they didn't have to deal with a lot of graphics overhead. But if you were working on a DECwriter with a 300 baud connection to a mainframe, it wasn't that fast. Need data from the disk drive? Way slower. And of course, in 1983, no PCs had any kind of multitasking.


Not to mention his entire rant on maps losing search items when you pan is just wrong.

Or fuck it, let's talk about the fact that I have an application with zero install time that will literally display the roads/geography/business locations of basically every civilized place in the whole freaking world, and it's almost always reasonably up to date.

And the biggest irony of all, he chooses to format his entire fucking lunatic rant in messages capped at 140 characters, when there are lord knows how many better options available.

This man is walking proof that nostalgia is real, and that people are generally idiots whenever they talk about "The good ol' days"...


> Search far and wide. Search for cities and then click around inside them. Read reviews. Do street view

> When you're all done, you go back to your plotted trip and start laying out the chosen locations and optimizing your path.

> You can do this with a paper map. You can't do this with gmaps. So you just don't do it.

Yes, it’s so easy to “click around cities” and “do street view” on a paper map. It’s amazing that Google Maps even exists given the magic paper maps we already had.


You missed the point entirely. Nicely done.


I've never heard that from anybody who actually used computers in 1983.

Latency problems in the web/app world are the result of adware/bloatware/trackers/etc. and/or poorly implemented frameworks and stacks, not modern hardware or OSes.


I actually used computers in 1983; I learned to program on an Atari 800XL, and later upgraded to my father's IBM AT (with Hercules graphics!). While computers have become extraordinarily faster at doing actual calculations or manipulating large amounts of data, UI responsiveness (as long as you had sufficient RAM[1]) gained latency and generally became perceptually worse, on a curve that peaked in the Windows Vista era, after which it started to improve again, very slowly.

Note that I said "UI", not "GUI". Text-mode visual UIs were fine for some tasks. I knew several people who preferred Lotus 1-2-3[2] over Excel and Quattro Pro specifically because of 1-2-3's low latency.

Every layer of abstraction added overhead. Protected memory was worth a bit of latency. Multiple layers of object-oriented vtable calls and the unnecessary texture blits added to every GUI widget made GUIs sluggish, for questionable benefits.

> adware/bloatware/trackers/etc

Those are huge sources of latency (and other problems), but networks always add large amounts of latency on their own. The web, which was designed for documents rather than as a platform for interactive GUIs, has always felt and will always feel slow, unpredictable, and laggy. Some of that is unfixable: light in fiber covers only about 200 km per millisecond, so a transoceanic round trip alone costs tens of milliseconds, and any thin-client system that requires regular roundtrips to a server pays that cost over and over.

[1] Everything became painfully slow if you constantly needed to move pages to/from the swapfile.

[2] Lotus 1-2-3's 1983 release was so popular that it helped create the very idea of the "IBM PC clone", with 1-2-3 acting as a popular compatibility test.


I did (well, nearly; I started in 1984 on 8-bit home computers), and the only thing that was dramatically slower than today in normal day-to-day tasks was loading data from cassette tapes, floppy disks or via modem.

Since I've been on computers with hard disks (an Amiga 3000, early to mid-'90s), it's definitely true that responsiveness either hasn't improved much or has gotten even worse (I remember how confused I was when I first saw an application with a splash screen to hide its long startup time). This sort of carelessness has been spreading like a plague ever since.


That's true. Waiting to load a program from a Tandy cassette tape was a challenge. Floppy disks were a big jump forward.

Before that, the Big Jump for me was a move from a Hayes 300 baud modem to an AppleCat 1200. I clearly recall thinking "Woah! It's feeding data faster than I can read it!"

Next up was the BeOS. From cold boot to completely loaded desktop and hard drive light off in something like 15 seconds. (Meanwhile, Windows 95 took something like 2 minutes to boot up.)

The final quantum jump was about six years ago, when I moved from a spinning hard drive to a solid-state drive (SSD). Again, a big, BIG boost.


I still remember adding a kind of splash/waiting screen to an email application because The Most Important Feature was executing too fast and users wouldn't believe that it had really worked. Sometimes latency is caused purely by management.


That exists now in TaxCut software: when it says "checking your return for possible errors or red flags" or similar, it runs a blinking-lights animation with arrows for about 10 seconds. The check was probably done before Windows could even blit the image to the screen.


The hardware and OS are fine. The problem is the web platform and its towering stack of abstractions, combined with the occasional problem of focusing on aesthetics over utility.

I haven't yet experienced a web-based application that I actually enjoyed using. The closest anything ever gets is making me wish it were a native application, so I didn't have to deal with all the web nonsense in order to use it.


The hardware is fine. I wouldn't say the same about OSes. The desktop experience on Linux is incoherent, and Windows and OS X are legacy monsters with 20 years of new features piled on top of not exactly rock-solid foundations. Every time I see a frame drop when switching desktops on my MacBook I get so angry. I mean, really? It's 2017, game graphics are almost indistinguishable from real life, and we're still having trouble drawing a couple of rectangles for our desktops?


Desktop too - so many frameworks and runtimes and threading and associated overhead that many apps don't feel as tight as they used to.


Back in 1990, some guy named Michael Abrash wrote about this in Dr. Dobb's Journal. He actually used computers in 1983: he published his first game in 1982, and later worked on a game called Quake with John Carmack. Michael is now Chief Scientist at Oculus, where he is deeply concerned with perception and delay.

To paraphrase the Dr. Dobb's article: his young child asked him why he never used his "fast" computer. He was initially puzzled by the question, as his current computer was a state-of-the-art, max-spec machine (a 233 MHz 486?), but he realized she was referring to the ancient Apple II or TRS-80 or such sitting in the corner of the office. While the new system was hundreds of times faster by every computational measure, the user experience said otherwise, and his child picked up on this. The old 1 MHz 8-bit box needed only a second or two to boot to a prompt, yet the state-of-the-art Windows 95 machine required 2 minutes to boot up. The new computer was faster; the old computer was more responsive.


Indirectly related...

I am the one-man team for the CLI (Command-Line Interface) for a new cloud computing PaaS.

This is a small project (a team of 10+ people, mostly software engineers) inside a larger company (500+ employees).

Sometimes I have a [really] hard time dealing with management and co-workers, because they expect me to mimic the behavior of our web-based GUI console with a level of fidelity that is often just inappropriate for a CLI, or simply not what you want in a CLI.

Most of this is caused by a lack of experience with both using and writing CLI tools (I happen to have far more experience than they do in this field, having been a heavy Unix user since I was a kid, besides a greater understanding of Unix programming in general). I wish they would at least read "The Art of Unix Programming" by Eric Raymond, so they would better respect some of the design decisions I made.

I am very open to criticism, and I even enjoyed the unexpected help I received from the project's UX/UI designer, who was enlisted to help me out (except on the things I had clearly decided against but that were kind of forced on me anyway).

Here is an incomplete list of things I had to deal with related to performance/usability:

* (lesser problem) backlash against the language: Go was my choice for this; many on the team would rather have used Node.js

* lack of understanding that I have to support a vast number of configurations: BSD/macOS/Linux/Windows + different shells + terminals (and it gets worse/weirder on Windows), and that some things that look nice on one system might not look the same in most terminals. It is also often difficult to predict accurately which terminal is in use (I am looking at you, Windows Subsystem for Linux, which pretends to be xterm-256color while not supporting most of what that implies) -- there's a capability-check sketch after this list

* colors, colors, colors. The designer wanted colors everywhere: foreground, background, alpha-RGB. You cannot imagine how hard it was to convince them that we'd better stick with the standard 16 ANSI colors (the sketch after this list shows that fallback), and that an alpha channel didn't even make much sense, at least while I am alone on this

* Explaining why we'd better stay away from new UTF-8 icons "that only render nicely in iTerm on macOS"

* Explaining why colors look different in other terminals (say, Hyper)

* Explaining that I have no control whatsoever over the user's screen background, that I can't even detect it reliably, and that it wouldn't be nice to do so anyway

* Animations, animations, animations. Why do they expect animations everywhere? How do I explain to them that if I open a URL in the browser it opens near-instantaneously and needs no animation, because the user wouldn't even see it (and then being told to just put a small delay before opening, so it would appear to happen... fast)?

* Decorations, decorations, decorations. I lost this battle, and the error-handling mechanism prints a weird symbol on every line of an error. I am actually stripping the decorations a few pieces at a time and intend to eventually get rid of them just by being a rebel and removing them kind of "accidentally" (especially when they get in the way).

* Lack of understanding of Unix software patterns (filter programs, pipes, redirection, etc. -- see The Art of Unix Programming) [and why I don't accept passwords passed as arguments, even though that is... soooo easy to use™! -- there's a password-prompt sketch after this list]

* No respect for spacing in a listing of entries: it is hard to defend that tabular data must be presented as tabular (even though it might get repetitive), or at least with easily parsable separators. One example is a proposal for listing services alongside their projects that would print the project name only on the first line for each service [assuming the project above for all the others]. This is a deal breaker for piping or filtering the output most of the time, and people just want me to ignore that, and feel disappointed with me when I do the right thing (ignore their opinion instead). There's a tabwriter sketch after this list.

* I was told to add heuristics to a status system to compensate for the lack of proper feedback about remote actions such as deploying/restarting/stopping services. This would never work well, for many reasons: concurrent calls, requests that simply took longer or in practice arrived in "the wrong order", and so on. It took more than six months until I got a decent API change that let me trust the status, and six more months for a more CLI-like approach, doing no more than one thing at once, to be agreed upon (instead of always trying to replicate the GUI's [blocking] behavior). A rough polling sketch closes out the examples below.
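
To make the terminal-capability and colors items concrete, here is a minimal sketch in Go of the kind of check I mean. It is illustrative only -- the names and fallbacks are made up, not our actual code -- but the idea is: plain text when stdout isn't a TTY, and otherwise only the 16 standard SGR colors, which render acceptably in basically every terminal:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/term"
    )

    const (
        sgrReset = "\x1b[0m"
        sgrRed   = "\x1b[31m" // one of the 16 standard colors; safe almost everywhere
    )

    // colorize wraps s in a color code only when stdout is an interactive
    // terminal and $TERM does not mark the terminal as colorless.
    func colorize(s, code string) string {
        if !term.IsTerminal(int(os.Stdout.Fd())) {
            return s // piped or redirected: plain text, easy to parse
        }
        if t := os.Getenv("TERM"); t == "" || t == "dumb" {
            return s
        }
        return code + s + sgrReset
    }

    func main() {
        fmt.Println(colorize("deploy failed", sgrRed))
    }

Note the absence of any 256-color or truecolor branch: since $TERM lies (see the WSL item above), the 16-color codes are the only thing I can emit with a clear conscience.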
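
On the passwords-as-arguments item: anything passed on the command line shows up in `ps` output for every user on the machine and lands in shell history, which is why the CLI prompts instead. A minimal sketch of such a prompt using golang.org/x/term (again illustrative, not our real code):

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/term"
    )

    func main() {
        // A no-echo prompt keeps the secret out of `ps` and history.
        fmt.Fprint(os.Stderr, "Password: ")
        pw, err := term.ReadPassword(int(os.Stdin.Fd()))
        fmt.Fprintln(os.Stderr)
        if err != nil {
            fmt.Fprintln(os.Stderr, "reading password:", err)
            os.Exit(1)
        }
        _ = pw // hand the secret to the API client from here
    }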
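
On the tabular listing item: Go's standard text/tabwriter already produces aligned, line-oriented columns, which is exactly what keeps a listing greppable. A sketch with made-up project/service names (the real fields differ):

    package main

    import (
        "fmt"
        "os"
        "text/tabwriter"
    )

    func main() {
        // Repeating the project on every row keeps the output
        // line-oriented: grep/awk never need state from rows above.
        rows := []struct{ Project, Service, Status string }{
            {"shop", "api", "up"},
            {"shop", "worker", "up"},
            {"blog", "api", "down"},
        }
        w := tabwriter.NewWriter(os.Stdout, 0, 4, 2, ' ', 0)
        fmt.Fprintln(w, "PROJECT\tSERVICE\tSTATUS")
        for _, r := range rows {
            fmt.Fprintf(w, "%s\t%s\t%s\n", r.Project, r.Service, r.Status)
        }
        w.Flush()
    }

With the project repeated on every row, a hypothetical `mycli list | awk '$1 == "shop"'` filters per line with no extra state; with the "only print it on the first line" proposal, it can't.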
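
And on the status heuristics item: once the API exposed a state I could trust, the CLI-like fix was simply to block on one action at a time and poll until it settles. A rough sketch with an entirely hypothetical endpoint and state names (the real API differs):

    package main

    import (
        "encoding/json"
        "fmt"
        "net/http"
        "os"
        "time"
    )

    // status is a hypothetical response shape, not the real API.
    type status struct {
        State string `json:"state"` // e.g. "pending", "running", "done", "error"
    }

    // waitFor polls a single action until it reaches a terminal state,
    // instead of juggling concurrent calls whose responses can arrive
    // out of order.
    func waitFor(url string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := http.Get(url)
            if err != nil {
                return err
            }
            var s status
            err = json.NewDecoder(resp.Body).Decode(&s)
            resp.Body.Close()
            if err != nil {
                return err
            }
            switch s.State {
            case "done":
                return nil
            case "error":
                return fmt.Errorf("remote action failed")
            }
            time.Sleep(2 * time.Second) // plain polling, no heuristics
        }
        return fmt.Errorf("timed out waiting for action")
    }

    func main() {
        if err := waitFor("https://api.example.com/v1/deploys/123", 5*time.Minute); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("deploy finished")
    }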

Many of the things above I managed to ignore, did in a way that was easily revertible, or just accepted for the sake of my well-being [and wished/planned/plan to move to a saner approach soon].


Yes, some things feel slower today, but not all programs, and it's not always an apples-to-apples comparison. For example, you should not compare programs that edit local files without any network involved with programs that use a network connection to send or receive data. You could just as well say that it took hours to get flight prices in 1983, since you had to go to a travel agency and ask for help; now we can do it from our phones. And there's another difference right there: there were a limited number of travel agencies and agents. Maybe there were only 100 users on the mainframe asking about flights at the same time; today it may be 10,000 users at the same time. But if we are talking about UIs, then it is also interesting to look at how many things you can show on the screen and change quickly.

I will go on a little rant here. About 5 years ago I wrote a little web application for phones. I think the Samsung Galaxy S4 was a common phone at that time, and it is a lot slower than the modern phones of 2017. We used Angular 1 (React had not been released publicly) and created a single-page application with offline storage in the browser using IndexedDB. The response time for most things was fast enough not to cause any real complaints from our users. Switching between views was fast, and changing the contents of lists was also quick. Actually saving to the server took some time (a few seconds, up to about 30s), and we had to build a progress bar to keep users calm, but they only had to use it once every 15 minutes or less. Some reasons this worked: the offline storage was much faster than the network (3G or 4G GSM) to the server, and a mobile screen can only hold a limited amount of data at one time. It was still HTML, CSS, JS and the DOM, all running on a slow phone. These things _are_ slower than some more native technologies, but you can get enough performance for many apps even with HTML and JS if you avoid networks or hide the latencies.

Another project, even earlier, used server-generated HTML and a relational DB. When we ran it, including the server, on developer laptops (much slower than today's laptops), the response times for simple save calls were below 100 ms. It was so fast that we feared users wouldn't understand that the data had been saved; we added a simple timestamp, shown when the save was done, to make it clearer. This was once again without a network (in our trials).

Some UIs are harder to get good performance from. A WYSIWYG word processor might have to reflow almost all of the text when something at the top of the screen changes, and games might have to repaint a lot of small items on the screen at once. For these types of systems HTML+CSS might be a bad fit, but for most enterprise and productivity applications it should be possible to get good enough performance today. We don't always get it, partly because we are optimizing for development time instead: developing an offline storage solution takes more time, and if you make many network calls in a way that freezes the UI, users will notice.

And as I said at the start, if you only have 100 users, the backend might be very responsive even if you don't optimize it much. If you have over 10,000 users at the same time, you might have to do a lot of optimization on the backend as well.


CPUs are faster, but they also have the NSA inside of them.

There was a point where more MHz stopped making a computer feel faster; it was multitasking, I reckon. At 100 MHz, a DOS application/game was blazingly fast. But also, nothing was bound to the network: it would just spin up a CD-ROM to get its data, and at 150 KB/s that was faster than most internet connections.




