Our overall computing environment has acquired a distinct patronizing/infantilizing feel over the last decade. I don't think it's only visual -- maybe it isn't visual at all; I'm not sure.
Okay, then how about "pretend you're a human" as opposed to the lizard people who design and write computer software. Spin it however you like. You still have to make the distinction between techies and "the rest of us". Systems have been designed that do not make this distinction and assume the user will be able to figure everything out, the most notorious of which is Unix -- when unadorned with Apple treacle, arguably one of the most user-hostile systems in common use. Normies perceive Unix as being more arrogant than the systems which "condescend" to them. It seems to say "Oh, you don't belong to the super-secret cabal of users who know these arcane commands? Fuck you, then!"
> You still have to make the distinction between techies and "the rest of us"
The language you use for this is important because it shapes the way you think about the difference. The way it is often phrased is in the form of "we're special, better, smarter people than those dumb people who have no hope of understanding the arcane magicks we are naturally attuned to". Which is of course bull. We have specialized knowledge and familiarity from spending years working with this stuff. That's it.
> [UNIX]... seems to say "Oh, you don't belong to the super-secret cabal of users who know these arcane commands? Fuck you, then!"
It seems to say that because that's exactly what UNIX says. They don't even name commands sensibly, not even in 2019. Discoverability basically doesn't exist.
> Systems have been designed that ... assume the user will be able to figure everything out, [such as] Unix -- ... arguably one of the most user-hostile systems in common use.
Um, you do know that Unix used to come with user manuals? Like, oh I dunno, the vast majority of software in the 1980s and early 1990s? The designers of Unix and comparable systems were perfectly aware that command-line incantations cannot be figured out simply by sitting at the system and playing with it; this is very much not what it was designed for!
If discoverability by novice users is a priority, then that is an argument for menu-driven, interactive interfaces and UIs - which could well be built on top of something like UNIX. But documentation is always going to be important.
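To make that concrete: a discoverable front end doesn't have to replace the underlying system; a few lines of script can sit on top of it and translate plain-language choices into the arcane commands. A toy sketch, where the menu labels and the particular commands are purely illustrative:

    #!/usr/bin/env python3
    """Toy menu-driven front end over a few standard Unix commands."""
    import subprocess

    # Plain-language descriptions mapped to the commands they run.
    ACTIONS = [
        ("Show the files in this folder",    ["ls", "-l"]),
        ("Show how much disk space is free", ["df", "-h"]),
        ("Show who is logged in",            ["who"]),
    ]

    def main():
        for i, (label, _) in enumerate(ACTIONS, start=1):
            print(f"{i}. {label}")
        choice = input("Pick a number (or press Enter to quit): ").strip()
        if choice.isdigit() and 1 <= int(choice) <= len(ACTIONS):
            _, cmd = ACTIONS[int(choice) - 1]
            subprocess.run(cmd)  # the cryptic name stays hidden behind the label

    if __name__ == "__main__":
        main()

The point isn't that this is good UI; it's that discoverability is a layer you can add on top, while the reference documentation underneath stays as important as ever.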
Unix manuals are reference manuals, not training manuals. To learn something from them, first you already need to have a very good idea of what you're looking for.
The kind of documentation that Unix comes with is of little use to people who don't already have some specific training in computing disciplines.
I agree that a message like "This program attempted to do something the system won't allow" would be far more useful, along with a "more info" button with a more detailed error description behind it. It sure beats "oops" and "something went wrong". But people tend to forget what a computer said or did and remember how it made them feel. So the market pressure is toward mollycoddling error messages and away from informative ones.
(Also, the word "oops" was chosen because it connotes "something went wrong and it's our fault" -- probably chosen to avoid implying that it was the user's fault. Really ingenious, again, if your goal is to keep users comfortable rather than fully informing them.)
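To put the "more info" idea in concrete terms, here's a minimal sketch of a two-layer error report -- a calm, specific summary up front, with the full detail available on request. The wording is just illustrative, and the console prompt stands in for a GUI "More info" button:

    import traceback

    def report_error(exc):
        # Layer 1: a plain statement of what the system refused to do.
        print("This program attempted to do something the system won't allow.")
        # Layer 2: the full technical detail, shown only on request
        # (a GUI would hide this behind a "More info" button).
        if input("Show technical details? [y/N] ").strip().lower() == "y":
            traceback.print_exception(type(exc), exc, exc.__traceback__)

    try:
        open("/etc/shadow").read()  # typically fails for a non-root user
    except PermissionError as exc:
        report_error(exc)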
With the crucial difference that the somewhat funny message is followed by an actual, detailed description of what happened instead of "something went wrong".
Linux has had everything for ages, and that's kind of its problem, too. Everything that fits in one bin will have a counterpart somewhere that fits in the opposite bin.
My opinion is that the overall demeanor of desktop user interfaces has steadily been overrun by that of mobile UIs. The "mobile first" mantra has taken a toll on desktop computing, and it's difficult to measure because the desktop computing of a 2019 unmolested by that infantilizing influence (as you rightly put it) cannot be seen from the timeline of reality.
But most of us who have been around for a while can imagine a modern computing environment that still treats desktop computing as desktop computing (and not just large form factor mobile computing).
They're a half-realized idea, though. The value of tiling WMs is that they allow you to compose what is essentially your own workflow dashboard and save it. What needs to happen is to complete the idea: entirely composable GUI interfaces.
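For what it's worth, i3 already gestures at this with its saveable layout trees (i3-save-tree / append_layout); the general idea is that a workspace is just data you can write out and re-apply. A rough sketch of the concept -- the schema here is invented, not any real WM's format:

    import json

    # A "workflow dashboard" described as plain data (invented schema).
    workspace = {
        "name": "writing",
        "split": "horizontal",
        "panes": [
            {"app": "editor", "ratio": 0.6},
            {"split": "vertical", "ratio": 0.4, "panes": [
                {"app": "browser", "ratio": 0.7},
                {"app": "notes", "ratio": 0.3},
            ]},
        ],
    }

    # Saving the composition is just serializing it...
    with open("writing-layout.json", "w") as f:
        json.dump(workspace, f, indent=2)

    # ...and restoring it is reading it back and asking the environment to rebuild it.
    with open("writing-layout.json") as f:
        restored = json.load(f)
    print(restored["name"], "->", len(restored["panes"]), "top-level panes")

A fully composable GUI would push the same idea past window geometry, down into the contents and wiring of the panes themselves.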
I'd argue that the desire to have windows side-by-side when multitasking is far more common than the desire to have them overlap and hide one another. Certainly a lot of what I do on a computer is development, but tiling makes composing emails, image editing, writing, and web browsing less painful too. Floating windows seem to prioritize a metaphor over usability.
The widespread shortcut/gesture for making a window full height and half-screen width was a good middle ground for me.
Tiling WMs (which I tried 10 years ago?) would always break on some programs (say Gimp), so then you had to run that program in "floating mode", and it's already too much overhead for me...
Nah, CWM is better. Floating, undecorated env with a menu switcher and tags everywhere. Just open a FEW windows per tag, learn to context-switch LESS, that's it -- keep a KISS approach ;).
We're in the Fisher-Price era: pruning advanced features and providing strong visual cues in the form of bright colors are still all the rage.
Who cares about having a file explorer on their mobile device? Who needs advanced networking options on their laptop when they're just using coffeeshop wifi? It'll probably get more and more segmented.
The UI testers for Windows 95 found that people were baffled by hierarchical file systems, even given the conceit of calling directories "folders" (which I found to be infantilizing and infuriating). The confusion and rage provoked by error messages intended to be specific and somewhat meaningful has become a pop-culture meme. ("PC LOAD LETTER? What the fuck does that mean?!")
I've recently had the fortune of talking at length with my mom about her past, and one thing she brought up was how she felt when my dad brought that first desktop computer into the house. To her, it was kind of like a typewriter (which she understood), and kind of like a television (which she also understood). You type things, and they appear on the screen, but -- and this is the spooky bit -- other things may appear on the screen that you never typed. It's something she got used to quickly enough, but never totally came to grips with.
I think most people -- even very smart people -- are like that. They don't know how to deal with a machine that works semi-autonomously, in ways that don't obviously correspond with their input; they don't know how to form an internal model of how it works, nor how to engage with the machine transactionally in order to operate it and complete a task ("if I do A, the machine's internal state will become B and I can expect its future behavior to look like C"). This comes naturally to us, because we're techies and this is what we do. Some people can sit at a piano and play it like nothing. I can't!
The insight of the GUI was to draw a representation of the machine's internal state (or a highly simplified model of it) to the screen in terms that humans readily understand, along with the available options for a human response (in the form of buttons and pull-down menus). Early GUIs prioritized mapping machine models onto aspects of the real world, leading to things like the spatial Finder, which presented the file system in such a way that we could use our instincts for finding things in real space to navigate it. This approach gets you some leverage, but there are limits to how far you can go with it, and as time went on, we ran harder and harder against those limits. Typical office users may have fared okay, but then computers started to enter the home in a big way AND started to be networked in a big way, leading to a whole new base of inexperienced users -- people who might otherwise never have touched a computer in their daily lives -- being confronted with an overwhelming tidal wave of possibilities. They became baffled, mystified, and frustrated by even the easier-to-use, Windows 9x-era interfaces we had. And then, a decade later, smartphones created a whole new base of confused users. So the designers of today, having exhausted all the good ideas for solving the problem, resort to the UI equivalent of shouting at a deaf person: dumbing down the UI, removing elements considered too distracting, enlarging and spacing out the ones that remain, and replacing specific error messages with meaningless but inoffensive blobs of text ("Something went wrong", "There was a problem", etc.).
Even more maddeningly, some of these changes were inspired by corporate communications. Some of these new error messages ("We're sorry, but...") resemble the old broadcast-TV error message of "We are experiencing technical difficulties. Please stand by." But the thing you have to understand is, this sort of communication works on normies. They don't need specific details of what went wrong, what they need is to be reassured that everything, in fact, will be okay. From an appealing-to-normies standpoint, "We are experiencing technical difficulties" would have been a vast improvement over a common Windows 9x error message -- "This program has performed an illegal operation and will be shut down." To a normie, "illegal" means criminal! The Feds put people in prison for a long time for computer crime; imagine the panic that would set in if you, knowing nothing about how a computer works, were suddenly told that it had done something illegal!
So really, UI designers are just prioritizing soothing users over giving them actionable information and fine-grained control. The next revolution in UI design will be in making users well informed and capable without alarming them. I'd prefer that everybody toughen up a little and that a basic understanding of how these machines work become part of our civilization's literacy requirements, but that's nearly impossible to achieve given current market forces.
GUIs represent a machine’s internal state, but that representation is often misleading, especially to users who take it literally.
Take object persistence. It’s innate to assume that objects don’t go away simply because we can’t see them. Documents don’t vanish in real life simply because you stop looking at them.
Many people don’t understand why a document on a computer screen can vanish, because they don’t understand that that document has to be assembled from data and code every time it’s opened. They don’t understand why it should look different in a different version of Word (or worse, in some other program), because objects shouldn’t change when you view them somewhere else.
They don’t understand why you can’t just put a Word document in an email, or a website, or in ‘the cloud’ and edit it in-place. To many people, the functionality of the editing is inherently in the document (not the system), and they don’t understand that, without the system, it’s just a series of bytes with no inherent meaning or functionality.
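This is easy to demonstrate. A .docx on disk is just a zip archive full of XML; nothing in those bytes edits or renders itself. (example.docx below is a placeholder -- point it at any real Word file.)

    import zipfile

    # The first bytes are a zip signature, not anything "document"-like.
    with open("example.docx", "rb") as f:
        print(f.read(4))  # b'PK\x03\x04'

    with zipfile.ZipFile("example.docx") as z:
        print(z.namelist()[:5])            # e.g. ['[Content_Types].xml', 'word/document.xml', ...]
        xml = z.read("word/document.xml")  # the text, as raw XML bytes
    print(xml[:80])  # no fonts, no cursor, no editing -- all of that lives in Word, not here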
> GUIs represent a machine’s internal state, but that representation is often misleading, especially to users who take it literally.
And that's largely the fault of the developers, since they build on layers upon layers of utility libraries which are not exposed to the user but inevitably pop up in the form of a broken metaphor or an unintelligible error message.
User-facing systems should be defined around powerful data & workflow metaphors, with all the layers in the system built around supporting those metaphors in coherent ways.
There is a tradition of people trying to build user systems around simple concepts that are easy to combine (starting with the Memex, then Smalltalk, HyperCard, and nowadays mobile OSs). But there's always been a great deal of friction in adopting them:
- first because their experimental nature can't compete with the more polished nature of commercial systems based on legacy conceptual metaphors;
- and second, because up until recently, end-user hardware was not powerful enough to meet the complex graphical and computational demands of the heavy environments these novel interfaces require.
Now that computers are powerful enough to build novel experimental interfaces on top of all the legacy libraries required to run generic hardware, we're again starting to see a lot of experimentation with those system-encompassing alternative metaphors for interaction.
I did mention that GUIs have limitations in how accurately they can represent machine state. You've done a nice job in elucidating some of these limitations.
> They don’t understand why you can’t just put a Word document in an email, or a website, or in ‘the cloud’ and edit it in-place. To many people, the functionality of the editing is inherently in the document (not the system), and they don’t understand that, without the system, it’s just a series of bytes with no inherent meaning or functionality.