I've not seen a place where this isn't broadly true, no matter where the business folks come from (top-tier B-school analysts, Bob's discount MBA emporium, doesn't matter) or how good the reputation of the firm they're at. This also extends to management and administration in the public sector—schools are rife with obvious pseudo-science bullshit, bad attempts to replicate results while skipping half the measures that were taken, etc., but dumb-ass superintendents (you would not believe it, seriously, "dumb-ass" was not chosen lightly) and principals eat it up.
[EDIT] which is to say I'm not at all surprised design "science" is full of BS, because it's sold to people who almost all suck at evaluating those kinds of things. Any "science" that largely exists to sell stuff to middle- and upper-management or "stakeholders" probably tends to be awful, because it doesn't need to be good.
I think this tradition of authoritarian bullshit has some traction in the modern world still. Which in part causes all the silly pompous pseudoscientific bullshit.
The scientific method, when applied rigorously (!), is deeply uneconomical in the vast majority of cases, because the market has already eliminated most glaring inefficiencies by pure trial and error. Any possible gains are likely marginal anyway.
Doing a bunch of half-assed and meaningless experiments just to slap the prestigious "science" label on something on the other hand is very economical. So that's what people do instead.
If you're on the outside of that subset, it turns out that not only is it easier just to slap the label on, but your ideal customer (relative to your psychology) doesn't really care to do much more than a simple label check anyway.
From zero to 'rm -rf science-budget' in one marketing exercise.
A sad truth but coming around to it can help us understand why scientists need to work on social policy and outreach messaging as well. Scientific values simply will not sell themselves the way we think they should, outside of our little sphere.
Fortunately our cultural messaging has vastly improved in this domain recently. Bridge personalities like Bill Nye are great examples of how this can work.
Live in that world as a struggling student scientist, juggling the prospects of a "marketable" thesis versus an ethical one as the debts pile up, staring down the barrel of your entire life amounting to being yet another mediocre drone, living on the streets, or not being a scientist at all.
The prospect of being completely broke, unable to afford rent, unable to afford food, being an embarrassment to friends and family, these factors exist and must surely inform the judgement and behaviours of students in a position to have a study funded.
I know of PhDs still struggling in their careers against the same thing, where an institute is more interested in publicity than the actual science.
The symptom may be half-assed and meaningless experiments, but that is not to say the individuals responsible aren't capable of, or willing to do, so much more if the appropriate supporting social structures existed to allow ethical scientific thinkers to exist, or if institutions stopped taking in students for the money and instead tested applicants on merit.
It's easy to point out the problem, but how can it be solved? To me, there need to be publicly funded institutes that have no profit motive and exist purely to take in the best academic minds. This might seem anti-capitalist, but as a model it is an investment in the future of a society, one calculated in quality rather than quantity.
On the other hand, it is often possible to profit unethically and sometimes that is the most expedient way. There is a cost to this, which is reputational. That's why it's important to call out bullshit.
The problem with research is that the business risk is high so the potential rewards need to be equally high. So there is an argument to be made for large entities (up to and including the state) to fund research, but there need to be checks and balances. The ones we have may not be perfect, but that doesn't mean they're inadequate. It's all just overhead as far as I am concerned.
The business relationship among people is becoming flatter instead of deeper. Tooling is going to help individuals achieve more independently. At least it's the reality I would like to live in.
> Any "science" that largely exists to sell stuff to middle- and upper-management or "stakeholders" probably tends to be awful, because it doesn't need to be good.
as long as it's reeaally expensive. that's really the only way to assess the accuracy and importance of these things.
Simple shit like p -> q, orders of magnitude, independent probabilities, partitioning, and the classic correlation/causation go completely over their heads.
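As a toy illustration of the first item in that list (my sketch, not anything from the thread): material implication is not symmetric, and a four-row truth table makes the classic p → q vs. q → p confusion obvious.

```python
# Material implication: "p -> q" is false only when p is true and q is false.
implies = lambda p, q: (not p) or q

# Enumerate all four truth assignments and compare p -> q with q -> p.
rows = [(p, q, implies(p, q), implies(q, p))
        for p in (True, False) for q in (True, False)]
for p, q, pq, qp in rows:
    print(p, q, pq, qp)
# The (False, True) row is the giveaway: p -> q holds there but q -> p does not,
# so affirming the consequent is a fallacy.
```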
At the risk of sounding arrogant or elitist, there's a reason the Platonic Academy inscribed "let none but geometers enter here". By and large, a mathematical education structures thought in a way that nothing else can.
One of my personal pet peeves: people seem unable to break concepts down into component parts that both (a) cover the entire problem and (b) don't overlap.
Interviews at top business schools are nothing but partitioning and mathematical analysis. Google “business school case interviews”. Modern business is analytical and quantitative. Over 50% of my classmates came from software development or engineering backgrounds. This mix is not at all unusual.
I think your reaction may be to the PHB types — who aren’t good at any of that but they have soft skills (aka people like them) so they get promoted. Honestly the biggest truism in business is that it’s better to be liked than to be good.
That's an incredibly rare thing. To date, the only systematic exception to the above rule I've encountered has been certain European schools (mostly French: HEC, ESSEC, etc.) that have a math-heavy prep-school curriculum to get in.
From what I've seen though, this is very much exceptional.
> C. The investment in our DNA leads to breakthrough innovation and allows us to move out of the traditional linear system and into the future
> BREATHTAKING is a strategy based on the evolution of 5000+ years of shared ideas in design philosophy creating an authentic Constitution of Design.
> B. Magnetic Fields: Magnetic fields exert forces on inner and outer surfaces of the Earth.
> B. Pepsi Energy Fields: Symmetrical energy fields are in balance.
> C. Magnetic Dynamics: Magnetic field are impacted by sun radiation and wind motion.
> C. The Pepsi Globe Dynamics: Emotive forces shape the gestalt of the brand identity.
Also, take a look at the deconstruction of the old Pepsi logos into arbitrary ellipses ("Perimeter Oscillations") on pages 8ff.
It gets increasingly surreal ("Light Path with Gravitational Pull" vs "Gravitational Pull of Pepsi", "Relativity of Space and Time" vs "Pepsi Proposition / Pepsi Aisle", difference between a "Pepsi Galaxy" and a "Pepsi Universe", ...) on the last pages.
And the sad thing is that everyone knows it's bullshit on some level. But I'm not paying 7 figures without getting handed that bullshit, and you're not charging 7 figures without being able to create it.
What I'm curious about is how much these design agencies recycle their stock bullshit for different clients. Just swap in different logos. Restyle with the new palette of pretentiously named colors. Apply the new commissioned font and voila, you have a rebranding document the C levels can feel chuffed about.
This is where I lost it.
And that's when they literally started building a "tiny brain, normal brain, enlightened brain, galaxy brain, universe brain" meme, out of freaking Pepsi logos.
It was both accurate and profound.
Unfortunately the parts that were accurate were trivial and the parts that were profound were incorrect.
Bullshit is what makes good video adverts memorable. It just has to be unique bullshit.
C. The Pepsi Globe Dynamics
Emotive forces shape the gestalt of the brand identity.
> The subliminal advertising involved with the Pepsi Globe logo is also extensive. The different logos and packaging designs are purported to represent the human body, rediscovery of the Vitruvian principles and their publication, Chinese art of placement and spatial arrangement and many other representations that may not seem clear or obvious from just a glance at a Pepsi Bottle. The most famous visual representation is the Pepsi Globe logo’s representation of The Earth. The swirling horizontal stripe running through the center of the globe is claimed to provide a visual representation of the earth’s constant movement around its own axis and around the sun. The stripe also represents a naturally occurring electric generator in fluid motion generating and sustaining the magnetic field of the Earth. This marketing has resulted in an extremely recognizable logo and an aid to a profitable venture.
Sure, it's something that looks new and good - considering how current icons have a rather simple style, to put it mildly - but that's not a technological advancement like introducing the Ribbon interface in Office 2007. And sadly, that's how MS, Apple and others are trying to portray all these barely significant UI changes - as features.
Don't get me wrong, good-looking UI is important, but I'd be happy if cosmetic changes would remain cosmetic changes and be treated as such. I can appreciate visual improvements and work by myself; I don't need, or want, to be instructed by marketing teams to feel "enthusiastic" about such changes.
They, literally, went full-on action movie for Office 2010.
"Oh yeah, looks like a modern version of our old logo - good! Seems like there's some golden-ratio-ey stuff behind it too which will have subliminal effects - bonus! Price is a bit steep but only exceeds our budget by 10% - don't wanna cheap out either, so let's take it!"
The same people chuckling at how nonsensical the design document is supposed to be (and it really isn't) are probably the same people who could talk your ear off about the esoteric particulars of a given programming language or management strategy, which are also, to some degree, subjective BS couched in jargon and seeming non sequiturs.
0 - https://youtu.be/RKXZ7t_RiOE
Per the last 'update' of Gene's:
"In 1884, meridian time personnel met
in Washington to change Earth time.
First words said was that only 1 day
could be used on Earth to not change
the 1 day bible. So they applied the 1
day and ignored the other 3 days.
The bible time was wrong then and it
proved wrong today. This a major lie
has so much evil feed from it's wrong.
No man on Earth has no belly-button,
it proves every believer on Earth a liar."
And just a couple of days ago there was a paper and discussion on "Bullshitters" on HN.
As a designer I've always felt that the larger and more important chunk of design work is purely intuitive. Analytics implies time-dependence which is counterproductive to design when included upfront.
The only time quantifiable metrics are used is when a design is married to user experience in the context of a user interface (in the end product). At that point, patterns and practices dictate the baseline from which feedback takes place. This is commonly referred to as a design system and is done on a larger horizontal scale across interfaces (web, mobile, print, and so on).
Company branding, medium | message, target audience, color schemes/themes and other aspects of functional design are comprised more of intuition than raw analytics, in my opinion. Apple provides a great example of design choices that focus on the human aspect first. 
Added to that, the fact that market share is happily split amongst vastly different UI approaches is testament to the non-linear nature of design.
While designers are plenty, good design is not. Science based models as cited in the article are there to bring up the rear in a standardized manner but don't provide avenues to true "novelty" that defines great design. Just my two cents.
It's a highly trained intuition. From watching artists work, they have a tremendous amount of experience with color, form, different materials and composing those visual elements, and then with understanding the emotional impact they have on an audience.
Quite true. However there is some value in applying rational analysis to design, and doing it can help to be a good professional:
If your design is (partially) based on rules (like following exact ratios in proportions, applying color palette increments and color complements...), these rules create a design space that you can explore by changing the parameters to those rules in a systematic way.
This exploration allows you to generate * a lot * of slightly different possible solutions for the design, many of which you wouldn't have created spontaneously. Ultimately, you select the right choice by intuition; but seeing a lot of possibilities can help you to find details that you wouldn't have considered otherwise.
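A minimal sketch of that kind of systematic exploration (the rule names and parameter values here are my own illustrative assumptions, not anyone's actual design system): treat a proportion ratio and a hue increment as the two rules, then enumerate every combination to get a grid of candidate variants to review by eye.

```python
import colorsys
from itertools import product

# Candidate values for two hypothetical design rules.
RATIOS = [1.414, 1.5, 1.618]      # proportion ratios for a size scale
HUE_STEPS = [1/12, 1/8, 1/6]      # increments around the color wheel

def variant(base_width, base_hue, ratio, hue_step, n=3):
    """Derive n sizes and n hex colors from one (ratio, hue_step) pair."""
    sizes = [round(base_width * ratio**i, 1) for i in range(n)]
    hues = [(base_hue + i * hue_step) % 1.0 for i in range(n)]
    colors = ["#%02x%02x%02x" % tuple(int(c * 255)
              for c in colorsys.hsv_to_rgb(h, 0.6, 0.9)) for h in hues]
    return sizes, colors

# Systematic exploration: every combination of the two rules.
variants = [variant(100, 0.58, r, s) for r, s in product(RATIOS, HUE_STEPS)]
print(len(variants))  # 9 variants, most of which you'd never sketch by hand
```

The point isn't that any of these parameters is "correct"; it's that the rules define a space you can sweep mechanically before intuition makes the final pick.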
After all, if it's just your opinion then everyone has one of those, and why is yours so much better? Just because you've spent decades working on and obsessing about design, that doesn't make your opinion worth anything more, right?
But if it's backed up with SCIENCE, well that's totally different. You can't argue with SCIENCE, can you?
"It reflects the highest positivity ratio (observed ratio = 5.6) and the broadest range of inquiry and advocacy. It is also the most generative and flexible. Mathematically, its trajectory in phase space never duplicates itself, representing maximal degrees of freedom and behavioral flexibility. In the terms of physics and mathematics, this is a chaotic attractor."
They needed someone to write a paper to point out this was pseudoscientific BS? Not only that, but the paper was cited over 1000 times?!
Indeed. We seem to be increasingly awash with evidence that the standards of rigour in academic journals across a number of fields are... quite poor.
Peter Boghossian, James Lindsay and Helen Pluckrose set out to demonstrate this in the field of grievance studies by submitting and, in some cases, successfully publishing a series of fake papers:
I mean, in some sense this is highly entertaining for the rest of us, but it is also utterly horrifying.
No researcher has the time to read all the papers published in their field.
There are no penalties for publishing bad science, and no rewards for debunking it.
Recent moves to stop p-hacking by removing the need to show statistical significance will (if they get through) only make this worse.
Something has to change.
With a bit of mining, you could probably identify other likely BS papers by looking at what they cite. Has that been done before?
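A crude sketch of what that mining could look like (all names and data here are made up for illustration): given a citation graph, score each paper by the fraction of its references that land on known-debunked work, then rank.

```python
# Known-debunked paper ids (hypothetical seed set).
DEBUNKED = {"losada2004", "fredrickson2005"}

# Toy citation graph: paper id -> list of cited paper ids.
citations = {
    "designpaper": ["losada2004", "norman1988", "fredrickson2005"],
    "solidpaper": ["norman1988", "card1983"],
}

def bs_score(paper, graph, debunked=DEBUNKED):
    """Fraction of a paper's references that are in the debunked set."""
    refs = graph.get(paper, [])
    if not refs:
        return 0.0
    return sum(r in debunked for r in refs) / len(refs)

# Rank papers by how much debunked work they lean on.
suspects = sorted(citations, key=lambda p: bs_score(p, citations), reverse=True)
print(suspects[0], round(bs_score(suspects[0], citations), 2))
```

A real version would need retraction databases and some care (citing a bad paper to criticize it shouldn't count against you), but the ranking mechanic is this simple.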
Someone at Paramount is kicking themselves for not putting that in a Star Trek script.
I didn't expect outright gibberish.
Look at the other paths that visual designers could take, from "pure art" illustration to a much "harder science" like architecture; UI/UX sits pretty square in the middle. So many of the people I've worked with want it desperately to be both art and science. I'm sure we've all had conversations with a UX/UI designer who will ping-pong between space "feeling better" with different padding, and something like Miller's Law, or perhaps a misapplied statistical inference from Optimizely.
I'd love to see UI/UX split into a more art focused design field and a more science focused HCI field. This might stop the navel-gazing and impulse toward faux-science, and publications like this: https://www.invisionapp.com/inside-design/why-designers-shou...
Unless I'm reading it wrong, this is the thrust of the facts:
* The Android design team describes their design process.
* In their description, they cite a paper that suggests giving people at least 3 positive experiences for every negative experience they have.
* That paper was debunked.
How does that translate to "much of the 'science' used in design is bullshit"? That bogus paper they cited doesn't affect the actual content of Android's design; it just influenced the team's design process. How is that bad? How does that discredit all the other things they talk about? The FastCompany article linked from this blog post says:
> the CliffsNotes version is that Google creates design mantras from the point of view of the user, like “keep it brief,” “delight me in surprising ways,” and “it’s not my fault.” Each time an Android feature lives up to these expectations, they get a single marble in the good emotion jar. But every time they fail, that bad feature produces three marbles in the bad emotion jar. The marbles illustrate that bad ideas stack up quickly.
Even if this heuristic isn't scientifically proven, does it really result in worse UX than if you don't use the heuristic? I just don't get the vitriol.
The habit of relying on dubious techniques is under question here, as is our collective ability to find better guidance.
Pretty much sums up the reproducibility problem of studies in the field of Psychology.
My intuition tells me that designers wish they could do that without having their designs rejected. If the absence of bullshit empiricism is punished, who can complain when people start providing it?
Why does that blog have a load screen that takes about 15 seconds to disappear? If you open up debug tools and delete the overlay, the blog entry is perfectly readable underneath it. There's no network activity during those 15 seconds of "loading". It's just ... bullshit.
> Indeed, it would mean that the designers would have to add a bad experience in for every three good ones in order to get the positivity ratio right!
Which appears to be incorrect as the positivity ratio was supposed to be a minimum as according to that same article. Presumably having more positive experiences doesn't make the situation somehow worse.
Nevertheless, maybe it's an interesting or even useful heuristic that you could get away with one bad moment for every three good moments you provide, even if it is about as scientific as an analysis of unicorns on a flat earth.
The MS Ribbon comes to mind. I still don't think it's notably better than the original toolbar. But the problem is that shuffling everything around confused many for roughly a year as they had to re-learn where everything is.
MS traded one randomness for a different randomness. Maybe they were thinking a 5% to 10% improvement is worth it over the long run even if they piss off existing customers during the learning curve. So, either they are idiots, or they actually sat around and did "piss off accounting": intentional jerks who willingly sock existing customers in exchange for future new ones. So we have the idiots theory and the jerks theory.
Another thing: redundancy is not necessarily bad in UIs. Some get caught up in the "keep the tree clean" mantra. However, having more than one way to do or find an option often improves the UI. Perhaps an ideally designed UI could avoid the need for repetition, but most designers are not good enough (or are constrained by other factors), so they should indeed fall back on some redundancy. Rules for ideal conditions (such as a great UI) often don't apply to typical conditions (an average UI).
For bigger applications, perhaps put all the options in a table (or tables) and let users search for options Google-style. That may be faster than digging around in menu trees. You still have menu trees, but ultimately they are tied to the table. Use synonyms and allow bookmarks to improve look-ups. I'd like to see more experiments in Table Oriented Programming. To me, that looks like the future. OOP can't handle complex relationships well.
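A tiny sketch of the "search the options table" idea (the option names, paths, and synonyms are invented for illustration): each row carries its menu path plus synonyms, and a query matches against all of them.

```python
# Toy options table: each entry knows its menu path and some synonyms.
OPTIONS = [
    {"name": "Line spacing", "path": "Format > Paragraph",
     "synonyms": ["leading", "line height"]},
    {"name": "Page color", "path": "Design > Page Background",
     "synonyms": ["background colour"]},
    {"name": "Track changes", "path": "Review > Tracking",
     "synonyms": ["revision marks"]},
]

def search(query, options=OPTIONS):
    """Substring match against option names and their synonyms."""
    q = query.lower()
    return [o for o in options
            if q in o["name"].lower()
            or any(q in s.lower() for s in o["synonyms"])]

hits = search("leading")
print([h["path"] for h in hits])  # found via a synonym, menu path comes along
```

The menu tree stays; it's just one more view over the same table, which is the table-oriented point.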
1. Lots of options visible up front. Instead of reading tons of text, or guessing at hieroglyphics, you could find the tool to accomplish your desired task visually - if you want to change text color, look for the color picker.
2. Big improvement to mouse navigation. Navigating nested menus is a bit like playing Operation - you position your cursor on the item you want, then slide it horizontally to the next menu that expanded. But if you drift too far vertically, which was easy to do if you had a disability or a crappy trackpad, you have to start again. With the ribbon, there was no destructive effect to moving the mouse around without clicking. You could always take the shortest path to whatever you wanted to click on.
3. Big improvement to keyboard navigation. Tap the alt key, and every affordance in the ribbon displays its keyboard shortcut. This allowed the same exploratory navigation as the mouse, but in a way that built muscle memory and avoided RSI.
Of course it was a large change, but the ribbon was an amazing improvement.
Much of the "science" used to support business decisions is bullshit, made up long after a decision was made by someone powerful.
Now it seems that the purpose of the focus group, rather than gathering information about consumer response to the snack-cake, is to improve the design of future focus groups. Schmidt, however, informs the reader, through glimpses we are given of his thoughts during the session, that the focus groups have no material impact. Rather than using the collected data to make inferences about consumer preferences, it is desirable to end with a nebulous analysis that could support one outcome or another, depending on which direction the client company is already planning to move in. The focus groups can only confirm a decision which has already been made; a deviation from this will result in the termination of the marketing firm.
"The thing is, you’ve been listening to the wrong expert. You need to listen to the right expert. And you need to know what an expert is going to advise you before he advises you."
Which is about politics, but I think high level business decisions are often as political as anything else.
As for file management, that's one area where Apple had it right and then buggered it up. Everyone agrees that spatial Finder was a revolution in working with files visually, then Apple changed to navigational Finder for OS X. Boo.
I'd love to hold a contest where we test people's productivity on each platform because whenever I watch someone who "likes it better than anything else" and supposedly knows even how to use a Mac, they're obviously incompetent when you watch them try to use it. It's absolutely hilarious watching someone trying to quickly find the application window that they want to switch to on a Mac, especially if they've full-screened some windows. (You aren't even allowed to have a real maximize function that always works the way you expect it to in every application lmao!)
A culture of evidence and objectivity, well exhibited by this very blog post, has had two implications: (I) actual knowledge that was not based on explicit, technically appealing principles has been demoted (witness the come-and-go of architecture, for example), and (II) people in fields where explicit and technically appealing principles didn't exist fell for "objectivity strawmen" or, where they were able to socially impose their deeper inarticulable knowledge, used strawmen to get the culture of objectivity off their backs.
What’s perverse about this is that people who are cynical about the socially inflated need for objectivity end up making other, more naive, people buy bullshit like the 3:1 ratio; and people who debunk bullshit like this blogger further perpetuate the misconception that insufficient objectivity and bibliographical references are a relevant problem, even while going about the motions of disavowing the role of science (a heightened mode of objectivity discourse) on lowly design-engineering.
Is design even an engineering discipline? At its core? Is the world better off when Google does some ersatz research and we start trusting "Material Design" more than the deep civilizational background that artists (industrial or not) are supposed to have at their disposal?
A hunch isn't always a trustworthy source, but too often it is taken as an always untrustworthy source.
When the icons are actually used in the application, they look so... bland. They also look mostly similar to each other, basically being rectangular outlines with rounded corners, that it takes slightly longer to recognise which one is which, compared to the more traditional, colourful icons. (I wonder if colour blind people have always felt this way)
I should've prefaced that our application uses Qt5 Widgets as the UI, with hardly any styling. That may have contributed to the icons looking so out of place. If we had used a more metro-looking theme (like the ones used in Adobe PS) they would be a better fit. But our main users are engineers and scientists, who I assume would prefer more traditional-looking UI (see for example Paraview).
Meanwhile, I have no problems with the same icons being used on my smartphone. I guess this is because on smartphones, icons are larger and more easily distinguished even if they're monochrome. And they can afford to be larger because they're mostly used in main menus or swipe menus. In contrast, in desktop applications they always appear on screen (in the toolbar), and monitors are usually further from your eyes compared to smartphones, and you can't afford to sacrifice precious screen estate just to accommodate distinguishable icons.
And was the BS paper in question really used to design the Android UI rather than justify the design decisions? (The article's author basically makes this same point.)
He's the guy who caused an uproar in academic circles (in mainstream media too) with his notorious 1996 hoax in which he submitted a paper loaded with comically absurd assertions to a postmodern journal called Social Text and it got published. Google "Sokal hoax" for a good laugh.