The other day I was returning some umbrellas to a Target. I had bought them online and picked them up at a different store. When I tried to return them, the nice old woman at the customer service counter had trouble processing the return because Target's system would not accept inventory with such circuitous provenance. We were both intelligent humans, I had a valid receipt, and in another era she could have just taken the umbrellas and the issue would have been resolved. But today we have software which contains all the rules, and often these rules extend out into the world and begin to govern our lives.
My example is silly, but think about how often tech restricts you versus how often it frees you. People reading this on HN might feel the answer is about even, but most people not in tech would probably have a very different answer.
I think you're looking back with rose-tinted glasses. Making returns in previous eras was very difficult. It was very much caveat emptor or make an enormous fuss.
I agree. I remember 20 years ago when returns were much more of a hassle.
For example, at Home Depot in the 1990s, to return something, you had to fill out a written form with a pen. The cumbersome pen & paper dance was repeated by the cashier as she had to analyze the receipt and manually circle which items were being returned. Today, their POS system just scans the UPC bar codes and immediately puts the refund amount back on the credit card. Literally 30 seconds instead of 5 minutes.
Also, if I lost my receipt, the cashier just asks for the card number I used. They can then find the transaction in their database and issue a refund back to the card. Container Store can also do this, so I assume many retailers can do it. This convenience is all made possible by modern POS systems. Previously, the stores had a strict "no return without a receipt" policy.
Overall, returns are easier today, not harder.
I had budding criminal friends in high school who were no doubt partially responsible for why it became so difficult, because they were exploiting the easy returns for personal gain.
They would buy 56K USRobotics modems from Best Buy and return the boxes containing used 14K modems. Someone had found a large quantity of disused USR 14K modems, I think it was from an office building dumpster. Weekends were spent doing this across dozens of stores at scale, I never heard about any of them getting caught.
Back then your average retail employee wouldn't know a modem from a sound card. All they verified was that a circuit board computer thing was in the box.
On average, absolutely.
In the long tail, though, it's rare to maintain a way to override software blocks, perhaps because doing so is prohibitively costly. Yet the need for overrides exists.
I think to some extent that was why the early Internet felt free. Not because it was necessarily that free, but because people of different views could agree on technology as something empowering. Like this passage from the famous Hacker's Manifesto:
"I made a discovery today. I found a computer. Wait a second, this is cool. It does what I want it to. If it makes a mistake, it's because I screwed it up. Not because it doesn't like me..."
Today it feels like technology doesn't like me. The decisions were made by someone else "because that is just how technology works". Meanwhile, whoever made those decisions is usually making a lot of money.
Now, if you want to blame technology for making it easier for pointy-headed assholes to micromanage every single human interaction? Maybe.
But computers don't decide to restrict people, and history has shown us that bureaucratic people have wanted this sort of petty power over others since long before we could reify bureaucracy into business logic.
If you believe that our mistake was in listening to the pointy-headed assholes, I'd love to know where I can work and not do so.
Paperwork's dark forest would be an equally applicable headline and yet...
Middle management doesn't decide it any more, or less, than the engineers do.
They order a system, the engineers build it, nobody, including the engineers, has a perfect understanding of the problems that the system is supposed to solve, so it doesn't 100% map to the real world. Nobody understands which parts of the system don't map well to the real world, so they don't design a process for dealing with the system's shortcomings.
The responsibility for this failure is collective.
Do you really think the Product Owner at Target didn't think someone might buy something at one store and need to bring it back to another store? Or did they just choose to make this more difficult?
I've got a few friends who are unfortunate enough to work in call centers for a large satellite company. They are constantly forced to treat customers like crap and, if the customer complains, to put the blame on the system. Not the managers/executives who came up with the rules that the customer is having trouble with, but the computer system. It is used as a shield/excuse for shitty policies.
It seems like you're saying the system must be adhered to. Otherwise why would the system not matching reality matter? That the system must be adhered to is not a given, and is in fact the exact problem being created by middle managers.
I have another example: here in Romania, going to your doctor for your monthly prescription requires the doctor to input something into a computer, and if the system acts up you can end up wasting 30+ minutes waiting.
The IT system is not optimized for the doctor's or patient's benefit, and it should have some fallback mode to pen and paper (if there is such a fallback mode, the doctor does not use it for some reason).
I've found that this question aptly describes the problem you had. This really isn't an isolated issue either, it's everywhere. Modern design is not user-oriented; it's developer-oriented.
This was already happening 100 years ago when workers were 'deskilled' to operate manufacturing machines efficiently, i.e. turning the worker into a 'trained gorilla' under so-called scientific management.
It happens today when you talk to Google in a way that the search engine understands, not the way humans understand things, and it happens every time we use a technical device to get a task done. And the conflict here is not between developer and user, because developers are affected by this as well. A software developer adopts the limits of the compiler and the language she uses; the compiler does not adapt to the person. So tech workers are really just a rung up on the same ladder.
People in the software world were not entirely oblivious to this. Systems like Smalltalk were designed with the idea of turning human machine interaction into something more organic. Sadly, this idea seems to have been abandoned.
In general, I'd argue that this is not so much 'de-skilling' as 'learning how to use tools'.
I mean, I'm sure that before shovels, people were way better at digging with their hands. Before bows, I'm sure we were way better at chasing down animals through strength and endurance.
Sometimes there are knock-on effects that aren't so good; cars may be making us less fit. (I personally think this is largely a problem of preferences; most people prefer a density level that makes public transit not so practical, and are willing to pay a huge price for this in terms of lost time, traffic deaths, obesity, increased housing costs, etc...)
But the point is that these new tools usually make us better at the thing we're trying to do.
When I started working on cars in the late '90s, all I had were the manuals and exploded parts diagrams. It is difficult to overstate how much easier a repair becomes now that I can type a few words in and get a video of someone performing the exact repair I need on the exact same model of car. As a shadetree mechanic who isn't an expert with the exploded parts diagrams, this is a huge game changer.
I've even used the youtube to fix a laptop, something you'd expect me to be good at. (I'm no longer a professional hardware tech, but I was never a professional mechanic; go back far enough and I did fix laptops and other computer hardware for money.) A friend had one of those fancy, thin Sony laptops; they tried to replace a keyboard and it wouldn't boot. They then took it to a repair place, who couldn't figure it out.
The exploded parts diagram was impossible. But I looked up on youtube, and found a detailed teardown and assembly. I followed along with the heavily accented voice, and the laptop booted up like new.
You could argue that in a bygone era, I would be more incented to figure out how to go from those line drawings to actual parts, to learn how to read those exploded parts diagrams. And you are right, of course; but I didn't, and because of modern tools, I'm better at fixing things than I would have been.
Type systems restrict me every time :)
Heck, your example isn't even unusual for a chain: customer returning item - item originally purchased online - origin store different from return store. Simplified, the only thing that matters is "origin store different from return store". Get that right and it fixes a lot.
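That simplified rule is trivial to state in code. A minimal sketch (the function name and store IDs are invented for illustration, not from any real POS system):

```python
# Hypothetical sketch of the simplified rule: a return is "cross-store"
# whenever the origin store differs from the return store, regardless of
# whether the purchase was made online or in person.

def is_cross_store_return(origin_store_id: str, return_store_id: str) -> bool:
    """True when the item is being returned somewhere other than
    where its inventory record originated."""
    return origin_store_id != return_store_id

# An online order fulfilled by store 1042 and returned at store 2210
# is just another cross-store return under this rule.
print(is_cross_store_return("1042", "2210"))  # True
print(is_cross_store_return("1042", "1042"))  # False
```

Handle that one case correctly and the online-order variant comes along for free.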
Locally we have a couple of large hardware store franchises that have displaced the smaller independent hardware stores, and it's the same everywhere. I don't believe there is some greedy Scrooge who plotted the removal of the analog block and tackle systems in order to force a higher expense on either individual pulleys (for those who feel they need an actual B&T, DIY if necessary) or on the electromotorized winch (for very heavy loads or faster work). What I do believe is that at some point inventory software was marketed to them, with machine learning to optimize profit, and they blindly followed what the algorithm observed. Perhaps some of the shops in the franchise were temporarily out of B&Ts while they still sold them, whereas others had them in inventory, upon which those who needed to raise a heavy load right then decided to buy either multiple pulleys and two carabiners, or the electric winch. So the algorithm that knew stock saw a move that increased profit if the stores refused to stock B&T systems in the future...
So now most people in the West who in the past would have bought a B&T for occasional use can only buy a more dangerous, more expensive, and less intuitive device. I'm sure B&Ts are still made, but I'm also sure a large number of factories that made them were subsequently closed because "demand fell", or rather "the algorithm decided to let demand fall". I'm also pretty sure teachers in technical schools are still teaching kids to calculate the load for N turns through a B&T system in various kinds of scenarios, even though those students will not be able to find one when they find themselves in need (they could probably still order one online, but if you see it as a basic tool at school you probably expect it to be for sale instantaneously in the brick and mortar hardware stores...)
So the problem is: conspiracies exist, but they are emergent, not explicit. The programmer who wrote the inventory optimizer did not intend to replace all B&Ts with electric winches or clumsy multi-part pulley systems. Nor did employees in the store come up with this scheme. And yet the conspiracy is executed, because if we blindly optimize we effectively and systematically turn a blind eye to potential exploitation...
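The load calculation those students learn is simple: with an ideal (frictionless) block and tackle, the effort needed equals the load divided by the number of rope segments supporting the moving block. A minimal sketch (function and variable names are my own, for illustration):

```python
def effort_force(load_newtons: float, supporting_segments: int) -> float:
    """Ideal (frictionless) pulling force needed to lift a load with a
    block and tackle: the load is shared equally across the rope
    segments supporting the moving block."""
    if supporting_segments < 1:
        raise ValueError("need at least one supporting rope segment")
    return load_newtons / supporting_segments

# Lifting a 600 N load with a 4-segment tackle needs only ~150 N of pull:
print(effort_force(600, 4))  # 150.0
```

Real tackles lose some of that advantage to sheave friction, which is part of why the device rewards a user who understands it.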
Twenty bucks on Amazon.
The problem is of course not the application of knowledge and optimization, but a lack of informed decisions.
I'm imagining something like taking a picture of a barcode, of the object itself, or of a URL, and then seeing alternative product types, where other consumers can suggest and vote on the top 5 alternative product types without advertising brands, for example.
I'm also quite sure that the lowest-level employees walking around in those stores during the introduction of the price optimizations were suddenly bombarded for a period with different questions/requests from customers who were obviously disappointed, and that the employees were well aware that the sale of some in-demand product types was halted to increase revenue on others.
I'm a bit torn over how regulation could help, or stop "helping", like upholding libel laws (in case mapping commercial product URLs to alternatives could be portrayed as libel). It's also environmentally unfriendly to ship everything instead of picking things up at the hardware store (which can happen by bicycle, etc.). So part of me really thinks it is somehow the civic duty of the brick and mortar hardware stores to continue selling, say, block and tackle systems, say by regulating that "if you sell single pulleys and electromotor winches (two extremes), then you should also sell at least one type of block and tackle system", but this is obviously hard to generalize in a systematic way. An alternative is to have consumer groups / the construction sector list essential product types.
I would really like to hear more ideas on how to somehow (in)directly include the consumer's interests into the optimizer's loss function...
EDIT: another traditional entity could be viewed as having a natural interest to inform the consumer or construction worker: the insurance agencies could have a common interest to mandate that every sale or offer of an electric winch be accompanied with a reminder (or signed acknowledgement?) not just of the dangers of low-feedback up down button controlled electric winches but also of potentially safer alternatives especially when the user will only seldom use the device, and be less familiar with the dangers. But that would only work because the blind optimization by one party led to a potential increase in danger to their clients. Also it would only work if the insurance agencies don't just look up what tools were used during the accidents in their database, but for each one of them gather a detailed report and try and determine what other tool would have been safer to use, which is probably a bit more work than they are accustomed to.
For nascent examples, see VRM (Vendor Relationship Management), buycott.com, Apple's papers on differential privacy for on-device data, digi.me and reputation schemes in grey/dark markets.
In existing markets, viral media is one of the few brand feedback mechanisms with low latency influence on decision systems, computer or human. But media has its own incentive problems.
- invested in fad
- fad that killed the old way
- only to resuscitate the worst of the old (immature business, price games)
who wouldn't looove that
> civilizations fear one another so much that they don’t dare to reveal themselves lest they immediately be considered a potential threat and destroyed
and then completely misuses the term, instead talking about how consumers of technology are suspicious of creators of technology.
It annoys me that no credit is given to earlier authors who covered almost exactly the same ground, including Fred Saberhagen, Greg Bear, and Alastair Reynolds.
Very interesting. What this theory doesn't take into account are the non-hunters. A single farmer is at risk in a world full of hunters - see Robinson Crusoe. However, a village full of farmers who can store heavy weapons in their houses has almost nothing to fear from a small group of hunters.
If all but one civilization dies in a hunter universe anyway, making noise is worth the risk if it lets you meet others who are willing to cooperate.
Cooperation means you trade your minimal chance of winning a winner-takes-all game for the risk of meeting a player who doesn't understand that cooperation is the best move. Given that the other players are space-exploring civilizations, I would say that they at least give us the chance to cooperate.
The problem may be that it's not them being evil but us being unable to cooperate.
I imagine the effect is similar to the effect air power had on castles.
Is it perhaps possible that it might actually account for exactly these points? A single farmer is like a hunter, except unarmed and not stealthy. They cannot be expected to live long as they will be treated only as another hunter.
You're absolutely right about the hypothetical power of an organized village! Yet, in this scenario such a thing never comes about and can never come about. A village requires collaboration and trust, things the hunters do not possess.
As a result, there are no farmers to speak of. They don't last long enough to matter. The only survivors are the hunters.
Are you familiar with the prisoner's dilemma? This is one where trust is impossible, betrayal is cheap, and cooperation risks your entire civilization on someone else's cheap betrayal.
The books highlight that trust cannot be established. All civilizations are very alien to each other; the huge differences and the lack of a common culture make establishing trust almost impossible. Imagine sending the first message: "I am a farmer, and would like to collaborate with other farmers." How would you determine if the responder was a farmer, or a hunter pretending to be a farmer?
There are other aspects of the fictional world that makes a farmer collaboration impossible, but I wouldn’t want to spoil it.
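The prisoner's-dilemma structure mentioned above can be sketched in a few lines. This uses the conventional textbook payoff values (not numbers from the books), just to show why defection dominates in a one-shot game:

```python
# Payoff table for the row player in a standard one-shot prisoner's
# dilemma (higher is better). Values are the usual textbook convention.
PAYOFF = {
    ("cooperate", "cooperate"): 3,  # mutual cooperation
    ("cooperate", "defect"):    0,  # sucker's payoff
    ("defect",    "cooperate"): 5,  # temptation to betray
    ("defect",    "defect"):    1,  # mutual defection
}

def best_response(opponent_move: str) -> str:
    """The move that maximizes the row player's payoff against a fixed
    opponent move."""
    return max(("cooperate", "defect"),
               key=lambda mine: PAYOFF[(mine, opponent_move)])

# Defection is the best response no matter what the other player does,
# which is why trust never gets off the ground without repeated play:
print(best_response("cooperate"))  # defect
print(best_response("defect"))     # defect
```

In the dark forest setting the stakes are even worse than the table suggests: the sucker's payoff is annihilation, and there is no repeated game in which reputations could form.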
Farmers already made that choice by sending the message. Whoever wants to attack will be prepared to the point of full domination.
That said, why should space faring civilizations be incapable of cooperation? Without leaving the solar system, we are already in a state of post-modernism and multiculturalism. A civilization that can travel between solar systems can be even more advanced in respecting other cultures.
If there is a risk then it is us, not pouring more resources into research so that we have nothing to offer once somebody else comes along.
Part of the setup for this particular game is that there is no being prepared to the point of "full domination". There is only first strike, and whoever attacks first wins. This is a huge part of why every player is incredibly careful - there is no surviving, enduring, or being prepared for an attack.
Bear in mind that this discussion is not people advocating national or global policy around guns versus butter. This is people wrapping their heads around a particular model explored in some science fiction works. The model you prefer and advocate can be found in a different set of science fiction works.
Not every model used in every work of literature will produce outcomes preferred by every person. Not every game has an outcome or stable state that everyone likes under the rules of the game. That's fine. That is, after all, why we have different models and explore their consequences.
Though I understand if some reject this and seek for a way for every model to produce their preferred outcome. It's a very human response.
There are three key attributes to a dark forest strike. They are incredibly cheap for the attacker, they are absolutely devastating to the victim, and they do not give away the attacker's position.
Given these attributes a few hunters could set up conditions where civilizations that decide to communicate would be quickly eliminated. Being friendly would be a trait conditions would select against.
It’s an interesting game, and a truly horrifying answer to Fermi’s paradox.
In the context of this particular game, part of the very basic setup is that attacking is cheap and easy as measured in immediate resource costs. As other comments point out, attacking does not automatically create vulnerability, making collective retaliation against attacks unreliable. Further, there is no true win condition. There is only continuing to play. There is also the knowledge that attempting to communicate with any other player requires becoming vulnerable to an attack from anyone who detects you. Even assuming you can establish meaningful communication and trust, the attempt is far more likely to get you annihilated by some other player than to lead to a desirable outcome. That there's a non-zero chance of being detected by more than one player only makes the odds against you longer: it only takes one attack.
Within this extremely harsh set of rules and interpretations, most rational players will likely conclude that the balance of risks does not favor trying to find friends with their entire civilization at stake.
I understand if some people might choose to find otherwise.
Think about how many other developing technologies people just can't wait for, like self driving cars and renewable energy. It's not that people distrust technology, but like anything else it depends on who's wielding it and what they could do with it.
Setting aside the rhetorical style, I think the main argument is pretty weak. People claiming Facebook has hidden motives aren't skeptical of technology; they're skeptical of Facebook. People raise issues about AI reinforcing bias, but in the cases I've seen those are mostly people who work in tech.
Basically, I don't think the author establishes that people actually fear technology, and he doesn't really consider when technological skepticism is appropriate.
No. Humans have always been afraid of change and skeptical of technology. Books and the printing press were considered destructive inventions that would prevent people from thinking for themselves. There were also extremely negative perceptions of television and the internet.
Sounds like it succeeded in that, we just don't like the results.
Sure, we should not be paranoid, but why should we be naive? Why risk more than you have to? We are resources for capitalist entities to exploit.
Everyone loves this shit. It's gold. There's just a vocal minority.