I'm not an expert in the genre, but I am a big fan of Neuromancer. I skimmed the article, and while I agree that the "cyberpunk" aesthetic has practically evolved into self-parody, I'm not sure I agree with the analysis/critique of the genre.
IMO the core of cyberpunk is about envisioning a world where advanced technology is useful and ubiquitous, yet humanity is worse off than ever ("high tech, low life"). It's a subversion of the simple tech dystopias where the technology itself is evil or is misused by evil people, and more of a realistic counterpoint to the idea that technological progress leads to inevitable utopia.
I'm not sure about more contemporary works that build on those themes. Maybe it's lost its edge as "futuristic" technology has pushed its way more and more into our lives?
We have corporate dominance, and mega-corps wield more power than many governments - just look at Google, Amazon, Apple, and Meta.
We're living under corporate surveillance capitalism as our dominant economic model, where companies control entire digital ecosystems and infrastructure while exercising massive influence over politics through lobbying and campaign contributions. We've got platform monopolies controlling our information flow and commerce, with tech CEOs functioning as quasi-governmental figures making policy decisions that affect billions of us.
We have a surveillance state and privacy erosion that permeates our daily lives through mass digital surveillance by governments and corporations, facial recognition systems watching us in public spaces, and social credit systems emerging globally. China is gonna get there. But that's not even the scariest thing. We ourselves might end up asking for it to be implemented, as one of our better options to restore some order at some point ...
We've got constant data mining of our personal information for profit and control, living in smart cities with omnipresent monitoring while governments maintain backdoors in our encryption and communication systems. We've got predictive policing algorithms shaping law enforcement decisions across jurisdictions. We are getting a digital divide and social stratification that's intensified as extreme wealth inequality gets exacerbated by technology, creating a tech-literate elite versus a digitally excluded underclass. Go DEI that one out ...
We're living in a world where access to technology determines life opportunities, while the gig economy creates precarious employment and automation displaces traditional jobs. We've developed digital literacy as a new form of class distinction, separating those who can navigate technological systems from those who can't.
We have cybercrime and digital underground activities flourishing through ransomware attacks on our critical infrastructure - heck, entire complexes are built and kept for the purpose - cryptocurrency-enabled black markets, and state-sponsored cyber warfare. We're dealing with identity theft and digital fraud at epidemic proportions, while hacking collectives engage in cyber-activism and dark web marketplaces facilitate illegal goods and services. We've got these digital criminal enterprises operating across borders with increasing sophistication. NK'ers are mass-applying for jobs stateside. And getting the jobs.
We have information warfare and reality distortion shaping our public discourse through disinformation campaigns that manipulate public opinion, deep fakes and synthetic media that blur truth and fiction, and filter bubbles that create echo chambers.
We're living with social media manipulation of democratic processes as routine, creating "post-truth" information environments where AI-generated propaganda slop and bot networks spread false narratives at unprecedented scale. We are even beginning to talk like the LLMs we created.
We have technological dependency and addiction dominating human behavior as smartphone addiction and constant connectivity become normalized, with social media serving as our primary medium for social interaction. Our every communication is an ad. We've reached a point where digital detox has become both a luxury and a necessity, and virtual relationships and parasocial connections replace face-to-face interactions. Ask Zuck. And he ain't even got started yet. His AI chatbots are gonna dopamine-cycle the heck out of us, and we will love him for it.
We're already living with screen time dominating our daily lives, creating tech withdrawal symptoms and digital anxiety when devices are unavailable. We be doom scrolling ourselves to death - and loving it. Babies flick paper magazines to scroll. I've seen this.
We have urban decay and environmental collapse manifesting in uninhabitable zones created by climate change, pollution and environmental degradation in industrial areas, and overcrowded megacities struggling with urban BAMA sprawl.
We're watching gentrification displace communities while infrastructure decays in non-profitable areas, and resource scarcity drives conflict between populations competing for diminishing resources. We have biotechnology and human enhancement entering mainstream adoption through genetic engineering and CRISPR technology, performance-enhancing drugs and nootropics, and normalized cosmetic surgery. We've got biohacking communities experimenting with human optimization, pharmaceutical enhancement of human capabilities becoming commonplace, and life extension technologies remaining primarily accessible to the wealthy elite. Neuralink is right up the pipeline. In a sense, one could argue all this sex changing going on is body mod becoming normalized.
We are getting virtual reality and digital escapism providing alternatives to physical existence through VR worlds that compete with physical reality, online gaming as primary social space, and digital avatars representing virtual identities, though Zuck fell on his face on that one a tad, to the tune of a few billion. Meta wasn't so Meta after all.
We've got remote work blurring the boundaries between physical and digital existence, while virtual economies generate real-world value and emerging metaverse platforms promise complete digital immersion. We have AI and automation displacement restructuring entire industries as AI replaces human workers across sectors, algorithmic decision-making controls hiring (see Amazon pink-slipping its bottle-peeing meatbots), lending, and criminal justice, and automated trading systems manipulate markets.
We're seeing AI-generated slop increasingly replace human creativity, machine learning bias perpetuate systemic discrimination, and autonomous weapons systems raise ethical questions about the future of warfare. We have punk aesthetics and counter-culture movements embracing technology through hacker culture and maker movements, DIY technology and open-source communities, and cyberpunk fashion with body modification subcultures. We're seeing street art incorporate digital themes, underground tech scenes and hackerspace culture flourish, and resistance movements use technology as tools against established power structures. Autonomous land drones attacked in Ukraine for the first time.
We have global connectivity and cultural homogenization accelerating as the internet creates a global monoculture with English as the digital lingua franca, cultural products distributed globally and instantly, and traditional cultures disrupted by digital connectivity. We're dealing with global supply chains that remain vulnerable to disruption while borderless digital crimes challenge traditional jurisdiction and legal frameworks. We have transhumanism and human-machine integration advancing rapidly through smartphones functioning as external memory and processing organs, social media profiles serving as extended identity, and wearable technology monitoring biological functions. Heck, we got RFK pushing his smartwatch on everyone. Will probably get away with it too. Insurers must be salivating somewhere.
We're beginning to see brain-computer interfaces move from science fiction to development reality, prosthetics controlled by neural signals become commercially available, and genetic modification for disease prevention enter clinical practice. We have economic disruption and digital currency challenging traditional systems as cryptocurrency undermines conventional banking, digital payments replace physical currency, and the gig economy destroys traditional employment models.
We've got automated trading and algorithmic market manipulation creating volatility, digital asset speculation driving economic instability, and central bank digital currencies emerging as governments respond to decentralized alternatives. We have information as power and commodity driving our digital economy with data recognized as the new oil and most valuable resource, information warfare conducted between nations, and corporate espionage executed through digital means. We're living with our personal data being harvested without meaningful consent, algorithmic curation shaping our individual worldviews, and knowledge gaps creating significant power imbalances between those who control information and those who consume it. Bubbles are getting to be more like walled gardens.
... we're living in a world where the aesthetic might be less neon and leather, but the underlying power structures, technological anxieties, and social dynamics are remarkably similar to what Gibson, Philip K. Dick, and others envisioned decades ago. The main difference methinks is that our cyberpunk reality came wrapped in sleek consumer Apple products rather than the grimy underground aesthetic Gibson imagined.
Cyberpunk is here. It just ain't totally evenly distributed yet.
> smartphone addiction and constant connectivity become normalized
Cyberpunk predicted this, but I don't think anyone predicted how we now see people relinquishing their thinking to LLMs.
> Babies flick paper magazines to scroll
I know a young couple who are rigorously keeping screen devices away from their toddler, yet the kid has picked up the habit of holding up random objects to his ear and talking to it, or just swiping across the object's surface.
> We are even beginning to talk like the LLMs we created
The irony here is that I'm not entirely convinced your comment isn't generated by an LLM. Most humans couldn't come up with all these examples of how we're already in a cyberpunk dystopia on the fly.
And that is another issue in itself: When almost everything can be AI-genne'd, it casts doubt on all other human output.
> Most humans couldn't come up with all these examples
I beg to differ on that. I bet we are all aware of these things. I just took the time to point them out, based on rather painful, and concerned, observation, somewhat systematically. Heck: HN's frontpage will give you 70% of all this on any given day.
> We are even beginning to talk like the LLMs we created
... and, to this: Fair enough. But it goes beyond just the "human" end zone getting slopped to death, to where you can't distinguish human from machine genne'd anymore. I was getting to how human speech patterns themselves are beginning to resemble language model output (something, again, mentioned onsite) ...
> I'm not sure about more contemporary works that build on those themes. Maybe it's lost its edge as "futuristic" technology has pushed its way more and more into our lives?
Yep. Food surplus yet millions starving. Advanced cosmetic medicine yet millions die of preventable infections. Access to nearly all information ever gathered yet it can feel like we're living through Idiocracy!
It's not all bad, but that was my nihilist take on your question :) I think any attempt at commentary right now would end up eerily reminiscent of modern life.
> think any attempt at commentary right now would end up eerily reminiscent of modern life.
The genre was always an extrapolation of contemporary society as the authors saw it. You could absolutely do that today, with appropriately updated technical speculations, but without the signifiers of the petrified genre of Cyberpunk that we are all familiar with in 2025, folks might not recognize it as such. Doesn't mean it's not engaging in the same milieu.
Climate fiction such as New York 2140 and The Ministry for the Future might be one example of modern themes of technology reinforcing and amplifying existing inequalities.
Taxis are complementary to the higher-priority parts of the hierarchy. It's easier to commit to walking/cycling/public transport knowing that you can always take a taxi in a pinch. SOVs have the inverse effect - it's hard to combine them with other modes of transportation, and if you're already paying to own/maintain/insure a vehicle, you're incentivized away from considering alternatives.
The hierarchy isn't so much about how green each individual option is, but rather about how trips should be distributed to reach an overall optimum.
For short trips or connections, walking should be more convenient because you don't need any gear or space to store your bike. This also gives a multiplicative effect with other transport options, because (e.g.) people are much more likely to take a bus or train if they can walk directly to the station instead of needing a bike or car to get there in the first place.
As an aside, mature bicycle infrastructure goes beyond bike lanes, especially as the number of cyclists grows. For instance, here's a video showing off a huge bicycle parking facility in Amsterdam: https://youtu.be/EqwasBTzZS8?t=530. Obviously this is great compared to car parking, but it's still a lot compared to the infrastructure needed to support short walking trips.
Not seeing it mentioned in the other replies, so I'll mention that (at least the way I read it) "our massive car addiction" should be taken as a societal addiction to cars rather than addiction of any individual. If someone lives in a place where a car is the only feasible way to meet their day-to-day needs, it's not fair to say they're addicted to their cars; however we might question why they find themselves in that situation in the first place. Often this comes down to societal pressures (zoning, lack of funding for other modes of transportation, etc.) which are largely outside the control of individuals. The challenge is to change the cultural mindset from "I need a car today, so cars are a necessity for life" to acknowledge that other options can be viable if we, as a society, are willing to recognize and seriously consider them.
> ... the only real option that exists is reorganizing housing across the whole society to massively increase density and to mix commerce zoning with homes in a way currently unheard of.
Places like this already exist (ie. basically any major urban center), but I don't think the intent is that every place needs to be like that. Small steps toward better options (eg. allowing limited commercial redevelopment in residential-only areas, improving the safety/speed/accessibility of alternate transit options) should be the short-term goal, and we can work slowly towards them. But societal pressure (eg. from NIMBYs and zero-sum car-first people) often makes even small improvements glacially slow or impossible.
These shouldn't be thought of as orthogonal issues. You can dramatically improve the walkability of a place by reducing the amount of space reserved for parking.
City centers in forward-thinking European cities are trying things like this: forbidding cars from the central square and a few blocks' perimeter results in much calmer and more pleasant pedestrian experiences. Instead of roads for cars you have pedestrian boulevards, avenues, and alleys with greenery throughout and a distinct sense that you live in a community, rather than merely among it.
It's mostly impressive from a technical standpoint, since the programming of games from this era would be strongly tied to the display resolution, eg. a programmer could know which background tiles or entities could be shown in the viewport at any time, and dynamically load/unload them for performance reasons. All of these optimizations now need to be tweaked or removed in the widescreen version so things outside the original 4:3 viewport don't disappear at the edge of your 16:9 display.
More recent games use flexible approaches to allow for different aspect ratios, which would behave similar to eg. fluid design on the web.
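To make the original constraint concrete, here's a rough TypeScript-flavored sketch of the kind of culling logic that bakes a 4:3 viewport into its constants (all names and numbers are made up for illustration, not from any real engine):

    // Hypothetical sketch: tile/entity streaming that assumes a fixed 4:3 viewport.
    const TILE_SIZE = 8;            // pixels per tile
    const VIEWPORT_TILES_X = 40;    // 320px-wide screen / 8px tiles, baked in
    const VIEWPORT_TILES_Y = 28;    // ~224px tall

    interface Entity { tileX: number; tileY: number; active: boolean; }

    // Classic trick: anything outside the hard-coded window is deactivated to
    // save CPU/VRAM. Correct on 4:3; on a 16:9 display the real viewport is
    // wider than VIEWPORT_TILES_X, so entities vanish at the screen edges.
    function cullEntities(entities: Entity[], camTileX: number, camTileY: number): void {
      for (const e of entities) {
        const visibleX = e.tileX >= camTileX && e.tileX < camTileX + VIEWPORT_TILES_X;
        const visibleY = e.tileY >= camTileY && e.tileY < camTileY + VIEWPORT_TILES_Y;
        e.active = visibleX && visibleY;
      }
    }

    // A widescreen port has to derive the window from the actual output size
    // instead of trusting the baked-in constant:
    const visibleTilesX = (outputWidthPx: number) => Math.ceil(outputWidthPx / TILE_SIZE);

Widen the output to 16:9 without revisiting that constant and anything past the old 40-tile window gets despawned while still on screen, which is exactly the kind of pop-in the widescreen patches have to hunt down.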
Jon Burton of TT Games has an interesting Youtube channel where he goes over some of these old school development techniques, if you wanted to learn more; eg. https://www.youtube.com/watch?v=96DO4V8qrR0 uses a lot of techniques that would be difficult to extend to a 16:9 display.
I don't think it's too surprising - cards from this era are primarily collector items, so among equal rarity cards, the most "iconic" ones are the ones that demand the highest price. Charizard had a pre-existing (and continued) level of popularity that made it the obvious most desirable card, even if it's not particularly playable. MTG didn't have the existing IP, so the cards that became iconic are more based around their playability, rarity, and associated mythos. Not too many people are dropping $20,000+ for a lotus to play it in vintage, but the prices continue to rise because of increasing collectibility. For what it's worth, Ancestral Recall is arguably a stronger card of equal rarity, but it's worth substantially less on the basis of being less iconic (if only slightly).
Is there a standard list of highly valuable MTG cards? I have a couple thousand cards from the mid-90s and while I’m sure they are probably worthless, I can’t bring myself to just dump them even though I haven’t played in 20 years.
Depends on your definition of "highly valuable" - from that time period, there's a very short list of cards worth >$1000, quite a few in the $100-$999 range, and a ton in the >$10 bracket. What they're actually worth depends a lot on the particular printing and what condition they're in.
If you wanted a starting point, Scryfall is a useful tool for looking up cards (though they're missing pricing data for some early cards, presumably due to scarcity of transaction data). Here's something to get you started (cards printed before 2000, sorted by price, displayed as a price list): https://scryfall.com/search?q=unique%3Aprints+sort%3Ausd+dat...
A lot of it will come down to the value of your time. You can sell directly yourself but deal with headaches and scammers, or you can sell to someone like Rudy for what will likely be ~50% of prices you'll see online. Part of that is that most of your cards will probably be considered in "played" condition, either light or heavy.
IMO you'd get a lot of mileage from the "Reserved List" or "Vintage Staples", but the site is decent for a general price lookup as well.
Feel free to shoot follow-up questions. There are a few cards that went from trash to treasure since you've been out of the game. Lion's Eye Diamond is probably the most extreme example, but basically anything on the Reserved List has gone insane in the past few years.
Even "pretty bad" condition Beta is still worth a pretty penny. Lots of people just want to have complete sets, or play "1994" League which allows only cards from that era. There will definitely be a market if you do indeed have Beta :)
They are dual lands with basic land types without any abilities (other than the mana abilities they inherit from their basic land types) which also means without any drawbacks. They're just "Swamp Forest", "Mountain Forest", "Island Swamp", etc. See here:
And they are literally what their type lines say. So you basically get two lands with a basic type for the price of one (i.e. one card, or one land drop; card and tempo advantage, 2-in-1). Their only limitation is that they are not basic lands so they are subject to the restriction of number of copies per deck (4, in most formats).
Every dual land created subsequently has some kind of drawback or limitation (other than the number of copies restriction): "Enters the battlefield tapped", paying some amount of life or taking damage, sacrificing a permanent, discarding a card (I think), being printed as a double-faced card, and so on.
The original dual lands are very powerful in the game and haven't been reprinted since Revised so there aren't a lot of them. And people pay that much for them.
Modern dual lands come with restrictions/penalties for taking advantage of their dualness (time delays, damage, choosing which version to use when played, etc) and they are often some of the most valuable cards in a set.
Even though the Black Lotus's value is mostly as a collector's item, in game it has the advantage of being extremely versatile. As someone said, all decks are better with a Black Lotus. It is the most expensive because everyone wants it in their deck, no exception, and would want more than one if it wasn't restricted (initially, it wasn't).
Ancestral Recall may be more powerful but it requires blue mana, which may be an issue in a non-blue deck, of course you can use a Black Lotus for that...
> As someone said, all decks are better with a Black Lotus. It is the most expensive because everyone wants it in their deck, no exception, and would want more than one if it wasn't restricted (initially, it wasn't).
> Ancestral Recall may be more powerful but it requires blue mana, which may be an issue in a non-blue deck, of course you can use a Black Lotus for that...
It's true that everyone wants it with no exceptions. That's not good enough to make it the most expensive card, though; that's due to prestige.
If you look at the decklists for a recent Vintage event (here: https://magic.wizards.com/en/articles/archive/mtgo-standings... ), you can see the top 16 decks play 16 Black Lotuses, just 11 Ancestral Recalls... and 16 Mishra's Workshops, which accounts for 80% of the decks that aren't playing Ancestral Recall.
There's no such thing as a "non-blue deck" in Vintage. Even the Mishra's Workshop decks can easily generate blue mana. (The Bazaar of Baghdad deck in 15th place can't, but its whole strategy revolves around not needing to generate mana at all.)
I haven't played with or against the deck. So this is a purely theoretical discussion that is likely to miss something important. But here goes:
- The deck's plan is to attack for damage. It will do this by getting creatures onto the field without paying for them.
- There are 9 cards in the deck which require mana to play: the 4 Deathrite Shamans, the 4 Stitcher's Suppliers, and the 1 Swords to Plowshares. The Shamans can produce mana and the Suppliers serve the important role of getting cards into the graveyard. But mostly, this deck seeks to avoid playing mana-producing lands, and therefore also won't play cards that cost mana.
- Bazaar of Baghdad is the center of the deck; every turn it will allow you to draw two cards (good!) and also discard three cards (great!)
- After that, it's just a matter of getting creatures into play. Basking Rootwalla can be played for free whenever you discard it. (Which you can do via the Bazaar.) Bloodghast will come out of your graveyard whenever you play a land. Hollow One can be played for free as long as you've discarded 3 cards. (Luckily, that's the exact amount of discarding provided by the Bazaar.) Hogaak can be played out of your graveyard as long as you have two creatures already in play. And Vengevine, which is probably most of the damage, will come out of your graveyard whenever you play a second creature in one turn. (You have to play them, so Bloodghast won't count, but Basking Rootwalla, Hollow One, and Hogaak all will. If you can play a Deathrite Shaman or a Stitcher's Supplier, those will count too, but this is not a necessary part of the plan.)
- Vengevine has haste, so whenever you do manage to get one out, it can attack immediately. It has been the core threat of various decks in the past.
- If you can get 3 Bloodghasts and 2 Hogaaks into your graveyard, you can recur all of your Vengevines just by playing a land. This deck has a tremendous capacity to bounce back from creature removal. (Graveyard removal will hurt more.)
Thanks! You know, now that you say it, it seems obvious, especially given the Vengevine, but I thought Vintage was too fast for decks that win by attacking with creatures to compete, so I thought there was some hidden interaction I couldn't see.
I'd argue that of any software project on the planet, Windows is the closest to having unlimited resources; especially when you consider the number of Windows customers for whom backwards compatibility is the #1 feature on the box.
And speed isn't the only metric that matters; having both the 32-bit and 64-bit versions of DLLs uses a non-trivial (to some people) amount of disk space, bandwidth, complexity, etc.
Surely, Apple and Google have just about as many resources as Microsoft does.
If Android, Mac OS, etc were super slimmed down systems in comparison to Windows, I would understand the argument much better. Instead, it feels like we're in the worst of both worlds.
> Would you buy a car if it eats 100 liters per 100 kilometers? How about 1000 liters?
I think the analogy here is backwards. The better question is "how much would you prioritize a car that used only 0.05 liters per 100km over one that used 0.5? What about one that used only 0.005L?". I'd say that at that point, other factors like comfort, performance, base price, etc. become (relatively) much more important.
If basic computer operations like loading a webpage took minutes rather than seconds, I think there would be more general interest in improving performance. For now though, most users are happy-enough with the performance of most software, and other factors like aesthetics, ease-of-use, etc. are the main differentiators (admittedly feature bloat, ads, tracking, etc. are also a problem, but I think they're mostly orthogonal to under-the-hood performance).
These days, I think most users will lose more time and be more frustrated by poor UI design, accidental inputs, etc. than any performance characteristics of the software they use. Hence the complexity/performance overhead of using technologies that allow software to be easily iterated and expanded are justified, to my mind (though we should be mindful of technology that claims to improve our agility but really only adds complexity).
> "how much would you prioritize a car that used only 0.05 liters per 100km over one that used 0.5? What about one that used only 0.005L?". I'd say that at that point, other factors like comfort, performance, base price, etc. become (relatively) much more important.
I'll prioritize the 0.005L per 100km car for sure. That means the car can be driven for all its expected lifetime (500k km) in a single tank of gas, filled up at the time of purchase! That means there is a huge opportunity to further optimize for many things in the system:
- The car no longer needs to have a hole on the side for filling up. A lot of pipes can be removed. Gas tank can be moved to a safer/closer location where it is used.
- The dashboard doesn't need a dedicated slot for showing the fuel gauge, more wirings and mechanical parts removed.
- No need for huge exhaust and cooling systems, since the wasted energy is significantly reduced. No more pump, less vehicle weight...
Of course, that 0.005L car won't come earlier than a good electric car. However, if it's there, I'd totally prioritize it higher than other things you listed. I think people tend to underestimate how small efficiency improvements add up and enable exponential values to the system as a whole.
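To put rough numbers on the "lifetime tank" idea (a back-of-the-envelope TypeScript sketch; the ~50 L tank size is my assumption, not from the comment above):

    // Back-of-the-envelope check of the "single tank for the car's lifetime" claim.
    const LITERS_PER_100KM = 0.005;
    const LIFETIME_KM = 500_000;
    const TANK_LITERS = 50;  // assumed typical tank size

    const lifetimeFuel = (LIFETIME_KM / 100) * LITERS_PER_100KM;
    console.log(`${lifetimeFuel} L burned over the car's lifetime`);  // 25 L
    console.log(lifetimeFuel <= TANK_LITERS);  // true: one fill at purchase covers it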
This is definitely an interesting take on the car analogy so thanks for posting it! I don't know that I agree 100% (I think I could 'settle' for a car that needed to be fueled once or twice a year if it came with some other noticeable benefits), but it is definitely worth remembering that sometimes an apparently small nudge in performance can enable big improvements. Miniaturization of electronics (including batteries and storage media) and continuing improvements to wireless broadband come to mind as the most obvious of these in the past decades.
I'm struggling to think of recent (or not-so-recent) software improvements that have had a similar impact though. It seems like many of the "big" algorithms and optimization techniques that underpin modern applications have been around for a long time, and there aren't a lot of solutions that are "just about" ready to make the jump from supercomputers to servers, servers to desktops, or desktops to mobile. I guess machine learning is probably a contender in this space, but I imagine that's still an active area of optimization and probably not what the author of the article had in mind. I'd love it if someone could provide an example of recent consumer software that is only possible due to careful software optimization.
V8 would be one example. Some time ago, JavaScript crossed a performance threshold, which enabled people to start reimplementing a lot of desktop software as web applications. In the following years, algorithms for collaborative work were developed[0], which shifted the way we work with some of those applications, now always on-line.
That would be the meaningful software improvements I can think of. Curiously, the key enabler here seems to be performance - we had the capability to write web apps for a while, but JS was too slow to be useful.
--
[0] - They may or may not have been developed earlier, but I haven't seen them used in practice before the modern web.
> I'll prioritize the 0.005L per 100km car for sure. That means the car can be driven for all its expected lifetime (500k km) in a single tank of gas, filled up at the time of purchase!
It's a nice idea but it wouldn't work. The gasoline would go bad before you could use it all.
Plug-in hybrids already have this problem. Their fuel management systems try to keep the average age of the fuel in the tank under 1 year. The Chevy Volt has a fuel maintenance mode that runs every 6 weeks:
Instead of having a "lifetime tank", a car that uses 0.005L per 100km would be better off with a tiny tank. And then instead of buying fuel at a fuel station you'd buy it in a bottle at the supermarket along with your orange juice.
There is [1] https://duckduckgo.com/?q=alkylate+petrol which is said to last anything between two and ten years, depending on the mixture, while burning rather cleanly.
You are thinking too small, with a car generating power that cheaply you could use it to power a turbine and provide cheap electricity to the entire world. It would fix our energy needs for a very long time and it would usher a new age!
Or the car could just be very efficient. Gasoline has a lot of energy. Transporting a person 100km on 34MJ/l * 0.05l =1.71MJ doesn't sound as impossible as you make it seem.
Trains transport at 0.41 MJ/t·km. If the person weighs 0.1t it would take a train packed full of people 41MJ per person to transport them 100km, or a bit more than one litre of gasoline. I don't think it is possible to go significantly below that without transporting them on mag rails or vacuum pipes.
Secondly, we talked about 0.005l cars, not 0.05l, so it would be a few hundred times more efficient than train transportation.
The big problem is this, if we related this back to software it would mean the software being delivered in 10-15 years, rather than in 6 months. Kind of a big downside...
Not necessarily. For one, relating this doesn't remove the ability for incremental development. Another thing: there's very little actual innovation in software being done. Almost anything we use existed in some version in the past two or three decades, and it was much faster, even if rougher around the edges. Just think how many of the startups and SaaS projects we see featured on HN week after week are just reimplementing a feature or a small piece of workflow from Excel or Photoshop as a standalone web app?
That's the old Ruby on Rails argument. In that specific case it only made sense when there were no similar frameworks for faster languages, but that's hardly the case today.
Ironically though, I'd be willing to bet that end-user performance on most traditional server-side-rendered apps using the "heavyweight" RoR framework is far better than the latest and greatest SPA approach.
In a previous life I did back office development for ecommerce. We had two applications, one RoR monolith and a "modern" JavaScript Meteor SPA. The SPA was actually developed to replace the equivalent functionality in the RoR application but we ended up killing it and sticking with what we had. Depending on what you're trying to accomplish server side rendering is just as good, if not better than the latest and greatest in client side rendering.
A UI where each interaction takes several seconds is poor UI design. I do lose most of my time and patience to poor UI design, including needless "improvements" every few iterations that break my workflow and have me relearn the UI.
I find the general state of interaction with the software I use on a daily basis to be piss poor, and over the last 20 or so years I have at best seen zero improvement on average, though if I was less charitable I'd say it has only gone downhill. Applications around the turn of the century were generally responsive, as far as I can remember.
> These days, I think most users will lose more time and be more frustrated by poor UI design, accidental inputs, etc. than any performance characteristics of the software they use.
I’m willing to bet that a significant percentage of my accidental inputs are due to UI latency.
Virtually all of my accidental inputs are caused by application slowness or repaints that occur several hundred milliseconds after they should have.
I want all interactions with all of my computing devices to occur in as close to 0ms as possible. 0ms is great; 20ms is good; 200ms is bad; 500ms is absolutely inexcusable unless you're doing significant computation. I find it astonishing how many things will run in the 200-500ms range for utterly trivial operations such as just navigating between UI elements. And no, animation is not an acceptable illusion to hide slowness.
I am with the OP. "Good enough" is a bane on our discipline.
How about the i-am-about-to-press-this-button-but-wait-we-need-to-rerender-the-whole-page. At which point you misclick, or don't click at all. Especially some recent shops and ad-heavy pages use this great functionality ;)
The rule for games is that you have 16ms (for a 60Hz monitor) to process all input and draw the next frame. That's a decent rule for everything related to user input. And since there are high refresh-rate monitors, and it's a web app and not a game using 100% CPU & GPU, just assume 4-5ms for a nicer number. If you take longer than that to respond to user input on your lowest-capability supported configuration, you've got a bug.
0ms is great, 4ms is very good, 16ms is minimally acceptable, 20ms needs improvement (you're skipping frames), 200ms is bad (it's visible!), 500ms is ridiculous and should have been showing a progress bar or something.
Responding to input doesn't necessarily mean being done with processing, it just means showing a response.
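As a rough illustration of that budget (a browser-flavored TypeScript sketch; the two handler functions are hypothetical stand-ins for the app's real work, not any framework's API):

    // Render loop that flags work blowing the frame budget.
    // 60 Hz leaves ~16.7 ms per frame; the argument above is to target a few ms
    // so slower machines and high-refresh displays still keep up.
    const FRAME_BUDGET_MS = 16.7;
    const TARGET_MS = 5;

    declare function handlePendingInput(): void;  // respond to the user first,
    declare function renderFrame(): void;         // even if only with a progress indicator

    function frame(): void {
      const start = performance.now();
      handlePendingInput();
      renderFrame();
      const elapsed = performance.now() - start;
      if (elapsed > FRAME_BUDGET_MS) {
        console.warn(`frame took ${elapsed.toFixed(1)} ms: visible jank`);
      } else if (elapsed > TARGET_MS) {
        console.debug(`frame took ${elapsed.toFixed(1)} ms: over the ${TARGET_MS} ms target`);
      }
      requestAnimationFrame(frame);
    }

    requestAnimationFrame(frame);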
Don’t get me started with all the impressive rotating zooming in Google Maps every time you accidentally brush the screen.
The usage story requires you to switch to turn-by-turn, and there's no way to have a bird's-eye map following your location along the route (unless you just choose some zoom level and manually recenter every so often).
It’s awful, distracting and frankly a waste of time... just to show a bit of animation every time I accidentally fail to register a drag...
Well, Google Maps is its own story - it's like the app is being actively designed to be as useless as possible as a map - a means to navigate. The only supported workflow is search + turn-by-turn navigation, and everything else seems to be disincentivized on purpose.
I respectfully disagree -- something that is 10 times more efficient costs 10 times less energy (theoretically). When the end user suffers a server outage due to load, when they run out of battery ten times quicker, all of these things matter. When you have to pay for ten servers to run your product instead of one, this cost gets passed on to the end user.
I was forced to use a monitor at 30 fps for a few days due to a bad display setup. It made me realize how important 60 fps is. Even worse, try using an OS running in a VM for an extended period of time...
There are plenty of things that are 'good enough', but once users get used to something better they will never go back (if they have the choice, at least).
Another problem is that the inefficiency of multiple products tends to compound.
- Opening multiple tabs in a browser will kill your battery, and it's not the fault of a single page, but of all of them. Developers tend to blame the end user for opening too many tabs.
- Running a single Electron app is fast enough in a newer machine but if you need multiple instances or multiple apps you're fucked.
- Some of my teammates can't use their laptops without the charger because they have to run 20+ docker containers just to have our main website load. The machines are also noisy because the fan is always on.
- Having complex build pipelines that take minutes or hours to run is something that slows down developers, who are expensive. It's not the fault of a single piece of software (except maybe the chosen programming language), but of multiple inefficient libraries and packages.
> "Even worse, try using an OS running in a VM for an extended period of time..."
I actually do this for development and it works really well.
Ubuntu Linux VM in VMware Fusion on a Macbook Pro with MacOS.
Power consumption was found to be better than running Linux natively. (I'm guessing something about switching between the two GPUs, but who knows.)
GPU acceleration works fine; the Linux desktop animations, window fading and movement animations etc are just as I'd expect.
Performance seems to be fine generally, and I do care about performance.
(But I don't measure graphics performance, perhaps that's not as good as native. And when doing I/O intensive work, that's on servers.)
Being able to do a four-finger swipe on the trackpad to switch between MacOS desktops and Linux desktops (full screen) is really nice. It feels as if the two OSes are running side by side, rather than one inside another.
I've been doing Linux-in-a-VM for about 6 years, and wouldn't switch back to native on my laptop if I had a choice. The side-by-side illusion is too good.
Before that I ran various Linux desktops (or Linux consoles :-) for about 20 years natively on all my development machines and all my personal laptops, so it's not like I don't know what that's like. In general, I notice more graphics driver bugs in the native version...
(The one thing that stands out is that VMware's host-to-guest file sharing is extremely buggy, to the point of corrupting files, even crashing Git. MacOS's own SMB client is also atrocious in numerous ways, to the point of even deleting random files, but does it less often so you don't notice until later what's gone. I've had to work hard to find good workarounds to have reliable files! I mention this as a warning to anyone thinking of trying the same setup.)
What year MBP is this? I tried running Ubuntu on Virtual Box on my mid 2014 MBP with 16GB ram, but that was anything but smooth. I ended up dual booting my T460s instead.
But perhaps the answer is VMware Fusion instead then.
I've only given Linux 6GB RAM at the moment, and it's working out fine. Currently running Ubuntu 19.10.
I picked VMware Fusion originally because it was reported to have good-ish support for GPU emulation that was compatible with Linux desktops at the time. Without it, graphics can be a bit clunky. With it, it feels smooth enough for me, as a desktop.
My browser is Firefox on the Mac side, but dev web servers all on the Linux side.
The VM networking is fine, but I use a separate "private" network (for dev networking) from the "NAT" network (outgoing connections from Linux to internet), so Wifi IP address changes in the latter don't disrupt active connections of the former.
My editor is Emacs GUI on the Mac side (so it integrates with the native Mac GUI - Cmd-CV cut and paste etc, better scrolling), although I can call up Emacs sessions from Linux easily, and for TypeScript, dev language servers etc., Emacs is able to run them remotely as appropriate.
Smoothness over SSH from iTerm is a different thing from graphical desktop smoothness.
When doing graphics work (e.g. Inkscape/GIMP/ImageMagick), or remote access to Windows servers using Remmina for VNC/RDP, I use the Linux desktop.
But mostly I do dev work in Linux over SSH from iTerm. I don't think I've ever noticed any smoothness issues with that, except when VMware networking crashes due to SMB/NFS loops that I shouldn't let happen :-)
Having your VM stored inside a file on a slow filesystem is bad. Having a separate lvm volume (on linux)/zvols (with zfs)/partition/disk is much more performant.
I store my Linux VM disk inside a file on a Mac filesystem (HFS+, the old one), and I haven't noticed any significant human-noticeable I/O latency issues when using it. The Linux VM disk is formatted as ext4.
That's about human-scale experience, rather than measured latency. It won't be as fast as native, but it seems adequate for my use, even when grepping thousands of files, unpacking archives, etc, and I haven't noticed any significant stalling or pauses. It's encrypted too (by MacOS).
(That's in contrast to host-guest file access over the virtual network, which definitely has performance issues. But ext4 on the VM disk seems to work well.)
The VM is my main daily work "machine", and I'm a heavy user, so I'd notice if I/O latency was affecting use.
I'm sure it helps that the Mac has a fast SSD though.
(In contrast, on servers I use LVM a lot, in conjunction with MD-RAID and LUKS encryption.)
Yes, but it's not just relative quantities that matter, absolute values matter too, just as the post you replied to was saying.
Optimizing for microseconds when bad UI steals seconds is being penny-wise and pound foolish. Business might not understand tech but they do generally understand how it ends up on the balance sheet.
But the balance sheets encompass more than delivering value to end-users; business can and do trade off that value for some money elsewhere (see e.g. pretty much everything that has anything to do with ads).
Note also the potential deadlock here. Optimizing core calculations at μs level is bad because UI is slow, but optimizing UI to have μs responsiveness is bad, because core calculations are slow. Or the database is slow. This way, every part of the program can use every other part of the program as a justification to not do the necessary work. Reverse tragedy of the commons perhaps?
> Even worse, try using an OS running in a VM for an extended period of time...
I do that for most of my hobbyist Linux dev work. It's fine. It can do 4k and everything. It's surely not optimal but it's better than managing dual boot.
Host is Windows, guest is Ubuntu. Hypervisor is VMWare Workstation 12 Player. There is a very straightforward process to get graphics acceleration in the VM. The shell has a "mount install CD" option that causes a CD containing drivers to be loaded in the guest (Player > Manage > Reinstall VMWare Tools). You install those, and also enable acceleration in the VMWare settings (https://imgur.com/a/PUaE38u). Again, it's not perfect, but I can e.g. play fullscreen 1080p YouTube videos. Not sure how it would like playing 4k videos, but my desktop doesn't like that so much even in the host OS.
I do this the other way around: an Ubuntu host and a KVM virtual machine controlled by virt-manager, with PCIe passthrough for its own GPU and NVMe boot drive. I enjoy Linux too much for daily use (and rely on it for bulk storage, with internal drives fused together with mergerfs and backed up with snapraid), but I do a lot of photography and media work so I also rely on Windows. This way, I can use a KVM frame relay like looking-glass to get a latency-free, almost-native-performance Windows VM inside an Ubuntu host, without the need to dual boot (but since the NVMe drive is just Windows, I can always boot into Windows if I please).
I have to be careful about what I describe, but I don't think people care about speed or performance at all when it comes to tech, and it makes me sad. In fact, there are so many occasions where the optimisation is so good that the end user doesn't believe that anything happened. So you have to deliberately introduce delay because a computer has to feel like it thinks the same way you do.
At my current place of employment we have plenty of average requests hitting 5-10 seconds and longer, you've got N+1 queries against the network, rather than the DB. As long as it's within 15 or 30 seconds nobody cares, they probably blame their 4G signal for it (especially in the UK where our mobile infrastructure is notoriously spotty, and entirely absent even within the middle of London). But since I work on those systems I'm upset and disappointed that I'm working on APIs that can take tens of seconds to respond.
The analogy is also not great because MPG is an established metric for fuel efficiency in cars. The higher the MPG the better.
> In fact, there are so many occasions where the optimisation is so good that the end user doesn't believe that anything happened. So you have to deliberately introduce delay because a computer has to feel like it thinks the same way you do.
I never liked this view. I can't think of a single legitimate use case that couldn't be solved better than by hiding your true capabilities, and thus wasting people's time.
> they probably blame their 4G signal for it
Sad thing is, enough companies thinking like this and the incentive to improve on 4G itself evaporates, because "almost nothing can work fast enough to make use of these optimizations anyway".
"I can't think of a single legitimate use case that couldn't be solved better than by hiding your true capabilities, and thus wasting people's time."
Consider a loading spinner with a line of copy that explains what's happening. Say it's for an action that can take anywhere from 20 milliseconds to several seconds, based on a combination of factors that are hard to predict beforehand. At the low end, showing the spinner will result in it flashing on the screen jarringly for just a frame. To the user it will appear as some kind of visual glitch since they won't have time to even make out what it is, much less read the copy.
In situations like this, it's often a good idea to introduce an artificial delay up to a floor that gives the user time to register what's happening and read the copy.
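For what it's worth, a minimal TypeScript-ish sketch of that pattern (the thresholds and the showSpinner/hideSpinner hooks are made up, not any particular library's API): show nothing for near-instant operations, and once the spinner does appear, keep it up long enough to be read:

    // Delayed spinner with a minimum display time.
    const SHOW_DELAY_MS = 150;   // don't show anything for near-instant operations
    const MIN_VISIBLE_MS = 500;  // once shown, keep it visible long enough to read

    declare function showSpinner(): void;  // hypothetical UI hooks
    declare function hideSpinner(): void;

    async function withSpinner<T>(task: Promise<T>): Promise<T> {
      let visible = false;
      let shownAt = 0;
      const timer = setTimeout(() => {
        visible = true;
        shownAt = performance.now();
        showSpinner();
      }, SHOW_DELAY_MS);

      try {
        return await task;
      } finally {
        clearTimeout(timer);
        if (visible) {
          const remaining = MIN_VISIBLE_MS - (performance.now() - shownAt);
          if (remaining > 0) await new Promise((resolve) => setTimeout(resolve, remaining));
          hideSpinner();
        }
      }
    }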
This doesn't work well in apps but games do incredible things to hide that state; and it's partially a consequence of avoiding a patent on minigames inside loading screens.
e.g. back in the 90s with Resi 1, the loading screen was hidden by a slow and tense animation of a door opening. It totally fit the atmosphere.
Plenty of games add an elevator or a scripted vehicle ride, or some ridiculous door locking mechanism that serves the same purpose without breaking immersion, especially as those faux-loading screens can be dynamic.
It's pretty much the exact same technique used in cinema when a director wants to stitch multiple takes into a single shot (e.g. that episode in True Detective; that other one in Mr Robot; all of Birdman).
Flash is good. If the state transition is "no indicator -> spinner -> checkmark", then if the user notices the spinner flashing for one frame, that only assures them the task was actually performed.
It's a real case, actually. I don't remember the name, but I've encountered this situation in the past, and that brief flash of an "in progress" marker was what I used to determine whether clicking a "retry" button actually did something, or whether the input was just ignored. It's one of those unexpected benefits of predictability in UI coding; the fewer special cases there are, the better.
> In fact, there are so many occasions where the optimisation is so good that the end user doesn't believe that anything happened. So you have to deliberately introduce delay because a computer has to feel like it thinks the same way you do.
I see this argument coming up a lot, but this can be solved by better UX. Making things slow on purpose is just designers/developers being lazy.
Btw users feeling uneasy when something is "too fast" is an indictment of everything else being too damn slow. :D
I wonder how this trend will be affected by the slowing of Moore’s law. There will always be demand for more compute, and until now that’s largely been met with improvements in hardware. When that becomes less true, software optimization may become more valuable.
I use webpages for most of the social networking platforms such as Facebook. I am left handed and scroll with my left thumb (left half of the screen). I have accidentally ‘liked’ people’s posts, sent accidental friend requests only because of this reason.
I'm guessing that, along with language selection, it might be helpful to have a hand-preference setting for mobile browsing.
> admittedly feature bloat, ads, tracking, etc. are also a problem, but I think they're mostly orthogonal to under-the-hood performance
I think for webpages it is the opposite: non-orthogonal in most cases.
If you disable your JS/Ad/...-blocker, and go to pages like Reddit, it is definitely slower and the CPU spikes. Even with a blocker, the page still does a thousand things in the first-party scripts (like tracking mouse movements and such) that slow everything down a lot.
I don't know, that just feels wrong. If anything, the rise of mobile means there should be more emphasis on speed. All the bloat is because of misguided aesthetics (which all look the same, as if designers move between companies every year, which they do) and fanciness. Can you point to a newish app that is clearly better than its predecessor?
> All the bloat is because of misguided aesthetics (which all look the same, as if designers move between companies every year, which they do) and fanciness
That's not really true. Slack could be just as pretty and a fraction of the weight, if they hadn't used Electron.
I think there are two factors preventing mobile from being a force to drive performance optimizations.
One, phone OSes are being designed for single-tasked use. Outside of alarms and notifications in the background (which tend to be routed through a common service), the user can see just one app at a time, and mobile OSes actively restrict background activity of other apps. So every application can get away with the assumption that it's the sole owner of the phone's resources.
Two, given the above, the most noticeable problem is now power usage. As Moore's law has all but evaporated for single-threaded performance, hardware is now being upgraded for multicore and (important here) power performance. So apps can get away with poor engineering, because every new generation of smartphones has a more power-efficient CPU, so the lifetime on single charge doesn't degrade.
I think objections like this may be put in terms of measurable cost-benefits but they often come down to the feeling of wasted time and effort involved in writing, reading and understanding garbage software.
Moreover, the same cost-equation that produces software that is much less efficient than it could be produces software that might be usable for its purpose (barely) but is much more ugly, confusing, and buggy than it needs to be.
That equation is: add the needed features, sell the software first, get lock-in, milk it 'till it dies, and move on. That equation is locally cost-efficient. Locally, that wins, and that produces the world we see every day.
Maybe, the lack of craftsmanship, the lack of doing one's activity well, is simply inevitable. Or maybe the race to the bottom is going to kill us - see the Boeing 737 Max as perhaps food for thought (not that software as such was to blame there but the quality issue was there).
The analogy is wrong as well because a car engine is used for a single purpose, moving the car itself. Imagine if you had an engine that powered a hundred cars instead, but a lot of those cars were unoptimized so you can only run two cars at a time instead of the theoretical 100.
or... something.
The car analogy does remind me of one I read a while ago, comparing cars and their cost and performance with CPUs.
>And build times? Nobody thinks compiler that works minutes or even hours is a problem. What happened to “programmer’s time is more important”? Almost all compilers, pre- and post-processors add significant, sometimes disastrous time tax to your build without providing proportionally substantial benefits.
FWIW, I did RTFA (top to bottom) before commenting. I chose to reply to some parts of the article and not others, especially the parts I felt were particularly hyperbolic.
Anecdotally, in my career I've never had to compile something myself that took longer than a few minutes (but maybe if you work on the Linux kernel or some other big project, you have; or maybe I've just been lucky to mainly use toolchains that avoid the pitfalls here). I would definitely consider it a problem if my compiler runs regularly took O(10mins), and would probably consider looking for optimizations or alternatives at that point. I've also benefited immensely from a lot of the analysis tools that are built into the toolchains that I use, and I have no doubt that most or all of them have saved me more pain than they've caused me.
Then you're being disingenuous in picking a quarter of the quote.
>You’ve probably heard this mantra: “Programmer time is more expensive than computer time.” What it means basically is that we’re wasting computers at an unprecedented scale. Would you buy a car if it eats 100 liters per 100 kilometers? How about 1000 liters? With computers, we do that all the time.
The point is that we are wasting all the resources at every scale. We are supposedly burning computer cycles because developer time is more important. Yet we are also burning developer time with compiling, or testing for interpreted languages, at a rate that is starting to approach the batch processing days.
Haha, that's a great link. I actually laughed out loud at how ridiculous his comment sounds.
I used to work on a team at Amazon that was _very_ relieved and happy to move away from Oracle and onto the AWS databases. I wasn't directly involved but I understand the migration work was monstrous. I think it's clear from Ellison's comment that Oracle considers that to be a product feature.