Today, the job of a computer is everything. Computers are in everything and do everything. They are our interface to the world. Their value is what they enable - writing, tax returns, video consumption, gaming and everything else - not the fact that they're a 'computer'.
For those of us who want a computer to tinker with, we've never had it so good. There are more options for hackable tech than there have ever been. Just because most people don't want to tinker doesn't mean they're wrong; it just means they have different priorities.
I love that I don't have to muck about with my iPhone to get it to work just as much as I love mucking about with my Raspberry Pi to get it to work.
I have changes I want to make to my iPhone. Some of them would be as easy as adding a trigger to an SQLite database that a stock "app" uses. I'm not "allowed" to.
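For anyone who hasn't used them, that really is a one-statement change. A sketch using Python's built-in sqlite3 module, with a made-up `messages`/`audit_log` schema standing in for whatever a stock app actually ships:

```python
import sqlite3

# Hypothetical schema standing in for a stock app's database;
# the real table and column names would of course differ.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE messages (id INTEGER PRIMARY KEY, body TEXT);
    CREATE TABLE audit_log (msg_id INTEGER, logged_at TEXT);

    -- The whole customization: log every new message automatically.
    CREATE TRIGGER log_new_message AFTER INSERT ON messages
    BEGIN
        INSERT INTO audit_log VALUES (NEW.id, datetime('now'));
    END;
""")

con.execute("INSERT INTO messages (body) VALUES ('hello')")
rows = con.execute("SELECT msg_id FROM audit_log").fetchall()
print(rows)  # [(1,)]
```

The trigger fires inside the database itself, so the app's own code never needs to change - which is exactly why it's frustrating that the file is off-limits on a locked-down device.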
I don't think companies should be legally permitted to take away an owner's right to do what they please with their own property. Locked bootloaders, e-fuses, crypto keys buried inside hardware, and the like should all be overridable via a physical interlock. (Like Google did with Chromebooks, for example.)
Flip that switch and you lose the "walled garden". Okay. So be it. No running software that "trusts" my device (which is, arguably, misplaced trust anyway). No "app store" for me. I can live with that. It's my device and my choice to make. I get to make the call to give up "security" for freedom.
It's morally wrong to take away the ability of the owner to do what they wish with their devices. It should be illegal too.
There's a reason I don't have any Apple device: they don't make anything that I want. It seems pretty pointless to mandate that all companies must sell products that I like.
The law should make it "their business". I don't believe it should be possible to waive your right to legally do what you want with your property.
(I have similar feelings about restrictive covenants in the conveyance of real property, for example.)
Since we're editing here:
> It seems pretty pointless to mandate that all companies must sell products that I like.
We mandate lots of consumer protections. This seems no different to me.
Companies make and sell products for specific use-cases all the time. Saying they can't sell something unless they also make the product able to break away from that specific use case, regardless of the time and cost to the company and consumers of building parallel systems or whatever the lift may be, doesn't sound feasible to me.
I don't know, I'm on your side that I should be able to dismantle my iPhone and use the parts to my heart's content if I want to. But I'm not sure that it's something Apple has to build into the system just in case someone feels like doing it, especially since I could buy any number of other computing devices that might work just as well.
I am being dogmatic. I can't think objectively about this because I have yet to see how it benefits society to absolutely remove the rights of owners to legally do what they please with their property (at least in terms of chattels).
Most people are interested in solving a task, and their device helps them with this. They are uninterested in general-purpose computers. And there's no shortage of them for the rest of us! Android devices are the obvious choice here, with their user-exposed flashable ROMs (not just the OS; every ROM is exposed through fastboot). Or PinePhone. Or whatever Firefox is doing. You've got choices for tinkerers and general-purpose computation; it just doesn't live in a device with an Apple logo.
This is disingenuous. Apple is not being called out for failing to create desirable functionality for iPhones. Rather Apple (et al) are being criticized for deliberately acting to purposefully constrain functionality. The first criticism would be asking manufacturers to do work to improve their products. The second criticism is telling them to stop crippling devices they sell.
You're reversing the burden. Apple is spending a lot of money to NOT make it possible to tinker with it.
No, because their failure to do this has no structural consequences for the market in ovens or space heaters. Apple’s control over the market for software that can run on 2 billion iOS devices, on the other hand, is a big deal. Some think it’s Apple’s just deserts for creating a platform that users like; others think the government should end that control using the antitrust laws passed to limit corporate power at this scale.
Now there are some real scummy practices that Apple uses, and those should be regulated. Software that detects tampering and shuts down the device? Sure, get rid of it. Some vital service they can (and do) shut down at any time, bricking the device? Regulate it to nothing. But ultimately 'do what you want with your property' is completely different from 'sell whatever I personally want in a device regardless of how others feel'. Apple sells a walled garden by design, and buying one of their devices pretty much entails realizing that.
I am not mature, informed, or articulate enough to argue about this objectively. I cannot conceive of how society benefits from taking away the rights of owners.
Both of those are fairly valid complaints though. Intentionally making a device hard to access or repair is almost as bad as not allowing it at all. It shows a concerted effort to stop repairs, or to make them as difficult for consumers as possible.
That's basically the definition of anti-consumer practices.
It in no way benefits me to be unable to open something with a standard screwdriver, and instead have to either purchase an expensive proprietary tool or, at worst, take it to an 'authorized' repair shop that has the license to use the proprietary tool.
Occasionally I find myself thinking I'd like to own an Apple product, eg when I run across a cool iPad app or something, but this rarely persists longer than the few minutes it takes me to remember that I don't want to reorient around the Apple ecosystem.
If you were not motivated enough to research the options available to you, you cannot in good faith argue that Apple did anything wrong here.
In every "jailbreak" case, the manufacturer's intention was to make owner control of devices practically impossible. Jailbreaks are about exploiting bugs. The manufacturer's intention remains the same: to remove owner control. Just because Apple made mistakes in their implementation of their architecture of control doesn't change their intention.
re: impractical for individuals - Try un-blowing an e-fuse in a Nintendo Switch. That's practically impossible. That's the kind of protections I'm talking about.
The problem with this argument is the same as any essential utility. At some point, it becomes impossible to live a normal life without access to a certain facility. If access is only available through a monopoly provider or a small enough number of providers that they tend to move the market in unison even if not actively collaborating, that can become a serious problem.
This is why we have government regulation of essential markets. Regulators might impose pricing limits in violation of the "free market", to prevent exploitation of vulnerable customers (and in this context, remember that most or all customers might be in that category) where competition fails to do so effectively on its own. But laws and regulations can also impose other safeguards, such as requiring honest advertising, adequate privacy and security protections, or interoperability.
I think it is no longer credible, at least in my country (the UK) and others like it, to argue that a modern smartphone is a luxury. People use their phones to access government services, shops and home delivery services, banks and financial services. They are by far the dominant communication device of our age, not just for calls but also texts, emails, numerous Internet-based communications channels. Some people no longer have any other reasonably convenient means to access those services and communications. Some of those services and channels are provided exclusively via mobile apps and simply aren't available to those who don't have a phone (which is in itself a problem, because obviously not everyone does, and this is making things very difficult for some demographics during the strange times we are living in).
This being the case, it is reasonable to argue that people should not have to choose between two dominant ecosystems for their phone when both have serious problems in areas like privacy, security, reliability and data lock-in, some of which are a direct result of the interests of the providers of those two dominant platforms. Most people have no realistic ability to protect themselves from those risks, or any such ability relies essentially on luck (for example, on the availability of jailbreaks and on essential apps continuing to work after a jailbreak has been applied).
What's more, it doesn't seem to me that cell phone manufacturers are currently moving the market in unison. There are still more and less open devices, just as there are more and less expensive ones. Perhaps we're moving slowly towards that, but we're certainly a long way away as of now.
My primary issue is the difference between ensuring that no one has to get screwed and ensuring that no one can get a product they want. So long as there are more open devices, no one is compelled to use an iPhone. Anyone who wants to be protected from big bad Apple may simply refrain from paying them. A utility is generally regulated not because of its vitality, but because of inherent restrictions on consumer choice. Unless it becomes impossible to just not buy Apple's shitty locked-down hardware, it doesn't make sense to constrict the market.
No idea about Google Pay, it's never even occurred to me to try it.
We are in a similar place in the mobile world, where life is getting increasingly reliant on electronic luxuries. If you want to take advantage of the convenient electric scooter or rideshare, you need an app. If you want to cash a check without going to a bank or paying a fee with a third party like a grocery store, you need an app. If you want to get early warnings from the city and/or state about earthquakes of all things, you need an app (MyShake). And for all those apps, you can't build from source on your own hardware. There is rarely a mobile website alternative. You need to get them from one of two centralized and moderated marketplaces, where only sanctioned and licensed devices may participate. You could make a stand here with a dumb phone and yell at the clouds that the world is this way, and it really is bad that it is this way, but you'd realistically be living a decade in the past and miss out on all this useful functionality.
Following your weird examples the parent's comment is like saying you don't have to buy a *Ford* car, or a *Carrier* AC. There are many more car and AC options if you don't like Ford or Carrier.
Same goes for Apple; I would never use an Apple device, because I like to have control over my technology. Somehow, despite my amazingly brave stand against tyranny and capitalism, I still have a phone, running a LineageOS install that took me 20 minutes and an online guide to get running years ago. On the other hand, if I wanted to give my grandma a device I was confident she couldn't somehow force into a bad state, I would like it to be legal to sell me one.
Most people I know who are any good at programming started learning about it by modifying something either on their own computers (pre-2000) or on the web (post-2000). They took things they found interesting or useful and somehow introspected and changed them. Good chunks of the skills gained that way were transferable to a professional environment.
Today, this is not how people get into tech. There is an ever increasing gap between technologies used for professional computing and things that are observable and modifiable by a normal person out there.
Curiosity and experimentation have been replaced by (appropriately named) coding bootcamps.
However, I'm not convinced that the absolute number has decreased. It's completely possible that larger numbers than ever of people are getting into tech as an extension of their own curiosity, and are simply less visible due to being outnumbered by the masses from bootcamps.
As a kid I got into programming in BASIC on my Acorn Archimedes because I had a book on BASIC. However, I never got further than that because I didn't have access to any more advanced programming books.
Now, all the information about everything is available within a few minutes of searching.
Imagine being able to quickly and easily add your own custom programs to your iPhone without asking anyone's permission though? Also, what if - as the owner of that device - you had full control over its capabilities, like you do with a desktop computer? Adding these abilities certainly would not be the downfall of phone ecosystems as we know them. People that want convenience wouldn't leave their app store.
The fact that they are our interface to the world is exactly the reason why we need to take control of them away from the Apple/Google oligarchy. People need to stop comparing these devices to gaming consoles, which don't impact the world even a fraction as much as smartphones, when we're talking about making new rules for running smartphone platforms.
Cool? I'm a software engineer and I have no desire to do this. The number of people who care about this feature is vanishingly low. I think it is reasonable to lament losing a particular feature for tinkerers, but it becomes unreasonable to demand that businesses cater to this very small niche.
> Adding these abilities certainly would not be the downfall of phone ecosystems as we know them. People that want convenience wouldn't leave their app store.
Downfall? No. But locked down systems are a somewhat effective way of keeping badware off of devices for the masses.
If you're not advocating self reliance and self governance for people, you're not thinking long term. It's historically proven bad advice to willingly give all control to central powers.
> ...locked down systems are a somewhat effective way of keeping badware off of devices for the masses.
Developers are not the masses. The masses aren't installing the necessary tools and compiling their own programs and they never will. I'd settle for signing a written agreement with Apple and waiting a week for approval when I buy my phone so that I can actually control my own device.
I'm not arguing for letting me put my apps on everyone's iPhones, just mine.
I don't advocate removing all safety checks from phones, but making somebody jump through 15 hoops to install a non-corporate-approved app is stupid.
Secondly, these companies stop a lot of badware, but they also act as gatekeepers to stop anti-establishmentware that may be for the public benefit at the detriment of the gatekeepers' stranglehold on power.
In other words, these devices are systems of perpetuating the status-quo and safeguarding the profits of the ones making them. This in itself I'm not upset about: of course capital seeks to cement its own power. The real issue I have is how much potential is destroyed as collateral damage.
General purpose computers are great for people who know what they’re doing (although there are some _really_ good phishing scams out there that can even fool trained people), but they’re an absolute disaster for the vast majority of people who _don’t_ become specialists in computer security.
I _just_ had a support ticket opened by someone who goes between different stores for a client…and this man sent his password in the clear in the support ticket. This man isn’t an idiot, it’s literally _not_ in his job description to be a computer security expert (his job has to do with hardware sales or lumber or something like that).
This man would be better served with two things we’re building later this year (AD integration and an Android version of the employee app), because then he can just log in with the same (probably simple and insecure) password that he uses for his Windows log-in at work. He is the type of person that, when told to do something, would simply install program X because someone told him to do so (never mind that program X is actually malware).
So no, while I think that Apple's signing restrictions are a little on the draconian side, I don't think that Windows 95-like "permissions" are what most people want or need.
I don't buy it.
> I don’t think that Windows 95-like “permissions” are what most people want or need.
That's an extremely bad faith take on my argument because there's a very wide spectrum of possibilities between the Windows 95 free-for-all and what we have with iOS. You're presenting a false dichotomy.
For one thing, iOS could easily put up tons of scary warnings before letting you sideload things. That would be enough to dissuade most people from doing it. However, I'd be willing to go to extremes to get control over the devices which I supposedly own. Make me come into the Apple store and sign away any rights to a warranty from Apple. Make me pay extra. Whatever you want - just don't put every single user in prison because a likely majority of people can't handle making good decisions on what software to install.
The idea that we must remove any and all control from users to protect the innocent is just as bad as the idea to have a War on Drugs - and I highly suspect that these preventions are actually in place for the same reason: to actually control people and rake in profits, not to protect them.
You say that Apple can make you pay extra. OK. Here’s a $99/year developer contract with which you can develop and install software that you want as you want (I believe those builds are good for ~90 days, so you recompile/reinstall every 90 days, unlike the 7 days previously mentioned). But people don’t _like_ that and have said that’s unfair.
What I am saying is that there’s no _meaningful_ way we can put up a fence to keep people who _shouldn’t_ be running random apps from doing so. We can’t keep users from clicking on _links_ that they shouldn’t be clicking on. Just this morning, I had a neighbour ask me about one of those full-screen “WARNING FROM MICROSOFT YOUR COMPUTER IS INFECTED” pop-ups; even though she was smart enough not to click on anything, she _still_ copied down the phone number to maybe call the scammer. My father, a couple of weeks ago, didn’t know about Ctrl-W / Command-W on a similar full-screen hijack that had affected both his Chromebook (locked down) and my mom’s MacBook (mostly not locked down).
If you can crack that, then there’s going to be a lot of people who will be at your door to reward you…and then many more looking for the backdoors you left so that they can continue to infiltrate systems for their own rewards (whether state actors or criminal actors).
Unless you were talking about jailbreaking.
Freedom to modify is not the same as zero cost. You paid for the iPhone hardware, after all. Just pretend it cost $99 more and came with the right to load your own apps.
The root cause is the complete lack of facilities to quickly and simply delegate capabilities to applications. There is no way to tell the OS, give this file to this application. Instead of trusting nothing, and providing the PowerBox facility to allow the user to do this, all the popular OSs just trust the applications with everything that the user has permissions to access, by default.
If you want to pay for a purchase at a store with cash, you carry a number of units of currency... and you only give the clerk the appropriate amount, and you can count the change to verify the transaction is correct.
Giving you, a person, access to your cash is a solved problem. 8)
There is no equivalent way to work with an application on mainstream OSs. This is collective insanity, which I've been calling out for more than a decade.
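The cash analogy can be sketched in code. Under ambient authority, an app handed a filename can open anything the user can; under a capability discipline, a trusted picker (the "powerbox") opens the one file the user chose and hands over only that handle. This is an illustrative toy, not any real OS API - `powerbox_pick` and `untrusted_app` are made-up names:

```python
import os
import tempfile

def untrusted_app(handle):
    # The app receives only an open file handle (a capability).
    # It can read this one file but has no way to name or open others.
    return handle.read()

def powerbox_pick(path):
    # Stand-in for a trusted system file picker: the *user's* choice
    # grants access; the app never sees the whole filesystem.
    return open(path, "r")

# The user picks a single file; only that capability is delegated.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("just this file")
    chosen = f.name

result = untrusted_app(powerbox_pick(chosen))
os.unlink(chosen)
print(result)  # just this file
```

The point is that nothing in `untrusted_app` needs to be trusted: the least authority it can work with is the most authority it ever gets.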
I would argue that at least some of those holes were indeed imagined, that’s why iOS works the way it does and not the way parent would like.
Or even better, imagine being able to quickly and easily add your own custom programs to someone else’s iPhone, without asking anyone’s permission though?
That’s kind of the problem, isn’t it?
But those aren't the same thing. Me being able to install what I want on my phone doesn't give me the ability to install it on other people's phones.
I currently have the ability to install apps on my phone from the app store, but I can't make that decision for someone else's phone. So why would it be different for software that doesn't come from the app store?
Obviously this is a bit self-serving on Apple’s part - restricting software installs begins to feel like a protection racket. But it still serves as a security perimeter.
Yes, there's a security risk, but what's really being said here is "we're going to decide what you're allowed to do so you won't hurt yourself".
And I guess I just don't think that should be their decision to make.
Or to put it another way: Apple can keep the app store, and even make it the default, but there should be a setting to allow installing things from outside the app store. It can even be behind a few scary popups warning the user of the danger.
They certainly didn't make a decision to install my malware in that apk!
I'm sure you won't hurt yourself - most iPhone users will, though.
Yeah, I agree with you - as long as it's not something as simple as a click-through past a few worthless scary warnings that everyone ignores anyway, the kind that exposes non-technical users to bogus certificates and downloaded malware on Windows and Android. If it were something like macOS's System Integrity Protection deactivation, where you have to restart the whole system and execute a command through some obscure interface, it could work.
I disagree to an extent. While the hardware is much, MUCH faster, more efficient, and more reliable... it isn't something we can tinker with anymore.
Would you feel safe picking up a random USB drive and running the programs on it? A huge chunk of the fun of early PCs is gone. You can't just find new stuff and try it out to see how well it works.
If it's open source, you get dependency hell, if it works at all. Plus there could be any number of backdoors or bugs in it waiting to subvert your system. Plus the ever-present threat of having all of your passwords and/or data exfiltrated to who-knows-where.
In the days of 2 floppy disk machines, none of this was a worry.
Sure, why not? Just don't run it on the same computer that you access your sensitive data on. Given that you can buy a fully functional and tinker-friendly computer for like 20 bucks, this seems like a pretty straightforward solution.
It really isn't the same. You always have to worry these days.
It's similar to the fact that going into a crowd was no big deal in the past, but now it is.
Not to mention, sticking random floppies in your main computer was never exactly safe to begin with. The heyday of the Michelangelo virus was 1991, if memory serves.
Because the hardware could have something planted in it... and if what you're tinkering with turns out to be useful, then what? Then you have to spend $20 for another computer that you can trust with that one little thing.
Yes, in the days of MS-DOS, there were viruses spread by floppy disk, but those could be guarded against fairly easily. You could always start fresh with a clean copy of your OS and use it to clean up the mess.
A better way is to have an OS that protects the hardware, and itself. Then you can have a single computer for everything, without having to trust any piece of application code, ever.
I was very far off, geographically, growing up a teenager in India. But back then, 1998 - 2006 ish, I feel the Internet was a lot more about experiments. I would read up on some software or hardware group all the time. LUGs were way popular. I used to be on chat rooms from MIT or other unis every other day.
Nowadays most people I see in that age group are happy with a YouTube video on their smartphones. The general-purpose nature of computers is simply not visible, not even to the level that I, a curious teenager, experienced in those years from so far away in India.
We have too many tall-walled gardens now.
Update: I should add that I am happy with how powerful computers are these days, but they are not for learning about or tinkering with the actual device or its software. They are just a way for the mainstream to consume media.
Note that profit-driven entities can give away software, but only if it supports another artificially scarce good. Google rents space on its search tool, as Facebook rents space on its social media tool. Ad space is scarce, and made more valuable the more Google and Facebook give away (which is proportional to the attention they capture).
But devs generally don't like artificial scarcity, so the platforms need a path in for them. I think Apple demonstrates this most clearly: the iPad and iPhone are software consumption devices, but the Mac is also a software creation device, and therefore is generally more open, free, and changeable.
You remind me of Cory Doctorow's "The Coming War on General Computation" keynote, which interestingly didn't notice that streaming and similar cloud-based services would become the DRM of the last decade.
Or, as it turns out, the rising appeal of non-general purpose devices that are more efficient.
This is wrong, and every open-source project including OpenBSD and Linux, emacs and vim is counter-evidence.
"A lot of "software innovation" is not about building better software, but building software you can charge for" -- this is correct.
I maintain an open source project that sits in a very small niche. Based on downloads, stars on GitHub, and comments on forums, I estimate that hundreds or thousands of people find it useful. I do get the occasional donation, but if I calculated dollars per hour that I have put into this project, it would probably be in the single digits.
In much the same way that a company can produce an open-source product and charge for support, feature development, consulting, data feeds and sometimes SaaS operation.
If that were the case no software would be written except by hobbyists as a kind of joke. Software provides value other than being scarce and it's really the maintenance of the software that people are wanting (those paying attention at least) not the software itself.
Worthless to whom? Maybe for corporations who want to make billions by exploiting a legal monopoly. For the rest of humanity, abundant software is extremely valuable.
We didn’t lose the hacker spirit. Go look at hackaday.com to see what people are working on. We simply gained an audience.
My point is, video games were my gateway to programming. I got into mods and then expanded from there. I'm sure I am not alone. It's these natural paths that spark curiosity which are being cut off.
Combine this with popular and easy game making software like RPGMaker, or even the ease that someone can set up a basic game on Unity. Beyond this, there are programs like Scratch and Processing that allow for fun and relatively easy computational tinkering.
I doubt the natural paths of curiosity have shrunk, though they may look different. Rather, the number of people using computers has massively increased, and the number of computer "tinkerers" is just a smaller proportion of people of the larger whole.
My point is that there is a small percentage of the population who are makers and a large percentage who are consumers. The Internet of 30 years ago was mostly populated by the makers so it felt like everyone was a maker. Now it is much more representative of the real public because the consumers joined. It now feels like it’s mostly consumers because in the world’s population consumers outnumber makers by a large percentage. However, and this is the crux of my point, the absolute number of makers did not go down and in fact went up. You won’t find many makers in your Facebook feed, in relative numbers. But maker communities are bigger, better, easier to find, and easier to join than ever before. Just because consumers have joined the makers does not mean that things are worse. They are better. The rest is nostalgia.
1. You can find any information you want, in an instant.
2. Hardware is very cheap, computers can be tiny, and you can order components from anywhere.
3. Robotics and electronics are way cooler now, with drones etc.
4. Software development is way more productive with all the libraries and tools we have right now.
5. When I was a kid, games were made by 1 or a few people. It quickly evolved into big studios. But nowadays, a solo or tiny team can release super successful games.
In the past I could get any component I might need from a local store in 20 minutes. Now I've got a two-week to three-month turnaround on shipping from Shenzhen.
Not many people are, so local stores aren't as common.
I really miss having a place (back then we even had a dozen of them within a few km) with a huge selection carrying every component you might ever need, and with enough technical expertise that you could show your circuit diagram and get feedback on how to improve it. This is something only seen in Shenzhen today.
Now I just don't do any small projects anymore, as there are no stores left in the entire state, and the extreme premium in shipping cost and waiting time on AliExpress just isn't worth it anymore.
That sounds very atypical. I live outside a major US metro and I know of one good computer store in the area (Micro Center). I imagine there are a few small places that carry electronic components, but I'm not sure how many. And it wasn't much different 10 to 20 years ago, unless you count RadioShack, which wasn't really all that great.
There would be a queue of people waiting on a Saturday to give their handwritten list of components to an employee to go find.
Well. No. RadioShack is closed. Fry's is closed. If I needed a resistor, I honestly wouldn't know where to go. Probably AliExpress or DigiKey. There are no local stores left.
And companies sending parts remotely have always existed; all that has happened is that some people now have less access.
I meant the current trend of digital products which are, in my opinion, more like a TV than "computers".
Interesting that you mentioned you've moved to woodworking, as I found a way to rekindle my curiosity by exploring older technology as well. I've dived headfirst into antique clock and watch repair.
If you have any resources you can share/recommend, I'd be very curious also ;-)
Personally I started with restoring discarded and thrifted furniture. The initial outlay was just for some glue, varnish, and sandpaper.
That is true. But I really wonder if the hacker/consumer ratio has shifted over the years or not.
And games will look exponentially better considering the tools we have at our disposal now. Also, selling millions of units of one game is "possible" even for solo developers (though it's obviously a superstar kind of probability). In the '90s, reaching that kind of market was impossible without the support of a huge corporation.
I have been using laptops that cannot be easily opened/upgraded for the last 4-5 years. Devices are being made mainly for people to consume media, and that's it. I think computers of the earlier era and now are not at all comparable. The devices now are not really "computers" as they used to be. They are opaque all-soldered-and-sealed boxes. How will you inspire anyone to tinker with them?
Adding a graphics card to a box was such a cool thing for all of my gamer pals. Video cards, audio cards, or any upgrades would naturally involve casual talks about tech. Who does that these days, unless you own and upgrade a desktop?
Does that mean we've sadly come to the end of the era of hobbyist friendly alarm clocks? Maybe, but so what? We then got computers instead. Now we have Raspberry Pis for kids who see cellphones as boring black boxes. There's always something, and it's fine if it's not the same thing it used to be. The same thing happened with cameras - people used to build their own cameras out of bits of wood and develop their own pictures from glass plates, but cheap film cameras eliminated that hobby from the mainstream. It's fine. Technology moves on. Now we have 3D printers that enable kids to make things they never could have before without access to an expensive milling machine.
But we've still got Linux and a lot of hardware support for it. It'll likely be the last bastion of general-purpose computing.
That was as good as it got. The difference between a beginner dabbling with his first PRINT loop and an advanced assembly language programmer wasn't that great because the machine was so limited; you could go from one extreme to the other in skill in a year or two if you were interested enough. My natural response to seeing the "MISER" adventure, after solving it of course, was "I have all the programming skills to make a game like that" - and I did so.
And while then as now, only <1% of the general population was interested enough to get good at programming these things, another 10-20% was interested enough to hang around, dabble, try the latest cool programs that came down the pipe, or were made by the 1%. I had people playing a game that I made that consisted of one (long) line of BASIC.
Then, it seemed, most of the rest of the population (the other 80-90%) got computers too, Commodore 64s mostly where I lived. And still, even if only a tiny minority actually programmed their own stuff, it felt like a part of a vibrant scene, you could always show off your stuff.
I believe there is hope of making cool graphics effects with GPU shaders. GPU tech is still advancing similar to how CPUs were back then, so people have not yet explored all the possibilities of the hardware. You can do impressive things just by experimenting even if you don't have a lot of theoretical knowledge. Sites like shadertoy.com make it easy to get started.
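The per-pixel model behind sites like shadertoy.com is easy to sketch: one small function maps pixel coordinates (and time) to a color, and the GPU runs it for millions of pixels in parallel. A toy CPU-side sketch in plain Python (the real thing would be a GLSL fragment shader; the specific sine-wave "plasma" formula here is just an illustrative choice):

```python
import math

def shade(x, y, t):
    """Toy 'fragment shader': map normalized pixel coords (x, y in [0, 1])
    and a time value t to an RGB color. Phase offsets per channel give a
    shifting plasma-like pattern."""
    r = 0.5 + 0.5 * math.sin(10.0 * x + t)
    g = 0.5 + 0.5 * math.sin(10.0 * y + t + 2.0)
    b = 0.5 + 0.5 * math.sin(10.0 * (x + y) + t + 4.0)
    return (r, g, b)

# "Render" a tiny 4x4 frame by evaluating the shader at every pixel.
width = height = 4
frame = [[shade(px / width, py / height, t=0.0)
          for px in range(width)] for py in range(height)]
```

On a GPU, the outer loops disappear: every pixel's `shade` call runs concurrently, which is why even brute-force experiments stay interactive.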
Every modern computing machine I know allows you to add a very rich set of coding tools, other than unjailbroken mobile devices perhaps. Yeah, poor access to mobile sucks, but if you feel the need to hack, you won't let walled gardens stop you.
Web scraping is an insanely rich source of data for building cool tools. Then you might add a bit of simple machine learning, like a voice-based interface or a GAN-based video face-melter or visualizing web-scraped results on a map. These sorts of tricks were hard or impossible to do 10 or 20 years ago. But not today.
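The core move in most scraping pipelines is just pulling structured data out of HTML. A minimal stdlib-only sketch (run on an inline snippet here; a real scraper would first fetch pages with `urllib` or a third-party HTTP library):

```python
from html.parser import HTMLParser

class LinkScraper(HTMLParser):
    """Collect (href, text) pairs from anchor tags."""
    def __init__(self):
        super().__init__()
        self.links = []
        self._href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None

html = '<ul><li><a href="/a">First</a></li><li><a href="/b">Second</a></li></ul>'
scraper = LinkScraper()
scraper.feed(html)
# scraper.links now holds [('/a', 'First'), ('/b', 'Second')]
```

From there, the scraped pairs can be fed straight into whatever visualization or ML layer you like.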
If you want to hack, I'd start by immersing yourself in Linux. Or Raspberry Pi. Better yet, both.
I can personally attest to what a massive turn-off this is. I grew up in an age when computers were a part of everyday life, but their inner workings were hidden behind a mountain of ugly obfuscation. If you find the rare trivial task like making stamped websites fascinating, good for you. But for people like me who couldn't care less, it takes some sort of external impetus to actually discover their interest in computing. In my case it was a half-assed mandatory programming class in engineering school, where I found out I had a talent for it and discovered an interest in the inner workings of things I had been taking for granted all my life.
Just because resources are easy to find doesn't mean anybody cares for finding them.
So from a learning-experience perspective, a kid tinkering with Linux might gain more insight into how computers work than, say, one tinkering on Windows or Mac.
Some things are worse, but some things got infinitely better.
Walled gardens and specialization are somewhat different concepts. The former is primarily concerned with restricting how hardware is used, while the latter is mostly concerned with optimizing hardware for specific applications. While there are times when these two concerns can overlap, this need not be the case.
This can be seen historically. Early microprocessors were developed for integer calculations, with floating point operations being done in software or with an optional floating point coprocessor. Floating point was integrated, and became a fundamental part of our notion of general purpose microprocessors, later in the game. The path taken in computer graphics was much more complex, but it is also worth noting that it started off as specialized hardware. Experimentation resulted in GPUs becoming a generalized tool. Recent developments have encouraged them to diverge into a multitude of different specialized tools. Both of these examples illustrate how specialization has broadened what can be done, without creating an environment that restricts what can be done.
While I agree that walled gardens encourage media consumption at the expense of experimentation or creation, this is more of a cultural thing. There is far more money to be made in the former than the latter, so that is what the market focuses upon.
YouTube is a fantastic resource for any hobby or subject. It's like a worldwide university.
> Not even to the level that I, a curious teenager, had experienced in those years
I think therein lies the rub. Teenagers curious about computers aren't very common. And even back then (early 2000s), my experience in Europe was the same. Most of the people in my school didn't really care about computers. They had other interests and pursuits.
The difference today is that computers are much more widespread because they allow people to do much more diverse things. At the time, they would maybe type up a report or something and that's it. There wasn't Facebook et al., so many people didn't spend any significant amount of time in front of the computer. Idle time would mostly be spent in front of the TV or hanging out with friends.
I think the reason this perception comes up is that at the time, people who "spent time on computers" were doing so out of curiosity and interest in learning about them. So the shortcut is that since now many more people are spending a good chunk of their day in front of the computer, that must mean they're as interested in them as the "curious teenagers" of our time. That's simply not true. Just look at what they do on the computer. It's mostly idle scrolling on some form of social media.
And, to address the subject of the OP, I'm also wondering whether there's an actual decline in the "general-purposeness" of the computer, as opposed to there just being more "non general-purpose" computers.
There's a definite decline in the percentage, and maybe even in absolute numbers if we look at people who used to use computers because they had no alternative but who are now better served by smartphones or tablets and also for uses which need "specialized processors" (say ML).
I draw a parallel to my feeling whenever I see discussions about iOS being locked down and people expecting the whole of the HN crowd to be up in arms about it. In my case, I was one of those curious teenagers of yore. I still love tinkering with computers, etc. But phones? I just can't be bothered to care about them. Can I make calls and check some maps? It's great!
I have an iPhone and treat it pretty much like a dishwasher. I barely have any apps installed. Neither allows me to install whatever random app I want. It's not an issue for my dishwasher, and frankly, nor is it for my iPhone. Whatever "general computing" I feel like doing, I prefer doing it on an actual computer.
Of course, there are other philosophical considerations in this discussion which I understand and can get behind, but the point is that, to many people, a computer is an appliance. They want it to do certain things. Does it do that? Great! If it doesn't, it's not fit for purpose. They simply do not care if some guy somewhere would like to do something with it but can't because Microsoft / Apple are locking it down.
 I went to school in the suburbs of Paris, so many of the kids' parents were pretty well-off and many were even working in computer-related fields, so there wasn't any access problem. Practically all of them actually had at least one up-to-date computer at home.
Computers are an idealised example of an organised system which is assembled/operated by solving puzzles. Some people enjoy solving those kinds of puzzles. Most people don't. The first kind of person doesn't really understand the second kind of person - and vice versa.
Something interesting and disturbing has been happening over the last twenty years. The Internet stopped being a technological puzzle and became a social and political puzzle.
This kind of "cultural engineering" - where the currency is attention, influence, patronage, and sometimes actual spending, not puzzle prowess - appeals to a completely different kind of person. And these people have been driving the development of the Internet for a while now.
People who like solving computer puzzles have been completely blindsided by this and continue to act as if technology is still somehow primary - when it isn't, and hasn't been for a while.
Plenty of puzzle lovers don't care about computers at all. Plenty of good programmers and tinkerers don't care about puzzles of a different kind.
The “internet” as a thing is still capable of the same things it was 20 years ago. The proportion of people using the internet who care more about the social and political puzzle rather than the technological is what dramatically increased.
I understand where you are coming from, and it makes 100% sense if you own a PC/laptop as well. But guess what? In a lot of places the majority of population (including younger people) do NOT have them. Their first and only experience of computers is a mobile phone. Even though "teenagers who care about computers" is a low number, now a lot of them won't realize that something like a general purpose computer exists (until way later, if they end up using a computer at work).
I grew up being fascinated with how useful and flexible computers are and fell in love with creating software. I'm quite sure that I would not have happened if I was born during the last 10 years.
Broadly, I could see two situations in which people only get a phone and have no computer: either "poor" countries, where they can't afford a computer and will get a phone that's better than nothing, and very affluent ones where they'll just get a phone or tablet and not care. Although I'm not sure that this second category has no computer.
While I do sympathize with this, and I think that it would more likely than not allow people to discover this curiosity by, as you say, being fascinated with how flexible phones would be, my point is somewhat different: this isn't "going back". Phones never were open. And I'm pretty sure that the people who can't afford a computer now couldn't afford one in the 90s.
As such, for me, this is more of a lost opportunity to progress rather than a regression.
> Phones never were open
They certainly were more open than today. It was much easier to fiddle around with the software with smartphones that Nokia used to have, for example. Both iOS and Android were easier in that respect, too. So there's clearly been a regression there. However, if you meant "phones were never as open as PCs", ah yes then I agree.
> As such, for me, this is more of a lost opportunity to progress rather than a regression.
I would agree with the spirit of that, but the problem is that a lot of computers are actually being "dumbed down" so that they are more similar to phones simply because a larger population is used to them. For example most young people might not find it weird if some day MS or Apple make it painfully difficult to install PC/Mac apps from outside their stores.
You're right we are not there yet (yay?), but we seem to be moving towards that.
Uh, as a young person during that time, I think there may have been one kid in my high school who was the right combination of nerdy and well-off to get a smartphone. It was still cool if your flip-phone came with any internet-connected features at all.
I’m not sure why it has become so common on HackerNews to defend this trend and/or pretend it’s not a trend.
When I was a kid we had 8bit micros. They were programmable but they were also eye wateringly expensive. So it was a struggle to own anything you could program at all.
Also they all came with closed source, proprietary OSs.
I used to spend a fair amount of time just staring at adverts for things I couldn't afford and that would be obsolete before I could.
Now you can get a whole computer given away free on the front cover of a magazine and it will do a pretty good job of running scratch for you.
When I was in university in the Netherlands studying CS in the early 90s, many (I would say most) didn't care about computers; they were there because 'money later' or 'making games'. By far most never wrote a line of code. It was frightening (to me) how much they had to catch up as the lectures were fast paced. Many changed to psychology or law in the first 2 years.
Now we live in an age where multiple versions of nearly everything seems to be available. Just buy what you want, don't learn to make it would seem to be the imperative of the day.
That encouraged tinkering to a high degree. I remember having lengthy talks with gamers or videographers about what components to use to build a PC. The more computers become closed boxes that cannot be opened, the less reason casual onlookers will have to even configure them.
The industry is simply not giving people a box to open.
The industry is, it’s people who don’t want it.
The sealed boxes are some combination of more resistant to damage, longer lasting, and a smaller form factor than a modular box can offer; the sealed software is less prone to malware and blue screens of death.
The cloud makes things work like an appliance, rather than having to worry about dying drives and backups and figuring out port forwarding on your router. It also helps companies extract maximum rents on their product, but the point is it came with some benefits that consumers do value.
Nowadays, however, most white-collar jobs and daily-life tasks involve computers at some point, and knowing how to use and program them would make these tasks more efficient - almost too efficient, to the point where it goes against corporate interests (it turns out society has normalized and encourages business models that bank on artificially created or maintained inefficiency). That is partly why general-purpose computing is on the decline now.
> Nowadays most people I see of that age group
These are not the same people at all.
Have a look at GNU/Linux phones recently developed:
Plenty of your peers were not on those chat rooms. They did not interact with tech like you did, or did not interact with tech at all. They had different hobbies and interests, different habits than you.
This was foreshadowed by NetBurst not being able to break 4 GHz in 2004-2005, and CPUs having to shift to multi-core. This bought "classic" CPUs more time, but CUDA showed up in 2007 and GPUs went from strictly specialized computing engines to general-purpose (in research, if not yet in the home). CPUs have also been steadily gaining SIMD extensions.
Now GPUs are showing promise for NN workloads, but in environments where the stack is tightly controlled, NN co-processors are showing up. This is because tightly controlling the stack has the benefits of being able to optimize and harmonize software and hardware, and interop outside of the stack (and in some cases, stack longevity) is not a factor.
The article isn't truly about how more and more computing environments tightly control their stack, but that mechanism does play a part in the design choices that result.
The current state of GPUs is based on Nvidia's decision to focus on GPGPU instead of being a DirectX accelerator forever. They specifically decided to make the device more general-purpose than it needed to be at the time. Presumably they thought there was a chance to start another virtuous cycle for parallel computing.
I don't think we are close to realizing the GPU's potential as a general-purpose device. Imagine the kind of software we will have when every phone has the equivalent of today's biggest desktop GPU inside it.
The next revolutionary general-purpose device will probably feel like a niche product at first, just as the 3Dfx Voodoo did in 1996.
Microprocessors and CMOS upended the computer industry, to the extent that a few years later, all the big companies in computing, up to that point, were in precipitous decline (even IBM, which embraced the new world order with the PC, only delayed this reckoning.)
In those days, microprocessors and DRAM alone were the cutting edge of technology, and they opened up all sorts of possibilities (though this also depended on some additional special-purpose equipment, notably for graphics and networking.)
One might draw an analogy here to the Big Bang. What happened next was like a period of cosmic inflation, in which Moore's law and Dennard scaling (plus fiber optics and the wiring of the world) created exponential growth. As in cosmology, we end up with a much bigger but quite uniform universe, in this case because the growth of the basic technology alone was enough fuel for all the innovation we could come up with.
It was only after inflation that the universe became really interesting. It is still overwhelmingly hydrogen, but it has differentiated: there are also galaxies, stars, planets and people.
So, the long-delayed conclusion to this analogy: innovation in computing is now more mature, but it is not shrinking, it has diversified - and there are still 10^n (for some large n) 8- and 16-bit microprocessors being made, and there are still hobbyists doing clever things with what is now basic, even primitive, hardware (that, not so long ago, was unattainable) - but we also have emerging technologies (machine translation and autonomous vehicles, for example) that quite a few people, not so long ago, assumed would be forever beyond the capabilities of mere machines.
The metaphor that came to my mind was the long persistence of steam power after electric motors were invented. Initially, 'going electric' meant replacing your one giant steam boiler with one giant electric motor, and doing everything else the same way. The factory remained organized around the drive shafts and pulleys and mechanical distribution systems. There was little advantage. It took decades of experimentation to gain the insights of how many small motors and task lighting could allow a factory to be optimized for task flow, not power distribution. And then it took longer for those insights to diffuse through slow human networks.
Hardware has developed so fast that software had no time or incentive to mature. Each decade's tech just gets ossified into the stack because hardware was making the stack faster at a better rate than human insight.
Every few decades, we get lucky and an invention happens when the ecosystem can use it... so we get compilers and sql and automated tests. But then other insights like immutable data structs just stay niche.
I hope that as hardware progress flattens, opportunities emerge for better software paradigms. Maybe this will coincide with the craft of software becoming introspective about its myriad social issues.
(This is not about TFA, which examines a more well-defined "universal vs specialized processor" dichotomy, but about the frequent laments about general purpose computing.)
Why? Just because they thought it would be cool. (Er, hot.)
computing in my view is rapidly subliming away into the cloud. whether users will have much computer at all in front of them is questionable, or perhaps semi-vestigial. work/applications are headed into data centers. want photoshop today? even the old-fashioned install & run it yourself version has major cloud connectivity.
even within the world of consumer electronics, I don't see specialization as a trend. video game consoles have tended towards "exactly like a pc" over time. cell phones are pc's with the most proprietary/controlled drivers on the planet. 5g base stations are pc software-defined-networks plus gobs of general purpose dsps/sdrs. in the smaller computing/device world, arm and espressif and soon risc-v are cutting out larger & larger swaths.
thus far ai processors have had fairly wide ability to operate models in a cross-architecture fashion. make sure you can run onnx or tf-lite. big only marginally customized gpus still have a huge presence. there is variety here but remarkably little end user differentiation.
where are the specialized systems this write up talks to? where are the fragmented systems? the door is opening as we try to re-learn how to make silicon foundries, how to do open source chip making, but I haven't seen what's being talked about here, specialization, bifurcation of capabilities.
but cloud? cloud is murdering the shit out of general purpose computing. applicationization of software turns software effectively hard to end users. we have no power, no ability to adapt or change or see the computing. it's 100% what is given to us. every system is 100% special/specialized, and in the post Pax Intertwingularis death of the api & interoperability, that means radical rigidity. systems are radically more specialized & un-general, but it's not a hardware problem like this paper asserts and it's not something on the horizon: it's already happened, it's already made the general-purpose computer obsolete, and it's at the software level. it's about where most of the world's computing is run: on special systems, on the clouds.
Look at Intel: even incremental changes can go wrong and cost you competitiveness.
Rise of the learning machines
Apple revenue is more than that each month.
There's enough money around. So it seems unlikely that the cost of TSMC silicon fabs would be the obstacle to others developing alternative computing paradigms.
In a kludgy, gradual way personal computers were tamed. This was inevitable. If you network completely general purpose, user-controlled machines in an institutional environment you will get security chaos.
Some of this, though, like putting digital rights management into personal computers, was tragically evil. What we are left with is neither here nor there: Computers are not secure. Our communications are not secure and are under constant surveillance. And we do not control machines we supposedly own. Lose, lose, lose.
You cannot have security and surveillance and central control of "personal" devices. You cannot protect "digital rights" and sell entertainment content to users that truly control their personal computers.
Then we got microcomputers, and we added consumers and individual employees to "general purpose", and "word processing, word processing, word processing, and spreadsheets" became the new definition of general purpose computing, dwarfing big iron data processing. Then laser printers and desktop publishing, then Photoshop, CDs and "multimedia", and then The Internet (meaning the web) exploded and general purpose computing suddenly meant email, instant messaging, and so many new home pages that Yahoo could hardly list them all.
In the Internet Boom, nobody defined general purpose computing as mostly word processing, much less business data processing, and forget about scientific calculating.
Fast forward a couple more decades, and general purpose computing is streaming YouTube celebs, texting, binge watching streaming "TV", getting most of your news managed by Big Tech censors so you can't "misunderstand", keeping up with ever-streaming timelines of your 900 closest friends, going to work or school via Zoom, doing most of your shopping....
And where is word processing in this general purpose computing? Probably bigger than ever, but now buried in the "other" category because newer components of general purpose computing dwarf it.
So if some of this is done with a laptop, and some with a "phone", and some with a flat screen in your car, and some with your TV remote, and some with an iPad...I suspect that "computers" could easily be described as either declining or growing for general purposes depending on which of many reasonable definitions you employed.
The point is that the basic modern computer design works well (enough) for almost all of those tasks. For the most part all that you need is the basic operations on 32 or 64 bit "integers": move, compare, add, subtract, and, or, xor, shift left, shift right.
With this you can do spreadsheets, word processing, database, accounting, desktop publishing, web browsing, email, instant messaging, decoding MP3 audio and JPEG photos and MPEG video. And show it all on a GUI on a high resolution bitmapped display.
You can increase the efficiency of some of those tasks with hardware and special purpose instructions for floating point arithmetic (though spreadsheets for example hardly benefit) and SIMD processing for media. See my other (top level) post.
But you don't need them, especially if you have many general purpose CPU cores.
Don't forget the original Macintosh did basically all of those tasks with just a simple integer-only CPU running at 8 MHz. No FPU, no GPU, no media instructions.
A modern multi-GHz CPU can do all those tasks acceptably using only simple integer instructions as long as you don't insist on very high resolution video (and games) at high frame rates.
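The claim that media decoding and the rest can be done with only integer move/add/shift hinges on fixed-point arithmetic: represent fractional values as scaled integers, so "real number" math reduces to integer ops plus shifts. A minimal sketch of the idea (the Q16.16 format shown is one common convention, not tied to any particular machine):

```python
# Fixed-point arithmetic: fractions as integers scaled by 2^16.
# This is how integer-only CPUs handled fractional math in audio
# and graphics code without an FPU.
FRAC_BITS = 16
ONE = 1 << FRAC_BITS            # 1.0 in Q16.16

def to_fixed(x):
    return int(round(x * ONE))

def fixed_mul(a, b):
    # The product of two Q16.16 numbers carries 32 fractional bits;
    # shift right to renormalize back to 16.
    return (a * b) >> FRAC_BITS

def to_float(a):
    return a / ONE

gain = to_fixed(0.75)           # e.g. a volume multiplier
sample = to_fixed(0.5)          # an audio sample
scaled = fixed_mul(gain, sample)   # 0.75 * 0.5 using only integer ops
```

Everything above compiles down to the move/add/multiply/shift repertoire the comment lists, which is why an 8 MHz integer-only 68000 could run a whole GUI.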
Minor quibble aligning with my minor quibble with the article itself: modern graphics has been depending on a dedicated graphics subprocessor for ages now. Even most Linux distros that aren't targeting extremely constrained hardware vend a graphics subsystem that assumes an accelerator card.
Perhaps the graphics card can even be noted as the harbinger of the specialized-hardware trend (if one ignores the audio coprocessors before it). This trend has been riding alongside the CPU trend for ages.
The result is that special-purpose computing is growing, and general-purpose computing is on the decline.
That is all true as I have been posting pretty much exactly the same on HN for a few years. The real question is though, does it matter?
I am not convinced, at least not yet, that what we are doing now on a computer is that much different from 20 years ago: Excel, web browsing, media consumption, and content creation like Photoshop. The only real recent innovation is machine learning, which greatly increases productivity in certain niches.
And it does not seem to me that any of the above activity is going to fundamentally shift. There is no hardware tech limitation holding up performance for further productivity increases; rather, it is software that is not getting much improvement, if you consider that the worst and best software can have a performance difference of 100x.
To give an example, I would bet that with a few billion dollars we could create a hardware GPU that is 80% as good as an Nvidia or AMD GPU. I am also willing to bet that even with $10 billion in funding we can't get a CUDA-compatible library or drivers that are 80% of what Nvidia is offering today.
This is perceptive, but also red ocean thinking. Moving ML to GPU is only bad for other workloads if CPU languishes as a result. But I think there's reason to believe it's a blue ocean. The more performance ML gets out of GPU, the more money the ML business can pour into semiconductors, batteries, EDA, and so forth. CPU benefits from all of these.
I think you can see this at work in the laptop market. Cell phone R&D has driven battery technology. Server R&D has driven efficient high-performance x86 cores. Both have driven power-efficient process nodes. Put them together and you can make incredible laptops.
( https://en.wikipedia.org/wiki/Partial_evaluation#Futamura_pr... )
Anyway, I don't see the problem. I don't see how e.g. better GPUs cannibalize the performance of general-purpose CPUs, and I doubt that the market for general purpose computing isn't already saturated. You can buy CPUs for a couple of bucks (little ones), you have a supercomputer in your pocket (phone), and people only buy fancy chips to play video games.
In my opinion, the OS needs to go all virtual+distributed for general-purpose to survive. Or else all hardware will become web browser accelerators. Even now all browsers already have their own hardware drivers for video.
Nowadays everything just works so damn well. My phone works with my printer without any driver installation let alone mucking around. So nobody has any idea how anything works.
Circuit design has been marching steadily along for decades, and between the tools available to allow non-specialists to go from abstract hardware description to image-ready chip specification and the ever-cheapening, ever-broadening capabilities of custom chip fab, are we approaching an era where fabricating mid-range specialized circuits is on par with the difficulty of writing and compiling software, only a bit slower?
This process started a couple of decades ago, adding instructions to assist in the calculation of hashes and encryption, and relatively narrow SIMD parallel processing to assist multimedia.
Now virtually all high volume general purpose processors have such instructions.
If you count simple floating point add, subtract, multiply, divide as special purpose, then this process started 50+ years ago.
The advantage of doing this is less added area and power consumption, and the ability to mix special purpose and general purpose operations at a much finer-grained level. Sometimes it's worth copying a few MB of data across to a GPU's local RAM, downloading a special program to it, and then copying the results back.
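The SIMD idea mentioned above can be sketched in miniature: several narrow lanes packed into one wide machine word, all updated together. A toy model (Python has no real SIMD, so this only shows the data layout and lane-wise semantics; real hardware does all lanes in a single instruction):

```python
# Packed-SIMD flavor: four 16-bit lanes in one 64-bit integer.
LANES, BITS = 4, 16
MASK = (1 << BITS) - 1

def pack(values):
    word = 0
    for i, v in enumerate(values):
        word |= (v & MASK) << (i * BITS)
    return word

def unpack(word):
    return [(word >> (i * BITS)) & MASK for i in range(LANES)]

def simd_add(a, b):
    # Lane-wise add with per-lane wraparound; hardware would do
    # all four lanes in one clock rather than this Python loop.
    return pack([(x + y) & MASK for x, y in zip(unpack(a), unpack(b))])

a = pack([1, 2, 3, 4])
b = pack([10, 20, 30, 40])
result = unpack(simd_add(a, b))   # [11, 22, 33, 44]
```

Because the lanes live in ordinary registers, this kind of operation mixes freely with scalar code - exactly the fine-grained integration that a round trip to GPU memory cannot offer.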
The number of potential special purpose operations that are useful to someone is probably unbounded. But each one might be useful to only a small number of people. It's not feasible to just keep on adding everything someone thinks of to the volume leader mass-market processor.
Three related things are happening to help with this:
1) adding reconfigurable hardware to a general purpose processor or embedding the processor into reconfigurable hardware. Here we have Xilinx "Zynq" and MicroChip "PolarFire SoC" with ARM and RISC-V (respectively) hard CPU cores inside an FPGA. We also have Cypress PSoC which I believe is more like adding a small amount of reconfigurable hardware on the side of a conventional CPU core. If the performance needs of the general-purpose part of the processing are relatively low then you can use a "soft core" CPU built from the FPGA resources themselves. Each FPGA vendor has had their own custom instruction set and CPU core, but now people are moving more and more to instructions sets and cores they can use on any FPGA -- chiefly RISC-V.
2) making custom chips with a standard CPU core augmented with a few custom instructions / execution units. Again, much of this activity is centred around RISC-V though ARM has announced support for this with one or two of their standard CPU cores, initially the Cortex A35. Going into full production of a chip like this has costs in the low millions of dollars, with incremental unit costs as low as $1 to $10. Small numbers of custom chips (100+) can be made for $5 to $500 each depending on the size of the chip and the process node -- bigger, slower nodes are cheaper.
3) adding special purpose instructions that can be more flexibly applied to a larger range of problems. The main contender here is support for "Cray-style" processing of (possibly) long vectors of flexible length. If appropriately designed the same program can run at peak efficiency (for that chip) on CPUs with vastly different vector register lengths. This is in contrast to traditional SIMD where the program has to be rewritten every time a CPU is made with longer vector registers -- and it is very inconvenient to deal with data set sizes that are not a multiple of the vector length.
If suitable primitives are included for predication of vector elements and divergent and convergent calculations then such a vector processor can run the same algorithms as GPUs (e.g. directly compiling CUDA and OpenCL to them). CPUs with sufficiently long vector registers can then compete directly in performance with GPUs on GPU-style code. All while staying tightly integrated with general purpose computations.
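The vector-length-agnostic model described above can be sketched as a strip-mined loop: each pass, the program asks the hardware how many elements it will handle, and processes whatever it is granted. A toy model of the RISC-V `vsetvli` idea (here `hardware_max` is a stand-in for a machine-specific register width; the real mechanism is an instruction, not a function):

```python
def setvl(requested, hardware_max):
    """Model of vsetvli: the hardware grants at most its own
    maximum vector length for this iteration."""
    return min(requested, hardware_max)

def vec_add(a, b, hardware_max):
    """Strip-mined element-wise a + b. The same code runs unchanged
    on any vector length, and tail elements need no special case:
    the last pass simply gets a shorter vl."""
    out = [0] * len(a)
    i = 0
    while i < len(a):
        vl = setvl(len(a) - i, hardware_max)   # lanes granted this pass
        for j in range(vl):                    # one 'vector instruction'
            out[i + j] = a[i + j] + b[i + j]
        i += vl
    return out

# Same program, different "hardware": results are identical.
a, b = list(range(10)), [100] * 10
assert vec_add(a, b, hardware_max=4) == vec_add(a, b, hardware_max=16)
```

This is the contrast with fixed-width SIMD: there, widening the registers means rewriting the loop; here, the binary never encodes the width at all.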
ARM SVE and the RISC-V V extension are examples of this, with, I think, the RISC-V version being the more flexible and forward-looking.