I wish other distros documented ways to make it easy to customize the initramfs like this. I'd love to build a setup like this, but I don't want to use Alpine as I don't like musl for compatibility reasons or RC scripts for managing services.
There are other options, but they have considerable barriers to entry as well, like NixOS, which requires learning its specific DSL. I like the idea of `bootc`, but that doesn't support running from RAM as best I can tell. Other distros really only document customizations to the initramfs as a means to provide an installer for a stateful system, which makes running a server like this a bit of uncharted territory.
> I wish other distros documented ways to make it easy to customize the initramfs like this.
Well, this is not exactly a documented or "official" way to do things; it's just that Alpine is so darn simple that producing an elegant but crazy hack doesn't look all that different from wrangling Ubuntu into doing a normal, sane thing (like installing Firefox without Snap).
In fact, building an initramfs completely from scratch, with just enough userspace to start doing useful things, is not that difficult. It's just a cpio archive with an arbitrary filesystem layout: drop in a statically linked executable named "/init", pass -kernel and -initrd to QEMU, and you've got yourself a "hello, world" of embedded/single-purpose Linux.
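For the curious, a minimal sketch of that "hello, world" in C (the kernel image path and console flags below are placeholders, adjust for your setup):

    /* init.c - a tiny PID 1. Build and boot it roughly like this:
     *
     *   cc -static -o init init.c
     *   echo init | cpio -o -H newc > initramfs.cpio
     *   qemu-system-x86_64 -kernel /path/to/bzImage \
     *       -initrd initramfs.cpio -append console=ttyS0 -nographic
     *
     * The kernel unpacks the cpio archive into a rootfs and runs /init. */
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        printf("hello, world from PID %d\n", (int)getpid());
        for (;;)        /* PID 1 must never exit, or the kernel panics */
            pause();
    }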
> I don't like musl for compatibility reasons or RC scripts for managing services
That's the point. You can afford hacks like this because you got rid of all that complexity. musl is simple. RC scripts are simple. NixOS is anything but.
> Then yes, you have a regular Linux (although based on musl, instead of usual glibc) on which you can install Docker.
The OnePlus 6/6T are also supported by Mobian [0], which is just regular glibc-and-systemd based Debian, and so a pretty familiar Linux server experience.
> it might indeed be a good idea to avoid Android
It's a good idea to avoid Android because of kernel security as well. Old Android devices always use out of date Linux kernels even when using custom ROMs, and when running (containerized) network services you really depend on the security of your Linux kernel to keep those things properly isolated. Both PostmarketOS and Mobian do bring current mainline Linux support to these devices, so you can be quite a bit more confident in your kernel that way.
It's a shame PostmarketOS and Mobian don't really support many newer devices well. Last I checked, the OnePlus 6(T) were still the highest-performance devices with okay support. The Snapdragon 845 - a 2018 flagship SoC - made the OnePlus 6/6T real high-performance devices to repurpose for a long time. In 2024, though, they're beaten in performance by the Raspberry Pi 5 or RK3588-based devices running Armbian. Those SBCs of course already have much better I/O and more straightforward ways to get a supported Linux running on them (and don't require disconnecting the battery with custom soldering). So you need to be really committed to reusing your old hardware to go down this route.
> It's a good idea to avoid Android because of kernel security as well.
Well, it's "just" a matter of updating the kernel, right? Those Linux projects like PostmarketOS do a lot of mainlining, which benefits custom Android ROMs.
I agree that if the goal is to use the device as an RPi, then it's better to avoid Android. But I wouldn't say that Android is less secure than Linux in general (on the contrary, Android has an interesting security model).
It's not "just" the kernel. The bigger issue is the firmware - Android devices have a ton of closed-source low-level firmware bits, and you really shouldn't expose them to the Internet after the device has reached end-of-support.
But if you're only using it for limited projects as an RPi replacement, then it's probably alright if you're also putting a firewall in front of it, or having it in an isolated network segment with a reverse proxy.
> Android devices have a ton of closed-source low-level firmware bits
Can you elaborate on that? You seem to be suggesting that there are low-level firmware blobs on Android devices that are exposed to the Internet and do not receive updates. Which ones? Do they receive updates with Linux-on-mobile OSes? And if yes, why couldn't alternative AOSP-based systems use those firmware updates?
The important ones - from a security and privacy standpoint - are the baseband (cellular stack), WiFi, Bluetooth, NFC, camera, mic, bootloader, and the Trusted Execution Environment. Then there are also minor firmware bits for the sensor hub (accelerometer, ambient light sensor, etc.), touch controller, audio, and so on.
You can imagine the consequences if there were a vulnerability in, say, the WiFi firmware or the microphone. The Bluetooth stack is especially exposed, having been an attack vector many times in the past.
On Android devices, only Android has been able to deliver updates to those firmware blobs. This is mainly because these are closed source binary blobs, and are provided by the OEM (often in conjunction with the respective chipset manufacturer, covered by a license agreement).
AOSP and unofficial Linux-based OSes like PostmarketOS do not have a license to obtain and distribute this firmware. But even if they did, it would mean nothing once the support agreement from the chipset maker has ended. These being closed-source bits, you can't do anything about it if the respective manufacturer refuses to provide updated firmware.
Occasionally, some Android custom ROM makers may extract these blobs from more recent devices with the same chipset but newer firmware. Of course, it doesn't always work (well), not to mention it's technically illegal. And of course, an official project like PostmarketOS or LineageOS would never do something like redistribute proprietary firmware bits. Projects like these conveniently ignore the firmware issue and leave it as an exercise for the end user.
Nintendo optimizes for cost, not maximum performance, and almost always selects older technology. AMD Z2 chips go into $600+, bulky, low-margin PC gaming handhelds, whereas Nintendo likely wants to hit $300-350 while keeping a healthy margin.
This also means that the Switch SoC doesn't use an expensive cutting-edge manufacturing process, and it probably won't be made in TSMC factories at all. Leaks pretty clearly indicate an Nvidia Ampere-based SoC built on Samsung's 8nm process, so it's the same tech as Nvidia's consumer line circa 2020.
I wouldn't automatically prefer any random N100 mini PC over a nice second hand enterprise mini PC.
In home server use cases, mini PCs stay idle the vast majority of their runtime, so idle power consumption is the most useful metric to look at. The N100 can have great idle power draw in theory, but most data I can find on N100 boxes has them idling in the 12W-15W range. This is something that older enterprise mini desktops have no trouble matching or beating [1]. Especially since roughly the Skylake era (Intel 6th gen), idle power consumption for enterprise PCs has been excellent - but even before then it wasn't bad.
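For a sense of scale, some back-of-the-envelope arithmetic (a 24/7 box, a 4W idle difference, and a 0.30 EUR/kWh tariff - all illustrative assumptions, not measurements):

    4\,\text{W} \times 8760\,\text{h/yr} \approx 35\,\text{kWh/yr} \approx 10.50\ \text{EUR/yr at } 0.30\ \text{EUR/kWh}

So a few watts of idle difference is modest money per box per year, but it compounds across machines and years.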
Enterprise vendors like Dell/HP/Lenovo have always optimized for TCO and usually use quite high-quality power supply circuitry, whereas most N100 mini PCs tend to be built with cheaper components and are not as optimized for low whole-system power usage.
[1]: I recommend reviewing Serve The Home's TinyMiniMicro project, which often finds the smallest enterprise PC form factors - even older ones - idling at 8 to 13W. Newer systems can get below 7W! https://www.servethehome.com/tag/tinyminimicro/
One can also undervolt to reduce the power draw even more. Modern BIOSes give a lot of freedom for underclocking/undervolting, not just for pushing things to consume more power.
The Shield TV has had an impressive support lifecycle for an Android device but it still falls well short of a 10 year support cycle.
The Shield was released in May 2015 and its latest software update has an Android security patch level of April 2022 and was released November 2022. No more updates seem to be forthcoming. Notably, all Shield TVs today are vulnerable to remote code execution when displaying a malicious WebP image [0], a widespread issue uncovered last year.
Apple released the Apple TV HD two months after the Shield TV, but it still receives the latest tvOS updates to this day and will be receiving this year's new tvOS 18 [1] [2]. It received a fix for that WebP issue the same day as supported Macs, iPhones and iPads did last September.
Even the best examples of Android devices with good vendor support still seem to fall short. The Shield TV is still capable streaming hardware in 2024, used by many people, but it's sitting there missing important security patches, unbeknownst to its users.
[2]: To be fair, it's the only Apple A8 device that still receives support today. The iPhone 6, with the same chip, launched in mid-2014 and received its last update in early 2023.
In 2014, the Debian project cited accessibility [1] as a main reason to offer GNOME as the default desktop. Since that was over 9 years ago, a lot could have changed. However, given that GNOME is generally the best-funded Linux desktop, and is sponsored by (and the default for) Red Hat Enterprise Linux [2], which has large incentives to meet government accessibility regulations, there's a good chance it is still ahead.
[2]: It is also the default for other enterprise Linux vendors, such as Ubuntu (albeit customized); even major KDE supporter SUSE uses GNOME Classic mode as the default for SUSE Linux Enterprise Desktop.
There has generally been a major regression for both GNOME and KDE with regard to accessibility with the switch to Wayland, IIRC - mostly due to the strict separation of apps, which can no longer snoop on each other's windows by default.
Though there has been major work sponsored/supported by GNOME/STF to improve the entire accessibility stack used in Linux userspace: https://www.youtube.com/watch?v=w9psDfEFf9c
Other emulators want to solve the problem generically, and this solution doesn't quite do that.
Static recompilation from one machine language to another is somewhere between extremely difficult and not generally possible in practice [1]. To make a recompilation like this work properly, you need some help from the binaries to make recompilation easier [2], and on top of that you need to patch certain things to make it work [3].
Dynamic recompilation doesn't have this problem. It allows you to create software that you can dump the original binaries+assets ("ROMs") into, and it will generally emulate them.
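As a self-contained toy sketch of why (all names here are invented for illustration, nothing like a real emulator's internals): a dynamic recompiler resolves a computed jump only when execution actually reaches it, by looking the runtime PC up in a cache of translated blocks, so it never has to enumerate every possible target ahead of time the way a static recompiler must.

    /* toy "dynarec" dispatch loop */
    #include <stdio.h>
    #include <stdint.h>

    /* toy "guest program": each cell holds the next PC, computed from
     * data, so the control flow is invisible to static analysis */
    static const uint32_t guest_mem[6] = {3, 1, 5, 2, 0, 4};

    static int translated[6];   /* translation cache: PCs with host code */

    static uint32_t exec_block(uint32_t pc)
    {
        if (!translated[pc]) {  /* first visit: "translate" this block */
            printf("translating block at pc=%u\n", (unsigned)pc);
            translated[pc] = 1;
        }
        return guest_mem[pc];   /* indirect target, known only at runtime */
    }

    int main(void)
    {
        uint32_t pc = 0;
        for (int step = 0; step < 8; step++)
            pc = exec_block(pc);    /* dispatch: cache lookup, then run */
        return 0;
    }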
There's a lot of confusion about how generic this solution is. It's extremely impressive in how much work it saves, making recompilations/"ports" much easier, and it will be very valuable. But it is not able to replace the need for traditional emulators.
[2]: N64 game binaries may happen to avoid a bunch of the things that make general static recompilation hard, which would help this approach, but I don't actually know.
The iPad App Store is perhaps an even more dysfunctional place than the iPhone in how much it holds hardware and use cases hostage to the manufacturer's vision. Just imagine how much more versatile the iPad Pro would be if only you could run Linux VMs on it in the moments you want to do anything remotely tinkery on an iPad.
Apple's hardware since the 2021 iPad Pro (with the M1) has had the ability to do this. The iPads have the RAM (16GB on higher-storage models), appropriate keyboards and trackpads, the works. Great hardware being held back by Apple's vision, which people weren't allowed to deviate from.
A straightforward reading of the DMA suggests that Apple is not allowed to restrict apps from using hardware features. Let's hope that means Parallels/VMware style VMs are possible without too much of a fight.
Totally agree - the iPad Pro could be a great second coding/programming tool. I'd love to justify buying myself one, but... I just don't see a use case if I can't work on it. I don't design stuff, and I don't really feel like I need a separate browsing device either.
I switched from an iPad to a Surface Go 3 running Fedora a while ago, and it really transformed my tablet use. I mostly just watched YouTube videos and did some light browsing on my iPad, but never really any serious work. Occasionally I would ssh into other machines using apps like Blink, but even with the external keyboard the UX just feels ... off. Same for other apps that offer IDE-like environments: they work, but they're never really great to use.
I was skeptical about getting a Linux tablet because of the worse battery life and less polished overall experience, but having a desktop Firefox with all add-ons, my text editor of choice, and the ability to open a terminal and run whatever I want really more than makes up for it (Plus GNOME is a pretty good tablet experience out of the box these days as long as you broadly stick to their 'official' apps).
Yep, I've got one and don't use it too much. Too big for scrolling, too limited (software-wise) for work. But Apple knows the iPad might cannibalize the Mac and limits its uses on purpose.
> But Apple knows the iPad might cannibalize the Mac and limits its uses on purpose
It felt like the goal was to overtake the Mac during the 2015-2019 era: all the real engineering focus was on the iPad, and the Macs were underpowered and not really fit for purpose.
Why would Apple choose a platform where they don't get 30% of every Creative Cloud sub when they could have had that?
The only reason they backtracked was that Mac sales didn't fall off and the iPad just isn't that good for doing real work on.
I believe it's simply more lucrative to keep selling both devices to the same target group than to try to solve the users' problem with a single device.
Everything at Apple is designed to silo off the two product groups.
An "iPad with MacOS" would just shift revenue from the MacOS division to the iPad division, losing a MacOS customer and probably NOT gaining an iPad customer (as they would have purchased an iPad anyway).
Just as developing a MacBook convertible is not an issue of user experience but an issue of unnecessary cannibalization of iPad sales...
By that logic, the iPhone wouldn’t have been able to play music as soon as it launched. Yet that was part of the whole pitch: “an iPod, a phone, and an internet communicator”.
> I was never tempted to buy an iPod, but combine the phone and iPod and give me internet access to boot... sold.
Before the iPhone there were already phones which could play music and access the web. I even remember some Motorolas which interacted directly with iTunes. The iPhone didn’t succeed just by smooshing those together.
Either way, that’s neither here nor there, the point is precisely that Apple didn’t shy away from cannibalising their own product.
I don't see how what Apple did with other products, especially "pre-iPhone", is relevant.
The point is that TODAY Apple's PC line and iPad line are quite notably siloed into very specific usage patterns.
There is no technical reason for that, but the distinct commercial reason that there is nothing to gain in terms of revenue or profit by combining the two products into one.
They both sell fine and at great margin separately, there is little to gain by building an iPad Pro that is 2000 USD and supports the use-cases of both a 600 USD iPad and a 1600 USD MacBook respectively.
Quite bluntly: you want the iPad to be convenient in a workflow as far as possible, and then SUCK really badly in a way only a fully synchronized MacBook can fix.
This is the same reason the Apple Pencil doesn't work on the iPhone. Despite the iPhone approaching the size of an iPad mini, I can't use the incredibly expensive Pencil on an iPhone because, according to Apple, only the iPad should be used for tablet stuff.
What? The Apple Pencil works because there's a special digitizer layer on the screen of Pencil-compatible devices. This isn't included on the iPhone. Same reason a Samsung S-Pen doesn't work on devices that don't support it.
I think the technical reason why the Pencil doesn't work is beside the point here.
Apple builds the hardware, and they've decided that the Pencil use case an iPhone user may have shall not be covered by buying an Apple Pencil, but by buying an iPad (and an Apple Pencil).
The technical reason is important, though. If it was totally free I suspect they’d allow it to function, but it doesn’t… so burdening the 200M iPhones with the additional cost of the pencil hardware is a trade off not worth taking. Just like Samsung not “allowing” S-pen to work on most of the phones since adding the digitizer element would be a silly cost adder, especially for their super cheap phones.
It's a decision of product proposition, and Apple decided that the Pencil use-case shall support iPad sales and not be cannibalized by the iPhone.
They also decided for a while that all their premium iPhones should have "3D Touch" (branded "Force Touch" on other devices), an entirely unique display technology letting iPhones sense pressure without the potential for additional accessory sales.
These are all valid decisions. They are not a charity, they operate to maximize the profit they can gain from each customer.
The iPad has the big "issue" of barely needing to be replaced with new models, as most use-cases are consumption-oriented and there are no real disrupting sales-driving requirements for iPad media consumption.
So the Pencil was created to drive the proposition towards media CREATION, because people would then buy a new, more expensive iPad, and the requirements of that segment are constantly increasing (better pencils, lower latency, more demanding apps).
Also in the past year: the iPhone increased its focus on media recording with more complex video features, while the iPad tags along with demanding media-processing use cases.
Wasn’t that the period when Apple were positioning themselves to get the Macs away from Intel? I’m not sure the goal was to let the iPad overtake as much as it was to get its processors ready to take over from Intel.
> But Apple knows the iPad might cannibalize the Mac and limits its uses on purpose
Apple isn’t afraid to cannibalise its own products. They did exactly that with the iPhone in regard to the iPod. If someone is going to displace one of your most successful products, it better be yourself with something even more outstanding.
It would have been in Apple’s best (financial) interest to have the iPad cannibalise the Mac because they’d have more control and earn more money from app sales.
For certain groups of people (the majority?) that is reality: as long as you don't need compilers, IDEs, or virtualization, you can do pretty much anything on an iPad.
An iPad (any model) with a keyboard cover can be used as a great portable ssh/mosh terminal (e.g. with the Termius app). I work in Emacs - most functionality is available via the terminal.
I've never owned a keyboard cover, but one could bring a TKL or 60% mechanical keyboard for the full typing experience without a laptop - might be a good compromise for some.
Because they don't run iPadOS? People love all the things the OS can do. They just wish it wouldn't stop them from doing that one thing in particular that they want it to do.
Tantalizingly close to perfection with one glaring flaw is extremely frustrating!
It's the UI. It is designed from the ground up for touch. People who like iPadOS do not like Windows Surface tablets for that exact reason. A desktop UI that's been shoehorned into a tablet is not as good as a purpose-built touch UI.
The two tablet genders: iPad, and Windows Surface. It's a shame no other giant tech company ever created one that has exactly the combination of attributes you asked for.
> Just imagine how much more versatile the iPad Pro would be if only you could run Linux VMs on it
After installing https://ish.app for Alpine Linux emulation on iPad, one immediately comes up with use cases, even though it's excruciatingly slow.
Hopefully Apple opens up the imminent M3 iPad Pros to allow macOS and Linux VMs, even if the feature is initially price-segmented to devices with extra RAM. The iPad's 4:3 high-resolution screen offers unmatched vertical real estate for text editing.
As long as the majority of the target group keeps buying MacBooks AND iPads, I doubt that Apple has an incentive to cannibalize its own product line.
They are well-aware of this, visible from the fact that they never bothered to add a touch panel or Pen-support to any MacBook, or make the Watch a standalone device: Customers wanting this either buy the devices individually anyway, or wouldn't be willing to hand over the sum of all combined devices for a single "superset" device.
Just imagine that Apple's view of the "iPad Pro with MacOS" demographic are customers who purchased a 1600 USD MacBook and a 1000 USD iPad. Is the "iPad with MacOS" able to replace either of those? Would they be able to charge 2600 USD for that device and sell comparable volumes?
> Just imagine that Apple's view of the "iPad Pro with MacOS" demographic are customers who purchased a 1600 USD MacBook and a 1000 USD iPad. Is the "iPad with MacOS" able to replace either of those? Would they be able to charge 2600 USD for that device and sell comparable volumes?
"iPad with MacOS VM" is technically adjacent to "iPad with Linux VM", since both make use of hardware nested virtualization support that is present on Apple M* processors. Good performance/watt Linux on Arm will launch in a month on Microsoft/HP/Dell/Lenovo/etc laptops and tablets with Qualcomm-Nuvia (ex-Apple) Snapdragon Elite X.
If Apple opens up Linux VMs on the iPad (as a side effect of opening up MacOS VMs), they can keep some users entirely within the Apple walled garden, similar to Microsoft's introduction of WSL on Windows. If they allow defections to Nuvia hardware, those can expand to MacBook Pros, given the Qualcomm roadmap for AI silicon on laptops, co-funded by billions in automotive pipeline.
Those who already purchased two Apple devices have already given their money to Apple. They won't do it again, since iPads are already overpowered for the artificially constrained use cases. If new iPads with extra memory/storage allow VMs, that's net new revenue above the $1500 price point. We'll find out next week.
I bought some Apple Watches (because of reasons). I am wearing one right now. Model 3 (Nike edition or something). I've switched everything off (WiFi, Bluetooth, analytics, the whole thing). It only shows the date and time. The battery lasts 4-5 days.
It's amazing when you shut down the battery-draining telemetry functionality of devices. And to add some more insult, I am using an Android phone, which of course doesn't even try to connect to my watch :)
I believe - and Gemini just confirmed - that they don't work together.
> it depends on a larger device for configuration
Yes, the architecture was purposefully made so that the Watch only collects your biometrics, with limited independent functionality of its own. They (Apple) do want everyone in the 'garden', so why open it up?
> I see the Watch the same as a late 90s Palm device.
Yes, but the LTE-variant is more along the lines of a Palm Treo.
Apple could probably make it link to a MacBook with very little effort, and to all other platforms with just a little more.
It's just a direction not worth exploring for Apple, because in their view those are just customers who have "not yet bought an iPhone" - so why try to win them over with the Watch if that just prolongs their journey to the iPhone?
I have the large Apple Watch. It has cell capabilities. I wish it was a standalone device. I don’t need a phone. The cellular watch could replace my phone if Apple allowed standalone devices. I doubt they will ever allow people to have a cellular watch without being tied to a phone.
I also have a cellular Watch. Combined with some AirPods, it works great if all you need is phone calls - a good use case if you want to be available without a time-sucking little monster in your pocket.
I think their point was that you have to pair Apple Watch with an iPhone in order to use cellular, and they wish you could use cellular Apple Watch without having to own an iPhone.
Yeah you're right, that's fair. I think any up to date iPhone will do though, so if you want to go this route you could buy the cheapest one you can find (hell, the screen could be smashed, who cares?), do the configuration and then toss your phone into a drawer.
Oh gosh if I could use a series of iPad apps to run a Linux system on an iPad I’d be so happy. I mean I could get an android tablet but I don’t really like android. I’m fine with iOS and I love Linux, so sticking those two together would be really nice.
Actually I’d love to run a Linux VM on my iPhone too!
What's the benefit to you of a VM on your iPhone when you can simply ssh to a VM somewhere else? Not saying there isn't a benefit, but I'm curious what you want to do. Other than people who are in the middle of nowhere, in which case I'd recommend a Raspberry Pi and a battery bank, or a laptop or something.
I use the Pi and battery for running various ham radio stuff while out in a park or whatever, connecting from an iPad, and that works very well for my use case.
>iPhone when you can simply ssh to a VM somewhere else?
Like not having reliable internet access everywhere. In a lot of areas mobile internet is spotty. Or you're in roaming so it's insanely expensive.
Plus, we already have these powerful devices in our pockets - more powerful than PCs were 10 years ago - sitting idle most of the time. Why not put them to use when needed instead of paying for extra remote cloud compute on top of that?
Also, VMs don't just mean Linux for web development; it could be a VM for retro gaming, or running things in a VM for security sandboxing, etc. That would be really neat to always have with me instead of having to ssh all the time.
While I agree with your use case, doing nothing most of the time is how those devices last all day on a battery and can run without a fan. My MBA gets toasty when I OCR a PDF; I can't imagine a phone under sustained load.
>doing nothing most of the time is how those devices last all day on a battery
But it will run down the battery only for me, not for you. Why do you care how I want to use my battery life? You don't have to do what I do with your own phone. You can just keep using it like a regular phone if that's all you want. Me having more freedom with my own device does not reduce the freedoms you have with your own device.
I paid for the device and I own it, so why shouldn't I be allowed to use it how I like, even if it runs down the battery in 2 hours? That's why I have portable power banks and GaN chargers. They can even throw in a disclaimer about waiving your warranty rights for devices used like that.
Otherwise what's the point of all that technological progress of M* chips if all that we're allowed to do with them is browse Instagram but now even faster, and play Candy Crush but now with ray tracing?
> You can just keep using it like a regular phone if that's all you want. Me having more freedom with my own device does not reduce the freedoms you have with your own device.
I was just pointing at an aspect of how these devices are engineered. In fact, I jailbroke my first iPhone and spent years running a rooted and modded Android device. So, be my guest...
> Otherwise what's the point of all that technological progress of M* chips
Showing that ARM is a viable computation platform? I'm not particularly enamored of the Apple ecosystem. It's great for what I use them for and awful for anything else. Reverse-engineering is an option, but I prefer focusing my efforts on more hackable platforms for my tinkering. Unless the law mandates openness, I'm not seeing Apple's stance changing.
>I was just pointing at an aspect of how these devices are engineered.
And you pointed at the wrong thing. Apple isn't preventing you from running VMs on their M* iPhones to stop you from draining your battery too fast. Come on, don't act this naive.
>Showing that ARM is a viable computation platform?
If their motives were that charitable and they cared so much about the ARM platform, they would open up their M* platform for others to run whatever OS they want on it and provide OSS drivers, not keep it as locked down as possible.
First, in the major European city where I live, mobile internet is not super reliable and flat data packs are relatively expensive - I have one because I develop a lot on trains, but most of my friends don't.
Second: it's a waste of hardware and money. If I can already run the thing on my device, renting twice as much hardware for the same result is hard to justify.
And finally, it keeps my data under my power. Some of the work I do has strict requirements on what I can do with the data, and "upload it to a cheap cloud provider" is not on that list.
Not just VMs, you could technically also run things like PC emulators, with real PC operating systems, especially older ones, with acceptable performance. Just imagine using Windows 98 on an iPad!
Reminds me of running Windows 95 under Bochs on the Sony PSP. With the CPU turned up to max (333MHz), it was just barely fast enough to impress your friends. ;)
Well, it probably never will, since all apps are always within the sandbox. The idea is to ssh out to some other system. Besides, the actual iOS shell is not so interesting or useful anyway. (Jailbroken devices have had shells for a while; still, you won't be running your nvim and git stuff locally anytime soon.)
While this is doable, it is far from an enjoyable experience -- many packages are not available on iSH or have issues, for example. Most people are not going to replace their laptop with these two apps.
At some point I had multiple older iPads with perfectly great screens, and I wanted to use them as "hubs" for a home setup to control various things; another option was using them as secondary screens, or maybe just giving them to a kid.
You couldn't: they were simply too old for the new iOS update, and almost all apps, including browsers, require the newer iOS and update automatically without asking - essentially bricking them on purpose.
Anyway I ended up giving them to a "safe e-waste center", but I'm sceptical they'll actually be recycled.
I think locking down a device should be illegal, especially considering e-waste. And if there's some reason not to, then the device should at least be opened up the day official support ends, so it can be used to watch videos/games for kids/whatever.
As a counterexample, the other day I found my old Nexus 5, from 2013, running Android 6. While it was not completely straightforward, I was able to reset the phone and link it to a new Google account, and after several cycles of updates the entire Google suite seems to work, including Maps, and not slowly at that. I was, and still am, genuinely impressed.
A story going in a completely different direction --
I have a Sony Xperia phone from 2017. It has stopped receiving OS updates after Android 8, and I don't use it any more other than occasionally as a backup phone. A while ago, I discovered that people on xda are putting LineageOS (a custom ROM based on AOSP) with Android 14 on it, tried that myself, and it works! As slow as the phone is, it can run apps without any problem. This is truly amazing.
Same. I wish that once a device stops being supported, it would offer an easy option to be jailbroken and have its bootloader unlocked. Such devices could be retrofitted for many other roles, e.g. a robotics toy with an Arduino/Raspberry Pi, a smart home hub, a smart router, etc.
Well, in my country there have been multiple scandals about waste handling, where it was found that very little ended up being recycled. The sorting people did in some cases created more pollution because of the transport involved, and huge amounts ended up in big dumps of toxic assorted garbage, either here or in some third-world country where kids then make a few cents a day scavenging the toxic piles.
So yeah, I'm sceptical. There's a reason it's called reduce, re-use, recycle - with recycle a very distant third, as far as I've seen.
I don't think versatile devices are possible. I love the iPad Pro for what it is. I tried a Surface Pro and it was a much inferior tablet experience, even though the device is more "versatile". I just don't think you can get an excellent tablet by trying to be a laptop at the same time.
It's a screen. Add a regular Bluetooth keyboard and mouse and you have a PC. There's no compromise here from a hardware perspective; it's just software that's in the way.
Remember the size of the original iPhone? I have long wondered why nobody makes a universal compute brick in such a form factor without a screen. Then sell 5" or 7" or 10" or 27" screens with and without touch that connect to the little brick.
I can buy a 15" screen right now for under $75. It's the ultimate super-thin laptop if you remove the compute and keep the brick in your purse/backpack/holster.
For extra points, connect two compute bricks for more muscle.
I mean, Samsung has DeX. It’s exactly what you describe. I don’t like it, I don’t need it. I’d rather have a focused device than one that tries to be two things at once.
I have a Surface Pro and really like it. But for sure it is 100% a compromise experience, especially on the tablet side.
But part of it was reconditioning myself; the “proper tablet experience” largely comes from limitations of what they let you do with it. And with more features comes some complexity. For me it’s worth the tradeoff.
Is it right to say that currently the cost of the hardware is being partly subsidized by the profits Apple makes from the software? If some of the profit from the software gets taken away will we see the price of the hardware rise?
>Is it right to say that currently the cost of the hardware is being partly subsidized by the profits Apple makes from the software?
No, it wouldn't be. You're probably thinking of gaming consoles, whose HW is sold at a loss or at very thin profit margins and subsidized by more expensive game purchases. But Apple hardware already has the highest profit margins of any HW manufacturer out there - at their 200 USD per 8GB of commodity RAM and NAND chips, you'd better believe it.
So no, they don't need the walled garden SW money to fund HW. Their HW alone brings in plenty of cash.
The cost of custom chips is massive, but then manufacturing is cheap - after selling N units to pay off the initial investment, it's almost free (unit cost) when done at scale.
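In symbols (F for the up-front design cost, c for the marginal unit cost, N units sold - all generic placeholders), the average cost per unit is

    AC(N) = \frac{F + N c}{N} = \frac{F}{N} + c \;\xrightarrow{\,N \to \infty\,} c

so at large enough volumes the fixed cost washes out and the unit economics are dominated by c.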
I don’t think manufacturing would end up almost free for any of the newest chips since there are such tight tolerances and high failure rates. At least not for quite a few more years when (if?) there are competing fabs.
In the current situation Apple has to consider that a marginal price rise in hardware will lose marginal revenue in software, thereby shifting the equilibrium price of hardware lower.
That's really not true. The profit-maximizing price (where marginal revenue equals marginal cost) rises as costs rise. It's true that the revenue-optimal price remains the same, but I think Apple's shareholders care more about profit than revenue.
To build intuition on this, it helps to think about the extreme cases: If the marginal cost of production is zero, you can sell the product for close to zero to pick up pennies from almost every human on earth. So the revenue-maximizing and profit-maximizing prices depend on demand elasticity, but are both low.
If the marginal cost of production is a million dollars, selling for anything less than that will result in negative unit economics. You can still maximize revenue with low prices, but that incurs negative per-unit profit. In fact, the price must be more than one million dollars per unit to make any profit. That might imply that the profit-maximizing condition is one unit sold for $1m+1.
For certain demand curves, that might even imply the profit-maximizing condition is to sell zero units! A real-world example of this is Rivian: they have negative unit economics, and would be more profitable if they simply stopped production.
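To make this concrete, here's the textbook linear-demand case (an illustrative model, not Apple's actual demand curve): with demand Q(p) = a - bp and unit cost c,

    % revenue-maximizing price: independent of the unit cost c
    R(p) = p\,(a - bp), \qquad R'(p) = 0 \;\Rightarrow\; p^*_R = \frac{a}{2b}

    % profit-maximizing price: rises by half of any cost increase
    \pi(p) = (p - c)(a - bp), \qquad \pi'(p) = 0 \;\Rightarrow\; p^*_\pi = \frac{a}{2b} + \frac{c}{2}

So in this toy model the revenue-optimal price ignores c entirely, while the profit-optimal price rises with it - exactly the distinction above.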
I think what confuses some people is that all these things can be (and are) true at once:
1. The price where Apple achieves maximum profit under the new rules is higher than before.
2. After raising prices, that profit will be less than what they earned before.
3. Units sold will be less than before.
4. Apple won't reduce prices in response to the lower profit because the new higher prices, lower quantity and lower profit are profit-maximal under the new market conditions.
What we will observe in practice is not higher MSRPs in Europe, but fewer discounts (it is an open secret that you should never buy an Apple product without at least a 10% discount).
I see a lot of people claiming (I believe disingenuously) that the changes forced by the EU will convince them to consider buying Apple's products in the future. If you believe those people, that's yet another reason to think Apple hardware prices will rise in Europe: Both the supply and demand curves are moving in directions that imply higher prices.
You are right. My earlier claim is incorrect. It is not a certainty that the optimal price doesn't change when costs rise. It really depends on demand elasticity.
My reasoning was this: If Apple can get away with a higher price without demand dropping off, why would they not charge this higher price in the first place?
But the idea is flawed. Apple could ultimately make more money selling fewer more expensive devices at higher margins than selling more devices at lower margins. So you're absolutely right.
Of course they could also make less profit by protecting their margins. We don't know.
> The amount of knee jerking in the comments for this law really brings down the quality of discussion and detracts from remaining issues.
Sadly, there are topics that sometimes come up on HN that make people have these knee-jerk reactions. Anything related to the EU, unions, Tesla, Israel, and more tends to bring out the worst in normally reasonable people. I'm probably guilty of it sometimes too; we're all human, after all.
Matrox and 3dfx never came close to making general-purpose GPUs; they simply got outcompeted in the regular PC graphics market of the late 90s. Matrox and 3dfx last tried to compete on architecture in 2002 [1] and 1999 respectively, while the first GPUs capable of general-purpose computing were released many years later, at the end of 2006 (by Nvidia) and mid-2007 (by AMD/ATI). High-performance GPUs had been a completely Nvidia and AMD/ATI affair for two decades until Intel entered the fray last year - since before people had even conceived of GPGPUs.
Matrox and 3dfx were gone before they could even conceive of a general purpose GPU strategy.
[1]: The Matrox Parhelia itself was out-of-date tech on release to boot, lacking the Direct3D 9.0-class architecture that ATI launched two months later.