This is just going to encourage bloat and inefficiency, isn't it? Note: this has nothing to do with specifically Apple Silicon but just any computing improvements in general.
Kind of a cynical take... but is he wrong?
But I see two reasons why the M1 is a big deal, even given the bloat tax.
One is simple power efficiency. These chips run cool and fast, and Intel laptop chips don't. No amount of hyperefficient code is going to get you a real 18 hours of battery life on last year's MacBook Air.
The other one is Apple's proven track record of integrating custom hardware and software into something greater than the sum of its parts. Ever since I got an iPad Pro, I've been quietly frustrated with the user interface of MacBooks. It's just not physics-smooth, and the iPad just is.
I just got one of the 16" MacBook Pros for work, and it's pretty loaded, and it's a good computer by pre-M1 standards. Battery life could be better; it sometimes overheats with serious fan noise for no good reason, although each update to Catalina appears to substantially reduce this.
I figured I could get five years on this rig. No way. I might make it to 2022, but I know terminal gearlust when I see it. Whatever Apple sticks in the next release of the 16", I'm craving it already.
More seriously, while program overheads have undoubtedly gotten higher, that's always been in exchange for something else, even if non-technical, like a better developer experience and platform support. Nobody writes slow programs on purpose, but people are very willing to trade it off for things that are more important in the grand scheme of things. Low level programmers love to smugly pretend that those tradeoffs do not exist but they do.
Wait, of course they write slow programs on purpose, because "premature optimization is the root of all evil," right? A developer might use Electron because it's "fast enough" and halves your development time, and "fast enough" is always a perceptual metric, and not a technical one.
Faster hardware means even less optimization is needed before something is shippable -- great for developers! -- and a recipe for software continuing to feel just as slow as before, because we only optimize until it's "fast enough".
And despite the M1, or any of the advances of the last decade, the perceptual line that is "fast enough" hasn't changed.
EDIT: Yes of course Knuth was talking about optimizing noncritical paths, but the spirit he's espousing lives on in system design: use Electron, or something else that makes your product more maintainable and easier to understand (because you didn't build your own bespoke cross-platform app scaffold, and Electron is well-documented, etc.), until you're sure you can't anymore. Well, the bar for "can't" is raised every time there's more CPU to support rapid, maintainable, "nonoptimal" development, and here we are.
That's not writing a slow program on purpose; that's accepting a tradeoff of speed for deliverability. Writing a slow program on purpose means intentionally adding code to make it slower with no other consequence.
"Fast enough" can be measured in ms latency for UX purposes. Ex: Websites test and benchmark themselves on load time because they know sales/pageviews decrease after too long.
So...making a purposeful decision that results in your program being slower...is not “writing slow programs on purpose”? They’re definitely not slow by accident.
No one but artists (this is not a dig at artists, in fact I love them and teach them to program at an art school) would take “writing slow programs on purpose” to mean “adding useless code to slow things down”.
It’s all about making decisions that prioritize the perceived latency at the expense of anything else.
And you're not adding it "to slow things down". You're adding it to make it possible to run at all on the other platform.
No, the first is not the meaning of the second.
The faster the underlying platform, the less work is required to get to "good enough" performance, so this strategy absolutely leads to software not getting faster as hardware gets faster, and the only way for an end-user to get their software to run fast is to have faster hardware than is typical for the target market for that software.
I don't think it's much of a stretch to call a development paradigm that causes software to never get faster even as hardware improves "writing slow programs on purpose."
No. The argument against premature optimization is not about labor vs. performance tradeoffs (but about correctness and maintainability vs. performance tradeoffs), nor is it about the entire code base, but about non-critical elements.
Quoth Knuth: “Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%.”
Thinking about and designing for broad performance concerns, even though it adds to initial implementation time, is not against the adage. You are engaging in the kind of premature optimization the expression targets when you compromise clarity and the ability to reason about the code to shave microseconds off a path, without a clear reason to believe that path will be hit often enough for those microseconds to matter in the big picture.
> I don’t think it’s much of a stretch to call a development paradigm that causes software to never get faster even as hardware improves “writing slow programs on purpose.”
If that is its explicit intent, sure; if it is a side effect, no. In any case, blaming such a paradigm on the adage against premature optimization is misplaced.
People should, and do, make tradeoffs for maintainability over speed — and this is still choosing to make your app slow — whether Knuth meant it or not.
If true, I envy your ability to produce correct, maintainable code without labor!
Further, there are things you should know are usually performance issues, which I would expect you to avoid entirely except with good reason: using O(N²) or worse algorithms on big collections when there are much faster alternatives in your language's standard library, for example. I don't think these good habits are premature optimization.
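A toy illustration of the kind of habit meant here (my own sketch, not from the parent comment): deduplicating a large array in TypeScript.

```typescript
// O(N²): for each element, scan the result array again via includes().
// On a million-element array this means billions of comparisons.
function dedupeQuadratic(items: string[]): string[] {
  const result: string[] = [];
  for (const item of items) {
    if (!result.includes(item)) {
      result.push(item);
    }
  }
  return result;
}

// Roughly O(N): the standard library's Set does the same job in one pass.
function dedupeWithSet(items: string[]): string[] {
  return [...new Set(items)];
}
```

Both versions are equally readable; reaching for the second one by habit isn't premature optimization, it's just not leaving an avoidable cliff in the code.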
It was not fast at all when the Java hype was peaking in the late 90s. I knew a dotcom era startup that replaced a bunch of Perl with Java and my friends complained it had fewer features and was much slower.
Programs written on Electron rely solely on web technologies. I always avoid these programs: one simple chat application eats up to half a GB of memory, and no, I am not going to increase my machine's memory in order to run them.
Real software that does a job you need it to do is much better than hypothetical software that will do the same job faster.
You don't need to port Electron-tier software. This was a solved issue long ago: code it in Java, Python, etc., or use Rust/C/C++ and a cross-platform toolkit.
There are plenty of devs who'd be happy to ship extremely high quality, small & fast code. It's just that very few people want to pay for it.
This is also how I felt watching the video, but on the other hand I have a really hard time finding serious faults with his point.
HN keeps things light. My personal website is super light. But the product I work on? What's a few MB gzipped down to less than a MB? 100ms? 300ms? Who can tell the difference?
I can, but much of the time the people the feature matters to certainly can't.
Yeah most things are frivolous... you can survive without a better camera in your phone for another year. But that ham-fisted implicit cultural aspect is what also brings people medical devices 5-10 years sooner than it could have. We all know these gains compound over time. So maybe it's ok if our programs are a little slow. We buy the truly mission critical technology faster with that frivolity.
But I'll also say, I still get mad at my phone and throw it pretty often. Just the way it is.
No, but we are in a place where most development projects are really "stitching" dependencies together, and those dependencies can get heavy.
On the other hand, this may help apps written in things like Ionic to speed up like crazy, because WebKit is now insanely fast.
Is there anything wrong with that, though? If there's any common thread in the history of computing, it's that progress happens when things become abstracted enough to allow for another layer of building blocks. That's not to say that those layers don't have a cost, or we don't need to learn how to do them well, but gluing together components is really all software development has ever been.
It's hard to use a lot of dependencies, regardless of their quality, without introducing bloat. Lots of redundancies.
For example, each dependency may have its own messaging or thread management system, or its own memory management system.
Or it may include other dependencies that have their own "special sauce."
For myself, I use lots of little packages; mostly written by Yours Troolie, and only occasionally use third-party dependencies.
Each of my packages is usually crafted towards a certain discrete function, and has its own project lifecycle. I think that's a good thing about using dependencies. I believe that modules with encapsulated lifecycles are a good way to ensure quality (or not, if I "choose poorly," as the Knight Templar said in the Indiana Jones movie).
Well, the problem is that developers are not writing fast programs on purpose. That they did not write it slow on purpose, yet it nevertheless turned out slow, is the real problem to me as a user.
True, but nobody writes fast programs on purpose either, only "fast enough" programs. The optimization work stops as soon as it "feels" fast enough, and this "feels fast enough" has been the same over the last few decades no matter how fast the underlying hardware is. Any advance in hardware will inevitably be eaten by software within a year or two. That's why running old software on new hardware feels so incredibly fast.
I also disagree that we got better developer- or user-experience out of the "deal". Most commercially developed applications have a "peak version", but still continue to be stuffed with new features that nobody asked for. E.g. name one feature added to Microsoft Word, Outlook or Excel in the last two versions that really improved the "user experience".
However, when it comes to some software areas prone to capture, there's not only no choice, but rather a multitude of incompatible-by-design options. Due to WFH, I need to have four different bloated chat/videoconference programs on all the time, which all offer the same features to me. When I want to watch a live stream, I need to open YouTube or Twitch or Facebook Live, which all offer the same basic functionality, but no one is interested in standardizing their interfaces so I could just point my familiar video player at them (at least without it breaking all the time due to cat-and-mouse fights between the platforms and open source developers).
I'm not generally in favour of regulation, but mandating some basic degree of interoperability so platform and software can compete separately seems like the only option.
> More seriously, while program overheads have undoubtedly gotten higher, that's always been in exchange for something else, even if non-technical, like a better developer experience and platform support.
- Many devs aren't familiar with the real breadth of possibilities and discount some approaches without understanding them (using "developer experience", "platform support", and "too low level" as excuses), thereby skewing towards familiarity and preference. As most devs are familiar with web tech, this tends to win out. Other legitimate issues might not even be understood (HCI, attack surface, accessibility, etc.)
- The performance hit might not be significant enough to notice on a dev-grade machine (e.g. M1 processor, 32GB of RAM, etc.) and/or in isolation, but will become noticeable when regular users run several apps built using the same heavy stack.
The latter is particularly apparent if you try to run multiple Electron apps simultaneously on a low-to-medium-specced laptop – which I find pretty egregious as a user, as multitasking computer systems have been a thing for a while now.
My interpretation is that many people (or companies) will usually write software that's no more performant than what they can get away with – which can be summed up, if cynically, as "users won't see any major performance gain or increase in capability in their day-to-day usage because of shitty software."
Still, I feel there's a lot of software I use that is hard to optimize or no longer developed, and I'm certainly happy to have it run faster given new hardware. As a programmer I also like being able to be lazier and still have usable software. Higher level languages, GC, less thinking about access patterns and debugging prefetch problems, etc.
That said, there's something of a problem with that: by making 1) more and more prevalent, it makes 2) more expensive, since fewer people choose to work in that manner. Similar to how consumer technology goods get much less expensive, but have many more defects and issues. That "race to the bottom" makes cheap things cheaper, but makes more expensive things more difficult to make. I don't know what that phenomenon is called specifically, but it's definitely something I've been frustrated by in nearly every industry and product category.
It's not that people or organizations were necessarily more considerate or contemplative in the past, it's just that the processes or technological improvements (e.g. better hardware) weren't there to allow those races to the bottom. And maybe the systems we were living under didn't reward that behavior as much.
This software culture is less visible on Linux than on macOS, but it's alive and thriving there too.
Also every laptop I've had prior, would get pretty hot or the fan would get quite loud when connected to an external display. This M1 is completely silent when doing that AND it doesn't throttle.
Further, as someone who deals with HPC at $WORK, our (medical) researchers always want faster.
Latency from the user perspective is a bit harder, although I guess for a webpage you could run it headless, record the screen, and then time how long it takes to go from a blank screen to showing information?
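A rough sketch of what that could look like without the screen recording, using headless Chrome via Puppeteer and the browser's paint timing entries (assumes `npm install puppeteer`; the URL is just a placeholder):

```typescript
import puppeteer from "puppeteer";

// Measure "blank screen -> something visible" (first contentful paint)
// and total load time for a page in headless Chrome.
async function measure(url: string): Promise<void> {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  const start = Date.now();
  await page.goto(url, { waitUntil: "load" });
  const loadMs = Date.now() - start;

  // Read the first-contentful-paint entry from inside the page.
  const fcpMs = await page.evaluate(() => {
    const [entry] = performance.getEntriesByName("first-contentful-paint");
    return entry ? entry.startTime : null;
  });

  console.log(`${url}: first contentful paint ${fcpMs} ms, full load ${loadMs} ms`);
  await browser.close();
}

measure("https://example.com").catch(console.error);
```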
If facebook doesn't have the money for the engineering effort to build a native messenger app, I don't know who does
Same as it ever was:
> Wirth's law is an adage on computer performance which states that software is getting slower more rapidly than hardware is becoming faster.
> The adage is named after Niklaus Wirth, who discussed it in his 1995 article "A Plea for Lean Software".
> Other common forms use the names of the leading hardware and software companies of the 1990s, Intel and Microsoft, or their CEOs, Andy Grove and Bill Gates, for example "What Intel giveth, Microsoft taketh away" and Andy and Bill's law: "What Andy giveth, Bill taketh away".
I was thinking the same thing as I read this bit. I’m still on an Intel Mac and recently switched from VSC to Panic’s Nova editor, highly polished and 100% native.
If bloated code runs fast on an M1, imagine how native code performs? I’m trying very hard to excise as much Electron garbage from my life as possible.
As an example, Minecraft on the 16GB M1 Pro running under Rosetta 2 runs much better (near-constant 60fps with everything on default, no fans, imperceptible heat) than on the 32GB i9 Pro (peaks about 40fps with everything turned down, full fans, will burn you unless you disable Turbo Boost).
I'm looking forward to a native JVM and seeing if it gets even better.
(I assume you can sub in your own JDK with Minecraft? Never used it)
The bar for quality is quite low for a lot of people.
We simply wouldn't have the services we have today at their low price if our hardware didn't offer them more breathing room.
As inefficiency tips further towards noticeable degradation on users' machines, more emphasis will be given to efficiency.
While you can profile performance in any web inspector, I think a good product team should keep a base-spec Windows laptop on hand to see how it all comes together for users who may not have the kind of hardware they design and develop on.
Roughly 1/100000 of what's on a low end computer today. On a CPU that ran at 1MHz and took many clock cycles per instruction.
In other words, yes but that's nothing new.
There's a heck of a lot of pointless bloat. But at least some of the extra power has gone to useful features!
Today feels different. Does Slack really have any significant features that couldn't be done on computers 15 years ago?
Apparently this and other things did not go over well with developers (apparently the apps needed to be done at least partly in assembly too). Wikipedia:
> GeoWorks attempted to get third-party developers but was unable to get much support due to expense of the developer kit — which ran $1,000 just for the manuals — and the difficult programming environment, which required a second PC networked via serial port in order to run the debugger.
> Even though PC/GEOS is referred to as an "operating system", it still requires DOS in order to load. GEOS and its applications were written in a mix of 8086 assembly (Espire) and C (GOC), both with non-standard language extensions to support the object-oriented design.
Processing the touches into events should take 1 C64 of power or less.
Scrolling shouldn't be significantly more difficult. 1 C64 for this.
The screen is much higher resolution, so we need the power to render that, but that's offloaded to the GPU and the GPU doesn't even need to leave idle frequencies.
60fps, well what was the C64? Whatever, multiply the resources by 10.
We've now accounted for... 20 C64s of power.
It's a little odd to exclude the supercomputer your computer comes with to help with scrolling.
And, I mean, the C64 offloaded rasterization to a different chip too.
Don't even get me started on drawing more than 8 sprites!
(Ex C64 games programmer here)
The C64 had a resolution of 320x200 pixels with 16 colors. That would cover about 1.5 square inches on a modern phone (modulo the colors, another factor of 6 to 10). You could not dream of re-rendering the whole screen at any FPS number. Games had to resort to all kinds of tricks to create a smooth experience, and whatever was put on screen was pre-rendered content combined in clever ways. Calculations involving floating point, like a mandelbrot set in said resolution, took hours to days per _frame_.
And the C64 does re-render every frame, because there's not enough memory for a frame buffer!
I'm not worried about games here, and word processors don't need fractals.
In the past, I developed on old hardware (or at least tested on old hardware). If the software was performant on my old machine, it was bound to be fine for most others.
Eh? Apple's fairly clear about what Swift is doing in the background (which isn't much, really; boring ol' reference counting).
This might apply more to Java, mind you. There are no secrets, but most people probably aren't going to investigate exactly what the JIT or GC are doing.
There is no virtual machine, and nothing is happening other than your code and the libraries you use.
It’s only high level from the point of view of the type system.
Are there any specific instances you’re thinking of here? Java, Go, and even Swift all are open source.
I don’t really understand your complaint here. The Swift runtime library is pretty limited - basically just allocation and reflection. The compiler spits out machine code that you can look at. Where is the “background” you are thinking of?
I remember being shocked when I first read a game requiring 16MB of RAM (yes, 16MB, get off my lawn).
If I ever meet the king of software I could ask him to make faster software, but I haven't found him in the phonebook yet.
They are cross-platform apps which still need to run well on the 95% of other computers which don't have the M1 chip.
This isn't like with web browsers where it impacted everyone.
If it does, Wintels are not going to be pleasant environments for users. Wintel laptops would need to go up to 32GB/64GB of RAM.
That's naive and wrong. The M1 seems to be an excellent processor with good performance. It's not worlds better than a Ryzen 5 5600X or its mobile variant.
> I decided I don't need more RAM than 8 GB, because nobody from the first reviewers managed to give the RAM a hard time. It seemed to me it's a different beast altogether and it doesn't make sense to just compare numbers and say that more is better.
The processor is not a magical thing that changes how much memory is needed for workloads. When the first reviewers did not look at your workload it's meaningless that they did not complain.
Your memory requirements completely depend on your workload. It's quite possible you do not use more than 8GB, but I had IDEs and GIMP use more than that. And it's of course (it's Apple) not upgradeable, 8GB will not be enough if your usage changes, and then that laptop will just be expensive garbage for you.
And right, it's professional usage where that would fall short, I doubt that it would help as much when the image file I open fills the ram completely or when parts of the IDE are swapped away.
Maybe if you're solely focused on performance.
But the other reason why M1 is so ground breaking is because it offers this performance with ~2x the battery life and minimal cooling compared to its competitors.
> Given the little chance Intel or anyone else is going to keep up with this new level of performance
"All of the technologies in the SoC can access the same data without copying it between multiple pools of memory. This dramatically improves performance and power efficiency. Video apps are snappier. Games are richer and more detailed. Image processing is lightning fast. And your entire system is more responsive."
Of course if you need 12 GB of data in memory then sharing 8 GB won't be enough, but for most current uses of 16 GB it's possible that 8 GB could feel as responsive, between the shared memory and the faster access speeds (due to faster components and smaller distances).
It's easy to type a comment about bad decisions and garbage, but the world doesn't work in terms of absolute truths, the decisions we make are made under certain circumstances and the success of the results is subjective. I risked something, I know it, so far I'm happy with my choice.
Given how overpriced the bigger memory configurations are I understand why you took that risk.
The earlier parts of my comment just reflect my impression that you overestimate the performance of that laptop compared to available alternatives. 10x faster is just not accurate. Remember the context: A huge group of Apple fanboys that buy everything the company produces without being aware of the alternatives and drawbacks. That's why it feels important to point out misconceptions like that, whether they are really there or just the interpretation.
For $1000 the base model Air is a hell of a machine.
People sometimes forget that macOS has had automatic RAM compression since Mavericks (10.9). Think about how fast this must be using Apple's SoC combined with the extremely fast SSD at 2190 MB/s writes and 2675 MB/s reads. With compression, the effective speed is at least double.
It would take a whole lot for there to be any appreciable slowdown running lots of apps simultaneously with 8 GB of RAM.
That's Spotlight indexing the email, PDFs, Word files, etc. on your hard drive. Once it's done, your Mac will be much more responsive: https://www.macworld.com/article/3388134/spotlight-dont-take...
That's what I'm thinking.
Having said that, according to the article "Intel's Disruption is Now Complete" discussed on HN before, Intel's lowest-end Celeron CPU line was introduced just to delay the inevitable. But what the article did not mention is that Intel purposely crippled its low-end CPUs with a minimal amount of supported RAM. Until very recently, the most RAM you could get with Celeron-based processors was only a few GB. However, perhaps due to competition from AMD and other ARM-based manufacturers, Intel has done the unthinkable, and with the latest Gold processor series (11th Gen) you can have up to 128GB of RAM. FYI, the Gold processor series was conveniently placed in between the Core and Celeron lines back in 2016 and is essentially a beefed-up Celeron. Not only that, it now supports Optane, Intel's proprietary non-volatile (persistent) memory technology, which can provide its lower-end CPUs with potentially more than a TB of working memory. Imagine using a tablet/PC/home server in the year 2021 having an equivalent of humongous terabytes of RAM!
And then likely 2022 we see the M3 chip for the Mac Pro, Mac mini Pro and iMac Pro.
I think Mac has their own, but I use NoMachine and it basically just works and can work with different client and server OSes. On the client side it is just video decoding so it is not very demanding - and it uses less bandwidth than I would think.
Yeah, that line made me take the entire review less seriously.
What if you are barely using that GPU, though? Wouldn't most of the GPU RAM just be wasted while you did CPU-intensive tasks?
Likewise, in your example, if you were using the GPU more, less RAM could be allocated to the CPU and more to the GPU, improving GPU performance by using the available RAM more efficiently.
Outside of the CPU, you're really not using the other processors for "many tasks."
In the end, honestly, I don't care that much if it's the processor, RAM, or a pixie dust. The machine's snappy whatever I do on it and that's all that matters. I'm happy with my choice so far, but only time will tell, of course. If you do RAM-intensive stuff on your computer and have the money, buy RAM, good for you.
But this isn't completely true. ARM compiles down to significantly smaller binaries than x86, because it doesn't have to support legacy extensions.
Edit: you may be reading IA64 by accident
I think the reason I would always buy more RAM is not because it's needed today, but because it may be needed in 2 or 3 years, as apps inevitably eat into the performance gains with bloat.
I bought a mac mini with 32 GB in 2018 and I managed to OOM it a few times and oh boy was it ugly - already regret not taking the 64gb option at the time.
I do most of my data engineering in the cloud, or on a 32 GB Linux system - so I don't need a lot of memory for data sets - just something that will run the applications. I would never recommend that the average user buy more than 8GB of memory with a Mac Laptop.
Meanwhile, my 16 GB Linux laptop (Ubuntu 20.04, Dell XPS 15 7590) routinely comes to a crashing halt if I try to do something crazy like open up 40 browser tabs and then start Zoom. It's horrible. I've spent hours trying to see what's going on with top, to no avail. If only it were a graceful degradation, but open a single browser tab on the wrong site once you are close to the edge: game over. Usually it's faster to just restart the system than wait for whatever swapping is happening to conclude.
I've rarely had this happen to me on the MacBook Air.
Different operating systems/hardware really seem to make a difference using pretty much the same workload, speaking from personal experience of someone who splits their time 50/50 on Linux and macOS.
It decreases swapping to disk by using a compressed ramdisk for logs, unused stuff etc. The zram-config from universe may be broken (https://github.com/StuartIanNaylor/zram-swap-config)
It's usually gcc that eats up all my memory, turns out with template metaprogramming a single gcc instance can eat 3GB of RAM, and I usually build with 6 parallel instances. Thankfully I've got plentiful SSD-backed swap so I don't feel the system becoming too unresponsive.
(Not saying it is ergonomic or better than the mac.)
Only thing it’ll probably die on is running VMs which I’m not going to do any more. They can live in DigitalOcean.
One liner review: this is the only computer I’ve ever had which can keep up with me.
People have been saying this for years and my personal 2011 17", 2016 13" and 2018 Mini have never had more than 8GB. Never noticed an issue.
Unless you know you're going to be running memory intensive workloads, I see no need.
It was a very different situation even only a few years ago.
So with wear levelling you should see the SSD last well beyond the usable lifespan of the device.
I mostly use my Mac for personal projects in TypeScript, Swift, and Java, and for browsing the web. I also play video games occasionally, and I'm surprised at how good the performance is.
It's incredibly fast, and at $700 it's a steal. Normally with Apple you pay a pretty hefty premium if you want performance, but the M1 makes it much more accessible.
The magic of the M1 is not just the hardware, but also the software of MacOS.
Putting everything together on the M1 chip (CPU, GPU, Neural Engine) and allowing all of those to share memory seems to be where much of the magic comes from. The further optimization of MacOS to leverage all that helps as well.
AMD already has a lot of experience with high-speed interconnects. They already have a lot of experience with APUs. I wonder if it'd be possible for Microsoft and AMD to enter some sort of partnership whereby they optimize Windows in the same way MacOS has been optimized.
I really fucking hate Apple. I hate their walled garden approach. I hate their "trendiness". So I have a small bit of self-loathing that I own the base model new Mac Mini, but I can't allow my seething hatred for the company to cloud my judgment when it comes to using the fastest and most productive platform for my work, and, I'm sorry to say, this little $699 machine outperforms my Ryzen 9 3900X w/ 32 GB of RAM and an RX 5700XT for most tasks. I still game on this machine, but most productivity work has been relegated to the Mac Mini now.
Learning MacOS has been frustrating, but also kinda fun.
Anyway, the walled garden sucks, but it's worth the trade off to me.
Memory is one of those things where increasing the size tends to result in a qualitatively different experience rather than a quantitatively different one.
The most noticeable things for Xcode are:
- Iterating is faster, the build-run loop is much quicker so it feels lighter when you make code changes
- Indexing and autocomplete is very quick. Noticeably better than the MBP
I also use Android Studio and it's definitely not as quick or smooth as Xcode. But it never was on my old Intel machine either. The UI seems to lag under load and navigating a large code-base just feels a bit more clunky due to lag on character input and opening files
Android Studio feels about the same as it did on my i9 as the M1
On your last note:
>> Android Studio feels about the same as it did on my i9 as the M1
It seems even this would be progress also - so you're getting ~ the same Android Studio performance on a 32GB RAM i9 (expensive) and 16GB RAM M1 (less expensive)
This seems like a win all around.
I’m honestly wary of devs who think 16 GB would be too “limiting” for app development. They’re probably also the ones responsible for churning out bloated crap that stinks like ass even in 32 GB.
When I'm using 32GB of RAM for an app, I'm not actually running the entire app in 32GB of RAM. The breakdown is usually:
- 8GB: Xcode/Android Studio, Chrome, Slack
- 4-8GB: Virtualized App VM in profile 1 (Hospital Staff)
- 4-8GB: Virtualized App VM in profile 2 (Patient)
- 4-8GB: Virtualized App VM in profile 3 (Admin)
Interactions in one VM affect the other. You typically debug and test UX across all at the same time. Hopefully much of it is already tested with REST API layer testing, but not all, and definitely not QA/UAT. Is this not how it is done? How do other people do it?
I have a friend doing something similar for a ride hailing app, he has a customer app, and a driver app. He constantly runs both side by side while developing. I have another friend doing a food delivery app -- again, he has three profile VMs running side by side: customer, restaurant, admin.
So, yes, it seems M1 MBPs are not limiting at this time, correct. They probably will be in 2025 after a dozen OS and Xcode updates, and I imagine I'll need another laptop then. If none of us ever needed more, we'd still be on the 2008 iMac with 4GB RAM and we wouldn't be speaking about the M1 at all.
And 640kb ought to be enough for anybody. Anyone trying to sell you more RAM is just setting up a trap in order to charge those who don't believe Bill Gates and just want more RAM whatever it takes.
The vast majority of people upgrading their RAM are doing so simply because they see a bigger number and not because they need it.
model name : Intel(R) Core(TM) i7-6600U CPU @ 2.60GHz
It doesn't have hw acceleration for VP9, so YouTube specifically has a lot of issues. I've been able to rectify it by using h264ify to force YouTube to give me h264, but a normie isn't going to be able to figure that out.
Even a browser (Chrome and Firefox) tab in the background with a few CSS animations causes it to be noticeably unresponsive in other applications.
It supports 4K output but my tests (backend TypeScript) are measurably slower when the display is at 4K vs 1440p.
What OS are you using? I run Linux so I'm wondering if it's that, but I've tried every different DE, Xorg vs Wayland, and different drivers, and they are all just as bad.
Disclaimer: I bought the Mac for building apps only.
YouTube is a turd. I do everything I can to avoid using it. How they’ve screwed up a video player that badly boggles the mind.
(Of course, the YouTube frontend is also fairly bloated, which doesn't help. There isn't much of an excuse for that one).
The title of the article/link could be a bit more descriptive though.
Maybe not a real-world use case, but in day-to-day use it really does fly. Things I've noticed that are real world and noticeably quicker than my Intel MBP day to day: decompressing large archives (4GB+ ZIP files, for instance), compressing folders with thousands of small files, opening very large XD & Photoshop files, etc. My MBP used to sound like a jet engine when watching YouTube while doing multiple other things. I'm still yet to hear the fan spin up on this once.
Moreover Apple has an essentially unlimited supply of machines. I can't imagine it would be worth it to actually take in machines and repair them (not least because it would undermine confidence in the refurb store if it was a gamble about whether you'd get a basically-new machine or beat-up machine.)
Anyhow, I expect that ARMv9 will be the hotness and this first generation will lose support a little more quickly than next year's models. I guess it depends on ARMv9, though.
It's impossible to say if the next gen will be a giant leap where you'll feel bad for "wasting" money on the previous gen, or if it'll be just a small upgrade.
I daily a 2015 MBP. I own other newer hardware as well (dev kit; Linux box; wife has a modern MacBook I’ve used extensively). It’s not slow.
I just don’t run any Chrome junk at all. I still hit 8 hours of battery life and never experience slowdowns - even with Xcode running 24/7.
If you avoid modern bloat, we’ve had fast computers for quite some time. I’m sure the M1 is still another league, and I’ll likely buy one just to come forward a few years, but otherwise any Mac from the past 5-7 years is still very, very usable IME.
Short of that keyboard issue, that is.
I mean, part of it is that I don't work at a big company that requires I transpile 100k LOC every time I want to see my work on screen any more, while attending to 100 Slack channels at the same time, but I'm typically running GoLand, VSCode, Safari (what I use personally), and Firefox (better for testing my web code) at the same time and it's snappy. Maybe 2-second TypeScript or Go compile times, no noticeable slowdowns switching apps, etc.
MacBook Pro (Retina, 15-inch, Late 2013)
2.3 GHz Quad-Core Intel Core i7
16 GB 1600 MHz DDR3
From what I understand, USB-C Macbooks that came after this magsafe generation had to CPU throttle earlier to deal with thermal issues, so despite better specs they couldn't get pushed without overheating. As well as heat/throttle issues if you try to charge it from the wrong USB-C port.
Work also gave me a 2015 Mac and now Linux is running on it. It's pretty fine and far from sluggish. No idea what the author is doing to have problems.
Just spitballing here, but I’m guessing the OP doesn’t spend much time using DAW sample libraries or editing video.
What we need are concrete A/B tests. Two identical (other than RAM size) x86 systems running large data set memory hungry tasks that performs significantly better on a 16 GB system than a 8 GB system. Then the same tasks (running native ARM code) on 8 and 16 GB M1 systems with the identical OS version. Ideally a sequence of tests with varying memory demands. Tests like that would at least get in the ballpark of revealing if the M1 systems are somehow more capable with less memory. The rest is just anecdotal.
I suspect that M1 systems might handle light swapping a bit more efficiently than the latest x86 systems, but tasks with truly large datasets will ultimately be limited by SSD access rates. Of course, that is just another useless opinion until reasonable test data is available.
Thanks to blog posts like yours I am also buying a 16G M1 MacBook Pro for general purpose uses such as web development!
The only things I can think of are that it’s because I used the Migration Assistant to go from my old MacBook Air to this, or that I’ve somehow acquired a lemon. I’m going to try wiping it clean and starting fresh, but beyond that it’s in refund territory.
I really want to be wrong about it, but I’m sceptical of the good press at this point given my own experience.
I should note I run everything (besides Firefox) on arm build (vscode insiders is super glitchy but it's the price to pay for nightly builds).
I wasn't sure which one to get but after upgrading the memory to 16GB the cost difference is pretty inconsequential between the two.