I Bought Apple Silicon (honzajavorek.cz)
196 points by honzajavorek 79 days ago | 290 comments



> Spotify, Slack, Discord, Facebook, Twitter, LinkedIn, all the annoying bloated websites, which I used to hate so much for the past months, now run as if they were the most optimized websites on the internet.

This is just going to encourage bloat and inefficiency, isn't it? Note: this has nothing to do with specifically Apple Silicon but just any computing improvements in general.


Yes, but that's pretty much how it goes. Jonathan Blow had a rant on the release of the M1 Macs saying it literally didn't matter, because computers were already really fast and the problem has been, and will continue to be, that if the processor/system is faster, software will just be written less efficiently until it hits the magical response time users expect.

Kind of a cynical take... but is he wrong?


We used to have a saying "Moore giveth, and Gates taketh away".

But I see two reasons why the M1 is a big deal, even given the bloat tax.

One is simple power efficiency. These chips run cool and fast, and Intel laptop chips don't. No amount of hyperefficient code is going to get you a real 18 hours of battery life on last year's MacBook Air.

The other one is Apple's proven track record of integrating custom hardware and software, into something which is greater than the sum of its parts. Ever since I got an iPad Pro, I've been quietly frustrated with the user interface of MacBooks. It's just not physics-smooth, and the iPad just is.

I just got one of the 16" MacBook Pros for work, and it's pretty loaded, and it's a good computer, by pre-M1 standards. Battery life could be better; it overheats sometimes with serious fan noise, for no good reason, although each update to Catalina appears to substantially reduce this.

I figured I could get five years on this rig. No way. I might make it to 2022, but I know terminal gearlust when I see it. Whatever Apple sticks in the next release of the 16", I'm craving it already.


"What Andy giveth, Bill taketh away" (Andy Moore, Bill Gates)

https://en.wikipedia.org/wiki/Andy_and_Bill%27s_law


Andy Grove. Moore's first name is Gordon.


Grove, my bad.


Wow man, I'm really feeling that. I also have a great 16" and I'm experiencing similar issues with it. The new M1 MacBook Air is already as fast as my machine and costs less than half. It's bonkers. I use two external monitors, however, and the Air doesn't support that, so that's one thing keeping me away from the new M1. I imagine the new 16" MacBook Pros will support more than one external monitor, and I don't think I'd be able to stop myself from getting one.


Given it's Jonathan Blow, I feel compelled to say yes.

More seriously, while program overheads have undoubtedly gotten higher, that's always been in exchange for something else, even if non-technical, like a better developer experience and platform support. Nobody writes slow programs on purpose, but people are very willing to trade speed off for things that are more important in the grand scheme of things. Low level programmers love to smugly pretend that those tradeoffs do not exist, but they do.


> Nobody writes slow programs on purpose, but people are very willing to trade it off for things that are more important in the grand scheme of things

Wait, of course they write slow programs on purpose, because "premature optimization is the root of all evil," right? A developer might use Electron because it's "fast enough" and halves your development time, and "fast enough" is always a perceptual metric, and not a technical one.

Faster hardware means even less optimization is needed before something is shippable -- great for developers! -- and a recipe for software continuing to feel just as slow as before, because we only optimize until it's "fast enough".

And despite the M1, or any of the advances of the last decade, the perceptual line that is "fast enough" hasn't changed.

EDIT: Yes, of course Knuth was talking about optimizing noncritical paths, but the spirit he’s espousing lives on in system design: use Electron, or something else that makes your product more maintainable and easier to understand (because you didn’t build your own bespoke cross-platform app scaffold, and Electron is well-documented, etc.), until you’re sure you can’t anymore. Well, the bar for “can’t” is raised every time there’s more CPU to support rapid, maintainable, “nonoptimal” development, and here we are.


> Wait, of course they write slow programs on purpose, because "premature optimization is the root of all evil," right? A developer might use Electron because it's "fast enough" and halves your development time, and "fast enough" is always a perceptual metric, and not a technical one.

That's not writing a slow program on purpose; that's accepting a tradeoff of speed for deliverability. Writing a slow program on purpose means intentionally adding code to make it slower with no other consequence.

"Fast enough" can be measured in ms latency for UX purposes. Ex: Websites test and benchmark themselves on load time because they know sales/pageviews decrease after too long.


> accepting a tradeoff of speed for deliverability

So...making a purposeful decision that results in your program being slower...is not “writing slow programs on purpose”? They’re definitely not slow by accident.

No one but artists (this is not a dig at artists, in fact I love them and teach them to program at an art school) would take “writing slow programs on purpose” to mean “adding useless code to slow things down”.

It’s all about making decisions that prioritize the perceived latency at the expense of anything else.


There is a difference between deciding "we don't need to be fast" and deciding "we need to be slow."


Code that makes your software able to run on a completely different platform with minimal further development cost isn't "useless". Quite the contrary.

And you're not adding it "to slow things down". You're adding it to make it possible to run at all on the other platform.


I guess I wasn’t clear — this is my point! You’re adding it because it has huge benefits, and one cost is it slows things down. You are still making an explicit decision to use something that makes your app slower, no?


There's a significant difference in meaning between "write slow programs on purpose" and "write programs that are fast enough but not faster."


> Wait, of course they write slow programs on purpose, because "premature optimization is the root of all evil," right?

No, the first is not the meaning of the second.


Warnings against premature optimization suggest a two-pass strategy, where in the first pass you avoid trading increased performance for increased labor, and then in the second pass you do whatever is needed to make the performance good enough.

The faster the underlying platform, the less work is required to get to "good enough" performance, so this strategy absolutely leads to software not getting faster as hardware gets faster, and the only way for an end-user to get their software to run fast is to have faster hardware than is typical for the target market for that software.

I don't think it's much of a stretch to call a development paradigm that causes software to never get faster even as hardware improves "writing slow programs on purpose."


> Warnings against premature optimization suggest a two-pass strategy, where in the first pass you avoid trading increased performance for increased labor, and then in the second pass you do whatever is needed to make the performance good enough.

No. The argument against premature optimization is not about labor vs. performance tradeoffs (but about correctness and maintainability vs. performance tradeoffs), nor is it about the entire code base, but about non-critical elements.

Quoth Knuth: “Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%.”

Thinking about and designing for broad performance concerns even though it adds to initial implementation time is not against the adage; it’s when you are compromising clarity and the ability to reason about the code to shave off microseconds on a path, without a clear reason to believe that it’s going to be hit often enough that those microseconds matter in the big picture, that you are engaging in premature optimization of the type targeted by the expression.

EDIT:

> I don’t think it’s much of a stretch to call a development paradigm that causes software to never get faster even as hardware improves “writing slow programs on purpose.”

If that is its explicit intent, sure; if it is a side effect, no. In any case, blaming such a paradigm on the adage against premature optimization is misplaced.


You're right, I guess it's the cargo cult interpretation of "premature optimization is the root of all evil" that's the issue.


That the adage was intended to address a narrower case does not invalidate your argument, friend!

People should, and do, make tradeoffs for maintainability over speed — and this is still choosing to make your app slow — whether Knuth meant it or not.


> No. The argument against premature optimization is not about labor vs. performance tradeoffs (but about correctness and maintainability vs. performance tradeoffs), nor is it about the entire code base, but about non-critical elements.

If true, I envy your ability to produce correct, maintainable code without labor!


Part of the reason for writing the slow version is that you need to profile it to find out where the problems actually are, because intuition is not a very good guide. Some of the worst bottlenecks seem tiny but happen so frequently they add up, while a lot of things that sounded slow don’t actually matter.
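
For anyone who hasn't done that loop, it's cheap to set up. A minimal sketch in Python (cProfile and pstats ship with the standard library; the two little functions are hypothetical stand-ins for whatever your real code does):

    import cProfile
    import pstats

    def parse_records(lines):
        # Hypothetical step, written the plain/obvious way first.
        return [line.strip().split(",") for line in lines]

    def summarize(records):
        # Another hypothetical step; which one dominates isn't obvious up front.
        return {row[0]: len(row) for row in records}

    def main():
        lines = ["a,b,c\n"] * 100_000
        summarize(parse_records(lines))

    if __name__ == "__main__":
        # Profile first; only then optimize the functions that actually show up hot.
        cProfile.run("main()", "profile.out")
        pstats.Stats("profile.out").sort_stats("cumulative").print_stats(10)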


That's also the thing with "premature". I would agree with you that it's premature before you know what's slow. But if you have a CI profiling run showing a few hotspots, it's not premature anymore. Still, people quote this adage.

Further, there are things you should know are usually performance issues, and that I would expect you to avoid entirely except with good reason: using O(N²) or worse algorithms on big collections when there are much faster alternatives in the standard library of your language, for example. I don't think these good habits are premature optimization.
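
As a concrete sketch of that last point in Python (deduplication is an arbitrary example): membership tests against a list are linear, so doing them in a loop goes quadratic, while the standard library's set gives the same result in roughly linear time.

    def dedupe_quadratic(items):
        # O(N^2): "item in seen" rescans the whole list on every iteration.
        seen = []
        for item in items:
            if item not in seen:
                seen.append(item)
        return seen

    def dedupe_linear(items):
        # O(N): same result, same order, using a set from the standard library.
        seen, out = set(), []
        for item in items:
            if item not in seen:
                seen.add(item)
                out.append(item)
        return out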


We are looking at Electron as a potential way to write once for all platforms, not least to be able to run on Mojave, Catalina and Big Sur without a breaking change every 12 months. The software is in the 'run once every few months' category, so for me the optimisation required is not anything to do with speed or memory usage, and more about not getting nailed by Apple's latest change of direction.


That’s nonsense though. Native Mac apps built years ago continue to work completely fine on the latest OS releases.


One wonders if VC-funded startups or "move fast and break things" firms like Facebook or Uber (as opposed to, say, '90s Microsoft or Sun or IBM or Bell Labs) becoming the vanguard of some categories of software has created perverse incentives that reward this behavior: a cultural shift in Silicon Valley software development.


People used to talk about Java in the same way people talk about electron now so I feel like that's a definite no.


Programs written in Java run on top of the JVM. Programs written for Electron, on the other hand, come bundled with Chromium. And we all know how it manages memory.


The JVM is fast today after 25 years of optimization and some serious improvements in the underlying processors (even with the Moore’s law failure of maybe half that period).

It was not fast at all when the Java hype was peaking in the late 90s. I knew a dotcom era startup that replaced a bunch of Perl with Java and my friends complained it had fewer features and was much slower.


Java's selling point was portability, and it did that very well. It was faster than programs written in interpreted languages, though it was, and still is, slightly slower than programs written in C and the like.

Programs written on Electron rely solely on web technologies. I always avoid these programs; one simple chat application eats up to half a GB of memory, and no, I am not going to increase my machine's memory in order to run those programs.


One wonders how fast Electron will be in 25 years, if it lasts that long.


The relative penalty for using Java over C is so much smaller than Electron over Java or Electron over C


For cases on the edge, the penalty for not using Electron is that you don't get to use the software at all, because the developer doesn't have the personnel or budget to port it to multiple platforms.

Real software that does a job you need it to do is much better than hypothetical software that will do the same job faster.


That's not really true. There's very little software that wouldn't exist if it weren't for Electron.

You don't need Electron to port this tier of software. This was a solved issue long ago: code it in Java, Python, etc., or use Rust/C/C++ and a cross-platform toolkit.


Given that people whine if an app costs 99 cents, and that a lot of people proudly copy commercial software without paying... Maybe it's not just developers at fault here.

There are plenty of devs who'd be happy to ship extremely high quality, small & fast code. It's just that very few people want to pay for it.


One of the problems is that Microsoft failed to provide a decent native framework for their platform. WinRT/C++ now kind of fits the bill, but MS still doesn't seem to push it in any way. I've recently switched to the Mac and I'm learning Swift/SwiftUI/Metal right now. Apple has its act together; writing native apps using those technologies is a dream come true.


> Given it's Jonathan Blow, I feel compelled to say yes.

This is also how I felt watching the video, but on the other hand I have a really hard time finding serious faults with his point.

HN keeps things light. My personal website is super light. But the product I work on? What's a few MB gzipped down to less than a MB? 100ms? 300ms? Who can tell the difference[0].

[0]: I can, but certainly not the people who the feature matters to much of the time.


And even if you could, you're getting more value by getting that thing sooner. Or an even better argument: in the end, maybe the product needed to take just that amount of resources, at the quality you got, to even be born in the current market.

Yeah, most things are frivolous... you can survive without a better camera in your phone for another year. But that ham-fisted, implicit cultural aspect is also what brings people medical devices 5-10 years sooner than they could otherwise have arrived. We all know these gains compound over time. So maybe it's ok if our programs are a little slow. We buy the truly mission-critical technology faster with that frivolity.

But I'll also say, I still get mad at my phone and throw it pretty often. Just the way it is.


It's not an either or. Cutting edge technology has always been a bit slow and rough around the edges. That's not an excuse for your chatroom client to use 450MB of RAM.


> Nobody writes slow programs on purpose

No, but we are in a place where most development projects are really "stitching" dependencies together, and those dependencies can get heavy.

On the other hand, this may help apps written in things like Ionic to speed up like crazy, because WebKit is now insanely fast.


> we are in a place where most development projects are really "stitching" dependencies together

Is there anything wrong with that, though? If there's any common thread in the history of computing, it's that progress happens when things become abstracted enough to allow for another layer of building blocks. That's not to say that those layers don't have a cost, or we don't need to learn how to do them well, but gluing together components is really all software development has ever been.


It's both good and bad, depending on how much care is taken in selecting dependencies.

It's hard to use a lot of dependencies, regardless of their quality, without introducing bloat. Lots of redundancies.

For example, each dependency may have its own messaging or thread management system, or its own memory management system.

Or it may include other dependencies that have their own "special sauce."

For myself, I use lots of little packages; mostly written by Yours Troolie, and only occasionally use third-party dependencies.

Each of my packages is usually crafted towards a certain discrete function, and has its own project lifecycle. I think that's a good thing about using dependencies. I believe that modules with encapsulated lifecycles are a good way to ensure quality (or not, if I "choose poorly," as the Knight Templar said in the Indiana Jones movie).


> Nobody writes slow programs on purpose,

Well, the problem is that developers are not writing fast programs on purpose either. That they did not write it slow on purpose, but it nevertheless turned out to be slow, is the real problem to me as a user.


> Nobody writes slow programs on purpose

True, but nobody writes fast programs on purpose either, only "fast enough" programs. The optimization work stops as soon as it "feels" fast enough, and this "feels fast enough" has been the same over the last few decades no matter how fast the underlying hardware is. Any advance in hardware will inevitably be eaten by software within a year or two. That's why running old software on new hardware feels so incredibly fast.

I also disagree that we got better developer- or user-experience out of the "deal". Most commercially developed applications have a "peak version", but still continue to be stuffed with new features that nobody asked for. E.g. name one feature that was added to Microsoft Word, Outlook or Excel in the last two versions which really added to the "user experience".


Not just old software, but with a lot of open source software you usually have a spectrum of choice between more basic, lightweight software and more elaborate, heavyweight software. Think LXDE vs XFCE vs KDE vs Gnome as desktop environments, or nano vs Sublime vs Code/Atom vs IntelliJ as text editors, for example.

However, when it comes to some software areas prone to capture, there's not only no choice, but rather a multitude of incompatible-by-design options. Due to WFH, I need to have four different bloated chat/videoconference programs running all the time, which all offer the same features to me. When I want to watch a live stream, I need to open YouTube or Twitch or Facebook Live, which all offer the same basic functionality, but no one is interested in standardizing their interface so I can just point my familiar video player at it (at least without it breaking all the time due to cat-and-mouse fights between the platform and open source developers).

I'm not generally in favour of regulation, but mandating some basic degree of interoperability, so platforms and software can compete separately, seems like the only option.


> More seriously, while program overheads have undoubtedly gotten higher, that's always been in exchange for something else, even if non-technical, like a better developer experience and platform support.

Software architecture usually entails trade-offs in many directions, between NFRs, etc. – but it feels to me like the interpretation of the performance scale (as well as developer experience, really) is warped by two things:

- Many devs aren't familiar with the real breadth of possibilities and discount some approaches without understanding them (using "developer experience", "platform support", and "too low level" as excuses), thereby skewing towards familiarity and preference. As most devs are familiar with web tech, this tends to win out. Other legitimate issues might not even be understood (HCI, attack surface, accessibility, etc.)

- The performance hit might not be significant enough to notice on a dev-grade machine (e.g. M1 processor, 32GB of RAM, etc.) and/or in isolation, but will become noticeable when regular users run several apps built using the same heavy stack.

The latter is particularly apparent if you try to run multiple Electron apps simultaneously on a low-to-medium specced laptop – which I find pretty egregious as a user, as multitasking computer systems have been a thing for a while now.

My interpretation is that many people (or companies) will usually write software that's no more performant than what they can get away with – which can be summed up, if cynically, as "users won't see any major performance gain or increase in capability in their day-to-day usage because of shitty software."


The thing is, the tradeoff makes more sense for the developer, who gets his job done quicker, than for the customer, who pays for the sloppy bloat every time he uses the software.


Slow code is usually written due to ignorance rather than choice. Lots of people only know OOP, which is slow and unoptimisable by default.


Carmack also famously declared "I would rather have magic software over magic hardware" - the context being that we're using current hardware to a small fraction of its potential.

Still, I feel there's a lot of software I use that is hard to optimize or no longer developed, and I'm certainly happy to have it run faster given new hardware. As a programmer I also like being able to be lazier and still have usable software. Higher level languages, GC, less thinking about access patterns and debugging prefetch problems, etc.


The software being written slower is of course enabled by those hardware improvements, but 1) the slow software is due to inefficiencies that make software easier/less expensive to produce, and 2) we can still make fast software by not using those inefficient means.

That said, there's something of a problem with that, because by making 1) more and more prevalent, it makes 2) more expensive, due to fewer people choosing to work in that manner. Similar to how consumer technology goods get much less expensive, but have many more defects and issues. That "race to the bottom" makes cheap things cheaper, but makes more expensive things more difficult to make. I don't know what that phenomenon is called specifically, but it's definitely something I've been frustrated by in nearly every industry and product category.


I think you're definitely onto something, and what you're describing is a recurring pattern of modern society chasing after quick gains and optimizing certain metrics (e.g. short-term profit) over other considerations.

It's not that people or organizations were necessarily more considerate or contemplative in the past, it's just that the processes or technological improvements (e.g. better hardware) weren't there to allow those races to the bottom. And maybe the systems we were living under didn't reward that behavior as much.


There's also a counterculture of projects that are lightweight alternatives: Sway as a lightweight window manager (among many lightweight WMs), musl vs glibc, LibreSSL vs OpenSSL, Gemini vs HTTP, Alpine Linux vs Ubuntu, Alacritty as a blazing fast terminal.

This software culture is more visible on Linux than on macOS, but it's alive and thriving.


Yeah, sorta wrong. Sure, computers are already really fast, but the big thing with the M1 Macs for me is that they've finally come out with a laptop that doesn't heat up like crazy when I'm using it for moderately demanding tasks.

Also every laptop I've had prior, would get pretty hot or the fan would get quite loud when connected to an external display. This M1 is completely silent when doing that AND it doesn't throttle.


We have had passively cooled high performance devices for a while. M1 Macs are a great step, but it's more useful to compare them with previous gen mobile devices (iPad Pro 2018 ran faster than the 13" MBP at the time). Bringing ARM SoCs to laptops is also not new (e.g. Surface Pro X from Oct 2019), but I agree it's worth noting that Apple is doing a great job of it. Best x86 translation out there.


A little wrong. It does matter that the thing is always cool to the touch and it can run all day, even under heavy load. That is pretty new to my knowledge.


Perhaps x86 has been 'fast enough' for most people, but the M1 is as fast (or faster) using less power. Even if the M1 was 'only' as fast as other chips, its performance-per-watt would still be impressive.

Further, as someone who deals with HPC at $WORK, our (medical) researchers always want faster.


Similar to how taser training for law enforcement involves being tased themselves, I think Electron developers should have to use old computers for an extended period of time as their Electron development machines, just so they get an idea of what it feels like to be a user of their apps.


They should have old computers as their manual testing machines forever, not any finite period.


For a moment, I thought you were gonna suggest that Electron developers should be tased in order to make them associate using Electron with pain.


If they don't learn their lesson from using the old computers, then this can be a last resort.


Although it's a complete pain to standardize the results, you can have benchmarks as part of your CI (It's a bit easier if you can use some kind of normalised measurement like cache misses per loop iteration, but rdpmc generally isn't allowed - or even available from the OS).

Latency from the user perspective is a bit harder, although I guess for a webpage you could run it headless into a video recording then time how long to go from a blank screen to showing information?
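
For the CI half, a minimal sketch of what such a check can look like in Python (a plain pytest-style test; render_chat_history and the 50 ms budget are made up for illustration, and the standardization caveat above still applies):

    import statistics
    import time

    def render_chat_history(messages):
        # Hypothetical stand-in for whatever hot path you actually care about.
        return "\n".join(m.upper() for m in messages)

    def test_render_latency_budget():
        messages = ["hello world"] * 5_000
        samples = []
        for _ in range(20):
            start = time.perf_counter()
            render_chat_history(messages)
            samples.append(time.perf_counter() - start)
        # Taking the median dampens noise from shared CI runners a little, but
        # absolute wall-clock thresholds are still fragile, hence the pain of
        # standardizing results mentioned above.
        assert statistics.median(samples) < 0.05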


Sure but the fact that you need benchmarks and big performance optimizations in a chat app is a sign that your approach is very inefficient and slow. Especially considering the same feature set was available 30 years ago in computers that were less capable than my wrist watch.


I really like the idea behind Facebook's 2G Tuesdays, but I'm not sure if it helped achieve anything apart from PR, or if it's still going.


Ad networks are strongly incentivized to perform well on slow devices, since they are paid per impression. In contrast the most valuable clicks almost always occur on the most expensive devices.


Then Facebook goes ahead and builds Messenger for desktop on top of Electron.

If Facebook doesn't have the money for the engineering effort to build a native messenger app, I don't know who does.


> This is just going to encourage bloat and inefficiency, isn't it?

Same as it ever was:

> Wirth's law is an adage on computer performance which states that software is getting slower more rapidly than hardware is becoming faster.

> The adage is named after Niklaus Wirth, who discussed it in his 1995 article "A Plea for Lean Software".[1][2]

[…]

> Other common forms use the names of the leading hardware and software companies of the 1990s, Intel and Microsoft, or their CEOs, Andy Grove and Bill Gates, for example "What Intel giveth, Microsoft taketh away"[7] and Andy and Bill's law: "What Andy giveth, Bill taketh away".[8]

* https://en.wikipedia.org/wiki/Wirth%27s_law

* https://en.wikipedia.org/wiki/Andy_and_Bill%27s_law


> Visual Studio Code is now optimized for Apple Silicon… and it works and it's snappy. I hated that app on my last laptop, it was very slow and bloated

I was thinking the same thing as I read this bit. I’m still on an Intel Mac and recently switched from VSC to Panic’s Nova editor, highly polished and 100% native.

If bloated code runs fast on an M1, imagine how native code performs? I’m trying very hard to excise as much Electron garbage from my life as possible.


> If bloated code runs fast on an M1, imagine how native code performs?

As an example, Minecraft on the 16GB M1 Pro running under Rosetta 2 runs much better (near-constant 60fps with everything on default, no fans, imperceptible heat) than on the 32GB i9 Pro (peaks about 40fps with everything turned down, full fans, will burn you unless you disable Turbo Boost).

I'm looking forward to a native JVM and seeing if it gets even better.


Knock yourself out: https://www.azul.com/downloads/zulu-community/?package=jdk

(I assume you can sub in your own JDK with Minecraft? Never used it)


Found https://gist.github.com/tanmayb123/d55b16c493326945385e81545... which uses Zulu to run Minecraft - was a lot of faff to get running, no sound, decent perf, definitely more heat than normal MC running on R2 though.


I'm pretty sure it just calls Java. Whichever interpreter pops up it'll use.


The modern Minecraft launcher allows you to specify the Java binary to use.


It is very, very rare that you see this on HN: VSC is slow and bloated. You only ever see people claiming it is fast, or fast enough, or that there's very little difference compared to Sublime.

The bar for quality is quite low for a lot of people.


People who are using VSC are adding more value to this world than those who are complaining about VSC being slow


Xcode also runs well, whereas a few weeks ago it took 10 seconds to insert a new line.


One person's "bloat and inefficiency" is another person's "features at low cost".

We simply wouldn't have the services we have today at their low price if our hardware didn't offer them more breathing room.

As inefficiency tips further into noticeable degradation on people's machines, more emphasis will be given to efficiency.


If, in a year or two, everyone at a tech company is building and testing on Apple Silicon machines, it could very well lead to major performance blindspots.

While you can profile performance in any web inspector, I think a good product team should keep a base-spec Windows laptop on hand to see how it all comes together for users who may not have the kind of hardware they design and develop on.


Back in 1989 I ran an application called GeoWorks on my Commodore 64. It had a paint program and WYSIWYG word processor and both applications and documents ran in the C64's available RAM of... 38k?

Roughly 1/100000 of what's on a low end computer today. On a CPU that ran at 1MHz and took many clock cycles per instruction.

In other words, yes but that's nothing new.


Of course, the programs couldn't render as many pixels or colors at once, couldn't insert photographs, videos, or complex tables and charts, couldn't load complex fonts or otherwise handle nuanced typography, etc etc.

There's a heck of a lot of pointless bloat. But at least some of the extra power has gone to useful features!

Today feels different. Does Slack really have any significant features that couldn't be done on computers 15 years ago?


GEOS was written largely in assembly from everything I’ve read.

Apparently this and other things did not go over well with developers (apparently the apps needed to be done at least partly in assembly too). Wikipedia:

GeoWorks attempted to get third-party developers but was unable to get much support due to expense of the developer kit — which ran $1,000 just for the manuals — and the difficult programming environment, which required a second PC networked via serial port in order to run the debugger.

Even though PC/GEOS is referred to as an "operating system", it still requires DOS in order to load. GEOS and its applications were written in a mix of 8086 assembly (Espire) and C (GOC), both with non-standard language extensions to support the object-oriented design.[5][9]


It's worth thinking about what happens when you touch your phone's screen or reach for your trackpad/mouse and scroll a bit, at 60fps. That's a basic gesture that involves a gazillion C64s of everything.


I'm thinking about it.

Processing the touches into events should take 1 C64 of power or less.

Scrolling shouldn't be significantly more difficult. 1 C64 for this.

The screen is much higher resolution, so we need the power to render that, but that's offloaded to the GPU and the GPU doesn't even need to leave idle frequencies.

60fps, well what was the C64? Whatever, multiply the resources by 10.

We've now accounted for... 20 C64s of power.


> but that's offloaded to the GPU

It's a little odd to exclude the supercomputer your computer comes with to help with scrolling.


Thankfully, the GPU hasn't been afflicted with terrible inefficient slowness for 2D programs. The interesting questions lie in what the CPU is doing and wasting time on.

And, I mean, the C64 offloaded rasterization to a different chip too.


To scroll the screen on the C64 actually involved a non-trivial amount of CPU usage, because the hardware can only shift the screen by up to 7 pixels, so every 8 pixels one has to manage all of that because the VIC chip can't — this involves offset-copying all of the screen RAM, which can be done over previous frames, and the same for the colour RAM, except this must be done in realtime (because the colour RAM has a fixed location / cannot be double buffered). Doing all of this actually takes a significant proportion of the available CPU time per frame in C64 games. And that's just for the character-based graphics mode, not the bitmap graphics mode — which you simply have not got a chance in hell of scrolling on the C64, because the VIC offers no assistance here and the CPU is nowhere near fast enough to do it.

Don't even get me started on drawing more than 8 sprites!

(Ex C64 games programmer here)


I think you either severely overestimate the capabilities of a C64 or underestimate what it takes to put a modern user interface experience together.

The C64 had a resolution of 320x200 pixels with 16 colors. That would cover about 1.5 square inches on a modern phone (modulo the colors, another factor of 6 to 10). You could not dream of re-rendering the whole screen at any FPS number. Games had to resort to all kinds of tricks to create a smooth experience, and whatever was put on screen was pre-rendered content combined in clever ways. Calculations involving floating point, like a Mandelbrot set in said resolution, took hours to days per _frame_.


Increasing the DPI doesn't affect the CPU, only the GPU, and it's the CPU causing problems. So please ignore pixel count when explaining why a "modern user interface" requires so much more computation that it can't react to input in 15 milliseconds.

And the C64 does re-render every frame, because there's not enough memory for a frame buffer!

I'm not worried about games here, and word processors don't need fractals.


The sad part is that most developers will be using these faster machines, and to them their software will seem super snappy. But the poor folks with non-state-of-the-art machines will have a much slower experience. It creates a dynamic where other users are forced to upgrade just to use basic software.

In the past, I developed on old hardware (or at least tested on old hardware). If the software was performant on my old machine, it was bound to be fine for most others.


It's already the case with JavaScript bloat, which destroys performance and network efficiency of the internet in many countries.


Yes, it's a variant of the Jevons paradox: https://en.wikipedia.org/wiki/Jevons_paradox


Most likely yes, but if x86 keeps lagging behind then hopefully not, since the developers of these apps will still need to make sure they work OK on x86 as well.


I've enjoyed high level programming so far (think Java, Swift). But for the sake of argument, I'm gonna say this: high level languages (& their runtime environments) take control over the code that the programmer writes. Speaking of "computing environments", I'm uneasy with the fact that I don't really know what my Swift program is doing in the background, and more so, the fact that the vendor (Oracle, Google, Apple etc) tries so hard to prevent me from finding out.


> I'm uneasy with the fact that I don't really know what my Swift program is doing in the background, and more so, the fact that the vendor (Oracle, Google, Apple etc) tries so hard to prevent me from finding out.

Eh? Apple's fairly clear about what Swift is doing in the background (which isn't much, really; boring ol' reference counting).

This might apply more to Java, mind you. There are no secrets, but most people probably aren't going to investigate exactly what the JIT or GC are doing.


Swift really isn’t doing anything “in the background”.

There is no virtual machine, and nothing is happening other than your code and the libraries you use.

It’s only high level from the point of view of the type system.


> and the fact that the vendor (Oracle, Google, Apple etc) tries so hard to prevent me from finding out.

Are there any specific instances you’re thinking of here? Java, Go, and even Swift all are open source.


> I'm uneasy with the fact that I don't really know what my Swift program is doing in the background

I don’t really understand your complaint here. The Swift runtime library is pretty limited - basically just allocation and reflection. The compiler spits out machine code that you can look at. Where is the “background” you are thinking of?


but that's not new, CPUs/etc have been getting faster since forever and games are already super-bloated


Yep

I remember being shocked when I first read a game requiring 16MB of RAM (yes, 16MB, get off my lawn).


You have to take into account the capabilities, and the willingness to work on optimization, of the average developer over time too.


Not to mention organizations' willingness to entertain those engineering initiatives, never mind prioritizing them.


Many producer-consumer systems, really: roads and traffic, predators and prey. It's a very common emergent behavior.


Having slower noisier computers didn’t stop the bloat.


And when confronted with "Faster CPUs will cause slower software!", the only actionable response is what? Purposely build slower CPUs? That consumers won't want to buy?

If I ever meet the king of software I could ask him to make faster software, but I haven't found him in the phonebook yet.


Not really.

They are cross-platform apps which still need to run well on the 95% of other computers which don't have the M1 chip.


> Note: this has nothing to do with specifically Apple Silicon but just any [dramatic] computing improvements in general.


But this is a dramatic computing improvement limited to 5% of the computing market.

This isn't like with web browsers where it impacted everyone.


>> This is just going to encourage bloat and inefficiency, isn't it? Note: this has nothing to do with specifically Apple Silicon but just any computing improvements in general.

If it does, Wintels are not going to be pleasant environments for users. Wintel laptops would need to go up to 32 GB/64 GB of RAM.


> Given the little chance Intel or anyone else is going to keep up with this new level of performance, I believe this new laptop could be with me perhaps even longer, unless everyone else in the world buys it and developers start to create 10x bloated apps compared to today.

That's naive and wrong. The M1 seems to be an excellent processor with good performance. It's not worlds better than a Ryzen 5 5600X or its mobile variant.

> I decided I don't need more RAM than 8 GB, because nobody from the first reviewers managed to give the RAM a hard time. It seemed to me it's a different beast altogether and it doesn't make sense to just compare numbers and say that more is better.

The processor is not a magical thing that changes how much memory is needed for workloads. If the first reviewers did not look at your workload, it means nothing that they did not complain.

Your memory requirements completely depend on your workload. It's quite possible you do not use more than 8GB, but I have had IDEs and GIMP use more than that. And it's of course (it's Apple) not upgradeable; 8GB will not be enough if your usage changes, and then that laptop will just be expensive garbage for you.


Most of the reviewers don't seem to understand the concept of swapping. The SSD in the new Macs is fast enough that you don't (or hardly) notice when the OS swaps, and so they think 8 GB is enough. While this may be fine for regular users, I wouldn't recommend it for professionals.


That's a great point. And maybe we should grant that if they really do not notice the swapping, it is an argument for less RAM working better than one would usually expect for their usage.

And right, it's professional usage where that would fall short. I doubt it would help as much when the image file I open fills the RAM completely or when parts of the IDE are swapped away.


> It's not worlds better than a Ryzen 5 5600X or its mobile variant

Maybe if you're solely focused on performance.

But the other reason why M1 is so ground breaking is because it offers this performance with ~2x the battery life and minimal cooling compared to its competitors.


I think everyone is aware of the battery life / minimal heat dissipation of the Apple Silicon processor. The grandparent comment was specifically quoting and refuting this line:

> Given the little chance Intel or anyone else is going to keep up with this new level of performance


That line is strictly true in a thermally constrained machine such as a laptop.


Hi, I'm the author. English is my second language, and by performance I meant the overall performance within all the constraints, that is, temperature, noise, and battery included. If I were to spell everything out explicitly, the article would be very exact, but also very long and hard to read. The exactness wasn't so important in this sentence, as I think it's clear I'm aware of how it is, given that I earlier mentioned I enjoyed articles where this is considered and explained.


I think I was agreeing with you.


The M1 is on a 5nm process from TSMC. I'm sure if AMD's Zen 3 (e.g. the Ryzen 5600X) were also on that process instead of 7nm, the perf/power ratio would be similar.


I think the reviewer is mostly right though. The M1 uses unified memory where the CPU, GPU, the neural processor, and other components can access the same data without copying it.

"All of the technologies in the SoC can access the same data without copying it between multiple pools of memory. This dramatically improves performance and power efficiency. Video apps are snappier. Games are richer and more detailed. Image processing is lightning fast. And your entire system is more responsive."

Of course if you need 12 GB of data in memory then sharing 8 GB won't be enough, but for most current uses of 16 GB it's possible that 8 GB could feel as responsive, between the shared memory and the faster access speeds (due to faster components and smaller distances).



Yeah, but there hasn't been this level of vertical integration before the M1 laptops. The same company is building the silicon, the laptop, and the OS, so they can leverage whatever they feel like and don't have to account for processors with differing features.


Sure, but Intel doesn't support it, right? So it wasn't in use for MacBooks, as far as I understand. I'm not saying Apple invented the concept, but they implemented it in their laptops for the first time, so you logically wouldn't need as much RAM for many use cases where you can now share memory that you couldn't share before.


No. Just no.


Care to elaborate...?


Hi, I'm the author. I have limited budget and made a trade-off. I just decided it's less probable I'll run out of RAM given its performance and my usage of the computer, than that I'll run out of disk space. That's all.

It's easy to type a comment about bad decisions and garbage, but the world doesn't work in terms of absolute truths; the decisions we make are made under certain circumstances, and the success of the results is subjective. I risked something, I know it, and so far I'm happy with my choice.


Hey, please don't be hung up on the garbage word - after all, it's just a description of what the laptop will be at some point in the future. An earlier future than necessary. My beef is not with you, but I am annoyed with Apple selling devices that are not upgradeable. If they would just be reasonable and let users upgrade their RAM, the laptop would continue to be useful when (not if) more than the current minimum of acceptable system memory is needed.

Given how overpriced the bigger memory configurations are I understand why you took that risk.

The earlier parts of my comment just reflect my impression that you overestimate the performance of that laptop compared to available alternatives. 10x faster is just not accurate. Remember the context: A huge group of Apple fanboys that buy everything the company produces without being aware of the alternatives and drawbacks. That's why it feels important to point out misconceptions like that, whether they are really there or just the interpretation.


I’ve noticed on my M1 air that even when it’s swapping the performance impact isn’t that noticeable. I wouldn’t be surprised if 8gb falls short once we can run VMs but I’ve found it to be adequate for everything else.

For $1000 the base model Air is a hell of a machine.


> The processor is not a magical thing that changes how much memory is needed for workloads. If the first reviewers did not look at your workload, it means nothing that they did not complain.

People sometimes forget that macOS has had automatic RAM compression since Mavericks (10.9) [1]. Think about how fast this must be using Apple's SoC combined with the extremely fast SSD at 2190 MB/s writes and 2675 MB/s reads. With compression, the effective speed is at least double.

It would take a whole lot for there to be any appreciable slowdown running lots of apps simultaneously with 8 GB of RAM.

[1] https://www.lifewire.com/understanding-compressed-memory-os-...
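
As a toy illustration of why compression stretches memory so far (zlib here is just a stand-in for whatever compressor macOS actually uses, and the page contents below are made up to be compressible):

    import zlib

    # Typical application memory (zeroed pages, repeated strings, pointer-heavy
    # heaps) is fairly redundant, which is what makes compressed memory pay off.
    page = (b"spotify-electron-chromium-" * 200)[:4096]  # one fake 4 KiB page
    compressed = zlib.compress(page)
    # Real pages compress far less than this toy data, but even the ~2:1 ratio
    # assumed above goes a long way.
    print(f"{len(page)} -> {len(compressed)} bytes "
          f"({len(page) / len(compressed):.1f}x smaller)")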


When I first got my new 8gb Air there was some indexing process running for a while that was using 6gb. The machine was a bit slower but still quite responsive. I think there’s some truth to the claim that you need to think about ram a little bit differently with this new hardware.


> When I first got my new 8gb Air there was some indexing process running for a while that was using 6gb.

That's Spotlight indexing the email, PDFs, Word files, etc. on your hard drive. Once it's done, your Mac will be much more responsive: https://www.macworld.com/article/3388134/spotlight-dont-take...


RAM compression implemented in hardware with proprietary extensions to the ARM ISA for their algorithm.


>RAM compression implemented in hardware

That's what I'm thinking.


RAM is probably one of the most underrated things in the world at the moment. Anyone who is remotely familiar with how an OS works would definitely get as much RAM as possible. Virtually, it's like the cash of the virtual world (pardon the pun): the more you have, the better off you are.

Having said that, according to the article "Intel's Disruption is Now Complete", discussed on HN before, Intel's lowest-end Celeron CPU line was introduced just to delay the inevitable [1][2]. But what the article did not mention is that Intel purposely crippled its low-end CPUs with a minimal supported amount of RAM. Until very recently the most RAM you could get with Celeron-based processors was only a few GB. However, perhaps due to competition from AMD and other ARM-based manufacturers, Intel has done the unthinkable, and with the latest Gold processor series (11th gen) you can have up to 128GB of RAM [3][4]. FYI, the Gold processor series was conveniently placed in between the Core and Celeron lines back in 2016 and is essentially a beefed-up Celeron. Not only that, it now supports Optane, Intel's new proprietary non-volatile (persistent) memory technology that can provide its lower-end CPUs with potentially more than a TB of working memory [5]. Imagine using a tablet/PC/home server in the year 2021 with the equivalent of humongous terabytes of RAM!

[1]https://jamesallworth.medium.com/intels-disruption-is-now-co...

[2]https://news.ycombinator.com/item?id=25092721

[3]https://newsroom.intel.com/news/iot-processors-industrial-ed...

[4]https://ark.intel.com/content/www/us/en/ark/products/199288/...

[5]https://www.intel.com/content/www/us/en/architecture-and-tec...


RAM is pretty power hungry. Like everything there’s a cost and “moar RAM” is not the solution to all problems when you’re thermally constrained and standby battery lifetime is important.


Really wishing Apple would release a 32 GB model. After virtualization lands, I would jump onboard in a heartbeat. I have 16 GB right now, but I have to delegate important experiments to a server which is annoying.


It is expected that next year they will release their M2 chip to support the 14/16" MacBook Pro and iMac.

And then likely 2022 we see the M3 chip for the Mac Pro, Mac mini Pro and iMac Pro.


I would expect them to do so, but there is another solution: you can use a remote desktop on a beefy server and run your programs there and then use it from a nice portable machine.

I think Mac has their own, but I use NoMachine and it basically just works and can work with different client and server OSes. On the client side it is just video decoding so it is not very demanding - and it uses less bandwidth than I would think.


Virtualization is already here; Docker and QEMU have been in testing for a little while.


Apple is ten times better at planning future generations of hardware than all of its counterparts; I believe they are already planning much bigger M2 and M3 chips. I truly think others will have a real hard time catching up with Apple on the performance front anytime soon, or ever.


The pro should be out shortly.


If most machines have only 8GB of RAM, it compels app developers to pay more attention to memory usage. This is a good thing. That said, there are plenty of other usages that just need a lot of RAM, and these folks are holding off on upgrading.


Apple is already working on adding many cores (~32) and tons of RAM (256GB) within two or three generations, so if nobody else does, Apple itself will force the upgrade.


> The processor is not a magical thing that changes how much memory is needed for workloads.

Yeah, that line made me take the entire review less seriously.


I think the reviewer is mostly right though. The M1 uses unified memory where the CPU, GPU, the neural processor, and other components can access the same data without copying it.

"All of the technologies in the SoC can access the same data without copying it between multiple pools of memory. This dramatically improves performance and power efficiency. Video apps are snappier. Games are richer and more detailed. Image processing is lightning fast. And your entire system is more responsive."


The unified memory actually means more RAM usage as textures that would only exist in the GPU RAM now have to live in the unified RAM. Unified memory is a great performance improvement but I don't think it translates into lower RAM usage.


I agree, not in all cases.

What if you are barely using that GPU, though? Wouldn't most of the GPU RAM just be wasted while you did CPU-intensive tasks?

Likewise, in your example you might be using the GPU more, so less RAM could now be allocated to the CPU, improving GPU performance by more efficiently using the available RAM.


That doesn't mean you need less RAM. That reviewer doesn't know what he's talking about.


If you assert that without discussing any of the points that I brought up, then it's not very useful. Why wouldn't sharing memory for certain large objects, instead of copying them, make memory usage more efficient for many tasks?


> CPU, GPU, the neural processor, and other components

Outside of the CPU, you're really not using the other processors for "many tasks."


Well the storage is also integrated and most tasks use at least CPU + storage so those will benefit too. Not to mention if you don't use your GPU much then the memory can now be dynamically allocated to the CPU instead of remaining with the GPU. Intel has integrated graphics cards that share system RAM but that RAM is slower than traditional graphics RAM so they have terrible performance.


Just not how it works, but I don't really feel like engaging in an argument. Technical readers of this chain will know who is right.


Then why comment at all? There was no argument; I was literally trying to reason about how it could or could not be better, and you chose to be rude instead of providing a single technical detail.


Hi, I'm the author. I've read about how they constructed the RAM and how it's different from the usual architecture. That and YouTube reviews convinced me that if I should save money on something, I could risk it with RAM. I observed how my potato MacBook works with its 8 GB RAM and I could see that most of the capacity is idle while the computer is slow. That made me think that RAM just isn't the bottleneck I'm fighting. My budget is limited, and based on what I knew I decided the RAM is less of a risk than a small disk.

In the end, honestly, I don't care that much if it's the processor, RAM, or a pixie dust. The machine's snappy whatever I do on it and that's all that matters. I'm happy with my choice so far, but only time will tell, of course. If you do RAM-intensive stuff on your computer and have the money, buy RAM, good for you.


> The processor is not a magical thing that changes how much memory is needed for workloads.

But this isn't completely true. ARM compiles down to significantly smaller binaries than x86, because it doesn't have to support legacy extensions.[1]

[1] http://web.eece.maine.edu/~vweaver/papers/iccd09/ll_document...


Does it? Most if not all of those graphs seem to show x86 as even with or denser than ARM?

Edit: you may be reading IA64 by accident


> I decided I don't need more RAM than 8 GB, because nobody from the first reviewers managed to give the RAM a hard time. It seemed to me it's a different beast altogether and it doesn't make sense to just compare numbers and say that more is better. Apple allows you to buy a computer with more than 8 GB RAM, but I think it's just a trap to satisfy and charge those who don't believe the reviews and just want more RAM whatever it takes.

I think the reason I would always buy more RAM is not because it's needed today, but because it may be needed in 2 or 3 years, as apps inevitably eat into the performance gains with bloat.


I find that crazy. I bought a computer with 16gb of ram in 2011 and by 2015 I was hitting OOM left and right.

I bought a Mac mini with 32 GB in 2018 and I managed to OOM it a few times, and oh boy was it ugly. I already regret not taking the 64 GB option at the time.


I upgraded to my M1 MacBook Air from a 2015 MacBook Air with 8GB of memory, an i5 even. I almost never run into memory issues with it - the full complement of browser tabs, Microsoft Office, Pixelmator, Photos, a VPN client, a bit of JupyterLab, etc...

I do most of my data engineering in the cloud, or on a 32 GB Linux system - so I don't need a lot of memory for data sets - just something that will run the applications. I would never recommend that the average user buy more than 8GB of memory with a Mac Laptop.

Meanwhile, my 16 GB Linux laptop (Ubuntu 20.04, Dell XPS 15 7590) routinely comes to a crashing halt if I try to do something crazy like open up 40 browser tabs and then start Zoom. It's horrible. I've spent hours trying to see what's going on with top, to no avail. If only it were a graceful degradation - but open a single browser tab on the wrong site once you are close to the edge and it's game over. It's usually faster to just restart the system than wait for whatever swapping is happening to conclude.

I've rarely had this happen to me on the MacBook Air.

Different operating systems/hardware really seem to make a difference using pretty much the same workload, speaking from personal experience of someone who splits their time 50/50 on Linux and macOS.


Maybe zram can help you on Ubuntu: https://github.com/ecdye/zram-config

It decreases swapping to disk by using a compressed ramdisk for logs, unused stuff etc. The zram-config from universe may be broken (https://github.com/StuartIanNaylor/zram-swap-config)


Which browser are you using? I've got no issue with around 50 tabs in Firefox in a similar laptop. But maybe I don't visit the same memory heavy websites.

It's usually gcc that eats up all my memory, turns out with template metaprogramming a single gcc instance can eat 3GB of RAM, and I usually build with 6 parallel instances. Thankfully I've got plentiful SSD-backed swap so I don't feel the system becoming too unresponsive.
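
A rough sketch of the back-of-the-envelope that follows from that, in Python (Linux-only sysconf names; the 3 GB-per-job figure is just the observation above, not a general rule):

    import os

    # Cap parallel compile jobs by available RAM, assuming each heavy C++
    # translation unit can peak around 3 GB (as seen with template-heavy code).
    BYTES_PER_JOB = 3 * 1024**3

    total_ram = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES")
    jobs = max(1, min(os.cpu_count() or 1, total_ram // BYTES_PER_JOB))
    print(f"make -j{jobs}")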


You can tune how Linux memory allocation works. I've made some changes on a couple of servers (couldn't find which settings in the 60 seconds I googled), and it might be worth investigating on workstations too.

(Not saying it is ergonomic or better than the mac.)
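A sketch of the kind of knobs usually meant here, assuming the vm.* sysctls; the values and file name are illustrative, not recommendations:

    # prefer keeping anonymous (application) pages in RAM over swapping them out
    sudo sysctl -w vm.swappiness=10
    # reclaim the dentry/inode caches less aggressively
    sudo sysctl -w vm.vfs_cache_pressure=50
    # persist across reboots
    echo 'vm.swappiness=10' | sudo tee /etc/sysctl.d/99-memory.conf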


I risked it on an 8 GB mini and quite frankly I can't bend it even if I try.

Only thing it’ll probably die on is running VMs which I’m not going to do any more. They can live in DigitalOcean.

One liner review: this is the only computer I’ve ever had which can keep up with me.


Depends on what you're doing.

People have been saying this for years and my personal 2011 17", 2016 13" and 2018 Mini have never had more than 8GB. Never noticed an issue.

Unless you know you're going to be running memory intensive workloads, I see no need.


SSD performance has improved such that if you do need to swap it's not a huge impact to performance.

It was a very different situation even only a few years ago.


Swapping would not be good for the longevity of the SSD.


Most of the 8GB users aren't likely to fully use all of the SSD.

So with wear levelling you should see the SSD last well beyond the usable lifespan of the device.


My wife has a 2013 MBA with 4 GB of RAM. She's never hit the limit doing Office 365, Firefox, and Citrix. I have a 16 GB dev machine at work and I've only OOMed a few times. I can't actually tell you how little RAM my phone has, but it's certainly less than 4 GB.


> a mac mini with 32 GB in 2018 and I managed to OOM it a few times and oh boy was it ugly

How?


How did you manage hitting OOM on 16GB, let alone 32? I'm curious. I had 8 till 2018 and then 16. Not even once did I run out of memory and I was running java apps in docker.


Very often: compiling large C++ software with LTO while having 2 or 3 VMs open (not just Docker; the whole GUI stack, since it's for debugging GUI software).


Agreed, I've been at 16 GB+ for a decade now. He seems to have under-specced on RAM and storage.


I also opted for 8GB and a 250GB SSD. I've found that my M1 Mac Mini has completely replaced my Windows desktop (an old i7, 16GB RAM, a TB of SSD/HDD, Nvidia 1070).

I mostly use my Mac for personal projects in TypeScript, Swift, and Java, and for browsing the web. I also play video games occasionally, and I'm surprised at how good the performance is.

It's incredibly fast, and at $700 it's a steal. Normally with Apple you pay a pretty hefty premium if you want performance, but the M1 makes it much more accessible.


I have to say, I'm rather concerned for Intel and AMD.

The magic of the M1 is not just the hardware, but also the software of MacOS.

Putting everything together on the M1 chip, the CPU / GPU / Neural Engine, and allowing all those to share memory, seem to be from where much of the magic comes. The further optimization of MacOS to leverage all that helps as well.

AMD already has a lot of experience with high-speed interconnects. They already have a lot of experience with APUs. I wonder if it'd be possible for Microsoft and AMD to enter some sort of partnership whereby they optimize Windows in the same way macOS has been optimized.

I really fucking hate Apple. I hate their walled garden approach. I hate their "trendiness". So I have a small bit of self-loathing that I own the base model new Mac Mini, but I can't allow my seething hatred for the company to cloud my judgment when it comes to using the fastest and most productive platform for my work, and, I'm sorry to say, this little $699 machine outperforms my Ryzen 9 3900X w/ 32 GB of RAM and an RX 5700XT for most tasks. I still game on this machine, but most productivity work has been relegated to the Mac Mini now.

Learning MacOS has been frustrating, but also kinda fun.


I used to hate Apple too, but I've found their products to be better than the alternatives. I was a die-hard Windows/Android user for something like 6 years, and I slowly moved over to Apple once I got my first MacBook. WSL is great for development, but it doesn't come anywhere close to macOS in my opinion.

Anyway, the walled garden sucks, but it's worth the trade off to me.


A friend of mine said: "Hassen macht hässlich". In English this would be: "Hating makes you ugly".


A friend of mine said: "People who try to control everything suck dick." I don't know what that translates into in German.


On Linux anyway once you get over ~8GB of ram doing normal things in tmpfs starts to make a lot of sense. My 2012 macbook pro runs entirely from tmpfs. I regularly compile Linux and Firefox in tmpfs on my desktop with 32GB of ram.
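A minimal sketch of that workflow (mount point, size, and build target are illustrative):

    # mount a RAM-backed filesystem and keep the build artifacts there
    sudo mkdir -p /mnt/ramdisk
    sudo mount -t tmpfs -o size=16G,mode=1777 tmpfs /mnt/ramdisk
    # e.g. an out-of-tree kernel build, run from inside the Linux source tree
    mkdir -p /mnt/ramdisk/linux-build
    make O=/mnt/ramdisk/linux-build defconfig
    make O=/mnt/ramdisk/linux-build -j"$(nproc)"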

Memory is one of those things where increasing the size tends to result in a qualitatively different experience rather than a quantitatively different one.


Same story for storage. Much faster storage is noticed immediately.


Has anyone done app development (either XCode for iOS or Android Studio for Android)? How limiting is the 16gb RAM? Can you smoothly run both XCode/AndroidStudio AND 1+ virtual machines for testing? Curious if it is worth waiting for the upcoming MBPs with 32/64GB RAM?


I use Xcode on some fairly large projects and "upgraded" from a maxed out 2018 MBP (i9, 32GB) to an M1 Air (16GB). It's faster and more responsive in every way. Especially under load

The most noticeable things for Xcode are:

- Iterating is faster, the build-run loop is much quicker so it feels lighter when you make code changes

- Indexing and autocomplete is very quick. Noticeably better than the MBP

I also use Android Studio and it's definitely not as quick or smooth as Xcode. But it never was on my old Intel machine either. The UI seems to lag under load and navigating a large code-base just feels a bit more clunky due to lag on character input and opening files

Android Studio feels about the same as it did on my i9 as the M1


Thank you so much! This is exactly what I was looking for!

On your last note:

>> Android Studio feels about the same as it did on my i9 as the M1

It seems even this would be progress: you're getting roughly the same Android Studio performance on a 32 GB RAM i9 (expensive) and a 16 GB RAM M1 (less expensive).

This seems like a win all around.


Not quite what you're looking for, but I've noticed IntelliJ resident memory staying suspiciously low. I have the vague suspicion that they're taking advantage of the faster processor to be way, way more aggressive with memory compression, but I can't easily prove it :)


I've done dev in Xcode on the original 12" fan-less MacBook with 8 GB and it was fine, simulators and everything.


It was also fine for me at the time. However, I've found it to be increasingly unbearable with each subsequent OS update and each Xcode update. At this point, Xcode is not usable on my 2015 8 GB RAM MacBook -- it could work, but productivity would be so low it isn't worth doing. I was planning to make a mega-upgrade up to 64 GB, but the chatter of "16GB w/ M1 is the new 32GB" has me intrigued.


And the M1 MacBooks are also much better aT tHe TiMe.

I’m honestly wary of devs who think 16 GB would be too “limiting” for app development. They’re probably also the ones responsible for churning out bloated crap that stinks like ass even in 32 GB.


I'm pretty confused by this take; not all apps are tip calculators or note-taking apps. There are entire classes of enterprise apps with sophisticated functionality. I used mine with on-phone AR, which involves computer vision on the edge. Further, my app (and many others) has multiple user roles (in my case: Patient, Hospital Staff, Admin).

When I'm using 32gb RAM for an app, i'm not actually running the entire app in 32GB RAM. The breakdown is usually:

- 8gb: XCode/AndroidStudio, chrome, slack

- 4-8gb: Virtualized App VM in profile 1 (Hospital Staff)

- 4-8gb: Virtualized App VM in profile 2 (Patient)

- 4-8gb: Virtualized App VM in profile 3 (Admin)

Interactions in one VM affect the other. You typically debug and test UX across all at the same time. Hopefully much of it is already tested with REST API layer testing, but not all, and definitely not QA/UAT. Is this not how it is done? How do other people do it?

I have a friend doing something similar for a ride hailing app, he has a customer app, and a driver app. He constantly runs both side by side while developing. I have another friend doing a food delivery app -- again, he has three profile VMs running side by side: customer, restaurant, admin.

So, yes, it seems M1 MBPs are not limiting, correct, at this time. They probably will be in 2025, after a dozen OS and Xcode updates, and I imagine I'll need another laptop then. If none of us ever needed a new machine, we'd still be on a 2008 iMac with 4GB RAM and we wouldn't be talking about the M1 at all.


> Apple allows you to buy a computer with more than 8 GB RAM, but I think it's just a trap how to satisfy and charge those who don't believe the reviews and just want more RAM whatever it takes.

And 640kb ought to be enough for anybody. Anyone trying to sell you more RAM is just setting up a trap in order to charge those who don't believe Bill Gates and just want more RAM whatever it takes.


> "Yet despite Gates's convincing denial, the quote is unlikely to die," Fallows wrote. "It's too convenient an expression of the computer industry's sense that no one can be sure what will happen next."

https://www.computerworld.com/article/2534312/the--640k--quo...


For under $1000 (after EPP) I'll just trade it in if it comes to that.


This mentality is exactly why Apple charges such a high tax on RAM upgrades.

The vast majority of people upgrading their RAM are doing so simply because they see a bigger number and not because they need it.


I want a no commentary, no sound, video of the M1 Mac usage. Like show me how it runs YouTube on Chrome for example. So tired of these reviews that don't show anything other than the opinion of some fellow that is either Apple-biased or doesn't cover my field of usage at all.


What laptop are you on that struggles to play Youtube on Chrome?


My 2016 HP Elitebook G3 does.

model name : Intel(R) Core(TM) i7-6600U CPU @ 2.60GHz

It doesn't have hardware acceleration for VP9, so YouTube specifically has a lot of issues. I've been able to rectify it by using h264ify to force YouTube to give me h264, but a normie isn't going to be able to figure that out.


I have the same processor in my ThinkPad T470s and it's terrible for anything video.

Even a browser (Chrome and Firefox) tab in the background with a few CSS animations causes it to be noticeably unresponsive in other applications.

It supports 4K output but my tests (backend TypeScript) are measurably slower when the display is at 4K vs 1440p.

What OS are you using? I run Linux so I'm wondering if it's that, but I've tried every different DE, Xorg vs Wayland, and different drivers, and they are all just as bad.


I have a 6100U and it seems fine...


Nothing struggles. But the Intel 2020 Air is noticeably slower than the refurbished 2012 Linux Mint machine (and 2015 Windows machine) that I own.

Disclaimer: I bought the Mac for building apps only.


Interesting that the blame here seems to sit with Apple. Aren’t we asking why Google, who owns the browser, can’t make YouTube, a video player, that they own, work in that browser on 2020 hardware?

YouTube is a turd. I do everything I can to avoid using it. How they’ve screwed up a video player that badly boggles the mind.


Youtube is basically optimizing for their own costs over that of end users by using VP9, which uses less bandwidth but often is software-decoded. So the real answer is basically that they can, but it would cost them more.

(Of course, the YouTube frontend is also fairly bloated, which doesn't help. There isn't much of an excuse for that one).


Oh, I'm not blaming Apple for anything here. I actually love my MacBook Air for the things it's really good at. I'm blaming the so-called YT influencers/content creators who lie to their audience.


The M1 macs are the fastest web browsing machines you can buy, by a huge margin. If that's your use case, there's no question what machine to get. The closest you can come with mobile x86 is to get the top-of-the-line 11th Gen Core i7, and the M1 beats that thing by almost 100% in browser benchmarks.


Yep, or alternatively an iPad Air with a magic keyboard.

(Ducks)


Not even wrong. There's hardly a difference between the latest iPad and the latest MacBook.


This might work for you, if you turn the sound off: https://m.youtube.com/watch?v=UxSI45eeAts


This kind of helps illustrate your point. It’s from the linked review titled “forget the performance the battery life will blow you away”

https://techcrunch.com/wp-content/uploads/2020/11/m1-dock.gi...


Not everything is going to cater to your needs. I was curious how homebrew works so this was interesting to me. Looks like I'm going to hold off for a bit.

The title of the article/link could be a bit more descriptive though.


Switch the sound off and try this for how performant it is -

https://www.youtube.com/watch?v=vKRDlkyILNY

Maybe not a real-world use case, but in day-to-day use, it really does fly. Things I've noticed that are real-world and noticeably quicker than my Intel MBP day to day: decompressing large archives (4 GB+ ZIP files, for instance), compressing folders with thousands of small files, opening very large XD and Photoshop files, etc. My MBP used to sound like a jet engine when watching YouTube while doing multiple other things. I have yet to hear the fan spin up on this once.


What sort of usage are you looking at? I have an M1 Mac mini and an MBA (Intel i7) that I can do some tests on. Anecdata-wise, my M1 runs noticeably faster than my MBA, minus software incompatibility issues.


If you record your screen for 10 minutes while doing common stuff like mail, web browsing etc, you've already beaten 99% of the tech review YT channels out there in terms of competence on reviewing something.


I wish this were true :/ But it was breaking people's webpages (including mine), where the page wouldn't load, and the menu you would normally see when you click the icon in the bar was white.


Is it the assumed bias or the lack of "coverage" that's most annoying? I see coverage but recognize the bias so I'm very genuinely curious what it is you're hoping for as far as reportage on bleeding edge Apple chips.


I've got a 16 GB M1 MacBook Pro I can show you, maybe? Plugged into a 27" 1440p monitor in clamshell mode, demolishing even my Hackintosh at basic tasks.


This is my first time hearing Apple refurbished meaning old store display model. I always thought refurbished machines were products that other people bought new and returned, and then Apple restored them to like-new condition. There seem to be a lot more refurbs available than I would imagine there are old display models. Now I'm curious, though, does anyone know more about how (Apple official) refurbs are sourced?


He wrote "second hand, refurbished" so it's not entirely clear that it was Apple refurbished.


I made the assumption that he meant refurbished by a third-party, not Apple, because it's pretty common for those sorts of secondhand machines to be refurbished with new screens. Figure they put in the cheapest screen they could source, and no wonder it's dying in 5 years. I doubt any OE screen from Apple would have the same problem he describes.


Hi, I'm the author. To be honest, I'm not sure how the refurbished thing works. That's why the whole paragraph is full of "I think" and "perhaps" and "my theory is". It's all just speculation. The shop where I bought it had it marked as refurbished, that much I'm sure about, but I don't know the exact history of the machine. I've heard rumors that those are computers from Apple Stores etc., hence the speculation about the worn-out display, but I've never really investigated it further. It doesn't matter much now; whatever the reason, the display is dead.


I have bought Apple refurbished machines. I always understood them to be more-or-less unused returns. IIRC Apple cannot legally advertise returned machines as "new", hence the "refurbished" designation.

Moreover Apple has an essentially unlimited supply of machines. I can't imagine it would be worth it to actually take in machines and repair them (not least because it would undermine confidence in the refurb store if it was a gamble about whether you'd get a basically-new machine or beat-up machine.)


It is also my understanding that Apple refurbs are returned devices.


I usually try to avoid "first gen" Apple products. There usually seems to be a big jump in quality with the next iteration. Did that factor into your decision? Do you think it is worth waiting for their 2nd try?


I ordered a 16GB mini for the kids; it won't make it here by Christmas... We just need it for virtual schooling.

Anyhow, I expect that ARMv9 will be the hotness and this first generation will lose support a little more quickly than next year's models. I guess it depends on ARMv9 though.


This is old physical hardware, and the CPU itself has been around in some form for a while (I don't think they've done a big microarchitectural change since the A12 or so), so it's probably not as risky as usual.


Hi, I'm the author. I didn't have much choice. Anything older would be worse than the M1, and anything newer would be more expensive and I'd have to wait several more months with a broken display to get it. I was really tired of waiting; working on the potato with the ghosting display was like torture, and I was counting the days until the M1 was delivered.


The next gen will almost certainly be better (faster, more efficient, etc.), but that's true of any computing product you buy.

It's impossible to say if the next gen will be a giant leap where you'll feel bad for "wasting" money on the previous gen, or if it'll be just a small upgrade.


This generation has no changes to the form factor to speak of. The 2nd gen might be where the hardware outside of the processor package changes; it could be the first gen of new form factors for the Air and Pro MacBooks, with its own problems perhaps.


What I find a bit wild is that the author was using a laptop newer than my late 2013 MacBook Pro, and said everything was sluggish. My maxed-out 2013 Pro is still super fast, but it seems like all the USB-C MacBooks that came afterwards sacrificed performance for... what, thinness?


I have learned to stop trusting people who say their old computer is “super fast”. No, your computer from 2013 isn’t super fast. You’ve gotten used to the slowness.


I think there is something to be said about snappiness during basic tasks like web browsing. I had a 2012 macbook pro that had an ssd and memory upgrades, and functionally tooling around Mojave felt no different than a new computer; just as snappy and performant. You don’t always hit a point of strain with your device just doing basic tasks. Now if that 2012 macbook of mine connected to a zoom call on the other hand, then the computer would overheat to the point where the battery would swell and disable the keyboard until it cooled down.


What?

I daily a 2015 MBP. I own other newer hardware as well (dev kit; Linux box; wife has a modern MacBook I’ve used extensively). It’s not slow.

I just don’t run any Chrome junk at all. I still hit 8 hours of battery life and never experience slowdowns - even with Xcode running 24/7.

If you avoid modern bloat, we’ve had fast computers for quite some time. I’m sure the M1 is still another league, and I’ll likely buy one just to come forward a few years, but otherwise any Mac from the past 5-7 years is still very, very usable IME.

Short of that keyboard issue, that is.


Nah.

I mean, part of it is that I no longer work at a big company that requires I transpile 100k LOC every time I want to see my work on screen while attending to 100 Slack channels at the same time, but I'm typically running GoLand, VSCode, Safari (what I use personally), and Firefox (better for testing my web code) at the same time and it's snappy. Maybe 2-second TypeScript or Go compile times, no noticeable slowdowns switching apps, etc.

My laptop: MacBook Pro (Retina, 15-inch, Late 2013) 2.3 GHz Quad-Core Intel Core i7 16 GB 1600 MHz DDR3

From what I understand, USB-C MacBooks that came after this MagSafe generation had to throttle the CPU earlier to deal with thermal issues, so despite better specs they couldn't be pushed without overheating. There were also heat/throttle issues if you tried to charge from the wrong USB-C port.


The 12" MacBook was a largely failed experiment at going finless. A cute concept, but pretty slow out the gate, let alone after 5 years.


Hi, I'm the author. My old machine is the 12" MacBook. Not a MacBook Air, nor a MacBook Pro. It's the small, portable 12" MacBook with low performance, low memory, and no fan. In other words, the machine was kind of underpowered already when released, as a trade-off for excellent portability and silence.


I had a laptop running Linux from 2013 too and it was always snappy. The only reason I switched was because of how noisy the fans are.

Work also gave me a 2015 Mac and now Linux is running on it. It's pretty fine and far from sluggish. No idea what the author is doing to have problems.


They had a 2015 12" MacBook (the one with no fan). This was essentially an Atom; they very much sacrificed speed for slowness. A contemporary Air would have been much faster.


I guess it depends on what you call slow and what you get used to. I have a 2012 27" iMac and I can barely do anything with it.


> Apple allows you to buy a computer with more than 8 GB RAM, but I think it's just a trap how to satisfy and charge those who don't believe the reviews and just want more RAM whatever it takes.

Just spitballing here, but I’m guessing the OP doesn’t spend much time using DAW sample libraries or editing video.


You are probably right. The problem is that it is all grey. Some people do work with large datasets that absolutely perform much better with larger RAM. Some people had bad experiences in the past with slow swap systems and now always believe they need more RAM. Some people look at monitor apps and see that programs allocated large virtual blocks (because they can) and assume they need more physical RAM. Some people had 16 GB systems in the past that they really didn't need and now run an M1 system with 8 GB and find that everything runs fine.

What we need are concrete A/B tests: two identical (other than RAM size) x86 systems running large, memory-hungry tasks that perform significantly better on a 16 GB system than on an 8 GB system; then the same tasks (running native ARM code) on 8 GB and 16 GB M1 systems with the identical OS version. Ideally a sequence of tests with varying memory demands. Tests like that would at least get in the ballpark of revealing whether the M1 systems are somehow more capable with less memory. The rest is just anecdotal.

I suspect that M1 systems might handle light swapping a bit more efficiently than the latest x86 systems, but tasks with truly large datasets will ultimately be limited by SSD access rates. Of course, that is just another useless opinion until reasonable test data is available.


Hi, I'm the author. I don't know what a DAW is and I don't spend time editing video. I made my first video ever in iMovie just a few days after I got the M1 :)


DAWs, or digital audio workstations, are for editing sound. In music, that can mean needing to load a 100 GB file into RAM to get a convincing orchestra or piano sample. So I have a Mac Pro I bought earlier this year with 384 GB RAM but...

Thanks to blog posts like yours I am also buying a 16G M1 MacBook Pro for general purpose uses such as web development!


Work bought me a new M1 MacBook Pro with 16GB RAM and I'm not feeling the speed whatsoever. My dock causes repeatable crashes waking from sleep. Browsing feels slow and laggy, with things like the 1Password add-on visibly loading in parts when opened.

The only things I can think of are that it’s because I used the Migration Assistant to go from my old MacBook Air to this, or that I’ve somehow acquired a lemon. I’m going to try wiping it clean and starting fresh, but beyond that it’s in refund territory.

I really want to be wrong about it, but I’m sceptical of the good press at this point given my own experience.


Yeah wipe it. Mine is blazing fast and I started fresh.


Although I am obviously very impressed with Apple's new Silicon, is there an "I only buy Apple" effect here, where people are basically impressed with their laptop feeling ridiculously fast because previous Apple products have generally been expensive and not particularly fast relative to the competition (i.e. I see people saying "Wow, I can finally game on my laptop" and it's unthinkable to me to stay in an ecosystem like that)?


There's a big difference between gaming on a thick honking slab of a laptop and an ultrabook.


Few if any MBA-sized laptops are much use for gaming; if you want any sort of battery life at all you're largely stuck with Intel's uninspiring integrated options.


I have a Mac mini M1. I’m leaning towards returning it and getting a 16GB but the 16GB had the longest delivery time. Interestingly enough, I do run some “heavy loading” stuff and I just scratch 6GB or so. I am curious what the longevity issues are with the large swap partition.

I should note I run everything (besides Firefox) as ARM builds (VS Code Insiders is super glitchy, but that's the price to pay for nightly builds).


Why not Firefox? The current release has native arm64 code, and AFAICT runs fine.


Whoa, I did not know about Firefox 84. I actually have it and it updated, and I had no clue it was M1 native now. I was wondering why uBlock Origin started working again all of a sudden.


Shouldn't ublock continue to work regardless? If it's web extensions, then the arch shouldn't matter (I must be missing something?)


Aaand? Does it run smoothly?


It does!


I'm in the same boat! I want to swap my 8GB mini for a 16GB mini, but there's a considerable delay before receiving it.


> I also decided I want 512 GB drive, as I'm used to that size and I didn't want to downgrade

My 512 GB MBA gets ~2800/2700 MB/s read/write versus the commonly benchmarked ~2000. I've been curious about benchmarks of the higher capacities but haven't found any. It seems capacity still increases performance, like it did in earlier generations of SSDs.


Awesome, I'm waiting on my new pro to get here, it's a huge upgrade from my 2016 butterfly that died on me.

I wasn't sure which one to get but after upgrading the memory to 16GB the cost difference is pretty inconsequential between the two.


I hope native Linux happens for these someday.

