Hacker News
M1 MacBook Air hits 900 GFlops in the browser with Safari's experimental WebGPU (jott.live)
433 points by brrrrrm 48 days ago | 470 comments



I usually avoid hype. But the M1 is just unreal. I was back in the office for the first time in months, and since getting my M1 MBP...my overclocked i5 hackintosh with 4x RAM just disappointed. It didn’t feel slow, but it stopped feeling fast.


I just got rid of a 10 month old 16" MacBook Pro for an M1 Air. I wasn't unhappy with the performance of the 16", but the heat and noise were just misery inducing.

The thing was unusable on my lap. After 30 minutes or so just coding, it would get so hot I had to put something between it and my legs. My partner could hear the fans from the next room. And to cap it off, since Big Sur I've been unable to get more than 4 hours off the battery—usually more like 3.

Compare that to the Air: I'm losing two USB ports which is annoying, the speakers are nowhere near as good, I miss the bigger screen, and I'm down 16GB RAM (although in retrospect, I never needed 32GB for my workloads anyway). But in return, I get almost no heat, zero noise, nuts performance (my test suites run 2x faster on average, despite running on two fewer cores), and so far 10-12 hours off a charge easy. And I get real function keys instead of the Touch Bar.

Happiest I've been with a computer in a long long time.


>The thing was unusable on my lap

Just out of curiosity, why do so many people use them on their lap?

I know "lap" is in the name, but damn, that posture is just terrible for your spine, neck and shoulders.

Do you not have a table or some flat surface around the house/office to place it on?

Outside of a few crowded meetings that lacked table space for everyone, I never in my life used my laptop in my lap.


I'm at a desk the majority of the time. But, sometimes I want to work on something casually while I'm watching TV. Or sometimes my partner is on an important call so I have to vacate our shared home office.

I don't think it's an unreasonable expectation to be able to comfortably use a laptop on your lap for short periods.


My machine says `MacBook Pro (Retina, 15-inch, Mid 2015)`. It doesn't get too hot from just coding for me. What are you doing?

(It does get hot when watching the highest resolutions youtube has to offer these days, though.)


You have what many would consider the last 'good' MBP. After that one, Apple pushed to make them thinner while also trying to make them faster, but really went too far with the thermal envelope. The thinness is 100% Apple's fault, but Intel didn't do them any favors with their delays either.


I had a 2008 15" MBP, the original retina 15", and currently have a work issued 16" i9.

The 16" gets toasty, but those first two could get scalding hot. And I was concerned about that as I only had 13" in the intervening years, but the 16" is nothing like those at all. Never once had a situation where it was so hot that I had to remove it from my lap but regularly had those situations with the others.

And I'm pushing it super hard, it rarely gets better than 3-4 hours on a charge. I don't think people are remembering just how bad those earlier MBPs were.


Interesting. I have a 2015 MBP, a 2017 MBP, and an M1 MBA sitting here. The 2017 is the worst of the bunch in terms of fan noise and heat. It is faster than the 2015, and both are 15". I don't have a 16", so maybe they fixed some of the issues on that one.


Interesting!

I had a newer MBP for work at a recent job. I liked it well enough, and it was faster (just thanks to newer processors).


Mostly just WebStorm and/or VS Code. Neither is a particularly light application, I know, but they shouldn't be taxing on a machine of that spec. For comparison, my previous machine (an entry-spec 2015 13" Pro) would only spin the fans up when I was actually pushing it.

Throw in some horribly unoptimised crap like Teams and it was like sitting next to a tiny little aeroplane all day long.


My same model does when coding with Docker running, playing any video 720p or above, or on Zoom calls.


Docker for Mac is ridiculously bad to use. We use build containers at work, and Big Sur definitely gets mad at the VM activity and the high disk activity. My 2019 15" i9 MBP sounds like a jet engine half the time.


I have the same laptop and the fans are on more than they're not. Not exactly a great experience.


> (my test suites run 2x faster on average, despite running on two fewer cores)

I replaced a 2017 i7 MBP with an M1 MBA, and see the same results running my Java test suites. I ran them a bunch of times thinking there had to be some error.


I use Logic to make music, and the noise from the fan on my 15" MBP goes beyond annoyance to making my task harder: I can't hear subtleties without cranking up the volume to compensate for the fan. As soon as the 14" drops later this year I'm on it.


> the speakers are nowhere near as good

Wow, I need to go listen to the 16-inch. When I listened to the new Air for the first time, I thought 'wow, those are surprisingly capable speakers for such a lightweight machine'.


The speakers on my 16” are good enough that I don’t much miss my old school stereo with bookshelf speakers which I parted with in a move a couple years ago. They certainly aren’t as capable of that volume, but they’re loud enough to fill my small 1br house, and with excellent audio quality (probably comparable to my old setup).

Apple has consistently provided good audio iterations on its laptops, so I’d assume the same will be true on the inevitable M-series 16” replacement.


They're probably its best feature, I'd say (aside from being the first Apple laptop in 4 years with a reliable keyboard).


I'm upgrading when the M1X MBP comes out, which (rumors say) will be another large performance jump


We were all complaining about the lack of innovation: iMacs barely upgraded, MacBooks with the superficial "emoji bar" gadget as their sole innovation (plus the butterfly keyboard), and the iMac Pro felt insulting, like "You want better? Here, pay." It felt like a decade of maintenance, milking the cow.

All along, half of the headquarters must have been secretly working on the M1...


When they released the M1 macs without a design change, my initial take was that it was a cowardly move: they didn't want to attach a big design change and risk calling a lot of attention to what might be a flop. But I actually think it was more of a boast: showing exactly how changing only the processor changes everything in a clear A/B test scenario.


That's not the reason, it was almost certainly due to Rosetta and nothing else.

Changing the processor architecture over for the first time was going to be a massive undertaking (I'm sure there are still folks there who have nightmares about universal binaries from '06). They had no way to run a wide public beta of the hardware before releasing, so had to play it as safe as possible.

The only sane way to migrate something so fundamental is to change absolutely nothing besides the processor, so that you have a side-by-side comparison between the previous generation and the new one out in the wild to debug problems that come up. If they had added [Face ID / a touchscreen / new mechanics for the keyboard] they'd need to debug whether those changes were causing the bugs rather than the M1 changes.

That's why I've fought myself to hold off - I'd expect the next MBP to be a fundamental redesign.

I can live without a touchscreen, but please just give me a Face ID laptop Apple.


> [...], but please just give me a Face ID laptop Apple.

Why is that so much more convenient than eg the fingerprint scanner?

(Even entering my password is pretty quick for me.)

I haven't used any form of Face ID on any device so far, so I am genuinely curious. I do see the appeal of fingerprint auth over having to type a password or a code on the phone.


I have a Magic Keyboard for my iPad Pro that I use as a Zoom machine and for saving thoughts, doodles, and for distraction free documentation writing. Aside from that it’s also my main tablet for content ingestion.

Using it docked in the keyboard makes it feel like a laptop. FaceID for filling in forms, authenticating with SSO, it’s all basically instant and requires almost no thought from me, not even a single cycle of brain power needed.

I will say it’s not like, amazingly more convenient than the TouchID sensor on my MBA, but it’s nice to have.


This, essentially.

It more or less reverts to the same experience of being on Chrome on any other platform, where your passwords are just automatically input, but without the obvious security compromises.


I actually strongly prefer the fingerprint to face authentication (especially in the age of masks). Face ID is just so much more sensitive to things like lighting conditions, and you pretty much have to look directly at your phone, so you can't casually unlock it while it's sitting on the table during a coffee date or something. I'm always touching my phone/laptop anyway when I want to use it, so I honestly see no advantage to not just using that touch itself to unlock.


FaceID uses infrared, lighting conditions do not matter.

You can disable the requiring of looking at the phone and unlocking via FaceID in settings, I regularly unlock my phone without holding it so I am "dead on looking at it".

That said, I also prefer TouchID.


> FaceID uses infrared, lighting conditions do not matter.

This is straight up not true. It works fine in the dark, but unlocking an iPhone in direct sunlight can often take a couple of tries.


On the contrary, it does not work in complete darkness; dim light is fine. My iPhone 12 never unlocks with Face ID in the middle of the night, and neither did my iPhone XS. I can, however, see a faint red light flash from the sensor when it tries; I guess this is spill-over into the visible spectrum from the IR.


I've never seen this myself on an XR, but I will take your word for it (as it does make sense), thanks for the info.

I was thinking more of lack of light not being an issue; I didn't take into account that massive amounts of light can blow out the image for the sensors.


That's my thinking as well. But lots of people seem to really like Face ID, and I strongly prefer to assume that there's just something I'm missing, instead of assuming those people are all idiots.


I have an iPad Pro with FaceID and the smart folio keyboard and that is a close analogy to the laptop with FaceID.

With FaceID I don’t have to “do” anything to unlock it. When I sit down in front of the iPad, it is just unlocked. I don’t have to think or act to make it happen.

With the laptop with touchID, I need to press a key to wake it and then press the right finger on the touchID key to unlock. It is a more deliberate and complex action.

I can see that on phones the differences are smaller, but even in the time of masks I tend to prefer FaceID. If I'm using my phone, I'm also looking at it. The mask gets in the way, but that's only a few times a day when I'm at the grocer or something like that. I know that with touchID I could pull my phone out of my pocket and unlock with my finger in one movement, but that doesn't save much for me. You may use your phone differently, and that's fine. I think there would be value in a phone with both systems, as people have different needs and preferences.


This is 100% it.

More than that - masks are an issue currently if you’re trying to unlock your phone indoors in a public place with Face ID, but I just don’t find myself in a situation where I’m wearing a mask when I need my laptop.

(I’m not going to Starbucks or sitting in an office right now, and very likely won’t be until widespread vaccination has taken place).

The convenience of Face ID is transformative to my workflow. It really does essentially take you back to when your passwords would just automatically auto-fill without any kind of checks that it’s still you, without your passwords being stolen if someone swipes your unlocked laptop and runs off.


Think about your experience using Face ID on your phone compared to fingerprint. It’s just easy.


I have no experience using Face ID on my phone.


I think it was more motivated by the fact that the first months with the new processor architecture are rough from a software point of view. While Rosetta seems to work great, there are a lot of things which didn't work initially and now slowly come around with more and more native packages appearing. A complete redesign of the machines would have attracted too many customers who would not have been ready for the bumpiness of the software in the first months.


Yeah this is what I thought at first too, but in my experience it was remarkably smooth. Even as a developer, which is probably a worst-case scenario for this type of transition, I have barely had to think about the fact that it's a different arch


I ran into this. I was making a lot of very conservative noises early on at work about potential risks to our workflows, and I've been waiting and waiting. I had thought making docker images on a different architecture to deployment would bite us, but so far it's the dog that didn't bark.

I don't know how in hell they did it, but I'm impressed so far.


It won’t be a flop.

It will be a teraflop.

More seriously, by the way, do we have a comparison between the black iMac Pros and the M1? It would be ironic if iMac Pros were surpassed by the quarter-price M1...


The M1 wins in single-core performance and power consumption; in everything else, the Pro wins.


They did the same thing when they switched to Intel. Nearly every model inherited its previous design, with big design changes coming after the product line had transitioned. At the time the general consensus was that they were timid about making big visible changes that might make existing customers feel like their products had changed too much. But now it seems like the norm for Apple to stagger architecture and design changes.


There was talk that it was because Apple didn't want to alert Intel to the massive leap the M1 would be!


How would a design change have clued them in about the performance difference?


If there were rumors of a design that was impossible with an Intel chip, that would have tipped their hand. Keeping mostly the same initial design made it easy for Intel to think Apple was just replacing like for like to save some money.


It’s the first Mac in years that’s got me wanting to replace my 2015 MacBook.

Does anyone have experience of Pro vs Air as a dev machine? I'm assuming the Pro is an all-around better machine, but that touchbar....


I had a touchbar Pro and I now have an M1 air. I would go for the Air.

The touchpad is slightly too big on the Pro and the palm rejection is insufficient, leading to unexpected mouse movements and occasionally unintended selection and mass-overwriting of text. This issue is magnified further with the 15" devices on which the touchpad is comically oversized.

The touchbar leads to unintended behaviors, like sending half-written emails because you unconsciously brushed against it while entering a number, and it only makes basic things like adjusting brightness and volume more difficult.

The Air, on the other hand, is a well-engineered little machine that is just as usable as a 2015 Pro or Air. Pity about the limited ports. You've seen the rumors that the next Air will ship with more ports?


I have the Pro. I should have got the Air. I dev mobile and web and the fans are almost never on, so thermal throttling on the Air would likely be minimal. The Touch Bar is horrible; I accidentally hit the "back" button all the time.


We bought one of each on day one for R&D purposes. I absolutely agree. The Air is absolutely the better machine and it’s cheaper. In our benchmarks we actually saw better performance out of the Air most of the time (bursty workloads) and the only thing we could think of was that the bigger heat sink in the Air provides better cooling than in the Pro most of the time, unless you have really long compiles where the fans kick in.


You can even do a little (reversible) mod to the Air where you add some thermal pads, effectively turning the entire bottom into a giant heatsink. It'll get a little warmer to the touch than before, although not uncomfortably, but the sustained performance gains are definitely worth it.


Think of it this way: The Air is not the underpowered almost-macbook anymore. The Air is THE M1 MacBook now. The performance is all there. The magic is all there.

The Pro is like a Plus version, with some bonuses for a higher price -- no thermal throttling whatsoever, a brighter screen, the TouchBar, a bit more battery, better microphones, stuff like that. Get it only if you want those and don't mind a little heavier machine.


I got my M1 MBA recently. I also own a 2015 MBA, and I have to say the decision to take the MBA instead of the MBP was legit. I feel like I'm using an iPad instead of a Mac; everything is buttery smooth. I typically have open: vscode-insiders, Safari with 6 to 9 tabs, a terminal with three processes in each tab, and Apple Music over Bluetooth.


I replaced my 2017 MBP with the M1 MBA. Only downside is the smaller screen, and fewer ports (I have a dock though). It's better in every other way.

As others mentioned, the performance of the new MBA vs the new Pro is similar. I think the MBA really highlights what's great about the first iteration of the M1. So, if you're OK with the smaller screen, I would go with the MBA. If you feel yourself leaning towards the Pro, I would wait for the actual Pros coming later this year.


I do mobile dev and moved from a 2018 Pro to an M1 Air, and I'm super happy with it so far. All my tools work great on the M1 and most fully support it. I've always liked the smaller screen, so even my Pro was the smaller model.

The only thing is the ports, but I'm optimistic about https://eshop.macsales.com/shop/owc-thunderbolt-dock when I can get my hands on it.


And the M1 is an entry-level part used in some of their cheaper products. I can't wait to see what the M2 or M1X has in store. It's gonna be bonkers.


I’m a bit worried every time I hear this. We are setting our expectations so high... doesn’t seem like it’s possible for the M2 to be as much of a leap over the M1 as the M1 was over the status quo. The M2 might be amazing, but now that we are all awake to what’s possible I doubt it’s going to blow our socks off quite as much as we hope.

But the fact that we have anything to look forward to at all is awesome. These are super exciting times in the computing space. Processors have been boring for way too long.


M1x benchmarks leaked recently, and it's on the same TSMC 5nm process node with double the big CPU cores and double the GPU cores and double the RAM.

https://www.tomsguide.com/news/macbook-pro-m1x-benchmarks-ju...

Quite a healthy bump.

Next year, we should see an M2 on TSMC 3nm with its ~40% die shrink and either a ~25% power reduction or a ~15% performance increase.

I would personally expect them to take the power/heat cut like they did at 5nm and bump up the core count once again.


I think what's so impressive about the M1, though, is the increase in (at least perceived) single-threaded performance. We know core counts scale, and that's been the industry focus for some time. So yeah, I'm not expecting a huge perceived speed-up from the M1X/M2... but hey, the M1 feels nearly instantaneous; I'm not really sure how big a difference a faster CPU would actually make.


Apple has been seeing a yearly ~20% performance boost on its big CPU cores for many years now.

https://www.anandtech.com/show/16226/apple-silicon-m1-a14-de...

So while the M1x is using the same CPU and GPU cores as the M1, the M2 will get an additional performance boost from having a newer generation of core designs.


These are fake, posted here a few days ago.


One of the fairly accurate Apple leakers, Jon Prosser, has said that one of his known good sources has confirmed the scores are legit.


> One of the fairly accurate Apple leakers, Jon Prosser

lol


Someone with an ~80% accuracy rate wouldn't have any sources that have proven to be accurate in the past?


I think we'll see more changes to other parts of the system just based on the fact that now they control the entire silicon. The next machines will probably have incredible demos with flawless "always on" face recognition that happens in microseconds and locks/unlocks your computer as you step away or sit down. Or impossibly good voice recognition that's always listening even while the machine is 'asleep' in your bag or on your desk. We'll keep seeing more and more little coprocessors, accelerators, etc. running whatever fancy machine learning model Apple engineers can dream up. And because all of this is done in the hardware it will be impossibly fast and have almost no hit to battery life. The world is their oyster now, they aren't held back by Intel's whims anymore.


Unless the M2 is a higher-core version with 16-32 cores? The M1 is already built on a 5nm process, and you're right about the upgrades being incremental and small. But Apple can bump up the core count significantly to make the M1X/M2 a big leap.


How much will the average user notice the difference between 4 and 32 high-performance cores? This will have a huge impact on highly parallel tasks like rendering, but isn't it still the case that most software isn't designed to scale horizontally in a meaningful way?


I think the M1 is so impressive because the normal user mostly notices single-thread and low-thread-count performance, so the 4+4 configuration of the M1 hits that sweet spot. The most obvious gains from more cores would come from multicore loads, and heavy computing tasks like that are getting more common too. I would also expect any variant of the M tweaked towards performance rather than low power, as the M1 is, to offer somewhat higher clock speeds. Considering the M1 runs at 3 GHz and the most recent x86 parts at up to 5, there should be quite some clock-speed headroom, if Apple chooses to use it.


> How much will the average user notice the difference between 4 and 23 high-performance cores?

Instead of 4 cores shared between 8 apps, you could have the same 8 apps each with a dedicated core (or two, even.)
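The scaling question upthread has a classic back-of-the-envelope answer in Amdahl's law (a deliberate simplification; real workloads and schedulers vary):

```python
# Amdahl's law: best-case speedup from n cores when a fraction p
# of the work can run in parallel and (1 - p) stays serial.
def speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

# Even a fairly parallel workload (p = 0.8) flattens out quickly:
print(round(speedup(0.8, 4), 2))   # 4 cores  -> 2.5x
print(round(speedup(0.8, 32), 2))  # 32 cores -> 4.44x, nowhere near 8x more
```

So for typical desktop software, going from 4 to many performance cores mostly helps the embarrassingly parallel jobs (rendering, big compiles) rather than everyday responsiveness.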


A bigger battery means more room to boost clocks higher. A larger die means not just more CPU cores, but probably 2-4x the GPU performance.


I feel like the M1 chips kind of broke the timeline. It's like the past ~10 years of Great Intel CPU Stagnation never happened, and we finally now get the mobile computers we thought 10 years ago that we would have in 10 years.

I think the M1X - or whatever they call the chip line with a much larger thermal envelope - will blow our socks off.


During those 10 years, mobile went huge and PC demand stagnated. You can look up corporate revenues and news coverage to verify that for yourself, if you don't believe me. It's easy to see how that market environment gave Apple a leg up. Anecdotally, I bought one or two new laptops in the 10 years prior to the pandemic, but I probably bought eight or nine different phones. And while there was a little PC innovation (mainly at the earlier part of that decade), phone hardware got sooo much better. Snapdragon SoC, waterproof flagship phones, awesome cameras, 4G, the whole nine yards. In my mind, M1 is kind of an extension of that smartphone innovation, bridged from iPad and iPhone to the Mac.


If you look at the leaps in CPU perf the iPhone/iPad has had year over year, why wouldn’t we expect this to continue with these chips which are similar in many important ways?

I’d also expect the first release was relatively conservative and restrained to ensure a low risk debut. I’m optimistic for the next few generations.


Seems like they've achieved double-digit performance gains with each A-series chip for the last few generations.

I’m excited for a 16” MacBook Pro but hoping Microsoft brings a production version of Windows 10 arm to the market. I need it for various aspects of the work I do. I can keep a spare windows box around but it’d be great to have one system.


I think a mac pro level machine will be the interesting one. Put a bunch of these cpus in a system. I don't know if we can hope for expandability anymore, apple just might not do that anymore.


I'm going to make a prediction here: Including memory on a multi-chip module/building systems around an MCM SoC is going to become the standard for a large fraction of the personal computer market.

The performance gain from doing so in CPU<->core memory and core memory<->GPU transfers is huge, and the manufacturer can match RAM timing and performance precisely to their processor or even implement non standard ram types as they like. There are other benefits too like simplified motherboard design.

Now that Apple has taken the risk, other manufacturers will look at doing the same. Not all computers will use the SoC model, but for laptops and many desktops this will be a big win.


I don’t know how this “memory on the SOC” thing became a narrative. That isn’t responsible for any of the M1’s performance. Apple isn’t using anything fancy like HMC or HBM that can’t be done with off-package memory. It’s a regular 128 bit memory bus and standard LPDDR4x (at the top standard frequency), with slightly higher memory latency than Intel and AMD. Pretty much any HEDT x86 system has a more impressive memory system.
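For context on that claim, the peak bandwidth of a 128-bit LPDDR4x-4266 interface is easy to compute (theoretical peak only, ignoring protocol overhead):

```python
# Peak bandwidth = bus width in bytes x transfer rate.
bus_bits = 128                # M1's reported memory bus width
transfer_rate = 4266 * 10**6  # LPDDR4x-4266: 4266 MT/s

bandwidth_gb_s = (bus_bits / 8) * transfer_rate / 1e9
print(round(bandwidth_gb_s, 1))  # -> 68.3 GB/s
```

~68 GB/s is solid for a thin-and-light, but, as the comment says, well below what a quad-channel HEDT platform delivers.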


>I don’t know how this “memory on the SOC” thing became a narrative.

Yes, it has been spreading like the plague. I had to post something similar [1] not long ago, and many more before that.

The M1 could have off-package quad-channel DDR4 memory and still be as fast. The performance improvement (from a memory perspective) comes from the shared memory address space and other similar optimisations.

[1] https://news.ycombinator.com/item?id=26225007


Going off package will at the very least increase power dissipation. Exiting a package, going across a PCB, and entering additional packages will increase capacitance significantly as well as increase resistance and inductance. This will impact performance. If the increased capacitance does not change actual operating speed, then the buffers are supplying more current to overcome the capacitance, not to mention the potential ringing and other undesired effects from the additional parasitics. There is a penalty for going off the SOC. It is not just a narrative, it is physics.

The same memory address space choice is of course important, but its performance and power envelope is impacted by the SOC vs. separate package choice.

The combination of M1 performance and low power has happened due to a series of choices made by Apple. Forgoing user configurability and fixing memory choices at manufacture while using SOC tech made mainstream by the phone industry is one of those impactful choices. There are of course several other important choices, but it is incorrect to discard this choice as non-impactful.


If I understood the parent comment, isn't it more about shared CPU/GPU memory? This is something different which M1 has in common with game consoles and smartphones but not traditional PC's, isn't it?


It's exactly the same thing Intel and AMD have in their CPUs with integrated GPUs.


no it's not the same. "Shared" means different things.

In M1, the GPU reads directly from memory written by the CPU.

In Intel/AMD, the data has to be copied from the CPU's address space to the GPU's. "Shared" only means there aren't separate main and graphics memory chips/banks. But said shared memory is segregated.


This document has something about Intel's UMA with zero copy buffers (not sure how relevant):

https://software.intel.com/sites/default/files/managed/db/88...


>> But said shared memory is segregated.

Uhm, as a games developer working on consoles.....no it isn't. You can do it like this if you wish, but generally the entire address space is accessible from either CPU or GPU. Maybe it's implemented like this on PC, but at least the architecture design on X1/XBS/PS4/PS5 allows both reads and writes from any area of memory by either cpu or gpu.


On PC, by default, the memory for the iGPU is a dedicated segment of RAM. There are probably tricks to read from each other's RAM, but nothing integrated the way Apple has done it.


It's memory on a SiP (system-in-package).

Taking a quick look at an i9 9900 ($382 for chip alone) it supports DDR4-2666. i9-10900T runs at 2933 - $400 for chip alone.

Apple is running their stuff at 4100 or something. So that looks faster. And the mac mini costs $700 including the insane apple margins?

Can you spec out the $700 machine on pc parts picker that shows that the M1 is nothing special? One with faster or equivalent memory.

Calling the M1 in a $700 machine a HEDT?? Huh? Apple is going to sell these things by the truckload.


The i9’s slow supported memory is a product of that being Intel’s 14nm line which uses the same architecture released in 2015. The 10nm core chips use LPDDR4x, just like the M1. The $999 Surface Laptop 3 uses it at 3733 MT/s, just a bit slower than the M1: https://www.anandtech.com/show/14933/microsoft-announces-sur.... That speed of memory is supported on the lowest end 10nm mobile i3, which appears in sub-$300 NUCs available since 2019.

The M1 is a great chip, but that has nothing to do with the location of the memory. LPDDR4x-4267 is a standard memory type. Kudos to Apple for using the highest commonly available speed bin, but it’s a standard speed bin for that type of memory.


You claim it's common, but I haven't seen that speed out there a lot. And definitely not in a package close to Apple's.


It's very common for high-end ultrabooks. Just look at the XPS 13; it runs at 4267: https://www.dell.com/ph/p/xps-13-9310-2-in-1-laptop/pd?ref=P....

Exactly the same as the Apple M1. It's just the most common high-end memory speed currently; nothing special about it.


Most i9 9900 chips will happily run DDR4-4266 as well. The reason Intel only lists DDR4-2666 has to do with JEDEC standards and overclocking.

Also note that frequency is only part of the picture. CAS latency matters as well, and it is much higher on Apple's memory.
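To see why frequency alone is misleading, absolute CAS latency can be converted to nanoseconds (the CL values below are illustrative assumptions, not measured specs for these machines):

```python
# Absolute CAS latency in ns: CL cycles at the memory clock,
# where the clock is half the transfer rate (DDR = 2 transfers/cycle).
def cas_ns(transfer_mt_s: int, cl: int) -> float:
    clock_mhz = transfer_mt_s / 2
    return cl / clock_mhz * 1000

print(round(cas_ns(2666, 19), 1))  # DDR4-2666 CL19    -> 14.3 ns
print(round(cas_ns(4266, 36), 1))  # LPDDR4x-4266 CL36 -> 16.9 ns
```

i.e. the higher-clocked LPDDR4x can still have worse absolute latency than slower-clocked DDR4.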


I'd say that having massive low latency caches on die plays a larger role.

>On the cache hierarchy side of things, we’ve known for a long time that Apple’s designs are monstrous, and the A14 Firestorm cores continue this trend. Last year we had speculated that the A13 had 128KB L1 Instruction cache, similar to the 128KB L1 Data cache for which we can test for, however following Darwin kernel source dumps Apple has confirmed that it’s actually a massive 192KB instruction cache.

> That's absolutely enormous and is 3x larger than the competing Arm designs, and 6x larger than current x86 designs, which yet again might explain why Apple does extremely well in very high instruction pressure workloads, such as the popular JavaScript benchmarks.

> The huge caches also appear to be extremely fast – the L1D lands in at a 3-cycle load-use latency. AMD has a 32KB 4-cycle cache, whilst Intel's latest Sunny Cove saw a regression to 5 cycles when they grew the size to 48KB.

https://www.anandtech.com/show/16226/apple-silicon-m1-a14-de...
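One caveat when comparing load-use latencies in cycles: the chips run at different clocks, so it helps to convert to nanoseconds (the clock speeds below are rough assumptions, not exact figures):

```python
# Load-use latency in nanoseconds at a given core clock.
def latency_ns(cycles: int, ghz: float) -> float:
    return cycles / ghz

print(round(latency_ns(3, 3.2), 2))  # M1 Firestorm: 3 cycles @ ~3.2 GHz -> 0.94 ns (128KB L1D)
print(round(latency_ns(4, 4.9), 2))  # Zen 3:        4 cycles @ ~4.9 GHz -> 0.82 ns (32KB L1D)
print(round(latency_ns(5, 3.9), 2))  # Sunny Cove:   5 cycles @ ~3.9 GHz -> 1.28 ns (48KB L1D)
```

In absolute time the designs are in the same ballpark; the remarkable part is Apple hitting that latency with a cache of 4x the capacity.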


That cache is that big because the decode and ROB are so wide. If AMD or Intel's current designs widened the L1, it wouldn't make a difference. In fact, AMD reduced L1 cache size from 64k to 32k from Zen 1 to Zen 3.

x86 needs to find a way to scale decoders without blowing the power budget. Given that the decoders are already bigger than the integer units, I suspect that will be a hard thing to do.


I get crashes with XMP, usually at the worst possible time, like on a Zoom call where I'm presenting! And saying users should overclock just seems weird; Apple works out of the box.

For whatever reason, the overall memory system on the M1 systems just seems better than intel. I really wish I could follow more details from on-die cache to how memory is actually loaded / unloaded to speeds, but every time I've looked at it a little it just seems the M1 / Apple are doing it better across the whole stack.


My RAM on my 8700K system is running at 3200 MHz (XMP I). I imagine the 9900 can do at least that.


Using XMP may void your warranty: https://community.intel.com/t5/Processors/XMP-Warranty-void/...

This may have changed since then (Mid-2020). Gamers Nexus have done an undercover sting where they found it was possible for a support agent to reject a warranty request on the basis of XMP: https://www.youtube.com/watch?v=I2gQ_bOnDx8&t=1155


Performance per Watt. Reducing the trace length makes it possible to get high frequency RAM working with acceptable power consumption. How many other ultrabooks clock their RAM at 4266MHz?


It has been said again and again, but this is a very common misconception. The memory is a soldered on extra chip. It's not on the SoC.


It’s in the same package as the SoC, so no, it’s not soldered separately (but it’s not on the same silicon chip).


That's some weird semantics. It is very much soldered separately; they are separate components soldered next to each other [1]. (Yeah, on its own sub-PCB, but still.)

If that's not "soldered separately", then we might as well zoom out and apply that statement for the whole PC ("all components are in the same package, only the charger is separate")

[1] https://d3nevzfk7ii3be.cloudfront.net/igi/ZRQGFteQwoIVFbNn


Ah, my bad - should have looked at the board.


Why not both? A little DRAM on SoC and the rest of the DRAM on the motherboard. Kernel in charge of "swapping". Maintains expandability while keeping most of the performance benefit.


How? You're basically describing RAM caching. Putting all the ram physically close to the processor gives a giant performance gain that's mostly lost if any of the system's RAM is "remote" on the motherboard.


Is there any concrete numbers to back the claim that "Putting all the ram physically close to the processor gives a giant performance gain"? Will be interesting to see what is the memory latency of M1 compared with a regular Intel/AMD processor.


It doesn't. As has been corrected time and time again.

The M1 has pretty high memory latency at around 100 ns [1], which is significantly higher than either AMD or Intel for typical systems. Note that physical distance between CPU and memory is rather less important for latency, as DRAM is high latency in itself, so adding a few ns at most due to wiring is not going to matter.

[1] https://www.anandtech.com/show/16252/mac-mini-apple-m1-teste...
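For the curious, latency figures like these can be roughly estimated with a pointer-chasing loop, where every load depends on the previous one so the hardware prefetcher can't hide the round trip. A sketch in TypeScript for Node (function name and buffer sizes are mine; JavaScript overhead makes this only indicative, so treat it as an illustration of the technique, not a replacement for a native measurement tool):

```typescript
// Pointer-chasing latency estimate: walk a random cycle through a buffer,
// forcing each load to wait for the previous one to complete.
function chaseLatencyNs(bytes: number, iters = 1_000_000): number {
  const n = bytes / 4; // number of 32-bit slots

  // Fisher-Yates shuffle to build a random single cycle over all n slots,
  // so consecutive accesses land at unpredictable addresses.
  const order = Array.from({ length: n }, (_, i) => i);
  for (let i = n - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [order[i], order[j]] = [order[j], order[i]];
  }
  const next = new Uint32Array(n);
  for (let i = 0; i < n; i++) next[order[i]] = order[(i + 1) % n];

  let p = 0;
  const t0 = performance.now();
  for (let i = 0; i < iters; i++) p = next[p]; // dependent load chain
  const t1 = performance.now();
  if (p === n) throw new Error("unreachable"); // keep `p` live for the JIT

  return ((t1 - t0) * 1e6) / iters; // nanoseconds per dependent load
}

// A small buffer stays cache-resident; a large one spills past most caches:
console.log(chaseLatencyNs(32 * 1024).toFixed(1), "ns (cache-resident)");
console.log(chaseLatencyNs(16 * 1024 * 1024).toFixed(1), "ns (past most L2s)");
```

On real hardware the small-buffer number reflects L1/L2 latency and the large-buffer number creeps toward the DRAM latency being debated above.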


Just for context, the M1's latency is fairly close to other LPDDR4(x) systems (Tigerlake and Zen3 SoCs). This is fairly typical of LPDDR4 compared to normal DDR4, part of the compromise.

Not the most scientific, but userbenchmark is useful because it has latency graphs available for millions of systems.

Latest gen Intel with LPDDR4x chips is well over 100ns https://www.userbenchmark.com/UserRun/40531587 While the same CPU is almost 40ns faster with SODIMMS of DDR4. https://www.userbenchmark.com/UserRun/40527352


Yes. Thank you. If M1 discussion continues to be like this we have a chance of stamping out M1 misinformation on HN.

But sometimes we are just too lazy to provide the context or to spell out everything. This information is readily available with a simple Google search. And yet across the past dozen M1 threads this "memory" advantage thing keeps popping up.


What performance benefit? Its normal memory running at JEDEC speeds.


It's subjective, like in high end audio.


Sounds like what the Amiga did with Chip RAM and Fast RAM.


It's a beast and it's also mega efficient. Right now on my MBA I'm running 2 displays (one external at 3008x1253 HiDPI 120hz), several apps open including VS Code and 3 browsers, playing a 4k vp9 YouTube video in Firefox, and the passively cooled M1 never crosses 40C. My other Macbook would be at 70C with the fan screaming.


My daily driver is a 2019 rMBP -- the last 15" model, with the upgraded keyboard.

I am SO TEMPTED by the M1 upgrade, especially given the trade-in valuation on my machine right now. However, it would be inconvenient to give up ports, and right NOW I still do some Windows virtualization, so I'm holding off.

But it's still SUPER TEMPTING.


May be worth noting that the battery life is much better on my M1 MBA than Intel devices, so you don't necessarily need to sacrifice one of the two ports for charging during the day.

It depends upon your use profile, but for me it leaves both available ports for peripherals not just the one that you might expect.


As is, I'm feeding power and my Tbolt display on one side, and use the side facing me to (occasionally) juice up my keyboard or iPad.

I'm not really interested in juggling plugs during the day.


If the current rumors about the upcoming MBPs are true you really might want to wait for them.


That's where I am.


Same here - my beast hackintosh has now turned back into a barely used gaming PC, because Air is faster and more convenient.


i5, so you're comparing to what, a 14nm processor? yawn.


"i5" isn't terribly specific. There are new i5s done on 10nm SuperFin.


It wouldn’t make much of a difference.


size vs performance, since Adam and Eve


In 1996 you needed 1,600 sq ft of supercomputing hardware to reach the same mark.

https://en.wikipedia.org/wiki/ASCI_Red


For a little less distant comparison, you could buy a Sun E10k in 1997. It was roughly two full racks in size, with 64 SPARC CPUs delivering ~50 GFLOPS in total.

So 18 of those needed, 36 full racks of space, 1152 CPUs to get to 900GFlops. Each 10k is roughly 39"W x 50"D x 70"H, so about ~244 square feet of floor space for 18 of them, not including space needed around it. Including the space around it, it would be a typical 1BR apartment full of compute.

Also, I assume the M1 does significantly better than 900 GFlops if you don't run it through the browser.


I assume the M1 does significantly better than 900 GFlops if you don't run it through the browser.

What's being measured is loading something on to the GPU and running it there. It doesn't make much difference how it gets to the GPU.


Even more exciting would be a power use comparison, including for the air conditioning and other auxiliary support needs. Exciting times we live in


A single E10k fully populated needs 11,041 watts, not including the needed air conditioning. So 198 kW for 18 of them. I don't know how to convert BTU/hr into power, but each E10k needs 37,165 BTU/hr of A/C.

The Macbook Air M1 would be something lower than 30W, since that's what the power supply is rated for.


37165 BTU/h is roughly a quite convenient 10 BTU/s. 1 W is 1 J/s, and 1 BTU is about 1000 J. So the AC would need about 10 kW, or 180 kW for all of them.

All in all the computers in the example thus come out at about 400 kW of total power use. Yikes!

That's a power reduction by a factor of more than 13,000 in under two and a half decades!

Put differently: If your electrical power comes from burning coal, the 1997 alternative to the computing power of a single M1 would emit about 8000 kg of CO₂ per day. The equivalent of driving 80,000 km with a very modern gasoline car. The mind boggles!
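Spelling out the conversion in the two comments above (the E10k figures come from the thread; the coal intensity of ~0.9 kg CO₂ per kWh is my round assumption):

```typescript
// 1 BTU ≈ 1055 J, so 1 BTU/hr ≈ 1055 / 3600 ≈ 0.293 W.
const btuPerHourEach = 37_165;                              // A/C load per E10k
const acKwEach = (btuPerHourEach * 1055.06) / 3600 / 1000;  // ≈ 10.9 kW
const units = 18;                                           // E10ks for ~900 GFLOPS
const computeKw = 11.041 * units;                           // ≈ 198.7 kW direct draw
const totalKw = computeKw + acKwEach * units;               // ≈ 395 kW all in
const m1Kw = 0.03;                                          // 30 W MacBook Air ceiling

console.log(totalKw.toFixed(0), "kW total");
console.log((totalKw / m1Kw).toFixed(0), "x power reduction");
// Coal at ~0.9 kg CO2 per kWh (assumed), running around the clock:
console.log((totalKw * 24 * 0.9).toFixed(0), "kg CO2 per day");
```

The precise total lands near 395 kW, which is where the "factor of more than 13,000" and the "about 8000 kg of CO₂ per day" figures come from.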


“since that's what the power supply is rated for.“

Theoretically, that machine could use more power at full speed. That would mean you couldn’t charge it fast enough to keep it running at top speed forever, but that might not even be noticeable because you hit a heat limit earlier.

In practice, I guess the power usage is lower, as https://support.apple.com/kb/SP825?locale=en_US says it has a 49.9‑watt‑hour lithium‑polymer battery. At 30W, you would run out of that in less than 2 hours (but again, at full speed, you probably hit a heat limit earlier)



Are we comparing single precision vs. double precision?


If the current trend continues we will have a chip the size of a Planck unit with the 2020-level computing capacity of a machine the size of Jupiter.

Chat applications will be even slower than today


What will soon be the size of Jupiter is node_modules


Bad timing: the industry is about to move towards direct imports and much simpler dependency management, thanks to the evolution of JS and its tooling.

If anything, node_modules will slowly fade away in the coming years. And not just with that Node rewrite that links directly.


Thank you for that info, I will consider it in the future. But before that happens and before these systems are widespread, it's going to be node_modules for another while.

And someone will still somehow have a 500MB folder with node_modules in 2037, I just know it.


Any more info?


If only I could give you gold


I opened Discord after 2 years of not using it, remembering a simple Slack-like chat that was slightly better.

I opened it again recently with all my old channels and it looks like they added a hundred new features I have no idea what they do or never would need.

Discord is still a good product but makes you miss the simplicity. Or a simple IRC interface (although I've seen some IRC servers with a million plugins).


This made me curious about GPU performance as well:

GeForce 210 (2009): 39.36 GFLOPS

GeForce GTX 980 Ti (2015): 6.060 TFLOPS

GeForce RTX 3090 (2020): 35.58 TFLOPS


The GT210 is just about the furthest thing from high end. The fastest single GPU model from that generation (GTX 285) is capable of ~700 tflops.


700 GFLOPS, you mean?


Yup, thanks.


Wikipedia lists it at just over 1TFLOP:

https://en.wikipedia.org/wiki/GeForce_200_series

Maybe it's another case of Wikipedia not being accurate (at this point in time)?


That is actually funny because I sourced my number from Wikipedia too:

https://en.m.wikipedia.org/wiki/List_of_Nvidia_graphics_proc...

So there’s definitely something wrong there.


Right, the low-end models are excessively underpowered and only exist for niche cases or tricking people. -50 through -80 is a good baseline for normal, and even a GTS 250 is 387/470 GFLOPS.


I like this metric: sq ft of supercomputer-year. It'd be fun to see comparisons of, say, phones, holding supercomputer-year constant.


Now, imagine going back in time to 1996 and trying to explain to people in the supercomputer lab that 25 years later there would be legions of people using portable supercomputers because you need this to build javascript applications for websites


Ok, my Pixel 4 phone has 950 Gigaflops??? https://en.wikipedia.org/wiki/List_of_Qualcomm_Snapdragon_pr...


The Snapdragon 855 does 950 Gflops (FP32) natively not in the browser. The M1 does 2.6 Tflops (FP32) natively and 900 Gflops in the browser.


What difference does it make? WebGPU runs natively.


As far as I know WebGPU has a performance overhead, also he is using an experimental implementation. How do you explain the difference between the 2.6 TFlops (FP32) [1] of the M1 and the 950 GFlops (FP32) of the 855 [2]?

[1] https://www.anandtech.com/show/16252/mac-mini-apple-m1-teste...

[2] https://en.wikipedia.org/wiki/List_of_Qualcomm_Snapdragon_pr...
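For reference, the napkin math usually behind such peak FP32 figures (the M1 GPU parameters below are commonly reported third-party numbers, not official Apple specs):

```typescript
// Peak FP32 = ALU count × 2 FLOPs per cycle (a fused multiply-add counts
// as two operations) × clock frequency.
const alus = 8 * 128;   // 8 GPU cores × 128 ALUs each (reported, not official)
const clockGhz = 1.278; // reported peak GPU clock (reported, not official)
const peakGflops = alus * 2 * clockGhz;

console.log(peakGflops.toFixed(0), "GFLOPS"); // ≈ 2617, i.e. the ~2.6 TFLOPS figure
// The ~900 GFLOPS browser result is then roughly a third of this peak.
// (The article's "50% of peak" wording implies a lower assumed peak of ~2 TFLOPS.)
console.log(((900 / peakGflops) * 100).toFixed(0) + "% of peak");
```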


Thanks for pointing this out. This does put things in perspective. While it's true that achieving that in the browser is something, this sounds much less interesting and a lot more like the usual hype.


From the article:

> Hitting nearly 1TFlops in the browser (50% of peak) is extremely empowering and it's exciting to see such technology available.


So much for an "amazing M1". So it's as performant as a chip that was in a phone released a year prior.


A passively cooled M1 beats a Ryzen 3900X, an i7, and an i9 in most benchmarks[0].

But if you want to pick this single benchmark so that you can conclude that the M1 is not better than a mobile phone SoC from a year ago, then you do you.

[0] https://tech.ssut.me/apple-m1-chip-benchmarks-focused-on-the...


Some of those tests are realistically testing RAM speed more than CPU power. Yet he has the 3900X configured with only a 1600 MHz memory clock, compared to 2133 MHz on the M1.

The single core performance of these chips is very impressive, especially at such a TDP, but in raw multi-core CPU power it simply does not beat a 3900x.

But if you want to pick one set of benchmark tests so that you can conclude that the 3900X from two years ago is not better than an M1, then you do you.


> but in raw multi-core CPU power it simply does not beat a 3900x

I agree.

> if you want to pick one set of benchmark tests so that you can conclude that the 3900X from two years ago is not better than an M1

The thread was clearly about M1 vs mobile phone SoCs. I just gave some benchmarks to show that the M1 is not a mobile phone SoC but can instead compete with very decent laptop/desktop CPUs. The 3900X wasn't the main point. I even own a 5950X myself.

> you do you.

In all fairness: I shouldn't have used that phrasing.


The M1 hits roughly three times what the Snapdragon does outside of the browser, so it's about three times as performant in raw throughput.

Or, as the parent link says, INSIDE the browser, it hits 900 GFlops, which is absolutely amazing.


I like the M1 in theory but my work’s weird crusty express app produces an empty response on localhost on the M1. It works on everything else of course and even all the microservices work (also express) but at this point I’m stumped. Maybe I should just try x86 node for now?

I think my point is, for developers it’s not working out that great in a lot of cases, especially docker and so on. Performance is not everything if you were hoping to get some actual work done!


I use Docker (preview) on my M1 every day for work and it's great. You can run `arm64` containers natively, or you can emulate, for example, with `--platform linux/amd64`.

I'm guessing you're running into issues with C dependencies. You can also go full Rosetta: create an alias to your terminal, right-click and enable "Run in Rosetta", then open it. Everything you run from this terminal will also be Rosetta (amd64), and you run Rosetta homebrew, node, etc. You can confirm what is running under Rosetta in Activity Monitor.

I've recently switched my default homebrew from Rosetta (/usr/local) to native (/opt), by switching my PATHs around since I was mostly waiting for go1.16 to arrive.


> You can also go full Rosetta: create an alias to your terminal, right-click and enable "Run in Rosetta", then open it. Everything you run from this terminal will also be Rosetta (amd64), and you run Rosetta homebrew, node, etc. You can confirm what is running under Rosetta in Activity Monitor.

Great tip! Didn’t know. Thanks.


The Docker tech preview seems to work pretty well, so I'm not sure what you mean by "especially docker". The Docker M1 preview also supports x86_64 emulation with QEMU reasonably well.

https://docs.docker.com/docker-for-mac/apple-m1/


I've been using docker on the M1 and it's been fine. Also cross-compiling works great (buildx).


Yes, I'd be surprised if the fiddly open-source parts of non-Xcode toolchains are reliably being built for M1 (or ARM in general) yet. An interesting downside to the M1 Macs for sure, at least for a while. Thanks for bringing it up.


Actually, I've found the opposite. Xcode stuff I found janky and poorly thought out, requiring arcane compiler flags and producing a bunch of cryptic error messages.

Homebrew works great on M1. I use it a lot and have only had one problem (some gem on Ruby doesn't compile properly) where I've had to switch to the x86 version, but it all seems to work side by side perfectly, and you can just prefix any command with _arch -x86_64_ if you do need x86.


Now that Homebrew is up and running on Apple Silicon, I haven't found too much that doesn't work. MacPorts seems to be mostly working too, though at this point brew seems more up to date.

If you want to get an M1 Mac for development, I haven't found too many things that don't work and most work very well. The compile speed on even an M1 MacBook Air is incredible.


Waiting for this, so I can exchange my developer one coupon for a real one ...


Have you tried running under Rosetta?


I've been using the Elm compiler under Rosetta for over 2 weeks, and it's still at least 2x faster than the latest Intel Mac. Also using the Atom editor with Rosetta, which works okay-ish; VS Code has got an M1 build which works well.


Yes, and it still didn't work :-(


You probably won't get this performance when WebGPU is finalized. It has to add bounds and sanity checks. It's unclear how much worse the perf will be.


Assuming the M1 GPU works like most modern GPUs, bounds checking is already built into every memory operation.

Memory operations in modern GPUs basically evolved from fetching textures (which intrinsically have bounds checking built in: they have a width and a height). All modern desktop GPUs (and probably mobile GPUs these days) use "descriptors" for textures and buffers which specify both address and size. Out-of-range reads from a buffer return 0, and out-of-range writes are no-ops.

There have been some GPUs in the past that could literally write to any address in main memory (famously the GPU in the Xbox 360 could do this), but it's not true of any modern GPU as far as I know.

On a different note, 900 GFLOPS from a GPU is not really that impressive. Desktop GPUs reached this kind of performance nearly 10 years ago, but I guess it's not bad for a first-generation new design.


I guess it's not bad for a fanless 10-watt SoC the size of a stamp on an entry-level portable computer in a browser.

To get that kind of performance nearly 10 years ago in a desktop GPU, I bet you would need a whole lot of dollars, watts, and cube inches.

It is impressive unless you compare apples to oranges.

Plus, on bare metal it reaches 2.6 TFLOPs already.


>entry-level portable computer

Not all computers are Macs, and at a €1,000 starting price it's the entry-level Mac, but by no means an entry-level computer.

Entry-level computers are in the €400 ballpark (i5/4500U, 8GB RAM, 256GB SSD).

For €1000 you could get a pretty strong gaming computer which is by no means entry level.


Entry level m1 is a quite a bit cheaper than that in euros.


MSRP is actually quite a bit more: €1,129.

https://www.apple.com/at/shop/buy-mac/macbook-air

Not saying you can't find it under that on some promotion somewhere but that really depends on market timing and on where you live.

For example, in Austria you can't find it under €999, and that's definitely not entry-level money; Apple's entry level comes at a premium and is not representative of the broader PC market's entry level.


Note that the price in euros (almost?) always includes VAT, while in USD it almost never does. Which explains why, although $1 < €1, the displayed euro price is higher than the dollar price.

That being said, I agree with all you said.


I was looking at Mac mini m1 on the German Apple store, for 799.


Nearly 10 years ago a 2.6 TFLOP (FP32) GTX 660ti cost about $300 and had a TDP of 150w.


Kinda proving my point then? Power usage is 15x higher, and $300 is just for a card that is probably as big as a whole Mac mini.

Now add the rest of the components and the prices, wattage and size shoots up exactly as described.


See, that's what I also thought, but I've heard that the built-in bounds checking can be pretty buggy. I'm not an expert though.

Yeah, compared to modern desktop GPUs, which can hit probably 20-30 times this, it's not that impressive. That being said, they're also consuming 20-30 times the power.


this is a good point, and I'm quite curious what the hit will be.

Safari makes it substantially easier to enable WebGPU than Chrome does (requiring a canary version and flags), which leads me to believe there's already some security mechanisms in place. But, time will tell!


I tried it with Chrome Canary with the relevant flag but unfortunately it didn't seem to work with this particular site failing with "TypeError: Failed to execute 'createBindGroupLayout' on 'GPUDevice': required member entries is undefined.".

Shame as I wanted to see what happened if I pitted my desktop against it. Of course it's likely the WebGPU implementations between browsers are not equivalent from a performance point of view.


Unfortunately the browsers haven't quite settled on a standard, so the code posted only works in Safari.

More context here: https://news.ycombinator.com/item?id=22022962


It's pretty clear that SPIR-V should be chosen.


You may want to read this summary of the arguments for and against SPIR-V, made by Kvark, a graphics engineer at Mozilla, so you have a more balanced view of the topic:

The summary exists in two variants: a graphic summary[1] and a textual one[2].

[1] https://kvark.github.io/webgpu-debate/SPIR-V.component.html

[2] https://kvark.github.io/webgpu-debate/SPIR-V.html


WGSL has been chosen as of 2021, nothing more to discuss.

https://gpuweb.github.io/gpuweb/wgsl.html


> I tried it with Chrome Canary with the relevant flag but unfortunately it didn't seem to work with this particular site failing with "TypeError: Failed to execute 'createBindGroupLayout' on 'GPUDevice': required member entries is undefined.".

Firefox also raises the same error. It looks like Safari implements an older version of the WebGPU draft[1], whereas Chrome and Firefox implement a more recent one.

[1]: https://gpuweb.github.io/gpuweb/#dictdef-gpubindgrouplayoute...
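To make the drift concrete, here is a sketch of the two descriptor shapes as plain object literals. The member names follow the respective WebGPU drafts as I understand them, so treat the "old" shape as approximate rather than an exact record of Safari's implementation:

```typescript
const GPUShaderStage_COMPUTE = 0x4; // stage flag value from the WebGPU spec

// Older draft shape, roughly what Safari's experimental WebGPU accepted:
// a `bindings` array with a flat `type` string per binding.
const oldDescriptor: Record<string, unknown> = {
  bindings: [
    { binding: 0, visibility: GPUShaderStage_COMPUTE, type: "storage-buffer" },
  ],
};

// Current draft shape, which Chrome Canary and Firefox Nightly expect:
// an `entries` array with a nested `buffer` layout dictionary. Passing the
// old shape is what produces "required member entries is undefined".
const newDescriptor: Record<string, unknown> = {
  entries: [
    { binding: 0, visibility: GPUShaderStage_COMPUTE, buffer: { type: "storage" } },
  ],
};
```

In an actual app, either shape would be passed to `device.createBindGroupLayout(...)`; only the second one satisfies browsers tracking the current draft.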


They would show real usage performance though.

Still valuable.


The M1 MacBook Pro gets just over 1TFlops.

    best: 1022.61 gflops
    [numthreads(2, 16, 1)]
    compute void main(constant float4[] A : register(u0),
                      constant float4[] B : register(u1),
                      device float4[] C : register(u2),
                      float3 threadID : SV_DispatchThreadID) {
      [...etc...]


There must be some quantization happening here for us to both get the same gflops to 6 significant digits.


I looked into this in the past: it's the timing API (and the fact that I'm not running that many iterations of the kernels). Browsers restrict the timer resolution to prevent side-channel attacks.
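A quick illustration of how a coarse timer quantizes the score. Assuming the kernel is a 1024×1024×1024 matrix multiply (2·N³ FLOPs; the matrix size is my assumption, not stated in the post) and the measured time snaps to a 0.1 ms grid, only a discrete set of GFLOPS values is possible:

```typescript
// GFLOPS = FLOPs / seconds / 1e9, with time rounded to the timer's grid.
const flops = 2 * 1024 ** 3; // 2,147,483,648 FLOPs per kernel (assumed size)
const gflopsAt = (ms: number) => flops / (ms / 1000) / 1e9;

console.log(gflopsAt(2.1).toFixed(2)); // "1022.61" -- the score several posters share
console.log(gflopsAt(2.2).toFixed(2)); // "976.13"  -- the next step down the grid
console.log(gflopsAt(0.8).toFixed(2)); // "2684.35" -- the iPhone XS figure elsewhere in the thread
```

Under these assumptions, any machine whose kernel time rounds to 2.1 ms reports exactly 1022.61 GFLOPS, which would explain identical six-digit scores on different M1 machines.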


My iPhone XS is faster??

    best: 2684.35 gflops
    [numthreads(16, 16, 1)]
    compute void main(constant float4[] A : register(u0),
                      constant float4[] B : register(u1),
                      device float4[] C : register(u2),
                      float3 threadID : SV_DispatchThreadID) {
      uint m = uint(threadID.x);
      uint n = uint(threadID.y);


That's exactly what I got on my air

    best: 1022.61 gflops
    [numthreads(2, 16, 1)]
    compute void main(constant float4[] A : register(u0),
                      constant float4[] B : register(u1),
                      device float4[] C : register(u2),
                      float3 threadID : SV_DispatchThreadID) {
      uint m = uint(threadID.x);
      uint n = uint(threadID.y);


M1 Mac Mini

    best: 1022.61 gflops
    [numthreads(2, 4, 1)]
    compute void main(constant float4[] A : register(u0),
                      constant float4[] B : register(u1),
                      device float4[] C : register(u2),
                      float3 threadID : SV_DispatchThreadID)


What's impressive about this is the high utilisation in this scenario. This is 900 GFLOPS out of a theoretical 2.1 TFLOPS on the M1.

The MacBook that this site lists as a comparison comes, at a minimum, with a 5500M, which has a theoretical peak throughput of 4.4 TFLOPS, over twice that of the M1. Yet the M1 gets almost 9 times better performance in this benchmark. Apparently.

The issue is, though, we don't know whether this is actually about the architecture of the products or if the Safari WebGPU implementation is just not as well optimised for the AMD GPUs that came with the 2019 Intel Macs.

Or I suppose there's also the possibility that it's actually fallen back to the iGPU which actually has a peak throughput of 384 GFLOPS.


I'm impressed by this minimal blogging platform. No author name, no link to the main page with other blog-posts. Just the post. Reminds me of telegra.ph. I like it.


I like it too, but one thing that bothers me about online articles in general recently is the way they hide the publishing date. And if a date is there, it has often been purposely bumped forward over the years to generate more SEO clicks.

Too many times I have tried to look up recent information, only for the article content to make clear it was written a few years ago, judging by its inaccuracies.

Without reading the article content, I haven't found a way to discern which articles were actually published recently. Google Search's time filters do not seem to work as well anymore, as sites have found a way around them. Google Search quality in general has been going down the drain recently, and I ranted about it in this post: https://news.ycombinator.com/item?id=26202941


oh this is an interesting concern. Perhaps I should expose an endpoint to allow checking the "last edit" date of all notes. I'll need to think through the privacy/security concerns about that, though


are you the maintainer of the platform? If so, props! I also dig the minimal vibe to it, it's a really interesting approach.


This is one of the reasons I added archiving to a browser we’re building[1]. I wouldn’t archive when I had to do it manually. The internet is way too ephemeral as an information source. Let me know if you want to try it at some point (super pre launch).

[1] CloudSynth.com


Unfortunately, when I loaded it there was a major screen repaint about half a second into the load. It would be more minimal if it didn't have that.


I saw that as well. I think it is loading a font.


Rather, the page includes the Markdown of the article, which is then rendered to HTML in client-side JavaScript, after page load. This is not a good approach: it means that everyone will always get a complete rerender after the page finishes “loading”, as it switches from monospaced Markdown to the final HTML. Doing it before the load event would be a little better, as it could make it so that many people wouldn’t get the flash of Markdown source, but the real solution is to please render the Markdown on the server and serve real HTML instead of <pre>[Markdown]</pre>.


It's minimal-looking, but the formatting is broken without Javascript.


yea, I haven't spent too much time reducing the markdown rendering overhead (I just pulled a library off the shelf).

I basically just use this site to upload static text files. The same text can be rendered a couple of ways (e.g. as HTML, raw, or as code with syntax highlighting)


Not sure I understand. Why don’t you run the markdown source through markdown once, then put up the resulting static HTML without any JavaScript?


eh, I'd prefer to save the source of truth when possible. I might want to swap out the rendering library one day for something prettier


What's broken? It looks fine with noscript blocking everything.


Comparison screenshot, Firefox on Windows: https://i.imgur.com/NmToofC.png


It renders Markdown in the browser?? That can’t be, can it?


Pet peeve is using "here" or "this" as link text. I can't believe in tech people still do this.


Why is this so bad, and what do you suggest as an alternative?


Original: If you have Safari, you can try it [here].

Better: If you have Safari, [you can try it].

Best: [Try it on Safari].

--

Original: The tuning code can be found [here]. The basic idea is...

Better: The [tuning code is available]. The basic idea is...

Best: The basic idea of the [tuning code] is...

--

See how explicit it is? How it reads better, but also how anyone scanning the blue links immediately gets an idea of the link's destination? Do you prefer reading things in two steps or one? This way, the link text informs, rather than being a reference back to previous text.


Say I’m using a screen reader and am skimming through the links. I want to hear something like this:

“Link, WebGPU demo. Link, tuning code.”

Instead, I will hear this:

“Link, here. Link, here.”

This is not very useful.


No, you'll hear "Try the WebGPU demo; link, here".


I said skimming through the links. When you’re reading the whole text the difference isn’t so big (though I’d still prefer “link, try the WebGPU demo” to “try the WebGPU demo, link, here”), but it’s common to navigate through documents by heading or by link. (Related: all desktop browsers support navigating through the document by link/form field with the Tab key.)

Imagine also reading through the whole document and then deciding you want to click on one of the links. You’ll execute an action like “back to last link”, and the screen reader will announce “link, here”. Now which link was that? And so you’ll have to ask it to read the whole line before you find out, which really slows you down.


Explain. I like to do it


Numerous reasons: https://www.wordpress-web-designer-raleigh.com/2015/04/4-rea...

It's indirect communication. It reminds me of Windows "OK" boxes vs. OS X "Save". The linked text should describe the contents of the link, like a label. It's more readable (since link text is styled differently) when skimming the text and removes an extraneous phrase from the prose. Have you ever seen an NYTimes article use "here" as link text? No, they just write, and link the relevant words.


Thanks, that’s a good article and apologies for the belated reply.

I still think there is a place for “click here”. I think of it as introducing a little stupidity to the page, a very easy action that helps ease people’s minds; too much information density can get overbearing. I might sound crazy, but I think there’s a place for dumbing down at times! Of course, it should be used very selectively. I would also be annoyed if Wikipedia were full of click-heres. For me, one or two per page maximum.


It’s the demo website for Jott https://github.com/bwasti/jott


I'm a cynical bastard, but the M1 MacBook Air is the real deal. I don't recall being so pleased with a laptop in a long, long time. I'm yet to find something I could possibly gripe about.


At first I was skeptical, but I've seen nothing but good things about the M1. I'm now desperately waiting for the 16" M1. My 2015 Pro is starting to show its age.


Same here, really hoping the removal of the Touch Bar is not a myth. A MacBook with the M1, a good keyboard, and no Touch Bar would definitely be the best laptop on the market for a looong time.


I'm never buying another MacBook with a Touch Bar. Ever. I've learned that lesson the hard way. I refuse to pay $3,500 and have to put up with something actively hindering my work.

Getting rid of the Touch Bar might be realistic. If only it were realistic that they would add a USB-A and HDMI port back to it too...


The HDMI is apparently coming back, according to Ming-chi Kuo. No word on USB-A though.


Out of curiosity, if they kept the touch bar but as an addition to a row of function keys, rather than a replacement, would you still have an issue with the TB?


Personally I would rather it was gone. My fingers are big enough that they brush the edge of the touchbar while I type and volume/brightness/whatever just flaps endlessly while I'm trying to get work done. I have to use an external keyboard with that thing.


Btw you can lock the mode of the touch bar and customize/remove the items.


I doubt that would happen if there were a row of physical half-height function keys between the numbers and the touchbar. But I also doubt they would make that laptop, it would require a smaller touchpad and most people wouldn't want that, myself included.

Or 4:3 screen, but we can keep dreaming there...

My hope is that they offer the touchbar as a configurable option on all MacBooks going forward, at least until they realize no one is buying the ones with the touchbar included. Maybe some video editors like it? It's a failed experiment from where I'm sitting.


Or they could, you know, just do the obvious thing and give it a touch screen like their other devices.


For me, I think it's a novel concept, but it's rarely used in practice. I like having it as a scrub bar for video/audio apps or for fine-tuned volume control and it's nice for emojis. But other than that, I rarely interact with it.


Every now and then I overcome decades of muscle memory and push the buttons on a modal from the touch bar.

The ironic thing is that I would use it a lot, if, and only if, there were any browser which displays tabs as favicons on the touch bar.

But there just... isn't. It's completely baffling, the Safari preview is useless and Firefox has a 'touch to select address bar' button which I always forget is there.


I've found it occasionally helpful when using a fullscreen app, such as having access to Camera/Mute controls while on a video conference without having to remember the hotkey.

But I also wouldn't be particularly sad if it disappeared.


It all seems too good to be true. Memory and SSD upgrades aside, the M1 devices seem to be fantastic value for money. Also waiting for a 16" M1; I decided to order a mini in the meantime.


It definitely is. I'm daily driving a MacBook Air, something which I never thought I'd say. Even beyond the raw CPU performance, I'm not sure what part of it is causing this, but it's just so much smoother for a variety of misc things. I suspect either the tighter coupling with the integrated GPU or the unified memory.

But many little things, like the smoothness of the OS, or the way that the screen wakes from sleep instantly are just superior to my much more expensive, and power hungry desktop. Of course, my desktop can still cream it in a GPU workload but for programming, it's amazing.


It's certainly a capable SOC, but non-replaceable solid-state storage is a non-starter for me.


I would have said the same a couple years ago, but nowadays I can get an external 5TB solid state drive for a few hundred $$ that easily fits in my laptop case. As long as my OS runs from the internal disk, I don't really need much more. Everything in the cloud anyway. :)


My concern is more with the longevity of the drive. SSD failures are only a matter of time, and these new MacBooks are going to be bricked once they hit their TBW limits.
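
For a very rough sense of scale (the numbers here are hypothetical, not Apple's actual ratings or anyone's measured usage):

```python
def years_until_tbw(tbw_terabytes: float, gb_written_per_day: float) -> float:
    """Back-of-envelope estimate of time until an SSD reaches its rated TBW."""
    tb_per_day = gb_written_per_day / 1000.0  # drive specs use decimal units
    return tbw_terabytes / (tb_per_day * 365.0)

# A hypothetical 600 TBW rating at a heavy 50 GB/day:
print(round(years_until_tbw(600, 50), 1))  # → 32.9
```

Whether that's alarming depends entirely on the real GB/day figure, which is exactly what the swap-write reports are arguing about.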


There’s a very good chance that the MacBook is outdated or otherwise broken by the time the SSD fails. I’ve been using SSDs for about 10 years, and I’m not the heaviest user, but so far I’ve never had an SSD fail from wear. Thumb drives, sure, but not SSDs yet.


I have no idea why you're downvoted. This is absolutely a valid concern and one of the biggest reasons why I'm staunchly anti-Apple.


I wouldn't be surprised if Apple deliberately excludes a feature, because if they made the device people actually want, nobody would need to buy again.

You see it in the iPhones and their variant SKUs, and Canon and Nikon were doing the same back when there wasn't competition eating their lunch.


There are just going to be more and more pushes to turn your laptop into a subscription service. Pay apple $100/mo and always have the latest hardware sent to you on release, always have the latest version of the OS, always have a store you can drop off a broken laptop and walk back home with a perfectly working one that day no questions asked, etc.


I think it will be this for electronics, and hopefully Tesla will be able to provide this for cars. If I can't own it fully, at least make the subscription service completely hassle-free and worth it.


I've been waiting for this since they first announced the iPhone Upgrade Program. Sign me up.


In fact, this answers the OP's point about hiding features. Business is business, and they have to survive in some form. A one-off purchase is, well, one-off. If there's an ongoing business relationship, they can give away the razor and recover the cost on blade sales, etc. And since software needs updating anyway, while hardware has been good enough for quite a while, a software subscription model suits them better.

Of course, the problem is whether they can pull it off...


Enterprise IT works that way. You basically lease with option to buy servers/computers in bulk and refresh every 3-5 years.


That sounds like the absolute worst thing I can imagine ever happening to computing.


Completely agreed. I've owned a lot of laptops and other devices... There's no question at this point that the M1 MBA has become my all time favorite.

The only complaint I have is that you can't run two external monitors off of it, but I have no doubt that the next generation will address that.


A lot of people will still prefer/need two monitors I’m sure, but if you haven’t tried an ultrawide yet I would encourage you to. I would always pick an ultrawide over two monitors.


Any recommendations on software to tile the display? One reason I like two screens is that maximizing or full-screening a window still leaves me with the other display for other windows.

A single ultrawide would be way better if I could count on having two virtual displays within it.


I use BetterTouchTool for this: it lets you set up snapping, so dragging a window to one side, for example, snaps it to half the width, and you can set up keyboard shortcuts too. It can also do a million other shortcut-type things which I barely scratch the surface of. Cool app!



Alternatively, there's a Hammerspoon package that supports this.

https://github.com/scottwhudson/Lunette

These bindings match Spectacle but you can map to Rectangle's instead.


https://github.com/koekeishiya/yabai is awesome, way more extreme than the other stuff. But it's super configurable and I love how it auto fills windows (but you may not, which can be configured)


I've been using "Magnet" for tiling. It's in the Apple App Store. My use case: a 15" laptop display and a 32" 4K display. I find it sufficient screen real estate. I sometimes think about getting an additional display but can't quite justify it.


I use Magnet too... simple to use, reliable, can't fault it.


I love https://manytricks.com/moom/. Being available in the app store is a major plus from my point of view. And it has the usual tricks like keyboard shortcuts, etc.


Here's a bit of deep Moom lore.

For a 21:9 screen, the max grid size of 25 isn't good enough for me, I specifically wanted 36 so that I could make three 'panels' at 11-13-11 resolution.

Found out, with a bit of emailing back and forth, that you can raise it by pasting this into a terminal:

    defaults write com.manytricks.Moom "Grid: Maximum Dimension Size" -int 36

The grid display ends up being a bit small, but I only tinkered with it long enough to assign 1, 2, 3 to the grids; 4 and 5 to left-and-middle and right-and-middle; and q w e, a s d, z x c to the top, middle, and bottom of each of the three panels.


I'm a big fan of Spectacle App https://github.com/eczarny/spectacle.

*Edit:* Apparently Spectacle has been discontinued.


Rectangle, I think, is very similar and actively developed.


yep, I moved from Spectacle to https://github.com/koekeishiya/yabai and it's amazing.


Still, it works perfectly fine on Big Sur, both Intel and M1 (in my experience).


I've been using Amethyst [0]. It works, is simple enough to configure and get started and has most things I need.

I found the default shortcuts to be the wrong way around for my cognition so I swapped them but otherwise it's been good. I mostly use the fullscreen (usually for the laptop's screen) and "3-column with main in the middle" layouts.

[0] https://github.com/ianyh/Amethyst


While not a tiling WM, check if Divvy could suit your needs. You can set up hotkeys to resize windows to parts of the screen. The screen is divided into a grid.


Try Divvy. I mapped its hot key to option space and then you can just move the currently focused app somewhere on its grid.


I heavily use Moom for this. It's $10 on the App Store IIRC. It's absolutely fantastic and I would buy it again in a heartbeat.


If only Apple didn't completely nerf text rendering on non-Retina screens by removing subpixel antialiasing.


I got around that by using Switchresx to supersample on non-Retina screens. It’s not perfect but looks better than jaggies


I was wondering why my brand new dell display looked like shit on my M1. Thanks for the heads up!


I love my 55" 4k curved TV.


Using a DisplayLink adapter, you can drive a second external display. Still silly that these machines can't do it out of the box.


Bingo. I do this. It's a little buggy every now and then, you lose unlock with Apple Watch, and there's a small performance cost on one core of the system, but they just shipped a native driver too.

Compared to my coworker's 10-core i9 build with over 64GB of RAM, my base model MacBook Air builds our Node app in half the time.


I kinda like that I can run my primary display at 164Hz via a dedicated Type-C to DisplayPort adapter. Too bad DisplayLink can't go beyond 60Hz on a 2K screen.


Just be aware that DisplayLink uses the CPU to do this, so it will increase overall CPU utilisation, and you'll notice dropped frames occasionally.


There are also some frustrating bugs with USB-C / thunderbolt connections like the LG UltraFine 4K monitor that have been improved but not yet fully resolved. This isn't unique to M1, but does seem to be new to the more recent MacBook models (possibly related to the T2 chips).


As far as I know that's not an M1 MBA limitation, it's an M1 limitation: the M1 Pros are also limited to one external monitor. Or am I wrong?


It's like that on the server side too with ARM64. Using AWS graviton 2 instances has been incredible--faster, cheaper, and just better in every way for most of my workloads. I think ARM caught nearly the entire industry flat-footed and asleep. It's going to be wild to see all the big PC and server manufacturers, the cloud providers, etc. scramble to have similar ARM offerings. I can't believe Amazon is a year into a solid v2 of ARM instances, while Azure and Google don't even have a beta or v1 on the horizon.


We're busy moving our AWS stuff to Graviton, too. I'm not directly involved, but my understanding is it's been all good (except that not every instance type is available with Graviton). We're going to save a whole lot of money (we spend several million per year on AWS) and as far as I know there's been no performance penalty or any kind of issue.


I would love to do the same, but I'm based on Lambda, which I believe is x86-only (would love to be proven wrong!)


One day, hopefully soon, you'll wake up to an email from Amazon saying your Lambdas are on ARM, there's nothing for you to do, and by the way your bill is 20% cheaper. They've publicly mentioned an intention to move all of AWS internals to ARM, starting with Elastic Load Balancers. I'm sure Lambda will be a high priority for them.


That seems possible for, e.g., JS Lambdas; however, mine are compiled Rust, so I don't think I'd be eligible for that :)


I wish they would ship a Surface Pro like form factor with touch and tablet functionality. iPad Pro with MacOS basically.

Remote development is the name of the game for the backend, for frontend the M1 chips seem capable enough and can fit the form factor easily.

When I'm doing focused work I use an external screen/keyboard/mouse, the only times I use laptop ones are when I'm trying to get work done in a coffee shop or something like that - I even travel with BT keyboard/mouse because the laptop ones make me unproductive.

And for media consumption and casual browsing, couch surfing, travel, bed shopping - the laptop form factor is clunky, a tablet is a much better device.

Maybe I should just give up and buy both, but I wish I could have one device for everything (even if it costs as much as both of those combined)


Exactly this! An iPad with bluetooth keyboard (I cannot bring myself to drop £349 on the magic variety) is frustratingly close but it’s just not quite a ‘proper’ computer and I end up carrying the keyboard everywhere it’s so useful.


FYI the magic keyboard for the 11” iPad Pro is currently on sale for $199 (normally $299). [1] I got one recently and it’s great. So much better to type on than the butterfly keyboard on my 2017 MBP.

1: https://www.amazon.com/dp/B0863BQJMS/


Do get the magic keyboard. It completely transforms the iPad. It is ridiculously expensive for a keyboard but still it's worth it.


The issue for me is cost. iPad Pro + Magic Keyboard is more expensive than a MacBook Air... and I can't write code on it effectively or multi-task...

I do like the iPad Pro a lot though.


Can't Zoom (not that I like that software, but reality bites). Need a laptop... maybe they can fix that so the iPad Pro can handle that one task, if one wants it to.


I do hope Apple will release a v2 where the iPad can be put in portrait orientation with the Magic Keyboard (and enable vertical stack for multitasking). So close yet so far.


I highly recommend the magic keyboard. The trackpad feels just like my MacBook Air.


For me it would be my only device (I never make calls anyway) if it had a full OS or, maybe, if iOS could run full dev environments without killing/freezing background processes. Not seeing that happen though...


I am overall happy with mine. It is very quick, and the battery life is quite exceptional.

> I'm yet to find something I could possibly gripe about.

I can. It does not much like my Philips 328E1 monitor. When it wakes up the display, sometimes it renders all the colors as sorta-inverted. Like, not actually inverted, but maybe just one color channel is inverted. Repeatedly sleeping & waking the display sometimes fixes after a few tries. Rebooting always fixes it.

Also, on a fresh login, after I reach the desktop it will sleep the display maybe 5 seconds later for unknown reasons. Usually it wakes up okay after a few more seconds, and then things are fine from then on.

I tried all sorts of things, then gave up. Now I don't let the display sleep, so I manually turn it off when I walk away from my desk. It makes switching to my other MacBook a little more hassle because I have to do it manually, but I can live with it.

And before you ask if there's something wrong with my display... it works great, 100% of the time, never a single glitch on my work computer, which is a 16 inch MacBook Pro.

Sometimes I regret getting the M1 and not waiting for them to work out the bugs.


Have you tried changing the cable/adapter out? USB4/Thunderbolt is an incredibly complex protocol now and cables are now active with built in microprocessors sometimes even running full OSes inside the cable. It's possible that could be the cause. You didn't mention what type of monitor connection you're using.


>I'm yet to find something I could possibly gripe about.

Have you experienced the accelerated SSD degradation? An odd story that popped up recently.

(Not the greatest link....https://www.gizmochina.com/2021/02/24/apple-m1-mac-users-fac...)


The one thing I'm concerned about is the reports of potential SSD overwear, e.g.: [1].

1. https://linustechtips.com/topic/1306757-m1-mac-owners-are-ex...
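
Those reports are typically based on smartctl (from smartmontools) reading the NVMe "Data Units Written" counter; per the NVMe spec, one data unit is 1000 × 512 bytes. A small sketch of the conversion (the unit size is from the spec; the example count is made up):

```python
NVME_DATA_UNIT_BYTES = 512 * 1000  # NVMe spec: one data unit = 1000 512-byte blocks

def data_units_to_tb(data_units: int) -> float:
    """Convert smartctl's NVMe 'Data Units Written' count to decimal terabytes."""
    return data_units * NVME_DATA_UNIT_BYTES / 1e12

# A made-up counter reading of 137,000,000 data units:
print(round(data_units_to_tb(137_000_000), 1))  # → 70.1
```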


I think that's mostly FUD. It's just one tool that's reporting those reads, and the reality is that with modern storage hardware, the numbers they report can be very misleading because of additional layers between the OS and physical drive.

In practice, I can't imagine the SSD load to be particularly worse than what the iPhones endure, and I have yet to hear anyone complain about their iPhone dying from their flash storage being overused.


No, it's not FUD, and I can absolutely prove it to you. Just open Activity Monitor -> Disk and watch the disk writes. Then check how the SMART report changes, and you'll see it exactly matches what macOS itself reports.

macOS relies heavily on swap, at least on M1 hardware, for absolutely no reason.
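
If you'd rather watch swap from a script than from Activity Monitor, `sysctl vm.swapusage` prints a one-line summary you can parse. A sketch (the sample line mirrors typical macOS output; the values are illustrative):

```python
import re

def parse_swapusage(line: str) -> dict:
    """Parse the one-line output of `sysctl vm.swapusage` into byte counts."""
    units = {"K": 2**10, "M": 2**20, "G": 2**30}
    return {
        key: float(num) * units[unit]
        for key, num, unit in re.findall(r"(total|used|free) = ([\d.]+)([KMG])", line)
    }

sample = "vm.swapusage: total = 2048.00M  used = 1024.00M  free = 1024.00M  (encrypted)"
print(parse_swapusage(sample)["used"])  # → 1073741824.0
```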


Why are people downvoting this without saying anything?


It's definitely not FUD, but it's probably a bug. I'm surprised at how many people are in complete denial that this could be happening and are assuming that people aren't understanding the tooling or something, instead of the obvious explanation, that it's actually an issue.


I was just looking at Disk tab of Activity Monitor, and the numbers just don't make any sense, unless Mac apps are all throwing gobs of bits at the disk for no particular reason.

For example, in two days of uptime it says the SafariBookmarksSyncAgent has written 1.94 GB to disk. I haven't added any bookmarks to my machine in those two days and I haven't used Reading List, so how is that even possible?


As others have stated, it's possible that SafariBookmarksSyncAgent has a process that regularly writes to memory (object creation and garbage collection) and that memory is being paged to disk.


A bug? Why isn't that possible? You seem to have excluded the possibility of a bug despite having evidence that one exists.


I don't think ChrisLTD is excluding the possibility of a bug - he is just asking how it's possible that a process has written so much to disk.


Yes, thank you.


I'm perplexed at the hot/cold stories of the M1.


Me too; I ultimately ended up quite frustrated with the laptop. My suspicion is that it ultimately comes down to your comfort with macOS and your workflow. I spent a long time trying to get my M1 MacBook Air to replicate my workflow on Linux, but it was never _quite_ there. Sure, the thing is zippy, but that doesn't mean much to me without a proper package manager or an open operating system.

With that being said, I'm an old-school curmudgeon when it comes to computers. I'm hard to please, and I don't necessarily hold it against Apple that they didn't make "the perfect computer".

As an aside, is anyone interested in an $850 8GB MacBook Air? Lightly used, with only 70 TB written to the drive.


I'm curious, what is your workflow?

I'm asking because I'm assuming your real issues are with your stated reasons, since MacPorts is a "proper package manager" (being ported from one of the BSDs) and if you don't like that one, Brew is certainly popular. And I've never directly benefited from Linux being an "open operating system", since I don't write code that interacts with anything lower-level than the C API, but macOS' Darwin kernel and many of the binaries are open source.

I'm kind of old-school (my intro to Unix was a DECStation), and I find macOS to be plenty Unixy. If I'm not using Xcode I'm using Emacs (in VI mode). I've never said "I wish I had <Linux feature>"; actually, it's been rather nice that my Wifi doesn't break, I don't have to deal with PulseAudio, it goes to sleep--and wakes up!--when I open and close the lid, the UI is unified, networking is easy to use, etc. However, if you run the Linux GUI applications on macOS it's a klunky experience, so you benefit from finding native applications. Also, Docker wasn't very pleasant, but that might have just been Docker; fortunately I've only had to use it for one project.

As someone who loves the macOS + Unix experience, I'm just curious what workflow it doesn't work well with.


Well, I write quite a bit of GTK, and as you mentioned that's not necessarily a first-class experience on MacOS, which I expected and was fine with. It was quite a bit more frustrating to get the Rust toolchain working though, and from what I understand it will be a while before they iron out the issues on the M1. Another papercut. Then I had issues with binaries disappearing out of nowhere, similar to the aforementioned git disappearance. My Rust programs automatically were removed from PATH, and I still don't know what causes the issue. As for the package managers, I think Brew and Macports are both fine pieces of software, but they don't even come close to how robust and compatible something like pacman is. Working with the App Store is a frustrating experience, and I'd prefer to have all my software managed in one place.

Maybe it's just different strokes for different folks, but I'd much rather just clone my dotfiles and have a Linux workspace up and running in ~10 minutes tops. Moving my workflow over to MacOS feels like trying to board a plane while it's taking off, and it doesn't bode well for my productivity when I can't rely on my tools even showing up in the first place.


My experience (as a user) is that GTK on macOS is functional at best, and I find it an unpleasant experience, so macOS is definitely not ideal for that. No experience with Rust, but it isn't the path the tools expect, for sure. I've never had a problem with PATH not working, maybe the new shell in Big Sur is mostly-but-not-quite like bash? You could try changing your shell to bash, if you haven't already.

If you really want an apt-get (never used pacman) kind of everything-repository experience, though, you're going to be disappointed on any system that doesn't prioritize a centralized repository of software. In practice, this eliminates a commercial OS, and probably commercial software, since there's no effective way to get a central repository. The App Store is the closest, but then you get people complaining you're the gatekeeper, or you get a free-for-all like Google Play. I think Linux manages only because the number of people that use it are small enough that bad actors go elsewhere.


Well, hopefully you understand where I'm coming from then.


Yep, thanks!


That doesn't seem to have anything to do with the M1, though? It just sounds like you're uncomfortable with macOS. A new processor isn't going to change the fact that Macs run macOS, and if you don't like macOS, you're not going to like the Mac.

As a chip, the M1 is absolutely amazing.


As a chip, the M1 is absolutely amazing

Sure, it is an amazing chip. But the delta is only large compared to the rehashed 14nm Intel chips that Macs shipped with in the last few years. Recent Ryzen-based APUs are in a similar ballpark as M1's performance-wise.

I think that a lot of Mac users overestimate the M1, since their only experience with x86_64 is low-point Intel and not peak AMD.


Does Ryzen compete on gigaflops-per-watt, though?

You know Intel's in hot water when AMD makes a better Intel chip than Intel does...


I mean, I mostly agree with you. It is an issue with MacOS, and the M1 is a pretty nice package for what it offers. If I could reliably run Linux on it, I'd be a lot more excited. Hell, if I could even drive my monitors with it I'd be mostly satisfied. I guess I'm just one of those "niche users" at the end of the day.


Me too, I ultimately ended up quite frustrated with the laptop.

Same here. I used Macs since 2007 and the M1 MacBook Air basically ended my Mac tenure. The M1 felt like yet another step towards making the Mac more closed. The M1 is another case of Apple breaking a lot of backward compatibility and using the community to fix it for free (who needs a working Fortran compiler, to compile their BLAS and other numeric libraries?). Even though the M1 Macs feels a lot faster than the Intel Macs, macOS still feels tediously slow compared to Linux. Then there are a lot of low-level tools on Linux which do not have good equivalents on macOS (e.g. perf). And then there is the issue (unrelated to M1) that every good MacOS application is slowly switching to a subscription model, which only extends the feeling that I don't own my machine anymore.

I returned my M1 Air and bought a Ryzen-based ThinkPad, upgraded it to 32GB RAM, and I am very happy with it. I am now selling my old Intel MacBook Air, which was my last Mac.


> The M1 felt like yet another step towards making the Mac more closed

There's a certain amount of hubris in this, in that (at least until AMD came along), pretty much every single Intel chip has only made Intel money, while ARM is broadly-licensed and is ripe for a surge. All Apple did was create an ARM-compatible chip, which is how someone already got Linux running on it: https://corellium.com/blog/linux-m1

Here's ARM themselves on their licensing model: https://www.arm.com/why-arm/how-licensing-works

Now, which one is more "closed", really? Near-absolute Intel hegemony that only benefits Intel, or ARM, which seems to benefit anyone who wants to tackle creating an ARM chip?

Honestly (disclosing my bias here), I hope x86-64 dies in a fire in 5-10 years. It's had enough time in the limelight, Intel has rested on its laurels of late (arguably holding back progress), and its technical flaws run quite deep (another reason why the M1 seems impressive; ARM is simply a better-designed architecture from the ground up)


Profiling tooling on macOS is fairly robust, what features are you missing in general?


I'm interested in the laptop, how can I get in touch with you?


Try building node 10!

But seriously, I've had a very poor time getting my M1 Mac mini to cooperate with the aging tech stack at work, especially since it involves Docker.

I've had to resort to running my old laptop and SSHing in to get things working until I can put our web app onto a modern set of tech...


Have you tried the Docker for M1 preview? It's been great for me!


Yep, it's been terrible for me. It literally requires a full Docker service restart after every code change.


It's actually unreal. The only thing not great about it is having only two ports. When I'm trying to build to 3-4 devices, it's a pain. But I found a good USB-C hub coming out in May, which should hopefully help. Worth it to escape the Touch Bar and the fan.


I agree, but I think it's sad that we have to "escape" the Touch Bar. It drives me insane and I'm considering switching from a Pro to an Air just to get rid of it. But that doesn't say anything good about Apple.


Agreed. It’s crazy that people on HN (you, me, and other commenters) are considering an Air just to get away from the TB. Airs only have one free port (one has to be used for charging).

It really says something that HNers are willing to give up ports and speed (and possibly SD, if rumors are true) just to get away from the TB.


One can be used for display _and_ charging, right?


Perhaps if you have a certain kind of monitor, presumably an expensive one? I have a no-TB 2017 MBP that has two ports, and I use one to charge and the other to connect a monitor over HDMI. That means I have zero free ports.

And since I run in clamshell mode, I can't even unplug the power cord temporarily, as the computer will only run in clamshell mode if it is plugged into power.

I do have an Anker mini-dock dongle, but it gets very hot when I use it, so I only use it when necessary. Also, it annoyingly cannot be used to charge any devices, so I can't even plug my iPhone into it to charge. I'm sure there are some other docks out there that are better, but when I was looking in 2017 it wasn't clear which ones those were (aside from the $200+ ones, which I wasn't about to purchase).


This is the one I am waiting for: https://eshop.macsales.com/shop/owc-thunderbolt-dock but ya, not cheap...

This one a bit cheaper: https://eshop.macsales.com/shop/owc-thunderbolt-hub


Yes I do that


What type of monitor do you use? I just did some quick googling, and it looks like they're 2x as expensive as monitors that just have HDMI. It's possible that they're higher-end in other ways (resolution, brightness), but I paid about $120 for a 24" Samsung a couple years ago, and it looks like the cheapest Thunderbolt/USB-C monitor I can get is $190. But that's only 21", which is pretty small.


I was an Air user for 10 years (2010 - 2020). Got the M1 MacBook Pro with TouchBar this time around and love it, and wouldn't go back to Air (but haven't tried the Air M1). Also got an M1 Mini and love that too.


I have friends who have both (being developers), and honestly, performance between the two is staggeringly similar.


Good to know, thanks! The reason why I went MBP was that I was using a temporary MBP w/touchbar and grew to love the touch bar.


I'd buy it today if it had a 14'' or 15'' screen.

I love the portability, but I just can't work on 13'' anymore.


Try attaching your printer and printing something (hint: no drivers).


My nearly decade old HP laser printer works just fine with M1 - it talks AirPrint, macOS bundles a PCL driver, and HP have released drivers for M1 too...


Well my Samsung (now HP) does not. No drivers from HP, nor Apple.


These comments are just too generic and lack detail. Be cautious of advertising.


Damn you got me. It's true, I created an account 7 years ago, slowly accumulated 6k+ karma commenting on a variety of topics, all so that I could shill the M1 laptop in 2021. I planned it all.

And I would have gotten away with it too! If it hadn't been for you meddling kids...


Apple, one of the most recognizable brands in the world, pays people to go and tell the nerds at Hacker News that the MacBooks are good?

That's way less probable than the machines being actually good. Which they are.


And also trusts the nerds not to immediately post the email offering them this deal to their blog for the street cred.


This kind of post is specifically discouraged in the HN guidelines.

> Please don't post insinuations about astroturfing, shilling, brigading, foreign agents and the like. It degrades discussion and is usually mistaken. If you're worried about abuse, email hn@ycombinator.com and we'll look at the data.


It's not a new account. The amount of praise is too much to pull off without being suspicious, I think. A few fake accounts will probably just get accepted, but my reasoning is that the more you use one, the faster somebody finds something wrong.

