
Does anybody know if this is better than whatever Google Meet is using? On choppy, near-unusable slow internet, Google Meet still fulfils its purpose on audio calls where all other competitors fail (tested e.g. while on a remote island in the Philippines with very bad internet). However, Google Meet's tech is not published anywhere afaik.

We can hardly try that out if this PR piece doesn't contain any code. We can judge it about as well as you can from the couple of examples they showed off.

And the main question: is it overall better (faster and less power-hungry) than a powerful reduced-instruction-set ARM-based laptop CPU that is right around the corner (Qualcomm)? Guessing not...

There's still an enormous number of apps, both new and legacy, that are x86-exclusive. So the chip not beating the best of ARM in benchmarks is largely irrelevant.

A Ferrari will beat a tractor on every test-bench number and every track, but I can't plow a field with a Ferrari, so any improvement in tractor technology is still welcome even though tractors will never beat Ferraris.

I hear this argument, but I don't really believe it.

If you're talking about apps from before, say, 2015 (10 years ago), they can be emulated on ARM faster than they ran natively. That rules out 95% of the backward-compatibility argument.

Most more recent apps are very portable: they were written in a managed language running on a cross-platform runtime, and the source code is likely stored in git, so it can be tracked down and recompiled.

Over 15 years of modern smartphones have ensured that most low-level libraries support ARM and other ISAs too, as being ISA-agnostic has once again become important. Apple's 4-year transition isn't to be underestimated either. Lots of devs/creatives use ARM machines and have ensured that pretty much all of the biggest pro software runs very well on non-x86 platforms.

Yes, some stuff remains, but I don't think the remaining stuff is as big a deal as some people claim.

Amazing that you can look at an ISA like ARM and say "reduced instruction set". It has 1300+ opcodes.

I used to think this too, but apparently RISC isn't about the number of instructions, but the complexity or execution time of each; as https://en.wikipedia.org/wiki/Reduced_instruction_set_comput... puts it,

> The key operational concept of the RISC computer is that each instruction performs only one function (e.g. copy a value from memory to a register).

and in fact that page even mentions at https://en.wikipedia.org/wiki/Reduced_instruction_set_comput... that

> Some CPUs have been specifically designed to have a very small set of instructions—but these designs are very different from classic RISC designs, so they have been given other names such as minimal instruction set computer (MISC) or transport triggered architecture (TTA).

It seems that "RISC" has just become a synonym for "load-store architecture"
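To make the load-store point concrete, here's a toy sketch (hypothetical register/memory names, purely illustrative): on a load-store machine, arithmetic only ever touches registers, so a single x86 register-memory `add` becomes an explicit load, an ALU op, and (if memory must change) a store.

```python
# Toy model of a load-store ("RISC-style") machine: arithmetic is
# register-to-register only; memory is reached solely via load/store.

memory = {0x100: 7}            # one word of "RAM"
regs = {"r0": 0, "r1": 5}      # two "registers"

def load(reg, addr):           # LDR reg, [addr]
    regs[reg] = memory[addr]

def store(addr, reg):          # STR reg, [addr]
    memory[addr] = regs[reg]

def add(dst, a, b):            # ADD dst, a, b -- registers only
    regs[dst] = regs[a] + regs[b]

# x86 (register-memory style) could do this in ONE instruction:
#     add r1, [0x100]
# A load-store machine needs three:
load("r0", 0x100)              # 1. fetch the operand into a register
add("r1", "r1", "r0")          # 2. operate on registers
store(0x100, "r1")             # 3. write back (only if memory must change)

print(regs["r1"])  # 12
```

Whether each step "performs only one function" like this, not how many opcodes exist, is what the Wikipedia definition above is getting at.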

Non-embedded POWER implementations are around 1000 opcodes, depending on the features supported, and even MIPS eventually got a square-root instruction.

AArch64 looks a lot like x86-64 to me: deep pipelines, loads of silicon spent on branch prediction, vector units.

At best ARM is regular rather than reduced

What are you guessing from? Historically, generation for generation, x86 is good at performance and awful at power consumption. Even when Apple (not aarch64 in general, just Apple) briefly pulled ahead on both, subsequent x86 chips kept winning on raw performance, even as they got destroyed on perf per Watt.

>kept winning on raw performance

The 13900K lost a couple of % in single-thread performance, which led to the 14900K being so overclocked/overvolted that it ended up useless for what it's made for: crunching numbers. See https://www.radgametools.com/oodleintel.htm.

IIRC they claimed that it supposedly is

Would be great if true! Competition is always good for us consumers.

For Mac I've settled on Phoenix Slides https://blyt.net/phxslides/ (don't be afraid of the oldschool website design).

It's open source with a long history, extremely fast, handles 4K/5K monitors and color management correctly, and also supports trackpad gestures on Mac. It replaced Xee3 for me last year.

Meanwhile my Intel i5 MacBook Pro 13" from early 2020, which was around 1.8K when bought new, CAN DO IT! So it was better in the past!

Apple downgraded multi-monitor support so that you were forced to buy an M1...M3 Pro chip, which costs substantially more, just to be able to drive more than one monitor.

A shame that they still haven't fixed clamshell mode on the base M3 MacBook Pro.

Hopefully Apple wakes up now that Qualcomm X Elite chips will soon flood the Windows laptop market (and they can all drive at least 3 external monitors).

I would use it if we could do a (best-effort) import of existing bash scripts into Amber.

After watching the OpenAI videos I'm looking at my sad Google Assistant speaker in the corner.

Come on Google... you can update it.

So why should I buy any Apple laptop with an M3 chip now (if I'm not in a hurry)? lol

If you are not in a hurry, you should almost never buy new hardware, as the next generation will be around the corner. On the other hand, it could be up to 12 months until the M4 is available across the line. And for most tasks, an M3 is great value too. One might watch how many AI features that would benefit from an M4 are presented at WWDC. But then, the next macOS release won't be out before October.

The MacBook Airs with M3 were launched 2 months ago. 2 months is really not that long ago, even in the Apple universe. For sure I'm waiting to see what happens at WWDC!

That’s why a lot of people weren’t expecting this and even questioned Mark Gurman’s article saying it would happen.

They’ve just released the 15-inch MacBook Air; a new one is at least a year away.

or Amazon ;)

we want it to still work...

I started with MS-DOS 3.3, but it's interesting to see that this version is not in the GitHub repo.
