Hacker News | new | past | comments | ask | show | jobs | submit | armanj's comments | login

I'm in Toronto and I can confirm.

For vibe coding stuff, especially when you're outside touching grass, I believe the MacBook Neo is perfect. It fills the gap between remote-controlling from your phone (which is too painful for chatting with an AI CLI) and, well, not having any dev device at all.

Do people really do that when out in the wild?

It's one of the nicest things to do if you love computers, and great for your health compared with staying indoors.

> Could one actually work like this, typing and everything? After my “heart-rate discovery” I decided I had to try it. I thought I’d have to build something myself, but actually one can just buy “walking desks”, and so I did. And after minor modifications, I discovered that I could walk and type perfectly well with it, even for a couple of hours. I was embarrassed I hadn’t figured out such a simple solution 20 years ago. But starting last fall—whenever the weather’s been good—I’ve tried to spend a couple of hours of each day walking outside like this

https://writings.stephenwolfram.com/2019/02/seeking-the-prod...

https://quantifiedself.com/blog/stephen-wolfram-finds-workin...


How do you deal with screen glare?

Moving to the UK is one option. It's been cloudy for about 7 months!

You get an Apple product. At least, for me it was that simple. The ThinkPad I had was pretty high end, and I was using polarized glasses and even a sun shade to work at the park while the girls played. Bought a MacBook and the screen seems to crisply outshine even the sunniest days -- I haven't had to worry about outdoor use since, to my recollection.

+1 to this. Those screens are great in ways that specs just don't show you.

Back when I did much of my work outside, I used a laptop whose LCD had accidental transflective characteristics. In bright sunlight it actually became quite clearly monochrome, with some pixels acting as mirrors and others not, though I don't think the LCD was designed to do that.

I'm not OP but I work outside and use light mode. Macs are generally fairly bright as long as you aren't in direct sunlight. Solarized light mode for the win though.

I love computers but I am not addicted to computers.

LLM addicts do. The AI overlord said to touch grass because it's beneficial, but they've glossed over the main part: "disconnect from everything".

That's kinda sad to be honest. It's better to just disconnect and enjoy the surroundings. You can get shit done and then rest outside without LLM bullshit.

It can't run LLMs very well; you'll be limited to tiny models with no coding ability, and they'll be slow.

i assumed you're connected to the internet and using codex/claude code

I'm pretty disappointed in the Neo's battery life though, it limits a lot how much you can do on the go.

How fast it can recharge is probably the main limiting factor. I’m used to finding power wherever I can from the bad old days, but the M1 laptops have spoiled me.

seems like `karpathy/autoresearch` on steroids

> buy

good luck


How reliable is this uptime number? And why is it sooo different from GH's official status numbers?

Their headline figure is a bit exaggerated: it's derived from the official status numbers, but aggregates across all GH services.

Imagine you run 365 services, and each goes down 1 day a year.

If those all fall on the same day, this would report you as having 99.7% uptime.

If instead each service goes down on a different day, this would report you as having 0% uptime.

Despite the same actual downtime for any given service.
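The scenario above can be sketched in a few lines of Python. This is purely illustrative (hypothetical data; `aggregate_uptime` is my own toy function, not the site's actual methodology):

```python
# 365 hypothetical services, each down exactly 1 day per year.
DAYS = 365

def aggregate_uptime(down_days_per_service):
    """Fraction of days on which *no* service had an incident."""
    all_down_days = set().union(*down_days_per_service)
    return 1 - len(all_down_days) / DAYS

# Case 1: every service fails on the same day.
same_day = [{0} for _ in range(365)]
print(f"{aggregate_uptime(same_day):.1%}")        # 99.7%

# Case 2: each service fails on a different day.
different_days = [{i} for i in range(365)]
print(f"{aggregate_uptime(different_days):.1%}")  # 0.0%

# Yet per-service uptime is the same 364/365 in both cases.
```

Same total downtime, wildly different aggregate numbers, depending only on whether the incidents overlap.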

The truth is somewhere in the middle: GitHub has run degraded for a significant amount of time.

But I don't think it is fair to take an incident like this one[1], where 5% of requests were incorrectly denied authorisation, and count it the same as you would the whole of github being down.

[1] https://www.githubstatus.com/incidents/02z04m335tvv


yeah, it's a hard problem to give people an accurate reliability number.

Rachel famously wrote about this in "Your nines are not my nines"[0].

The truth is, though, that some systems depend on others. Actions being down means you can't merge code or release; but, you know, git operations being unavailable has the same effect. It's meaningless to separate the two.

So it depends on the framing.

[0]: https://rachelbythebay.com/w/2019/07/15/giant/


1. This one counts downtime from any service, so if anything is down or degraded they count it as 100% down, which is harsh.

2. GitHub is doing some classic big-org sneaky things where they don't count degraded service fully. So if GitHub Actions is partially down for most people in a way that makes you say "GitHub is down", there's a good chance that Microsoft doesn't count that, or counts it only partially.


> Github is doing some classic big org sneaky things where they don't count degraded service fully.

An even worse example is Travis CI. For more than a year their CI jobs have sometimes gotten stuck or not started for days, and, surprise-surprise, it's never shown on their status page[1] - always green. We would switch to something else entirely if not for the unique offering of PowerPC and SystemZ servers/runners. Apart from that, it's the worst CI service I've used so far.

[1] https://www.traviscistatus.com/history


> How reliable is this uptime?

It seems to be quoting incident reports for the duration of each outage, so there is accountability: you can verify the details of what they are counting.

> and why it's sooo different from gh's official status numbers?

Maybe this is counting any period with any service showing any level of issue as a complete fail, while the official numbers are cherry-picking a bit (only counting core services? not counting significant performance issues that the other count does, because things were working, just v…e…r…y … s…l…o…w…l…y) or averaging values (so 75% of services running at a given time looks ¼ as bad in their figures). Or the two sets of calculations could simply be done at different granularity, …

In other words: lies, damned lies, and statistics!

The only way to know is to know how both are calculated in detail, and that information might not be readily available.


There is a link to the repo to verify the code and explain their process

i've been a zed user for almost 6 months. i've encountered maaany bugs which i reported, or that were already reported. they're still there. meanwhile, every single update shipped a feature or bugfix for "ai agents".

not sure how 1.0 ships with that massive pile of bugs. but ai agents are first-class citizens in this editor, and developer experience is not a priority.

funny thing is i uninstalled zed right before the 1.0 release. kinda relieved i didn't miss anything.


I have a few lightweight apps using the DeepSeek API, and it's funny how the initial credit I topped up for using R1 is still left. Nothing makes a user happier than getting more for less. cc: Anthropic with its fancy token-wasting Claude Code "features"


Not like on OpenAI, where the credits just expire


hn is this true


I did a quick benchmark & compared it with Qwen3.5: https://github.com/ArmanJR/PrismML-Bonsai-vs-Qwen3.5-Benchma...

In my results, accuracy-wise Ternary-Bonsai-8B is on par with Qwen3.5-4B. But in accuracy per byte, Bonsai is the clear winner:

=> Ternary-Bonsai-1.7B achieved 65.1% from 462 MiB, beating Qwen3.5-0.8B by 12 points while being ~5% smaller on disk.

=> Ternary-Bonsai-4B is the accuracy-per-byte winner above 1 GiB: 83.0% from only 1.1 GiB, within 2 points of Qwen3.5-4B at 40% of the weight size.
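The accuracy-per-byte comparison boils down to a simple ratio. A minimal sketch using the figures quoted above (the Qwen3.5-4B accuracy and size are back-derived from "within 2 points" and "40% of the weight size", so treat them as approximate):

```python
MIB = 1024 ** 2
GIB = 1024 ** 3

# (accuracy %, weight size in bytes) from the benchmark above
models = {
    "Ternary-Bonsai-1.7B": (65.1, 462 * MIB),
    "Ternary-Bonsai-4B":   (83.0, 1.1 * GIB),
    "Qwen3.5-4B":          (85.0, 2.75 * GIB),  # derived: 1.1 GiB / 0.40
}

# Accuracy points per GiB of weights, highest first.
for name, (acc, size) in sorted(models.items(),
                                key=lambda kv: -kv[1][0] / kv[1][1]):
    print(f"{name}: {acc / (size / GIB):.1f} accuracy points per GiB")
```

By this metric the smaller Bonsai checkpoints win comfortably, even where raw accuracy is a couple of points behind.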

they show strong promise for edge devices and anywhere disk space is limited. I think this lab is worth watching.


while it seems that even with 4.7 we will never see the quality of the early 4.6 days, some dude is posting 'AGI arrived!!!' on Instagram and LinkedIn.

