javawizard's comments

Jet engines do not strike me as being inherently simpler than muscles, not by a long shot.

They're still the best way we know of going about the business of building a flying machine, for various reasons.


Piston engines surely are more complex than jet engines, though? They're the ones that replaced the "flapping engines".

They are not. Turbine engines require much higher-quality manufacturing and tighter tolerances, and they operate at much higher speeds and pressures. There is more to it than the perceived number of moving parts.

> But the other way around is not possible due to the closed nature of GPT-5.

At risk of sounding glib: have you heard of distillation?


Distilling from a closed model like GPT-4 via API would be architecturally crippled.

You're restricted to output logits only, with no access to the attention patterns, intermediate activations, or layer-wise representations that are needed for proper knowledge transfer.

Without alignment of the Q/K/V matrices or hidden-state spaces, the student model cannot learn the teacher model's reasoning inductive biases, only its surface behavior, which will likely amplify hallucinations.

In contrast, open-weight teachers enable multi-level distillation: KL on logits + MSE on hidden states + attention matching.
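
To make that concrete, here's a rough sketch of such a combined objective in PyTorch. Everything here is illustrative - the loss weights, the temperature, and the assumption that a learned projection maps the student's hidden size onto the teacher's (and that head counts match for the attention term) are mine, not from any particular paper:

    import torch
    import torch.nn.functional as F

    def multi_level_distill_loss(student_out, teacher_out, proj,
                                 T=2.0, w_kl=1.0, w_hid=0.5, w_attn=0.5):
        # student_out / teacher_out are assumed to be HF-style model outputs with
        # .logits, .hidden_states (tuple of [B, L, D]) and .attentions
        # (tuple of [B, H, L, L]); none of this is available through a closed API.

        # 1) KL on temperature-softened logits - the only term a closed teacher
        #    could even partially support (and then only via returned logprobs).
        kl = F.kl_div(
            F.log_softmax(student_out.logits / T, dim=-1),
            F.softmax(teacher_out.logits / T, dim=-1),
            reduction="batchmean",
        ) * (T * T)

        # 2) MSE between the (projected) student and teacher final hidden states.
        hid = F.mse_loss(proj(student_out.hidden_states[-1]),
                         teacher_out.hidden_states[-1])

        # 3) Match the final layer's attention distributions.
        attn = F.mse_loss(student_out.attentions[-1], teacher_out.attentions[-1])

        return w_kl * kl + w_hid * hid + w_attn * attn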

Does that answer your question?


Plenty of sources suggest it is:

https://github.com/giladoved/webcam-heart-rate-monitor

https://medium.com/dev-genius/remote-heart-rate-detection-us...

The Reddit comments on that second one have examples of people doing it with low quality webcams: https://www.reddit.com/r/programming/comments/llnv93/remote_...

It's honestly amazing that this is doable.


My dumb ass sat there for a good bit looking at the example in the first link thinking "How does a 30-60 Hz webcam have enough samples per cycle to know it's 77 BPM?". Then it finally clicked in my head that beats per minute are indeed not to be conflated with beats per second - 77 BPM is only about 1.3 Hz, well within what even a 30 fps camera can sample... :).

Non-paywalled version of the second link https://archive.is/NeBzJ
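
For the curious, the core of these remote-PPG demos boils down to something like the sketch below: average the green channel over the face region each frame, then find the dominant frequency in the heart-rate band. The frame rate and band limits here are illustrative, not taken from either article:

    import numpy as np

    def estimate_bpm(green_means, fps=30.0, lo_hz=0.7, hi_hz=3.0):
        # green_means: per-frame mean green-channel value over the face/skin region.
        # The pulse is a tiny periodic brightness change; ~77 BPM is only ~1.3 Hz,
        # so even a 30 fps webcam samples it comfortably.
        signal = np.asarray(green_means, dtype=float)
        signal = signal - signal.mean()              # remove the DC offset
        window = np.hanning(len(signal))             # reduce spectral leakage
        spectrum = np.abs(np.fft.rfft(signal * window))
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)

        # Look for the peak only inside a plausible heart-rate band (42-180 BPM here).
        band = (freqs >= lo_hz) & (freqs <= hi_hz)
        peak_hz = freqs[band][np.argmax(spectrum[band])]
        return peak_hz * 60.0                        # Hz -> beats per minute

    # e.g. ~10 seconds of frames: estimate_bpm(per_frame_green_means, fps=30)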


Xoogler here.

When you get to a company that's that big, the roles are much more finely specialized.

I forget the title now, but we had someone who interfaced with our team and did the whole "talk to customers" thing. Her feedback was then incorporated into our day-to-day roadmap through a complex series of people that ended with our team's product manager.

So people at Google do indeed do this; they just aren't engineers, usually aren't product managers, are frequently several layers removed from engineers, and as a consequence usually have all the problems GP described.


I would.


Big companies are soulless.

I've related elsewhere[0] my story about how Google laid me and half my team off 2 weeks before we were set to receive a six-figure retention bonus following an acquisition.

In the original Q&A with corp dev just after the acquisition was announced, someone pointed out that the contract we were offered allowed for that sort of thing. Google's representative said something similar to the parent comment: "Don't worry, that's not something we actually do."

It was especially galling because, after a round of layoffs a year or two prior to the acquisition, that startup had issued retention bonuses to those of us who were left. Unlike Google's subsequent post-acquisition bonus, contracts for those bonuses explicitly stated they were payable even if we were subsequently laid off or fired, as long as we weren't fired for one of a few specific reasons like embezzlement or harassment or other serious workplace misconduct.

It was such a marked contrast and, like the parent comment, it told me all I needed to know about how Google really feels about its employees, and how very literally true the old saying of "you can't trust what you don't have in writing" is.

Big companies are soulless.

[0] https://news.ycombinator.com/item?id=43680191


I can't agree with this interpretation. A human, somewhere in the bigco, decided to lay you off. That specific person decided to take advantage of you, and is responsible for that action. Bigco may have an incentive structure that pushes for this behaviour, but a human looked at the incentives and the morals and decided. Don't let them off the hook by pointing at the bigco.


I wish CPLDs were better known in the common vernacular.

The industry draws a distinction between CPLDs and FPGAs, and rightly so, but most "Arduino-level" hobbyists think "I want something I can program so that it acts like such-and-such a circuit, I know, I need an FPGA!" when what they probably want is what the professional world would call a CPLD - and the distinction in terminology between the two does more to confuse than to clarify.

I don't know how to fix this; it'd be lovely if the two followed convergent paths, with FPGAs gaining on-board storage and the line between them blurring. Or maybe we need a common term that encompasses both. ("Programmable logic device" is technically that, but no-one knows that.)

Anyway. CPLDs are neat.


I don’t see how CPLDs solve anything?

You write RTL for them just like you do for FPGAs, you need to configure them as well. The only major benefit is that they don’t have a delay between power up and logic active? But that’s not something that would make a difference for most people.

CPLDs are also a dying breed and being replaced with FPGAs that have parallel on-board flash to allow fast configuration after power up. (e.g. MAX10)


I don’t know anything about this (other than doing mediocre in some undergrad Verilog classes one million years ago). Wikipedia seems to call FPGAs a type of PLD. Of course, everybody has heard of FPGAs; is it right to think they’ve sort of branched off, become their own thing, and eclipsed their superset?


Hence the presumed implication behind the public_id field in GP's comment: anywhere identifiers are exposed, you use the public_id field, thereby preventing ID guessing while still retaining the benefits of ordered IDs where internal lookups are concerned.
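
For anyone who hasn't seen the pattern, here's a minimal sketch of what that tends to look like - the table and column names are just illustrative, not GP's actual schema:

    import secrets
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE users (
            id        INTEGER PRIMARY KEY AUTOINCREMENT,  -- ordered, internal use only
            public_id TEXT UNIQUE NOT NULL,               -- random, safe to expose
            email     TEXT NOT NULL
        )
    """)

    def create_user(email):
        # URL-safe random token: unguessable, unlike the sequential internal id.
        public_id = secrets.token_urlsafe(16)
        conn.execute("INSERT INTO users (public_id, email) VALUES (?, ?)",
                     (public_id, email))
        return public_id

    # Internal joins and lookups use `id`; anything that leaves the service uses `public_id`.
    pid = create_user("alice@example.com")
    row = conn.execute("SELECT id, email FROM users WHERE public_id = ?", (pid,)).fetchone()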

Edit: just saw your edit, sounds like we're on the same page!


You know the fun thing is, something like the Allwinner A133 - which is one of the most popular SoCs in lower-end tablets today - is like $5, or $3 in quantity.

It turns out it's actually not as hard as you'd expect to whip together your own board with one of those + LPDDR4 RAM + eMMC storage + fixings, and get yourself something like what you're talking about for... I dunno, sub $50? Maybe even sub $20 depending on how much RAM you put on it and what other capabilities you give it.

I'm in the middle of designing just such a board right now. Totally recommend taking a stab at it if you have any EE chops at all (or want to learn!)


Let's just go with $50 and $20. If you're looking at that on top of the cost of a Raspberry Pi, and comparing that to a super low-end Android phone, used, for something like $80-$100, is that really the way to go? The OS is different, but termux has enough features, especially after rooting, that you can probably run whatever you're shooting for. Of course, as a hobby, the parts that you find fun don't have to be the parts that I, or anyone else, find fun, so don't take this as me pissing in your cereal; it's more like there's the milk part and the cornflake part, so different strokes for different folks.


Interesting. I'm currently having great fun learning systems programming on the Allwinner A64, and never considered the option of building a board with one, assuming they are still available. Are you documenting your project somewhere?


I'm not but I totally could!

Feel free to drop me a line - my email is firstname@website, where both can be found on my GitHub profile (same username as HN).

And yes, the A64 is still available! https://www.lcsc.com/product-detail/C3036453.html


I'm actually curious, how do we know for a fact that this is the case?


Could you elaborate?

Do you mean to say they’ll never take the payment?


What are the terms? They're not at all clear from the announcement. Given the phrase "part of this three-year licensing agreement", it _could_ mean the license cost is $1 billion, which Disney in turn invests in OpenAI in return for equity, and they're calling it an "investment" (that's what's hypothesized above, but I don't think we know). Disney surely gets something for the license other than the privilege of buying $1 billion in OpenAI stock at their most recent valuation price.


Disney gets the opportunity to tell the board and investors that they are now partnered with a leading AI company. In effect, Disney is now an AI company as well. They haven't really done anything, but if anyone asks, they can just say, "Of course we're at the forefront of the entertainment industry. We're already leveraging AI in our partnerships."


Yeah - they save face.

