In 2020 we lived through the extreme left's remedial lesson in public safety. Now we're living through the MAGA right's remedial lesson in trade policy.
(Most of the MAGA right know this, don't care, and see tariffs as a hammer. But they're hitting all of us.)
I see the value here. The problem isn't just the search; it's the trust. The biggest hurdle for Zenode won't be the tech, but convincing an engineer that your AI's summary of a footnote is accurate enough to risk a $10,000 board spin. That's a high bar.
I'd argue the core value isn't just a better search or a faster reader; it's providing a verified, reliable source of truth. This brings up a key tension: you say the AI isn't yet at your co-founder's level of accuracy, but isn't that precisely the level of confidence required to replace an engineer's manual check? How do you close that gap? You've got the data, but trust is a different threshold entirely.
I.e. maybe you've built the tool to make the problem faster, but the real win would be a tool that makes the problem safer? The killer feature might not be more speed, but rather a confidence score on every AI-generated fact, with a clear path to the source document so an engineer can verify it. It’s not about avoiding the document entirely; it’s about having a better starting point and knowing exactly what to double-check.
Agreed - trust is the key! That's why we've built in sources with links to the exact location in the datasheet and part documents where the AI found an answer. We're working hard to make sure you can trust its answers, but we know most engineers 'trust but verify'. A (transparent) confidence score is a great idea to improve trust in the answer and sources.
To close the gap, we've built our own Q/A datasets and are training custom AIs to search and read a datasheet (much as a new engineer has to learn early on). We're concentrating on teaching the AI to distinguish key information from noise as it relates to electrical engineering (differences like 'Voltage' in the Absolute Max vs. Recommended section) and where information is likely to be found in a datasheet or app note.
Searched for "innovator's dilemma" in the article, didn't find it.
The crux is not whether the extremely rich incumbent (Google) will have better infrastructure, but whether that is the field of competition that matters.
They do. But there’s a line between arbitrary and fair. One of the things players want most is for outcomes to not be random/arbitrary and for skill to be judged fairly. That good shots are rewarded and bad shots penalized. The game already has a lot of that baked in so no need to make it more so.
TPC Sawgrass is the closest to a perfect pro course (PGA National second I guess) since it was designed specifically for it. I think a course perfectly optimized for pro play would be very different from what most people would expect.
Descriptively: Apple has enough market clout that it (implicitly) assumes developers will pay the time cost of overcoming the lack of documentation, saving Apple the time it would take to write it. The lever isn't good/bad docs; it's the market dominance Apple has in controlling access to high-spend customers.
Similarly, MS was notoriously hard to develop for during its period of market dominance. It's a lot more dev-friendly now that it doesn't dominate as much.
I had a terrible time trying to find React performance _monitoring_. There's plenty of performance _troubleshooting_ tooling once you already know which component is slow, but nothing that monitors.
Did I miss something? I see some `measureLifecycleperf` functions in the react source but those look like dead ends.
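The closest thing to continuous monitoring I've seen is rolling your own on top of React's `Profiler` component, whose `onRender` callback is a documented React API. Here's a minimal sketch; the aggregation and the `Checkout` component name are hypothetical, not anything from React itself:

```javascript
// Sketch: per-component render-time monitoring built on React's
// <Profiler> onRender callback. The callback signature
// (id, phase, actualDuration, ...) is per the React docs; everything
// else here (stats map, component names) is illustrative.
const renderStats = new Map();

function onRender(id, phase, actualDuration) {
  const stats = renderStats.get(id) || { count: 0, totalMs: 0 };
  stats.count += 1;
  stats.totalMs += actualDuration;
  renderStats.set(id, stats);
  // In a real setup you'd periodically flush this map to your
  // monitoring backend instead of keeping it in memory.
}

// In the app you'd wrap subtrees you want to watch:
// <Profiler id="Checkout" onRender={onRender}>
//   <Checkout />
// </Profiler>
```

This only tells you render durations, not why they're slow, which is where the troubleshooting tools pick up.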