Location: US (GA)
Remote: Yes
Willing to relocate: Only to PNW or Bay Area
Technologies: Python, Linux, embedded C, SQL, React, ML (PyTorch, LLMs), electrical
Hi! Generalist here looking to work on hard and meaningful problems with cool people.
2 YoE, coming off a year building the entire tech stack for a YC-alum deep-tech + biotech startup from scratch, as the sole SWE and EE. (Product: https://andsonbiotech.com/platform/)
I've also built or worked on rockets, drones, a lunar lander, a Mars helicopter at NASA, an LLM-driven car, AI benchmarks, and a new type of 3D printer. Read about these: https://synapsomorphy.com/
Prefer early- to mid-stage companies, say seed through Series C, with core products involving one or more of ML, biology, or hardware (in that order of preference).
I'm a Linux and Windows user thinking of getting a Macbook, mostly for the hardware.
All these recent proclamations of disappointment in Tahoe seem insanely overblown to me. The problem that this post leads with is that thumbnails' corners are too rounded, which "misrepresents" the original? Seriously?
Maybe it's worse now compared to the golden years; I don't know, I've never owned a Mac, and it's fair to criticize it from that perspective. But I'm completely at a loss for how any of these issues could be bad enough to make you switch platforms. Windows and Linux are not exactly usability all-stars! I had to write my own app to get decent speech-to-text on Linux, something that's built in at the system level on Macs.
This feels to me like just the age-old tale of people wanting to (love | hate) brands, when really, things are nuanced. I switched from Android to iOS recently and the experience did not change much. iOS is absolutely not "borderline unusable" like I've seen many claim. If anything it's maybe a 10% nicer experience overall.
Lack of nuance in people's takes makes for less signal in the noise and makes it annoying to figure out the actual pros and cons of different platforms.
Maybe the quality is reduced, sure. But if you "do some searches" you can find all of those things for any major software release.
Seems to me like people inside Apple's walls are forgetting that the outside world is not some Garden of Eden. But yeah, I'd have to use it to say for sure.
> you can find all of those things for any major software release
Maybe it's because you were using Windows all the time and can't judge from the outside (no judgment), but the quality and the (legendary) reliability of macOS were real. Everything was well engineered, well designed, and had a purpose.
This is no longer the case, and that is another reason people are so upset. People are also upset because all of these annoying things have been reported since the betas, and Apple has not really listened (except on the most obviously valid points).
> All these recent proclamations of disappointment in Tahoe seem insanely overblown to me. The problem that this post leads with is that thumbnails' corners are too rounded, which "misrepresents" the original? Seriously?
The example in Photos is absolutely egregious, and as a user of Linux for the past 25 years and a recent user of a Mac for work, I can't remember anything that bad in a mainstream desktop environment on Linux.
In fact, from a usability perspective, a modern GNOME desktop seems far more usable and consistent than modern macOS, and that's saying something. Font scaling seems to work better on Linux, UI widgets in GTK seem more consistent, and dark themes have been around on Linux far longer, and it shows.
I don't use the latest macOS version; it's _okay_ from a usability perspective. But this new version seems like a clear downgrade for a product whose whole point, given the large sums of money it costs, is higher productivity and comfort.
It's an arms race between human writers and AI. Writers want to sound less like AI and AI wants to sound more like writers, so no indicator is reliable for long. Today typos indicate a real writer, so tomorrow LLMs will inject them where appropriate. Yesterday em dashes indicated LLM, so now LLMs use them less.
Beyond these surface-level tells, though, anyone who's read a lot of both AI-unassisted human writing and AI output should be able to pick up on the many subtler cues, which persist partly because they're harder to describe (and so harder to RLHF out of LLMs).
But even today, when it's not too hard to sniff out AI writing, it's quite scary to me how bad many (most?) people's chatbot-detection senses are, as this article indicates. Mistaking human writing for LLM output is a false positive, which is bad but not catastrophic; the opposite mistake seems much worse. The long-term social impact, becoming "post-truth", seems poised to be what people have been raving and warning about for years with respect to other tech like the internet.
Today feels like the equivalent of WW1 for information warfare, society has been caught with its pants down by the speed of innovation.
> society has been caught with its pants down by the speed of innovation.
Or rather by the slowness of regulation and enforcement in the face of blatant copyright violation.
We've seen this before, for example with YouTube, which became the go-to place for videos by allowing copyrighted material to be uploaded and hosted en masse. Then a company that was already a search-engine monopoly was somehow allowed to acquire YouTube, extending and reinforcing Google's monopolization of the web.
Innovation has always been faster when copyright is lax. The US copied British and other European inventions left and right during the industrial age, and its economy took off because of it.
Assuming Eric / Core doesn't come out with some scathing "real story":
Well, it's better to figure this out today (that Eric / Core are not so great) rather than a year or two down the line, when I'd have already bought a new Pebble. It still sucks; I was excited. I've never had one, but I want something in that niche.
Does anyone have suggestions for other good low-capability, long battery, hackable eink watches?
Former Rebble dev here, I've been very happy with the BangleJS. It doesn't meet all of your criteria but the battery lasts me a week and it's more hackable than Pebble ever was.
Chinese builders are not the same as Chinese hackers (even if the hackers are state-sponsored). I doubt most companies would be interested in developing hacking tools. Hackers use the best tools at their disposal, and Claude is better than DeepSeek. Hacking-tuned LLMs seem like a thing that might pop up in the future, but they take a lot of resources. Why bother if you can just tell Claude it's doing legitimate work?
> I doubt most companies would be interested in developing hacking tools.
Welcome to 2025. Chinese companies build open-weight models, those models can be used and tuned by hackers, and the companies that built and released them don't need to get involved at all.
That is a very different development model compared to Anthropic's closed one.
> Claude is better than Deepseek
No one is claiming DeepSeek is better; in fact, benchmark results show that the Chinese Kimi, MiniMax, and GLM models are on par with or very close to the closed-weight Claude Code.
That issue is the fourth most-reacted issue overall, and the third most-reacted open issue, and the two above it are feature requests. It seems like you should at the very least have someone pop in to say "working on it" if that's what you're doing, instead of letting it sit there for four months.
I'm thinking a lot about the ARC-AGI ML benchmarks, especially the "shape" of the dataset and what it says about how the benchmark should be solved. I think there are good reasons to believe that deep learning, at least in its differentiable SGD-backprop form, is a bad fit here: the tasks are almost entirely discrete symmetries, and there is very little data with which to approximate those discrete symmetries with continuous ones (if you consider deep learning to be the learning of continuous symmetries). I think a more explicit, discrete approach is the way to go, and it's possible to build something surprisingly general and non-heuristic even without gradient descent, guided by minimum description length to search over both grid representations and solver functions. I'm looking for teammates for ARC-3, so hit me up if this sounds interesting; I'd love to chat!
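To make the MDL idea concrete, here's a toy sketch (my own illustration, not code from the actual system): score each candidate representation of a grid by how many units it takes to write down, and keep the shortest. The only two candidates here are a raw cell listing and a per-row run-length encoding; a real search would consider far richer representation families.

```python
def raw_len(grid):
    # Cost of listing every cell individually: one unit per cell.
    return len(grid) * len(grid[0])

def rle_len(grid):
    # Cost of run-length encoding each row: two units per (color, count) run.
    cost = 0
    for row in grid:
        runs = 1
        for a, b in zip(row, row[1:]):
            if a != b:
                runs += 1
        cost += 2 * runs
    return cost

def best_representation(grid):
    # MDL principle: prefer whichever description of the data is shortest.
    candidates = {"raw": raw_len(grid), "rle": rle_len(grid)}
    return min(candidates, key=candidates.get)

striped = [[1] * 10, [2] * 10]          # long uniform runs
noisy   = [[1, 2, 1, 2], [2, 1, 2, 1]]  # alternating cells, no runs
print(best_representation(striped))  # rle
print(best_representation(noisy))    # raw
```

The same scoring idea extends to solver functions: a candidate program's cost is its own length plus the length of whatever it fails to explain, and the search keeps the cheapest total.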
I made a viewer on my website to build intuition for my preferred perception algorithm, which is entropy filtering + correlation. It's pretty neat to check out the heatmaps for random tasks; a lot of information about the structure of a task is inherent in its heatmap: https://synapsomorphy.com/arc/
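As a rough sketch of what an entropy filter does (an illustrative reimplementation, not the code behind the linked viewer): compute the Shannon entropy of the colors in a small window around each cell, so uniform regions score zero and boundaries between regions light up.

```python
import math
from collections import Counter

def local_entropy(grid, radius=1):
    """Shannon entropy of cell colors in a (2r+1) x (2r+1) window around each cell."""
    h, w = len(grid), len(grid[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Clip the window at the grid edges.
            window = [
                grid[yy][xx]
                for yy in range(max(0, y - radius), min(h, y + radius + 1))
                for xx in range(max(0, x - radius), min(w, x + radius + 1))
            ]
            n = len(window)
            out[y][x] = sum(
                -(c / n) * math.log2(c / n) for c in Counter(window).values()
            )
    return out

flat = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]  # uniform region: zero entropy
edge = [[1, 1, 2], [1, 1, 2], [1, 1, 2]]  # color boundary: positive entropy
print(local_entropy(flat)[1][1])      # 0.0
print(local_entropy(edge)[1][1] > 0)  # True
```

The correlation half of the algorithm would then compare these heatmaps between input and output grids, but that part is beyond this sketch.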