It does alright with Rust, but you can't assume it works as intended just because it compiles successfully. The issue with current AI when solving complex or large-scale coding problems is usually not syntax; it's logical issues and poor abstraction. Rust is great, but the borrow checker doesn't protect you from that.
I'm able to use AI for Rust code a lot more now than 6 months ago, but it's still common for it to spit out something that looks decent but isn't quite there. Sometimes re-prompting fixes all the issues, but it's pretty frustrating when it doesn't.
Location: Taipei, Taiwan
Remote: No preference
Willing to relocate: Yes (Taiwan, Vietnam, US)
Technologies: Vue, React/Next.js, Node.js, NestJS, Rust, Postgres/SQLite, Android (Kotlin/Jetpack), iOS (SwiftUI, TCA), Embedded C (ARM Cortex).
Résumé/CV: https://www.linkedin.com/in/samuel-pullman
Email: sampullman AT gmail DOT com
I'm a software developer with about 15 years' experience - I started my career developing and manufacturing consumer electronics products, then moved on to web and mobile development. I do frontend, backend, and CI/CD, and am able to pick up new technologies quickly. Recently I've been working on a website building platform: https://pubstud.io
"completely offline" also doesn't sound like a problem with a software project. At best it's a particular managed service experiencing downtime. Would Linux be to blame if my power supply goes up in smoke?
It's still a bit unclear to me exactly what went wrong. I think that when you have a Redis/Valkey cluster with multiple nodes and you use the cluster URI, there must be some kind of load balancer or custom routing involved. When we attempted to connect to Valkey the connection would look fine, but when we submitted commands they would never execute.

We had written our application so that it would keep operating with no issue (just slower) if the cache went down. In this case, connections looked healthy but no work was actually being done. AWS support suggested we restart the nodes, but because they were not responding they never shut down … or at least it took a really long time. They were never able to tell us what actually happened. My guess is that Valkey command execution got stuck somehow, but the nodes were still able to accept new connections.
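For what it's worth, the "keep operating, just slower" behavior only really holds if cache commands are bounded by a timeout; a connection that accepts commands but never executes them looks very different from an outright refused connection. A rough sketch of the idea (hypothetical cacheGet/loadFromDb helpers, not our actual code):

```ts
// Minimal sketch: wrap cache commands in a timeout so a hung connection
// degrades to "cache miss" instead of blocking the request path.

function withTimeout<T>(promise: Promise<T>, ms: number): Promise<T> {
  return new Promise<T>((resolve, reject) => {
    const timer = setTimeout(() => reject(new Error("cache timeout")), ms);
    promise.then(
      (value) => { clearTimeout(timer); resolve(value); },
      (err) => { clearTimeout(timer); reject(err); },
    );
  });
}

async function getWithFallback(
  cacheGet: (key: string) => Promise<string | null>,   // hypothetical cache client call
  loadFromDb: (key: string) => Promise<string>,         // hypothetical primary data source
  key: string,
): Promise<string> {
  try {
    // Don't wait forever on a node that accepts connections but never executes commands.
    const cached = await withTimeout(cacheGet(key), 250);
    if (cached !== null) return cached;
  } catch {
    // Timeout or connection error: treat it as a cache miss.
  }
  return loadFromDb(key); // slower, but the app keeps working
}
```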
Can't be reached outside the network that the instance and health check are running on? Maybe it's available in one AZ, but not from the one that's trying to connect.
Maybe GP takes themselves too seriously, but I read the article and the assessment seems valid. The article's thoughts on train etiquette came across to me as particularly condescending.
The author decided to write an article about their experience and limited research, which opens them up to criticism.
It depends. I've found o3 exceptional at finding things off the beaten path. I told it something like: "I will be in city A, B and C in [month] and would love to find out what to do. I have a car, am willing to drive, and generally care most about gastronomy, food and culture. There's also flexibility in terms of timing and stays."
It read things like websites of municipalities of surrounding towns and found local food festivals in towns I never would've found out about otherwise. It's exactly the kind of stuff I'd previously read the experienced person's blog for.
It's a trade-off. AI gives you speed but lower quality. Blog posts are higher quality but take longer to find/curate.
For hiking routes, I ask AI for a list of suggested hiking routes in [area] based on my criteria (e.g. dog friendly, accessible by public transport, whatever). Then I google the specific suggested routes to fact-check the AI and get more detailed/reliable info.
That's true. I'm not anti-AI or anything; I use it for plenty of things where search falls short or has decayed due to SEO spam.
I guess it comes down to knowing where to find valuable information. If you already have known quality sources, AI is currently inferior.
Where I live I'm lucky to have tons of trails that have been meticulously mapped out and then made available (with images, directions, gear recommendations, etc.) on various blogs. I don't see AI being able to totally replace that in its current state, especially due to the semi-dynamic nature of the data.
I'd like to engage. Can you give a specific example?
We can make an honest attempt to see what the old vs. AI options for it look like. Both of us will walk away a bit more informed, and we can share here with others as well.
What do you mean? I use AI plenty and am aware of the capabilities.
AI will give good surface level advice and sometimes point to decent sources. If I'm looking for e.g. a good hike in a specific area near me, I know the blogs that will have directions, pics, and GPX data for all the routes. These are found via word of mouth, search, and local forums.
I wouldn't expect them to. Large language models compress information at a high level. If you need specifics, you need a search engine and a data set. LLMs specifically aren't that, despite all the fuss and hype.
In my experience, prompting LLMs tends to fail when the task involves returning a long list of alternatives and/or selecting niche options, unless the user explicitly names them.
Do it iteratively. Ask the LLM for a long list of alternatives but without detail, only the names. Then start a new chat, paste in the list of names, and ask for more detail.
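If you're doing this through an API rather than the chat UI, the same two-pass idea looks roughly like the sketch below, using the OpenAI Node SDK (the model name, prompts, and list size are placeholders, not recommendations):

```ts
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Pass 1: ask only for names, no detail, to keep the list long and broad.
async function listCandidates(topic: string): Promise<string[]> {
  const res = await client.chat.completions.create({
    model: "gpt-4o", // placeholder model name
    messages: [{
      role: "user",
      content: `List 30 ${topic}. Names only, one per line, no descriptions.`,
    }],
  });
  return (res.choices[0].message.content ?? "")
    .split("\n")
    .map((line) => line.trim())
    .filter(Boolean);
}

// Pass 2: fresh conversation; paste the names back in and ask for detail.
async function describeCandidates(names: string[]): Promise<string> {
  const res = await client.chat.completions.create({
    model: "gpt-4o",
    messages: [{
      role: "user",
      content: `For each of the following, give a two-sentence summary:\n${names.join("\n")}`,
    }],
  });
  return res.choices[0].message.content ?? "";
}
```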
I've done this several times when benchmarking lesser-known companies, and manual compilation using a search engine consistently outperformed LLMs, even when those companies were part of the model's training data.
My intuition (could be completely wrong) is that lesser-known companies have much less density around them than popular brands, unless you are very specific. It would be great if this could be tweaked somehow.
It's a bit of a tangent and I agree with your point, but wanted to note that for one project our e2e tests went from ~40 min to less than 10, just by moving from Cypress to Playwright. You can go pretty far with Playwright and a couple of cheap runners.
I appreciate the point, but I've heard this kind of thing several times before - last time around was hype about how Cypress would have exactly this effect (spoiler: it did not live up to the hype). I don't believe the new framework du jour will save you from this kind of thing, it's about how you write & maintain the tests.
I wish I had hard evidence to show because my normal instinct would be similar to yours, but in this case I'm a total Playwright convert.
Part of it might be that Playwright makes it much easier to write and organize complex tests. But for that specific project it was as close to a 1-to-1 conversion as you can get; the speedup came without significant architectural changes.
The original reason for switching was flaky tests in CI that were taking way too much effort to fix over time, likely due to oddities in Cypress' command queue. After the switch, and in new projects using Playwright, I haven't had to deal with any intermittent flakiness.
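For anyone curious what the converted tests look like, here's a rough sketch of the shape (routes, labels, and the base URL are made up for illustration, not from the actual project). Parallelism is a single config line, and the auto-waiting assertions replaced most of the command-queue juggling:

```ts
// playwright.config.ts
import { defineConfig } from "@playwright/test";

export default defineConfig({
  workers: 4,  // parallelism is just config; a couple of cheap runners go a long way
  retries: 0,  // we haven't needed retries to mask flakiness
  use: { baseURL: "http://localhost:3000" }, // placeholder URL
});
```

```ts
// example.spec.ts -- selectors and routes are made up for illustration
import { test, expect } from "@playwright/test";

test("user can log in", async ({ page }) => {
  await page.goto("/login");
  await page.getByLabel("Email").fill("user@example.com");
  await page.getByLabel("Password").fill("hunter2");
  await page.getByRole("button", { name: "Log in" }).click();
  // Auto-waiting assertion: no manual sleeps or command queue to reason about
  await expect(page.getByText("Welcome back")).toBeVisible();
});
```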
At least Apple has humans doing review and support.