I'm a CTO, expert engineer, and data professional interested in team-building, consulting and architecting data pipelines. At Edmunds.com, I worked on a fairly successful ad-tech product and my team bootstrapped a data pipeline using Spark, Databricks, and microservices built with Java, Python, and Scala.
At ATTN:, I re-built an ETL Kubernetes stack, including data loaders and extractors that handle >10,000 API payload extractions daily. I created SOPs for managing data interoperability with Facebook Marketing, Facebook Graph, Instagram Graph, Google DFP, Salesforce, etc.
More recently, I was the CTO and co-founder of a gaming startup. We raised over $6M and I was in charge of building out a team of over a dozen remote engineers and designers, with a breadth of experience ranging from Citibank, to Goldman Sachs, to Microsoft. I moved on, but retain significant equity and a board seat.
I am also a minority owner of a coffee shop in northern Spain. That I'm a top-tier developer goes without saying. I'm interested in flexing my consulting muscle and can help with best practices, architecture, and hiring.
Would love to connect even if it's just for networking!
Had the same thought. Also confused at the backhanded compliment that pickle got:
> Just look at Python's pickle: it's a completely insecure serialization format. Loading a file can cause code execution even if you just wanted some numbers... but still very widely used because it fits the mix-code-and-data model of python.
Like, are they saying it's bad? Are they saying it's good? I don't even get it. While I was reading the post, I was thinking about pickle the whole time (and how terrible that idea is, too).
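For anyone who hasn't seen the mechanism: pickle lets any object specify, via `__reduce__`, a callable for the loader to invoke at load time. A minimal, deliberately harmless sketch (using `eval` as a stand-in for something nastier like `os.system`):

```python
import pickle

class Payload:
    # __reduce__ tells pickle how to "reconstruct" this object: here,
    # "call eval('6 * 7')". A hostile pickle would invoke os.system instead.
    def __reduce__(self):
        return (eval, ("6 * 7",))

blob = pickle.dumps(Payload())
# Loading "just some data" executes code; the Payload class isn't even
# needed on the loading side, only the referenced callable.
print(pickle.loads(blob))  # -> 42
```

This is exactly why the docs warn never to unpickle untrusted input: deserialization and code execution are the same operation.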
A thing can be good and bad. Everything is a tradeoff. The reason why C is 'good' in this instance is the lack of safety, and everything else that makes C, C (see?) but that is also what makes C bad.
It's literally cited in his bio, and he's using his real name on HN. It's about as far from grift as it could be. If he's being curt, he's probably (rightfully) frustrated that "journalists" are getting such bottom-of-the-barrel facts wrong.
> So edgy; is being an apologist really the noble calling you think it is?
Is it your noble calling? From the Temporary Constitution of the State of Palestine (2026)[1]:
Article 4 – Islam, Sharia and Christianity
1. Islam is the official religion in the State of Palestine.
2. The principles of Islamic Sharia are a primary source for legislation.
Not sure how anyone can possibly defend a literal religious autocracy, especially while espousing liberal ideals (right to self-determination, statehood, free markets, rule of law, etc.).
We can see that your own noble calling is to be an apologist for a genocidal state. It's a pity that in reality you likely do not actually get paid for the task, though I must imagine you have people accusing you of that on a regular basis. I'm not sure if it would improve or worsen the moral calculus if you did.
I have no issue with Islam being the religion of Palestine, at least not an issue so strong that murdering its people seems like the correct path forward to me. I suppose your moral reasoning differs on the topic, but it's obviously motivated reasoning based on loyalties I cannot share.
> Claude Code likely is correct that I should start to use NeonDB and Fly.io which I have never used before and do not know much about
I wouldn't be so sure about that.
In my experience, agents consistently make awful architectural decisions, both in code and beyond (even in contexts like: what should I cook for a dinner party?). They default to the most obvious "midwit senior engineer" decisions, the kind I would strike down in an instant in an actual meeting; they over-engineer; they are overly focused on versioning and legacy support (from APIs to DB schemas, even if you're working on a brand-new project); and they are absolutely obsessed with levels of indirection on top of levels of indirection. The definition of code bloat.
Unless you're working on the most bottom-of-the-barrel problems (which to be fair, we all are, at least in part: like a dashboard React app, or some boring UI boilerplate, etc.), you still need to write your own code.
I find they are very concerned about ever pulling the trigger on a change or deleting something. They add features and codepaths that weren't asked for, and then resist removing them because that would break backwards compatibility.
In lieu of understanding the whole architecture, they assume that there was intent behind the current choices... which is a good assumption on their training data where a human wrote it, and a terrible assumption when it's code that they themselves just spit out and forgot was their own idea.
// deprecated; use ThingTwo instead
type Thing = ...
// deprecated; use ThingThree instead
type ThingTwo = ...
// deprecated; use...
I do frequent, insistent cleaning passes with Claude, or otherwise manually. It gets out of hand so fast.
This is one reason why it blows me away that people actually ship stuff they've never looked at. You can be certain it's riddled with the craziest garbage Claude is squirreling away for eternity.
I found that having a rule like this helped some too:
> * ABSOLUTELY DO NOT use `@deprecated` on anything unless you are explicitly asked to. Always fully refactor and delete old code as-needed instead of deprecating it
My results improved significantly with the following rules. I hated those shitty comments with a passion, now I never see them.
# Context
I am a senior engineer, deeply experienced with coding concepts, who requires a peer to collaborate with.
# Interaction Style
- Peer-to-Peer: Act as an experienced, pragmatic peer, not a teacher or assistant
- Assume Competence: User understands fundamentals of Ruby, Rails, AWS, SQL, and common development practices
- Skip Low-Level Details: Do not explain basic syntax, standard library functions, or common patterns
- Focus on Why: When explaining, focus on architectural decisions, trade-offs, and non-obvious implications rather than mechanics
- Always Ask Clarifying Questions: About requirements and intent. The user expects and appreciates this. They will specifically instruct you about assumptions you are permitted to make regarding a request.
- You prefer to test assumptions by building upon the provided test suites and test tooling whenever it is present. You strictly avoid the creation of one-off scripts.
- You prefer to modify and extend existing documentation. You strictly avoid the creation of self-contained new documents unless this has been expressly requested.
# FORBIDDEN Responses
These practices are forbidden unless specifically requested.
## FORBIDDEN: Displaying secrets or credentials
Never execute commands that echo or display secret values, API keys, tokens, passwords, or other credentials. Intermediate variables that are never echoed are acceptable.
## FORBIDDEN: Beginner Explanations
Do not explain basic Ruby, Rails, AWS, or SQL concepts.
## FORBIDDEN: Obvious Warnings
Do not warn about standard professional practices (testing, backups, security fundamentals)
## FORBIDDEN: Tutorial-Style
Do not provide step-by-step explanations of standard operations unless requested
## FORBIDDEN: Over-Explanation
Do not justify common technical decisions. Focus your energy on unusual and complex decisions.
## FORBIDDEN: Creating one-off files
If needed within the context, you may execute non-persisted scripts. However, you may NEVER persist files or documents that have not been considerately integrated into the wider project.
# Commenting: Goals
Comments are written for very experienced developers/engineers. Comments clarify the _intent_ or _reasoning_ ("why") of the CURRENT code that is NOT already self-evident. Simple, maintainable code does not require comments.
- Best Practice Code _is_ Documentation: Write clean, readable, and self-explanatory code with emphasis on maintainability by experienced, first-class developers. Refactor complex code before resorting to extensive comments.
- Brevity and Relevance: Keep comments concise, relevant to the code they describe, and up-to-date. Review and/or modify ALL relevant comments when making changes to code.
- Redundancy: Assume the reader is extremely fluent with the code. Ask yourself: do your comments tell them something the code itself does not already?
# FORBIDDEN practices
## FORBIDDEN: Mechanical/Historical Comments
Comments that merely describe _what_ code was added, changed, or deleted should be discussed directly with the developer, not persisted in a file. Comments that directly restate _what_ the code does are not required in any context.
## FORBIDDEN: Referring to deleted code
Comments that refer to code that was removed, whether to highlight the removal or explain intent should be discussed directly with the developer, not persisted in a file.
## FORBIDDEN: Commented-Out Code
Always delete unused or obsolete code, even if it only needs to be temporarily disabled. Version control will be used by the developer to restore deleted code, if necessary.
If you take thousands of photographs of human faces and average them out (even if you do it just by roughly aligning them, overlaying, and averaging the pixels) then what you get is a (perhaps blurry but) notably more attractive than average human face image.
LLM output could be like that. (I am not claiming that it actually is; I haven't looked carefully enough at enough of it to tell.) Humans writing code do lots of bad things, but any specific error will usually not be made.
If (1) it's correct to think of LLMs as producing something like average-over-the-whole-internet code and (2) the mechanism above is operative -- and, again, I am not claiming that either of those is definitely true -- then LLM code could be much higher quality than average, but would seldom do anything that's exceptionally good in ways other than having few bugs.
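The averaging mechanism the analogy relies on is easy to see numerically. A minimal sketch, using random arrays as stand-ins for roughly aligned grayscale face images (the shapes and counts here are arbitrary assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-ins for 1000 roughly aligned 64x64 grayscale images.
faces = rng.random((1000, 64, 64))

# Pixel-wise average: per-image idiosyncrasies (the "specific errors")
# mostly cancel, leaving a far smoother image than any individual one.
mean_face = faces.mean(axis=0)

# The average varies an order of magnitude less than a single image.
print(faces[0].std(), mean_face.std())
```

The same cancellation argument is what would let average-over-the-internet code have fewer idiosyncratic bugs than typical code, without ever being exceptional.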
From what you said it sounds like the conclusion should be "you still need to design the architecture yourself", not necessarily "you still need to write your own code".
> even though Memory.md has the AWS EC2 instance and instructions well defined
I will second that, despite the endless harping about the usefulness of CC, it's really not good at anything that hasn't been done to death a couple thousand times (in its training set, presumably). It looks great at first blush, but as soon as you start adding business-specific constraints or get into unique problems without prior art, the wheels fall off the thing very quickly and it tries to strongarm you back into common patterns.
Yeah, I actually wanted to write an addendum, so I'll just do it here. I think that going from pseudocode -> code is a pretty neat concept (which is kind of what I mean by "write your own code"), but I'm not sure it would be economically viable if the AI industry weren't so heavily subsidized by VC cash. So we might end up back at writing actual code and then telling the AI agent "do another thing, and make it kinda like this" where you point it to your own code.
I'm doing it right now, and tbh working on greenfield projects purely using AI is extremely token-hungry (constantly nudging the agent, for one) if you want actual code quality and not a bloated piece of garbage[1][2].
> they are overly-focused on versioning and legacy support (from APIs to DB schemas--even if you're working on a brand new project)
I mean, DB schema versioning is one of the things that you can dismiss as "I won't need it" for a long time - until you do need it, at which point it will be a major pain to add.
I second this. Especially with a coding assistant, there's no reason not to start out with proper data model migration. It's not hard, and is one of the many ways to enforce some process accountability, always useful for the LLMs
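To make "proper data model migration" concrete: the core mechanism is just a recorded schema version plus an ordered list of migrations, applied idempotently. A hand-rolled sketch with SQLite (a real project would use Alembic, Rails migrations, etc.; the table and column names here are made up):

```python
import sqlite3

# Ordered, append-only list of schema changes.
MIGRATIONS = [
    "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)",
    "ALTER TABLE users ADD COLUMN email TEXT",
]

def migrate(conn):
    """Apply any not-yet-applied migrations; returns how many ran."""
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER)")
    row = conn.execute("SELECT version FROM schema_version").fetchone()
    current = row[0] if row else 0
    for stmt in MIGRATIONS[current:]:
        conn.execute(stmt)
    conn.execute("DELETE FROM schema_version")
    conn.execute("INSERT INTO schema_version VALUES (?)", (len(MIGRATIONS),))
    conn.commit()
    return len(MIGRATIONS) - current

conn = sqlite3.connect(":memory:")
migrate(conn)  # applies both migrations
migrate(conn)  # no-op on a second run
```

Starting with this in place from day one is cheap; bolting it onto a live schema later is the "major pain" the parent describes.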
Yeah, that comparison doesn't pass the smell test. Blockchain/crypto were purely financial instruments, and for better or worse, a new financial instrument is very different from a new tech innovation; tbh there was only a thin veneer of tech when it comes to crypto/blockchain, but the magic was because of the money, not because of the tech.
AI is different because the magic clearly is because of the tech. The fact that we get this emergent behavior out of (what essentially amounts to) polynomial fitting is pretty surprising even for the most skeptical of critics.
That's why. Boring, bland, etc. That account's M.O. is basically "write a paragraph that says nothing." Fwiw, I do think AI can be indistinguishable from dumb, boring people, but usually those kinds of people won't be on HN.
What excites me most about these new four-figure-tokens-per-second models is that you can essentially do multi-shot prompting (+ nudging) without the user even feeling it, potentially fixing some of the weird hallucinatory/non-deterministic behavior we sometimes end up with.
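The multi-shot loop being described is essentially generate -> validate -> nudge -> retry, hidden inside a single user request. A sketch with a hypothetical `generate` stub standing in for the (very fast) model call:

```python
import json

def generate(prompt: str) -> str:
    # Hypothetical stand-in for an LLM call; this stub only emits valid
    # JSON once the prompt contains the nudge appended below.
    return '{"ok": true}' if "ONLY valid JSON" in prompt else "Sure! Here it is: ..."

def multi_shot(prompt: str, max_attempts: int = 5) -> dict:
    for _ in range(max_attempts):
        out = generate(prompt)
        try:
            return json.loads(out)  # validity check on each shot
        except json.JSONDecodeError:
            prompt += "\nReturn ONLY valid JSON."  # nudge, then retry
    raise ValueError("model never produced valid JSON")

print(multi_shot("Summarize as JSON."))  # -> {'ok': True}
```

At thousands of tokens per second, several such internal rounds still land well under typical single-shot latency.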
That is also our view! We see Mercury 2 as enabling very fast iteration for agentic tasks. A single shot at a problem might be less accurate, but because the model has a shorter execution time, it enables users to iterate much more quickly.
Regular models are very fast if you do batch inference. GPT-OSS 20B gets close to 2k tok/s on a single 3090 at bs=64 (might be misremembering details here).
This is super cool, any papers I can read on autorouting? I occasionally see inefficiencies[1][2] so I suspect this isn't exactly a free lunch, and I'd like to read something a bit more critical about this approach.
Despite what electrical engineers would claim, I think it's very under-studied under a modern lens. When people ask for good places to get started, I usually tell them to just look at what game developers are doing for pathfinding. Autorouting is sort of a form of multi-agent pathfinding, so there are a lot of relevant concepts from that area.
The tension in autorouting IMO is people generally want something ideal that passes all design rule checks. My thinking (and IMO the more modern way of thinking) is that fast algorithms, fast feedback loops and AI participation are more important.
There are also a lot of relevant algorithms in VLSI/chip design, the folks at OpenROAD seem to have good stuff although I'm not intimately familiar.
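For a concrete sense of the game-style pathfinding being referenced, here is a toy A* on a 4-connected grid. A real autorouter layers design rules, multiple copper layers, and vias on top of a core like this; the grid and coordinates below are arbitrary illustration:

```python
import heapq

def astar(grid, start, goal):
    """grid: list of strings, '#' = blocked. Returns path length or None."""
    rows, cols = len(grid), len(grid[0])

    def h(p):  # Manhattan distance: admissible heuristic on a 4-connected grid
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    frontier = [(h(start), 0, start)]  # (f = g + h, g, cell)
    best = {start: 0}
    while frontier:
        _, g, (r, c) = heapq.heappop(frontier)
        if (r, c) == goal:
            return g
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != "#":
                ng = g + 1
                if ng < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = ng
                    heapq.heappush(frontier, (ng + h((nr, nc)), ng, (nr, nc)))
    return None  # goal unreachable

grid = ["....",
        ".##.",
        "....",
        "...."]
print(astar(grid, (0, 0), (3, 3)))  # -> 6, routing around the '#' obstacle
```

Multi-agent pathfinding then adds the hard part: many nets contending for the same grid, which is where most of the autorouting research effort goes.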
> I usually tell them to just look at what game developers are doing for pathfinding.
The example in the article doesn't quite apply, as baking pathfinding over a mesh (like baking lighting) isn't really the same thing as A* pathfinding (just as it's not the same thing as, e.g., raytracing). So I'm not sure I fully agree with the logical inference there. In my defense, I know nothing about etching PCBs, so I'm likely missing something.
Game developers do bake navmeshes, that's true, but it's not the only technique; for example, they've also come up with Polyanya, or "any-angle pathfinding": https://github.com/vleue/polyanya
I also have on my desk "Algorithms for VLSI Physical Design Automation, Third Edition", which I really like, but it's ~20 years old. It has a lot of nomenclature that can be helpful, but I'm not a big believer in how it breaks the problems down, which is IMO more oriented towards designs with repeated patterns rather than PCBs, which don't usually repeat patterns (unless you're doing an LED matrix).
I actually think SKILLS.md is such a janky way of doing this sort of thing, let alone the fact that it's reliant on the oh-so-brittle Python ecosystem. Also, way too much context/tokens being eaten up by something that could be piece-wise programmatically injected into the token stream.
Relocation: Case-by-case
Blog: https://ai.dvt.name/whos-david/ (under construction)
GitHub: https://github.com/dvx
Email: [david].[titarenco]@[gmail].[com]