Hacker News | TeeWEE's comments

> Git itself wasn't designed for that load, and bolting AI onto platforms not built for agents is the biggest mistake of this era. We're doing a generational rebuild of the underlying infrastructure to handle agent-rate work as the default. Git itself is being reengineered for machine scale. The monolith is giving way to modern, API-first, composable services

Two big red flags here.

First, git itself is distributed and built for scale.

I guess they mean "GitLab" instead of "git". But such a huge mistake would never go unnoticed.

Are they going to rebuild git??

Second red flag: a big rebuild from a monolith to services. First, there is nothing wrong with a modulith. Second, a rebuild will cause a lot of busywork without immediate value for customers.

And above all: this announcement is driven by the stock price, not AI. The claimed productivity increase from AI is inflated because they want their stock price up.

Sell GitLab stock while you can. The leadership team has no clue what they are doing.

Sadly, non-engineering leaders buy into this dogma. AI is very useful, but in my experience it doesn't 10x you unless you YOLO it.


> First git itself is distributed and built for scale.

There are different dimensions of "scale": handling large monorepos, orders of magnitude more commits, tighter latency requirements (for agentic use, e.g. agentic history navigation)...


> Sadly, non-engineering leaders buy into this dogma. AI is very useful, but in my experience it doesn't 10x you unless you YOLO it.

It makes you produce 10x more errors if you YOLO it ;) especially at a scale even remotely comparable to GitLab :/

Doesn't really inspire the greatest confidence when they are dropping the ball on one of the greatest opportunities, just as GitHub is being ensloppified.

Sometimes I wonder if I am more passionate about my $7/yr VPSes and the websites running on them than multi-billion-dollar companies are (GitLab has a market cap of $4.36 billion and an enterprise value of $3.10 billion [0], to be exact).

"Move fast and break things" works when you have 1,000 users on your website, not 1,000 full-on enterprises (probably more for GitLab).

> I guess they mean "GitLab" instead of "git". But such a huge mistake would never go unnoticed.

> Are they going to rebuild git??

These comments make me realize again how you all (those who were around then, I mean) must have felt during the pets.com and dotcom mania. Some of these sentences read almost like Onion video titles. It all gets so weird at a certain point. I am unsure how to feel about this.

[0]: https://stockanalysis.com/stocks/gtlb/statistics/


That's indeed the trick. SpaceX "invests" in Cursor; it looks good on their balance sheet.

And xAI now gets 10B of more revenue on their income statement.

Perfect financial-statement boosting for the IPO, which in turn will pay back these costs.

At least that’s the bet.


It's exactly what Nvidia is doing with everyone these days. They invest in a company with money that is earmarked to buy Nvidia GPUs. Nvidia's books show lots of investments and lots of sales - win-win! Of course, it's just buying its own products.
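The round trip can be sketched with toy numbers (purely illustrative; the account names and mechanics below are a simplification I made up, not anyone's actual accounting):

```python
# Toy sketch of a circular deal: a vendor "invests" cash in a customer,
# the customer spends that same cash back on the vendor's products, and
# the same dollars show up twice on the vendor's books.

def circular_deal(investment: float) -> dict:
    vendor = {"investments": investment, "revenue": 0.0}
    customer = {"cash": investment, "gpu_spend": 0.0}
    # The customer spends the earmarked cash on the vendor's GPUs.
    customer["gpu_spend"] = customer["cash"]
    customer["cash"] = 0.0
    vendor["revenue"] += customer["gpu_spend"]
    return {"vendor": vendor, "customer": customer}

books = circular_deal(10e9)
# The same $10B appears on the vendor's books both as an investment
# asset and as product revenue, even though no outside money came in.
```

Whether the investment side is carried as equity, options, or something else changes the accounting details, but not the circularity.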


Can they really put $10B worth of options under "investment"?

If so that would seem like the most plausible take on why this is happening.


that seems incredibly shady


AI seems to be full of these kinds of circular deals. It's one reason to be wary of the financials of the business.


There are two kinds of specs: formal specs, and "product requirements / technical designs".

Technical design docs are higher-level than code; they are imprecise but set an architectural direction. Blanks need to be filled in. AI shines here.

Formal specs == code. Some languages shine at being very close to a formal spec. Yes, functional languages.

But let's first discuss which kind of spec we're talking about.


I think nanoclaw is architecturally much better suited to solve this problem.


We make the creator of the PR responsible for the code. Meaning they must understand it.

Also, we only allow engineers to commit (agent generated) code. Designers just come up with suggestions, engineers take it and ensure it fits our architecture.

We do have a huge codebase. We are teaching Claude Code with CLAUDE.md's and now also <feature>.spec.md (often a summary of the implementation plan).
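As a rough illustration of what that looks like (the file contents and paths below are hypothetical, not our actual setup):

```markdown
# CLAUDE.md (repo root) — hypothetical example

## Architecture
- Modular monolith; each module lives under `services/<name>/`.
- Never import across module boundaries; go through the module's public API.

## Conventions
- Before implementing a feature, read the `<feature>.spec.md` next to its module;
  it summarizes the implementation plan. Fill in blanks, don't contradict it.
- All agent-generated code is reviewed and committed by an engineer.
```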

In the end, engineers are responsible.


If you work at OpenAI, leave now while you can.


Do you trust your employees? Do you trust a contractor? Do you trust other people?

AI is similar to a person you don't know that does work for you. Probably AI is a bit more trustworthy than a random person.

But a company, needs to let employees take ownership of their work, and trust them. Allow them to make mistakes.

Is AI any different?


Yes, it is different.

An AI acts and reasons through probabilistic methods, creating a lot more risk than a human with memory, emotions, and rational thinking.

We can't trust AI to do any sensitive work because they consistently f up, with and without malicious intent. Whether it's a fault of their attention mechanisms, reward hacking, instrumental convergence, etc., it's all very different from what causes most human f-ups.


I think a key ingredient here is accountability and liability.

If there's a mistake, you can't blame the computer. Who is the human accountable at the end of it all? If there's liability, who pays for it?

That's where defining clear boundaries helps you design for your risk profile.


Can you sue an AI agent?


It's totally different. People have to obey laws and contracts because there are consequences if they don't: fines, arbitration, courts.

What happens if an AI agent you run causes a lot of damage? The best you can do is turn it off.


Exactly, and I would never turn my email or computer over to a contractor or anyone, really. They get their own environment, email, etc. Their actions stay their actions.


My point is: trust the work of AI just like the work of a contractor. Check and verify, but don't micromanage.


As others have said: accountability


I hardly use Google anymore. I almost always use Claude. It can do the "higher level task" I often want to accomplish when I go to google.

Claude checks multiple websites, reads them all, and answers my question.


This is quite a bad idea. You need to control the size and quality of your context by giving it one file that is optimized.

You don't want to be burning tokens, and large files give diminishing returns, as mentioned in the Claude Code blog.


It is not an "idea" but something I've been doing for months and it works very well. YMMV. Yes, you should avoid large files and control the size and quality of your context.


Indeed, it seems like Vercel completely missed the point about agents.

In Claude Code you can invoke an agent whenever you want as a developer, and it copies the file content into the prompt as context.
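For reference, a Claude Code subagent is just a markdown file the developer invokes on demand (a minimal sketch; the path, frontmatter fields, and wording below are my assumptions and may vary by version):

```markdown
---
name: reviewer
description: Reviews changed files against our architecture guidelines;
  invoked explicitly by the developer.
---

You are a code reviewer. Read the files provided in your context,
compare them against the guidelines in CLAUDE.md, and report any
violations with file and line references.
```

Something like this would live at `.claude/agents/reviewer.md`, with the relevant file contents inlined into the agent's prompt when invoked.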

