
The free Google AI mode got it for me on the first try by just pasting in the comment and asking what TRAA was in that context.

> Look at popular projects -- a few minutes after an issue is filed they have sometimes 10+ patches submitted. All generating PRs and forks and all the things.

I think this is a really important point that is getting overlooked in most conversations about GitHub's reliability lately.

GitHub was not designed or architected for a world where millions of AI coding agents can trivially generate huge volumes of commits and PRs. This alone is such a huge spike and change in user behavior that it wouldn't be unreasonable to expect even a very well-architected site to struggle with reliability. For GitHub, N 9s of availability pre-AI simply does not mean the same thing as N 9s of availability post-AI. Those are two completely different levels of difficulty, even when N is the same.
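
Purely as a back-of-the-envelope illustration of why the same nines get harder (the request volumes below are invented placeholders, not real GitHub figures): the wall-clock downtime budget implied by N nines stays fixed, while the workload that has to fit inside that budget keeps growing.

    # Toy arithmetic only; none of these figures are GitHub's actual numbers.
    MINUTES_PER_YEAR = 365 * 24 * 60

    def downtime_budget_minutes(nines: int) -> float:
        """Allowed downtime per year at N nines of availability."""
        availability = 1 - 10 ** (-nines)
        return MINUTES_PER_YEAR * (1 - availability)

    for nines in (2, 3, 4):
        print(f"{nines} nines -> {downtime_budget_minutes(nines):7.1f} minutes/year of allowed downtime")

    # Hypothetical request volumes before and after agent-driven traffic (made up):
    for label, rps in [("pre-AI", 50_000), ("post-AI", 500_000)]:
        # At 99.9% availability, up to 0.1% of requests may fail; the absolute budget
        # scales with load, but so does everything that can knock the service over.
        allowed_failures_per_day = rps * 86_400 * 0.001
        print(f"{label}: ~{allowed_failures_per_day:,.0f} failed requests/day within a 99.9% budget")

The availability target is a fraction, so hitting it at 10x the traffic is a different engineering problem even though the number on the status page looks identical.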


And that's not even getting into how useless it is to create tens of PRs to solve the same issue.

But GitHub karma botting is a thing now.

Remember those elitist people with 90,000 answers who would remove answers on Stack Overflow because their own answer was better?

Yup, now they are on GitHub farming karma with bots.


Yeah, this is indeed a good insight. Back in the day, who would have expected so many bots to "review" code and leave overly verbose comments under every PR in a popular repo?

I feel like it's pretty easy to predict what OpenAI is trying to do. They want their codex agent integrated directly into the most popular, foundational tooling for one of the world's most used and most influential programming languages. And, vice versa, they probably want to be able to ensure that tooling remains well-maintained so it stays on top and continues to integrate well with their agent. They want codex to become the "default" coding agent by making it the one integrated into popular open source software.

This makes much more sense as a Zoom-buys-Keybase-style acquihire. I bet within a month the Astral devs will be on new projects.

Bundling codex with uv isn't going to meaningfully affect the number of people using it. It doesn't increase the switching costs or anything.


This sounds like grief and depression to me. You're struggling because you're still mentally filtering everything you do through another person who is no longer part of your life. You must learn to do things for you, not for someone else. You may find that some things you thought you enjoyed you were actually only doing for someone else. Likewise, you may discover that what you want to do purely for yourself is different from what you might expect or predict.

Time will heal some of this naturally. But the #1 recommendation I would always make to anyone in this situation is to pursue exercise. Weightlifting, hiking, etc., generate rapidly compounding results across multiple dimensions of your life and also often generate some of the most authentic social experiences you can find as a 30+ year old adult.


Same here. The initial version of WSL back in the day could certainly be rough, but modern WSL2 seems totally fine to me. It is the key ingredient that allows me to have one workstation that can do "everything".


I'm surprised people still think this. Google has the strongest position of any company in the world on AI. They have expertise and capability across the entire stack from chips to data centers to fundamental research to frontier models. Just because they weren't first-to-market with a chatbot doesn't mean they almost lost or made some terrible durable blunder.

That's about Google, though. The picture about Sundar specifically is harder to evaluate. The pessimistic take is that Google had that position already and Sundar failed to proactively lead through a fundamental product shift, forcing the company onto the defensive for some time. The optimistic take is that Sundar, having occupied the top spot since 2015, prioritized investments in the company's overall technology development, then successfully executed a rapid product pivot when the market changed, securing a dominant position in both research and product that nobody else can compete with long-term.


All of Google's advantages in AI are despite Sundar Pichai's leadership, not because of it.


That's not clear to me. He's been in charge for over a decade, and the company he's in charge of has the most dominant position in AI in the world.


People give him way too many breaks; he's a money manager. He was asleep at the wheel when OpenAI absolutely steamrolled them, even though they very easily could have won that race.


Anthropic specifically called out systems "that take humans out of the loop entirely and automate selecting and engaging targets".

I take that to mean they don't want the military using Claude to decide who to kill. As a hyperbolic yet frankly realistic example, they don't want Claude to make a mistake and direct the military to kill innocent children accidentally identified as narco-terrorists.

At least, that's the most charitable interpretation of everything going on. I suspect Anthropic is also worried that the sitting administration wants to use AI to help it execute a full autocratic takeover of the United States, and that the administration is attempting to kill one of the world's most innovative companies to set an example and pressure other AI labs into letting their technology be used for such purposes.


Right. Did the DoW ask for that? Or does Anthropic make a product that does that?


Obviously Anthropic does make a product that could do that -- just give Claude classified data and ask it who to target.

Obviously the military wants to use it for that purpose since they couldn't accept Anthropic's extremely limited terms.

One can easily and immediately infer that the answers to both of your questions are yes.


The DoW has explicitly said they don’t want this, and what you are describing are not automated kill drones.

Anthropic’s safeguards already prevent what you are describing, again the thing that the DoW has said they don't want.


I don't know what you're referencing, but it doesn't matter. I judge people by their actions more than their words. The actions in this case are simple: Anthropic doesn't want their models to be used for fully autonomous weapons or mass surveillance of American citizens, but everything else is fair game; in response, the sitting administration is attempting to kill the company (since a strict reading of the security risk order would force most of their partners, suppliers, etc., to cut them off completely).

Giving precedence to words over actions is how you get taken advantage of, abused, deceived, etc.


GOOD. I don’t want Anthropic, or anybody else to have their tools used for these things either.

But Dario is showing weakness here by talking around it. Whatever they were asked to do, they should just be upfront about it.


> Whatever they were asked to do, they should just be upfront about it.

Anthropic is not being asked to do anything, except renegotiate the contracts. The DoW's Claude models run on government AWS. Anthropic has minimal access to these systems and does not see the classified data that is being ingested as prompts. It is very unlikely that Dario actually knows what the DoW wants to do with these models. But even if he did, it would be classified information that he is not at liberty to disclose.

However, the product they provide likely has safety filters that cause some prompts not to be processed if they violate the two contractual conditions. That is what the DoW wants removed.


He didn't talk around it. He wrote down specifically what the two issues were, which is precisely why now the entire world knows what's actually going on. If risking your company's existence to prevent a (potential) atrocity is weakness, I don't know what strength is.


Strength is saying what they were asked to do. I want to know!

Did the DoW ask them to make kill drones? Because if so THAT IS A REALLY BIG DEAL.

The vagueness is irritating. He’s saying they won’t do something, the DoW is saying they don’t even want them to do that, which should resolve the issue, but hasn’t. There is obviously something else at play here.


You're confused because you're taking everything the people involved are saying literally and trusting everything plainly at face value. The existence of the contradiction you're pointing out should be evidence that you need to think a level deeper, i.e., that you need to look at actions more than words. There's an incredibly easy resolution of the contradiction that is troubling you, and it's already been pointed out clearly above.


The DoD is explicitly asking for those things, by forcing contract renegotiation towards a contract that is identical in every way, except removing the prohibition on those things.

If the DoD did not want those things, it would not be forcing a contract renegotiation to include them, at great cost to the government.


No, the DoW may be implicitly asking for those things.

That’s the point I’m trying to make here: Anthropic should just say the unsaid thing here.

DoW asked for the following thing: $foo. We won’t give that to them.


> Anthropic should just say the unsaid thing here.

> DoW asked for the following thing: $foo. We won’t give that to them.

Anthropic has explicitly said that multiple times, including in the letter we are presently discussing.

$foo is the ability to use Claude for domestic mass surveillance and analysis, and/or fully-autonomous killbots.


That thing is removing the restrictions from the contract.


https://x.com/SeanParnellASW/status/2027072228777734474?s=20

Here's the Chief Pentagon Spokesman pointing to the same verbiage and reiterating that they won't agree to those terms of use.


The first sentence of that post is:

> The Department of War has no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement.


Saying something on Twitter is not a guarantee.

Tomorrow he could change his mind to "we want to use AI to develop autonomous weapons that operate without human involvement." The issue is that he wants Anthropic to change the use terms because "We will not let ANY company dictate the terms regarding how we make operational decisions."


>he said this

>>no he didn’t he actually said the opposite of that and the link you just posted says the opposite of what you are claiming

>but he might change his mind!

Okay?


You asked repeatedly:

>Did the DoW ask for these things?

>Did the DoW ask for that?

I showed you where the spokesperson asked for the terms to change so they could make autonomous weapons. Now you're shifting the goalposts.


This administration would never lie, no siree! And especially not on Twitter!

I'm torn here. Who should we believe? The normal people or the people who operate exclusively in dishonesty?


And yet, if that statement were true, and not a lie, we would not be here right now, discussing their insistence upon being able to use software for precisely those things.

Is a pundit/politician lying to you a new experience?


I certainly wouldn’t give them the benefit of the doubt.


Then Anthropic should say: this is what the DoW has asked for, and we aren’t able to do it, or don’t want to.


They may not be legally allowed to.


Pangram itself looks like it was just generated by Google AI Studio.


I'm very curious about this as well. This is the main thing that has held me back from a meaningful rebalancing. Eating a huge tax bill to avoid a theoretical future loss of unknown size and duration, while also losing out on potential gains if that loss never materializes, is a hard pill to swallow. I suppose this is probably why most long-term investment advice suggests not trying to time the market unless you have a very short time horizon. (Note: for me, I'm referring to funds in taxable brokerage accounts.)


I have a family member who once told me that their net worth was roughly halved in 2008. They probably recovered it if they stayed in the market after, but I don't know what they did.

I suppose the real question is whether you can weather the storm long enough for the market to recover. And beyond that, how cynical you are overall about everything tanking completely before that can happen. I wonder a lot about that second one.


If you'd been DCAing a fixed amount monthly into stocks for 10 years prior to the 2007 peak, then during the crash continued doing so without selling, the total value of your portfolio would've matched its pre-crash peak in just 3 years and exceeded it significantly by the time the market itself recovered in ~5.5 years.

3 years is really not a long time. So I'd say it comes down to emotional fortitude and probability of staying employed. If your time horizon is longer than 3 years, the calculation of whether to sell should essentially come down to calculating your odds of keeping your job. I bet it's possible to build a robust mathematical model that recommends a decision given your best personal estimate of your layoff probability during a severe market crash.
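
As a very rough sketch of what such a model could look like (every input here is a made-up assumption, not advice or anything from the thread):

    # Toy stay-vs-sell model; replace every number with your own estimates.
    # This is an illustration of the idea above, not financial advice.

    def expected_value_stay(portfolio, crash_drawdown, layoff_prob, expenses_during_layoff):
        """Expected portfolio value (at recovery) if you stay invested through a crash.

        If you keep your job, you ride the drawdown and recover to roughly the
        pre-crash value. If you're laid off, assume you must sell part of the
        portfolio near the bottom to cover expenses, locking in that fraction of the loss.
        """
        bottom = portfolio * (1 - crash_drawdown)
        value_if_employed = portfolio
        forced_sale_fraction = min(expenses_during_layoff / bottom, 1.0)
        value_if_laid_off = portfolio * (1 - forced_sale_fraction)
        return (1 - layoff_prob) * value_if_employed + layoff_prob * value_if_laid_off

    def value_if_sell_now(portfolio, capital_gains_tax):
        """Value if you sell today: pay the tax bill and sit in cash, with no market risk
        but also no recovery upside. (Ignores that the cash would also cover expenses.)"""
        return portfolio * (1 - capital_gains_tax)

    # Illustrative inputs -- all assumptions, not recommendations:
    stay = expected_value_stay(500_000, crash_drawdown=0.40, layoff_prob=0.15,
                               expenses_during_layoff=60_000)
    sell = value_if_sell_now(500_000, capital_gains_tax=0.20)
    print(f"Expected value if you stay invested: ${stay:,.0f}")
    print(f"Value if you sell now:               ${sell:,.0f}")

The specific numbers don't matter; the point is that the decision mostly hinges on your layoff probability and how much of the loss you'd be forced to lock in at the bottom.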


You vastly overestimate the money a lot of people have. A lot of people will be destroyed by this.


"Signaling" is just the information that your visible choices send to those around you, including strangers. That's why it's called "signaling" -- your choices are broadcasting an information signal about you to others.

To not signal, you must make choices that carry little or no information in the context in which they exist. If you make choices in a context in which they are abnormal (e.g., dressing very casually in a context that others can't access in similar clothing), they inherently broadcast unique information about you. In some cases, that information can create a complex side effect in how people perceive you, even if you don't intend it (e.g., "this person put in the absolute bare minimum effort, because they knew we'd have to be nice to them no matter what, which feels disrespectful to me; their lack of optional effort for others signals that they only care about themselves, not us").


> "Signaling" is just the information that your visible choices send to those around you, including strangers. That's why it's called "signaling" -- your choices are broadcasting an information signal about you to others.

Where the theory falls flat re: signaling to strangers is that there are people who do dress very differently, use different cars, and sometimes shave, sometimes not, on different days of the week.

And it's also very well known that many people simply do not pay attention to others. They mind their own business and that's it.

When I'm driving a random car and I'm dressed casually and not shaven, what signal am I sending to the strangers I'll see once during the day and who are anyway only minding their own business?

And the next day, when I put on fancy shoes and an expensive watch, take out one of my Porsches, and then go out and cross paths with strangers, what signal am I sending? I'll only ever see them on that one day. Strangers who, also, only mind their own business.

The funny thing is: just like I don't give a flying fuck about other people, other people don't give a flying fuck about me.

But anyway, how can I be signaling one thing to strangers on Monday and another thing to different strangers on Tuesday?

Where it gets better: some days my wife prepares the clothes she wants me to wear (maybe because people are coming to the house later or whatever), and some days she doesn't, and I just change underwear after my shower and put on the same jeans I had on the day before. Then I go to the garage: we both have several car keys. Maybe she decided to take my Porsche, maybe not.

So basically: I don't always pick the clothes I wear and my wife loves to sometimes take my Porsche.

What am I "signaling" to strangers? Not only am I not totally in control of my outfit and my car, but I also simply don't care.

"Grug hungry. Grug grabs money or credit card. Grug puts whatever clothes on. Grug goes to whatever car is in the garage. Grug drives to grocery store to buy atoms to stay alive".

That's literally me.

Now maybe people in this thread meant to say: "signaling in the workplace towards people you see every day at work" but that's way different than "signaling to strangers".

To put it simply: I think a lot of people in this thread are way overestimating the level of caring other people exhibit.

I guarantee you that on the caring continuum, most people are by far on the "I couldn't care less" extreme.

There is such a thing as people who simply don't give a fuck and nobody is signaling anything to people who aren't even paying attention to you.

Grug goes to the grocery store to buy atoms to survive, not to look at other people's clothes/watch/car.

Signaling to people who aren't strangers: OK, that one I can buy. But signaling to strangers I call a load of horse shit, because many people can "signal" two entirely different things on two different days of the week. The only signal people see is about as meaningful as what people see reading tea leaves.

