
Something I have always appreciated. I'm much less anxious working with very intelligent people, even if their intelligence eclipses mine. They don't have unusual ideas about what I should or should not be able to grasp. They can recognize which of my ideas are intelligent and which of my ideas are half-baked.

Working with unintelligent people, you need to spend more time building up a reputation. They cannot tell whether you're intelligent from what you say or how you explain things -- only from whether you get results. This is nerve-racking for multiple reasons, but chiefly because intelligent people can be wrong, or unlucky, etc, and so to judge someone only by results is partly to judge them by luck.


From my quick research online, it seems they've gone digital-only for season tickets because they don't want people just reselling them to turn a profit. They want actual season-long fans, so now if you transfer too many games they can track it and ban you. This is essentially anti-scalping. There's a legit justification.

You can still buy paper tickets at the stadium for a single game. But not for season passes anymore.

Apparently they've been making exceptions for him in years past where he was able to pay hundreds of dollars to have them custom printed for him. And this year they've decided to no longer provide that exception.

Honestly, this doesn't seem unreasonable to me. At some point, you have to cut off previous technologies because virtually everyone's moved to something better. You also can't buy tickets any more by snail mail with an enclosed check.

If this guy has the money for a season pass (!) he has the money for a smartphone. It seems like he just likes the nostalgia of paper tickets. But that's no longer a reason to maintain a separate ticketing flow just for him, as they had been doing until now.

https://www.aol.com/articles/81-old-lifelong-dodgers-fan-012...

https://www.reddit.com/r/Dodgers/comments/1s5fkni/la_dodgers...


Thank you for this, very much appreciate the thoughtful response.

The piece captures some of the anxieties within OpenAI right now about their competitive position. This obviously ebbs and flows but of late there has been much focus on Anthropic's relative position. We of course mention the allegations of "circular deals" and concerns about partners taking on debt.


> Amodei, in one of his early notes, recalled pressing Brockman on his priorities and Brockman replying that he wanted “money and power.” Brockman disputes this. His diary entries from this time suggest conflicting instincts. One reads, “Happy to not become rich on this, so long as no one else is.” In another, he asks, “So what do I really want?” Among his answers is “Financially what will take me to $1B.”

I can't imagine having such uninspired thoughts and actually writing them down while in a role of such diverse and worthwhile opportunities. I'd like to ask "how the hell do these people find themselves in these positions", but I think the answer is literally what he wrote in his diary. What a boring answer. We need to filter these people out at every turn, but instead they're elevated to the highest peaks of power.


He shouldn't even need a reason. "I don't want a smartphone" should be sufficient and should not lock one out of commerce, events, and other cultural experiences.

In 50 years, everyone's going to have an advertisement-injecting brain implant, and stores are going to require you to have one in order to purchase anything, and they'll lock you out of commerce as a filthy Luddite if you don't get one. And, 50 years from now, commenters on HN will defend those businesses because the implant is "modern" and supporting those ancient smartphones and credit cards is hard to do.


> you can build a crazy popular & successful product while violating all the traditional rules about “good” code

which has always been true


We need to extend the ADA to protect people who are not technologically-abled.

What got me really mad is searching your own history. There's a "search watch history" box on https://www.youtube.com/feed/history

I remember watching a video whose title contains a certain word -- a Minecraft contraption from a small channel (4 videos, 93 subs). I searched for that word, but YouTube couldn't find it. Fortunately, I had saved the world download listed in the video, along with the name of the channel. So I searched the channel name plus the word; it still couldn't find it.

So I searched for just the channel name on the regular search page instead. That works, and, checking their videos, YouTube marks one of them as watched -- with the exact same title I had searched for. But it never showed up in the history search. WTF, YouTube.


You should watch this talk by Nicholas Carlini (security researcher at Anthropic). Everything in the talk was done with Opus 4.6: https://www.youtube.com/watch?v=1sd26pWhfmg


That's interesting research, but I think a more important reason that you don't have access to them (not even via the bare Anthropic api) is to prevent distillation of the model by competitors (using the output of Anthropic's model to help train a new model).
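To make "distillation" concrete, here is a toy numpy sketch: a small linear "teacher" stands in for the large API model, and a fresh "student" is trained only on the teacher's output probabilities. Everything here is illustrative (made-up models and numbers), not any lab's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# "Teacher": a fixed model standing in for the large API model.
X = rng.normal(size=(256, 4))
W_teacher = rng.normal(size=(4, 3))
soft_labels = softmax(X @ W_teacher)  # probabilities queried from the teacher

# "Student": a fresh model trained purely on the teacher's outputs.
W_student = np.zeros((4, 3))
for _ in range(500):
    probs = softmax(X @ W_student)
    grad = X.T @ (probs - soft_labels) / len(X)  # cross-entropy gradient
    W_student -= 0.5 * grad

# After training, the student closely imitates the teacher's distribution,
# without ever seeing the teacher's weights or training data.
gap = np.abs(softmax(X @ W_student) - soft_labels).mean()
```

The point of withholding raw logits or internals is that they would make exactly this imitation loop far more sample-efficient for a competitor.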

Suggestions:

Put the parameters into the URL so searches can be bookmarked -- zip codes, terms, filters, and other aspects can then be shared easily as well.

Description search that can both include terms (like i7, 16GB), which is good for electronics, and exclude them -- for example excluding "repair" or "needs repair", which is helpful for many things.

Category-specific filters: vehicle mileage range, year.

Keyword classification filters like pickup, delivery, payment methods, how many days you have to pay if known, etc.

You are probably already thinking along these lines for some of them; this is just encouragement to implement. Yes, categorization/filters can be fuzzy (commas, which word or plural is used, etc.), so feel free to add a [beta] or [experimental] tag until a recipe that gets most of the stuff works.
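The first suggestion amounts to round-tripping search state through the query string. A minimal stdlib sketch (the parameter names here are made up for illustration, not your site's actual API):

```python
from urllib.parse import urlencode, urlsplit, parse_qs

# Hypothetical search state; any filter that matters goes in here.
params = {"q": "thinkpad i7 16GB", "zip": "94103",
          "exclude": "repair", "max_price": "400"}
url = "https://example.com/search?" + urlencode(params)

# Anyone opening the bookmarked or shared URL recovers the exact search:
restored = {k: v[0] for k, v in parse_qs(urlsplit(url).query).items()}
```

If the app reads its initial state from the URL on load and writes it back whenever a filter changes, bookmarking and sharing fall out for free.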

Thanks for building this, I bookmarked it and already shared it with a few friends.


Releasing the model to bad actors at the same time as the major OS, browser, and security companies would be one idea. But some might consider that "messed up" too, whatever you mean by that. But in terms of acting in the public benefit, it seems consistent to work with companies that can make significant impact on users' security. The stated goal of Project Glasswing is to "secure the world's most critical software," not to be affirmative action for every wannabe out there.

I grew up in a similar environment, similar trajectory, but in Africa.

Dad was a teacher in a rural school, mum stayed at home.

Until I went to school I would stay outside all day with my friends, playing in and around the rivers and dams, making our own fun with abandoned cars and rusted out farming equipment.

Our school had one computer, and I was lucky enough to get to use it after hours from time to time.

I would study the manual from front to back so I could optimise my time while on the computer.

Practiced typing on a typewriter to type in code listings faster later (aging myself here ;)

Today I build AI agents and infrastructure to run them for a hyperscaler, and my car drives me around. Feels like another lifetime ago.


This hit the nail on the head.

I find much of the HN community insightful and interesting, but in terms of consumer feedback (especially in a B2C environment) I wouldn't touch feedback here with a 10-foot pole.

I don't mean that to be an insult, quite the opposite. Most people here are power users. But that is a galaxy away from how the average user interacts with the internet.


It's so homogeneous that there are 4 national languages, one of which is German with tons of local dialects. It's so homogeneous that each canton has its own sub-regulations. It's so homogeneous that in its biggest city, Zurich, 34% of people are foreigners and 45% were born outside of Switzerland.

But we can look at the opposite end of the spectrum -- Moldova, the poorest country in Europe: 85% of infrastructure is covered by fibre, and >90% of the population has the option to get 1 Gbit fibre.

There are always reasons why something can't be done, just like solving the frequent school shooting problem in the US.


> Large American AI company does not list the US as an adversarial actor

This is not a surprise or a gotcha.


For anyone who liked this, I highly suggest you take a look at the CuriousMarc youtube channel, where he chronicles lots of efforts to preserve and understand several parts of the Apollo AGC, with a team of really technically competent and passionate collaborators.

One of the more interesting things they have been working on is a potential re-interpretation of the infamous 1202 alarm. As of this writing, it is popularly described as something related to nonsensical readings from a sensor that could be (and was) safely ignored during the actual moon landing. However, if I remember correctly, some of their investigation revealed that there were actually many conditions under which that error would have been extremely critical and would likely have doomed the astronauts. It is super fascinating.


Kinda reminds me of the story of King Croesus of Lydia, who asked the Oracle of Delphi whether he should wage war against Cyrus the Great. The Oracle told him that by doing so he would "destroy a great empire." Croesus promptly attacked the Persians and lost.

https://en.wikipedia.org/wiki/Croesus#War_against_Persia_and...


This thread set off a software penalty called the flamewar detector.* I turned that off as soon as I saw it.

(* This was predictable from the title, because the question in it was inevitably going to trigger an avalanche of crap replies. Normally we'd change the title to something less baity, and indeed the article is so substantive that it deserves a considerably better one. But I'm not going to change it in this case, since the story has connections to YC - about that see https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu....)


The system card for Claude Mythos (PDF): https://www-cdn.anthropic.com/53566bf5440a10affd749724787c89...

Interesting to see that they will not be releasing Mythos generally. [edit: Mythos Preview generally - fair to say they may release a similar model but not this exact one]

I'm still reading the system card but here's a little highlight:

> Early indications in the training of Claude Mythos Preview suggested that the model was likely to have very strong general capabilities. We were sufficiently concerned about the potential risks of such a model that, for the first time, we arranged a 24-hour period of internal alignment review (discussed in the alignment assessment) before deploying an early version of the model for widespread internal use. This was in order to gain assurance against the model causing damage when interacting with internal infrastructure.

and interestingly:

> To be explicit, the decision not to make this model generally available does _not_ stem from Responsible Scaling Policy requirements.

Also really worth reading is section 7.2 which describes how the model "feels" to interact with. That's also what I remember from their release of Opus 4.5 in November - in a video an Anthropic employee described how they 'trusted' Opus to do more with less supervision. I think that is a pretty valuable benchmark at a certain level of 'intelligence'. Few of my co-workers could pass SWEBench but I would trust quite a few of them, and it's not entirely the same set.

Also very interesting is that they believe Mythos is higher risk than past models as an autonomous saboteur, to the point they've published a separate risk report for that specific threat model: https://www-cdn.anthropic.com/79c2d46d997783b9d2fb3241de4321...

The threat model in question:

> An AI model with access to powerful affordances within an organization could use its affordances to autonomously exploit, manipulate, or tamper with that organization’s systems or decision-making in a way that raises the risk of future significantly harmful outcomes (e.g. by altering the results of AI safety research).


> Magawa retired from bomb sniffing in June 2021 owing to his old age, as is standard for APOPO's HeroRATs.

> He spent a number of weeks mentoring 20 newly-recruited rats before ultimately retiring to a life of "snacking on bananas and peanuts".

> https://en.wikipedia.org/wiki/Magawa

An end to life worthy of envy.


As much as people on Hacker News complain about subscription models for productivity and creativity suites, the open-armed embrace of subscription development tools (services, really) which seek to offload the very act itself makes me wonder how and why so many people are eager to dive right in. I get it. LLMs are cool technology.

Is this a symptom of the same phenomenon behind the deluge of disposable JavaScript frameworks of just ten years ago? Is it peer pressure, fear of missing out? At its root, I suspect so; though of course I imagine it's rare for the C-suite to have ever mandated the use of a specific language or framework, and LLMs represent an unprecedented lever of power for taking an even bigger shot at first-mover advantage, from a business perspective. (Yes, I am aware of how "good enough" local models have become for many.)

I don't really have anything useful or actionable to say here regarding this dialling back of capability to deal with capacity issues. Are there any indications of shops or individual contributors with contingency plans on the table for dialling back LLM usage in kind to mitigate these unknowns? I know the calculus is such that potential (and frequently realised) gains heavily outweigh the risks of going all in, but, in the grander scheme of time and circumstance, long-term commitments are starting to look more obviously risky. I am purposefully trying to avoid "begging the question" here; if this were some other tool or service instead of LLMs, reactions to these events would have been far more pragmatic, with less reluctance to invest time in in-house solutions when dealing with flaky vendors.


I'm much more impressed by Chinese state-made eagles vs. cats video: https://www.youtube.com/watch?v=5dGY0_pgkv8

> A contract is toilet paper

It isn't, but you can't get blood from a stone and squeezing costs money.

It sounds like the entity that the contract is with has no real assets and/or is based in a jurisdiction which is hard to enforce judgements in. That's a case where you need to get paid up-front, which is the real lesson in this article.


I've had a look at the (vibe coded) repro linked in the article to see if it holds up: https://github.com/juxt/agc-lgyro-lock-leak-bug/blob/c378438...

The repro runs on my computer, that's positive.

However, Phase 5 (deadlock demonstration) is entirely faked. The script just prints what it _thinks_ would happen. It doesn't actually use the emulator to prove that its thinking is right. Classic Claude being lazy (and the vibe coder not verifying).

I've vibe coded a fix so that the demonstration is actually done properly on the emulator. And also added verification that the 2 line patch actually fixes the bug: https://github.com/juxt/agc-lgyro-lock-leak-bug/pull/1
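The difference in miniature (plain Python with a toy two-lock deadlock, nothing to do with the actual AGC emulator or this repo's code): a genuine demonstration runs the system and asserts the outcome, instead of printing what the author thinks would happen.

```python
import threading

a, b = threading.Lock(), threading.Lock()
ready = threading.Barrier(3)

def worker(first, second):
    with first:
        ready.wait()   # both workers hold their first lock before continuing
        with second:   # each now blocks on the lock the other one holds
            pass

t1 = threading.Thread(target=worker, args=(a, b), daemon=True)
t2 = threading.Thread(target=worker, args=(b, a), daemon=True)
t1.start(); t2.start()
ready.wait()           # release both workers simultaneously

t1.join(timeout=1.0)
t2.join(timeout=1.0)
# Neither thread finished within the timeout: the deadlock actually occurred,
# rather than being narrated by a print statement.
deadlocked = t1.is_alive() and t2.is_alive()
```

The same pattern applies to the repro: drive the emulator into the suspect state, then assert on its registers/flags, both before and after the patch.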


I feel Idiocracy is irresistible bait for 'not like the other girls'-types.

Every time this movie comes up, droves of people mention how they get it while others don't. It's becoming a trope in itself.


Super interesting. I wish this article wasn’t written by an LLM though. It feels soulless and plastic.

Reading this makes me even happier to pay for Anthropic.

Amodei and his sister saw through the behavior and called it out.

> “Eighty per cent of the charter was just betrayed,” Amodei recalled. He confronted Altman, who denied that the provision existed. Amodei read it aloud, pointing to the text, and ultimately forced another colleague to confirm its existence to Altman directly. (Altman doesn’t remember this.) Amodei’s notes describe escalating tense encounters, including one, months later, in which Altman summoned him and his sister, Daniela, who worked in safety and policy at the company, to tell them that he had it on “good authority” from a senior executive that they had been plotting a coup. Daniela, the notes continue, “lost it,” and brought in that executive, who denied having said anything. As one person briefed on the exchange recalled, Altman then denied having made the claim. “I didn’t even say that,” he said. “You just said that,” Daniela replied. (Altman said that this was not quite his recollection, and that he had accused the Amodeis only of “political behavior.”) In 2020, Amodei, Daniela, and other colleagues left to found Anthropic, which is now one of OpenAI’s chief rivals.


Interesting to hear! While this hasn’t been a commonplace reaction, I think if I do my job right it should allow people to read the facts as they will, exactly like this. It’s strenuously designed to be fair and, where appropriate, even generous.
