Hacker News | SatvikBeri's comments

If the government just banned all government agencies from working with Anthropic, that would be reasonable. But they didn't. They're banning any company that works with the military from working with Anthropic in any way, using a law that has never been invoked against an American company.

Well, great! Sounds like this is exactly what Anthropic wants and hopes for: for their technology to minimally benefit warfighting. Otherwise, are you suggesting they are so evil that they were just advertising those terms to fool us and virtue signal?

> has never been invoked against an American company.

There's always a first. I'm assuming it's not illegal to do that. It's a completely reasonable business decision to ensure your supply chain doesn't depend on things that may change against your goals. For example, you don't want to build on or depend on an open-source platform that you know is gonna rug-pull, if you're counting on it remaining open source, do you? American or otherwise.


Anthropic was not an anticipated injured party with standing in American courts until today. Now they are very much injured and do have standing to bring a whole slew of lawsuits against an administration that is operating illegally and unconstitutionally against an American company. This seems like the start of the battle for Anthropic, not the end. The government signed contracts; they don't get to just renege whenever they fucking please because the cheeto bandito in chief and his unhinged alcoholic secretary of defense are unreliable liars.

You can use AllocCheck.jl to guarantee your code doesn't allocate. It's conservative, so it'll sometimes throw false positives, but shouldn't throw false negatives. You apply the checks to function definitions.

You can profile memory usage line by line in detail pretty easily with tools like @timeit (TimerOutputs.jl) or @btime (BenchmarkTools.jl).

In practice I've found it pretty easy to get inner loops down to a few or zero allocations when needed for parallelization.
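A minimal Base-only sketch of that workflow (the function name is illustrative; AllocCheck.jl's `@check_allocs` works similarly, but as a macro applied to the function definition):

```julia
# Hypothetical inner-loop kernel, written in-place so the hot path allocates nothing.
function axpy!(y::Vector{Float64}, a::Float64, x::Vector{Float64})
    @inbounds for i in eachindex(y, x)
        y[i] += a * x[i]
    end
    return y
end

y, x = zeros(1000), rand(1000)
axpy!(y, 2.0, x)                       # first call compiles the method
allocs = @allocated axpy!(y, 2.0, x)   # Base macro: bytes allocated by the call (ideally 0)
# With AllocCheck.jl you'd instead write `@check_allocs function axpy!(...) ... end`,
# which fails the check if any code path in the function could allocate.
```

`@allocated` only measures one call; AllocCheck's guarantee is stronger because it analyzes every path through the compiled code.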


> might not need to use a compiled language

A very minor nit: Julia is a compiled language, but it has an unusual model where it compiles functions the first time they're used. This is why highly-optimized Julia can have pretty extreme performance.
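You can see the compile-on-first-use model directly with Base's `@time`, which reports compilation time separately on recent Julia versions (the function here is just a throwaway illustration):

```julia
sumsq(v) = sum(abs2, v)   # example function; compiled on first call per argument type

v = rand(10^6)
@time sumsq(v)   # first call: includes JIT-compiling sumsq for Vector{Float64}
@time sumsq(v)   # second call: the compiled specialization is reused, so it's much faster
```

Because each specialization is compiled to native code (via LLVM), the steady-state performance is in compiled-language territory, not interpreter territory.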

> I would say the performance is overstated by the community but out of the box it is good enough to avoid languages like C/C++ to build solutions.

For about a year we had a 2-hour problem in our hiring pipeline where the main goal was to write the fastest code possible to do a task, and the best 2 solutions were in Julia. C++ was a close third, and Rust after that.


> A very minor nit: Julia is a compiled language

Caught. Should have just listed the usual suspects (C, C++, maybe Rust nowadays?).

> and the best 2 solutions were in Julia. C++ was a close third, and Rust after that.

Awesome. Which type of problem was this, if you can share?


It was roughly "given this dataset with 100 time series, write code to calculate these statistics as performantly as possible. Make one single-threaded version and one using 4 cores."

The C++ versions were around 250 lines long, and the developers generally only had time to try one approach, while the Julia versions were around 80 lines long and the developers had time to try several approaches. I'm sure the best theoretically possible C++ version is faster than the best Julia version, but given that we're always working under time constraints, the performance per developer-hour in Julia tends to be really good in my experience.


There are some big advantages to having it in the same language. You can write the easy, non-performant version quickly and gradually refactor while having an easy test case. This is especially nice in situations where you expect to throw away a lot of the code you write, e.g. research. Also, you don't have to write most code in a super performance-obsessed way – Julia makes it really easy to find the key 5% that needs to be rewritten.
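For finding that hot 5%, the stdlib `Profile` module is usually enough; this is a toy sketch with made-up function names, not anyone's actual pipeline:

```julia
using Profile

# Contrived two-stage pipeline where one stage dominates the runtime.
expensive(n) = sum(sin(i) for i in 1:n)
cheap(n)     = float(n) + 1.0
run_pipeline(n) = expensive(n) + cheap(n)

run_pipeline(10)              # warm up so the profile shows runtime, not compilation
@profile run_pipeline(10^7)
Profile.print(mincount=10)    # the report makes it obvious `expensive` is the hotspot
```

Once the profile points at the hot function, you rewrite just that function (often checking it with `@btime`/`@allocated`) and leave the rest of the code in its simple form.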

We've ported some tens of thousands of lines of numpy-heavy Python, and in practice our Julia code is actually more concise while being about 10x-100x more performant.


> There are some big advantages to having it in the same language.

Sure, but why not write it all in Rust or similar then? (Not writing it all in C++ I would understand.)

> This is especially nice in situations where you expect to throw away a lot of the code you write, e.g. research.

Right, that is very different from what I do. There is code I wrote 15 years ago that is still in use. And I expect the same would be true in 15 years from now. Though that is also code where a GC is a no-go entirely (the code is hard real-time).


> Sure, but why not write it all in Rust or similar then? (Not writing it all in C++ I would understand.)

I don't know Rust as well as I'd like, but when we've worked with some strong Rust programmers, their versions of the code are something like 4x as long as equivalent Julia code with minimal performance improvements. And since we primarily care about trading algorithms that don't have binary results, it's pretty helpful to be able to understand those at a glance.

Also, Rust's ecosystem for numeric computing still seems pretty underdeveloped, though it's getting better.


> Sure, but why not write it all in Rust or similar then?

Because it is a clunky, ugly, unreadable mess. That is a cost you may be willing to pay in some circumstances, but not while researching algorithms, doing data science, writing simulations, etc.

> Though that is also code where a GC is a no-go entirely (the code is hard real-time).

There are GC systems with hard real time guarantees. There is more out there than just OpenJDK and Go.

Also, hard real-time systems rarely use runtime allocations, at least with off the shelf allocators.


REPLs/notebooks are really nice in situations where you don't know what you want ahead of time and are throwing away 90% of the code you write, such as trying to find better numerical algorithms to accomplish some goal, exploring poorly documented APIs (most of them), making a lot of plots for investigation, or working a bunch with matrices and DataFrames (where current static typing systems don't really offer much).

Yeah, this is an entirely different domain than what I work in (hard real-time embedded and hard real-time Linux).

Poorly documented APIs exist everywhere, but they are not something you can rely on anyway: if behaviour isn't documented, it can change without being a breaking change. It would be irresponsible to (intentionally) depend on undocumented behaviour. Rather, you should seek to get it documented; otherwise there is a big risk that your code will break some years down the line when someone tries to upgrade a dependency. Most software I deal with is long-lived. There is code I wrote 15 years ago that is still in production, where the code base is still evolving, and I see no reason why that wouldn't be true in another 15 years as well.

At least you should write tests to cover any dependencies on undocumented behaviour. (You do have comprehensive tests right?)


Yeah it's definitely not for all domains.

I usually write the tests afterwards, except for very well-defined engineering problems, and the REPL exploration helps inform what tests to write.


We use Julia at our quant fund. We looked into it and several other alternatives 5 years ago as a more performant replacement for numpy/scipy/etc., and Go was one of the alternatives we considered, along with things like numba and writing C extensions.

Julia won for a very simple reason: we tried porting one of our major pipelines in several languages, and the Julia version was both the fastest to port and the most performant afterwards. Plus, Julia code is very easy to read/write for researchers who don't necessarily have a SWE background, while Go or C++ are not.

We started using Julia in the Research Infrastructure team, but other teams ended up adopting it voluntarily because they saw the performance gains.


Eh, he's given an interview where he talks about the Swift decision. He and several maintainers tried building some features in Swift, Rust, and C++, spending about two weeks on each one IIRC. And all the maintainers liked the experience of Swift better. That might have ended up wrong, but it's a pretty reasonable way to make a decision.

Two weeks with Rust and you're still fighting the compiler. I think the LLM pulled a lot of weight selling the language; it can help smooth over the tricky bits.

idk man, it's rare to fight the compiler once you've used Rust for long enough, unless you're doing something that's even the slightest bit complex with async.

Once you get good at schmoozing the compiler, you just start creating actual logical bugs faster.


That's why I said "two weeks."

That goes for almost every language. I recall my first couple of weeks with various compiled languages, and they all had their 'wtf?' moments when a tiny mistake in the input generated reams of output. But once you get past that point you simply don't make those mistakes anymore. Try missing a '.' in a COBOL program and see what happens. Make sure there is enough paper in the box under LPT1...

Yeah, the main issue with Swift is that the C++ interop (which was absolutely bleeding-edge) still isn't to the point of being able to pull in parts of the Ladybird codebase.

If I recall correctly, part of this was around classes they had that replaced parts of the STL, whereas the Swift C++ interop makes assumptions about things with certain standard names.


It's interesting that you find compaction trivial. I think it's one of the most important tasks, to the point where I use Amp these days because its "handoff" feature is so much nicer than CC's compaction.

What's nicer about their handoff feature?

It's a lot of little things + polish more than any big change.

Amp has a first-class concept of threads: a tree of sessions. This is really nice for long work on related features; it tracks which threads are spawned from others. When you type /handoff, it asks you for a goal for the new thread, then summarizes your existing context with respect to that goal, and opens a new thread with just that context.

This makes it really easy and pleasant to spin up new sessions to do relatively focused tasks, which keeps your context usage low and the model smarter. It also enables some really nice use cases like opening up an old thread where you built a feature two weeks ago, then spinning up a new one to do a modification.

You can do all of this with Claude Code but it's just clunkier and in my experience hasn't worked nearly as well, e.g. I find the compactions tend to be full of a lot of useless stuff, or miss something important compared to the handoffs.


is it similar to the concept of "branch off from here" feature in some chat UIs where you can continue one convo in different directions? but does amp keep each thread in a separate worktree/isolated env and let you choose which one to merge?

At work I'll buy a max subscription for anyone on my team who wants it. If it saves 1-2 hours a month it's worth it, and people get that even if they only use the LLMs to search the codebase. And the frontier models are noticeably better than others, still.

At home I have a $20/month subscription and that's covered everything I need so far. If I wanted to do more at home, I'd seriously look into the open weight models.


I've also seen Opus 4.6 as a pure upgrade. In particular, it's noticeably better at debugging complex issues and navigating our internal/custom framework.

Same here. 4.6 has been considerably more diligent for me.

Likewise, though I feel like it's degraded in performance a bit over the last couple of weeks, but that's just vibes. They surely vary thinking tokens based on backend load, especially for subscription users.

When my subscription 4.6 is flagging I'll switch over to Corporate API version and run the same prompts and get a noticeably better solution. In the end it's hard to compare nondeterministic systems.


That's very interesting!

Also, +1. Opus 4.6 is strictly better than 4.5 for me

