I doubt the GP has gone back through their career and checked whether each person who thought there were too many meetings has now made the switch they're being accused of, though.
My point being that it's unlikely that it's that way round. It's more likely that there's no actual trend, and there are one or two that have done it and they've just extrapolated.
No, but no-one said that. It's far more likely that this "group" of people doesn't even exist, and it's yet another case of treating "strangers on the internet" or "other people" as one person and being upset when that "person" is being inconsistent.
> I doubt the GP has gone back through their career and checked whether each person who thought there were too many meetings has now made the switch they're being accused of, though.
Anyway, I'm stopping this now as it's not a constructive conversation anymore. Let's agree to disagree.
Both your error and the OP's error are in imagining that the same people are saying both things. The "community" fallacy, which has been around for about 10 years now, pretends that people with something in common (e.g. "uses HN") are somehow a community that thinks identically; it's completely wrong.
Actually, it's some of the same people. I won't name names, but there are a lot of AI skeptics on this site who loudly and prominently comment on every AI story. And if you look at their posting histories you'll see the exact type of goalpost-shifting the parent commenter is talking about.
You see it elsewhere as well. There's now a cottage industry (with visible members like Ed Zitron) who have made a career out of creating and selling anti-AI content. At first they were complaining that AI lies constantly. As AI got better, they shifted to other talking points.
There are 8 billion people on the planet. You can find a seemingly large group of people who believe anything. That doesn't mean the group exists in a way that's worth talking about.
> There's now a cottage industry (with visible members like Ed Zitron) who have made a career out of creating and selling anti-AI content
I can't believe that Ed Zitron, who I just looked up, has made a career out of creating and selling anti-AI content. He's 40. He cannot have been doing that for very long.
> At first they were complaining that AI lies constantly. As AI got better, they shifted to other talking points.
Calling the truth "complaining" seems more revealing of you than of them. If the AI was lying constantly, they weren't "complaining"; they were telling the truth. Once the AI stopped lying so much, they stopped saying it, since repeating it would no longer be true. But there are still other issues to talk about. That's... right? Isn't it?
> I started to refer to the process of writing software using AI assistance (soon to become just "the process of writing software", I believe) with the term "Automatic Programming"
I would say it's the fact that "not a security boundary" appears to be a pass/fail statement, whereas the reality is more like a security continuum, along which VMs are further than containers.
I believe that is tautologically true, and thus not a very useful framing.
Security is obviously a continuum (e.g. a bug in your IPMI firmware could let a network packet break in without any interaction with the OS, or there could be a hardware bug too), but there is a discrete "jump" between containers and VMs, to the extent that it is useful to call one a security boundary and the other not. Just like a firewall is a security boundary even if it can have security bugs.
Whether this jump in exploitable surface area warrants the distinction is the whole point: many believe it does.
But you also cannot just handwave the difference away with "it's a continuum". I did not use absolutes; I said "VMs are _better_ for security", which already implies a continuum.
Containers are mostly used as a deployment/packaging model, whereas VMs are typically used where stronger security is needed. This has been the established industry standard for a while. Look at major cloud providers, for example.
AWS:
> Unless explicitly stated, AWS does not consider a container or primitives such as an ECS task or a Kubernetes pod to be a security boundary. A notable exception to this is ECS tasks running AWS Fargate, where the isolation boundary is a task. To account for this, we recommend that you use Fargate with ECS if your applications have strict isolation requirements.
> When you’re using the Fargate launch type, each Fargate task has its own isolation boundary and does not share the underlying kernel, CPU resources, memory resources, or elastic network interface with another task.
They also recommend using separate EC2 instances for even higher security requirements, which you can additionally run on dedicated hardware. But the fact that you can increase isolation further, beyond VMs, does not make containers the same as VMs.
Google:
> There’s one myth worth clearing up: containers do not provide an impermeable security boundary, nor do they aim to. They provide some restrictions on access to shared resources on a host, but they don’t necessarily prevent a malicious attacker from circumventing these restrictions. Although both containers and VMs encapsulate an application, the container is a boundary for the application, but the VM is a boundary for the application and its resources, including resource allocation.
> If you're running an untrusted workload on Kubernetes Engine and need a strong security boundary, you should fall back on the isolation provided by the Google Cloud Platform project. For workloads sharing the same level of trust, you may get by with multi-tenancy, where a container is run on the same node as other containers or another node in the same cluster.
> Applications that run in traditional Linux containers access system resources in the same way that regular (non-containerized) applications do: by making system calls directly to the host kernel.
> One approach to improve container isolation is to run each container in its own virtual machine (VM). This gives each container its own "machine," including kernel and virtualized devices, completely separate from the host. Even if there is a vulnerability in the guest, the hypervisor still isolates the host, as well as other applications/containers running on the host.
> gVisor is more lightweight than a VM while maintaining a similar level of isolation. The core of gVisor is a kernel that runs as a normal, unprivileged process that supports most Linux system calls. This kernel is written in Go, which was chosen for its memory- and type-safety. Just like within a VM, an application running in a gVisor sandbox gets its own kernel and set of virtualized devices, distinct from the host and other sandboxes.
These guys are experts at securing workloads on shared infrastructure, and while there are different levels of isolation using various techniques, current industry practice is to not consider regular Linux containers a security boundary.
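The shared-kernel point in the quotes above is easy to observe directly. A minimal sketch (assumes you run the same snippet once on the host and once in a traditional Linux container on that host; the comparison with a VM guest is stated in the comments, not executed here):

```python
# A process in a traditional Linux container makes syscalls directly to
# the *host's* kernel, so it reports the host's kernel release string.
# Run this on the host and inside a container on the same machine: the
# printed value is identical in both. A VM guest, by contrast, boots its
# own kernel, so its release string can differ from the host's.
import os

print(os.uname().release)
```

This is exactly why a container is "a boundary for the application" but not for the kernel underneath it: there is only one kernel, and every container on the host shares its attack surface.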
Absolutely. I worked at a gene sequencing company, where I led the software side of making a robotic product[0] to automate the 20-30 minutes of sample preparation time. It's great for lots of uses, but it doesn't cover anything outside the exact thing it automates. For that you need an expert human.
No problem! It was probably the most fun product I've ever had the pleasure of leading the software dev of.
The company is one of the few in the world that makes gene sequencing technology - actual chemistry, biologics, protocols, hardware and software. Plasmidsaurus is a customer[0] - they use our devices and have built an incredibly successful service on top of them!
It happened, but not as often as you'd think. In 2017 I was arguing with someone that the back button should work and URLs should be obvious in a fairly large project and they said "people are used to the back button not working - like a bank website".
> And it was "only" ~$20 billion. Inflation can't be this high.
While I'm not sure about this buy, Cursor does at least have revenue. WhatsApp was basically running on VC/private money (it had an extremely nominal fee, but I never had to pay it), and was sold to bring its userbase into the Facebook fold. I don't think you can compare that to a business that at least has some decent revenue.
If WhatsApp is burning through, say, ~$1B yearly with zero revenue and Cursor is burning through, say, ~$2B with ~$1B revenue, they're both still in the hole.
I wish people would stop talking about just revenue. It's mostly meaningless without knowing their expenses.
I think revenue is common to talk about because profit is also meaningless when a company spends every penny it earns to grow (new engineers, marketing, etc). IIRC, Amazon made zero profit for quite some time.
Also, revenue is a signal for product-market fit. Is it a great one? Dunno. But for example, I'd be hard pressed to sell $1 billion of anything, even if I had something everyone wanted.
But I think your point about burn rate is important. How long can they have this attrition on cash before they collapse?
I mean, the financials just don't look great either way.
Their main product is part VSCode, which is a market that's almost impossible to make money in, and part reselling already expensive LLM tokens.
You can look at more parameters and judge how well a company could do in the future. For Amazon, you can predict that once they stop growing, they can make a pretty penny.
But with Cursor that doesn't seem likely. Even if they had the talent for training models from scratch, which I don't think they do, and IF inference makes money, which is not clear at all, training models is still a huge money sink.
So, for them getting bought out by xAi which has a base model they can use makes sense. But what does xAi get here? Another endless money pit?
You're right. I was commenting mostly on why companies usually talk about revenue rather than profit.
I think the truth is that it's a new frontier. No one knows if any of this will make money. Investors are just betting that someone else will learn to monetize sometime soon.