I've begun to think some bubbles are good for the economy overall. In the dotcom days, anyone with an idea and a domain name could get funding. I myself worked for a company that nabbed 7x more funding than it needed but still failed due to poor leadership. I had reservations about the founder but thought I could help drive things; he turned out to be even more absent than I ever anticipated.
A lot of VCs and PE firms lost a lot of money during the crash. That means a lot of capital was spent in the economy, generating a lot of good activity, and the companies that failed put even more capital back into the economy through bankruptcies. Other businesses can pick up talent, IP, and assets for cheap, and everyone can learn from the failures. While losing that money wasn't great for the VCs, what they got was a very valuable education in being better stewards of their investments and in picking better companies. The next rounds of companies have to hit metrics and milestones, have to prove their value, etc.
Never waste a perfectly good crisis: learn if nothing else.
True, but the pitch itself is telling. They're not saying "we expect AI to boost the productivity of our employees by 20% with no increase in labor costs." They're saying "we're going to spend less on humans," because investors are more comfortable spending money on machines than on people. That's the problem: they're not looking at how AI enhances people, they're looking at how it can eliminate people.
I would expect differently, because there are two major philosophies in business, and they indicate the direction a business will go: a focus on profit/growth, or a focus on costs/value-extraction.
Before anyone straw-mans this with "but you can do both": yes, you can, but only one can be the MAIN way you run a business.
A focus on profit and growth leads to seeing employees as strategic resources ready to be deployed to serve the business and generate revenue. You want to equip those employees with the tools that best enable them to do more. In this case, a growth leader would say "AI can make our people 20% more productive without adding headcount, and lets us refocus another 10% of employees on more productive/profitable tasks, better utilizing the internal knowledge those employees have." These are the companies that tend to become more profitable and successful, because their leaders understand that good employees are an asset, not a liability.
The other focus, on costs and value extraction, sees the business as a zero-sum game: to increase profits, we must decrease costs or find a way to extract more value from customers. These are the companies that will reduce service levels with no change in pricing to improve profits. They'll understaff a facility to find the "true minimum number of employees" (the bare minimum), depending on some employees to go "above and beyond" for no additional compensation. They'll get rid of expensive employees, settle for replacement employees who are 75% as good but will work for 65% of the money, and keep headcount the same: worse service, but proportionally lower costs. These companies may be huge, but they're not market leaders; they're reactive and basically rent takers.
When a leader begins to resent employees being paid to do their work, it's time for that leader to go, because they'll simply start that company on a decline.
Basically, there are those who understand the difference between cost and value, and those who are too focused on cost to even understand value. If your first response to AI is "we can cut people," you're in the latter camp: your first reaction is to cut costs rather than to exploit the new tech for even higher profits.
Eyes on the future instead of the past is how you grow and succeed. Employee success leads to company success. Employees are not a cost center.
We all know the story of WhatsApp serving a billion users with 50 employees, while Slack had significantly more employees and not necessarily a better product or business.
If Slack were to cut its staff down to 50, that would mean it's the ruthless type that only cares about value extraction and doesn't value its employees, right? But if it had started with 50 and kept it there, it would be celebrated for efficiency like WhatsApp?
Point is, there is no way to know how efficient a company is, or what their philosophy is, just by looking at their financials and their headcount. Even if the headcount moves.
> Current LLMs just are not as good as they are sold to be as a programming assistant and people consistently predict and self-report in the wrong direction on how useful they are.
I would argue you don't need the "as a programming assistant" qualifier: in my experience over the past two years, literally every single AI tool is massively oversold on its utility. I've not seen a single one that delivers on what it's billed as capable of.
They're useful, but right now they need a lot of handholding, and I don't have time for that. Too much fact checking. As for a tool I always have to double-check: I was born with a memory, so I'm already good there. I don't want to have to fact-check my fact checker.
LLMs are great at small tasks. The larger the single task is, or the more tasks you try to cram into one session, the worse they fall apart.
Born in Pittsburgh, raised Catholic, pretty darn liberal. We had altar girls in the 90s, openly gay members who had ceremonies in the church, etc. I'm not Catholic now, but that was a good church in the 80s and 90s.
Catholicism is certainly interesting! It is somewhat similar in my experience. On one hand it was fairly conservative: strongly against abortion, with old-fashioned family values (i.e., moms should stay home). On the other, there was a huge focus on service and helping the poor both at home and abroad, and not missionary-type stuff, just helping with no strings attached. Plus a real interest in education with an open mind. So slightly more complicated than a single left/right axis.
> I have no idea why some interfaces hide elements and leave the space they'd taken up unused.
UI has been taken over by graphic designers, and human-interaction experts have been pushed out. It happened as we started calling it "user experience" rather than "user interface": people began worrying about the emotional state of the user rather than about building a tool. It became form over function, and now we have to worry about "holding it wrong," when in reality machines are here to serve humans, not the other way around.
It would only violate the App Store guidelines if Apple agreed to be bound by them. I think it's arguable that it didn't, and so it didn't violate the guidelines because it isn't bound by them.
Wouldn’t the guidelines apply to anyone using the App Store who doesn’t have a specific, legal, written exemption? Not to say they don’t have one, but simply hand-waving “well, they wrote it, so it doesn’t have to apply to them” doesn’t seem quite that simple to me. I could be wrong!
The whole point of an agreement is that it sets out what parties will do for each other, and what happens if there is a breach.
Apple could already do things with the App Store without needing any agreement to let itself do App Store things.
Apple is not going to sue themselves for being in breach.
etc.
Just because there's, e.g., a license agreement doesn't mean you need to agree to it, if you are somehow otherwise authorized to do the thing: fair use, a pre-existing right, ownership, or whatever.
No. Apple does not sign up for an Apple Developer account. Contracts with oneself aren't even meaningful.
This is a common tech-enthusiast fallacy: thinking that law is code, so there must be some rule like "if an app is published, there must be a developer account, and if the developer account violates the rules, the app must be removed." It just doesn't work that way.
Apple has contracts with third parties to allow them to distribute apps in Apple's App Store. That's it.
The law definitely is not code, but the law could require Apple to follow the same requirements it sets for others. Then the government could sue Apple (or otherwise enforce this behavior).
This is the reason anti-trust agencies don’t like this. Apple (with its App Store) is a gatekeeper, and in Europe at least it should not favor its own apps over others (i.e., Maps, payments, AI integrations, etc.). It should play fair.
“Fair” meaning that an app that was designed from the ground up by the same people that created the device and operating system should get the same attention as a malware-ridden hack from six years ago?
What does fair even mean here? Ensuring the advantages of vertical integration can’t be enjoyed by users?
No, it’s really not. The law is not intended to be deterministic or efficient, and it is neither. Law explicitly leaves room for human judgment and context in ways code doesn’t and shouldn’t.
While I agree about the common tech enthusiast's, or perhaps just dev's, mental failings regarding law, I feel obligated to point out that App Store guidelines written by the company running the App Store are not law.
Apple also pushed a notification through the Apple TV app. I thought I had all notifications turned off (I turn off notifications from most apps on all devices; just because you think I need to see your messages doesn't mean I think that, and most apps do not need notifications). Quite irritating. That was the point where I decided I would not see F1 in theaters, and if I ever do watch it, it'll be on free streaming.
Capex and opex are just accounting labels that help categorize costs and can improve planning ability. But at the end of the day a billion dollars is a billion dollars.
They’re significant here because opex hits profits immediately while capex largely doesn’t (it shows up over time as depreciation). They have a path to profitability if revenue > opex, by quitting growth and slashing capex.
I believe there are real tax implications to that bucketing, though. My understanding (and I may be wrong on the specifics) is that opex is deducted from taxable income immediately, while capex generally has to be depreciated over several years.
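To make the timing difference concrete, here's a minimal sketch. All the numbers ($10B revenue, $1B spend, a 21% rate, 5-year straight-line depreciation) are illustrative assumptions, not real tax rules:

```python
# Hedged sketch: how bucketing the same $1B as opex vs. capex shifts
# *when* it is deducted from taxable income. Figures are made up.

revenue = 10_000_000_000   # assumed $10B revenue
spend = 1_000_000_000      # the same $1B, bucketed two ways
tax_rate = 0.21            # assumed flat rate
years = 5                  # assumed straight-line depreciation schedule

# As opex: the full $1B is deducted this year.
tax_as_opex = (revenue - spend) * tax_rate

# As capex: only this year's depreciation slice ($200M) is deducted;
# the rest reduces taxable income in later years.
tax_as_capex = (revenue - spend / years) * tax_rate

print(f"tax if opex:  ${tax_as_opex:,.0f}")   # lower bill this year
print(f"tax if capex: ${tax_as_capex:,.0f}")  # higher bill this year
```

So under these toy assumptions it's not a different *rate*, just different timing: opex shelters the whole spend from tax now, capex spreads the same deduction across the depreciation schedule.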