What's fast on Z platforms is typically I/O rather than raw CPU - the platform can push a lot of data in parallel. Raw CPU, though, is typically the bottleneck when compiling.
The cores are, in my experience, moderately fast at most. Note that there are a lot of licensing options and I think some are speed-capped - but I don't think that applies to IFL, a standard CPU licence-restricted to running only Linux.
In Sweden, and I think in Europe generally, there seems to be quite a lot of product development in apples. I think one of the reasons is that storage has been more or less perfected, so the produce can be sold over almost the whole year.
Using only traditional breeding methods, there are several "new" Swedish varieties - Aroma, Frida and Saga - that are very nice. Saga in particular is absolutely fantastic: on par with or better than international varieties like Jazz, Pink Lady and Honeycrisp.
Some of the more traditional varieties are also sold in larger volumes, and for a longer period, because of the improved storage, even though I think they have a shorter storage window.
Another reason, I think, is that not all of these varieties thrive as small trees, and most factory-farmed trees are kept small because it makes picking easier.
I don't think it's the best example. MMAcevedo is about running a real human mind on a different substrate (for science, for labor, or to torture it for fun a million times, I guess, by a bored teenager who got the image from torrents).
Scaling up these neuron cultures is rather something like the "head cheese" from Peter Watts's "Rifters" novels (artificial "brains" trained to do network filtering, anti-malware combat, etc.).
I think that is Hofstadter grieving his wife, and reflecting on how we embed models or predictions of others in our own neural networks, more than anything else.
We build models of the world in order to predict it.
But I guess you could say other people are objectively shaping the neurons in our brains. But so is that fiddly printer tray or whatever, to a small extent.
Hey that printer tray is a bit of someone's soul too. Many people's work and decisions, even a bit of the nature of our whole society is recorded in those flimsy things. It may or may not be comforting that most of what we contribute to the world will ultimately be considered mundane, even and perhaps especially if it's successful.
You need to be reasonably experienced and guide it.
First, you need to know that Claude will create nonsensical code. On a macro level it's not exactly smart; it just has a lot of contextual static knowledge.
Debugging is not its strongest skill. Most models don't do well at it at all. Opus is able to one-shot "troubleshooting" prompts occasionally, but there's a high probability that it veers off on a tangent if you just tell it to "fix things" based on errors or descriptions. You need to have an idea of what you want fixed.
Another problem is that it can create very convincing-looking - but stupid - code. If you can't guide it, that's almost guaranteed. It can produce code that's totally backwards and overly complicated.
If it IS off on a wrong tangent, it's often hopeless to get it back on track. The conversation and context might be polluted. Restart, reframe the prompt and the problems at hand, and try again.
I'm not totally sure about the language you are using, but syntax errors typically happen when it "forgets" to update some of the code, and very seldom in just a single file or edit.
I like to create a design.md and think a bit on my own, or maybe prompt it with a high-level problem to create one to get going, and make sure it's in the context (and mentioned in the prompts).
Sometimes people forget that you don't have to use AI to actually write the code. You can stick to "Ask" mode and it will give you useful suggestions and generate code but won't actually modify your files.
This has been my experience as well. I get drained at the end, having spent most of my energy and thinking capacity dealing with the LLM instead of the problem space.
I use AI-Lint to enforce basic code hygiene and design taste across languages, and I force it to develop test iterations it can run on its own, telling it to iterate until the tests go green.
"... HBR found that companies are cutting [jobs] based on AI's potential, not its performance."
I don't know who needs to hear this - a lot apparently - but the following three statements are not possible to validate but have unreasonably different effects on the stock market.
* We're cutting because of expected low revenue. (Negative)
* We're cutting to strengthen our strategic focus and control our operational costs. (Positive)
* We're cutting because of AI. (Double-plus positive)
The hype is real. Will we see drastically reduced operational costs in the coming years, or will it follow the same curve as we've seen in productivity since 1750?
> The hype is real. Will we see drastically reduced operational costs in the coming years, or will it follow the same curve as we've seen in productivity since 1750?
There's a third possibility: slop driven productivity declines as people realize they took a wrong turn.
Which makes me wonder: what is the best 'huge AI bust' trade?
I'm thinking that the slopocalypse is almost inevitable outside pure tech companies and can't be ruled out there either.
LLMs are a force multiplier. Clueless people will be able to produce tons of code that looks convincing but is totally misguided and misinformed - exactly what large companies with complex in-house systems don't really need any more of.
There seems to be strong lobbying for insects as human food, in particular from companies that would be happy to feed us their own shit as long as it's cheap and they can get away with it.
The green-left seems to enjoy that idea. Exactly why is hard to tell - especially on HN - but let's say I don't think it's rational.
The why is not that hard to understand - insects provide a lot of protein compared to how much food they consume over their lifetime.
But yes, the obvious place to start is to use them for feeding chickens, not humans. Why chickens? Because insects are part of their natural diet when they roam free. There are just a bunch of infrastructure problems that need to be solved for that to work, as insects pose pretty different problems compared to other parts of the food-production chain.
None of which requires startups, science or factories.
If you put cows on a field for a day, wait three days for insects to infest their shit, then put chickens on the field, the chickens scratch through the cow shit and eat the bugs. The cow shit gets nicely spread out and fertilises the soil more quickly.
The problem with this system is that it doesn't allow rich people to screw mega bucks out of the government for doing no work at all.
No, but there is a non-green left. And the greens do get most of their policy influence by associating with the rest of the left, since there are very few green parties that govern directly, or at least alone. So it's fair to say that such initiatives are successful because a subset of the broader left, the green-left, likes the idea.
> As a practical implementation of "six degrees of Kevin Bacon", you could get an organic trust chain to random people.
GPG is terrible at that.
0. Alice's GPG trusts Alice's key tautologically.
1. Alice's GPG can trust Bob's key because it can see Alice's signature.
2. Alice's GPG can trust Carol's key because Alice has Bob's key, and Carol's key is signed by Bob.
After that, things break. GPG has no tools for finding longer paths like Alice -> Bob -> ??? -> signature on some .tar.gz.
I'm in the "strong set", I can find a path to damn near anything, but only with a lot of effort.
The good way used to be the path finder, some random website maintained by some random guy, which disappeared years ago. The bad way is downloading a .tar.gz, checking the signature, fetching the key, then fetching every key that signed it, in the hope that somebody you know signed one of those, and so on.
And GPG is terrible at dealing with that: it hates having tens of thousands of keys in your keyring from such experiments.
GPG never grew into the modern era. It was made for people who mostly know each other directly. Addressing the problem of verifying the keys of random free software developers isn't something it ever did well.
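For what it's worth, the "bad way" above is just breadth-first search over the signature graph. A minimal sketch in Python, using a toy in-memory graph (the key names are made up, and real signer data would have to be scraped from `gpg --list-sigs` output, since GPG itself exposes no path-finding API):

```python
from collections import deque

def find_trust_path(signed_by, start, target):
    """Breadth-first search for a signature chain from start to target.

    signed_by maps a key ID to the set of key IDs that have signed it.
    This is toy in-memory data, not anything GPG provides directly.
    """
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        # Follow edges: every key signed by the last key on the path.
        for key, signers in signed_by.items():
            if path[-1] in signers and key not in seen:
                seen.add(key)
                queue.append(path + [key])
    return None  # no signature chain found

# Alice signed Bob's key, Bob signed Carol's, Carol signed Dave's.
graph = {
    "bob": {"alice"},
    "carol": {"bob"},
    "dave": {"carol"},
}
print(find_trust_path(graph, "alice", "dave"))
# -> ['alice', 'bob', 'carol', 'dave']
```

BFS guarantees the shortest chain, which matters here because every extra hop is another stranger you have to trust.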
What's funny about this is that the whole idea of the "web of trust" was (and, as you demonstrate, is) literally PGP punting on this problem. That's how they talked about it at the time, in the 90s, when the concept was introduced! But now the precise mechanics of that punt have become a critically important PGP feature.
I don't think it punted so much as it never had that as an intended use case.
I vaguely recall the PGP manuals talking about scenarios like a woman secretly communicating with her lover, or Bob introducing Carol to Alice, and people reading fingerprints over the phone. I don't think long trust chains and the use case of finding a trust path to some random software maintainer on the other side of the planet were part of the intended design.
I think to the extent the Web of Trust was supposed to work, it was assumed you'd have some familiarity with everyone along the chain, and work through it step by step. Alice would know Bob, who'd introduce his friend Carol, who'd introduce her friend Dave.
I think the important conclusion to draw from this is that publicly available code is no longer created, or even curated, by humans, and it will be fed back into training data sets.
It's not clear what the consequences are. Maybe not much, but there isn't that much actual emergent intelligence in LLMs, so without culling by actually running the code, there seems to be a risk that the end result is a world full of even more nonsense than today.
This already happened a couple of years ago for research on word frequency in published texts. I think the consensus is that there's no point in collecting anymore, since all available material is tainted by machine-generated content and no longer reflects human communication.
I think we'll be fine. AIs definitely generate a lot of garbage, but then they have us monkeys sifting through it, looking for gems, and occasionally they do drop some.
My point is, AI generated code still has a human directing it the majority of the time (I would hope!). It's not all bad.
But yeah, if you're 12 and just type "yolo 3d game now" into Claude Code... I'd say I'd be worried about that, but then I immediately realized: no, that'd be awesome.