That's not correctly stated. "Private Credit" is defined as non-bank lending. Banks are doing "public" lending in the sense of being regulated. Private lending is any sort of financial instrument issued outside of those guard rails.
It's generally felt to be risky and volatile, but useful. Basically, it's never illegal just to hand your friend $20 even if the government isn't watching over the process to make sure you don't get scammed. This is the same thing at scale.
It is. (EDIT: It's a mixed bag. OP was correctly calling out a definitional error.)
Banks have loaned $300bn mostly to private-credit firms. Those firms then compete with the banks to do non-bank lending. It's a weird rabbit hole and I'm grumpy after a cancelled flight, but it feels like I'm in the middle of a Matt Levine writeup.
> I’ve been tempted to buy one and do “real dev work” on it just to show people it’s not this handicapped little machine.
But... you can do the same exercise with a $350 windows thing. Everyone knows you can do "real dev work" on it, because "real dev work" isn't a performance case anymore, hasn't been for like a decade now, and anyone who says otherwise is just a snob wanting an excuse to expense a $4k designer fashion accessory.
IMHO the important questions to answer are on the business side: will this displace sales of $350 windows machines or not, and (critically) will it displace sales of $1.3k Airs?
HN always wants to talk about the technical stuff, but the technical stuff here isn't really interesting. The MacBook Neo is indeed the best laptop you can get for $6-700.
But that's a weird price point in the market right now, as it underperforms the $1k "business laptops" (to avoid cannibalizing Air sales) and sits well above the "value laptop" price range.
No, you can't do real work on a $350 windows machine. No way such a setup is suitable for anything beyond browsing a tab or two and connecting to servers using SSH.
And the whole shittiness of the experience will distract you even when attempting real work: the horrible touchpad, the bad screen, the forced Windows updates when you're trying to start the machine to do something urgent, ads in Windows, the lack of proper programmability of Windows (unless you use WSL)... Add the fact that the toy is likely to break in a year or two. These issues exist on far more expensive Windows machines, how much more so on a $350 machine.
Leaving Windows machines and the OS behind for more than a decade has been a continuing breath of fresh air. I have several issues with the Apple devices and macOS (as I have with Linux too), but on the whole they are far better than Windows. The only good things about Windows that I miss on Macs are the file explorer and window management; not sure why Apple stubbornly refuses to copy those.
A lot of $350-ish Windows machines also don’t have SSDs but instead eMMC storage, which is dog slow and will make modern SSD-mandatory Windows feel even more awful to use.
If Windows/Linux/x86 is non-negotiable and that’s your budget, I would never in a million years recommend anything brand new. This is when you go pick up a $350 used midrange ThinkPad on eBay. It won’t outperform a Neo in terms of CPU and battery life but I guarantee it’ll be a better experience than the garbage routinely sold at this price point.
Of course you can. You can do real work on an $80 Amazon Fire. Yes, some things will be potentially impossible or frustrating but that's also true of the MacBook Neo, just a bit higher of a bar. A lot of this also depends on your definition of "real work".
$350 USD can get you a decent laptop with an SSD, 16GB RAM and something like an Intel N100 or N95. And those are pretty comparable to a decent Intel Skylake CPU, which is still pretty usable.
Yes, the Neo has a faster CPU, but it also has less RAM and less storage, costs more, and has fewer ports. Besides ray traced games, what can the Neo do that the others can't? They'll take longer but they'll get there.
And if you're willing to go used? That $350 goes a lot further.
> Yes, the Neo has a faster CPU, but it also has less RAM and less storage, costs more, and has fewer ports.
8GB on Apple Silicon is far better than 16 GB on Wintel, and I don't even trust the quality of 16GB of RAM on a bottom-of-the-barrel Windows machine.
Would you prefer a machine that is still good 7 years from now with fewer ports, or one with more ports that you have to replace in 2 years? Yes, it is more expensive now, but over 7 years it is an absolute bargain.
16 GB physical RAM is just better. Apple isn't magic. Gimme a break. Both devices have SSDs for fast swapping and have RAM compression. You can't spin up a VM that has 8GB RAM on the Neo, you can't load a large spreadsheet or do a decently sized digital painting. I could maybe buy a claim that 8GB is better on Mac than 8GB on Windows.
Why would you have to replace it in 2 years? How do we know Apple will even be offering updates to Neo in 7 years? Will 8GB still be usable in 7 years really? 8GB is barely on the fence already.
I wouldn't be surprised if Apple drops the Neo from software support in less than 7 years.
The ThinkBook 14 Gen 6 at Costco for $380 has a single thread passmark score of 2800. The laptop I use to develop most of my SaaS products, with IDEs and claude open etc, has a score of 2000. I run Linux, but win10 iot runs fine on it too.
> No, you can't do real work on a $350 windows machine.
Sigh. I mean, even absent the obvious answers[1], that's just wrong anyway. You're being a snob. Want to run WSL? Run WSL. Want to run vscode natively? Ditto. Put it on a cheap TV and run your graphical layout and 3D modelling work. I mean, obviously it does all that stuff. OBVIOUSLY, because that stuff is all cheap and easy.
All the complaining you're doing is about preference, not capability. You're being a snob. Which is hardly weird, we're all snobs about something.
But snobs aren't going to buy the Neo either. Again, the business question here is whether the $350 junk users can be convinced to be snobs for $600.
[1] "Put Linux on it", "All of your stuff is in the cloud anyway", "It's still a thousand times faster than the machine on which I did my best work", etc...
You mean that machine from 30 years ago that was running 30 year old software that has nothing in common with today’s development? And how well does Linux run on 4GB?
That's a 16G windows box which will happily run multiple VMs for whatever your deployment environment is, something the Neo is actually going to struggle with. The Jasper Lake CPU is indeed awfully slow, but again for routine "dev" tasks that's just not a limit.
You would obviously refuse out of taste, but if you were actually forced to use this machine to do your job... you absolutely could.
I knew it. I was saying from the instant they started we'd have a scandal like this. Bunch of tech bros walking into the government with personal MBPs and administrative authority to demand data from anyone and everyone was a privacy crisis happening in real time.
Yet here on HN, what have we been arguing about? Big tech. Google and Meta have been allowed to become boogeymen in this community out of all proportion to the actual threat they posed[1].
While the actual boogeyman stealing our data to exploit in the market? It was us.
[1] I mean, let's be honest, while everyone has abstract complaints the truth is that they've actually been remarkably benign stewards of our data over the past 20 years. Much, much, MUCH more responsible than the glibertarian dude in the cubicle next to you, as it turns out.
Yep, and we're only hearing about this because in this case there was a whistleblower. Call me cynical but I'm sure that there is plenty of data DOGE workers exfiltrated from SSA and other places that we'll never directly know about.
Posts predicting this were apparently flagged as "political". For example, Bruce Schneier's warning [0]. For a site called Hacker News, DOGE unfortunately attracted a different kind of notoriety than, say, the numerous merger-and-acquisition and VC maneuvers reaching the front page. If hacker punks nominally subvert the established order by flouting laws and authorities, then DOGE was very much hacking. Tina Peters is an unsophisticated hacker punk. She doesn't live up to the social engineering chops of Kevin Mitnick, but her plan did involve a Geek Squad uniform. Legendary, but too "political". Attracts too much noise, not enough signal. That's why you didn't see an elevation of the developed thoughts you're talking about.
Since the beginning of DOGE, it has not been especially bold to predict:
- DOGE will cost more than it saves. The early errors (mistaking $ millions for $ billions, world-writable permissions on their Drupal site, etc.) convinced us that we can't expect deliberate professionalism.
- The very first whistleblower, out of the NLRB, convinced us that exfiltration was the goal. This is within the top 5 whistleblower stories here. The critical detail was the instruction that access logs be scrubbed.
- And the general public smelled it, too. No one doubts that the threats against Tesla dealerships came from civil libertarian radicals, not recently fired USAID bean counters.
- When Peter Thiel's FBI handler, Johnathan Buma, went whistleblower a few months into DOGE, it wasn't about Thiel. He saw a Russian active measure influencing Musk's inner circle. One of Kash Patel's first acts as FBI director was to order Buma arrested.
So, the commentary worrying about "big tech" was commentary within Y Combinator's sphere.
i dunno. National Security Letters are a real thing and i have no reason to believe there's not at least some exfiltration of personal data from "big tech" to other actors.
> have no reason to believe there's not at least some exfiltration
Is it genuinely your opinion that that activity (just look at all the equivocation!) constitutes a risk at the same level as alleged by the linked article?
This is exactly what I'm talking about. HN has a tunnel vision disease on this subject. "Yes yes, DOGE bros stole the SSA database, but let's please talk about how awful Google is." It's clinical at this point.
i'm not saying these big tech firms don't have their hands legally tied by National Security Letters, but that's entirely divorced from whether i trust them to steward my data.
And I repeat: you are refusing to engage with the privacy crisis right in front of our eyes and insisting instead we treat with your personal crusade against a threat that is almost entirely theoretical. That's just not rational discourse, it's a vendetta. And you're hardly alone.
> i dont intend to be on the side of flipping mcdonalds burgers
So say the kitchen staff at every Denny's too. And yet...
The analogy is apt, but your coping strategy falls down because of numbers. There aren't a lot of spots for those "chefs" to get paid like they expect.
Most HN commenters might have gotten by over the past decades thinking they were "talented chefs", but were really more like the "short order cooks" whose jobs got eaten by fast food.
Most of that is misguided. The IPX was the high-volume, low-cost, face-for-the-user Solaris box during exactly the moment in the mid 90's when Intel and Microsoft took over and Sun and the Unix vendors lost the plot.
People remember it as a ridiculous $15k joke that was half the speed of the Pentium 100 you ordered out of the back of Computer Shopper.
But when the IPC and IPX were released, SPARC was still ascendant (Intel's flagship was the 486/33!), "PCs" were still running Windows 3.1 (or just DOS), Linux didn't exist yet, and they were the best computers you could get. Well, unless you were a graphics nerd and tilted toward SGI instead.
I very specifically remember salivating over these boxes, which were legitimate upgrades over the SPARCstation 1/1+/2 machines which were groundbreaking in the late 80's.
I remember the JavaStation 1 as well, which ran byte code natively in an IPX-like box. That thing really was so horrendously slow it was basically a paperweight. I had the chance to play with one when it was just new, but it was like a joke or something.
Maybe that gave the form factor a bad name because all their good stuff was in pizza boxes.
This take, which I've seen in a few different places now, seems 100% bonkers. A world where anyone can cheaply reimplement anyone else's software and use it on hardware of their own choosing in their own designs and for their own purposes is a free software utopia.
This isn't a problem, this is the goal. GNU was born when RMS couldn't use a printer the way he wanted because of an unmodifiable proprietary driver. That kind of thing just won't happen in the vibe coded future.
It's not going to be like that for proprietary software. All this future ends with is "totally free" software that companies will leech off of in their "totally locked down" software. I guarantee you that people wouldn't have had this reaction if someone had instead replicated Windows from leaked source. Well, other than Microsoft owners/employees.
> The entire notion of being allowed to enforce arbitrary terms of service is absurd.
For clarity, and while the HN title seems to imply that, that is not what this decision was actually about.
It was about the specific requirement that disputes be handled by binding arbitration. The circuit court was actually clear they weren't making decisions about the facts of the case, precisely because the arbitrator gets to make those calls.
Now, sure, that can mean "you lose" in practice, depending on the claim and the arbitrator. And in this specific situation it's a death knell for the plaintiffs, because this was an emerging class action suit looking for a big payout.
But no, the 9th circuit has not found that companies have the ability to enforce "arbitrary terms of service" via a TOS update email. They only made a call on this particular term update, and they were clear that they did so because it does not represent an actual change to the service terms (only to the dispute process).
That sounds like spin to me. If there were a clear "quality edge" in "certain business domains" stemming from "exclusive proprietary data", someone would have been exploiting it already using meat computers.
But no, businesses are dumb. They always have been. Existing businesses get disrupted by new ideas and new technology all the time. This very site is a temple to disruption!
Proprietary advantage is, 99.999% of the time, just structural advantage. You can't compete with Procter & Gamble because they already built their brands and factories and supply chains and you'd have to do all that from scratch while selling cheaper products as upstart value options. And there's not enough money in consumer junk to make that worth it.
But if you did have funding and wanted to beat them on first principles? Would you really start by training an LLM on what they're already doing? No, you'd throw money at a bunch of hackers from YC. Duh.
Frontier labs are paying the same constellation of firms offering proprietary data and access to experts in their fields to train LLMs.
They are neck-and-neck only because they are participating in the arms race. The only other way to keep up is mass-distillation, which could prove to be fragile (so far it seems to be sustainable).
Meh. I think there's basically no benefit shown so far to careful curation. That's where we've been in machine learning for three decades, after all. Also recognize that the Great Leap Forward of LLMs was when they got big enough to abandon that strategy and just slurp in the Library of All The Junk.
I think one needs to at least recognize the possibility that... there just isn't any more data for training. We've done it all. The models we have today have already distilled all of the output of human cleverness throughout history. If there's more data to be had, we need to make it the hard way.
Ok, maybe pretraining is now complete and solved. Next up: post-training, reinforcement learning, engineering RL environments for realistic problem solving, recording data online during use, then offline simulation of how it could have gone better and faster, distilling that into the next model etc. etc. There's still decades worth of progress to be made this way.
" There's still decades worth of progress to be made this way."
That's not true. Moreover, the progress can slow to a crawl where it's barely noticeable. And in that world humans continue to stay ahead - that's the magic of humans: being aware of their surroundings and adapting sufficiently while taking advantage of tools and leveraging them.
This is an interesting theoretical statement that does not survive a collision with reality. The long-tail expert RLHF training is effective. We have seen significant employment impact on call center employees. This does not mean its progress will be cheap or immediate.
The quality edge hasn't shown up yet. If this strategy actually works then the quality improvements will only become apparent in the next round of major LLM updates. There's a lot of valuable training data locked up behind corporate firewalls. But this is all somewhat speculative for now.
> You need to explain, from a systems point of view _why_ the gains must diffuse out as you suggest.
Do we? I mean, isn't "because they always have" enough of an argument on its own?
I am hardly a libertarian ideologue or an AI-first LLM jockey. But I do think people tend to catastrophize too much. Blacksmiths were killed dead by the industrial revolution. "Secretary" is a forgotten art. It's been decades since an actuary actually calculated a sum on an actual table. And the apocalypse didn't arrive. All those jobs, and more, were backfilled by new stuff that was previously too expensive to contemplate. We're eating at more restaurants. We can find jobs as content creators and Twitch streamers.
Life not only goes on after rapid technological change, it improves. That's not to say that every individual is going to appreciate it in the moment, or that regulation and safety net work doesn't need to happen at the margins. But, we'll all be fine.
AGImageddon is, at its core, just another economic phenomenon driven by technology. And that's basically always worked to society's benefit over the long term.
The 1880s blacksmith didn't become a 1950s American suburbanite. They moved to shared housing in Manchester and a shorter lifespan working for poverty wages, lost fingers/arms in machines, maybe ended up on skid row, the section of town for failures who couldn't 'adapt' to the new modern world. Their children died in WW1, in a trench, to industrially produced gas. Their children's children were transported around the world to die storming a beach in WW2. And their children's children's children lived on meager 1940-60 diets as the world rebuilt its food stocks destroyed by industrialized war, eating new industrial food replacements like margarine and SPAM. There were hundreds of millions of industrially enabled deaths. There was industrially enabled famine and near-famine.
That all gets waved away with 'always worked to society's benefit'. It took almost 70 years and the post WW2 destruction of the rest of the world's economies/infrastructure to create that 1950s American suburbanite world. 'Always worked to society's benefit over the long term' is just handwaving, not based on the reality of adapting, or on whether those societies even wanted to join in.
Because not all peoples/nations even had a choice. Japan among many originally opted out. But they were forced to 'modernize'. Peoples around the world were forced into the industrial world by railroads and machine guns and the industrial need for rubber/banana whatever plantations or lumber or strip mines. Once one nation passed through the door, every nation had to follow or be subjugated.
> The 1880s blacksmith [...] moved to shared housing in Manchester and a shorter lifespan working for poverty wages, lost fingers/arms in machines, maybe ended up on skid row
That's... just not remotely true, unless you're talking about it as a maybe-it-happened-to-someone story. In fact it's basically a lie.
Every income group in the US (and recognize that "blacksmiths" represent skilled trades workers who earned well above median and had for thousands of years!) saw huge, huge, HUGE increases between 1880 and 1950. I mean... are you high?
> It took almost 70 years and the post WW2 destruction of the rest of the world's economies/infrastructure to create that 1950s American suburbanite world.
Again, big citation needed on this one. Western Europe was very close to US quality-of-life numbers by the 60's, and the more successful nations started to pass it in the 90's. (Also recognize that the US had already pulled ahead in the 30's; Germany and France were lagging even before the war.) You're looking at something along the lines of a decade to rebuild, tops.
You need to tighten up before you call someone a liar. Manchester is the poster child city for the industrial revolution. The blacksmith moving to Manchester had a lower lifespan/quality of life; it's not in question or up for debate. He is who we will be in the AI disruption, not the person in 1950.
You don't think there are 70 years between 1880 and the end of WW2 and the real start of suburban American prosperity we think of when we think of the end results today? And I need a citation? Or are you saying I should use 1960 not 1950s as the point, since it took a decade to rebuild in much of the world?
> Manchester is the poster child city for the industrial revolution.
Which is to say, you cherry picked the data rather than looking at aggregates. Manchester industrialization being terribly managed isn't an indictment of steel machining or electrification, it means the government fucked up.
What you are claiming (that the industrial revolution led to lower quality of life generally) is simply false, period. And it won't be true of AGImageddon either, no matter how deeply you believe it. Economics just doesn't work that way.
Oh look, I didn't lie. No apology? Nope, just more attacks.
I picked THE Industrial Revolution city. THE CITY where it all happened. Did your high school not have a history class? I picked where it went wrong, the first go-live site. That's what you do when analyzing things. You don't pick go-live number 500. That isn't cherry picking, that's what we do when we discuss scenarios that INITIALLY came up so they don't happen again. We don't just whitewash like you would like.
I claimed the industrial revolution led to lower quality of life for the blacksmith. The modern narrative when talking about AI implies they just turned into 1950s-style suburbanites and waves away any thought/planning/discussion, like you are trying to do. The reality, as it factually happened, was a much worse life, and it is worth considering when implementing something that could be just as impactful.
People like you want to just handwave away the inconvenient fact that I am more likely to be the blacksmith in Manchester than to be born into some post-work AI Utopia that may exist in 70 years after things settle. Why can't we even discuss this? Why do we have to stumble blindly into it, to the point you call me a liar/cherry picker for pointing out basic history taught in high school and basic root cause analysis concepts?
The reason that Manchester is taught about in American high schools is so that we learn from it and understand our current world didn't just magically happen. Good and bad happened along the way, and we have to work within that reality. Good can come in the end; be positive IF progress IS being made. Bad will happen; fix it, don't just accept it, challenge it. Think about it. Look to history to prevent the things that are easy to prevent.
Just stop. Your ability to show a handful of negative externalities from industrialization doesn't invalidate the progress of the last century and a half, and to argue so (as you clearly did) is laughable.
And all the same logic applies to AI. Do we need to be willing to re-regulate and adjust as this is deployed? Almost certainly. Will it make us all wealthier? Undeniably.
We will need to re-regulate and adjust, but talking about it ahead of time and moving forward intelligently is laughable? Talking about how the last huge revolution played out initially is laughable? Come on. And yes, when you are talking about the start of something you normally only have a handful of examples. That is how things start, with a few instances.
You didn't know basic-level history, called me a liar, then a cherry picker for using the gold standard example.
You might want to check yourself before you tell people to stop, call them liars or cherry pickers, or make claims. No need to misrepresent me. My point is that the 70 years of upheaval prior to the modern version of the world get ignored in the discussion. My point is that the original people impacted, the proverbial blacksmith or buggy whip maker who 'adapted', had worse, shorter lives because of adapting.
This pattern continuing indefinitely without the need for analysis would be certainly nice but we do need to confront recent data. In the US, multiple metrics of quality-of-life peaked around 2015 and have declined since then, with some showing 11% decline while US total wealth has doubled! (with the majority of that decline pre-covid and pre-AI) [0][1][2].
What forces act on this trend? How can we make predictions? An interesting metric, which tracks the aggregate of many complex factors is the distribution of wealth, which could be seen as proxy for the distribution of power or agency of a person in their society. Median income as a fraction of total wealth decreased nearly 50% in real terms over this same period. [3]
Now, inversely, during the period when quality of life increased most over the last century (1920-1980), inequality was _falling_.
How is super-human AI advanced through 2030, 2040, 2050 likely to affect things? Will it sharpen the inequality or relax it?
With AI the cost of raw resources to products goes down, but it's likely inequality increases. It's not obvious which force has a bigger impact on human quality of life as things shake out. However, I think the strongest argument – which also explains the steady improvements in QoL through previous changes you mentioned – has been to follow inequality, or median share of power in society.
>This pattern continuing indefinitely without the need for analysis would be certainly nice but we do need to confront recent data. In the US, multiple metrics of quality-of-life peaked around 2015 and have declined since then, with some showing 11% decline while US total wealth has doubled! (with the majority of that decline pre-covid and pre-AI) [0][1][2].
It's hard to take that metric seriously when the top city is Raleigh, NC. If that were the best city you'd expect people to vote with their feet and move there in droves.
There's an argument about the speed of change, though: a society going through the technological evolution from blacksmithing to industrial metallurgy didn't experience it happening in the short-to-medium term (1-10 years); it had a gradient of change.
With the speed of technological development compounding on itself and the rate of change becoming much more acute, there's a debate to be had on "what if this change happens over 5-10 years?" Can you imagine a world where, in 10 years, most well-paid office jobs are automated away and there's no generational change to re-educate and employ people? There would be loads of unemployable people who were highly specialised for a world that ceased to exist, metaphorically overnight in the span of a human life.
Pushing this concern away with "it happened in history and we're fine" leaves a lot of room for catastrophising; at least a measured discussion about this scenario needs to be had, just in case it happens in a way that our historical past couldn't account for. No need to be a doomer, nor a luddite, to have the discussion: can we be in any way prepared for this case?
I mean, arguably AI is faster (but it's equally arguably oversold, certainly we aren't seeing that kind of change yet). But the stuff I cited was faster than you think. In the rural US, in 1900, most routine transport was still done with horses. By the 20's it was basically all in trucks, and trucks don't need hand-forged shoes that the blacksmiths were making[1]. Likewise professional typists were still clacking away in 1982 but by the mid 90's their jobs[2] had been 100% automated.
[1] "Blacksmithing" didn't disappear, obviously, but it survives as an expert craft for luxury goods. That's sort of what's going to happen to "hacking" in the future, I suspect.
[2] Likewise, some of the best positions survived as "personal assistants" for executive staff too lazy to learn to type. Interestingly these positions are some of the first being destroyed by the OpenClaw nonsense.
The professional typist's role evolved - to serving in other ways, as you say - by becoming executive assistants. Much like the bank teller's role also evolved.
And it's not because they (executives) are too lazy to type. They actually need people to manage their calendar, monitor emails etc. Moreover, the personal computing revolution led to an expansion of firms that needed more of said people.
Could this be disrupted by things like OpenClaw? Maybe. Personally I doubt it. Trust is a huge barrier that LLMs have yet to overcome and may never overcome. It's the same reason Apple pulled "Apple Intelligence". I know this place is full of doom and gloom, but I am not a SWE by trade so I can see the bigger picture and not get bogged down by the fact it might affect my income.
Moreover, work is more 'fun' with people around. So to you it may seem irrational to keep people employed on that basis (call it Culture), but to others, and in particular the executive class - nope. People will start realising things like this once the hysteria dies down.
The "role" might have evolved, but the jobs disappeared. There are, what, maybe two or three orders of magnitude fewer "executive assistants" than there were typists in the 70's? I was making an argument about economics, not job classification.