Current LLMs are best used to generate the string of text that's statistically most likely to form a coherent sentence, so from the user's perspective they're most useful as an alternative to a manual search engine, letting the user find a quick answer to a simple question, such as "how much baking soda is needed for X units of Y bread" or "how to print 'Hello World' 10 times in a loop in X programming language". Beyond this use case the results can be unreliable, and that's to be expected.
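(To be fair, that second question is exactly the kind of thing it handles well, because the answer is the same boilerplate everywhere; in Go, say, something like:)

    package main

    import "fmt"

    func main() {
        // Print "Hello World" 10 times, the sort of boilerplate the model
        // has seen countless times in its training data.
        for i := 0; i < 10; i++ {
            fmt.Println("Hello World")
        }
    }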
Sure, it can also generate long code and even an entire fine-looking project, but it does so by following a statistical template, that's it.
That's why "the easy part" is easy: the easy problem you're trying to solve has likely already been solved by someone else on GitHub, so the template is already there. But the hard, domain-specific problem is less likely to have a publicly available solution.
>I'm feeling people are using AI in the wrong way.
I think people struggle to comprehend the mechanisms that let them talk to computers as if they were human. So far in computing, we have always been able to trace the red string back to the origin, deterministically.
LLMs break that, and we, especially programmers, have a hard time with it. We want to say "it's just statistics", but there is no intuitive way to jump from "it's statistics" to what we are doing with LLMs in coding now.
>That's why "the easy part" is easy: the easy problem you're trying to solve has likely already been solved by someone else on GitHub, so the template is already there.
I think the idea that LLMs "just copy" is a misunderstanding. The training data is atomized, and the combination of those atoms can be just as unique coming from an LLM as from a human.
In 2026 there is no doubt LLMs can generate new, unique code by any definition that matters. Saying LLMs "just copy" is as true as saying any human writer just copies words already written by others. Strictly speaking true, but also irrelevant.
Well said. It also causes a lot of bitterness among engineers; not being able to follow the red string is maddening to some. This rage can also prevent them from finding good prompting strategies that would directly ease a lot of the pain, in much the same way that it's far harder to teach my mother how to do something on her phone if she's already frustrated with it.
I think you severely overestimate your understanding of how these systems work. We’ve been beating the dead horse of “next character approximation” for the last 5 years in these comments. Global maxima would have been reached long ago if that’s all there was to it.
Play around with some frontier models, you’ll be pleasantly surprised.
This point is irrelevant when discussing capabilities. It's like saying that your brain is literally just a bunch of atoms following a set of physics laws. Absolutely true but not particularly helpful. Complex systems have emergent properties.
You're talking about a company owned by one of the richest "tech bros" out there. He's not just running an ISP; he's a visionary (for better or worse) with a lot of ideas.
The TLS payload is encrypted, but metadata (such as the SNI and other fingerprints) is not. That metadata can still be valuable to someone who knows how to use it.
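A minimal sketch of what I mean, in Go (file paths and the port are made up): the server name arrives in the unencrypted ClientHello, so anything on the path can read it before a single byte of payload is decrypted, unless Encrypted ClientHello is in use.

    package main

    import (
        "crypto/tls"
        "log"
        "net"
    )

    func main() {
        // Terminate TLS on :8443 and log the SNI each client sends.
        // hello.ServerName comes straight from the plaintext ClientHello,
        // which is the same metadata a passive observer on the wire sees.
        cert, err := tls.LoadX509KeyPair("server.crt", "server.key") // placeholder paths
        if err != nil {
            log.Fatal(err)
        }
        cfg := &tls.Config{
            Certificates: []tls.Certificate{cert},
            GetConfigForClient: func(hello *tls.ClientHelloInfo) (*tls.Config, error) {
                log.Printf("SNI in cleartext: %q", hello.ServerName)
                return nil, nil // keep using the outer config
            },
        }
        ln, err := tls.Listen("tcp", ":8443", cfg)
        if err != nil {
            log.Fatal(err)
        }
        for {
            conn, err := ln.Accept()
            if err != nil {
                log.Fatal(err)
            }
            go func(c net.Conn) {
                defer c.Close()
                // Force the handshake so the callback above fires.
                if tc, ok := c.(*tls.Conn); ok {
                    _ = tc.Handshake()
                }
            }(conn)
        }
    }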
Some dirt-cheap VPSes may be too unreliable to run anything serious; that's why they're sold dirt cheap. And their customers generally won't complain about sudden server reboots, because that's what's expected at that price.
If they increase their prices, then many of their customers may be better off just using Linode or DigitalOcean etc. instead, as those vendors provide better guarantees of stability.
I’m just making a market assumption that DO needs to raise their prices as well. Everyone needs RAM.
Not sure what the lift might be, but in theory everything should be relatively similar in future state, just more expensive. This is basically a form of inflation.
Not sure about that either; the market is just too delicate at the moment.
People rent a VPS to do something: run a service, a website, email, etc. But there are other ways to achieve the same things without needing a VPS.
If VPS costs rise past a certain level, some people will just host the service on their own Raspberry Pis through a Cloudflare Tunnel, or simply shut the service down.
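A rough sketch of that setup, assuming cloudflared on the Pi (the hostname, tunnel name, and paths are placeholders):

    # one-time setup
    cloudflared tunnel login
    cloudflared tunnel create my-tunnel
    cloudflared tunnel route dns my-tunnel blog.example.com

    # ~/.cloudflared/config.yml
    tunnel: <TUNNEL-UUID>
    credentials-file: /home/pi/.cloudflared/<TUNNEL-UUID>.json
    ingress:
      - hostname: blog.example.com
        service: http://localhost:8080
      - service: http_status:404

    # run it (or install it as a service)
    cloudflared tunnel run my-tunnel

No port forwarding and nothing exposed to the internet directly, which is most of the appeal for a box sitting at home.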
Perhaps, but what is that point, and are we truly approaching it? I'm thinking a $5 bill becomes a $7-10 bill. Those prices have already changed on a lot of things (food, cars, housing, etc.), so it won't really be a shock to anyone. And it's still an immaterial cost versus the headache of completely shifting their provider and architecture. I don't think the cancellations will be massive, especially if you are wise and don't raise prices for users who are paying for near-zero utilization of resources. Those users are always most at risk of cancellation. Any time you email them, some number of them will cancel. It's best not to communicate much at all with this cohort. Every time you talk to them you're reminding them "oh, I am paying them for something I don't use" and so they will log in and cancel.
Unlikely. What's more likely is that they simply don't care. They have turned this into an arms race now; killing off competitors is much more important than caring about collateral damage or even their own future.
But this does highlight the fact that most of our hardware is produced (and thus can be restricted) by a few cartel-like players, just like what's happened in the Internet industry.
> any random enemy with hardware access could plug in a USB cable, flash the older exploitable signed firmware, steal your personal data, install a trojan, etc
A lot of my phones stopped receiving firmware updates long ago; the manufacturer simply stopped providing them. The only way to use them safely is to install custom firmware that still addresses those problems, and this eFuse thing can be used to prevent installing custom firmware.
This eFuse is part of the plot to prevent users from running open-source firmware, that's all it is. The "user safety" jargon can't confuse people anymore, after everything people (at least the smart few) have learned over the years.
The decisive factor here is whether the Chinese company wants to wade deep into American political waters. For any company, living in the crosshairs is difficult enough; it becomes even more so when the company is foreign-owned.
The sale was a strategically correct decision by ByteDance: they made money from it and they secured some future income, at least for a short while. No app or service lives forever anyway, so it's still a good trade.
Whether or not the sale would benefit actual American users never mattered.
I wasn't talking about Bytedance's point of view, but that of the Chinese government. As a private company without state backing, I agree that selling might have been the best decision to maximise their profits. But it could have been politically advantageous for China to veto the sale (even if the platform were politically neutral) because it was basically untouchable, as years of delay had already proven. They might have used it as a bargaining chip, and then I'd be curious to know what they got back.
Why so self-defeating? Even if some of the jobs could be outsourced, some positions just can't be. If those people can unionize, it might start to grow from there.
It's like planting seeds: it might not work at certain times and in certain places, but sometimes the result might surprise you. And you'll never know if you don't give it a try.
It's like... (maybe an inappropriate example) how the NRA brainlessly defends gun rights. They don't first spend 500 billion on gun-safety research trying to prove guns are safe; no, they want guns, and then they come up with reasons why guns are good.
In recent years I've started to think this NRA-style method may actually be how you set things in motion (if it isn't the only effective way), as any added prerequisites or cautions may eventually bog things down to a stop. You've all read the CIA sabotage manual, right? There's a chapter detailing how you can stall a plan by adding complexity (i.e. bigger committees, etc.).
Hmmm... Got me thinking, why must all software implement (and maintain) transport security?
Security standards change and improve over time. With software like stunnel taking care of it, your software could stay practically up to date security-wise forever, as long as you or your users keep their stunnel updated.
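To make that concrete, a minimal illustrative stunnel service definition (the ports and paths are made up): stunnel terminates TLS and hands plaintext to a local app that never has to know about cipher suites or protocol versions.

    ; /etc/stunnel/stunnel.conf
    [myapp]
    ; TLS side, exposed to clients
    accept = 443
    ; plaintext side, the actual application
    connect = 127.0.0.1:8080
    cert = /etc/stunnel/fullchain.pem
    key = /etc/stunnel/privkey.pem

When a new TLS version or cipher policy comes along, updating stunnel (and its OpenSSL) is the whole upgrade; the app behind it doesn't change.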
As someone that has built security applications for most of this century, I can confidently say that when you make security the problem of one device, system, team or entity that it results in insecurity. It might satisfy some auditors but that’s about it.
The most obvious issue is that if any one system is compromised, then the attacker can potentially sniff traffic and they are all effectively compromised. The next one, and it's really key to TLS, is that the app you are proxying probably has an opinion or a desired behavior when things can't be authenticated or are improper. Someone reading your blog and the cert expired a day ago? Probably not super risky to let them read it. Logging in to the mail server and the keys are bad? You might want the server to just block that.
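In Go terms (all file names are placeholders, and this is only a sketch of the idea, not anyone's actual setup), the distinction the app wants to express looks roughly like this: the blog is fine with ordinary server-side TLS, while the sensitive endpoint refuses any client without a valid certificate, a policy that's awkward to express once TLS has been outsourced to a generic proxy in front.

    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "log"
        "net/http"
        "os"
    )

    func main() {
        // Public blog: default server TLS, lenient by design.
        blog := &http.Server{
            Addr: ":8443",
            Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
                w.Write([]byte("blog post\n"))
            }),
        }
        go func() { log.Fatal(blog.ListenAndServeTLS("blog.crt", "blog.key")) }()

        // Sensitive service: the app itself insists on verified client certs,
        // i.e. "bad keys -> just block it", instead of leaving that call to a proxy.
        caPEM, err := os.ReadFile("clients-ca.pem")
        if err != nil {
            log.Fatal(err)
        }
        pool := x509.NewCertPool()
        pool.AppendCertsFromPEM(caPEM)

        admin := &http.Server{
            Addr: ":9443",
            TLSConfig: &tls.Config{
                ClientAuth: tls.RequireAndVerifyClientCert,
                ClientCAs:  pool,
            },
            Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
                w.Write([]byte("admin ok\n"))
            }),
        }
        log.Fatal(admin.ListenAndServeTLS("admin.crt", "admin.key"))
    }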
For like a home lab situation or kind of toy systems? These tools are great; I've used stunnel more than a few times to hack things together.
Was a Nova user, moved to Lawnchair yesterday. It's not the same experience I got from Nova (for example, the clock widget doesn't work), but the adjustment isn't unbearable.
I purchased Nova Launcher Prime years ago thinking it was the best investment I'd made on Google Play; well, maybe I should've spent the money on something else.
I'm curious how much slowdown a "weak" CPU actually causes for real-life programming tasks, assuming the CPU is at least 4th-gen Intel.
I've never used a mobile/power-efficient CPU myself, but I do use old CPUs. For example, the i5-4210M in my T440p: it's obviously not fast compared to newer chips, but when writing code on it (Go and a bit of Rust), I don't really feel a night-and-day difference. Sure, it's slower, but not unbearably so; in fact, in most cases I barely notice it.
My boss used to have an apt sticker on his ThinkPad that said 'My other computer is a data center'. In my case that's also true; I just use local I/O for KVM but the heft is in whatever I'm SSH'd into.
I daily a T480 at home and an X280 on the road. Swapped the batteries for fresh ones last week, they do around 6 hours on a charge for my use case and they run Linux so personally I don't see any reason to upgrade any time soon.
I don't remember the T480 I had being particularly slow, except of course when running games. So I do agree the machine is still capable enough for most use cases today.
But I've also seen people (usually X-series users) complaining on YouTube, saying things like their "mobile" CPU is trash, etc. My thinking is: if the slowdown is actually insignificant for real-life use cases, then I'd rather have longer battery life than better performance.
Put it this way, I only retired my 2013 T530 last year. I think 3rd gen i5 2C4T.
The main limitation for my daily development was simply RAM. The system topped out at 2x8GB. Otherwise, I could run android studio and all my modern JetBrains stuff pretty well. Slow, but good enough.
Compile times were of course terrible, but most of what I do is small embedded firmware type stuff so it never took too long.
But as siblings mention, for anything super heavy it was just an ssh terminal into a beefy server. At a certain point, two real cores is just not enough.
I did upgrade it to the top-spec 4c8t processor right at the end, but it ran way too hot. Between keeping the system on the edge of thermal throttling and the halved battery life, it was not worth the money :(
Desktop CPUs have a lot more punch than their equivalent mobile parts.
My current dev machine is an X1 carbon from 2019. Compiling go code is slower than I’d like, some JavaScript-heavy websites like Jira take a couple extra seconds to load, and the GPU can drive a 4k monitor but it isn’t snappy.
Still, the form factor is perfect, and my next upgrade will be exactly the same machine but more powerful and with a brighter display with the same 2.5k resolution.
Anecdata: I've never experienced any noticeable annoyance with my T470p (i7-6820HQ), but on my new (old) NUC with an i3-3217U, every time you go to compile some Rust or C++ it's already a bit annoying (I'm running Xfce and Firefox on it, and it's perfectly usable, but I wouldn't want to compile all day on it).