If I'm doing work that involves three different libraries, I'm not reading and committing to memory the whole documentation for each of those libraries. I might well have a few tabs with some of those libraries' source files too. I can easily end up with tens of tabs open as a form of breadcrumb trail for an issue I'm tracking down.
Then there's all the basic stuff — email and calendar are tabs in my browser, not standalone applications. Ditto the ticket I'm working on.
I think the real issue is that browsers need some lightweight "sleep" mechanism that sits somewhere between a live tab and just keeping the source in cache.
There are all the usual "$APPLICATION is a memory hog" complaints, for one.
In the SWE world, dev servers are a luxury that you don't get in most companies, and most people use their laptops as workstations. Depending on your workflow, you might well have a bunch of VMs/containers running.
Even outside of SWE world, people have plenty of use for more than 8GiB of RAM. Large Photoshop documents with loads of layers, a DAW with a bazillion plugins and samples, anything involving 4k video are all workloads that would struggle running on such a small RAM allowance.
This depends on industry. Around here, working locally on a laptop is a luxury, and most devs are required to treat their laptop like a thin client.
Of course, being developer laptops, they all come with 16 gigs of RAM. In contrast, the remote VMs where we do all of the actual work are limited to 4GiB unless we get manager and IT approval for more.
Our company just went with the "server in the basement" approach, with every employee having a user account (no VM or docker separation, just normal file permissions). Sure, sounds like the 80s, but it works really well. Remote access with wireguard, uptime similar or better than cloud, and sharing the same beefy CPUs works well and gives good utilization. Running jobs that need hundreds of GB of RAM isn't an issue as long as you respect others' needs too and don't hog the RAM all day. And in amortized cost per employee it's dirt cheap. I only wish we had more GPUs.
> Interesting. I required all my devs to use local VMs for development.
It doesn’t work when you’re developing on a large database, since it won’t fit. Database (and data warehouse) development has been held back from modern practices just for this reason.
Current job used to let us run containers locally, but they decided to wrap docker, and later podman, with "helper" scripts. These broke regularly and became too much overhead to maintain, so we are now mandated to do local dev but use a dev k8s cluster for any level of testing beyond unit tests that needs a db.
A real shame, as running local docker/podman for postgres was fine when you just ran the commands.
I find this quite surprising! What benefit does your org accrue by mandating that the db instance used for testing is centralised? Where I am, the tests simply assume that there’s a database available on a certain port. docker-compose.yml makes it easy to spin this up for those so inclined. At that stage it’s immaterial whether it’s running natively, or in docker, or forwarded from somewhere else. Our tests stump up all the data they need and tear down the db afterwards. In contrast, I imagine that a dev k8s cluster requires some management and would be a single point of failure.
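For reference, the whole pattern can be as small as a fixture like this (a rough sketch, assuming a Postgres started by docker-compose is listening on localhost:5432 and that psycopg2 is available; the table name and credentials are made up):

```python
import psycopg2
import pytest

@pytest.fixture
def db():
    # Assumes docker-compose (or anything else) has already started a
    # Postgres on localhost:5432 with these throwaway credentials.
    conn = psycopg2.connect(
        host="localhost", port=5432, dbname="test", user="test", password="test"
    )
    cur = conn.cursor()
    # Each test stumps up the data it needs...
    cur.execute("CREATE TABLE widgets (id serial PRIMARY KEY, name text)")
    cur.execute("INSERT INTO widgets (name) VALUES ('example')")
    conn.commit()
    yield conn
    # ...and tears it down afterwards.
    cur.execute("DROP TABLE widgets")
    conn.commit()
    conn.close()

def test_widget_count(db):
    cur = db.cursor()
    cur.execute("SELECT count(*) FROM widgets")
    assert cur.fetchone()[0] == 1
```

At that point it really doesn't matter where the database runs, as long as something answers on the port.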
I really don't understand why they do what they do.
Large corp gotta large corp?
My guess is that providing the ability to pull containers means you can run code that they haven't explicitly given permission for, and the laptop scanning tools can't hijack them?
Yes, zero latency typing in your local IDE on a laptop sounds like the dream.
In enterprise, we get shared servers with constant connection issues, performance problems, and full disks.
Alternatively we can use Windows VMs in Azure, with network attached storage where "git log" can take a full minute. And that's apparently the strategic solution.
Not to mention that in Azure 8 CPUs gets you four physical cores of a previous gen server CPU. To anyone working with 4 CPUs or 2 physical cores: good luck.
Part of what bothers me about AI energy consumption isn't just how wasteful it might be from an ecological perspective, it's how brutally inefficient it is compared to the biological "state of the art": 2,000 kcal = 8,368 kJ, and 8,368 kJ / 86,400 s ≈ 96.9 W.
So the benchmark is achieving human-like intelligence on a 100W budget. I'd be very curious to see what can be achieved by AI targeting that power budget.
Is it though? When I ask an LLM research questions, it often answers in 20 seconds what it would take me an entire afternoon to figure out with traditional research.
Similarly, it has written me scientific simulation code in around a minute that would have taken me two days.
Obviously I'm cherry-picking the best examples, but I would guess that overall, the energy usage my LLM queries have required is vastly less than my own biological energy usage if I did the equivalent work on my own. Plus it's not just the energy to run my body -- it's the energy to house me, heat my home, transport my groceries, and so forth. People have way more energy needs than just the kilocalories that fuel them.
If you're using AI productively, I assume it's already much more energy-efficient than the energy footprint of a human for the same amount of work.
Training energy is amortized across the lifespan of a model. For any given query for the most popular commercial models, your share of the energy used to train it is a small fraction of the energy used for inference (e.g. 10%).
For this kind of thinking to work in practice you would need to kill the people that AI makes redundant. This is apart from the fact that right now we are at a choke point where it's much more important to generate less CO2 than it is to write scientific simulation code a little quicker (and most people are using AI for much more unnecessary stuff, like marketing).
> For this kind of thinking to work in practice you would need to kill the people that AI makes redundant.
That is certainly not a logical leap I'm making. AI doesn't make anybody redundant, the same way mechanized farming didn't. It just frees them up to do more productive things.
Now consider whether LLMs will ultimately speed up the technological advancements necessary to reduce CO2? It's certainly plausible.
Think about how much cloud computing and open source changed things so you could launch a startup with 3 engineers instead of 20. What happened? An explosion of startups, since there were so many more engineers to go around. The engineers weren't delivering pizzas instead.
Same thing is happening with anything that needs more art -- the potential for video games here is extraordinary. A trained artist is way more effective leveraging AI and handling 10x the output, as the tools mature. Now you get 10x more video games, or 10x more complex/larger worlds, or whatever it is that the market ends up wanting.
Except in reality they're not. If you want to argue the contrary, show the statistics that unemployment among digital artists is rising.
So many people make this mistake when new technologies come out, thinking they'll replace workers. They just make workers more productive. Sometimes people do end up shifting to different fields, but there's so much commercial demand for art assets in so many things that the labor market is not shrinking for digital artists right now.
How so? A human needs the entire civilisation to be productive at that level. If you take just the entire US electricity consumption and divide it by its population, you'll get a result that's an order of magnitude higher. And that's just electricity. And that's just domestic consumption, even though US Americans consume tons of foreign-made goods.
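Back of the envelope, with ballpark public figures (roughly 4,000 TWh of annual US electricity consumption and about 330 million people; both approximate, for illustration only):

```python
# Rough, illustrative numbers only.
us_electricity_twh_per_year = 4000      # ~total annual US electricity use
us_population = 330e6                   # ~330 million people
hours_per_year = 8760

kwh_per_person = us_electricity_twh_per_year * 1e9 / us_population
avg_watts_per_person = kwh_per_person * 1000 / hours_per_year
print(round(avg_watts_per_person))      # ~1400 W, vs ~97 W of metabolic power
```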
Ah! And don't get me started about how specific its energy source must be! Pure electricity, no less! Whereas a human brain comes attached to an engine that can power it for days on a mere ham sandwich!
I can generate images or get LLM answers in under 15 seconds on mundane hardware. The image generator draws many times faster than any normal person, and the LLM even on my consumer hardware still produces output faster than I can type (and I'm quite good at that), let alone think what to type.
Speed correlates strongly with power efficiency. I believe my hardware maxes out somewhere around 150 W, and 15 seconds of that (150 W × 15 s = 2,250 J, roughly half a kilocalorie) isn't much at all.
> Also, why are people moving mountains to make huge, power obliterating datacenters if actually "its fine, its not that much"?
I presume that's mostly training, not inference. But in general anything that serves millions of requests in a small footprint is going to look pretty big.
It's not a good analogy at all, because of what they said about mundane hardware. They're specifically not talking about any kind of ridiculous wattage situation, they're talking about single GPUs that need fewer watts than a human in an office to make text faster than a human, or that need 2-10x the watts to make video a thousand times faster.
An LLM gives AN answer. Ask for even a few more than that and it gets confused, but instead of acting in a human-like way, it confidently proceeds forward with incorrect answers. You never quite know when the context got poisoned, but reliability drops to 0.
There are many things to say about this. Free is worthless. Speed is not necessarily a good thing. The image generation is drivel. But...
The main nail in the coffin is accountability. I can't trust my work if I can't trust the output of the machine. (and as a bonus, the machine can't build a house. It's single purpose).
Beyond the wastefulness, the linked article can't even remotely be taken seriously.
> An AI cloud can generate revenue of $10-12 billion dollars per gigawatt, annually.
What? I let ChatGPT swag an answer on the revenue forecast and it cited $2-6B rev per GW year.
And then we get this gem...
> Wärtsilä, historically a ship engine manufacturer, realized the same engines that power cruise ships can power large AI clusters. It has already signed 800MW of US datacenter contracts.
So now we're going to be spewing ~486 g CO₂e per kWh using something that wasn't designed to run 24/7/365 to handle these workloads? These datacenters choosing to use these forms of power should have to secure a local vote, showcasing (and being held to) annual measurements of NOx, CO, VOC and PM.
This article just showcases all the horrible bandaids being applied to procure energy in any way possible with little regard to health or environmental impact.
I'm an EU citizen and UK resident. If I were to become one of those officials, my banking situation would become much more complex. One of the defining characteristics of the EU (not that the UK ever cared, even before leaving) is Freedom of Movement, and this is a credible threat to that freedom.
When it was in the EU, the UK was actually one of the countries (if not the country) that made freedom of movement the easiest because, indeed, they did not care. You could move there with zero involvement or knowledge from the authorities.
Yeah, moving here involved basically buying a plane ticket, and, after I got here, booking an appointment to get a National Insurance number (basically equivalent to an American Social Security number). Never occurred to me that moving to any other EU country might be harder than that.
My experience moving to Germany from the UK in 2018 was only one step harder than that in terms of bureaucracy — two appointments, one for social security and the other for an ID card. Even my having a much poorer grasp of the German language than I realised wasn't a problem*, as the bureaucracy is mostly bilingual and, when it isn't, has interpreters.
The only actual hard part was just that the rental market in Berlin has vastly more demand than supply.
* hopefully next month I pass a B1 exam, which tells you how hard it has been for me to get fluent.
One of Cambridge's commuter villages. Was a home owner, still am, very useful passive income.
I'm not sure about how London compares, but Berlin has rent controls so the queues for open house viewings around here can go all the way down the apartment staircase and along the street.
> Never occurred to me that moving to any other EU country might be harder than that.
I don't think it is? I moved to Spain from another EU country the same way, basically bought the cheapest one-way plane ticket I could find, spent ~1 month here before deciding I wanted to live here, then got myself the local residence card one morning and that's about it. Everything else just worked by using my passport in the meantime.
It is significantly more difficult in other EU countries, yes.
Here in Finland, for example, the process is actually no different than for a non-EU migrant (same amount of time for an unproblematic application, same number of appointments). You are just much more likely to be accepted, but they do still reserve the right to reject people. And it is, probably unintentionally, much harder to exist in Finland as a non-resident: you can't have a bank account, can't use foreign phone numbers for most things, and any phone you can get is very limited (can't call many numbers, etc). I couldn't even log into the local eBay for the first 6 months. All the Nordics, I would guess, are similar.
And people have pointed out in the replies to you that Spain is not actually as easy as you suggested...
I actually don't know any Western country that is as easy to move to as the UK was pre-Brexit. I still think the UK is in fact one of the easier Western countries to move to, especially if you can't find moderately paid work.
Countries with a national id system I would guess tend to be more difficult overall though. And the UK famously is not one of those.
It depends on the country. And Spain is not as simple as you say. Even getting the NIE is very difficult due to the foreign police not making enough appointments available, and due to expensive immigration agencies hoarding those appointments to make money.
Then you need a social security number, which is different from the NIE; you need empadronamiento; you need to register with the health service; and you need to set up your tax if you're going to work here (or if you live here more than 180 days of the year).
> Then got myself the local residence card one morning
Well, exactly. Some countries require/required registration and a residence card. That did not exist in the UK when it was in the EU; you just showed your passport/ID card when you needed to prove your right to be there (basically once in a blue moon). Even now EU residents don't have any physical documents.
The National Insurance number @pdpi mentioned is unrelated as everyone has one once they work and an appointment is not always required to get one, and you can actually start working before you get one.
If you work as an employee there is also usually nothing to do regarding tax.
In Barcelona it is impossible to get an appointment for the residence card. There is an online booking system, but it never shows any available slots. But then there are a few companies that, for 50-100 euros, can get you an appointment.
But then even with an appointment one only gets a temporary permit unless one already has a job offer. One gets the permanent card only after starting a business, buying property, or getting a job.
Also, to open a permanent bank account one needs at least temporary residence. Otherwise banks can only open a tourist account valid for a few months.
"There is online booking system, but it never shows any available slots. But then there are few companies that for 50-100 euros can get an appointment."
^^^ shouldn't complain about this on Hacker News.
I wrote my own bot and it took a day or so.
The appointment slot came in 30 minutes thereafter ;)
If you're an EU citizen you by definition have a permanent permit, until either your country of origin or host country leaves the EU. If you are not then woe be you, but that's a separate matter.
That's not actually the case, strictly speaking. Residence in another EU country requires meeting certain criteria even if some countries (like the UK when it was in the EU) do not check or really enforce them. This also means that an EU citizen can be deported from another EU country back to their home country if they don't meet those criteria.
"Permanent residence" is also again different and requires residence under those criteria for at least 5 years.
In theory yes, one can stay in Spain as a citizen of an EU country indefinitely. In practice, for anything in Spain you need a tax number. Even to get an Internet connection at home one needs it.
I had neither when I moved; I sold my things, tried to survive, ended up sleeping outside for a few days, and found a job after I moved here, not before. But yeah, there are one or two more appointments in reality, one for social security and one for registering with your local city government, both a lot easier to get than the residence permit, which can be a bit of a hassle unless you work with agencies to get it.
I'm Portuguese and have lived in the UK for over a decade.
UK keyboard layouts suck for writing Portuguese, because they lack convenient ways to type all the diacritics. Portuguese layouts (especially on macOS) suck horrendously for programming (curly braces and square brackets are inordinately annoying to type).
These days, all my physical keyboards are US (ANSI) layouts, and I use the US International (with dead keys) layout exclusively. It's the only relatively sane option that allows me to write both code and all the natural languages I'm liable to write on any given day (read: English, Portuguese, and some random French or German loanwords here and there).
As a Brazilian fellow, 100% agreed. US international is the least bad compromise I've found. I can't say I mind the dead keys too much. And I do enjoy that all combinations are sensible (i.e. key for the symbol + key for the letter). Memorizing the (not quite random but not exactly 100% logical either) position for some of the diacritics would be very annoying to me.
I guess I don't mind it too much because the standard Portuguese keyboard layout also relies on dead keys for accented letters, instead of having dedicated keys for them. (Or at least the Brazilian Portuguese layout does, not sure about the European Portuguese layout.) So that's just what I've always been used to.
I speak English and French at work and I use an ANSI US keyboard frequently, but the laptop itself is AZERTY. I keep three layouts: French AZERTY, normal US, and US International. When typing code or English, it's the US layout; then I switch to International when speaking French. AZERTY only if I don't have my keyboard with me.
I'd go a bit farther — "mock" is basically the name for those dummy versions.
That said, there is a massive difference between writing mocks and using a mocking library like Mockito — just like there is a difference between using dependency injection and building your application around a DI framework.
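To make that distinction concrete, here is a minimal sketch of dependency injection without any framework: the dependency is just a constructor argument, wired up by hand in one place (all names are illustrative):

```python
# Plain dependency injection, no framework: the dependency is passed in,
# and all wiring happens at a single "composition root".
class EmailSender:
    def send(self, to: str, body: str) -> None:
        print(f"sending to {to}: {body}")

class SignupService:
    def __init__(self, sender: EmailSender):
        self.sender = sender              # injected, not constructed internally

    def register(self, email: str) -> None:
        self.sender.send(email, "welcome!")

# Composition root: swap in a fake sender in tests, the real one in prod.
service = SignupService(EmailSender())
service.register("user@example.com")
```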
> there is a massive difference between writing mocks and using a mocking library like Mockito
How to reconcile the differences in this discussion?
The comment at the root of the thread said "my experience with mocks is they were over-specified and lead to fragile services, even for fresh codebases. Using a 'fake' version of the service is better". The reply then said "if mocking doesn't provide a fake, it's not 'mocking'".
I'm wary of blanket sentiments like "if you ended up with a bad result, you weren't mocking". -- Is it the case that libraries like mockito are mostly used badly, but that correct use of them provides a good way of implementing robust 'fake services'?
In my opinion, we do mocking the exact opposite of how we should be doing it — Mocks shouldn't be written by the person writing tests, but rather by the people who implemented the service being mocked. It's exceedingly rare to see this pattern in the wild (and, frustratingly, I can't think of an example off the top of my head), but I know I've had good experiences with cases of package `foo` offering a `foo-testing` package that provides mocks. Turns out that mocks are a lot more robust when they're built on top of the same internals as the production version, and doing it that way also obviates much of the need for general-purpose mocking libraries.
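A rough sketch of what that pattern can look like, with purely hypothetical names (`foo`, `foo_testing`): the fake ships alongside the real client and shares its interface, so tests depend on the published fake rather than hand-rolled mocks.

```python
# foo/client.py -- the production client (hypothetical package)
class Client:
    def __init__(self, base_url: str):
        self.base_url = base_url

    def get_user(self, user_id: int) -> dict:
        raise NotImplementedError("real implementation would make an HTTP call")

# foo_testing/fake.py -- shipped by the same authors, same interface
class FakeClient(Client):
    def __init__(self):
        super().__init__(base_url="fake://")
        self._users = {}

    def add_user(self, user_id: int, name: str) -> None:
        self._users[user_id] = {"id": user_id, "name": name}

    def get_user(self, user_id: int) -> dict:
        return self._users[user_id]

# A consumer's test uses the published fake instead of a mocking library.
def test_get_user():
    client = FakeClient()
    client.add_user(1, "Ada")
    assert client.get_user(1)["name"] == "Ada"
```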
AI has an image problem around how it takes advantage of other people's work, without credit or compensation. This trend of saccharine "thank you" notes to famous, influential developers (earlier Rob Pike, now Rich Hickey) signed by the models seems like a really glib attempt at fixing that problem. "Look, look! We're giving credit, and we're so cute about how we're doing it!"
It's entirely natural for people to react strongly to that nonsense.
Every time I try to have this conversation with anyone I become very aware that most developers have never spent a single microsecond on thinking about licenses or rights when it comes to software.
To me it's very obviously infuriating that a creator can release something awesome for free, with just a small requirement of copying the license attribution to the output, and then the consumers of it cannot even follow that small request. It should be simple: if you can't follow that, then don't use it, don't ingest it, and don't output derivatives of it.
Yet having this discussion with nearly anyone, I'm usually met with "what? license? it's OSS. What do you mean I need to do things in order to use it, are you sure?". Tons of people use MIT-licensed code and distribute binaries but have never copied the license into the output as required. They are simply and blissfully unaware that there is this largely-unenforced requirement that authors care deeply about and that LLMs violate en masse. Without understanding this, they think the authors are deranged.
> I think parts of Liquid Glass on macOS looks pretty bad. But I don't care that much about how things look, so it doesn't offend me.
I don't care overmuch about the purely cosmetic side of it, but Liquid Glass looks absolutely terrible from an ergonomics point of view. It's just plainly, objectively bad UX.
Tip: in Accessibility, enable High Contrast and disable transparency. Optionally disable animations. Decent experience imo. I can now see what areas are clickable.
Nb I see tons of rendering bugs across a bunch of apps and I suspect it’s because I disabled as much animation and transparency as I could. Things like the keyboard opening slightly off the screen to the right then jumping into place, some apps going black when certain overlays are open, stuff like that.
I did basically that on my iPhone. My laptop needed a cleanup, so I just wiped it and re-installed Sequoia. The Mac Studio never got the upgrade at all. If at some point I find there's something in Tahoe that I particularly need, I'll revisit upgrading.
Doing a fresh install of Sequoia was the best move for me, too. I had an unnecessary number of third-party apps installed for no reason. I don't even use Ice for the menu bar anymore; I realized I didn't need the icons I had hidden in the first place, so I completely disabled them in whichever apps make that possible.
It's low risk from the acquirer's point of view. Somebody else paid for that research, you just get to buy it once it's proven itself sufficiently to your liking.
It’s a bit… aggressively worded, yes, which does detract from the message. But it was the investigation itself that tickled me. I wish my abilities to debug “why is my laptop crashing” extended to the level of determining that this sort of timing is off.