Hacker News: Intralexical's comments

With the obvious caveat that low-level game engine, image/video processing, numerical code etc. isn't really viable in Python. But outside of that, it's fast enough for gluing together other code that's doing the heavy lifting.

Right -- if you're writing an Xbox game, for example, you wouldn't go with Python. But if you're in the range of use cases where a language isn't fully disqualified, the pure speed of the language itself rarely needs to guide the decision.

Generally you choose Python for the conciseness you mentioned, and then move the performance-critical functions into another language like C or (I find to be easiest) Cython. Ideally most of your code stays Python, and you either optimize self-contained pieces, or find library bindings that have done it for you.

A profiler like this can be used to identify which parts to rewrite in a faster language. Sometimes it's easier to write everything in Python first and then measure, rather than guess at the start which parts need to be fast.
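A minimal sketch of that measure-first workflow, using the standard library's `cProfile` (the `slow_function` here is a made-up placeholder for real work):

```python
# Hypothetical example: profile first, then decide what to rewrite in
# Cython/C. Only the hotspots that show up here are worth moving.
import cProfile
import io
import pstats

def slow_function(n):
    # Placeholder for real work: sum of squares in pure Python.
    return sum(i * i for i in range(n))

profiler = cProfile.Profile()
profiler.enable()
slow_function(100_000)
profiler.disable()

# Print the five entries with the highest cumulative time.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())
```

The report names each function with its call count and cumulative time, which is usually enough to see whether the bottleneck is your code or a library you're calling into.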

You can also get gains by switching algorithms, both in pure Python and when using a compiled library like `numpy`. And there are also some operations, like string manipulation or the `sqlite3` module, where the Python runtime's implementation has already been optimized in a compiled language.
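As a toy illustration of the algorithm-switching point (the numbers are arbitrary): the same membership test is an O(n) scan against a list but an O(1) hash lookup against a set, so pure Python can get much faster without leaving Python at all:

```python
# Toy example: same task, different data structure.
import timeit

items_list = list(range(100_000))
items_set = set(items_list)

# Membership test near the end of the list: O(n) scan vs. O(1) lookup.
list_time = timeit.timeit(lambda: 99_999 in items_list, number=100)
set_time = timeit.timeit(lambda: 99_999 in items_set, number=100)

print(f"list: {list_time:.4f}s  set: {set_time:.6f}s")
```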


I have no idea if this is true, but I remember seeing in a Kurzgesagt video that developing phage resistance reduces antibiotic resistance, and vice versa. So you might corner bacteria by using both.

https://www.youtube.com/watch?v=YI3tsmFsrOg&t=5m18s


They've been constantly trying to set up P2P solutions. Torrents, DWEB, IPFS, Filecoin, WebTorrent, YJS, whole bunch of tech acronyms. I'm not sure much of it has really caught on?

https://blog.archive.org/tag/decentralized-web/

https://github.com/internetarchive/dweb-transports

Third-party attempt:

https://wiki.archiveteam.org/index.php/INTERNETARCHIVE.BAK

Turns out it's hard! Or maybe just too niche. But you can also help them today, by seeding some of the collections that are available as torrents.


Can you share more about your time at the Canadian one? I feel like there was a big hullabaloo about it years ago, but it's not really clear what they do.

Not sure what hullabaloo you mean -- they do provide a bunch of services to Canadian institutions (including Library and Archives Canada), and they perform physical services like book scanning. In the last few years I believe they've also been the parent organization for the physical Canadian datacentre _somewhere in BC_.

For my work, I worked in their Archiving & Data Services department, on https://archive-it.org/ -- I didn't know this before I joined, but Internet Archive offers various for-pay services to other cultural institutions, mostly around archiving their stuff or white-labelling playback of archives.

For example https://webarchiveweb.bac-lac.canada.ca/ (the Government of Canada's own Internet Archive) is actually outsourced to ADS within Internet Archive.

On one hand this is neat, as IA have expertise around this, but on the other hand (as a Canadian) I don't like that it's not actually sovereign, and that it looks like it's run by our government when it's not. Tradeoffs, I guess.


Not only is the study testing something which only vaguely resembles how doctors diagnose patients, but isolated accuracy percentages are also a terrible way to measure healthcare quality.

If 90% of patients have a cold, and 10% have metastatic aneuristic super-boneitis, then you can get 90% accuracy by saying every patient has a cold. I would expect a probabilistic token-prediction machine to be good at that. But hopefully, you can see why a human doctor might accept scoring a lower accuracy percentage, if it means they follow up with more tests that catch the 10% boneitis.
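The base-rate trap in that paragraph takes only a few lines to demonstrate (the disease names and numbers are obviously made up):

```python
# 90 cold patients, 10 with the rare serious condition.
patients = ["cold"] * 90 + ["boneitis"] * 10

# A "model" that always predicts the majority class.
predictions = ["cold"] * len(patients)

accuracy = sum(p == t for p, t in zip(predictions, patients)) / len(patients)
print(f"accuracy: {accuracy:.0%}")        # 90% accurate...

missed = sum(t == "boneitis" for p, t in zip(predictions, patients) if p != t)
print(f"serious cases missed: {missed}")  # ...while missing every serious case
```

This is why metrics like sensitivity and specificity per condition matter more than a single headline accuracy number.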


What percentage of patients have blood clots in their lungs and a history of lupus, like the article described? That's not on the same level as a common cold at all.

> One experiment focused on 76 patients who arrived at the emergency room of a Boston hospital.

> In one case in the Harvard study, a patient presented with a blood clot to the lungs and worsening symptoms.

That's a single anecdotal fluke from the study, which is misleadingly used to represent the headlining percentages.

If you read the linked paper, it says the LLMs did not outperform any group of doctors in the most important cases:

> The median proportion of cannot-miss diagnoses included for o1-preview was 0.92 [interquartile range (IQR) 0.62 to 1.0], although this was not significantly higher than GPT-4, attending physicians, or residents.

And again, the bigger issue is that skimming nurse's notes and predicting the next tokens, as the study made the doctors do, is not how doctors diagnose medical conditions.


But that's not what I was responding to. "Oh, all of the cases are probably just common colds, so it just guessed cold and was right by sheer luck" is not what happened in the article.

Do you know how examples work? Or methodology? The claim I made is that statistical accuracy percentage ≠ healthcare outcomes, and you will mislead yourself in dangerous ways if you believe a headline that implies they're interchangeable. Not that the model literally guessed common colds when the patients had... boneitis...

The lupus anecdote on its own is irrelevant to whether the statistics are being interpreted in valid ways or not. Also, I said nothing about luck.


Does it still work, though?

Where else would you put the repository domains?


I would put them into a configuration file. You know, so people can configure which repositories are being searched.

Generally I advise against hardcoding stuff that changes often and may need to be adjusted for different users or organizations.


The search APIs are separate from the repository URLs, and the different distros' APIs need to be parsed in different ways. And before you ask, the search APIs have to be separate from the repositories, if you don't want to waste disk, network, and time keeping hundreds of local index files up-to-date every week.

They can't just be "configured" by changing a URL. I guess maybe you could self-host the search page for some of the distros, and reuse the parser, but are people really doing that? Otherwise, you'd have to write new code to parse the results, at which point you might as well soft-fork the script anyway.

> Generally I advise against hardcoding stuff that changes often and may need to be adjusted for different users or organizations.

YAGNI. And if your org does need it for some reason, you're probably better off running something specifically tailored for your own needs instead of whatever implementation makes it in.

The whole script's only 1300 lines. Would adding spending 150 lines on configuration and littering the user's dotfiles be worth it? Now what happens if the configuration's missing/corrupted? When you update the script, do you keep the old dotfile that might be using a deprecated API, or do you replace the old configuration and clobber any customization the user's done? Oops, there go another 1,000 lines, on edge cases, option flags, conf merging, warning messages... And good luck getting bug reporters to explain their configuration changes!

Also, this stuff doesn't "change often". The distros literally can't change it often, because doing so might break LTS stability. I know it's fun to point out perceived flaws in other people's work, but in this case, the URLs are tightly bound to the parsing logic, which is the right place to put them IMO.


Are you asking if this tool can find something on ubuntu 26.04 when the urls it has were hardcoded 11 years ago?

The URL to search for packages in Ubuntu for example hasn't changed to my knowledge. Are you assuming it's only looking for packages in releases that were current at the time?

The site it hardcodes is https://packages.ubuntu.com, so yes I would expect it to work fine

In about a hundred or so separate microservices, of course…

I like the idea of fuel cells, but hydrogen's going to have an image problem as soon as people see the failure mode, if it's just being stored as H2 in compressed tanks. Liquid fossil fuels and electric batteries burn with a gradual flame. Hydrogen suddenly detonates, with a supersonic, shattering shockwave, if it's mishandled.

Even with Cold War money, Lockheed's famed Kelly Johnson couldn't make the logistics work for the CL-400.


Copying my own comment below, with GH links and my (non-AI) summary after skimming:

> https://github.com/zed-industries/zed/issues/7054

> https://github.com/zed-industries/zed/issues/12589

> TL;DR: Mix of language tooling, unsigned proprietary blobs, corrupted and/or GLIBC-dependent files, redundant copies of already-installed executables. The Node packages especially are able to run scripts on install. Personal preference aside, might also create issues with security laws, certifications. All without user consent.

> Issues opened in January and June 2024. They've been rejected, closed, and opened a couple times since then. No changes directly improving this yet as of April 2026.

So... If you want broad language support via LSP servers, then you're going to have to bring in other ecosystems, and Node/Typescript is a big one that doesn't always have alternatives. [0] That's not a Zed-specific problem.

IMO the real issue with Zed is the "runs them by default without asking" part. Plus the questionable practices with binary blobs and the cavalier attitude in the discussions, when I can just use an editor that... Doesn't do any of that.

[0] https://microsoft.github.io/language-server-protocol/impleme...


What are they doing with proprietary binary blobs? I thought it's open source.

If you need an education in law to be able to trust a business isn't trying to steal from you, then maybe you just shouldn't trust that business at all.

Especially for something like a code editor, where plenty of less-shady competitors are available.


> Especially for something like a code editor, where plenty of less-shady competitors are available.

On what basis are you claiming Zed is shady? I seek evidence, not feels.

If you don't understand the contract language, it seems rather presumptuous to make that kind of claim. See what I mean?

If you want to make a _relative_ claim, then I have to ask: have you read the licenses of VS Code, JetBrains, Cursor, WindSurf?

