Title should be something like “Apple Pencil Pro causes iPad to overheat and slow down”. This sounds really annoying, but the overly broad title is just clickbait.
In my experience, this bug - lags and overheating when drawing with the Apple Pencil - has existed since iPadOS 16. When I searched the web, I found lots of reports and no indication that it has been solved, not even by hardware replacement.
In any case, HN's guidelines ask you to use the original title of an article unless it is misleading or linkbait. I'd agree that Apple's software quality has been going down.
iPadOS is largely dysfunctional in a myriad of other areas too. I like my iPad, but the number of times Chrome or a simple app just freezes is getting out of hand. There is also a bug where the iPad freezes if a Bluetooth device connects while the device is locked. I think this got fixed in a recent update, but it happened frequently for well over a year, and it would lock the iPad for 15-60 minutes at a time.
These issues are becoming more frequent. Meanwhile, Apple is trying to sell me on some stupid intelligence that I do not need.
People want the service-y features, though. Immediately getting access to all your passwords, bookmarks, history, and extensions when you log in is a major selling point of both Firefox and Chrome, and people want features like that more than they want local-only software that’s fully under their control.
They have something like 700 employees working on Firefox. To be honest, I have no idea what they do — take a look at the commit history and you’ll see that the core group of engine contributors is quite small — but $280,000 per employee including benefits, HR, office rent, and other overhead actually isn’t bad at all.
He’s just a kvetcher. I wish that kind of content didn’t do so well on YouTube/reddit - he’s rewarded handsomely for being like this, so he has no incentive to change.
Accuweather tried to sue NOAA to prevent them giving forecasts to the public via the internet. Accuweather wanted to stay as a middle-man.
Accuweather and basically every other forecaster use input data from NOAA or other government agencies.
The biggest private companies could, theoretically, run proprietary models for their customers if NOAA were to quit doing forecasts altogether. For instance, the GFS is open source so someone with a big enough supercomputer could use it.
None of the models ingests that data, mostly because there is no way to ensure that personal weather stations are sited correctly. So, for instance, an anemometer has to be around 30’ above ground with no obstructions to generate usable readings. Most people putting a personal weather station in the backyard aren’t going to the trouble to locate the station where it can provide data that are usable.
Plus, surface readings are only a small part of the atmospheric picture; you need weather balloons to generate a more holistic reflection of the atmosphere at a site. There are also data generated from aircraft, but those are more one-dimensional (in the sense that they are only sampling conditions at a single altitude) than twice-daily balloon launches from selected sites. Until someone decides to implement a private, nation-wide radiosonde network, there probably won’t be a numerical prediction model that operates independently of NOAA data.
Why do you feel the need to be so rude about an interesting little blog post?
Obviously you don’t need an LLM to prettify obfuscated JavaScript. But take a look at the repo. It didn’t just add the whitespace back — it restored the original file structure, inferred function and variable names, wrote TypeScript type definitions based on usage, and added (actually decent) comments throughout the source code. That simply isn’t possible without an LLM.
Please link to a tool that can infer function/variable names and TypeScript type definitions from minified JS without using LLMs or requiring significant user input.
You didn't say it's impossible for a tool to do it. You said it's impossible.
Also, it's guessing at the names, guessing at type definitions, and it makes further guesses based on its previous ones, correct or no. If you don't already know what you're doing, you're in trouble.
For someone to claim that they're sharing the "decompiled" source of Claude Code for the public good is self-important back-patting nonsense, not to mention misleading.
> I get the sense that you’re frustrated that something you’ve invested a lot of time and energy into learning is being automated. Maybe you’re scared because the technology has moved so quickly and your understanding of how to use it hasn’t kept up.
This is profiling/projection. You're incapable of responding to the GP's points so you're instead emotionally lashing out and attacking them. This is not really suitable for HN.
> But making someone’s day worse on Hacker News isn’t a good way to deal with that.
This suggests that you're incapable of distinguishing criticism of someone's work from personal attacks on them (furthered by the profiling that you tried to conduct above). Those things are not the same. If your day is ruined by someone posting reasonable criticism of an article that you personally submitted to HN, a place explicitly designed for intellectual curiosity, your expectations need to be adjusted.
OP is an adult who published his writing and posted it here himself for feedback. It's not all going to be positive.
> I get the sense that you’re frustrated that something you’ve invested a lot of time and energy into learning is being automated. Maybe you’re scared because the technology has moved so quickly and your understanding of how to use it hasn’t kept up
Wrong on all counts. I use LLMs to write code all the time, and I know how they work, which is why I find processing 5MB of JS through one to be an obscene waste of energy.
I do not use LLMs to publicly claim abilities I don't already have myself. Reading this article does not worry me one bit about my job security.
Right? If a company can't be trusted to do password hashing, then they definitely shouldn't be trusted with PII. With that said, I think it's the SSO and broader ecosystem that orgs are paying vendors for.
Passwords are data that, once you've built a system that requires you store them, must be kept forever, with basically zero tolerance for loss or unavailability (so you have to make them continuously available to systems that validate user authentication), but also have zero tolerance for exposure. And it's a type of data that has no profit in it. You can't use it to improve UX or target ads or anything else of value.
At best, stored passwords are something you always get right and are value neutral to you. And everything below that is toxic.
While I don't disagree with anything you wrote in particular, nothing in your comment above answers the original question: “why are hashed passwords hazardous material?”
My understanding was that properly hashed passwords should have no value whatsoever (the hash should be indistinguishable from random noise and should not be reversible by any means).
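To illustrate the point (a minimal sketch using Python's standard library; the function name and cost parameters are just illustrative choices, not anyone's production settings):

```python
import hashlib
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Derive a digest from the password with a random per-user salt,
    using the memory-hard scrypt KDF from the standard library."""
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

salt, digest = hash_password("correct horse battery staple")
# The stored digest is a fixed-length byte string with no visible
# relation to the input; without the password it looks like noise,
# and hashing the same password with a fresh salt yields a
# completely different digest.
print(digest.hex())
```

That's the "no value if done properly" intuition: the defender stores only salt and digest, and reversing the digest requires brute-forcing candidate passwords through a deliberately expensive function.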
The fact that tptacek (who is very well known for his competence in security and cryptography) says otherwise deeply intrigues me, but your response doesn't answer the question.
Properly stored nuclear material is safe as well. But it's still hazardous material, because if the storage or handling is imperfect, it is harmful.
Passwords are the same, except we’re constantly finding new attacks and weaknesses.
As some examples:
1. When a new attack is found against a hash construction, all the passwords stored with it become more vulnerable.
2. When it turns out your auth server is logging passwords in plaintext, so it doesn't help that your DB stores them properly hashed.
3. When your auth call isn't properly validating hashed passwords, so attackers can either bypass the correct flow or intuit things about the password.
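On point 3, one classic pitfall is comparing digests with ordinary equality, which can leak how many bytes matched via response timing. A hedged sketch of the safer pattern (function names and parameters are illustrative, not from any particular codebase):

```python
import hashlib
import hmac
import os

def verify(password: str, salt: bytes, stored_digest: bytes) -> bool:
    """Recompute the scrypt digest for the candidate password and
    compare in constant time, so timing does not reveal how much
    of the digest matched."""
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, stored_digest)

# Illustrative usage with a freshly generated record:
salt = os.urandom(16)
stored = hashlib.scrypt(b"hunter2", salt=salt, n=2**14, r=8, p=1)
print(verify("hunter2", salt, stored))   # matching password
print(verify("hunter3", salt, stored))   # non-matching password
```

`hmac.compare_digest` exists precisely because a naive `==` on bytes can short-circuit at the first mismatch, which is the kind of subtle validation bug the list above is pointing at.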
Points 2 and 3 again have nothing to do with the question. Plaintext passwords are obviously unsafe material, but that's not the point of the question.
Nuclear material that leaks its containment is bad no matter what; the entire question is how a leaked hashed password is a bad thing, and you only address that in point 1, which I don't find convincing. When was the last time a practical attack was found on a hashing scheme that wasn't already obsolete 15 years ago?