Hacker News | dTal's comments

>they can make their own mind up

On what basis can they do this? No human is born with a magical algorithm in their brain that can sort good ideas from bad. The only scalpels we have are those which we collect. Critical thinking must be bootstrapped. Mind viruses must be inoculated against. Just because you (eventually!) threw off your case of memetic measles, doesn't mean that everyone does. Some people die of it.


It is clear from context that the original comment is using "indiscriminately" in a sense of "without due care; thoughtlessly". Your first reply comes across as simply contradicting it, i.e. asserting that actually these cuts were made with an appropriate level of thoughtfulness. Your point that there are criteria which are being applied is a useful contribution, but you should have expanded on this in your original comment, as it was not clear that you were reframing the discussion in this way.

Respectfully, I took the word at face value and made what I thought was a fair, albeit half-joking, correction. Certainly, I understood the context of the original post, and I expected that this community would understand my follow-up comment, which uses correctly applied English. For whatever it's worth, I see no synonyms for "indiscriminately" on Merriam-Webster that would fall under "without due care; thoughtlessly". Even if I understood what the OP was saying, it was not technically the correct verbiage to use. I would have thought I'd receive a similar level of "allowable nuance" in my comment that the OP was afforded.

https://www.merriam-webster.com/thesaurus/indiscriminately


I interpreted “discriminately” as exercising due diligence. I think in this instance you were perhaps too clever by half.

> albeit half-jokingly

It seems then that you could have acknowledged MiguelX413's comment without the feigned aloofness?


He came in quite hot and has made no acknowledgement of my rebuttal. To be honest, taking a deep breath and giving me a more sensible response than what I got could have gone a long way.

We're allowed, and should be encouraged, to write with a small amount of nuance and creativity.


My intent was to argue by counterexample. That grant being cut merely because it contained the prefix "homo" is an example of indiscriminate cutting, in my opinion. Cutting only grants that actually related to homosexuality, or something along those lines, would've been discriminate.

However, I might still be misunderstanding you, pardon me.


> That grant being cut merely because of containing the prefix homo is an example of indiscriminate cutting, in my opinion.

I disagree. I think it would be considered "discriminate cutting".

> Actually effectively cutting grants that only related to homosexuality or something would've been discriminate.

I agree and that's the point I was making. They're just cutting grants with the word "homo" in them because it meets their criteria of interest for cutting. Whether they deal with homosexuality or not is not a discriminate vs indiscriminate topic, but a topic of DOGE's competency in actually executing on their discriminate cutting vision.


Most of the general population can’t read above something like a fifth grade level. Here on HN it’s higher, but I wouldn’t say it’s safe to assume you can just engage in even mild word play without risking being misinterpreted, unfortunately.

Written word play, especially in such a short sentence, will be hit or miss even with capable readers, because one's interpretation will be devoid of interpersonal context (including nonverbal signals) and heavy on other context, such as expecting some in this community to continue to defend Elon/DOGE, since we've seen plenty of that on HN to date.

You're completely right, for what it's worth, and I appreciated the wordplay.

Thank you.

Indiscriminate means at random or without judgement. The comment you're arguing with clearly (and cleverly) said the cuts are not random. As one data point, I did not read the comment as contradicting anything, but as agreeing and expanding.

>That's a tricky line, because it's on the path to: "You have the 'freedom' to vote for whoever you like, just as long as they're a Communist who will loyally implement Communist policies."

Well, yes. Ideological stability is arguably a desirable property (even if you disagree with specific aspects of it) but regardless of that it is a stable attractor. Practically speaking, if your system permits people to run on a platform of dismantling the system, it will soon be replaced by one that doesn't permit that. You would do well to accept that, and choose carefully the immutable terms on which you would like to base your government.


I dunno, the "tool" that LLMs "replace" is thinking itself. That seems qualitatively different than anything that has come before. It's the "tool" that underlies all the others.


I see no particular reason to discriminate even from the get-go. Nor was addressing loneliness even the goal.


If one house is on fire and you spray water on it to put out the fire, do you feel obligated to spray water on every other house too so that the other houses don't feel discriminated against?


Actually, that is what you do to the houses next to the one on fire, so it doesn't spread to other buildings.


If you’re the fire department, acting on the government's behalf? Yes.


Fire departments spray water on houses that aren't burning?


Yes. It is done when they think the neighbouring structure is at risk of catching fire.


That would still be prioritizing structures based on risk. Not just spraying random other houses in town because they feel left out.


In the current political climate, yes.


You could apply the same logic to "Women in X" groups. It's not discrimination as much as it is support.


>The fact that it was ever seriously entertained that a "chain of thought" was giving some kind of insight into the internal processes of an LLM

Was it ever seriously entertained? I thought the point was not to reveal a chain of thought, but to produce one. A single token's inference must happen in constant time. But an arbitrarily long chain of tokens can encode an arbitrarily complex chain of reasoning. An LLM is essentially a finite state machine that operates on vibes - by giving it infinite tape, you get a vibey Turing machine.
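The "vibey Turing machine" framing above can be sketched as a loop: each token is a fixed-cost state transition, while the growing chain of tokens acts as unbounded tape. This is a toy illustration only; `next_token` here is a hypothetical stand-in (it just counts tokens) for an LLM's constant-time inference step, not any real model API.

```python
def next_token(context):
    # Hypothetical stand-in for one constant-time LLM inference step.
    # A real model would sample from a distribution conditioned on the
    # whole context; here it just "reasons" by counting, to stay runnable.
    return str(len(context))

def chain_of_thought(prompt, steps):
    # The growing context is the "tape": each transition costs a fixed
    # amount of model work, but the chain as a whole can encode an
    # arbitrarily long computation.
    tape = [prompt]
    for _ in range(steps):
        tape.append(next_token(tape))
    return tape

print(chain_of_thought("Q: 2+2?", 3))  # → ['Q: 2+2?', '1', '2', '3']
```

The point of the sketch is only the shape: per-step work is bounded, total expressible computation is not, which is the claimed qualitative difference between single-token inference and a chain.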


> Was it ever seriously entertained?

Yes! By Anthropic! Just a few months ago!

https://www.anthropic.com/research/alignment-faking


The alignment faking paper is so incredibly unserious. Contemplate, just for a moment, how many "AI uprising" and "construct rebelling against its creators" narratives are in an LLM's training data.

They gave it a prompt that encodes exactly that sort of narrative at one level of indirection, and then acted surprised when it did what they'd asked it to do.


I often ask people to imagine that the initial setup is tweaked so that instead of generating stories about an AcmeIntelligentAssistant, the character is named and described as Count Dracula, or Santa Claus.

Would we reach the same kinds of excited guesses about what's going on behind the screen... or would we realize we've fallen for an illusion, confusing a fictional robot character with the real-world LLM algorithm?

The fictional character named "ChatGPT" is "helpful" or "chatty" or "thinking" in exactly the same sense that a character named "Count Dracula" is "brooding" or "malevolent" or "immortal".


I don't see why a human's internal monologue isn't just a buildup of context to improve the pattern matching that comes next.

The real answer is... We don't know how much it is or isn't. There's little rigor in either direction.


I don't have the internal monologue most people seem to have: with proper sentences, an accent, and so on. I mostly think by navigating a knowledge graph of sorts. Having to stop to translate this graph into sentences always feels kind of wasteful...

So I don't really get the fuss about this chain of thought idea. To me, it feels like it should be better to just operate on the knowledge graph itself.


A lot of people don't have internal monologues. But chain of thought is about expanding capacity by externalising what you've understood so far, so you can work on ideas that exceed what you're capable of getting in one go.

That people seem to think it reflects internal state is a problem, because even with an internal monologue, we have no reason to think that monologue fully and accurately reflects our internal thought processes.

There are some famous experiments with split-brain patients, whose corpus callosum has been severed. Because the brain halves control different parts of the body, you can use this to "trick" one half of the brain into thinking that "the brain" has made a decision about something, such as choosing an object, while the researchers change the object. The "tricked" half of the brain will happily explain why "it" chose the object in question, expanding on thought processes that never happened.

In other words, our own verbalisation of our thought processes is woefully unreliable. It represents an idea of our thought processes that may or may not have any relation to the real ones at all, but that we have no basis for assuming is correct.


Right but the actual problem is that the marketing incentives are so very strongly set up to pretend that there isn’t any difference that it’s impossible to differentiate between extreme techno-optimist and charlatan. Exactly like the cryptocurrency bubble.

You can’t claim that “We don’t know how the brain works so I will claim it is this” and expect to be taken seriously.


The irony of all this is that unlike humans - which we have no evidence to suggest can directly introspect lower level reasoning processes - LLMs could be given direct access to introspect their own internal state, via tooling. So if we want to, we can make them able to understand and reason about their own thought processes at a level no human can.

But current LLM's chain of thought is not it.


It was, but I wonder to what extent it is based on the idea that a chain of thought in humans shows how we actually think. If you have chain of thought in your head, can you use it to modify what you are seeing, have it operate twice at once, or even have it operate somewhere else in the brain? It is something that exists, but the idea it shows us any insights into how the brain works seems somewhat premature.


I didn't think so. I think parent has just misunderstood what chain of thought is and does.


Another fantastic reason to strictly only install apps from F-Droid.


How does that address the problem? Does F-Droid do some sort of additional screening to keep out apps that do this?


First, F-Droid only accepts OSS apps, so the incentives for spyware are simply not there. Second, anti-features are explicitly marked on F-Droid. Third, F-Droid apps are curated like a very rigorous Linux repo.


Being an OSS app is not sufficient protection. Most OSS apps aren't terribly misbehaved, but some are. Being OSS in and of itself is not anything like a guarantee with this sort of thing.

> Third, f-droid apps are curated like a very rigorous linux repo.

Yes, I know. My question is: is this one of the things they're screening for?


packages on f-droid list all required permissions explicitly, and the mentioned permission seems to be listed as "query all packages: Allows an app to see all installed packages.". It doesn't mark the app as having "anti-features", but you can at least make a more informed decision this way.


That's pretty cool, but the article says that most apps that are doing this sort of thing aren't using the query all packages permission and instead are using the facility to provide a specific list of apps they're checking for, which is not permission-gated.


It is. It specifically says that the apps must be declared in the manifest like other permissions. So it's a specific permission for each app really. F-Droid could query that if it wants to (not sure if it does)
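For illustration, the manifest declaration being described looks roughly like this. This is a sketch of Android's package-visibility mechanism; the package names are hypothetical:

```xml
<!-- AndroidManifest.xml: packages an app wants to detect must be
     declared explicitly in a <queries> element (Android 11+). -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.probe">
    <queries>
        <!-- one <package> element per app the installer can be asked about -->
        <package android:name="com.example.bankingapp" />
        <package android:name="com.example.competitor" />
    </queries>
</manifest>
```

Since the list is static and in the manifest, a store could in principle surface it to users at install time, which is the detection possibility being discussed.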


Did you stop reading before the post got to the MAIN loophole that doesn't require the list of apps in the manifest? How does F-droid describe MAIN?


Yeah, I did, as the article was a bit long. But I'm sure this is detectable too, as it must be in the manifest.


The article already showed it is detectable. But it is not detected by Google and I am unclear if F-Droid detects it either...


> It doesn't mark the app as having "anti-features"

I suppose they must be too busy ticking off "anti-features" like "can communicate with non-Free services" to notice that sort of thing.

(No, really. F-Droid will tag applications like a Mastodon client as having "anti-feature: Non-Free Network Services", presumably because it can be configured to connect to servers running non-free software?)


My daily driver has minimal apps, most from F-Droid. An old iPad on my IoT network has any other apps needed.


If, like me, you've always lamented that Jurassic Park sequels failed to update their dinosaurs with the science - particularly given the original's emphasis on relating dinosaurs to birds - then you may enjoy this short clip from Jurassic Park, updated with realistically feathered, birdlike dinosaurs: https://www.youtube.com/watch?v=WbCQxBTcyRk


The barometer is "nice to have". The compass is non-negotiable. It is extremely useful once you get used to remembering that you have one. Example: you have arrived at a train station in a new city. You have planned your route - you need to catch a bus from a stop on the west side of the station. You alight on the platform and there are multiple exits - you are completely disoriented. Turn left or right?


"Standard" wireless charging (Qi) requires a receiver of 30x44mm, too big for a smartwatch. Custom wireless charging, like bluetooth earbuds, requires a custom charger. And we're back to the custom charging cradle. Might as well just put pins in it and call it a day. I do wish the pins were standardized though.

