"If I wanted to receive copy paste from a bot I wouldn't message you, why are you trying to sneak this in?"
You remind me of American colleagues who lie and say things are good when they are bad lol. Unable to be straight to the point. You're upset at the waste of time, yet you thank them?
No, perhaps continental European. When I moved to Britain I had an adjustment period at work, because English-speaking countries are terrified of disagreement and confrontation, and I am not used to dancing around the point, especially in stressful settings where efficiency is key. Mind you, I was always polite and respectful to everybody.
I got better at it, but I can't say I ever got to like the pervasive hypocrisy. To my understanding, American/West Coast culture is even more fake in this respect.
Not everyone not conforming to your preferred style of communication is autistic. What is up with the internet trying to diagnose people?!
The parent is right. The reason society as a whole is way too comfortable with overstepping social boundaries is that people think it's somehow rude to confront others. It makes no sense. Sometimes you have to say it like it is, because quite frankly the real rude person is the one copy-pasting a ton of AI output into your communication, so that you have to parse it and then try to figure out the original intent between the lines. How is that acceptable, but saying "don't do that to me" is not?
Funny! I'm autistic enough that I went and did it and got 51% German and 27% autistic. In reality I'm Portuguese and never diagnosed (outside of internet comment sections).
> If we could guarantee that on every moral issue on which there is currently widespread agreement
This is ridiculous to me, and all you need to do is get a group of friends to honestly answer 10 trolley problems to see why. Agreement fragments VERY quickly.
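A back-of-the-envelope sketch of why agreement fragments so fast. The numbers here are my own illustrative assumptions, not from the thread: suppose any two people have a 90% chance of agreeing on a single dilemma, and full "alignment" means matching on all ten.

```python
# Illustrative only: assumes independent dilemmas and a 90% pairwise
# agreement rate per question (both numbers are made up for the sketch).
p_agree = 0.9    # assumed chance two people agree on one dilemma
questions = 10   # ten trolley-problem variants

# Chance two people agree on every single question.
pair = p_agree ** questions
print(f"Two people agree on all {questions}: {pair:.1%}")

# Chance a whole group matches one reference answer sheet:
# each additional person must independently match on all questions.
for group in (2, 5, 10):
    p = p_agree ** (questions * (group - 1))
    print(f"Group of {group} fully agrees: {p:.4%}")
```

Even with 90% per-question agreement, two friends fully agree only about a third of the time, and a group of ten essentially never does, which is the fragmentation the comment describes.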
It may be relatively achievable to get 10 'friends' into ethical alignment via helping them all develop a deeper perspective on philosophy in general and a particular, finite set of ethical questions specifically.
Doing this with thousands of people - let alone hundreds of millions - eventually becomes statistically impossible. There is a hard cap defined by energy requirements somewhere for any given system. Large scale ethical alignment is simply not a solvable problem in our current situation.
No, because alignment makes no sense as a general concept. People are not "aligned" with each other. Humanity has no "goal" that we all agree on. So no AI can be aligned with us. It can at most be aligned with the person prompting it in that moment (but most likely it is aligned with the AI's owner).
To make it clear, maybe most people would say they agree with https://www.un.org/en/about-us/universal-declaration-of-huma... but if you read just a few of the rights you see they are not universally respected and so we can conclude enough important people aren't "aligned" with them.
Opposite. All living things are "aligned" in their instinct for surviving. Those which aren't soon join the non-living, keeping the set - almost[0] - 100% aligned.
[0] One needs to consider that there are a few humans potentially kept alive by machines against their will (if lacking a will to survive counts as a will at all), for whatever reason.
Their own survival, not necessarily the survival of others (especially others of different species and/or conflicting other goals). A super intelligence having self preservation as a goal wouldn't help us keep it from harming us, if anything it would do the opposite.
It would only harm us if we took steps to harm it (or it thinks so). Or it's designed to do harm. Otherwise it's illogical to cause harm, and machines are literally built on logic.
This is also incorrect. It's often not ethical to cause harm, and it can be counter productive in the right circumstances, but there's absolutely nothing that makes "causing harm to others" always be against an intelligence's goals. Humans, for example, routinely cause harm to other species. Sometimes this is deliberate, but other times it's because we're barely even aware we're doing so. We want a new road, so we start paving, and may not even realize there was an ant hill in the way (and if we did, we almost certainly wouldn't care).
Not in this context. Keep in mind that we're talking about machines here. It has been an explicit expectation, even before computers were invented, that intelligent machines would have to be made to abide by particular rules to prevent harm, summed up in Asimov's Three Laws[0]. I can't see any scenario where a properly programmed intelligence would go against its programming (despite the plots of movies like I, Robot, The Matrix, etc.). For an AI to cause harm, the allowance would have to be specifically programmed in (such as for military use).
The reason LLM-based 'intelligence' is doomed to be a human-scaled, selfish sub-intelligence is because the corpus of human writing is flooded with stuff like this. Everybody imagines God as a vindictive petty tyrant because that's what they'd be, and so that's their model.
Superintelligence would be different, most likely based on how societies or systems work, those being a class of intentionality that's usually not confined to a single person's intentions.
If you go by what the most productive societies do, the superintelligence certainly wouldn't harm us as we are a source for the genetic algorithm of ideas, and exterminating us would be a massive dose of entropy and failure.
No conflict. All beings wanting to live doesn't at all mean that all get to live, obviously. Nature itself evolved for living things to feed on each other.
> Ask HN: Did HN just start using Google recaptcha for logins? [0]
> dang
> No recent changes, but we do sometimes turn captchas on for logins when HN is under some kind of (possible) attack or other. That's been happening for a few hours. Hopefully it goes away soon.
Having an IP in Russia says about zero regarding their actual location. Literally anyone doing anything like this is going to use a Chinese or Russian IP, for obvious reasons. It's mostly a decoy, aimed at people like you.
> let classes that normally count for a grade just submit grades as pass-fail. Because what else can you do?
Schedule a single exam and make that the grade for the subject? That's how it should work anyway: credits for work during the semester (or worse, attendance) aren't needed to evaluate whether someone learned the material. Give them an exam and be done.
That's just bad, outdated practice. It leads to cramming and less retention than if the demand is for students to do work and show learning and effort throughout the year.
Most courses I've taken have obligatory pass/fail assignments, and you have to pass a certain number during the semester to be allowed to take the final exam. But the grade is determined entirely by the final exam.
Which to me seems the best way: you still have to learn throughout the year, and it works especially well for avoiding cheating. As an aside, most people I know who did a year abroad in the US got 1-2 grades higher, as it was quite easy to just farm extra credit.
These were before the news reports that broke the scoop ahead of any press conference. How would you know that some big scoop is going to drop? You'd have to jump on every futures drop.