zendist's comments

That's definitely a thing. Additionally, humans are surprisingly friendly in all the wrong ways when it comes to physical security (tailgating, "forgotten ID/credentials", etc.).

A compromised human is immensely more feasible than a physical break-in, but almost all posts above fixate on the latter.

Sorry to anyone having this, that sounds awful.

Would we easily know if the inverse phenomenon is happening to the rest of us, i.e. that we're seeing people as "better looking" than they are?


Can’t help but think of the 2002 Ted Chiang novelette “Liking What You See” and its tech “Calliagnosia,” a medical procedure that eliminates a person’s ability to perceive beauty. Excellent read (as are almost all his stories, imho).

https://en.wikipedia.org/wiki/Liking_What_You_See:_A_Documen...


Don't know about that, but we're incredibly sensitive to some minor changes to faces.

I saw a clip not too long ago of a face digitally transitioning between male and female; the changes themselves were incredibly subtle, and yet the result was obvious and undeniable.

There's also the uncanny valley: faces that are almost human yet very slightly off somehow come across as incredibly creepy.


I believe the medical term for that is "drunk". A condition I've had the misfortune to suffer from myself on occasion.


Experiments have shown that we perceive our own face as more attractive than it really is. When presented with a series of morphed pictures of their own face, from less attractive to more attractive, people tend to not pick the unmodified picture as the real one, but one morphed slightly more towards attractive (where “attractive” mostly means “symmetric”, IIRC).


Sounds interesting, but I hope the study was done both with normal photos and with photos flipped like a mirror image.


Some do, after sobering up.


That’s what those crazies say: the reptilians amongst us glamour all humans into seeing them as better-looking humans.


Same as https://www.theverge.com/2024/7/30/24209650/openai-chatgpt-a... ? I don't have Twitter, so I'm not 100% sure.


Full tweet:

"We’re starting to roll out advanced Voice Mode to a small group of ChatGPT Plus users. Advanced Voice Mode offers more natural, real-time conversations, allows you to interrupt anytime, and senses and responds to your emotions.

Users in this alpha will receive an email with instructions and a message in their mobile app. We'll continue to add more people on a rolling basis and plan for everyone on Plus to have access in the fall. As previously mentioned, video and screen sharing capabilities will launch at a later date.

Since we first demoed advanced Voice Mode, we’ve been working to reinforce the safety and quality of voice conversations as we prepare to bring this frontier technology to millions of people.

We tested GPT-4o's voice capabilities with 100+ external red teamers across 45 languages. To protect people's privacy, we've trained the model to only speak in the four preset voices, and we built systems to block outputs that differ from those voices. We've also implemented guardrails to block requests for violent or copyrighted content.

Learnings from this alpha will help us make the Advanced Voice experience safer and more enjoyable for everyone. We plan to share a detailed report on GPT-4o’s capabilities, limitations, and safety evaluations in early August."


It seems that CrowdStrike only parsed and didn't validate, to great effect :-) /s

Not saying this advice isn't solid, just thought it was funny given this week's news.


Parsing subsumes validation.
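
(A minimal Go sketch of that idea, with a hypothetical Port type: the parser returns a typed value whose existence is proof the check already happened, rather than a separate yes/no validation step you can forget to run.)

    package main

    import (
        "fmt"
        "strconv"
    )

    // Port can only be obtained through ParsePort, so holding a Port
    // is evidence the input was already checked.
    type Port uint16

    // ParsePort both validates and converts: the result is a typed
    // value, not just a "this string looks fine" verdict.
    func ParsePort(s string) (Port, error) {
        n, err := strconv.ParseUint(s, 10, 16)
        if err != nil || n == 0 {
            return 0, fmt.Errorf("invalid port %q", s)
        }
        return Port(n), nil
    }

    func main() {
        p, err := ParsePort("8080")
        if err != nil {
            panic(err)
        }
        fmt.Println(p) // 8080
    }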


The rule of three[1] also comes to mind and is a hard learned lesson.

My brain tends to want to refactor as soon as I see two similar functions, and it's almost always a bad idea. More often than not, I later find out that the premature refactoring would've forced me to split the functions again.

1: https://en.m.wikipedia.org/wiki/Rule_of_three_(computer_prog...
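
(A hedged Go sketch of the failure mode described above; greetUser/greetAdmin/greet are made-up names. Two similar functions get merged prematurely, and the "unified" version immediately grows a flag that the next caller's requirements will likely break anyway.)

    package main

    import "fmt"

    // Two functions that look almost identical today.
    func greetUser(name string) string  { return "Hello, " + name }
    func greetAdmin(name string) string { return "Hello, " + name + " (admin)" }

    // The tempting premature abstraction: one function plus a flag.
    // The third caller usually wants something the flag doesn't cover,
    // and the merged function has to be split apart again.
    func greet(name string, admin bool) string {
        s := "Hello, " + name
        if admin {
            s += " (admin)"
        }
        return s
    }

    func main() {
        fmt.Println(greetUser("alice"))
        fmt.Println(greetAdmin("bob"))
        fmt.Println(greet("carol", false))
    }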


Better to understand DRY first, before you add lots of technical debt to your code: https://news.ycombinator.com/item?id=40525064#40525690


Nice, I advocate for this but never knew it was a more formal thing.


If you were alive in the '80s, it's not the first time ;-) https://en.m.wikipedia.org/wiki/1983_Soviet_nuclear_false_al...


Also, the '80s movie 'WarGames'.


True, but somehow AI makes this worse for me.


> AI doomers are a sci-fi flavored rapture cult, full of vocal crazies spreading overblown fears

> Stochastic parrots and other animals in linear algebra zoo ain't going to end the world

> Proceeds to put black-box SOTA ML models at the core of Mutual Assured Destruction dynamics

What could possibly go wrong.



.NET is a strong contender, I would say. The standard library is immensely useful, and many things you might find yourself needing beyond it are available as NuGet packages from the same devs who build the standard library.


I don't know about that comparison. CoffeeScript transpiled to JavaScript; Kotlin doesn't transpile to Java, it compiles to JVM bytecode.

So in a sense, from a tooling and compiler perspective, they're orthogonal products, similar (I guess) to C# and F# on the .NET CLR.


Thrilled to see Jared Parsons of the C# team pitch in and provide some perspective on how things were done for C# 5 when a similar change was made. Kudos Jared!


What's interesting is that the C# 5 release (which made the breaking change) was back in 2012, and both the change and the reasons for it were very widely discussed at the time. This is right around the time Go shipped its 1.0, and it's kinda surprising that they either didn't look closely at "near-peer" languages, or, if they did, couldn't see how this problem was fully applicable to their PL design as well.

(Note that C# at least had the excuse of not having closures in the first version, which makes scoping of "foreach" moot - the problem only showed up in C# 2.0. But Go had lambdas from the get-go, so this interaction between loops and closures was always there.)
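
(A minimal Go sketch of that interaction: before the per-iteration scoping change, every closure created in the loop captured the same loop variable, so they all observed its final value.)

    package main

    import "fmt"

    func main() {
        var prints []func()
        for i := 0; i < 3; i++ {
            // Old semantics: all closures share one "i", so each prints 3.
            // With per-iteration variables (as in Go 1.22), this prints 0, 1, 2.
            prints = append(prints, func() { fmt.Println(i) })
        }
        for _, p := range prints {
            p()
        }
    }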


Well, this isn't the only language feature where Go designers ignored the path trailed by other languages.

Still looking forward to the day they will discover Pascal enumerations.


Same here. DevDiv is now polyglot-focused, so you will see regular comments from .NET folks on other languages as well (mainly Java and Go). David Fowler tends to tweet every now and then about them as well.


Yeah, it's often a joy to read these posts and the perspectives on it; this is how programming language development should be done.

