That's definitely a thing. Additionally, humans are surprisingly friendly in all the wrong ways when it comes to physical security (tailgating, "forgotten ID/credentials", etc.).
Can’t help but think of the 2002 Ted Chiang novelette “Liking What You See” and its tech “Calliagnosia,” a medical procedure that eliminates a person’s ability to perceive beauty. Excellent read (as are almost all his stories, imho).
Don't know about that - but we're incredibly sensitive to minor changes to faces:
I saw a clip not too long ago of a face digitally transitioning between male and female; the changes themselves were incredibly subtle, yet the result was obvious and undeniable.
There's also the uncanny valley: faces that are almost human yet very slightly off somehow come across as incredibly creepy.
Experiments have shown that we perceive our own face as more attractive than it really is. When presented with a series of morphed pictures of their own face, ranging from less to more attractive, people tend not to pick the unmodified picture as the real one, but one morphed slightly toward attractive (where "attractive" mostly means "symmetric", IIRC).
"We’re starting to roll out advanced Voice Mode to a small group of ChatGPT Plus users. Advanced Voice Mode offers more natural, real-time conversations, allows you to interrupt anytime, and senses and responds to your emotions.
Users in this alpha will receive an email with instructions and a message in their mobile app. We'll continue to add more people on a rolling basis and plan for everyone on Plus to have access in the fall. As previously mentioned, video and screen sharing capabilities will launch at a later date.
Since we first demoed advanced Voice Mode, we’ve been working to reinforce the safety and quality of voice conversations as we prepare to bring this frontier technology to millions of people.
We tested GPT-4o's voice capabilities with 100+ external red teamers across 45 languages. To protect people's privacy, we've trained the model to only speak in the four preset voices, and we built systems to block outputs that differ from those voices. We've also implemented guardrails to block requests for violent or copyrighted content.
Learnings from this alpha will help us make the Advanced Voice experience safer and more enjoyable for everyone. We plan to share a detailed report on GPT-4o’s capabilities, limitations, and safety evaluations in early August."
The rule of three[1] also comes to mind and is a hard-learned lesson.
When I see two similar functions, my brain itches to refactor them, and it's almost always a bad idea. More often than not, I later find out that the premature refactoring would've forced me to split the functions again.
.NET is a strong contender, I would say. The standard library is immensely useful, and many of the things you might need beyond it are available as NuGet packages from the same devs who build the standard library.
Thrilled to see Jared Parsons of the C# team pitch in and provide some perspective on how things were done for C# 5 when a similar change was made. Kudos Jared!
What's interesting is that the C# 5 release (which made the breaking change) was back in 2012, and both the change and the reasons for it were very widely discussed at the time. This is right around when Go shipped its 1.0, and it's kinda surprising that they either didn't look closely at "near-peer" languages, or, if they did, couldn't see how this problem was fully applicable to their PL design as well.
(Note that C# at least had the excuse of not having closures in the first version, which makes scoping of "foreach" moot - the problem only showed up in C# 2.0. But Go had lambdas from the get-go, so this interaction between loops and closures was always there.)
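The interaction between loops and closures described above can be sketched in Go. The function names below are mine, not from any discussion referenced here; `capturedByReference` simulates the old semantics by declaring the variable outside the loop header so every closure shares it (which is effectively what pre-1.22 Go and pre-C#-5 `foreach` did for loop variables), while `capturedByValue` shows the classic workaround of shadowing the variable with a fresh per-iteration copy:

```go
package main

import "fmt"

// capturedByReference builds closures that all capture the same variable i.
// By the time any closure runs, the loop has finished and i == n, so every
// closure returns n. This mirrors the old shared-loop-variable semantics.
func capturedByReference(n int) []func() int {
	var fns []func() int
	i := 0 // one variable for the whole loop, shared by every closure
	for ; i < n; i++ {
		fns = append(fns, func() int { return i })
	}
	return fns
}

// capturedByValue applies the classic fix: shadow the loop variable with a
// fresh copy each iteration, so each closure captures its own variable.
// (Go 1.22 made "for i := ..." behave this way by default.)
func capturedByValue(n int) []func() int {
	var fns []func() int
	for i := 0; i < n; i++ {
		i := i // new variable per iteration; each closure gets its own
		fns = append(fns, func() int { return i })
	}
	return fns
}

func main() {
	for _, f := range capturedByReference(3) {
		fmt.Print(f(), " ") // 3 3 3
	}
	fmt.Println()
	for _, f := range capturedByValue(3) {
		fmt.Print(f(), " ") // 0 1 2
	}
	fmt.Println()
}
```

The first loop is the bug people kept hitting: the closures don't snapshot the value at append time, they capture the variable itself.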
Same here, DevDiv is now polyglot focused, so you will see regular comments from .NET folks on other languages as well (mainly Java and Go). David Fowler tends to tweet every now and then about them as well.