"In August, a computer scientist developed an even faster variation of Shor’s algorithm, the first significant improvement since its invention. “I would have thought that any algorithm that worked with this basic outline would be doomed,” Shor said. “But I was wrong.”"
Wonderful! Yeah, admittedly, I haven't sat down and drawn it out (my process for learning algorithms is often just to do it by hand until it intuitively makes sense), but it did strike me as pretty straightforward at first glance. Thanks!
Me too. I've been trying to find the piece of art with the earth's kingdoms of life surrounding a macro scale - it used to be my wallpaper but I lost track of it. I'd put some of those artists' works on my wall.
For the most part, approximating symbolic AI with LLMs is way more powerful than approximating LLMs with symbolic AI. There may be more power still in keeping the "logic" inside fuzzy weights instead of complicating things by merging two paradigms.
Until you want to be able to show exactly how and why your system arrived at a result...
Completely depends on the thing you're trying to do, but if you're running an autonomous vehicle or a factory system or something similar, relying on mysterious weights and fuzziness to make critical decisions sounds like a disaster.
But being able to point to a pile of Horn clauses that were vetted by PMs and legal, and a graph of every piece of knowledge that was used to make those decisions... that sounds valuable.
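To make that provenance point concrete, here's a toy forward-chaining sketch over Horn clauses. The rule and fact names are made up, not from any real system; the point is just that every derived conclusion records exactly which rules and facts produced it.

    # Toy forward chaining over Horn clauses; all names are hypothetical.
    # Every derived conclusion records which rules and facts were used,
    # so you can point at the exact chain afterwards.
    rules = [
        ("r_obstacle_stop", {"obstacle_ahead", "moving"}, "must_brake"),
        ("r_brake_action",  {"must_brake"},               "apply_brakes"),
    ]
    # fact -> list of sources that support it
    facts = {"obstacle_ahead": ["lidar_frame_1042"], "moving": ["wheel_odometry"]}

    changed = True
    while changed:
        changed = False
        for name, body, head in rules:
            if body <= facts.keys() and head not in facts:
                # provenance = the rule that fired plus everything behind its premises
                facts[head] = [name] + [src for p in body for src in facts[p]]
                changed = True

    print(facts["apply_brakes"])
    # e.g. ['r_brake_action', 'r_obstacle_stop', 'lidar_frame_1042', 'wheel_odometry']
    # (order within the list may vary)

That derivation list is the kind of artifact you can hand to PMs or legal, which you simply can't extract from a pile of weights.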
Yes, I'm sure the stuff we call symbolic AI isn't great for making chat bots and assistants... But there are plenty of other senses of AI where a more deterministic and explicit approach makes sense?
"but if you're running an autonomous vehicle or a factory system or something similar, relying on mysterious weights and fuzziness to make critical decisions sounds like a disaster."
When your brain messes up and you have an accident, isn't that a pretty similar scenario?
Sometimes people make mistakes - some more than others - and we test whether a human can do a job well enough by the results, without examining their brains.
With autonomous cars it can be the same. Lots of tests - and if they can safely handle complex situations reliably, that would be good enough for me. (But I think we are currently pretty far from that.)
> When your brain messes up and you have an accident, isn't that a pretty similar scenario?
When LLMs start going to jail for killing people, then we can consider them equivalent. Until that happens, they must be held to a higher standard, specifically because the incentives aren't there otherwise.
I doubt most drivers actually seriously consider the possibility of jail if they mess up. Nasty bump, might be late for work, can they afford the insurance etc., but "jail is for bad people and I'm not a bad person".
And even if they did, AIs are trained by rewiring their brains directly. It's not a 1:1 match to punishments for humans (or animals), but it is a de facto incentive.
How many were incentivised is a different question to how many were punished.
My previous example was intended to illustrate that the threat of jail is not a huge incentive to humans, because humans don't take it seriously even though it exists: we think it's for other people, not people like us.
Further, I would suggest LLMs can be incentivised by jail even without experiencing jail, just because they happen to have descriptions of jail in their training set.
It's an interesting discussion point for autonomous driving: we set the bar quite high for it, but accept really shitty human drivers without question.
What could be changed to make autonomous driving more acceptable without improving the underlying tech, but by improving the blame game in law and civil liability?
AI cars are on par with the average of all human drivers. AI cars are not considered good enough. Many humans who are worse drivers than the AI are still on the roads despite the traffic laws.
"Many humans who are worse drivers than the AI are still on the roads despite the traffic laws."
Instead of letting AI in an alpha state onto the streets, I'd rather vote for consistently removing the worse human drivers, by making regular tests mandatory for everyone. Don't discriminate by age, but by skill.
And AI can come after we stop seeing all those failures of it on YouTube.
It's hard to prove that they are a bad driver, since they did pass the existing driving test (at least once).
The fact that they're a bad driver is only provable after they're in an accident that is their own fault. And even then, it's only for grievous faults (such as drunk driving) that they get banned from driving.
However, this standard is not applied to autonomous driving: it can pass the human driving test and yet is not accepted as capable.
Insufficient. How would they have the political power to do that in basically every country on the planet since the invention of the horseless carriage?
That's my personal hunch, too. Not so much for making "intelligence" à la HAL 9000, but for coordination / planning / structuring of what was acquired using probabilistic methods.
For, like, actual practical applications ... rather than pretending to be a person.
Had a person associated with a VC basically tell me that the VC community would fund nothing to do with symbolic AI, though. Not the current trend sauce.
I looked into it a little. Some of it is indeed built on stuff that is quite old. It seems like they're doing a vector embedding thing, but with operations defined on the embedding space that capture certain symbolic reasoning abilities and allow you to perform reasoning using multiple networks (they have a structured way to combine the output of a color network and a shape network to encode "red triangle", even if the networks don't know about each other).
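If what they're doing is the role-filler binding trick from vector-symbolic architectures, the core idea fits in a few lines. This is just my guess at what those operations might be, not their actual method:

    # Toy vector-symbolic binding; a sketch of the general trick, not their method.
    import numpy as np

    rng = np.random.default_rng(0)
    D = 10_000  # high-dimensional random bipolar vectors

    def rand_vec():
        return rng.choice([-1, 1], size=D)

    # Pretend these embeddings come from two networks that know nothing
    # about each other: one for colors, one for shapes.
    red, blue = rand_vec(), rand_vec()
    triangle, square = rand_vec(), rand_vec()
    COLOR, SHAPE = rand_vec(), rand_vec()   # "role" vectors

    # Bind role to filler (elementwise product), bundle by addition:
    red_triangle = COLOR * red + SHAPE * triangle

    # Query: unbind the COLOR role and check which color the result resembles.
    query = COLOR * red_triangle            # = red + cross-term noise
    for name, v in [("red", red), ("blue", blue)]:
        print(name, np.dot(query, v) / D)   # red ~ 1.0, blue ~ 0.0

The unbinding works because a bipolar role vector is its own inverse under elementwise multiplication, so the cross-term just adds near-orthogonal noise.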
You have to appreciate the humility.