Hacker News
The Year in Computer Science (quantamagazine.org)
107 points by isaacfrond on Dec 21, 2023 | hide | past | favorite | 35 comments



"In August, a computer scientist developed an even faster variation of Shor’s algorithm, the first significant improvement since its invention. “I would have thought that any algorithm that worked with this basic outline would be doomed,” Shor said. “But I was wrong.”"

You have to appreciate the humility.


I know LLMs are all the rage, but I'm genuinely shocked by the new fast shortest-path algorithm for negative-weight graphs. That's awesome!


Relevant Quanta article: https://www.quantamagazine.org/finally-a-fast-algorithm-for-...

The CACM article discussed here at the time (just 3 months ago!): https://news.ycombinator.com/item?id=37275676

The relevant paper is https://arxiv.org/abs/2203.03456 and an improvement here https://arxiv.org/abs/2304.05279

I wonder if this will become standard curriculum for undergrads sooner rather than later. It's apparently a very simple and approachable method.
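For context, the classical textbook baseline for single-source shortest paths with negative weights is Bellman-Ford, which runs in O(mn) time; the papers above are about getting this down to near-linear time. A minimal Bellman-Ford sketch for comparison (illustrative only, not the new algorithm):

```python
def bellman_ford(n, edges, source):
    """Shortest paths from source in a graph with n nodes.
    edges is a list of (u, v, w) tuples; weights w may be negative."""
    INF = float("inf")
    dist = [INF] * n
    dist[source] = 0
    for _ in range(n - 1):           # n-1 relaxation rounds always suffice
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:            # one extra pass detects negative cycles
        if dist[u] + w < dist[v]:
            raise ValueError("negative cycle reachable from source")
    return dist
```

The new algorithms keep this relax-edges skeleton but use clever decompositions to avoid the full m×n work.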


Wonderful! Yeah, admittedly, I haven't sat down and drawn it out (my process for learning algorithms is often just to do it by hand until it intuitively makes sense), but it did strike me as pretty straightforward at first glance. Thanks!


I love the art in Quanta Magazine


Me too. I've been trying to find the piece of art with the earth's kingdoms of life surrounding a macro-scale atom - it used to be my wallpaper but I lost track of it. I'd put some of those artists' works on my wall.





Veritasium uses a very similar style.


Current LLMs, combined with symbolic AI. Could that be the next jump?


For the most part, approximating symbolic AI with LLMs is way more powerful than approximating LLMs with symbolic AI. There may be more power still in keeping the "logic" inside fuzzy weights instead of complicating things by merging two paradigms.


Until you want to be able to show exactly how and why your system arrived at a result...

Completely depends on the thing you're trying to do, but if you're running an autonomous vehicle or a factory system or something similar, relying on mysterious weights and fuzziness to make critical decisions sounds like a disaster.

But being able to point to a pile of Horn clauses that were vetted by PMs and legal, and a graph of every piece of knowledge that was used to make those decisions... that sounds valuable.

Yes, I'm sure the stuff we call symbolic AI isn't great for making chat bots and assistants... But there are plenty of other senses of AI where a more deterministic and explicit approach makes sense?

That's my admittedly naive take, anyways.
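The "pile of Horn clauses vetted by PMs and legal" idea can be made concrete with a toy forward-chaining engine, where every derived conclusion carries an audit trail back to the rule that produced it. The rule names and facts below are made up purely for illustration:

```python
# Each rule is (body, head): if all facts in body hold, conclude head.
RULES = [
    ({"obstacle_ahead", "speed_high"}, "must_brake"),
    ({"must_brake"}, "disable_cruise_control"),
]

def forward_chain(initial_facts, rules):
    """Derive all conclusions, recording which premises justified each one."""
    facts, trace = set(initial_facts), {}
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if head not in facts and body <= facts:
                facts.add(head)
                trace[head] = body   # audit trail: why this was concluded
                changed = True
    return facts, trace

facts, trace = forward_chain({"obstacle_ahead", "speed_high"}, RULES)
```

Unlike a neural policy, `trace` gives you exactly the explanation the comment is asking for: each conclusion maps to the premises that triggered it.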


"but if you're running an autonomous vehicle or a factory system or something similar, relying on mysterious weights and fuzziness to make critical decisions sounds like a disaster."

When your brain messes up and you have an accident, isn't that a pretty similar scenario?

Sometimes people make mistakes - some more than others - and we use tests to determine whether a human can do a job well enough, judging by the results, without examining their brains.

With autonomous cars it can be the same. Lots of tests - and if one can safely handle complex situations reliably, that would be good enough for me. (But I think we are currently pretty far from it.)


> When your brain messes up and you have an accident, isn't that a pretty similar scenario?

When LLMs start going to jail for killing people, then we can consider them equivalent. Until that happens, they must be held to a higher standard, specifically because the incentives aren't there otherwise.


I doubt most drivers actually seriously consider the possibility of jail if they mess up. Nasty bump, might be late for work, can they afford the insurance etc., but "jail is for bad people and I'm not a bad person".

And even if they did, AI are trained by rewiring their brains directly. It's not a 1:1 match to punishments for humans (or animals), but it is a de facto incentive.


plenty of drivers have gone to jail over their driving.


How many were incentivised is a different question to how many were punished.

My previous example was intended to illustrate the threat of jail is not a huge incentive to humans because humans don't take it seriously even though it exists — we think it's for other people, not people like us.

Further I would suggest LLMs can be incentivised by jail even without experiencing jail, just because they happened to have descriptions of jail in their training set.


you're certainly free to make that claim but that doesn't make it true.


I made two claims and I don't know which one you're dismissing with the intellectual equivalent of "no u".


It's an interesting discussion point for autonomous driving: we set the bar quite high for it, but accept really shitty human drivers without question.

What could be changed to make autonomous driving more acceptable without improving the underlying tech, but by improving the blame game in law and civil liability?


No, we have traffic laws.


AI cars are on par with the average human driver, yet AI cars are not considered good enough. Many humans who are worse drivers than the AI are still on the roads despite the traffic laws.


"Many humans who are worse drivers than the AI are still on the roads despite the traffic laws."

Instead of letting AI in an alpha state onto the streets, I'd rather vote for consistently removing the worse human drivers, by making regular tests mandatory for everyone. Don't discriminate by age, but by skill. And AI can come after we stop seeing all those fails of them on YouTube.


Great.

Now, care to guess why this isn't already the case?


It's hard to prove that they are a bad driver, since they did pass the existing driving test (at least once).

The fact that they're a bad driver is only provable after they cause an accident themselves. And even then, it's only for grievous faults (such as drunk driving) that they get banned from driving.

However, this standard is not applied to autonomous driving: it can pass the human driving test and yet is not accepted as capable.


Strong lobby from the bad drivers?


Insufficient. How would they have the political power to do that in basically every country on the planet since the invention of the horseless carriage?


That's my personal hunch, too. Not so much for making "intelligence" ala Hal 9000, but for coordination / planning / structuring of what was acquired using probabilistic methods.

For, like, actual practical applications ... rather than pretending to be a person.

Had a person associated with a VC basically tell me that the VC community would fund nothing to do with symbolic, though. Not the current trend sauce.


(fun fact.) What do you get if you subtract 1 from each letter in I, B, M ?
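For anyone who wants to check the letter arithmetic:

```python
# Shift each letter of "IBM" back by one in the alphabet.
decoded = "".join(chr(ord(c) - 1) for c in "IBM")
print(decoded)  # → HAL
```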


That's super interesting. Any ideas on other trends that are occurring but that VCs won't touch? Genuinely curious now.


"A person associated with a VC" could mean virtually anyone. Not to be too dismissive but also temper how much stock you put into that judgement.


I'm not saying this person is right, and I have no idea, not my milieu.


I do not understand the Vector AI thing.

Doing math on embeddings isn't new, so it can't be that.

So what is it?


I looked a little into it. Some of it is indeed built on quite old ideas. It seems like they're doing a vector embedding thing, but with operations defined on the embedding space that capture certain symbolic reasoning abilities and let you perform reasoning across multiple networks (they have a structured way to combine the output of a color network and a shape network to encode “red triangle”, even if the networks don't know about each other).
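The "red triangle" construction sounds like classic vector symbolic architectures, where binding is element-wise multiplication of high-dimensional random bipolar vectors and is approximately self-inverse. A sketch under that assumption (the role/filler names are illustrative, not from any specific system):

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # high dimension makes independent random vectors nearly orthogonal

def rand_vec():
    return rng.choice([-1, 1], size=D)

# Role vectors and filler vectors (stand-ins for two networks' outputs).
COLOR, SHAPE = rand_vec(), rand_vec()
red, triangle = rand_vec(), rand_vec()

# Bind each role to its filler, then bundle by summing: "red triangle".
red_triangle = COLOR * red + SHAPE * triangle

# Unbind: multiplying by COLOR again recovers (approximately) the color filler.
noisy = red_triangle * COLOR          # ≈ red + orthogonal noise
sim_red = noisy @ red / D             # cosine-like similarity, near 1
sim_tri = noisy @ triangle / D        # near 0
```

The point is that the two networks need no knowledge of each other: the algebra of binding and unbinding does the composition.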


Most of these are just AI things



