
I'm acutely curious how we can attack the problem of detecting undisclosed AI breakthroughs, particularly those made by organizations uninterested in ever revealing them. Commercially, yes, many organizations want to boast about their AI, and we may find those AIs via games or news. But many organizations involved in national security and financial markets will want to hide their advantage. I would love to hear more about how OpenAI thinks this problem can be approached.

By analyzing sequences of remarkably brilliant developments, products, inventions, or moves made by a single company (or a conglomerate of linked companies) that are several levels above what competitors can deliver.
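One crude way to formalize that idea: treat each organization's rate of notable breakthroughs as a count and flag statistical outliers relative to the field. A minimal sketch, with entirely made-up organization names, counts, and threshold:

```python
# Hypothetical sketch: flag organizations whose breakthrough rate is a
# statistical outlier relative to the rest of the field.
# All names and counts below are invented for illustration.
from statistics import mean, stdev

# breakthroughs credited to each org over some window (hypothetical data)
breakthroughs = {
    "OrgA": 2, "OrgB": 3, "OrgC": 1, "OrgD": 2,
    "OrgE": 3, "OrgF": 14,  # OrgF is suspiciously far ahead of the pack
}

counts = list(breakthroughs.values())
mu, sigma = mean(counts), stdev(counts)

# flag anything more than 2 standard deviations above the field average
outliers = [org for org, n in breakthroughs.items() if (n - mu) / sigma > 2]
print(outliers)  # OrgF gets flagged
```

Real detection would obviously need to control for company size, funding, and publication habits; this only illustrates the "few levels above competitors" heuristic as a z-score.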

I am not from OpenAI though.

If you came up with AI algorithms that gave you a massive edge on financial markets, presumably you would also be intelligent enough to execute the strategies discreetly, through highly distributed trading entities, just below levels that raise eyebrows, so as not to be found out.
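A toy sketch of that kind of discreet execution: split one large parent order into randomly sized child orders rotated across entities, each kept strictly below some attention-raising size. The threshold and entity names are hypothetical:

```python
import random

# Hypothetical: split a large parent order across several trading entities,
# keeping each child order below a (made-up) size that raises eyebrows.
THRESHOLD = 10_000
ENTITIES = ["EntityA", "EntityB", "EntityC", "EntityD"]

def split_order(total_shares: int) -> list[tuple[str, int]]:
    """Break total_shares into child orders, each strictly below THRESHOLD,
    rotating across entities so no single name accumulates the flow."""
    children = []
    remaining = total_shares
    i = 0
    while remaining > 0:
        # randomize the slice size so the child orders don't look uniform
        size = min(remaining, random.randint(THRESHOLD // 2, THRESHOLD - 1))
        children.append((ENTITIES[i % len(ENTITIES)], size))
        remaining -= size
        i += 1
    return children

orders = split_order(100_000)
assert sum(size for _, size in orders) == 100_000
assert all(size < THRESHOLD for _, size in orders)
```

Which is exactly why outside detection is hard: each individual entity's activity looks unremarkable on its own.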

Similar problem to bot detection on online poker networks, except much harder on real financial markets, and probably not something you can regulate.

Try to understand from the outside how exactly Renaissance or Two Sigma are making money with their algorithmic trading; I don't think you'll have much luck.

By that measure, startups are powered by AI and large corps are not. By which I mean, so many variables go into successful product development. Google arguably has the strongest AI in the world, but it has been remarkably bad at introducing new products recently, so those two phenomena seem decoupled to me...

And Google definitely has a few AI-powered projects that others can't match. But Google isn't a target of that detection effort, simply because they always disclose their AI achievements.

This is the biggest problem IMO. Just imagine a couple of AIs that systematically discover a few exploitable loopholes or trading algos all at once; it would be like Black-Scholes, exponentiated. I doubt outside research would be able to infer anything from the available data.
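For context on the analogy: the Black-Scholes model gave early adopters a pricing edge for years before it became common knowledge. Its price for a European call option is

$$
C = S\,N(d_1) - K e^{-rT} N(d_2), \qquad
d_1 = \frac{\ln(S/K) + \left(r + \tfrac{\sigma^2}{2}\right)T}{\sigma\sqrt{T}}, \qquad
d_2 = d_1 - \sigma\sqrt{T},
$$

where $S$ is the spot price, $K$ the strike, $r$ the risk-free rate, $\sigma$ the volatility, $T$ the time to expiry, and $N$ the standard normal CDF. The point of the comment is that an AI discovering several such edges simultaneously would be far harder to notice than one published formula.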

Step one: Stop calling equations "AI".

Step zero: Define "intelligence" consistently and unambiguously.
