Everyone quoted in this article has an incentive to overstate the program's abilities. The creators will say it is better than it is to get more work. Defense officials, to create a deterrent. Journalists, to get clicks.
Given the baseline for government understanding of AI is poor, my prior on how impressive this thing is in reality (as some sort of AGI pathway breakthrough) is pretty low.
I would bet they have some large database and some good, but mostly conventional IT around it.
Having worked in the intel community (long ago), I suspect the practical purpose of Sentient is to replicate the mind of the intel analyst: to select and fuse multiple intelligence sources into a coherent story that suggests an underlying human activity worth investigating further or classifying as actionable, and then pass it along to an operational group like law enforcement or the military.
Another possibility, of course, is FUD. Promote a project as being more impactful than it really is so that your political critics are distracted by it and thus overlook your other projects that are more important, short-term, and real.
You do have to wonder. Why would a secrecy-driven org like NRO employ a meaningful and daunting project name like "Sentient" unless it wanted outsiders to take an interest in the program? When you want a program to fly under the public radar, you name it something meaningless and innocuous like "BranMuffin" or "Spatula", not something meaningful and fearful like "KillerFlyingRobots".
Another FUD angle: make your state-actor counterparties think you're developing a technology that you're aware is massively time- and resource-consuming, but not meaningfully useful, or better yet, can be gamed into being less-than-useful, such that they burn cycles and resources in a wild goose chase.
(I suspect tech companies do this, in part, with the tech-fad treadmill bandwagon.)
I mean, this is the same NRO that made the infamous NROL-39 mission patch, with a sinister looking octopus reaching across the planet, caption reading "Nothing Is Beyond Our Reach."
Well, considering the article talks only about ML but calls it an "artificial brain", I think the government is par for the course with your average ML company marketer.
Nobody is close to creating actual artificial intelligence and we're probably getting close to the top of the hype cycle on the term.
I have to agree. All this "artificial brain" talk, when the reality is that ML is mostly just a really fancy version of "y = ax + b: find the best values for a and b given this data".
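To make the "find the best a and b" point concrete, here is a minimal sketch of that kind of fit using ordinary least squares in NumPy. The data values are made up for illustration; this is just the textbook technique the comment is alluding to, not anything from the article.

```python
import numpy as np

# Toy data roughly following y = 2x + 1, with a little noise
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

# Least-squares fit of y = a*x + b: stack columns [x, 1] and
# solve for the coefficient vector [a, b]
A = np.column_stack([x, np.ones_like(x)])
(a, b), *_ = np.linalg.lstsq(A, y, rcond=None)

print(f"a={a:.2f}, b={b:.2f}")  # a close to 2, b close to 1
```

The same machinery (a parameterized model plus an optimizer minimizing error on data) underlies much fancier models; the comment's point is that the "brain" framing obscures how mechanical the core procedure is.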
What's this you say, everything is a hyperbolic system of conservation laws? I'd be intrigued to know how you'd put Maxwell's equations on that form, or the Einstein field equations, for instance.
Looking around at many AI startups, the only differences between that description and the startups are the "good" part and the startups' army of human contractors doing the labeling, and sometimes even the primary function, by hand.
Well described and true.
It's the way of the world. If they don't do it, what abilities they do have won't get funded. Right now, to get funds, employees, and partners, what other route exists, given the competition for all three? Until better routes are discovered, everyone is in the game of mass broadcasting and overstating their abilities.
Polygraphs also perform no better than coin flips in determining whether a person is lying. They're still used throughout government to make security clearance decisions. I would be careful assuming that this technology being ineffective will keep it from becoming a linchpin in their organizations.