Energy-Based Models (EBMs) capture dependencies between variables by associating a scalar energy to each configuration of the variables. Inference consists in clamping the value of observed variables and finding configurations of the remaining variables that minimize the energy. Learning consists in finding an energy function in which observed configurations of the variables are given lower energies than unobserved ones. The EBM approach provides a common theoretical framework for many learning models, including traditional discriminative and generative approaches, as well as graph-transformer networks, conditional random fields, maximum margin Markov networks, and several manifold learning methods.
Probabilistic models must be properly normalized, which sometimes requires evaluating intractable integrals over the space of all possible variable configurations. Since EBMs have no requirement for proper normalization, this problem is naturally circumvented. EBMs can be viewed as a form of non-probabilistic factor graphs, and they provide considerably more flexibility in the design of architectures and training criteria than probabilistic approaches.
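For anyone who wants the gist in code: below is a toy sketch of the two operations the abstract describes, with inference as energy minimization over candidate labels and learning as pushing down the energy of observed pairs while pushing up the current best competitor. The bilinear energy and perceptron-style update are my own minimal choices for illustration, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))            # 3 label dims, 4 input dims (toy sizes)

def energy(x, y, W):
    # E(x, y; W) = -y^T W x: low energy means (x, y) are compatible
    return -y @ W @ x

def infer(x, W, candidates):
    # Inference: clamp the observed x, pick the y minimizing the energy
    return min(candidates, key=lambda y: energy(x, y, W))

candidates = [np.eye(3)[i] for i in range(3)]  # one-hot candidate labels

def update(x, y_true, W, lr=0.1):
    # Perceptron-style energy loss: lower E(x, y_true), raise E(x, y_hat),
    # where y_hat is the current most offending (lowest-energy) answer.
    y_hat = infer(x, W, candidates)
    # dE/dW for E = -y^T W x is -outer(y, x)
    W -= lr * (-np.outer(y_true, x) + np.outer(y_hat, x))
    return W
```

Note there's no normalization anywhere, which is exactly the flexibility the abstract is pointing at.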
Seems like a really interesting unification of the wide variety of techniques out there in statistics and machine learning, analogous to the "everything is a computation graph, as long as it's differentiable" revolution. I like it when this kind of thing has its day. Would be interesting to see how well it works on non-robotics problems.
We actually ran into them when doing research in our startup. It is a really powerful perspective.
Can someone explain to me what the major difference is between energy-based models and variational Bayes approximations, which throw out calculating the normalization constant and switch to maximizing the log joint probability of the data and model?
I think the problem is that the goal is not well defined. So, increased velocity has no bearing on velocity towards the target.
A side question, why is there no research into whether human intelligence is computable? The assumption in AI is that human intelligence is computable, but I've never seen any good argument or evidence that this is true. Seems very unscientific to exert so much energy into this research direction without validating the fundamental assumption.
For example, the one instance I know of that defines AGI in a quantitative manner is Solomonoff induction (SI), but it is not computable. If SI is representative of human intelligence, then AGI is impossible.
What prompted you to even ask this question? Where in the article does it say that "These results are step N on the path to AGI!"? This is research: the researchers found a problem that had limited solutions before (learning concepts with limited examples) and came up with a better solution. It's not clear why you're questioning this.
> [on the computability of human intelligence]
This is just silly. Even if human intelligence were beyond the reach of silicon (which I and many real researchers doubt), this work is still useful even if it doesn't result in AGI.
You're too fixated on AGI. It's not the end goal.
"About OpenAI: OpenAI is a non-profit AI research company, discovering and enacting the path to safe artificial general intelligence."
At any rate, the bigger question is whether AGI is even possible. Why does no one even take a stab at answering this question? We have all these well funded research institutes that just assume AGI is possible, and we could just be throwing all the money down a hole.
If you've seen any papers addressing the more fundamental question in a quantitative manner I'd be interested to see a link.
If AGI is impossible, natural intelligence is either impossible or non-physical, but Penrose’s argument seems to be that computation is insufficient for AGI, not that AGI is impossible.
All-in-all, that book is a big nothing.
1. ignore the question by labeling it nonsensical
2. expand our range of hypotheses to include immaterial answers
Math itself is necessarily immaterial, and modern science would completely collapse without mathematics.
So, it seems prima facie that your claim that immaterial explanations have never been able to adequately answer any question is incorrect. I see no problem with also proposing an immaterial soul as a scientific hypothesis, if we are able to make the claim quantifiable and empirically testable.
It depends on a lot of things that were considered impossible, but not things that are logically incompatible with materialism.
> So, it seems prima facie that your claim that immaterial explanations have never been able to adequately answer any question is incorrect.
No, all the things you point to are material explanations.
The fundamental issue is whether the new theory, materialistic or not, can be quantitatively and empirically tested in some way. I propose the notion of the mind as a halting oracle is such a thing. And we can call it a materialistic halting oracle so we don't violate the need for purely materialistic explanations.
No, that strikes me as false. Geometry is very material and can explain a good deal of mathematics. Differential geometry, an important aspect of modern AI, is necessarily symbolic, but that does not exclude its materialistic applications.
Human brains exist, human brains are AGI, human brains are manufactured on a daily basis, ergo, it is in theory possible to manufacture an AGI.
In any case, I'm not sure why the term "artificial general intelligence" should be constrained to Turing machines. Nothing about it implies a Turing machine; just that it possesses intelligence and is artificially manufactured.
Anyways, all the AI/ML hype is generated not by actual commercial value, but by implied AGI. So, it would behoove us to question the underlying assumption that AGI is actually possible. After all, it is the scientific thing to do.
On computability of intelligence: I'm not an expert on this, but many people study the dynamics of biological neural networks and can represent these dynamics as PDEs, which can then be mapped to electrical circuits. Granted, approximations happen along the way, and it has been difficult to scale these methods to large populations of neurons. It still points to a solid argument that biological neural networks can be represented on a silicon substrate. This is basically what the entire field of neuromorphic engineering is focused on.
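To make that concrete, here's a minimal sketch of the standard leaky integrate-and-fire neuron model (parameters picked arbitrarily by me): the membrane ODE is discretized with forward Euler, which is the same move, in digital form, that neuromorphic analog circuits make physically.

```python
import numpy as np

# Leaky integrate-and-fire: tau * dV/dt = -(V - V_rest) + R*I,
# with a reset whenever V crosses threshold. Toy parameters (mV, ms, MOhm, nA).
tau, V_rest, V_thresh, V_reset, R = 20.0, -65.0, -50.0, -70.0, 1.0
dt, I = 0.1, 20.0                      # time step and constant input current

V, spikes = V_rest, []
for t in np.arange(0.0, 100.0, dt):
    V += dt * (-(V - V_rest) + R * I) / tau   # forward Euler step
    if V >= V_thresh:                          # threshold crossing -> spike
        spikes.append(t)
        V = V_reset
print(f"{len(spikes)} spikes in 100 ms")
```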
Why does the mapping of biological neural networks to silicon substrate imply the human mind is a computer?
Is this a rhetorical question or something, because it seems to me you've answered yourself there. I mean, if the mapping works, what else should it imply besides the consequent?
Both AIXI and Solomonoff induction can be approximated arbitrarily closely, given sufficient computational resources.
Human intelligence is generally expected to be computable because physics is generally believed to be computable. However, don't get me wrong: I would be quite excited and probably pleased to learn that it isn't. I just am not convinced enough (or really, convinced much at all) to hang anything important on the idea that it isn't computable.
Why believe human intelligence is limited by physics?
Human cognition affects human action, which has physical effects. So at the least, intelligence is causally linked with known physical processes. I'd say that's sufficient to make it amenable to physical inquiry.
Pulling out drastic measures like extraphysical magic, just to hand wave at an imprecise problem seems like an act of epistemic violence or something.
For example, I wrote an article explaining how modeling the human mind as a halting oracle results in empirically meaningful results.
Actually, I'm confused what we'd even be claiming by calling intelligence purely non-physical. Perhaps you're thinking of the qualia of intelligence?
Regarding the article you wrote, I think the concept of "partial oracle", while coherent, isn't super helpful. Or rather, it's too broad a characterization. The halting theorem doesn't claim that Turing machines can't solve some instances of the halting problem; it's that there's no finite class of Turing machines which can together decide halting for every machine.
It's not precisely news that some Turing machines can solve the halting problem for certain (infinite) classes of machines. Case in point: regular languages. (Rough sketch of the general trick below.)
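Here's a minimal sketch (names are mine) of why halting is decidable for any machine whose configuration space is finite, which covers finite automata and any fixed-memory machine: the run either halts or must eventually revisit a configuration, and revisiting means looping forever.

```python
def halts_bounded(step, initial_config):
    """Decide halting for a machine with a finite configuration space.

    `step` maps a configuration to the next one, or returns None when the
    machine halts. With finitely many configurations, a non-halting run
    must repeat a configuration, and deterministic dynamics then cycle.
    """
    seen = set()
    config = initial_config
    while config is not None:
        if config in seen:
            return False            # revisited a configuration: infinite loop
        seen.add(config)
        config = step(config)
    return True                     # step returned None: the machine halted

# A countdown from 10 halts; a machine cycling through {0, 1, 2} never does.
print(halts_bounded(lambda n: n - 1 if n > 0 else None, 10))  # True
print(halts_bounded(lambda n: (n + 1) % 3, 0))                # False
```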
Anyway, meta-discussion-wise, I'm super happy to find kindred souls that also enjoy thinking about these things. I recently came across the Complexity Zoo website, and if you're anything like me, I suspect you'll enjoy it too!
In the article the point is a bit subtle, but partial oracles are still not Turing reducible. The point is that the fact that humans cannot solve all problems does not imply they are Turing reducible.
At any rate, from my cursory analysis, it appears the notion of a non-Turing-reducible human mind is empirically tractable, so I still don't understand why it is not a viable scientific hypothesis and why AGI is assumed to be necessarily possible. The mind as a partial oracle would imply AGI is impossible.
Thanks for the link, I'll be checking it out!
The main point is the brain, as far as we know, is reducible to the known laws of physics, which, as far as we know, are entirely computable. Therefore, if human beings exhibit non computable phenomena, such as functioning as halting oracles, then that is a good reason to believe the mind does not reduce to the brain. And we could call this 'mind' a new form of matter or something, so that people don't get alarmed by non-physical phenomena.
This is where I disagree. It would, to me, seem to be a good reason to believe we've missed something about the way the body (not specifically the brain) works.
> And we could call this 'mind' a new form of matter or something, so that people don't get alarmed by non-physical phenomena.
This just sounds like you are eager to label any new phenomena "non-physical". There is no reason to pull the extraphysical gun just because you are stumbling upon new physics.
When doing so, for any string, the proportion (with respect to the coin-flip measure) of the inputs to the UTM that eventually halt with that particular output, and which will have been found to halt so far, will approach "all of them".
Now, you can't quite say "run as long as you need to in order to get an answer within this epsilon of the correct answer", because you don't have a way of determining whether something hasn't halted yet but will, or if it never will. But it is still computable in the limit: if you use absurdly (possibly quite absurdly) large but finite amounts of computational resources, it should give you a good enough approximation.
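To make "computable in the limit" concrete, here's a rough sketch of that lower-bounding procedure. It assumes a hypothetical `run(program, steps)` simulator for the UTM, and it glosses over the prefix-free coding of programs that a real construction needs for the weights to sum properly.

```python
import itertools

def approx_solomonoff_prior(run, target, max_len, max_steps):
    # run(program, steps): simulate the (hypothetical) UTM on `program` for
    # at most `steps` steps; return the output string, or None if it hasn't
    # halted yet. Each halting program p contributes weight 2**(-len(p)).
    mass = 0.0
    for length in range(1, max_len + 1):
        for bits in itertools.product("01", repeat=length):
            program = "".join(bits)
            if run(program, max_steps) == target:
                mass += 2.0 ** (-length)
    return mass

# Growing max_len and max_steps only ever adds mass, so the estimate
# converges to the true value from below -- but, as noted above, there is
# no computable bound telling you when you're within epsilon of it.
```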
Humans make mental errors often enough. I don't see why one would think that, because Solomonoff induction is only approximable, human intelligence must be uncomputable. Why not suppose that humans just have a very close approximation to the uncomputable thing, rather than the uncomputable thing itself?
Why believe that human intelligence is computable? Well, for me it is more of a "choosing not to believe that it is uncomputable". If I believed it uncomputable, I might hang too much of my philosophy/worldview on that assumption, and then if I turned out to be incorrect on that, too much might come crashing down.
Edit: note: I mean "not believing", not "disbelieving". Not(believe(p)) rather than believe(not(p)).
I don't see what will come "crashing down." Science is about being able to quantify and empirically test our hypotheses. Nothing about uncomputable intelligence seems to undermine this idea, but perhaps I am missing something. Instead, it broadens the range of hypotheses we can use to explain the world, which seems to be a good thing.
Like, for example, if I built a justification for my believing in the existence of souls on top of the belief that the human mind is uncomputable, then if I turned out to be incorrect about the human mind, it might lead me to believe that souls don't exist (though that wouldn't really be a logical result, but I am not always rational). So, in order to have better foundations for my personal beliefs, I prefer not to rely on the assumption of uncomputability of the mind (nor on its negation).
Basically, I want to avoid being too optimistic and believing too much because of it being convenient for my other beliefs.
It does seem likely that ideal minds would be uncomputable (for reasons like those you describe).
I just think I should require a rather strong argument to conclude that they are, instead of remaining non-committal.
At least, speaking for myself, I do not exhaustively search literally every hypothesis and match it against my data. Your mileage may vary. Dunno. It's a diverse world out there, right?
Haha. My problem is that my brain attempts to do this but can’t, which just leads to analysis paralysis instead.
Are you implying you're a prodigy who's never fallen while learning to walk as a kid?
A true Solomonoff Inductor would be wildly, wildly smarter than a human being, if it could get over the problem that such a machine would also consume super-exponentially more resources than the universe has.
If that is what is required for induction, it is surprising that humans are able to do so well at identifying concise descriptions of the data we observe. This seems inexplicable with a computational view of human cognition.
Also, that assumes that the goal is some general AI. We're just getting started here.
You don't know that
OK and how about just simulating the universe? There is a legitimate question about computability there - it does seem plausible that aspects of simulating the universe could be uncomputable.
Suppose that this is the case: well, it's still an open question as to whether or not this spells doom for the simulation route. The uncomputable bits are going to be some quantum this or that, and it's not at all clear that such low-level bits are fundamentally required for human-level intelligence, i.e., that the high-level process of intelligence is inseparable from the underlying processes which give rise to it.
Personally, I find it highly unlikely that intelligence is inseparable/has no reduced model, for whatever my prognostication is worth.
BUT even if it is inseparable, there's still a strong argument to be made that you could construct AGI through means of so-called 'embodied computation', just like biology does.
If we can empirically quantify the behavior of an irreducible entity, which seems plausible, then the hypothesis is scientific.
Or are you saying only AGIs can have conversations? That's begging the question.