We don't understand consciousness as it pertains to the underlying question; knowing that brain activity can produce consciousness does not get us any closer to knowing that a matrix multiplication can't.
True, but I don't dispute that, in principle, a lot of matrix multiplication could produce consciousness. I'm just suggesting that an honest appraisal of brains and LLMs points to little or no consciousness on the part of the latter.
If we use your metric, then an honest appraisal of brains and computers suggests little to no mathematical ability on the part of the latter either. If we assume that a similar medium or structure is necessary for similar results, then it should be highly improbable that a bunch of semiconductors could ever perform even simple math, since they are very structurally dissimilar to the human brain.
Only if you insist on thinking of brains and ICs as magical, mysterious objects about which nothing can be said. We understand how both of these objects work to one degree or another. My point is that it is precisely our understanding of both phenomena that suggests LLMs are not conscious or, arguably, intelligent.
The "to one degree or another" is doing all the work in this argument. Does my knowledge of how a full adder works now grant me the ability to discern malware at a glance? Similarly, should we start dismissing psychologists because your average neurologist can just cure depression and other mental issues? Should we do the same with sociologists or economists? Maybe even neurologists could be replaced by physicists or mathematicians.
Or maybe the abstract, high-level understanding of the brain provided by psychology is enough to explain its dynamic behavior? Maybe I can become an expert in IC design by learning React?
We know how neural nets work on a fundamental level, and we know how they work on multiple levels of higher abstraction, yet explainability is one of the biggest problems in machine learning right now. These models can solve complex problems that computer scientists long struggled to develop algorithms for, even though every aspect of them is known to us, except the emergent behavior that arises from complex interactions.
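As a minimal sketch of that point (toy shapes and made-up random weights, plain numpy rather than any real trained model): every operation in a network's forward pass is elementary arithmetic we fully understand, yet the numbers themselves offer no legible account of the behavior they produce.

```python
import numpy as np

# A tiny two-layer net: every step below is arithmetic we understand completely.
# The weights here are random placeholders; in a real trained model they would
# be millions of equally inscrutable numbers.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 8))   # first-layer weights
W2 = rng.standard_normal((8, 2))   # second-layer weights

def forward(x):
    h = np.maximum(0, x @ W1)      # matrix multiply + ReLU: fully understood
    return h @ W2                  # another matrix multiply: fully understood

x = rng.standard_normal(4)
print(forward(x))  # but *why* this output? The weights give no legible answer.
```

Full knowledge of the mechanism, in other words, does not translate into an explanation of the emergent behavior.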
The issue is that consciousness is a strongly emergent property - it touches every level of abstraction and comprises patterns from the specific to the general. Knowing how a system works at the ground level, or at some coarse level of abstraction, does not let you classify it as conscious or unconscious.
Additionally, consciousness is ill-defined. There is no agreed-upon definition that is free of contradictions, does not accidentally include systems we would not consider conscious, and does not accidentally exclude a significant portion of humanity.
I invite you to name some properties of the human brain that you would classify as essential for consciousness to emerge, and then try to find exceptions. I'm very confident that you can come up with at least one for every single property.
Yes, yes, yes, but pretending that we know nothing at all about consciousness, or that nothing can be said about the likelihood that an LLM has it or other such properties, is absurd in the extreme. There are genuine limits to stupidity.