Just asking. Philosophy seems like THE important thing to study, and perhaps research, on the road to creating an understanding artificial mind. And were "deep learning", the "free energy principle", and the like produced systematically, with the help of philosophically informed theories?
Seriously. The 'existential' (epistemological plus influential) power of AGI by definition (sorry, no actual definition; read it as "in my opinion") encompasses "all of science", or at least some analog of science that works somewhat differently from human science but still contains analogs of all the human sciences. My point is that it's foolish to search for some bare mathematical formalism, one uninformed by philosophical theories, in the hope that it will generate all of science or something comparable. That's too primitive. Remember that scientists in different fields have different cognitive styles, and different sciences have different philosophies, methodologies, paradigms, different "styles of mental content". AGI, by definition (or in my opinion), will present a unified understanding of all of them from a sufficiently philosophically powerful and abstract foundation. So why not work on that theory?
1. Sloppy, unclear thinking. I see this constantly in discussions about AGI, superintelligence, etc. Unclear definitions, bad arguments, speculation about a future religion "worshipping" AIs, use of sci-fi scenarios as somehow indicative of the field's future progress, on and on. It makes me long for the early-to-mid 20th century, when scientists and technicians were both technically and philosophically educated.
2. The complete and utter lack of ethical knowledge, which in practice means AI companies adopt whatever flavor-of-the-day ideology is being touted as "ethical." Today that seems to be DEI, though it may have peaked. Tomorrow it'll be something else. For most researchers, the depth of "AI ethics" or "AI safety" seems to depend entirely on whatever society at large currently finds unpleasant.
I have been kicking around the idea of starting a blog/Substack about the philosophy of AI and technology, mostly because of this exact issue. My only hesitation is that I'm unsure what the monetization model would be, and I already have enough work to do and bills to pay. If anyone would find this interesting, please let me know.