You can manage to do something like that as a CS PhD if you have the right advisor, though it's harder to pull off than it used to be, as CS has gotten much more obsessed with being a rigorous science. It especially used to be the case in AI, and in some corners still is, that a good thesis makes conceptual and philosophical advances in analyzing problems and domains (or proposing new problems), which are "validated" not only via mathematical theorems, user studies, or benchmarks, but by arguing for your conclusions, the way a philosophy thesis would (though of course technical results can be used to bolster the argument where appropriate). The PhD theses Douglas Hofstadter supervises are an example.
I would deeply appreciate advice from you or anyone who has thoughts.
I did a double major in CS and Business during my undergrad at CMU ('09) and focused very much on practical learning (read: programming/web apps) and corporate/startup endeavors. However, I was always drawn towards studying the relationship between minds and machines on my own time. This was mostly triggered by Gödel's theorem, reading GEB/AI books, and some obsessive impulse to learn about my own mind.
Now that I'm working my first job, this impulse is stronger than ever. I find myself reading papers/books on philosophy, anthropic mechanism, AI, etc. during what free time I have. I suspect I should pursue a PhD in this subject, given that this impulse doesn't seem to be going away.
However, I have absolutely no research experience and had little contact with professors during my undergrad. Would you advise that I seriously pursue this intellectual interest as a PhD (versus during my free time)? If so, do you have any thoughts on how I should go about applying? Given that most applications require research recommendations, I was thinking of contacting the professors whose papers I've admired, but I'm not sure how well that approach would work.
Thank you for reading! My email is in my profile if that works better.
A PhD is a formal license to do research and it marks the start (not the end) of a lifetime of research. You need such a license if you plan to work at a company with a rigid corporate ladder or in academia.
The only additional reason to get a PhD besides the license is an increased probability of being in contact with peers you can collaborate with. People often undervalue this, but the benefits of having at least one research collaborator are empirically pretty clear.
If you actually do decide to go for a PhD, you're going to need at least one strong recommendation that speaks to your research ability if you want to get into a top program. Your undergrad institution and GPA put you in the running, to be sure, but admissions committees are looking for evidence that you can do research. Recommendations that say "this kid got an A in my class and is a good student" don't really move your application either way.
I don't want to give a "don't do it" answer, but I would say that it's difficult, so it's only worth trying to negotiate a PhD, academic publishing, where you fit into a discipline, etc., if you're really committed to a research career. It also depends on what exactly you'd want to study; a lot hinges on finding a supportive advisor willing to supervise the kind of thesis you want to work on. That depends not only on the style of work but also on the specific domain: you're going to get a totally different set of candidate advisors if you're interested in, say, interactive entertainment (there are sub-fields of game AI, AI-for-narrative, etc.), robotics (also its own subfield), or something to do with human-computer interaction (something vaguely in HCI, CSCW, etc.).
"Big AI" isn't very much in favor currently, partly for good reasons and partly for bad ones. There's a strong worry about being insufficiently rigorous, or too philosophical, vague, or even sci-fi. Academic AI probably overcorrects out of a fear of being seen as crazy singularity-mongers, and there's also a legacy of having over-promised big-AI results in the '50s and '60s. Most funding also goes to more concrete technical projects, though there is a subset of people doing funded research on artificial creativity and creativity support (Margaret Boden and Gerhard Fischer are two entry points into that literature).
So most research tends to be much narrower, investigating specific empirical or mathematical questions: whether a particular reinforcement-learning algorithm converges, how to improve an object-tracking algorithm, or something of that sort (see the sketch below for a sense of scale). Even in Cog Sci departments, theses tend to be more specific, like an eye-tracking study that investigates some question about perception. To the extent the "bigger picture" stuff gets done at all, there's a feeling that it's a late-career thing people like Hofstadter can get away with; it's harder to do as a PhD thesis.
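To make the contrast concrete, here's a toy sketch (mine, purely illustrative, not anything from a real thesis) of what one of those narrow empirical questions looks like in code: does tabular Q-learning settle down on a tiny deterministic chain world? You just watch the largest per-episode update to the Q-table shrink toward zero.

    # Toy empirical convergence check for tabular Q-learning on a 5-state chain.
    # Illustrative only: real theses ask this with proofs or at far larger scale.
    import random

    N_STATES = 5              # states 0..4; reaching state 4 ends the episode
    ACTIONS = [0, 1]          # 0 = step left, 1 = step right
    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

    Q = [[0.0, 0.0] for _ in range(N_STATES)]

    def greedy(s):
        # break ties randomly so the untrained agent doesn't get stuck
        best = max(Q[s])
        return random.choice([a for a in ACTIONS if Q[s][a] == best])

    def step(s, a):
        # deterministic chain: reward 1 only on reaching the terminal state
        s2 = max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)
        return s2, (1.0 if s2 == N_STATES - 1 else 0.0)

    for episode in range(200):
        s, biggest_update = 0, 0.0
        while s != N_STATES - 1:
            a = random.choice(ACTIONS) if random.random() < EPSILON else greedy(s)
            s2, r = step(s, a)
            done = (s2 == N_STATES - 1)
            target = r + (0.0 if done else GAMMA * max(Q[s2]))
            biggest_update = max(biggest_update, abs(ALPHA * (target - Q[s][a])))
            Q[s][a] += ALPHA * (target - Q[s][a])
            s = s2
        if episode % 50 == 0:
            print(f"episode {episode:3d}: largest Q update = {biggest_update:.5f}")

Whole dissertations get written on when updates like these provably converge (especially once function approximation is involved); that's the granularity of question that's fundable, versus "what is a mind?"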
Not sure that actually answered your question, but the short version is: it's hard to get into a position where you can study the kind of stuff discussed in GEB, but if you can think of more specific technical questions on the peripheries of your big-picture interests, it may be more doable.
Or you can approach it from the philosophy side; one of the profs in my grad program held a joint appointment in philosophy (logic) and computer science.