ChatGPT 3.5 is the base level people expect LLMs to be at; it would take 2-3 generations (3-4 years) of hardware before we can reach that. Anything below that is just going to get bad reviews.
Is 512GB a typo? The current biggest consumer card has 24GB, so we're probably 15+ years from a 512GB card (judging from the increase from 4GB to 24GB between 2012 and 2022).
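A back-of-envelope check on that extrapolation (a sketch; it assumes consumer VRAM keeps compounding at the 2012-2022 rate, which is far from guaranteed):

```python
import math

# Consumer VRAM grew from ~4 GB (2012) to ~24 GB (2022).
annual_growth = (24 / 4) ** (1 / 10)   # ~1.196x, i.e. ~20% per year

# Years to go from 24 GB to 512 GB at that historical rate.
years_to_512gb = math.log(512 / 24) / math.log(annual_growth)
print(f"{annual_growth:.3f}x per year, ~{years_to_512gb:.0f} years to 512 GB")
# ~17 years at the historical rate
```

So the 15-year guess is in the right ballpark; the same compounding gives closer to 17 years.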
I doubt it, to be honest: desktop GPUs use too much power (and hence produce too much heat) to be integrated in that fashion, and any kind of shared memory would be too high-latency.
There are 'desktop' (well, server) CPUs with 64GB of HBM per socket now. And big LLMs can be run on systems with lower memory bandwidth (like Zen 4 chips with 12x DDR5 channels per socket) at lower performance, but where 1-2TB of RAM is no big deal.
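Rough numbers on that bandwidth trade-off (a sketch; the 70B model, 4-bit quantization, and DDR5-4800 speed are illustrative assumptions, and real throughput lands below this ceiling):

```python
# Token generation is roughly memory-bandwidth bound: each generated token
# reads every weight once, so tokens/s <= bandwidth / model_size_in_bytes.

channels = 12                # e.g. a 12-channel DDR5 server socket
ddr5_mts = 4800              # MT/s, illustrative DDR5 speed
bandwidth_gbs = channels * 8 * ddr5_mts / 1000   # 8 bytes per transfer

params = 70e9                # a hypothetical 70B-parameter model
bytes_per_param = 0.5        # 4-bit quantization
model_gb = params * bytes_per_param / 1e9

print(f"{bandwidth_gbs:.0f} GB/s peak, ~{bandwidth_gbs / model_gb:.0f} tokens/s ceiling")
# ~461 GB/s peak and a ceiling around 13 tokens/s
```

That's slow next to a GPU, but usable, and the socket can hold far more RAM than any consumer card holds VRAM.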
But for what applications? Sure, for answering free-form questions I expect GPT-3.5+ quality, but I don't think GPT-3.5 is necessary to provide auto-complete in your email client.