What would be obsolete? The underlying operations are almost always just lots of matrix multiplies on lots of memory. Releasing a new set of weights doesn’t somehow change the math being done.
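To make that concrete, here is a minimal sketch (plain NumPy, single-head attention, hypothetical names and shapes) of one transformer block's forward pass. Essentially every FLOP is a matrix multiply; new weights change the numbers flowing through, not the operations themselves.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def transformer_block(x, Wq, Wk, Wv, Wo, W1, W2):
    # x: (seq_len, d_model); W*: weight matrices loaded from a checkpoint.
    q, k, v = x @ Wq, x @ Wk, x @ Wv            # three matmuls (projections)
    scores = (q @ k.T) / np.sqrt(q.shape[-1])   # matmul (attention scores)
    attn = softmax(scores) @ v                  # matmul (weighted values)
    x = x + attn @ Wo                           # matmul (output projection)
    h = np.maximum(x @ W1, 0.0)                 # matmul + cheap elementwise ReLU
    return x + h @ W2                           # matmul (MLP down-projection)
```

Everything outside the matmuls (softmax, ReLU, residual adds) is a rounding error in compute and memory traffic by comparison.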
To the extent that AI architectures are just matrix multiplications, the ASIC for that already exists: the GPU.
If you want more efficiency than general-purpose matrix-multiplication hardware can deliver, you have to start getting specific about which NN architectures the hardware will support.