Yes. You can simulate a 53-qubit quantum computer with a 2^53-entry complex-valued vector as input (roughly 144 petabytes at double precision) and a 2^53 x 2^53 complex-valued unitary matrix, which would take on the order of 10^33 bytes (about a quadrillion exabytes) to represent exactly. However, that is for a generic quantum operation on all 53 qubits; some programs can be represented significantly more compactly.
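The storage figures are easy to reproduce. A quick sketch, assuming 16 bytes per complex amplitude (double precision; the function names here are just for illustration):

```python
# Memory needed to store a dense n-qubit state vector and unitary,
# assuming 16 bytes per complex number (double-precision real + imag).
BYTES_PER_AMPLITUDE = 16

def state_vector_bytes(n_qubits: int) -> int:
    # 2^n amplitudes in the state vector
    return (2 ** n_qubits) * BYTES_PER_AMPLITUDE

def unitary_bytes(n_qubits: int) -> int:
    # 2^n x 2^n entries in a general unitary
    return (2 ** n_qubits) ** 2 * BYTES_PER_AMPLITUDE

print(f"{state_vector_bytes(53) / 1e15:.0f} PB")  # state vector: 144 PB
print(f"{unitary_bytes(53) / 1e18:.2e} EB")       # full unitary: ~1.3e15 EB
```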
> I assume you can get that experience with low-level Tensorflow and a GPU for parallel computing.
No. Most GPUs have on the order of 12-16 gigabytes of onboard memory and would not be able to hold the matrix. Also, a GPU does not reduce the computational complexity: applying a general n x n unitary to a state vector is a matrix-vector product costing on the order of n^2 operations, with n = 2^53 here. (Sub-cubic algorithms such as Coppersmith-Winograd's O(n^2.37) apply to matrix-matrix multiplication, not to this step.) Though again, this is for a general unitary transformation; reductions can sometimes be made, which is how IBM was able to validate the results computed by Google.
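One standard reduction: a gate acting on a single qubit never needs the full 2^n x 2^n matrix. Reshaping the state vector to one axis per qubit and contracting the 2x2 gate against the target axis does the same update in O(2^n) work instead of O(4^n). A minimal sketch of that trick (the function name `apply_1q_gate` is mine, not a library API):

```python
import numpy as np

def apply_1q_gate(state, gate, target, n_qubits):
    # Reshape the 2^n vector into an n-dimensional tensor, one axis per qubit.
    psi = state.reshape([2] * n_qubits)
    # Contract the 2x2 gate with the target qubit's axis only.
    psi = np.tensordot(gate, psi, axes=([1], [target]))
    # tensordot moves the contracted axis to the front; restore the order.
    psi = np.moveaxis(psi, 0, target)
    return psi.reshape(-1)

n = 20  # 2^20 complex amplitudes ~ 16 MB: easily fits in memory
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0                                  # |00...0>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # Hadamard gate
state = apply_1q_gate(state, H, target=0, n_qubits=n)
```

After this, the amplitude is split equally between |00...0> and |10...0>, exactly as a full-matrix simulation would give, without ever materializing a 2^20 x 2^20 operator.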
However, yes, this does mean that you can simulate a smaller quantum computer using just linear algebra and no particularly fancy tricks.
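For a small enough machine the naive approach works directly. A self-contained sketch that prepares a 2-qubit Bell state with nothing but numpy's linear algebra (np.kron builds the full 4x4 operators):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],                 # controlled-NOT on 2 qubits
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.zeros(4)
state[0] = 1.0                                  # start in |00>
state = np.kron(H, I) @ state                   # Hadamard on the first qubit
state = CNOT @ state                            # entangle the two qubits
print(state)                                    # ~ [0.707, 0, 0, 0.707]
```

The result is (|00> + |11>)/sqrt(2): equal amplitude on 00 and 11, none on 01 or 10. Every circuit on k qubits is, in principle, just a sequence of such 2^k x 2^k matrix-vector products; the only thing that stops you is the exponential growth in k.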