He has a valid point. There are certain types of workloads which don't scale well on a single machine when writing pure python code. If your workload happens to need more cpu performance out of a single machine in pure python and isn't easy to parallelize using processes, python's not the best choice. Personally I feel like go is good for this use case because I consider writing my own object lifecycle code a pain. Others will only use multithreaded c code, something I'm not eager to touch.
I do however feel the need to make a couple clarifications:
If you fork a process, you don't necessarily duplicate memory. Yay for COW. Fork twice and you've got fanout with very little extra memory. Threading works fantastically for I/O. C extensions can also release the GIL during processing, so if you have to do something computationally expensive in C, other python threads can keep running in the meantime.
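A minimal sketch of the fork fanout idea (Unix only; the buffer size and worker count are arbitrary for illustration): after `os.fork()` the child shares the parent's pages copy-on-write, so the big buffer isn't physically duplicated up front.

```python
import os

# A large buffer allocated before forking; children share its pages
# copy-on-write rather than getting a full duplicate.
big = bytearray(10 * 1024 * 1024)  # ~10 MiB, size arbitrary

children = []
for _ in range(2):  # fork twice -> two workers, little extra memory
    pid = os.fork()
    if pid == 0:
        # Child: do some work against the shared buffer, then exit.
        total = big[0] + big[-1]
        os._exit(0)
    children.append(pid)

# Parent: reap both children and collect their exit codes.
exit_codes = [os.waitstatus_to_exitcode(os.waitpid(pid, 0)[1])
              for pid in children]
```

Whether the pages actually stay shared in practice is another matter with cpython, as noted further down.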
I rarely feel restricted by python, and when I do I usually find that other aspects make up for it. I find it makes a decent systems programming language (although not as good as C). The os library and the ctypes library give you the capability to do most (all?) system level tasks.
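As a small example of the kind of system-level reach `os` and `ctypes` give you (Unix assumed here, and it assumes a standard libc that `ctypes.util.find_library` can locate): calling straight into libc and cross-checking it against the stdlib.

```python
import ctypes
import ctypes.util
import os

# Load the C library and call getpid(2) directly.
libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
libc.getpid.restype = ctypes.c_int

# The raw libc call and the os module wrapper should agree.
pid_via_libc = libc.getpid()
pid_via_os = os.getpid()
```

The same pattern extends to libc calls the `os` module doesn't wrap at all.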
I tend to prefer go over python when library support isn't an issue, but that's more due to its type system and channel primitives. As someone who's pushed python to its limits as a systems programming language, I'm pretty comfortable saying that dropping down to C isn't often called for, and writing in java/scala isn't ever required.
The COW semantics of fork aren't that useful with python because of reference counting (at least in the cpython implementation, where the reference count lives inside the object's memory representation, so even just reading an object writes to its page and forces a copy). It may work much better with pypy (which uses a 'real' GC but still has a GIL).
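You can see why in a few lines: in cpython, merely taking a reference to an object mutates its header in place, which is exactly the kind of write that dirties a COW page in a forked child.

```python
import sys

# The refcount is stored in the object itself, so binding a new name
# to an existing object writes to that object's memory page.
obj = object()
before = sys.getrefcount(obj)  # note: getrefcount's own argument adds 1
ref = obj                      # a "read-only" use still bumps the count
after = sys.getrefcount(obj)
```

(For what it's worth, cpython grew `gc.freeze()` partly to mitigate this for prefork servers, though it doesn't stop refcount writes themselves.)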