No, they just require infinite memory XOR GC.
Pure functional programming has no concept of alloc/dealloc, let alone the concept that binding/assignment can fail. These are real. To quote James Mickens:
>Pointers are real. They’re what the hardware understands. Somebody has to deal with them. You can’t just place a LISP book on top of an x86 chip and hope that the hardware learns about lambda calculus by osmosis. Denying the existence of pointers is like living in ancient Greece and denying the existence of Krakens and then being confused about why none of your ships ever make it to Morocco, or Ur-Morocco, or whatever Morocco was called back then. Pointers are like Krakens—real, living things that must be dealt with so that polite society can exist.
Infinite memory simply means more memory than the program can ever consume... But the halting problem exists, so you can't actually know how much memory your program will consume :P
PreScheme: formally verified, along with a Scheme48 interpreter, as part of the VLISP project.
ATS has no garbage collector that I'm aware of. I've seen it used in device drivers and on 8-bit MCUs.
LinearML is GC-less, functional, parallel programming.
So, functional languages require neither GCs nor infinite memory. Two of them also combined low-level efficiency with easier formal verification than C programs. So that's a higher mapping of idea to code, high efficiency, and better safety all at once.
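To illustrate the GC-less point, here's a minimal sketch in Rust (mentioned elsewhere in this thread), whose ownership rules are an affine-type discipline in the same family as LinearML's linear types; this is my own example, not taken from any of the projects above:

```rust
// Sketch: ownership (an affine-type discipline, a cousin of LinearML's
// linear types) lets the compiler decide statically where to free heap
// memory -- no GC, no manual free(), no infinite memory.
fn consume(v: Vec<i32>) -> i32 {
    v.iter().sum()
    // `v` is dropped here: the allocation is freed at a point the
    // compiler determined at compile time, not by a collector.
}

fn main() {
    let v = vec![1, 2, 3];
    let total = consume(v); // ownership moves into `consume`
    // Using `v` after this line would be a compile error, which is
    // exactly how the compiler knows `consume` may free it.
    println!("sum = {}", total);
}
```

The design point is that the "when to deallocate" question is answered by the type system at compile time, which is how these languages sidestep both GC and unbounded memory growth.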
"You can’t just place a LISP book on top of an x86 chip"
I believe I just did with PreScheme. Microsoft goes further to straight-up use a theorem prover to do x86 coding.
Also, if all these things are so amazing, why isn't anyone using them?
"Also if all these things are amazing why aren't anyway using them?"
Social and economic factors, as usual. See Gabriel's essay "Worse is Better".
Just take the C language. I have its history in detail and with citations: it was literally an engineered language chopped down to run on bad hardware, chopped again with arbitrary alterations for bad hardware, and slightly extended for bad hardware again. Most people had bad hardware; it worked well on that. It spread like a virus with gradual improvements, while still nowhere near what engineered languages can pull off across the various tradeoffs. Yet almost everything is written in it now, thanks to it working on half-assed hardware, a MULTICS chop called UNIX doing the same, UNIX being distributed freely, and UNIX being written in C. Social and economic factors spread it like a virus, then gradually improved it to approximate solutions that had been designed under the cathedral model with better properties.
Meanwhile, alternatives sprang up that kicked both their butts in capabilities. The LISP machines, functional programming's answer to whole systems, had a flow and consistency you still can't match with modern stacks. The B5000, a HW/OS combo designed for safe languages, would've given hackers hell. The Amiga combined SW and HW offloaders for excellent performance... like today's servers and game consoles. BeOS screamed in concurrency, multimedia, and ease of use while the popular Windows and Mac boxes couldn't do but a fraction of it. AS/400s and VMS boxes ran, ran, and then ran some more, with at least one admin forgetting how to reboot them haha. Some high-security systems survived NSA pentesting, while things that get easily smashed by amateurs prevail today for security-critical work. I think it's clear a language or system's technical superiority has almost no causal relationship with mainstream adoption, whether by laypeople or technical people.
Actually and sadly, it's usually better to bet on Worse is Better approaches with slight improvements for success or adoption. Occasionally, you can bet on The Right Thing and win, as Mozilla is doing with Rust. Heck, even the Burroughs B5000 lives on in Unisys mainframes. OpenVMS lives on in the Windows NT family, which cloned it for desktops minus the strong focus on quality (sighs). ZeroMQ is a good example in middleware. Nix applies database-style principles to package management. Technically superior stuff occasionally goes mainstream, but not often. Human nature usually wins. :(
int (*pg)(int) = f();  /* f() returns a function pointer, stored in pg */
int x = h() + 2*z;     /* plain arithmetic on previously assigned state */
int w = pg(y);         /* indirect call through the stored pointer */
/* ... etc. */
There could be some higher-level (though nonetheless quite primitive) assignment-free language for specifying tasks for the 1000 cores of this chip, instead of programming them in assembly.
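As a sketch of what single-assignment style looks like, here's the C fragment above recast in Rust, where every name is bound exactly once and never mutated; `f`, `h`, `z`, and `y` are hypothetical stand-ins, just as in the C version:

```rust
// Hypothetical stand-ins mirroring the C snippet's f() and h().
fn f() -> fn(i32) -> i32 { |n| n + 1 }
fn h() -> i32 { 10 }

fn main() {
    let z = 3;
    let y = 5;
    let pg = f();        // bound once, never reassigned
    let x = h() + 2 * z; // a pure expression, not a mutation of x
    let w = pg(y);       // same indirect call, still no reassignment
    println!("x = {}, w = {}", x, w);
}
```

Because no binding is ever overwritten, each `let` is just a name for an expression's value, which is the property that makes such code easy to parallelize across many cores.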
Yes. Just an infinite amount of stack. Why? See my previous post.
If we pretend for a minute that we live in the real world... oh, guess what: your stack can still overflow, and assignment/binding can still fail.
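On binding/allocation failure being real: even in a memory-safe language the allocator can refuse you. A minimal sketch using Rust's `Vec::try_reserve`, which surfaces that failure as a value instead of aborting (the 1 MiB figure here is arbitrary):

```rust
// Sketch: allocation is fallible. try_reserve reports failure as a
// Result rather than crashing the process, so the binding visibly
// either succeeds or fails.
fn reserve_mib(mib: usize) -> Result<Vec<u8>, String> {
    let mut v: Vec<u8> = Vec::new();
    v.try_reserve(mib << 20).map_err(|e| e.to_string())?;
    Ok(v)
}

fn main() {
    match reserve_mib(1) {
        Ok(v) => println!("reserved {} bytes", v.capacity()),
        Err(e) => println!("allocation failed: {}", e),
    }
}
```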
Also, in a pure stack-based language you can't perform multithreading. If you do... well, now you're building a whole OoO superscalar functional VM on top of a physical processor just to avoid doing GC.
About multithreading: this language describes a computation done by each of up to 1000 nodes. Those implicitly run in parallel.