LabVIEW has subVIs, which are effectively no different from subroutines or methods in other languages. It has dynamic dispatch, so it can run code with heterogeneous ancestry. You can launch code asynchronously in the background, which isn't even necessary to achieve multi-threaded execution in LabVIEW (though there are plenty of other gotchas for those used to manual control of threading, along with a couple of sticky single-threaded processes that can get in your way when trying to write high-level reusable code). You can even implement by-reference architectures, adding yet another way to break out of the 2D-ness of its diagrams. Perhaps a new development for most here is that LabVIEW is now completely free for personal use (non-commercial and non-academic). Still, as some have pointed out, LabVIEW really shines with its hardware integration. It's the Apple of the physical I/O world. The only reason I avoid it for anything larger than small projects is that it requires its not-tiny run-time engine, which isn't any different from .NET distributables, just more... niche?
With text you get top-to-bottom lines of text (a single dimension), and any additional dimensionality has to be conceptualized entirely in your mind... or in design tools like UML, which display relations in a 2D manner. SQL design tools these days provide 2D, graph-like visualizations to relate all the linkages between tables. User stories, process flows, and state diagrams are (or at least should be) mapped out in 2D in a design document before putting down code. How do the execution order of functions and the usage of variables provide any more freedom?
All I want to establish is that LabVIEW is another tool in the toolbox. People used to text are used to SEEING a single dimension and thinking about all the others in their head or outside the language. LabVIEW places two dimensions in front of you, which changes how you can (and have to) think about the other dimensions of the software. With skilled architecture, a LabVIEW application will already resemble a UML, flow, or state chart. I do agree that some things that feel much simpler in text languages, such as calculations, are much more of a bear in LabVIEW; tasks that are inherently single-dimensional in their text expression suddenly fan out into something resembling the CMOS gate-logic traffic-light circuit I made at uni.
I do embedded uC development in C/C++, I do industrial control systems and automated test in LabVIEW, and I even subject myself to the iron maiden of kludging together hundreds of libraries, known as configuration-file editing with a smattering of glue logic, AKA modern web development (only partially sarcastic; if I never have to look at a webpack config file again I'll die happy). I (obviously by now) have the inverse view of most in this thread. For simple stuff I use C#. For microcontroller-based projects I use C/C++. For larger projects I'll use LabVIEW.
Then, when something has to run in a browser, I stick my head in the toilet and smash the seat down against my head repeatedly. Then I'll start Googling for the 30 tabs I'll need to open to relearn how to set up an environment, figure out which packages are available for what I'm trying to do, learn how to interact with the security model of the backend framework I'm using, learn the de facto templating engine for said framework, decide which of the 4 available database-relation packages I want to use for said backend, and spend a week starting over because I realize one of the packages I based the architecture around was super easy to start with but is out of date, has stale documentation, and conflicts with that other, newer library I was planning on using for some other feature... Now I need a cold shower and a drink.
P.S. I do find a lot of modern web development fun, but the mental load on top of all my other projects and professional work can be a bit much. I'm sure someone who started out in webdev has the same exact vomitous reaction to something like LabVIEW.