It never did. A concurrent program has always meant a decomposition of a program into parts that can be executed out-of-order while still producing the same output. Whether the scheduling happens on a single-core CPU or across many cores is irrelevant to the theory.
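To make that concrete, here is a minimal sketch (in Python, with made-up names) of a decomposition whose parts can run in any order yet always yield the same result, which is exactly what makes it concurrent:

```python
# Summing a list split into two independent halves. The partial sums
# can run in either order, or interleaved, and the final result is
# the same. That order-independence is the point; whether one core
# time-slices the threads or two cores run them in parallel does not
# change the output.
import threading

data = list(range(100))
results = [0, 0]

def partial_sum(idx, chunk):
    # Each thread writes to its own slot, so the parts don't interfere.
    results[idx] = sum(chunk)

threads = [
    threading.Thread(target=partial_sum, args=(0, data[:50])),
    threading.Thread(target=partial_sum, args=(1, data[50:])),
]
for t in threads:
    t.start()
for t in threads:
    t.join()

total = results[0] + results[1]
print(total)  # 4950, regardless of how the scheduler ordered the threads
```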
The words "concurrency", "multi-threaded", and "parallel" each have different meanings. You can have a program that is multi-threaded but which does not maintain concurrency, i.e. its output depends on race conditions. This is usually, but not always, a bug.
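A quick sketch of that failure mode, multi-threaded but not concurrent, since the output depends on how the threads interleave (hypothetical names; how often the race actually fires varies by interpreter and platform):

```python
# `counter += 1` is a read-modify-write: load, add, store. Two threads
# can load the same old value, and one increment is then lost. The
# program is multi-threaded, but because its output depends on thread
# interleaving, it is not concurrent in the sense defined above.
# (Under CPython's GIL the lost updates may be rare; raising the
# iteration count or running repeatedly makes them more likely.)
import threading

counter = 0

def bump(n):
    global counter
    for _ in range(n):
        counter += 1  # not atomic

threads = [threading.Thread(target=bump, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # at most 400000; less whenever an update was lost
```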
Your confusion stems from the fact that most people don't care about the distinction. You usually don't care about concurrency unless you actually plan to run things in an unordered manner. So when the theory says "concurrency", you think "parallel". Technically speaking you're wrong; practically speaking, the mistake usually won't matter.
As a final example, consider a single-core computer running Windows with a myriad of services and programs across multiple processes. At any one time only a single process can run on that one available core, but in practice they all run "at the same time", because the system implements a model of concurrency that lets it schedule and execute the different processes out-of-order.