It's a bit weird to see a seasoned programmer harboring those views. I remember being equally confused when learning OOP: "I can do all that with functions and structures already." And indeed you can; there is nothing you technically can't do without OOP.
OOP is not a programming feature, it is a software engineering feature. It allows for cleaner APIs and higher levels of abstraction. IMO the core feature of OOP is not so much the ability to bind functions to data as the ability to overload operators.
Sure, you can have functions that take a "this" pointer as the first argument and get exactly the same thing. You can add a bunch of flags and get the core features of inheritance.
You can. Now, which would you rather write when finding the midpoint of a 3D segment?
mid = (a + b)/2
or
mid = scalar_division(vector_add(a,b),2)
?
If you allow operators to be overloaded, is there any good reason not to place them close to your structure definition? Isn't it good practice to ensure these functions are defined whenever your structure is?
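A minimal sketch of what that looks like when the operators live next to the structure (a hypothetical Vec3 type; Python used only for brevity):

    # Hypothetical 3D vector type, just to illustrate the point above.
    from dataclasses import dataclass

    @dataclass
    class Vec3:
        x: float
        y: float
        z: float

        # The operators live right next to the structure definition.
        def __add__(self, other: "Vec3") -> "Vec3":
            return Vec3(self.x + other.x, self.y + other.y, self.z + other.z)

        def __truediv__(self, k: float) -> "Vec3":
            return Vec3(self.x / k, self.y / k, self.z / k)

    a = Vec3(0.0, 0.0, 0.0)
    b = Vec3(2.0, 4.0, 6.0)
    mid = (a + b) / 2   # reads like the math
    # vs. scalar_division(vector_add(a, b), 2) in the free-function style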
The core idea of OOP is that it extends the language by letting you build more abstract concepts on top of lower-level abstractions.
That's a core misunderstanding I keep seeing with low-level programmers. They want to see the implementation details of everything and refuse to hide some behavior and trust the library/compiler makers.
People who spend more time on higher-level algorithms can't be bothered with all the implementation details. When I do DL, I want the ability to concatenate layers of different types (that inherit from the same generic type), check their output(), manipulate tensors of floats, assign them a scalar value, or multiply them by one.
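To make that concrete, here is a small hypothetical sketch of that kind of abstraction (Scale and Bias are made-up layer types, not a real DL framework):

    # Layers of different concrete types share a generic base, so they can be
    # chained without caring about each other's implementation details.
    from abc import ABC, abstractmethod

    class Layer(ABC):
        @abstractmethod
        def output(self, x: list[float]) -> list[float]: ...

    class Scale(Layer):
        def __init__(self, factor: float) -> None:
            self.factor = factor

        def output(self, x: list[float]) -> list[float]:
            return [v * self.factor for v in x]

    class Bias(Layer):
        def __init__(self, offset: float) -> None:
            self.offset = offset

        def output(self, x: list[float]) -> list[float]:
            return [v + self.offset for v in x]

    def run(layers: list[Layer], x: list[float]) -> list[float]:
        # Each layer only needs to honour the Layer interface.
        for layer in layers:
            x = layer.output(x)
        return x

    print(run([Scale(2.0), Bias(1.0)], [1.0, 2.0, 3.0]))  # [3.0, 5.0, 7.0]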
You can go a very long way without using any kind of OOP, staying at the implementation level. I am actually in awe of how much one can do that way. But OOP is a tool that lets teams work together without knowing the details of each other's parts and build increasingly complex abstractions.
The problem with placing operator overloads within the definition of one object is: which object do you look to when adding two different types, where the assumption is that TypeA.+(TypeB) is the same as TypeB.+(TypeA)? You could put them in an inheritance hierarchy, but that might unnecessarily complicate your type classification. Okay, so you use some sort of interface, perhaps with a default implementation. Well, interfaces aren't unique to OOP, but they do add a bit of confusion: when looking for an implementation you might now have three places to look (possibly four if default implementations aren't allowed).
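One way languages answer that dispatch question, sketched here in Python with made-up Meters/Feet types: the left operand can decline and the right operand gets a second chance, which is exactly why a reader may have more than one place to look.

    # Python's reflected-operator protocol: if TypeA.__add__ returns
    # NotImplemented, Python then tries TypeB.__radd__.
    class Meters:
        def __init__(self, value: float) -> None:
            self.value = value

        def __add__(self, other):
            if isinstance(other, Meters):
                return Meters(self.value + other.value)
            return NotImplemented          # let the other operand try

    class Feet:
        def __init__(self, value: float) -> None:
            self.value = value

        def __radd__(self, other):
            if isinstance(other, Meters):  # handles Meters + Feet
                return Meters(other.value + self.value * 0.3048)
            return NotImplemented

    total = Meters(1.0) + Feet(10.0)       # dispatches to Feet.__radd__
    print(total.value)                     # 4.048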
Really, there's an easy conclusion to this: allowing and preferring user-defined operators or operator overloads is a hotly debated topic with no clear right answer. About the only conclusion you can draw is that operator overloading is not exclusive to OOP: https://softwareengineering.stackexchange.com/questions/1809...
I'd also suggest that with functional programs there are limits to how much you should cram into one program, and that a smart way to modularize your code is, for example, to follow the Redux reducer pattern and compose larger state objects and operations from functions that each operate on just part of the state object. This isolates writes much the way private OOP variables do: it's simply not expected (or even "in scope") to modify other parts of the global state. Additionally, as in OOP, you can control access to state by encouraging "selector functions" rather than direct state access, or by making your functions take smaller typed structs of state.

Really, functional programming is at least as expressive as OOP. Both allow you to make mistakes like mixing concerns, using lots of globals, performing magic with metaprogramming or overloading, or not being expressive enough to build your own DSL in. OOP or functional, these concerns are in my experience shared by basically any language.
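A rough sketch of that reducer-style composition (names are made up; this is not the Redux API, just the same idea expressed in Python):

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class State:
        count: int = 0
        name: str = ""

    # Each slice reducer only sees and returns its own part of the state.
    def count_reducer(count: int, action: dict) -> int:
        return count + 1 if action["type"] == "increment" else count

    def name_reducer(name: str, action: dict) -> str:
        return action["payload"] if action["type"] == "rename" else name

    def root_reducer(state: State, action: dict) -> State:
        # Compose the slice reducers into one state transition.
        return State(
            count=count_reducer(state.count, action),
            name=name_reducer(state.name, action),
        )

    s = root_reducer(State(), {"type": "increment"})
    s = root_reducer(s, {"type": "rename", "payload": "demo"})
    print(s)  # State(count=1, name='demo')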
Also, outside of numerical work and some set operations, there aren't really that many times you actually need to overload operators. And you can easily extend languages without objects (Lisp).