
There is some middle ground between data-oriented design and OOP: just organize your objects in such a way that:

a) objects of the same type occupy contiguous blocks in memory,

b) messages are passed first to all objects of one type, then to all objects of another type, etc.

In this way, you don't lose the advantages of encapsulation, inheritance, polymorphism, etc., but you also don't sacrifice much cache locality.

OOP does not enforce a 'random' memory access order; you can very well organize your objects in such a way that speed is not sacrificed much.
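
For example, a rough sketch of what I mean (Particle and its fields are just illustrative):

  #include <vector>

  class Particle {
  public:
      void integrate(float dt) { x += vx * dt; y += vy * dt; }
  private:
      float x = 0, y = 0, vx = 0, vy = 0;
  };

  std::vector<Particle> particles;   // (a) one type, one contiguous block

  void update_all(float dt) {
      for (auto& p : particles) p.integrate(dt);   // (b) one kind of message to the whole batch
  }

The objects keep their private state and methods; you just make sure they are stored and processed in batches.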




This is kinda what Entity Component Systems do - they implement an in-memory relational database for game objects, handle dependencies and allow your game logic code to run efficiently over them while still keeping the pretense of OOP :)

Why pretense? Because behaviors (Systems in ECS terms) are completely separated from data (Components) and data for different game objects (Entities) is kept together in regular or sparse arrays.

Encapsulation is nowhere to be seen; code is written to specify the components it depends on and runs over these arrays.

ECS is very fashionable in gamedev lately as it allows for efficient multithreading, explicit dependencies for each subsystem, cache locality and trivial (de)serialization. Used together with handles (tagged indexes instead of direct pointers) it reduces the likelihood of dangling pointers and other memory management bugs.
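
For anyone unfamiliar, a bare-bones sketch of the idea (component names are made up, dense arrays for simplicity):

  #include <cstddef>
  #include <vector>

  struct Position { float x, y; };     // a Component: plain data
  struct Velocity { float dx, dy; };   // another Component

  // components for Entity i live at index i of each array
  std::vector<Position> positions;
  std::vector<Velocity> velocities;

  // a System: a plain function whose data dependencies are the arrays it touches
  void movement_system(float dt) {
      for (std::size_t i = 0; i < positions.size(); ++i) {
          positions[i].x += velocities[i].dx * dt;
          positions[i].y += velocities[i].dy * dt;
      }
  }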


ECS is standard for enterprise web apps as well


I have seen some enterprise web apps, but they never used ECS. Can you please share more details about your experience?


May be referring to the common 3 layer architectures (see Fowler's PoEAA) which map closely to ECS:

Top layer is for "frontend", whatever that means for the product (UI, sound, simulation, etc.), the stuff with side effects. "Systems".

Middle layer is purely functional, for business/domain logic AKA utility functions. The most liquid layer, but should not be confused as trivial.

Bottom layer is where state (or a way to access & modify it) lives. Data access layer, component layer, etc.


I don't think they really map that closely... But you might be right. Thanks.


I am curious as to what you are referring to. Are you thinking of redux-like architectures?


> objects of the same type occupy contiguous blocks in memory,

Depending on the language, a single object may have a lot of overhead that adds up in an array. What you often see is one ArrayObject with arrays of properties, kind of like a transposition.

A problem there is that in memory the arrays are of course laid out one after the other, which actually destroys cache locality if you need to access more than 1 property inside a loop (it will need to load back and forth to the different property arrays), so it's a somewhat dumb approach. But, at least it saves the overhead, so maybe not too bad. And in a high level interpreted language like php you likely weren't gonna get cache locality anyway.

The point is to group all properties you are going to be accessing in a hot loop together in a small-ish array.

C has structs for this, 0-overhead "entities" (although they may be padded for alignment, so keep that in mind). You have compiler-specific keywords to forego padding ("struct packing"), or maybe you're lucky and the data just fits exactly right. Either way, in such cases an array of structs is imo the most sane way to go.
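
Concretely, the two layouts in play, in C/C++ terms (field names are only for illustration):

  // array of structs: one entity's fields sit next to each other
  struct Enemy { float x, y, hp; };
  Enemy aos[1024];

  // struct of arrays ("transposed"): each field gets its own contiguous array
  struct Enemies {
      float x[1024];
      float y[1024];
      float hp[1024];
  };
  Enemies soa;

  // a hot loop that only needs x streams straight through one array
  void nudge(Enemies& e, float dx) {
      for (int i = 0; i < 1024; ++i) e.x[i] += dx;
  }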

In fact, C++ offers classes and structs. In my opinion, struct should be used for entities like "weapon" or "car". CLASSES (or objects) should be unix-philosophy adhering miniprograms that do one task and do it well (oh hey, it's the single responsibility principle!).

The way most programmers write OOP is a pretty convoluted way to model actual entities anyway. car.drive()? Oh? The car drives itself? No. agent.drive(car) should be the actual method. Agent, mind you, can be a driving AI, or a human driver, or whatever. Maybe the agent is a part of the car? In that case, use composition, not inheritance. (oh hey, entity component system!)


> A problem there is that in memory the arrays are of course laid out one after the other, which actually destroys cache locality if you need to access more than 1 property inside a loop (it will need to load back and forth to the different property arrays), so it's a somewhat dumb approach.

Caches are perfectly capable of dealing with more than one stream of data (there are some very specific edge cases you may have to consider); accessing multiple arrays linearly in a loop is generally more efficient than accessing a single array of structs when you don't use almost all the struct elements.


I've seen a lot of ECS implementations that store components in hash maps, keyed by entity ID. They iterate over one hash map in a linear way which is fast, but then they do a bunch of slow lookups like GP is saying.

In those situations, GP's suggestions are wise.

If you can iterate over arrays in parallel like you say, that's also a good approach.


There are tradeoffs between regular arrays, sparse arrays and hash maps in ECS - it's very similar in concept to storage hints in relational databases, and similarly to relational databases you can add indexes if needed.


> They iterate over one hash map in a linear way which is fast, but then they do a bunch of slow lookups like GP is saying.

Usually all but the most simple "systems" will need to access more than one component, which means you have a choice between a) store component data in regular arrays (and potentially waste huge amounts of space if relatively few entities have those components) or b) store component data in some kind of hash table (and then you lose cache locality for all but the "primary" component of a system).
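
There's also a middle ground worth mentioning: c) a sparse set, which keeps the component data itself packed while lookup by entity stays cheap. A rough sketch (the names and the float payload are made up):

  #include <cstdint>
  #include <vector>

  // sparse set: dense component storage even when few entities have the component
  struct HealthStorage {
      std::vector<uint32_t> sparse;   // indexed by entity id -> slot in dense
      std::vector<uint32_t> dense;    // packed entity ids
      std::vector<float>    health;   // packed component data, parallel to dense

      bool has(uint32_t entity) const {
          return entity < sparse.size()
              && sparse[entity] < dense.size()
              && dense[sparse[entity]] == entity;
      }

      float* get(uint32_t entity) {
          return has(entity) ? &health[sparse[entity]] : nullptr;
      }
  };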


> In fact, C++ offers classes and structs. In my opinion, struct should be used for entities like "weapon" or "car". CLASSES (or objects) should be unix-philosophy adhering miniprograms that do one task and do it well (oh hey, it's the single responsibility principle!).

Please no. The only thing that matters is the language rules; any non-computer-encodable arbitrary rule like this on top of the language rules just causes an additional lava layer.

There is one difference between class and struct, and it's default visibility. Use one or the other according to which causes fewer tokens to appear in your code.


Fair point


OOP doesn't force you to do car.drive().

You can have an abstract agent class/interface with a virtual "drive(Car c)" method. The method would be overridden by AIAgent, HumanAgent etc.

The car itself would have more basic behavior, such as "accelerate()", "turnLeft()", "turnRight()"


> You can have an abstract agent class/interface with a virtual "drive(Car c)" method. The method would be overridden by AIAgent, HumanAgent etc.

Is that a convoluted way of saying "use multiple dispatch", or am I reading it wrong?


I think you are over engineering it. The way I imagine it:

  struct PhysicalObject { float velocity, mass, position; };
  struct Car : PhysicalObject { float throttle, brake, wheel; };

  struct Agent {
      void accelerate(Car& c) { c.throttle += 1; }
      virtual void drive(Car& c) { /* ... */ accelerate(c); /* ... */ }
  };

  struct Human : Agent { /* overrides drive() as needed */ };


An interface is the best way to do a clean job in this case.


Kill me now


> A problem there is that in memory the arrays are of course laid out one after the other, which actually destroys cache locality if you need to access more than 1 property inside a loop (it will need to load back and forth to the different property arrays), so it's a somewhat dumb approach.

This is actually why memory layout != DoD. You need to account for this in the architecture of the program, so that the systems only operate on a small amount of data that are relevant to them at one time.

The tradeoff is between paying for all the data, all the time, and paying for some of the data most of the time. For a large class of programs that can be architected around mostly non-unique, trivially copyable fields with few relations, the tradeoff between AoS and SoA is obvious.

For other programs where your entities need relational information and form trees or graphs, it can be less obvious whether the data representation is going to be faster. However, in these cases you store the relationship as your data (for example, as an adjacency matrix), but implementing any kind of textbook algorithm over it is basically reverse-engineering pointers with indexes.


> agent.drive(car) should be the actual method

That's equally object-oriented so I don't see how OOP is a nonsensical way to model. A sensible model is up to the designer, not OOP.


The reason OOP kills cache locality and multithreading opportunities is walking the object graph depth-first through nested method calls and pointers.

Doesn't matter if it's car.drive() referencing driver through a private pointer or driver.drive(Car c) calling methods on Car through the provided parameter - in both cases you will jump from class Car to Driver and back and then again for the next car and the next driver.

In real life the call stack is rarely 2 levels deep - I've seen stack traces that had hundreds of levels. So your code will jump 100 levels down then 10 levels up then another 10 levels down, and so on, and then finally back up through 100 levels of nesting only to advance to the next top-level object and do the whole ceremony again for each of them :) It boggles the mind when you think about it :)

When the object graph is big enough and doesn't completely fit in cache this slows the code by orders of magnitude each time you jump through the border.

And because dependencies are implicit and execution order is accidental (and the programmer doesn't actually know what other execution orders would be correct) - you cannot easily parallelize that code.

The alternative is to specify the dependencies explicitly, split the data according to functions that use it, not according to metaphysical Classes where it belongs, and walk the data graph in levels - starting from the level that doesn't depend on any other code being run, completing that level first, then going to the level that now has all dependencies satisfied, and so on.

Of course there might be cycles that require special treatment, but at least they are explicit so you won't introduce them unless you actually have to.

The end result is basically "relational programming". In the case of gamedev it's called Entity Component System.
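
To make the "levels" idea concrete, a tiny sketch (the arrays and names are invented for the example):

  #include <cstddef>
  #include <vector>

  // level-by-level instead of walking the object graph: each pass reads only
  // the arrays it depends on and finishes completely before the next pass runs
  void drive_all(const std::vector<float>& inputs,
                 std::vector<float>& throttles,
                 std::vector<float>& speeds,
                 float dt) {
      for (std::size_t i = 0; i < inputs.size(); ++i)
          throttles[i] = inputs[i];                 // level 1: no dependencies
      for (std::size_t i = 0; i < throttles.size(); ++i)
          speeds[i] += throttles[i] * dt;           // level 2: depends only on level 1
  }

Each loop is a flat pass over contiguous data, and the dependency between the two passes is visible right there in the code.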


The point I'm trying to make is that OO design principles are one thing, how these are implemented by various systems or languages is quite another.

OOP does not kill cache locality for the simple reason that these are orthogonal concepts.

> split the data according to functions that use it, not according to metaphysical Classes where it belongs

Well, of course and as mentioned, picking the best model, and thus the best object representation, for the job is paramount. I read "split the data according to functions that use it" as "come up with objects that make the most sense for what you're trying to achieve", not "forget about OO design".


You're right. It's not OOP, but rather the way most tutorials teach it (and most programmers write it).


You might be looking for "Entity - Component - System" design, common in video games. Entities are still virtual-world objects like you might expect, but none of them would dare keep track of something like their position or temperature or whatever. Instead, they register a component with the appropriate system, which keeps all the data colocated for efficient physics and the like.


If we are speaking of C code, it's not quite so bad as it looks to have somewhat fat structs across multiple arrays, since you can fit 64 bytes in a cache line on contemporary desktop CPUs, and that sets your real max-unit-size; the CPU is actively trying to keep the line hot and it does so (in the average case) by speculating that you're going to fetch the next index of the array. Since you have multiple cache lines, you can keep multiple arrays hot at the same time, it's just a matter of keeping it easy to predict fetching behavior by using simple loops that don't jump around...which leads to the pattern parent suggests, of cascading messages or buffers in groups of same type so that you get a few big iterations out of the way, and then a much smaller number of indirected accesses.


If you lose vectorization, you might be losing a 4x, 8x, 16x, ... 32x perf difference by organizing your data in such a way that memory operations and data manipulation can't be vectorized.


But you usually can't achieve vectorization by just changing your data layout; the compiler's auto-vectorization features usually don't work that well. SoA or AoSoA layout for vectorization only becomes important when you begin to explicitly write SIMD code in intrinsics or pure assembly.

And writing SIMD explicitly is quite a hard feat in itself: it's okay when you're accelerating small, simple, and isolated algorithms in hot code paths, but when you're doing much more complex calculations, the time you need to invest to make it work gets out of hand pretty quickly.


> But you usually can't achieve vectorization by just changing your data layout; the compiler's auto-vectorization features usually don't work that well.

Please don't build a straw man.

You (or the compiler) can't achieve vectorization if you have the wrong data layout. Period.

How easy / hard is for you or the compiler to vectorize something depends on the application.

It can "just work", it might require a one line `pragma simd`, it might require you to use portable `std::simd` types by hands, or use SIMD intrinsics, or write assembly manually.

But none of these are options if you have the wrong data layout.
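
A trivial illustration of what the right layout buys you (types and names are just for the example):

  // SoA: unit-stride access over plain float arrays, the kind of loop a
  // compiler, a pragma, or hand-written SIMD can realistically vectorize
  void scale(float* x, const float* s, int n) {
      for (int i = 0; i < n; ++i) x[i] *= s[i];
  }

  // AoS with mixed fields: the floats are strided and padded apart,
  // so the same operation is much harder to vectorize
  struct Particle { float x; int id; char tag[8]; };
  void scale_aos(Particle* p, float s, int n) {
      for (int i = 0; i < n; ++i) p[i].x *= s;
  }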


When you say vectorize, are you referring to loop unrolling? Or SIMD or something?


I have never heard vectorization to refer to anything other than SIMD. Loop unrolling is usually only a useful technique to enable SIMD, as far as I know (at least on modern processors, where branch prediction has greatly decreased the cost of jump instructions).


> as far as I know (at least on modern processors, where branch prediction has greatly decreased the cost of jump instructions).

What about ILP? Can't that benefit from an unrolled loop in some cases? For example if there's a fairly long dependency chain, you might still be able to go through two loop bodies at once instead.
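
For example, something like this, where the unrolled version keeps two independent add chains going (a made-up sketch, ignoring floating-point reassociation):

  // one accumulator: every add waits on the previous one (a serial dependency chain)
  float sum1(const float* a, int n) {
      float s = 0;
      for (int i = 0; i < n; ++i) s += a[i];
      return s;
  }

  // unrolled with two accumulators: the two chains can be in flight simultaneously
  float sum2(const float* a, int n) {
      float s0 = 0, s1 = 0;
      int i = 0;
      for (; i + 1 < n; i += 2) { s0 += a[i]; s1 += a[i + 1]; }
      if (i < n) s0 += a[i];
      return s0 + s1;
  }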


I don't know for sure at all, but I don't think it's impossible that speculative execution could also achieve the same at the processor level.


I don't see how this has anything to do with speculation? In most cases where you care about this you don't have to speculate if all the loop iterations are needed. For example in matrix multiplication all of those iterations will be needed.


What I'm thinking is that the processor has an instruction stream that looks like this:

  loop: 
    instr_1
    instr_2
    ...
    instr_n 
    jcond loop
Now, assuming the loop is not unrolled, it would need to speculate that `jcond loop` will jump to be able to execute 2 copies of instr_1 in parallel - I'm saying that it may be able to do that, though I am by no means sure.


Oh, I see what you mean -- I was talking (and thinking) about the unrolled version so it didn't make sense how speculation could help there. But I imagine that typically the kind of long chains that you might want to do in parallel in a single basic block are perhaps something that wouldn't get executed that far after a branch, if the only purpose is to not waste time after a branch misprediction. Plus from what I understand you'd still be wasting execution units here, just not by idling them but rather by speculating the "I'm done" branch repeatedly.

EDIT: I just found that the idea that I had in my head actually exists and is called "modulo scheduling".


I find in simulation codes that lack of awareness of (a) is an absolute performance killer. Generally, it's better to use a pattern where an object is a container for many things - so don't have a 'Particle' object but a 'Particles' one that stores the properties of particles contiguously. In my old magnetics research area you have at least 8 and more frequently 10+ spatially varying parameters in double precision that you'd potentially need to store per particle/cell.
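
i.e. something along these lines (field names only indicative):

  #include <cstddef>
  #include <vector>

  // a 'Particles' container: each spatially varying parameter stored contiguously
  struct Particles {
      std::vector<double> x, y, z;          // positions
      std::vector<double> mx, my, mz;       // e.g. magnetisation components
      std::vector<double> temperature;

      std::size_t size() const { return x.size(); }
  };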


Quite so. There’s a false equivalence in this article between data and encapsulated state, but if that were so then the flyweight pattern and its ilk couldn’t exist.


Only in C++. Most other OOP languages do not allow controlling allocation that way.

Also, OOP only allows array-of-structs contiguous data. Struct-of-arrays and hybrid forms are usually awkward or impossible. And with everything except maybe C++ and Rust, those "structs" in OOP-land do have quite an overhead compared to C structs.


OOP does not say anything about memory allocation.

OO principles are one thing, what specific languages do is quite another.


There are no real OO principles. Ask ten people and you will get ten different answers. OO is defined by the languages and tools claiming to implement it, and the set of principles derived from those is inconsistent and contradictory.


I think that, while most people can't really articulate this well enough, there is a pretty good common understanding of what style of programming is OO: it's a style of programming where code is quite deeply tied to data, especially modifications of persistent state (encapsulation), and where subtyping is commonly used to model program behavior (interfaces, inheritance, virtual dispatch, polymorphism).

This would mostly contrast with procedural code, where code and data are much more separate (procedures often manipulate and pass around complex data structures), and subtyping is not commonly used for program behavior; instead, flow control is usually explicit (e.g. switch()'ing on an enum value).

It is also commonly contrasted to Functional Programming, where data is also loosely tied to code, with functions often reading (but usually not modifying) deep parts of complex data structures; and where higher order functions and sum types are used to achieve dynamic dispatch.


It's obviously not so.

There are OO principles, which are indeed well-known, and each OO language has its own take on how to implement them.

It's not even needed to use an OO language to follow OO design principles. My day job is pure C and we follow OO principles as much as practical.


You conspicuously don't actually name any OO principles. If you did I'm sure we could find "OO" languages that don't conform to them.

My personal definition of OO has been backed down to directly connecting some concept of "method" to a data structure, and some form of polymorphism of those methods depending on what data structure you pass in to some function/method. You may note this is incredibly weak, but it does have the virtue of usefully distinguishing between two sets of languages, and that those two sets will have real differences in how you program them. Beyond that it's hard to create a definition of OO that has the second property; you may be able to split the world into "languages that implement OO visibility rules (private, protected, public) and those that don't", but you'll fail the second criterion, in that languages that just leave everything public aren't meaningfully different to program in than ones that implement the visibility rules.

I could create several different sets of "OO principles", which wouldn't be mutually exclusive necessarily but certainly would be distinguishable. Especially the distinction between the silly principle that OO objects should somehow reflect real-world entities, which was the major failure in 1980s/1990s OO principles and has, mercifully, all but died in the modern era but most certainly was at one point an "OO principle", and any of the several sets of OO principles I could name that actually function in the real world.


Well, this is HN so I did not want to sound condescending by stating the obvious.

OO Programming 101, OO principles: encapsulation, abstraction, inheritance, polymorphism, SOLID.

Whatever a specific language adheres to and how is beside the point.


That must be some kind of "new OOP", since the "old OOP" is messaging, local retention, and protection and hiding of state-process, and extreme late-binding of all things. At least according to Alan Kay, who wrote this verbatim.

To wit, encapsulation and abstraction existed outside of OOP (for example, Modula had it before), inheritance is not a necessary feature for OOP (Self doesn't have it), and the O in SOLID doesn't apply to Smalltalk and Self.


That's standard OOP as it stands today (versus the 60s when Alan Kay coined the term).

Alan Kay considers that inheritance and polymorphism are not essential, fine. He does consider encapsulation essential, though. Specific languages have their own take, fine.

The point is that there are well-known OO principles. Claiming otherwise is either disingenuous or ignorant.


> versus the 60s when Alan Kay coined the term

It was in the 70s, and his description that I quoted is from the 2000s.

> Alan Kay considers that inheritance and polymorphism are not essential, fine.

Polymorphism is a logical outcome of his requirements. So in a purely logical sense it is essential, although I imagine that saying that might be a little bit like saying that CO2 is essential for a campfire (as in that you can't get a campfire without emitting CO2, even though that is strictly a matter of consequences).

> He does consider encapsulation essential, though.

Yes, because biological cells are encapsulated.

> there are well-known OO principles. Claiming otherwise is either disingenuous or ignorant.

There surely are some "well-known principles" but whether the "known" in that phrase has the same meaning as in "knowledge" (justified true belief at a first approximation) seems debatable.


The tree under your reply proves my point. There is no one set of OO principles. This thread identifies at least two, the original Kay principles and what I wasn't sure you were going to name, which is what I'd call the outdated 1990s ideas of OO. Then there's today's idea, which is probably pretty close to what I said in my post and is exemplified by duck-typed dynamic languages and a lot of modern languages like Go and Rust. That's at least three, and that's staying fairly broad; if we start quibbling about arcane details the count only goes up.


> what I wasn't sure you were going to name, which is what I'd call the outdated 1990s ideas of OO.

I'm sorry but this is getting surreal.

I named the standard OO principles and concepts which are very much valid and alive today, though of course how they are applied (or if they are applied at all) varies from language to language. Claiming otherwise is absurd. If anything this whole article and thread show that too many people are confused by the concepts of OO principles (if they know what that means at all), programming languages (that may or may not implement some of these principles), design practices/patterns (how to come up with a model of objects): these are all different things. Certainly selecting objects that reflect real-life entities is not an OO principle, for instance, but rather a design practice (good or bad, it depends).

In my team we do C exclusively and follow OO principles as much as practical. Any software engineer worth their salt has a good idea of what that means.


Those aren't exclusive to OO, though.

C, for all its faults, has encapsulation at a module level: any functions you mark static and keep out of your header file aren't exported and are thus private. Go and Rust do the same thing.

Abstraction is even more common. Functions are abstractions. And any language with typeclasses (like Haskell) or multiple dispatch (like Julia) uses a form of polymorphism.

Really, the only essentially object-oriented things here are inheritance and (by extension) inheritance-based polymorphism.


> C, for all its faults, has encapsulation at a module level:

No language that supports memory access for the entire address space of the currently running program can ever support something like encapsulation: you can pass pointers to objects and functions outside the currently running module, or you could somehow derive this info from outside the module and so access functions and objects that were not declared in header files. Thus the language cannot give you the isolation guarantees that memory managed languages can. What it can do, is put up some roadblocks or barriers that require effort to cross. But there is a big difference between correctness guarantees and roadblocks.

There really is a qualitative change when you are working in a memory managed language as that allows the language to assign fine grained control over which memory addresses are available to which data structures, which is something that you cannot do with C.


Well then, C++ doesn't have encapsulation, therefore C++ isn't OO. Hell, even Java isn't OO if you allow JNI.


I don't want to get into a debate defining what language is OO and what is not. My point was rebutting the notion that C has private data structures by pointing out that only languages in which memory is managed by the runtime (e.g. a VM) can offer isolation guarantees. Attempts to switch the topic to OS-level memory protections are not really what we're talking about here, as the OS doesn't provide language-level protections. So yes, if your code leaves the VM then you lose those VM protections.


> I don't want to get into a debate defining what language is OO and what is not

I mean, that is the discussion we were having. We weren't talking about language VMs.


I was replying to a statement that C had isolation and I pointed out that it didn't. The response was a non-sequitur: "So then even C++ isn't OO", and I responded that the question is not whether C++ is OO but whether its memory is managed. Not sure how any of this is hard to follow or why these arguments should trip you up.


The statement was specifically that C had encapsulation, within the context of a discussion about whether OOP should be defined as "encapsulation, abstraction, polymorphism, and inheritance."

You interpreted that as meaning memory isolation for some reason (even though plenty of clearly-OOP languages do not implement that), and when someone asked you how that definition of encapsulation squared with the fact that C++ is generally considered object-oriented, you said you didn't want to have that conversation.

It's not hard to follow and it didn't trip anyone up; you just changed the subject out of nowhere and for no discernible reason by injecting a contextually-inappropriate definition of "encapsulation."


If we're talking about making guarantees about blocking the programmer's ability to modify parts of the address space, we're no longer discussing programming paradigms. We're discussing security proofs. The MMU does not play a core role in object-oriented programming.


Historically, this is not entirely correct. Segmented MMUs (as opposed to the more common, currently used concept of paged MMUs) were intended to provide the hardware support for the protection levels and the data/code mixture in OOP. I.e. each object would have executable, readable, r/w and inaccessible parts. Protected by the MMU, depending on the currently accessing context, that is, a subclass, friend class, other class, etc. But creating a segment descriptor for each object or even just each class was, of course, far too expensive in the end.


That's actually really interesting, I hadn't heard of that.


We're not talking about the programmer doing something, but about the code doing something, which is absolutely all about security proofs. And while the OS protects an address space, the OO runtime protects memory within that runtime, so a private variable isn't available to code running outside the class while the same cannot be said for C code. That's the benefit of offloading memory management in interpreted languages.


OOP is a set of principles. C is a language. These are not the same things.


Then OOP is no true Scotsman, because no language implements all the principles.

Or in other terms, without an implementation, OOP isn't even usable, it isn't even real. Just maybe a desirable ideal somewhere.


"Abstraction" as a principle is something we've been doing since we came up with function calls. Encapsulation as a principle is something we do when writing C code. The only one of the listed OO principles which is in any sense exclusive to OOP is inheritance.


> Encapsulation as a principle is something we do when writing idiomatic C code.

That's clearly not the case. C obviously does not enforce encapsulation, and it's extremely common for devs not to follow this principle; in fact it's pretty much the default not to, and it takes discipline to enforce it.

"Encapsulation at module level", as you wrote earlier, is not encapsulation. If you implement your object as a struct (which is really what objects are) then encapsulation means not accessing the content of that struct/object directly.


Encapsulation is information hiding, where the internal components of a unit of code are inaccessible to its consumers (by fiat or by convention — see python's _private methods). This includes hiding procedures, fields and types. Context objects are a form of data encapsulation, for instance, because their contents are meant to be inaccessible, and they're not uncommon in C.

I also gave the examples of rust and go, which have private struct fields but are not really object-oriented, and encapsulate at the module level. Point is, OOP does not by any stretch have a monopoly on encapsulation, and OOP should not be defined in terms of it.


Sorry but I no longer understand what you are arguing about, nor do I understand your point.

Encapsulation in the context of OOP means effectively hiding the data within an object from the external world and not allowing direct access to these data.

OOP may not have a monopoly on this but this is indeed a defining feature of OOP (which you know very well if you ever took a programming 101 course): You may have encapsulation without OOP, but in OOP you must have encapsulation. It's not OOP if there's no encapsulation.

Encapsulation is not something enforced by C (access to a struct's fields is free for all). And this is not a principle generally followed in C code (most C code does directly access fields within whatever struct). Hence my rebuke to your claim to the contrary.

Now, obviously this can be done in C, this is a matter of choice. OOP can be done in any language. There seems to be confusion in many comments between OOP and specific languages.

Lastly, OOP is a set of principles. Principles are rarely followed in their entirety and indeed many languages pick and choose which, if any, principles they implement and how they implement them. It's the same when 'practising' OOP in a language where you have to do everything "by hand", like C: you pick and choose as needed.

I'm out.


> this is indeed a defining feature of OOP (which you know very well if you ever took a programming 101 course)

Setting aside the fact that any 101 course is necessarily reductive and inaccurate, "encapsulation + abstraction + polymorphism + inheritance = OOP" is something that gets regurgitated a lot without ever really being argued in favour of.

Since the first 3 of those 4 points are not at all limited to OOP, it really doesn't make sense for them to constitute ¾ of the definition. Are they really OOP principles if basically every modern language follows them? And now that we largely agree composition > inheritance, OOP often ignores that fourth principle too.

I know you hate C as an example here, so let's use rust instead. If a rust codebase can exercise encapsulation, polymorphism, and "abstraction" (still the vaguest and weakest criterion imo), and OOP code is discouraged now from using inheritance anyway, what stops it from being OOP? Most of the rust I've seen hasn't fit with any conventional notion of OOP, but it still technically matches the definition. Doesn't that make it a bad definition?


ML supports encapsulation via modules and abstract types.


But not destructuring objects into SoA memory layouts is a "principle", since the availability of pointers to objects is rather fundamental for all OO "specific languages".


> But not destructuring objects into SoA memory layouts is a "principle"

No, not really. There are languages that allow inside-out-objects where the "object" is a tuple of (class, index). The class holds a bunch of arrays containing each object's properties at the index-position that the object indicates. Totally destructured, yet holds all the usual OO "principles" like implementation hiding, abstraction, access via objects, etc. https://metacpan.org/pod/Object::InsideOut
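
In C++-ish terms the same trick might look roughly like this (just a sketch; not how Object::InsideOut itself works):

  #include <cstddef>
  #include <vector>

  // "inside-out" style: the instance is just an index, the class owns the
  // property arrays (C++17 for the inline static members)
  class Particle {
  public:
      static Particle create() {
          xs.push_back(0.0f);
          ys.push_back(0.0f);
          return Particle(xs.size() - 1);
      }
      void  move(float dx) { xs[i] += dx; }
      float x() const      { return xs[i]; }
  private:
      explicit Particle(std::size_t idx) : i(idx) {}
      std::size_t i;
      inline static std::vector<float> xs, ys;   // totally destructured storage
  };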

This is exactly what I meant with "there are no OO principles". No one has a clear-cut set of those and almost anything can be made to fit some set of "OO principles".


> since the availability of pointers to objects is rather fundamental for all OO "specific languages".

I don't see how that is the case. For example some implementations of Smalltalk used object tables, so there were no "pointers to objects", just numerical object IDs. The physical interpretation of such IDs could get very arbitrary.


This is not specific to OO and is not an OOP principle. As soon as you have data dynamically allocated in memory and you start passing pointers to them around you have to be careful.



