
Is abstraction overrated in programming? Chess, interviews and OOP - toAnswerIt
https://www.quora.com/Is-abstraction-overrated-in-programming/answer/Matthew-Drescher?share=1
======
JoeAltmaier
Reminds me of an old contest - the 8 Queens problem, posed by Byte Magazine in
the '80s as a programming contest.

All the solutions presented started by declaring an 8x8 board, with a 1 or 0
in each cell to represent a queen. Then there was some two-dimensional
iteration over the board, testing if any queen was on the same row, column or
diagonal as any other.

I'd dismissed that solution as inefficient from the start. Since only one
queen can be in each column, I represented the board as an 8-element array
with each element containing the row number of the queen in that column.

My second insight was: only one queen can be in each row, so the elements of
the board array were simply the numbers 1..8. The search consisted of
permuting the digits, then testing pairs for diagonal conflicts, i.e.
verifying abs(row1 - row2) != abs(i1 - i2). That is, the difference between
two queens' row numbers must not equal the difference between their column
indexes.

Finally, when permuting, you could test the elements as you went and trim the
permutation tree drastically if you found a failure. No sense permuting the
right-most elements of the array if there's already a diagonal conflict in the
digits placed so far.

The whole thing ran in trivial time. The contest winners ran in hours or days
(this was all in BASIC on <1 MHz processors back then). Made me wish I'd
actually entered the contest. But I didn't bother, I guessed many people would
have found the same solution.
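
The pruned permutation search described above is easy to reconstruct. Here is
a sketch in Python (not the original BASIC; the names are illustrative) that
places one queen per column, keeps the row numbers unique, and abandons a
branch as soon as a diagonal conflict appears:

```python
def eight_queens(n=8):
    """Count solutions by permuting row numbers with early pruning."""
    solutions = []

    def place(rows):
        col = len(rows)
        if col == n:
            solutions.append(tuple(rows))
            return
        for row in range(n):
            if row in rows:
                continue  # one queen per row
            # prune: abandon the branch on any diagonal conflict so far
            if any(abs(row - r) == col - c for c, r in enumerate(rows)):
                continue
            rows.append(row)
            place(rows)
            rows.pop()

    place([])
    return solutions
```

For n=8 this finds the classic 92 solutions almost instantly, which matches
the "trivial time" claim.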

~~~
perl4ever
I entered a programming contest once, where the task was to take some input
and construct an optimal solution to a geometric problem. I got irritated at
the rules not being clear and well thought out, and I wasn't smart enough to
work out a good implementation of the real-world algorithm they were hoping
contestants would come up with. So I wrote code to provide pretty much the
stupidest (most trivial) possible solution that fulfilled the specific
constraints for a valid output that were stated. Entrants were scored on a
combination of speed, code size, and quality, and my entry came in a close
second because it was by far the smallest and fastest. I kicked myself for
not trying harder to optimize the quality even a little bit above "braindead".

------
hota_mazi
It seems nuts to me that anyone would expect the Queen class to extend Bishop
just because they happen to move diagonally. A Queen "IS NOT A" Bishop.

If anything, specify traits that define the movement of the pieces and have
each piece extend that trait (with the Queen extending both
CAN_MOVE_DIAGONALLY and CAN_MOVE_LATERALLY).
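
A sketch of that trait idea in Python, using mixins (all names here are made
up for illustration): Queen gets both movement behaviors without pretending to
be a Bishop.

```python
class MovesDiagonally:
    def diagonal_moves(self, x, y, size=8):
        # all squares along the four diagonals, ignoring other pieces
        moves = []
        for dx, dy in ((1, 1), (1, -1), (-1, 1), (-1, -1)):
            nx, ny = x + dx, y + dy
            while 0 <= nx < size and 0 <= ny < size:
                moves.append((nx, ny))
                nx, ny = nx + dx, ny + dy
        return moves

class MovesLaterally:
    def lateral_moves(self, x, y, size=8):
        # all squares along the rank and file, ignoring other pieces
        moves = []
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            while 0 <= nx < size and 0 <= ny < size:
                moves.append((nx, ny))
                nx, ny = nx + dx, ny + dy
        return moves

class Bishop(MovesDiagonally):
    def moves(self, x, y):
        return self.diagonal_moves(x, y)

class Queen(MovesDiagonally, MovesLaterally):
    def moves(self, x, y):
        return self.diagonal_moves(x, y) + self.lateral_moves(x, y)
```

From a corner, a bishop reaches 7 squares and a queen 21, and neither class
claims an IS-A relationship with the other.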

~~~
jgalt212
composition over inheritance

[https://en.wikipedia.org/wiki/Composition_over_inheritance](https://en.wikipedia.org/wiki/Composition_over_inheritance)

~~~
hota_mazi
No, my observation has nothing to do with composition vs. inheritance.

It's about defining what a class IS vs. what a class HAS.

~~~
mikekchar
It's hard to ask this question without sounding snarky, but I hope you will
understand that I'm seriously curious to hear what you think.

> No, my observation has nothing to do with composition vs. inheritance.

> It's about defining what a class IS vs. what a class HAS.

What do you feel is the difference between these two sentences? In other
words, why is inheritance vs composition _not_ the same as what a class _is_
vs what a class _has_?

~~~
imtringued
People are overthinking this.

In practice, inheritance is a very special case of composition that is only
advantageous in a very narrow set of use cases. Basically almost never.

In fact, the number of use cases is so low that I would prefer that modern
languages just omit it entirely, because the added value is incredibly tiny
compared to the cost of confused developers.

The primary difference between an interface and inheritance is that with
inheritance you do not have to implement every method and can use the default
implementation of the base class.

Example:

    
    
        class Piece {
            int x;
            int y;
            boolean validMove(int newX, int newY) {
                return false;
            }
        }
    
        class Queen extends Piece {
            // no override: Queen uses the default implementation from Piece
        }
    

Is equivalent to this

    
    
        interface IPiece {
            boolean validMove(int newX, int newY);
        }
    
        class Piece implements IPiece {
            int x;
            int y;
            public boolean validMove(int newX, int newY) {
                return false;
            }
        }
    
        class Queen implements IPiece {
            Piece piece;
            public boolean validMove(int newX, int newY) {
                // delegate to the default implementation in Piece
                return piece.validMove(newX, newY);
            }
        }
    

Inheritance doesn't actually offer any code reuse. All it does is choose the
default implementation of the parent class. If you have dozens of polymorphic
functions in a class and only want to change 3 it might be convenient. I would
question why you need so much polymorphism to warrant a special language
feature. C and Rust developers are fine without it. Functional programmers are
fine without it. But for some reason in OOP you need it everywhere and are
actually worse off because it's overused.

Modern language designers, please do not implement inheritance of classes in
your new languages!

~~~
hota_mazi
> In practice inheritance is a very special case of composition that is only
> advantageous in a very narrow set of usecases.

You are confused. See my other message below where I show that composition and
inheritance are not mutually exclusive (and arguably, the best of both worlds
is using inheritance via composition).

~~~
mikekchar
Edit: Sigh... This message is badly written, mainly because I confused your
point :-) I'm not quite sure why I was thinking you were arguing that it was
not an ISA relationship. In any case, removing my addled brain from the
equation is probably useful -- you can probably figure out that there is more
than one way to look at this. And while you clearly think that your way is
correct, it is useful to view it from other perspectives.

While it is interesting to see your viewpoint, you may also consider that not
everybody will see it the same way that you do. For example, why do you
suppose that inheritance via composition makes it not an ISA relationship? Is
a Swiss Army knife a screwdriver? You may say "no", but others may say "yes"
-- it is a screwdriver by way of containing a screwdriver. You can verify
this usage of the language simply by listening to sales pitches for the crap
they sell on the shopping channel :-) ISA and HASA do not necessarily need to
be exclusive properties.

It's easy to get wrapped up in your own definitions of things and to block out
how others may view the universe. This is precisely why I originally asked the
question. I appreciated the enlightenment. I hope that the above will help you
achieve the same.

~~~
hota_mazi
> For example, why do you suppose that inheritance via composition makes it
> not an ISA relationship?

I don't. My other message [1] clearly states:

    
    
        class A(b: B) : B by b // IS_A via composition
    

My point is that inheritance can be achieved in many different ways,
composition being one of them.

Inheritance is an architecture concept, composition is a mechanism which can
be used to implement inheritance. They are complementary.

[1]
[https://news.ycombinator.com/item?id=16050458](https://news.ycombinator.com/item?id=16050458)

------
johnfn
The OP has translated the interviewer's question of "design a readable and
maintainable abstraction for a chessboard" into "design the most
space-optimized chessboard possible", then wonders why the interviewer doesn't
like his solution.

~~~
smallnamespace
Or maybe the OP understood that nearly everyone who has ever implemented a
chessboard was also building a chess engine, where memory efficiency and speed
actually matter a lot.

It's the interviewers who are ignorant here: they took a real problem and
translated it _badly_ into a toy problem. Then they failed to realize what
they did.

~~~
lozenge
They could be building an online social game site, or implementing a UI but
using an existing AI. In which case the OO version will be easier to code,
understand, render, and render history in chess notation. The only things his
method is good for are implementing an AI and serialization.

~~~
christophilus
I think the author's approach is data-centric rather than object-centric.

His approach would be easy to persist: write the longs to a file. Easy to
render: based on bit positions, draw a piece. Easy to understand: I looked at
it and immediately knew what was happening (which almost never happens when I
have to look at a massive class hierarchy). History would be trivial with his
approach: since it's space-efficient, you could literally store the bitboards
for each previous board configuration, or, if you wanted to optimize more,
store a simple sequence of moves.

------
Chriky
Is this for real? The solution he proposes is _specific to Chess_!

I mean, does this guy really think the company cares about efficiently
representing chess game states??

Obviously, obviously, OBVIOUSLY, the Chess aspects of this question are
irrelevant. What they are trying to find out is how well you will work on
their actual codebase, which is presumably not composed of global functions
operating on bitfields.

~~~
inopinatus
The guy is openly arguing with an interviewer after taking a question too
literally. I suspect my interview impressions would've included "candidate may
be on the spectrum".

However, this entire thesis is founded on a false dilemma, because you can
write an OO domain model with a compact representation.

~~~
fjdfhdjldj
Why is the onus not on the interviewer? Why is he bringing up chess if he just
wants OO answers? Aren't there better problems to pose, more suited to testing
OO knowledge? Bitboards are a standard way of writing chess engines. I think
most would agree that a good programmer is one who seeks the best solution,
not one who dogmatically follows his preconceived ideas.

~~~
inopinatus
Because you can write an OO domain model using a bitboard representation. It's
an opportunity to demonstrate advanced OO knowledge, which this Quora answer
most definitely fails at.

~~~
loup-vaillant
I'm interested. Care to elaborate on what this OO domain model would look like?

~~~
inopinatus
A variant of the flyweight pattern using contextual identity. All board
representation remains a bunch of uint64_t, all messages sent to boards result
in bit manipulation (i.e. _not_ massive duplication of objects), and you
instantiate the piece objects once per game.

There's simply no need for the "instantiate all the things" approach assumed
in the Quora answer. That is OO at its most naive.

OOP is messaging, encapsulation, and late binding. What you do inside the
capsule is up to you; and at a higher level, how the behaviour emerges from a
composition of objects is also up to you.
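
A minimal sketch of what such a design might look like (Python rather than
C's uint64_t, and all names here are hypothetical): one piece object per game,
with positions derived from the board's bitboards rather than stored on the
piece.

```python
class Board:
    """Internal representation: one 64-bit integer per piece kind (sketch)."""
    def __init__(self):
        # white queen on d1 (square 3), black queen on d8 (square 59)
        self.queens = {"white": 1 << 3, "black": 1 << 59}

class Queen:
    """Flyweight: instantiated once per game, holds no positional state."""
    def __init__(self, board, color):
        self.board = board
        self.color = color

    def squares(self):
        # contextual identity: positions are read off the board's bits
        bb = self.board.queens[self.color]
        return [sq for sq in range(64) if bb >> sq & 1]

    def move(self, frm, to):
        # a message to the piece becomes bit manipulation on the board
        self.board.queens[self.color] ^= (1 << frm) | (1 << to)
```

Here `Queen(board, "white").move(3, 27)` flips the d1 bit off and the d4 bit
on, with no queen-per-square objects anywhere.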

~~~
loup-vaillant
I have the sinking feeling that your first paragraph is probably what the
original author had in mind all along… He advocated for a representation of
the entire board, and that's apparently what you would do as well.

I originally thought you were advocating for a representation of each _piece_
on the board, where "messages" were "sent" to individual pieces, not the
entire board. I was curious about how you'd map that interface to a compact
representation.

~~~
inopinatus
I am. You should definitely be able to send messages to piece objects. That's
not incompatible with the Board object having a bitboard for its internal
representation of board configuration. The piece objects have contextual
identity based on the board configuration. Hint: they _don't_ have positional
state.

~~~
loup-vaillant
That still doesn't tell me how you map the interface to the implementation. I
need a piece of code here. You won't get away by leaving this as an exercise
to the reader.

------
simonhamp
Don’t design the abstraction; derive the abstraction from your design if/as it
naturally becomes apparent.

That way you’ll end up with something that works much sooner, instead of
getting caught up in designing layers of abstraction.

------
jstimpfle
I have no idea what abstraction actually is. It seems to me it's never
abstraction unless it leads to complicated designs for the simplest problems.
I've been accused of hating abstraction because I wrote in C instead and
programmed out my own concepts -- instead of just using a ready-made framework
with ill-fitting ones (Qt in that case).

A much better term than abstraction to me is "semantic compression" (got that
from Casey Muratori of Handmade Hero). This basically means factoring out
common bits with the goal to express the solution with the least amount of
repetition (while of course taking care not to factor out parts which are only
coincidentally common. I figure that's the "semantic" part).

To do semantic compression you need abstraction, but not pointless abstraction
-- just the right amount.

~~~
Veedrac
IMO, the key issue is to differentiate between abstractions in language-space
and abstractions in problem-space. Turning a chess board into an object is
language-space, since it affects vocabulary; writing a function to count the
pawns is problem-space, since it defines a step towards the problem one wants
to solve.

Nobody programs without problem-space abstractions any more; this is
effectively what you get with functions and libraries. When someone uses a
prebuilt tangent function, that's working in problem-space.

Language-space abstractions don't pull the same weight. If they did, Haskell
programmers would be so much more productive than C programmers that the
latter would be simply competed out of the market. Instead we see marginal
benefits against marginal costs, and the gamut of C, C++, Go, Haskell, Python,
Javascript, etc. are, if not equally good, at least sufficiently similar that
there is genuine debate.

If in doubt, abstract the problem. If you do an abstraction and don't have
_less work to do_ afterwards, maybe you've abstracted the wrong thing.
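
The pawn-counting example, written as a tiny problem-space helper over the
bitboard representation discussed elsewhere in the thread (a sketch; the
function name is invented):

```python
def count_pawns(pawn_bitboard):
    # population count of a 64-bit pawn bitboard
    return bin(pawn_bitboard & 0xFFFFFFFFFFFFFFFF).count("1")
```

It says nothing about objects or classes; it is simply a named step toward the
problem being solved.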

~~~
macintux
> If they did, Haskell programmers would be so much more productive than C
> programmers that the latter would be simply competed out of the market.

That implies a much smarter market than I believe exists.

I think it’s possible (not arguing it’s true) that Haskell is dramatically
better yet entrenched interests (e.g., managers afraid of not being able to
find programmers, and programmers afraid of being obsoleted) block the logical
conclusion.

~~~
seanmcdirmid
I’m pretty sure most of the Haskell crowd doesn’t believe this. There are
benefits to learning, thinking in, and using Haskell, but a raw, easily
measured productivity increase isn’t one of them (nor is it for almost any
other programming language).

------
coldcode
Understanding the problem > any abstraction you pick without understanding the
problem first. In this case chess was a terrible choice for an interview. In
chess the biggest issue is tree size. If you don't start there then no
abstraction will ever save you.

~~~
ZirconiumX
I'm going to disagree with you here. In chess the problem is not about
reducing the tree size: you can get a "tree" by picking random moves. The
problem is about reducing the tree size _intelligently_.

My chess program - Dorpsgek - has an 800 byte board structure (though I
recently reduced that to about 400). It uses techniques that the computer
chess community considered to be impractical. But still it works, and it works
because the space that I use per board contains very useful information, such
as attacks to a given square.

The advantage that bitboards give is not space by any meaningful quantity -
for an alpha-beta search of depth d with a branching factor of b you will
allocate at most d boards - or even 1 board - at any given time, not b^d
boards. The advantages that bitboards give are speed and information density.

And to anybody who thinks that a bitboard approach is not sufficiently generic
for different board sizes, wrap the bitboard using an arbitrary size bit set
and the problem is essentially solved.
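
The "speed and information density" point is concrete: a single shift-and-mask
produces every square a whole set of pawns attacks at once. A sketch in Python
(squares indexed 0..63 with a1 = 0, a common convention; the file masks guard
against wraparound at the board edges):

```python
FILE_A = 0x0101010101010101  # bits of the a-file
FILE_H = 0x8080808080808080  # bits of the h-file
MASK64 = 0xFFFFFFFFFFFFFFFF

def white_pawn_attacks(pawns):
    # shift the whole pawn set up-left and up-right in two operations
    left = (pawns << 7) & MASK64 & ~FILE_H
    right = (pawns << 9) & MASK64 & ~FILE_A
    return left | right
```

Two shifts and two masks handle all eight pawns at once; a square-by-square
representation needs a loop per pawn.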

------
wellpast
I can relate to the pain of going through an interview knowing that the
interviewer is expecting a specific approach/answer (an _OO_ answer!) that my
hard-won experience has already long ruled out. And morals and/or my sense of
identity — and/or a fear that the Gods are watching — refuse to allow me to
play the interviewer’s game (at my own practical loss).

~~~
agarden
I don't quite understand. If I were interviewing the OP and asked this chess
question, then got his answer presented much like it was on Quora ("Well, the
thing to understand about representing a chess board is that if you want to
build a usable chess engine, the size of chess board representation is a
critical constraint..."), I would be impressed. Hire that guy. He understands
that the important thing is to figure out the solution constraints.

On the flip side, if the interviewer learns something from you in an interview
and that means he doesn't want to hire you, aren't you glad to have escaped
that work environment? Assuming, of course, that the interviewer is
representative of the people you would be doing actual work with.

~~~
wellpast
I agree w/ you. So in that sense it’s not _truly_ at my loss—that part I was
being (somewhat) rhetorical about.

However it has definitely been the case where I’ve contended with some Senior
Engineer who has found the light in OO programming, inheritance, Template
Method, etc, etc and no amount of dialectic will change his mind.

This programmer often doesn’t see the severe impedance that his OO code, his
type hierarchies, etc are causing to agile delivery of the new and newer
software. How do you show an OO-religionist... the light, so to speak? These
questions are not easy to take into a lab to get hard numbers.

Just an example here: Matrix algebra is easiest done with first-class matrix
abstractions. So my compeer puts everything into some _other_ abstraction. And
I say, look, look how much easier it is to do our work when our data is
modeled differently? And he says “that’s not easier” — okay, you may be
_objectively_ right, but your peer doesn’t “see it”, or refuses to see it, or
what, I don’t know — but this is almost the de facto industry situation: you
can’t _prove_ that your ergonomics are better, because your peer doesn’t even
speak in proofs anyway.

Now I could go find another job but my equity is actually worth something here
(it really is), so I put up with it. But it’s frustrating nevertheless-and
having a better abstraction only costs my coworker to _learn_ something. It’s
not going to make my equity any less valuable—might make it more valuable, if
anything.

------
cglouch
In case anyone was curious how a bitboard chess engine works, I wrote one
from scratch in Python and included a writeup describing my general approach
(mostly focused on the move generation aspect):

[https://github.com/cglouch/snakefish](https://github.com/cglouch/snakefish)

(I cheated a little by not implementing castling / en-passant since it's a
pain, and my engine is still really slow, but hey it works!)

~~~
mratzloff
That was really interesting. Thanks for sharing!

------
tel
The Bitboard is very abstract. The interface it exposes will likely not leak
its concise internal representation, and will discuss the operations on the
game state at a domain level. This is abstraction.

So this is a failure of OO abstraction. Why? OO abstraction is really complex
and makes many forced moves that provide little value here. Inheritance and a
focus on “nouns” make idiomatic code highly specialized to certain domain
models. Unfortunately, these are rarely useful.

~~~
qznc
OO design is often about the tradeoff between over-engineering and
over-specialization. The rules of chess can usually be assumed not to change,
which is different from most programming tasks.

If you were designing for Chess 2.0 and you expected some game designers to
change the rules every week (thinking up new kinds of pieces, changing the
rules for existing pieces, changing the board layout, etc.), would you still
use a bitboard? Maybe it would be better to focus on the "nouns" the game
designers use and keep optimizations like the bitboard in mind for later?

~~~
tel
The point of abstraction is to not be tied to Bitboard--or any concrete
representation whatsoever. Chess 2.0 changing its rules is exactly what can be
protected against.

~~~
qznc
Yes in theory.

In practice, if code uses a Queen object, you put costly indirections in
front of the Bitboard and have thus already lost performance.

~~~
tel
You assume a Queen object and also some compiler details. These might in
practice be relevant some of the time, but (a) how often, really? And (b) are
those forced decisions, or decisions of convenience and momentum?

Fwiw, as soon as you say “object” I’m betting you’re taking on expensive,
unnecessary OO mental modeling.

------
wellpast
The CPU does not care about your abstractions. Abstractions are entirely wrt
the human dealing with them. A good abstraction is only “good” with respect to
some context/purpose—there is no such thing as a universally/generally good
abstraction. And often a “great” abstraction will hurt your performance — so
then is it so great?

But setting aside performance concerns, when we speak of a “good” abstraction
we are usually (or should be) saying this is good for some _purpose_ —good for
readability, for example.

But even better — or of utmost importance in the real world — is this: is the
abstraction in question “good for business”? And that is entirely asking this:
does the abstraction allow for rapidly _acting_ on business ideas, creativity,
needs, etc.?

However, I believe that once the context is fixed/agreed upon, there is an
objective answer to which of this or that abstraction is better. However,
experience in the practical world of today’s software development has
painfully shown me that the “better” abstractions are harder to come by... and
when “found”, they don’t tend to stick. This is because most practitioners
don’t have the ability to produce powerful algebraic systems (which is what
“good” abstractions are — “algebras” over a business domain), because
practitioners are generally not mathematicians, and even have a philistine
dislike/disdain for powerful systems if they have a whiff of mathematical-like
power to them at all.

In this sense one could argue for an abstraction being “good” with respect to
the social context in which it is operated (i.e., if your team members don’t
understand how to correctly wield an abstraction, is that abstraction good?).
However, I don’t like these kinds of arguments, because a lesser system is
still capped in its power even if all its users understand it.

There are limits to what you can do with, say, Euclidean Geometry, even if it
is much simpler to understand than other Geometries. A frequent retort to this
is “No it isn’t.” But that usually comes from perspectives with limited
imagination. That said, many businesses are fine and thrive with limited
imaginations.

~~~
seanmcdirmid
Great abstractions are not always algebraic. Words and vocabulary are
incredibly successful as abstractions, as are ontologies. In fact, an entire
programming paradigm has been constructed around such abstractions and has
been reasonably successful.

~~~
wellpast
Words and vocabulary aren’t enough. Not the colloquial/dictionary sense, at
least.

So if we’re talking about “facts” in an “ontology” — these are still (they
must be if they are going to be processed by a machine) concrete formalisms -
that’s what I mean by algebras.

If we are not machine-processing these ontologies but just printing them out
for users, then I don’t count that. Because we’re not really programming over
those abstractions. We’re just giving them back to the humans.

~~~
seanmcdirmid
Instructing a person how to do something and instructing a computer how to do
it are basically the same thing. Yes, humans have more latitude in how they
follow instructions, but the same skills of abstraction are applied. The whole
point of OOP is that you can wield abstractions without being a mathematician.

Now, how often do people learn chess algebraically?

------
ZirconiumX
My opinion is that this post reaches the same conclusion as I would, but I
disagree with the logic behind it.

In a modern, state-of-the-art chess program that uses alpha-beta with a
branching factor b and depth d, you will have at most d boards allocated, or
even just 1. Neither of those figures approaches b^d. That makes board size
largely irrelevant, except for the amount of memory copying needed, which for
a d-board approach would be on the order of b^d copies (a single-board
approach would only mutate that board).

EDIT: One major issue I just noticed with the article is that the two
differing implementations are not apples-to-apples equivalent. One uses
bitboards, based around manipulation of bits, and another one is akin to
representing the board with an array.

------
valuearb
“But first of all, the OO inheritance here is irrelevant. The queen is the
only piece which actually “inherits” properties from other pieces! We don't
need an object model to simply reuse some functions to calculate legal moves
given a position. Just a few global functions.”

Don’t all pieces have a location? Aren’t all pieces movable? Might we want to
display pieces? Do we want to journal them to files to save game state, or
their moves to streams to play remotely?

All of these things can be done procedurally, but they also fit nicely into an
OO design.

~~~
fjdfhdjldj
But then why use OO? What does OO offer that this method does not? In fact,
one could argue that this method offers better decoupling, since it separates
the rendering from the game data.

~~~
loup-vaillant
OO is better because bundling data and code together is better, because code
is data, and I have no idea what I'm babbling about. </strawman>

The real advantage of OOP, as used today in Java/C++, is _instantiation_.
Instead of having procedures working on global variables, you have procedures
working on _local_ variables, including complex data structures.

Instantiation took some time to get widespread adoption. Originally, even
local variables were actually global, scoped variables, thus forbidding even
recursive calls (see earlier versions of FORTRAN). Programming languages since
use a stack to instantiate their local variables (and return pointers), thus
enabling recursion. The instantiation of more complex data structures (arrays,
user defined structs…) followed.

Ironically, instantiation took some time to become ubiquitous in the C
language even though it fully supported it from the very beginning, with a
stack and user-defined compound types. Case in point: Lex/Yacc, which use
global names (and state) by default.

Now however, instantiation is so pervasive (global variables are assumed evil
by default), that we don't even call it OO any more. We just call it good
practice.

~~~
jstimpfle
> Now however, instantiation is so pervasive (global variables are assumed
> evil by default), that we don't even call it OO any more. We just call it
> good practice.

This is another one of my pet peeves... If it's global (a static resource),
make it a global. Local variables for static resources make code so much less
readable. The only argument against globals is testing, and that's only an
argument because common OO languages have no support for resetting global
"objects"! Solution: Just don't use OO syntax in the first place - it's wrong.
Just write init() and exit() functions.

~~~
dragonwriter
> The only argument against globals is testing

No, the _newer_ argument against globals is testing, but that's mostly a side
effect of the _older_ issue, that globals limit composability/reusability,
which was the main objection to globals before TDD became a popular religion.

~~~
jstimpfle
The "reusability" argument is just as wrong... True reusability (without any
changes) is not possible in most cases anyway, and furthermore I don't see a
reason why surrounding some static object (which can only exist once in a
program, like a sound module, a network module, etc) with braces and a "class"
keyword would somehow increase this vague idea of reusability.

It's only a syntactic change. It's not changing what should actually _happen_.
It's just making it less readable. How in the world can that be an
improvement on any front?

~~~
dragonwriter
> I don't see a reason why surrounding some static object (which can only
> exist once in a program, like a sound module, a network module, etc) with
> braces and a "class" keyword would somehow increase this vague idea of
> reusability.

Often, because the assumption that it can exist only once is _wrong_ ; this is
particularly true of instances of some descriptive class of interface to an
external hardware resource, which covers all of your examples.

> It's only a syntactic change.

No, it's usually not; while the syntactic change is usually _necessary_ , it
usually isn't the whole difference between the desirable modular code and the
bad global-using code that should be made, and quite often if you aren't
writing it as a global resource in the first place, you never make the other
wrong decisions that would need to be changed.

Using globals may occasionally be justified (either as an optimization or,
even more rarely, as “correct” from a fundamental design perspective), but
most often it's a symptom of sloppy thinking.

~~~
jstimpfle
> Often, because the assumption that it can exist only once is wrong; this is
> particularly true of instances of some descriptive class of interface to an
> external hardware resource, which covers all of your examples.

The key here is how to understand the word "can". It's "only" a design
decision! Of course you can do almost anything on a computer. However, most
programs don't make sense with two sound modules or network modules or
graphics modules. So "can" here means, "it absolutely makes no sense, and I'm
never realistically going to instance two or more of this thing". (And if I
really want to do that later, 1 in 1000 times, I'll just edit the code).

> it usually isn't the whole difference between the desirable modular code and
> the bad global-using code that should be made, and quite often if you aren't
> writing it as a global resource in the first place, you never make the other
> wrong decisions that would need to be changed.

Give an example: I don't think there are any. I can easily give you some bad
things that happen when avoiding globals to represent static resources: many
more input and output arguments to type. Then there's the syntactically ugly,
useless, meaningless Singleton. I've seen it many times, and it is the best
proof that it made no sense to avoid the global in the first place, and it
even potentially leads to nondeterministic initialization.

And more importantly, an occasional reader has a much harder time browsing
through the code, because she never knows what the local variables are
pointing at.

~~~
loup-vaillant
Most VR games make sense with 2 displays (Headset + screen), and 2 sound
systems (binaural for the headset + stereo for the audience). You'd likely
know that from the outset, though.

~~~
jstimpfle
Yes - and for each of these examples, there would still likely be a single
"object instance" managing these devices (e.g. a "Display" module). Or two
entirely different modules (a "Headset" module + a "Screen" module), again
each having only a single "instance".

The concept of instancing is inappropriate in most situations. The things we
have more than one ("dynamically many") instance of are typically dead data,
but then again those are typically managed in a single pool. (That's "tables",
again.)

~~~
loup-vaillant
> _The concept of instancing is inappropriate to most situations._

At the application level, maybe. At the library level, this can easily kill
reuse. See Lex/Yacc. They used to assume a program would only have one parser.
Then some naive developer tried to parse both JSON and Lua tables in the same
program. Oops.

Another example: standard libraries (including home grown ones). They
instantiate everywhere: arrays, hash tables, file descriptors…

~~~
jstimpfle
Sure, there are exceptions. And the converse holds too: if it's not a static
resource, don't make it a global...

> arrays, hash tables, file descriptors

Most arrays / hashtables in my own use are actually static resources as
well...

File descriptors, same thing. They are (or correspond to) a process-wide
resource managed by the OS. Most maintainable way is to manage them in a
global pool.

~~~
loup-vaillant
I feel we are not talking about the same thing.

There's no question you instantiate a substantial number of arrays and hash
tables in your program. Maybe most of them are global variables, but you still
spawn several instances of arrays and hash tables. You do not have a single
giant array and a single giant hash table, you have several of them.

Same thing with parsers. Lex/Yacc used to allow only one parser _of any kind_
in the whole program. But if you want to parse 2 languages, you need 2
parsers. Of course, you'd need only one parser per language, and perhaps those
two parsers' state will be stored in global/static variables. You still
instantiated 2 parsers.

At least that's how I understand "instantiation".

------
payne92
Oh Dear Lord.

Programming IS abstraction.

In fact, that's pretty much ALL programming is: using, defining, and
implementing abstractions.

~~~
Veedrac
A definition which doesn't differentiate is a definition which provides no
information. If your terminology is broad enough to encompass everything, it's
also too broad to tell you anything.

~~~
qznc
Defining a function is already a mechanism of abstraction. You take a snippet
of code and give it a name. Usually you generalize by adding parameters. Maybe
you can even make it generic (C++ templates, Java Generics, Haskell
typeclasses, etc). Removing the abstraction of a function means to inline the
code. You could do that manually (copy&paste).
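A tiny, hypothetical Python example of that mechanism (the `clamp` function is made up for illustration):

```python
def clamp(x, lo, hi):
    """A named, parameterized abstraction over a snippet of code."""
    return max(lo, min(x, hi))


# Using the abstraction:
brightness = clamp(300, 0, 255)

# "Removing" the abstraction by inlining it manually (copy&paste):
brightness_inlined = max(0, min(300, 255))

assert brightness == brightness_inlined == 255
```

Generalizing with parameters happens in the signature; in a language with generics, the same idea extends to making `clamp` work over any ordered type.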

I wrote more on this here:
[http://beza1e1.tuxen.de/precise_abstractions.html](http://beza1e1.tuxen.de/precise_abstractions.html)

Abstracting is not everything there is to programming, but modern code
consists nearly completely of abstractions.

------
_yawn
The whole point of abstract data types is to separate implementation from
interface in order to allow a change in representation.

The point is to allow programming things like the AI algorithm without caring
at all whether the board is represented with a hash map, nested arrays, or a
bitboard. In all cases, all the AI cares about is move generation, not
internal representation.
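A minimal Python sketch of that separation (the class and method names are invented for illustration, and the move generators are stubs):

```python
from abc import ABC, abstractmethod


class Board(ABC):
    @abstractmethod
    def moves(self):
        """Generate legal moves, whatever the representation."""


class NestedArrayBoard(Board):
    def __init__(self):
        self.cells = [[None] * 8 for _ in range(8)]

    def moves(self):
        return []  # stub: would scan self.cells


class BitBoard(Board):
    def __init__(self):
        self.white_pawns = 0x000000000000FF00  # one bit per square

    def moves(self):
        return []  # stub: would do the bit fiddling


def ai_pick_move(board):
    # The AI sees only the interface, never the representation.
    candidates = board.moves()
    return candidates[0] if candidates else None


assert ai_pick_move(NestedArrayBoard()) is None
assert ai_pick_move(BitBoard()) is None
```

Swapping nested arrays for a bitboard then changes only one class, not the AI.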

Abstraction does not hinder ideas like the bitboard, it enables them. Both the
interviewer and the interviewee are missing the point.

------
c22
I don't know why he wants to waste so much space with twelve unsigned longs
when he can do it in eight (one for each piece type and two more masks for
color).

~~~
Const-me
I don’t know why you both want to waste so much space.

A cell is either empty, or contains one of the 12 figures.

To store 13 values, you need 4 bits per cell, i.e. 256 bits for the complete
board. That’s 4 unsigned longs.
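That packing might be sketched like so in Python, with the four "unsigned longs" as a list of 64-bit words holding 16 four-bit cells each (the piece codes are made up for the example):

```python
EMPTY, W_PAWN, B_QUEEN = 0, 1, 12  # ...and codes 2..11 for the rest

board = [0, 0, 0, 0]  # four 64-bit words, 16 cells of 4 bits each


def set_cell(sq, piece):
    word, slot = divmod(sq, 16)
    board[word] &= ~(0xF << (slot * 4))  # clear the 4-bit slot
    board[word] |= piece << (slot * 4)   # write the new value


def get_cell(sq):
    word, slot = divmod(sq, 16)
    return (board[word] >> (slot * 4)) & 0xF


set_cell(0, W_PAWN)
set_cell(63, B_QUEEN)
assert get_cell(0) == W_PAWN and get_cell(63) == B_QUEEN
assert get_cell(17) == EMPTY
```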

~~~
c22
Color is still binary and there are at most 32 pieces on the board, so why not
3 bits per cell and a 32-bit lookup for color? Only 224 bits!

~~~
JoshuaDavid
If you're going to go that far, you might as well use 64 bits to say whether
each square contains a piece. There are a maximum of 32 pieces, and for each
(potentially) occupied square there are 12 possibilities for what might be
there (not counting "nothing" as a possibility in a square since that's
already covered by the 64 bits indicating whether each square is occupied),
which can be represented in 4 bits per square. So that's 64 bits to say which
squares have pieces, plus 32 * 4 = 128 bits to say what is in each occupied
square, for a total of 192 bits to store the chessboard.

You can take it even further, too. For the 32 possibly occupied spaces, you
can encode what is in them as a 32 digit base 12 integer, then convert to
binary, which is guaranteed to fit in 115 bits. So you can get away with 115 +
64 = 179 bits.
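The arithmetic checks out: 12^32 < 2^115, so 32 base-12 digits fit in 115 bits, and 64 + 115 = 179. A quick Python sketch of the encoding step (the digit list is a made-up example):

```python
# 32 base-12 digits fit in 115 bits:
assert 12 ** 32 < 2 ** 115

def encode(pieces):
    """Pack piece codes (0..11, one per occupied square) into one int."""
    n = 0
    for p in pieces:
        n = n * 12 + p
    return n

def decode(n, count):
    """Unpack `count` base-12 digits back out of the integer."""
    out = []
    for _ in range(count):
        n, p = divmod(n, 12)
        out.append(p)
    return out[::-1]

digits = [11, 0, 5, 7]
assert decode(encode(digits), len(digits)) == digits
```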

------
jancsika
The class-based solution provides more information about the developer's
approach than the single struct for the BitBoard.

For example, where would the equivalent of "findMoves" be declared in the
latter case? Is the bitmath abstracted out into a set of helpers, or is it
done in one big function?

------
petters
Why do all the boards need to be kept in memory during the search?

~~~
lozenge
Having read a Python implementation of bitboards in the thread, I believe it's
for search pruning. If you search many boards a few moves deep, different move
orders will reach some duplicate positions. Preferably each board should be
evaluated (for which player it favours) only once, and that can be done by
storing the boards in a hash table or similar structure, which requires
keeping them around.
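In Python-ish terms, the idea is a memo table keyed by position (a minimal sketch; `evaluate` and the board encoding here are placeholders, not a real engine):

```python
transposition_table = {}


def evaluate(board):
    return sum(board)  # stand-in for a real evaluation function


def cached_eval(board):
    key = tuple(board)  # boards must be hashable to serve as keys
    if key not in transposition_table:
        transposition_table[key] = evaluate(board)
    return transposition_table[key]


cached_eval([1, 2, 3])
cached_eval([1, 2, 3])  # a transposition: second call hits the table
assert len(transposition_table) == 1
```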

------
moomin
I’m a bit concerned with the bitboard representation. What happens when a pawn
takes another piece?

Edit: The above is what I would say if someone presented this approach in an
interview.

~~~
qznc
Let's say a black pawn takes the white queen: unset the bit representing this
black pawn, set the bit where the black pawn is now, and clear the queen's bit
in the white-queen long (with a single queen on the board, that long simply
becomes zero). One load and two write operations to memory. The bit fiddling
is probably negligible in terms of time on a modern CPU.
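In code, with squares numbered 0..63 and one 64-bit integer per piece kind, that update might look like this (a sketch; the square numbers are arbitrary examples):

```python
black_pawns = 1 << 51    # a black pawn on square 51
white_queens = 1 << 42   # the white queen it is about to capture


def capture(pawn_from, pawn_to):
    global black_pawns, white_queens
    black_pawns &= ~(1 << pawn_from)  # unset the pawn's old square
    black_pawns |= 1 << pawn_to       # set its new square
    white_queens &= ~(1 << pawn_to)   # clear the captured queen's bit


capture(51, 42)
assert black_pawns == 1 << 42
assert white_queens == 0
```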

~~~
moomin
So here's the second question I'd ask: what happens if there was already a
black pawn on the white queen's column?

~~~
qznc
A 64-bit data type means one bit per square. It is not 8 bits per pawn. It
does not distinguish between pawns.

------
fastcum
For those interested, here is a great resource list about bitboards in chess:
[https://chessprogramming.wikispaces.com/Bitboards](https://chessprogramming.wikispaces.com/Bitboards)

