C++ Programming Questions to Ask on Interview (2017) (tests4geeks.com)
113 points by sytelus 6 months ago | 109 comments



I write c++ high frequency trading systems, but I think I'd have to look up the answer to most of these. I've heard of all of them, though, and I know roughly what the point of them is.

Your problem when interviewing someone is that in real life, you are relying on the person being quick to learn, since they are unlikely to have the answer in cache. Maybe if they'd just read Stroustrup they'd know some of the specifics, but most coders would know what the point was and google it.

You have to wonder what a fair question is. I suppose you should know a bunch of things about c++ (why you're using it, major features like templates), without knowing all the specifics (SFINAE/CRTP examples). For instance it's fair to ask someone what a memory barrier is. But not to list all the varieties (relaxed, acquire, etc).

I've hired a number of c++ devs in the last few months, and all of them have turned out fine. I never asked anything particularly specific like this list of questions, just poked around the very large house that is c++. What does the candidate think changed in c++11? Why are we using it for HFT? What kind of thing will slow us down?

I think if I'd asked these questions I'd have dropped a lot of perfectly fine candidates.


I'm glad to see this as I'm interviewing for a c++ dev position after working with it non-commercially for years, and I didn't even recognize a couple of the questions and wouldn't have any idea how to answer several of them off the top of my head. Certainly I could look any of them up and implement them if necessary; I really rarely code without having a book open next to me to look up the finer details of things I haven't memorized yet.


Glad to see this as well. I'm about to hire some C++ developers for embedded work and have never been a fan of this type of interviewing, if only because I know I personally don't excel at this style and consider myself extremely competent. I'm hoping to find ways of assessing incoming candidates along the lines you've briefly touched on in this thread!


Regarding HFT, how much is C++ and how much is FPGA? I always thought even CPUs often turn out too slow for that.


There are HFT systems in Java and OCaml. I’m not an expert in these domains, but since markets are only open for certain hours of the day, you can pre-allocate enough memory and just not collect till after the bell rings. In my understanding.


Some of this is straightforward. The rest.... eesh.

I mean, yeah, I guess you can quiz your candidates on their knowledge of stuff like enable_if and ability to write templates based on traits like is_integer, but don't expect that to be a good filter for anything but language minutiae.

And question 10... what on earth is a "max heap". I'm gathering from context that they are asking about a heap in the sense of the packed tree data structure. I've used those, and written those on multiple occasions, so it's not an impractical subject. But (1) if you asked me the question the way it's phrased here I'd look at you blankly and wonder if you were talking about some kind of custom allocator, and (2) they don't even want the candidate to talk about the data structure! They just want a class definition written for the thing with a handful of obvious interfaces (add, get maximum, y'know... it's just a heap). How is that useful to anyone?


A "max heap" is a heap that has O(1) access of the maximum priority element, as opposed to a "min heap" which returns the lowest priority. It's largely a semantic distinction, as a max heap = a min heap with negative keys. Also a heap isn't just a packed tree, it's referring specifically to a heap data structure [1] (which is often implemented using a packed tree, but not necessarily).

This is very common terminology, and honestly I'd expect any CS grad past a sophomore undergraduate level to understand exactly what a max heap is, especially in an interview setting.

[1] https://en.wikipedia.org/wiki/Heap_(data_structure)


> I'd expect any CS grad past a sophomore undergraduate level to understand exactly what a max heap is

Never once heard the term, and I've written several heap-based priority queues over the course of more than two decades of professional work.

So... yeah, I'm too dumb for that filter I guess.


Surely you've heard the distinction between a max-heap and a min-heap before...

And honestly, I'm sure if you asked what it was in an interview, you would get a straight answer and not be dinged for it. I would filter based on your saltiness level though.


Sounds like there would be mutual filtering.


Sounds like all they're filtering for is who went to the same university, at around the same time, as they did.


I don’t think so. Heaps are a bread-and-butter data structure typically covered in an undergraduate Data Structures course and then used in an undergraduate introductory Algorithms course.

I’d expect someone to know heaps the same way I’d expect them to know about hashmaps or quicksort.


But as mentioned multiple times, I do know heaps. I've written heaps that were deployed in production software even. Yet given a question that asks without context "Write a max heap" I'd be lost. That's not a term I know.

Conflation of terminology (about which heaps are already a terribly confusing example) with expertise is kinda the essence of why this kind of list is garbage.

And in any case, I reiterate the broader point that has nothing to do with my personal vocabulary: the question wasn't even asking about how heaps work! It just wanted to know you could write a class declaration with the API.


I agree that the list isn't good. I would only expect someone to know that it permits constant-time access to a maximum or minimum element, and that it has a tree structure. An actual implementation I would not expect.

A class declaration with an API (and no implementation) would be rather simple: insert, top, remove, and a constructor.

I will say that I typically hear “heap” and whether it’s max or min is largely implicit in the task, so perhaps the comparator qualification is unneeded.


True, but so many excellent programmers, people with 10 years of experience, typically don't remember all the details of classes they took a decade ago. I remember heaps, but I'd have to look up the details since it's not something I work with everyday. Once you're deep into your career, it's much more about practical real world style programming.


> I've written several heap-based priority queues over the course of more than two decades of professional work

Out of curiosity, what terminology would you use to distinguish between a heap that offers up the lowest value first and one that offers the highest value first?


Why would one even need such a term? The difference is the direction of the pointy end in the comparison operator...


> Why would one even need such a term?

Like any word, it's shorthand for a longer set of words.

> The difference is the direction of the pointy end in the comparison operator...

That's not descriptive of what the thing is, that's descriptive of how one part of it will be implemented.

I get that you don't think it is important, but I am surprised that you are acting defensive about the fact that it's a term practitioners ordinarily use. It's the first thing someone who looks up a binary heap is likely to see in Wikipedia, captioning the top two illustrations in that article. It's not just Stanford-cult algorithmic knowledge.


Totally agree. You learn so much theory in computer science courses, but that is less important during practical career work. It's much more about learning codebases, libraries, and good debugging skills.


I've had a BSCS for years, an MS in a related field and working on an MSCS and I've never heard of "max heap" or "min heap".

Your explanation makes it quite clear and I guess now I know -- but it's not been common in anything I've ever seen.


I just don't understand how this is possible. The only way to implement Dijkstra's algorithm in less than O(|V|^2) time is using a min-heap. In practice it might be rare to use heaps over ordered sets but it is one of the first tree structures I was taught in my data structures course.


This is the first time ever I hear about "max heap", or quite possibly I heard it and promptly forgot it. No, I don't accept your implication about my credentials; how I've fared in practice since getting the degree in 1997 supports me (my work has been quite algorithm-heavy for the last few years). Some people take their own favorite subjects waaayyyyy too seriously.

I have no intention of becoming, nor see any good in becoming, a walking lexicon of obscure algorithms and minutiae. Thank god I'm really good at forgetting stuff I don't need - as well as learning stuff I do need but never did before, but only when I actually need it. Dynamic learning capability (let's just call it that) seems far superior to me than static learning. We have Google to outsource a lot of that to.

Anyway, looking up max heap it's trivial, no idea what my benefit would be if I knew every variation of a tree structure somebody came up with by heart. Often it's just a name for something that anyone who needs it would come up with by themselves anyway if they understand the greater concept because it's kind of obvious. I have used plenty of data structures for which I'm sure someone invented a name for without knowing that name.


> Anyway, looking up max heap it's trivial, no idea what my benefit would be if I knew every variation of a tree structure somebody came up with by heart.

It's not just any other tree structure. It's the tree structure that is by far the most common way of implementing a priority queue, and it's the tree structure at the heart of the algorithm "heap sort", which is not exactly obscure. It's also the tree structure that gives "the heap" its name (though modern operating systems and allocators don't use heaps for this purpose anymore). I've personally implemented heaps in a variety of languages and contexts throughout the years as part of my work.

I don't want to sound too dismissive here, but this is not some obscure data structure with only theoretical interest. It's perfectly fine to not know off the top of your head what splay trees, Fibonacci trees or skip lists are or how they work. But if you have a degree in computer science, you should know what a heap is.


You are not saying anything that I did not already respond to. I know trees, I know priority queues, I completely understand the data structure. What's the point of being able to assign some completely made-up and arbitrary name to it?

If you think being a walking lexicon is an achievement, okay. That's spelling-contest kind of "knowledge". I get it - some people really like it. Everybody is different and that is fine. Names (by themselves) mean a lot to many people. It's a low-effort signaling thing I guess. It is fine if I was ever selected against in such an interview - it would be entirely mutual. I do not believe it tests what you think it does.

If you need a priority queue and don't happen to be able to come up with a good data structure on your own - or if you are too lazy to think about it - you can just google it. As long as you understand the basics of the underlying data structures - trees, arrays, lists, graphs (in general) - you easily understand the nuances of the particular variation of the base structure.


Heaps are not exactly obscure data structures, and they are practically useful. They are also relatively simple to implement. I wouldn't ask about them in an interview, but wouldn't be surprised if I were asked.


I don't understand the purpose of your reply. You built a straw man which you now attack. I didn't say "data structures don't matter" or whatever else you interpret into my comment. Also read my other reply to another reply.

I understand that it is highly unpopular to say that some particular piece of knowledge is unnecessary - for every piece of knowledge there are people who feel very strongly about it. I guess that's partly because acquiring the knowledge was quite an effort, and then psychology kicks in: what was hard to obtain gains value by that proxy. That makes it hard to talk rationally and calmly about what is actually needed. It's why questions such as what children or students need to learn are so hard - the discussion always becomes extremely heated, and removing anything from the list can only be done against lots of screaming.

So sure, my claim - that being able to immediately connect some name to the respective thing, here some algorithm, without looking it up is unnecessary - is not popular with everybody, and those who oppose it feel very strongly about it. Unfortunately that includes reading things into what I say/write that are not really there. I mean, obviously, a good and thorough education in algorithm fundamentals is extremely valuable.


From your previous comment:

> I have no intention of becoming or finding good becoming a walking lexicon of obscure algorithms and minutiae.

This implies that you think that heaps are obscure and minutiae. If that wasn't what you meant I'm not sure why you brought it up in a discussion about heaps.

> I understand that it is highly unpopular that some particular knowledge is unnecessary -

What is highly popular today is the idea that knowledge is unnecessary because you can always google it. You can't google what you don't know exists.


Why are you being so nasty?

> This implies

Why don't you stick to what I wrote? You respond to the straw man your own imagination creates.

> is the idea that knowledge is unnecessary

See above. You are being quite ridiculous.

http://dilbert.com/strip/2015-06-07


I do agree we have a 'dick from the internet' issue here.


Hello Dick, oh so you found a mirror, great! I doubt self-reflection is something a troll like you is capable of though, given your inability to parse simple English without projecting your own - pretty limited and frankly a bit stupid - thoughts into other people's comments. That's some narcissism component that you have here.

EDIT: I knew it, you narcissist use alt accounts for voting manipulation. LOL, I won't even report you, sick little child (no sane person would go through such trouble for literally nothing). I knew it was only you all along, and this comment was a provocation test - nobody else is reading this any more at this point. The only way to get such a quick within-minutes reaction to this comment is if you cheat (and use a notification service, which was already likely in your case so I counted on it). Got you! My own personal troll, I feel honored!


I've had dozens of interviews for mid-level and senior-level engineer positions in the past 10 months at many diverse companies and gotten offers from most of them (retracted for unrelated reasons), and none of them asked about data structures even close to this. So it seems like it's probably really obscure. Or maybe only interviews for certain positions look for this kind of knowledge?


Cool startups that want to be like Google and FB, but are still at around 10 employees.


Almost nobody writes heaps anymore; they're not even the best structure for priority queues now - that role is typically taken by skip lists or more advanced balanced trees like red-black trees.

It is one of those stupid questions. A better question would be to ask them to implement a priority queue (also a bit rough, as they're in the standard library now, or in Boost, but for old C++ you may get to write one) and see what they come up with. Discuss a bit too.


I'm curious, what's wrong with array based heaps? I've implemented both Red-black trees and array based heaps in the past. If I had to take a guess I would say the latter should be noticeably faster. In any case Red-black trees are much harder to implement.


The problem with heaps is cache locality if you have anything different from tiny types like pointers stored inside. Plus they have to be resized more often than the other structures, and you're forced to allocate a contiguous chunk of memory if they're big. (And for small sizes, you actually cannot beat a simple list kept sorted on insertion with a heap.)

Both skiplists and rbtrees allow for a tricky but efficient colored-pointer implementation, and also work well if you need fine-grained locking or lock-free implementations.


Red-black trees aren't known for having good spatial locality of reference (heaps aren't either, but I fail to see how this is a point in favor of RB trees). Is there a particular implementation that doesn't share this shortcoming that you had in mind? That would be interesting to me.

If allocating a heap as a big chunk were a problem, you could certainly do it as nodes and pointers like a RB tree is traditionally done (reify the tree instead of array-encode it). Though the array-encoding is itself usually an optimization, so I wouldn't think you'd want to do that. Doing one big allocation (and copy) every once in a while (in a table-doubling-esque manner) should be a win over individually allocating nodes, if you can amortize over many operations. And again if that's not true or undesirable, you can just allocate a heap like you'd allocate a tree normally.


If cache locality is a big concern, you should either think about implementing some variation of a B-tree, or do very fancy things like design trees with van Emde Boas layouts (which are cache-oblivious and thus have excellent cache performance) [0]. Or both, at once!

Both of these things are significantly more complex than just implementing a simple max-heap, which will almost certainly perform well enough for most any practical purpose. A red-black tree will probably have significantly worse cache behaviour than a max heap, since every dereference is jumping around in memory. In a max-heap, you can count on having at least a couple of the first levels in cache when you start.

[0]: http://tildeweb.au.dk/au121/papers/soda02.pdf


Yes, if you're keeping pointers. The thing is, the gain is completely wiped out by maintaining the heap invariant, which has to copy the whole thing instead of walking a single branch. The same memory-bandwidth issue plagues B-trees. This shows up with big trees. For small ones, again, you're better off with a list.

Van Emde Boas trees are better, but more complex still. (And the colored-pointer variant is not available for them.)

The tradeoff is speed of write vs read in case of heaps. They have pretty bad write performance.


PHK's "B-heaps" are interesting: https://queue.acm.org/detail.cfm?id=1814327 .


I was wondering the same, and was thinking that maybe the RB tree needs to touch fewer nodes to rebalance after a remove operation, thus resulting in better cache friendliness (although GP said cache _locality_)? But it seems unlikely, since the heap property is much weaker than the RB-balanced binary tree property.


RB tree nodes exploit the interesting and common property that, in typical usage, nodes are created next to each other most of the time, are not moved far around (as in some balancing trees), and are small. Thus you end up with cache locality even when not directly aiming for it.

B+trees have some of that property by only keeping data in leaves, but you need a very smart leaf layer to capitalize on it. You end up with more inner nodes though.


> RB tree nodes abuse the interesting and common property that nodes are created next to each other most of the time in typical usage

That's true only if you create the nodes in already sorted order. Furthermore, for the RB version of priority queues, you need pointers, which are not needed for the array-based variant (the children of cell x are always 2*x and 2*x+1), which typically adds another 16 bytes of data per node.


A heap tree has a very specific structure, even if value locations have some flexibility. An RB-tree has more flexibility in its structure, with rigidity in value locations.


I had an interview with a C++ guru while I was just a user of C++, among other languages, and not really into language pedantry. So I failed at most questions, even some that are actually classics, like the difference between struct and class (I was not even aware you can have member functions in a struct in C++).

I still got hired, probably because I did not go to that interview with a high expectation of being hired (I was actually curious about what they were doing and would have been happy to just chat about that). So I would typically say I don't know, propose a hypothesis (usually wrong), and suggest that in the real world I would just google it real quick or write a test program.

They later told me that some interviewees were really stressed out by these questions (I can understand, they were hard!) and that no one could really answer a lot of them. They liked that I was actually curious about the things I learnt about C++ and would ask follow-up questions ("Really? struct and class are almost the same in C++? Do they pack the same way as well?"). I realized how unfair this situation was: I was good at getting hired because I was not in dire need of a job at the time. Had I been desperate for this job, the temptation to try and fake it would have been great. And it would have backfired quickly, as the interviewer is still to this day the C++ programmer with the deepest knowledge of the language that I have ever met.


On the other hand, it's really hard to figure out the limits of an interviewee's knowledge if you never ask the hard questions. Obviously, as an interviewer, you mustn't penalize people for not knowing obscure parts of the language, but asking only easy questions will not give you an idea of the person's aptitude.


Yes, that's probably a good way to start a conversation to gauge knowledge. But the linked article proposes that recruiters who have no knowledge of C++ should use these questions as tests. I don't think this is a good idea.


I think these questions are good to choose from when you have an obviously talented candidate and you are looking for an advanced question that probes the depth of someone’s knowledge. Most interviews I’ve conducted would never get that far. When I interviewed C++ programmers, people would struggle through things like when do destructors get called and how to iterate through every other array element. R-value references and move semantics? LOL. Let’s start with the difference between a set and a list and see if we get through that!

99% of the candidate flow I’ve seen would never have gotten any of these questions right.


Slightly off topic, but a candidate applying for a general programming position who claims knowledge of C++ in their CV is presenting an absolute gift for interviewers: because, if they actually do know their C++, then they are likely to be a strong candidate overall; conversely, if they are bullsh1tting you, it is very easy to find this out quickly.

True story: once interviewed a guy who claimed to know C++; set him a challenge that he couldn't answer, simplified it a bit, still couldn't answer, we iterated down until I discovered he didn't even know how to declare an int variable. Turns out he had been a tester on a system written in C++ ...


I'm a C# developer and I've been strongly considering learning C++ in detail because I feel like it would make me a better programmer in general. However getting started with such a complex language seems daunting. Anyone know where one should start? C++11? C++14? Something even older?


I very much agree that it helps you understand the "programming stack" much more intimately. If that's what you are after, you'll also have the parts that are relevant to you covered relatively quickly (compared to fully "knowing" C++ which would probably take at least a decade).

I also agree with the other reply that the strongest learning experience is basically in the chronology of the language development. It takes running into a case where they are the best solution to really grasp "why include guards?", "why templates?", "why smart pointers?", "why iterators?" and so on, and you also gain a lot of appreciation for what compilers in other languages do for you behind the scenes.

I also highly recommend https://godbolt.org/, which allows you to see what machine code is produced in the end. Seeing all the template magic getting distilled to clean asm is quite amazing.


The way that I learned C++ was starting from how the language was developed. With functions, arrays, basic variables. Then moving onto classes and polymorphism. Then with C++11 and C++14 newer features.


It doesn’t matter that much. If I were you I would pick a project you care about and read the source.


> cannot return 0 from a void [main] function

"A conforming implementation may provide more versions of main(), but they must all have return type int." ~ Bjarne Stroustrup - http://www.stroustrup.com/bs_faq2.html#void-main


So in the first question, they test to make sure the candidate knows the minutia that std::unique_ptr<> has no copy constructor.

And in the followup, they assume the candidate will just assume that "SearchQuery", whose header is not shown, will have a copy constructor.

Just no.


I looked at that first one and didn't realize it wouldn't compile simply because I would never write something like that anyways. That function should just take an int&.

However, understanding that SearchQuery is broken is pretty important. Breaking the Rule-of-3 is probably the #1 mistake made by junior C++ developers trying to write object-oriented code, and the consequences are pretty nasty. You'll get corruption and crashes from use-after-free and double-free errors and nothing will make sense because the problem will move around as you create temporary copies while trying to debug.

That being said, I wasn't familiar with that sort of stuff when I was first hired as a C++ developer. I was very fortunate to start my career with a company that was willing to hire people and train them.


Right now one of the major gripes I have about Modern C++ is RAII / Rule-of-five. After going through all of this stuff I felt it brought too much complexity for what it tried to accomplish...

RAII basically assumes that your objects are always going to be in the init-use-free pattern (you read a file, you use it, and close it after you’re finished) In reality not all things work this way, as objects have various different states according to their state machines and will use and free resources depending on that state. But in Modern C++ you’re forced to think only in the on/off paradigm, which results in lots of small unnecessary classes with their own on/off state. (If you try graphics programming you’ll understand right away...) Now when you add rule-of-five this becomes a monstrosity to handle, as you have to ensure about memory copying/moving for every little RAII class you make.

So why not let normal assignment be a shallow copy by default, but have an explicit copy() method? Why is everything so implicit in Modern C++, where by simply looking at a = b you can't tell how memory is allocated/freed...

Nowadays I have lost hope in Modern C++ and am looking for alternatives: Rust (complex but at least memory safe), Orthodox C++ (not memory safe but at least sane for graphics/game programming), or Jai/Zig/other experimental languages (which claim to be a better C).


I'm a graphics programmer myself. To be honest, my strategy for OpenGL-related resources is to allocate everything I'll ever need at program start and leak everything at program shutdown. It's simple and efficient and all copies are indeed shallow. However, if I was writing more generic code, I would probably make extensive use of movable but non-copyable containers for my OpenGL resources.

You're right that the RAII model sometimes feels a bit limiting for the complexity sometimes found in graphics, but I think part of that was self-inflicted by our API design. That seems to be changing. The best-looking technique I've seen for handling Vulkan/DX12 is to make your program more like a compiler. You take a scene as input and you compile it into a command buffer to give the GPU. Your resource handling for stuff like texture slots thus looks more like register allocation. State shadowing is a trivial optimization pass when your GPU code is data.

The problem is that when you write RAII classes for GPU resources in the traditional way, C++ sees the GPU state through a tiny pinhole and thus can't optimize very well. When you treat your GPU commands as a program that your C++ program is building, you have more global information about GPU state available, which allows you to make more efficient choices.


Modern C++ RAII is more about rule of zero than rule of five. How many different kinds of resources do you manage (that also can't be handled by std::unique_ptr with custom deleter) that you need "lots of small unnecessary" classes? If your classes manage a resource and also do something else, that's a clear violation of the rule of five/zero paradigm: When your class manages a resource, it should only do that and correctly implement all five special member functions, otherwise it should default all of them. In most cases, smart pointers can do the latter for you already.

For example, it's trivial to have shallow copies by default - just wrap it in std::shared_ptr (or a thread-unsafe homebrew version if you can prove that synchronization is a performance problem).

I admit to not having done a lot of graphics programming, but I can't shake the feeling that you are using modern C++ wrong if your complaint is "too many small RAII classes".


Assignment is supposed to perform a deep copy because otherwise you need to be aware of the implementation of the object you are copying, which may or may not allocate data on the heap. By forcing a deep copy, all assignments have identical semantics – you obtain a new, independent copy of the resource.

You could say, "well, developers can just adopt a uniform semantics for what parts of objects are considered 'shallow' vs. 'deep'", but now the compiler can generate correct default copy operations in many fewer cases, since the rule is arbitrary vs. simply "copy everything".


It is usually called the rule of five; the rule of three is half of it, and sometimes makes people forget to write the move assignment operator.

I'm pretty sure businesses do not like to actually train people nowadays. :)


By volume of content it's "usually" called the rule of 3, because the rule of 5 didn't exist until C++11, and most of the C++ content was written before moving was a thing.

It's now called the rule of 5.


I originally wrote "Rule-of-Three / Rule-of-Five / Rule-of-Zero" but regrettably, I cut it for brevity.


That std::unique_ptr has a deleted copy constructor is not minutia! The huge program of C++11 to deprecate auto_ptr, redefine value categories, introduce rvalue-references and move constructors, &c. had the creation of a move-only smart pointer as one of its most proximate goals.


A thing can be a design goal and minutiae at the same time. The design goal was to reduce accidental errors in code, particularly by the incautious and by learners. But for a practitioner who clearly understands the concept, the exact way in which you must accomplish an operation permitted by the concept is just "however the library wants you to do it" minutiae.


> The design goal was to reduce accidental errors in code, particularly by the incautious and by learners

No, it really isn't. The whole point of a unique pointer is to ensure that the pointer is in fact unique. Thus, std::unique_ptr is specifically designed to disallow any form of copying.


> the minutia that std::unique_ptr<> has no copy constructor.

This is not minutiae. This is a test of whether you understand the theory behind ownership semantics.


You can "test the theory behind ownership semantics" without relying on the intricacies of a C++11 construct (however trivial).


Unique_ptr is such a fundamental type that, if one claims to be a c++ programmer, one has to know it and understand how it works.


> "Unique_ptr is such a fundamental type that, if one claims to be a c++ programmer, one has to know it and understand how it works."

No.

Those utilities weren't standardized before C++11. People wrote their own implementations before that. People had to understand stack unwinding and scoping, memory, RAII, design patterns that relate to allocation/de-allocation of memory, ownership transfer.

Then came in the people that decided it'd be a great idea to start quizzing others about C++11 concretions.


> Then came in the people that decided it'd be a great idea to start quizzing others about C++11 concretions.

So it has been almost a decade. How much longer before the whole move semantics addition to the language specification becomes relevant to you?


When the embedded project I've been working on since 2009 decides to upgrade compiler versions (which will be approximately never). Or when I change jobs or move on to a different project.


My workplace is still using C++98, since upgrading compilers was too much work and a compatibility risk for our customers. This year I spent almost a month getting new versions of our dependencies to run with the existing setup, cutting out various features along the way. Avoiding that upgrade is no longer beneficial for us.


Move semantics are nice, but they aren't critical or even fundamental to understanding C++. I would bet most C++ programmers do not fully understand them.

As a veteran of three decades of C++, I've seen a lot of new hotness turn into old and busted and removed. There's plenty of dark corners where you don't need to look to get a job done.


> "So it has been almost a decade. How much longer before the whole move semantics addition to the language specification becomes relevant to you?"

I'm not debating the usefulness of move semantics. I'm debating the weight being placed on them in these so-called "C++ interview questions". Wouldn't it be better to discuss the actual problem, rather than a concrete solution we ended up having in C++11?


> Those utilities weren't standardized before C++11.

That's quite a disingenuous statement. Work on C++11 started after the approval of C++98, and the C++0x smart pointers had been implemented and available for quite a while before C++11 was finalized.


I would question where you learned the "theory" behind modern c++ ownership semantics without ever encountering std::unique_ptr, given that it is the reason move semantics were added to the language. As far as I can tell you would have a hard time even finding a source on post c++11 ownership semantics that doesn't mention std::unique_ptr at all. So not knowing about std::unique_ptr seems like a good indication that your sources were c++98 based, making your "theory" a decade out of date.


Knowing that unique_ptr has no copy constructor is not a minutia. It is the whole reason for the type to exist in the first place.


Any sane interviewer in the field would accept "just don't do that" or "I think the answer is <X> but I would write a test program to make sure and/or check the spec about it"


These questions seem pretty good!

In the max heap question, I'd argue that add should take a parameter by value, rather than a const reference, since this prevents having to needlessly copy temporaries (and is less verbose than overloading for T&&).
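A sketch of that suggestion, assuming a hypothetical MaxHeap built on std::push_heap (only add/top are shown): taking the parameter by value means temporaries are moved in, while lvalues are copied exactly once.

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <utility>
#include <vector>

// Hypothetical max-heap sketch illustrating the pass-by-value argument.
template <typename T>
class MaxHeap {
public:
    void add(T value) {                     // by value, not const T&
        items.push_back(std::move(value));  // temporaries are moved, not copied
        std::push_heap(items.begin(), items.end());
    }
    const T& top() const { return items.front(); }
private:
    std::vector<T> items;
};
```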

List of iterator categories is missing input iterators.


My reading of their move assignment operator is that it is wrong, or at the very least unclear.

Basically they do std::swap on the query field. In that case you are relying on the destructor of the moved-from object performing the delete (if necessary). I would argue that this falls into the “too clever” spectrum, and it also kind of assumes that no one is going to refactor the destructor at some point in such a way that some other property will result in them incorrectly thinking that the query is null, and so leaking it.

Having now read through this, I agree with others’ assessment – these questions are questionable at best. What is it they’re proven to do?


Relying on the destructor of the moved-from object running feels idiomatic to me. That's what the destructor is there for, after all.


The destructor deleting its /own/ data, yes. But here we are moving a bunch of the fields (and so clearing them) while swapping one of them. That seems like bad coding, and doesn’t match any idiom I’ve seen :-/


I wasn't too sure on some of the easy questions, but this swap... Oh god, my "someday, this is going to shoot someone in the foot"-sense started to tingle.


Move assignment has nothing to do with 'clearing fields' or whatever; the point is that the moved-to value steals what it can and leaves the moved-from value in a valid but unspecified state. Notably, you should still be able to assign to and destroy a moved-from value.

It's an invariant of the 'DirectorySearchResult' class that query is non-null, as the destructor deletes it without checking it for nullity. The most straight-forward and efficient way to achieve both the stealing and the valid state is to swap the pointers.
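A stripped-down sketch of the pattern being described (the 'Owner' class is illustrative, standing in for the article's DirectorySearchResult): move assignment swaps the owning pointer, so the moved-from object's destructor frees this object's old resource later.

```cpp
#include <cassert>
#include <string>
#include <utility>

// Illustrative owner of a heap-allocated query string.
class Owner {
public:
    explicit Owner(std::string* q) : query(q) {}
    Owner(const Owner&) = delete;
    Owner& operator=(const Owner&) = delete;
    Owner(Owner&& other) noexcept : query(other.query) { other.query = nullptr; }
    Owner& operator=(Owner&& other) noexcept {
        std::swap(query, other.query); // steal the pointer; our old one is
        return *this;                  // freed when `other` is destroyed
    }
    ~Owner() { delete query; }         // note: deleting nullptr is a no-op
    const std::string* get() const { return query; }
private:
    std::string* query;
};
```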


> It's an invariant of the 'DirectorySearchResult' class that query is non-null, as the destructor deletes it without checking it for nullity.

Deleting a null pointer is a fully specified no-op in C++.


That's true. And I see the other example answers they have written do assume the pointer can be null, although the original class makes no indication of this.

To make the question clearer they should have added a function on the original class that checks (or doesn't check) the pointer before dereferencing it, thereby clarifying whether query being null is valid or not.


for vectors it makes std::move a very fast operation


Vector has a good move assignment which also works. Why swap when you can move?

Swap is additionally risky because someone can ADL-specialize it for their own purposes (which is the point), and suddenly your move assignment is broken.

So people work around it by adding a "using std::swap" declaration there...

And then you're calling all those destructors.


std::move on a vector zeroes out the source's element pointer.

e.g. (pseudo-ish code)

    operator=(vector&& foo) {
        delete [] elements;
        elements = foo.elements;
        foo.elements = nullptr;
    }

what the example code they have as an answer does is something morally equivalent to this:

    operator=(vector&& foo) {
        swap(elements, foo.elements);
    }

relying on the knowledge that, because it's being moved from, ~vector is going to be called immediately afterwards.

This strikes me as being "clever" in the "potentially confusing code, for no real reason" sense


Actually, that move assignment operator is not ideal. The foo parameter won't be destroyed immediately, but only at the end of whatever scope (if any) it was in at the call site.

The idiomatic copy and swap is like this:

    T& operator =(T x) // note pass by value
    { std::swap(*this, x); return *this; }

This acts as both a copy and a move assignment operator (or as a move-only assignment operator if T is not copy constructible).


Prefix each line with 4 spaces:

    T& operator =(T x) // note pass by value
    { std::swap(*this, x); return *this; }


Wow. Haven't gone near C++ for 12+ years. It done got weird.


This is why Go and Rust are coming into their own. C++ is a clusterf*ck of complicated design decisions (note, I didn't say bad, just complicated). It's certainly a powerful language for systems development but one has to wonder if it needs to be. Anytime I see a language overdo it on the features, my sense is that they need to step back and survey the landscape of developers before they decide to add more stuff.


That's exactly what I thought. Not used it professionally for a while (and then older compilers) and vaguely knew there were a lot of changes around memory management.

But was it that hard before? Constructors, destructors and operators were fine... weren't they?


I like that each question has a purpose. It's an interesting problem to determine a (small) set of questions that "cover" the (large) space of C++ knowledge in the sense that if someone can answer them all, they likely have a strong knowledge of the language.

> The interview candidate might think that returning a noncopiable object from a function is also a compiler error, but in this case it’s allowed, thanks to copy elision. You can ask the candidate under what conditions copy elision is performed.

This isn't copy elision. Copy elision isn't even permitted here because the returned expression is the name of a function parameter. The two-stage overload resolution that tries a move constructor first is defined in the same section of the spec, [class.copy.elision], as copy elision but the rules are different and it is always performed, even if the implementation doesn't do copy elision.
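A small illustration of the distinction (pass_through is a hypothetical name): the return expression names a function parameter, so copy elision cannot apply, yet the move constructor is selected automatically.

```cpp
#include <cassert>
#include <memory>

// Returning a named function parameter: copy elision is not permitted here,
// but the return expression is treated as an rvalue first, so the move
// constructor of std::unique_ptr is used (its copy constructor is deleted).
std::unique_ptr<int> pass_through(std::unique_ptr<int> p) {
    return p; // implicitly moved, not copied
}
```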


Writing safe, idiomatic C++ is hard.

Most C++ code found in college books is outdated and not idiomatic, driving students away from the language.

But it is true that C++ can also be your worst nightmare if you don't have discipline.

C++ will turn your bad coding habits into infinite frustration... and that is great. C++ is the proof you really like programming.


I wrote C++ for relatively complex embedded guidance/positional apps for 5 years. I think I did it reasonably well - definitely better than 80% of my colleagues. Since then (10 years or so) I've been C# almost exclusively.

This makes me hope I never have a need to go near that language ever again.


I remember a time when mastering Stroustrup and the two Scott Meyers books was enough to be considered an expert. Even as a Perl lover, the level of intricacy of modern C++ makes me very reluctant to choose it for a new project.


These questions are quite simple. I wouldn't use them to find good C++ developers.

However, it's quite hard to come up with good C++ interview questions. You want to test for fundamental knowledge of C++ and for practical experience developing code in it, while avoiding impractical cases that nobody in their right mind remembers.

Every interviewer has his own favorite C++ trick. The fact that an interviewee doesn't know or use that trick doesn't mean that his C++ skills are inadequate. It means that the interviewer's trick may not be as commonly used as he may think.


OK I'll bite.

1 - knowing this is 100% useless information, in reality I have a compiler to tell me that.

2 - Seriously? new/delete in 2018? In the very rare cases when I need a raw pointer, I call unique_ptr::release().

5 - The question is so-so, but the answer's wrong. Iterators don't need to point anywhere. You can make an iterator that generates values instead of referencing a container. Not necessarily a good idea, but still.

6 - Same. Just because something isn't specified in the C++ standard doesn't mean it's bad or should be avoided. E.g.

    const uint32_t n = 11;
    printf( "Eleven: %d\n", n );
is UB but the code is 100% fine to my standards. Stuff like OpenMP or intrinsics isn't specified either; does that mean we should avoid them?

8-9 - IMO template metaprogramming should be avoided. If you want your template data structure to behave differently for pointers and non-pointers, expose another template argument for that; there are too many corner cases, like handles (technically pointers, logically values) and smart pointers (the opposite).

10 - I don't think the sorted vector listed in the "possible answer" is a max heap.

Conclusion: if the OP claims the questions are "proven", I'm curious about the proof methodology.


> is UB but the code is 100% fine to my standards

Strongly disagree - I'd never let that through code-review. When living in C/C++ land, we should be extremely aware of the differences between the integer types, and handle their conversions carefully and explicitly.

Given that the "%d" format-specifier maps to `int`, not to `int32_t` (let alone `uint32_t`), that code really could wreck things if your platform has an `int` type which isn't 32 bit - https://stackoverflow.com/a/3168293/
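For completeness, one portable way to format a uint32_t is via the <cinttypes> macros, which expand to the correct specifier for the exact-width type on any platform (format_eleven is an illustrative helper, shown with snprintf so the result can be checked):

```cpp
#include <cassert>
#include <cinttypes>
#include <cstdio>
#include <cstring>

// Illustrative helper: formats a uint32_t without assuming int is 32-bit.
void format_eleven(char* buf, std::size_t size) {
    const uint32_t n = 11;
    std::snprintf(buf, size, "Eleven: %" PRIu32, n);
}
```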


About int vs sint32, why exactly you should be aware about the differences? The only cases when I encounter these differences, they look like broken C++ compiler: https://stackoverflow.com/q/50552347/126995

Similarly for %d and unsigned. Technically UB, practically works fine in 100% of cases. If you ask “but how do you know it works?” I’ll answer “because I read assembly”.


> why exactly you should be aware about the differences?

Because they aren't assured to be the same. They might happen to be the same on your current platform.

C++ isn't like Java. It's the programmer's job to be very aware of these sorts of platform-specific points of variation.

I don't see why you'd deliberately write undefined behaviour when there's no benefit to doing so. This isn't a purely academic concern - I've seen a malformed printf call wreak absolute havoc, presumably because it was stripping too much or too little data off the stack. Was a nightmare to track it down.

I don't want to think about which particular families of dangerous type-based mistakes are permissible in my codebase -- I'd rather write code which is correct.

> The only cases when I encounter these differences, they look like broken C++ compiler

If you write dangerous, non-portable code that invokes UB, you may well find that it works on plenty of real-world platforms. Your code is still wrong. For the specific case of assuming that `int` is 32-bit, you'll probably get away with this sort of clumsiness, yes, but 'ILP64' targets exist, even if they're rare.

Additionally, using `int` as a dangerous alias for `int32_t` is not expressive when it comes to readability.

> Technically UB, practically works fine in 100% of cases

UB can bite you in bizarre, unpredictable, intermittent ways, especially with optimising compilers.

Why argue that a bad habit isn't that bad, when you could just avoid doing it?

> If you ask “but how do you know it works?” I’ll answer “because I read assembly”.

That's a very fragile assurance of correctness. Unless you explicitly embed that assembly code, the compiler is free to generate different code at any point in future. Can you guarantee that you'll never change or upgrade your compiler, tweak its flags, or target another platform? Can you guarantee the optimiser will never change its mind and generate different code for that function?


> They might happen to be the same on your current platform.

Good, and I’m not writing the STL. I develop user-facing software, and I know the platforms I support. It just so happens that for the last couple of decades, all my platforms have had 32-bit integers, and I don’t expect that to change within my remaining lifetime.

> It's the programmer's job to be very aware of these sorts of platform-specific points of variation.

Exactly. I’m writing platform-specific code in C++, and I don’t expect the compiler to interfere. When I want to abstract these things away I don’t code in C++; there are better languages out there for such requirements.

> why you'd deliberately write undefined behaviour when there's no benefit to doing so.

Read the comments in the link I’ve posted. Because clang authors share your opinion on the subject, I had to write a couple of pages of boilerplate code, changing ~50 C++ functions for absolutely no value. That’s one of the reasons why I prefer MSVC, by the way.

> ILP64 targets exist, even if they're rare.

Why should I care? As you see from this example, caring is not free, it has costs.

> Additionally, using `int` as a dangerous alias for `int32_t` is not expressive when it comes to readability.

Additionally, typing int is 3 keypresses, typing int32_t is 8. Given how much code I type every day, and how many integers are there in my code, the difference is not insignificant. Even Nintendo did it better on CodeWarrior toolchain, with u8, u32, u64, etc.

> UB can bite you in bizarre, unpredictable, intermittent ways, especially with optimising compilers.

I think you’re demonizing UB. Different cases are different. Some will crash your app right away, the worst of them will silently give incorrect results with 0.001% probability, but the code from my original comment is harmless.

Also, C++ compilers aren’t perfect; a couple of times in my career I have found bugs in them. Many times, I have found bugs in standard libraries or OSes. If you think writing 100% standard-compliant C++ code guarantees correct programs, I have bad news for you.

> Can you guarantee that you'll never change or upgrade your compiler, tweak its flags

Do you have an example of specific compiler version, or flags, that will break my code above, when targeting i386, amd64 or arm32 platforms?


> 1 - knowing this is 100% useless information, in reality I have a compiler to tell me that.

A clever compiler might tell you that adding std::move will make the code compile. But if you do not understand what std::move does, it will lead to more bugs, as you might not expect the ptr to suddenly be empty in main.
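A minimal sketch of that pitfall (consume is a hypothetical sink function): std::move makes the call compile, but the caller's pointer is empty afterwards.

```cpp
#include <cassert>
#include <memory>
#include <utility>

// Hypothetical sink: takes ownership of the pointer it is given.
void consume(std::unique_ptr<int> p) {
    assert(p != nullptr); // the sink now owns the object
}
```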


I agree with what you're saying but they aren't asking what it does.

They're asking "What happens when?" giving code that doesn't compile.

Nothing happens.


Good way to weed out bad employers who treat programmers like janitors with no hope of career advancement.


Why is checking for self assignment necessary?


There are pathological cases where, due to layers of abstraction (in a data structure, say), an element gets moved to itself. The algorithm is unaware of this, but it happens anyway. If you fail to check for self-assignment, you will clear the "other" owning pointer/resource, resulting in a leak as well as an ill-formed object.
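A sketch of the failure mode, using a hypothetical 'Buffer' class: without the self-move guard, delete data would free the very buffer we are about to steal from ourselves.

```cpp
#include <cassert>
#include <utility>

// Illustrative resource owner with a guarded move assignment.
class Buffer {
public:
    explicit Buffer(int v) : data(new int(v)) {}
    Buffer(const Buffer&) = delete;
    Buffer& operator=(const Buffer&) = delete;
    Buffer& operator=(Buffer&& other) noexcept {
        if (this != &other) {   // guard: a self-move would delete our own data
            delete data;
            data = other.data;
            other.data = nullptr;
        }
        return *this;
    }
    ~Buffer() { delete data; }
    const int* get() const { return data; }
private:
    int* data;
};
```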



