
While the parent comment may be a little extreme, I would still agree soft skills have been taken too far, to the point of neglecting real engineering. Yeah, of course, don't be a dick to those around you. But it's one thing to be pleasant to coworkers and another to make the whole business about socially engineering each other to "rise up the ladder". The latter is, in my opinion, fake as fuck and quite toxic.

You can have a pretty normal temperament and be a decent engineer, but if you aren't positive 110% of the time, always sucking up to management, always accepting more work, always subtly gaming those around you, then apparently you're a "bad engineer" for not understanding the "business". I might be crazy but that shit is not normal. Go play the game if you want to, but it is not normal.


> There are plenty of very charismatic people who are not enjoyable to work with, and there are plenty of uncharismatic people who are very enjoyable to work with

Couldn't agree more. In fact, I would generally correlate being charismatic with being an asshole, particularly in any company or professional environment. There's surely overlap between charismatic and manipulative. IMO the best influential people are those who are genuinely likeable and respectful and have solid ideas, who don't need to "sell" their opinions based on some social hacking bullshit.


As someone who knows a bit about photography, I'm not sure this is a good product. It seems to me to be more of a gimmick/aesthetic appeal, especially given the price tag ($3000+ AUD?). Like Apple: "minimalist", opinionated, very expensive.

In terms of specs it doesn't look very impressive. I doubt it holds up against mainstream brands at that price point. The lack of physical buttons probably makes it less appealing to professionals. Touch controls are just not the same for quick actions based on muscle memory. I don't get the color modes. It advertises as "no fluff" but includes all these presets, which anyone halfway interested in editing will likely skip in favor of their own styles. In fact, if you look at all the software features and modes, it's quite standard, not really minimal at all. You're limited to Sigma lenses. I've never used a Sigma lens so I can't comment on quality. But I don't know why you would want to limit yourself, again particularly given the price point.

Overall this looks like an aesthetic sell rather than a good photography product. It's for people who want to appear cool with a sleek-looking camera that gives you the popular image "looks" out of the box. And who are willing to accept an inflated price tag for it.


The camera is using Leica L-mount[1] so you have the full range of Leica as well as Panasonic and Sigma lenses available. For the sort of people who invest in Leica glass, $3k AUD will not seem unreasonable and the minimalist aesthetic is right in the Leica wheelhouse.

[1] https://en.wikipedia.org/wiki/Leica_L-Mount


Ah ok, my bad, I didn't know about the L mount. I suppose if you are willing to pay that much... sure? I would still argue that it's not good value for money. Why would one take pride in purchasing an inferior camera at a premium?


I'm sure a few will land in the hands of some youtube "influencers" who will gush and hype about it...


> Why can't you just do this at the language level like any sane person?

The reality is C++ is a ridiculously complex and legacy-ridden language, with a difficult goal to preserve backwards compatibility. I haven't read the history on the keyword args proposals but I'm guessing they were declined due to a deluge of silly edge case interactions with C++ semantics that became too hard to work around. Like how struct designated initialisers have to be in order of member declaration due to object lifetime rules or something like that.

I would recommend trying to not be outraged at the state of C++ these days. It's time to stop hoping that C++ gets the nice features we need in any sort of reasonable manner. The reality of the language is not compatible with much niceness.


>Like how struct designated initialisers have to be in order of member declaration due to object lifetime rules or something like that.

It matters in which order sub-objects are initialized - if you have a class with members A and B, and B takes a pointer or reference to A in its constructor and does something with A, A had better already be initialized. Sub-objects are initialized in the order of their declaration, and allowing designated initializers in a different order would be confusing. In fact, we have exactly that problem in C++ with the member initializer lists of base classes and data members in constructors: they run in declaration order, not in the order written.
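A minimal sketch of that constructor initializer-list pitfall (type and member names made up for illustration):

  struct S {
    int a;
    int b;
    // Members are initialized in declaration order (a, then b),
    // not in the order written below, so a(b + 1) reads b before
    // b has been set. GCC/Clang warn about this with -Wreorder.
    S(int x) : b(x), a(b + 1) {}
  };

  int main() {
    S s(1);     // s.b == 1, but s.a holds an indeterminate value
    return s.b;
  }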


You still don't need to syntactically require same-order initialization; it's an easy job for a compiler to reorder things so that all dependencies work out - every language with order-independent declarations has to do that, for example.


You don't need to require same-order initialization, but allowing people to do different orders will be confusing when actions are reordered behind the scenes. Especially imagine if there are dependencies between the objects you're passing in.

  #include <iostream>

  struct B {
    B() {
      std::cout << "init B" << std::endl;
    }
  };

  struct C {
    C() {
      std::cout << "init C" << std::endl;
    }
  };

  struct A {
    B one;
    C two;
  };

Mixing up the order is confusing:

  A{.two = C(), .one = B()};
since `two` is initialized after `one` despite coming first in the list (in a comma-separated list `<expr a>, <expr b>`, one usually expects `expr a` to happen before `expr b`).

This case is a little contrived, but run the evolution forward: you can have members that depend on each other, or have complex initialization logic of their own. There, debugging the specific order of events is important.


Scala does it by pulling out the keyword arguments to variables so that your example would become

  {
    val x$1 = C()
    val x$2 = B()
    A(one = x$2, two = x$1)
  }

This maintains left-to-right evaluation order while allowing you to pass arguments in any order.
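In C++ terms, the hand-written equivalent would look something like this (a sketch reusing the A/B/C types from the example upthread; the tmp names are made up):

  C tmp1{};                       // argument for .two: written first, evaluated first
  B tmp2{};                       // argument for .one: evaluated second
  A a{.one = tmp2, .two = tmp1};  // members still initialized in declaration order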

There is probably some dark and forbidden reason why C++ can't do that.

ETA: That's basically what the post does.


In a general case you can't do that with separate compilation. [0]

  struct A { A(A*); };

  A* f(struct B *b);

  struct B {
    A a1;
    A a2;
    B(): a1(f(this)), a2(f(this)) {}
  };

  //in a different translation unit
  A* f(B *b)
  {
    return &b->a1; //or a2, we don't know
  }

[0] https://godbolt.org/z/xMb64ssYK


Your code snippet does not use the designated initializer feature that this comment thread is talking about.

Furthermore your code possibly contains undefined behavior depending on the behavior of the constructor of A.


As I stated above, it's the same kind of situation. See my other comment in this thread for an example with designated initializers.

>your code possibly contains undefined behavior

Only if f() returns a pointer to a2 (which is my point). Or did you mean that provenance matters even when f() returns a pointer to a1 and it gets passed to the constructor of a1?


Actually, provenance matters:

"During the construction of an object, if the value of the object or any of its subobjects is accessed through a glvalue that is not obtained, directly or indirectly, from the constructor's this pointer, the value of the object or subobject thus obtained is unspecified." [0]

Reading an unspecified value isn't UB, that's good, but I don't understand why the standard says 'unspecified' because it clearly can be indeterminate if a sub-object hasn't been initialized yet.

[0] https://eel.is/c++draft/class.cdtor#2


You can't easily take a pointer to another member in a designated initializer. It's still a problem for members that are implicitly initialized by their default member initializer, but that can be sorted out.


I think it is easy enough to be a potential footgun [0]:

  struct A { A(A*);};

  struct B {
    A a1;
    A a2;
  };

  void f()
  {
    B b{.a1 = A(nullptr), .a2 = A(&b.a1) };
  }

[0] https://godbolt.org/z/cGaxzh17T


That's a stretch to call "easy enough"; you are explicitly pointing the gun at your foot. That `b.a1` might not be explicitly UB, but it's quite suspect when b's lifetime hasn't started yet. Accessing members through `this` in constructors has a special allowance to not make that UB.


Good point, I should've used direct initialization in this example.

  struct A { 
    int x;
    A(A *a) { if (a) a->x = 42;}
  };

  struct B {
    A a1;
    A a2;
  };

  void f()
  {
    B b{.a1{nullptr}, .a2{&b.a1} };
  }
This code is valid.

Now if I change the struct definition to

  struct B {
    A a2;
    A a1;
  };
it will become UB. Luckily it won't compile because of the difference between the order of declaration and the order of designated initializers.

The alternative way is to always initialize the sub-objects in the order of the designated initializers (what do we do if not all initializers are provided?), but this would mean that the order of constructor calls wouldn't match the (reversed) order of destructor calls. Or we would need to select the destruction order dynamically based on the way the object was initialized.

https://godbolt.org/z/MPoqEhTvf


My gripe was not with the form of initialization of the elements, but with forming `b.a1` before `b`'s lifetime has started. Its lifetime doesn't start until all of the elements are initialized.


But do we need the lifetime of b to have started? Isn't it enough that a1's lifetime has started? Taking the address of a1 happens after that. [0]

Upd:

There is an interesting sentence in [class.cdtor] but I don't think it applies here because B has no constructors:

"For an object with a non-trivial constructor, referring to any non-static member or base class of the object before the constructor begins execution results in undefined behavior."[1]

[0] https://eel.is/c++draft/dcl.init.aggr#7

[1] https://eel.is/c++draft/class.cdtor#1


IMO one of the most disappointing things about C: it smells like it should be a straightforward translation to assembly, but it actually completely is not, because of the "virtual machine" magic the Standard uses, which opens the door to almost anything.

Oh you would like a byte? Is that going to be a 7 bit, 8 bit, 12 bit, or 64 bit byte? It's not specified, yay! Have fun trying to write robust code.


Abstract. It's an Abstract machine, not a Virtual machine.


The size of a byte is implementation-defined, not unspecified. Why is that a problem for writing robust code? It is okay to assume implementation-defined behavior as long as you are targeting a subset of systems where those assumptions hold, and as long as you check them at build time.
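For example, a minimal sketch of such a build-time check (C++ shown; C has _Static_assert, spelled static_assert since C23; which assumptions to assert is up to your project):

  #include <climits>

  // Fail the build outright on platforms that break our assumptions,
  // instead of miscompiling silently.
  static_assert(CHAR_BIT == 8, "this code assumes 8-bit bytes");
  static_assert(sizeof(long) == 8, "this code assumes 64-bit long");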


Ahem, it's specified to not be 7.


Luckily, little of it matters if you simply write C for your actual target platforms, whatever they may be. C thankfully discourages the very notion of "general purpose" code, so unless you're writing a compiler, I've never really understood why some C programmers actually care about the standard as such.

In reality, if you're writing C in 2025, you have a finite set of specific target platforms and a finite set of compilers you care about. Those are what matter. Whether my code is robust with respect to some 80s hardware that did weird things with integers, I have no idea and really couldn't care less.


> I've never really understood why some C programmers actually care about the standard as such.

Because I want the next version of the compiler to agree with me about what my code means.

The standard is an agreement: If you write code which conforms to it, the compiler will agree with you about what it means and not, say, optimize your important conditionals away because some "Can't Happen" optimization was triggered and the "dead" code got removed. This gets rather important as compilers get better about optimization.
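A sketch of the kind of "Can't Happen" optimization in question (whether the check is actually deleted depends on the compiler and flags):

  int read_or_default(int *p) {
    int v = *p;          // UB if p is null, so the compiler may infer p != nullptr...
    if (p == nullptr) {  // ...and then remove this "dead" conditional entirely
      return -1;
    }
    return v;
  }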


True, we are currently eliminating a lot of UB from the future C standard to avoid compilers breaking more code.

Still, while I acknowledge that this is a real issue, in practice I find my C code from 30 years ago still working.

It is also a bit the fault of users. Why do so many users favor the most aggressive optimizing compilers? Every user filing bugs or complaining in the bug tracker about aggressive optimization breaking code, every user asking for better warnings, would help us a lot in pushing back on this. But if users prefer compiler A over compiler B whenever it gives a 1% improvement in some irrelevant benchmark, it is difficult to argue that this is not exactly what they want.


Sadly, at least in the embedded space some of us still deal with platforms where the proprietary core vendor's compiler routinely beats open source compiler cycle counts by a factor of 1.5 to 3.

The big weak region seems to be in-order machines with smaller numbers of general purpose registers.

GCC at least seems to do its basic block planning entirely before register allocation with no feedback between phases.


In practice, you're going to test the next version of the compiler anyway if you want to be sure your code actually works. Agreements or not, compilers have bugs on a regular basis. From the point of view of a programmer, it doesn't matter if your code broke because you missed some fine point in the standard or because the compiler got it wrong, either way you're going to want to fix it or work around it.

In my experience, if you don't try to be excessively clever and just write straightforward C code, these issues almost never arise. Instead of wasting my time on the standard, I'd rather spend it validating the compilers I support and making sure my code works in the real world, not the one inhabited by the abstract machine of ISO C.


> In practice, you're going to test the next version of the compiler anyway

> In my experience, if you don't try to be excessively clever and just write straightforward C code, these issues almost never arise.

I think these two sentiments are what gets missed by many programmers who didn't actually spend the last 25+ years writing software in plain C.

I've lost count of the number of times I've seen comments (both here and elsewhere) claiming it should be almost criminal to write anything life-critical in C because it is guaranteed to fail.

The reality is that, for decades now, life-critical software has been written in C - millions and millions of lines of code controlling millions and millions of devices that are sitting in millions and millions of machines that kill people in many failure modes.

The software defect rate resulting in deaths is so low that when it happens it makes the news (See Toyota's unintended acceleration lawsuit).

That's because, regardless of what the programmers think their code does, or what a compiler upgrade does to it, such code undergoes rigorous testing and, IME, is often written to be as straightforward as possible in the large majority of cases (mostly because the direct access to the hardware makes reasoning about the software a little easier).


No, it has to be at least 8 and this is sufficient to write portable code.


C++ has made efforts to fix some of this. Recently, they enforced that signed integers must be two's complement. There is a proposal currently to fix the size of bytes to 8 bits.


Yes, which is excellent (although 50 years too late, I'll try not to be too cynical...).

The problem is that C++ is a huge language which is complex and surely not easy to implement. If I want a small, easy language for my next microprocessor project, it probably won't be C++20. It seems like C is a good fit, but really it's not because it's a high level language with a myriad of weird semantics. AFAIK we don't have a simple "portable assembler + a few niceties" language. We either use assembly (too low level), or C (slightly too high level and full of junk).


"falsehoods 'falsehoods programmers believe about X' authors believe about X"...

All you need to know about null pointers in C or C++ is that dereferencing them gives undefined behaviour. That's it. The buck stops there. Anything else is you trying to be smart about it. These articles are annoying because they try to sound smart by going through generally useless technicalities the average programmer shouldn't even be considering in the first place.


> these articles are annoying

You’re being quite negative about a well-researched article full of info most have never seen. It’s not a crime to write up details that don’t generally affect most people.

A more generous take would be that this article is of primarily historical interest.


> You’re being quite negative about a well-researched article full of info most have never seen.

I don't think this is true. OP is right:

> These articles are annoying because they try to sound smart by going through generally useless technicalities the average programmer shouldn't even be considering in the first place.

Dereferencing a null pointer is undefined behavior. Any observation beyond this is at best an empirical observation from running a specific implementation, which may or may not comply with the standard. Any article making any sort of claim about null pointer dereferencing beyond stating it's undefined behavior is clearly poorly researched and not thought all the way through.


I think you do point to a real issue. The "falsehoods programmers believe about X" genre can cover either a) actual things a common programmer is likely to believe, or b) things a common programmer might not even be knowledgeable enough to believe.

This article is closer to category b. But the category a ones are most useful, because they dispel myths one is likely to encounter in real, practical settings. Good examples of category a articles are those about names, times, and addresses.

The distinction is between false knowledge and unknown unknowns, to put it somewhat crudely.


You are right, it was overly negative, which was not nice. Read it as a half-rant then. These types of articles are my pet peeve for some reason.


Haha all of the examples in the article are basically "here's some really old method for making address 0 a valid pointer."

This isn't like timezones or zip codes where there are lots of unavoidable footguns - pretty much everyone at every layer of the stack thinks that a zero pointer should never point to valid data and should result in, at the very least, a segfault.


Useless, but interesting. I used to work with somebody who would ask: What happens with this code?

    #include <iostream>

    int main() {
        const char *p = 0;
        std::cout << p;
    }
You might answer "it's undefined behavior, so there is no point reasoning about what happens." Is it undefined behavior?

The idea behind this question was to probe at the candidate's knowledge of the sorts of things discussed in the article: virtual memory, signals, undefined behavior, machine dependence, compiler optimizations. And edge cases in iostream.

I didn't like this question, but I see the point.

FWIW, on my machine, clang produces a program that segfaults, while gcc produces a program that doesn't. With "-O2", gcc produces a program that doesn't attempt any output.


I think that reasoning about things is a good idea, and looking at failure modes is an engineer's job. However, I gather that the standard says "undefined", so a correct answer to what "happens with this code" might be: "wankery" (on the part of the questioner). You even demonstrate that undefined status with concrete examples.

In another discipline you might ask what happens when you stress a material near to or beyond its plastic limit. It's quite hard to find that limit precisely without imposing lots of constraints. For example, take a small metal thing, e.g. a paper clip, and bend it repeatedly. Eventually it will snap due to quite a few effects - work hardening, the plastic limit, and all that stuff. Your body heat will affect it, along with ambient temperature. That's before we worry about the material itself, which for a paper clip will be pretty straightforward ... ish!

OK, let's take a deeper look at that crystalline metallic structure ... or let's see what happens with concrete or concrete with steel in it, ooh let's stress that stuff and bend it in strange ways.

Anyway, my point is: if you have something as simple as a standard that says: "this will go weird if you do it" then accept that fact and move on - don't try to be clever.


"undefined" means "defined elsewhere".


LOL. No.

Some languages/libraries even make an explicit distinction between Undefined and Implementation-Defined, where only the latter is documented on a vendor-by-vendor basis. Undefined Behavior will typically vary across vendors, and even across versions from the same vendor.

The very engineers who implemented the code may be unaware of what may happen when different types of UB are triggered, because it is likely not even tested for.


So it's defined in the compiler's source code. God doesn't roll a die every time you dereference null. Demons flying out of your nose would conform to the C++ standard, but I assure you that it would violate other things, such as the warranty on your computer that says it does not contain nasal demons, and your CPU's ISA, which does not contain a "vmovdq nose, demons" instruction.


No. The compiler isn't the only component of the system that will determine what happens when you trigger UB, either. There is UB all the way down to hardware specifications.

I used to be one of the folks who defined the behavior of both languages and hardware at various companies. UB does not mean "documented elsewhere". Please stop spreading misinformation.


> No. The compiler isn't the only component of the system that will determine what happens when you trigger UB, either. There is UB all the way down to hardware specifications.

I don't think you know what undefined behavior is. That's a concept relevant to language specifications alone; it does not trickle up or down beyond what language specifications cover. It just means that the authors of the specification intentionally left the behavior expected in a specific scenario undefined.

For those who write software targeting language specifications this means they are introducing a bug because they are presuming their software will show a behavior which is not required by the standard. For those targeting specific combinations of compiler and hardware, they need to do their homework to determine if the behavior is guaranteed.


Hardware also has UB, but what happens is still dictated by the circuitry. The relevant circuitry is both complicated enough and not useful enough for the CPU designer to specify it.

Often they use the word "unpredictable" instead. The behavior is perfectly predictable by an omniscient silicon demon, but you may not be able to predict it.

The effect that speculative execution has on cache state turned out to be unpredictable, so we have all the different Spectre vulnerabilities.

Hardware unpredictability doesn't overlap much with language UB, anyway. It's unlikely that something not defined by the language is also not defined by the hardware. It's much more likely that the compiler's source code fully defines the behaviour.


"I used to be one of the folks who defined the behavior of both languages and hardware at various companies"

But not at all companies, orgs or even in Heaven and certainly (?) not at ISO/OSI/LOL. It appears that someone wants to redefine the word "undefined" - are they sure that is wise?


> LOL. No.

It actually does. You should spend a minute to learn the definition before commenting on the topic.

Take for example C++. If you bother to browse through the standard you'll eventually stumble upon 3.64, where it states in no uncertain terms that the definition of undefined behavior is "behavior for which this document imposes no requirements". The specification even goes to the extent of listing permissible kinds of undefined behavior, including the program executing in a documented manner characteristic of the environment.

To drive the point home, the concept of undefined behavior is something specific to language specifications, not language implementations. It was introduced to allow specific existing implementations to remain compliant even though they relied on very specific features, like particular hardware implementations, that once used may or may not comply with what the standard specified as required behavior and went beyond implementation-defined behavior.

I see clueless people parroting "undefined behavior" as some kind of gotcha, especially when they try to upsell some other programming language. If your arguments come from a place of lazy ignorance, you can't expect to be taken seriously.


I'm assuming it's meant to be:

  std::cout << *p;
?

I still think discussing it is largely pointless. It's UB, and the compiler can do just about anything, as your example shows. Unless you want to discuss compiler internals, there's no point. Maybe the compiler assumes the code can't execute and removes it all - ok, that's valid. Maybe it segfaults because some optimisation doesn't get triggered - ok, that's valid. It could change between compiler flags and compiler versions. From the POV of the programmer, the result is effectively arbitrary.

Where it gets harmful IMO is when programmers think they understand UB because they've seen a few articles, and start getting smart about it. "I checked the code gen and the compiler does X which means I can do Y, Z". No. Please stop. You will pay the price in bugs later.


> I'm assuming it's meant to be: [...]

Nope, I mean inserting the character pointer ("string") into the stream, not the character to which it maybe points.

Your second paragraph demonstrates, I think, why my former colleague asked the question. And I agree with your third paragraph.


Ah, I got confused for a minute why printing a character pointer is UB. I was thinking of printing the address, which is valid. But of course char* has a different overload because it's a string. You can tell how much I use std::string and std::string_view lol.

I reckon we are generally in agreement. Perhaps I am not the best person to comment on the purpose of discussing UB, since I already know all the ins and outs of it... "Been there done that" kind of thing.


>No. Please stop. You will pay the price in bugs later.

Indeed. It is called UB because that's basically code for compiler devs to say "welp, don't have to worry about changing this" while updating the compiler. What can work in, say, GCC 12 may not work in GCC 14. Or even GCC 12.0.2 if you're unlucky enough. Or you suddenly need to port the code to another platform for clang/MSVC and are probably screwed.


>I didn't like this question, but I see the point.

These would be fine interviewing questions if they're meant to start a conversation, even if I do think it's a bit obtuse from a SWE's perspective ("it's undefined behavior, don't do this") vs. the computer scientist's perspective you took.

It's just a shame that these days companies seem to want precise answers to such trivia. As if there's an objective answer. Which there is, but not without a deep understanding of your given compiler (and how many companies need that, on the spot, under pressure in a timed interview setting?)


> These would be fine interviewing questions if it's meant to start a conversation.

I don't agree. They sound like puerile parlour tricks and useless trivia questions, more in line with the interviewer acting defensively and trying too hard to pass themselves off as smart or competent instead of actually assessing a candidate's skillset. Ask yourself how frequently those topics pop up in a PR, and how many would be addressed with a 5min review or Slack message.


Not quite.

Trivially, `&*E` is equivalent to `E`, even if `E` is a null pointer (C23 standard, footnote 114 from section 6.5.3.2 paragraph 4, page 80). So since `&*` cancels out, that's not UB.

Also `*(a+b)` where `a` is NULL but `b` is a nonzero integer never dereferences the NULL pointer, but is still undefined behavior since conversions from null pointers to pointers of other types still do not compare equal to pointers to any actual objects or functions (6.3.2.3 paragraph 3) and addition or subtraction of pointers into array objects with integers that produce results that don't point into the same array object are UB (6.5.6).
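A minimal illustration of both points under the C23 rules cited above (the UB line is left commented out):

  int main(void) {
    int *a = nullptr;    // nullptr is a keyword in C23 (and C++)
    int *q = &*a;        // not a dereference: &*E is equivalent to E
    // int *r = a + 1;   // UB: arithmetic on a null pointer, even without a dereference
    return q == nullptr; // returns 1
  }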


I prefer: “Falsehoods programmers believe about X” articles with falsehoods considered harmful.


Are you writing plain C or C++ code, or are you writing, for example, C or C++ code for Windows? Because on Windows it's guaranteed to throw an access-violation structured exception.


No it's not, even with MSVC on Windows dereferencing a null pointer is not guaranteed to do anything:

Here's a classic article about the very weird and unintuitive consequences of null pointer dereferencing, such as "time travel":

https://devblogs.microsoft.com/oldnewthing/20140627-00/?p=63...


84% of 8 year olds? What the heck? What is an 8 year old even doing on social media? I knew many teenagers use social media, but kids under 10? Surely there's no way social media is a meaningful/useful experience at that stage of brain and social development.


8 year olds can watch TikTok videos for hours


BMI is a contributing factor, but so is neck and throat anatomy, regardless of weight. There are plenty of non-overweight people who have sleep apnea. And treatment sucks because so many doctors don't know anything past "you should lose weight".


> You want to let yourself think about things that you enjoy and motivate you INTRINSICALLY, not someone else because then you just keep needed to rely on their enthusiasm.

Legitimate question for debate: how does this differ for social media vs other media? Apart from social media being more addictive, all media is pushing someone else's thoughts on you, in some way. I can imagine old folks would've made similar arguments against TV and books.

(I ask this but still 100% agree social media sucks)


Social media is rapid-fire short cuts and videos.

Destroys our imagination and creativity. Instant satisfaction.

When we imagine things we are exploring a tree of possibilities and following the branches that give us satisfaction.


It could be argued (and I certainly believe so) that because of the spammy BS nature the internet has acquired, now is the most important time to create your own, genuine content. To fight back against the aggressive globalisation and commodification of the internet and, in turn, of socialisation. I believe we need much less "generic SEO-maxxing Instagram influencer with 10M followers", and much more "average person who put genuine thought and effort into something cool for a handful of consumers". No better time than now to consider where we want our future to lead.

