
Most of my gaming now is honestly FreeCell on my phone, but if we want to talk about what "gamers" would consider games, the only time I really enjoy multiplayer is if I'm playing with actual friends.

I never really had much enjoyment playing with strangers on the internet. Most of them are much better at these games than I am, and it's just way too stressful. I also have some hesitation about trash-talking total strangers, but I'm perfectly fine doing that with close friends.


Yeah, it really bothers me that as a society we've decided that Ponzi schemes are actually fine as long as they have some loose "tech" branding associated with them. It seems like the startup strategy in Silicon Valley is "grow at all costs, worry about profit later, IPO, now it's the public's problem".

Of course someone could say "well they're not forcing you to buy the IPO'd stock!", and that's sort of true, but only in the strictest sense. My 401k, like I think nearly everyone's, is a mutual fund, and it invests in a little of everything. I also buy ETFs that do the same thing, because it's really the only way to preserve wealth, for better or worse. Even if I, for example, thought that WeWork's business model was unsustainable, I don't really have a way of "opting out" of buying their stock without effectively starting my own index fund, or having my cash lose value in an FDIC savings account.


> I don't really have a way of "opting out" of buying their stock without effectively starting my own index fund, or having my cash lose value in an FDIC savings account.

I've done that, out of necessity -- the US IRS hates foreign ETFs, and I live outside the US.

Market movers are almost certainly a Pareto 80/20 thing, and most of the growth of the stock market, or even the S&P, is in a handful of companies.

Find the prospectus of any local index funds and then start looking at their top 50 picks; cross-reference that with a few others. Pull the 20 that stand out the most.


>I don't really have a way of "opting out" of buying their stock without effectively starting my own index fund, or having my cash lose value in an FDIC savings account.

Some other approaches:

* Buy long-dated put options for companies you think are overvalued, so your overall portfolio (retirement account+personal trading account) has 0 exposure to stocks you don't like. If a stock's price goes down, exercise the option before its expiration date and profit.

* Assemble a portfolio of sector ETFs and exclude the tech sector. Or buy regional ETFs in regions with low tech exposure. (If you're American, I recommend buying ex-America ETFs for hedging purposes anyways, since your career already gives you significant exposure to the American economy.)

Granted, you will be paying higher fees with these approaches, but given how dominant tech stocks are, if you really believe they are significantly overvalued, I think you should be willing to pay those higher fees.


With an ETF you don't have to do any of this work. And generally the market tends to go up, not down, for most stocks, even the ones you think are no good.

You are not going to make much money shorting in general unless you have a nose for identifying the next Theranos et al.


Most (all?) retirement plans offer you some amount of choice in funds to invest in, and most companies of the sort you're describing are not included in many of the more popular indices. For example, WeWork was never in the S&P 500. Similarly, target date funds are one of the more popular investment options available by default and/or by recommendation in retirement plans. The first one I checked (Fidelity's Freedom Index) applies its U.S. allocation to large caps, which again means it does not include many of the companies you have in mind.

Fair enough, I guess if the company never makes it to the S&P500 or NASDAQ-100 you're mostly shielded from this stuff if you do the default funds. There are some questionable tech companies on the S&P, like Uber for example, but not as many and nothing as dumb as WeWork.

I have a lot of VTI stock right now, which if I understand correctly invests in basically everything on the American stock exchanges, though I guess an argument could be made that I should have known that dumb companies being included in there was always a risk.

Still, I don't have to like it, and I do think that a lot of these companies IPOing when they don't really have any way of actually making money is a problem waiting to happen.


VTI is a minimum ten year horizon type investment though, which is why it’s often praised by the Boglehead crowd.

Hold it for 10-30 years and it'll be up and to the right. On average 10% gains in a year, though like anything it always fluctuates.


I have absolutely no plans on selling my stock for the next ten years, but it still means that I'm investing in WeWork whether I like it or not.

I agree it's a good investment for long-term stuff, it's the fund that I recommend to everyone.


Yeah, I hear you. It definitely feels like there's been a shift toward investing based on sentiment rather than fundamentals, and there's certainly an argument to be made that's not a good outcome for society.

Personally, I feel like the bigger issue for individual investors is that in recent years companies IPO only at later stages, or not at all, so much of the more profitable part of the growth curve is now accessible only in the private markets.


> Of course someone could say "well they're not forcing you to buy the IPO'd stock!", and that's sort of true, but only in the strictest sense. My 401k, like I think nearly everyone's, is a mutual fund, and it invests in a little of everything.

Every 401k has multiple fund choices, so pick one that does not invest in recent IPOs.

In fact this should be very easy because most funds don't participate in recent IPOs! Depending on the 401k, you might not even have any fund that invests in recent IPOs.


I believe Warren Buffett was opposed to robo-trading strategies for this exact reason. If the bulk of the money is going to fund anything with a market cap greater than $X, then it is useful for VCs to pump a stock up to a $(X + Y) market cap to acquire funding via rebalancing.

From a VC perspective, you can exit as other funds rebalance into the stock at the inflated valuation.


The beauty of market cap weighting is that only entrance or exit forces a rebalance.

Would be quite interesting if WeWork et al. were schemes by the financial backers to capitalize on cap-weighting strategies. The folks involved would not have been opposed to this in the past.

I have to admit that I have an extremely visceral, negative feeling whenever I see a mutex, simply because I've had to debug enough code written by engineers who don't really know how to use them, so a large part of my previous jobs has been to remove locks from code and replace them with some kind of queue or messaging abstraction [1].

It's only recently that I've been actively looking into different locking algorithms, just because I've been diving in head-first to a lot of pure concurrency and distributed computing theory, a lot of which is about figuring out clever ways of doing mutexes with different tradeoffs.

I've gotten a lot better with them now, and while I still personally will gravitate towards messaging-based concurrency over locks, I do feel the need to start playing with some of the more efficient locking tools in C, like nsync (mentioned in this article).

[1] Before you give me shit over this, generally the replacement code runs at roughly the same speed, and I at least personally think that it's easier to reason about.


What are some examples of people using mutexes wrong? I know one gotcha is you need to maintain a consistent hierarchy. Usually the easiest way to not get snagged by that is to have critical sections be small and pure. Java's whole MO of letting people add a synchronized keyword to an entire method was probably not the greatest idea.

When, how, and why.

The biggest part of mutexes and how to properly use them is thinking of the consistency of the data that you are working with.

Here's a really common bug (pseudocode):

    if (lock {data.size()} > 0) {
      value = lock { data.pop() }
      lock { foo.add(value) }
    }
The issue here is that the size can change, the element pop returns can change, and foo can change in unexpected ways between each of the acquired locks.

The right way to write this code is

    lock {
      if (data.size() > 0) {
        value = data.pop()
        foo.add(value)
      }
    }
That ensures the data is all in a consistent state while you are mutating it.

Now, what does make this tricky is that someone well-meaning might have decided to push the lock down into a method.

Imagine, for example, you have a `Foo` where all methods operate within a mutex.

This code is also (likely) incorrect.

    value = foo.bar()
    if (value.bat()) {
      foo.baz(value)
    }
The problem here is exactly the same problem as above. Between `foo.bar()` and `foo.baz()` the state of foo may have changed such that running `foo.baz(value)` is now a mistake. That's why the right thing to do is likely to have a `foo.barBaz()` method that encapsulates the `if` logic to avoid inconsistency (or to add another mutex).

In Java, the most common manifestation (that I see) of this looks like this:

    var map = new ConcurrentHashMap();
    if (map.get(foo) == null)
      map.put(foo, new Value());
Because now you have a situation where the value mapped to `foo` could be any of 2 or more different objects depending on which thread's put wins. So, if someone is mutating `Value` concurrently, you have a weird, hard-to-track-down data race.

The solution to this problem in Java is:

    map.computeIfAbsent(foo, (unused)->new Value());
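Spelled out as a compilable sketch (the Counter class here is just an illustrative stand-in for a mutable Value, not something from any real codebase):

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.atomic.AtomicLong;

    class Counters {
        static final class Counter {
            final AtomicLong hits = new AtomicLong();
        }

        private final ConcurrentHashMap<String, Counter> map = new ConcurrentHashMap<>();

        // Racy check-then-act: two threads can both see null, both put,
        // and one thread's Counter (and everything recorded in it) silently disappears.
        void recordRacy(String key) {
            if (map.get(key) == null) {
                map.put(key, new Counter());
            }
            map.get(key).hits.incrementAndGet();
        }

        // Atomic: computeIfAbsent guarantees exactly one Counter per key.
        void record(String key) {
            map.computeIfAbsent(key, k -> new Counter()).hits.incrementAndGet();
        }
    }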

Composing locks is where Java usually blows up.

And computeIfAbsent can end up holding the lock for too long if the function is slow.


Composing locks isn't a Java problem - it's a fundamental abstraction problem with locks. This is one of the reasons why you usually reach for higher level abstractions than mutexes.

> And computeIfAbsent can end up holding the lock for too long if the function is slow.

How is this different from any other lock-holding code written anywhere?


I’m saying Java is exceptionally bad at this because every object is its own mutex.

And you end up having to trade single-core performance for multi-core by deciding to speculatively calculate the object. If there's no object to make, the critical section is very small. But as the object sprouts features you start smashing face first into Amdahl.


> because every object is its own mutex.

Not true in any practical sense.

> And you end up having to trade single core performance for multi core by deciding to speculatively calculate the object.

What is the alternative you suggest? If you care about having the predicate actually hold, and you also don't want to have to hold the lock while constructing the object, then you're going to end up in an optimistic-concurrency scenario where you check the predicate under lock, compute the object, and check again before swapping the value in. You may end up having to throw your work away when you discover the predicate changed. Java is no better nor worse at doing this than anything else.
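A rough sketch of that optimistic approach, using ConcurrentHashMap's putIfAbsent as the final swap (all of the names here are made up):

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.function.Function;

    class OptimisticCache<K, V> {
        private final ConcurrentHashMap<K, V> map = new ConcurrentHashMap<>();

        V getOrCreate(K key, Function<K, V> slowFactory) {
            V existing = map.get(key);
            if (existing != null) {
                return existing;                    // fast path, nothing to build
            }
            V speculative = slowFactory.apply(key); // built with no lock held
            V winner = map.putIfAbsent(key, speculative);
            // If another thread won the race, throw our work away and use theirs.
            return winner != null ? winner : speculative;
        }
    }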


> Not true in any practical sense.

This is going to put a damper on any further conversation.

Even with coarsening and elision, every synchronized method still acquires a lock on the enclosing object.


"every synchronized function"

Right. Synchronized is the key word here. The vast majority of code doesn't involve synchronized, and therefore the vast majority of objects don't have locks associated with them. That's quite important.

Those classes which do use synchronized were just going to create a ReentrantLock held for the duration of the call anyway, in which case it's all monitorEnter and monitorExit, regardless.

> This is going to put a damper on any further conversation.

Sadge.


> in which case it's all monitorEnter and monitorExit, regardless.

Oops, I need to correct myself!

ReentrantLock doesn't depend upon monitorEnter/Exit, but rather AbstractQueuedSynchronizer and LockSupport, which ultimately delegate to Unsafe methods like park/unpark and CAS (*compareAndSet*). Don't know why I had that confused in my head.

In any case, the point holds that "synchronized" as a language feature has mostly a zero cost for code that doesn't use it. It's a red herring when discussing modern Java concurrency.


Do people actually use `synchronized` methods in Java these days? It's been commonly described as an anti-pattern (for all the reasons discussed upthread here) two decades ago already.

The more useful question is whether it has been expunged from the JDK and common libraries. I think it's been more like 10-12 years since it really started being talked about outside certain subcommunities, and that's almost 20 years' worth of existing libraries.

OpenTelemetry is a fairly recent library. Even if you ignore some test fakes (where, let's face it, who cares), it still uses it in a few places, and uses lock objects in others. I don't see much evidence of recursion going on with the former. But that's how things always start and later there's running and screaming.


Some amount of legacy cruft is not unexpected, but it's sad that it can be seen in new code. In .NET, which has similarly problematic semantics with lock(), linters have been flagging lock(this) for ages.

I wonder where this patently bad idea of every object carrying its own publicly accessible mutex originated in the first place. Did Java introduce it, or did it also copy that from somewhere else? And what was the motivation?


Monitors came from Tony Hoare in the 70s and Java put an OO spin on them.

Can't attest to the history of the `lock` statement off the top of my head, but the API shape of lock and the Monitor.Enter/Exit methods it is desugared to looks like Win32's EnterCriticalSection and LeaveCriticalSection. Other Monitor methods like Wait and Pulse look like pthread's condvar and mutex functions.

.NET also has MethodImplOptions.Synchronized like Java does. However, the only place I have ever seen this attribute was on TextWriter.Synchronized implementation in CoreLib and nowhere else.

Java itself has `Lock` and `Condition`. In the end, most synchronization primitives do the same high-level actions and are bound to end up having similar APIs.

As for `lock(this)`, much like with many other historically abused techniques that have become frowned upon - it's not bad per se if you own the type, know that it is internal and will not be observed outside of the assembly it is defined in, and keep the critical section small enough. It's footgun-prone, but generally very few code paths will lock an arbitrary object instance at all, so it's something you see rarely enough that it has become "just write a comment why and move on" when using it. Of course this requires more deliberation, and it's easier to default to blanket policies that ignore context. It can be difficult to get people into a "use the appropriate tool" mentality.

.NET is also getting a separate `Lock` type, on top of all the existing synchronization primitives, to move a little further away from other legacy aspects of `lock`ing on object instances.


It's not Monitor itself that's problematic. It's that every object is implicitly associated with one, and anyone who holds a reference to an object can lock it. It doesn't matter if the type is internal - it can still be upcast to System.Object and leaked that way.

In practice this means that unless you can guarantee that you never, ever leak a reference anywhere, you don't know who else might be locking it. Which makes it impossible to reason about possible deadlocks. So the only sane way to manage it is to have a separate object used just for locking, which is never ever passed outside of the object that owns the lock.

And yes, this is absolutely bad design. There's no reason why every object needs a lock, for starters - for the vast majority of them, it's just unnecessary overhead (and yes, I know the monitors are lazily created, but every object header still needs space to store the reference to it). Then of course the fact that it's there means that people take the easy path and just lock objects directly instead of creating separate locks, just because it's slightly less code - and then things break. It's almost always the wrong granularity, too.

Thing is, I haven't seen this design anywhere outside of Java and .NET (which copied it from Java along with so many other bad ideas). Everybody else uses the sane and obvious approach of creating locks explicitly if and when they are needed.


Might want to move foo.add() out of the lock scope (assuming foo is a thread-private resource):

    value = nil
    lock {
      if (data.size() > 0) {
        value = data.pop()
      }
    }
    if (value) {
        foo.add(value)
    }

I digress, but my autistic brain couldn't help itself. Provided that it's a recursive lock, you could do this instead of adding a new method `foo.barBaz()`:

    foo.lock {
        value = foo.bar() // foo.lock within this method is ignored
        if(value.bat()) {
            foo.baz(value) // foo.lock within this method is ignored
        }
    }
Also, to catch this bug early, you could assert foo is locked in `value.bat` or something. But that may or may not be feasible depending on how the codebase is structured

This is one of the areas where Zig's combination of anonymous blocks and block-based defer really pays off. To create a locked region of code is just this:

    {
        mutex.lock();
        defer mutex.unlock();
        // Do mutex things
    }
It's possible to get this wrong still, of course, but both the anonymous scope and the use of `defer` make it easier to get things right.

Nothing can prevent poor engineering around mutex use though. I'd want a critical path for a concurrent hashmap to look like this:

    {
        shared_map.lock();
        defer shared_map.unlock();
        if (shared_map.getOrNull(foo) == null) {
            shared_map.put(foo, new_val);
        }
    }
Where the SharedMap type has an internal mutex, and a way to check it, and all operations panic if no lock has been acquired. It could have `shared_map.lockAndGet(OrNull?)(...)`, so that the kind of problem pattern you're describing would stand out on the page, but it's still a one-liner to do an atomic get when that's all you need the critical path to perform.

I don't think these invariants are overly onerous to uphold, but one does have to understand that they're a hard requirement.


This doesn't seem to add anything over and above what std::mutex in C++ or a synchronized block in Java offer?

Less than C++. defer() is strictly inferior to RAII.

Personally I've had issues with performance because of people using `synchronized` too liberally, where they end up locking a lot more code than necessary. I've also had issues with fairly typical circular dependencies causing deadlock, or at least pauses that aren't strictly necessary. Deadlock doesn't happen nearly as often as textbooks have led me to believe, but it can happen with sloppily written code.

In regards to Java, at this point I almost never use the `synchronized` keyword anymore and instead (if I can't easily map to some kind of queuing abstraction) use the ReentrantLock object, simply because of the ability to have lock acquisition time out, and because it lets you opt in to fairness if you'd like. It's not as pretty but it's more flexible, and as far as I'm aware it doesn't affect performance much.
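For what it's worth, the pattern I'm describing looks roughly like this (the class, the timeout, and the fairness flag are all just illustrative):

    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.locks.ReentrantLock;

    class Account {
        private final ReentrantLock lock = new ReentrantLock(true); // opt in to fairness
        private long balance;

        boolean tryDeposit(long amount) throws InterruptedException {
            // Unlike synchronized, we can give up instead of blocking forever.
            if (!lock.tryLock(500, TimeUnit.MILLISECONDS)) {
                return false;
            }
            try {
                balance += amount;
                return true;
            } finally {
                lock.unlock();
            }
        }
    }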

For the most part, though, in Java, you can get away without (explicit) locks by simply abusing the built-in data structures. I know they're using their own synchronization techniques behind the scenes, but I trust those to be correct more than some ad-hoc stuff I'd write as an engineer.


Java's take on monitors was definitely not great, and people were emulating mutexes with them even in the language's earliest days.

Still there are a lot of things that can go wrong with mutexes: forgetting to unlock in the case of exceptions, priority inversion, recursive locking, deadlock, needlessly high contention, etc.

Java has had an excellent concurrency runtime with abstractions that are typically a better fit than a bare mutex for over 20 years now (c.f. Doug Lea). Synchronized still exists, because of Java's excellent backwards compatibility.


I've always disliked that lock cyclic dependencies are discussed as a hierarchy when what it really comes down to is a linear order of locks.

The problem with lock _hierarchies_ as a concept is that a lock really should represent serialization of access to a particular pool of data, and should make no assumptions that it being held implies some other lock's domain is also held. The code that results when people do not maintain this kind of rigor is quite terrible, but hierarchies tend to steer people into thinking that way because they imply recursively taking locks.

Stated differently: locks should be taken and released in a fixed order - so locks are ranked - but there should not be a model where all lower-ranked locks must be held for a given lock to be taken. The lock protects its domain and the ordering of take and release is to prevent deadlock, but there's no requirement for completeness.


I feel similarly about C's "volatile" (when used in multithreaded code rather than device drivers). I've seen people scatter volatile around randomly until the problem goes away. Given that volatile significantly disturbs the timing of a program, any timing-sensitive bugs can be masked by adding it around randomly.

There seems to be a lot of voodoo beliefs around concurrent programming that lead to really bad things.

One of the best books I've read on it is Java Concurrency in Practice [1]. It does an excellent job of dispelling these occult beliefs and letting the reader know exactly when and how concurrency should be implemented. It is applicable to more languages than just Java, especially since many have adopted large parts of the Java memory model.

The worst things I usually find when reviewing concurrent code are people either not using locks when they should, using locks when they shouldn't, or having inconsistent data guards. I've seen people throw in random locks to guard local non-shared state, which is just crazy town: "Multiple threads are running this code, so I'm adding a lock".

I certainly prefer message passing over shared state. However, it's a little baffling to me why it's so hard for devs to grasp how to properly maintain shared state. Instead of just learning the basic rules, it gets couched in "It's just too hard to understand so keep adding things until it works".

[1] https://www.amazon.com/Java-Concurrency-Practice-Brian-Goetz...


> However, it's a little baffling to me why it's so hard for devs to grasp how to properly maintain shared state. Instead of just learning the basic rules, it gets couched in "It's just too hard to understand so keep adding things until it works".

Probably because most people aren't aware that there are basic rules to be learned. I'd imagine the typical experience is, you're very familiar with single-threaded code, and now you're trying to let other threads work with your data. You have heard that there are many pitfalls, and that there are special-purpose tools like mutexes to avoid those, but you look at the examples and find them mostly baffling. "Why do they perform these incantations for this data but not that data, or in this place but not that place?" So you come up with some weird mental model and move on with your life, never aware that there are underlying principles for maintaining shared state.

Personally, I didn't understand mutexes very well at all, until I started looking into what the atomic memory orderings from C++ et al. were supposed to mean.


Not too sure what the basic rules are and I'm not able to find any list of such rules.

For me the biggest challenge when sharing state is that the only benefit I can see for parallelism is performance, so if I'm not gaining performance there is no reason to use parallelism. If I use coarse-grained mutexes then I end up with code that is straightforward to reason about, but I lose the performance benefit and in fact can end up with slower-than-single-threaded code.

If I use very fine grained mutexes then I end up with faster code that has very hard to find bugs that happen on very rare occasion.

And then on top of that even if you do write correct fine grained locking, you can still end up with slow code due to cache behavior such as false sharing and cache coherence.

So ultimately I disagree that writing parallel code is simple unless you're willing to give up performance in which case you may as well just stick to single threaded code or use parallelism among independent data. Writing correct parallel software that shares state and actually delivers substantial performance benefits is incredibly difficult, and I am skeptical that there is a set of simple rules that one can simply read about.


> Not too sure what the basic rules are and I'm not able to find any list of such rules.

The actual rules are completely terrifying because they involve the physics of microprocessors. If you've watched Grace Hopper's lectures where she gives out physical nanoseconds (pieces of wire that are the same length as the distance light travels in a nanosecond, thus, the maximum possible distance data could travel in that time) you can start to appreciate the problem. It is literally impossible for the intuitive Sequentially Consistent model of how computers work to apply for today's fast yet concurrent processors. Light is too slow.

However generally people mean either Java's memory model or the C++ 11 (and subsequently 14, 17, 20) memory models used in languages such as C++, C and Rust. Those rules are less terrifying but still pretty complicated and the programming language promises to somehow provide an environment where these rules (not the terrifying ones) are all you need to know to write software. So that's nice.

It can be simple to write parallel code for a language designed to make that easy. Yes, even if there's shared data. It only starts to get trickier if the shared data is modified; so long as it isn't, we can make copies of it safely, and modern CPUs will do that without actual work by the programmer.


Are there popular languages that don't have memory models which make reasoning about concurrent programs easier?

A language with a notion of threading and shared state is going to have something akin to read/write barriers built into the language memory model to tame the beast.


I think tialaramex is overselling the complexity of concurrent memory models in practice, at least for end users. In reality, all modern memory models are based on the data-race-free theorem, which states that in the absence of data races--if your program is correctly synchronized--you can't tell that the hardware isn't sequentially consistent (i.e., what you naïvely expected it to do).

Correct synchronization is based on the happens-before relation; a data race is defined as a write and a conflicting read or write such that neither happens-before the other. Within a thread, happens-before is just regular program order. Across threads, the main happens-before that is relevant is that a release-store on a memory location happens-before an acquire-load on that memory location (this can be generalized to any memory location if they're both sequentially consistent, but that's usually not necessary).
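In Java terms, a minimal sketch of that pairing (a volatile write/read gives at least release/acquire semantics; the names here are made up):

    class Publication {
        private int payload;            // plain data, no synchronization of its own
        private volatile boolean ready; // the synchronizing variable

        void publish(int value) {
            payload = value;            // program order: happens-before the volatile write below
            ready = true;               // release-style store
        }

        Integer tryRead() {
            if (ready) {                // acquire-style load, pairs with publish()
                return payload;         // guaranteed to see the published value, no data race
            }
            return null;                // not published yet
        }
    }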

The real cardinal rule of concurrent programming is to express your semantics at the highest possible level of what you're trying to do, and find some library that does all the nitty-gritty of the implementation. Can you express it with fork-join parallelism? Cool, use your standard library's implementation of fork-join and just don't care about it otherwise.


C?

C has the same model as C++ from the same era, so C11 is the C++ 11 model, C23 is C++ 20 and so on.

It's C so you don't get a comprehensive set of bells, whistles and horns like the C++ standard library, but the actual model is the same. At a high level it's all the same as C++ 11, the details are not important to most people.


> Not too sure what the basic rules are and I'm not able to find any list of such rules.

I'd suggest the book in my original comment, Java concurrency in practice.

> If I use very fine grained mutexes then I end up with faster code that has very hard to find bugs that happen on very rare occasion.

I agree this is a real risk if you are doing fine-grained mutexes. But the rules are the same whether or not you want to follow them. If you have shared state (A, B, C) and you want to do a calculation based on the values of (A, B, C), then you need a mutex which locks (A, B, C). Certainly, that becomes a problem if you have calculations that just require (A, C) and you might want to avoid locking for B. In that case, you need a more complicated mechanism for locking than just simple mutexes, which is certainly easy to get wrong. When the (A, B, C) actions happen you have to ensure that the (A, C) actions can't happen at the same time.
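To make the coarse version of that concrete, here's a tiny sketch where one mutex guards the whole (A, B, C) tuple (the field names just mirror the example):

    class SharedState {
        private final Object lock = new Object(); // one mutex guards a, b and c together
        private long a, b, c;

        void update(long da, long db, long dc) {
            synchronized (lock) { // all three change as one unit
                a += da; b += db; c += dc;
            }
        }

        long calculation() {
            synchronized (lock) { // read a consistent snapshot of (a, b, c)
                return a + b + c;
            }
        }
    }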

This isn't a complicated rule, but it is one that can be hard to follow if you are trying to do super fine grained locking. It's even trickier if you are going to abuse the platform to get correct results.

But fine vs. coarse isn't the problem I'm referring to when I say people get the simple rules wrong. Rather than worrying about fine- vs. coarse-grained locking, I very frequently see code where mutexes and concurrency primitives are just peppered everywhere and haphazardly. We might call that super coarse grained.


> For me the biggest challenge when sharing state is that the only benefit I can see for parallelism is performance, so if I'm not gaining performance there is no reason to use parallelism.

Aside from performance, another very common reason is to not lock the UI from the user. Even in UI-less programs, the ability to abort some operation which is taking too long. Another is averaging out performance of compute tasks, even in the case where it would be faster to handle them sequentially. Without some degree of parallelism these things are not possible.

Consider a web server. Without parallelism every single request is going to completely lock the program until it's complete. With parallelism, you can spawn off each request and handle new ones as they come in. Perceived performance for the majority of users in this case is significantly improved, even on a single-processor system - e.g. you have 99 requests which each take a single second, and then one which takes 101 seconds. Total request time is 200 seconds / 100 requests = 2 seconds average per request, but if that 101-second request comes in first, the other 99 are stuck behind it for over 100 seconds, so the average is now > 100 seconds per request ...


>Aside from performance, another very common reason is to not lock the UI from the user.

This is not a good fit for parallelism; this is pretty much always accomplished using concurrency, i.e. async/await.


Assuming that the APIs & libraries that you need are async. Which is, unfortunately, not always the case for historical reasons.

> Not too sure what the basic rules are and I'm not able to find any list of such rules.

You may want to consider https://marabos.nl/atomics/ for an approachable overview that's still quite rigorous.


+1 for the Java Concurrency in Practice book. It's the book I recommend to nearly everyone who wants to get into concurrent programming. Goetz makes it a lot more approachable than most other books.

Goetz has come a long way. I knew one of the people who contributed to that book, and he was a little frustrated about having to explain things to Goetz that he felt he shouldn't have had to. The implication was he'd already had this conversation with some of the other contributors.

Sometimes though, the newbie is going to write the clearest documentation.


I loved concurrent code when I was starting out. I’d taken a pretty good distributed computing class which started the ball rolling. They just fit into how my brain worked very well.

Then I had to explain my code to other devs, either before or after they broke it, and over and over I got the message that I was being too clever. I’ve been writing Grug-brained concurrent code for so long I’m not sure I can still do the fancy shit anymore, but I’m okay with that. In fact I know I implemented multiple reader single writer at least a few times and that came back to me during this thread but I still can’t remember how I implemented it.


That's something I'm afraid of for my latest project. I did some concurrent stuff that wasn't 100% clear would actually work, and I had to write a PlusCal spec to exhaustively prove to myself that what I was doing is actually OK.

It works pretty well, and I'm getting decent speeds, but I'm really scared someone is going to come and "fix" all my code by doing it the "normal" way, and thus slow everything down. I've been trying to comment the hell out of everything, and I've shared the PlusCal spec, but no one else on my team knows PlusCal and I feel like most engineers don't actually read comments, so I think it's an inevitability that my baby is killed.


Maybe because I had a complete semester of multiprogramming at uni, I find it almost trivial to work in such environments, and cannot comprehend why there is so much mysticism and voodoo around it. It's actually pretty simple.

I feel like it's not terribly hard to write something that more or less works using mutexes and the like, but I find it exceedingly hard to debug. You're at the mercy of timing and the scheduler, meaning that often just throwing a breakpoint and stepping through isn't as easy as it would be with a sequential program.

I feel like with a queue or messaging abstraction, it can be easier to debug. Generally your actual work is being done on a single thread, meaning that traditional debugging tools work fine, and as I've said in sibling comments, I also just think it's easier to reason about what's going on.


In most cases (in a C or C++ compiler, not Java) it's just straight up incorrect to use volatile for something other than memory mapped I/O. Yes, POSIX guarantees that in a specific case (signal handling IIRC) it'll do what you meant if you use volatile int. That's nice, but more generally this is not the right choice.

Unfortunately Microsoft enshrined the situation (on Windows, on their compiler, on x86 and x86-64 only) that volatile primitive types are effectively atomics with Acquire-Release ordering. This is of course awkward when Microsoft tries to bring people to a non-x86 architecture and it can't just give them this because it would suck really hard, so finally they have to grow up and teach their developers about actual atomics.

!! Edited to fix: Previously this said Relaxed ordering, the ordering guaranteed by Microsoft is in fact Acquire-Release, hence it's expensive to provide for architectures where that's not the default.


When Java implemented volatile it didn’t do anything. Later when they fixed the memory model to deal with out of order execution they made it part of the fence semantics, and then it actually made some sense.

If you only use volatile in C without any atomic operations or fences, then your multithreaded code is certainly incorrect.

The "volatile" keyword should never be used for C/C++ multithreaded code. It's specifically intended for access to device-mapped addresses and does not account for any specific memory model, so using it for multithreading will lead to breakage. Please use the C/C++ memory model facilities instead.

(As a contrast, note that in Java the "volatile" keyword can be used for multithreading, but again this does not apply to C/C++.)


> Please use the C/C++ memory model facilities instead

I should point out that for more than half of my professional career, those facilities did not exist, so volatile was the most portable way of implementing e.g. a spinlock without the compiler optimizing away the check. There was a period, after compilers were aggressively inlining and before C11 came out, in which it could otherwise be quite hard to convince a compiler that a value might change.


The problem is that volatile alone never portably guaranteed atomicity nor barriers, so such a spinlock would simply not work correctly on many architectures: other writes around it might be reordered in a way that make the lock useless.

It does kinda sorta work on x86 due to its much-stronger-than-usual guarantees wrt move instructions even in the absence of explicit barriers. And because x86 was so dominant, people could get away with that for a while in "portable" code (which wasn't really portable).


There's a lot to unpack here.

TL;DR: The compiler can reorder memory accesses and the CPU can reorder memory accesses. With a few notable exceptions, you usually don't have to worry about the latter on non-SMP systems, and volatile does address the former.

The volatile qualifier makes any reads or writes to that object a side-effect. This means that the compiler is not free to reorder or eliminate the accesses with respect to other side-effects.

If you have all 3 of:

A) A type that compiles down to a single memory access

B) within the same MMU mapping (e.g. a process)

C) With a single CPU accessing the memory (e.g. a non-SMP system)

Then volatile accomplishes the goal of read/writes to a shared value across multiple threads being visible. This is because modern CPUs don't have any hardware concept of threads; it's just an interrupt that happens to change the PC and stack pointer.

If you don't have (A) then even with atomics and barriers you are in trouble and you need a mutex for proper modifications.

If you don't have (B) then you may need to manage the caches (e.g. ARMv5 has virtually tagged caches so the same physical address can be in two different cache lines)

If you don't have (C) (e.g. an SMP system) then you need to do something architecture specific[1]. Prior to C language support for barriers that usually means a CPU intrinsic, inline assembly, or just writing your shared accesses in assembly and calling them as functions.

Something else I think you are referring to is if you have two shared values and only one is volatile, then the access to the other can be freely reordered by the compiler. This is true. It also is often masked by the fact that shared values are usually globals, and non-inlined functions are assumed by most compilers to be capable of writing to any global so a function call will accidentally become a barrier.

1: As you mention, on the x86 that "something" is often "nothing." But most other architectures don't work that way.


I’m surprised that’s true. C borrowed very heavily from Java when fixing the NUMA situations that were creeping into modern processors.

The C/C++ memory model is directly derived from the Java 5 memory model. However, the decision was made that volatile in C/C++ specifically referred to memory-mapped I/O stuff, and the extra machinery needed to effect the sequential consistency guarantees was undesirable. As a result, what is volatile in Java is _Atomic in C and std::atomic in C++.

C/C++ also went further and adopted a few different notions of atomic variables, so you can choose between a sequentially-consistent atomic variable, a release/acquire atomic variable, a release/consume atomic variable (which ended up going unimplemented for reasons), and a fully relaxed atomic variable (whose specification turned out to be unexpectedly tortuous).


Importantly, these aren't types, they're operations.

So it's not that you have a "release/acquire atomic variable"; you have an atomic variable, and it so happens you choose to do a Release store to that variable. In other code maybe you do a Relaxed fetch from the same variable; elsewhere you have a compare-exchange with different ordering rules.

Since we're talking about Mutex here, here's the entirety of Rust's "try_lock" for Mutex on a Linux-like platform:

        self.futex.compare_exchange(UNLOCKED, LOCKED, Acquire, Relaxed).is_ok()
That's a single atomic operation, in which we hope the futex is UNLOCKED; if it is, we store LOCKED to it with Acquire ordering, but if it wasn't, we use a Relaxed load to find out what it was instead of UNLOCKED.

We actually don't do anything with that load, but the Ordering for both operations is specified here, not when the variable was typed.


> remove locks from code and replace with some kind of queue or messaging abstraction

Shared-nothing message passing reflects the underlying (modern) computer architecture more closely, so I'd call the above a good move. Shared memory / symmetric multiprocessing is an abstraction that leaks like a sieve; it no longer reflects how modern computers are built (multiple levels of CPU caches, cores, sockets, NUMA, etc).


If you are doing pure shared nothing message passing, you do not need coherent caches; in fact cache coherency gets in the way of pure message passing.

Vice versa, if you do pure message passing on cache-coherent hardware, you are not benefitting from the hardware-accelerated cache coherency and are leaving performance (and usability) on the floor.


That's good to hear! I am pretty removed from underlying hardware now, so it makes me happy to hear that better way of doing things is catching on even in low-level land.

> some kind of queue or messaging abstraction

Agreed. I find things like LMAX Disruptor much easier to reason about.


Even within Java, something like BlockingQueue will get you pretty far, and that's built into the runtime.
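The basic shape is a single worker thread that owns the mutable state, with everyone else just submitting messages to it; a rough sketch (all names made up):

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    class Worker {
        private final BlockingQueue<Runnable> inbox = new ArrayBlockingQueue<>(1024);

        void start() {
            Thread t = new Thread(() -> {
                try {
                    while (true) {
                        inbox.take().run(); // all real work happens on this one thread
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt(); // asked to shut down
                }
            });
            t.setDaemon(true);
            t.start();
        }

        void submit(Runnable task) throws InterruptedException {
            inbox.put(task); // blocks when the queue is full, i.e. built-in backpressure
        }
    }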

If I am allowed to use libraries, I end up using Vert.x for nearly everything. I think that their eventbus abstraction is easy enough to reason about, and even without using it simply using the non-blocking stuff it provides ends up being pretty handy.


Shared-nothing is typically The Right Choice in my experience as well. Maybe the odd atomic...

Message passing is just outsourcing the lock, right? For example a Go channel is internally synchronized, nothing magic about it.

Most of the mutex tragedies I have seen in my career have been in C, a useless language without effective scopes. In C++ it's pretty easy to use a scoped lock. In fact I'd say I have had more trouble with people who are trying to avoid locks than with people who use them. The avoiders either think their program order is obviously correct (totally wrong on modern CPUs) or that their atomics are faster (wrong again on many CPUs).


It's definitely doing synchronization behind the scenes, no argument here. BlockingQueues in Java seem to use ReentrantLocks everywhere. It's outsourcing the lock to people who understand locks better.

It just abstracts this detail away for me, and I personally trust the libraries implementing these abstractions to be more correct than some ad hoc thing I write. It's an abstraction that I personally find a lot easier to reason about, and so my thinking is this: if my reasoning is more likely to be correct because of the easier abstraction, and the internal synchronization is more likely to be correct, then it's more likely that my code will be correct.

I don't do super low-level stuff at all, most of my stuff ends up touching a network, so the small differences between the built-in synchronized structures vs the regular ones really don't matter since any small gains I'd get on that will be eaten the first time I hit the network, so a considerably higher ROI for me is almost always figuring out how to reduce latency.

If I did C or C++, I'd probably have different opinions on this stuff.


Every abstraction is about outsourcing the thing it's abstracting away. If using a queue solves your problem, you no longer have to deal with all the headaches that you can run into using a bare mutex.

> Message passing is just outsourcing the lock, right?

Kind of. If you can architect such that each channel has exactly 1 reader and 1 writer, you can send messages in a single direction with no locks. The basic idea is that you have a circular buffer with a start index and an end index. The writer can write an element and increment the end index (as long as the buffer isn't full, i.e. the end index hasn't wrapped around onto the start index; the check doesn't have to be atomic with the write), while the reader can just read an element and increment the start index (as long as the buffer isn't empty, i.e. the start index is still behind the end index). The index updates themselves need to use atomic operations (which are basically free when uncontested, which they will be as long as the queue has a few elements in it).
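Here's a rough sketch of that single-reader/single-writer ring buffer in Java; the AtomicLong lazySet/get pairing stands in for the ordered index updates, and the class is purely illustrative:

    import java.util.concurrent.atomic.AtomicLong;

    class SpscQueue<T> {
        private final Object[] buffer;
        private final int capacity;
        private final AtomicLong head = new AtomicLong(); // read index, written only by the consumer
        private final AtomicLong tail = new AtomicLong(); // write index, written only by the producer

        SpscQueue(int capacity) {
            this.capacity = capacity;
            this.buffer = new Object[capacity];
        }

        boolean offer(T item) {                  // call only from the single producer thread
            long t = tail.get();
            if (t - head.get() == capacity) {
                return false;                    // full
            }
            buffer[(int) (t % capacity)] = item;
            tail.lazySet(t + 1);                 // publish the slot only after it is written
            return true;
        }

        @SuppressWarnings("unchecked")
        T poll() {                               // call only from the single consumer thread
            long h = head.get();
            if (h == tail.get()) {
                return null;                     // empty
            }
            T item = (T) buffer[(int) (h % capacity)];
            buffer[(int) (h % capacity)] = null;
            head.lazySet(h + 1);                 // free the slot for the producer
            return item;
        }
    }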


> C, a useless language

You misspelled “fast as fuck” and “lingua franca of all architectures.”


> Message passing is just outsourcing the lock, right?

More or less, yeah. You can write an MPSC queue that doesn't explicitly use a lock (or even anything that looks like a lock).


> C, a useless language without effective scopes

Mutexes can be handled safely in C. It's "just another flavor" of resource management, which does take quite a bit of discipline. Cascading error paths / exit paths help.


The lawsuits that effectively legalized console emulation in the 90s were over commercial products, ones enabling you to play PlayStation games on your PC or Dreamcast.

There’s even a video of Steve Jobs showing off Connectix on the Ma..

https://en.m.wikipedia.org/wiki/Bleem!

https://en.m.wikipedia.org/w/index.php?title=Sony_Computer_E...


I wonder how far this will go. Is Nintendo going to send a cease and desist for the MiSTer project?

Probably not, pretty much all those cores would be for machines where patents have fully expired, but who the hell knows?


You know, one of these days I really need to sit down and play with some of these "legacy" languages, like Fortran or COBOL or Ada or APL; languages that have certainly fallen out of popularity but are still used in some critical places.

It does make me wonder about millions and millions of lines of Java out there; Java has more or less eaten the enterprise space (for better or worse), but is there any reason to think that in 30-40 years the only people writing Java will be retirees maintaining old banking systems?


COBOL is still there not because of COBOL itself, but because of vendor and platform lock-in. And, I guess, because of having a monolithic codebase/platform.

it’s not even esoteric and difficult, just a lot of it without much structure visible to you.


This is what people miss about COBOL. It's not like people are compiling COBOL and running it on Linux on an x86 box. They are running it on legacy operating systems (and hardware) which provide a different set of underlying services. It's a whole different planet.

Negativo friendo.

The mainframe is turning into a middleware layer running on Enterprise Linux. We've containerized the mainframe at this point, and I mean that directly - e.g. running JCL and multiple CICS regions, all in COBOL that originated on z/OS, now running in k8s on amd64.


I hope you're right, but many comments here on HN suggest their experience with mainframes is very different. z/OS and its predecessors provided so many services completely transparently to the application that a mainframe to modernity migration is doomed to fail unless it can completely emulate (or design around) the capabilities provided by the OS and other subsystems.

Even ignoring the needs of the super high-end customers like banks (e.g., CPUs in lockstep for redundancy), being able to write your app and just know that inter-node message passing is guaranteed, storage I/O calls are guaranteed, and failover and transaction processing are guaranteed just raises the bar for any contender.

K8s is wonderful. Can it make all the above happen? Well, yes, given effort. If I'm the CTO of an airline, do I want to shell out money to make it happen, risk it blowing up in my face, or should I just pay IBM to keep the lights on, kick the can down the road, and divert precious capital to something with a more obvious ROI? I think their "no disasters on my watch/self preservation" instinct kicks in, and I can't really blame them.

HN thread:

https://news.ycombinator.com/item?id=36846195


Like anything else, some places are awesome, some not. I’ve seen both. The worst ones are just like modern places with overcustomized PeopleSoft or SAP - except the blobs of off the shelf software were purchased 30 years ago by people long dead.

Other places stopped development 20 years ago and surrounded the mainframe with now legacy middleware. A lot of the “COBOL” problems with unemployment systems during COVID were actually legacy Java crap from the early 2000s that sat between the mainframe and users.


>If I'm the CTO of an airline, do I want to shell out money to make it happen, risk it blowing up in my face, or should I just pay IBM to keep the lights on

But that's the thing, we are at the point when "keep paying IBM" isn't the acceptable answer anymore.


I work on them full time (not doing application programming and so I can't really speak to COBOL) but this is mostly accurate as it relates to the environment.

A lot of these services are completely transparent to the application, but that doesn't mean they are totally transparent to the entire programming staff. The system configuration and programming is, all things considered, probably more complicated (and usually lower level; YAML certainly hasn't caught on in the mainframe world outside of the Unix environment) than something like k8s.

So that's where a lot of the complications come into play. Every application migration is going to necessarily involve recreating in Kubernetes or some other distributed system a lot of those same automations and customizations that decades' worth of mainframe systems programmers have built up (many of whom will no longer be around). And however bad the COBOL labor shortage really is, the shortage of mainframe assembly programmers and personnel familiar with the ins and outs of the hardware and system configuration is 10x worse.

It should also be noted that not everywhere that has a mainframe has this issue. There is a wide disparity between the most unwieldy shops and the shops that have done occasional migrations to new LPARs, cleaned up tech debt, and adopted new defaults as the operating system environments became more standardized over time. In the second case, where a shop has been following the more modern best practices and defaults and has fewer custom systems lying around, the amount of effort for a migration (but also, in a lot of ways, the motivation to take on a migration project) is lessened.

The cases where some company is just absolutely desperate to "get off the mainframe" tend to be cases where the tech debt has become unmanageable, the catch-22 being that these are also the cases where migrations are going to be the most likely to fail, due to all of the reasons mentioned above.


> I hope you're right, but many comments here on HN suggest their experience with mainframes is very different.

HN is not the place to seek authoritative experience with something like COBOL.


[I work as an SA.] There are many companies that don't have the original COBOL source code, only compiled objects which have been running for more than a few decades. How can you guarantee that they will run perfectly in k8s? Major companies can never take that risk unless you give them some insurance against failure.

There is a major drawback to this approach -- you need to have somebody who knows what they are doing. Total deal breaker in most of the places that have this problem in the first place.

"you need to have somebody who knows what they are doing"

That applies everywhere.

Your parent comment has managed to stuff a mainframe in a container and suddenly, hardware is no longer an issue. COBOL is well documented too, so that's all good, and so too will be the OS they are emulating. I used to look after a System/36 and I remember a creaking book shelf.

The code base may have some issues but it will be well battle-tested due to age. It's COBOL, so it is legible and understandable, even by the cool kids.

If you lack the skills to engage with something then, yes, there will be snags. If you are prepared to read specs and manuals and have some reasonable programming aptitude and so on, then you will be golden. No need for geniuses, just conscientious hard workers.

It's not rocket science.


It's not the point I'm trying to make. Yes, you can do fancy stuff like that, and de-mainframing COBOL to run it on k8s is the path I would personally choose if I had to deal with it. It sounds like a lot of fun and the sense of accomplishment from finally having it running should be great.

The problem is -- it's very smart and unique, while organizations that have this kind of a problem don't want to depend on unique set of skills of a few highly capable individuals. Everything needs to be boring and people have to be replaceable.

In this paradigm, vendor Java with AWS lock-in is a cost, but fancy in-house stuff with COBOL on k8s done by smart people is worse -- it's a risk.


The need applies everywhere; the difficulty of fulfilling it tends to be an order of magnitude higher in places that tend to run COBOL.

I'm working at one. You wouldn't believe the stories.


This is fascinating to me as an ex-mainframer that now works on a niche hyperscaler. I would love to learn more!

Will you let me know some of the names in the space so that I can research more? Some cursory searching only brings up some questionably relevant press releases from IBM.


Look up Micro Focus Enterprise Server and Enterprise Developer. They are now owned by Rocket.

I second this and know some of the folks who work on Enterprise Server. Good people. They have a partnership of some sort with AWS, and there are a bunch of decent docs around Enterprise Server on AWS.

Sounds like they're talking about running IBM Wazi on Red Hat OpenShift Virtualization. As far as I know, there isn't a System z-on-a-container offering, like something you install from a Helm chart or pull from an OCI registry. If it is the IBM I know, it's completely out of reach of most homelab'ers and hobbyists.

IBM Wazi As A Service is supposed to be more affordable than the self hosted version and the Z Development and Test Environment (ZD&T) offering. ZD&T is around $5000 USD for the cheapest personal edition, so maybe around $2500-3500 USD per year?


Yup, but the COBOL application doesn't know you've done that.

A different kind of cloud you can say.

Ha, yes. There is actually a pretty cool product made by a division of Rocket Software named "AMC": it takes a COBOL app running on an IBM system and deploys it to a whole set of services on AWS. There are some smart dudes at that shop.

Doesn't surprise me at all, somebody out there should be smart enough to make good money on that and not be very loud about it either.

We're running RM/COBOL on RHEL8 VMs powered by VMware. I don't work with it, I'm in a different IT area, but our COBOL codebase supports the lion's share of our day-to-day operations.

COBOL is still running where it's running because those old applications 1) work and 2) are very expensive to rewrite. Unimportant programs were abandoned. Simple ones were migrated to Java decades ago. The useful-and-complicated, and often badly designed, are what remain.

If you’re a bank, you run COBOL. Estimates are 95% of ATM transactions go through a COBOL program.

But it doesn’t have to run on a mainframe! We’re adding COBOL to the GNU Compiler Collection. www.cobolworx.com.


Ada is an order of magnitude more modern and sophisticated than your other examples.

I expect Ada will capture 0.05% of the market for the next 100 years.


Ada will probably go the way of the dodo as dependent types catch on. It's phenomenal how ahead of its time it was, and continues to be. Contracts are an absolute killer feature, and I see a lot of people who are otherwise very serious about memory safety scoff at logical safety, not understanding just how powerful that construct really is.

Fair, I guess the list was “languages that I know were popular at one point but I don’t know anyone really using now”.

Ada definitely does seem pretty cool from the little bit I have read about it. I’m not sure why it’s fallen by the wayside in favor of C and its derivatives.


It's easy to get lost in the modern way we look at compilers and toolchains, but it wasn't always like this. Free compilers basically didn't exist 30+ years ago. Certainly none of the free compilers were good. For the longest time, your only options for Ada compilers were priced at government contractor-levels (think $10k per seat... in the 80s). It's also an extremely complicated language, while C isn't. A single, moderately skilled programmer who can at least make their own FSM parser can write a reasonably complete C compiler in the space of a month. There's no hand-rolling your own Ada compiler. Even just complying with SPARK is a herculean task for a team of experts.

This is much the same reason I'm highly skeptical of Rust as a replacement systems language for C. A multitude of very talented folk have been working on writing a second Rust compiler for years at this point. The simplicity and ease of bootstrapping C on any platform, without any special domain skills, was what made it absolutely killer. The LLVM promise of being easily ported just doesn't hold true. Making an LLVM backend is outrageously complicated compared to a rigid, non-optimizing C compiler, and it requires deep knowledge of how LLVM works in the first place.


If GNAT (the GNU Ada translator) from NYU had come out 5 years earlier, Ada might have caught on with the masses.

Ada was mandated by the DoD for a bit. My understanding is that, in practice, this involved making a half-hearted effort in Ada, failing and then applying for a variance to not use Ada.

I actually met a programmer who worked on military jets. According to her, Ada is only used anymore for the older jets that were already programmed in it, and she worked in C++.

Military jets coded in C++. God help us all.

Most aerospace stuff is. The thing is, they have reams of very specific rules about how it's coded, how to verify that code, how to verify the compiler of that code, and how to verify the code output from that compiler. It's not an easy process to replace, but it's proven reliable just by all the commercial planes flying every day without falling out of the sky.

In theory, something like Rust could do the job instead, but they'd still have to verify the entire chain. Rust is for the rest of us to get something half as reliable as that while also being able to write more than two lines of code per day.


No need to be so dramatic. Shitheads will make software fail in any language. Memory "safety" will not help you correctly, and in a timely manner, calculate the position of flight controls, for example.

One can write reliable code, and I mean airtight, good enough for medical devices and nuclear deterrence, in basically any even vaguely modern language (think Algol-60 or later). It's simply a matter of disciplined design and running on hardware that's sufficiently predictable.

Yes, this is true, mainly due to a perceived lack of Ada programmers on the market.

Often, I'm sure, but there are still large code bases in Ada. It's a shame; it looks like a really great language I would love. But it's a chicken-and-egg problem. If only Mozilla had decided on Ada instead of Rust! :-)

Ada doesn't offer any safety for dynamic memory. In fact, Ada is now adopting Rust's approach with the borrow checker.

Great! Time to jump on the Ada bandwagon then! ;)

Ada is pretty cool, but I'm not sure it's any more modern than APL. Both are actively maintained and useful in different areas.

While we’re at it, modern Fortran exists and has its boosters. https://fortran-lang.org/

Fortran is used in NumPy, so it's not going anywhere for a while.

Ada has seen quite a few major features added to it in the past couple of decades.

The one shop that really used it is now open to C++ and I expect Rust. But their projects tend to last a long time: 3 generations have flown in one of them, etc.

Modern Fortran is actually fairly modern too. But most Fortran codebases aren't modern Fortran; they're Fortran 77. If you're lucky.

I agree that many modern Fortran codes aren't truly "modern" Fortran, but in my experience most codes have at least been ported to Fortran 90, even if they largely keep a lot of Fortran 77 baggage (especially the type system and indentation!). In all of my experience, I've really only encountered a single Fortran code being used currently that is actually Fortran 77 in the flesh. That said, I still think many Fortran codes would benefit from using more modern features, since so many are stuck in the past and are difficult to maintain for that reason.

The codebase I've been working in lately is mostly pre-77 FORTRAN, maintained as such for all this time. "Stuck in the past" is an apt description.

Perhaps I should have said "originally written in f77", and still look like it.

I program an Android app for a Fortune 100 company. Last commit where someone edited a Java file was last week.

Most of the new code from the past few years has been in Kotlin though.


This. Nobody wants to have the COBOL problem again, so the developer hiring money follows the programming language popularity market (with a certain regulatory approved laf ofc)

“laf” or “lag”?

Lag of course. Math doors only open once in 25 years, you know the drill.

That’s because it’s Android.

Fortran is pretty nice to write in if you are just writing numerical stuff. If I were just doing a pure numerical simulation, I would rather do it in Fortran than C++ or Python (without NumPy, which is just Fortran and C++).

I feel like APL is worth the experience, because it's such a different paradigm.

I've got a soft spot for it as well because I actually used it. At work. On a PC. In the 90s. My assignment was to figure out how to get data into it, for which I ended up writing a routine that operated on floating point numbers as vectors of 1s and 0s and swapped the bits around to convert from Microsoft to IEEE format. While wearing an onion on my belt, of course.
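For anyone curious what that kind of bit-twiddling looks like, here is a rough sketch of the idea in Python rather than APL. It assumes the 4-byte Microsoft Binary Format (MBF) single; the byte layout and exponent re-bias are reconstructed from memory, so treat it as illustrative, not a faithful copy of that old routine.

    import struct

    def mbf_to_ieee(mbf: bytes) -> float:
        # 4-byte MBF single: mantissa low, mid, high (top bit is the sign), then the exponent byte.
        m0, m1, m2, exp = mbf
        if exp == 0:
            return 0.0                                   # MBF encodes zero as exponent 0
        sign = m2 & 0x80                                 # sign bit sits on top of the mantissa
        mantissa = ((m2 & 0x7F) << 16) | (m1 << 8) | m0  # 23 mantissa bits, implicit leading 1
        ieee_exp = exp - 2                               # re-bias: MBF's 0.1m form -> IEEE's 1.m form (no subnormal handling)
        bits = (sign << 24) | (ieee_exp << 23) | mantissa
        return struct.unpack('<f', struct.pack('<I', bits))[0]

    print(mbf_to_ieee(bytes([0x00, 0x00, 0x20, 0x82])))  # 2.5

The APL original presumably did the same juggling with boolean vectors instead of integer shifts.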


A similar thing applies to SAP ABAP. It is like Java from a parallel world, where the accumulated cruft for maintaining backward compatibility is 3-4 times that of Java. It is also like a low-code/no-code environment where the language, the UI, the ABAP IDE, etc. are tightly coupled to one another. Like Java, it has continued to add more language features over time, but the legacy code using old constructs is still there in the codebases of many orgs.

Initially, and to some extent still now, it was verbose and, wording-wise, very similar to COBOL. Then, somewhere in the late 90s I guess, the OO paradigm wave came in, and it got "OO ABAP" with classes and methods. Now the cloud wave is influencing it, and ABAP has a new cloud flavor, "ABAP for Cloud", where most of the old constructs are not supported.


Fortran is not a legacy language.

Tryapl.org exists if you want to play with APL. John Scholes' Game of Life and other excellent videos (https://www.youtube.com/watch?v=a9xAKttWgP4) might spark your interest.

Other, newer array languages exist too; see https://aplwiki.com/wiki/Running_APL if you want to explore the current space.


>but is there any reason to think that in 30-40 years the only people writing Java will be retirees maintaining old banking systems?

It feels like we're getting into that space already.


Nah, not really. People just started replacing COBOL with Java, and employers are wise enough to hire people who are 30-40 years minimum from retirement.

It can also be upgraded in smaller chunks, and finding enough developers for the tool is an important metric that corporate looks at.

If anything, banks are actively optimizing for developer experience to make sure 60% of new hires don't run away in the first year. Banks are better at navigating those kinds of structural risks; they were just slow to realize such risks exist.

If you have an episode of existential anxiety because AI is eating your job, getting a union job at a bank is a way to hedge that particular risk.


> ...employers are wise enough to hire people who are 30-40 years minimum from retirement.

Um, oh yeah, the reason we're hiring 20-year-olds is because we want to ensure we have lifelong support for the new system we're writing. Not because they're cheaper, or because they're still idealistic and naive and will work long hours for foosball tables and snacks, or anything like that...


In a place where you can imagine having COBOL, working long hours is frowned upon, and being idealistic beyond personal integrity isn't a good quality either. Not saying such places aren't cheap, as of course they are. Being cheap is their exact point.

> employers are wise enough to hire people who are 30-40 years minimum from retirement.

Uhm... loyalty is punished, and workers need to change jobs to keep 'market rate' wages. So dunno about that.

I think it is more about that newcomers to the job market are easier to abuse.


> employers are wise enough to hire people who are 30-40 years minimum from retirement.

Well I hope they’re wise enough to not let any good employment attorneys catch wind because that’s blatantly illegal.


The problem with such laws is that they're trivial to avoid. Do they look old? I mean, you can't presume someone is thinking about age when they choose not to hire someone, but they definitely could be.

Discrimination is almost a "thought crime", meaning you can commit it entirely in your head. But the outcome is real. So it's very tough to spot, particularly when said discrimination also aligns with the most common societal biases.


It's not a requirement, but the demographics of the hiring outcomes are very visible.

I think Android makes a difference here. Sure, a lot of people are on Kotlin, but a lot aren't.

"is there any reason to think that in 30-40 years the only people writing Java will be retirees maintaining old banking systems?"

I don't think so. But it's pretty much guaranteed that a lot of the people who are complaining about COBOL today are writing systems that will be legacy in 30 years. And the generation of programmers then will be complaining about today's programmers.

Especially when I look at Node or Python with tons of external packages (.NET is going the same way), I don't see a good long-term future.


I wrote a small program in Algol 68 once. It was horrible because it didn't even have heap allocation in the language, so things you'd think of doing in C (e.g., tree data structures) just didn't work. That and all the compiler errors were pure numerical codes which you had to go look up in the manual (not provided). And forget about getting line numbers.

I am very much glad I wasn't alive at the time this was the state of the art.


I ported some Algol code to C years ago; despite being completely unfamiliar with Algol, I found the code very easy to understand.

Found the paper with original code here, it's for a Reinsch spline: https://tlakoba.w3.uvm.edu/AppliedUGMath/auxpaper_Reinsch_19...


You're probably thinking of Algol 60? Algol 68 definitely had heap operations; the sample code on Wikipedia even showcases them to build linked lists.

IBM offers a free COBOL plugin for VSCode and a nice tutorial with it.

I started programming in COBOL (circa 1990) and took the tutorial just for fun earlier this year.


Fortran is alive and well in science and engineering. The more modern standards are much nicer to work with, but largely backwards compatible with stuff written 50 years ago.

I’m not sure I’d choose to use Fortran, but at one point I had to maintain an app that had a Delphi UI and Fortran business logic. The Fortran, although spaghetti, was much less frustrating to work with.

> in 30-40 years the only people writing Java will be retirees maintaining old banking systems?

I kinda suspect that if Java is still around in 30 years, what we call Java will be - at best - vaguely recognizable.


I can't say whether Java as a whole will ever become the next COBOL, but Java 8 already is well on the way there.

Sure, but if people, for example, started to declare bankruptcy due to gambling addiction, doesn't that mean that taxpayers like you and I are effectively subsidizing these gambling institutions?

That goes beyond moralism; most people don't want to pay higher taxes. I think that it's good that we have a safety-net for people who get into impossible levels of debt, but that does mean that we have an interest in figuring out ways to minimize how often bankruptcy is actually invoked.


I mean, nominally, but honestly how many of us actually use Git in a distributed fashion? I think most of us treat Git more or less like Subversion with local committing and much better merging.

I think what the person was referring to was something more along the lines of a DHT (e.g. Pastry or Kademlia), IPFS, or (as they mentioned) Tor, where it can be truly leaderless and owned by everyone and no one at the same time.


I think what they meant was GitHub, not Git.

A common conflation these days, and one GitHub works hard to reinforce.


Sure, but the vast majority of people who use Git will centralize it, with GitLab, or Bitbucket, or SourceForge, even barring GitHub.

While the git program is allowed to be decentralized, pretty much everyone's workflow is decidedly not.


I thought that there was also speculation of people sharing ROMs directly on Discord, with the Yuzu admins being pretty ambivalent about the whole thing?

I only followed the story peripherally, so it's possible I'm wrong.


The Yuzu devs banned anyone even mentioning TotK in the Discord. However, they apparently had some private Discord or something where the Yuzu devs shared ROMs between themselves.

I agree. Take stock option contract trading out with it.

Stock options have legitimate uses. Like all tools, they can be misused.

Stock options can be used as a tool to hedge against risk.


I know, but I think that they're overwhelmingly used for glorified gambling.

It wouldn't bother me if it were just hedge funds, big corporations, or multibillionaires playing with contracts; it bothers me that regular people do it too, and the average John Doe simply doesn't have the same multi-million-dollar option-pricing algorithms that Goldman Sachs does. At that point, it feels like big corporations leeching money away from poorer people who don't know better.

Full disclosure: I do play with options occasionally, but I have mostly stopped, and I treat it like a casino or, as you mentioned, as a hedge against risk.
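For concreteness, here is a minimal sketch of the hedging use case in Python, with entirely made-up numbers (100 shares, a $45-strike put, a $2-per-share premium): a protective put floors the value of shares you already own, at the cost of the premium.

    def protective_put_value(price_at_expiry, shares, strike, premium_per_share):
        # Shares hedged with a put: the put lets you sell at `strike`, so the
        # position's value at expiry is floored there, minus the premium paid up front.
        stock_value = max(price_at_expiry, strike) * shares
        hedge_cost = premium_per_share * shares
        return stock_value - hedge_cost

    # 100 shares, hedged with a $45-strike put bought for $2/share.
    for price in (30, 45, 50, 70):
        hedged = protective_put_value(price, 100, strike=45, premium_per_share=2)
        print(f"at ${price}: hedged ${hedged:,} vs unhedged ${price * 100:,}")

In this toy setup the downside is capped at $4,300 for any expiry price at or below the strike, which is the "insurance" people mean when they talk about hedging with options.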


Most of the volume of options trading is done by institutions. By price it's mostly large traders paying each other to mitigate risk. Some smaller traders are getting chewed up in the process, but they are throwing themselves into the machine.

You make it sound like options exist for large traders to profit off individuals with access to less information. That's not how options are primarily used. That is however how sports betting is primarily used.

