I'd like to post a purely positive, encouraging comment here :-)
Thanks for the article, and good job implementing an entire editor from scratch in C! That's impressive.
Also, as a very accomplished programmer who happens not to have done much recent UI programming, who has read quite a bit about React/Redux/Rewhatever, Elm, The Haskell School of Expression, etc., I find articles like this extremely helpful: you took a simple example, and clearly and straightforwardly demonstrated the underlying goals and principles of "Unidirectional UI". While I mostly "get it" already from reading articles on React-based architectures, clear reiterations of the idea are a huge aid in getting the concepts solid.
Very much this! As a person who recently completed a medium-sized C project, I can say the author's work is simply IMPRESSIVE on so many levels. It's true that there's no error checking and there's undefined behavior, but please note that the author is rather young and the project is in a usable state. All of that can be fixed with time as the author learns more about the language and programming in general. The author got good experience, and that's what counts here. Clearly, the author is willing to get some practical coding experience alongside the theoretical foundations he's learning at university. I can only support that, and sadly I didn't have this mindset while studying CS.
The way I see it, the author had an idea, sat down, put it into practice and implemented a functional prototype which can easily be expanded upon. This is probably how all these arcane *nix tools we use today came into existence. I don't imagine anyone wondering "should I use a rope or something more efficient? Let me just spend 2 months trying out different structures" while coding one of the first text editors. Also, what the author did would not be simple to achieve in ANY language, because no language out there has a ProduceVimLikeEditorWithMinimalFeatures() function.
No, it's an everyday syndrome around here for this type of thing; I've seen it frequently. A lot of people on HN are not receptive to someone posting a blog entry about their experiences re-implementing something for learning purposes if it isn't super advanced stuff. I don't really get it, but there are a lot of people who can't read a technical blog post from the perspective of someone with less experience than them trying to learn new stuff. So, you get a lot of comments about how what you did is dumb and wrong, even though you are really just trying to build something for fun, learn some stuff, and post about your experience.
For the record, I enjoy posts like this. This is how you learn and hone new skills. Everyone does this kind of thing (well, lots of people do), but for whatever reason those that write it up get attacked sometimes.
>I don't really get it, but there are a lot of people who can't read a technical blog post from the perspective of someone with less experience than them trying to learn new stuff.
I strongly suspect a lot of posters of negative comments really aren't any more capable, but are just showing off by picking holes in other people's work. No advanced project like this gets done perfectly first time. Anyone who's worked on a complex project, especially on their own, would know and appreciate that.
Might be a bit of selection bias too. The people who had a neutral-positive reaction may not feel like they have much to add beyond "thanks" or "nice job", which sometimes feel like platitudes.
Sorry, don't get me wrong - I'm not the author; I just wanted to know why HN had voted this article to #1, and then was even more confused by the tone in the comments. Hopefully, the productive comments will end up on top, eventually!
It's because the challenge he set himself isn't hard and he didn't do a particularly good job of it either (other comments have covered that so I won't reiterate it), yet he is soapboxing his experiences as if the topic isn't entry-level stuff. And I mean "entry-level" quite literally, as you learn about event-driven programming in high school or equivalent IT classes. This isn't something you need a CS degree to learn.
I mean kudos to the guy for trying something new but honestly I'd expect anyone working as a developer in IT to have learned this much before calling themselves a professional. And that includes Javascript devs since so much of frontend development is event-driven anyway.
Personally I get a little fed up by just how _bad_ many frontend developers seem to be these days. I'm not tarring everyone with the same brush as I've known some really amazing frontend devs too (though they usually consider themselves full stack). But I've lost count of the number of Javascript developers to whom I've had to explain HTTP status codes and the difference between GET and POST. It feels like frontend development is a place where people go if they like the idea of coding but can't be bothered to learn how to actually program nor any of the principles they're building their code on top of. Thankfully the topic author bucks that trend a little, but even in his case the lessons he is learning are painfully basic.
By the way it's a bank holiday (long weekend) in the UK so no Mondays blues from me. :D
Would you please not tear other people down like this on Hacker News, regardless of what you think of their project? Even if you're right on some underlying points, it degrades the community badly.
>I'd expect anyone working as a developer in IT to have learned this much before calling themselves a professional.
According to author's webpage[1], he's a "student at the University of Waterloo, class of 2021".
So probably age 17 or 18. He's not passing himself off as a "professional".
If the majority of HN readers feel this article was beneath their skill level and a waste of time, that's more on the submitter (mxstbr) for misjudging its value to the HN audience than on the article's author.
(Although at this time, the article does have 110 upvotes while simultaneously, the top-voted comment (poster mmjaa) is critical about its simplistic banality. So maybe the submitter got the pulse correct for half of the HN split-personality?)
The first year of SE is almost purely mathematics and electromagnetism. The first year programming courses are simple and are designed to simply get people up to speed who may have never programmed before (most already have).
The course which focuses on event-driven programming for user interfaces is 'CS 349 User Interfaces', which isn't until the 3A term. This is the course where students are exposed to MVC, the observer pattern, avoiding polling, etc. Second year is also where you get a lot of exposure to state machines.
As a note to the author of this post: Don't take any criticism that appears elsewhere in these comments too harshly. People who comment here often don't take much time to consider who actually made the contribution, and as a result their expectations are usually the same regardless of whether a piece of work was created by a tenured professor or a first-year university student. If you continue to write and work on projects like this, you'll turn out great in the long run. You'll also grow thick skin.
> According to author's webpage[1], he's a "student at the University of Waterloo, class of 2021".
> So probably age 17 or 18. He's not passing himself off as a "professional".
Fair enough, though I am still surprised he hadn't learned that in formal education already. I was taught about event driven programming before I got to university. The schools I went to weren't particularly good either so it's not like I had a "better" education or come from privilege.
Maybe they just don't teach event-driven programming any more? :(
Maybe this is where regional differences come out as most students in the UK have had some programming lessons before starting uni (albeit they often haven't done any recreational programming).
Those were my observations at the time anyway. Things may have changed and/or I might have introduced selection bias by hanging around with the nerdier kids. Who knows - too late to find out now.
Maybe this is where regional differences come out as most students in the UK have had some programming lessons before starting uni
At the level of professional-standard software architecture for UIs?
There have been some positive efforts in the UK over the past few years to introduce programming as a more serious subject at secondary school level (roughly, ages 11-18). I think it's great if kids get to write a simple mobile app to chat with their friends or make a simple game, instead of only learning things like how to use a word processor or graphics program. I think it's great if there are teachers in schools who themselves have the skills and knowledge to help the kids do that, too.
However, there are plenty of new starters entering the industry who have just graduated with degrees in CS or related subjects from good universities and still won't have been exposed to these kinds of ideas at more than a trivial level. I'd be surprised and impressed if I had a school-leaver turn up for an interview for a programming position with more than a passing knowledge of these subjects.
For what it's worth, I don't think the original author does themselves any favours by inflating their experience as they do. The way they describe themselves in the introduction on their web site seems rather out of proportion to their actual experience, and their CV screams "I read a guide to how to write impressive-sounding CVs and I'm trying way too hard". But as a card-carrying member of the "I thought I knew everything at the start of my career and tried way too hard" club, I think you just have to see that as a combination of genuine enthusiasm and understandable immaturity, and give them credit for having a go and sharing their experience. I'm happy to see someone at that stage in their development taking an interest and spending the time to experiment and discuss and learn, and surely we should all be encouraging the next generation of programmers along that path?
Lucky you; most of my peers in college had no formal programming experience from high school coming in. I did, but we barely even grazed OOP, and I come from a pretty decent high school. Be careful with anecdotes: your personal experience doesn't represent the general population.
I actually wasn't making an assumption, I was talking from my own experiences. Myself and my friends did learn about event-driven programming prior to university.
We were taught both Visual Basic and Pascal and taught the difference between event driven (VB) and procedural (Pascal). Though weirdly we were never taught about OOP, but it was a fairly average UK college and a pretty general computing course so there was a lot to cover in 2 years.
As I said elsewhere, I think this is clearly an article for a junior audience. But I think he did a good enough job of explaining his main concepts to his peers/readers. In fact, the whole "I coded an editor in C" is at best an irrelevant eye-catcher, as you can copy-paste about a gazillion implementations from anywhere. So because of that, any seasoned dev should probably not fall for such a "catchy" header in the first place... :-)
> It feels like frontend development is a place where people go if they like the idea of coding but can't be bothered to learn how to actually programme.
Maybe they go into it because of the incredible demand.
Ewig is written in C++, but in a style that is unlike what most C++ developers use: it is composed mostly of pure functions. The architecture is very much like that of Elm programs, and there is even a `store` class like in Redux.
Like the project from OP, it is about 2K lines of code. But it also supports asynchronous loading/saving, copy-paste (you can copy-paste a 1GB file instantly), very robust undo, dirty markers, UTF-8 (not perfect), etc.
The magic? Immutable vectors based on RRB-Trees. This is like the vectors in Clojure/Scala, but also supports log(n) slicing, concatenations, insertions---these operations are fundamental for a text editor, and I'd say, to most interactive software supporting big data models [1].
The code has been written in as simple a style as I could manage. I'd like to think that even non-C++ developers might be able to understand it. It might be especially interesting for web developers invested in Clojure, Elm or Redux and the single-atom architecture.
Probably one day I should go ahead and just write a blog post about it, or a tutorial or something. But so far I don't have a blog; I am not really a social media person. The code is still moving, also...
PS. And thanks a lot to @kostspielig for her help writing and reviewing the code! <3
Awesome project, you're inspiring me to pick back up a text editor project I had started to learn Rust a while back. Yours is much more featureful, however!
By the way, if anyone else is thinking about doing the same, HN had a great discussion on an article (also very useful) on the various data structures one can use for text editors: https://news.ycombinator.com/item?id=11244103
Thanks for the link and the encouragement! You have your Rust project somewhere online? I'd love to take a look at it.
I sometimes fantasize about rewriting Ewig in Rust, as a learning project (have been reading about it but didn't dare to write code in it yet). But I'd like to use RRB-Vector there too. Sadly, this data-structure is not really trivial to implement... (the concat algorithm in particular). Maybe one day I'll find the time :-)
It's not on a public repo, and I abandoned it half-done a couple of years ago while using Rust Nightly, so it will need some work I think to run on current Rust. I stopped because I got frustrated and confused with Rust's lifetimes, and just quit Rust for over a year.
Fortunately, a few months ago, after another attempt at Rust writing a simple ML-like language, something clicked and now I'm enjoying writing Rust. If you do attempt Rust, just expect to be frustrated for a while with Rust's ownership system, it's a new concept for almost everyone. I think it's kind of like monads in Haskell, you struggle with it until suddenly it "clicks" and then it doesn't seem very hard, but it then becomes hard to convey your understanding to someone else.
Rewriting Ewig in Rust, including the persistent tree-based immutable vector, would be an excellent exercise. Indeed, just starting with the RRB-Vector would be a great project. By the way, your immutable data structures library is quite impressive, it would be nice to have such a library in Rust. As far as I know, there is no such maintained library of functional data structures in Rust.
I tried a Rust one as well [1] (antirez-based). It's a decent amount of fun; I'd like to try implementing some of the more interesting features/data structures.
> In other words, UI programming is about mapping incoming events to a series of effects.
Sort of like programming with Monads.
This is one of the reasons why programming with Monads is so powerful: the programmer can explicitly compose and control effects.
Good on the author for digging in and learning new things. I'd recommend following up with reading the source to vi and doing some research on programming patterns of the day. One of the things we're pretty bad at preserving is context; it'd be neat to read about what the author discovers (ie: why was vi written the way it was? What did the original authors discover? etc).
I've tried to push pretty hard for writing rationales for any project we make, and we've also tried to adopt making records of "big decisions", including a tl;dr of the context. Unfortunately, these tend to take a backseat to "moving tickets from left to right on a board", and when prioritizing what gets on the board, writing things down for posterity simply never makes the cut. :o(
Those who cannot remember the past are doomed to repeat it.
And as I like to say: software is built in the image of the organization and processes that created it.
Even if the organization I work for doesn't openly endorse it, I still journal what I work on and spend time on documentation and specifications before we work on an important feature or project.
If we make a mistake or forget why a certain architectural decision was made I've found it immensely helpful to have context for those decisions. It helps make planning changes much easier.
> This is one of the reasons why programming with Monads is so powerful: the programmer can explicitly compose and control effects.
Yes, it's true, but lest anyone come away confused by this statement, monads are about much more than effects. I understand that's not what you're saying here, but my first months of dabbling in Haskell were marked by a deep misunderstanding that monads were used for side effects, and nothing more.
It's worrying how sometimes the most basic programming concepts can appear revolutionary to web developers. So much so that they've recently stumbled across the concept of a read-input-update-render loop, and are now trying, with frameworks like React, to contort the DOM tree just to get back to how UI has been programmed for decades.
There's a fairly old concept called "immediate mode UI", which even gets rid of the tree structure, and keeps the whole state on the user side, giving you full control over when elements are updated and where they are placed. These days it's used mostly in video games, since they already rely on a 60 FPS loop, and the UI complexity tends to be low.
If not for the necessity to use standard platform form elements, due to their various quirks and interactions with the system, I'd be doing UI programming only in immediate mode with custom controls, simply because of how convenient and simple it is.
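For anyone who hasn't run into the pattern, here's a rough, self-contained C sketch of what an immediate-mode control looks like (all names and the fake input are made up for illustration; real libraries differ in detail): nothing is retained between frames, and the caller re-emits and re-tests every widget each time around the loop.

    #include <stdio.h>

    struct ui_ctx { int mouse_x, mouse_y, mouse_down; };

    /* Stand-in for real drawing. */
    static void draw_button(int x, int y, const char *label) {
        printf("button '%s' at (%d,%d)\n", label, x, y);
    }

    /* Draw the button and report whether it was clicked this frame. */
    static int button(struct ui_ctx *ui, int x, int y, int w, int h, const char *label) {
        draw_button(x, y, label);
        return ui->mouse_down &&
               ui->mouse_x >= x && ui->mouse_x < x + w &&
               ui->mouse_y >= y && ui->mouse_y < y + h;
    }

    int main(void) {
        struct ui_ctx ui = { 15, 15, 1 };   /* fake input: mouse pressed over "+" */
        int volume = 0;
        /* One frame of UI, rebuilt entirely from application state. */
        if (button(&ui, 10, 10, 80, 20, "+")) volume++;
        if (button(&ui, 10, 40, 80, 20, "-")) volume--;
        printf("volume = %d\n", volume);
        return 0;
    }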
The rate at which you have to update, and what you have to update, doesn't change between retained and immediate mode. The only difference is that, with retained mode, the library you're using has, by design, all the knowledge it needs, so it can manage all of that for you. The whole point of using immediate mode is to get rid of this black box from your program, so that you can explicitly state what you want to happen, when, and in what order. It's only "a ton of code" if you write everything from scratch, so I fail to see how that's an argument.
    while (true)
    {
        processInput()
        updateState()
        paint()
    }
That will kill battery unless you write code to not update until input or app state changes, in which case you're building up to your own retained-mode system.
Yes, if you were concerned about battery life, you'd typically not update until input or app state changes.
As for building up to your own retained mode system: not really, at least not in my view. Whether you update at a constant rate, or update only in response to user input (or whatever), you're still doing the key things that differentiate immediate mode-type GUIs from retained mode-type GUIs: firstly, you're regenerating the entire UI from scratch each time; and secondly, you're handling the input events while doing the regeneration.
This is what makes immediate mode-type GUIs so easy to use in many cases. You never have to make sure each widget is linked up with whatever it relates to, and you never have to ensure the widget list is kept in sync with the list of things, and so on. You're always building the widget list from the (authoritative) list of things, and processing the related input events at the same time.
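A tiny sketch of that compromise, assuming ncurses (link with -lncurses): the UI is still regenerated from scratch on every pass, exactly as described above, but the loop parks in getch() until an event arrives instead of spinning at 60 FPS.

    #include <ncurses.h>

    int main(void) {
        initscr();
        cbreak();
        noecho();
        int count = 0, ch = 0;
        do {
            erase();                /* still rebuilding the whole UI... */
            mvprintw(0, 0, "keys seen: %d (press q to quit)", count);
            refresh();
            ch = getch();           /* ...but block here until there is input */
            count++;                /* update state only in response to the event */
        } while (ch != 'q');
        endwin();
        return 0;
    }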
Immediate mode is simply a way of designing your API so that there's no creation, destruction, callbacks, etc. It's a convenience to the programmer, because they can explicitly express what they want to happen at a given moment, and the application state is known at any given time. It's a general concept that can be applied to any code you write, and it most certainly doesn't forbid you from keeping track of dirty regions between frames, that would be ridiculous.
I'm currently working on a web IDE that uses the Canvas instead of the DOM. The Canvas is very nice to work with: it's high-level and hardware accelerated.
Storing every character in a doubly linked list seems like an extremely inefficient use of memory. Depending on your architecture, that could use 24 bytes per ASCII character (if the char in the struct is padded with 64-bit pointers). I imagine that vi wouldn't have gotten very far if it had been that memory constrained.
> Depending on your architecture, that could use 24 bytes per ASCII character (if the char in the struct is padded with 64-bit pointers).
Quite possibly much more than that because of allocator overhead. Main sources of overhead are rounding (i.e. the allocator might give you a 32-byte slot when you ask for 24 bytes) and bookkeeping (it needs to keep track of all allocations, which tends to require the storage of addresses and bitmaps .. in some kind of a data structure).
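To put numbers on it, here's a quick illustrative C snippet (assuming a typical 64-bit ABI): the two pointers force the node up to 24 bytes before malloc's own bookkeeping is even counted.

    #include <stdio.h>

    /* One doubly-linked-list node per character. */
    struct char_node {
        struct char_node *prev;   /* 8 bytes */
        struct char_node *next;   /* 8 bytes */
        char c;                   /* 1 byte, padded to 8 for pointer alignment */
    };

    int main(void) {
        /* Typically prints 24: roughly 24x the payload, before allocator overhead. */
        printf("%zu bytes per character\n", sizeof(struct char_node));
        return 0;
    }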
OK, but Atom isn't pitched as a "lightweight vi." If I want to ssh into an embedded device and view a 2MB log file, then I sure wouldn't want that to require 300MB of RAM.
Yes, a common way to store the text in editors is a gap buffer[1]. It allows efficient text insertion/deletion in one point and doesn't have too much overhead.
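A minimal C sketch of the idea (fixed capacity, no bounds checks, just for illustration): the unused "gap" sits at the cursor, so inserting there is O(1), and moving the cursor is a single memmove of the text in between.

    #include <stdio.h>
    #include <string.h>

    #define CAP 1024

    struct gap_buf {
        char buf[CAP];
        size_t gap_start;   /* first unused slot */
        size_t gap_end;     /* one past the last unused slot */
    };

    static void gb_init(struct gap_buf *g) { g->gap_start = 0; g->gap_end = CAP; }

    /* Moving the cursor slides the gap: only the text in between is copied. */
    static void gb_move_gap(struct gap_buf *g, size_t pos) {
        if (pos < g->gap_start) {
            size_t n = g->gap_start - pos;
            memmove(g->buf + g->gap_end - n, g->buf + pos, n);
            g->gap_start -= n;
            g->gap_end -= n;
        } else if (pos > g->gap_start) {
            size_t n = pos - g->gap_start;
            memmove(g->buf + g->gap_start, g->buf + g->gap_end, n);
            g->gap_start += n;
            g->gap_end += n;
        }
    }

    /* Once the gap is at the cursor, each insertion is O(1). */
    static void gb_insert(struct gap_buf *g, size_t pos, char c) {
        gb_move_gap(g, pos);
        g->buf[g->gap_start++] = c;
    }

    int main(void) {
        struct gap_buf g;
        gb_init(&g);
        const char *s = "hello";
        for (size_t i = 0; s[i]; i++) gb_insert(&g, i, s[i]);
        gb_insert(&g, 2, 'X');   /* middle insert: only "llo" moves */
        printf("%.*s%.*s\n", (int)g.gap_start, g.buf,
               (int)(CAP - g.gap_end), g.buf + g.gap_end);
        return 0;
    }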
Is the (pseudo?) code there for state mutation representative of a widely used pattern?
It looks like using a pattern from a functional programming book, but applied to mutable data. What I mean is this snippet:
    function handleAddTodo(state) {
      if (!state.newTodoField) {
        state.error = 'Empty field!'
        return state
      }
      state.todos.push(state.newTodoField)
      state.newTodoField = nil
      return state;
    }
This looks like a function newState = fun(oldState) but there is one huge problem: the new state is the same data as the old state? In a Rust-like type system this might be OK because you couldn't accidentally keep using the original state after sending it to this method, and in an immutable scenario there would be no problem because you created a new state instead of mutating. But this looks exactly like mutating the argument sent to a function.
So out of curiosity, do any modern UI frameworks in JS (this looks like that type of code) do this, or was it just an unfortunate example?
Vuex works like this: you actually mutate the state instead of creating a new one as is expected with Redux. But this mutation takes place in a single, controlled place (in the store mutation functions), so you get the benefit of centralising the state management code and avoid the cumbersome immutable transformation patterns which Redux requires.
That said, after having worked with Vuex, I much prefer the immutable newState = f(oldState) pattern used with Redux.
That's an unfortunate example. What you described as `newState = func(oldState)` is (conceptually) how practically all of this generation's state management libraries are modelled.
This is not an unfortunate example. In the absence of persistent data structures, it is inefficient or difficult to write the state transition functions as pure functions.
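For what it's worth, the newState = f(oldState) shape is expressible even in plain C when the state is small enough to copy by value (a rough sketch with made-up fields below); it's the big shared pieces, like the whole text buffer, where persistent data structures are what make the pure version affordable.

    #include <stdio.h>

    /* Small value-type state: cheap to copy, so the transition can stay pure. */
    struct editor_state { int cursor_x, cursor_y, dirty; };

    /* Pure transition: only the local copy is modified, the old state is untouched. */
    static struct editor_state move_cursor_right(struct editor_state s) {
        s.cursor_x += 1;
        s.dirty = 1;
        return s;
    }

    int main(void) {
        struct editor_state old_state = { 0, 0, 0 };
        struct editor_state new_state = move_cursor_right(old_state);  /* newState = f(oldState) */
        printf("old.x=%d new.x=%d\n", old_state.cursor_x, new_state.cursor_x);
        return 0;
    }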
Once you move to making mobile or desktop front end you appreciate how easy front end web dev really is.
Contrary to popular opinion, I think the DOM, HTML, and CSS are an awesome way of defining and styling a scene graph. When I did some C++ front end I missed them every day.
They are good if one likes to write boilerplate with lots of C style coding in a C++ application.
MFC always suffered from not being as high level as OWL or VCL, because the internal devs at Microsoft thought that Afx (the original implementation) was too high level.
By WTF, I think you actually mean WTL, which is based on ATL, full of templates and low level COM style programming. Another one that gives no pleasure using.
WinAPI was already out of fashion in the Win16 days; I was already using Turbo/Borland C++ with OWL back then.
The only reason to use Win16 directly was to wrap APIs not exposed to OWL.
Sadly due to how Borland got themselves mismanaged, OWL lost to MFC in the early 32 bit days, and only enterprise customers with deep pockets adopted C++ Builder with VCL, which allows for VB/Delphi style programming with C++.
The author builds a terminal-based text editor in C from scratch. It's very well written and the incremental diff-style format makes it easy to follow along.
It could happily do 5M random single character edits / second on my old laptop. It's wild what you can pull off with C. Much more memory efficient too (it uses about 4 pointers of overhead per 180 characters).
I'd love to see a version which supported traversal using more methods though - that library needs to support searching by line as well as character count.
I tried porting it to Rust a few times but unfortunately couldn't figure out how to port it without introducing indirection everywhere. Modern 'C replacement' languages don't let you do the same funky memory management tricks. (Like using a dynamically sized array at the end of a struct allocated in the same block.)
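For readers who haven't met that C trick, here's a rough sketch of the layout being described (field names and sizes are illustrative, not the actual library's): a C99 flexible array member lets the variable-length list of next pointers live in the same malloc block as the node itself.

    #include <stdlib.h>

    #define CHUNK 180

    struct node {
        char text[CHUNK];      /* characters stored in this node */
        int  height;           /* how many next pointers follow */
        struct node *next[];   /* C99 flexible array member */
    };

    /* One allocation covers the struct plus `height` next pointers. */
    static struct node *node_alloc(int height) {
        struct node *n = malloc(sizeof(struct node) + height * sizeof(struct node *));
        if (n) n->height = height;
        return n;
    }

    int main(void) {
        struct node *n = node_alloc(3);
        if (!n) return 1;
        for (int i = 0; i < n->height; i++) n->next[i] = NULL;
        free(n);
        return 0;
    }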
Implementing efficient datastructures in Rust often requires `unsafe` code. Unlike in C, though, you can use Traits (which provide both static and dynamic dispatch, depending on usage) to provide a safe, zero-cost interface.
Even using unsafe, I couldn't figure out how to pull it off in Rust without doubling my heap allocations.
Each item in the skip list has an associated randomly sized list of next pointers. In C I store that list at the end of the node data structure, and dynamically allocate the struct based on how much memory I need (basically, size of static array of characters + size of dynamic nexts array). In Rust in theory I can make my struct unsized to do the same thing. (Well, it pushes the nexts height out into the pointer but that's not too bad). But there doesn't seem to be any syntax to initialise the values in my unsized struct array itself once I've allocated it - even in unsafe.
If you can figure out how to simply initialize a custom heap-allocated unsized struct, I'll happily keep working on it. My conclusion at the time (and, still, after trying out tokio) is that Rust isn't mature enough to replace C yet.
Just to nitpick, Traits aren't (just) what make Rust fast. There's a lot of help from the compiler and the type system that allows the final compiled code to remove bounds checks in a safe manner.
Since there is only a passing mention of the gap-buffer datastructure in the post you linked to, I am leaving this here https://en.wikipedia.org/wiki/Gap_buffer
Super easy to implement and gets you a long way. I am a little surprised that it has not come up yet on a thread on editor implementations.
In fact the author of the blog post talks about it in his very next post. Here's his lede
Last week, I promised to continue my discussion of ropes.
I’m going to break that promise. But it’s in a good cause.
I am a novice in this area, so forgive me if I am missing something, but I have some questions regarding the two approaches in the article, specifically if there is a performance trade-off being made between them.
To my untrained eye, the initial version seems to be doing a minimal set of updates based on the input it receives, whereas the latter version will call the monolithic render procedure no matter how small a change is made to the state.
Does each of the sub-procedures in renderTodo! check internally whether the state actually changed (by keeping a copy of the previous state?), or do they just unconditionally re-render the entire UI? If the latter is true, then doesn't this supposedly better way of structuring code come with a large performance penalty? Or is it simply the case that this penalty is outweighed by the added testability and improved consistency of the state?
I believe that React and similar frameworks run a diffing procedure to find out which parts of the UI actually need to be updated (so a more minimal set of updates is actually applied). In the first version, however, it seems like the same diff occurred naturally as part of the code's structure - only a subset of the render* functions are called - and therefore the diff does not need to be computed at run-time.
Is this something worth being concerned about, or is letting the framework compute the diff on every state change not that big of a deal?
> I believe that React and similar frameworks run a diffing procedure to find out which parts of the UI actually need to be updated (so a more minimal set of updates is actually applied)
That's true. React diffs the virtual DOM and applies the changes to the actual DOM. However, the fastest approach is still to diff the actual data to be rendered and determine whether the render function needs to run at all (this is opt-in via React's shouldComponentUpdate() lifecycle method).
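The same "diff the data, not the DOM" idea looks like this in plain C terms (types and fields are hypothetical): remember the last state you rendered and bail out of the redraw when nothing the UI depends on has changed.

    #include <stdio.h>

    struct app_state { int cursor; int n_todos; };

    static struct app_state last_rendered = { -1, -1 };

    /* Stand-in for an expensive full redraw. */
    static void render(const struct app_state *s) {
        printf("render: cursor=%d todos=%d\n", s->cursor, s->n_todos);
    }

    /* Cheap comparison of the data first, in the spirit of shouldComponentUpdate(). */
    static void maybe_render(const struct app_state *s) {
        if (s->cursor == last_rendered.cursor && s->n_todos == last_rendered.n_todos)
            return;   /* nothing visible changed: skip the work */
        render(s);
        last_rendered = *s;
    }

    int main(void) {
        struct app_state s = { 0, 3 };
        maybe_render(&s);   /* draws */
        maybe_render(&s);   /* skipped: state is unchanged */
        s.cursor = 1;
        maybe_render(&s);   /* draws again */
        return 0;
    }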
What is the difference between Event Sourcing and the Command Pattern here? For decades the Undo/Redo functionality has been implemented by the Command Pattern it seems.
And looking at what Event Sourcing is, it looks a lot like the Command Pattern, whose purpose is to encapsulate into an object a side effect performed on some data structure. That object can then be executed at a later time, reverted, logged, etc...
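A bare-bones command-pattern sketch in C (types made up for illustration): each edit is an object that knows how to apply and revert itself, and an undo stack is just a history of those objects. Event sourcing keeps essentially the same log, but treats it as the canonical record you replay to rebuild the state.

    #include <stdio.h>
    #include <string.h>

    struct document { char text[256]; size_t len; };

    /* A command: the operation plus everything needed to apply or revert it. */
    struct command {
        void (*apply)(struct document *, const struct command *);
        void (*revert)(struct document *, const struct command *);
        size_t pos;
        char ch;
    };

    static void insert_apply(struct document *d, const struct command *c) {
        memmove(d->text + c->pos + 1, d->text + c->pos, d->len - c->pos);
        d->text[c->pos] = c->ch;
        d->len++;
    }

    static void insert_revert(struct document *d, const struct command *c) {
        d->len--;
        memmove(d->text + c->pos, d->text + c->pos + 1, d->len - c->pos);
    }

    static struct command make_insert(size_t pos, char ch) {
        struct command c = { insert_apply, insert_revert, pos, ch };
        return c;
    }

    int main(void) {
        struct document d = { "helo", 4 };
        struct command c = make_insert(3, 'l');
        c.apply(&d, &c);                    /* "helo" -> "hello" */
        printf("%.*s\n", (int)d.len, d.text);
        c.revert(&d, &c);                   /* undo: back to "helo" */
        printf("%.*s\n", (int)d.len, d.text);
        return 0;
    }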
I learned a lot about programming from reading the source code to it. It's a gem of clarity and design (and you could reasonably argue that many of my changes to it messed it up).
All I can say is "watch this kid!". He is showing the motivation, perspective, insight and intelligence to be able to do important work in the future. Great article, and the C code is worth reading to learn from. Thanks for sharing; looking forward to reading future blogs and seeing what you create.
> In other words, UI programming is about mapping incoming events to a series of effects.
So I guess the term `event-driven programming` is lost in time.
And I don't think the unidirectional pattern described is suited to C programming. When I see people use C, I think performance, and unless the compiler optimizes it away, the state-mutating function is going to cost a lot of unnecessary memory copying. And the worst offender is that unless you have a React-like UI diffing library, it's going to be a costly full-screen redraw for even the simplest update.
> unless you have React-like UI diff-ing library, it's gonna be a costly entire screen redraw event for the most simple update.
He's using ncurses, which actually does that - it only redraws the parts of the screen that change. It can require a bit of coaxing at times to ensure that things get redrawn properly, but I'm pretty sure he's good - I didn't see anything in his code that would force a full redraw every update.
Write a game or an editor with support for macros and see if it isn't faster to first mutate the state and then finally render, rather than doing thousands of ad-hoc screen updates, most of which won't be visible in the end anyway.
A cursory look into the linked GitHub pull request changes made me spot this, and I must admit that never in a million years would I have thought to erase the possible trailing newline of a string returned by getline with this:
    line[strcspn(line, "\n")] = 0;
I don't think I'm going to start, either, even if it is a nifty one-liner. Anyway, I've always found this kind of little project a good sport.
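For context, this is the kind of loop the one-liner usually lives in (getline() is POSIX): strcspn() returns the length of the prefix containing no '\n', so the assignment truncates at the newline, and when there is no newline it merely rewrites the existing terminator.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        char *line = NULL;
        size_t cap = 0;

        while (getline(&line, &cap, stdin) != -1) {
            line[strcspn(line, "\n")] = 0;   /* strip the trailing newline, if any */
            printf("[%s]\n", line);
        }
        free(line);
        return 0;
    }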
That's not correct. If the scanned string doesn't contain any of the search string's characters, strcspn will behave like strlen, and the snippet ends up doing effectively nothing.
However, the statement is inefficient in what it does (and even then, not the most obvious way to achieve that, which your comment also highlights in a roundabout way).
Nice article; a good summary of the purpose of a unidirectional UI approach like Redux's, although it doesn't get as much into persistent data structures as maybe it could have.
What's the point of the exclamation mark after certain function names? Is that a pseudocode thing? Is that supposed to be shorthand for an impure function? (If so, I'm not sure why `handleKeyboardEvent` doesn't have an exclamation mark.)
The more I read, the more I felt that the author was long-windedly discovering FSMs, only describing them using the language du jour - i.e. JavaScript-ecosystem-isms.
Is this the state of things today - that kids start off as highfalutin' "developers", whereby 95% of their apps are written by others (because: "npm -i <all the things>"), and then .. eventually, from the top-down, trickle through the stack learning "the things", giving it all a new fancy name, and blogging about it? Because to me, this seems more like devolution at work in the computer science industries, not some kind of radically derived insight.
I'm not trying to be overly critical, but for a lot of us, "discovering that C-based apps are, weirdly, similar to what us React-ites are doing" sure seems like a step backwards from real stack competency.
I don't think the article was that bad for "junior-JS-to-JS-dev talk", which I think is clearly the main audience. Yes, I'd agree, for any seasoned dev, an article about the basics of functional programming, state machines, and the basics of unit testing and API design is a tad boring ;-). But that's no reason to get that upset about an article that clearly wasn't written for such an audience. Rather, such a response is not exactly encouraging those "kids" to learn complex programming concepts/languages in the first place, I think, and instead seems a bit offensive, to be honest.
> such a response is not exactly encouraging those "kids" to learn complex programming concepts/languages in the first place, I think, and instead seems a bit offensive, to be honest.
Yes, exactly. I mean, this dev is someone who clearly is working toward a deeper level of understanding of CS concepts, and here some people seem to be criticizing him for that. It's frustrating.
Hmm; I guess that means a solution would be to personalize HN rankings on the similarity of voters' earlier votes on other articles with your own voting behavior (as a proxy for article preference...) :-)
Some of us are just tired of people who don't respect the people and work that came before them.
There was a guy at the second-to-last place I worked who didn't know who Alan Kay was! What's worse, he wasn't ashamed of his ignorance.
At the last place I worked someone was refurbishing an old "expert system" written in CLIPS [1], he asked my advice and I said, "Consider using Prolog." He said, "What's Prolog?"
These are guys getting paid six figures and they do not care to educate themselves on the background of their chosen field. It's lame.
Could you imagine a physicist who didn't know who Newton was?
> The more I read, the more I felt that the author was long-windedly discovering FSM's
Then you missed the main point of the post, which was event flow, not state management. I'm wondering if sometimes older devs (I'm one) read these posts in anticipation, just waiting to find something that they recognize from elsewhere, just so they can say "aha, that's just X, nothing more!". In this case, you did recognize that yes, you can use a FSM to store application state, but the point of the article was much more event flow, specifically unidirectional UI architecture. It's not a new thing, by any means, but I thought it was a well-written post on how she/he expanded their understanding of this pattern.
> giving it all a new fancy name, and blogging about it
Are you suggesting that in this case, unidirectional UI flow is just another name for a FSM?
I guess technically everything physical is an FSM with enough states, but that wouldn't be a particularly strong statement. That being said, I'd argue a lot of such posts are of that form (one big idea that's well-known in TCS) modulo some amount of additional cool/novel/well-exposed/interesting stuff I didn't know before (the latter of which is what I really read the post for).
Well, as someone else said, just about anything physical could be modeled as a state machine. That doesn't mean we use the term state machine to describe just about everything physical.
Are you seriously suggesting we abandon terms like "unidirectional UI flow" and just describe those architectures as state machines? You would be missing the most important part of the term, which is the flow of events. State machine says nothing about that.
I mean, if you're reducing the event flow of a UI to a state machine, why use the term MVC, right? It's just a state machine at heart, right? A lexer? Oh, that's just a state machine, forget about the term lexer. The various Markov models? Oh, just a state machine. etc. etc.
>just about anything physical could be modeled as a state machine. [...] Are you seriously suggesting we abandon terms like "unidirectional UI flow" and just describe those architectures as state machines?
That lumping-versus-splitting tension is an invisible underlying theme of many programming debates, and the participants often don't realize it's happening.
(Personally, I like "splitting" for communication of nuance to others but I also like "lumping" to understand that many things are variations of an underlying principle. E.g. a "file system" is a "database" which is lumping but if I want to communicate to others, splitting is better and I'll call it the "Mac HFS file system" instead of "Mac database" which would just confuse the hell out of everyone.)
I do understand the semantic disagreement we're having here, but most of all I'm upset (really upset this morning) about the tone he/she and others here have toward someone who is exploring new CS ideas and techniques and learning things, for God's sake. And also, taking the time to write up what they learn, which is a step farther than I get on most weeks. It just makes me sad to think that "Hacker News" is so down on someone who seems to be the prototypical curious hacker who is just starting out his programming career.
>I'm upset (really upset this morning) about the tone he/she and others here have toward someone who is exploring new CS idea [...] It just makes me sad to think that "Hacker News" is so down on someone who seems to be the prototypical curious hacker
I totally understand your frustration.
This 17 or 18-year-old[1] didn't meet mmjaa's standards[2] to write about programming and apparently, many HN'ers feel the same way.
Maybe that's how the majority of HN community wants it to be. I honestly don't know.
[2] a common "get off my lawn!" type of sentiment: "To me, the "Full Stack" developer is simply someone who cares about whats happening under the hood. The "millennial developers" simply don't." -- https://news.ycombinator.com/item?id=14990097
The point is, this isn't a "new CS idea and technique" - in fact, this technique is as old as computer UI's themselves.
I have no idea how old this developer is, my only issue is that there is a re-invention of terminology and a whole new ontology being defined here, for something that has been a stock-standard CompSci concept for decades.
It's one thing to call me out as a grumpy old fart "not on my lawn"-ist - but it's another thing to call out people who constantly re-invent the wheel and give it a new coat of paint, too. That is really my only motivation: to indicate that there is very little to be gained, productively, from ignoring the lessons of the past, "re-discovering" something, then slapping a new label on it. This doesn't do our industry any good - yet the practice is rampant. It seems, every generation of CompSci graduates or so, someone gets the idea to invent something cool, which they've never heard of before, and proclaim that it is a unique insight into the field - whereas the fact is, it's just a new coat of paint over an empty hole where the understanding should be.
We go through this psychosis every 4 or 5 years or so, in the computer world - so many times, we've re-invented the wheel and given it a new lick o' paint, calling it cool. I would, personally, like to see a bit more care given to this aspect of things by newcomers to this field - and rest assured, you can sit on my lawn as long as you cut it first.
Is this the state of things today...from the top-down, trickle through the stack learning 'the things'
Yes, the path most people will take today is from high-level languages and high-level abstractions, down to low level abstractions. It's not any better or worse than a bottom-up approach for learning. The fact that people are going down through the stack, trying to learn and understand things as they go is to be celebrated, not to be rued.
A top-down approach for learning makes logical and logistical sense in 2017, given the demand for developers operating on high-level abstractions. The bottom-up approach made sense, circa 1972 when C appeared. Neither approach is 'better', IMO.
My grandad used to say if you learn to sail a small boat properly, you can then learn to sail a big boat, but if you learn to sail a big boat first, you'll never really learn to sail a small boat properly.
What the grandfather is trying to say, by my own understanding, is that if you learn to sail the small boat, then when the big boat sinks, small-boat sailing skill is the only thing that will save your hide. The old man is wise.
you'll never really learn to sail a small boat properly
This analogy lost its sway for me with the word 'never'. It's not impossible, in either case.
Perhaps working with a small boat / low-level language is indeed 'harder', so you may be better equipped to deal with the higher-level easier language, after learning the low-level language. Teaching high level languages down to low level languages would be an increase in difficulty, and thus that becomes one logical path for learning. I'm not saying this path satisfies all learning use-cases, merely perhaps the majority in 2017.
I'm not familiar enough with sailing to say whether the same principle transfers to boats...
I think I somewhat misremembered the quote - it should have been "If you learn to sail a small boat, you can sail a big boat. If you learn to sail a big boat, you can't sail a small boat."
It's not about easier/harder. It's about responsibility and understanding.
Sailing a small boat is a "full stack" activity, from trip planning to navigation to the low level mechanics of jibing and tacking, even the act of physically keeping the boat upright if you're using a centerboard.
Sailing a large boat usually sees you in a specific role, doing a specific niche thing, and you never get an overview of the whole process.
The same applies to working in a large team of any sort, which is why you see software developers who can't write FizzBuzz but who nonetheless do fine in their particular niche.
It's true that today's devs are glorified plumbers.
On the other hand, so much gets done, it would be insane to go back to "C for all the things!".
It's a trade-off: you're either a high-level hacker, piping stuff together and watching for leaks, or you're a low-level systems guy, squeezing the carpet.
I would argue that a lot more of the world's productive plumbing depends on C-based apps/libraries/frameworks than we would care to admit - it's just that it's 'not sexy' enough for the young-uns to get behind and start using, since everyone knows that C is a greybeard language and, therefore, not cool.
The same argument ("so much gets done") was made for Visual Basic back in the day, you know. Oh, how the hoi polloi cried and crowed for the death of C developers back then. Oh, how they were wrong, oh so wrong!
I'm not saying the world doesn't have a debt to pay to the Node/Javascript camp - it surely does. I just feel that there is something really missing in the scene, if in fact things that C developers knew and applied well 'back in the day' are just now being re-discovered as "Cool New Shit™" by those who chose - willingly - to ignore the very mechanics of the components their stacks are highly dependent on.
Still, I guess it's not worth complaining. It's good to know that JS guys can get over the wall, and discover that we've all been using state machines for decades now, and so on, eventually. I just wish there was a lot less hubris on the "crowing about it" side of things. Honestly, JS guys: you should know a little C. It's still there, underneath all the cool shit, and you're still heavily, heavily dependent on it, no matter what your "npm up" tells you...
> Oh, how the hoi polloi cried and crowed for the death of C developers back then. Oh, how they were wrong, oh so wrong!
Which is kind of true for us on Windows.
Most of the C has been replaced by C++, to the point that even the new C runtime library is actually written in C++ with extern "C" entry points.
Also for those of us on Android, where using the NDK is an exercise in patience writing JNI boilerplate, because Google doesn't want us using it more than strictly necessary.
Oh and the two most beloved open source C compilers (gcc and clang) are actually written in C++.
Now I admit that on traditional UNIX systems and embedded development on hardware constrained systems, getting rid of C is an impossible task.
> since everyone knows that C is a greybeard language and, therefore, not cool.
As a person who works as a front-end developer but after hours writes almost exclusively in Rust I think I could shed some light on what's going on here:
Since JS is so accessible, a lot of the folks who work with it don't even have a formal education in Computer Science (unlike the author, who's a CS student), so many of them have never touched a compiled language in their life, not to mention CS subjects.
To these people most of the concepts in C (like manual memory management) are entirely foreign. It's not that they don't _like_ C, it's that they're utterly unaware of the mindset one has to have to write things in it.
In my purely anecdotal experience, learning web-based technology over C/C++ was not a choice I made personally. It was a career path that was foisted upon me by job availability and hiring requirements.
I have yet to see a job posting for a fresh-off-the-boat C developer. It's nearly always 5-10 years industry-related experience required. Web, on the other hand, seems more willing to take on the new guys.
>The same argument ("so much gets done") was made for Visual Basic back in the day, you know.
And it was right, so the argument is moot. VB was a very productive language. Any issues were with the ecosystem moving on, not with VB (which had its warts, but all languages do, C first of all).
>Oh, how the hoipolloi cried and crowed for the death of C developers back then. Oh, how they were wrong, oh so wrong!
Again, they were right, oh so right. We do 1000x the programming we did in 1980 and 1990, but we don't use 1000x more C programmers (or assembly or pascal, two other popular choices at the time). Tons of code that used to be that, is now written in higher level languages.
Honestly, I think you underestimate things a bit. In a "modern" development stack you rely on a lot of work by others. But it's far from the "glorified plumbing" you describe. It's not as black and white as you describe it to be.
To me, your basic "high level programming" job (which usually includes the JS folks with their modern stacks) typically is all about translating business / application requirements into logic that computers can understand and interfaces that people can use.
Certainly, as a higher-level programmer, I do think it is great to know as many lower-level concepts as possible (it helps your higher-level programming skills). But the truth is, business simply wants to Get Shit Done Fast. And having frameworks or libraries do a lot of the work for you saves a lot of time. Hence the huge amount of usage of "glorified plumbing" in the field.
(I don't think even C++ folks are immune to the "glorified plumbing" effect, e.g. they have Boost, Poco, Qt, etc.)
As far as I know, the separation between this side of programming and the more mathematical / inner detail "engineering" oriented side has always existed. (When I went to college, the "higher programming" pathway also had business courses, where the "engineering" side concentrated more on algorithms and the like. Now, even the business programmer folks did take one course in C++ back then...)
It's a trade-off: you're either a cut-and-paste blue-collar programmer, piping stuff together and watching for leaks, or you're a hacker/engineer guy, doing CS.
Even MIT caved in, using Python + libs in courses. They clearly said: today, people are wiring solutions together. I find it a bit problematic, because when there's no ready-made solution you feel lost. But they have decades of experience... I hope it's their very best insight.
> I'm not trying to be overly critical, but for a lot of us, "discovering that C-based apps are, weirdly, similar to what us React-ites are doing" sure seems like a step backwards from real stack competency.
This is something that I try to impart on all new hires. Collectively, we have been making great strides in this industry lately, and in my experience, the more inexperienced devs tend to get blinded by all of the shiny. This isn't inherently bad; a lot of CS breakthroughs are tough to learn without the appropriate domain knowledge. But the practical stuff that most devs use isn't anything fancy. It's the same concepts from years ago with config files attached.
We don't stress learning algorithms so that you can write a perfect quicksort implementation from memory. We want you to know the fundamentals so that tomorrow's new technology becomes trivial to learn.
All this screams "education" to me. The self-taught DIY thing is great, but at mass scale it's inefficient. What if all these devs (I could include myself) could spend 6 months doing all this together, sharing the common patterns, and then go out and fix other problems?
One, who just has a free 6 months? Two, do you honestly think you can fit a comprehensive CS education that would satisfy people like you in six months?
1. Most westerners born outside the USA. And in the USA anyone willing to take out a loan.
2. The core requirements for a cs major at a typical university (read: not CMU,MIT,Stanford,Berkeley) and especially at a non flagship is basically 4 or 5 courses, which you can for sure cram into 6 months if that's all you're studying.
Personally, I'd rather work with people who learn some part of that on their own (or even not at all) but have a stellar education in reading, writing, and math. The former are often provided by good high schools. The latter not so much. At least in the states.
But if all you want is just the core fragment of the CS, six intense months are enough.
> I doubt very much you could cram that much in 6 months.
No, you cannot cram an entire degree into six months. Hence the second paragraph under my second point.
But the core CS sequence? 4-5 courses is a rough semester, but it is a semester!
And if you already have some mathematics or especially some programming background, the sequencing that causes those courses to stretch into 1.5 years is already resolved.
Yes, that's exactly my point. You can get a simple introduction to CS in six months, especially if you already know how to program.
No, you're not going to be publishing papers or talking through OS implementation details with CMU grads or designing computer vision systems with Berkeley grads. But you'll have the baseline knowledge that agumonkey is referring to in their comment.
> The core requirements for a cs major at a typical university (read: not CMU,MIT,Stanford,Berkeley) and especially at a non flagship is basically 4 or 5 courses
I went to a school that is very far away from CMU, MIT, Stanford, Berkeley, etc and my core requirements were A LOT more than 4 or 5 courses. I'd have to check, but I can name at least 8 core requirements off the top of my head.
> which you can for sure cram into 6 months if that's all you're studying
So your suggestion is for people to look at a CS syllabus and find a way to teach themselves all of the core courses?
I have my bachelors in CS. I never really programmed until college. 6 months into my BS I had barely wet my feet with Scheme, learned a little bit of discrete math, and had a VERY basic introduction to formal logic. I knew exactly one programming language.
> I'd have to check, but I can name at least 8 core requirements off the top of my head.
I'm familiar with a vast array of CS curricula at many types of institutions. I have no doubt that this is true, but it's also not universal.
To be clear, I'm referring to the courses that everyone from that institution will have taken before graduating. So almost all CS majors require much more than 4 courses, but only 4 courses worth of certain, common, shared knowledge.
If you only count courses offered by the CS department, and that everyone must take to receive the major, I think 4-5 is fairly typical. Maybe 6-7. Especially at places that aren't ABET-accredited.
Most programs will also require 3-4 additional courses for the major (that the college/university does not require for graduation). E.g. at least one or two mathematics courses, sometimes a Physics course.
And of course all those programs require some electives -- sometimes selected from subject area "buckets".
But my point is that it's certainly possible for two students to get a CS degree from the same (decent or even good) institution and only share 4 "core" CS courses. Typically an intro course or two, a lower-division algorithms-style course, something architecture-y, and something systems-y.
> So your suggestion is for people to look at a CS syllabus and find a way to teach themselves all of the core courses?
Maybe.
This approach would certainly be inappropriate for a student with no background in Mathematics or programming.
But a strong programmer can breeze through or even skip those intro courses.
And a student with a strong mathematics background -- e.g., an accountant, other science major, or finance person learning to code -- can breeze through that otherwise time-intensive algorithms course.
> The core requirements for a cs major at a typical university (read: not CMU,MIT,Stanford,Berkeley) and especially at a non flagship is basically 4 or 5 courses, which you can for sure cram into 6 months if that's all you're studying.
Huh? That's an incredibly terrible CS curriculum. 4-5 courses is at most 20 credit hours. A typical undergrad degree requires about 120 credit hours of courses before you can graduate. I went to a second tier state school and our CS curriculum had 12 common required courses (excluding seminar courses) that everyone took, plus three upper level electives of your choice. Basically half of the four year degree was CS.
12 CS courses, offered by the CS dept, and that everyone takes means that either A) your department was internalizing a LOT of service courses; B) left very little room for differentiation within your major; or C) provided very little time for exploration outside of your major.
Either way, that level of straight-jacketing and/or in-sourcing is atypical.
Even at top schools that number is usually below 10. E.g. CMU 6 (well 7, but really 6); Berkeley 4-6; MIT 8; Stanford 6.
These are all off the high end of my 4-5 estimate, but only because they ALL in-source a course that at most places would be farmed out to the math dept and therefore not counted by my criteria. And MIT also requires a comm course. Plus, there's the other extreme of places that only require the intro sequence and an algorithms course, leaving all else to the students.
In order to get up to 12 at any of these places you'd need to only offer one course for a lot of the "breadth" requirements, or else in-source the calculus sequence (or similar silly things).
Again, I'm counting courses offered by the CS dept, AND that everyone takes. The minimum viable "common knowledge" we can reliably assume is shared among all CS majors, as offered by the CS dept. IME 4-5 is pretty typical. Maybe 6.
Yes, we also assume some background in other fields of study from CS graduates (some calculus, reading/writing skills, etc.) But those courses are not the ones I was referring to with my 4-5 count. They were also addressed in my original comment.
Under the CS parts, only the electives in categories A, B, and C are where students can pick and choose. Everything else under the grey curve across the PDF is:
> courses offered by the CS dept, AND that everyone takes.
Good points but what these people learn in segments in random order would probably fit into 6 months. I don't even believe CS education fits in 5 years anyway. I was just trying to suggest how to avoid diy lib chaos.
I consider myself a self-taught professional developer, since it's become the job that pays my bills, but I've never had any formal education on the subject. I regularly find out things like this, because I haven't been educated on the subject and because I don't have infinite spare time to study programming patterns. I know they exist and when I feel the need to, I look them up, but otherwise, yeah; "npm -i <all the things>" all the time.
I don't feel bad about this at all, since my colleagues describe me as a competent developer and my employer can't find enough people for all the work we have to do. If we only wanted to hire developers who understand C, we'd have a really big issue.
What can I say? Learning the language du jour landed me a high-paying job right out of school.
I had a choice at one point: join the workforce doing web work (and make money right now) vs. spending more time in school to learn more while going into debt and making no money.
Right now I am debt free, living a good life writing modern JS.
In my free time, I try to learn more low-level CS knowledge. It's hard to justify the time spent doing this, however, since none of it is of use in my day job.
If University was free and we didn't have to work to eat, I would know everything from the ground up. Sadly, this is not the world we live in.
I couldn't agree more, and I've always been a strong advocate of learning things bottom up, as opposed to top down.
However, it always amazes me how powerful abstractions such as high-level languages are. In particular, it allows people who know virtually nothing about the underlying mechanisms to develop cool applications. This is a great thing, and I think we should value it, even though it comes with some major problems.
I meet quite a lot of low- to middle-level programmers who stick to what they know because it's "the basics", and hence completely lack experience in:
1 - Writing elegant modern code
Working with experienced C / Java coders to write proper Python is a challenge. They tend to use twice as many lines, ending up with slower and less readable code. They don't use the additional productivity to produce cheap goodies like generated docs, more unit tests, etc. They over-engineer stuff, but neglect the user experience. And don't ask them to set up a server without the help of a sysadmin.
2 - Trading machine time for coder time efficiently
Aka you cost $100/h; a bigger server costs about the same amount per month. Keep the double nested loop and go code something else.
And please set up code auto-reload and preprocessors, because THAT will save time and money down the road.
3 - Knowing the modern ecosystem and how the stacks fit together
No, you don't need to share this across threads and set a mutex here; put that in redis.
Don't write that in XML; you have JSON/YAML/TOML for that.
Actually, don't write that to a file; you have sqlite.
No, you don't want to JOIN that; you have Arrays and JSON types in PostGres. If you've got a lot of those, mongo or elastic search will do.
No, you don't write sockets manually; you have http, zeromq, wamp and the like to do 99% of the job.
JSON is too slow? You have bson, msgpack, protocol buffers...
Yeah, this ORM is slow. But it generates the entire validation system, including HTML forms. And it allows for 12 plugins to share a common API to provide auth, history, permissions, registrations, a REST API, security measures, sanitizing and data validation for free. Plus the server has 32 cores so just go with it.
4 - Being able to keep up with user expectations
No, I have no idea how to code in assembly. I can tell you, however, that the search completion you don't want to add here because it's too much work would prevent the user from thinking your app is from 2004. And I can get a decently performant version of it done by tomorrow at a cost-effective price.
----
Bottom line, everybody has limited time and resources and can only learn so many things at once.
Also, different industries have different needs. And sometimes you want low-level programmers, sometimes you want high-level programmers.
One could hire somebody who has the skills of both, but you probably can't afford him or her. And he/she is probably not interested in this mission anyway.
1) You're talking about familiarity with the language and the ecosystem. Take a Python dev and put them into C, and you'll get a buggy, illegible mess. Give your C developer a few years to grok Python, and they'll be as elegant as anyone who cut their teeth on Python (if not more so, for having experience in more than just Python).
2) Funny, there's a near-constant stream of articles on how companies that succeed have to spend significant amounts of money and time rewriting their code, because those nested loops just weren't fast enough and were causing the company to hemorrhage money.
3) "put that in redis" At which point you incur a network roundtrip to the redis server.
"you have JSON/YAML/TOML" So, which one? They're not cross compatible. Oh, and don't forget to add custom encoding for non-JS compatible values. JSON is just the latest iteration of XML these days, complete with XSLT, X-Path, XSD, processing directives, and namespaces.
"you have sqlite" SQLite has been available for decades, and in use by programmers just as long. Oh, and it's written to a file on the filesystem. And doesn't support transactions.
"Arrays and JSON types in PostGres" Oof. Hope you don't want performance to go with that. Joins are much more performant, and have the bonus of being flexible when the user spec changes.
"You have bson, msgpack, protocol buffers" Binary serialization formats have been around as long as there have been computers. People have been sending raw structs with checksums over the wire as long as there have been wires.
"Plus the server has 32 cores so just go with it." This again. Doesn't scale, and will cost your company when you do have the scale.
4) I'll be honest, I don't get your point here. Low level programmers can't keep up with user expectations? Why do you think `ls` has several dozen flags? Users wanted them.
All the points you raised actually prove mine. Your mind is set on details that are important to your projects, but in many others they are a waste of time and resources. There's a reason both profiles exist: they serve different yet useful purposes.
There's a lot I could address in your comment, but since I'm short on time here: where are you getting servers that all cost the same amount regardless of specs?
I think server operation costs are a very legitimate concern, though. People always say "well, you're not Google so it's not that important," but that's precisely the issue. No small company has the budget to splurge on the biggest and best server infrastructure. A small DigitalOcean/AWS/Heroku plan is probably the most any small business could hope for (dedicated servers = hardware costs out of pocket + maintenance costs + power fees + ISP fees) and any relatively complex server is gonna eat through that quickly. Micro-optimizations may not be worth it, but just switching from an interpreted language to a compiled language can often reduce the performance overhead by an order of magnitude.
The amount of oversimplifying you're doing glosses over every flaw in your argument, in my opinion. That's not to say you're necessarily wrong... a C programmer probably isn't worth it over a Python programmer. But writing code with double nested loops for no reason, or ignoring basic algorithms, is going to cost you big time in the long run.
I usually have far lower server bills than my colleagues because I don't pick up SaaS when I don't need to. This is an optimization that does save money: knowing when to manually set up servers vs. going AWS/Firebase/whatever.
As for the double nested loops, as usual, it depends. Data size, boundaries, number of concurrent operations, CPU costs, acceptable loading time...
It's a general trend. Many young developers rediscover old things and describe them using familiar terms. At first it amazed me, then I understood it's more or less natural and there's nothing we can do. It's like when your son discovers the Beatles and gets excited.
There is plenty to re-use in C, e.g. BSD's queue.h or Linux's list.h for linked lists alone. The fact that the author chose not to do so doesn't mean it is the fate of the whole language. Though it is indeed a little harder than firing up "npm".
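For instance, a minimal sketch of a singly-linked list built with BSD's <sys/queue.h> macros (the struct line type is just a made-up example here, not something from the editor's code):

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/queue.h>

    /* Hypothetical "line of text" node; the list linkage is embedded
       in the struct via SLIST_ENTRY. */
    struct line {
        char *text;
        SLIST_ENTRY(line) entries;
    };

    SLIST_HEAD(line_list, line);

    int main(void) {
        struct line_list lines = SLIST_HEAD_INITIALIZER(lines);

        struct line *l = malloc(sizeof *l);
        if (l == NULL)
            return 1;
        l->text = "hello, world";
        SLIST_INSERT_HEAD(&lines, l, entries);

        struct line *it;
        SLIST_FOREACH(it, &lines, entries)
            printf("%s\n", it->text);

        free(l);
        return 0;
    }

The macros expand to the usual next-pointer plumbing, so you get a tested linked list without writing (or npm-installing) one.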
It is not surprising that C gets that much of a bad reputation when developers show such poor capacity for managing errors; malloc(3) return values are never checked and everything is expected to always succeed, throughout the source tree.
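The usual fix is cheap. A minimal sketch of a checked-allocation wrapper (the xmalloc name is just the common convention, not something from the article's source tree):

    #include <stdio.h>
    #include <stdlib.h>

    /* Abort loudly on allocation failure instead of dereferencing
       NULL somewhere far away from the malloc call. */
    static void *xmalloc(size_t size) {
        void *p = malloc(size);
        if (p == NULL) {
            perror("malloc");
            exit(EXIT_FAILURE);
        }
        return p;
    }

    int main(void) {
        char *buf = xmalloc(128);  /* never returns NULL */
        snprintf(buf, 128, "allocation checked");
        puts(buf);
        free(buf);
        return 0;
    }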
Competent C devs are rare these days; too many have retired. At the very start of my professional career I thought of becoming an embedded dev, but I was disheartened by the job market.
The very few jobs I found were like "Senior C dev, minimum 7 years subject matter experience, $65k" at the time when web dev roles were paying 85k+ and were so easy and dumb...
What distinguishes a good C developer? High self-discipline. The language gives you a lot of ways to introduce bugs, but unlike JS, most of the time you have a clear understanding of why they appear.
You can use my own experience as a little bit of anecdotal evidence. I started out of college programming in C/C++ as a game dev. I started at 40K AUD in Sydney in 2002, just before the housing boom here in Australia. Granted, I stayed with the business for 4 years. During that time my colleagues got jobs building CRUD application systems for the web. Their starting salary was 85K.
I remember walking into the CTO/CEO's office at the time and asking how much they valued experienced C/C++ developers. They honestly said that for (Frank), who had 10+ years doing game dev, they couldn't see any point in paying him any more than 60K a year. I thanked them for their time and went back to my desk.
The next day I walked in and handed in my resignation. Both the CEO and CTO were shocked. The next week I walked into a web development shop (Oracle/Java/JavaScript) and tripled my salary.
Game dev is built on the backs of people sacrificing salary for "working in games". You really can't compare it to any other job. All my friends who worked with C/C++ in game dev that moved out of game dev, but still used C/C++, at least doubled their salary.
A job is a job. For me, the simple truth of the matter is I couldn't support the quality of life I wanted while working in games, with the salary and rewards they were offering. It was fun, but going back to C/C++ is something I would avoid at all costs, although if the salary doubled overnight I would put up with C++, though I think I would take up drinking.
Agreed. I feel like I've missed my retirement date because I started professional IT late (97) but started coding in the 80s....
There is a train wreck coming (or already ongoing) for this industry that I don't think will be recognized by current practitioners as anything other than the 'failure' of older technologies. That is sad, but it is by design at the corporate tier: replace the irreplaceable, and make change and breakage a commodity and a measure of progress.
> The very few jobs I found were like "Senior C dev, minimum 7 years subject matter experience, $65k" at the time when web dev roles were paying 85k+ and were so easy and dumb...
This is a big problem. I love C. I'd love to have a C job. I don't even mind the salary as much.. but practically nobody is looking for C devs, and the few that are, seem to require incredible credentials.
Working in the outer NYC metro area as an embedded C/C++ dev. Pay is significantly lower than web dev work and I'm "overpaid" for my position. Never going to get a raise.
It can still fail. Also, it is not very bright to depend on some kernel configuration in order not to crash unpredictably. And it's not portable: some targets don't have an MMU.
Do other *nix systems behave differently regarding overcommit of memory and malloc() really only erroring out when virtual memory is exhausted (which basically never happens)?
It can happen. I had various issues with my VPS provider, which set limits on virtual memory use in its virtualization solution. Programs would crash with malloc failures despite overcommit being enabled.
The program will crash on null pointer dereference most likely. Malloc will not abort. Some libraries (like glib2) wrap malloc so that it aborts on failure, but standard malloc will never crash.
Why would a system do that? Either it overcommits, in which case the malloc won't fail (programs get killed only when they try to actually use the memory) or it doesn't, in which case there's no reason to kill it.
Yes, I know about the OOM killer, but it doesn't trigger when you malloc(), only when you actually try to use the memory. In fact, that's the point of the OOM killer: since many applications allocate more memory than they actually need, Linux lets them allocate more memory than what is in fact available, and only kills 'em off when the bluff fails.
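A minimal sketch of that behavior (assuming a 64-bit Linux box with overcommit enabled and less than 64 GiB of RAM; exact limits and the OOM killer's choice of victim vary):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        /* With overcommit, this "impossible" allocation usually
           succeeds: malloc returns a non-NULL pointer backed only
           by unmapped virtual address space. */
        size_t huge = (size_t)64 * 1024 * 1024 * 1024;  /* 64 GiB */
        char *p = malloc(huge);
        if (p == NULL) {
            puts("malloc failed up front (overcommit off or limited)");
            return 1;
        }
        puts("malloc succeeded; now touching the pages...");

        /* Only here, when the pages are actually written, does the
           kernel need real memory -- and the OOM killer may step in. */
        memset(p, 0, huge);

        puts("survived: the machine really had that much memory");
        free(p);
        return 0;
    }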
Wasn't it the case on Linux, due to overcommit, that allocations always succeed anyway? So a novice C developer working on 64-bit Linux may assume that malloc calls never have to be checked.