> Why C? fewer hipsters : hipsters dislike memory management
Is this meant to be ironic? Spinning up web apps with C in 2017 seems like a supremely hipster thing to do.
Edit: it would appear to be ironic
When I look back, I've got to be honest and say it was actually a thing of beauty. The heavy lifting required for some core functions was quite bad, but it pales in comparison to some of the enterprise and microservice-based C# behemoths I have looked after since, which require literally hours of head scratching to deliver a simple change.
I'd rather spend that time writing C than scratching my head. I'd have a lot more hair now.
However, in our case only the DB drivers and the Apache Tcl module were written in C; everything else was pure Tcl.
This is to C devs what the micro:bit is to kids learning computing: a starter experience with server-side dev.
Well, I think I'll pass on this one and hopefully won't need to maintain such apps in the future.
On a capability-secure processor running in a VM, in case new classes of attack against C code are discovered.
* Iron ( https://github.com/iron/iron )
(Clojure, Redhat, Apache, Postgres is good too)
JAWS - JX OS + Ada Web Services
I think it will all be ReasonML.
Since it's basically OCaml with a more JS-friendly syntax, it will lead the masses of JS devs to OCaml; they will find MirageOS, build their servers with it, and no one will need anything else anymore.
According to archive.org this page has existed since 2015:
> SQLite is a "self-contained, embeddable, zero-configuration" database. And it's bundled (for now…) with stock OpenBSD.
It has been 8 months and 3 weeks since SQLite was removed from OpenBSD base.
But if you want to develop your modern and hipster-free webapp, then it's still available in ports!
A first and last name are useless without additional WHOIS fields for confidence, such as postal code, phone, etc.
I found 246 Mengzhu Wangs on LinkedIn, over 30 in IT, any of whom could own some of those domains.
If you really need a web development solution that's completely out-of-the-box on OpenBSD (aside from perhaps some third-party Perl packages), you might be able to use a "BPHL" stack (BSD Perl Httpd Ldapd) and use LDAP in place of a general-purpose database. That's probably a horrible idea, but whatever.
Currently, I don't see why I should use it, except to go at least ten years into the past.
SQLite is great, but I can use it with other useful tools instead of a naked web server.
I would like to see nginx support here instead of just OpenBSD's web server. Perhaps this is because nginx won't work with pledge?
Not really. I mean, using vectors and strings is nice, but that's about it for safety. You'll still get a shit-ton of memory leaks which isn't great for long-running web apps.
If the code has any explicit new/delete or malloc/free, then something is wrong with the design.
This part has a joke, which makes the overall website not read like a joke. I’m pretty sure it isn’t, even.
Or at least the pool of "hipsters" I know like to pick it for use cases where they haven't fully considered whether or not it's actually going to scale.
If by "hipster" you mean "people who are sick and tired of shipping insecure software that violates consumer trust and tends to cause downtime," then by all means, ship C code that handles untrusted input.
I think it's criminal to propose this without massive auditing, which is what HLL runtimes end up getting.
C is a bad choice for handling untrusted input precisely because it makes it very difficult to prevent logic errors that disclose user data in unexpected ways. The security community has done its best to prevent the even more disastrous class of breakout errors that compromise the entire resource (and OpenBSD is great for this, way better than containers).
But as my comment was specifically addressed at the choice of C, I don't feel like I need to sweet talk an OS I already say lots of nice things about.
Maybe you'd like to respond with all sorts of great literature about how the C spec is not full of holes and gotchas?
And don't even get me started on that "simple example binary."
Oh that's a new one. But now that you mention it, I'm starting to recall all the operators that will fprintf the contents of ~/.ssh/ to the network socket upon misuse.
Wouldn't it be nice if we could use a simple assert to test the state and inputs before we proceed to shovel private keys in a totally not privsep'd process handling public, unauthenticated connections. Heck, even a simple condition would do, if we could just return an error and stop further processing when things look bad. But you're right, that kind of code would've been too advanced for 1958 or whenever we got this language...
There were already better, safer languages being used for systems programming, like Algol and PL/I dialects, almost 10 years old when C appeared.
If only folks wishing for a simpler time could do so outside of security critical code.
And on the other hand, we have code that doesn't bother to check that the inputs from the outside world are valid and that they do not cause integer wraparound. The former is a problem you can repeat in just about any language, while the latter is relevant to many languages. Guess what, I know how to check inputs. I know how to prevent integer wraparound. And C doesn't make it hard for me to do so.
The next and last interesting part of the disclosure is mostly concerned with leaking private keys. Now what did I say about juggling private keys in an internet facing process? Just because the ssh devs didn't isolate that part into a separate process doesn't mean it can't be done (and honestly I have no idea why I would be juggling private keys during the generation of a web page).
So there you have it, old code from a time before explicit_bzero, juggling private keys, not checking inputs and running on a system without malloc_options. You can lament it all you like but that doesn't mean everyone has to do it wrong. It shows that you can do it wrong, not that C makes it hard to do it right.
Not that any other language is on trial here, but there are languages that would naturally make such a bug into a compile time error.
I fail to see how C helps you avoid making such an error. Because C's general standpoint here is that there is no such thing as an error, there is no such thing as a type, and it is perfectly acceptable to have undefined behavior sitting within trivial edit distance of common code patterns.
> The former is a problem you can repeat in just about any language
Actually, no, that's false. A lot of popular languages check arithmetic. Faulting in such a case would have saved the day, but hey, faster execution. Even old languages like Lisp did this.
Not many languages are as lackadaisical as C is about this. But error handling (at any stage) has always been C's weakest point. I can't think of a C successor that hasn't called out C on this front and then tried to improve upon it.
> It shows that you can do it wrong, not that C makes it hard to do it right.
Insufficient abstraction means that you can't reuse code, so instead of getting it right once you have to get it right every time.
And what's the compelling reason to use C? It's "simple" and "close to the metal" but the problem domains are anything but. Availability and instrumentation bias encourage people to use C for "efficiency", trading off correctness for faster code. It's a tradeoff one can make, but if you're working with other people's data you should think twice.
How many times faster does a piece of code need to be to make up for violating a user's privacy? How many times "simpler" does code need to be for someone reading it to justify not making every effort to avoid security faults?
But then, I worked with financial data a lot, and my work ended up being associated with a national scale bank with an API. The sheer amount of attacks my code had to endure was on a completely different scale than most people will ever experience.
I'm sure there are languages where the use of any condition forces you into making sure there is some explicitly taken branch for all possible inputs -- and perhaps that language also magically knows what you must do inside each branch. Show me all the projects that are using these for web development, which is the context for this discussion. Otherwise it is not fair to bash C over it.
All the mainstream languages I see in web development allow you to make the exact same mistake.
> I fail to see how C helps you avoid making such an error. Because C's general standpoint here is that there is no such thing as an error, there is no such thing as a type, and it is perfectly acceptable to have undefined behavior sitting within trivial edit distance of common code patterns.
But there are errors. There are types. UB is not relevant to what you are replying to. The problem in question was about a condition that was not considered. Again, show me how your average web language tells the programmer that he didn't write some if condition or stop bashing C over it because you're dreaming of features in some unicorn language nobody uses in the real world anyway.
> Actually, no, that's false. A lot of popular languages check arithmetic.
Please read again. I said "the former", referring to input validation. That is relevant to every language accepting untrusted input.
> Insufficient abstraction means that you can't reuse code, so instead of getting it right once you have to get it right every time.
That statement is so wrong I can only conclude that you're smoking something or you haven't programmed in C and you are completely oblivious to the work the OpenBSD folk (and many others) are doing to fix these issues in existing reusable library code as well as to introduce new, safer APIs. Sure you can pretend that everyone who wants a buffered output stream to a socket has to write their own circular buffer and repeat the same mistake. You are wrong, and if you had paid attention you would see counterexamples (libevent is a popular one) that prove you wrong. You're just hating on C but don't know it.
> And what's the compelling reason to use C?
I'm not trying to convince anybody to use it and my reasons are my reasons -- the strawman you make of performance isn't the key. But it doesn't matter.
I'm not sure why I'd play the game when "mainstream" is basically a way to discount any offering. But in C++, C#, or basically any class-based language you can code to guard against this. Functional languages with types provide strong guarantees against this. OCaml and Haskell come to mind as well-known examples.
> But there are errors. There are types.
Not according to the compiler. Anything can become anything else.
> That statement is so wrong I can only conclude that you're smoking something or you haven't programmed in C and you are completely oblivious to the work the OpenBSD folk (and many others) are doing to fix these issues in existing reusable library code as well as to introduce new, safer APIs
I'm aware of the work, but C's problem is not that it lacks more library code.
> Sure you can pretend that everyone who wants a buffered output stream to a socket has to write their own circular buffer and repeat the same mistake.
I don't say they have to. It's just that C's language design makes it easier for people to do so. Very different statements.
> You're just hating on C but don't know it.
I see where this is now going. "If you knew it, you'd like it." I'm not going to waste any more of either of our time if this is the new talking point.
You mind clarifying that? I constantly have compile-time errors from type mismatches. You can cast a variable to a different type, but you can do that in any language.
You can't implicitly convert a char to an int; your compiler will take that as a fatal error. About the closest you could get is char and short, since they're essentially the same data type, but even then the compiler might throw an error over an implicit conversion.
Well, for one, people overload types: the practice, for example, of using return codes as error values. The second problem is unions. They exist to make people converting bytestreams into structures happier, but they often get misused elsewhere.
And void* is essentially a black hole, but it's a very common black hole to see in programs.
Yeah, and that's the most solid criticism against it! While it's true that it's quite a bit more difficult to cause fatal errors in a program in Node, the language does little to help you solve these problems once you expose that functionality to it.
And of course, I'm on record as a big fan of Purescript and Elm, with more bias towards the former.