> This is a very common misconception that stemmed from a recent conversation with one of my non-Rust engineer colleagues. For him, it was inconceivable that a Rust program would panic because of an out-of-bounds runtime memory failure.
I feel like this is just a misunderstanding (probably from a somewhat junior engineer, or at least an engineer unfamiliar with low-level programming) between segfault and panic.
Proving that array-out-of-bounds never happens is just a special case of proving that an assertion error never happens, which is intractable in the most general case.
Consider a program in which an array was accessed based on the index provided by some user input -- how could this ever be proven to never go out of bounds?
Rust won't segfault here like some naive C will, but it will panic, because you hit the assertion that the index is less than the length of the array.
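A minimal sketch of that scenario (the runtime index here stands in for user input, which is my addition, not the original example):

```rust
fn main() {
    let arr = [10, 20, 30];
    // Stand-in for user input: in a real program this index might come
    // from stdin or a network request.
    let i: usize = std::env::args().count() + 6; // always >= 7 at runtime
    // Rust's bounds check catches this and panics instead of reading
    // adjacent memory; catch_unwind here just demonstrates that it is
    // a clean panic, not a segfault.
    let result = std::panic::catch_unwind(|| arr[i]);
    assert!(result.is_err()); // the access panicked, the process did not crash
}
```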
Formal verification to prove that a program will never panic is a very neat field unto itself. There is some work in this space in Rust, but it's bolted on top, not built into the language (this is the case for most formal verification of most non-research languages).
IMO, a program failing due to a parse error on the input, and a program failing due to a bounds check are functionally the same thing (provided you are using a language with bounds-checked arrays so you don't segfault). You're just moving the bounds checking from the array access path to the input collecting path. I could be misunderstanding you, though.
My referenced program is just proven for input sizes.
If I wanted to do something to validate an entire program subset similarly, I'd make a type with the bounds I needed that gets fed into that algorithm which is proved for those ranges. This isolates the proved code from user input, but the newtype with range ensures those checks are done prior to being fed into the algorithm. You can't "prove" user input, there has to be validation, but you can force that validation prior to entering the proofed code.
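A sketch of that newtype idea (the names are mine, not from the comment): validation happens once at the boundary, and the "proved" code only ever sees values already in range.

```rust
#[derive(Clone, Copy)]
struct SmallIndex(usize); // invariant: value < 8, enforced by the constructor

impl SmallIndex {
    // The only way to obtain a SmallIndex is to pass validation here.
    fn new(raw: usize) -> Option<SmallIndex> {
        if raw < 8 { Some(SmallIndex(raw)) } else { None }
    }
}

// The "proved" algorithm: for any SmallIndex the access is in range,
// so this function has no failure path of its own.
fn lookup(table: &[u32; 8], i: SmallIndex) -> u32 {
    table[i.0]
}
```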
Well, if you go for proof of absence of runtime errors, there's SPARK, which is a (compatible) subset of Ada and comes with a toolset to perform verification: proof of absence of array-bounds errors, or of any runtime error (well, except for OOM and stack overflow - for now).
I'd add that Frama-C also does this kind of proof, and IIRC Eve (Eiffel Verification Environment?) does too, though I find the SPARK workflow far easier and more advanced (but we use it more, so... I'm biased).
I also wanted some wiggle room for languages that are fairly researchy but some people do use them in real life for hosting blogs and such, like Idris.
There might be a few others. https://github.com/google/wuffs for example isn't general purpose or mainstream, but it's meant to solve practical problems, so I don't think I'd call it a research language. Opinions may vary.
> Consider a program in which an array was accessed based on the index provided by some user input -- how could this ever be proven to never go out of bounds?
In fairness, the approach to similar operations in Rust is usually by returning an Option or a Result, so it is arguably counterintuitive that a few specific operations like array indexing panic instead.
(you can do `Vec::get()`, but normally the "safe" behavior is the default and the "unsafe" behavior is optional and more verbose, e.g. `unwrap`)
IMO this kind of just moves the goalpost from "prove the program never panics" to "prove the program never hits the `None` or `Err` path" which is functionally identical in most use cases.
Obviously I am not suggesting that people use `arr[i]` everywhere and try to catch the panic, this is an antipattern -- use `.get(i)` if you don't know the bounds. I'm just talking about for the purposes of program formal verification.
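A quick sketch of the fallible pattern being recommended here:

```rust
fn main() {
    let v = vec![1, 2, 3];
    // Fallible access: .get() returns an Option instead of panicking,
    // so the out-of-range case becomes an ordinary code path.
    match v.get(10) {
        Some(x) => println!("got {x}"),
        None => println!("index out of range, handled without a panic"),
    }
}
```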
Aside:
I think the language design team was right to make the most terse form, `arr[i]`, not need unwraps or anything. Doing `.get(i)?` or `.get(i).unwrap()` all over the place is too much visual clutter.
If the user really wants, they can make a custom slice wrapper that implements the `Index` trait in a way that returns `Option`.
> IMO this kind of just moves the goalpost from "prove the program never panics" to "prove the program never hits the `None` or `Err` path" which is functionally identical in most use cases.
If my goal is to show that a piece of code never panics, then it suffices to show that the None or Err path doesn't panic; I don't need to show that it's never hit.
It's a common misconception due in large part to Rust's marketing.
Claim: Rust enables you to eliminate many classes of bugs at compile-time. (https://www.rust-lang.org, "Why Rust?", "Reliability")
Reality: Rust enables you to eliminate an extremely limited class of bugs at compile time, and also inserts if statements into your program to detect some other classes of bugs at runtime and crash cleanly.
While it would obviously be nice to error at compile time, the docs of the slice type clearly say the code must panic in this case. Depending on the context, `[T; 3]` may be better.
> Consider a program in which an array was accessed based on the index provided by some user input -- how could this ever be proven to never go out of bounds?
It's a type constraint on the index. In most programming languages we're used to, the index value is some machine type, like a 32-bit unsigned integer, but that's arbitrary; the language can say: OK, this is an index into an array of sixteen things, so the index must be between 0 and 15 inclusive.
Now, depending on the software you might just have to write bounds checks yourself to satisfy this constraint, so you just made more work for yourself, but if your software actually deals in types that can and perhaps should be constrained properly elsewhere this approach means you get a compiler error not a runtime failure.
Suppose we've got code that does foo[k] where foo is an array of eight things and k is an unsigned variable holding the bottom four bits of a value from an image file that's supposed to be the CGA medium resolution colour palette ID (0, 1, or 2). Alas, the file might be corrupt or malicious, and those bottom bits could be, for example, 10.
In C and C++, foo[k] compiles, and when the corrupt file is used there is Undefined Behaviour.
In Rust, foo[k] compiles, and when the corrupt file is used the program panics on this line.
In WUFFS, foo[k] doesn't compile: variable k is the wrong type for indexing foo.
Now, WUFFS is not a general purpose programming language. You should not write an operating system, a web server, or a compiler in WUFFS. But you should write your thumbnail making routine, your PDF validation code, your audio compressor in WUFFS, because in choosing not to be a general purpose language WUFFS is freed to simply always be safe. That WUFFS thumbnail code might make the RCE attempt from Leet Hacker into a blue rectangle, or crop all the executives' photos to just their noses, but it simply cannot accidentally create a reverse shell, or spew passwords into the thumbnails, or a billion other things which Rust would make difficult to do by mistake but never impossible.
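A Rust approximation of the type-constrained-index idea (the type and names are mine, not from WUFFS): make the palette ID an enum, so every value of the type is valid and the downstream code has no failure path.

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
enum PaletteId { P0, P1, P2 }

impl PaletteId {
    // Untrusted input is validated once, at the boundary.
    fn from_byte(b: u8) -> Option<PaletteId> {
        match b & 0x0f {
            0 => Some(PaletteId::P0),
            1 => Some(PaletteId::P1),
            2 => Some(PaletteId::P2),
            _ => None, // corrupt or malicious bytes are rejected here
        }
    }
}

fn palette_index(id: PaletteId) -> usize {
    // No bounds check can fail: the match is exhaustive over the type.
    match id { PaletteId::P0 => 0, PaletteId::P1 => 1, PaletteId::P2 => 2 }
}
```

Unlike WUFFS this isn't checked at the indexing site by the compiler, but the effect is similar: code holding a `PaletteId` cannot be handed an out-of-range value.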
>> Consider a program in which an array was accessed based on the index provided by some user input -- how could this ever be proven to never go out of bounds?
> It's a type constraint on the index
Consider a program in which the index type is provided by some user input -- how could this ever be proven to never go out of bounds?
You can move the problem elsewhere (and indeed often should!), but eventually it becomes a condition check and error handling behavior appropriate to the scenario. The default error handling behavior is to halt the program.
I agree with some of the points, but some of them are a bit weird. Like: creating a section called "Rust is unpredictable", because one library panicked?
Another thing is with the out of bound reads. Sure, you can make an out of bound read, but I'm pretty sure most of the production projects use clippy that would error out on this and require you to use `v.get(0)` instead of `v[0]`.
I read this as a relatively junior Rust programmer getting the lay of the land. I can certainly understand the "unpredictable" label from that perspective: there are a lot of people just learning Rust who might be "oversold" on what the language can and can't do, and are surprised when their programs fail hard rather than statically preventing all bugs (or, worse, silently doing the wrong thing).
> Another thing is with the out of bound reads. Sure, you can make an out of bound read, but I'm pretty sure most of the production projects use clippy that would error out on this and require you to use `v.get(0)` instead of `v[0]`.
To be precise: you can't cause an OOB read in safe Rust. You can write code that would read or write OOB, but Rust will always insert a panicking bounds check (unless it can prove statically that the operation is always in range).
> there are a lot of people just learning Rust who might be "oversold" on what the language can and can't do, and are surprised when their programs fail hard rather than statically preventing all bugs (or, worse, silently doing the wrong thing).
Sure, that sounds like a good explanation. I guess I'm here too long to expect perfect stability from anything. That said, I still think that all things considered Rust ecosystem is one of the more reliable I've worked with. Both from the development and production experience.
In the context of development I very rarely have any problems with installing or using libraries. Definitely much less frequently than in npm, rubygems or pip.
In the context of production it's even better, I think. Yes, occasionally you run into a problem, but most of the time systems written in Rust just work, with virtually no crashes. The only big issue I ran into in the past was the Warp web framework having a memory-related bug, which led to it using all available memory given enough traffic. It was an early version of the framework, though, so hardly something I would be very surprised about.
Good point about out of bounds read! Totally correct, what I meant is that you can panic when reading outside of array range, but my non-systems programming language background shows here :)
The good thing about out-of-bounds access in a safe language usually isn't that it doesn't panic. It's that it does panic instead of returning whatever happened to be adjacent in memory. So v[5] won't accidentally return w[2] instead.
But out of bounds array access handling isn’t any better in Rust than in other safe languages.
The real benefit of Rust in that scenario is to ensure you don’t access v[0] after freeing v, for example.
You obviously can, but indexing with `[]` is the default way one would think of doing it, and it can certainly be surprising that it panics.
Besides, the slice API is full of "infallible" functions that panic when given bad parameters - just search for "Panics" in https://doc.rust-lang.org/stable/std/primitive.slice.html - because it's expected that it would be more annoying for people to deal with the fallibility (like having to manually `.expect()` or bubble up the error) when they "know" that it wouldn't fail.
"Infallible" is not a term that the Rust library docs use, to my knowledge. You'll note that none of the panicking APIs on that page claim to be infallible, only memory safe.
(I happen to think Rust can and ought to substantially improve the ergonomics of infallible API design, as well as provide more compiler support for avoiding fallible dependencies. But it's unreasonable to treat fallibility as a deficit, since it's explicitly outside of Rust's safety model.)
I put "infallible" in quotes because a fallible API would be one that returns Option / Result where the None / Err encodes the failure. So on the surface these APIs look like they can't fail, except of course they do, by panicking.
I understand the fallible/infallible distinction. My point was that the concept of fallibility/infallibility is not encoded into Rust's semantics: any function, even an "infallible" one, can panic (e.g. on stack overflow or an underlying memory allocation error).
(I think we're in agreement about fallible APIs being confusing and a footgun.)
Infallible is the type of errors which can't happen. It will some day be a synonym for ! (aka Never, the type of return and break and such things) but even today it's an Empty Type.
Empty Types are in some sense even smaller than the Zero Size Types like the empty tuple. The ZSTs have a value, it's just always the same value and so it needn't be stored anywhere. The Empty Types don't even have a value. So not only do you not need anywhere to store their value, you don't even emit code to handle this value. Generic error handling code evaporates at compile time for Infallible.
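That "evaporation" can be sketched directly (the function name is mine):

```rust
use std::convert::Infallible;

// Generic error handling over an empty error type: the Err arm matches
// on a type with no values, so the inner match needs no arms at all and
// the whole error path disappears at compile time.
fn unwrap_never<T>(r: Result<T, Infallible>) -> T {
    match r {
        Ok(v) => v,
        Err(e) => match e {}, // Infallible has no variants
    }
}
```

Calling it looks like `unwrap_never(Ok::<_, Infallible>(5))`; there is no runtime check, because no `Err` value can ever be constructed.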
Right, but that's a subtly different type of infallibility: it allows the compiler to optimize instantiations of traits that are generic over error types by establishing that the error is impossible in a concrete implementation.
The other kind of infallibility is "the absence of an error encoding for a function means the function cannot fail". The Infallible type can't provide that, since a function can have an infallible `Err` case but still arbitrarily decide to panic at runtime.
Edit: in other words, Rust's type system isn't strong enough to encode "full" infallibility. We'd need some kind of effect typing for that.
It's opt in, like most things in rust. Any time you have an option, you can simply unwrap it to say "I'm asserting I know this is a 'some' value, and if it's perhaps not, then please panic". Forcing an unwrap in every single case would indeed kill performance.
No. There would be no difference in performance if it returned an Option and required the caller to unwrap. That's what `[]` is doing internally anyway. As are all the other functions I linked that are documented as panicking - they're doing tests manually and `panic!()`'ing if those tests fail anyway.
As I said in the original comment, the reason these functions panic is that the common case for them is where the caller is using parameters that ought to be valid already (either hard-coded or already checked). So it would be less ergonomic to require the callers to handle an error that they know won't happen.
And those optimizations apply regardless of whether the function checks the parameter internally and panics on failure, or if you check the parameter before calling the function and unwrap the function result. Again, there is no difference. There's really no need to argue about this. Feel free to godbolt it.
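The claim can be sketched concretely; this is roughly what the panicking slice operations do internally, in spirit (a sketch, not the actual std source):

```rust
// A bounds test followed by a panic on failure: writing the check
// yourself via .get() and panicking on None costs the same single
// branch as letting `v[i]` do it for you.
fn index_by_hand(v: &[i32], i: usize) -> i32 {
    match v.get(i) {
        Some(&x) => x,
        None => panic!(
            "index out of bounds: the len is {} but the index is {}",
            v.len(),
            i
        ),
    }
}
```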
I suppose I'm used to C/C++ compilers not going as far down the optimization rabbit hole as Rust seems to, where they often miss more complex optimizations like this in sufficiently advanced wrapper types etc.
Nice, love to see posts where people give their experience like this. It's a rare thing.
> the Cargo compiler
Really small thing, cargo is a build tool, not a compiler. `rustc` is the main compiler, cargo wraps that and provides helper utilities for stuff like package/ project management.
> Detecting an out-of-bounds access at compile time requires a deeper analysis of the code that would significantly slow down compilation time (which is already too slow IMO).
The compiler actually is likely doing this stuff already because it's trying to optimize away any branches. I bet it already can see in these basic examples that the panic is unconditional. There are crates that leverage this fact, like `no-panic`, to try to see if your code is panicking.
Also worth noting that "panic" is kinda like a runtime exception in Java and it can be caught. In fact it's automatically caught at the thread boundary - that's going to just be the main thread in your example.
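The thread-boundary behavior looks like this:

```rust
use std::thread;

fn main() {
    // A panic in a spawned thread is caught at the thread boundary:
    // join() returns Err instead of tearing down the whole process.
    let handle = thread::spawn(|| {
        let v = vec![1, 2, 3];
        v[99] // panics inside this thread only
    });
    match handle.join() {
        Ok(_) => println!("thread finished normally"),
        Err(_) => println!("thread panicked; the main thread keeps running"),
    }
}
```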
> This section is about a recent behavior I observed where our team woke up to one of our crates’ (dependencies’) dependencies starting to panic in production under specific conditions
Bummer! Especially for a crate as popular as reqwest. My suggestion would be to assume any code can panic and build around that. That means a few things.
1. It means to test your code. Integration tests are great for catching bugs like this, even a basic smoke test would likely catch it.
2. Use a control plane that will automatically restart services, monitor their health, and reverse deployments if they don't get healthy soon enough.
Ultimately, no matter the language, crashes happen. That said, I get that it's frustrating and no one likes a panic.
> A quick idiomatic alternative to dealing with this problem could be overriding Rust’s panic_handler but unfortunately this is only possible in #![no_std] projects.
FWIW people are thinking about how to solve this for the std case.
One option is capabilities. This is kind of like what `unsafe` is - in order to call 'unsafe' you have to be in an unsafe context.
There are other options as well. Anyway, the point is, I hear you and so do others - understanding where a panic might occur is definitely of interest to rust devs.
> Program binaries can be huge
Indeed, but not really inherently huge. You can get quite small - I suggest stripping your debug info; that's where a ton of the space goes.
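For reference, a size-focused release profile along these lines (a sketch using standard Cargo options; tune per project) can shrink binaries considerably:

```toml
# Cargo.toml
[profile.release]
strip = true          # strip symbols and debug info from the binary
lto = true            # whole-program link-time optimization
codegen-units = 1     # slower builds, smaller code
opt-level = "z"       # optimize for size rather than speed
panic = "abort"       # drop the unwinding machinery
```

Note that `panic = "abort"` changes behavior (panics can no longer be caught), so it's a trade-off rather than a free win.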
> On a more personal note, I wish to be more involved in shaping Rust’s future by making contributions to its software while keeping learning its concepts and intricacies.
Honestly, this is a good start, it's always important to let the people behind the language know what's working for you and what's not.
My final suggestion would be to consider asking for help in either the Rust discord, subreddit, or user forum.
I don't get the compile speed hate. Rust compilation is inconceivably fast to me. Like what are you incrementally compiling that takes you out of the dev flow?
Linking is often the slow part for large projects. Switching to the Mold linker can help some, but it can still be relatively slow on a lot of hardware.
Overall I agree with you though, it isn't slow enough on my hardware to interrupt my dev-flow.
It’s always the libraries that do it. It’s pretty easy to pull in enough library code that your own app code is nothing compared to compiling the web server and ORM you imported.
But even then I haven’t seen it as that much of an issue.
Depending on what part I touch, a recompile is 20 to 60 seconds. This is a project at work — an HTTP server that runs some stuff on an interpreted language we’ve developed in-house.
Not the end of the world, but a bit annoying nonetheless.
I have a Rust monorepo with about a dozen microservice apps and two dozen shared libraries.
The compile times are inconvenient, to put it mildly. Production release builds take approximately 40 minutes and are especially painful when I forget to codegen the SQL schema definitions (sqlx).
At some point I need to invest in CI devops and use Bazel build caching.
Pain aside, the ability to share code is glorious. Clion + Rust plugin does a tremendous job navigating it all.
On my phone a Rust program of significant size can take multiple hours to finish compiling, where a C program of comparable size would take about an hour or slightly less.
No reason to discard the language, for sure, and I don't avoid compiling from source even on this machine, but it's definitely worse than C.
Not the same person, but programming on phones with Bluetooth keyboards is very common for people learning in developing countries since everyone has a phone, and many do not have a laptop.
I suspect you misunderstood the word "bridging". I interpret that as FFI. Rust <-> C FFI has some tedium, but it's straightforward. The only complexity comes if you don't fully understand C.
Rust has excellent FFI support, so this statement is about as controversial as saying there is no great difficulty integrating 900k+ lines of C code from and into C++.
Rust will never be a contender for C, because Rust does not embody the same lasting concepts C does. Rust is an obvious competitor to C++, where complexity growing without bound abounds.
But for those of us looking for a C alternative, we have it, and it's Go. Is it a perfect replacement? I don't think so. But it is the closest spiritual successor the industry has.
"Go bears a surface similarity to C and, like C, is a tool for professional programmers, achieving maximum effect with minimum means. But it is much more than an updated version of C."
"Go is sometimes described as a ‘‘C-like language,’’ or as ‘‘C for the 21st century.’’ From C, Go inherited its expression syntax, control-flow statements, basic data types, call-by-value parameter passing, pointers, and above all, C’s emphasis on programs that compile to efficient machine code and cooperate naturally with the abstractions of current operating systems."
-Preface of "The Go Programming Language"[1][2][3]
"We’ll start with the now-traditional ‘‘hello, world’’ example, which appears at the beginning of The C Programming Language, published in 1978. C is one of the most direct influences on Go, and ‘‘hello, world’’ illustrates a number of central ideas."
- Chapter 1.1 Hello World, "The Go Programming Language"
> But for those of us looking for a C alternative, we have it, and it's Go. Is it a perfect replacement? I don't think so. But it is the closest spiritual successor the industry has.
I don't understand the "spiritual successor" part: Go intentionally broke ABI compatibility with the C world and intentionally does a lot of very un-C-like things: a large standard library, a GC'd runtime, a compiler toolchain that reimplements the "standard" toolchain, etc.
Go is a clear productivity, reliability, and security improvement over C. But I don't know if I would characterize it as a successor language, other than in the "came from the Bell crowd" sense. Which also applies to C++.
Edit: Go is also not well known for binary interoperability, which is one of C's main selling points.
I had this conversation before and I think I know what they mean; they don't mean it in a low-level, technical sense. It's meant more in the sense that Go is very much of a similar philosophy in terms of abstractions not getting in the way of your work. That is, if you're a moderately competent Go dev there's little to decipher looking at pretty much any codebase.
In other words, you'd spend most of your time familiarizing yourself with the code itself, not Go.
I think I understand that as a virtue of Go, but I don't see how it makes it a successor language: C famously has no abstractions, meaning nothing that gets in the way of your work and nothing that helps it either.
The most productive C codebases are the ones that have existed for years, where every possible utility and type has been built from scratch out of need; Go programmers are able to be productive from the get-go.
Yes, if Go is a 'successor' then obviously it has to offer something over C, like a standard library. However it is still very much a low abstraction, get out of my way language 'in the spirit of C'.
It's obviously not a true successor as it has a GC for one, but one way I like to think about it is that if Java is in some ways a more convenient C++ for application development, if you will, then Go would be that but a more convenient C, not C++
The 'C successor' comes more from the philosophical approach of Go's core team to the language design being similar to that of C, not it being a practical successor to C as such.
Okay, but we haven't really gotten to what about the "philosophical approach" of Go's design makes it a spiritual successor. It can't be the GC, as you've pointed out. It can't be the standard toolchain reuse (cf. Unix composability), or a universal ABI for other languages/ecosystems to integrate with either.
If it's just this "get out of my way"-ness, then my impression is that it's not as much a successor to C as it is just a good, modern, high-productivity language. You don't really see Go programmers reimplementing basic data structures (or stamping them out, over and over again) the way you do in C land.
"Go bears a surface similarity to C and, like C, is a tool for professional programmers, achieving maximum effect with minimum means. But it is much more than an updated version of C. It borrows and adapts good ideas from many other languages, while avoiding features that have led to complexity and unreliable code."
"Go is sometimes described as a ‘‘C-like language,’’ or as ‘‘C for the 21st century.’’ From C, Go inherited its expression syntax, control-flow statements, basic data types, call-by-value parameter passing, pointers, and above all, C’s emphasis on programs that compile to efficient machine code and cooperate naturally with the abstractions of current operating systems."
"All programming languages reflect the programming philosophy of their creators, which often includes a significant component of reaction to the perceived shortcomings of earlier languages. The Go project was borne of frustration with several software systems at Google that were suffering from an explosion of complexity. (This problem is by no means unique to Google.)
As Rob Pike put it, ‘‘complexity is multiplicative’’: fixing a problem by making one part of the system more complex slowly but surely adds complexity to other parts. With constant pressure to add features and options and configurations, and to ship code quickly, it’s easy to neglect simplicity, even though in the long run simplicity is the key to good software."
Simplicity requires more work at the beginning of a project to reduce an idea to its essence and more discipline over the lifetime of a project to distinguish good changes from bad or pernicious ones. With sufficient effort, a good change can be accommodated without compromising what Fred Brooks called the ‘‘conceptual integrity’’ of the design but a bad change cannot, and a pernicious change trades simplicity for its shallow cousin, convenience. Only through simplicity of design can a system remain stable, secure, and coherent as it grows.
The Go project includes the language itself, its tools and standard libraries, and last but not least, a cultural agenda of radical simplicity. As a recent high-level language, Go has the benefit of hindsight, and the basics are done well: it has garbage collection, a package system, first-class functions, lexical scope, a system call interface, and immutable strings in which text is generally encoded in UTF-8. But it has comparatively few features and is unlikely to add more. For instance, it has no implicit numeric conversions, no constructors or destructors, no operator overloading, no default parameter values, no inheritance, no generics, no exceptions, no macros, no function annotations, and no thread-local storage. The language is mature and stable, and guarantees backwards compatibility: older Go programs can be compiled and run with newer versions of compilers and standard libraries."
The authors' Frequently Asked Questions sections regularly refer to C.
C doesn't have a standard ABI, so I don't understand why you keep making statements as if it does.
As stated above: this is arguably a property of Go, but very definitely not a property of C.
The rest of this is "Go borrows good ideas from C," which nobody doubts. It doesn't establish the kind of successor relationship you've claimed. Most of the ideas mentioned also apply to Rust.
Apart from a paucity of syntax, in what sense does Go obey "less is more"? It's well known for having a large and high-quality standard library (it has an X.509 parser! Even Python doesn't have that!).
In many ways, the "less is more" property exists on separate axes for C and Go: C's syntax is relatively complex while Go's is relatively simple. C's standard library and runtime are minuscule; Go's are large.
It's all very academic and I don't have a strong opinion on it either way, but I would say C has a relatively simple syntax (no classes, no inheritance, few keywords, few symbols) while its semantics (i.e. how to use that simple syntax to produce correct software) are fairly complex.
In comparison, Go has both a relatively simple syntax and relatively simple semantics (with a few exceptions).
I guess people focus on the syntax part of Go to say it's like C and the semantics being easier is what makes it better/a successor? It's all rather vague and I'm not a proponent of this mantra myself, but that's how I understand it anyway.
With regards to the stdlib, that doesn't violate 'less is more' if you argue from a 'things to learn' point of view. A large stdlib doesn't really add to the complexity of the language feature set itself. It just provides utilities you don't have to write/hunt for.
In comparison Rust just added support for generic associated types. That's a language level feature, in that sense Rust is more akin to C++ than C.
Go added generics, but apart from that it doesn't tend to add 'big' language features.
And if you adopt this mindset and close one eye it can just about make sense to liken C and Go.
> It's all very academic and I don't have a strong opinion on it either way but I would say C has a relatively simple syntax (no classes, no inheritance, few keywords, few symbols in its syntax etc.) but its semantics (i.e. how to use the simple syntax to produce correct software) are fairly complex.
I agree that C's syntax is visually simple, but it's actually pretty complex from a parsing perspective: it's both context sensitive and binding sensitive, meaning that a correct C parser needs to be aware of the current parsing context's "live" bindings (including other functions) in order to resolve ambiguities like this:
`int foo(int(bar));`
(`foo` could either be a function or a local `int` variable.)
I can understand that ultimate mindset! Rust is certainly a much more complex language from a syntax and semantics perspective, and (I think) the community wouldn't want to claim the "C successor" label anyways.
The syntax is simple, but more importantly they have left out a lot of things other languages take for granted: inheritance, exceptions (panic is not oft used) and of course till recently generics.
Yes the std library is larger but I’d say that is a good thing and is not the language but the ecosystem.
I think Rust is a competitor for C in much the same way that C++ is a competitor to C. And I suspect it will eventually gobble up more of C's market share than C++ has, because it's a much better language (no disrespect to C++ - Rust is at least 20 years newer), the linux kernel being a good example of this effect.
There probably is still a niche for a simpler language than Rust or C++, but I don't see Go fulfilling that niche. Zig seems like the strongest candidate here.
Agreed, but I don't see Zig ever fulfilling that niche either as the compiler is too opinionated (like Go) in ways that are off-putting to a significant subset of developers, myself included.
Is this referencing things like “won’t compile if there are unused variables”? I agree this is a problem, but I have hope that this will actually get changed in zig, which seems to be more of a community run project than Go.
Yes, but based on the available information in the Github issue(s), I don't think it is going to change, although I'd be happy if it did.
I believe Zig and Go's adoption is essentially capped with the compiler being this pedantic, because it polarizes potential adopters. Some people love it, some people dislike it, and for the people who dislike it, that alone is enough to write off the entire language.
> But for those of us looking for a C alternative, we have it, and it's Go.
What exactly have you been using C for?
Go has a very heavy runtime and is garbage collected. It also cannot be embedded within other languages. Go has very different performance characteristics. Go on embedded might be possible but would be a big exercise for the reader.
I'm really not sure what you mean by Go is C alternative.
The only similarity between C and Go I see is the syntax and that there were no generics, and even that is no longer the same. The only other similarity is the need to cast to and from the equivalent of `(void *)` to get around the typechecker.
A language with a GC and a green thread runtime can perhaps replace C in specific applications, but I definitely wouldn't call it a spiritual successor to C. There's way too much extra stuff between you and the CPU. Odin and Zig seem most likely to take that crown one day. Maybe.
You might consider yourself a C programmer, but you'll see pushback on being accepted as one when you look at Go as a C alternative. For a language to be a C alternative it should offer, at a minimum, full control over memory and layout. The compiled instructions would map very closely to the code, with barely any runtime-related calls sprinkled in, and definitely no GC ones. Go is a great language, but vastly different from C.
>Oh, yeah, I missed that sorry.
>
>>Is this not a breaking change?
>
>Some would argue it's a "fixing" change.
yikes. This article really speaks to so many of the pain points I had with Go; the language itself is fine, but the standard library has so many just... odd... decisions that you don't notice until they blow your foot off when something off the happy path happens, like, as the author notes, a server that stops responding or a system that boots with a wall time of the Unix epoch.
Depends on what you think C is. If you think it's a simple language, then yes, Go is reasonable to view as C's successor. But if you think of C as a language for writing OSes and doing low-level programming, I'm not sure that Go is there.
What lasting concepts of C do you have in mind? I think Go if anything is a contender for Python: a significant number of use cases that require C e.g. embedded, hardware interaction, kernel etc. are really not good use cases for Go at all.
Go is the language i use the most and in almost no sense would I consider it a replacement or contender for C. If anything it's replacing things in the opposite direction, I rarely use python now.
C is used for low level stuff e.g. embedded, OS, drivers. Go is GC'd which is a huge barrier. This is honestly one of the most bizarre takes I've ever seen on Rust vs Go. If I had to bet I'd say Rust will never replace C but it won't be because Go did so instead.