
Goals and Priorities for C++ - cyber1
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2020/p2137r0.html
======
SiVal
I find the combination of 1) C++, 2) a goal of "Code that is simple and easy
to read, understand, and write", and 3) a non-goal of backwards compatibility,
to be strange.

The reason C++ is the federal tax code of programming languages is that it is
the accumulation of decades of ideas that seemed good at the time--some even
were--where each better way to program ended up as the lifeblood of some
special interest group that can never be removed, only piled on top of.

By subsetting C++, you can customize it into almost anything you want with
just a few exceptions. You can't meaningfully simplify it in a backwards
compatible way without breaking most subsets.

So these people are saying it should be simplified in a non-BC way, the only
real way to do it, that preserves their own favored subset.

That's what most of us call a new programming language. Trying to turn C++
into !C++ guarantees they'll be fighting over this for years. With the
resources Google has, I don't understand why they don't just throw off all
C++-based constraints and use those years to create exactly what they need.
With half the PL PhDs on the planet in their employ, you'd think somebody
would be available. (Maybe even enough for multiple, parallel teams req'd to
share ideas with each other.) It would take a few years to mature enough to
rely on, but they'd be spending those years in the C++ cmte debates anyway. A
new language called "non-BC C++" will need new tools, new libs, etc., anyway,
and they'll have the current C++ to keep using until their alternative is
ready.

Why not just create that simple, easy, customized-for-server apps language
from scratch instead of making it by breaking C++?

~~~
mehrdadn
Yeah. Literally the only thing I can think of off the top of my head that you
can probably remove from C++ without significant damage is virtual base
classes, and even that I'm not really sure about. (Curious if anyone here has
had much of a use for them.) But anything else seems like it would just
fundamentally destroy the language for various slices of its users.

~~~
w0utert
Maybe I’m misunderstanding, but by virtual base classes do you mean base classes
with strictly pure abstract methods that can be used with virtual inheritance?
Because in that case there is a very straightforward and useful use case for
them, which is composing interface classes using multiple inheritance.

For example in some piece of graph-based code I’ve been working on recently,
all types returned through public interfaces are pure abstract interfaces.
There are different reasons for this, one of them being the difficulty of
automatically wrapping the implementation classes themselves for Python and Java
with SWIG, because they use CRTP and other template constructs that are
fundamentally incompatible with automatic script interface generation. There
is a base interface class INode with an implementation class template Node<T>,
but there are also derived interface classes that add additional interfaces to
INode, such as ICompositeNode for example. Without virtual base classes it is
impossible to have an implementation class CompositeNode<T> that derives from
both ICompositeNode (for the interface) and Node<T> (for the implementation),
because it is ambiguous. Virtual inheritance solves this.

I never considered this a very special or difficult use case; it seems like
pretty much any kind of component-based programming using abstract interface
classes would require this sooner or later?

~~~
mehrdadn
Can't you just cast though? Like this:
[https://gcc.godbolt.org/z/Mq74Bo](https://gcc.godbolt.org/z/Mq74Bo)

I get that it's a bit of an _inconvenient_ workaround, but it doesn't seem
_impossible_ to handle? (At least as far as I've thought it through. I might
be missing something though.)

~~~
w0utert
I would have to go back to the code to see what the difference is with your
example, but I think the problem was with overload resolution, where the base
interface/implementation classes have a function by the same name as the
derived interface/implementation classes but with a different signature. The
compiler error appeared already at the declaration site, and not at the point
where the classes were instantiated.

Maybe I'm not even fully grasping the problem myself, but in any case virtual
inheritance on the interface classes resolved it ;-S

------
brandmeyer
This proposal should read "Google's Goals and Priorities for C++", seeing as
how almost half of the authors are from that company.

ABI stability? Backwards compatibility? Shipping binary code? The classic Unix
compiler&linker model? Those are some of my favorite features of C++ over the
competition. Just because GOOG chose to abandon them in their processes
doesn't mean the rest of us should be forced to.

~~~
digitalcapybara
The first paragraph of the abstract:

"This paper describes the goals and priorities which the authors believe make
C++ an especially effective programming language for our use cases. That said,
our experience, use cases, and needs are clearly not those of every user. We
aren’t pushing to directly build consensus on these points. Rather, this is
presented as a vehicle to advertise our needs from C++ as a high-performance
systems language."

I would say they are quite transparent about that...

~~~
mehrdadn
>> This proposal should read "Google's Goals and Priorities for C++"

> I would say they are quite transparent about that...

I don't even see the word "Google" anywhere?

~~~
uluyol
I don't think the intended audience of this document is Hacker News; it's the
C++ language committee. So the question is whether the members of the
committee would recognize the authors as belonging to Google (which I assume
they would).

------
guggle
As a non-C++ programmer who has to deal with C++ from time to time, I don't
really have a problem with the language (yes, it's huge, but there is no need
to use everything) but with the tooling. In particular, it's never clear to me
how I should deal with dependencies and build systems; it's all different from
project to project and often feels like a kludge coming from languages where
setting up the environment is not much more than one or two commands. Even
dealing with Maven files and the Maven Central repository felt easier than most
C++ projects I had to work with.

Instead of C++ we hear a lot about Rust these days, with a focus on performance
and reliability. But I suspect its rate of adoption owes a lot to the
productivity aspect (the language itself has non-trivial concepts and its fair
share of syntax cruft, from what I saw): having integrated tooling and
dependency management, all of it a de facto standard for the language. Same for
Go: it's just very easy to start a project, add a library, compile a
project... everything is included.

I guess it's probably not a priority for experienced C++ programmers as they
are probably used to it, but I'm sure more people could build stuff in C++
without those barriers.

~~~
cmrdporcupine
The tooling is improving. It's so much better than it was. With CLion I have a
modern refactoring IDE of excellent quality (just don't try to use it with
large code bases like Chromium, though, ahem). Build systems have improved
remarkably. We have two excellent open source compilers.

I actually enjoy working in modern C++ much more than I did in Java, which I
did professionally for almost 10 years before this.

I like the idea of Rust, and have followed it since the earliest days when
Graydon proposed it. I like the syntax and the tooling. But every time I begin
a project in it, the cognitive overhead of the memory safety features just
throws me for a loop. I have no doubt that if I were to dig into a large and
established code base using it, it would click within a few days, but starting
on my own... I lose focus quickly.

I do like Zig, though. It is a very nice C++ alternative.

~~~
twoquestions
> I actually enjoy working in modern C++ much more than I did in Java

What do you like about working in C++ more than Java?

Generally, I hear from people working in C++ that it's the only tool for their
job, and that they only use it because no other tool measures up (in a
despairing tone). If you don't mind my asking, what do you use it for?

~~~
neutronicus
I am not the person you're replying to, but in my experience / opinion if you
want to do distributed high-performance numerical computing, executing on
heterogeneous hardware, C++ is really far and away the best language to get
this done. If you're CPU-only, maybe Fortran is a better option but when GPUs
enter the mix I don't see how you're going to have an easier time getting the
requisite performance with anything else.

I mean, like, you can interface with TensorFlow via Python but the big-kid
parts of the implementation are in C++ / CUDA.

------
hsaliak
Fuchsia makes the list of OSes to prioritize, but not the *BSDs. This doc is
biased toward the interests of Google. Which is OK. They should just state that
upfront rather than saying “our use cases” and leaving it to the reader to
figure it out.

~~~
why_only_15
macOS is arguably a BSD

~~~
saagarjha
Nominally.

------
kazinator
Okay, humor me.

What are you going to do if you declare that implementations must be little
endian? (It seems that's where that particular goal is headed.)

Will this be well-defined behavior?

    
    
       // Writing a 1 to u64 requires that u32 will read 1,
       // because everything is little endian!
    
       union u {
          uint32_t u32;
          uint64_t u64;
       };
    

Or, if not, what's the point?

What's the good in it, other than that: whereas a big endian machine can still
have a C++ compiler, that compiler just can't be _called_ conforming any more?

Nothing changes other than classification. Nonportable code depending on
little endian continues not to work on that machine, though now that code
might be blessed as "strictly conforming".

Someone wanting code to work on that machine will make it work the same way as
before, and quite possibly keep their code strictly conforming. Like before,
it will be possible to express code without endian dependencies, and that will
work on those "broken" implementations that support big endian systems.

What happened to the idea that systems require programming languages, not the
other way around?

~~~
dodobirdlord
> What's the good in it

Compilers can be made simpler, freeing up the time of expert compiler authors
to work on improving their compilers in other ways. This lines up with some of
the other listed goals:

> Syntax should parse with bounded, small look-ahead.

> No semantic or contextual information used when parsing.

i.e. drop features from the language that significantly complicate the
compiler.

> What happened to the idea that systems require programming languages, not
> the other way around?

Hardware got very standardized and software got very expensive.

~~~
kazinator
[The middle two quoted items did not appear in any version of my comment; my
comment is very distant from the area of parsing.]

> _Compilers can be made more simple_

In relation to this issue, I do not see how. Today, someone can make a C++
compiler for a little endian system, or a retargetable one only for a group
of little-endian systems, without any additional difficulty coming from the
fact that C++ can also be implemented on big endian. Quite literally, that
compiler developer need exert zero brain cycles even thinking about big
endian.

C++ doesn't require implementors to have anything to do with big-endian.

A language standard that doesn't talk about endianness at all is smaller and
simpler than one which specifies it, because that represents extra detail
which generates extra requirements that require more sentences.

For C++ to "support" big-endian, all it has to do is not mention it: not give
any requirements about it. That's not something that then needs to be dropped
at any time.

A compiler project can drop big-endian; that's obviously different. ISO C++
isn't a compiler project, though.

~~~
dodobirdlord
The two middle quotes come from the article. They illustrate that the authors
are interested in removing aspects of the language that contribute to tooling
complexity.

The compilers that the authors care about are a small number of open source
compilers that have very comprehensive support, and consequently are very
complex, probably just gcc and clang. With regard to this perspective,
something like “lines of text in the standard” isn’t a useful metric. Few
people are going to read the entire standard, so there’s little harm in making
it longer, but many people are going to work on large compiler projects, and
many, many people are going to use the compilers these projects produce. The
standard can be made more complex in ways that reduce the required complexity
of comprehensive optimizing compilers.

~~~
kazinator
Numerous instances of wording in the article strongly suggest that the authors
seem to be working under the confusion that ISO C++ is a compiler project,
rather than a language specification.

> _The compilers that the authors care about are a small number of open source
> compilers_

If so, that is an unacceptable position for people on an ISO committee for
that programming language.

~~~
dodobirdlord
> Numerous instances of wording in the article strongly suggest that the
> authors seem to be working under the confusion that ISO C++ is a compiler
> project, rather than a language specification.

There is no possibility that any of the authors are working under this
confusion. The language specification heavily influences compiler projects
because compiler authors tend to target standards. The argument that authors
can just write their own non-standard compiler doesn't generally hold much
water. Moreover it is extremely common for the standard to be written
specifically to assist compiler authors. Consider the numerous instances of
undefined or implementation defined behavior.

> If so, that is an unacceptable position for people on an ISO committee for
> that programming language.

The purpose of a standards committee is to standardize, not to standardize
anything in particular. If the committee can be persuaded that these use cases
are important to a large body of C++ programmers and that the alternative may
be a split in the community, the committee may be inclined to adopt some of
these topics as areas for improvement.

~~~
kazinator
> _not to standardize anything in particular._

Oh no, no. There is something particular to standardize: namely something that
is already out there and working.

Secondly, if there are multiple such somethings that are out there and
working, but are not compatible, then this is where the standard is really in
its own element, to help iron out that situation and improve interoperability.

In this second area, the standard may engage in a bit of invention. Pure,
unprompted invention is something ISO standards should eschew.

------
CoolGuySteve
> 2. Non-goals

> 2.1. Stable language and library ABI

> 2.2. Backwards or forwards compatibility

> 2.3. Legacy compiled libraries without source code or ability to rebuild

For fuck's sake, C has been stable for decades, but somehow C++ just can't
manage?

This is such an obnoxious attitude. At least let us automatically generate a
set of C-style functions that take an opaque void* representing a C++ class
instance if you can't be bothered to do the work.

~~~
google234123
Calm down, my friend. Why should C++ be exactly like C? There are reasons that
C is a declining language. Also, I don't get why you think they have an
obnoxious attitude. They do go into detail on each point; I don't think your
characterization is fair.

~~~
_bxg1
If anything it seems like C++ is the one in decline. C is still the only
option for lots of use-cases; C++ is a) succumbing to feature-bloat and b)
having different parts of its userbase siphoned off by Go, Rust, Swift, even
C# (games; both Unity and Godot).

~~~
pjmlp
The only things keeping C around are FOSS UNIX clones and embedded developers
who are religiously stuck with C89 and their PIC compiler extensions.

Even Arduino like boards use C++ nowadays.

Unity and Godot have their graphics engine written in C++.

Rust, Swift are dependent on LLVM, written in C++.

Go, it isn't really in the same league of programmer expressiveness.

~~~
worik
I have spent the quarantine hacking in C. On Freeciv.

Still used in lots and lots of places

~~~
hannofcart
Freeciv fan here. What are you working on in it?

~~~
worik
Forked Attitude AI and making a rules based AI

------
jonstewart
Most of this I agree with. Frankly, though, Stroustrup’s design goals are
better: multi-paradigm programming, you don’t pay for what you don’t use, etc.

Some points:

* There are still tons of 32-bit machines out there—old Windows machines
chugging along, usually disconnected (thankfully), and you’d like to be able
to use your _current_ codebase to target them.

* If C++ is to focus on performance, it needs much better tooling around UB, needs to be a bit more permissive of old behavior that now triggers UB, with formal semantics, AND it needs to define semi-portable SIMD vector operations. Getting the utmost performance out of modern CPUs entails using vector operations.

* It also makes me sad to say goodbye to big endian.

~~~
gumby
Though I cut my teeth on BE machines, I know that’s in the past (as are their
non-power-of-two word lengths). But 64-bit machines is an unnecessary jump.
Even 32-bit microcontrollers are expensive for many applications.

~~~
gpderetta
I doubt that C++ dropping 32 bits is ever going to happen.

------
wscott
C++ needs to just steal the idea of Epochs from Rust
[https://vittorioromeo.info/index/blog/fixing_cpp_with_epochs...](https://vittorioromeo.info/index/blog/fixing_cpp_with_epochs.html)

By default, everything is backward compatible, but to use new features you
need to declare this compilation unit as being part of C++23 (or 26, 29). Then
code that uses the new stuff also ignores the old and can have different
rules. But it can still be combined with legacy code and libraries. You know
at compile and linking time when crossing boundaries and can do the right
thing. This actually combines nicely with modules since we already need a new
build infrastructure to take advantage of modules.

~~~
plq
Nothing is stopping you from passing different -std=c++XY arguments to
different compilation units in your codebase. I don't know about other
compilers but object files compiled with msvc, gcc and clang (the ones I
happen to work with) using different -std values (starting with c++11) are
compatible* and the respective teams working on these compilers reportedly
make conscious efforts to make sure this remains the case.

CMake for instance makes this very easy with its set_target_properties() call.

* Of course there are caveats and corner cases. See eg. [https://stackoverflow.com/questions/46746878/is-it-safe-to-l...](https://stackoverflow.com/questions/46746878/is-it-safe-to-link-c17-c14-and-c11-objects) But these seem like they could easily be avoided in real-world use-cases.

~~~
wscott
Yes I can add new features to some files or I can compile with c++17 and link
to an old C library. What I can't do is remove features or change the default
calling conventions/ABI and still link with old code. If you declare in the
interface and in the code that this is new code with new rules then this is
possible. When you cross library boundaries the compiler adjusts. That is what
Rust allows and what C++ could do.

------
dwrodri
Interesting post. I'm bummed that there isn't more interest in bringing C++ to
ASIC/Accelerator scene from the committee. I think projects like Nvidia's
Thrust[1] show that C++ is poised to be fantastic medium for software
developers to break into experimenting with FPGAs, GPUs, and potential future
commercially available equivalents to the TPU. There is some really cool
cross-platform software infrastructure that is still under active development.

[1] [https://thrust.github.io/](https://thrust.github.io/)

[2] [https://mlir.llvm.org/](https://mlir.llvm.org/)

~~~
jcelerier
> Interesting post. I'm bummed that there isn't more interest in bringing C++
> to ASIC/Accelerator scene from the committee.

isn't that purely an implementation issue ? e.g. both Apple and NVidia have
brought C++ to the GPU, one with Cuda and the other with Metal... that did not
require any help from the committee.

~~~
dwrodri
No and yes. Is it strictly the language’s responsibility to deal with the
implementation? Not really. I agree in theory.

In practice, Google is behind this statement. Nvidia is behind this statement.
Two of the largest supercomputing facilities in the US are backing this. The
document heavily implies that these are the goals for C++ based on their use
cases. These are also organizations that have had a massive role getting the
accelerator space to where it is today.

Sure, MLIR is technically a project under the LLVM foundation, but isn’t it
mostly Google employees who were working on it? Lattner has moved on from
there and is now setting his sights on RISC-V, it seems.

As someone who likes modern C++ and is interested in new, open hardware, I’m
really happy RISC-V is being made a priority, along with bare metal
compilation. But I’m also confused, and here’s why.

I agree entirely that CUDA support for modern C++ is moving along nicely,
especially since CUDA 11 supports C++17. However, a good chunk of these
authors are Nvidia employees and now they’re implying their “use case” for C++
isn’t associated with accelerators?

------
Animats
It's amusing to see the C++ committee finally taking safety seriously. I tried
to get some interest in safety from there many years ago, but they were off
into template la-la land back then.

------
mianos
Not to be negative, but this reminds me of Python 3: they want to make a
largely, but not completely, compatible new C++. Like Python 3, it would be
great to get a bunch of things fixed and force people to use the new, better
way of doing things. But this comes with complications.

I am a C++ and Python developer. I did C++ for 20 years, then Python for 9,
then C++14/17 for 3. I really like the new C++ and think it could be made into
a better language while still retaining the deterministic performance.

What we don't want is a Python 2 to Python 3 situation. That might mean
calling it a new language.

~~~
Pxtl
Honestly, I think Python 3 broke too little. I mean, if you're going to break
backwards compatibility, make it worth it. That should've been the time to
lock down the language enough that PyPy and other non-CPython interpreters
could finally thrive.

~~~
CJefferson
I agree. They changed 'print', which broke basically every non-trivial program
ever written, but then didn't clean up so many other smaller issues.

------
saagarjha
> We of course continue to care that C++ supports all of the major, modern
> platforms, the hardware architectures they run on, and the environments in
> which their software runs. However, this only forms a fairly small and
> focused subset of the platforms that have historically been supported and
> motivated specific aspects of the C++ design. We would suggest this be
> narrowed as much as possible.

:(

~~~
AnimalMuppet
Well... take abandoning non-8-bit byte sizes. Are there _any_ such
architectures where someone was planning on shipping a C++23 compiler, but
this would make it difficult or impossible? My impression is that for such
platforms, you'd be lucky to get C++11.

Non-little-endian might be a bigger issue - it would rule out quite a few
embedded CPUs.

~~~
phkahler
If your code depends on endianness you're either over-optimizing or doing
something wrong. At least that's what I've learned over many years of reading
the lessons of many people.

~~~
dodobirdlord
If you're going to deploy your protocol buffer parsing library to literally a
million servers you may be prepared to optimize the last shred of performance
out of it. The authors have an unusual use case.

------
cozzyd
It's clear these authors don't care much about the embedded community, which
is too bad, since C++ is basically one of the two choices you have (the other
being C). I prefer C on embedded (just because it's easy to accidentally
allocate memory in C++), but there are large embedded ecosystems in C++
(Arduino, mbed).

~~~
ausjke
What made you think they don't care about the embedded space? Just curious.
Yes, I use C++ for embedded, though much less compared to C.

~~~
cozzyd
From the article:

 _A specific open question is whether supporting 32-bit hardware and
environments (including for example x32) is an important goal. While some
communities, especially in the embedded space, currently rely on this, it
comes at a significant cost and without significant advantage to other
communities. We would like to attempt to address the needs of these
communities within a 64-bit model to simplify things long-term, but this may
prove impossible._

This is not just microcontrollers (hardly niche, but obviously a different
performance envelope), but also plenty of 32-bit Linux single-board computers
(e.g. BeagleBone Black). Not to mention the earlier mention of endianness
other than little.

~~~
ausjke
this is crucial, thanks! I'm going to forgo C++ efforts due to this
decision/intention.

on the other hand, Rust does not support 32-bit ARM at tier 1 either:
[https://forge.rust-lang.org/release/platform-support.html](https://forge.rust-lang.org/release/platform-support.html)

golang so far still supports 32-bit, but who knows for how long; after all,
it's Google, who can do anything they want. Plus golang is too fat for many
embedded boards.

Thank goodness we will have C sticking around for many decades in the long
run, along with ash, and probably Lua 5.1 for scripting.

~~~
ausjke
also I hope this applies only to a future new C++ standard, i.e. that C++17
and C++20 will keep full 32-bit support; that should suffice for one decade, I
assume.

otherwise we need to get rid of C++ for 32-bit embedded fast.

------
twoodfin
_We believe that many divisive issues in the committee come from a
disagreement on either stated or assumed priorities._

I’m curious: Which divisive issues would have had obvious resolutions if there
had been broad consensus on the C++ committee to adhere to these goals &
priorities?

~~~
dodobirdlord
Perhaps standardizing the byte as 8 bits, for one example.

Edit: Actually, they specifically call out little-endianness as a priority.

~~~
twoodfin
Surely the standard wouldn’t drop support for big-endian? And I don’t see what
you’d gain in exchange. Defined semantics for casting int* to short*? Woo?

These are smart, system-software-focused engineers, so there must be some
interesting directions being blocked off by insistence that the standard be
byte-order agnostic (and word-size agnostic, since they’re so keen on
64-bit)...

~~~
dodobirdlord
As with most of the things the authors describe, the advantage would be a
simpler compiler and a simpler language for static analysis. There are a lot
of things in the standard that the authors just don’t have any use for that
they would prefer to see removed in the interest of faster compiles, easier
compiler development, and more performance optimizations. Removing support for
big-endian might actually significantly speed up compilation. I’m sure it adds
a ton of branching to hot code paths to check some architecture endianness
enum every time the compiler wants to reason about things like bit shifts and
known-bits.

~~~
gray_-_wolf
Could you not just produce basically two compilers, one for big and one for
little endian? I think that could be done with templates and such. Sure,
bigger code size, but that does not seem like something that would bother the
people at Google (as far as I can tell, looking at golang).

~~~
dodobirdlord
Possibly, I'm no professional compiler author. But I imagine doing so would
require templatizing a lot of logic, and would significantly slow down the
compilation of the compiler itself, which is an important metric for
development velocity of compiler improvements. (The linux kernel is famous for
taking about the same amount of time to compile through its history. If
compile times get long enough people start paying attention to optimizing
compile time instead of performance.)

It was eye-opening when I read this blog post and learned that LLVM spends
about 0.4% of all compilation time reading the bytes of the string
"null-pointer-is-valid" over and over. Hot code paths can be pretty hot.

[https://nikic.github.io/2020/05/10/Make-LLVM-fast-again.html](https://nikic.github.io/2020/05/10/Make-LLVM-fast-again.html)

------
1over137
Interesting that they didn't list any of the BSDs in their list of OSes they
think should still be supported.

------
stephc_int13
From my perspective, people who wrote this have a lot of very good ideas and
principles about software engineering.

But the impression it gives me is that C++ is not fulfilling the expressed
requirements, and that it is not currently moving in the right direction.

I read it as quite a strong criticism of the current state of the language...

~~~
Koshkin
To me it looks like the paper describes exactly where C++ has been going since
its inception to this day.

~~~
jejones3141
I'd be very interested in any example of C++ having "Code that is simple and
easy to read, understand, and write" as a goal.

~~~
titzer
You mean you can't do overload resolution, template specialization and
selection, implicit conversions, and move constructor semantics all in your
head at once?

------
roca
Previously discussed at
[https://news.ycombinator.com/item?id=22702041](https://news.ycombinator.com/item?id=22702041)

------
fwsgonzo
[https://gist.github.com/fwsGonzo/6b12f502a3873725c17f44dc5e2...](https://gist.github.com/fwsGonzo/6b12f502a3873725c17f44dc5e206c2e)

Very preliminary benchmark of C++ vs Rust on RISC-V. C++ is just in its own
league. Alone.

Having been at CppCon I can attest to the atmosphere of performance first in
C++.

~~~
steveklabnik
What is the context here? This doesn’t describe the test at all, it’s just a
bunch of numbers.

~~~
fwsgonzo
The explanation is long but I have written about it here:

[https://medium.com/@fwsgonzo/adventures-in-game-engine-programming-part-3-3895a9f5af1d](https://medium.com/@fwsgonzo/adventures-in-game-engine-programming-part-3-3895a9f5af1d)

The benchmarks are done here:
[https://github.com/fwsGonzo/script_bench](https://github.com/fwsGonzo/script_bench)

I am trying to reach out to someone who is really good at Rust to see if
there's ways to balance the scales.

One of the things I am dealing with:
[https://gist.github.com/fwsGonzo/ff0b7f41c521eb0cc4212f3c42f...](https://gist.github.com/fwsGonzo/ff0b7f41c521eb0cc4212f3c42f0bdc4)

~~~
steveklabnik
Seems like you're getting the bounds check, which makes sense, there's no real
way to eliminate that with this simple example.

> I am trying to reach out to someone who is really good at Rust to see if
> there's ways to balance the scales.

If you use reddit, posting to /r/rust will be super helpful. If you don't,
users.rust-lang.org is a decent spot. If you tweet, I can retweet from the
rust account.

------
gok
The first paragraph for the ABI non-goal section attacks a bunch of straw men
about why a stable ABI is bad, then the second paragraph says they would
actually be ok with doing what people actually want when they say they want a
stable ABI.

~~~
InfiniteRand
My read is that this is a matter of not wanting to be forced into maintaining
a stable ABI vs. admitting that a stable ABI is good and useful and
encouraging other people to handle it.

Less charitably, it is saying, yes stable ABI is good but it is not our
problem.

------
cyber1
Maybe this is the time to design a new language with the goals described in
this paper. Google has enough resources and experience to do this in the near
future.

------
jimbob45
I feel like you could make a near-C++ that could really get traction.
Literally compile to C++ (and then compile from there) and just fix a ton of
cruft in the language including:

- Stealing C# attribute notation instead of having the ridiculous __stdcall
sort of convention

- Making a real fucking keyword for pure virtual functions instead of = 0

- A real keyword for include guards

- Function pointer syntax sugar

I now realize I'm describing D but D went too far. I just want like three nice
changes that still allow near unchanged compilation to C++.

~~~
MaxBarraclough
As others have said, it's not worth breaking away from the standard language
only to make a few shallow changes. It needs to be a fairly radical
improvement or it's just not worth it. If Kotlin were too similar to Java, it
couldn't have offered much reason to adopt it.

If you want a modern language that compiles down to C (not C++), there's Vala.
It even has some kind of async support now.

