This is so true. I feel like I've learned more about programming in the last two years making Cassette[1] than in the past decade of professional software development.
Every developer should make a language at some point. I want to see everyone's weird, half-baked languages! I want to hear about what other people struggled with, and the tradeoffs they've made!
This looks neat. I tried VB.NET a few weeks ago, and it felt mostly pointless since almost all resources are written in C#, so you have to translate from C# to VB.NET any time you want to do anything, which can be a bit of a pain.
But one thing I really liked was the "begin end" type of code blocks.
There's also a lot less visible sky in a lot of places in a city—it only takes four or five stories to limit your view and block the moon a lot of the time. And in my experience, it's less the brightness of lights than the sheer volume of light from a city that reduces the apparent brightness of the moon and stars. I'm always amazed how you can see the halo of a city over the horizon at night just from its lights.
But has that been your experience with the full moon specifically? That thing is surprisingly bright—I'm thinking less about direct visibility of the orb than I am about just the general illumination in the area.
And again—the effect size is much smaller for the urban group than the non-urban groups, so I'm not suggesting that the light is starkly noticeable if you're not looking for it, just that it's probably detectable by the human eye and processed from there unconsciously.
In my mind you'd have to literally be unable to detect the difference in light between full moon and no moon (with a sensor of similar sensitivity to our eyes) before speculating about human gravitational senses becomes a rational move. We know we have eyes, and we know that what they're directly wired up to frequently surprises us.
For some background on the quest to discover new elements, I highly recommend this video: https://www.youtube.com/watch?v=Qe5WT22-AO8. The main story about Ninov's fraud is pretty interesting, but the beginning gives a good overview of the recent history.
As soon as you said Ninov, I knew whose video that was. BobbyBroccoli seems to make rather good documentaries on weird subjects, and they seem to be pretty well researched.
> "a + ogonek accent" and another (properly) sent "a with ogonek" (these print the same but are semantically different!)
How can these possibly be semantically different? Isn’t the point of combining characters to create semantic characters that are the combination of those parts?
There's a semantic difference between "accented letter" and "different letter that happens to visually look like another language's accented letter".
"Ą" in polish is not "A" with some accent. And the idea behind unicode was to preserve human written text, including keeping track of things like "this is letter A1 with an accent, but this is letter A2 that looks visually similar to A1 with accent but is different semantically". Of course then worries about code page size resulted in the stupidity of Han unification, so Unicode is a bit broken.
Unless there's some nuance I'm missing, I think you're reading too much into the word "accent".
Especially because the codepoint is actually called "Combining Ogonek".
And for anyone writing in Cyrillic, it's actually more accurate to use the combining form, even as its own letter, because the only precomposed form technically uses a Latin A.
But my main point is that I do not think there is supposed to be any semantic difference in Unicode based on whether you use precomposed or decomposed code points.
"Ą" is a separate letter in polish alfabet, not an accented variant of "A".
There are writing systems where combining accents are used to represent just variation on a letter. Use of combining characters for "Ą" (and "Ć" and "Ł" and many other so-called "polish letters") is, at best, a historical artefact of trying to write them in deficient encodings.
It doesn't matter that it's a separate letter in an alphabet; you're denying the obvious - it IS an accented (or ogonek'ed) variant of A, and Unicode gives you two ways to encode it: a single code point for the precomposed variant, or composing it from two code points.
There is no semantic difference, just an encoding one. The end result looks the same and means the same thing (well, to a point - it still depends on context, like which language you mean - but within the same context it's the same thing, and Unicode even has rules to treat the two forms as equivalent, e.g. in search).
And precomposed forms are just the same historical deficiency - you could just as well have designed a more compact encoding with no precomposed letters, only combinations.
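To make the encoding-vs-semantics point concrete, here's a tiny C sketch (my own illustration, not from any spec) showing that the two encodings of "Ą" are different byte sequences that Unicode nonetheless treats as canonically equivalent:

    #include <stdio.h>
    #include <string.h>

    int main(void) {
        /* Precomposed: U+0104 LATIN CAPITAL LETTER A WITH OGONEK */
        const char *nfc = "\xC4\x84";
        /* Decomposed: U+0041 LATIN CAPITAL LETTER A
                     + U+0328 COMBINING OGONEK */
        const char *nfd = "\x41\xCC\xA8";

        printf("NFC is %zu bytes, NFD is %zu bytes\n",
               strlen(nfc), strlen(nfd));
        printf("byte-equal? %s\n",
               strcmp(nfc, nfd) == 0 ? "yes" : "no");  /* prints "no" */
        /* Both render as "Ą"; a naive byte comparison says they
           differ, which is exactly why search and matching are
           supposed to normalize (NFC or NFD) first. */
        return 0;
    }

Same letter, two byte-level spellings; normalization is what lets search and comparison treat them as one.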
Personal choice, is what I'd say. You can get a lot of mileage out of implementing a dynamic language with NaN boxing[1].
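For anyone who hasn't seen the trick: a 64-bit double whose exponent bits are all set is a NaN, and everything outside the bits that mark it as a quiet NaN (roughly 50 bits under the mask below) is free payload, so non-double values can hide inside one. A minimal sketch (the mask and tag names are my own, not from any particular runtime):

    #include <stdint.h>
    #include <string.h>
    #include <stdio.h>

    typedef uint64_t Value;

    /* Quiet-NaN mask: exponent all ones plus two mantissa bits, so
       hardware-generated NaNs never collide with boxed values. */
    #define QNAN    ((uint64_t)0x7ffc000000000000)
    #define TAG_INT ((uint64_t)0x0001000000000000)

    static Value num_val(double d) {   /* doubles are stored as-is */
        Value v;
        memcpy(&v, &d, sizeof v);
        return v;
    }
    static int is_num(Value v) { return (v & QNAN) != QNAN; }
    static double as_num(Value v) {
        double d;
        memcpy(&d, &v, sizeof d);
        return d;
    }

    /* Integers live in the low 32 bits of the NaN payload. */
    static Value int_val(int32_t i) {
        return QNAN | TAG_INT | (uint32_t)i;
    }
    static int is_int(Value v) {
        return (v & (QNAN | TAG_INT)) == (QNAN | TAG_INT);
    }
    static int32_t as_int(Value v) { return (int32_t)(uint32_t)v; }

    int main(void) {
        Value a = num_val(3.14), b = int_val(-42);
        if (is_num(a)) printf("%f\n", as_num(a));
        if (is_int(b)) printf("%d\n", as_int(b));
        return 0;
    }

The payoff is that every value is one machine word, doubles need no unboxing at all, and pointers can ride in the payload the same way.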
It really depends on the kind of language you're trying to build an interpreter for, and what purpose it could serve. For dynamic languages, I'd say looking at the core types of Erlang is a great place to start (Integers, Symbols, Functions, etc.). For a statically typed language things get more complex, but even with just numeric types, characters, and some kind of aggregating type like a Struct you can build up more complex data structures in your language itself.
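For the dynamic case, the core can be as small as a tag plus a union. A hypothetical sketch (names invented for illustration) along Erlang-ish lines:

    #include <stdint.h>

    typedef struct Value Value;

    typedef struct {
        Value   *items;
        uint32_t count;
    } List;

    struct Value {
        enum { V_INT, V_SYMBOL, V_FUNC, V_LIST } type;
        union {
            int64_t  integer;      /* V_INT */
            uint32_t symbol;       /* V_SYMBOL: index into intern table */
            Value  (*func)(List);  /* V_FUNC: native call, for brevity */
            List     list;         /* V_LIST */
        } as;
    };

Strings, maps, and user-defined types can then be bootstrapped on top of these, which is roughly the "build it in the language itself" route for the static case too.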
It was my first time making a language, so I built into the language whatever was needed to build something cool with Crumb.
I wanted first-class functions to simplify the parse step (they can be treated like any other value), but I needed a different mechanism to invoke “native code” vs user-defined methods, so there are two different types for that.
I needed some kind of compound data type, but I didn’t want to deal with side effects from pass by reference, so Crumb implements lists, but they are always pass by value :)
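If it helps anyone picture it, something shaped roughly like this (my sketch, not Crumb's actual source) covers both decisions:

    #include <stdlib.h>

    typedef struct Value Value;
    typedef struct AstNode AstNode;  /* parsed function body */

    typedef struct {
        Value *items;
        int    count;
    } List;

    /* Two callable kinds behind one value type: native functions are
       plain C function pointers; user-defined ones carry their parsed
       body for the evaluator to walk. */
    typedef Value (*NativeFn)(List args);

    struct Value {
        enum { V_NATIVE_FN, V_USER_FN, V_LIST /* ... */ } type;
        union {
            NativeFn native;
            AstNode *body;
            List     list;
        } as;
    };

    /* Pass-by-value lists: copying a list duplicates its storage, so
       a callee can never mutate the caller's data behind its back.
       (A real version would recurse into nested lists.) */
    static List copy_list(List src) {
        List dst = { malloc(src.count * sizeof(Value)), src.count };
        for (int i = 0; i < src.count; i++)
            dst.items[i] = src.items[i];
        return dst;
    }

The copying costs something, but it buys freedom from aliasing bugs, which seems like the right trade for a small language.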
I'm a full stack developer with 11 years of experience. I'm really good at JavaScript and Elixir, and I have experience with Django, WordPress, and Rails. Recently I've been tinkering with programming language design.
I'm Zack, a freelance full-stack web developer in Seattle. I've worked with clients on projects ranging from geography-based census data analytics to processing state COVID test results. I've designed web APIs, integrated with many third-party APIs, and restructured application architectures.
I have experience with Elixir, Node.js, React, Django, and TypeScript, as well as familiarity with other technologies.
Quake renders itself per-pixel. With a mid-90s CPU (no GPU). And it does vastly more computing than just one bit op per pixel. Any remotely modern computer could fill hundreds (if not >1000) of 96k frames per second. The STM32f7 could likely do it at real-time frame rates with some optimizations.
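Back-of-envelope, assuming "96k" here means roughly 96,000 pixels per frame: 96,000 pixels × 1,000 frames/s ≈ 10^8 pixel writes per second, which leaves tens of cycles per pixel even on a single 1-3 GHz core. That's why a one-bit-op-per-pixel workload is trivial by modern standards.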
Naw, Quake was heavily optimized by some of the best x86+pixel gurus ever, like Michael Abrash. It took a lot of work to get there. But there have been cycle-optimized pixel graphics since the very first video game. Quake was also lower res, with far simpler lighting and lower quality than modern games. Rendering pixels on the CPU without a metric ton of optimization effort was always slow, and exponentially slower the further back you go. GPUs are the primary thing making pixel rendering fast now, and it’s because of hardware and a parallel programming model, not because of software.
I might be biased because I do perf work and everyone I know does perf work, but I feel like there are more people doing optimization now than there were total programmers 20 or 30 years ago. You’re not wrong, though: there are plenty of people who write software and don’t feel the need to worry about perf, because it isn’t slowing anyone down. Michael Abrash might agree that’s reasonable, since he defined high performance in terms of perceptible wait time:
“Before we can create high-performance code, we must understand what high performance is. The objective (not always attained) in creating high-performance software is to make the software able to carry out its appointed tasks so rapidly that it responds instantaneously, as far as the user is concerned. In other words, high-performance code should ideally run so fast that any further improvement in the code would be pointless.
“Notice that the above definition most emphatically does not say anything about making the software as fast as possible. It also does not say anything about using assembly language, or an optimizing compiler or, for that matter, a compiler at all. It also doesn’t say anything about how the code was designed and written. What it does say is that high-performance code shouldn’t get in the user’s way — and that’s all.”
I'm Zack, a freelance full-stack web developer in Seattle. I mostly work in Elixir, and I've worked with clients on several Elixir web projects ranging from geography-based census data analytics to processing state COVID test results. I've designed web APIs, integrated with many third-party APIs, and restructured application architectures.
I also have significant experience with Node.js, React, and custom JavaScript front-ends, as well as some experience with Django, Ruby on Rails, and WordPress. In my spare time I've written a software 3D rasterizer and a functional language called Cassette[1] in C.
[1]: https://cassette-lang.com