Amen. It always baffled me that cross compiling was ever considered a special, weird, off-nominal thing. I’d love to understand the history of that better, because it seems like it should have been obvious from the start that building for the exact same computer you’re compiling from is a special case.
A few things come to mind, but I wasn't even alive then so what do I know XD.
On one hand, it seems rather strange, because back in the early days of C (and later C++) there were far more CPU architectures in play. Every big Unix hardware vendor had their own CPU architecture, whereas today we only have about six. (In my mind: x86, arm, mips, risc-v, ppc, and s390x)
But it might be that in the early days of C/C++, development involved connecting to large shared Unix environments where the machine you developed on was always the machine (or at least the same type of machine) the program would run on, and also that those vendors weren't exactly incentivized to make developing for competitors' architectures easy.
The tough truth is that there already is a cargo for C/C++: Conan2. I know, python, ick. I know, conanfile.py, ick. But despite its warts, Conan fundamentally CAN handle every part of the general problem. Nobody else can. Profiles to manage host vs. target configuration? Check. Sufficiently detailed modeling of ABI to allow pre-compiled binary caching, local and remote? Check, check, check. Offline vs. Online work modes? Check. Building any relevant project via any relevant build system, including Meson, without changes to the project itself? Check. Support for pulling build-side requirements? Check. Version ranges? Check. Lockfiles? Check. Closed-source, binary-only dependencies? Check.
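To make the profiles point concrete, cross building with Conan 2 looks roughly like this (the profile values below are illustrative, not from any real project): one profile describes the platform you're building FOR (the "host"), another describes the machine you're building ON (the "build").

    # profiles/linux-armv8 -- hypothetical target ("host") profile
    [settings]
    os=Linux
    arch=armv8
    compiler=gcc
    compiler.version=12
    compiler.libcxx=libstdc++11
    build_type=Release

    # build-side requirements resolve against the native ("build") profile
    conan install . -pr:h=profiles/linux-armv8 -pr:b=default --build=missing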
Once you appreciate the vastness of the problem, you will see that having a vibrant ecosystem of different competing package managers sucks. This is a problem where ONE standard that can handle every situation is incalculably better than many different solutions which solve only slices of the problem. I don't care how terse craft's toml file is - if it can't cross compile, it's useless to me. So my project can never use your tool, which implies other projects will have the same problem, which implies you're not the one package manager / build system, which means you're part of the problem, not the solution. The Right Thing is to adopt one universal standard for all projects. If you're remotely interested in working on package managers, the best way to help the human race is to fix all of the outstanding things about Conan that prevent it from being the One Thing. It's the closest to being the One Thing, and yet there are still many hanging chads:
- its terribly written documentation
- its incomplete support for editable packages
- its only nascent support for "workspaces"
- its lack of NVIDIA recipes
If you really can't stand to work on Conan (I wouldn't blame you), another effort that could help is the Common Package Specification (CPS). Making that a thing would also be a huge improvement. In fact, if it succeeds, then you'd be free to compete with Conan's "frontend" ergonomics without having to compete with its ecosystem.
The paper came out of work on ENIAC, and ENIAC was later adapted to follow the approach in the paper, but Baby was built from the outset to use that approach, and its design much more closely matches the architecture that has been used by almost all digital computers since. I don’t dispute that ENIAC is important, but its role is more nuanced than this article implies.
The von Neumann report was written after von Neumann had several discussions with the ENIAC team about how to make a better computer as a successor for ENIAC.
The report was not published formally, but it was "leaked", so it does not credit anyone for the ideas contained in it.
Because of this, with few exceptions it is impossible to determine with certainty which parts of the report are original ideas of von Neumann and which parts are ideas that von Neumann might have learned during the discussions with the ENIAC team.
An example of an idea that certainly did not come from the ENIAC team was the proposal to use an iconoscope CRT as the main memory (which was implemented first in the British Manchester computers, so such a memory became known as a Williams-Kilburn tube). The ENIAC team had a different idea of what to use as a memory, i.e. delay lines taken from radars. Von Neumann replaced this suggestion with a CRT, because he thought that a random-access memory is better.
The von Neumann report had an exceptional importance because it defined with perfect clarity what a digital computer should be and what its structure should be, and then provided a detailed description of how such a computer should be designed, good enough to enable anyone who read the report to build one. That is exactly what happened: a great number of teams at universities, government agencies, independent research centers like IAS, and various companies, both in the USA and in other countries, built electronic computers over the following decade, exploring various design options.
There is no doubt that the clarity of the report is due to von Neumann; whatever ideas the ENIAC team had about a future computer, they were much more jumbled.
Because the ENIAC team did not publish their ideas (and they did not intend to, because they already wanted to monetize what they had learned about computers, by founding a private company), it does not really matter what they thought. The world has learned how to make general-purpose electronic computers from the von Neumann report.
ENIAC was a programmable computing automaton, but it was not a digital computer in the modern sense of the word, i.e. a digital system with 4 levels of closed positive-feedback loops. (The complexity of a digital system is determined by the number of levels of nested positive-feedback loops: combinational logic has 0 levels, a memory has 1, an automaton has 2, a processor has 3, and a computer has 4. These are minimum numbers; a real device may have more levels than strictly necessary, to achieve various advantages.)
The ENIAC team’s decision to spin off and incorporate was surely pushed along by how they got screwed multiple times by the academics - Goldstine and Von Neumann, plus the university itself. It’s easier to celebrate the free publishing of ideas if your name is at least going to be on the paper.
It seems like you’re not trying particularly hard to avoid the idea of “monetizing” the computer to sound pejorative. It was the creation of the computer _industry_ that transformed the world and established the import of computers, was it not? You were never getting there without monetization. What good is the spread of ideas if someone, somewhere doesn’t eventually start selling computers? This grates especially hard given that the academics were the ones who acted unscrupulously by lifting their ideas and publicizing them without permission or credit. (“Leaked” is a charitable way to say “deliberately disseminated without caution”).
I agree the stored program is important, but the stored program is of ENIAC vintage, even if it wasn’t implemented on it. And Eckert and Mauchly definitely came to this idea before the involvement of Von Neumann. The thing is, they had an obligation to finish the machine they had promised to build for the army before pursuing such a major redesign. So all they COULD do was informally collect their ideas for a 2.0. Von Neumann arrives, absorbs what they’re up to, synthesizes it (including ‘the big idea’ that ENIAC was missing), and the rest is history. That synthesis is published without their names, and that is why we talk about the Von Neumann architecture. Look, I’m sure it’s true that the crispness of that paper can be attributed to Von Neumann, but it’s a non-sequitur to assume that Eckert and Mauchly’s ideas were jumbled. They were at least organized enough to be building a working machine in the background, and if we’re going to argue that the important thing was the promulgation of enough information for others to replicate, then the practicum is more important than mathematical tidiness.
In fact, if we’re talking about how the ideas spread, the paper is frankly overblown. The Moore School Lectures were really what caused the Cambrian explosion of electronic computing. There, you can find Eckert and Mauchly utterly central to the elucidation of how to build a general purpose electronic computer. And hey look, there they are, deliberately sharing the ideas out to interested practitioners, in a more pragmatic and direct way than the paper.
What I’m building to here is that E&M starting a company was not evidence that they were just out to make a buck. On the contrary, what it shows is that they had _foresight_ about what the next interesting chapter was bound to be. With the Moore School Lectures, the ‘publishing of the ideas’ stage was over - the next step was to begin building more machines that could do more computation for more users. And while there was plenty that happened afterward to refine the theoretical model, they were absolutely correct that that’s where the action was. In fact, I think that if you look at what many of these proposed fathers of computing did next, it’s an excellent litmus test of how central they actually were. Some of the sillier ones like Atanasoff just forgot about their supposed invention and went on with life - that’s a tell that they weren’t that interested in general-purpose, high speed computing. Whereas E&M’s follow-on work was to advance the field even in the face of great setbacks. This also completely deconstructs the idea that they were just thinking about artillery, or just thinking about weather. They were thinking about _computing_, and their careers afterwards demonstrate this.
I was sad to see you guys shut down - I think you were on to something with deterministic faster-than-realtime replay. Not surprised it was hard to find paying customers, but for what it's worth, my engineering self thought that you guys were solving the right problem. As far as I can tell, it's still not solved, and the shocking truth is that everyone is just Living That Way.
The other thing that is important is how to provide a more query-like interface to tease out the data you actually want your node to react to, yet in a way that will be deterministic. You need to guide users away from introducing non-determinism, which can be tricky because innocent things like a message buffer with a max size can lead to such situations.
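A toy illustration of the buffer point (hypothetical code, not from any particular framework): the moment a bounded inbox can drop messages, what a node observes depends on how fast it drains the queue relative to its producers, so two runs over the same input log can diverge.

    #include <cstddef>
    #include <deque>
    #include <utility>

    // A bounded inbox that silently drops the oldest message when full.
    // Which messages survive now depends on consumer-vs-producer timing,
    // so the node's behavior is no longer a pure function of its input log.
    template <typename Msg>
    class BoundedInbox {
    public:
      explicit BoundedInbox(std::size_t max) : max_(max) {}

      void push(Msg m) {
        if (queue_.size() == max_)
          queue_.pop_front();            // the non-deterministic part
        queue_.push_back(std::move(m));
      }

      bool pop(Msg& out) {
        if (queue_.empty()) return false;
        out = std::move(queue_.front());
        queue_.pop_front();
        return true;
      }

    private:
      std::size_t max_;
      std::deque<Msg> queue_;
    };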
I have talked with one of the key people at Xronos (https://www.xronos.com/), who are trying to attack related problems. Still, even they aren't quite as pre-occupied with _replay_, which is crucial.
I think the sad truth is that the second evolution of all this frameworking simply hasn't come together convincingly enough, and in one place, for it to gather momentum. It turned out to be hard. And now that it has taken too long, it's my bet that ROS2 and all of its imitators will get lapped by holistic deep approaches. Not the stupid stuff happening with these fake humanoid robot companies mind you, but still - something holistic and deep. Something coming out of predictive coding research, for example, or world models, etc. Training in simulated environments with generative systems is going to lead to behavior so much more sophisticated than gluing together all of our little services. Roboticists have their own version of the bitter lesson coming soon.
I was sad, too. If there was a way I thought to continue doing it, I would. But as it is I'm actually considering getting out of robotics at this point, I've had enough of everyone "Living That Way".
Firstly, the models that pass the Math Olympiad aren’t the same models as the ones you’re saying “pass the Turing test”. Secondly, nothing actually passes the Turing test. They pass a vibes check of “hey that’s pretty good!” but if your life depended on it, you could easily find ways to sniff out an LLM agent. Thirdly, none of these models learn in real time, which is an obviously essential feature.
We’ll know AGI when we see it, and this ain’t it. This complaining about changing goalposts is so transparently sour grapes from people over-invested in hyping the current LLM paradigm.
There doesn't seem to be a super-rigorous definition of the Turing Test, but I don't think it's reasonable to require it to fool an expert whose life depends on the correct choice. It already seems to be decently able to fool a person of average intelligence who has a basic knowledge of LLMs.
I agree that we don't really have AGI yet, but I'd hope we can come up with a better definition of what it is than "we'll know it when we see it". I think it is a legitimate point that we've moved the goalposts some.
The real answer is that once LLMs passed a "casual" application of the Turing test, it just made us realize that the "casual Turing test" is not particularly interesting. It turns out to be too easy to ape human behavior over short time frames for it to be a good indicator of human-like intelligence.
Now, you could argue that this right here is the aforementioned moving of the goalposts. After all, we're deciding that the casual Turing test wasn't interesting precisely after having seen that LLMs could pass it.
However, in my view, the Turing test _always_ implied the "rigorous" Turing test, and it's only now that we're actually flirting with passing it that it had to be clarified what counts as a true Turing test. As I see it, the Turing test can still be salvaged as a criterion for general intelligence, but only if you allow it to be a no-holds-barred, life-depends-on-it test to exhaustion. This would involve allowing arbitrarily long questioning periods, for instance. I think this is more in the spirit of the original formulation, because the whole idea is to pit a machine against all of human intelligence, proving it has a similar arsenal of adaptability at its disposal. If it only has to passingly fool a human for brief periods, well... I'm afraid that just doesn't prove much. All sorts of stuff briefly fools humans. What requires intelligence is to consistently anticipate and adapt to all lines of questioning in a sustained manner until the human runs out of ideas for how to differentiate.
ELIZA fooled plenty of people (both originally and in the study you just linked) but I still wouldn't say ELIZA passed/passes the Turing test in general. It just shows that occasionally or even frequently fooling people is not a sufficient proxy for general intelligence. Ofc there isn't a standardized definition, but one thing I would personally include in a "strict" Turing test is that the human interrogee ought to be incentivized to cooperate and to make their humanity as clear as possible. And the interrogator should similarly be incentivized to find the right answer.
Turing gave a pretty rigorous definition of the Turing Test IMO. Well, as rigorous as something that is inherently "anecdotal" can be, which is part of the philosophical point of the Turing Test.
First off, the Turing test has a rigorous definition. Secondly, it has been debunked for almost half a century at this point by Searle’s Chinese room thought experiment. Thirdly, intelligence itself is a scientifically fraught term with ever changing meaning as we discover more and more “intelligent” behavior in nature (by animals and plants, and more). And to make matters worse, general intelligence is even worse, as the term was used almost exclusively for racist pseudo-science, as a way to operationally define a metric which would prove white supremacy.
Artificial General Intelligence will exist when the grifters who profit from it claim it exists. The meaning of it will shift to benefit certain entrepreneurs. It will never actually be a useful term in science nor philosophy.
>Secondly, it has been debunked for almost half a century at this point by Searle’s Chinese room thought experiment.
Searle's thought experiment is stupid and debunked nothing. What neuron, cell, or atom of your brain understands English? That's right. You can't answer that any more than you can answer the subject of Searle's proposition, ergo the brain is a Chinese room. If you conclude that you understand English, then the Chinese room understands Chinese.
> Searle’s response to the Systems Reply is simple: in principle, he could internalize the entire system, memorizing all the instructions and the database, and doing all the calculations in his head. He could then leave the room and wander outdoors, perhaps even conversing in Chinese. But he still would have no way to attach “any meaning to the formal symbols”. The man would now be the entire system, yet he still would not understand Chinese. For example, he would not know the meaning of the Chinese word for hamburger. He still cannot get semantics from syntax.
> The man would now be the entire system, yet he still would not understand Chinese.
Really, here the only issue is Searle's inability to grasp the concept that the process is what does the understanding, not the person (or machine, or neurons) that performs it.
Then you deeply underestimate how difficult the problem is, and deeply misunderstand where all the effort has been spent in developing autonomous vehicles.
If all the effort has been spent in trying to replicate the human brain then I am comfortable saying that is a mistake.
We have a tool that can tell with great accuracy how far away an object is. The suggestion that we should ignore it and rely on cameras that have to guess it because “that’s how humans work” is absurd, frankly.
Before you can learn how far away an object is, you must decide: which laser return corresponds to which object? In fact, what counts as an object? Where does a tree stop and become a fallen tree branch? Is that object moving towards me? Does the apparent velocity of this point represent the fact that the object is moving, or that it's rotating, or that it's flexing, or dividing, or all 4? Is that object moving towards me but that's ok because it's a car that's going to stay in its lane? What's a lane? What's my laser return for where the lane is? Should I stop at this intersection? What's my laser return for whether the light is red? Am I in the blind spot of the car in front of me? Is he about to shift into my lane because he doesn't see me? What laser return do I get to tell me whether his indicator is on?
The problem of understanding what is happening in front of you while driving is preposterously more complicated than just a point cloud of distances. That is .01% of the problem. To solve the remaining 99.99%, you need interpretation of photons and sound waves into a semantic understanding that gives you predictive power to guess how the physical world will evolve and avoid breaking the rules of the road. Show me a mechanized way of understanding the causes of how the physical structure of the world is about to evolve, and I'll show you something that is imitating a human brain, however poorly. The cameras give you _plenty_ of data to determine 3D structure, at a higher resolution than the laser, without being emissive, for cheaper. It's a completely reasonable approach to focus your limited computational hardware on interpreting the data you have instead of adding more modalities with their own limitations that (according to nature) are demonstrably unnecessary.
The world is more complicated than slogans and pitchforks and Elon Bad.
People get into accidents not because they don't know with great accuracy how far away an object is.
They get into accidents because they make bad decisions and get distracted.
If AI makes better decisions and doesn't get distracted, the number of accidents will already be greatly reduced compared to humans.
Having lidar in addition to cameras will be of marginal benefit (but a benefit to be sure) when you realize what is actually important: proper modeling of the environment. And cameras are better at providing this than lidar, so you will still want cameras anyway.
The focus on lidar is really a red herring. You merely push the computational budget you have to understanding a point cloud instead of vision. You're back to square 1 of "how can I properly model the environment given this sensory modality". This is the part that essentially needs human level understanding of the world that you're missing.
As the other commenter says, you deeply misunderstand the problem.
It would be interesting if, with all the anxiety about vibe coding becoming the new normal, its only lasting effect is the emergence of smaller B2B companies that quickly razzle dazzle together a bespoke replacement for Concur, SAP, Workday, the crappy company sharepoint - whatever. Reminds me of what people say Palantir is doing, but now supercharged by the AI-driven workflows to stand up the “forward deployed” “solution” even faster.
There's a line in the first season that runs as an undercurrent through the whole show ("Computers aren't the thing. They're the thing that gets you to the thing"). Joe originally says this to make the viewer think about technology, evoking the dawn of the personal computer and subsequently the internet. But later on, you're invited to re-interpret that statement as being about people: computers and technology were the thing that got the main characters to work together. It's the -people- that are the thing.
Part of what makes the show so good is that it's one of the few renditions in TV / movies of the joy of engineering something, and the constant tension that comes from working with great people. Great people inspire you, but they also challenge you. The show does a great job of portraying realistic conflicts that arise between different personality types and roles, as well as cleverly exposing the limitations of those personalities. With just Gordon, you'll get a stable and well engineered product but it won't be revolutionary. Joe has the vision but he can't actually _do_ the substantive part. Cameron has great substance and technical ability, but she's impractical and inflexible. Donna is responsible, effective, and clear-eyed - but unchecked, purely rational decisions erode the soul of a company into nothing. These differences frustrate our characters, and yet there can be no success without them.
I think many of us spend our whole careers chasing those rare moments where the right people are in the room solving problems, butting heads, but ultimately doing things they could never do all by themselves.
But invoking No True Scotsman would imply that the focus is on gatekeeping the profession of programming. I don’t think the above poster is really concerned with the prestige aspect of whether vibe bros should be considered true programmers. They’re more saying that if you’re a regular programmer worried about becoming obsolete, you shouldn’t be fooled by the bluster. Vibe bros’ output is not serious enough to endanger your job, so don’t fret.
I’m currently engineering a system that uses an actor framework to describe graphs of concurrent processing. We’re going to a lot of trouble to set up a system that can inflate a description into a running pipeline, along with nesting subgraphs inside a given node.
It’s all in-process though, so my ears are perking up at your comment. Would you relax your statement for cases where flexibility is important? E.g. we don’t want to write one particular arrangement of concurrent operations, but rather want to create a meta system that lets us string together arbitrary ones. Would you agree that the actor abstraction becomes useful again for such cases?
Data flow graphs could arguably be called structured concurrency (granted, of nodes that resemble actors).
FWIW, this has become a perfectly cromulent pattern over the decades.
It allows highly concurrent computation limited only by the size and shape of the graph while allowing all the payloads to be implemented in simple single-threaded code.
The flow graph pattern can also be extended to become a distributed system by having certain nodes carry side effects that transfer data to other systems running in other contexts. This extension does not need any particularly advanced design changes and, most importantly, those side effects are limited to just the "entrance" and "exit" nodes that communicate between contexts.
I am curious to learn more about your system. In particular, what language or mechanism you use for the description of the graph.
We’re using the C++ Actor Framework (CAF) to provide the actor system implementation, and then we ended up using a stupid old protobuf to describe the compute graph. Protobuf doubles as a messaging format and a schema with reflection, so it lets us receive pipeline jobs over gRPC and then inflate them with less boilerplate (by C++ standards, anyway).
Related to what you were saying, the protobuf schema has special dedicated entries for the entrance and exit nodes, so only the top level pipeline has them. Thus the recursive aspect (where nodes can themselves contain sub-graphs) applies only to the processor-y bit in the middle. That allowed us to encourage the side effects to stay at the periphery, although I think it’s still possible in principle. But at least the design gently guides you towards doing it that way.
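For flavor, here's roughly the shape of it (the message and field names below are made up for illustration, not our actual schema):

    syntax = "proto3";

    message Edge {
      string from = 1;  // producer node name
      string to = 2;    // consumer node name
    }

    message NodeSpec {
      string name = 1;                 // unique within its graph
      string type = 2;                 // which business actor to inflate
      map<string, string> params = 3;  // per-node configuration
      repeated NodeSpec children = 4;  // nested sub-graph (processors only)
      repeated Edge edges = 5;         // wiring between the children
    }

    message PipelineSpec {
      NodeSpec entrance = 1;  // only the top level gets these
      NodeSpec exit = 2;
      NodeSpec root = 3;      // the recursive processor graph in the middle
    }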
After having created our system, I discovered the Reactor framework (e.g. Lingua Franca). If I could do it all over, I think I would have built using that formalism, because it is better suited for making composable dataflows. The issue with the actor model for this use case is that actors generally know about each other and refer to each other by name. Composable dataflows want the opposite assumption: you just want to push data into some named output ports, relying on the orchestration layer above you to decide who is hooked up to that port.
To solve the above problem, we elected to write a rather involved subsystem within the inflation layer that stitches the business actors together via “topic” actors. CAF also provides a purpose-built flows system that sits on top of the actors, which allows us to write the internals of a business actor in a functional reactive-x style. When all is said and done, our business actors don’t really look much like actors - they’re more like MIMO dataflow operators.
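To give a sense of the "topic" idea without dragging in CAF specifics (the class below is an illustrative stand-in, not our real code): the business node only knows it publishes to a named port, and the inflation layer decides who is subscribed to that port.

    #include <functional>
    #include <vector>

    // Stand-in for a "topic" actor: producers publish to it, and the
    // orchestration layer -- not the producer -- decides who listens.
    template <typename Msg>
    class Topic {
    public:
      void subscribe(std::function<void(const Msg&)> cb) {
        subscribers_.push_back(std::move(cb));
      }
      void publish(const Msg& m) {
        for (auto& cb : subscribers_) cb(m);  // fan out to whoever was wired in
      }
    private:
      std::vector<std::function<void(const Msg&)>> subscribers_;
    };

In the real system each topic is itself an actor so the fan-out stays asynchronous, but the division of wiring responsibility is the same.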
When you zoom out, it also becomes obvious that we are in many ways re-creating gstreamer. But if you’ve ever used gstreamer before, you may understand why “let’s rest our whole business on writing gstreamer elements” is too painful a notion to be entertained.
Since you still have C++ involved and if you are still looking for composable dataflow ideas, take a look at TBB's "flow_graph" module. Its graph execution is all in-process while what you describe sounds more distributed, but perhaps it is still interesting.
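For anyone curious, a minimal flavor of what a flow_graph looks like (generic usage, not tied to the parent's system):

    #include <iostream>
    #include <tbb/flow_graph.h>

    int main() {
      using namespace tbb::flow;
      graph g;

      // Each node's body is plain single-threaded code; TBB runs
      // independent nodes concurrently according to the graph's shape.
      function_node<int, int> square(g, unlimited,
                                     [](int x) { return x * x; });
      function_node<int, int> print(g, serial, [](int x) {
        std::cout << x << "\n";
        return x;
      });

      make_edge(square, print);
      for (int i = 0; i < 4; ++i)
        square.try_put(i);
      g.wait_for_all();  // block until the graph drains
    }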
> we don’t want to write one particular arrangement of concurrent operations, but rather want to create a meta system that lets us string together arbitrary ones. Would you agree that the actor abstraction becomes useful again for such cases?
Actors are still just too general and uncontrolled, unless you absolutely can't express the thing you want any other way. Based on your description, have you looked at iteratee-style abstractions and/or something like Haskell's Conduit? In my experience those are powerful enough to express anything you want (including, critically, being able to write a "middle piece of a pipeline" as a reusable value), but still controlled and safe in a way that actor-based systems aren't.