there's iconography of a partially eaten fruit on the cases, and some of them glow.
eta: i'm just saying if i had a glowing half-drunk beer or a partially eaten pizza on my laptop in a business meeting i am getting weird looks. Just because you all normalized glowing fruit doesn't mean the rest of us take you seriously.
> I'm not even sure building software is an engineering discipline at this point. Maybe it never was.
If I engineer a bridge I know the load the bridge is designed to carry. Then I add a factor of safety. When I build a website can anyone on the product side actually predict traffic?
When building a bridge I can consult a book of materials and understand how much a material deforms under load, what its breaking point is, its expected lifespan, etc. Does this exist for servers, web frameworks, network load balancers, etc.?
I actually believe that software “could” be an engineering discipline but we have a long way to go
> can anyone on the product side actually predict traffic
Hypothetically, could you not? If you engineer a bridge you have no idea what kind of traffic it'll see. But you know the maximum allowable weight for a truck of X length is Y tons and factoring in your span you have a good idea of what the max load will be. And if the numbers don't line up, you add in load limits or whatever else to make them match. Your bridge might end up processing 1 truck per hour but that's ultimately irrelevant compared to max throughput/load.
Likewise, systems in regulated industries have strict controls for how many concurrent connections they're allowed to handle[1], enforced with edge network systems, and are expected to do load testing up to these numbers to ensure the service can handle the traffic. There are entire products built around this concept[2]. You could absolutely do this, you just choose not to.
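The enforcement mechanism described above can be sketched with a simple semaphore gate. This is a toy illustration, not any specific regulated system's design, and the ceiling of 3 is a made-up number standing in for a load-tested limit:

```python
import threading

class ConnectionGate:
    """Refuses requests beyond a fixed concurrency ceiling,
    the way an edge proxy enforces a load-tested limit."""

    def __init__(self, max_concurrent: int):
        self._slots = threading.BoundedSemaphore(max_concurrent)

    def try_acquire(self) -> bool:
        # Non-blocking: over-limit requests are refused, not queued,
        # so the backend never sees more load than it was tested for.
        return self._slots.acquire(blocking=False)

    def release(self) -> None:
        self._slots.release()

# The ceiling would come from load testing, not a guess; 3 is illustrative.
gate = ConnectionGate(max_concurrent=3)
admitted = [gate.try_acquire() for _ in range(5)]
print(admitted)  # [True, True, True, False, False]
```

The point is that the cap is explicit and enforced at the edge, so the tested number and the production number are the same number.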
I think it is in certain very limited circumstances. The Space Shuttle's software seems like it was actually engineered. More generally, there are systems where all the inputs and outputs are well understood along with the entire state space of the software. Redundancy can be achieved by running different software on different computers such that any one is capable of keeping essential functions running on its own. Often there are rigorous requirements around test coverage and formal verification.
This is tremendously expensive (writing two or more independent copies of the core functionality!) and rapidly becomes intractable if the interaction with the world is not pretty strictly limited. It's rarely worth it, so the vast majority of software isn't what I'd call engineered.
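The redundancy scheme described above, independently written implementations whose outputs are compared, reduces to a majority vote over channels. A minimal sketch with toy stand-in functions (real N-version systems use separate teams and separate hardware):

```python
from collections import Counter

def majority_vote(outputs):
    """Pick the answer most independent channels agree on.
    With three or more channels, one faulty implementation is outvoted."""
    value, count = Counter(outputs).most_common(1)[0]
    if count <= len(outputs) // 2:
        raise RuntimeError("no majority: channels disagree")
    return value

# Three "independently written" routines for the same spec.
def impl_a(x): return x * x
def impl_b(x): return x ** 2
def impl_c(x): return x * x + 1   # deliberately buggy channel

result = majority_vote([f(7) for f in (impl_a, impl_b, impl_c)])
print(result)  # 49: the buggy channel is outvoted
```

The expense the comment describes is visible even here: you pay for every implementation, and the voter only helps if the failures are genuinely independent.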
If I need a bridge, and there's a perfectly beautiful bridge one town over that spans the same distance - that's useless to me. Because I need my own bridge. Bridges are partly a design problem but mainly a build problem.
In software, if I find a library that does exactly what I need, then my task is done. I just use that library. Software is purely a design problem.
With agentic coding, we're about to enter a new phase of plenty. If everyone is now a 10x developer then there's going to be more software written in the next few years than in the last few decades.
That massive flurry of creativity will move the industry even further from the calm, rational, constrained world of engineering disciplines.
> Bridges are partly a design problem but mainly a build problem.
I think this vastly underestimates how much of the build problem is actually a design problem.
If you want to build a bridge, the fact one already exists nearby covering a similar span is almost meaningless. Engineering is about designing things while using the minimal amount of raw resources possible (because the cost of design is lower than the cost of materials). Which means that bridge in the other town is designed only within its local context. What are the properties of the ground it's built on? What local building materials exist? Where "local" can be as small as only a few miles, because moving vast quantities of material over long distances is really expensive. What specific traffic patterns and loadings is it built for? What time and access constraints existed when it was built?
If you just copied the design of a bridge from a different town, even one only a few miles up the road, you would more than likely end up with a design that either won't stand up in your local context, or simply can't be built. Maybe the other town had plenty of space next to the location of the bridge, making it trivial to bring in heavy equipment and use cranes to move huge pre-fabbed blocks of concrete, but your town doesn't. Or maybe the local ground conditions aren't as stable, and the other town's design has the wrong type of foundation, resulting in your new bridge collapsing after a few years.
Engineers in other disciplines don't have the luxury of building for a very uniform, tightly controlled target environment where it's safe to assume that common building blocks will "just work" without issue. As a result, engineering there is entirely a design problem, i.e. how do you design something that can actually be built? The building part is easy; there's a reason construction contractors get paid relatively little compared to the engineers and architects who design what they're building.
Software packages are more complicated than you make them out to be. Off the top of my head:
- license restrictions, relicensing
- patches, especially to fix CVEs, that break assumptions you made in your consumption of the package
- supply chain attacks
- sunsetting
There’s no real “set it and forget it” with software reuse. For that matter, there’s no “set it and forget it” in civil engineering either, it also requires monitoring and maintenance.
I have talked to colleagues who wrote software running on microcontrollers a decade ago; that software still runs fine. So yes, there is set-and-forget software. And it is all around us, mostly in microcontrollers. But microcontrollers far outnumber classical computers (trivially: each classical computer or phone contains many microcontrollers such as SSD controllers, power management, wifi, ethernet, cellular... And then you can add appliances, cars etc to that).
If something in software works and isn't internet connected it really is set and forget. And far too many things are being connected needlessly these days. I don't need or want an online washing machine or car.
True, using a library in a cheap coffee maker you can maybe set it and forget it. I have an old TI-85 calculator that's never needed to update its OS, while Apple has obsoleted multiple generations of applications in its never-ending upgrade cycle.
But for mission critical applications the bar is a little higher. Isn’t this why we have the ongoing dialogue about OTA updates for Teslas etc and the pros and cons of that approach? Because if you can’t OTA patch a bug, you have to issue a recall [0]. But if you have internet connectivity, as you rightly point out, then you have a whole new attack surface to consider.
Indeed it isn't easy, but for car software, why couldn't you do the software upgrade offline, while at the mechanic, or via a USB drive with a signed installer, or via a phone app plugged into a USB port in the car? For a basic car there really isn't a need to be always online.
My car just has a bluetooth stereo, and it isn't very old. Yeah it is a basic model, but I really don't need or want connectivity in it. The one argument I could see would be showing maps, but I need offline maps anyways since I often lack any sort of mobile phone connection where I'm going. And you can update maps on a monthly basis (mobile phone app over USB while parked at home would work perfectly for this). Currently I just run OsmAnd on my phone with openstreetmap data downloaded in advance. Realtime traffic information perhaps could be an argument, but again, better to distribute that via FM radio that has better coverage (or even AM radio in some parts of US as I understand it).
And cars might be the odd one out. There really is no excuse for exposing washing machines and other appliances online. Especially since they are likely to last a lot longer than the software will be supported. The fridge and freezer at my parents' place are around 20 years old at this point, for example. My washing machine is over 10 and going strong. I doubt they would get software security support for that long.
What you are describing sounds like a specific subset of professional engineering discipline, but I'd argue that "engineering" is much larger -- it isn't only "engineering" when you do it well and responsibly, after all.
I'd propose a definition of engineering that's more or less just "composing tools together to solve problems".
There are also fundamentally different acceptance criteria for a bridge vs a website. Failure modes differ. Consequences of failure are nowhere near the same, so risk tolerance is adjusted accordingly. Perhaps true "engineering" really boils down to risk management... is what you're building so potentially destructive that it requires extremely careful thought and risk management? Engineering. If what you're building can fail, and really cause no harm, that's just building.
The way the authors of the book on material strengths got those numbers, was through testing. If you're using mature technologies, that testing has been done by others and you can rely on it for your design, at least in a general way. Otherwise you have to do the testing yourself, which is something a structural engineering project might do also, if it's unusual in some way.
We have a long way to go but large software companies have gotten really, really good at scaling to handle larger and larger traffic loads. It's not like there are no materials to consult to learn current best practices, even if there are still more improvements to be made.
I own two Teslas. When conditions are adverse, e.g. fog or heavy rain, the system simply shuts off and reverts back to manual driving. Elon has said several times that humans can drive with two eyes and Tesla should be able to drive with X number of cameras. However, it suffers from the same problems humans do: if it can't see, it can't drive, and ironically that's when it reverts back to human control.
I definitely agree that in principle a computer can drive with cameras alone. I don't know whether it's a useful statement. Like, a human can determine the genre of a movie merely by watching it. I wouldn't suggest to Blockbuster in 1990 that they should collect no genre metadata for movies because the database server should automatically sort it out on its own. (Nowadays somewhat feasible with ML of course, but 20+ years later.) What sensors/data you need is a question of where computers are now or will shortly be, and it seems that for now they need the extra structure of LIDAR for best effectiveness.
>I definitely agree that in principle a computer can drive with cameras alone.
Obvious things first: cameras have way worse contrast and low-light sensitivity than human eyes.
Humans have much more evolved logical thinking capacity, even the stupid ones can figure stuff out that modern AI struggles with.
Humans have other sensors, too, that they use to plausibility-check the picture they see. I.e. one of the best sensor fusion systems on the planet.
When in doubt, humans can figure out whether it's a lens occlusion or some other artifact in their vision by virtue of moving their head around.
There's probably other things I'm not thinking of. In any case to make full self driving work we should first start by using all available tech to make it safe. When you have safe tech you can slowly start removing individual sensors while verifying that safety remains high. As the experience and system evolves there will be optimization potential.
And until we have that low light thing and high contrast figured out, camera alone doesn't cut it.
Right, but if these things are so rare that we all only know the one viral example, I feel like that lends credence to the models basically generally not having this problem.
Researchers built the Winograd Schema Challenge more than a decade ago to assess common-sense reasoning, and LLMs beat that challenge around GPT-4.
They're not so rare. Hallucinations have been spotted everywhere, but the "driving a car to the car wash" is an amusing one that's been recently publicised. Developers aren't going to point out every time an LLM hallucinates an entire library.
I'd add to this, any moderately involved logical or numerical problem causes hallucinations for me on all frontier models.
If you ask them in isolation they may write a script to solve it "properly", but I guess this is because they added enough of these to the training set. But this workaround doesn't scale.
As soon as I give the LLM a proper problem and a small part of it requires numeric reasoning, it almost always hallucinates something and doesn't solve it with a script.
If the logic/math is part of a larger problem the miss rate is near 100%.
LLMs have massive amounts of knowledge, encoded in verbal intelligence, but their logic intelligence is well below even average human intelligence.
If you look at how they work (tokenization and embeddings) it's clear that transformers will not solve the issue. The escape hatches only work very unreliably.
If you ask this of any current day AI it will answer exactly how you would expect. Telling you to drive, and acknowledging the comedic nature of the question.
That's because AI labs keep stamping out the widely known failures. I assume without actually retraining the main model, but with some small classifier that detects the known meme questions and injects correct answer in the context.
But try asking your favorite LLM what happens if you're holding a pen with two hands (one at each end) and let go of one end.
Not unlikely that you're talking to a lot of AI-based AI boosters. It's easier to create astroturfed comments with chatbots than fixing the inherent problems.
Nice. My test was always a blond bald guy. It always adds hair. If you ask for bald you get a dark haired bald guy, if you add blond, you can't get bald because I guess saying the hair color implies hair (on the head), while you may just want blonde eyebrows and/or blond stubble.
Well that, but Elon is also downplaying the quality of the human vision system compared to the cameras Tesla's have.
They're just not that good - nowhere near human vision performance. And a human in a car has a surprisingly good view of the road and a very fast pan tilt system to look around.
Teslas do not actually have 360-degree full binocular vision coverage, nor the ability for a camera to lean left or right to improve an ambiguous sensor picture.
So while I fully believe that vision-only self driving could work, within the constraints of automobile platforms, and particularly the Tesla and its current camera deployments, it is not remotely similar enough to human visual fidelity for that to be a valid argument.
Teslas actually have zero binocular vision coverage, because the cameras have different focal lengths and are too close together even if they did have the same focal lengths.
They are also below minimum vision requirements for driving in many states.
> When conditions are adverse, i.e. fog, heavy rain, the system simply shuts off and reverts back to manual driving.
I also own a Tesla, and there is no indication shown to the user that FSD's vision is degraded. They need to add this in.
For example, numerous times I have been driving my Tesla with FSD activated with ostensibly a clean and clear windshield when suddenly the car will do the "clean the windshield in front of the camera routine" without any indication that the car's camera is degraded. If people haven't seen this "clean the windshield routine", the wiper fluid is dispensed and the wiper will vigorously wipe in front of the camera only -- the rest of the windshield only gets a cursory wipe.
This indicates to me that the camera has poor visibility and I am not informed or aware of this as a driver, which is concerning. I am often curious if there is a thin occluding film on the windshield in the camera box in front of the camera, or something that has degraded FSD's vision, but they do not give you the ability to view the camera feed, nor do they notify you that the vision is degraded. I think a "thin occluding film" may be in the camera box because my normal windshield outside of the camera box started to show a thin chemical film after a couple of months, which apparently (according to a Google search) happens when a new car off-gasses, adding a thin film of chemical byproduct to the windshield. This is my first new car so I've no idea if this is normal or not.
> yes it does, and it's annoying as all hell. Dirt, sun, etc all pop an alert about degraded performance
As with all things FSD, it does sometimes and not others. I've driven my parents' Tesla with FSD engaged and it did complain when the windshield got dirty but didn't say anything when it drove into fog. (I took over manually.)
Out of curiosity, was the camera view compromised? I would probably take over too, but like the poster above, I get the warning in all kinds of conditions.
Absolutely could be a clouded windshield on the inside (where it's really hard for normal people to clean). I brought this up when I got my last Model Y that it was foggy and they said it was "fine". Took it into service over a year ago and noticed they cleaned it. Clearly it's a problem but they're not being too transparent about it. I suspect they don't want to because it's not the easiest thing to remove the cover for normal people to clean.
Recent Tesla updates will detect dirty glass inside the camera enclosure and offer to schedule (one!) free glass cleaning. You can do it yourself if you have a trim tool. (A thin plastic prybar) https://www.notateslaapp.com/news/3327/tesla-now-offers-free...
I've always hated this argument. Why should I want a system that can drive "just as any human driver can"? I want it to be much much much better than the best driver out there, like 100x or 1000x better.
That argument gets dumber every time I think about it. And we haven't even gotten into the fact that human eyes, and their partner in crime the brain, work much differently from a camera.
That’s how Musk works. He waves his hands and uses words like “orders of magnitude” and “first principles” and then you end up with 250-meter long tunnels under your city with Teslas driving back and forth in it and fanboys forget this thing called subways ever existed.
> humans can drive with two eyes and Tesla should be able to drive with X number of cameras
Systems built from cameras that are only nearly as capable as human eyes, and software that is only nearly as capable as the human brain, will fall short overall. To match or surpass human performance, the individual components need to exceed human abilities where possible, and that's where LiDAR provides an advantage.
> the system simply shuts off and reverts back to manual driving.
That's not good enough. Too many accidents at manual takeover. The new standard, which Mercedes has demonstrated and China is mandating, is that the system must be able to pull the vehicle over and stop safely when there are problems.
My Lexus does this too. I rarely get it due to weather; however, it's how I know I'm past due for a car wash (dust on sensors).
In any case, it seems reasonable to me that the human should be making the decisions once conditions become adverse. It’s a simple liability issue for the car company but also I’d rather trust my own judgment if it’s only 80% certain it’s not driving me off a cliff.
One of the coolest features I saw like this was on a Jaguar XJL I had recently, that had an air particulate sensor and would automatically switch to recirc cabin air when that count was too high (i.e. dusty / smoky conditions).
yea, when it rains the world stops and we all sit home and wait for the Sun to make an appearance. coolest part is that some places in the US get like 200+ rainy days and you get to stay home cause you have no choice, schools closed etc :)
> There isn't some kind of god-given right to transportation, it is always conditions permitting.
If the condition is a little fog, a little rain, a little snow/sleet, I hate to break it to you, but those are very permitting. In most of the continental US, the number of days where driving conditions are such that it is wiser for a (below-)average human not to get on the road is very small. If the "robo"taxi technology you possess cannot match that of a (below-)average human, you've got nothing but vaporware you've been pitching as a "done deal" for more than a decade.
> Well, even robotaxis can't beat the laws of physics. There isn't some kind of god-given right to transportation, it is always conditions permitting.
> Elon has said several times that humans can drive with two eyes and Tesla should be able...
And this is an amazingly stupid statement. Humans drive with most of their senses, not just vision. In fact our proprioception plays an important role in driving.
Even Tesla's use of cameras is poor because they're monocular and fixed in place. Most humans have binocular vision and those visual sensors have multiple degrees of freedom and the ability to adjust focus.
Even if you wanted to only use vision for navigation it's irresponsible to not use binocular configurations that get more reliable depth sensing.
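The depth advantage of a binocular configuration comes down to simple pinhole-stereo geometry: depth is focal length times baseline divided by disparity. A minimal sketch with entirely hypothetical numbers (no real camera is implied):

```python
def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Pinhole stereo model: depth = f * B / d.
    A longer baseline or finer disparity resolution gives
    more reliable depth at a given distance."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Hypothetical rig: 1000 px focal length, 12 cm baseline.
# A measured disparity of 10 px then puts the object at 12 m.
print(stereo_depth(1000, 0.12, 10))  # 12.0
```

Note how depth error grows as disparity shrinks: distant objects produce tiny disparities, which is why baseline and camera placement matter so much for depth sensing.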
>> "Elon has said several times that humans can drive with two eyes and Tesla should be able to drive with X number of cameras"
This must be one of the most stupid takes that gets repeated nonstop by Tesla fans.
I just don't get it. Humans also have emotions and other biological senses that computers don't have. You just cannot compare the two.
What makes human so good at driving is that they can react relatively well to unknown new events. Teslas cannot do that, and with the current hardware never will.
Elon is deeply involved in engineering decisions in his companies, and has by all accounts deep knowledge in those areas.
And yet randos on the web keep asserting he's not an engineer. Is there any factual basis for this? Is it just that he doesn't have a degree with that word in the title?
Being an engineer is neither having a degree in engineering which he doesn't have nor managing them and it certainly isn't owning a company that employs them. It's working as an engineer.
He continually says dumb things that aren't true or reasonable, and he has never worked in the field. He's a rich boy who bought things with daddy's apartheid money.
Anyone who owns a Tesla vehicle with "full self driving" is probably chuckling to themselves about Tesla ever making useful general-purpose robots any time soon. Disclaimer: I own two Teslas with FSD and it's far from "full" or "self". I am very sceptical of robotaxis unless they have the appropriate sensors & SW (e.g. Waymo), which Elon has not done.
Finally, I know lots of people who own cars, but none who own robots. Many friends will not have Alexa in their homes due to privacy concerns. How many people will trust Elon to have a robot in their homes and assume he's being benign and safe with your personal data?
South Dakota has a population of less than 1 million people and the complexity of a CTO job of a state like South Dakota would be quite low. It is < 0.3% of the US Population and likely has de minimis benefit programs.
I suppose I'm an optimist. I believe it is possible to create a secure online voting system. My life savings might be held at Fidelity, Merrill, or elsewhere, my banking is online, 90% of my shopping is online and it all has "good enough" security. Plus most banks seem to be well behind the state of the art in security. I believe with the technologies we have available today, we could create a secure, immutable, auditable voting system. Do I believe any of the current vendors have done that? NO. But I believe it could be done.
People of limited technical ability can understand the checks and balances of a paper voting system, which legitimizes outcomes. No digital voting system I'm aware of has this characteristic.
Elections in most countries involve tens of thousands of volunteers for running ballot stations and counting votes.
That is a feature, not a problem to be solved. It means that there are tens of thousands of eyes that can spot things going wrong at every level.
Any effort to make voting simpler and more efficient reduces the number of people directly involved in the system. Efficiency is a problem even if the system is perfectly secure in a technological sense.
I find that argument lacking. Each of those people is also a potential weak link or even an adversary from a security standpoint. Would I rather have 10,000 weak links or one software system with rigorous testing and logging?
Money is stolen electronically every day; we do not know how to build secure systems. Considering the stakes for national elections (civil war or government instability), good enough is not good enough.
I agree with you on local elections - electronic voting is good enough for town or even state level elections. The stakes are dramatically lower.
It's of course possible. In fact electronic voting could be safer. The issue is that voting has nothing to do with technical details of safety and everything to do with trust. If your electorate doesn't understand modular arithmetic, then there's no point to electronic voting.
"trust" is a fuzzy concept - people use iMessage and have no concept of how it's architected or how it works. But they trust it. Why? Because trust is something that is transferable. If you trust me, and I tell you iMessage is safe, then you have a high likelihood of trusting iMessage. If this is reinforced by other people you trust, even better. There would be ways to create a voting system in the open, and have it validated by third parties. If you've ever bought stock, it's because, underlying the transaction, an auditor has certified the company's financials...
We have ID.gov and we have blockchain. If we can ensure that the person submitting the vote is indeed that person, would it matter whether it was online, in a booth, or by mail?
I can’t tell if this is a serious comment or humor.