Depending on corporations to have a moral foundation is a losing bet. It has to come from the outside.
Here’s a possible out: Senior engineers stop working for huge corporations and use these tools to start their own businesses. (Given today’s hiring situation, this may not even be a choice.) As the business grows, hire junior developers as apprentices to handle day-to-day tasks while the senior engineer works on bigger-picture stuff. The junior engineer grows into a senior engineer who eventually uses AI to start their own business. This is a very abbreviated version of what I hope I can do, at least.
So depending on people to do harder work for less pay--that is the winning bet?
Your solution cannot work at scale, because if the small companies you propose succeed, then they will become corporations, which, as you say, cannot be depended upon to do the right thing.
Won’t these just make it less likely that you can publish your work, and end up damaging your career in the short term? As opposed to getting published, having a career, with a long tail risk of being found out later?
And you could mitigate that risk by publishing research that doesn’t really matter, so no one ever checks.
Please give specific examples. I keep seeing vague comments like this about her, but very little in the way of specifics. Without specifics, this is just ad hominem rumor mongering.
Extreme specifics: her comments on work out of MIT on Color Center Qubits were basically "finally an example of actual progress in quantum computing because of reason A, B, C". That statement was in the class of "not even wrong" -- it was just a complete non sequitur. People actually in the fields she comments on frequently laugh at her uninformed nonsense. In this particular case, the people that did the study she praised were also among the ones laughing at her.
This is still extremely vague without A, B, C and an explanation of why there is no connection, i.e., specifics on why she was wrong. Just more vague references to other people's reactions.
She said "because the color centers are small which would enable miniaturization". Miniaturization is the last thing these are good for, and while they are "small" for a human, they are gigantic for an electronic device. She had absolutely no idea why these new devices are useful but made sweeping comments about how they will change the field. She hyped up something silly while at the same time she complains about hype over actually interesting results. She betrayed complete incompetence on the topic while pretending to be an expert. And she does that constantly, over such trivialities, that it is just exhausting to argue about it.
So it's not "because of reason A, B, C", but just "A", and "small which would enable miniaturization" is also not "non sequitur", let alone "complete".
By the way, the "B" was the ability to "target individual qubits more easily". In what way is this "complete non sequitur"
And you also "forgot" to specify "why these new devices are useful" so that we can't check whether she even mentioned it to asses whether she has no idea (is it interconnectedness of the modular systems ("C"), single-step ease of transfer on the CMOS, compatibility with modern semiconductor fab processes, remote control, something else entirely?)
> while pretending to be an expert
Where was that? Could you link to her statement pretending to be an expert on quantum computing?
> that it is just exhausting to argue about it.
Indeed, it's much easier to be very vague in your arguments, because then no one can verify any claims and you don't have to respond when such verification fails to match.
I saw one startup with about fifty engineers, and dozens of services. They had all of the problems that the post describes. Getting anything done was nearly impossible until you were in the system for at least six months and knew how to work around all the issues.
Here’s the kicker: They only had a few hundred MAUs. Not hundreds of thousands. Hundreds of users. So all this complexity was for nothing. They burned through $50M in VC money then went under. It’s a shame because their core product was very innovative and well architected, but it didn’t matter.
I don't think I learned basically anything about "fancy architecture" from my undergraduate courses except, ironically, reasoning about coupling and overhead.
I don't remember one solitary lecture on CI/CD, microservices, or even just deployment in general, in Uni. The closest that our comp. sci. classes ever came to touching on anything but the code itself was making us use SVN.
I've never heard of or seen a Software Development bachelor's degree?
I've seen Information Systems programs, which are usually CE, after-hours tracks. Neither Harvard, Yale, nor MIT has a software dev one, just Comp. Sci. I'm calling BS (no pun intended) on "software dev degrees" as a thing distinct from CS in any widespread fashion.
I have worked with a company with ~100k MAUs and ~4 teams, and even then it often feels like the system is over-microserviced (about two dozen services, I think).
Some of it definitely makes sense (especially since there is a lot of IoT involved), but microservices were mostly added as a way to develop new stuff without having to deal with the legacy monolith. The core of the application could easily be a single service backed by one big RDBMS with a few ancillary services around it.
The legacy monolith is still there, kicking and screaming. It didn't need "breaking up"; it needed (and I assume still needs) a major refactoring.
I’m not sure where the downside is. The engineers got paid, they got to put “founder” on their CVs, and they enjoyed the ride. Now they are more prepared for their next adventure.
The only ones who lost money were the investors, but nobody cares about them.
I wish we called hallucinations what they really are: bullshit. LLMs don’t perceive, so they can’t hallucinate. When a person bullshits, they’re not hallucinating or lying, they’re simply unconcerned with truth. They’re more interested in telling a good, coherent narrative, even if it’s not true.
I think this need to bullshit is probably inherent in LLMs. It’s essentially what they are built to do: take a text input and transform it into a coherent text output. Truth is irrelevant. The surprising thing is that they can ever get the right answer at all, not that they bullshit so much.
In the same sense that astrology readings, tarot readings, runes, augury, reading tea leaves are bullshit - they have oracular epistemology. Meaning comes from the querent suspending disbelief, forgetting for a moment that the I Ching is merely sticks.
It's why AI output is meaningless for everyone except the querent. No one cares about your horoscope. AI shares every salient feature with divination, except the aesthetics. The lack of candles, robes, and incense - the pageantry of divination - means a LOT of people are unable to see it for what it is.
We live in a culture so deprived of meaning we accidentally invented digital tea readings and people are asking it if they should break up with their girlfriend.
People use divination for all kinds of real-world purposes - when to have a wedding, where to buy a house, the stock market, what life path to take, whether to stay with or break up with their partner. Asking for code is no different, though we shouldn't pretend that turning the temperature to 0 leaves it divinatory.
Randomness, while typical, is not a requirement for divination. It simply replaces the tarot deck with a Ouija board.
What's being asked for is a special carve-out, an exception, for the sake of feeling above those other people and their practice that isn't my practice, which of course is correct and true.
This is exactly what I've been saying: it's not that LLMs sometimes "hallucinate" and thus provide wrong answers, it's that they never even provide right answers at all. We as humans ascribe "rightness" to the synthetic text extruded by these algorithms after the fact as we evaluate what it means. The synthetic text extruder doesn't "care" one way or another.
Or maybe we could stop anthropomorphizing tech and call the "hallucinations" what they really are: artifacts introduced by lossy compression.
No one is calling the crap that shows up in JPEGs "hallucinations" or "bullshit"; it's a commonly accepted side effect of a compression algorithm that makes up shit that isn't there in the original image. Now we're doing the same lossy compression with language and suddenly it's "hallucinations" and "bullshit" because it's so uncanny.
> Or maybe we could stop anthropomorphizing tech and call the "hallucinations" what they really are: artifacts introduced by lossy compression.
That would be tantamount to removing the anti-gravity boots which these valuations depend on. A pension fund manager would look at the above statement and think, "So it's just heavily subsidized, energy-intensive, buggy software that needs human oversight to deliver value?"
I would guess that for every app that was vibe coded and made $80k in revenue, there are thousands of attempts that went off a cliff or into a wall, generating $0 revenue and wasting a human’s time. So it’s just survivorship bias. Sure, it’s possible to make something useful, but you will probably fail.
I’m using AI to increase my productivity, but whenever I’ve vibe coded (not intentionally, just by getting caught up in the dumb vibes) I’ve regretted it. I’ve ended up with a tangled mess.
Unless I spend a considerable amount of time writing a wall of text to do anything beyond simple tab completion, I have yet to see it save me a significant amount of time. The one time it helped me come up with a complex solution gets overshadowed by ten rabbit holes of hallucinated API usage that I now have to sort out, clean up, etc.
This was the rabbit hole that I started down in the late 90s and still haven’t come out of. I was the webmaster of the Analog Science Fiction website and I was building tons of static pages, each with the same header and side bar. It drove me nuts. So I did some research and found out about Apache server side includes. Woo hoo! Keeping it DRY (before I knew DRY was a thing).
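For anyone who never ran into them, SSI directives are just specially formatted HTML comments that Apache expands before the page goes out; a minimal sketch, with made-up file names:

<!-- article.shtml: Apache replaces the include directives before serving -->
<html>
  <body>
    <!--#include virtual="/includes/header.html" -->
    <p>The unique article content goes here.</p>
    <!--#include virtual="/includes/sidebar.html" -->
  </body>
</html>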
Yeah, we’ve been solving this over and over in different ways. For those saying that iframes are good enough, they’re not. Iframes don’t expand to fit content. And server side solutions require a server. Why not have a simple client side method for this? I think it’s a valid question. Now that we’re fixing a lot of the irritation in web development, it seems worth considering.
Server-side includes FTW! When a buddy and I started making "web stuff" back in the mid-90s the idea of DRY also just made sense to us.
My dialup ISP back then didn't disable using .htaccess files in the web space they provided to end users. That meant I could turn on server-side includes! Later I figured out how to enable CGI. (I even went so far as to code rudimentary webshells in Perl just so I could explore the webserver box...)
This here is the main idea of HTMX, extended to work for any tag: p, div, content, aside …
There are many examples of HTMX (since it is self-contained and tiny) being used alongside existing frameworks.
Of course, for some of us, since HTMX brings dynamic UX to back-end frameworks, it is a way of life: https://harcstack.org (warning - Raku code may hurt your eyes)
If you want a more straightforward and simple hypermedia approach, then check out https://data-star.dev (highly recommended; there are great YouTube videos where the maintainers discuss their insights). It follows up where htmx took things.
I used the seamless attribute extensively in the past; it still doesn't work the way the GP intended, which is to fit into the layout flow, for example to take the full width provided by the parent, or to automatically resize the height (the pain of years of my career).
It worked rather like a reverse shadow DOM, allowing CSS from the parent document to leak into the child, removing borders and other visual chrome that would make it distinguishable from the host, except you still had to use fixed CSS layouts and resize it with JS.
I mean, in 1996-era Netscape you could do this (I run the server for a website that still uses this):
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Frameset//EN" "http://www.w3.org/TR/html4/frameset.dtd">
<html>
<frameset cols="1000, *">
<frame src="FRAMESET_navigation.html" name="navigation">
<frame src="FRAMESET_home.html" name="in">
</frameset>
</html>
The thing that always bugged me about frames is that they are too clever. I don't want to reload only the frame HTML when I right-click and reload. Sure, the idea was to cache those separately, but come on -- frames and caching are meant to solve two different problems, and by munging them together they somewhat sucked at solving either.
To me includes for HTML should work in the dumbest way possible. And that means: Take the text from the include and paste it where the include was and give the browser the resulting text.
If you want to cache a nav section separately because it appears the same on every page, let's add a cache attribute that solves the problem independently.
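A hypothetical sketch of what that might look like (the tag and attribute names here are invented; no such element exists in browsers today):

<!-- dumb, text-level inclusion; the hypothetical cache attribute only controls revalidation -->
<include src="/nav.html" cache="max-age=86400"></include>
<include src="/footer.html" cache="no-store"></include>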
> The optimal solution would be using a template engine to generate static documents.
This helps the creator, but not the consumer, right? That is, if I visit 100 of your static documents created with a template engine, then I'll still be downloading some identical content 100 times.
XSLT solved this problem. But it had poor tool support (DreamWeaver etc.) and ran into a bunch of anti-XML sentiment, I assume as blowback from capital-E Enterprise stacks going insane with XML for everything.
XSLT did exactly what HTML includes could do, and more. The user agent could cache stylesheets or, if it wanted, override a linked stylesheet (like with CSS) and transform the raw data any way it wanted.
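A rough sketch of how that looked in practice, with invented file names. Each page ships only its unique content plus a reference to a shared stylesheet:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="site.xsl"?>
<page>
  <title>Article 42</title>
  <body>Only the unique content travels with each page.</body>
</page>

The stylesheet, downloaded and cached once, wraps every page in the shared chrome:

<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/page">
    <html>
      <head><title><xsl:value-of select="title"/></title></head>
      <body>
        <div id="header">Shared site header, defined once</div>
        <p><xsl:value-of select="body"/></p>
        <div id="footer">Shared footer, also defined once</div>
      </body>
    </html>
  </xsl:template>
</xsl:stylesheet>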
> I'll still be downloading some identical content 100 times.
That doesn't seem like a significant problem at all, on the consumer side.
What is this identical content across 100 different pages? Page header, footer, sidebar? The text content of those should be small relative to the unique page content, so who cares?
Usually most of the weight is images, scripts and CSS, and those don't need to be duplicated.
If the common text content is large for some reason, put the small dynamic part in an iframe, or swap it out with javascript.
If anyone has a genuine example of a site where redundant HTML content across multiple pages caused significant bloat, I'd be interested to hear about it.
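For what it's worth, the JS swap mentioned above is roughly this (the path and element id are made up):

<div id="site-nav"><!-- fallback content for visitors without JS --></div>
<script>
  // Fetch the shared fragment and splice it into the page.
  fetch('/fragments/nav.html')
    .then(function (response) { return response.text(); })
    .then(function (html) {
      document.getElementById('site-nav').innerHTML = html;
    });
</script>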
I care! It is unnecessary complexity, and frankly ugly. If you can avoid repetition, then you should, even if the reason is not obvious.
To give you a concrete example, consider caching (or, equivalently, compiling) web pages. Maybe you have 100 articles, which share a common header and footer. If you make a change to the header, then all 100 articles have to be uncached/rebuilt. Why? Because somebody did not remove the duplication when they had the chance :-)
You can message the page dimensions to the parent. To do it cross-domain, you can load the same URL into the parent with the height in the #location hash. It won't refresh that way.
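A rough sketch of the postMessage variant (the ids are invented); the hash trick is the older fallback for the same idea:

<!-- inside the framed page: report our height to the parent -->
<script>
  window.parent.postMessage({ height: document.documentElement.scrollHeight }, '*');
</script>

<!-- in the parent page: resize the iframe when the message arrives -->
<script>
  window.addEventListener('message', function (event) {
    // In production, check event.origin before trusting the data.
    var frame = document.getElementById('embedded-frame');
    if (frame && event.data && event.data.height) {
      frame.style.height = event.data.height + 'px';
    }
  });
</script>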
I know it’s possible to work around it, but that’s not the point. This is such a common use case that it seems worthwhile to pave the cowpath. We’ve paved a lot of cowpaths that are far less trodden than this one. This is practically a cow superhighway.
We’ve built an industry around solving this problem. What if, for some basic web publishing use cases, we could replace a complex web framework with one new tag?
> XHTML 2 takes a completely different approach, by taking the premise that all images have a long description and treating the image and the text as equivalents. In XHTML 2 any element may have a @src attribute, which specifies a resource (such as an image) to load instead of the element.
The content of the div can be used to support legacy browsers. It can have a link, an iframe, a message, or an outdated version of the content/menu/header/footer, etc.
> We’ve built an industry around solving this problem. What if, for some basic web publishing use cases, we could replace a complex web framework with one new tag?
I actually did that replacement, with a few enhancements (maybe 100 lines of code, total?). It's in arxiv pending at the moment. In about two days it will be done and I'll post a Show HN here.
> Woo hoo! Keeping it DRY (before I knew DRY was a thing)
I still remember the script I wrote to replace thousands (literally) of slightly different headers and footers on some large websites of the 90s. How liberating to finally have that.
As a dev from the early 90s, I share the sentiment. I've watched JavaScript become more and more complex and bloated for little to no benefit to the end user.
That’s the reason it doesn’t get implemented. Nobody wants the simple, I/O-based inclusion. The moment you try to propose it, there’ll be demands to add conditional logic or macros. More relevant for the web is instantiating HTML templates, which will undoubtedly get piled onto such a feature. And pretty soon you have yet another:
The difference between "a line of JS" and a standardized declarative solution is of course that a meek "line of $turing_complete_language" can not, in the general case, be known and trusted to do what it purports to do, and nothing else; you've basically enabled any kind of computation, and any kind of behavior. With an include tag or attribute that's different; it's behavior is described by standards, and (except for knowing what content we might be pulling in) we can 100% tell the effects from static analysis, that is, without executing the code. With "a line of JS" the only way, in the general case, to know what it does is to run it (an infinite number of times). Also, because it's not standardized, it's much harder to save to disk, to index and to archive it.
I think of all the “hygienic macro” sorts of problems. You really ought to be able to transclude a chunk of HTML and the associated CSS into another document, but you have to watch out for ‘id’ values being unique, never mind the same names being used for CSS classes. Figuring out the rendering intent for CSS could also be complicated: the guest CSS might be written like
.container .style { … }
where the container is basically the whole guest document, but you still want those rules to apply… Maybe you want the guest text to appear in the same font as the host document but still want colors and font weights to apply. Maybe you want to make the colors muted to be consistent with the host document; maybe the background of the host document is different and the guest text doesn't contrast enough anymore, etc.
HTML is a markup language, not a programming language. It's like asking why Markdown can't handle includes. Some Markdown editors support them (just like some server-side tools do for HTML), but not all.
Including another document is much closer to a markup operation than a programming operation. We already include styles, scripts, images, videos, fonts...why not document fragments?
Markdown can't do most of those, so it makes more sense why it doesn't have includes, but I'd still argue it definitely should. I generally dislike LaTeX, but about the only thing I liked about it when writing my thesis was that I could have each chapter in its own file and just include all of them in the main file.
This isn’t programming. It’s transclusion[0]. Essentially, iframes and images are already forms of transclusion, so why not transclude html and have the iframe expand to fit the content?
As I wrote that, I realized there could be cumulative layout shift, so that’s an argument against. To avoid that, the browser would have to download all transcluded content before rendering. In the past, this would have been a dealbreaker, but maybe it’s more feasible now with http multiplexing.
With Early Hints (HTTP code 103), it seems especially feasible. You can start downloading the included content one round-trip after the first byte is sent.
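A sketch of what that could look like on the wire (paths invented): the interim 103 response lets the browser start fetching the transcluded fragments before the server has finished generating the main document.

HTTP/1.1 103 Early Hints
Link: </fragments/header.html>; rel=preload; as=fetch
Link: </fragments/footer.html>; rel=preload; as=fetch

HTTP/1.1 200 OK
Content-Type: text/html

<!-- ...the full document, which transcludes the fragments hinted above... -->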
No, HTML is fundamentally different because (for a static site without any JS dom manipulation) it has all the semantic content, while stylesheets, images, objects, etc. are just about presentation.
I think the distinction is "semantic on what level/perspective?". An image packaged as a binary blob is semantically opaque until it is rendered. Meanwhile, seeing <img> in the HTML or the file extension .jpg in any context that displays file extensions tells me some information right out of the gate. And note that all three of these examples are different information: the HTML tag tells me it's an image, whereas the file extension tells me it's a JPEG image, and the image tells me what the image contains. HTML is an example of some kind of separation, as it can tell you some semantic meaning of the data without telling you all of it. Distinguishing and then actually separating semantics means data can be interpreted with different semantics, and we usually choose to focus on one alternative interpretation. Then I can say that HTML alone regards some semantics (e.g. there is an image here) while disregarding others (e.g. the image is an image of a brick house).
I'm not sure what isn't computing. Presumably you know (or have looked up) the meaning of "semantic"? Images and videos are graphic, not semantic, content. To the extent they are rendering semantic content, that content should be described in the alt tag.
I'm not defending it, because when I started web development this was one of the first problems I ran into as well -- how the heck do you include a common header.
But the original concept of HTML was standalone documents, not websites with reusable components like headers and footers and navbars.
That being said, I still don't understand why then the frames monstrosity was invented, rather than a basic include. To save on bandwidth or something?
Frames were widely abused by early web apps to do dynamic interfaces before XHR was invented/widely supported. The "app" had a bunch of sub-frames with all the links and forms carefully pointing to different frames in the frameset.
A link in a sidebar frame would open a link in the "editor" frame which loaded a page with a normal HTML form. Submitting the form reloaded it in that same frame. Often the form would have multiple submit buttons, one to save edits in progress and another to submit the completed form and move to the next step. The current app state was maintained server-side, and validation was often handled there, save for some basic formatting that client-side JavaScript could handle.
This setup allowed even the most primitive frame-supporting browsers to use CRUD web apps. IIRC early web frameworks like WebObjects leaned into that model of web app.
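A stripped-down sketch of that pattern (file and frame names invented): the sidebar's links and the editor's forms target named frames, so each click or submit reloads only that pane.

<frameset cols="200,*">
  <frame src="sidebar.html" name="sidebar">
  <frame src="editor.html" name="editor">
</frameset>

<!-- sidebar.html: every link loads its result into the editor pane -->
<a href="edit-record.html?id=42" target="editor">Edit record 42</a>

<!-- editor.html: submitting reloads only this frame; state lives on the server -->
<form action="save-record" method="post" target="editor">
  <input type="text" name="title">
  <input type="submit" name="action" value="Save draft">
  <input type="submit" name="action" value="Submit and continue">
</form>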
Oh my goodness, yes you're right, I'd forgotten entirely about those.
They were horrible -- you'd hit the back button and only one of the frames would go back and then the app would be in an inconsistent state... it was a mess!
You needed to hit the reset button (and hoped it worked) and never the back button! Yes, I suffered through early SAP web apps built entirely with frames and HTML forms. It was terrible.
I don't love JavaScript monstrosities but XHR and dynamic HTML were a vast improvement over HTML forms and frame/iframe abuse.
Really well written web form applications were a delight in 2001 and a large improvement over conventional applications written for Windows. It helped that application data was in a SQL database, with a schema, protected by transactions, etc., as opposed to a tangle of pointers that would eventually go bad and crash the app -- I made very complicated forms for demographic profiling, scientific paper submission, application submission, document search, etc. If you did not use "session" variables for application state, the worst that could happen was a desynchronization between the browser and the server, which (1) would get resynchronized on any load or reload, (2) never got the system into a "stuck" state from the user's viewpoint, and (3) never lost more than a screenful of work.
Try some other architecture though and all bets were off.
Amazon's web store looked and worked mostly the same as it does now, people were very impressed with MapQuest, etc.
Applications like that can feel really fast, almost desktop-application fast, if you are running them on a powerful desktop computer and viewing them on another computer or tablet over a LAN.
The original concept of HTML was as an SGML subset, and SGML had this functionality, precisely because it's very handy for document authoring to be able to share common snippets.
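Concretely, the SGML/XML mechanism for this was external entities, roughly like the sketch below (file names made up); browser HTML parsers never honored custom entities, which is part of why the question keeps resurfacing.

<!DOCTYPE html [
  <!ENTITY header SYSTEM "header.html">
  <!ENTITY footer SYSTEM "footer.html">
]>
<html>
  <body>
    &header;
    <p>Unique page content.</p>
    &footer;
  </body>
</html>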
Don’t get them to make design decisions. They can’t do it.
Often, I use LLMs to write the V1 of whatever module I’m working on. I try to get it to do the simplest thing that works and that’s it. Then I refactor it to be good. This is how I worked before LLMs already: do the simplest thing that works, even if it’s sloppy and dumb, then refactor. The LLM just lets me skip that first step (sometimes). Over time, I’m building up a file of coding standards for them to follow, so their V1 doesn’t require as much refactoring, but they never get it “right”.
Sometimes they’ll go off into lalaland with stuff that’s so over complicated that I ignore it. The key was noticing when it was going down some dumb rabbit hole and bailing out quick. They never turn back. They’ll always come up with another dumb solution to fix the problem they never should have created in the first place.
I feel like they bit off more than they can chew. I want to like it, too. But they’re trying to create a whole new JavaScript ecosystem from the ground up, and a lot of it depends on maintaining a seamless compatibility layer that’s always a moving target.
It’s not just node. They have Fresh, which depends on Preact, which is a compatibility layer over the React API. Why? To save a few K on bundle size? They have JSR. Why?
The sales pitch is great: Typescript that just works. But in my experience, it didn’t “just work”. I tried building something with Fresh and ran into issues immediately. I bailed out.
I agree with all of this and hope it comes to pass. I don’t see how it’s possible without drastically changing the capitalist values that underpin our society.
I think capitalism is the best way to structure an economy, but it’s a terrible way to structure a society. Nowadays, we use capitalist values as the benchmark for what matters in almost every context, and then we wonder why our society is a dystopia.
Do you mean a market/profit-based economy when you say "capitalism"? Because it's becoming increasingly obvious that giving all the profit to the people with the investment capital is not a great way to structure either an economy or a society. There are other ways to harness the profit motive besides capitalism.
I agree. Capitalism definitely has its merits, especially when compared with feudalism, but then again, maybe we need something new after all these centuries.
I sincerely hope that we discover some serious alien stuff (like TMA-1) so that humans finally have a very good excuse to move to a new paradigm.