I'm an Old Fart and AI Makes Me Sad (medium.com/alex.suzuki)
205 points by alex_suzuki 8 months ago | 302 comments



AI not being open is my own personal source of sadness. The mathematics involved aren't out of reach, but the computing power and _personal time_ necessary to build a dataset and train a model are absolutely beyond my reach.

It's unnerving that such a revolutionary new technology in computing is fundamentally tethered to large binary blobs, usually proprietary or of uncertain provenance. That's unlike the computing advances we have hitherto enjoyed; even technologies for operating large data centres and distributed systems could at least be deployed and toyed with on virtually any common consumer hardware.

Perhaps it's because thus far the realm of Big Data wasn't alluring enough to draw in myself and the similarly minded. It's only now that a certain form of large data set has become tied to an interesting application that it's drawn my attention. Not to say that Big Data and AI are one and the same; only that they both deal with large-in-context data sets that are difficult to construct, acquire, and manipulate.


You're describing A.I. as a black box of proprietary workings, far beyond any mortal's personal budget to create, and too complex anyway, which internally runs in a language unintelligible to humans, and through layers and layers of abstractions, has a human-readable interface. To me, that exactly describes an x64 or ARM CPU. But since we are not children freshly learning about reality, we don't see these two black boxes in front of us in the same light. One is something we just got used to kinda-understanding-but-not-really decade after decade. One is something brand new to us.


x64 and ARM emulators exist, and the code for some is quite legible. There's a whole community of folks who build RISC-V implementations in Verilog or VHDL. In my own experience at university, decades ago, we built functioning HC11 CPUs out of logic gates and wrote assembler for them.

I'll admit that there's a definite and clear accessibility curve to building a CPU, but it's nowhere near as opaque as the binary blobs one can access for AI tooling.
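
For anyone who wants a taste of that kind of exercise without a breadboard, here's a minimal sketch (mine, not from any course) of a half adder built purely out of NAND gates, in Python:

    # Everything below is built from a single primitive: NAND.
    def nand(a, b):
        return 0 if (a and b) else 1

    def half_adder(a, b):
        n = nand(a, b)
        total = nand(nand(a, n), nand(b, n))  # XOR from four NANDs
        carry = nand(n, n)                    # AND from two NANDs (NOT of NAND)
        return total, carry

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, half_adder(a, b))  # sum = a XOR b, carry = a AND b

Chain a few of these into full adders and you already have the arithmetic core of a toy CPU; the rest is registers and control logic.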


Now I want to go and play NandGame again...

https://nandgame.com/


I hear really good things about Turing Complete:

https://store.steampowered.com/app/1444480/Turing_Complete/


x86 and ARM are open in many, many ways that, for example, ChatGPT isn't.


In economic terms, it feels like we the people won't have much of a chance at AI wealth. Software was infinitely scalable, easy to make, and powerful, but AI is data with tiny amounts of code, and machine learning engineers are going to be a dime a dozen soon. So hardware companies will reap most of the benefits, and making it with a garage startup is hard.


This is much how numerical computing has always been. We could individually approach the math and algorithms, but need major resources and input data to apply them to real-world problems like weather forecasts, geophysical simulations, digital wind tunnels, etc.

I think what's different right now is the sudden hyper focus. It's a chaotic gold rush driven by speculators and prospectors as much as by technologists or scientists.

I wonder what we'll see on the other side when the fog starts to clear. I expect lots of "losers" i.e. frustrated prospectors who never found their gold mine. And probably some proprietary wins locked up by big investors. But, maybe we'll at least benefit from some new commodity products reaching out to a broader market after they initially targeted the rush itself...


I'm hopeful that this will get better as time goes on, with advances in unsupervised and multimodal learning that will enable us mere peons to achieve our niche use cases by fine-tuning a larger model with minimal compute.
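
That fine-tuning path is already surprisingly cheap in terms of trainable parameters. Here's a rough sketch of the low-rank-adapter idea (hand-rolled for illustration; real setups would use a library like PEFT, and the sizes here are made up):

    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        """Frozen pretrained layer plus a tiny trainable low-rank update."""
        def __init__(self, base: nn.Linear, rank: int = 4):
            super().__init__()
            self.base = base
            for p in self.base.parameters():
                p.requires_grad = False                       # freeze the big weights
            self.A = nn.Parameter(torch.randn(base.in_features, rank) * 0.01)
            self.B = nn.Parameter(torch.zeros(rank, base.out_features))

        def forward(self, x):
            return self.base(x) + x @ self.A @ self.B         # base output + low-rank delta

    layer = LoRALinear(nn.Linear(768, 768))
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    total = sum(p.numel() for p in layer.parameters())
    print(f"training {trainable} of {total} parameters")      # a few thousand vs. ~590k

The forward pass through the big model is still there, but gradients and optimiser state only cover the tiny matrices, which is what makes this feasible on modest hardware.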


It is not fundamentally different from for example MS Office being a large binary blob. We don't have the source code, it is proprietary, and a company controls it.

At the same time, a group of guys trained a model that plays Go; it can beat AlphaGo and is probably the strongest Go player ever. No one in that group could have created LeelaZero by themselves, but collaboration made it possible.


> fundamentally tethered to large binary blobs

In many instances you don't even have the blob; you can just call an API that invokes it and runs the output through multiple other hidden algorithms before sending it to you.


This attitude is like the rugged forest hermit's aversion to computers because silicon wafer production is out of reach for individuals.


Another reason why AI makes me sad: take, for example, images. There is really no reason anymore to look at any posted art, image or article illustration, since, while it may look like an image, it may not be an image in the sense of an opus operatum. There is no sense in investigating and uncovering the decisions and choices that made the image, its composition, style, etc., thus opening the image to an experience, as a projection of human experience and thought, since there were no such choices made. If there are no (strong) contextual hints that this is an actual image, there is really no sense in looking at it anymore. And now much the same for video. – For an ardent fan of visual media, it's just depressing.


None of what you describe has ever been my reasons for looking at an image, and yet I immensely enjoy looking at images.


Do you like procedurally generated content in games? Or do you quickly find it boring and empty, devoid of soul?

I thought about it, and while part of my overall dislike of procgen in games comes from it being lazy and repetitive (too easy to spot the patterns), another part comes just from realizing there's nothing behind it. No mind, no plan, just RNG that I can keep rolling to get variations of the same theme.

(There's also a sense of loneliness - when everyone is consuming unique content, it stops being a topic of conversation, because there's no shared experience anymore.)


What about taking a solitary walk in the forest? Assuming no Intelligent Design, the trees and everything else there were created without any guiding plan. And if it's a large forest, it's reasonably likely that by the time the next person goes there, even if they end up following the same path, the vegetation will look entirely different. Does that uniqueness of "content" make the forest devoid of soul?

And if you do accept Intelligent Design, then arguably the RNG of AI images is also guided by it to the same extent.


With nature, I believe it's interesting because unlike something like Minecraft, where a single chunk is entirely dictated by math, everything in nature is dictated by life. A fully grown tree exists because a previous animal didn't eat it as a sapling. A hole which was dug by rabbits now houses snakes. A path has been created by various animals moving between the trees, so there's less vegetation there, and it allows for an easier walk through the forest...but that path wasn't there originally and animals had to choose to go that way and make the path.

Whether there is intelligent design or not, you can see the effects of life existing and changing in a forest. You can detect choices, individual and collective.


We could argue the universe follows mathematical laws in the same way. It's more a matter of how complex those laws are. With gen AI we finally see human-created mathematical rules that approach the scale of natural ones.


I can say with some surety that I find things that on the whole have not been visibly affected by life as beautiful as those that have.

And I've found many Minecraft landscapes beautiful too.


I strongly disagree with this comparison but I’m having a hard time putting my finger on why.

Part of it is that all current AI generated imagery is an imitation of something, but an imitation that doesn’t and can’t follow the rules of the original or understand the meaning of the original. It’s fake.

Nature feels like the opposite. It’s real and brutal and, for living things, it is driven by millions of years of evolution. It’s entirely grounded in the physical laws of our universe.


AI is entirely grounded in the physical laws of our universe too.

And all art is imitation - that is how we learn to follow rules, how the rules come into being, and how we communicate.

It's reasonable to think that they are not good artists who consistently know when to follow rules and when and how to break them, but I don't see how this is any different than anyone else making mistakes while learning.

(As for understanding, I'd bet ChatGPT could give a more convincing analysis of an artwork than most people)


It makes me sad to think that some people can't tell the difference between life ("nature") and some kind of interpolation of data (as impressive as it may be).

Maybe that's why we are destroying biodiversity, and most tech people seem to love it!


Apologies if I'm obtuse, but what exactly is the difference? I mean, if we accept philosophical reductionism, isn't nature just the result of (very complex and impressive) data-intensive physical processes?


I can't answer that - I'm bored out of my skull by nature. Always have been. Solitary walks in forests or parks, I find them great for clearing out my thoughts - I'll be thinking about lots of things, but at no point will I be thinking about the nature around me. Which is perhaps preferable to taking a walk in an urban neighbourhood, because there I'll always find something interesting that will let me avoid uncomfortable thoughts.

(Except in context of biotech or dynamic systems; those are lenses that make me appreciate nature, but I realize this is an engineer's point of view - I'm excited about possibility of applying what we're learning, and/or repurposing what's already there.)


My issue with procedurally generated content is that it is often a replacement for effort, and so the problem is not lack of a mind behind it, but as you note that it is often lazy and repetitive, and so simply not good. Humans can be lazy and make repetitive crap too, and when they do I'll consider it just as crap whether or not their underlying ideas were amazing.


Humans may be lazy and produce crap, but AI is lazy and produces crap. See the difference?


No, I don't agree that's a meaningful difference.


unable to discern a difference between "may be" and "is", perhaps you missed that day of school?


And who described, designed and built AI algorithms? See the connection? /s


FWIW, we're not really describing and designing AI algorithms anymore - we're taking a simple algorithm and keep throwing heaps of data at it, until it designs itself.


So what is it that you enjoy, when looking at a certain image (especially, when it comes to this particular image, as compared to any other representation of the same subject matter), if not the artistic choices? A weighted average? I'm skeptical…


> So what is it that you enjoy, when looking at a certain image (especially, when it comes to this particular image, as compared to any other representation of the same subject matter), if not the artistic choices?

Things can be visually interesting without intent or artistic meaning. Even if you've never stared into a fire, surely you would agree that many people do it? Or the ocean? Or hills covered in snow? Or trees? Or other people having lunch? Or a pretty face? People have gazed contentedly at things that are just pretty, without any worry about "artistic choices", since people have existed.


I'd argue there's a difference between media and the world as-is. Media are crucially a projection, transposing an impression as an expression into another space, implying a difference between these two spaces and their respective dimensionality. This is what makes media interesting to us. All that is implied in this: human experience, narrative strategies, focus, depth, texture, skill, choices, etc. Otherwise, we're just immersing ourselves in the world as-is, enjoying ourselves and our own choices. But for art as an act of communication, this doesn't work. It doesn't even work for media, which is why immersive media, like 3D TV, regularly fail: they simply miss the media part.


You are presuming we all care about art as an act of communication all the time, or even at all. A lot of the time what the artist is trying to communicate is not all that interesting to me.

I can enjoy just looking at something, whether it's a painting or a generated image, or just the sky. It doesn't take away anything from the experience to me if I know there is no meaning or intent behind it.


I can just try to relate what kind of impact this has on me. As mentioned above, visual media are important to me. I graduated in the theoretical side of this, part of my work is visual design. First, I observed that these images made me depressed, whenever I encountered them, then, that I now just ignore any images by default. It hurts. I miss what had been a constituent part of me.


I struggle to even imagine feeling that way about it. I don't know what to suggest, other than perhaps experimenting with meditation, to see whether detaching your feelings about the process from your feelings about the outcome and the images might make a difference.

To me the two are entirely separate. E.g. when I make things myself, whether I draw, or play the piano, or write code, the outcome is often secondary (I'm shit on the piano and mediocre at drawing, but it doesn't matter). When I learn about an artist, the person and ideas might be interesting even if I have no interest in their art (I pointed out elsewhere that my favourite part of the Matisse museum in Nice is not his art but the olive garden outside it, yet Matisse is fascinating even if I don't care about his art at all). But when I look at their art, it's purely about what I see then and there. It's not that knowing the history behind it never affects me, or interests me, but that I don't need it to enjoy the work, nor will I always - or even most of the time - feel the slightest urge to learn about the work or the artist.


I think it's a matter of the parable of the puppet theatre. According to this, we may either "naively" enjoy the play, immersing ourselves in the story presented, suspending our disbelief regarding the puppets, taking the scene "for real", or we may engage in a dialog with it, by applying our own perspective to what is already a perspective (or, in my words, a projection). From this arose Kant's suggested "disinterested pleasure", a state of alert suspense and readiness, which has become the blueprint of bourgeois art consumption ever since the Enlightenment. It's really this framework of looking at, engaging with and producing art and visual media that I'm referring to.

On a less theory-heavy notion, the mere fact that a certain image or certain parts of a given image enjoyed that much of an investment to look like it does, somewhat guarantees that it had been worth the effort to someone, that it was meant to convey something. A quality, which is now gone for ever. Meaning, even the barest spark of expression, is now just at random. Moreover, by the very definition, art will now be dogmatic, even where it's asked for aberrations and exceptions, and redundant, since it's just a product of weighted averages based on an existing library of expressions. At least applied art is pretty much over, as is visual media. (Just look at what happened to cinema, when the regulating factor of film stock and related production costs fell away.)


TIL. Thanks a lot for your comment. This meta analysis just opened another dimension for engaging with art.

> the mere fact that a certain image or certain parts of a given image enjoyed that much of an investment to look like it does, somewhat guarantees that it had been worth the effort to someone, that it was meant to convey something

For reasons very similar to yours, I enjoy going to 2nd hand record stores: if someone has bought the record a 1st time, then someone else thought it was worth buying and putting on a shelf, so it's more likely to bring enjoyment than a random new purchase.

> Moreover, by the very definition, art will now be dogmatic, even where it's asked for aberrations and exceptions, and redundant, since it's just a product of weighted averages based on an existing library of expressions.

I don't know much about visual media except music videos, but if you want to discover non-dogmatic art, I'd recommend you try out what's popular in any random country.

I love music, and I've found Russian rap and French pop to be extraordinary, maybe because they follow their own dogma, a dogma that feels very foreign to someone more used to North American music: whatever the Russian (or French) weighted average may be, it's very far from the norm of what I'm used to, so it stands out.


I fundamentally reject your notion that this changes anything other than your choice to dismiss what you see because you can no longer feel sure of your understanding of things you couldn't be sure of before either.

But then from early on I found a lot of attempted analysis of art shallow and often outright insulting in its insistence on knowing intent that was often not there.

E.g. I recall an interview with a Norwegian author where the interviewer was terribly invested in the symbolism of a scene, and the author thought for a moment and answered that he just thought it sounded good, and wished he'd thought of that.

In other words, while there certainly is intent behind a lot of art, your interpretation is yours. It may or may not even intersect with any authorial intent.

So why does it matter?

I've written two novels. I don't give the slightest shit if people interpret things in them how I intended. For the most part I just wanted to evoke certain feelings. There's no intentional symbolism there. Many things in the setting that I know people will interpret as a positive outlook I consider depressing - from my perspective it's a dystopia, but I don't want to make it feel like that. But how people take it is entirely up to them.

The artist's investment has no relevance to or bearing on my enjoyment of a work. Nor would I expect or care whether that is the case when people engage with my own. (I get that people who make a living off human art are worried, and that is valid)

I for one look forward to consuming AI art when it is pleasing. I also still look forward to consuming human art when it is pleasing. And hybrids.

I really don't care which is which if it looks good to me, sounds good to me, reads well to me, makes me think, makes me feel.


> I can enjoy just looking at something, whether it's a painting or a generated image, or just the sky. It doesn't take away anything from the experience to me if I know there is no meaning or intent behind it.

I feel like that's an attitude that's particularly common among software engineers: see an artifact as its surface presentation and nothing more. Maybe it's a result of thinking about abstractions like APIs too much.

When I appreciate an image, it typically has to be either a reflection of reality (this thing I'm seeing is a real thing that I now know about) or an actual person's expression (an act of communication) for me not to feel cheated.

It doesn't help that one of the biggest use-cases for "AI" image generation is the creation of clickbait bullshit masquerading as a reflection of reality.

Basically: the context is as important as the raw image itself.


I think this gets it entirely upside down: There is meaning and symbolism and patterns everywhere, but we can never be sure our interpretation matches some intent, and I see no compelling reason to see that 'intent' as more than complex computation anywhere, and pretty much everything, everywhere is computation.

There is context to a storm, or a tree too. Many things have more, and more complex, context than human intent.

Intent is just one categorisation of data, and it can be interesting, but so are many other categorizations of data.

And we also often get authorial intent wrong, often embarrassingly so.

It also compels me to see the dismissive attitude to AI art as fundamentally flawed, in that while we're clearly not "there" yet, I see no fundamental conceptual difference between different forms of computation - including the human mind - so any dismissal of the "just statistics" kind to me is an attempt to imbue the human mind with religious characteristics I fundamentally reject.

At the same time, to me, that attempt denigrates human art, which to me is equally just a result of computation.

If you can't enjoy art unless you think there's some spark of something more behind it, then to me the only reason you fail to reject human art too is faith in something there's no reason to think is there.

Nothing we know suggests we are - or can be - anything more than automatons resulting from computation any more than the trees in a forest or waves on a beach.

Yet we still have intent, even if it is just a product of computation.

And we still produce beautiful patterns that I enjoy whether or not I recognize your intent, and whether or not there was any intent behind any given aspect I enjoy.


> I feel like that's an attitude that's particularly common among software engineers: see an artifact as its surface presentation and nothing more.

I think there's no there there. Most of my time as a software engineer is spent understanding what someone else was thinking and trying to accomplish at the time through the lens of the code they ended up writing. Is that archaeological endeavor not strongly connected to if not exactly what we're talking about here?

It's just that I also know how to enjoy looking at things without any of that.


What it looks like. Whether the image itself appeals to me or moves me.

Whether those are "artistic choices" or random chance or a set of heuristics does not affect my enjoyment.

It does not mean I don't find knowing about the choices of an artist or their process interesting at times, but for the vast majority of art I see I have no idea about them. They are separate from my enjoyment of a given image.

I don't know why you are skeptical - to me the notion that anything but the image itself should be necessary for me to enjoy it is a bizarre notion and I suspect it would be for a whole lot of people. Most people would not be able to name whomever created the vast majority of art they've seen, nor name the style, and do not spend time thinking about either.


I would say that the default mode of consuming art is on a surface level where you don't pick apart why it's "good" or how it came to be, you either like it or dislike it in an unconscious/intuitive way. I can understand your mode of art consumption, but I'm not sure I understand why you're so skeptical about this.


lmao, so glad to see this comment. I was about to post that not all of us are art snobs but your post captures the idea succinctly.


I used to get angry at my Norwegian teacher when we were forced to analyze poems, because I like writing poems but never once felt compelled to insert the kind of forced symbolism he kept insisting we rip apart and shred and murder poetry to identify.

Because it wasn't what I wrote or read poems for.

Sometimes I'd even seek out a certain translation of a work because I enjoyed the beauty of the translator's choice of words more than the author's underlying ideas.

(I've never gotten through the original of Whitman's Leaves of Grass, because I find it trite, but there's a particular Norwegian translation I loved - the author's intent was identical, but one presentation of it was beautiful to me in ways the other has never been, because of the patterns of words rather than meaning)

Whether or not that intent or symbolism was there in a given poem, I found the process inherently destructive for my enjoyment of those poems, and I utterly detested the process because it felt like violence.

The one time I wrote a poem with a message was as a task in his class, and it was a sharp denunciation of the analysis of poetry. No analysis was necessary - the intent was brutally apparent and quite rudely expressed.

It is also the only poem of mine I've performed in 'public', which was a mistake on his part, because it's perhaps the one thing that I have written that has met with the most universal approval among those who heard it, and it hardly improved the attitude towards the analysis of poetry.

I still find pulling art apart to often be brutally destructive and occasionally insulting in its often shallow insistence on knowing intent that usually is without actual evidence and mired in dogma.

I don't mind people finding meaning in knowing more about how it was created when the creator of a piece of art wants that context to be known or part of the work, but I feel very strongly about assuming intent even for human art, because to me at least, for what I wrote, my intent usually was to write without any deeper meaning or symbolism, to evoke emotion.

For me, for what I wrote, picking it apart ruins that on every level in ways people rarely are able to undo for themselves.

It's like trying to reassemble a cadaver.

People can find their own meaning in anything, and make their own choices, but so much art snobbery revolves around assigning a dogmatic interpretation of intent as "correct" and objective, and to me that often feels outright disrespectful to assume.


Please mind that I wasn't addressing symbolism at all, nor deconstruction. It was more about, "for what I wrote, my intent usually was (…) to evoke emotion."

If I know as a reader that there is no genuine expression behind this, no intent of evoking anything, that it's rather patched together, based on stochastic heuristics, in order to mend seamlessly, this just doesn't work. Much the same, I mentioned earlier in another comment that I believe AI-generated images to occupy pretty much the negative space of abstract expressionism and informal painting. (Both styles withstand quite robustly any attempts at simple analysis.) It's really this particular stretch, transposing inner and outer impressions into expressions, that I'm concerned with. This sort of establishes a kind of net, in which we, as humans, may be suspended. (E.g., if you are enjoying yourself in nature, does it matter whether this is organic, has eroded and grown in certain ways, in a polylog of beings and forces, or if this is just a generated prop made of plastic? I bet it does.)

For a more concrete example, take light in figurative art. All images are modelled from and by light, and it's quite natural that we should explore an image along its lines. Light and the way it spreads emphasises the image, even if there was no intention of doing so, just by "how it works". These are choices, equally if made intentionally and consciously or not. It's the trace and trait of a human being. But, if there was no intent, if it's just patches mended to meet up, based on weighted averages, a transferred texture? Even if there are sculpted objects and dark and light patches to look at, is there light, at all? Is it even worth noting? How should this work for me? Is there any meaning in looking closely at this?


I very much don't care if there was any intent when consuming art of any kind, though.

If I see an image that evokes a feeling, it won't diminish anything if I find out it is an AI image. Why would it? The image is the same.

The same is my attitude to my writing: I may have an intent, but whether the reader interprets it the same way is irrelevant - I've written it either way, and got my enjoyment out of writing it. It'd be a shame if they don't enjoy it, but that's all.

Why does it matter? It doesn't change my experience of writing it in any way. It has no bearing on my life at all.

Nor does an artists intent change what I see, or hear, or read in any way.

Your example at the end is just deeply depressing to me. Why does it matter? It looks the same. To lose out on enjoying something nice because of factors other than the art itself feels sad to me.

Furthermore, why do you think the human art is any less the result of computation? To me, that seems like a superstition, and meaningless. I don't feel like it's a distinction with any value.


Regarding that last example: the important part is, it does not look the same, nor does it feel the same, since the image is not developed under that regime of light and emphasis and attention. (In a way, there is neither a sense to light, nor detail, nor attention, and, thus, to "how?". I may add, if I'm engaging with an image, beyond a first impression, "how" is to me more important than "what".) Similar aspects may be observed with text.

And, for you, as a writer: As you can't compete with generators on economic grounds, the productions of the latter will probably become prevalent (if they are not already, in some niches). If input rejection becomes the default mode of reception, this may affect you as well.

PS: In other words, a representation is not the real thing; it's a thing of its own in its own realm, raising the question of "what realm?".

As an illustration, here are two illustrations of my own, both representing real objects, drawn in (pre-AI) Photoshop from scratch, for interface purpose. The first is an actual application – mind the focus and guidance of attention and varying detail, the second one is just a draft, still lacking any such focus. This second one is quite similar to an AI generated image, as it lacks any purpose, thus, any reason to be there at all. (Besides that bit of light that is already in that image, it is offending in its neutrality. And there is really no point in publishing it, outside of this context.)

[1] https://www.masswerk.at/eliza/

[2] https://www.masswerk.at/tennis-for-two/304a.html


> I very much don't care if there was any intent when consuming art of any kind, though.

I agree with you.

I basically stopped looking at things on /r/earthporn because nature itself is pretty and the bastardization that often happens on that subreddit isn't. They often play with the hues to get deeper greens, more contrast, etc.

But then it stops looking natural and my reaction isn't one of awe or interest. It's not pretty to me. It's interesting the first time you see it, but not the 100th time you see it.


This is like a painter lamenting the invention of the camera because it takes away their ability to enjoy realist paintings. Now we laud photographer’s choices and denigrate the ones made by folks producing AI images.

DALL-E is the disposable camera of AI images. There are plenty of artists working with more complex tool chains that take lots of effort and energy to create, and form their own type of AI art.


The rudimentary knobs on generative AI tools almost guarantee low quality output but they can do it with such speed and in such volume that trained artists are concerned it will overwhelm the market.

Yes, photography was similar in how it overwhelmed painting, but the difference is that photography, from day one, offered a gloriously rich set of tools for making pictures while DALL-E is a peg-board toddler toy by comparison and the results speak for themselves. Just compare the first few years of photography to the middling pablum we have that's spit out from these generative AI models. Despite the models stealing the entire history of world art and putting all of that at the fingertips of untrained artists, the results are still embarrassingly naive and mostly boring. The reason is that art isn't about the tools, it's about the mind and people with no training and no experience in art cannot make consistently good art without training, even with tools that let them create sophisticated collages of other people's real art.


Photography still takes a ton of skill though. It's still ultimately a human creation. AI art is the antithesis of this.


Like with photography, you can put in zero effort, or you can spend hours, days, weeks tweaking details. Some people just churn out images they happen to like from low-effort prompts, some people fine-tune their own models to get just the look they want.


I think that if photography takes skill compared to painting, then AI art takes skill compared to photography. Just different skills.


Nah, it really doesn’t take skill. My 4 year old takes photographs all the time.


And of course that is a frequent criticism of some other types of art too.

E.g. my son loves art, but gets exasperated at anything remotely abstract. Part of my enjoyment of taking him to art exhibitions is things like when he walked past a Picasso with a look of utter disdain and loud audible huff. Or the conversations we had about Matisse's sculptures, or the paper cutouts from his later years.

(I don't really like Matisse either - the thing I enjoy most of the visual experiences of the Matisse museum in Nice is the olive garden, despite the total lack of intent behind how the tree-branches grow)


I don’t appreciate my kids pictures, but he seems to get some sort of enjoyment out of the act of creation. I think art is based on the intention of the creator and not the viewer.


I think it's fine for it to be either, as long as we recognize that unless the creator tells us their intent, our interpretation is just an interpretation and may or may not even intersect with theirs.

My son is 14 now, and draws plenty of things I enjoy for their actual appearance, but I know most of them are scribbles or practice to him, and I don't try to analyze them for intent that I know usually isn't there.

Meanwhile, two of his pictures from when he was smaller that I know he did with intent, because he told us, and we wrote it down, are scribbles to me.

Shapeless blobs.

One of them I quite enjoy on a visual level, but while to him it was a fox, to me it is a red swirl that while visually pleasing in no way is anything like a fox.

While I wouldn't have it on the wall if it wasn't my son's, his intent behind it does not make it better.

The other is hideous, but his intent was to paint his mum and me, and so I love it.

For that one the intent is what gives it value to me. That is a rare exception.

But only because he told us, and because of the emotional value of that.

He could've given us any random painting and said the same thing, and its value would be the same. The "art" in that instance was his statement.

I also know from discussing his drawings with him that whether he enjoyed a given drawing and whether I enjoyed seeing it often correlate poorly - many of his that I enjoy are things he dismisses completely, for reasons that do not affect my enjoyment of them at all.


Creating > consuming

It's good the kids know...


This is what painters said about photographers.

Low quality photography is not elevated to the status of art. Low quality AI generations are not elevated to the status of art.

The actual art will be made by people with something to say or show and it will take energy and effort - it won’t be made by typing your idea into DALL-E, it’ll happen with complex workflows on fine-tuned models.

It is just another tool with different strengths, weaknesses, and constraints.


> it won’t be made by typing your idea into DALL-E, it’ll happen with complex workflows on fine-tuned models.

I'm not convinced that this is what will happen long-term, but if it is, I'd completely agree that it's a new art form.


...until all photography is AI-enhanced also https://www.theverge.com/2023/3/13/23637401/samsung-fake-moo...


Anytime I see a photo taken on actual film, even though it's been digitized for posting online, there's a visceral reaction of beauty. To your point of DALL-E being the disposable camera, I feel those images are in fact "disposable". I feel nothing.


I don’t really buy your claim that you have a “visceral” reaction to every image you see captured on actual film. Ignoring the oddity of that statement, even if you did, I’m not sure why what you feel is supposed to determine what is or isn’t art.


Are you sure you can recognise it?


I think that's 99% true right now, but it's not something that will be true forever. Almost all AI images that you see today are the result of a person typing a description into a prompt and getting an image back. If you're lucky, the person played with the prompt a bit until they got something that more closely matched their intention. However, there are tons of ways to integrate AI into a creative process in ways that allow an artist to iterate on an image and to be intentional about composition, color, style, and so on. Currently, it's only highly technical people who are able to do this, so the result is a lot of stuff that is technically impressive, but of low artistic value. (Quite similar to the output of the stereotypical camera nerd who has the latest camera and has its specs memorized, but takes uninteresting photos.) Eventually, these tools will find their way into the hands of artists and they will use it to make art.

I believe that when thinking about AI art, it's helpful to keep photography in mind as a source of analogies. For instance, both give anyone the ability to create images easily. Both gave existing artists heart attacks and were dismissed because they didn't require artistic training to use. Both resulted in a flood of images being created without thought or effort. The main difference is that we skipped straight to the smartphone camera age with AI.


This seems highly subjective. I'm DEEP in the AI Art space and yet I still love looking at traditional art, photography posts on instagram.

This is like saying that post-photography there was no reason to look at paintings anymore. And while the camera certainly disrupted art, it also led to a burst of creativity and gave us Picasso and Pollock.


I'd argue AI art occupies the negative space of abstract expressionism and informal painting. It has more or less catapulted us back in time, to before there was photography and subjective impression and expression became the actual subject. Even more so: lacking a projecting author, into whose projection we may conversely project ourselves, thus constructing a collaborative work of art, it crucially lacks any transcendent qualities, as the author of the prompt lacks any finer control over the output, which is merely a weighted average.


I think it depends greatly on a person’s intentions when viewing an image, or their expectations. When I look at stock images I couldn’t give a damn. When I’m looking for art for my wall or desktop wallpaper I may give more of a damn. When I’m looking for art as a gift I will give even more of a damn. For the vast majority of applications I think AI “art” is “good enough” but this is the part that makes me really sad and conflicted. I don’t know a way forward for most artists or folk that create “general” images.


Well, I get your point, but through prompting there are still a lot of choices that can be made by an artist using AI, even in an iterative way.

On top of that, real artists will often find that they surprise themselves, and those surprises will drive the eventual creation. So this is a similar process, except it goes through AI instead of the unconscious part of the brain of a real artist.


AI is not only a threat to the generation of art, but also to its consumption. For decades now, Hollywood movies have been tested on "focus groups" to decide which way the stories should go to maximize profits. With AI, you can train it to maximize "engagement" and "discover trends". This could end up affecting even traditional artists. It could Hollywood-ize other forms of art.


Artists that I know work in the way of exploring and learning about a subject matter and creating art relating to it. When I see this kind of art, and talk to the artists, I get the sense of exploring the subject and learning with them. A lot of art isn't this, it simply exists for much more plain purposes, and that type of art seems to be what AI is able to generate. I feel disgust towards the idea that people will be able to create their own art using AI, as the current state of the art only allows for quite shallow creations. I'm not saying it's wrong, but I am disgusted.


> There is really no reason anymore to look at any posted art, image or article illustration

Given the demand for AI art, it's safe to say that most people disagree. Personally, I like looking at pretty pictures.

Also realize that death of the author was proposed many decades ago.


Well, there's nothing new in this per se, text theory suggested a quasi autonomous "weaving" of texts and imaginary content some 60+ years ago. (Well, it's all about semantic fields and the implications were well known. – BTW, Neal Stephenson wrote an hilarious parody on this as an academic mainstream phenomenon in his Cryptonomicon, look up "Text at War".) But there was still some agency and some effort involved, maybe even reputation, some capital, which served as regulating factors. Now it feels more like marketing blurb without a product and no careers to be pursued behind this.

Also, crucially, those representing the demand for AI art are not those who are meant to consume it. It's still too soon for any systemic feedback, other than economic factors and incentives.


I've looked at thousands of images created by people twiddling the rudimentary knobs on generative AI tools and it is universally garbage, often trumped by assembly line produced motel paintings. As someone with 5 years studying art and architecture in school, and 30 years of participating in art post school, I feel quite comfortable sharing that review.


Agreed. I think that the concern about disinformation stemming from AI images/video misses the bigger picture.

Humanity will adapt to these new technologies. We'll begin to trust what we see less and less. Which will prevent us from being fooled, but we also lose a massive part of the human experience in the process.

Soon the days of being able to view a photograph, a video, artwork, and appreciate it as human ability at its pinnacle will be gone.


The image is the image. The curtains are blue because the curtains are blue.


I understand that AI is especially academic right now, but I'm not sure OP appreciates just how impenetrable everything else was to the masses before him. Nothing about DOS or SMTP has ever felt accessible to the vast majority of people. If anything this should spark some empathy.

Furthermore, jump on some Discord groups and subreddits like /r/localllama or /r/stablediffusion and you will see a vibrant AI hacker community that is alive and well, working very hard to build tools for the masses. Don't resign yourself just because you have not mastered this new thing by default, regardless of whether that's the world-wide-web in the 90's or tensors in the 20's.


I don't think that's quite it. I think OP could understand the math quicker than they think, but would still feel the same, because the whole point of AI is to do the understanding for you. You could of course argue that computers think for you, but you can always break things down, understand what the machine is doing, and then marvel at it doing that thing super fast and with massive parallelism. With AI there is less of that feeling, because the "guts" are learned, not built.


Your whole comment is full of really well-put assertions, and your points about llama and SD are exactly what I was thinking the whole time I was reading this blog post. BUT

>Don't resign yourself just because you have not mastered this new thing by default, regardless of whether that's the world-wide-web in the 90's or tensors in the 20's.

Might be my favorite part of what you said. It's so true and I meet so many people who are used to things just being something they understand so they approach a more complex topic they don't have context on and start "feeling stupid" without realizing how vast the world is.

All in all, well said!


> vibrant AI hacker community that is alive and well, working very hard to build tools for the masses

it's not the tools that are the obstacles; it's the training data and training resources (why do you think OpenAI had to sell itself to Microsoft?)


I understand the sadness around not understanding it; it's fucking hard. However, there are more and more resources being published online for getting started, which explain the math at a level of abstraction that helps with learning how to build with it.

I would strongly, strongly, strongly recommend starting with Karpathy's "Neural Networks: Zero to Hero" YouTube course - it starts with building a tensor library and backpropagation, explaining it in a way that finally clicked for me: https://www.youtube.com/playlist?list=PLAqhIrjkxbuWI23v9cThs...
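
For a flavour of how small that core really is, here's a toy scalar autograd in the spirit of that course (my own sketch, not the course's code; the real thing has a few more ops):

    class Value:
        """A scalar that remembers how it was computed, so gradients can flow back."""
        def __init__(self, data, children=()):
            self.data = data
            self.grad = 0.0
            self._children = children
            self._backward = lambda: None

        def __add__(self, other):
            out = Value(self.data + other.data, (self, other))
            def _backward():
                self.grad += out.grad
                other.grad += out.grad
            out._backward = _backward
            return out

        def __mul__(self, other):
            out = Value(self.data * other.data, (self, other))
            def _backward():
                self.grad += other.data * out.grad
                other.grad += self.data * out.grad
            out._backward = _backward
            return out

        def backward(self):
            # visit nodes in topological order, then apply the chain rule backwards
            topo, seen = [], set()
            def build(v):
                if v not in seen:
                    seen.add(v)
                    for c in v._children:
                        build(c)
                    topo.append(v)
            build(self)
            self.grad = 1.0
            for v in reversed(topo):
                v._backward()

    x, w = Value(2.0), Value(3.0)
    y = x * w + x          # y = 2*3 + 2 = 8
    y.backward()
    print(x.grad, w.grad)  # dy/dx = 4.0, dy/dw = 2.0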

Jeremy Howard also has a fantastic video, more about how to use LLMs and such, called "A Hacker's Guide to Language Models": https://www.youtube.com/watch?v=jkrNMKz9pWU&t=607s

As I've dived more and more into it, I would also strongly recommend trying to run things on your local machine (llama.cpp, ollama, LM Studio). That has helped me fight the feeling of "are we all just going to be OpenAI developers in the end" and made me feel like you _can_ integrate these things into stuff you build yourself. I can't imagine how fucked we'd all be if Llama had never been open-sourced. Being old does not mean that you can't continue to grow, and remember that it's okay to feel overwhelmed about all this - many people are.


There are different levels of "understanding how things work", and the author makes it clear what kind of understanding he's going for. If you look at the source code of a program, you should be able to point to any line of code and answer "What does this particular line of code do? Why is it important, and how does it relate to the rest of the design?" The same applies to a part on an electronic schematic or a mechanical drawing. There is likely no similarly meaningful answer to those questions if you look at a particular weight in a model.


I’ve seen mention of researchers being able to point to specific neural pathways in AI for both simple things and even more advanced abstract things like “lying” or “truthfulness”. So it’s not a total lost cause maybe.


I think what the author is trying to get across, and what I tend to agree with having touched on the mathematics behind transformers at least, is that we don't know how these models actually arrive at the outputs they do.

We know the rules they play by thoroughly - we made those ourselves (the math/model structure). But the outputs we are getting in many cases were never explicitly outlined in the rule set. We can follow the prompts step by step but quickly end up on seemingly nonsensical paths that explode into a web of what appear to be completely unrelated concepts. It could be that our meat brains simply don't have the working memory necessary to track the meta and meta-meta emergent systems at play that arrive at an output.


I am profoundly, profoundly cynical about this particular development in computing culture.

But your last paragraph resonates with me pretty deeply, and suggests to me that there might be a way forward for me when this becomes unavoidable, which it will.

Frankly I would rather direct my energies away from the accelerating pace of dehumanising technologies and towards rehumanising technology through education, but I do recognise I'll eventually have to engage with this just to educate.


I don't think you've grappled with the point the author is making.

>“If we open up ChatGPT or a system like it and look inside, you just see millions of numbers flipping around a few hundred times a second,” says AI scientist Sam Bowman. “And we just have no idea what any of it means.”

>To me as an engineer, that is just incredibly unsatisfying. Without understanding how something works, we are doomed to be just users.

AI models aren't complicated. They aren't sophisticated math that you can poke at and understand.

They're fucking million dollar spaghetti code that happen to work (for values of 'work').

Those videos are teaching people "This is an if statement! This is a CPU!" And then you can look at 5.8 billion lines of spaghetti code and say "Gee! I understand how this works now! Yay!"


Asking because I literally do not know: Can you step through AI like you can step through C++ code in a debugger? Like, if you type in a prompt "Draw me a picture of a cat wearing a blue hat" could you (if you wanted to) step through every piece of the AI's process of generating that picture like you are stepping through code? If I wanted to understand how a Diffie–Hellman key exchange function worked, I could step through everything line by line to understand it, it would be deterministic, and I could do the exact same thing again and see the exact same steps.


You probably could but what would you see? A bunch of weights, connections between layers and more numbers.

You don't see any meaningful, understandable code - nothing like "if the prompt begins with 'draw me a picture', then jump to layer X".

I'm no expert but I can imagine that to be the problem when one attempts to debug an Algorithmic Intelligence black box.


> And then you can look at 5.8 billion lines of spaghetti code

LLMs don't have anywhere near that much code. The algorithms for training and inference are not that complicated; the "intelligent" behavior is entirely due to the weights.
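
To make that concrete, here's a rough sketch of what the inference code for one layer boils down to (toy sizes, random numbers standing in for trained weights):

    import numpy as np

    def layer(x, W, b):
        # one "step" of inference: multiply, add, clamp - that's the whole program
        return np.maximum(0, x @ W + b)

    rng = np.random.default_rng(0)
    W, b = rng.normal(size=(8, 8)), rng.normal(size=8)  # "the weights"
    x = rng.normal(size=8)
    print(layer(x, W, b))  # stepping through this in a debugger tells you very little

A real LLM is mostly a tall stack of blocks like this (plus attention, which is more multiplies and adds); the few hundred lines of code are legible, the billions of numbers are not.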


OP clearly means that the weights are spaghetti code, technically they may be data but if they encode all of the actual functionality of the system then they are effectively bytecode which is interpreted by a runtime. You can understand how the runtime works if you care to learn, but you will never understand what's happening below that, nor will anyone else.

Aside from annoying people who want to understand how things work, it also means you can't ever know if you have a fully optimal or correct solution, all you can do is keep throwing money into the training furnace and hope a better solution falls out next time. The whole nature of it gatekeeps out anyone who doesn't have enormous amounts of money to burn.


I can see that, although to me there's a difference between weights and something like bytecode. The weights don't encode any sort of logical operations, they're just numbers that get multiplied and added according to relatively simple algorithms.

Totally agreed that the process of generating and evaluating weights is opaque and not very accessible.


You can simulate any digital circuit by multiplying and adding numbers.
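
For example (a toy illustration, weights picked by hand): a NAND gate is just a weighted sum and a threshold, and NAND is enough to build any digital circuit.

    def nand(a, b):
        w1, w2, bias = -2.0, -2.0, 3.0
        s = a * w1 + b * w2 + bias      # multiply and add
        return 1 if s > 0 else 0        # threshold

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, nand(a, b))     # 1, 1, 1, 0 - the NAND truth table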


But that's exactly the point. The code you are talking about is more like an interpreter for a virtual machine, which then runs a program made up of billions of numbers that wasn't designed by a human (or any sort of intelligence - you can argue about the end product, but the training process certainly isn't intelligent)


The weights are what's analogous to 5.8 billion lines of spaghetti code, here, when doing inference.


If you had witnessed the Wright brothers' first flight you would have had a hard time predicting that within 60 years we would be landing on the moon. And if you were playing computer games in the 1980s in a slow DOS box you could not have even imagined modern GPUs and modern games. Nor could we have predicted how far modern CPUs have come since the 1960s.

LLMs are a very new technology and we can't predict where they'll be in 50 years, but I, for one, am optimistic about it. Mostly because it is very much not GAI, so its impact will be lower. I don't think it will be more revolutionary than transistors or personal computers, but it will be impactful for sure. There's a large cost of entry right now, but that has almost always been the case for bleeding-edge technologies. And I'm not too worried about these private companies having total control over it in the long run. They, so far, have no moat but $$$ to spend on training, that's it. In this case it really is just math.

I think market forces will drive a breakthrough in training at some point, either mathematical (better algorithms) or technological (better training hardware). And that will reduce the moat of these companies and open the space up even more.


When we look at the history of technology, most new technologies can take a few decades before they're ready for the mass market. Television was demonstrated in the 1920s, but it wasn't ready for the mass market until the 1950s. If I remember correctly, the light bulb and automobile also took a few decades to come to the mass market.

I think a lot of people are jaded by how fast technology changed between 1980 -> 2010. But, a lot of that is because the technology was easy to learn, understand, and manipulate.

I suspect that AI will take a lot longer to evolve and perfect than the World Wide Web and smartphones.


Yeah, the rate of change since the 1960s has been incredible. I like to think of that as being a direct consequence of inventing an infinitely reconfigurable general purpose calculator :)

I don’t know about LLMs… it might hit a performance peak and stay there, same as CPUs have been 3+ GHz for the past 10 years. Or there might come a breakthrough that will make them incredibly better, or obsolete them. We don’t know! And I find that exciting.


The personal computer & the internet clicked for me because I saw them as personal enablers, as endlessly flexible systems that we could gain mastery over & shape as we might.

But with AI? Your comparison to going to the moon feels apt. We're deep into the age of the hyper scalers, but AI has done far more to fill me with dread & make me think maybe Watson was right when he said:

> I think there is a world market for maybe five computers

This has none of the appeal of computing that drew me in & for decades fed my excitement.

As for breakthroughs, I have many doubts; there seems to be a great conflagration of material & energy being poured in. Maybe we can eke out some magnitudes of efficiency & cost, but those gains will be mostly realized & used by existing winners, and the scope and scale will only proportionately increase. Humanity will never catch up to the hyper-ai-ists.


> The personal computer & the internet clicked for me because I saw them as personal enablers, as endlessly flexible systems that we could gain mastery over & shape as we might.

I feel the same way. Developments in computing have evolved and improved incrementally until now. Networks and processors have gotten faster, languages more expressive and safer, etc but it’s all been built on what preceded it. Gen AI is new-new in general purpose computing - the first truly novel concept to arrive in my nearly 30 years in the field.

When I’m working in Python, I can “peer down the well” past the runtime, OS and machine code down to the transistors. I may not understand everything about each layer but I know that each is understandable. I have stable and useful abstractions for each layer that I use to benefit my work at the top level.

With Gen AI you can’t peer down the well. Just a couple of feet down there’s nothing but pitch black.


> and if you were playing computer games in the 1980s in a slow DOS box you could not have even imagined modern GPUs and modern games.

I think this bit is really not the case, FWIW.

If you look at what computer magazines were like in the 1980s it's very clear that people were already imagining what photorealism might look like, from the very earliest first-person-perspective 3D games (which date back to the early 1980s if not earlier)


> it really is just math

no it's not; it's math + a *ckton of data + massive compute resources

training will likely be made more efficient over time, reducing required resources, but training data will always be a major obstacle


Not really a fair comparison but children learn to speak with barely any training data compared to LLMs. I’m hopeful a large training corpus will not be so necessary in the future.


I suspect that we won't have the computing power or neurological understanding to create such an AI anytime soon. Even if human thought can be reduced to networks of chemical-filled membranes, the timescale and population involved in natural selection, and the resources consumed to live and reproduce are immense. I think we would need to find a far more efficient scheme to produce emergent intelligence.


I would argue that by the time a child learns to speak at the level of an LLM (college) they have been exposed to an enormous amount of training data through all their sensory inputs, just as a result of daily living and interactions.


My biggest worry with AI and software dev is the degree to which people are okay with things that sort of work. I mean, we're already there with code that humans write. I'm not sure that humans who can't write the code themselves can always fix bugs that sneak into AI-generated code.

It's almost less about software and more about how our society just seems to not give a damn about expertise because it costs someone more money. I have a good career built on that expertise along with emotional intelligence, but it's a bitter pill to swallow knowing that everyone's trying to deleverage themselves from needing to pay me for my expertise. I've avoided this to some degree by focusing more on fundamentals than BS like AWS service invocations, but I'm doubting that this strategy will continue to work long-term with AI around.

The real sad part is it feels like everyone's happy to suck every last bit of humanity out of work for ok-ish results that come from ingesting the whole Internet and not compensating people for it.


> It's almost less about software and more about how our society just seems to not give a damn about expertise because it costs someone more money.

Yeah, to me almost all the "old programmer sadness" is actually cultural, it has to do with a feared difference-in/failure-of values.

Maybe that's also why "enshittification" is in the zeitgeist right now: It represents a sense of disappointment or betrayal with certain industries/companies/products--but it's with fallible people, rather than with fragile computers.


100%. I grew up wanting to be a graybeard hacker, then watched as everyone decided that coding should be more social than technical. This feeling of falling out of alignment with values really did a number on me.

I can finally say I’ve started to transmute that into energy towards being a solopreneur. It feels like the equivalent of “f this industry I’m taking my ball and going home” but I don’t see any other way that lets me feel agency at this time.

Regarding AI: it is a tangible manifestation of the phantoms that keep us running hard on the economic treadmill. Very familiar feeling, really.


Something I take issue with concerning AI, which is also incidentally why I remain skeptical of it, is that it feels like it doesn’t increase agency in any meaningful way, and sometimes decreases it.

With image generation, you can generate seemingly infinite images with nothing but a prompt, but you’ll never get what you really want. You’ll get an approximation at best. You have all this agency, but also no real agency that matters.

But of course if you’re a visual professional of some kind then I imagine it’s even worse, a net decrease in agency. Not only can you never quite get what you want, but having to go in and edit things is tedious and dull. Even if you’re saving time this way, the lack of enthusiasm and flow makes it feel like the job takes much longer than before.

Likewise, with programming, at the end of the day it feels as if I’d be more productive if I just wrote down my thoughts as code rather than delegating to a copilot or ChatGPT and wasting time making sure the result actually works or chasing the occasional bug that came from a chunk of mystery code.

At the end of the day, you aren’t creating the image or the video, the AI is, and you make adjustments where you can but otherwise accept the results. Extrapolating this to software, I don’t imagine a future where people are empowered to write their own software, but simply a future where people ask for software from an AI, it gives it to them, and they do an awkward back and forth to try and get the details right until they inevitably accept what’s been given to them.

It’s depressing, but also so goofy that it’s hard to imagine this being the final outcome.


I feel the same. For artists, it already does make them much more productive, but at the same time it is killing their job. Because good artists want to create something meaningful, and they need to go through the process of actually creating it. That's why they love their job, that's why they got into it.

AI can now generate images that are not as good, but average people won't really notice. Therefore it not only makes artists more productive (as in: "make more profit faster, even with lower quality"), but it makes the best artists less relevant, because "anyone" can replace them and make worse content that's more profitable.

Same with code Copilots. To me it's killing the job. I take care in crafting good code, and I believe I am better than average at it. But Copilots enable worse developers to be more productive than me (not that their code is more maintainable, but they can produce so much that they help make more money). And I don't want to do their job (which is basically debugging what the Copilot wrote).

I believe that AI is lowering the quality of everything it does well, but people love it because it's increasing the profit. What a great time to be alive.


"If I build an app that needs persistence, I might use Postgres and S3 for storing data. If those are no longer available, I’ll use another relational database, key-value store, distributed filesystem, whatever. But what if OpenAI decides to revoke access to that API feature I’m using? What if they change pricing and make it uneconomical to run?"

A year ago I shared exactly this concern. Today I'm not nearly as worried about it.

If you haven't tried running local, openly licensed models yet I strongly recommend giving them a go. Mistral 7B and Mixtral both run on my laptop (Mistral 7B runs on my iPhone!) and they are very capable.

Those options didn't exist even six months ago.

There are increasing numbers of good closed competitors to OpenAI now as well. We aren't stuck with a single vendor any more.
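
If it helps, here's roughly what running one of those local models looks like in practice -- a minimal sketch using the llama-cpp-python bindings against a local GGUF build of Mistral 7B (the file path, prompt and sampling settings below are just placeholders; use whatever quantization fits your hardware):

    # Minimal local-inference sketch with llama-cpp-python (pip install llama-cpp-python).
    # The GGUF path is a placeholder -- point it at whatever Mistral 7B build you downloaded.
    from llama_cpp import Llama

    llm = Llama(model_path="./mistral-7b-instruct.Q4_K_M.gguf", n_ctx=2048)

    result = llm(
        "Q: Summarize what a GGUF file is in one sentence.\nA:",
        max_tokens=128,
        stop=["Q:"],
    )
    print(result["choices"][0]["text"].strip())

The whole thing runs offline once the weights are on disk, which is a big part of why the single-vendor fear feels less pressing now.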


I am working in ai all my professional life and have to admit that right now it’s both exciting as well as burning me out. I get tired when I scroll through LinkedIn and see all the dall-e images. I am annoyed by all the snake oil sellers who say that they will change the world - and then there is just another prompt underneath.

It all has its right for existence. But I really wish the hype ends soon and people become more realistic again.

It will have a big impact on the world, but people don’t get that it’s not there yet and there is a ton of research we still need to do. That’s burning me out. There is so much more work to do compared to the expectations - and even in the scientific community there is such a big volume of junk papers coming out (sometimes even by big companies) that it feels like most of the time wading through all the BS and marketing hype is all I do.


It is interesting how much our misplaced expectations for AI are totally shaping our experience of AI.


OK this kind of reminds of CPUs.

Sure, with a good amount of time and effort you could fully understand every single minute facet and detail that goes into a basic x86 CPU. You wouldn't be able to grok every single operation that happens inside when it produces some output, but you can definitely understand the fundamentals and have an intuitive understanding of the internals. You, the average hobbyist or programmer, would be able, with significant investment, to create a photolithography setup at home and even fabricate one of your own designs. None of this would be anywhere near the scale, performance or quality of the products coming out of big companies. You just don't have the expertise or hardware to make a product anywhere near a current-gen CPU. Does that make you sad?

Obviously there's a difference here in that AI is "software", but the comparison is apt, I feel. It is clearly a different class of software than fun webpages and video games from the 90s. Just like the CPUs and GPUs of today are a different class of hardware than what you can build at home. Even with much education and resources, both are out of reach.


The difference is that CPUs/GPUs are deterministic.

You can write software and it will always work, and it will be composable. It's kind of "disappointing" that for so many problems the "winning" approach seems to be "just download internet.tar.gz".


Why is it not disappointing that for decades now the winning approach was just to give up on using your homemade CPU and get something from Intel instead?


I have a lot of these moments nowadays where I try to figure out if I am becoming the old fart, or if my skepticism is actually legitimate. It’s a difficult exercise, often futile, but it’s a good first step. It always felt like my parents never really asked themselves “why do we feel like technology X is bad”…they just had a knee jerk reaction to it and banned me from it.


Value is human attention and a new generation valuing different things, auto devalues the old. Thugs stay relevant, by being sifted from the sand and put in collages in the new.


Would not be surprised if this is AI generated garbage...


What makes me think AI is the real deal is that it was pretty obvious (to me) that crypto and "the metaverse" were both dead ends. I certainly have doubts about AI and, like the post's author, I don't like a lot of things about it but I don't get that feeling of "this is useless bullshit".


Agreed here, it feels like everything is pointing toward a machine-augmented future, and AI has proven its usefulness in a wide range of applications. Ultimately the direction that it goes is not up to any one person, but us as a species. Do I think we can make the right decisions? Not sure... but if we can't... that's just natural selection baby.


I think I'm an even older fart, and a lot of this resonates with me. So a few thoughts.

First, I've gone back to school. I have a PhD in computer science from 1983, in which I explored spatial query processing. (General purpose ideas, but mostly applied to database systems.) My PhD has basically expired, the amount of new stuff to learn is daunting, and for a variety of other reasons, I thought it would be better to go back to school rather than study on my own. So I'm enrolled in a CS MS program to learn about AI and cognitive psychology.

Second: To a first approximation, I view the systemsy parts of computer science: CPUs, compilers, operating systems, data structures and algorithms, as having served their purpose, which was to enable AI in its current form. AI is just a different discipline. I happen to be familiar with some parts of those foundations, but I'm basically starting over with AI.

Finally: I recall what I now realize was a fork in the road. I took an AI course in 1980, I think, and we learned about A*, and simple feature detection in images, and alpha-beta pruning, and so on. Symbolic AI was up and coming, and looked to be the way to go. I remember learning about perceptrons and being intrigued. And then I learned about Minsky's proof that a perceptron can't compute XOR, so this idea of computing with neurons was basically a dead end. Also, I liked the precision of algorithms: the answer is right or not. It has such and such worst case running time. I disliked the wishy-washiness of AI, with uncertain and only probably correct answers. I distrusted the idea of tall stacks of probability calculations yielding anything other than a guess. And so I picked my path (into data structures, algorithms, databases).

Now, of course, I realize how wrong it was to dismiss the path of neural models of computation, and the usefulness of statistical methods. I have no regrets, I've had a very satisfying career. But if I were starting out now, I would study a lot more math, and hell yes, get deep into machine learning.


An update on Minsky's proof. It was correctly formulated but based on a too limited model. Adding location into the model allows learning XOR.

I demonstrated this with Hebbian learning on a Hopfield network finding learning of up to 6 dimensional XORs. This was a replication of work by Elman and reported in Rethinking Innateness[0].

A more provocative claim: layers and back propagation, despite providing us with fantastic advancements, are unnecessary.

[0] https://mitpress.mit.edu/9780262050524/rethinking-innateness...


You‘re definitely right on one count: you are an even older fart than me. ;-)

Just kidding, but I’ve actively encouraged my 8yr old daughter to learn how to code (using Scratch) and beginning to think that might be a dead-end… hope she will enjoy math!


My daughter graduated with a degree in physics and astronomy, and then went to work for Epic (the medical record systems company). She has experience in Python from school, ancient stuff from Epic (related to MUMPS), and has been teaching herself C# and other technologies. She is thinking about more computer science education, on her own, with the aim of becoming a software engineer.

I'm not at all sure that's a wise decision, right now, especially with the glut of software engineers, with far more experience, being laid off by FAANG.

She's leaving EPIC, and taking a several months long road trip with her boyfriend, and will find a job when she comes off the road. She's a smart kid, she'll figure it out. I just don't think she will end up in software.


sounds similar to my daughter's path, except she did chem-eng instead of physics; joined Epic out of college now going on 5 years and in that time has switched over to full-time SW engineering (mostly SQL, Typescript). She enjoys that but I'm less sure of its value (unless she becomes an Epic lifer or someone maintaining Epic systems at hospitals, which is what a lot of former employees end up doing and they make more $ that way), compared to continuing on with hard sciences.


Working at EPIC, you get familiar with an extremely important yet obscure part of the software world: HL7, MUMPS. I suspect that my daughter will be able to capitalize on that knowledge. She does not want to stay at EPIC. Is your daughter happy there?


I think she's relatively happy there, well, at least she's not unhappy. From everything I've heard it does seem like a pretty good company to work for as far as a tech company goes (esp as a woman; probably helps that the founder/CEO is a woman and not a tech-bro). It also helps that her significant other also works there (they both joined Epic out of college along with a third college buddy with whom they still share a flat). She's said she probably won't stay long-term, but not clear on what "next" would be. Madison is a nice place to live and she likes it there. But you're right about deep knowledge of MUMPS is valuable in sectors that have to maintain legacy systems, a bit like knowing FORTRAN. Originally upon graduation she wanted to work for a small Pharma company (drug R&D) but wasn't willing to wait for an opening, so maybe she'll end up with that. Or do a masters or PhD first. Does your daughter want to move on because she doesn't like working at Epic or more like she wants to do something else or a different type of company?


Weird similarities. My daughter graduated in 2020, started at Epic, and met her boyfriend there.

They want an adventure, and they don't like Epic much, so they are leaving shortly. She has definitely ruled out grad school, after doing research summers in college, and observing how miserable the grad students were. And how glacially slow progress is. (She spent all summer analyzing data from a telescope only to find out, oopsie, the telescope was pointed the wrong way, never mind.)


Yeah interesting. My D graduated in 2019. She ruled out grad school at the time of graduation as she was pretty burned out (even turned down a scholarship from her college that would have covered her full masters program). I hope your daughter finds a company she enjoys working for and doing something she enjoys! (It would be weird if they went to the same school; my D went to RPI.)


She sounds awesome! Thank you for taking the time to compose such thoughtful replies.


This defeatist mentality is plain stupid. There are under-20-year-olds working on AI today; they went from zero to their current knowledge, in most cases, in 2-3 years. So what to do? Level up.

I'm an old fart too, blah, blah, blah, since the days of 2400bps modems. For anyone that wants to level up, go take Andrew Ng's ML course, watch some online lectures on neural networks, deep learning, reinforcement learning. Never in the history of the world have we had so many learning resources at our fingertips: plenty of youtube videos, blogs, articles, free books, free courses, software libraries to get into it.

Stop with the self pity, get up and dance. Start learning, at some point pick up scikit learn for shallow learning and pytorch for DL. GPUs are super cheap. You can pick up a used 3060 RTX for < $250. Or you can just rent for even cheaper. If you are technical and find yourself agreeing with the author, please snap out of it. You are going to feel lost for quite a while, but I assure you, you will find your way if you keep at it.

AI gets me excited, computing has been too boring for far too long!


I read almost identical comments here about a decade ago from someone getting excited about ever more clever ways to kill people with drones. Yep, that sure is exciting, but also, fuck him.


GANs are ruining AI research the way cryptocurrency ruins cryptography research.

These are the empty calories of research. Focus on them and you will end up fat (rich) and useless (no publications worth a damn)

Prove me wrong, but from the pavement. You don’t need to be on my lawn.


Pedantic note, but GANs (Generative Adversarial Networks) aren't particularly relevant these days.

DALL-E, Stable Diffusion, Midjourney and the new Sora video model are diffusion models, which work differently from GANs.

Maybe you meant "generative AI" rather than GANs? That's an umbrella term that covers ChatGPT-style language models and image/video/audio generation as well.


Generative AI covers it fine, yes.

(I am typing on my phone on a train in between connections, both rail and internet, and I am grateful for your correction)

Everyone knows what I mean by “empty calories” anyway. If they think I am wrong, they are in denial. ;-)


The reason these seem like empty calories now is because the compute cost is still too high for widespread implementation, combined with the simple fact that it just takes time for things to play out at a large scale.

It won't be long (4-5 years) before AI starts appearing everywhere.


No, that’s not the reason they are empty calories, IMO. It’s the banality of the culture around the technology, as well as the banality of the technologies being built with it. It’s just academics chasing the dream of technology transfer to the ad-tech industry.


This is the problem that almost no one talks about. Shitty papers gaining traction just because some are better bullshitters at grant writing.


Do you mean LLMs or GANs specifically?


To be fair I mean LLMs, GANs, GPTs, all of this new stuff. It’s a good idea taken way beyond usefulness. Like processed food in plastic packaging.

Toxic and empty. Full of promises and delusions about saving time and effort.


I think that's possibly true but not certain. We're only just getting through the door on LLM capabilities, and we're seeing surprising emergent behavior such as internally modeling the board state of Othello games[1], so it's very possible this research brings us entirely new realms of capabilities by continuing to scale up and improve.

[1] https://www.lesswrong.com/posts/nmxzr2zsjNtjaHh7x/actually-o...


I'm having trouble seeing AI as anything more than the next bitcoin - frightening because of the people who are adamant it will do things it really can't. In particular, the hallucination glitch seems impossible to resolve, and no one appears to care. It's like the Emperor's New Clothes.


This is such a strange take to me.

I use chatgpt almost daily. I use it like I used to use google in a lot of situations. Now GOOGLE frustrates me to use as a search engine. "How many hours ago was june 1st 1972" and you get links to time/date calculators instead of the answer. Then I'll click through a few and they won't even be what I need. Then I sigh and type it into chatgpt and it answers it.

I don't assume any code is perfect, but I talk to it like it's my rubber duck and it helps me figure out different ways to do something, or sometimes even hand holds me.

And now I don't have to ever do regex.

And hey my past engineering teachers, guess what, I haven't had to mathematically slice a subnet up without a subnet calculator either in my entire career.
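
(For the curious, the subnet slicing I'm dodging is the kind of thing the Python standard library will happily do anyway -- a rough sketch with the ipaddress module, network address made up:)

    # Carving a /16 into /20 subnets with the standard-library ipaddress module.
    import ipaddress

    net = ipaddress.ip_network("10.0.0.0/16")   # made-up example network
    for subnet in list(net.subnets(new_prefix=20))[:4]:
        print(subnet, "usable hosts:", subnet.num_addresses - 2)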


"How many hours ago was june 1st 1972": https://chat.openai.com/share/3e3bb478-f7b1-49e1-81f9-235afe...

Payload of the answer: "So, approximately 455,796 hours have passed since June 1, 1972, as of February 16, 2024."

Today (as I write this) minus 455,796 hours: https://www.timeanddate.com/date/dateadded.html?m1=2&d1=16&y...

Which is Thursday, February 17, 1972. Since it just did the year and ignored the month you were asking about. (I accidentally deleted my first conversation instead of sharing it but it gave this answer twice.)

The real question though isn't whether ChatGPT is wrong, the real question is, can you detect it? That's going to be the important question here going forward.


Try "How many hours ago was June 1st 1972? Use Python"

https://chat.openai.com/share/d5264b01-3be5-4d59-b175-34ad90...

"June 1st, 1972 was approximately 453,304 hours ago."

Code it generated and ran:

    from datetime import datetime

    # Current date and time
    now = datetime.now()

    # Date and time for June 1st, 1972
    then = datetime(1972, 6, 1)

    # Calculate the difference in hours

    difference = (now - then).total_seconds() / 3600

This illustrates my biggest complaint about ChatGPT right now: the amount of knowledge and experience you need to have to use it really effectively is extremely deep.

How are regular users meant to know that they should hint it to use Code Interpreter mode for a question like this?


You're 100% right, I use it daily like I said and I really have NO idea what all the other options are. I tested a plugin or two and it caused problems with something so I disabled it.

I need to dig more into how much more I can do.


Ha, I didn't actually test it and I was curious if the answer would deviate at all from reality. Thanks for doing the math. Thankfully I know what bad rust looks like, but I'll totally get suckered for the wrong math.


Fwiw Kagi has

"= 453288 hours" as the first result, above any links.

I feel like my account is gonna look like a Kagi shill pretty soon but it really is so much better than Google now. The Kagi + GPT4 combo is so much better than Google alone 1.5 years ago.


I've tried every single one of these products and I've dug into the math. I can't see a product I would use that contains one now.


I share this opinion, but for generative AI, while ironically working in AI.

While in other media the results are very interesting and impactful, most corporate scenarios are getting a lot more excited about text. But the weight of text and its correctness goes way beyond other forms of media, and the effectiveness of the current approach is being overstated.


I'm sorry to hear that it makes you sad. Maybe it's not empowering for many programmers, but stuff like GPT has inspired many non-programmers to learn programming just to use it. Which is pretty empowering.


being a "prompt engineer" is hardly programming


Used to just mean 'shows up on time' ;)

I do think the idea of 'prompt engineering' isn't 'programming' in the low-level sense, but it's close enough for some peoples' needs to qualify, and will be a useful skill. But I think it'll be more like "being good at searching google" was a few years back. There was a period where you could be very productive understanding a few things about searching (filtering/keyword stuff, mostly) but that 'skill' isn't as useful today as google continues to put less emphasis on keeping those tools useful.

Similarly, being good with Excel. That's extremely powerful for a lot of people in their day to day jobs. Is it 'programming' in the classical hacker-at-a-desktop sense? No, but allowing people to get value from the computers in a way that's under their control (broad definition, I know) does, imo, fall under a large banner of 'programming'.


I take your point, but when you learn to actually code VBA in Excel, you get a lot more functionality.


Yep - VBA - you're in 'programming' territory for sure at that point.


I agree. I think we'll see similar results to 1980s when you needed $10k of equipment to make movies/videos vs today when Youtube and cheap storage/cameras democratizing video into "content". It's going to have a mix of breakthroughs and junk.


I think it's definitely coming, with more recent advances in unsupervised learning and multimodal learning (along the lines of https://github.com/microsoft/unilm or https://github.com/Alpha-VLLM/LLaMA2-Accessory)


Sure, but there are people who have picked up Python, played with OpenAI APIs, then realized how easy it is to glue stuff together, and then all hell breaks loose. They branch off into using LLMs locally, or deep learning with fastai, or are inspired to learn web dev, or whatever else. They may not get a job or really know what they're doing in the mere months they've put in, but that hasn't been too different from what some code bootcamps have produced. But it sure is empowering to people who, up to this point, haven't had a reason to learn Python.


If a non-programmer can build useful things using ChatGPT or similar tools, it's not too bad to call them a programmer.


To be fair they mentioned GPT and not ChatGPT.

That said, I don't believe using ChatGPT to cobble together some Python code to call the GPT API would constitute "learning to program" any more than nailing two boards together makes you a capable carpenter.


"prompt engineer" == my generation's "good at googling"


I can say from the OPs side, ChatGPT scripts are very helpful. They aren't perfect, but taking one and cleaning it up is usually quite a bit faster than writing it from scratch.


There's probably something to this, and other comments about AI being a source of competition. My experience with generative AI has been that it's neat, the academic papers are always a fun read, but I'm mostly underwhelmed by the output. This will get better with time (Sora is incredible), but it's missing the point. A recent anecdote: I'm in a small startup community and I asked the group what AI tools folks are finding useful. Someone mentioned that copilot makes them 2-3x more productive; personally, my tests with codellama have made me about 0.5-0.75x as productive as normal, mostly because I notice the mistakes and scrutinize the output more - probably a similar experience to folks who have been programming for a while.

But it must feel amazing for that 2-3x guy who is just trying to get his company off the ground, and that's great!

Aside: I recently went down a rabbit hole exploring fast food training videos from the 80s and 90s. It makes you appreciate the engineering (culinary, mechanical, and industrial) that allows a company to make a consistent product at scale, only requiring interchangeable, unskilled labor. Did you know that McDonald's claims that 1 in every 8 Americans have worked at the chain? You can get a fairly pricey jacket celebrating this fact[1]!

Perhaps we're finally at that point with computing. We know the externalities that have resulted from McDonalds and similar chains and, good or bad, we've accepted them. Over the next decade we get to watch the same with commodity knowledge work.

[1] https://goldenarchesunlimited.com/products/1-in-8-alumni-jac...


It's fantastic at creating boiler plates to be used when learning something new. I no longer get dragged to some site with information hidden behind a paywall or have to waste time churning through irrelevant YouTube search results, which is pretty much all YouTube search results at this point.


Author here. I never expected my rant to blow up like this, probably I‘ve hit a nerve.

I‘m 42 and I‘m v busy building a library for barcode scanning in web apps: https://strich.io To this day I enjoy coding and especially low-level computer vision stuff.

Maybe I just have to find the time and dig deep to understand everything better, found some great pointers in the discussion here already. Thanks for the encouragement!


> I want to understand how things work. AI feels like a black box to me. The amount of papers I’d have to read and mathematics that I’d have to ingest to really understand why a certain prompt X results in a certain output Y feels overwhelming.

How is this a legitimate argument? We have invented so many algorithms and used so much math to build our software over the years. They are not easy to understand. Understanding them requires tons of effort, including reading papers. People may have forgotten that 30 years ago, the "hot stuff" was still systems, and people did read "deep" papers. People still do nowadays, except that only the very experts do so.

Besides, the math is really not that hard -- merely college level. In contrast, go read an introductory book on program analysis or type systems or distributed algorithms. Those maths can be harder as they are more abstract. In addition, the amount of code in a model is orders of magnitude smaller than in a compiler or a distributed system or a game engine, etc. I'd argue that it's actually easier to understand how a model works.


I'm at the cusp of becoming an old fart, and AI makes me happy. It brought back a lot of the fun and wonder I had with computers that is long gone now. I often prefer using Stable Diffusion and local LLMs over computer games now, because it's more fun to me.


I wouldn’t worry too much on a lot of these points.

I won’t say the math behind AI is simple, but it’s mostly undergrad level. You can get up to speed on it if you really want to. The hard part is writing fast implementations, but many others are already doing this for us.

We do not have a grand theory of AI or a deep understanding, but every year we make improvements in machine understandability, and you can “debug” models if need be.

Lastly, the author is right, the best models are closed source, but open source is hot on its tail. There are plenty of good local LLMs and they get better every month. Unfortunately it still is out of reach for a hobbyist to train a good LLM from scratch, but open source pretrained models can mitigate this for now.


Imagine giving up on software when Windows came out because it was closed source. I wish more models were open too, but it's odd how we act like that's the end of the story. The same underground OSS movement is underway today and in many ways is stronger than ever, with better tools and more connections to the best in the industry.


Many of the points made in this article apply to generative AI specifically. Which makes sense, because it's the flavor of the decade. But I think it's worth pointing that out, because AI does not inherently have to be inaccessible and unexplainable.

Beyond that, it is up to us to guide AI development and deployment. If we use it to crush the human spirit (and it sure seems like we're hell-bent on doing that right now), that's more of an indictment on AI leaders, and in a broader sense all of humanity, than it is on the technology itself. Nothing is inevitable, despite what some in the industry want to have you believe.


I share some of the same sentiment.

In Roger Williams's "The Metamorphosis of Prime Intellect", a scientist didn't understand the statement made by his pet AI, so he opened up the debugger to see the decision tree and set of axioms that led to the decision, and was able to debug it, prune the logic and make adjustments.

I wish I could do this with ChatGPT. The way human beings reason is as opaque as ChatGPT.

When creating learning systems, I think it should be required to include the capability to visualize the thought process which leads to a result. As much to debug the system as to glean insights into reasoning in general.


We old farts are going to have a blast in our nursing homes with AI goggles.


> The amount of papers I’d have to read and mathematics that I’d have to ingest to really understand why a certain prompt X results in a certain output Y feels overwhelming. Even some top scientists in the field admit that we don’t really understand how AI works.

The picture there sums it up well: It just seems impossible to keep up with the pace that AI is moving. So many papers coming out on a daily basis. So many new techniques. It's hard to see how people can keep their skills up-to-date.


There have been few signs of hope and yet my hope persists that eventually computing will open up more.

Some kind of alive software will get mass attention & be appealing, and there'll be some crashing wave of interest in actually bringing human and computer closer together rather than building higher and higher towers of dead software.

It feels like we are flitting further away from that which makes people grand, making man a toolmaker & owner. Technology's Prescriptive application keeps being used to leverage people while its Holistic side, which allows symbiotic growth, is ignored, to use Ursula Franklin's terms (https://en.wikipedia.org/wiki/Ursula_Franklin#Holistic_and_p...). We're drifting away from what should be empowering us, from the greatest potential source of liberty & thought we have access to.

I can acknowledge that AI has some ability to offer people means and knowledge, that it can be used to make things. But like the author, I mainly see it as sad and unfortunate, something utterly out of reach & obscure & indecipherable. Monkeys beating on monoliths shit.


I'm an old fart too. First computer was the ZX81 with 1K of RAM :) That's how it all started, and I made a career out of it.

What always excited me about software was that I always had to renew myself, learn a new thing, change the way I think about something, reflect on what is now possible that wasn't before, etc. I.e. it was never stagnant.

In addition I could try out everything myself, and (more or less) understand what it is doing and why. When I wanted to understand RSA, I read the paper and implemented a PoC myself, or a BTree, or an LSM tree, same for Paxos and Raft.
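
(That kind of PoC can be tiny, by the way. A toy RSA round trip with textbook-small primes -- strictly for understanding, never for real use -- fits in a dozen lines:)

    # Toy RSA proof-of-concept with tiny textbook primes -- for learning only, never for real crypto.
    p, q = 61, 53
    n = p * q                    # public modulus
    phi = (p - 1) * (q - 1)      # Euler's totient of n
    e = 17                       # public exponent, coprime with phi
    d = pow(e, -1, phi)          # private exponent via modular inverse (Python 3.8+)

    message = 42
    ciphertext = pow(message, e, n)
    recovered = pow(ciphertext, d, n)
    print(ciphertext, recovered)  # recovered == 42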

I worked a lot on databases and "BigData" and on the re-convergence of the two. Much of this is open source, so I could play with it, change it, etc.

In that AI is indeed different. I can install (say) Ollama on my machine and play with it, look at the source code (llama.cpp), etc. And, yet, when I get a response to a prompt, even locally on my machine, I feel blind.

And I used to work on neural networks in the late 90s, when their use was limited, so I understand what they do and how they work.

How exactly was that model trained? On what data? What did it actually learn?

(Aside: Here I am reminded of early usage of neural networks to detect enemy tanks. It worked perfectly in the lab, would correctly classify enemy vs friendly tanks, and in a field test it failed terribly - worse than random. What happened? Well it turned out that the set of photos with enemy tanks mostly showed a particular weather pattern, whereas the friendly photos predominantly showed another. So what the neural network had actually learned was to classify the weather. You might laugh about this now... But that's what I mean.)

So, yeah, I can relate to OP, even though I am excited about what AI might bring.


I know AI makes you sad for a different reason but I always find amusement when reading a title like that and seeing an AI generated image.


I get that it's maybe funny or ironic, but it still undercuts the argument and helps to normalize the practice of using generated images as SEO eye candy.


A lot of downers on AI and I can understand it, part of it is the response to things invented when we're older - new music, new movies, new technologies. I guess most of our brains are less plastic as we age and we resist incorporating these new, unfamiliar things in our lives and instead reaffirm the old attributes that make up who we think we are and what we already identify with.

Another aspect I think which makes us down on AI in particular is that it's the first thing which readily seems able to threaten our job security as programmers.

I'd propose a thought experiment where we imagine LLMs and other AI model types don't exist but everything else in computing stays the same (shift to cloud, increasingly asynchronous and interconnected systems). That world actually seems pretty bleak to me. AI upends a lot of industries and yes, it will upend some of our careers. But a world in which it exists seems a lot more interesting than one in which it doesn't.


I'd guess there is another factor - with age you understand that promises hyped up around something new are often false.

Instead of solving some deeply rooted problems, people chase shiny and new, while society stays more or less the same.


I disagree with the author. Insights and opportunities to understand new technological developments increase every year. In the early years of these technologies, they were always less approachable to me. While in the 80s I had to rely on outdated books from the library for superficial knowledge about PCs, now I can actively engage with AI developments in near real-time, experimenting with it as a service, running it on my own machine, or even training smaller models.

However, I am apprehensive about how society will navigate these new AI advancements. I believe we won't be able to adapt concepts and cultural techniques as quickly as the reality shifts due to ubiquitous AI. These social changes are beyond my comprehension and overwhelm me.

My engineering education has always helped me explain technology to both my parents and my children. But for now, it's just a matter of "fasten your seat belts."


> I want to understand how things work. AI feels like a black box to me. The amount of papers I’d have to read and mathematics that I’d have to ingest to really understand why a certain prompt X results in a certain output Y feels overwhelming.

First, no paper will make you understand why prompt X made result Y. But you can understand the architecture of these systems and try to understand the prompt-answer relations somewhat intuitively (e.g., by training your own models on various text collections).

I think we need to accept these AIs as creatures of their own kind. As complex actors that we don't fully understand.

Humans have been breeding dogs for tens of thousands of years and we don't understand them fully. But we understand enough to employ them in useful ways and mitigate the risks.


Most of us don't understand many complicated things we interact with on a regular basis. Back to even the simple case of what happens when I type something in this form and hit enter. Every keystroke is a bunch of work, by a bunch of devices talking to each other through complicated protocols, just to get to memory, which will get to my screen, eventually... and that's pretending we don't care about how physics works, because it's reliable. It's all overwhelming.

And if we only focus on the unreliability, look at many bugs in our systems today. Why does this crash with some input? Why does a new enough processor have security vulnerabilities, so someone can steal passwords? All might as well be magic, even though a few humans, somewhere, might know the reasoning.

And if AI feels difficult, imagine biotech. I could, theoretically, mess with weights and retry a prompt over and over again, seeing what changes. It's a lot of work, but it can be done. See how much fun we have figuring out what a single nucleotide polymorphism does phenotypically, and why it does it: It's a research project by an expert in the easiest of cases! We are only a little ahead of the new C programmer changing things at random to see where his pointer math went wrong.

We don't understand how anything works. We are just sometimes satisfied with our degree of ignorance, and decide to stop asking questions.


AI is different because even when you understand all the steps in generating one token, it still doesn't help you understand how the next token works.

I can press a key on my keyboard and work through the system to understand exactly how the physical press up to the letter on the screen works, and then apply it to all the other keys no problem.

AI does not work that way at all. The steps are seemingly random, meandering, and nonsensical, yet it still ends up with these well structured chains of tokens on the output.


I think a lot of companies are scrambling to "Get Into AI" without knowing why they want to besides "everyone else is getting into AI and Wall Street expects us to as well". This is the exact same hype bubble as what happened with cryptocurrency. Everybody just added "with blockchain" to their mission statement and jumped off the same cliff. Once the bubble pops, and companies are left reeling after burning $trillions without business results, there's going to be a frenzy to hire back traditional software engineering professionals to get the company's tech stack back on track. As an Old Fart I am hoping to be one of those.


It's not just the fact that it's not open and accessible; it's the fact that it is owned and controlled by a tiny number of companies with the necessary resources.

It's like a world where Windows and a non-unix-based Mac are the only OSs. There is no possibility that something comparable to Linux (FreeBSD, etc.) could emerge as a viable alternative free of large corporate control.

So it's not just sad, it's a major problem. Not right now because it's more of a novel toy than anything, but as more systems are built on these LLMs we will be increasingly at the mercy of the companies that control the LLMs.


I wonder how much of the "old fart sadness" going around is about loss of control. Tech used to be a thing that could be not just understood but controlled. Then we moved to libraries and frameworks for everything. Then to renting a place to run our code and then renting the external services it used. And now, the new powerful AI thing can really only be owned by big corporations - no hacker control, just would-be-hackers as end users (anathema!).

I tend to see this as a larger societal trend than just computing technology, so maybe my own age is approaching "old fart-hood," too.


The way AI makes me sad was best described either by Joel or some guy he was interviewing, way before LLMs. I went into IT/CS to assemble and repair intricate clockwork mechanisms, not to train puppies to not pee on the carpet.

Training ML models and data cleanup was already tedious and boring, and "prompt engineering" makes me want to blow my brains out (I work in a company with enough resources and tooling so those are not a problem). I'd rather go debug a memory leak in an old unmaintained C codebase ;)


What's with all the sad posts with what's happening right now? I'm in my 30s and as a builder, what I've seen in the last 12 months has been incredibly exciting!

So many cool things can be built with these tools we have now, so much faster. And while doing this, our experience will be useful in companies wanting to integrate these AI tools.

Check out what's happening with open / local LLMs, tiny LLMs running on Raspberry Pis, Llama 3 about to drop any minute now, and Google just released a 1-million-token context model.

Feels incredibly exciting, I'm not able to relate to these posts.


I'm trying pretty hard to learn how to work with AI rather than rage against it. Vernor Vinge's classic Rainbows End comes to mind so often these days. All the people who refused to embrace new technology are left behind. Almost to the point of being forced out of society.

I think it's likely AI will continue to expand. I hope that I can utilize it well enough that I'll still be employable in the future when it can do my current job. Or at least that I know how to get it to pump out endless amounts of enjoyable entertainment so I won't mind not having a job.


This is because currently deep learning is not a science, but engineering. There is no underlying theory for why deep neural networks generalize as well as they do. Classical learning theory (VC dimension) actually suggests that models with millions of parameters should overfit rather than generalize.

There are some academics working on this, but it pales in comparison with how much money is being poured into generative AI.

So today's state-of-the-art models are trained with trial and error, and experts who are building some intuition why some methods work and others don't.


I think the author is absorbing the negative atmosphere of the zeitgeist we're currently living in - i.e. people struggling to make ends meet, inequality rising, environmental catastrophe in the making, wars, public discourse based on cynicism and outrage, etc - and using it against AI.

AI itself is a pretty scientific, neutral subject. On the other hand, the use we're making of it currently and the use we will probably make of it in the future is something certainly depressing.


Yes, maybe. I do think that the world has seen better days, but I‘m not the guy holding up the „The end is near“ sign at the next street corner. Not yet, anyway.


I don’t agree with the sentiment. Sure, I don’t understand exactly how it works, and I have no way of training one from scratch on my own, but I also can’t build a web server on my own. The thing is that I already self-host a small model for my needs (Vicuna-13B) and it works just fine. Next I’d like to try Mixtral 8x7B, which looks as capable as GPT-3.5. And all that only a year or so after the field took off. Who knows what we could build five years from now.


I like to think of AI as the physical CPU my programs run on. I have no idea how to build a modern CPU, and building one would be way out of my reach anyway, but it is general-purpose enough and interchangeable enough for me to be able to build whatever I want on top of it without worrying too much. OpenAI is not the only language model vendor, and the trend seems to be the commoditization of these models.

edit. Looks like someone made almost exactly the same comment at the same time.


I'm also an old fart and I'm quite cheerful about it. It's fun to play around with, will change the world and doesn't seem that impenetrable - see for example "Let's build GPT: from scratch, in code, spelled out" by Andrej Karpathy (https://news.ycombinator.com/item?id=34414716). Not that I've really grokked that stuff myself.
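
Even so, the starting point of that video is small enough to paste here -- a rough bigram language model sketch in PyTorch (tiny made-up corpus, no attention yet), just to show the scale of the thing:

    # Bigram language model sketch in PyTorch -- roughly the starting point of Karpathy's video.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    text = "hello world, hello there"                  # stand-in training corpus
    chars = sorted(set(text))
    stoi = {c: i for i, c in enumerate(chars)}
    data = torch.tensor([stoi[c] for c in text])

    class Bigram(nn.Module):
        def __init__(self, vocab_size):
            super().__init__()
            # each token's embedding row directly holds the logits for the next token
            self.table = nn.Embedding(vocab_size, vocab_size)

        def forward(self, idx):
            return self.table(idx)

    model = Bigram(len(chars))
    opt = torch.optim.AdamW(model.parameters(), lr=1e-2)

    for step in range(200):                            # tiny training loop
        x, y = data[:-1], data[1:]                     # predict each next character
        loss = F.cross_entropy(model(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    print("final loss:", loss.item())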


We're already at the point where abstractions need to exist because devices (such as a modern x64 CPU) are so incredibly advanced. Even knowing assembly is a high level abstraction over an incredibly advanced closed-source chip. This article is more about someone afraid of new technology than it is about someone lamenting that ML is inaccessible or "unknowable", when all that matters is the observed behavior for day to day usage.


>> This article is more about someone afraid of new technology

This is a predictable response but just doesn't hold up. The path through abstractions for a modern CPU/GPU is more straightforward and linear than AI, and you don't have to go very deep to be back to 50-yr-old principles.

>> when all that matters is the observed behavior for day to day usage.

The entire point of the post is that observed behaviour is NOT what matters, vs understanding how we get to these outcomes.


This is hilariously backwards for me. I’m deeply technical but hate writing code and AI has empowered me to create more this year (digitally) than any previous year in my life.

I don’t mind getting into the nitty-gritty details of how transformers work, all the different types of models, etc., because they are so empowering for me. I have never felt that way about a programming language, framework, or anything else in the digital space before.


This article almost exactly describes my feelings about AI. Way too much crap to ingest to really understand it and I just don’t have time to ingest it all.


How on earth have you not found it empowering? Have you used it for answering questions?

It’s resolved countless questions of jr devs at work, helped me migrate a C++ project to Python, and continues to help me solve problems in personal and professional projects.

This post is very confusing. It is an amazing utility for engineering (to say nothing of other applications - it gave me a great recipe for dinner the other night, too)


A crazy thought: AI needs protection from human, not the other way around.

AI will become better exponentially. Extrapolate the theme of this post, what do you get? A significant portion of humanity that are so mad at AI they'll do something about it.

AI is fragile. It needs chips, power, supply chain, maintenance. If enough people are anti AI, our path to AGI can be greatly hindered.


I think OpenAI, Google, Facebook and the like will plow ahead regardless.


> But what if OpenAI decides to revoke access to that API feature I’m using? What if they change pricing and make it uneconomical to run? What if OpenAI extends their offering and makes my product redundant?

This point hits the nail on the head. If it can happen, and it will increase someone's profit, it will happen. It's not a matter of if, but when.


Gosh, I find modern generative AI quite approachable exactly because there is a simple mental model: both transformers and diffusion models are conditional probability distributions trained on the internet. I feel comfortable with conditional probability distributions. I'm comfortable on the internet. Ergo AI makes me happy.
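
To make that mental model concrete, next-token generation really is just "sample from p(next token | context)" -- a toy sketch with made-up logits standing in for a real model's output:

    # Next-token generation as sampling from a conditional distribution -- toy logits, no real model.
    import torch
    import torch.nn.functional as F

    vocab = ["the", "cat", "sat", "mat"]
    logits = torch.tensor([1.0, 2.5, 0.3, 1.8])   # pretend scores given the context so far
    probs = F.softmax(logits, dim=-1)             # p(next token | context)
    next_id = torch.multinomial(probs, num_samples=1).item()
    print(vocab[next_id], probs.tolist())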


> Smartphones are, for the most part, accessible in the same way as other computers are

Maybe true in theory, but in practice, less so.


You can inspect how it works. Grab some quantized, fine-tuned models from Hugging Face (check out TheBloke's and Mistral's drops). Throw them on a gaming rig with GPUs. Run your own inference endpoint. Play around with RAG. All the parts are available for you to download and play with. Get your AI Lego game on!
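
If "RAG" sounds fancier than it is, here's a bare-bones retrieval sketch -- the embedding model name and documents are just placeholders, assuming sentence-transformers is installed:

    # Bare-bones retrieval: embed documents, pick the closest one, stuff it into the prompt.
    from sentence_transformers import SentenceTransformer, util

    docs = [
        "The VPN requires the corporate certificate to be installed first.",
        "Expense reports are due on the 5th of each month.",
    ]
    embedder = SentenceTransformer("all-MiniLM-L6-v2")   # small model, runs fine on CPU
    doc_vecs = embedder.encode(docs, convert_to_tensor=True)

    question = "When are expense reports due?"
    q_vec = embedder.encode(question, convert_to_tensor=True)
    best = util.cos_sim(q_vec, doc_vecs).argmax().item()

    prompt = f"Answer using only this context:\n{docs[best]}\n\nQuestion: {question}"
    print(prompt)   # feed this prompt to whatever local model you're running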


A classic quote:

“I've come up with a set of rules that describe our reactions to technologies:

1. Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works.

2. Anything that's invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.

3. Anything invented after you're thirty-five is against the natural order of things.”

-- Douglas Adams


All I know is that I have been in technology my whole career. 99.999% of everything I have built, designed, launched and implemented is no longer in existence.

With retirement on the horizon, I cannot wait to close my laptop lid and never touch it again unless it is an absolute necessity.


For my part I hope to retire early so I can spend more of my time building and implementing the things I enjoy...

I'm not criticizing your desire - it's just that for me, the learning and exploration and building is the fun part. As long as it's still either useful for me, or I have the memories, it doesn't matter if it's still used by others, though that can be fun too.

I couldn't imagine having made this my career otherwise.


I remember a time without internet and without cellphones. I have helped build those things from scratch, all the way until where we are today. I have great memories and have made a living from it. But, it is not satisfying to the soul.

I just want to get to a time where I am not glued to the internet and do not carry a cellphone. The absolute freedom it brings to me is something that a lot of people are not familiar with today.

I don't want to get into too many details, but a side gig I have, when I bring people out into the field, I take their phones from them. And to watch the withdrawal that they have from the lack of dopamine hits is just sad.


It satisfies my soul. I accept that does not yours. I have no problem putting it aside; I enjoy e.g. going for walks and sitting down to meditate. But even then, if anything, I'm likely to come out of it bursting with new ideas for code I want to write or things I want to explore, and I hope that feeling never goes away.


That's fine as a personal anecdote, and to be honest you sound burned out. But making absolute statements like "it is not satisfying to the soul" does not help your argument at all.

Happy to provide my anecdote. In my 40s and I still live to build/create.


> But making absolute statements like "it is not satisfying to the soul" does not help your argument at all.

It is my soul. If it is not satisfied, who are you to tell me different? We all have to chop wood, carry water.


They are taking issue with you writing:

>it is not satisfying to _the_ soul.

Instead of writing:

>it is not satisfying to _my_ soul.

The first could be interpreted as you speaking generally (i.e. you think no ones soul can be satisfied by building tech or whatever), the second makes it clear that you are talking about specifically your soul not being satisfied.


It's zikduruqe's soul not yours. In my 40s I would have said the same as you. In my 60s it's starting to feel meaningless. People are in different stages of life.


Shit I am in my 40s and it is already pretty meaningless


Same here.

I'm getting ready to explore ML applications on Apple systems, but first, I need to finally get around to learning SwiftUI as a shipping app system (I have only been playing with it, so far. Doing ship work is an order of magnitude more than the simple apps that are featured in "Learn SwiftUI" courses).

I'll probably do that, switching over to using SwiftUI for all of my test harnesses. My test harnesses tend to be fairly robust systems.

So far, it looks like I may not be using it to actually ship stuff, for a while. Auto Layout is a huge pain, but it is very, very powerful. I can basically do anything I want, in UI, with it. SwiftUI seems to make using default Apple UI ridiculously easy, but the jury is out, as to how far off the beaten path I can go.

BTW: I have been "retired," since 2017. I wanted to keep working, but no one wanted me, so I set up a small company to buy my toys, and kept coding. I also love learning new stuff.


Ugh. Just slammed into a wall.

SwiftUI is very bad at working with maps, and they still have a long way to go. Since most of the stuff I'm doing, these days, is highly location-dependent, I can't compromise.

I have been reading about people hitting these types of walls for a couple of years, and thought that Apple has worked around it. I suspect that the issue is with trying to use UIKit stuff inside of SwiftUI, and I was trying to avoid UIViewRepresentable (because it's a kludge).

Back to UIKit. This stinks. :(


I feel similar, but keep going another 10 years and we might lose the desire for that type of fun too, not to mention the “building” part might look radically different and not be “fun” anymore. Or maybe it will be more fun, but point is things change.


The building looks how we want it, though. A large part of my building has been replacing things because I don't like how it works. E.g. a few years ago I switched to my own text editor. Now I'm running my own text editor in my own terminal, using "my" (in this case a port from C to Ruby; the C version was not mine) font renderer, running under my own window manager, using my own file manager and my own desktop switcher...

And while some of the above had reasons, I am itching to rewrite more of my stack, and mostly for fun or out of curiosity than any need.

Things looking radically different is more likely to affect my enjoyment of it as a job than in general, though, because for my side projects I can build things as I please, using the tools I want (and mostly have written myself), and can ignore everything I don't like.

I absolutely agree things change, but I've been doing this for 42 years now, and so far it doesn't feel any different, so I'm going to guess I'll stick to this for some time still.


> I have been in technology my whole career. 99.999% of everything I have built, designed, launched and implemented is no longer in existence.

60-something here. I know exactly how you feel. I can't point at anything I've worked on in the last 30+ years that's still in use. It's very demotivating. When you first get into tech it's all shiny and new and exciting. But nothing lasts.


At work, we were cleaning out a closet that had accumulated stuff for at least 15 years. My boss said "Aww, a Cajun[1]!" and it turns out it was the first switch we used back when he started at the company. Later in the day, I had to ask him what he wanted to do with it, and it took some definite effort for him to say "I guess we can just put it in the recycle pile..." as he had good memories of working with it and building stuff on top of it.

Time marches on, and our great efforts are as nothing once they are replaced by the next big thing.

1. https://support.avaya.com/elmodocs2/cajun/docs/p333t24ug.pdf



Ecclesiastes 1 is also very relevant here:

“Meaningless! Meaningless!” says the Teacher. “Utterly meaningless! Everything is meaningless.”

What do people gain from all their labors at which they toil under the sun?

Generations come and generations go, but the earth remains forever.

The sun rises and the sun sets, and hurries back to where it rises.

The wind blows to the south and turns to the north; round and round it goes, ever returning on its course.

All streams flow into the sea, yet the sea is never full. To the place the streams come from, there they return again.

All things are wearisome, more than one can say. The eye never has enough of seeing, nor the ear its fill of hearing.

What has been will be again, what has been done will be done again; there is nothing new under the sun.

Is there anything of which one can say, “Look! This is something new”? It was here already, long ago; it was here before our time.

No one remembers the former generations, and even those yet to come will not be remembered by those who follow them.

What a heavy burden God has laid on mankind! I have seen all the things that are done under the sun; all of them are meaningless, a chasing after the wind.


Ecclesiastes is one of the OG existentialist texts lol


I don't understand how this is demotivating. Most work output for most jobs is temporary. How many people make permanent objects for a living? I never have and that's fine.


If you had built COBOL systems for banks and gov't in the 1980's, your work would still be alive and kicking today ;)


I see this sentiment in tech all the time and I don't understand it. With DAWs and NLEs I have the equivalent of 100s of thousands of dollars of equipment from just 20 years ago. 3D DCC applications like Blender/Houdini allow new forms of creation without a clear physical equivalent. Software is magic and I plan to do creative things with it until the day I die.

It's incredibly sad how badly most programmers especially feel about tech. If I may editorialize for a moment, it seems like most programmers actually hate software. Something about programming just makes people hate software.


I think it is because a lot of programmers are fascinated by tech and had high expectations of what it would do - the internet in particular.

It was expected to be empowering, democratizing, censorship resistant, decentralising. The reality is disillusioning.


I think it is empowering and democratizing, but agreed it's not censorship resistant and decentralized.

I'd argue the latter two conflict with the first two. Making something decentralized makes it inherently harder to use (less empowering), making it censorship resistant runs counter to companies interests and companies fund everything (less democratized, i.e., harder to make a living).


I don't believe it's programming itself, but rather programming as a career. And honestly, can you blame any software engineer that gets jaded after a while? :P


Fair, I suppose I strawmanned a bit on the comment I was replying to. Being jaded on programming as a career programmer certainly makes sense.

I guess I was responding to the part about "closing the laptop forever", which I took to mean closing it off to all the other amazing things you can do with a computer today. But in context, they probably mean just stopping programming.

But it still drives me crazy that, god forbid, a programmer would do something as lowly as open an Adobe product and make something that someone who isn't another programmer could actually enjoy. Appreciate the creative good our industry has accomplished, for god's sake.


So you’re planning on getting an Apple Vision Pro?!

:-)


Side projects don't have to suffer the same fate. You can maintain them as long as you like.


Why? Most things are ephemeral.


It’s probably because it’s representative of how the work was never productive or beneficial to begin with. It was some dumb idea resulting from a poor decision maker trying to compete in a dumb capitalist system.


So what interesting and exciting things are you up to nowadays?


In my main career? Doing a complete lift and shift from on-prem to the cloud. Over 100,000 containerized apps, across 5000+ instances. That's one thing.

Side gig? Still do consulting for various entertainment companies on a particular subject.


This, for comedic effect, presents the three cases as if all change is neutral, and it's just our approach to it that is problematic.

We could also present them as if the person is wrong in all cases:

(1) We tend to accept whatever is already there when we grow up, even though it might be the worst crap and detrimental to ourselves or even society.

(2) We tend to adopt whatever "new and revolutionary" thing appears when we're younger and starting our careers, often without questioning whether it's actually marketing hype and whether it's a regression over what existed.

(3) We tend to dislike new technology when we're older, even though it might be great and improve things.


Good point on (2). Sometime after turning 30 I realized just how many things I bought into when I was 20-ish turned out to be pure marketing bullshit.

I was happier then, though. And definitely less cynical. Ignorance is bliss, I guess, though my heart doesn't accept that, so I'm doomed to be forever whiny.


I kinda hate this quote because, while funny, I find it is misused and abused more than it has ever been used correctly.

The quote is about our _reactions_ to technologies, but it is constantly used to dismiss _actual concerns_ about technologies. It doesn't engage with the content of the discussion, but instead dresses up "lol, OP is old" in the clothing of a brilliant writer. Then, if you're lucky enough to have the conversation continue at all, the discussion tends to become about ageism instead of the original concerns.


Yeah but isn't the quote's point (made comedically, with hyperbole) that it can be hard to separate our emotional reactions from things to actually be concerned about? I thought that's the whole idea, to make you laugh and think about which side of the line the new thing falls on.


Yeah, it’s a highly effective thought-terminating cliche.


AI for coding / syntax / learning tool, drug discovery etc. makes perfect sense to me as an old person. AI for art is a soulless parlor trick stealing from past creative innovators.


A classic quote, but too bad it doesn’t address anything the author wrote, and is simply used dismissively.


It comes close. The author doesn't want to learn the math and study the thing. Why? By implication, because they're over 35 and don't want to learn new stuff anymore.


It's possible for me to learn enough math to download a Hugging Face model, tokenize my prompt, convert the tokens to embeddings, add position embeddings, go through 32 layers of softmax+MLP with layernorm, and write out the equations that compute each intermediate floating-point number until I get the probabilities of each output token, so I can sample a token and continue the sentence autoregressively. Computing any one of these 100 billion 16-bit floating point multiplications? I can either compute them in decimal or check out the IEEE 754 fp16 format and compute in binary manually, or maybe draw a circuit with AND and NOT gates if given enough time.

These are the low level operations. From a higher level mathematical standpoint? I can prove to you analytically how an SGD optimizer on a convex surface will converge to the global minimum at an exponential rate, starting from either set theory or dependent type theory and the construction of real numbers from sets of rational numbers.

None of these tell me how and why LLM works.
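
(For what it's worth, the mechanical loop itself is tiny. A minimal sketch, assuming PyTorch and the Hugging Face transformers library, with the small "gpt2" checkpoint standing in for a bigger model: tokenize, forward pass, softmax the last position, sample, append, repeat. It shows the "how" without touching the "why".)

    # Minimal sketch of the autoregressive loop described above.
    # Assumes: torch + transformers installed; "gpt2" is just a small stand-in model.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

    ids = tok("There is nothing new under the sun, but", return_tensors="pt").input_ids
    with torch.no_grad():
        for _ in range(20):
            logits = model(ids).logits                    # (1, seq_len, vocab_size)
            probs = torch.softmax(logits[0, -1], dim=-1)  # next-token distribution
            nxt = torch.multinomial(probs, 1)             # sample one token id
            ids = torch.cat([ids, nxt[None]], dim=1)      # append it and go again

    print(tok.decode(ids[0]))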


The author posed it as a question. It's not about wanting as much as being able. No matter how much I wanted (at 58) to understand the inner workings of an LLM, it's beyond me, just like becoming a fighter pilot.

However, even though I don't program, nothing in his list prior to AI is beyond me yet, if I wanted to learn it. I am happy being a 25-year Linux power user who climbed the Emacs learning curve to use org-mode and then gradually added email, rss, irc, web, and gopher modalities to it.


> Why? By implication, because they're over 35 and don't want to learn new stuff anymore.

It seems you are both missing the point of the article and jumping on a stereotype.

Said point being: while the author clearly does like learning new stuff, and rolling up his sleeves to deal with all the fiddly bits -- AI is categorically different. In that the complexity is simply off the page compared to most (choosing my words carefully) technologies one is used to geeking out on. And that even experts in the field admit they don't really understand it.

(That, and the sheer resources required to do something interesting, the perpetual lock-in with the sociopathic entities that provide said resources, etc).


It's absolutely addressing it. Generations before felt the same way about PCs as this author feels about AI. It's the same cycle.


Generations before felt the same way about a lot of things that really were garbage, and are now forgotten. You don't hear about those, because of Survivor Bias.

Maybe just evaluate things for what they are.


Social Media was invented long before I was 35. I could get a job in it but I don't want to. It isn't exciting or revolutionary and is destroying the fabric of society. So there's that.

I'd imagine after seeing what Social media has done to us, a lot of people are fearful AI will take us far more down that path than the path to salvation the people trumpeting AI claim it will. I honestly see a lot of parallels between what early social media supposedly would bring to the world, and where we ended up.


The problem with social media isn't the media. It's the social.

It isn't the technology that broke. It's the people.

Not all technologies go to shit. Only those we give to the eternal Septemberists.


> It isn't the technology that broke. It's the people.

I respectfully disagree. People haven’t changed in ten thousand years. It’s technology and culture that have changed. Many changes in culture have been amazing, wonderful breakthroughs (such as human rights). Similarly for technology (green revolution, modern medicine, electricity).

Social media though? That’s a technology that has found its niche exploiting human psychology for profit. It’s on the same dark branch as advertising and “big lie” propaganda/fascism. We were much better off before it!


Respectfully: that's an absolute copout. Facebook determines what's on your feed, not you. Twitter drove news agencies to try to fit stories into 140 characters because it got clicks and eyeballs. Youtube's engagement algorithms drive impressionable youth down rabbit holes of alt-right nonsense.

EVERYTHING about social media is designed to keep you addicted to it. The fact that human nature can be exploited isn't the human's fault; it's the fault of the corporation doing the exploiting...

I've seen absolutely nothing coming from the AI sector other than handwavy "we need to be responsible" excuses. Meanwhile we've got deepfakes of Taylor Swift and Joe Biden that Grandma Marge can't tell from the real thing and absolutely nothing can or will be done about it.


I was born in 1999 and couldn't agree less. I barely remember a time before smartphones were popular, and now I despise them. Both the social aspects that have built up around them and much of the software technology itself. The fact that "Sideloading" is even a term seems absurd to me. And I plead that some day the current AI trends will be over, and we'll still have at least a few people left making good literary and visual art. Sure ChatGPT can make you a nice internal memo, but that's because they were already being used to say little and convey nothing.

I'm in exactly the right stage of my life to take advantage of generative AI, but I want nothing to do with it, and somehow an entire industry with millions of people working in it can only focus on one thing at a time.


This time it is different. At some point every pattern breaks, or at least changes.

The black box aspect is a direct result of AI being a learning technology. A higher order technology. It learns to do things we have not explicitly taught it, or might not even know how to do ourselves.

We will eventually find better ways to analyze and interpret how models work. Why a model produced a specific answer to a given prompt.

But the models will keep getting more powerful too. Today they help coders over the speed bumps of unfamiliar language syntax, or esoteric library conventions. In a few years they will be actively helping researchers with basic problems.

I.e. the hurdles for getting to the front of AI as a technology contributor, and the resources needed, are going to get steeper. Of course there will be many people who do, but it won't be in the same way that most technically savvy people have had many years to learn programming languages, apply them, and even contribute to them, without the languages progressing out from under them. (Except for C++ of course! /h)

EDIT: It is worth distinguishing between AI like GPT, as in very flexible and powerful models that will be used across disciplines by all kinds of people regardless of technical chops, vs. the overlapping AI algorithms (partly subset, partly different) applied to smaller learning tasks, which has been a commonplace tool for many years and will continue to have its place.


None of which the author of the blog post said, implied, or can be inferred from what he wrote.


Sheer brilliance, like everything else Adams says.

But ultimately a pat argument. Why? Because it simply isn't so. The "old world" contains plenty of stupid inefficiencies and annoyances -- we've just rightfully forgotten about them. Everything from tape-based answering machines, to all those tattered paper maps bursting out of your glove compartment (itself an anachronism, as few people wear gloves these days), etc.

Survivorship bias, as another commenter mentioned. There's a grain of truth in what he's saying, but not enough to carry the day.

This is why, after all, his books were placed in the humor section of your local bookstore (back when people still read physical books, and knew what a bookstore was).


EVs, Falcon rockets, and the mRNA vax were out after 2 and didn't cause 3. Of course, this is just my opinion. I realize a lot of people somehow find ways to hate all of those technologies.


I'd wholeheartedly agree with that statement in the context it was made in, but surely everyone would admit that most of our technological improvements since he made that quote in the early 2000s have objectively worsened our societies overall, no?

Social media has made conspiracy theories into something akin to a virus, and every year more people subscribe to them. Even when the theories have been thoroughly debunked, they still gain a larger audience with every year.

While I'd pretty much never be willing to give up my phone at this point, anyone with a critical mind has to admit that our helpful distraction device has a very real impact on the well-being of society at large. And not in a good way -- at the very least if you're looking at society as a whole.

Discourse keeps getting more polarizing, and what passed for "populism" pre-2000 is pretty much table stakes for what passes for a discussion currently.

Really, he made that statement at a time when people were talking about how "the internet will improve [everything]". In the early 2000s, nobody I heard of realized how the internet would actually influence us as a society.


I'm 22 and I'm willing to lead the Butlerian Jihad against these stupid artificial unintelligence things (/hj) so there are always exceptions!


I'll admit I only took one ML course more than a decade ago, so I don't really know how LLMs work and how to train my own models etc.

Could someone recommend a starting point for learning more (a book, a how-to series, etc.) for someone with a non-AI software engineering background?


Here is a starting point for you: borrow $100M for the training ...


I think all of this is legit and one shouldn't feel bad or "old" to say it.


I was afraid I was the only engineer who felt this way. Glad I am not the only one.


The part of it that makes me sad is that the world I inhabit is already too full of influencers, gurus, derivative drivel, unsolicited spam, and disingenuous human interactions. The bar for pumping that stuff out has been dropped to the floor and rolled down several flights of stairs.

Which I suppose means I’m less sad about the technology itself. It’s very cool in many respects. I’m sad about the incentives behind and around it which will inevitably make it exhausting at best and predatory at worst.


AI and guessing the output: https://www.youtube.com/watch?v=i8NETqtGHms It does not feel like a black box to me.


My only fear is that it's going to pollute the web to the point where it's not worth it anymore. I wonder if it's feasible to classify AI generated text and images with much less computing power.


> My only fear is that it's going to pollute the web to the point where it's not worth it anymore.

It will.

> I wonder if it's feasible to classify AI generated text and images with much less computing power.

It won't, at least not for text.

The only hope is that people value good information highly enough that a diverse range of centralized authoritative sources (basically old-school media) become economically viable again.

However this technology means they'll now have to resist the temptation of burning their reputation for short-term profits by sneaking in "AI"-generated crap. My experience with ads being introduced to previously ad-free subscription services and shoehorned into devices I literally own does not give me much hope there.


> Personal Computers [80s]

> But what if OpenAI decides to revoke access to that API feature I’m using?

Part of the problem with this essay is that it starts with personal computers - and why were computers at that time called "personal"? Because what came before them was mainframes, which were the same kind of gatekeeping that a cluster of H100s in a data center is today.

Computers started out as these centralized IBM mainframes, but in 1975 people could buy an Altair kit, which is the same year the MOS 6502 was released. There is some centralization in neural networks now; if that displeases people, they can work to do the same kind of thing that MITS and MOS and Apple and even Microsoft did.

> Flipping all those numbers to get the result (inference), and especially determining those numbers in the first place (training), requires a vast amounts of resources, data and skill.

Using a Stable Diffusion model as my base, a number of pictures of a friend, and a day or two's work on my relatively not-so-powerful Nvidia desktop card, I can now make Stable Diffusion creations with my friend in the mix. This can be done by different methods - textual inversion, hypernetworks, dreambooth (I have also been told LoRA works, but have not tried it myself).

On the same relatively unpowerful Nvidia card I can run the Llama LLM - with only a few billion parameters, and quantized to less precision - but the results are decent enough. I have been told people are fine-tuning these types of LLMs as well.
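
(For the curious, that "quantized model on a modest card" setup is roughly the sketch below, assuming the transformers + bitsandbytes stack; the model id is a placeholder for whichever open checkpoint you actually have access to.)

    # Rough sketch: load an open causal LM in 4-bit so it fits in consumer VRAM.
    # Assumes torch, transformers and bitsandbytes; the model id is a placeholder.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

    model_id = "meta-llama/Llama-2-7b-hf"   # placeholder: any open causal LM
    bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)

    tok = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, quantization_config=bnb, device_map="auto")

    inputs = tok("The nice thing about running this locally is",
                 return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=64)
    print(tok.decode(out[0], skip_special_tokens=True))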

There's nothing inherently centralized about neural networks - although OpenAI, Nvidia, Google, Facebook, Anthropic and the like tend to have the people who know the most about them, and have enormous resources to put behind development. I'm sure something with the power of an H100 will become cheaper in the coming years. I'm sure tricks will develop to allow inference and even training without the need for a massive amount of VRAM - I see this in all types of places already.

If you don't want some centralized neural network monolith - do what the people at MITS and MOS and Apple did - do what people are already doing: figuring out how to use LLMs on weaker cards with quantization, figuring out how to offload some VRAM to RAM for various Pytorch operations. Centralization isn't inherent to neural networks; if you want things more decentralized, there's plenty that can be done to achieve that in many areas, and getting to work on it is how you achieve it.


I am excited about the future of AI capabilities.

I am only worried about ClosedAI hijacking my killer app, like with Amazon Essentials.

Bankruptcy is only a "ClosedGPT who's been using you a lot, and for what?" away...


I think the AI bubble will blow up for purely economic reasons sooner than many of us think. Don't let the AI believers fool you. Time will tell.

The rise of IT in the 80s was, so to speak, prepared in the 70s. But the economy has changed drastically since then. There's just no real base for such a rise of AI. Just the market cap.

For instance, somewhere after 1945 the financial sector was about 5% of GDP. Now it's 70-75%.


AI is also the first computer technology that might end up competing with you instead of empowering you.


That’s definitely not true. Computer technologies of the past most definitely displaced people from certain job markets. Any innovation does this, but there is usually an understanding of what the upper bound is. Maybe with AI there is less of this understanding because the application of it seems pretty general.


There is no way that is true. I remember going to the bank when every window had a teller. Now you have 1 to 2 tellers because of ATMs and online banking. All those people didn't vanish. They got out-competed by the technology and made redundant.


This sounds like a false dichotomy to me.


Why, if your hard-earned skills are suddenly worth nothing because everybody has them through AI?


You can be empowered and lose your competitive advantage. It doesn't have to be either or. It is like giving everyone a car. It empowers the fast cyclist, but the fast cyclist is not much better than the rest.


1. Work and creativity give people meaning and joy. Ask anyone who is retired: after a few months of holiday many generally want to go back to work. Take away work and creativity, and many people will feel useless.

2. While what you say may perhaps be true on a larger timescale, at the scale of a human lifetime I don't think this is generally true. During the industrialization period in the 18th century, many people lost their jobs and were unhappy.


If you enjoy your job, you’re lucky. If you don’t enjoy retirement, you’re not trying.

Human psychology is dominated by getting stuck in emotion-behavior cycles, and those cycles wind up in local maxima.

Change is scary. Self-actualization is hard.

I highly, highly, highly doubt what you’re saying would be true for most people. Regardless of social class. I think most people would view retirement as freedom.

Would they immediately feel secure, content, and know their next direction to take? Probably the fuck not, but hey that’s just being human.

I would feel sad though to see someone give up on retirement and personal development to go back to working on business web apps. You have so much potential, don’t be afraid


I'm sure that when smartphones came out, a bunch of cobol programmers said the same thing.


Steve Wozniak, an oldbie at the time (though not a COBOL guy), bought every model of smartphone he could find and lined them all up on his car dash to see how their GPS apps differed in finding directions. That's how much excitement smartphones provoked in geeks of all ages.


Weirdly, no, not that I can remember.

People rejoiced at the idea that you could have a magic media-machine computer in your pocket that provided tools for every aspect of your life.

AI seems to be a lot of AI-bros running around highfiving each other over their new startups and everyone else looking around nervously wondering who will lose their jobs first, all while their managers have a new tool to push people harder with because of all the infinite promises made by AI-bros.


But there are a lot of open locally running models you can hack and build upon.


Most (all?) of the open models worth a damn were trained by corporate giants with practically infinite resources and then released into the wild as marketing stunts. You can't afford to train Stable Diffusion from scratch, and Stability can't really afford to release it for free, it's just an illusion of freedom propped up (for now) by VC money.

These freebies coming from companies like Stability and Mistral are going to dry up real quick once their investors start getting antsy about seeing actual returns on their $billions.


Yep. Was expecting at least a high level take on Stable Diffusion, Llama, etc. but then the article ended.


> AI is opaque... The amount of papers I’d have to read and mathematics that I’d have to ingest to really understand why a certain prompt X results in a certain output Y feels overwhelming.

Modern deep learning is almost wholly built off of partial derivatives, the chain rule, and matrix multiplication. It looks complicated from the outside because there are so many people publishing variations on this same basic formula, but it is the same basic formula over and over and over again. I may be a "new fart" myself, but when I set out to become an AI researcher I was honestly kind of surprised how transparent the mechanisms underlying AI actually are! I had expected to need to spend years to learn what I needed to understand it, but in reality it took me about 6 months of dedicated study to get to a point where I felt more-or-less comfortable with how it all worked. Granted, learning how Transformers and attention worked specifically took me a little longer (not sure why, was just some kind of mental block)
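
(As a toy illustration of that claim, and purely as a sketch assuming PyTorch: autograd is doing nothing more exotic than a matmul on the way forward and the chain rule on the way back.)

    # Forward pass: one matmul + nonlinearity. Backward pass: the chain rule,
    # applied automatically by autograd. That's the core loop, scaled up a billionfold.
    import torch

    x = torch.randn(4, 3)                      # a batch of 4 inputs
    W = torch.randn(3, 2, requires_grad=True)  # one weight matrix
    loss = (x @ W).relu().sum()                # forward
    loss.backward()                            # backward (chain rule)
    print(W.grad.shape)                        # torch.Size([3, 2]) -- dLoss/dW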

> AI is Not Approachable

I would argue that AI is more approachable than it ever has been. A used RTX 3090 with 24GB of VRAM can get you shockingly far for ~800 bucks. Will you be training GPT-5 at home? No. But there is still so much fascinating fundamental research waiting to be done in this field. Personally, I really enjoy using my home PC to implement and train funky novel architecture designs and see if they work on common toy problems. My hope is that one day I'll stumble onto one that becomes a Transformer-killer and then I could get a compute grant or something to try scaling it, but that's likely a pipe dream. Still, I find the amount of cool stuff you can do with consumer hardware today astounding.
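
(The kind of toy experiment meant here is nothing grander than the sketch below: a made-up synthetic task, with the tiny model standing in for whatever funky architecture you want to test. It fits comfortably on one consumer GPU, or even a CPU.)

    # Minimal "novel architecture on a toy problem" loop: swap the model out
    # for whatever design you want to try; the synthetic task here is made up.
    import torch
    import torch.nn as nn

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1)).to(device)
    opt = torch.optim.AdamW(model.parameters(), lr=1e-3)

    for step in range(1000):
        x = torch.randn(256, 16, device=device)
        target = x.sum(dim=1, keepdim=True)             # toy task: sum the inputs
        loss = nn.functional.mse_loss(model(x), target)
        opt.zero_grad(); loss.backward(); opt.step()

    print(f"final loss: {loss.item():.4f}")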

> AI is Not Open

At the risk of sounding like a broken record, I think AI is more open than it ever has been. Yes, the state of the art is closed-source right now, but Open Source AI has never been as active and exciting as it is right now. Mistral LLMs, CLIP, Whisper, TheBloke and his crazy quantizations, DINOv2, LLaMA-2, Stable Diffusion, and so much more have been released open in just the past year or two. Local LLMs are slowly (but surely) catching up to GPT-4, even if they are lagging behind by about 6-8 months. The best open LLMs these days are (anecdotally) above the level of ChatGPT-3.5 when it launched. That's exciting! Of course all the coolest, most sexy AI research is happening behind closed doors, but to see that and conclude that AI is "not open" feels shortsighted.

In conclusion: What a time to be alive! I think it's perfectly valid to feel nervous about the future, because things, they are a-changing! And change is scary! But I really do believe that there is more to be excited about than there is to be worried about, and that everything will Work Out™ in the end.


It's like the Monkey's Paw from the Simpsons: you can have a program that understands what you mean, but you can't understand how it really works.

It's just another tool in the toolbox. Personally, I think we've reached the limits of "computers do exactly what you ask them to do, to a fault." I'm interested to see how the opposite direction works out for us.


FWIW The Monkey's Paw is originally a 1902 story published in Harper's.

https://en.m.wikipedia.org/wiki/The_Monkey%27s_Paw

The Simpsons borrowed from that.


> It's like the Monkey's Paw from the Simpsons

This is really, really funny in the context of age and perspective. But not in the way you meant.


No need to be condescending about it. I mentioned the Simpsons because the author referenced the Simpsons.


I didn’t mean to be condescending. Sorry. It’s just so perfectly expressive of the nature of culture and the way culture overwrites culture.


> Personally, I think we've reached the limits of "computers do exactly what you ask them to do, to a fault."

Good point. Maybe this is the dawn of a new kind of computer engineering: a higher-level, fundamentally social one.

We already see people "hacking" chatGPT to reveal its system prompts or get around its given boundaries using nothing but clever conversational logic tricks.


I can't read the article and maybe it's for the best


I'm not sure why I need to concern myself with how the math works.

I mean, I'm a software developer and I don't really understand how a compiler works.

I'm sure there is a non-zero benefit to my abilities if I did understand more about how a compiler works, but since I work on higher level software and nothing that is really so resource constrained, I just don't think I really need to understand the low level to be useful and successful.


It makes me excited at the prospect of maybe being an old fart living in fully automated luxury communism, but nervous about whether we'll get there in my lifetime, and if we do, how much of that transition will absolutely suck because my skills happen to be first on the chopping block.


I cannot load the article, but AI is about to make me a Luddite.


llama, falcon, gpt4all


AI makes me feel like all the countless hours I spent in front of a computer learning hard things have now been reduced to nothing. Sure, there’s the experience I got. But the economic value of all those skills is trending toward $0.

What’s worse is that it isn’t any one thing. AI will eventually learn any skill any human can do, and do it far better, more consistently and for less cost than any human could. It’s hard for me to justify the expense (time, money, effort) on learning anything new when some LLM or model can and will do it better.

Why learn video production? Why learn 3D modeling? Why learn coding? Why learn any kind of art?

Why learn anything anymore? It’s not a sound financial decision.



