Hacker News

I've been feeling FOMO (for lack of a better term) about recent AI & ML/GPT progression.

It feels like ML/AI might be the beginning of the end for a large class of things (if I wanted to be alarmist I'd say "everything") -- and the fact that Fabrice Bellard has jumped in and done the absolutely obvious rising-tide thing (building an API that abstracts the technologies) speaks volumes.

Releasing something like this fits Fabrice's pattern of work -- he built QEMU, which served as a similar enabling fabric for people to run virtual machines. QuickJS quietly powers some JS-on-another-platform functionality.

Simon was right. The Stable Diffusion moment[0] is already here. It's going to accelerate. It was already moving at a speed that was hard to follow, and it's about to get even faster.

There are too many world-changing things moving forward at the same time, and I'm only looking at such a small cut of the tech sphere. I don't know what to do with myself, I feel so thoroughly unprepared.

[0]: https://simonwillison.net/2023/Mar/11/llama




Every one of these articles fills me with a related kind of dread.

My whole life, my whole personality is architected around making things by hand for other people. My ideal world is a hipster stereotype where we all sit around using a small number of artisanal products to make other artisanal products for each other.

The arc of my programming career has gone lower and lower down the stack because when I create, I enjoy it most when it feels concrete, deliberate, and long-lasting. I get no joy out of duct taping a few libraries together (though I respect others who do).

While I spend a lot of my day doing code review and think it's a valuable, important part of the process, it's not my favorite task. I like making stuff, not just socially interacting with others to loosely guide them towards making stuff. The idea of AI-assisted software development to me just sounds like taking the one part of the job I like most—writing code—and turning it into even more code review, except now I'm reviewing code vomited out by a machine.

And I completely dread the long-term societal implications of a world where most people spend most of their day consuming media auto-generated by a machine. Where lonely men and women hide from their social anxiety by cultivating simulated romantic relationships with chatbots. Where teens have their expectations of sex set by watching synthesized porn starring virtual actors doing things that are physically impossible. Where people watch auto-generated videos of impossibly idyllic vistas instead of actually leaving the house and going for a hike. Where our beliefs about the world are formed largely by synthesized news articles that may or may not accurately reflect it. Where children learn to speak, read, and write from AI tutors and pick up all the grammatical and stylistic quirks of the AI model, such that those quirks become actual parts of human language.

And, of course, where almost all of the massive profit generated by all of that flows to an increasingly small number of huge corporations.

None of that sounds like a world I want to live in.

I totally get the value of AI for things like classification and understanding. But generative AI feels like a Pandora's box to me.


Interested to know what your plans are. You've thought about this a lot, and for me your books sit next to all the others I have on C, assembly, math, etc--and, I am a bit embarrassed to say, a large portion of which I still need to study.

I don't think I should despair too much, and simply grant triumph to a statistical steamroller.

The value is we'll still be making stuff, or yearning for it, and AI becomes a part of our toolkit (or never). Perhaps in your case, we will see Prompt Engineering Patterns someday.

I intend to plop down a tiny fortune enabling my children to have obscene hardware for wherever their personal projects take them, and server-grade CPUs, multiple GPUs, and fiber will be a given.

Just know you're standing at a height somewhere above, and as a giant to me I hope you can see farther ahead :)

Maybe we'll retreat more into our personally-named server, managing a handful of like-minded users, crafting rooms in a MUD no one will read. We'll publish stats for our packet filter, read stories typed by hand, and make little games.

There are still lots of problems in many domains that we would be completely new to, and people suffering injustice -- are those not things we might be interested in as well?

For the sake of learning things, we are still satisfied, even if AI were to generate it all in an instant. For the sake of satisficing our home labs and side projects, we find a tiny reprieve.


> Interested to know what your plans are.

I think I'll manage to eke out the rest of my career doing the kinds of programming I enjoy at least until I'm ready to retire. That's basically the extent of my plan.

> Just know you're standing at a height somewhere above, and as a giant to me I hope you can see farther ahead :)

I wish I could, but I'm as lost as the rest of us. I just happened to have written a couple of books.

> Maybe we'll retreat more into our personally-named server, managing a handful of like-minded users, crafting rooms in a MUD no one will read. We'll publish stats for our packet filter, read stories typed by hand, and make little games.

That hits perhaps just a little too close to home.

I think of the countless old folks in garages tinkering on old cars with manual transmissions and no engine computers, simple enough that they can still be tinkered with, wishing they still made cars like that today. (I drive a twenty-year-old truck for similar reasons, though I'm not savvy enough to actually work on it.) Or other old folks hand-knitting sweaters and blankets for grandkids who appreciate the gesture but put it into storage because the latest mass-produced fast-fashion blanket they got off Amazon matches their room's decor better.

The whole thing just makes me feel old and out of touch.


> My whole life, my whole personality is architected around making things by hand for other people.

Not sure if there's a conflict here. We still use compilers, right? Unless we write machine instructions by hand, there's always some kind of program generation somewhere.

> I get no joy out of duct taping a few libraries together (though I respect others who do).

You can still work on stuff that requires human ingenuity. If you look at the credits section of the GPT-4 paper, you will see many people there because they are masters of "low-level" optimization. For instance, the author of Reformer is a lead of the Long Context section, apparently because he knows how to reduce the O(N^2) cost of self-attention to O(N log N), among other things.
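To make the complexity point concrete, here's a toy sketch of the bucketing idea (my own illustration, not Reformer's actual LSH attention): full attention scores every query against every key, while a locality-sensitive hash lets each query score only the keys that land in its own bucket.

```python
import random

def naive_attention_scores(queries, keys):
    # Full self-attention: every query attends to every key -> N*N score computations.
    count = 0
    for q in queries:
        for k in keys:
            _ = sum(a * b for a, b in zip(q, k))  # dot-product score (value unused here)
            count += 1
    return count

def lsh_bucketed_scores(queries, keys, planes):
    # Toy LSH: hash each vector by the signs of a few random projections,
    # then only score query/key pairs that share a bucket.
    def bucket(v):
        return tuple(sum(a * b for a, b in zip(v, p)) >= 0 for p in planes)
    buckets = {}
    for i, k in enumerate(keys):
        buckets.setdefault(bucket(k), []).append(i)
    count = 0
    for q in queries:
        count += len(buckets.get(bucket(q), []))  # score only same-bucket keys
    return count

random.seed(0)
N, d = 256, 8
vecs = [[random.gauss(0, 1) for _ in range(d)] for _ in range(N)]
planes = [[random.gauss(0, 1) for _ in range(d)] for _ in range(4)]  # 2^4 = 16 buckets

full = naive_attention_scores(vecs, vecs)        # always N*N
sparse = lsh_bucketed_scores(vecs, vecs, planes)  # far fewer on random data
print(full, sparse)
```

With 4 random hyperplanes (16 buckets) the pair count drops from N^2 = 65536 to roughly N^2/16 on random data; Reformer's real scheme additionally sorts by hash and attends within chunks, which is where the O(N log N) behavior comes from.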


This is specifically what the situationists were trying to express with “the society of the spectacle”. It is very interesting to live through a time when broad swaths of humanity are awakening to such a critique.


I understand and share your fear. In a sense, I think we already live in this world: there are many people who are already slaves to YouTube, TikTok, strategically timed push notifications, etc.

We live in a world where digital tech is fantastic for people who have the skill and strategy to place healthy limits on their own use of it, and the worst possible world for those who cannot. And it's all about to get much more extreme.

All of your fears will likely come to pass, but I think there are also much more positive framings and there are fantastic opportunities to build solutions to some of these problems.

- Loneliness. Today, lonely men and women already suffer. A simulated relationship is a legitimate improvement over no human contact at all. Could a bot use the embeddings of your interactions to match you with a compatible partner, saving you the anxiety of having to navigate the dating market?

- Auto-generated videos. Frankly, most movies and TV shows suck today, probably because they come out of a broken Hollywood. I can't wait until the barrier is so low that a random genius college student can make a feature film with almost no resources. Maybe there will be more new movies that are actually good!

- AI tutors. Playing with GPT4 the past few days I've already experienced visceral joy from its use as a teaching tool. It is like having a pretty smart co-worker who knows something about literally everything. I'm honestly so excited about being able to learn new things without dealing with the fundamental barrier of finding good sources of knowledge.


> We live in a world where digital tech is fantastic for people who have the skill and strategy to place healthy limits on their own use of it, and the worst possible world for those who cannot.

I have several family members with ADHD and this is profoundly true. I don't know if they would even have a diagnosable condition if they didn't have the misfortune of living in a world that is actively preying on their attention every moment of their life.

> Today, lonely men and women already suffer. A simulated relationship is a legitimate improvement over no human contact at all.

I disagree. What I see over and over again in society today is that parasocial relationships (fandom, celebrity worship, life vlogging, etc.) enable and intensify loneliness. They provide just enough of an emulation of actual human contact, with none of the effort or risk, that many people (myself included, sometimes, honestly) settle for that instead of being forced out the door to get the real thing.

It's exactly like the thing we all do where you wander into the kitchen hungry. You're too lazy to prepare a real meal, so you have some chips or crackers or something and that gives you just enough satiety that you wander off again. But then ten minutes later you're back again, still hungry.

> Maybe there will be more new movies that are actually good!

I would so much rather watch a movie that has flaws and isn't entirely to my taste if it was made by a person who actually cared about what they were doing.


> There are too many world-changing things moving forward at the same time, and I'm only looking at such a small cut of the tech sphere. I don't know what to do with myself, I feel so thoroughly unprepared.

I think the general trend is that actual useful applications are emerging from enormous models trained and owned by billion dollar companies only. Even projects that aim to run models on private consumer hardware are dependent on commercial orgs to produce them. It doesn't seem like that is likely to change.

I don't think there are many positions/jobs/roles for people doing integral, foundational work that requires a deep understanding of ML. Becoming a world expert in ML will probably only open up opportunities at a dozen companies.

A very shallow surface-level understanding of ML already puts you leagues ahead of the general population. In terms of job security, figuring out how to use an ML API will get you hired faster than knowing how to advance the field.

We don't actually need that many Fabrice Bellards.


> I think the general trend is that actual useful applications are emerging from enormous models trained and owned by billion dollar companies only.

One way to think about it: Today's LLMs require incredible outlays of capital and processor power (and crews of folks with doctorates), such as billion dollar companies can provide. But how is that different from what Intel brought to commodity CPUs in the '90s/'00s, or what Nvidia brought to GPUs in the '00s/'10s? Or even what Cisco and folks brought to networks?

Though we may never design an artisanal CPU/GPU/router, we get to work with them every day to make things, and to communicate. These LLMs can be that for us at this moment. Let's go out and enjoy them, and see what we can make within their (vast) domain-specific capabilities.

[takes off rose-tinted glasses]


> But how is that different from what Intel brought to commodity CPUs in the '90s/'00s, or what Nvidia brought to GPUs in the '00s/'10s? Or even what Cisco and folks brought to networks?

Yes, they made these technologies accessible and useful. And very few people needed to understand high-K dielectrics or out-of-order execution to use them, hence I think the FOMO is misplaced.


Same here. I don't know whether to retire or dive in deep. There's really no way to sound other than alarmist.

This is the biggest revolution I've seen in my lifetime, and it's going to make the PC, the internet, and the smartphone look like kids' toys.


> Same here. I don't know whether to retire or dive in deep. There's really no way to sound other than alarmist.

It might be a blind-leading-the-blind situation, but I wonder if retiring will protect you. Could you imagine having gone into retirement in the early internet? Wouldn't you be really confused at how to use things today? Or maybe you would just learn most useful things passively as a consumer.

It feels like the gap between the people who can make and the people who consume is widening.

> This is the biggest revolution I've seen in my lifetime, and it's going to make the PC, the internet, and the smartphone look like kids' toys.

This coupled with everything else that's happening is all too much. Getting closer and closer to cheap/free energy (batteries, solar, fusion, wind, etc), AI/ML, Robotics...


To me it seems clear that most people are going to become a kind of peasant or serf, effectively "owned" by their techno-corporate silo. Programming as a human activity will resemble professional sports or writing: a handful of well-paid superstars and the rest of us mostly just watch/read (although of course we play sports and write, and basic computer literacy should be a thing, eh?)

This might not be so bad? As you've no doubt noticed, most people suffer from some combination of ignorance and stupidity. We can barely function. An earthquake happens and half the buildings fall down, despite some of us knowing how to build them so they don't do that. That's not a technical problem, it's a social problem, eh?

As far as technical problems go, we have pretty much solved "Life on Earth". In video game terms we beat the boss about a century ago (give or take a few decades) and now we're just sort of running around acting silly while the end credits scroll. Atomic power, materials science, etc. We licked it.

IF we employ these newfangled talking computers to tell us how to solve our problems, they will, and we can live happily ever after. (prompt: "What would Jesus do?") It doesn't matter that they're merely regurgitating the wisdoms of the ages, when the computer says it, maybe it will stick?


I understand what you mean. I have stopped delving into other programming languages and frameworks to learn more about this AI/ML/GPT stuff.


Control freak, by any chance?

Is it really so hard to just "go with the flow"?


sorry a bit late with my response, but original commenter here.

I'm not much of a control freak -- it's more that when you see a credible possible extinction event in the distance, don't you want to act?

And even if you didn't view it that grimly, isn't the ML stuff just SUPER cool? Trainable prediction engines are really amazing and very actually useful (feels like the first iteration was being able to recognize things in images, which felt like magic).

Going with the flow is awesome, but one thing I've found about life is that if you're not in the right flow, it's completely different. It's not like you have to be in the perfect stream, but you need one with fast moving water.

Imagine being 10 years late to computers or the last one in your area to get a typewriter.

I personally feel like I have to immerse myself in stuff to get it, and the lack of more than surface level understanding of ML is worrying with how big it could potentially be.


Two things to note:

No, indeed, if I see an extinction event in the distant future, I don't feel a need to act. It is hard to explain, but I don't feel responsible for humankind. It's a nice experiment, and I love being part of it at the seemingly right time. But I don't go as far as investing personally in the future of humanity. If we make it (to wherever that would be), fine by me. If we don't, that's also totally fine...

Regarding the part about it being hard to keep up to date with current ML: that's true, of course. However, what helps me here is that I feel like this is interesting, but not my field of work. Tinkering with free software was far more contagious, because I could do that literally at home, without wasting too much money. ML is totally different. I could toy with tinyGPT, but that is effectively too tiny to be fun. Once I scale up, I can no longer do stuff on my own. Sure, I could also order 1024 A100s from a cloud provider, but... that is going to bust my budget. So in a sense, this frees me from wanting to understand it all. It's totally OK for me to have a rough overview of how it works...


I have a positive outlook for large language models: we can have new synthesized stories that break the rules of what we consider makes something a game or not. The biggest compliment to new forms of art is the people who reject them.

I can’t really sing or draw anime but now my computer can so it can be a large enabler for people.


Hard to just "go with the flow" when your livelihood can be possibly threatened.


Well I’m sure that one guy will be able to change the course of history to suit his job situation. It’s going to happen so you had better go with the flow. Become water.


In the US, people's jobs effectively justify their existence. Smugly saying mid- or late-career professionals can "become water" when their entire career category faces collapse is the most patronizing thing I've heard in a long time. That's a non-recoverable blow to many smart, capable people. There's a big difference between demanding we stop the wheels of progress to protect a few people and saying we owe the masses being crushed by them some harm reduction. Of course society as a whole will profit. But the rust belt shows that the platitude about people deemed professionally unnecessary just "figuring it out" merely comforts the people turning those wheels.


Not all that long ago people on this very site were super enthusiastic about the prospect that my job would very likely be taken over by the robots.

Because, you see, the robots didn’t need to be perfect but merely fractionally safer than the humans.

Now all I see is No True Scotsman arguments on why the current crop of AI won’t take away their jobs and, ironically, my job is pretty safe due to Tesla’s tomfoolery causing blowback from the feds.

So robots eating the world is good or what?


Yeah, it's pretty bizarre. So many people who become incensed by businesses profiting from being shitty to customers are gleefully just dumping the clutch without looking for people in the crosswalk. Maybe literally if the car is driven by an algorithm. I'll bet they could whip up a lot of macho, patronizing personal-responsibility-based arguments in support of being able to mow down the slower pedestrians at crosswalks though.


Remember how the universe works? It doesn't care about the well-being of individuals. That's just a fact of life. Demanding equality of outcome was always an illusion, doubly so in the face of big changes.


Demanding equal outcomes is very different from saying we shouldn't kick the chair out from under someone and then say it's their own fault. Homo sapiens didn't become the dominant species on the planet because individuals were stronger, more aggressive, or greedier. The thing that made them better was cooperation.

So that's a fact of life? Families don't work like that. Military organizations, governments, and sports teams don't work like that. Businesses don't work like that internally. Well, not successful ones. Sears actually tried it and promptly collapsed within a decade and a half of pointless infighting and undermining. The fact that you think that's a natural state of the economy and not a deliberate choice proves how great the propaganda has always been for laissez-faire American capitalism.


If this is a "non-recoverable blow" then those people were likely not as smart as claimed to begin with...


What a ridiculous thing to say. People's brains become less plastic over time, and the longer we stay in one specialty, the less we keep up with unrelated tech and knowledge. Intelligence has no bearing on whether a late-career radiologist could pivot into a totally different category of job if deep learning eliminated the need for the only medical career they were qualified for. As a long-time developer, it blows my mind how common this hubris was among my colleagues. As depressing as it will be to see the overwhelming majority of software developers rendered obsolete by progressively more sophisticated code generation tools, I will be glad to see the naive arrogance in the software world taken down a few hundred notches. It's an industry full of people convinced that they are too smart and useful to get left behind. Lol good luck.


[flagged]


Only someone who claims to have all the answers would make such bold incorrect assumptions about who I am and what I know. The demand for my specialty is waxing, I'm likely safer than most workaday software developers, and it's my third career. Don't talk to me about becoming water in my career and life. I also don't think that lacking my brain plasticity and marketable cognitive profile should disqualify someone from easy access to food, housing, and health care. So if that body of water drains into the ethically feeble sewer of self-absorbed SV business culture, I'll opt for evaporating on a rock instead.


So hope for the unconditional basic income or become water ...


Or rather than blaming people who are getting stepped on for getting stepped on, the rest of us can:

a) Admit socioeconomic mobility has plunged in the US, so when you're down, it's a lot harder to get up. Stepping on people to climb to the top was always a scumbag move but now it's a lot more consequential than it used to be.

b) Recognize that absolute meritocracy is a myth. People lacking socioeconomic status and/or one of the "it" cognitive profiles don't have the same opportunities for growth, especially when young, which changes your whole career trajectory. That is not something you can change no matter how watery you are.

c) Realize that people are intrinsically worthwhile and dismissing them as collateral damage for your profit is immoral.

d) Recognize our moral obligation to create a just society and act on it.

I know it's a lot more convenient to pretend the people you're walking on are part of the floor, but that doesn't make it true. Homo sapiens didn't become the dominant hominids by being stronger, more aggressive, or greedier -- we were able to cooperate. That concept is anathema to SV business practices.


It's so funny, because that phrase when I hear it suggests going with the ML/AI flow.

"Becoming water" is a double-edged sword -- if there's a storm out or it's really hot, it's a bad time to be water. You kind of want to be rock then.

The point is that going with the flow feels like it could easily be interpreted either way (as in -- go all in on AI/ML)



