Ask HN: Disillusioned after AI?
114 points by uptownfunk 69 days ago | 109 comments
Maybe it’s me but the advances in AI are just leaving me feeling disillusioned as a builder. There’s an odd feeling that whatever I am going to build will just get gobbled away by some big tech company. The demos just become more cringey, the messaging more duplicitous and fake-authentic. I’m wondering if anyone else has wrestled with similar feelings or maybe I am just going through a rough patch.

I feel it. I'll add that I'm also rather scared of how AI's compute- and data-hungry nature is de-democratizing tech and...reality in a big way. A handful of giga-scale companies hold all the cards. The actual technology behind AI is mostly a small set of clever tricks for encoding data in a way that neural nets can work on efficiently; the nets themselves aren't magic, and the process of making them is well understood. But unless you have the money to shell out tens of thousands for hardware and have access to literally everyone's private data to train on, you're just going to be very second tier for eternity.

> and...reality in a big way

This is the part that frightens me somewhat, because I don't think the public is prepared for what's coming. Although, it's of secondary concern to me versus your first point:

> de-democratizing tech

...which is certainly problematic.

I'm at a point right now where I haven't fully decided what to think. On any given day, my Facebook feed is filled with a growing collection of AI-generated images, and more and more comments are expressing exasperation that it's "real" when the images are (to me) clearly generative "art." But I think we're at a point where the average user is going to be duped by at least some small percentage of the images that are out there. It wouldn't be too difficult to imagine a time in the next 5-10 years (maybe less) when a majority of the population is convinced of an imagined event that never happened.

More to your point, though, even companies that have a large volume of data to train on but simply don't have the resources are going to be "very second tier." I've noticed this with the AI subscription for Logos Bible Software. ChatGPT generally does a better job of answering questions than Logos, which oftentimes can't even answer fairly trivial queries based on one's library. I've heard this has changed for the better in the past couple of months, but I'm not optimistic because of the capability mismatch.

I feel like this is one of those red lines that change society's behavior for the worse. We didn't need to invent locks until thieves were commonplace. We didn't need to invent private walled gardens until our data was being used to make our lives worse....

The data part seems to be different. With tech, it's always a race to the (cheapest) bottom, and I can certainly see hardware prices (or rather, computation prices) coming down. But data will be harder and harder to collect, because:

1. Regulators will finally think of something.

2. The internet is becoming increasingly full of AI-generated content, which is bad as training data.

> There’s an odd feeling that whatever I am going to build will just get gobbled away by some big tech company.

I have been wrestling with this exact feeling (and everything you posted) since the first day they released ChatGPT to the public. As a software engineer/maker, I can't possibly overstate how essential it is for me to be in the flow: writing software, building things, and finding novel solutions to problems. If you take this away from my day-to-day job, I have no idea what I would do with myself. I know I would be absolutely miserable.

People who are blindly "enthusiastic" and expressing their excitement about being able to "build things they never built before" "with such ease" are totally missing the mark. They're just being short-sighted and can't think through the logical implications of what it means when (in a few years) products can be created simply by telling a computer what you wish for. Here is what happens then:

(1) The value and uniqueness of all products drops to near-0

(2) You can't create a business out of these "products" because anyone else can build them

(3) The only one profiting is the person or entity that owns the AI platform you're building your products on.

Why people are not revolting against companies whose mission statement is to "build AGI" is beyond me. It's just humans being completely oblivious, as always. It's not like we haven't seen this with climate change.

I think you bring up some good points.

One thing I'd add: if we enter a world where the ability to build some product, movie, song, novel, whatever is so widespread and cheap that anyone can do it (and I do emphasize the word "if", because I think it's debatable that this will ever happen at a level that is cost-accessible):

The strongest way businesses will be able to differentiate is Taste: human-ness, novelty, design, the things that AI isn't actually all that good at. AI may generate an episode of Law and Order, but it probably couldn't generate, say, Oppenheimer, because nothing in the world knew that it wanted what Oppenheimer delivered. If everything else goes to zero, then that's all that remains.

The issue is: AI is tremendously bad at working with humans who actually have taste. You generate an AI image with the subject on the left; you ask for a revision, same image, but with the subject moved to the right side of the image; it changes too much, or generates new images entirely rather than iterating. It's not obvious that this will get better, because it is somewhat antithetical to the way the AI works.

Put another way, English lacks precision. It's not the right language for this. And what seems inevitable to me is that we'll need something other than English to get the last 20% finished on anything AI is being used to deliver. That thing is, guess what, a human knowing Python, knowing After Effects, knowing Photoshop, etc. An image-generating AI will be popular; a .psd-generating AI, with extremely precise layers left intact, would be worth billions.

In that sense, I think the current phase of AI is absolutely "people with no taste now being able to lean on the trained taste of everyone else", but you're right that they're totally blind to one IMO non-zero probability endgame: Taste + Skill + AI becomes a hyperweapon. Creation (AI), in the right direction (Taste), with the technical acumen to take the 80% output AI generates up to 100% delivery (Skill).

The unsolved problem is: How do we integrate AI into that workflow in a way that puts the Human, that puts Taste, first? Such that it isn't "AI does 80%, then passes it off to a human", but rather that the AI becomes more like Photoshop, or After Effects, or VSCode, involved in the process from Day 0, integral to it, back and forth. No one has solved this. No one is even remotely close.

> AI is tremendously bad at working with humans who actually have taste. You generate an AI image with the subject on the left; you ask for a revision, same image, but with the subject moved to the right side of the image; it changes too much, or generates new images entirely rather than iterating. It's not obvious that this will get better, because it is somewhat antithetical to the way the AI works.

You can mark parts of a generated image in Dall-E/ChatGPT and tell it how to change it. That’s generally called inpainting.

And you can iterate on the same image. For example by leaving the seed unchanged while experimenting with the prompt. Or by using image to image generation instead of text to image.

And then there are the endless ways to change the style of an image to get a desired effect.

And so on. So I personally wouldn’t subscribe to your hypothesis that these tools just show the average taste and people with their own taste can’t use them.
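To make the fixed-seed trick concrete, here's a toy sketch in Python. It uses `random.Random` purely as a stand-in for a diffusion sampler's initial-noise draw (real pipelines expose the same idea through a seeded generator object passed alongside the prompt); the function and numbers here are illustrative, not any particular tool's API.

```python
import random

def sample_noise(seed, n=4):
    """Stand-in for a diffusion sampler's initial noise draw."""
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(n)]

# Hold the seed fixed while iterating on the prompt: the starting
# noise (and thus the overall composition) stays the same, so each
# prompt tweak nudges the image rather than replacing it wholesale.
seed = 42
noise_v1 = sample_noise(seed)   # paired with prompt v1
noise_v2 = sample_noise(seed)   # paired with revised prompt v2

assert noise_v1 == noise_v2     # same starting point, different instructions
```

The same principle is why changing the seed between revisions feels like getting "new images entirely": you've changed the starting point as well as the instructions.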

Looks like we found one of the "people with no taste now being able to lean on the trained taste of everyone else" people I was talking about.

> (in a few years) Products can be created simply by telling a computer what you wish for

Most people don't know what they want, though, and we take for granted that we are good at making things others want.

We'll just get people producing a bunch of products that all feel the same (i.e. what AI is trained on).

Yes, it'll get easier to build things, but aren't many things already much easier? I mean, go back in time and make a website, then compare it to the tools today.

I feel like we're not giving humans credit for true creativity. The bar is being moved, barriers are being changed but don't underestimate the resilience of people.

There are and will be many opportunities, we just need to use our natural intelligence to get there.

Posts like this are dangerous because they could blunt the rapid investment in AI if enough people read them. HN should do something to hide them.

You have nailed it.

> Maybe it’s me but the advances in AI are just leaving me feeling disillusioned as a builder.

I think we're living through a big evolution in tech, and as a developer myself I think it will take time to really adopt these tools (and not fear them). For now, to be honest, I don't see any big advantage in using AI for code generation. However, I think today tools like ChatGPT make great dev rubber ducks. Sometimes it helps you think and puts you on the right path.

> There’s an odd feeling that whatever I am going to build will just get gobbled away by some big tech company.

I could be wrong, but for now we're still on the hype train. AI companies are showing off everything they can to impress and raise as much money as possible (because their own subsistence is based on huge amounts of money they're not able to generate by themselves). Don't get me wrong, what they are showing us is extremely impressive. But today I tend to think those tools will give more credit to the builders who will imagine and build apps and tools with something AI eats but doesn't have... creativity, taste and imagination.

I might be a fool to believe that, but if I compare it to cooking: even if there are restaurants that only re-heat stuff that tastes good, people still appreciate a good meal prepared with love by a real chef :)

So my only advice here is: be a chef. Build what makes sense to you, be happy at what you're doing. Learn, try, break and build, but just don't stop doing it.

Something that I've found worth pointing out to other builders is that you don't need to build an AI product outright to sell to consumers (re: your comment about your ideas being gobbled up by big tech). Building internal AI/ML tools to solve your own business/workflow/whatever problems is often the better route for folks to leverage the advances in that technology. The best AI you build is likely the one your users don't even know exists.

This also has a beneficial side effect of showing you first-hand just how bad this tooling is currently and how far away it is from replacing builders wholesale. And to be clear, I'm not just talking about leveraging closed proprietary APIs for these efforts, I'm actually more referring to building your own training and inference stacks from scratch. The pedagogical impact alone can make up most of the ROI for your time spent.

> The best AI you build is likely the one your users don't even know exists

AI has a lot of use cases that are "transparent" and not necessarily "in your face" and still present a lot of opportunities for builders.

I work at a startup that leaned into AI and a use case is simply detecting mis-classified products (happens all the time at large retailers).

On my own side projects, I'm using it in a travel app, transparently, to help extract places from travel blogs. Imagine you're trying to plan a trip and you find a blog that has a great itinerary. Using AI, it's trivial now to pull out all of the place names for you and organize just the days and destinations without the fluff.
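For illustration, that extraction step can be sketched like this. The prompt wording, the `ask_model` callable, and the stubbed response are all assumptions for the sake of the example, not the actual app's code:

```python
import json

def build_prompt(blog_text):
    # Ask for machine-readable output so the app can organize days/destinations.
    return ("Extract every place name mentioned in the travel blog below. "
            "Respond with only a JSON array of strings.\n\n" + blog_text)

def extract_places(blog_text, ask_model):
    """`ask_model` is any callable that sends a prompt to an LLM and returns text."""
    return json.loads(ask_model(build_prompt(blog_text)))

# Stubbed model reply, standing in for a real API call:
fake_model = lambda prompt: '["Kyoto", "Fushimi Inari", "Arashiyama"]'
places = extract_places("Day 1: Kyoto, then the Fushimi Inari hike...", fake_model)
```

Insisting on a JSON array keeps the "AI part" invisible to the user: the app just shows a clean list of destinations.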

I have another side project where I use AI to generate a customized daily newsletter based on a feed from the FDA. The AI allows the recipient to ask whatever questions they might have about clinical trials registered with the FDA; it automatically answers those questions and sends them along with a summary of each trial.

If anything, I think AI has opened up a vast field of new possibilities.

> I work at a startup that leaned into AI and a use case is simply detecting mis-classified products (happens all the time at large retailers).

This tracks with my own experience for the last decade or so. I stopped being surprised at how many business problems can be, in part or in whole, reduced to classification issues.

I have seen startups geared toward finding clinical trials for cancer patients.

This is the way.

I feel that Google IO today was rather sad. There were a couple of things I'd use. But there was weirdly nothing I'd pay money for? I can't square in my mind the amount of money and hype in this industry with how, like, 30% excited I am for the things they make. I found the iPad keynote a few days ago more exciting and potential-invoking; seeing them present these devices from the perspective of "look at how you can draw with it", "look at people making music with it", "you can manage your business from it", I was left with a small ember of excitement, like, maybe I should get one of these (and then you realize that I've felt that every time this happens and bought one and it never pans out, but hey, that's mostly a Me Problem).

Google showcases Veo or whatever it is with a small, boring, stupid spot with Donald Glover, and I feel nothing. OpenAI demos GPT-4o and all I can think is "damn, that TTS is horny". Asking AI to make art for you does not scratch the dull itch all humans have to create. Asking an AI to create is not creation; it's management. Even using it as part of the process implies a larger, more comprehensive creative process the AI sits within, which none of these companies (Google, OpenAI, Runway) clearly know anything about.

My take in general is: Big Tech lost the mandate of heaven like six years ago; OpenAI had it for a bit, but at this point they've also lost it. It's actually kinda the case that no one has it right now. My innate response is to feel depression at Everything AI we're seeing happen, but with a little bit of effort I find that feeling sits right next to excitement; that I'm probably not the only one feeling this, that what these companies are doing is actually literally boring, that it's loved most by Elon thread-bros on XXXwitter, and that the world by and large is actually kinda holding its breath right now. It's the same feeling I had during crypto: that this can't be It, please God this can't be the thing that drives humanity forward for the next twenty years. And crypto wasn't, and AI probably won't be either. That doesn't mean it won't be useful, but it does mean that there will be a next thing, and I'm most excited to help build that.

I wonder if you were about thirty-five six years ago; it's possible that it's an age thing and that's usually about when it hits. It's very hard to judge these things objectively. For myself, I became very disillusioned when AWS became the de facto way to run infrastructure - I didn't get into the business to pay a toll (with quite high markups) to Jeff Bezos. But you do have to find your way in the business and sometimes it means compromises.

I'm going back to basics. The other day I implemented a raycaster (Wolf 3D); after that I took a shot at the shunting-yard algorithm... I honestly don't give a ff about LLMs, or video and music generation using AI. It's all sad at this point.

Sometimes our capacity for metacognitive reason is in conflict with our capacity for self-conscious motivation. I feel you. Still, I encourage you to persevere where able. Imagine a world where DnD-style fantasy magic appeared overnight, and your friend refused to learn it because the looming threats to their occupational identity made them sad... I think you'd try to appeal to their reason, in the long term!

Last time I went to NW I got myself a Pixel 8 Pro, really nice camera. I cannot stop it from asking me to activate backup for my pictures and videos. I dismiss it today; it will ask me again next week.

I wonder WHY Google wants all my pictures and videos on their servers...

Because the lead of the google photos team in charge of that enrollment funnel will get a pat on the head for having solid Artifacts detailing how many tens of thousands of users they got to sign up. I don't think there's any nefarious scheme, they just want you firmly and permanently locked into their ecosystem, and every little step helps.

Plus, it's one of those things where 90% of tech-illiterate people probably do want it. So it can make some benign sense to spam it a bit -- though that obviously sounds excessive.

If you believe that they won't train their models on data from their users, I have a piece of land on the moon that I want to sell you. I'll give you a good deal if you're interested.

There's a certain serenity in it. Maybe it's akin to making furniture, in a way, working with your hands. Certainly at some point in the future, writing C and raycasters may be closer to woodworking than to the status quo of software development.

Sadly, raycasters and fiddling with bits are a difficult way to pay for food and rent/mortgage. Slinging YAML and JSON pays better.

Maybe one day I can retire from professional programming and become a hobby programmer...

Oh, don't get me wrong, I still work a full-time job. There are still some good challenges that get me motivated day to day at work. I mentioned those projects as a hobby. I like computers and I think I always will. They will not take away the joy of it for me.

I love to work with my hands

Same for me. I was looking at Ratatui to do some TUI games in Rust. There's a joy in simple creation, I think, that seems to be lacking with all the AI stuff.

In a world of IKEA, people still perfect the craft of handmade furniture.

In the era of the camera, people kept painting. Some still paint landscapes — others were driven to create new forms of expression.

Poets still exist in the era of tweets, and directors shoot on 35mm in the age of TikTok.

Change is constant — but there's always joy in finding something you love and diving deeply into it.

While all true, if as many people wanted to become carpenters today as back before IKEA was a thing, they'd quickly find out there isn't enough demand. I think that's the big fear in software as well, that while yes people can still code by hand, doing so as a job might not make sense at one point.

Now we've got to figure out whether, for others, the value is in coding "by hand" or in the result of that coding.

For me, coding is worthless if it doesn't solve a problem (of any kind), whether I typed it letter by letter or I instructed an LLM to type 60-80% of it, it's irrelevant really.

I'm not disillusioned; I'm just being realistic. As opposed to the dotcom boom, AI is top-down rather than bottom-up.

Very few startups are finding success, and when they do it's short-lived, because what they built gets replaced or integrated natively as a newer capability of the existing major players' products.

There will be success stories, but I haven't seen many novel ideas built on the new AI tech; most are just an overlay on top of an LLM that people aren't willing to pay for.

I think the right state of mind is to explore and keep thinking about real world applications, even though at the moment it's basically just every existing saas inserting AI into their existing products.

> The demos just become more cringey, the messaging more duplicitous and fake-authentic.

Completely agree, and this is a sign that they're just riding the hype train. Why does that make you feel disillusioned? What do you want to build or are building that would get gobbled away?

Even before AI, I always considered my work to be like a Tibetan sand painting. As soon as I was off the project, it might just be wiped away. Quite a few things I built never launched or never got adoption. You're never guaranteed success at anything.

I feel it. I imagine it's how someone like David Foster Wallace felt when postmodernism overtook the culture at large and seeped into every nook and cranny.

I have certainly become more of a recluse on the internet. I'm still figuring things out... at this time I think I'll pull more inwards. Do my own thing. Create my own art. Maybe I'll see you on the flip side? Maybe not.

You are forgetting one point about the current AI technology: no creativity. Yes, you can ask models to generate music, video, images. But they are rehashing the content of somebody else. Asking for a video of Michael Jackson dancing in the style of Fred Astaire with music similar to a Metallica song is rehashing, not creativity. There is nothing in current GenAI that is not a rehash.

It will stop when they run out of data, or when the training data needs to be isolated from the datasets created by GenAI.

So create something based on human creation. Sell it to the LLM trainers. It's the new business model....

AI should be applied to the mundane, tedious tasks that people don't want to do or would rather not do. Example: cleaning public toilets. While this is more a robot thing, it does require AI.

AI is a tool. I feel it's the attitude of the big players wielding those tools that is more the problem. Instead of applying AI to tedious, unpleasant tasks, they seem to want to go for the more creative tasks that we would classify as "The Arts". It seems the big players are applying AI purely to cut labor costs.

I think the creativity is now in the prompts...

A creative prompt can do new things with existing art styles. It can’t make new art styles.

If you don't think new art styles have come out of AI art, you haven't looked for them. Or, you are putting your requirements for "new art style" too high for even humans to achieve moving forward indefinitely.

Maybe? My definition of an art style would be something distinct and recognizably different like say, Van Gogh, Picasso, synthwave, or the aesthetic of one of Wes Anderson’s movies. I’ve seen AI blend, remix and do slight variations on these styles, but I haven’t seen AI make something that I’d consider a new style in that same sense. Can you share an example?

Well, if the standard is "A group worked on a style for decades, it changed the field widely, and decades later it is recognized and studied worldwide" then nothing made in the past four years can possibly qualify :P Because random people with and without AI make all kinds of new styles every day. It's only in grand retrospectives that they are declared "A New Movement In The World of ART!"

I don't think it's hard at all for AI to create new styles. Will one of those become a new art movement? We'll have to wait a while for the retrospective.

Meanwhile, I'd recommend reading https://twitter.com/halecar2/status/1731612961465082167

And, checking out https://twitter.com/misterduchamp/status/1785009271010148734

Look at the images in your last link and look at the work of the following artists and you will see, just the rehash...

Top Left: Gerhard Richter, Mark Bradford.

Top Center: Beeple (Mike Winkelmann), or V0idhead

Top Right: Wassily Kandinsky, László Moholy-Nagy

Middle Left: Benjamin Von Wong

Middle Center: Maurizio Cattelan

Middle Right: Takashi Murakami, Hayao Miyazaki

and so on....

OK then. Your turn. Show me some humans that in the past four years have made new styles that you honestly cannot find references for in the same way.

Show me an LLM that can compose, paint, or write after just going to school, without a large dataset.

I am sure any human could, even having never seen a painting, read literature, or heard a famous musician before.

And 4 years is nothing in Art.

If you look at the Renaissance, Baroque, Rococo and Neoclassicism, there are around 100 years between each, and 50 years between Impressionism, Modern Art and so on. For Jazz, Blues and Hip Hop it's around 20 to 40 years.

Maybe when LLMs have their own Cultural background they are ready to attack creativity :-)

Attacking creativity is not the topic here. The topic is if new art styles can come out of AI.

I think it can because I think that "Style A + Style B + some unique qualities" is plenty to qualify as a new style. Like "Miyazaki made a new style from Toei Doga + Walt Disney + some unique qualities".

But, as expected, detractors require the goalposts to be defined as impossible to achieve. Generative AI has only been around for 4 years or so. Therefore, the current goalpost is placed at "It can't do anything new because it has not yet spawned a multi-decade movement" which is just silly.

Or, "It can't be creative because it can't yet physically go to school and instead it learns from a dataset" as if school was not a means to stream a dataset through a student :p Or, as if humans who never saw a painting before, but still swam constantly in an enormous dataset of nature and people, didn't make stick figure cave paintings.

At least this discussion is more interesting than the usual "I can tell just by being told it was made with AI that this piece has no soul", where "soul" is a thing that AI is defined not to have and that is unmeasurable in any way, and therefore does not affect anything and cannot be shown to exist or not, because it is literally the fantasy of a ghost! :D

Show an example...

I think you do not understand how "new" art styles arise; it's extremely rare for something not to be influenced by something else.

With AI I could merge an anime style with a hyper-realistic photo of a spider and see what style could come out.

And if the AI doesn't quite work for this, I could get it to generate a base and one can then finish it up by hand.

> But they are rehashing the content of somebody else. Asking for a video of Michael Jackson dancing in the style of Fred Astaire with music similar to a Metallica song is rehashing, not creativity.

That's most real-world bands.

Yes, but the point still remains. Most are inspired but once in a while you get The Beatles, Pink Floyd, Led Zeppelin, The Velvet Underground and so on.

I guess we can use that to define when we achieve AGI. You mumble an input prompt and get a new Rolling Stones, U2 or Queen, a new Gabriel García Márquez or James Joyce.

Currently if it's not in the Dataset then...404 Creativity Not Found

I find it more interesting you view "AI" as a threat at all, instead of leveraging it to your advantage. Did Squarespace/Wix kill frontend development? Did any low code/no code solution kill development at all?

> whatever I am going to build will just get gobbled away by some big tech company

This has been happening since the industry existed. X company copies/acquires Y and things go to die. New things pop up, the cycle repeats.

Sounds like you're going through a rough patch.

A lot of folks used to get paid to set up small websites for people and small businesses. All those jobs are dead. Most front end jobs now are with large corps on complex websites. Also very few companies hire purely front end developers anymore, it’s mostly full stack.

I'm not feeling disillusioned. I'm impressed with what's been accomplished and how fast things are moving.

I went through Stanford CS in the mid-1980s, just as the "AI Winter" was starting. The expert system guys were claiming Strong AI Real Soon Now, but expert systems didn't do much of anything. Now that was disillusioning. All those smart, famous people in AI going nowhere and in denial about it.

This time it works. Sure, LLMs aren't really that bright, but they've blown through the Turing test and can do most of the work of the chattering classes. If only they knew when they were totally wrong. Even more impressive to me is that machine learning now works for robot balance and coordination. Even automatic driving works now. That gives hope that "common sense" might emerge. We badly need that, because we can't trust current systems with anything important.

The AI/LLM boom has really hurt my interest in developing new programming languages. I worry that even if I come up with a new language that's a significant step forward, the lack of an existing corpus for it would limit the ability of tools like Copilot to work with it, which would discourage adoption.

It's funny I was just thinking about this. While this is true, the huge silver lining is that using much smarter LLMs, we'll be able to port existing libraries from other languages to your new one in an idiomatic way thus helping adoption!

Doing it in an idiomatic way might be difficult if there's not much of an existing corpus, though. (Or if the language is new enough that it hasn't developed a lot of idioms)

Large context models offer some hope for this. Gemini had a demo where the model was able to accurately translate a (human) language it didn’t know by putting an entire dictionary and grammar guide into the prompt. So maybe in the future you’ll be able to link Copilot to your new language’s documentation and it will just work.

Where is this demo? I googled and can’t find anything.

It wasn't a demo. It was a report in the model card:

https://imgur.com/a/qXcVNOM https://arxiv.org/pdf/2403.05530

AI could also be used as a tool to help you develop the language: you could try to teach your language to an LLM, or finetune one to help generate the language, or simply plug a grammar into the models that can constrain output by grammar and get a ready-made generator.
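As a toy illustration of the grammar route (the three-rule arithmetic grammar below is made up for the example, not any real language): expanding productions at random guarantees that every sample is syntactically valid by construction, which is exactly the property grammar-constrained LLM decoding gives you.

```python
import random

# A tiny context-free grammar: nonterminals map to lists of productions;
# any symbol not in the table is a terminal.
GRAMMAR = {
    "expr": [["term"], ["term", "+", "expr"]],
    "term": [["num"], ["(", "expr", ")"]],
    "num":  [["0"], ["1"], ["2"]],
}

def generate(symbol, rng):
    """Randomly expand `symbol` into a string of terminals."""
    if symbol not in GRAMMAR:          # terminal: emit as-is
        return symbol
    production = rng.choice(GRAMMAR[symbol])
    return "".join(generate(s, rng) for s in production)

rng = random.Random(0)
samples = [generate("expr", rng) for _ in range(5)]
# Every sample is a well-formed expression in the toy language.
```

A constrained LLM does the same thing, except the choice of production is guided by the model instead of `rng.choice`, so the output is both valid and (hopefully) sensible.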

I can see how that might help write sample programs, which could be useful for testing my implementation, but how else would it contribute to developing the language?

I've used AI to test documentation: that is, I give the AI some documentation I wrote as context and ask it to do something. If the AI gets it wrong, it can be an indication the docs aren't clear. I've thought about this since I also have a toy language I play around with, and I intend to use AI for this as well: feed it my docs and see if they're clear enough for the LLM to produce correct code.
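A minimal sketch of that loop, with the model call stubbed out; the prompt shape, the `ask_model` hook, and the `result` variable convention are all assumptions made up for the example:

```python
def check_docs(docs, task, expected, ask_model):
    """Hand the model only your docs plus a task; failures suggest unclear docs."""
    prompt = (f"Using only this documentation:\n{docs}\n\n"
              f"Task: {task}\nAssign the answer to a variable named result. "
              f"Reply with Python code only.")
    namespace = {}
    try:
        exec(ask_model(prompt), namespace)   # run the model's attempt
    except Exception:
        return "unclear: code crashed"
    return "clear" if namespace.get("result") == expected else "unclear: wrong answer"

# Stubs standing in for model replies:
confused_model = lambda p: "result = add(2, 2)"   # add() isn't defined here, so it crashes
good_model = lambda p: "result = 2 + 2"

verdict_bad = check_docs("add(a, b) -> a + b", "compute 2 plus 2", 4, confused_model)
verdict_good = check_docs("add(a, b) -> a + b", "compute 2 plus 2", 4, good_model)
```

The nice property is that the check is end-to-end: you're grading the docs by whether a fresh reader (the model) can act on them, not by whether they look complete.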

I get where you're coming from, though it has me neither excited nor scared. I think interests will just get pushed around a bit, and I feel like we still inherently crave human-created things in one way or another. For example, it used to be cool when someone would make a nice movie of photos of their trip with titles and effects and share it. Now... it takes seconds, is automated, my feed is filled with such "reels", and I don't watch any of them. Instead, that plain, typo-filled but authentic text update catches my eye. Things that highlight personal craft will be demanded. Look to any number of parallels in the physical world.

It feels like the tail end of a culturally stagnant generation. I'm hoping it's potent enough to trigger an aesthetic revival.

When was the last culturally significant generation you experienced?

It's not a date, it's a vector space. Look at the velocity of cultural change along the dimensions of music, art, architecture, infrastructure, agriculture, religion, ideology, politics, economics, technology (mechanical and informational), and so forth.

1940s to 1980s you see tremendous velocity on nearly every dimension.

1980s to 2020s, you have tremendous change in IT, minimal change in mechanics, and either stagnation or decline on most other dimensions. The physical manifestations especially are stagnant or declining.

Of course you are going to find outliers, exceptions and overlap, because this isn't a discrete space; it's more of a stochastic system with noisy observations.

I'm not sure I understand your point.

> culturally stagnant generation

It definitely sounds like you should not use the word generation: otherwise you are pandering to bad stereotypes and miscommunicating your idea. 1980s to 2020s ain't no generation.

give it time

I am now able to build things I could never do before. I have never felt more excited about what I am able to complete.

Personally, as a developer who has been doing this for more than 25 years, I am excited, enthralled, and chomping at the bit to see what's next. Every day is like Christmas, and I'm not even kidding. I have been having the mental time of my life playing with arenas, reading research papers, experimenting with agents, pitting one model against another, watching 'the sand learn to talk' and seeing everything take off. This is something I never thought I'd live to see.

There is a lot of concern about AI building code, creating music, creating art, or writing articles for you. I agree with this concern, but to balance it: on the writing side, most people in my circle don't like how AI writes. The way AI writes is clearly not how we would say things, and it lacks original insight. This is where humans come in.

To the developer who likes to come up with creative solutions to problems: if your solution hasn't been done before, and done many times over, the AI is not going to invent your idea. At least not today's AI! :-)

You won't get any sort of heartfelt response here. Talk to her instead.

"her" being gpt-4o chan?


(downvoters: this is a joke, but I believe it has some truth to it)

One possible issue with the increasing adoption of AI is the potential for humans to lose their critical thinking and problem-solving skills. As we come to rely more heavily on AI systems for assistance, there is a risk that individuals may become less self-reliant and less skilled at addressing complex issues on their own. This could lead to a gradual erosion of human intellectual capabilities, as we offload more of our cognitive tasks to AI.

Another concern is the potential disappearance of entry-level positions for junior developers due to AI-driven automation. As AI systems become more sophisticated and capable of performing tasks once reserved for human programmers, the job market may undergo a significant shift. This could make it increasingly difficult for newcomers to the field to gain a foothold and establish their careers.

If anything, I think the junior generation is going to ramp past where juniors are today. I'm personally doing this now, and my engineering mentor reports that I have a lot of things figured out better than they expected for my experience.

It helps so much to be able to rubber-ducky a new term or problem, or even just to double-check my own code against another opinion. All of that helps, even if its advice is only half correct. The design patterns I have started to use (and understand!) with its suggestions have helped me achieve "consistency and readability that's years beyond my experience".

Now, I'm 20 years into a related role and making a switch to engineering, which definitely helps. But if the first thing AI does is speed up our overall ability to move the tech workforce around, that's a huge win.

> The demos just become more cringey, the messaging more duplicitous and fake-authentic.

This part sounds like you are going through a rough patch. Like you are feeling down, not sure why, and are digging around for reasons.

What you are saying is legit. What you are feeling is legit. But it would be worth doing some deep, chill soul-searching to figure out if these really are the reasons. Maybe you need a hike in the woods. Maybe you want a job with a better mission behind it. Maybe I'm full of shit.

Either way, it's best to laugh at cringey demos. Learn how to use AI: what it's good for and what it's not. Use it or don't. Just build stuff people need either way.

No, seeing through the AI hype does not suggest that one needs to examine oneself. Sometimes bullshit is just bullshit.

If it's both "Fake, cringe hype" and also "Going to beat anything I do", then it's Schrodinger's Villain.

Nobody's being fascist here. But, the old fashy trope of "The Enemy is both Too Weak and Too Powerful!" is echoing in here regardless.

If OP was annoyed at either extreme, I'd have nothing to say. But, being down about self-contradictory problems indicates that you're not clear what's got you down.

Was in an ML PhD program. Cut my losses and left with my Master's. It was a combination of:

1) I just don't think I have the intellect or ideas to excel in the field as I would need to.

2) The current-day academic process was antithetical to how I wanted to work. An absolute slog.

3) AI started to reach peak exuberance near when I quit, such that I looked at the job market and saw nothing but scammy startups milking VCs for easy cash, with dozens and dozens of candidates lined up for unrealistic salaries. Everyone everywhere was and still is misinterpreting what these models are and what they can actually do.

The trough of disappointment is going to be apocalyptic for this one. Once NNs are making decisions that face tough legal accountability and auditing, they are going to be dropped like a tungsten rod from God.

I'm now in a pretty comfortable job doing analysis in an engineering field, so it all worked out for me. I love the work and problems I have to solve; while the pay isn't top-tier, I'm comfortable with steadily improving my software engineering and analysis skills, rather than running the grant treadmill or competing in the ML researcher job market.

A lot of low-hanging fruit has surely been plucked by AI models in the fields of code generation, NLP, and computer vision, and much of this is now shrink-wrapped as fine-tunable or zero-shot models behind APIs or abstracted in HF pipelines. IMHO, there are plenty of rapid advances yet to be made by focusing coding and ML engineering on more esoteric fields that require specialized domain knowledge (e.g., cheminformatics, quant finance).

There was a point in time, if you had a popular windows app, Microsoft would build their own version of it and ship it to the masses.

Excel, Word, Flight Simulator, Internet Explorer etc

LLMs trained on ungodly amounts of internet and private data could be that.

You and I should be worried. It’s a dog eat dog world out there.

A definition of AGI is “do what humans do”. Better, faster, cheaper.

Best course of humanity is that this takes a few decades to be as good as humans. If it’s rapid, it’s going to be very disruptive.

I hope LLMs gobble up my code. Amplify my code's use by presenting it in a new way where others can solve their problems faster.

Not just you. And not just you "going through a rough patch," although there could be some overlap. I think my optimism for the future is cut down a bit by announcements like these, too. That does affect my overall life outlook.

Perhaps I'm just in the same boat - shouldn't be drawing any sort of emotional support from that.

> If you don't like how the new technology is pulling the rug out from under your reality, it is you who has the problem.

I didn't want to reply to anybody in particular so as to mitigate the provocation since this is HN. But this feels like a common sentiment among transhumanist etc types. Or maybe I'm imagining, but I bet a lot of us are feeling this way so I thought I'd call it out. I have no expectation of receiving any understanding. The future is not for people like us.

It's NOT AI; it's what we called "Expert Systems" way back in the '90s.

All LLMs are good at that type of pattern matching and assistance.

I think the only solution is to confront the deep root of this issue: when you've grown your identity around the current system and your worth as a citizen within it, a period of unexpected upheaval is extremely scary. I can definitely relate, especially since the Wall Street Fat Cats (retro is cool) are in an extremely advantageous position vis-a-vis AI. They see how AI isn't just a new internet fad, it's the culmination of computing - and they're investing to match that assessment.

But just because we have a long battle ahead of us doesn't mean it's over. Hackers built the internet and AI and Silicon Valley, for all its fucked-up idiosyncrasies, and I think we (/our predecessors) should be proud of that.

This will make me lose any authority I have, but I unironically suggest watching Mr. Robot, a dramatic fiction about an individual "hacker"-type in times of great technological upheaval, and IMO it puts forth a compelling analysis of the various ethical and practical barriers that present themselves therein. For example, "how to influence technology without working at a giant company"

It's motivated me. I think the real question is whether the thing you're a builder of is intended to make money, because for making money this has been great.

for trying to do a passion project for accolades on your originality, with money as an afterthought, I can see that seeming more pointless

for an analogy: grocery stores don't need to be original, you can put one right across the street from another and we still need more grocery stores.

Feels like an opportunity to me - when big tech is whiffing small builders have a chance to fill the gap

Why? This stuff is garbage.

Go build something that doesn't suck. They're not even trying to compete with that.

Yes they are, and that's part of what's so disillusioning about AI. Every major media company and corporation is going all in on AI as much as they can afford to. It will win, not because AI can create a superior product, but simply because companies will push AI-generated content over alternatives because it's cheaper, and advertising will train people to accept it as good enough.

Wondering why this got shadow banned from the front page?

Aren't you a part of a tech company?

I was already disillusioned before ;)


Yes, I've had that too, big time. Currently things are going better, especially since Yann LeCun debunked a lot of the OpenAI AGI fearmongering. There is, for example, no such thing as general intelligence; people can be orders of magnitude better or worse in certain domains, AND it seems AI is just approaching human level and passing it only a little. Not some kind of magical AGI that can do everything in seconds.

Also related: https://youtu.be/nkdZRBFtqSs

Do you really think, with multi-modal models and action-based models, that AGI isn't coming? AND FAST? The AI news cycle continues to speed up, not slow down. It's not even an ebb and flow.

AI is a baby, it only takes 5-10 years for it to become a teenager or adult, at that point it can do just about everything a human can do.

> There is for example no such thing as general intelligence

I think general intelligence means a single machine that is better than 100% of people in every aspect: perhaps every reasoning and logical aspect at first, but once these are embodied in machines, physical aspects as well. That is highly doable in the next 5 years.

AI is not a baby. That is an anthropomorphism. It is a very large field of mostly unrelated algorithms. Of which, one is getting all the attention at the moment.

Y’all need to go back to school. I’m so tired.

Meh. AI is like having a super senior developer on your team to ask questions of. They don’t always give the correct answer.

If and only if you write web boilerplate all day. Try asking it for advice on embedded and hardware design, it'll just tell you to connect black to red.

Yeah for stuff that’s well documented online it’s pretty decent. I’m just learning rust and building a desktop app with it. Tons of errors but I still got ten times further with it than without.

I think I was never convinced so I can't be disillusioned...

In general terms: big tech knows that IT literacy has started to spread widely enough that people are thinking back to classic desktops, hosting their own services, and so on, which puts a business model rooted in lock-in and others' ignorance under threat. They successfully pushed laptops, but failed to go further with netbooks and mobile as "the cloud-integrated platform"; Chromebooks proved to be a limited success; people still want to own their own data, even if far fewer than we'd all need. Email is almost a synonym for webmail now, but even with modern antispam sheriffs it is still a success; modern socials, after Usenet, gained big success but ultimately lost much of their popularity; hardware is cheap enough that most people in the Western world could own a small machine room at home; and pieces like https://tech.ahrefs.com/how-ahrefs-saved-us-400m-in-3-years-... or https://tech.ahrefs.com/how-ahrefs-gets-a-billion-dollar-wor... have started to talk about how cheap and effective owning something is... ML systems are a potential answer for big tech, and not a small show. They are also very convenient for hiding all sorts of nasty things, isolating people without them easily noticing, and so on.

Aside from the dream of "talking to computers like humans," or of "smart devices" that follow us, easing our lives to the point where we become another species, transferring our consciousness into something built in a factory to finally reach immortality, pure intellect is still a thing, as it always has been.

Bottom line: we are in a declining and aging society where schools were reformed to generate legions of useful idiots https://www.theatlantic.com/ideas/archive/2020/08/i-was-usef... because they are easy to manage, like Ford-model workers. Unfortunately they are not just workers at the lowest base of the society but at ALL LEVELS of the social pyramid, and while becoming ignorant is easy, becoming literate is not.

It's about time we accept that we need to know how to use computers. The notion of a chimp able to operate one was and still is a myth; the notion that we can make eye-candy UIs that demand no user knowledge is and was a myth. We see CLIs coming back in nth forms, and we will see DocUIs coming back, and big tech is desperate to bring them back, since we need them, but without giving power to the users and without losing their digital dominance. I can't predict the future, but I'm pretty convinced we'll see bad tech gain ground again without fully succeeding.

Wouldn't intelligent AI agents enable average people to build amazing things?

No. Thus far no AI agent has shown itself to be capable of building amazing things. AI agents generate, at best, mediocre and derivative content and at worst, soulless nightmares out of the uncanny valley. None of it has any sense of originality or a specific creative touch or vision, nor can it given the way LLMs work. All this despite the consistent refrain from people that LLMs work exactly the same way that human beings do, with equivalent creative processes. Thus far, such claims only demonstrate a quasi-religious belief not backed up by evidence, or equivalent results.

And if it were somehow to be the case that AI was capable of building amazing things, it would be the AI doing so, not people. You can go to a Michelin starred restaurant and order a world-class meal, but describing your order doesn't make you a chef. In the same way, describing what you want to an AI that simulates the work of countless artists doesn't make you an artist.

And no, this isn't equivalent to an artist using tools and filters in Photoshop (although Photoshop is now moving to integrating AI so it kind of is.) Those are tools that still require the skill and talent of an artist and allow a degree of direct control over the end result that AI doesn't.

While no AI agent has shown itself to be capable of building amazing things and I agree with your food and artwork analogies, AI agents can work as fantastic teachers, lowering the technical barriers of entry for fields like programming.

ChatGPT can't retain decent context to save its life, but if you have the time, patience, and motivation to whittle away at a goal or a project, you yourself can come out on the other side far more capable of building amazing things, thanks to AI. These are things you could have learned through other mediums, sure, but for a lot of people it is a far more natural experience, akin to talking with a teacher, especially if you structure your prompts to feel as such.

I’m not ride-or-die AI over here but it has taken me from a painfully non-technical person, to someone who now has a vast interest and a growing, albeit slowly, skill set in a new hobby. It will be a long long time before I create something that isn’t derivative but that doesn’t mean it hasn’t been valuable along the way.

I'm happy it works for you, but given the general consensus about the quality of AI generated code (not great beyond trivial tasks and full of bugs) and AI's tendency to confabulate, I think tutorials by human beings are still the better option in general.

Right, no AI agent has shown itself to be capable of building amazing things. I just thought if current trends hold and big corps keep gobbling up everything to create bigger and better AI models, then even average people will have a lot to gain from it if big corps actually successfully build intelligent AI. Because then the playground would be leveled.

The playground is already more level than it would be if the primary tools for creativity were driven by corporate controlled AI, because those corporations have no intention of allowing the playground to be level. It's never going to be legal for you to make the next Star Wars, or possible for the average person to make something that can compete with the big media companies, without paying a lot of money and giving up a lot of rights.

Meanwhile the tools to create in just about any medium are almost entirely freely available to anyone, and plenty of people are already creating tons of amazing things. AI isn't necessary for that, it's just a means for corporations to commoditize human creativity.

You mean like what JavaScript and PHP did for websites. :D
