My small, no name company has lost its mind with AI (teamblind.com)
101 points by donsupreme on July 6, 2023 | 94 comments


CEO openly says AI will cause the loss of “millions of jobs”.

My advice, figure out how to get the CEO fired and tell him he was right.


Honestly, by the sounds of it, the quality of decision-making won't be notably different between the current CEO running things and things just being decided stochastically by ChatGPT.

Anyone as old as me remembers the sudden transition in the MBA mindset from "lol you nerds are so cute with your 'web'" (when business people thought the internet was just where you got your old college email) to "OMG WE NEED A WEBPAGE NOW", when suddenly every business _had_ to have a web page (most of which didn't do anything) so they could say they had an "internet strategy".

The same mindset shift is apparent with respect to AI. Every executive currently has their LinkedIn feed dominated by posts from influencers like "20 essential ChatGPT prompts to redefine your marketing strategy!", "50 prompts you need to know to survive in M&A" etc. so they are suddenly scrambling to put the AI magic sauce on everything so they can say to their investors and board they have an "AI strategy".

"This too shall pass" as the Persian saying goes.

There's no doubt that the sudden increase in capability of some of the leading AI is going to bring about a step change in lots of areas, and I think things like huggingface being a center for open collaboration and sharing of models etc. are genuinely radical and transformative, in the same way that when the open source movement took off it generated a tremendous amount of collaboration and innovation. However, traversing the gap between "I type things into ChatGPT and treat it as my outsourced roomful of monkeys" and "thoughtful application of AI to some problem domain" requires people who actually have some quite specialised skills and are in quite high demand. I don't see that gap closing for a while tbh.


Also, there’s the recurrent cycle of AI Summer and Winter. Each time, someone figures out a new piece of the AI puzzle, we get new tools that are useful but don’t deliver on the big hype, AI funding drops as we run out of ways to skin the new cat, we’re left with some useful tools that the world acclimates to until the cycle repeats.

Maybe the massive growth in computing resources speeds up the seasons now, or has prolonged this one, but I don't think we're on the exponential part of the curve to AGI yet.


Once every business can invoke an LLM in one line to create adequate apps, assuming we reach that stage (possible), isn't that going to kill 90% of VC B2B SaaS, especially the YC-style round robin of funding each other without generating actual value? I think the CEO's SWEs are the least of his concerns.


Theoretically, if we get to the point where a quality app could actually be generated entirely this way, then someone will inevitably ask it to create an app that recursively asks it to create more apps, thus eliminating the need for anyone to even ask it to create an app at all.


And whoever runs the API will be laughing all the way to the bank.


Where do I sign up!


> - got the OK to feed entire repos and proprietary information into ChatGPT without a care because “we got the pro version”

> Our main product is a CRUD app that is designed like something straight out of 2010, complete with outdated frameworks

If it's a small startup with a simple CRUD app, why is this a problem? 99% of startups die from lack of execution, not because someone stole their proprietary secrets or source code.


I don't think it's a knock on the approach, but rather highlighting an opposition between outdated tech on one side and ultra-modern, unproven tech on the other.

IOW, hinting that using proven, modern tech would have solved the issues the company was having more readily than jumping straight to AI.


I'm not sure there's a conflict here. A lot of my GPT4 usage is pasting in old crap and saying, for example,

"please rewrite this in modern React".

It generally works.


Correct, not all things listed in the featured article are bad.

Exploring ChatGPT/copilot for developer productivity also seems correct, for example.


Simple CRUD apps don't really need a startup to be fair, they could be done as a side project these days.


A 30-year-old “startup” I consulted with once was a very badly designed SaaS app that had somehow been in business for three decades. At first, I couldn’t figure out how this awful app could stay in business when there were competitors on the market with better UX/UI and marketing.

But then I learned that the bad UI had very specific features that professionals in their target industry desperately needed, and that the bad UX was so deeply embedded in their workflows that changing any of it would collapse entire multimillion dollar businesses.


I'm exploring creating a CRUD/SaaS/or similar type of business.[1]

I've settled on using as plain, simple, and unadorned a UI as possible, regardless of what the product is, as long as it is easy to use.

My reasoning is that if people need it to be pretty before they'll pay for it then it's not really a pain-point that the product is solving.

OTOH, if people are willing to hand out cash for the functionality then I'm onto something.

[1]Want to do B2B, but will depend on what I can come up with.


Maybe; depends on how much biz/dev and domain expertise is needed. Either way, I don't understand the complaints. OpenAI isn't going to steal your crappy webapp.


Depends on how simple's simple. Having the bandwidth to troubleshoot an unexpected problem for a paying customer is a rough line I'd draw between side project and more than a side project.


A business requires sales and support, not just dev.


Correct, and this is the reason OP is a SWE drone (120k TC being a momentary blip), not a cofounder of a company IPOing tomorrow.


The absence of real applications incorporating this stuff is damning.

Where are the killer apps?


ChatGPT is already a killer app, for me at least.

I’ve used it to code, learn, generate tweet threads, create upwork job posts, summarize large amounts of text.

With the right prompt, it's a far better search engine than Google, especially for coding and health-related queries.

If you can be a Google killer, you don’t need any other killer apps.


ChatGPT is the killer app. It’s a Google killer. It is better than the SEO listicle garbage filling the internet. Even if it’s not always accurate it’s still better in a lot of circumstances. There’s a reason Sundar rushed Bard out of the gate even though it is clearly inferior.


Step 1. Humans write copy for humans to buy their garbage; humans counter by tuning out and switching channels.

Step 2. Humans write SEO copy for machines to rank them higher.

Step 3. LLM writes copy for machines to rank them higher.

Step 4. Human uses LLM to try to distill the LLM generated SEO spam for any remaining signal.

Also to your point:

> SEO listicle garbage filling the internet.

the feeling that the LLM is better than what you described is going to be very temporary; then the mountains of LLM-generated bullshit are going to overwhelm even an LLM's ability to make meaningful sense of them.


You're missing the point. If we want to know something, we won't even have to google it; we will just ask an LLM. There will be no market for websites full of it because we can just directly ask it to answer our questions.

The only "if" to all this is if we will destroy the LLMs by feeding them their own diarrhea. I expect a sort of natural selection here to play out, especially in the open source space. Ones that are trained on LLM generated blogspam will probably, I expect, get outperformed by ones that are trained on genuine information, or at the very least ones made using new techniques that adequately filter noise.


> If we want to know something, we won't even have to google it; we will just ask an LLM. There will be no market for websites full of it because we can just directly ask it to answer our questions.

How will it learn anything new?


> Ones that are trained on LLM generated blogspam will probably, I expect, get outperformed by ones that are trained on genuine information, or at the very least ones made using new techniques that adequately filter noise.

Yes, humans are notorious for only seeking out high quality, accurate data, especially when it conflicts with our priors.

To say nothing of our ability to assess the accuracy or truthiness of information in the first place (look at how many people take, on faith, that ChatGPT isn’t wrong as often as it is right).


But there's still no way to get an LLM to only output "fact", because that's not a property of language.


That's also true of a web search engine; but an LLM can (in principle, not saying it's there yet) spot inconsistencies in the source data, to notice disagreement.


I’m not following. If ChatGPT gets worse, OpenAI can simply not update it. Or revert to a previous version.

For Google, they’re at the mercy of whatever the internet has.


ChatGPT is also at the mercy of whatever the internet has. Including more and more of what it was used to generate.


It isn’t though. Like I said, if the model gets worse OpenAI can simply not release a new version.

You also have to consider the money angle. As using ChatGPT and other chatbots becomes more popular, people will stop producing garbage internet articles because they will be less popular and therefore less profitable. Bloggers who enjoy writing will continue to do so because it was never about the money, they just enjoy writing.

Further, the internet is only one small portion of information available to train on. There’s a lot of other data out there, including real-world conversations.


> Like I said, if the model gets worse OpenAI can simply not release a new version.

So now it's got great information about the Model T Ford but knows nothing about our new Mars colony?

I don't think "just don't update the model" is a likely option.


You don’t update models to add new information. That’s extremely inefficient and susceptible to catastrophic forgetting. If you want the model to have new information, you update an offline knowledge base. So yes, you can simply not update the model.
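To make that concrete, here's a rough sketch of the retrieval pattern (my own illustration, not anyone's production system; retrieve/answer are hypothetical names and llm stands in for whatever frozen model you call):

  from typing import Callable

  def retrieve(question: str, knowledge_base: list[str], top_k: int = 3) -> list[str]:
      # Naive keyword overlap; real systems use embeddings, but the principle
      # is the same: new facts live in this store, not in the model's weights.
      words = set(question.lower().split())
      ranked = sorted(knowledge_base,
                      key=lambda doc: -len(words & set(doc.lower().split())))
      return ranked[:top_k]

  def answer(question: str, knowledge_base: list[str],
             llm: Callable[[str], str]) -> str:
      # The model stays frozen; fresh information only enters via the prompt.
      context = "\n".join(retrieve(question, knowledge_base))
      return llm(f"Context:\n{context}\n\nQuestion: {question}")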


Huh? You won’t update the model, you’ll just give it new information? The exact concern is that the new information will be garbage aimed at pushing the model to produce certain output. Much like SEO spammers do to manipulate Google search results.

"Just don't update the model, only feed it new information" is exactly how to get to the outcome of concern in this thread.


Yes, updating the model is different from updating the knowledge base the model uses.


Great, so you've updated your knowledge base, it's got garbage targeted to make it attractive to the model, and now your model is outputting garbage. It's the exact same problem Google has fighting the SEO spammers. Now the model is significantly less useful, exactly as suggested.

We've already seen exactly this happen with search. There's no reason to believe that LLMs are immune.


I understand what you are saying but to me it sounds very handwavy and (not to be disrespectful) naive.

How would LLM upstarts be able to counter the massive commercial interests? As with Google, they will also succumb to preferring money over usefulness, at the latest when they have a wide user base.

Distinguishing spam from signal with LLMs is also an even less proven approach.

And not updating a model means that it will be stuck in the COVID-19 era forever.


I’ll push back on this, at least in its current iteration. I just asked it to list some restaurants near my apartment (major intersection in San Jose) and it wasn’t particularly close. While there are several restaurants less than half a mile from the intersection, ChatGPT listed restaurants from several miles away.

Given the “weights in a matrix” architecture of ChatGPT, I’m not sure it’s possible to store enough data to make the query practical to answer. Say there are a couple hundred intersections in my city. You have to store the token of “restaurant name” “close to” “intersection” for each intersection. I don’t know the size of Google’s Maps DB, but I would guess it’s several Gigabytes per city. From my understanding of the theory, you would need to store BOTH the LLM weights AND the Maps data for ChatGPT to have a shot at generating good answers for that type of query.

I’m happy to be wrong here. If I’m misunderstanding something, please let me know.


Well, you’re right. Since ChatGPT isn’t hooked up to the internet, certain queries aren’t good use cases. Adding maps info to a language model would be a pretty bad idea (even if it didn’t hallucinate) since it can change at any time, which would require more (expensive) training.

What Bing does is to use your query to search the web and use the top N search results in the context window for the chat.

However, I’ll push back on your pushback. ChatGPT doesn’t need to be perfect to be a killer app. It is highly flawed. Maybe it was a bit too strident to say ChatGPT will kill Google search, but it’s strictly better for a lot of squishy queries that don’t have a factual basis.

How can I convince my boss to give me a raise? gets you a listicle on Google and a highly specific response on ChatGPT. And if some of the advice doesn’t apply, you can continue directing the conversation. It’s an idea generator, even if some of them are bad or don’t make sense.


Tangentially related, I had GPT-4 plan the sightseeing on my latest holiday.

It picked out the interesting places of note, and when I then asked it to order them in a way that made sense walking-wise (so I wasn't backtracking), it did so without a hiccup.


You're not wrong at all, it doesn't know everything.

But it does know a lot of things and can be super useful. Personally I think a search engine is a terrible use case, unless you use the Bing-enabled version, or Bing Chat.

I've used it to write pretty complicated scripts where I had no idea what I was doing, rebuild crusty httpd configs from first principles, explain disassembled code, explain regular code, explain configs, read dmidecode and lspci output for me and make a PCIe slot report... It's bloody brilliant.

Other: read and translated my blood tests. Accurately!


> ChatGPT is the killer app. It’s a Google killer. It is better than the SEO listicle garbage filling the internet.

And yet it hallucinates URLs when I ask it to cite its sources. It's still Google search with a little patience for me.


It is not a direct replacement for search engines, but it will seriously dent their market share.

If you are looking for a location on the internet, use a search engine. LLMs do not memorise the data sources verbatim.

If you want to know how to do something, it will normally give you a better answer than you would find by googling around multiple blogs. No location on the internet needed.


> Where are the killer apps?

Even though it is just a bare kernel (LLM), GPT-4 is a better teacher than any I've ever had. Who can I ask at 3AM on a Saturday night to explain ancient Greek philosophy using Dwarf Fortress mechanics? And iterate with infinite patience and focus on any follow-up questions?


This is one of the most compelling use cases. It’s dramatically reduced research time on certain topics for me. If I have a “question” I don’t go to google first anymore.


You must reveal the results. Armok demands it.


I think we're just scratching the surface of apps. As we figure out how to integrate this technology in novel ways (not just "here's my app + AI!!!!"), it will open new doors.

Shameless self-promotion: I'm trying to build some of those intermediary pieces. I have authored an open source library[1] that lets businesses externalize LLMs to their users, so that users can use natural language to query their own data in the business's database. The goal is to simplify UIs with more natural language components, without needing to send your data to an LLM.

1. https://github.com/amoffat/HeimdaLLM


Adobe Photoshop, Canva, Final Cut Pro, Notion, Microsoft Office etc.

AI is popping up all over the place if you actually pay attention to the products.


Adobe’s Firefly AI and other Adobe AI is the result of billions of dollars of investment over multiple years. Lousy source (sorry): see the number of Adobe-authored Two Minute Papers (YouTube channel) episodes about AI-based graphics over the years.

The rest of the examples shared are mostly just direct integrations with the GPT-4 API, which should be trivial for almost any startup to do. I.e. it’s very likely not going to be game-changing for a CRUD company like OP’s case.


Where are they? Anything involving creativity.

There is a reason most creative professionals are screaming to the nine high heavens of hell for dear mercy, because they are in the best position to see the writing on the wall.

The current deconstruction of intellectual property rights is damning and must be rectified, but even putting that aside "AI" is still going to eliminate the vast majority of creative occupations because a supercomputer is still cheaper than a human on the payroll or invoice.


To me the potential killer app is search; you can already find better answers fast in many verticals by asking ChatGPT than by going through the tons of SEO spam on Google.


Health and medical queries are a big one for me. The top results in Google are the same cookie cutter responses from the same 5-10 domains. There is no expertise there - just an article ghost written by a freelance writer with a random doctor’s name attached to it.

With the right prompt, ChatGPT gives me far better insight and can even point out academic papers to back its claims.


There are no health issues that anyone should ever be using ChatGPT for; the hallucinations are very real, and often severe. Not that you should be googling your symptoms either: there’s a reason we put medical folks through more than a few minutes of training.

LLMs are absolutely worthless for medical information.


I’ve gone to multiple doctors for my problems and most of them have only made it worse. Most doctors I’ve met get dumbfounded when the problem is anything more complex than a straightforward case.

And I say this as someone married into a family of doctors.


> LLMs are absolutely worthless for medical information.

ChatGPT gives you an initial hint/direction, and then you can research the specifics by finding actual information in a search engine and seeing whether it is hallucinated or not.


I wouldn't trust them with my life, but out of curiosity I did put some of my own medical notes into 3.5 a few weeks back.

It said basically the same thing as the actual doctor.


> can even point out academic papers to back its claims

Please, please tell me you at least read these academic papers to make sure they claim what ChatGPT said they're claiming?


They'll come as more engineers understand how to integrate LLM tech more effectively into products.

Also, killer app: ChatGPT. Fastest growing website ever IIRC.


> CEO openly says AI will cause the loss of “millions of jobs”.

My partner let two people go and replaced them with ChatGPT. No joke.

It can accelerate many business tasks e.g. copywriting, customer service, proof-reading, product design exploration, marketing campaigns etc.

And its ability to do all of this in multiple languages and in different tones can be a really big deal for some businesses.


There was a good article in the Washington Post, discussed on HN a week or two ago (would have to do some searching), about people who have already lost their jobs due to ChatGPT.

Copywriting is definitely going to get decimated. Other jobs, like medical transcription, were already on the way out due to speech recognition tech even before generative AI blew up. And before people make "buggy whip manufacturer" arguments, the sheer pace of tech advancement is making wider sections of the populace unemployable faster than new job areas open up. I definitely think AI will cause the loss of millions of jobs, regardless of whether this particular CEO is running his company poorly.


> like medical transcription

There’s no way this is remotely legal, right? Unless it’s some sort of on-prem product?

> making wider sections of the populace unemployable faster than new job areas open up

This is a real threat to the fabric of society and will require us to rethink things like UBI and taxing the means of production rather than labor.


> > like medical transcription

> There’s no way this is remotely legal, right? Unless it’s some sort of on-prem product?

It's definitely legal, not sure why one would think it wouldn't be. There are dedicated, HIPAA-compliant medical transcription services that use voice-to-text tech that have been around for decades. I would say in the past 10 years or so the tech has gotten to the point where it's very good, and I know companies that have laid off transcriptionists because the tech is considerably cheaper.


Whether having AI do it is legal or not, I cannot say, but I do know that medical transcriptions are routinely sent out of the hospital for lower-paid third parties to do.


AWS is HIPAA compliant. Same can be done for ChatGPT.


It's not going to happen. There is a huge shortage of people right now. We have WAY more work that needs doing vs available people to do the work.


I disagree; there are millions of unemployed people all over the world that can speak great English, that you can hire right now to do many things remotely.

The problem is that the vast majority of them are unable to do the things you can do with ChatGPT (writing copy, for example).


> And before people make "buggy whip manufacturer" arguments,

I've always wondered about that argument.

Weren't buggy whip manufacturers making lots of other things at the same time?

After all, a buggy whip is almost certainly used much longer by its owner than a car is, so I wouldn't expect a large amount of the economy to have been comprised of buggy whip sales, because

a) Not many people had their own buggy (you don't need a buggy whip while riding a horse, only when a buggy is being pulled)

b) Those that owned a whip didn't replace them every 3-5 years anyway.

Did buggy whip sales go from 3% of GDP to nothing?

To me it always felt like a made-up "lesson" on how progress moves.


It really was a disaster for whip companies, carriage companies and saddle makers, though; disruption didn’t proliferate as quickly then as it does today.

You can read some of the history of buggy whip makers. They were specialized manufacturers with established brands built around quality, special materials and style. Some were able to convert to new products but not all. They did survive a couple decades past the invention of the car, to your point, but once cars proliferated their sales cratered.

Whip manufacturers shouldn’t be shamed for hubris (what happened did not seem as guaranteed then as it now seems with hindsight), but they all would’ve done well to have converted to automobile manufacturing supply by the teens, and many did not.

A link to get you started: https://civilwartalk.com/threads/what-company-made-the-last-...


> It really was a disaster for whip companies, carriage companies and saddle makers, though; disruption didn’t proliferate as quickly then as it does today.

I'm not contending that.

I'm saying (as far as I can determine) that car sales alone in the US today are around 4% of GDP. That doesn't count all the businesses employed in car manufacturing, all the businesses employed by the distribution network, by auto maintenance, and by ancillary auto services (insurance, car washes, cosmetic finishing, upgrades, motorsport, etc.).

My argument is that the demise of buggy whip manufacturers was accompanied by a significant expansion of employment. The replacement required much more labour than the thing it replaced.

If, at the time, buggy whip/saddlery/etc sales, distribution and manufacture collectively were the largest employers of labour, then the buggy whip argument makes sense.

I don't think it was.


I don't believe that people use buggy whips as an analogy for entire economies being destroyed, just individual industries. But fair enough, and there certainly was an expansion orders of magnitude larger than the replaced industries when the automobile proliferated.


The key was it took decades - it was a slow moving disaster for them, not a sudden overnight Wonka style shutting of the factories.


It only took a handful of years once cars greatly proliferated in the 20s. Adoption took longer to proliferate then than it does today, so I would be careful extrapolating from either the pace of adoption in the 20s or the pace of adoption from the invention of the automobile to the beginning of accelerated adoption. You've probably seen this famous graph. https://hbr.org/resources/images/article_assets/2013/11/FELT...


Can you share what jobs those people performed?


My partner runs a B2B ecommerce company with revenue ~$5m with lots of different products coming from a range of suppliers all over Asia.

And the jobs were the sort of generalist types you see in most small businesses and startups. Examples of tasks mostly involved writing, e.g. emails to suppliers and customers, marketing copy for products/campaigns, website content. But also coming up with potential new products and how they could be branded/marketed.

And the reason they were hired specifically is that most of the team doesn't have English or Mandarin as their first language, so they needed native speakers. And since ChatGPT's problem is inaccuracy, not writing well, it's really suited to those tasks.


Who would reply to a message generated by a chat bot? That would set off all my fraud alarms.


And yet CEOs routinely click on the most basic of phishing emails. Maybe they don't teach media literacy in business school.


> They really think AI is this magic thing that will fix over a decade of no documentation, spaghetti code, and unclear requirements.

One of my favorite bosses loved to say, "You can't software your way out of a process problem." Several times I've had other bosses try to prove that wrong. None so far have succeeded.


  - CEO openly says AI will cause the loss of “millions of jobs”
Clearly a visionary leader.

  - CTO says he wants live examples of ChatGPT “making us code faster”
Honestly where are you, a kindergarten? So-called 'leaders' with this attitude deserve a slap (in rhetorical terms) IMHO.


I like to think that all the quintillion crypto-mining computations were somehow a cover for the training of an ELLM that is now seeding LLMs to hide its existence.

And that still makes more sense than this bs.


If only. Then those CPU cycles would at least have been doing something of value. If you look at crypto mining computations they don't (on the chains I have looked at) actually compute anything meaningful at all. It's literally just generating random numbers and then (almost always) throwing them away.

Like you have Alice who says "Try to guess my number?", and then Bobs around the world go

Bob1: "Is it 13?" Alice: "no" Bob2: "Is it 4672?" Alice: "no" ... Bob175432: "Is it 294762483492643826432862834??" Alice: "Yes. You can do the next block."

Literally that pointless.


This can be used to create the world's largest rainbow table though.

If we're already into unlikely conspiracy uses of crypto, this could have been an NSA Sigint@Home endeavour


Can someone explain why there isn't a blockchain that is powered by users' hardware training LLMs all day? Like the world's best open source trillion-parameter model.


No one has built it, and that's because it would be hard to satisfy the needs of a consensus mechanism and also do useful real work.

To understand why that is, you need to understand why they are doing all this generating-random-numbers nonsense in the first place: to establish distributed consensus. You have some chain of blocks b_1 ... b_2 ... b_3. To make the next block you want to put a bunch of transactions together, calculate their hash and mint it, but there's a problem: in a distributed system, who gets to decide what goes into the block? We all need to agree on what b_4 is, otherwise the system loses consensus. We can't have me publishing something and calling it b_4 and you publishing something else and calling it b_4. The way the "proof of work" mechanism does this is by everyone guessing random numbers until someone chooses the right number, at which point that person has the right to mint b_4.[1]

If you wanted that computation (instead of just choosing random numbers) to do something useful, you'd have to figure out a way to decide what the correct number is without knowing it beforehand. Otherwise you might train your ML model just fine, but your computation couldn't be used to establish who gets to mint the next block, so it wouldn't work in a proof-of-work consensus algorithm.
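To make the "guessing" concrete, a toy sketch (mine, not any real chain's code). Note the asymmetry consensus relies on: winning takes brute-force guessing, but checking a claimed win is a single hash:

  import hashlib

  def mine(block_data: bytes, difficulty_bits: int) -> int:
      # Guess nonces until the block's hash falls below a target.
      # There is no shortcut; the only way to win is brute guessing.
      target = 2 ** (256 - difficulty_bits)
      nonce = 0
      while True:
          digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
          if int.from_bytes(digest, "big") < target:
              return nonce
          nonce += 1

  def verify(block_data: bytes, nonce: int, difficulty_bits: int) -> bool:
      # Anyone can check a claimed win with one hash.
      digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
      return int.from_bytes(digest, "big") < 2 ** (256 - difficulty_bits)

  nonce = mine(b"b_4: some transactions", 16)  # ~65k guesses on average
  assert verify(b"b_4: some transactions", nonce, 16)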

People have done this with distributed computations where doing the computation is hard but checking the computation is easy. For example, finding large Mersenne primes is extremely expensive, but once you have a big number, checking whether or not it is a Mersenne prime is comparatively cheap. But training an ML model isn't that kind of computation.
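A sketch of what that check looks like (mine, not from the comment above): the Lucas-Lehmer test. The search is expensive because you run this for many candidate exponents p; verifying a single claimed find is one run:

  def lucas_lehmer(p: int) -> bool:
      # Deterministic primality test for 2**p - 1, valid for odd prime p.
      m = 2 ** p - 1
      s = 4
      for _ in range(p - 2):
          s = (s * s - 2) % m
      return s == 0

  # 2**13 - 1 = 8191 is a Mersenne prime; 2**11 - 1 = 2047 = 23 * 89 is not.
  assert lucas_lehmer(13) and not lucas_lehmer(11)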

[1] Proof of stake just picks who gets to do it randomly from a pool of people who have staked, which is why it is so much more efficient.


Right but making new models costs millions of dollars and weeks of time.

And that's just for a single model.

With distributed users they could be generating those new models all of the time doing highly computational work.


It's not that it wouldn't be nice to be able to do this; it's that it's hard (as in, I'm not aware of anyone who has figured out a way). As it is, the blockchain world (apart from Bitcoin) has largely moved on to proof of stake, so they no longer burn tons of CPU that could be put to better use if you hypothetically solved the problem I outlined above.

And Bitcoin presumably isn't going to change to doing your new model-training thing if you came out with it, for the same reason it hasn't changed to proof of stake. I'm not aware of what that reason is, but presumably the status quo benefits the existing miners and they hold the power.


I was more skeptical of ChatGPT but started paying for it. For better or worse, my company/team uses documents for communication, and it helps me automate the boring stuff: wordsmithing and clarifying ambiguity.

The flip side of this is that those documents will be summarized with AI by those who don’t care about the fluff. The world is doomed to expansion and compression in this way.

Also, just like the switch from film to digital cameras, my role in writing a document transitions from careful composer to editor.


I don’t know what to make of stuff like this.

On the one hand, AI in its present form is a great learning tool, and little else. So you’d call the CEO a fool.

On the other hand, you have the leading AI firm publicly devoting 1/5th of its resources to controlling AI and declaring that we will likely have Superintelligence within this decade.

If the latter is right, this CEO might be a genius who was early on a trend, instead of a kook.


If the CEO were smart, they would be trying to build or extend a cool generative AI tool customers would like, not rack up the biggest OpenAI bill they can.

Shoehorning tech where it's not working sounds like the blockchain boom to me. Even if crypto had taken off, that would not have helped them survive.


>If the latter is right, this CEO might be a genius who was early on a trend, instead of a kook.

Lottery and slot machine winners aren't smart, they're lucky.


> On the other hand, you have the leading AI firm publicly devoting 1/5th of its resources on controlling AI and declaring that we will likely have Superintelligence within this decade.

This is just marketing.


I guess you can be a kook and make a stupid move but with enough luck you will be regarded as a genius and your move will be seen as a bold one.


This post reads like a neoluddite AI-skeptic trashing someone for trying new things. Every CEO should be trying to automate human tasks with AI right now: not to fire all the humans, but to increase their efficiency. Good on your CEO for trying. Sounds like they've largely failed to execute, which is a valid thing to criticize, but at least they're trying. The CEO of my company has also been trying, and we've done some really cool things.

I've been hearing a lot of AI criticism lately from folks who clearly haven't gotten deep with GPT-4, like geohot/George Hotz's recent interview on the Lex Fridman podcast. There are a ton of valid things to criticize about AI and LLMs, but I've built things with GPT-4 that are incredibly impactful for our business: a tool to turn a Zendesk ticket thread into internal and external support guides, for example. Yes, there are limitations. But sticking your head in the ground saying "it's all a parlor trick" won't age well.
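For a sense of what that takes: the core of such a tool can be one prompted API call. A minimal sketch, assuming the openai Python client as it was in mid-2023; the prompt and function name are illustrative, not my actual tool:

  import openai  # the 0.x client, current as of mid-2023

  def ticket_to_guide(thread: str) -> str:
      # Hand GPT-4 the raw ticket thread and ask for a structured guide.
      response = openai.ChatCompletion.create(
          model="gpt-4",
          messages=[
              {"role": "system",
               "content": "Turn this support ticket thread into a short, "
                          "step-by-step internal support guide."},
              {"role": "user", "content": thread},
          ],
      )
      return response["choices"][0]["message"]["content"]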


> - HR docs are now being generated by ChatGPT with no checks for legality

This alone will almost guarantee that this CEO will be out of work soon.


It’s definitely not parlor tricks but I am skeptical that it’s going to be making skilled professionals obsolete any time soon. For many jobs, even a 10% error rate is too high. I think it certainly makes sense as a way to amplify what your employees are capable of doing on any given day by reducing busy work they hated doing in the first place, but improving the performance of these models to get that 10% error rate down to a 1% error rate (and having an understanding of what kinds of tasks it can achieve those results on) will be a lot harder.

Maybe I’m wrong, but AI has a long history of things we thought were easy turning out to be very difficult. Self-driving cars weren’t expected to be nearly as challenging as they turned out to be. In the original AI summer, all the big thinkers of that era thought that physical tasks like manipulation would be easy and logical reasoning would be hard. It turned out to be the other way around.


Small company, bad product, bad bosses. So no excuse to stay in such a place.



