
What other ways can you approach it?

But it isn’t joining the workforce. Your perspective is that it could, but the point that it hasn’t is the one that’s salient. Codex might be able to do a substantial portion of what a freelancer can do, but even you fell short of saying it can replace the freelancer. As long as every AI agent needs its hand held, the effect on the labor force is an increase in costs and an increase in outputs where quality doesn’t matter. It’s not a reduction of the labor force.

OK, let me fall less short. It has replaced the freelancer for me. I communicate product requirements. It builds the product immediately at trivial cost. It’s better than a human. There are jobs I would have considered hiring out that I don’t, because the machine is better. Nothing you said about labor effects in the large even logically follows. Have you even used one of these systems?


If you don’t support your family, no one else will


I find that when decisions are very minor, people love to have tons of options to select from. When decisions are much more impactful and high stakes, people seem to love finding ways to convince themselves there are no options and that they must proceed down a single path out of necessity for how the world is.


There’s rarely (maybe never) an objective and comprehensive measure of quality. Your concept of what merits matter is someone else’s advertising. No one is operating non-meritocratically, they just value different qualities from you.


I think your definition of popular is holding you back. If popular just means other people like you, you’re obviously wrong- plenty of people are very successful even though they are disliked. Often this will happen multiple times on a single team at a company. If popular means you’re perceived as valuable, you’re obviously right. All institutions are social institutions and operate on social understandings of value. So to be successful you have to be perceived as valuable by these social structures. I think calling this a scam misunderstands the non-quantitative metrics of worth. There isn’t actually a Best Academic, a Best Engineer, or a Best Coworker in some measurable objective sense. Those are all social evaluations and they’re valuable because of that, not despite it


Yes! He did, so it is.


What investors were intentionally deceived, and what were the lies specifically? I saw something about a Kickstarter, but that's trickier, as there is no promise of delivered products; it ends up being a donation, basically, although Kickstarter tries to keep that intentionally vague.


> There is no promise of delivered product

There absolutely is a promise. Even if you manage to legally find a way to not get sued, taking advantage of the fact that everyone who gave you money believed it was a promise is still scamming them.


Isn't the whole deal with Kickstarter that if it's not funded, everyone gets their money back, and if it is funded, the creator tries to deliver the goals according to the timeline, but if they don't, they're not held liable for that? So if for some reason the creator runs out of money before they can send actual products, you as a donor don't have the right to get your money back? Maybe I misunderstood the whole concept of Kickstarter.


James Proud:

* Promised an alarm clock that would do a bunch of things

* Took $2.5M in funding from Kickstarter

* Took another $50M in funding from elsewhere

* Delivered a piece of hardware that did essentially none of what was promised.

It's all detailed in OP and the linked Verge article. That's a scam and I'm not interested in your legalese arguing whether they can be sued or not.


> I'm not interested in your legalese arguing whether they can be sued or not

What... You know, it doesn't matter. Thanks for the summary anyways!


Ethics and legality are independent concepts. A scam is an ethical construct.


>"Even if you manage to legally find a way to not get sued"


But not 90% of the work people do. It’s solved a task, not a problem.


It's what takes time, though. When you need to make a wrapper for some API, for example, LLMs are incredible. You give it a template, the payload format, and the possible methods, and it just spits out a 500-1000 line class in 15 seconds. Do it for 20 classes and that's a week of work 'done' in 30 minutes. Realistically two days, since you still have to fix and test a lot, but still...
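For illustration, here is a minimal sketch of the kind of wrapper class that workflow produces from a template plus a payload description. Every name here (the base URL, the `/projects` endpoint, the field names) is hypothetical, made up for the example:

```python
# Hypothetical sketch of an LLM-generated API wrapper class.
# BASE_URL, the /projects endpoint, and all field names are invented.
import json
import urllib.request
from dataclasses import dataclass

BASE_URL = "https://api.example.com/v1"  # hypothetical


@dataclass
class Project:
    project_id: int
    name: str
    start_date: str       # e.g. an MS-style date string from the payload
    actual_duration: int

    @classmethod
    def from_payload(cls, payload: dict) -> "Project":
        # Map the API's PascalCase payload fields onto our attributes,
        # coercing the numeric fields the API sends as strings.
        return cls(
            project_id=int(payload["ProjectId"]),
            name=payload["Name"],
            start_date=payload["StartDate"],
            actual_duration=int(payload["ActualDuration"]),
        )


class ProjectsClient:
    def __init__(self, token: str):
        self.token = token

    def _request(self, path: str) -> dict:
        req = urllib.request.Request(
            BASE_URL + path,
            headers={"Authorization": f"Bearer {self.token}"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    def get_project(self, project_id: int) -> Project:
        return Project.from_payload(self._request(f"/projects/{project_id}"))
```

Multiply the boilerplate above by 20 endpoints and 20 payload shapes and the time saving is plausible, even after the fix-and-test pass.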


Or write a Lisp macro in one hour and be done. Or install an OpenAPI generator and be done in 10 minutes, 9 of which are spent configuring the generator.


If you can get the specific documentation for it. Sadly, many companies don't want you using the API, so they just give you a generic payload and the methods and leave you to it. LLMs are good in the sense that they can tell what type StartDate and EndDate are (str, MS date); maybe they also somehow catch on that ActualDuration is an int. They also manage to guess correctly a lot of the fields in that payload that are not necessary for the particular call or get overridden anyway.


Can a Lisp macro automatically search for, and find, the API documentation and apply it to the output?

I've implemented connections to (public) APIs of different services multiple times using LLMs without even looking up the APIs myself.

I just say "Enrich the data about this game from Steam's API" and that's about it.
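As a rough sketch of what that enrichment call looks like: Steam's storefront exposes an unauthenticated `appdetails` endpoint, and responses are keyed by app ID with a `success` flag and a `data` object. Treat the exact fields below as an assumption and check the response you actually get:

```python
# Minimal sketch of enriching game data from Steam's public storefront API.
# The response shape (appid key -> {"success": ..., "data": {...}}) is an
# assumption based on the unauthenticated appdetails endpoint.
import json
import urllib.request

APPDETAILS_URL = "https://store.steampowered.com/api/appdetails?appids={appid}"


def parse_appdetails(appid: int, raw: str) -> dict:
    """Pull a few enrichment fields out of an appdetails response body."""
    body = json.loads(raw)
    entry = body[str(appid)]
    if not entry.get("success"):
        raise ValueError(f"no data for appid {appid}")
    data = entry["data"]
    return {
        "name": data.get("name"),
        "developers": data.get("developers", []),
        "release_date": data.get("release_date", {}).get("date"),
    }


def fetch_appdetails(appid: int) -> dict:
    """Fetch and parse details for one app (network call)."""
    with urllib.request.urlopen(APPDETAILS_URL.format(appid=appid)) as resp:
        return parse_appdetails(appid, resp.read().decode("utf-8"))
```

The parsing is split out from the network call so the field mapping can be tested against a saved response without hitting Steam.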


Just tell us why you think funding at a loss at this scale is viable, don’t smugly assign homework


Apologies, not meant to be smug


...But you did fully intend to assign homework? Why are you even commenting, what are you adding?


If this is the silver lining I’d hate to see the cloud


The cloud being moving away from the "you must work to eat" economy and into the "you can work if you want nice stuff" economy. Getting there won't be fun but a lot of us would like that for our grandchildren.


It’s funny how nothing seems to be AI’s fault.


That's because it's software / an application. I don't blame my editor for broken code either. You can't put blame on software itself, it just does what it's programmed to do.

But also, blameless culture is IMO important in software development. If a bug ends up in production, whose fault is it? The developer that wrote the code? The LLM that generated it? The reviewer that approved it? The product owner that decided a feature should be built? The tester that missed the bug? The engineering organization that has a gap in their CI?

As with the Therac-25 incident, it's never one cause: https://news.ycombinator.com/item?id=45036294


Blameless culture is important for a lot of reasons, but many of them are human. LLMs are just tools. If one of the issues identified in a post-mortem is "using this particular tool is causing us problems", there's not a blameless culture out there that would say "We can't blame the tool..."; the action item is "Figure out how to improve/replace/remove the tool so it no longer contributes to problems."


> You can't put blame on software itself, it just does what it's programmed to do.

This isn't what AI enthusiasts say about AI though, they only bring that up when they get defensive but then go around and say it will totally replace software engineers and is not just a tool.


Blame is purely social and purely human. “Blaming” a tool or process and root causing are functionally identical. Misattributing an outage to a single failure is certainly one way to fail to fix a process. Failing to identify faulty tools/ faulty applications is another way.

I was being flippant to say it’s never AI’s fault, but due to board/C-Suite pressure it’s harder than ever to point out the ways that AI makes processes more complex, harder to reason about, stochastic, and expensive. So we end up with problems that have to be attributed to something not AI.


If poor work gets merged, the responsibility lies in who wrote it, who merged it, and who allows such a culture.

The tools used do not hold responsibilities, they are tools.


"I got rid of that machine saw. Every so often it made a cut that was slightly off line but it was hard to see. I might not find out until much later and then have to redo everything."


How could a tool be at fault? If an airplane crashes is the plane at fault or the designers, engineers, and/or pilot?


Designers, engineers, and/or pilots aren't tools, so that's a strange rhetorical question.

At any rate, it depends on the crash. The NTSB will investigate and release findings that very well may assign fault to the design of the plane and/or pilot or even tools the pilot was using, and will make recommendations about how to avoid a similar crash in the future, which could include discontinuing the use of certain tools.


My point is that the tool (the airplane in this case) is not at fault, but rather the humans in the loop.


If your toaster burns your breakfast bread, do you ultimately blame "it"?

You get mad, swear at it, maybe even throw it at the wall in a fit of rage but, at the end of the day, deep inside you still know you screwed up.


Devices can be faulty and technology can be inappropriate.


If I bought an AI powered toaster that allows me to select a desired shade of toast, I select light golden brown, and it burns my toast, I certainly do blame “it”.

I wouldn’t throw it against a wall because I’m not a psychopath, but I would demand my money back.


No one seems to be able to grasp the possibility that AI is a failure


> No one seems to be able to grasp the possibility that AI is a failure.

Do you think by the time GPT-9 comes, we'll say "That's it, AI is a failure, we'll just stop using it!"

Or do you speak in metaphorical/bigger picture/"butlerian jihad" terms?


I don't see the use-case now, maybe there will be one by GPT-9


Absence of your need isn't evidence of no need.


This is true, but I've never heard of a use case. To which you might reply, "doesn't mean there isn't one," which you would be also right about.

Maybe you know one.


I presume your definition of use case is something that doesn't include what people normally use it for. And I presume me using it for coding every day is disqualified as well.


I didn't mean to suggest it has no utility at all. That's obviously wrong (same for crypto). I meant a use case in line with the projections the companies have claimed (multiple trillions). Help with basic coding (of which efficiency gains are still speculative) is not a multi-trillion dollar business.


You've failed to figure out when and how to use it. It's not a binary failed/succeeded thing.


None of the copyright issues or suicide cases are handled in the court yet. There are many aspects.


Metaverse was...


“There’s no use for this thing!” - said the farmer about the computer

