mym1990's comments

Time will tell. Maybe for the people who can create feedback loops for themselves where AI fills the gaps, but at the aggregate level I don't think AI will move the needle. More likely, people will use AI translation as a crutch rather than learning to communicate without assistance.

10 years ago was a very different time...

>10 years ago was a very different time...

Indeed. When tech workers actually had some bargaining power. The rise of remote work, AI, and the flooding of the industry with bootcamp and CS grads has changed everything. We'll probably look back at the last 20 years as a golden age akin to the postwar manufacturing boom in the US, where a single person could reasonably provide for a family.


10 years ago it was “y’all will be replaced with cheap South Asian developers”, time is a flat circle.

As a developer who got into the industry in 2015, I did not feel that way at all. It felt like everyone was piling into tech at that time.

This is a weird take, assuming the average researcher cannot be an average Joe, and also that average people aren't also worried about their livelihood...you might want to revisit your view of the world.


No, it's not. A researcher might be an average Joe, but that doesn't mean that the average researcher is the same as the average Joe.


The initial comment treats the two as mutually exclusive. Your reframing doesn't change what the original comment said. You also blew past the more important of the two points: that regular people care about their livelihoods as well.


* All agents are not created equal

* A good agent is absolutely worth it

People aren't really arguing against these things. I don't get it: if the settlement didn't really change anything, why is everyone making such a fuss about it?

My other question is: is there a linear correlation between the effort to sell a home and its price? Is a $3 million home 3x the effort to sell of a $1 million home? Because I pay 3x the money to sell it... or am I paying for the "connections"?
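As a back-of-the-envelope illustration of the 3x, assuming a flat percentage commission (the 3% rate below is purely hypothetical, not a quote of any actual agreement):

    # A commission that is a fixed percentage of price scales linearly
    # with price, whether or not the agent's effort does.
    RATE = 0.03  # hypothetical 3% listing-side commission

    for price in (1_000_000, 3_000_000):
        print(f"${price:,} home -> ${price * RATE:,.0f} commission")

    # $1,000,000 home -> $30,000 commission
    # $3,000,000 home -> $90,000 commission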


That may be a bad example - I'd imagine once you're into the multi-millions, connections actually do matter in a way they don't if you're trying to sell a $400k house.

I doubt it's much harder to sell a 600k house than a 300k house, but it could be quite a bit harder to sell a 4m house than a 2m house, just because there are so many fewer buyers in the pool and they're likely to be a lot more particular than just wanting a roof over their head.


>I doubt it's much harder to sell a 600k house than a 300k house

It's not about the price per se, it's about the house. If median income in an area is such that the $600k house is "luxury", then it is going to be a much harder sell. Or, for example, I've got a property right now at $559K that is extremely unique and needs probably $150K+ worth of work, so there it sits. Meanwhile, sometimes those $300K houses are super hard to sell if they need a new roof, new HVAC, etc., and no one in the area has the $15K it is going to take to actually do the updates, because buyers have just barely enough cash to make the down payment.

I feel like I need to comment here that just because $600K is the cost of a shoebox in a place like Silicon Valley doesn't mean that it isn't a lot of money in many other markets. There are many places in this country where $10K or so in needed repairs to a home might truly be a breaking point for some people.


I was asked to put together a listing presentation on multiple properties that, combined, represented approximately $18MM in luxury real estate. At that level it's connections, but also the cost of marketing. I estimated I needed a minimum of a $100K marketing budget, and ironically the sellers' representatives laughed at me and then struggled to sell the properties for years because they couldn't market them effectively.

$3MM may not be a particularly luxurious property anymore in many markets. That said, it can cost $$$$ / month for staging and $$$$ for drone work, video and photo work, and all the social media and other marketing. Even on that $300K house someone else in the thread mentioned, I am easily $1000 into marketing costs before the sign even goes up.


The irony of the #3 thing on HN today being this, and the #5 thing being 'Finish Your Projects' haha. Good points all around.


I think that these ideas are both compatible. In this blog post, it looks like the author finished working prototypes of several games but elected not to push them to a full release. So I think they “finished” the work, and we can’t fault them for estimating that it wouldn’t be worthwhile to make a full release.

Not finishing a project in this case would be abandoning a game idea that you liked before you even got to a working prototype stage. Because then you can’t even see if your new idea plays well.


Yeah totally, I don’t disagree with that assessment. I would say it’s not productive to try to complete every single thing we start. It was just the headline snippets that were funny to me.


In a sense, though, deciding that whatever project we're working on is no longer worth pursuing is itself finishing that project.

That's very different from letting life happen and stopping work on it without ever really having decided to.


Closure is important! I do feel that I get in the mindset of saying “I’m gonna get back to this later” and it just never happens, meanwhile taking precious mental capacity every time I think about doing that thing. It’s okay to say “I tried it, I don’t need to prove anything, on to the next adventure”.


I usually think of abandoning and finishing as two different ways to resolve a project. Resolution is a good goal that allows you to be explicit about what was finished or not before stopping.


I think it depends on how the tool is used. If a student is just plugging in the problem and asking for the answer, there is clearly no long-term benefit. If the student is trying to understand a concept, and uses GPT to bounce ideas around or to ask for alternate ways of thinking about something, it can be very helpful.

How the help is spaced can matter too. Struggle with the problem for 20-30 minutes. Ask for a nudge. Struggle some more. Repeat many times.
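As a minimal sketch of that cadence (the function bodies here are placeholders I made up, not any real API):

    import time

    def work_unassisted(minutes):
        """Placeholder for focused, AI-free struggle on the problem."""
        time.sleep(minutes * 60)

    def ask_for_nudge():
        """Placeholder: ask the AI for a hint, never the full answer."""
        return "Try a smaller instance of the same problem first."

    def study_session(rounds=4, struggle_minutes=25):
        # The value is in the spacing: a hint arrives only after real
        # unassisted effort, and then the struggle resumes.
        for _ in range(rounds):
            work_unassisted(struggle_minutes)
            print(ask_for_nudge())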

Some concepts also just have to be thought about differently to get to the aha moment, especially in math. AI may have an opportunity to present many instances of “think about it this way instead”.


> If the student is trying to understand a concept, and uses GPT to bounce ideas around or to ask for alternate ways of thinking about something, it can be very helpful.

The article said that even using the AI this way did not improve results.


All we know is which tools the students were given, not how they used them, and it's a fairly limited study on top of that; we don't know anything about how other variables were controlled. That being said, I wouldn't be surprised if the payoff from AI learning isn't as good as people might think: there still has to be a good process in place, and I don't think it can replace a good teacher and some quality struggle.


Children using calculators to solve multiplication exercises do worse in multiplication exams.

AI is a tool. Use it as a tool and get the benefits; use it to cheat...


Given the information on the "happenings" of US politics, what is someone to do with that information other than moan and groan?


There are still some, even many, people there doing their jobs as well as they can in spite of bad executive decisions. I'm sure morale there is not great. I don't feel bad for the executives at all, or for the company really, but there are likely some great people who are just getting kicked around by the crisis of the week.


But it's exactly the same at Facebook/Google/Amazon/Palantir and countless other places, yet people choose to work at those places. Why feel bad for them? They've made their choices, and if they're not happy with those anymore, they can make new choices.


Someone already mentioned it, but this would be an interesting social and political experiment. The problem I see is that without transparency it would be hard to see what is happening behind the scenes; it would essentially be a black box. I don't know how many lessons could be gleaned from black-box activity, regardless of whether the outcome is good or bad. This is the kind of thing we can theorize about all day, and maybe even model lightly... but we won't know what's possible until it is carried out. I am very confident that an AI could govern better than some people already in power, but that isn't a good bar for starting the experiment...


You'd be surprised: I'm sure a human who is proposing this, and is in politics, would do everything in their power to shed responsibility for bad actions and take credit for all the good stuff. Tangentially related is the question of who is responsible in an accident when the at-fault driver is an AI. Is it the engineer? Is it the CEO? So if a bot is running the government, same thing...


> You'd be surprised: I'm sure a human who is proposing this, and is in politics, would do everything in their power to shed responsibility for bad actions and take credit for all the good stuff.

I'm from the UK, my best example of this is from 2001, when the Conservative Party started running campaign posters saying "You paid the tax so where are the trains?" despite being responsible for the privatisation of the trains before they lost power.

https://www.alamy.com/one-of-posters-from-the-conservative-p...

https://archives.bodleian.ox.ac.uk/repositories/2/archival_o...

https://en.wikipedia.org/wiki/Privatisation_of_British_Rail


That's different, because ChatGPT's terms of use include this in all caps:

YOU ACCEPT AND AGREE THAT ANY USE OF OUTPUTS FROM OUR SERVICE IS AT YOUR SOLE RISK AND YOU WILL NOT RELY ON OUTPUT AS A SOLE SOURCE OF TRUTH OR FACTUAL INFORMATION, OR AS A SUBSTITUTE FOR PROFESSIONAL ADVICE.

Waymo, on the other hand, has liability insurance for every single one of their cars, and apparently they get a pretty good rate for their size because they have fewer accidents per car than the average person. The concept of fault is a bit different for them than for a single person. They would have to develop a pattern of systemic failures, rather than an isolated incident, before real questions of liability arise.


I didn't specifically call out ChatGPT, so those ToS do not apply here. The government could either create its own model or not disclose which one it is using, in which case it can generally blame the AI for misleading the people. We live in a world where deepfakes are being promoted by Elon Musk to a global audience, and people are eating them up, despite this being against the terms of service of his platform of choice. So if you think a ToS sentence will stop people in power from abusing AI, I would redo that mental experiment.

