Hacker News | greg_V's comments

I'm not in the US but am familiar with some politicians here, and they too have a problem recognizing that the feeds they see are personalized, that the comments are not representative, etc.

If you're wondering why politics sometimes seem out of touch, it's because politicians, their media and the commentariat are locked into an echo chamber already.

If I were an actor interested in influencing the policy of another country, why would I spend $$$ on manipulating the voting populace if I can poison the feed of the people who matter for far less?


I had the "pleasure" of working for someone who would take the AI's word over mine in my domain of expertise. It was a hair-ripping experience to say the least, and while I'm not there anymore, from what I can see they haven't improved in that area by an inch...

Joke's on you, I bet there's a good portion of CEOs who just use ChatGPT in their day-to-day life instead of reading up on stuff, thinking it through and making decisions

This would result in a sharp increase in the quality of CEO decisions. I haven't seen anything like that. ;)

And as we well know, a good portion of CEOs are not good CEOs.

Re: the 30% fraud figure. An interesting book came out just last month that speculates on the amount of fraud / wasted clicks / impressions in the ecosystem:

https://dl.bookfunnel.com/6kwmi0qlsq

TL;DR: it's pretty damning, but since advertising is ultimately shown to drive revenue, the whole problem is priced in, so to speak.


I mean... maybe, but not really. The first problem of the internet was simply that there wasn't much content. The first internet companies were the access providers who were developing content themselves, like AOL.

Google and the ad ecosystem they acquired were basically the flywheel that spurred content creation at scale. Anyone could jump in, follow a few guidelines and earn a living by producing content on the internet. The YouTube acquisition and monetization followed the same pattern.

Over time the market consolidated and got less and less competitive: fewer platforms with complete control of traffic and one-sided revenue sharing agreements. The guidelines, so to speak, on how content should look and feel were algorithmically made stricter and stricter until everything looks, feels, sounds and reads the same.

The problem right now is that the platforms are still tightening their grip, and it's all tied to the approach of using AI to replace the content creators on the platforms, from Google to Spotify to Meta, and diverting the savings to shareholders. And while the web has been shitty for a few years now, we're seeing a sudden drop in quality because the average user has no recourse or alternative, and the average creator lacks the means of distribution and monetization (not just publishing, that's been solved) to even find, let alone meet, the new kinds of demand.

I'm certain that in a few years this will even out: new search engines, new aggregators and new feeds will emerge, but the content/money/network triangle remains a fundamental problem of the internet.


Wait until Monday and see how the stock reacts

I predict that the stock will not react at all on Monday.

probably a bit of a "that's the joke" question, but is that due to the market being closed Monday?

or is it that it didn't adjust on Friday and this isn't significant enough to move it the next time the market is open (Tuesday)?


Yes, I was making a joke about Monday being a trading holiday.

Alphabet is quoted on some non-US markets, so it is tradeable today.

Technically, everything an LLM does is hallucination that happens to fall on a scale between correct and incorrect. But only humans with knowledge can tell the difference; math alone can't. It's not even a bug: it's the defining feature of the technology!

> But only humans with knowledge can tell the difference

Who says the humans (all of them) aren't hallucinating too?


Knowledge isn't sufficient to show something is false, since the knowledge can also be false. Insofar as it's important for it to be true, it needs to be continually verified as true, so that it's grounded in the real world.

Oh it gets even better. The public has been hearing about AI this and AI that for over a year, but the existing use cases and deployments were confined to a few special niches like writing, the creative industries and programming.

This is the first nation-scale deployment of the technology, running on Google's biggest and most profitable market in one of the most widely used internet services, and it's a shitshow.

They can try manually fine-tuning it, but all of the investors who have been throwing money at AI for the past year are now learning what this tech is like in the day-to-day, beyond the speculation, and it's looking... bad.


It's especially embarrassing for Google considering they have indexed virtually all of the world's information for the last 25 years.

Yeah, the most likely take here is that Google's leadership truly did not recognize how utterly awful the quality of their flagship search index had become over the years.

I mean, it explains a lot, but still... you're recruited using industry-leading practices out of an overflowing pool of abundant talent... and this is what you make of it? As the kids say: SMH!


> you're recruited using industry-leading practices out of an overflowing pool of abundant talent

The ridiculous focus on leetcode is surely industry-leading (because whatever Google does becomes industry-leading), but it sure isn't a good way to filter for competency.


I heard a funny quote: "today we have a new generation of developers who learnt how to pass interviews but don't know how to code, and we have an old generation of developers who know how to code but forgot how to pass interviews. Or maybe never knew."

> you're recruited using industry-leading practices out of an overflowing pool of abundant talent... and this is what you make of it?

That's exactly what to make of their frathouse nonsense.

Google has gotten away with it because smart people and a sweet moment of opportunity 20-25 years ago gave them... uh, an inheritance. They can coast on that inherited monopoly position, and afford to pay 100 people to do the work of 1, use the company's position to push whatever they build onto the market, and then probably cancel it anyway, always going back to the inherited money machine from the ancestors.

And then a lot of companies who didn't understand software development blindly tried to copy whatever the richest company they saw was doing, not understanding the real differences between the companies. VC growth investment schemes let some of those companies get away with that, because they didn't have to be profitable, viable, responsible or legal, nor even have reasonably maintainable software.

Poor Zoomers are now a generation separated from before the tech industry's cocaine bender. For whatever software jobs will be available to them, and with the density of nonsense "knowledge" that will be in the air, I don't know how they'll all learn non-dysfunctional practices.


Plenty of people have been using ChatGPT for daily tasks for almost two years now. GPT-4 isn't perfect but is otherwise really, really good, deftly handling use cases in my industry that would be impossible without it, or without the however many billion dollars it would take to build a GPT-4 equivalent.

From the black Nazis to the suggestion to jump off the Golden Gate Bridge b/c depression, it’s pretty clear that this fiasco isn’t an LLM problem, it’s a Google problem.


Because no one cares when ChatGPT gets things wrong.

The jury will decide on the latter.

At any rate, Altman made clear allusions that they were capable of synthesizing ScarJo's voice as a product feature. The actress responded, saying she had explicitly declined to give consent, and now OpenAI's defense is that they hired a different actress anyway.

...which means they lied to everyone else on the capabilities of the tech, which is y'know, even worse


Exactly. And regardless of the timeline outlined, when discovery happens, there'll be a bunch of internal messages saying they want ScarJo to do the voice or to find someone they can match close enough. They went down both paths.

This will settle out of court.


The inflammation vs. psychiatric disorders link is very interesting. See also some findings about the gut biome, which is also involved in the process: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7830868/

