LLMs will always be just a little too random or a little too average. Therein lies the hidden beauty of AI: elevating the trust in people's diverse experiences.
Humans are amazing machines that reduce insane amounts of complexity in bespoke combinations of neural processors to synthesize ideas and emotions. Even Ilya Sutskever has said that he wasn't, and still isn't, clear at a formal level why GPT works at all (i.e. the interpretability problem), but GPT was not a random discovery; it was built on work that was an amalgamation of Ilya's and others' careers and biases.
And just like Henry Ford and the automobile, one of its many externalities was the destruction of Black communities: white flight that drained wealth, eminent domain for highways, and increased incidence of asthma and other diseases from concentrated pollution.
Yet, overall it was a net positive for society... as almost every technological innovation in history has been.
Did you know that two-thirds of the people alive today wouldn't be if it hadn't been for the invention of the Haber-Bosch process? Technology isn't just a toy; it's our life support mechanism. The only way our population keeps growing is if our technology continues to improve.
Will there be some unintended consequences? Absolutely. Does that mean we can (or even should) stop it? Hell no. Being pro-human requires you to be pro-technology.
I don't think this argument is logically sound. The assertion that this (and every other!!) technological innovation is a "net positive" merely because of our monotonic population growth is both weakly defined and unsubstantiated. Population is not a good proxy for all things we find desirable in society, and even if it were, it is only a single number that cannot possibly distinguish between factors that helped it and factors that hurt it.
Suppose I invent The Matrix, capable of efficiently sustaining 100b humans provided they are all strapped in with tubes and stuff. Oh, and no fancy simulation to keep you entertained either - it's only barely an improvement on death. Economics forces everyone into matrix-hell, but at least there's a lot of us. Net positive for society?
Human fecundity is probably not actually the meaning of life, it's just the best approximation most people can wrap their heads around.
If you can think of a better one, let me know. Be warned though, you'll be arguing with every biological imperative, religion, and upbringing in the room when you say it.
I don't need to prove anything. You folks are the ones claiming harm. That said, AI is more akin to the invention of antibiotics than it is to the invention of any specific drug. Name any other entire category of technology from which no good has ever come. Just one.
I doubt you can. Even bioweapons led to breakthroughs in pesticides and chemotherapy. Nukes led to nuclear power, and even harmful AI stuff like deep fakes are being used for image restorations, special effects, and medical imaging.
You're just flat out wrong, and I think you know it.
You are speaking in tautology. Yes, we know that technology investment often leads to great advancement and benefits for humanity, but that is not sufficient to obviate the need for conscientiousness and harm reduction. This technology will be used to disenfranchise people, and we need to be willing to say, "no, try again." Not to stop advancement, but to steer it toward being more equitable.
We should be trying to optimize for the best combination of risk and benefit, not taking on unlimited risk on the promise of some non-zero benefit. Your approach is very much take-it-or-leave-it, which leaves very little room for regulating the technology.
The GenAI industry lobbying for a moratorium on regulation is them trying to hand-wave away any disenfranchisement (e.g. displaced workers, youth mental health, violated intellectual property rights, systemically racist outcomes, etc.).
> We should be trying to optimize for the best combination of risk and benefit
I 100% support this stance; it's good advice for life in general. I object to the ridiculous Luddite view espoused elsewhere in this thread.
> The GenAI industry lobbying for a moratorium on regulation is them trying to hand-wave away any disenfranchisement (e.g. displaced workers, youth mental health, violated intellectual property rights, systemically racist outcomes, etc.).
There must be a balance certainly. We can't "kill it before it's born", but we also need to be practical about the costs. I'm all in on debating exactly where that line should be, but object to the idea that it provides no value at all. That's madness, and dishonesty.
It's because people rub shoulders with tech billionaires and they seem normal enough (e.g. kind to wait staff, friends and family). The billionaires, like anyone, protect their immediate relationships to insulate the air of normality and good health they experience personally. Those people who interact with billionaires then bristle at our dissonant point of view when we point at the externalities. Externalities that have been hand waved in the name of modernity.
That's going to tank the stock price, though, since that's a much smaller market than AI, even if it won't kill the company. Hence I'm pointing to something like robotics, which has plenty of room to grow and can make use of all those chips and datacenters they're building.
Now, there is one thing with AR/VR that might need this kind of infrastructure, and that's basically AI-driven games or Holodeck-like stuff: have the frames be generated rather than modeled and rendered traditionally.
Nvidia's not your average bear, they can walk and chew bubblegum at the same time. CUDA was developed off money made from GeForce products, and now RTX products are being subsidized by the money made on CUDA compute. If an enormous demand for efficient raster compute arises, Nvidia doesn't have to pivot much further than increasing their GPU supply.
Robotics is a bit of a "flying car" application that gets people to think outside the box. Right now, both Russia and Ukraine are using Nvidia hardware in drones and cruise missiles and C2 as well. The United States will join them if a peer conflict breaks out, and if push comes to shove then Europe will too. This is the kind of volatility that crazy people love to go long on.
Apple Maps from day one was skating to where the puck was going to be. It had vector-based maps when that stuff was brand new, possibly before Google deployed it widely (though I'm not sure of that).
But the problem with Apple Maps was easy to see (and could only be fixed over time): data. Google and others had a decade-plus head start on Apple when it came to collecting data for maps. Judge Apple Maps at 5 years old against Google Maps at 5 years old, not Apple Maps brand new against Google Maps 10 years later.
Forstall is the one that pushed to make iOS based on macOS/Unix. He was definitely a lightning rod but had product sense.
From my reading, Forstall was one of the few who actually refused to partake in performative corporate culture, and decided to quit rather than bend the knee.
>when Apple issued a formal apology for the errors in Maps, Forstall refused to sign it
It's obvious that Apple Maps could never be a perfect replacement for Google Maps at launch, and it's possible Forstall voiced these exact concerns but was overruled before launch, only to be made a scapegoat when he turned out to be right. Given all the clearly empty corporate-style "we take full responsibility" statements you see today, someone actually _refusing_ to play those games when it wasn't his fault is a very positive sign of authenticity.
(He also did work on Siri, but given that he was booted right after its launch, I don't think it's fair to attribute their present incompetence on that front to him.)
Strange take. Apple Maps was a new product. It was expected to be behind Google Maps, maybe even forever, given the head start and resources Google gives it.
In any case, Apple Maps (a then-new product, in an entirely new space for Apple) being bad is not at all related to "enshittification".
Apple Maps is absolutely the wrong thing to judge Forstall on.
Not to mention that its main problem is coverage, i.e. data quality. In terms of software engineering it's fine, even better than Google Maps in many respects.
Everything is derivative of something else. “Novel” is a distinction for works which are minimally derived, but everything created is a remix of something else. Novelty is an illusion.