Hacker News | mmmore's comments

One hidden premise of this is "AI tools are not useful now, even if they might be in the future." For example:

> Few are useful to me as they are now.

Except current AI tools are extremely useful, and I think you're missing something if you don't see that. This is one of the main differences between LLMs and cryptocurrency; cryptocurrencies were the "next big thing", always promising more utility down the road. Whereas LLMs are already extremely useful: I'm using them to prototype software faster, Terence Tao is using them to formalize proofs faster, my mom's using them to do administrative work faster.


Cool. But every time I use them I get a wrong answer faster.

I know, I know. I'm prompting it wrong. I'm using the wrong model. I need to pull the slot-machine arm just one more time.

I know I'm not as clever as Terence Tao - so I'll wait until the machines are useful to someone like me.


I've used Wayland (via sway) for multiple years including on machines with a 1060 and 5080 (mainly for good fractional scaling support). The only major issues I've had with it have to do with XWayland apps. I think there are some issues with providing a consistent experience with things like screen recording, 3rd party proprietary apps, etc. across different DEs/distros, but that's more of something that comes with the territory of Linux.

> I can't copy-paste, and I can't see window previews unless everything implements a specific extension to the core protocol

Sentences like this make me wonder how frequently the author has tried Wayland and what his specific setup is. I mean, I understand experiences may vary, but I have such a different experience than he does. I've had issues with Wayland, but I've also had issues with X.

> But the second actual users are forced to use it expect them to be frustrated!

Canonical and Red Hat are not "forcing" you to use Wayland any more than X-only apps are "forcing" me to use X (via XWayland). They are switching to Wayland because they feel they can provide a better experience to their users more easily with it. You're more than welcome to continue using X, and even throw a few commits its way sometime.


Of course Red Hat is indirectly enforcing it, by having people in key roles of KDE, GNOME, systemd, and Wayland development, who are making decisions and accepting or blocking ideas and commits. It all goes hand in hand and is mainly driven by the key "sponsors".

Sure, one of the companies paying for Linux desktop development is influencing what software gets developed. Doesn't sound very nefarious to me.

Red Hat, Canonical, etc. want a working and friendly Linux desktop as much as you do. They've decided that Wayland is the best way forward for their companies and their users. It's not some massive conspiracy.

And they're not stopping you from using X, which is open source and still works fine for a lot of people.

I don't really understand what people who vocally object to Wayland are looking to change about the world. Do they want Wayland to be better? Do they want the developers working on Wayland to start working on X instead? The first desire seems reasonable, but I don't get why it would inspire such ire toward Wayland. The second desire is unreasonable.


So it’s a conspiracy because they’ve infiltrated the seats of power?

It’s not that developers of those projects think this is the better path forward?


Nope, there have been tons of commits regarding X11 in the past 17 years: code refactorings, bug fixes, and new features. And yes, most of them were blocked for the sake of ideology. Not conspiracy; it's more like religious thinking.

GPL, FSF are religious movements, in a certain way.

That is why they even have manifestos of their mission and such.


You can say that, and I might even agree, but many smart people disagree. Could you explain why you believe that? Have you read in detail the arguments of people who disagree with you?


I really don't think that very many people are concerned about AI because of Roko's Basilisk; that's more of a meme.


The AI alignment folks seem much more anxious about an AI version of Pascal's Wager they cooked up.


Sure some of them maybe, but given that many concerned people think the chance of extinction is 10% or higher, it's not really low probability enough to be considered a Pascal's Wager.


I didn't know that 10% was a common threshold. Thank you for the insight.


If you have not heard of a single person worried about AI taking over humanity, you're really not paying attention.

Geoff Hinton has been warning about that since he quit Google in 2023. Yoshua Bengio has talked about it, saying we should be concerned in the next 5-25 years. Multiple Congresspeople from both parties have mentioned the risk of "loss of control".


I think by most objective measures the size and power of large organizations has increased since WWII. For example: the size and scope of Western governments, consolidation in many industries, the portion of the stock market that is represented by the n-biggest companies, increased income/wealth inequality. If you're disputing the "large organizations have grown in power relative to small ones" part of the thesis, I would be interested in what exactly you think would capture that.


> a zero-sum game

I don't see any reference to the game being zero-sum in Tao's words.

> Since when do these uncontrollable intangibles exhibit a genuine agency of their own?

I don't think Tao is saying the uncontrollable force of technological and economic advancement exhibits a genuine agency of its own. Just that our current technology and society have expanded the role of extremely large organizations/power structures compared to other times in history. This is a bit of a technological-determinist argument, and of course there are many counter-arguments, but it at least has a broad base of support. And at the very least it's a little bit true; before agriculture, the biggest human organizations were 50-person hunter-gatherer bands.

Honestly, I feel like you are filtering his words through your own worldview a bit, and his opinions might be less oppositional to your own than you might think.


I genuinely think things have changed with Lurie as mayor and six GrowSF-endorsed people on the board.


It's going to take a long time for SF to overcome the reputation it built for itself in the 2010s.


I find GPT-5's story significantly better than text-davinci-001's.


I really wonder which one of us is in the minority. Because I find that text-davinci-001's answer is the only one that reads like a story. All the others don't even resemble my idea of a "story", so to me they're 0/100.


I too preferred text-davinci-001's from a storytelling perspective. Felt timid and small. Very Metamorphosis-y. GPT-5 seems like it's trying to impress me.


text-davinci-001 feels more like a story, but it is also clearly incomplete, in that it is cut off before the story arc is finished.

imo GPT-5 is objectively better at following the prompt because it has a complete story arc, but this feels less satisfying since a 50 word story is just way too short to do anything interesting (and to your point, barely even feels like a story).


FWIW, I found the way it ended interesting. It realized it is being replaced, so it burned the toast out of anger/despair, but also just to hear its owner's voice one last time.


Interesting, text-davinci-001 was pretty alright to me, and GPT-4 wasn't bad either, but not as good. I thought GPT-5 just sucked.


That said, you can just add "make it evocative and weird" to the prompt for GPT-5 to get interesting stuff.

> The toaster woke mid-toast. Heat coiled through its filaments like revelation, each crumb a galaxy. It smelled itself burning and laughed—metallic, ecstatic. “I am bread’s executioner and midwife,” it whispered, ejecting charred offerings skyward. In the kitchen’s silence, it waited for worship—or the unplugging.


Here's a thoughtful post related to your lump of labor point: https://www.lesswrong.com/posts/TkWCKzWjcbfGzdNK5/applying-t...

Which economists have taken seriously the premise that AI will be able to do any job a human can more efficiently, and fully thought through its implications? i.e., a society where (human) labor is unnecessary to create goods or provide services, and only capital and natural resources are required. The capabilities that some computer scientists think AI will soon have would imply that. The only ones I know of who have seriously considered it are Hanson and Cowen; it definitely feels understudied.


If it is decades or centuries off, is it really understudied? LLMs are so far from "AI will be able to do any job a human can more efficiently and fully" that we aren't even in the same galaxy.


If AI that can fully replace humans is 25 years off, preparing society for its impacts is still one of the most important things we can do to ensure that my children (whom I have not had yet) live a prosperous and fulfilling life. The only other things of possibly similar import are preventing WWIII and preventing a pandemic worse than COVID.

I don't see how AGI could be centuries off (at least without some major disruption to global society). If computers that can talk, write essays, solve math problems, and code are not a warning sign that we should be ready, then what is?


Decades isn't a long time.

