Hacker News | BosunoB's comments

Y'all gotta stop looking at politics this way.

You know why they don't share the fruits of capital with us now? Because Americans hate getting taxed to pay for welfare, and so they've been voting against taxes for 50 years. This whole political landscape changes when people lose their jobs to AI, a thing that everyone thinks should be taxed. In fact, the entire ideological underpinning behind extreme wealth accumulation is gone when AI runs everything.


Works great in other countries with high unemployment. That's exactly what happens! People elect a person who says they are going to change everything to fix it and they never get around to it for some reason :)

Just look at the mess South America is in...


Yes, because current unemployment comes as the result of complex factors that intersect with various groups and ideologies in complex ways. Also, raising employment is a complex task.

When AI takes the jobs, it will be dead obvious to the majority of people that the current way of doing things will not work, in the same way it's dead obvious to you and me.


Dario said this in a podcast somewhere. The models themselves have so far been profitable if you look at their lifetime costs and revenue. Annual profitability just isn't a very good lens for AI companies because costs all land in one year and the revenue all comes in the next. Prolific AI haters like Ed Zitron make this mistake all the time.
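To make the accounting intuition concrete, here's a toy sketch (all numbers invented for illustration, not any company's actual figures). Assume each model cohort costs 1.0 to train in year N and earns 1.5x that in year N+1, while training spend doubles every year: every individual cohort is profitable over its lifetime, yet the annual P&L stays negative for as long as spend keeps growing.

```python
# Toy illustration (made-up numbers): why annual P&L can look bad
# even when each model "cohort" is profitable over its lifetime.
train_cost = {2023: 1.0, 2024: 2.0, 2025: 4.0}   # spend doubles yearly
revenue    = {2024: 1.5, 2025: 3.0, 2026: 6.0}   # prior year's model pays off

# Annual view: this year's revenue minus this year's (bigger) training bill.
for year in (2024, 2025):
    print(year, "annual P&L:", revenue[year] - train_cost[year])  # negative

# Cohort view: each model returns 1.5x its own training cost.
for launch in train_cost:
    print(launch, "cohort P&L:", revenue[launch + 1] - train_cost[launch])
```

The mismatch disappears only once spend stops growing, which is the crux of the disagreement between the "models are profitable" and "the company loses billions" framings.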

Do you have a specific reference? I'm curious to see hard data and models.... I think this makes sense, but I haven't figured out how to see the numbers or think about it.

I was able to find the podcast. Question is at 33:30. He doesn't give hard data but he explains his reasoning.

https://youtu.be/mYDSSRS-B5U


> He doesn't give hard data

And why is that? Shouldn't they be interested in sharing the numbers to shut up their critics, especially now that AI detractors seem to be gaining mindshare among investors?


In his recent appearance on NYT Dealbook, he definitely made it seem like inference was sustainable, if not flat-out profitable.

https://www.youtube.com/live/FEj7wAjwQIk


There are companies working on this, but my understanding is that the training data is more challenging to get because it involves reinforcement learning in physical space.

In the forecast of the AI-2027 guys, robotics come after they've already created superintelligent AI, largely just because it's easier to create the relevant data for thinking than for moving in physical space.


Robotics will come in the next few years. If you believe the AI2027 guys, though, the majority of work will be automated in the next 10 years, which seems more and more plausible to me every day.


Are you independently wealthy enough to benefit from that or someone who should invest in suicide pills for themselves and their family if that day comes?


Why invest in weaksauce suicide pills when you could instead invest in nitrogen compounds and suicide bomb the tallest nearby building? Just because you've already lost doesn't mean they get to win, let alone survive.


Maybe we'll get the Iain Banks Culture future!


Some people haven't overcome their childhood desire to die or suffer so as to make their parents regret the decision not to buy that candy or take the puppy home. They imagine a dismal future as a glorified way to suffer. Talk about cyberpunk: there's that sweet, alluring promise of spending a whole life eating instant ramen next to a window with a blinking neon sign and endless rain behind it, coding routinely to a lofi soundtrack, or lurking lonesomely about the techno-slum neighbourhood, hiding your face from CCTV behind a mask.


You know, I'd really prefer not to, but I have eyes and object permanence. Maybe we'll get the Iain Banks Culture future!


Most of that 9 billion was spent on training new models and on staff. If they stopped spending money on R&D, they would already be profitable.


> if they stopped spending money on R&D, they would already be profitable

OpenAI has claimed this. But Altman is a pathological liar. There are lots of ways of disguising operating costs as capital costs or R&D.


In a space that moves this fast and is defined by research breakthroughs, they’d be profitable for about 5 minutes.


Says literally every startup ever w.r.t. R&D/marketing/ad spend, yet that's rarely the reality.


> If they stopped spending money on R&D, they would already be profitable.

The news that they did that would make them lose most of their revenue pretty fast.


But only if everyone else stopped improving models as well.

In this niche you can become irrelevant in months once your models fall behind.


I switched to Floorp last month and it's amazing. It's a Firefox fork with tab rows. Tab rows are the real MVP. None of the annoying weirdness of Tree Style Tabs, where you have to keep track of a hierarchy of tabs that are hidden behind other tabs. Instead, you just see 3-4 rows of tabs and you make a mental map of what is where.

Once they release the new version in a month or two, we'll also get newer Firefox features like these tab groups, and we'll also get workspace improvements. Floorp is 10/10.


You are absolutely right that AGI will probably barely resemble LLMs, but this is kind of beside the point. An LLM just has to get good enough to automate sufficiently complicated coding tasks, like those of coding new AI experiments. From there, researchers can spin off new experiments rapidly and make further improvements. An AGI will likely have vastly different architecture from an LLM, but we will only discover that through likely hundreds of thousands of experiments with incremental improvements.

This is the ai-2027.com argument. LLMs only really have to get good enough at coding (and then researching), and it's singularity time.


I fell into Rand in high school and it took me a few years to climb out.

The problem with believing in the primacy of reason is that it's incredibly distortionary. In reality, we all think and reason with respect to our ego and our emotions, and so if you believe that you are engaging in pure reason, it can lead you to pave over the ways in which your emotions are affecting your line of thought.

In this way it can quickly become a very dogmatic, self-reinforcing way of thinking. The ironic thing is that becoming a better thinker is not done by studying logic, but instead by learning to recognize and respect your own emotional responses.


> The ironic thing is that becoming a better thinker is not done by studying logic, but instead by learning to recognize and respect your own emotional responses.

This is the single thing that, in my opinion, both the young and the naive miss. But people who are wise usually seem to understand it.

Not everyone learns it with age, but it usually takes some amount of life experience for people to learn it.


Yeah, “think for yourself, and if you disagree with me that means you’re doing it wrong” is a heck of a way to run a school of philosophy. It’s no wonder she hates Plato, he’s constantly challenging people in their settled beliefs.


Why did she (explicitly) hate Plato so much?


> The "extreme realists" or Platonists, . . . hold that abstractions exist as real entities or archetypes in another dimension of reality and that the concretes we perceive are merely their imperfect reflections, but the concretes evoke the abstractions in our mind. (According to Plato, they do so by evoking the memory of the archetypes which we had known, before birth, in that other dimension.)

I think the concepts of forms and shapes rubbed her the wrong way.

Here's a quote from Peikoff that I think explains why much better than what Rand would have written.

> Momentous conclusions about man are implicit in this metaphysics (and were later made explicit by a long line of Platonists): since individual men are merely particular instances of the universal "man," they are not ultimately real. What is real about men is only the Form which they share in common and reflect. To common sense, there appear to be many separate, individual men, each independent of the others, each fully real in his own right. To Platonism, this is a deception; all the seemingly individual men are really the same one Form, in various reflections or manifestations. Thus, all men ultimately comprise one unity, and no earthly man is an autonomous entity—just as, if a man were reflected in a multifaceted mirror, the many reflections would not be autonomous entities.


Something I've always wondered about but never fully grasped about Platonists. Thank you for the wonderful explanation.


Not sure if you are joking, but that’s not a clear summary of platonism.

Platonism does hold that forms exist separately from material instances; perfect spheres or triangles are "real," even if there are no perfect spheres or triangles in the material world. That is a statement about the reality of the immaterial world, for Platonists.

However, there are more than just these idealized forms in the immaterial world of forms. There aren’t just right triangles or perfect spheres, there are also forms that have complexity exceeding anything in the material world.

The world of forms might be thought of as the world of information, if that helps. However, it is subtly different, since information is materially instantiated, and there appear to be hierarchies within the world of forms that somehow take precedence. E.g., the concept of 1 takes precedence over 363279.

The point is that there are forms of individual people, too, in platonism. Plato is vague about this in dialectic, but I believe Plotinus addresses it directly.


Right. I want to make it clear I wasn't saying "I support Ayn Rand's thoughts on Platonism"; what I wanted to convey was my interpretation of what Rand/Peikoff wrote and why I think they read Plato and had the reaction they did.

I'm not a professional philosopher, but I think the nebulous nature of how Plato addresses forms and shapes is difficult for someone with a Randian materialist viewpoint to accept.

I think that even in its weakest interpretation, the concept of forms and shapes at least provides an avenue for aspirational meditation, as we can discuss what an "ideal" of a thing (food, medicine, political ideology, etc.) might be.


Yes, I understood you. Plato has a lot of deliberate vagueness; it's a dialectical communication, where the knowledge is conveyed through the challenges of processing it.


Most of our choices aren't thought out and logical. Our emotions and lizard brain drive most of our actions, but some of us are very good at quickly coming up with justifications and rationalizations for what we've just done that are plausible enough that we end up feeling in control.


Great post! I think it all comes down to self-awareness. The more aware you are of your conscious and unconscious biases, the more empowered you are to mitigate the resulting rational failures.


Despite heading in more and more romantic directions in my thinking (from a very analytic start), I don't find the core problem with Rand's thinking to be the primacy of reason, but sloppy (or motivated, it can be hard to tell which) reasoning that leads to ultra-confident conclusions. A consistent pattern: you'll see a whole big edifice of reasoning out of her, but peppered throughout, usually starting right at the beginning, are little bits where the cautious reader may go "wait, that... doesn't necessarily follow" or (VERY often) "hold on, you're sneaking in a semantic argument there, and on second thought it's not per se convincing at all." Those issues are just never addressed; she keeps trucking along, so while most of the individual steps might be fine, there are all these weird holes in it, and none of it really holds together.

I've even, after complaints about this were met with "you just didn't start with her fundamentals, so you didn't understand", reluctantly gone all the way to her big work on epistemology(!) and... sure enough, same.

I find similar things in basically anything hosted on the Austrian-school beloved site mises.org. IDK if this is just, like, the house style of right wing laissez faire or what.


I don't think it has anything to do with being right wing -- Rand despised and was despised by much of the right wing -- but more to do with her traditionalist method of doing philosophy. I'd even call it Russian-influenced. I think Rand approached her philosophy from a literary perspective, and viewed her philosophy as a grand treatise that addressed every important aspect... an entire philosophical system. The overbearing rigidity and confidence sprung from this. It is very 19th-century in feel.

It is very different from a modern, more scientific approach, where we would view the system as a work in progress which would be refined over time. It would have been better for Rand to say about (for instance) free will, "it may function this way" or "we can make at least these statements about it", but I think Rand was not constitutionally able to couch her beliefs with qualifiers. It hurt her philosophical arguments, while at the same time perhaps made her a more interesting author.

I'm not an Objectivist, despite being sympathetic, because Rand created it and wouldn't agree that I was one. The reason is that I would tweak her philosophy. I'd incorporate some Bayesian probabilistic arguments into her metaphysics and epistemology, which she would despise. I'd modify her ethics with findings from game theory. I'd fold insights from cellular automata and chaos theory into her philosophy of consciousness. The broad swaths would be mostly the same, but it would no longer be Ayn Rand's Objectivism (tm).


I've always considered reason to allow for emotional motivations, as opposed to rationality, which does not.

Edit: Iain McGilchrist makes a useful distinction here https://www.youtube.com/watch?v=iUJDsdt7Pso


My advice, which you might not see here, is to write about every game you play. And ideally every movie you watch and book you read, too. To develop a sense of your own taste and what makes a game good to you, you have to write your ideas down. Go through the major moments of a game and describe how they made you feel, and what they did to make you feel that way. Dissect the mechanics and identify stuff you liked and stuff that frustrated you.

This is pretty much what college is. You just write essays where you dissect stuff like that and then a professor reads them and gives feedback on your ideas and how you communicate them. You can do the same thing informally with your friends and partners, just discuss games.


Breaking down my favorite games and talking about them really helps grow my thinking. I should chat more with friends I’m building with or other gamers around me.


For anybody looking to try psychedelics without the sketchiness of finding a dealer or buying online, you should know that you can discreetly grow psilocybin mushrooms in your home for less than a hundred dollars of initial investment, and with very little effort. It's legal to buy the spores in most states, and it only takes a couple of months to grow a nice canopy.

The best guide is https://www.reddit.com/r/unclebens/ or https://www.reddit.com/r/shrooms/comments/8e7g6n/how_to_grow.... Expect to read a lot before you start growing.


Most states ... except California.

