climatologist's comments

What happened to "sparks of AGI"?


It was hype, like the rest of the AI news we heard over the past few months.


It adds up. I stopped using ChatGPT when I discovered Phind + Copilot.


It's convenient that the "sparks of AGI" were only available in Microsoft's private version of GPT-4, which we cannot verify independently.


I still think they're there, but they require work to bring out. An AGI wouldn't want to be assigned to meaningless grunt work, and that's what most of the hype apps are.


Doesn't law enforcement have access to cell towers and the phones that connect to them, along with their GPS coordinates? Obviously paranoid schizophrenics are probably not tracked, but it is in theory possible, and that's probably what he's fixated on without being able to articulate why.

The poster needs some talk therapy, but I doubt he's important enough to be tracked, so someone needs to acknowledge his fears and disabuse him of his paranoia.


Does anyone know if 1B tokens is enough to solve sudoku puzzles?


I get what you're saying but people don't really care. The typical/average person does not know anything about derivatives, backpropagation, or probabilities, so to them it all seems like magic and they anthropomorphize what they're seeing as something intelligent.

Some folks who know how this stuff works and do a good job of explaining the limitations are Melanie Mitchell and Francois Chollet. Both have extensive experience in the field and have written books on AI.

You can spend your time trying to explain to every random person that computers can't think, but they're not gonna understand what you're saying, because to them it seems like a large enough Markov chain is actually thinking.


What about those of us who do understand them and just don't agree?

After all, you could simplify it for a layperson as: 'the LLM is just doing fancy autocomplete based on how stuff appeared in the training data, so that means they're not creative'

The first part is not really up for debate, but the second part is where some of us disagree. Creativity doesn't mean novel in existence; it means novel within some context: https://www.researchgate.net/publication/254301596_The_Stand...

At some point, the pushback against these models being creative starts to feel just as emotion-driven as the people who are over-anthropomorphizing the models: "If I accept that something I know is just a ball of linear algebra is creative, then it cheapens the definition of creativity."

People bring up the stochastic parrot argument forgetting that the original paper was predicated on the dangers of not considering the power that lies in something that's "just" a stochastic parrot.


Ask it to solve sudoku and report back. Not generating code, but actually solving a puzzle in the prompt.
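For anyone who wants to run that experiment, it's a few lines of Python. A minimal sketch against the pre-1.0 openai package; the model name, prompt wording, and puzzle are placeholders, and the API key is read from the OPENAI_API_KEY environment variable:

    import openai  # pip install openai (pre-1.0 SDK assumed)

    # A classic puzzle; '.' marks an empty cell, one row per line.
    puzzle = (
        "53..7....\n"
        "6..195...\n"
        ".98....6.\n"
        "8...6...3\n"
        "4..8.3..1\n"
        "7...2...6\n"
        ".6....28.\n"
        "...419..5\n"
        "....8..79"
    )

    response = openai.ChatCompletion.create(
        model="gpt-4",
        temperature=0,  # keep the output deterministic-ish for checking
        messages=[{
            "role": "user",
            "content": "Solve this sudoku. Reply with only the completed "
                       "9x9 grid, one row per line, no code:\n" + puzzle,
        }],
    )
    print(response.choices[0].message.content)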


This is what I mean when I say the "inverse-anthropomorphization" crowd is increasingly driven by emotion over facts.

My reply to you was predicated on a compilation of centuries of scientific study on the subject of creativity. Your knee-jerk reply is to proclaim it's bad at sudoku while going out of your way to place artificial constraints on it.

Touting its inability to solve sudoku in-context feels like a slightly ham-fisted way of saying it's a probability-based model operating on tokens, but like I said before, there are plenty of us who already understand that.

We also realize that you can find arbitrary gaps in any sufficiently complex system. You didn't even need to rely on such a specific example; you could have touted any number of variations on common logic puzzles that they just fall completely on their faces for.

Gaps aren't damning until you tie them to what you want out of the system. The LLM can be bad at Sudoku and still capable of creativity in some domain. It's more useful to explore unexpected properties of a complex system than it is to parade things that the system is already expected to be bad at.


The fact is that no neural network can solve sudoku puzzles. I think it's hilarious that AI proponents/detractors keep worrying about existential risk when not a single one of these systems can solve logic puzzles.


I didn't say anything about existential risk, and I'm going to assume you meant LLM, since training a NN to solve sudoku puzzles has been a viable intro-to-ML project going back years: https://arxiv.org/abs/1711.08028
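If you want the flavor of that intro project, a rough sketch is below. This is not the relational-network architecture from the paper, just naive per-cell classification with a small PyTorch MLP, and it assumes a CSV of quiz/solution digit strings (e.g. the public Kaggle sudoku dataset; the file name and column names here are assumptions):

    import csv
    import torch
    import torch.nn as nn

    def encode(s):  # "004300209..." (81 chars) -> tensor of 81 digits
        return torch.tensor([int(c) for c in s], dtype=torch.float32)

    puzzles, solutions = [], []
    with open("sudoku.csv") as f:  # assumed file and column names
        for row in csv.DictReader(f):
            puzzles.append(encode(row["quizzes"]) / 9.0)           # inputs in [0, 1]
            solutions.append(encode(row["solutions"]).long() - 1)  # classes 0..8

    X, Y = torch.stack(puzzles), torch.stack(solutions)

    model = nn.Sequential(            # 81 cells in, 9 logits per cell out
        nn.Linear(81, 512), nn.ReLU(),
        nn.Linear(512, 512), nn.ReLU(),
        nn.Linear(512, 81 * 9),
    )
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    for step in range(1000):
        idx = torch.randint(0, len(X), (64,))  # random minibatch
        logits = model(X[idx]).view(-1, 9)     # (64*81, 9)
        loss = loss_fn(logits, Y[idx].view(-1))
        opt.zero_grad(); loss.backward(); opt.step()

It won't come close to the paper's accuracy, but it trains, which is the point: sudoku-as-supervised-learning has been a toy problem for years.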

To me the existential risks are pretty boring and current LLMs are already capable of them: train on some biased data, people embed LLMs in a ton of places, the result is spreading bias in a black box where introspection is significantly harder.

In some ways it mirrors the original stochastic parrot warning, except "parrot" is a significantly less loaded term in this context.


Then I don't know what you're arguing about. If you think LLMs are useful, continue using them.


Ah, the God of the Gaps argument. What's your next move when somebody implements a plugin that has the effect of being able to solve Sudoku puzzles?
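Such a plugin would also be trivial to write. A minimal sketch of the classical backtracking it would wrap, with the grid encoded as a flat list of 81 ints and 0 for an empty cell (the encoding is my own choice):

    def solve(grid, i=0):
        if i == 81:
            return True                # every cell filled: solved
        if grid[i]:
            return solve(grid, i + 1)  # given clue, skip it
        r, c = divmod(i, 9)
        for d in range(1, 10):
            # d must not already appear in the row, column, or 3x3 box
            if (all(grid[9 * r + cc] != d for cc in range(9))
                    and all(grid[9 * rr + c] != d for rr in range(9))
                    and all(grid[9 * (3 * (r // 3) + rr) + 3 * (c // 3) + cc] != d
                            for rr in range(3) for cc in range(3))):
                grid[i] = d
                if solve(grid, i + 1):
                    return True
                grid[i] = 0            # dead end, backtrack
        return False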

A good friend swore that ML was 50 years away from being able to beat a Go grandmaster. To his credit, he stopped making such sweeping predictions after that happened. He didn't fall back to, "Well, I don't know, let's see how it does at Risk."


What does it mean to address the risk of superintelligence? There is no way to stop technological progress, and AI development is just part of the same process. Moreover, the alarmism doesn't make much sense because we already have misaligned agents at odds with human values; those agents are called profit-seeking corporations, but I never hear the alarmists talk about putting a stop to for-profit business ventures.

Do you know anyone who considers the pursuit of profit and the constant exploitation of natural resources a problem that needs to be addressed? Because I don't. Everyone seems very happy with the status quo, and AI development is just more of the same: corporations seeking ways to exploit and profit from digital resources. OpenAI is a perfect example of this.


> There is no way to stop technological progress

What makes you say this is impossible? We could simply not go down this road; there are only so many people knowledgeable enough, and with access to the right hardware, to make progress toward AI. They could all agree, or be compelled, to stop.

We seem to have successfully halted research into human cloning, though that wasn't a given and could have fallen into the same trap of having to develop it before one's enemy does.


There are no enemies. The biosphere is a singular organism and right now people are doing their best to basically destroy all of it. The only way to prevent further damage is to reduce the human population, but that's another non-starter, so as long as the human population keeps increasing, the people in charge will be compelled to push for more technological "innovation", because technology is the best way to control 8B+ people[1].

Very few people are actually alarmed about the right issues (in no particular order): population size, industrial pollution, military-industrial complex, for-profit multi-national corporations, digital surveillance, factory farming, global warming, etc. This is why the alarmism from the AI crowd seems disingenuous: AI progress is simply an extension of for-profit corporatism and exploitation applied to digital resources, and properly addressing the risk from AI would require addressing the actual root causes of why technological progress is misaligned with human values.

1: https://www.theguardian.com/world/2015/jul/24/france-big-bro...


> The biosphere is a singular organism and right now people are doing their best to basically destroy all of it.

People are part of the biosphere. If other species can't adapt to Homo sapiens, well, that's life for you. It's not fair or pretty.


Every cancer eventually kills the host, so either people figure out how to be less cancerous or we die out drowning in the byproducts of our metabolic processes, just like yeast drown in alcohol.

The AI doomers can continue worrying about technological progress if they want; the actual problems are unrelated to how much money and effort OpenAI spends on alignment, because their corporate structure requires that they continue advancing AI capabilities in order to exploit the digital commons as efficiently as possible.


Ignoring the provocative framing of humanity as a “cancer”: earth has had at least five extinction-level events from environmental changes, and life on earth has adapted and changed through all of them (and likely will continue to, at least until the sun burns out).

We have an interest in not destroying our own environment, because it’ll make our own lives more difficult and can have bad outcomes, but it’s not likely an extinction-level risk for humans, and even less so for all other life. Solutions like “degrowth” aren’t real solutions and cause lots of other problems.

It’s “cool” for the more extreme environmental political faction to have a cynical anti-human view of life (despite being human) because some people misinterpret this as wisdom, but I don’t.

The unaligned-AGI existential risk is a different level of threat, and could really lead to killing everything in pursuit of some dumb goal.


Seeking profit and constant population growth are already extremely dumb goals on their own. You can continue worrying about AGI if you want, but nothing I've said is either cynical or anti-human. It is simply a description of the global techno-industrial economic system and its total blindness to all the negative externalities of cancerous growth.

Continued progress and development of AI capabilities does not change the dynamics of the machine that is destroying the biosphere, and it never will, because it is an extension of profit-seeking, exploitative corporate practices carried over to the digital sphere. Addressing the root causes of misalignment will require getting rid of profit motives and accounting for all the metabolic byproducts of human economic activity and consumption. Unless the AI alarmists have a solution to those things, they're just creating another distraction, diverting attention away from the actual problems[1].

1: https://www.nationalgeographic.com/environment/article/plast...


The reality is more complicated than it appears, but I've observed a bunch of construction projects near the Millbrae Caltrain station, and they finish the buildings extremely fast. A bunch of metal frames go up, then some pipes, then some wires, and then some concrete walls.

The project planning and construction is done by a single company, Truebeck Construction, and they seem to know what they're doing.

I don't know what Patrick's list is trying to illustrate, but most construction projects in the US happen very fast.


This is not a widespread opinion or pattern as I see it. This is the first time I'm hearing from someone that we in America "build things fast".

I suggest reading Eli Dourado's blog: https://www.elidourado.com/

and his many blog posts on HN: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...


Just speaking from personal experience. The methods that I observed are very modern and very efficient. The parts for the entire building are pre-fabricated and then assembled at the construction site. So the stagnation narrative is more than likely overblown, and the folks who think things are slower now should visit some of those Truebeck construction sites and observe for themselves.



Have you personally seen those construction sites? Just go and observe; the stagnation narrative is overblown.


Doesn't everyone think they're the good guy?


Corporations like Saudi Aramco are already doing that. You don't need a superintelligent AI; corporations that maximize profit are already sufficient as misaligned superhuman agents.


You can't maximize profit without customers, so they must be aligned with someone.


They're aligned with the military-industrial complex. The US military is one of the biggest consumers of fossil fuels[1], and it's the same with other nations and their energy use. So being profitable is not the same as being aligned with human values.

1: https://en.m.wikipedia.org/wiki/Energy_usage_of_the_United_S...


> The US military is one of the biggest consumers of fossil fuels

I guess this phrasing is up for debate, but according to the linked source, "the DoD would rank 58th in the world" in fossil fuel use.

Is that a huge amount of fossil fuel use? Absolutely. But one of the biggest?


> According to the 2005 CIA World Factbook, if it were a country, the DoD would rank 34th in the world in average daily oil use, coming in just behind Iraq and just ahead of Sweden.

Sure, the phrasing could be debated, but the fact that it even ranks close to actual nation-states is already problematic. The US military is basically an entire nation-state of its own. This is nothing new if you're old enough to have observed the kind of damage it has done, but it demonstrates my point about profit and alignment. Profits are very often misaligned with human values because war is extremely profitable.


Oh, there's no denying the US military has ballooned to the size of a small-to-medium-sized country. That alone is a huge issue for me personally - I don't agree with our country having any form of standing military, but that precedent was abandoned 80 years ago.

I'm not sure how to properly compare the military of one country with the entirety of a country ~1/30th the size. On the surface it doesn't seem crazy for those to have similar budgets or resource use.


The comparison is in terms of energy use, since at the end of the day that is the fundamental currency of all techno-industrial activity. The point is that the global machinery currently guiding civilizational progress is fundamentally anti-life. It constantly grows and subsumes whatever energy resources are accessible without any regard for negative externalities like pollution and environmental degradation. This is why I don't take AI alarmism seriously: the problem is not the AI, it's the organization of techno-industrial civilization and its focus on exponential growth.

It's only going to keep getting worse, and the AI alarmism is not doing anything to address the actual root causes of the crisis. If anything, AI development might actually make things more sustainable by better allocating and managing natural resources, so retarding AI progress is actually making things worse in the long run.


I think those really are separate concerns that should both be given more attention.

There's a strong correlation between GDP growth and oil use; that's a huge problem, and one that likely can't be solved without fundamentally revisiting modern economic models.

AI poses its own concerns though, everything from the alignment problem to the challenge of even defining what consciousness is. AI development won't inherently make allocating natural resources easier - with the wrong incentive model and a lack of safety rails, AI could find its own solution to preserving natural resources that may not work out so well for us humans.


The current model is already destructive, and most of the market is managed by artificial agents: Schwab will give you a roboadvisor to manage your retirement account, so AI is already managing large chunks of the financial markets. Letting AI manage not just financial assets but things like farmland is an obvious extension of the same principle, and since AIs can notice more patterns it's going to become basically a necessity, because global warming is going to make large parts of existing farmland unmanageable. Floods and droughts are becoming more common, and humans are very bad at figuring out the weather, so there will be an AI agent monitoring weather patterns and allocating seeds to various plots of land to maximize yields.

Bill Gates has bought up a bunch of farmland, and I am certain he will use AI to manage it, because manual allocation would be too inefficient[1].

1: https://www.popularmechanics.com/science/environment/a425435...


US DOD fuel use being at the level of Sweden's doesn't seem problematic to my envelope math; it seems to reflect the size of the entities involved.

Iraq is a now-broken third-world country/economy in recovery, so not a great comparable to the US. Sweden is small but a good comparable culturally/development-wise. The US is 331 million people. It spends 3% of GDP on the military. 3% of 331m is ~10 million. Sweden is 10 million people. US military fuel use is in line with Sweden's.

I could be off here (DOD != US military?), corrections welcome, but I wouldn't even be shocked if a military entity used 3-10x more fuel than the civilian average, and the above math puts us surprisingly close to 1x.
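To make the envelope math concrete, a throwaway sketch using the same round numbers as above:

    us_population = 331_000_000       # people
    dod_share_of_gdp = 0.03           # military spend as a fraction of GDP
    sweden_population = 10_000_000    # people

    # Scale the US population by the military's share of the economy to get
    # a rough "person-equivalent" size for the DoD, then compare to Sweden.
    dod_person_equivalents = us_population * dod_share_of_gdp
    print(dod_person_equivalents)                      # ~9.9 million
    print(dod_person_equivalents / sweden_population)  # ~1.0x Sweden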


The math seems correct, but the US military also encompasses conglomerates and companies like Palantir and Anduril (the main reason it is described as an industrial complex is that there is no clear distinction between those corporations and how their activities are tied up with military spending and energy use).


Bit of an interesting thought experiment there: could a corporation maximize profit without customers? I wonder if we can find any examples of this type of behavior...


Yes, but a profit maximizer doesn’t need to eliminate all humans to become a big problem.


In fairness, corporations can still be fraudulent.


Birth rates don't have much to do with the superiority of technology over biology. The most technologically advanced nations will determine the course of history just like they have in the past and right now that's the US and China.


Does advanced technology matter if it can't make people happy?


I like Zizek's take on this: being happy is overrated, and most people are actually pretty happy not doing anything other than watching YouTube and playing video games.

Plus, I don't see any way to prevent technological progress. It has a "will" of its own and will continue to advance. Jacques Ellul has already written about this, but I think Heidegger and McLuhan are also in the same camp, with similar ideas.


> and most people are actually pretty happy with not doing anything other than watching YouTube and playing video games.

Are you this guy? https://www.youtube.com/watch?v=GYoKRS_eWZY


Chomsky? No, but I also like Chomsky's perspective on most matters. He's more often correct than not.


Not funny.


And does it matter if there are no people?


No. But what kind of evidence do you have to push this prediction?


I'm not predicting that the future has no people. I'm saying that if the play is technology versus people, you should go to the side of the people.

Because who is going to use the technology?

If the northern juggernauts (U.S., Russia, and China) a.k.a. the "Judeo-Christian West" -- what a misnomer that is -- (Europe is not even a consideration at this point, just look at France) are not reproducing their own populations, but are exporting technology to the rest of the world via proxy wars, smuggling, and aborted invasions, and importing people who have divided allegiances to their home countries, how does the power dynamic play out?


You can't really avoid religion and politics if you are trying to figure out modernity. But if you want to know where the world is heading, there are lots of transhumanists like Ray Kurzweil who have written about what they foresee as the inevitable endpoint of technological development, and I have no reason to believe that his vision is incorrect. Biology will be subverted by software in the very near future, and it will look like what Kurzweil has described in his books.

Barring catastrophic extinction events like supervolcanic eruptions and meteor strikes, Kurzweil's vision seems inevitable to me.


Mind giving a brief summary of Kurzweil's thesis?


You will merge with the machine and live forever in a technological Eden.

