
Most of the comments here only make sense under a model where AI isn't going to become extremely powerful in the near term.

If you think upcoming models aren't going to be very powerful, then you'll probably endorse business-as-usual approaches, such as rejecting any proposal that isn't perfect or insisting on a high bar of evidence before regulating.

On the other hand, if you have a world model where AI is going to provide malicious actors with extremely powerful and dangerous technologies within the next few years, then instead of being radical, proposals like this start to appear extremely timid.


Released 9th October, 2023


Will Oprah screw up the AI story?

Quite possibly, but likely not as badly as this article does.

Complete clickbait title, assumes that the author's hobby horses are the most important thing in the world, bizarrely argues that crypto hype is an "attack on labour".


I agree the article's pretty bad. There's a description of the show's content here: https://deadline.com/2024/08/oprah-winfrey-host-ai-abc-speci... and Oprah talking about AI and a Reid Hoffman + GPT-4 book here: https://youtu.be/bOXRjnXp3-s Quote from Oprah:

>I am just in this moment where I'm fascinated by how this technology AI is going to be a resource for change and improvements and making the world better and yet the other side of it who's in control of it and what happens when the bad guys get it.


Just thought I'd add a comment as someone who came top of the state in my grade in multiple olympiad competitions:

I always felt that a large part of my advantage came from having a strong understanding of maths from the ground up.

I felt that a lot more people could have gained the same level of understanding as I did if they had been willing to work hard enough, but I also felt that almost no-one would, because it'd be an incredibly hard sell to convince someone to engage in a years-long project where they'd go all the way back to kindergarten and rebuild their knowledge from the ground up.

In other words, excellence is often the accumulation of small advantages over time.


It's not just working hard enough, but also doing the right kind of work. Many people make the mistake of trying to memorize things without understanding. That may be easy at the beginning, when you only memorize a fact or two, but it gradually accumulates, especially in math, where the old topics never go away as new ones are introduced. And then the memorizers are actually working much harder, and even that is not enough, so they fail.

So why the aversion to understanding? I suspect part of it is generational; if your parents sucked at math because they relied on memorization, they probably won't introduce you to math as something worth understanding. It will either be "give up", or "work harder" but in the sense of memorizing harder. Not just your parents, but the entire culture around you will be like that. Another part is that most math teachers at elementary schools actually suck at math, because teachers are many, but people good at math are few and have many better careers available. Yet another problem is the school system's insistence on everyone moving forward at a predetermined speed -- sometimes understanding takes time, and when you don't have the time, you are forced to memorize; but once you start memorizing, you usually need to keep memorizing, because understanding can only be built on understanding the prerequisites.

Properly taught elementary-school math should be fun, like this: https://www.matika.in/en/ Fun makes people think.


A lot of people don't understand what understanding really even entails. They don't know that some people actually understand a topic/idea/whatever, can play around with the ideas in their head, think from first principles on the topic. They've never understood a topic in their life.


If passion, or one's own experience, is missing, it may be a case of unknown unknowns for both parents and teachers.

The Matika site looks really nice but I have difficulties comprehending the instructions. Even the very first one for first grade. “Children step by record.” What does that mean? I tried the next one. “During addition we write addends below each other…” What? If all addends are below, no addend is on the top. It makes no sense. Then, “…and the sum below the line” with no line in sight. What, where, which line, how? That was frustrating.


Wow, the English translation sucks much more than I noticed. :(

The whole "stepping" thing is a reference to how they (in the web page author's country) teach basic addition and subtraction at first grade. There is a carpet with numbers on the floor, you start at number zero, and do addition like "2+3" means "two steps forward, pause, three steps forward, now look at the number you are standing on". The carpet is situated so that from the sitting kids' position the zero is on the left, and the numbers increase to the right.

The idea is to turn integers into something "tangible", in a way that can later be extended to negative numbers.

So the instructions should be something like: "You start at a given number. A right arrow means a step forward to a greater number. A left arrow means a step backward to a smaller number. What number do you end at?"

Sorry, I already know all these things by heart, so I didn't notice how the English instructions don't make sense. Guess I should contact the author about it.
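
A minimal sketch of that idea in Python, purely my own illustration (the function name `walk` is made up, not from the site): addition and subtraction become steps along a number line, which extends naturally to negative numbers.

    # Toy model of the "carpet" exercise: integers as positions on a number line.
    def walk(start, steps):
        # Start at `start`; positive steps go right (forward),
        # negative steps go left (backward).
        position = start
        for step in steps:
            position += step
        return position

    print(walk(0, [2, 3]))   # "2 + 3": two steps forward, then three more -> 5
    print(walk(5, [-7]))     # the same idea extended below zero -> -2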


I feel the same way when I'm on hold and a recording tells me "your call will be answered in the order it was received". This isn't about grammatical pedantry -- I don't care that they didn't say in which -- it's about it not making sense. Which, as I said, isn't grammatical pedantry. But it probably is still a bit pedantic. Still, though, how can one thing have an order? What order was my call received in? Is it before or after itself? I get the sense that whoever recorded that didn't spend any time actually thinking about it, or they would have said "Calls are answered in the order [in which] they are received" or something.


Understanding is critical.

I unfortunately spent the entire introduction to calculus in hospital, so missed it - when I came back to school, I was dropped straight into “differentiate this” and “integrate that”. There was no explanation of what either operation was, just the rules that you followed to obtain the result. I had no idea that we were looking at rates of change or at areas under curves. For the first time in my life, I found myself bewildered, and struggling - until a month later I happened to find myself reading a biography of Newton which actually explained what the purpose of calculus/fluxions was - and then it became easy, as it was obvious if a result was nonsensical.


I knew people who somehow missed the information that fractions are the same as division. So they could e.g. reduce the fraction 40/20 to 4/2, then after thinking about it longer also to 2/1, and... then had no idea what to do next.

For me it was completely mind-blowing: how could someone do fractions without understanding what they are? But I imagine at school they could probably solve some problems, couldn't solve others, got a C, moved on to the next lesson.


I just think that for some people math is very fun from a very young age, and so of course they practice it. For others it may not be, so it is hard work for them. E.g. I have enjoyed doing these exercises for as long as I can remember. When walking home I used to multiply different numbers in my head as a pastime. Most people are not going to do those things, and for me it didn't feel like hard work at all.

In first grade I used to run through workbooks, addicted to solving those problems like some addictive mobile or video game, until the teacher had to stop me, and I was frustrated.

I only had this addiction to math and physics - a bit to chemistry, and I couldn't really focus on other text / memorization based subjects.

And it makes sense to me that genetically in a population you will have brains out of the box that are naturally optimized for different specialities, since having a specialized brain allows you to have more power in that specific area. Problem is when you force those specialised brains into the same way of studying.


Exactly. We enjoy different activities. For math oriented kids it's not a grind, it's interesting and fun. For me, reading novels was much more of a grind, as I just wasn't that interested in people and their conflicts and condition.

It took until my twenties for me to see the value in the humanities and "social" topics.

Similarly, most people will naturally learn about countless types of fashion and the connotations of liking various music bands etc., which is actually quite a lot of information to memorize. But it's fun and feels relevant, while math feels disembodied and irrelevant to their social goals in life.


I think mathematics education is pretty horrible this way. You only start actually learning the foundations of math in your 3rd or 4th year of undergrad.

At least nowadays there are a shit ton of YouTube resources and more, so a self-interested kid can learn it far more easily. I tried, and the books that were out there were... sparse, and textbooks are written for other professors, not students.


I can only blame teachers. In primary school, after four years they finally managed it so that there's only one kid per class left (that would be you, I guess) who has fun with math. At home I am fighting an uphill battle because I know it can be fun (my kid even likes logic puzzles). Living in Germany, for the record.


I'd rather blame the system that teaches the teachers. I'm certainly not going to blame someone for not knowing how to teach an onramp that they don't even know exists because they themselves were never taught properly.


Sometimes it’s just the teacher.

I loved reading until my grade 4 teacher decided we would all write a book report every week. Haven’t read a story book since. It’s been 20+ years.

Forced fun is never fun. The other grade 4 class wrote three that year.


People react differently to tasks and teachers; there is no one way to do it. I got good grades because a teacher let us repeat a task like "write a report" seven to fourteen times. He gave the feedback in class, highlighting the important points for getting a good grade, and then 24 hours after handing in the report we got it back with notes mentioning which important parts we had missed.

This taught me the rather simple lesson that getting it right on the first try is really hard.

Writing a book report is completely different from reading a book. I have heard of people studying literature at university getting sick of books because of the same issue.


That sounds like a dream. I have a distinct memory of doing in-class writing in 3rd grade where the teacher would force us to redo it if there were mistakes and give minimal feedback. As far as my 3rd grade memory can recall, I rewrote it several thousand times and never got it all the way right.


It was a dream for ME. I always think of that teacher when I do code reviews. The important part is how you manage to communicate things effectively. I had one friend who never managed to get better and complained, not loudly but it was clear it felt like hell to him.

I do not know if the instructions would have worked for you; maybe they only worked for the ones who really saw a need to improve at this specific task. I know most people misunderstand me when I give out instructions.


Yet understanding is necessary but not sufficient when you read university math, especially advanced courses.

Proofs assume you have the elusive thing referred to as ”mathematical maturity”, which means many algebraic manipulation steps are skipped because it’s assumed you can just see the result straight away.

This ability to see the connection is not understanding but learning by rote, having done the same tricks with similar equations a thousand times.

This is what makes advanced math books/courses slow for me as a CS phd researcher. I can very slowly progress through, but it takes a massive amount of time to work through what just happened. If you take 60 instead of 20 courses on math the routine you have is just completely different. I guess you can call it fluency in the language.

(For example, right now I'm reading optimal control & variational calculus along with the functional analysis it needs; it's heavy.)


Most kids don't build up knowledge over time, they forget it all over summer vacation.


Very well put. Many people are very blind to this; they forget that everything they can do, they at some point had to learn as well. And not everyone learns everything at the same time.

Anecdotally, this is something I can confirm from personal experience with math. For as long as I can remember, I had trouble with a lot of it.

Then during the last years of high school I had an excellent teacher, and a lot of concepts actually did start to click on some level. Frustratingly, I still had a lot of trouble: while I understood the abstract concepts much better, in order to solve problems I still had to apply a lot of concepts I was supposed to have learned in all the years previously.


How would you approach rebuilding foundational knowledge from the kindergarten level? I have completed all the courses on Khan Academy from kindergarten through 6th grade and have also practiced with more challenging problems beyond those provided on Khan Academy. I'm trying to find the most effective strategies to solidify these fundamental skills.


The idea of starting from scratch and rebuilding one's knowledge, especially when it means going back to the basics, is daunting.


I think much of that 'daunt' comes from the lack of instructional resources needed to support a solo journey through higher math. Yes, there are some great illuminating sources (like Khan Academy and 3blue1brown), but if you're embarking on an epic quest (like recapping a BA in math), the essential guidance needed for coherent and graceful passage through all the requisite concepts simply does not exist -- short of reading 20 HS and college textbooks, which will subject you to a maddening amount of redundancy while leaving many fundamental concepts underexplained.

The day that large language models can capably tutor me through the many twisted turns of higher math -- that's when I'll believe that deep AI has achieved something truly useful.


Can you link a chat and show specifically where one falls off explaining, e.g. complex numbers or integration by parts? It's been a while since my math minor, but ChatGPT seemed to be able to guide me through what I recall of those topics.


I always sucked at math, even though I did it in undergrad. I basically did this over the course of the last five years to try to get better. It went something like this:

Spivak - Calculus. This was a bad idea. Got maybe 30% of it. Gave up at Taylor series.

Hammack - Book of Proof. Finally understood how to prove things, and induction arguments.

Abbot - Understanding analysis. Got far, things fell apart around the Gamma function.

Apostol - Volume I. Got better at calculus. Also trigonometry. Exercises were easy. Skipped differential equations. It was too hard.

Hoffman/Kunze - Linear Algebra. Gave up after a few chapters, too hard.

Friedberg/Insel - Linear Algebra. Much better, got to the Spectral Theorem and gave up.

Rudin - Principles of Mathematical Analysis. Absolutely brutal, probably got 30% of it.

Abbot, round 2. Much easier this time, got through the whole book.

Spivak, round 2. Much better, got through the whole book. Actually found it easy.

Hubbard - Vector Calculus. Gave up early, it was too hard.

Apostol - Volume 2. Much better. Stopped somewhere in the middle when it got too focused on differential equations and physics stuff.

Back to Friedberg / Insel - Made it through the spectral theorem.

In between I was doing a lot of mathematical statistics and probability stuff like Casella-Berger (I did this book twice, each time going back to the math where I floundered). I've worked through just about every exercise in the above books and watched YouTube video lectures where they exist (there is a good one for Rudin). Solution manuals sometimes exist; sometimes you have to find university courses based on the books and look for homework assignments where they have posted solutions. Quizlet has OK solutions, though some are buggy. Apostol Volume I some dude worked through and posted online.

Anyway point is I refused to accept how stupid I am and I brutally forced myself to become better at math. My attitude was I don’t give a fuck how long it takes, I will keep going until I get better.

I think I’m better now, although I’m still shit. It’s true what von Neumann said: In mathematics you don’t understand things, you just get used to them.


As a fellow brute forcer I can appreciate this comment a lot.


What are the fundamentals one should learn in kindergarten, elementary school, etc?


I'm not going to try to recap all of that, but, as an example, if you have a sufficiently strong understanding of arithmetic, learning basic modular arithmetic should be effortless and the pigeonhole principle completely obvious.

I was quite surprised when I tried applying for a Microsoft internship in uni and they gave me a question on the pigeon-hole principle.
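
To make that concrete, here's a minimal sketch in Python, purely my own illustration (the function name is made up): among any m+1 integers, two must leave the same remainder mod m, since there are only m possible remainders -- the pigeonhole principle applied to basic modular arithmetic.

    def same_residue_pair(numbers, m):
        # Pigeonhole: with more than m numbers but only m possible
        # remainders mod m, two of them must share a remainder.
        seen = {}  # remainder -> first number that produced it
        for n in numbers:
            r = n % m
            if r in seen:
                return seen[r], n  # their difference is divisible by m
            seen[r] = n
        return None

    # Any 13 integers contain two whose difference is divisible by 12.
    print(same_residue_pair([3, 17, 29, 41, 8, 90, 55, 24, 61, 7, 100, 14, 33], 12))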


I expect this to end up having been one of the worst-timed blog posts in history. Open source AI has mostly been good for the world up until now, but we're getting to the point where we're about to find out why open-sourcing sufficiently bad models is a terrible idea.


I'm just going to say it.

The author is an idiot who is using insults as a crutch to make his case.


Did you read the report? Its answer for basically anything contentious was "views differ".


Not possible because they've got LeCun.


While this article makes some valid points, it basically just ignores the reasons why the law is being passed, that is, the potential for open models to enable bio-attacks, cyberattacks, election manipulation, automated personalised scams, and who knows what else.

One might question why that is. Perhaps it's the case that Jeremy has an excellent response to these points which he has somehow neglected to raise. Or perhaps it's because these threats are very inconvenient for an open source developer.

I'm sure he'd say that open-sourcing models means that all actors have access to defensive systems and that the good guys outnumber the bad guys and it'll all work out well.

And that could be true. Or it could be false. It's not like we really know that everything would work out fine. It's not as if we've run the experiment. I mean, maybe it works out like that, or maybe one guy creates a virus and then it doesn't really matter how many folks are on the other side, because we still get kind of screwed given that we can only produce vaccines so fast. Is that what's going to happen? I don't really know, but it's at least plausible. I mean, maybe we'll automate all aspects of vaccine production and be able to respond much faster, but that depends on when we develop this technology vs. when AI starts significantly helping with bioweapons, with someone then using it for an attack. And at that point it's all so uncertain and up in the air that it seems rather strange for someone to suggest that it'll all be fine.


As someone who has studied both computer science and molecular biology at postgraduate level I can tell you that the chance of LLMs leading to higher probability of a “bio-attack” compared with a quick Google search is zero.

Do you know how much skill, practice, resourcing and time it takes to develop bio-anything?


You imagine some extremist could somehow use Llama version 11 to print viruses from his printer?

LLMs are not intelligent; they predict text based on what they were trained on. If they could somehow build new viruses or weapons, it would mean the internet contains MANY such pieces of information for the LLM to predict something useful from, so maybe those websites, scientific papers, and blog posts need to be deleted, because some extremist group or state-sponsored group can use them directly, plus natural intelligence plus good laboratories.

But tell me, how can I make my next LLM so it would help with, say, fighting biological weapons and creating vaccines, but refuse to make evil stuff, keeping in mind that jailbreaking is always possible (scientifically proven)?


All of those things are extant now without AI. For bioterror the big issue is the massive corpus of data and the decreasing cost of equipment, not AI.


"This paper follows a recent trend of marketing excellent theoretical work as LLMs being capable of secretly plotting behind your back, when the realistic implication is backdoor risk".

Many top computer scientists consider loss of control risks to be a possibility that we need to take seriously.

So the question then becomes, is there a way to apply science to gain greater clarity on the possibility of these claims? And this is very tricky, since we're trying to evaluate claims not about models that currently exist, but about future models.

And I guess what people have realised recently is that, even if we can't directly run an experiment to determine the validity of the core claim of concern, we can run experiments on auxiliary claims in order to better inform discussions. For example, the best way to show that a future model could have a capability is to demonstrate that a current model possesses that capability.

I'm guessing you'd like to see more scientific evidence before you want to take possibilities like deceptive alignment seriously. I think that's reasonable. However, work like this is how we gather that evidence.

Obviously, each individual result doesn't provide much evidence on its own, but the accumulation of results has helped to provide more strategic clarity over time.

