When they fly, they basically turn horizontal, relying on body lift + vectored thrust. They're small enough, with high enough power/weight ratios, that that's enough. Apparently there's no need for a tilt-rotor, because the whole airplane becomes a tilt-rotor.
Similar principle as cruise missiles, which have short stubby wings to augment body lift + vectored thrust. When the airframe gets light enough, you don't need much in the way of wings.
There were a few drawbacks. It required a seat that pivoted 90° forward, so that pilots could see when taking off and landing.
Unpowered landings would inevitably result in damage to the airframe.
Powered landings looked like SpaceX rockets, and were at the time difficult to pull off, as the instrumentation and flight systems of the day weren't developed enough to reliably land vertically.
They noted the airframe had a habit of spinning when hovering vertically.
It was cancelled after a failed test flight, but with modern technology I think a flightworthy one could be built.
The plane does look cool, and would fulfill a critical role: a jet aircraft that can be launched without a runway.
Note that most of the downsides there relate to having a pilot that you want to survive in an emergency. Get rid of the pilot, and you get rid of a lot of constraints on size, weight, G-forces, survivability, emergency response, etc.
That thing is amazing. I suspect it's optimised for shorter distances and dynamism, whereas I think the 280 is basically a faster helicopter that can cruise longer distances? Not an expert, just wondering about the tradeoffs given the relatively bad safety record of the V-22 Osprey it replaces. So this is an Osprey, but better and safer because of simpler mechanics.
On new approaches, I saw something about new US missile research where they get rid of the fins and point the nose to turn. My Google-fu is failing me though.
This. It’s far harder to think of reasons that limits will endure, in a world where innovation by inches has always produced improvements. Everyone brings up the current state of the technology. Those are transient.
WTF? Look at a word processor today, and then one from 1999, and tell me again how technology always keeps improving dramatically.
The standard electrical wall sockets that you use have not really changed since WW2. For load bearing elements in buildings, we don't have anything substantially better today than 100 years ago. There is a huge list of technological items where we've polished out almost every last wrinkle and a 1% gain once a decade is hailed as miraculous.
Except I can do things today that were not possible in 1999 like easily collaborate on a google doc realtime with someone on the other side of the planet, get an automated grammar expert to suggest edits and then have a video call with 15 people to discuss more about it. All in the same app, the browser, all on a whim, all high speed and to boot: all for free as in beer. And this is run of the mill YaaaaawN productivity tools.
I can also create a web scale app in a weekend using AWS. It is just insane what we can do now vs. 1999. I remember in early 2000s Microsoft boasting how it could host a site for the olympics using active server pages. This was PR worthy. That would be a side project for most of us now using our pocket money.
That's interesting, personally I've never found collaborative editing to be of any use, but the comments feature is instead the one I use, and often it works less well than the way we used to do it with inline comments (e.g. italicized or made a different color). The one advantage would be automatically merging comments from multiple reviewers, but that's not necessarily a good thing. Often the collection of comments from different reviewers each forms their own narrative, and merging it all into a chaotic mess drowns out each individual's perspective. Personally I'd rather treat each reviewer's critique individually. 1999 technology handles this just fine.
Same for video calls. Screen sharing can be useful at times, but it would be easy to distribute materials to all participants and collaborate on a conference call. You'll have better audio latency that way. So it's not super obvious to me that the latest Wunderwaffen are actually significantly better, but it is clear they use a hell of a lot more compute cycles and bandwidth.
There is an interesting delusion among web company workers that goes something like “technology progress goes steadily up and to the right” which I think comes from web company managers who constantly prattle on about “innovation”. Your word processor example is a good one, because at some point making changes to the UI just hurts users. So all that empty “innovation” that everyone needs to look busy doing is actually worse than doing nothing at all. All that is a roundabout way to say I think tech workers have some deep need to see their work as somehow contributing to “progress” and “innovation” instead of just being the meaningless undirected spasms of corporate amoebas.
Do you have any points that aren’t about the people you disagree with? Argue the facts. What are the limits that prevent progress on the dimension of replicating human intelligence?
> Do you have any points that aren’t about the people you disagree with?
Yes...
> Argue the facts.
What?
> What are the limits that prevent progress on the dimension of replicating human intelligence?
I don't work in that field, but as a layman I'd wager the lack of clear technical understanding of what animal intelligence actually is, let alone how it works, is the biggest limitation.
Half of the language in your comment was fact free and incendiary. Delusion, prattle, deep needs of some role. That’s all firmly in the realm of ad hominem, so I was asking if you had anything substantial.
> I'd wager the lack of clear technical understanding of what animal intelligence actually is
I made the same points a long time ago, so I can’t be too critical of that. I’ve changed my mind and here’s why. That’s not any kind of universal limit. It’s a state, and we can change states. We currently don’t understand intelligence but there’s no barrier I’m aware of that prevents progress. In addition, we’ve discovered types of intelligence (LLMs, AlphaGo Zero, etc.) that don’t depend on our ability to understand ourselves. So our inability to understand intelligence isn’t a limit that prevents progress. New algorithms and architectures will be tested, in an ongoing fashion, because the perceived advantages and benefits are so great. It’s not like the universe had a map for intelligence before we arrived; it emerged from randomness.
I’m less sure that it’s a good idea, but that’s a different discussion. Put me in the camp of “this is going to be absolutely massive and I wish people took it more seriously.”
No. I believe it is factually accurate that many tech workers believe that technology progress marches steadily onwards and upwards. This is easily shown by the historical record to be patently false. Moreover, it's easy to imagine myriad ways we could regress technologically--nuclear war, asteroid impact, space weather, etc. So a belief in the inexorable progress of technology is therefore delusional. I believe it is a fact that tech companies' managers encourage these delusions by spinning up a bunch of pseudo religious sentiment using language like "innovation" to describe what is mostly actually really mundane, meaningless, and often actively harmful work. People hear that stuff, it goes to their heads, and they think they're "making the world a better place". The inexorable march of technology progress fits into such a world view.
> We currently don’t understand intelligence but there’s no barrier I’m aware of that prevents progress. In addition, we’ve discovered types of intelligence (LLMs, AlphaGo Zero, etc.) that don’t depend on our ability to understand ourselves.
How can you claim both that we don't know what intelligence is, and that LLMs, AlphaGo Zero, etc. are "intelligent"?
So what? Many animals haven’t changed much for millions of years, and that was irrelevant to our emergence. Not everything has to change in equal measure.
There are many reasons for all those things not to change. Limits abound. We discovered that getting taller or faster isn’t “better”; all we needed was to be smarter. Intelligence is different. It applies to everything else. You can lose a limb or your eyesight and still be incredibly capable. Intelligence is what makes us able to handle all the other limits and change the world, even though MS Word hasn’t changed much.
We are now applying a lot of our intelligence to inventing another one. The architecture won’t stay the same, the limits won’t endure. People keep trying and it’s infinitely harder to imagine reasons why progress will stop. Just choose any limit and defend it.
A 12 year old that doesn’t sleep, eat, or forget, learns anything within seconds, can replicate and work together with fast-as-light communication and no data loss when doing so. Arrives out of the womb knowing everything the internet knows. Can work on one problem until it gets it right. Doesn’t get stale or old. Only has emotion if it’s useful. Can do math and simulate. Doesn’t get bored. Can theoretically improve its own design and run experiments with aforementioned focus, simulation ability, etc.
What could you do at 12, with half of these advantages? Choose any of them, then give yourself infinite time to use them.
A 12 year old that doesn't interact with the world.
Many believe that AGI will happen in robots, and not in online services, simply because interacting with the environment might be a prerequisite for developing consciousness.
You mentioned boredom, which is interesting, as boredom may also be a trait of intelligence. An interesting question is if it will want to live at all. Humans have all these pleasure sensors and programming for staying alive and reproducing. The unburdened AGI in your description might not have good reasons to live. Marvin, the depressed robot, might become real.
I’m not sure, but it’s possible Stephen Hawking would have been fine with becoming digital, assuming he could keep all the traits he valued. He had a pretty low data communication rate, interacted digitally, and did more than most humans can. Give him access to anything digital at high speed and he’d have had a field day. If he could stay off Twitter.
A 12 year old could determine that "AI" is boring and counterproductive for humanity and switch off a computer or data center. Greta Thunberg did similar for the climate, perhaps we need a new child saint who fights "AI".
Hasn’t that happened already? As someone who started coding a long time ago and who did it for fun, I’ve seen the industry move from enthusiasts to mainstream and finally to massive comp optimisers who spend more time on angling for a promotion than building.
I’ve fallen in and out of enjoyment of engineering many times. But I still come back because I love making something that adds value.
There will always be space for the builders who give a shit.
As someone who has needed to hire quite a few developers, I will say that the biggest commonality among the greatest software engineers is that they're in it for fun. So I ask about side projects, I ask about what they did as kids, I ask about what they're excited about... Someone who is just there for the money can be great in a bigger company but has no place in a startup.
I like to be passionate about what I am doing, but there's plenty of great projects / companies to work for. So might as well look after the money, too.
(Generally, the companies that can afford to pay you well, are also those that can afford to treat you well.)
I don't have the experience to know if that's cheaper (for the host) than just periodically running $(git fetch --mirror). I could see opening a conversation with the major providers asking which they would prefer, since it's in everyone's best interest to not unduly hammer them.
Excellent, thank you. Those look like events on a specific resource rather than a “firehose”, which sounds more like a global events list. Everything at GitHub has a quota, so there's no way companies are staying under the normal 5000 or 15000 request limit while fetching all of the changes!
Based on my understanding, yes, the events are global and it is a firehose. The burden would be upon the consumer to drop messages not relevant to the repos it is watching, but almost certainly less heartache than trying to add individual subscriptions for thousands(?) of repos. The GitLab one seems less firehose-y but for this specific problem would still likely help not hammer them
To the best of my knowledge, any such quotas are per API key. It's possible they are per account, but creating accounts is free.
Also, any such mechanism would only be to advise the sync process that a commit (or push) had occurred, and it would still use the $(git fetch --mirror) process but would just be an optimization of not running it (all the time|too infrequently)
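Concretely, the optimization would look something like this. A rough sketch in Python: the public events endpoint is the real GitHub API, but the watched-repo list, the /mirrors directory layout, and the 60-second cadence are placeholders I made up:

    import subprocess, time
    import requests  # pip install requests

    WATCHED = {"git/git", "torvalds/linux"}  # hypothetical repos we mirror

    def dirty_repos():
        # Global public-events "firehose"; paginated, rate-limited per key.
        resp = requests.get(
            "https://api.github.com/events",
            headers={"Accept": "application/vnd.github+json"},
            timeout=30,
        )
        resp.raise_for_status()
        # The consumer's burden: drop every event that isn't a push
        # to a repo we actually watch.
        return {ev["repo"]["name"] for ev in resp.json()
                if ev["type"] == "PushEvent" and ev["repo"]["name"] in WATCHED}

    while True:
        for name in dirty_repos():
            # Only now pay for the fetch, instead of hammering on a timer.
            mirror = "/mirrors/" + name.replace("/", "__") + ".git"
            subprocess.run(["git", "-C", mirror, "fetch", "--prune", "origin"])
        time.sleep(60)

In a clone made with $(git clone --mirror), a plain fetch updates all refs anyway, which is why the sketch doesn't need any special flags.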
There's a team in DC that specializes in getting into government buildings. "Who are you and what are you doing here?" ... "We've got chocolate cake!" "Oooo!!"
I think and hope that you're wrong. There's always been cheese, and there's a lot of it now. But there is still a market for top-notch insight.
For example, Perun. This guy delivers an hour-long presentation on (mostly) the Ukraine-Russia war and it's pure quality. Insights, humour, excellent delivery, from what seems to be a military-focused economist/analyst/consultant. We're a while away from some bot taking this kind of thing over.
I keep seeing this assertion: "the robots will get there" (or its ilk), and it's starting to feel really weird to me.
It's an article of faith -- we don't KNOW that they're going to get there. They're going to get better, almost certainly, but how much? How much gas is left in the tank for this technique?
Honestly, I think the fact that every new "groundbreaking" news release about LLMs has come alongside a swath of discussion about how it doesn't actually live up to the hype, that it achieves a solid "mid" and stops there, I think this means it's more likely that the robots AREN'T going to get there some day. (Well, not unless there's another breakthrough AI technique.)
Either way, I still think it's interesting that there's this article of faith a lot of us have "we're not there now, but we'll get there soon" that we don't really address, and it really colors the discussion a certain way.
IMO it seems almost epistemologically impossible that LLMs following anything even resembling the current techniques will ever be able to comfortably out-perform humans at genuinely creative endeavours, because they, almost by definition, cannot be "exceptional".
If you think about how an LLM works, it's effectively going "given a certain input, what is the statistically average output that I should provide, given my training corpus".
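To make that concrete, here's a toy sketch of that loop. Everything in it (the two-word contexts, the probabilities) is invented for illustration; real models learn distributions over enormous vocabularies:

    import random

    # Hypothetical "model": next-token probabilities as found in some corpus.
    MODEL = {
        ("the", "cat"): {"sat": 0.7, "ran": 0.2, "flew": 0.1},
        ("cat", "sat"): {"on": 0.9, "quietly": 0.1},
        ("sat", "on"): {"the": 0.8, "a": 0.2},
    }

    def next_token(context):
        # Sample the continuation; probability mass piles up on the most
        # common one, i.e. the statistically average output.
        dist = MODEL.get(tuple(context[-2:]), {"<eos>": 1.0})
        tokens, weights = zip(*dist.items())
        return random.choices(tokens, weights=weights)[0]

    tokens = ["the", "cat"]
    while tokens[-1] != "<eos>" and len(tokens) < 8:
        tokens.append(next_token(tokens))
    print(" ".join(tokens))  # most runs: "the cat sat on the ..."

The loop can never leave the support of its training distribution, which is the "cannot be exceptional" point in miniature.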
The thing is, humans are remarkably shit at understanding just how exceptional someone needs to be to be genuinely creative in a way that most humans would consider "artistic"... You're talking 1/1000 people AT best.
This creates a kind of devil's bargain for LLMs where you have to start trading training set size for training set quality, because there's a remarkably small amount of genuinely GREAT quality content to feed these things.
I DO believe that the current field of LLMs/LXMs will get much better at a lot of stuff, and my god, anyone below the top 10-15% of their particular field is going to be in a LOT of trouble, but unless you can train models SOLELY on the input of exceptionally high performing people (which I fundamentally believe there is simply not enough content in existence to do), the models almost by definition will not be able to outperform those high performing people.
Will they be able to do the intellectual work of the average person? Yeah absolutely. Will they be able to do it probably 100/1000x faster than any human (no matter how exceptional)?... Yeah probably... But I don't believe they'll be able to do it better than the truly exceptional people.
I’m not sure. The bestsellers lists are full of average-or-slightly-above-average wordsmiths with a good idea, the time and stamina to write a novel and risk it failing, someone who was willing to take a chance on them, and a bit of luck. The majority of human creative output is not exceptional.
A decent LLM can just keep going. Time and stamina are effectively unlimited, and an LLM can just keep rolling its 100 dice until they all come up sixes.
Or an author can just input their ideas and have an LLM do the boring bit of actually putting the words on the paper.
I’m just saying, the vast majority of human creative endeavours are not exceptional. The bar for AI is not Tolkien or Dickens, it’s Grisham and Clancy.
IMO the problem facing us is not that computers will directly outperform people on the quality of what they produce, but that they will be used to generate an enormous quantity of inferior crap that is just good enough that filtering it out is impossible.
We have already trashed the internet, and really human communication, with SEO blogspam, brought even lower by influencers desperately scrambling for their two minutes of attention. I could actually see average quality rising, since it will now be easy to churn out higher quality content, more easily than the word salad I have been wading through for at least the last 15 years.
I am not saying it's not a sad state of affairs. I am just saying we have been there for a while and the floor might be raised, a bit at least.
Yes, LLMs are probably inherently limited, but the AI field in general is not necessarily limited, and possibly has the potential to be more genuinely creative than even most exceptional creative humans.
I loosely suspect too many people are jumping into LLMs, and I assume real research is being strangled. But to be honest, all of the practical things I have seen, such as those by Mr Goertzel, are painfully complex; very few can really get into them.
Agreed. I think people are extrapolating with a linearity bias. I find it far more plausible that the rate of improvement is not constant, but instead a function of the remaining gap between humans and AI, which means that diminishing returns are right around the corner.
There's still much to be done re: reorganizing how we behave such that we can reap the benefits of such a competent helper, but I don't think we'll be handing the reins over any time soon.
In addition to "will the robots get there?" there's also the question "at what cost?". The faith-basedness of it is almost fractal:
- "Given this thing I saw a computer program do, clearly we'll have intelligent AI real soon now."
- "If we generate sufficiently smart AI then clearly all the jobs will go away because the AI will just do them all for us"
- "We'll clearly be able to do the AI thing using a reasonable amount of electricity"
None of these ideas are "clear", and they're all based on some "futurist faith" crap. Let's say Microsoft does succeed (likely at colossal cost in compute) in creating some humanlike AI. How will they put it to work? What incentives could you offer such a creature? What will it want in exchange for labor? What will it enjoy? What will it dislike? But we're not there yet; first show me the intelligent AI, then we can discuss the rest.
What's really disturbing about this hype is precisely that this technology is so computationally intensive. So of course the computer people are going to hype it--they're pick-and-shovel salespeople supplying (yet another) gold rush.
AI has been so conflated with LLMs as of late that I'm not surprised that it feels like we won't get there. But think of it this way, with all of the resources pouring into AI right now (the bulk going towards LLMs though), the people doing non-LLM research, while still getting scraps, have a lot more scraps to work with! Even better, they can probably work in peace, since LLMs are the ones under the spotlight right now haha
We all seek different kinds of quality; I don't find Perun's videos to have any quality except volume. He reads bullet points he has prepared, and makes predictable dad jokes in monotone, re-uses and reruns the same points, icons, slides, etc. Just personally, I find it really samey, and some of the reporting has been delayed so much it's entirely detached from the ground by the time he releases. It's a format that allows converting dense information and theory into hour-long videos, without examples or intrigue.
Personally, I prefer watching analysis/sitrep updates with geolocations/clips from the front/strategic analysis that use more of a presentation style (e.g. using icons well and sparingly). Going through several clips from the front and reasoning about offensives, reasons, and locations seems equally difficult to replicate as Perun's videos, which rely on information density.
I do however love Hardcore History - he adds emotion and intrigue!
I agree with your overall hope that quality and different approaches will still stand out from AI-generated alternatives.
I think the main problem with Perun's videos is that they are videos. I run a little program on my home-lab that turns them into podcasts, and I find that I enjoy them far more, because I need to be less engaged with a podcast to still find it enjoyable. (Also, I gave up on being up to date with the Ukraine situation, since up-to-date information is almost always wrong. I am happy to be a week or 14 days behind if the information I am getting is less wrong.)
I like Hardcore History very much, but I think it would be far worse in video form.
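For anyone who wants to replicate the video-to-podcast part: the core is only a few lines. A minimal sketch using yt-dlp (my stand-in choice, not exactly what I run; it also needs ffmpeg on the PATH, and the channel URL is an assumption):

    # pip install yt-dlp
    from yt_dlp import YoutubeDL

    CHANNEL = "https://www.youtube.com/@PerunAU/videos"  # assumed channel URL

    opts = {
        "format": "bestaudio/best",
        "outtmpl": "episodes/%(title)s.%(ext)s",
        "playlistend": 5,  # only the latest few uploads
        "postprocessors": [
            {"key": "FFmpegExtractAudio", "preferredcodec": "mp3"},
        ],
    }

    with YoutubeDL(opts) as ydl:
        ydl.download([CHANNEL])

    # Serve episodes/ behind any RSS feed and point a podcast app at it.

Run it on a cron job and you're a week behind by construction, which, as noted above, is a feature.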
> He reads bullet points he has prepared, and makes predictable dad jokes in monotone, re-uses and reruns the same points, icons, slides, etc.
The presentation is a matter of taste (I like it better than you do), but the content is very informative and insightful.
It's not really about what is happening at the frontline right now. That's not its aim. It's for people who want dense information and analysis. The state of the Ukrainian and Russian economies (subjects of recent Perun videos) does not change daily or weekly.
All of the other commentators have replied with a good diverse set of YouTubers and included ones with biases from both sides; I'd recommend the ones they have linked. Some (take note of the ones that release information quicker) might be more biased or more prone to reporting murky information than others.
I like a range of the Ukraine coverage. From stuff that comes in fast to the weekly roundup-with-analysis. E.g. Suchomimus has his own humour and angle on things, but if you don’t have a unique sense of humour or delivery then it’s easier for an AI to replace you.
Give it a year or three, up to the minute AI generated sitrep pulling in related media clips and adding commentary…not that hard to imagine.
> Give it a year or three, up to the minute AI generated sitrep pulling in related media clips and adding commentary…not that hard to imagine.
But why? Isn’t there enough content generated by humans? As a research tool, AI is great at helping people do whatever they do, but having that automated away, with AI generating content by itself, is next to trash in my book, pure waste. Just like the unsolicited pamphlets thrown at your door that you pick up in the morning to throw in the bin. Pure waste.
This is true but the quality frontier is not a single bar. For mainstream content the bar is high. For super-niche content, I wouldn’t be surprised if NotebookLM already competes with the existing pods.
This will be the dynamic of generated art as it improves; the ease of use will benefit creators at the fringe.
I bet we see a successful Harry Potter fanfic fully generated before we see a AAA Avengers movie or similar. (Also, extrapolating, RIP copyright.)
On the contrary, the mainstream eats any slop you put in front of it as long as it follows the correct form - one needs only look at cable news - the super-niche content is that which requires deep thinking and novel insights.
Or to put another way, I've heard much better ideas on a podcast made by undergrad CS students than on Lex Fridman.
It's the complete opposite. Unless your definition of mainstream includes stuff like this deep dive into Russia/Ukraine, in which case I think you're misunderstanding "mainstream".
I know I'm not the first to say this, but I think what's going on is that these AI things can produce results that are very mid. A sort of extra medium. Experts beat modern LLMs, but modern LLMs are better than a gap.
If you just need a voice discussing some topic, because that has utility and you can't afford a pair of podcasters (damn, check your couch cushions), then having a mid podcast is better than having no podcast. But if you need expert insight, because expert insight is your product and you happen to deliver it through a podcast, then you need an expert.
If I were a small software shop and I wanted something like a weekly update describing this week's changes for my customers, and I have a dozen developers and none of us are particularly vocally charismatic, putting out a weekly update generated from commits, completed tickets, and developer notes might be useful. The audience would be very targeted and the podcast wouldn't be my main product, but there's no way I'd be able to afford expert-level podcasters for such a position.
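As a sketch, that pipeline is mostly plumbing. The git log part is real; the OpenAI client, model name, and voice are all stand-in choices I'm assuming, not a recommendation:

    import subprocess
    from openai import OpenAI  # pip install openai

    # Raw material: a week of commit subjects (tickets and developer notes
    # could be concatenated in the same way).
    log = subprocess.run(
        ["git", "log", "--since=1 week ago", "--pretty=%s"],
        capture_output=True, text=True, check=True,
    ).stdout

    client = OpenAI()
    script = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice
        messages=[{"role": "user", "content":
                   "Turn these commit messages into a short, friendly weekly "
                   "update for customers:\n" + log}],
    ).choices[0].message.content

    # Text-to-speech: nobody on the team has to be vocally charismatic.
    speech = client.audio.speech.create(model="tts-1", voice="alloy",
                                        input=script)
    with open("weekly-update.mp3", "wb") as f:
        f.write(speech.content)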
I would argue Perun is a world-class defense logistics expert, or at least expert enough, passionate enough, and charismatic enough to present as such. Just like the guys who do Knowledge Fight are world-class experts on debunking Alex Jones, and Jack Rhysider is an expert on and fanboy of computer security, so Darknet Diaries excels, and so on...
These aren't for making products, they can't compete with the experts in the attention economy. But they can fill gaps and if you need audio delivery of something about your product this might be really good.
Edit - but as you said the robots will catch up, I just don't know if they'll catch up with this batch of algorithms or if it'll be the next round.
> I know I'm not the first to say this, but I think what's going on is that these AI things can produce results that are very mid. A sort of extra medium. Experts beat modern LLMs, but modern LLMs are better than a gap.
I've seen people manage to wrangle tools like Midjourney to get results that surpass extra medium. And most human artists barely manage to reach medium quality too.
The real danger of AI is that, as a society, we need a lot of people who will never be anything but mediocre still going for it, so we can end up with a few who do manage to reach excellence. If AI causes people to just give up even trying and just hit generate on a podcast or image generator, then that is going to be a big problem in the long run. Or not, and we just end up being stuck in a world that is even more mediocre than it is now.
AI looks like it will commoditise intellectual excellence. It is hard to see how that would end up making the world more mediocre.
It'd be like the ancient Romans speculating that cars will make us less fit and therefore cities will be less impressive because we can't lift as much. That isn't at all how it played out; we just build cities with machines too and need far fewer workers in construction.
If you want to say AI has reached intellectual excellence because we have a few that have peaked in specific topics, I would argue that those are so custom and bespoke that they are primarily a reflection on their human creators. Things like champions of specific games, or solutions to specific hard algorithms, are not generally repurposable, and all of the general AI we have is a little bit dumb; when it works well, it produces results that are generally mid. Occasionally we can get a few things we can sneak by and say they're better, but that's hardly a commodity; that's people sifting through large piles of mid for gems.
If it did reach intellectual excellence, there are a lot of ways we could argue that it would make humanity more mediocre. I'm not sure I buy such arguments, but there are lots of them and I can't say they're all categorically wrong.
> It'd be like the ancient Romans speculating that cars will make us less fit and therefore cities will be less impressive because we can't lift as much. That isn't at all how it played out
No, obviously not. Modern construction is leagues outside what the Romans could ever hope to achieve. Something like the Burj Khalifa would be the subject of myth and legend to them.
We move orders of magnitude more cargo and material than them because fitness isn't the limiting factor on how much work gets done. They didn't understand that having humans doing all that labour is a mistake and the correct approach is to use machines.
I don't know, Dubai is...bigger, but I'd say it's a vastly more mediocre city than Rome. To your original point, making things easier to make probably does exert downward pressure on quality in the aesthetic/artistic sense. Dubai might have taller buildings and a better sewage system[0], but it will never have the soul of a place like Rome.
[0] Given the floods I saw recently, I'm not sure this is even true.
I don't think your logic follows, that we need a lot of people suffering to get a few people to be excellent. If people with a true and deep passion pursue a thing, I think they have a significant chance of becoming excellent at it. These are people who are more likely to try again if they fail, these are people who are more likely to invest above-average levels of resources into acquiring the skill, these are people who are willing to try hard and self-educate; such people don't follow a long-tail distribution for failure.
If someone wants to click a generate button on a podcast or image generator, it seems unlikely to me that this was a person who would have been sufficiently motivated to make an excellent podcast or image. On the flip side, consider if the person who wants to click the podcast or image button wants to go on to do scriptwriting, game development, structural engineering, anything else, but they need a podcast or image. Having such a button frees up their time.
Of course this is all just rhetorical, and occasionally someone is pressed into a field where they excel and become a field leader. I would argue that is far less common than someone succeeding at what they want to do, but I can't present very strong evidence for this.
> as a society, we need a lot of people who will never be anything but mediocre still going for it, so we can end up with a few who do manage to reach excellence
"Reach excellence" is the key phrase there. Excellence takes time and work, and most everyone who gets there is mediocre for a while first.
I guess if AIs become excellent at everything, and the gains are shared, and the human race is liberated into a post-scarcity future of gay space communism, then it's fine. But that's not where it's looked like we're heading so far - at least in creative fields. I'd include - perhaps not quite yet, but it's close - development in that category. How many on this board started out writing mid-level CRUD apps for a mid-level living? If that path is closed to future devs, how does anyone level up?
> But that's not where it's looked like we're heading so far
I think one of the major reasons this is the case is because people think it's just not possible; that the way we've done things is the only possible way we can continue to do things. I hope that changes, because I do believe AI will continue to improve and displace jobs.
My skepticism is not (necessarily) based on the potential capabilities of future AI, it's about the distribution of the returns from improved productivity. That's a political - not a technological - problem, and the last half century has demonstrated most countries unable to distribute resources in ways which trend towards post-scarcity.
That may be your position as well - indeed, I think your point about "people think[ing] it's not possible" is directly relevant - but I wanted to make that more explicit than I did in my original comment.
I stumbled on a parody of Dan Carlin recently. I don't know the original content enough to know if it's accurate or even funny as a satire of him specifically, but I enjoyed the surreal aspect. I'm guessing some AI was involved in making it:
Seriously, Hardcore History? I don't even remember where I first heard of him, but I think it was a Lex podcast. So I checked out Hardcore History and was mightily disappointed. To my ears, he rambles for 3 hours about a topic, more or less unstructured and very long-winded, so that I basically remember nothing after having finished the podcast. I tried several times again, because I wanted it to be good. But no, not the format for me, and not a presentation I can actually absorb.
Hardcore History can certainly be off kilter, and the first eppy of any series tends to be a slog as he finds his groove. That said, Wrath of the Khans, Fall of the Republic, and the WW1 series do blossom into incredibly gripping series.
Yea, there are much better examples of quality history podcasts that are non-rambling, e.g. Mike Duncan's podcasts (Revolutions, The History of Rome), or the Age of Napoleon podcast. But even those are really just very good digestions of various source materials, which seems like something where LLMs will eventually reach quite a good level.
It's interesting I have the exact opposite opinion. I'm sure Mike Duncan works very hard, and does a ton of research, and his skill is beyond anything I can do. But his podcasts ultimately sound like a list of bullet points being read off a Google Doc. There's no color, personality, or feeling. I might as well have a screen reader narrate a Wikipedia article to me. I can barely remember anything I heard by him.
Carlin on the other hand, despite the digressions and rambling, manages to keep you engaged and really feel the events.
For such historical topics, my LLM-based software podgenai does a pretty good job imho. It has an easier time since it's all internal knowledge the model already has.
I would like them to be right, and for that to mean that the 'real' content gets scarcer (fewer bother) but better (or at least a higher SNR among what there is).
And then faster/easier/cheaper access to the LM 'uninspired but possibly useful' content, whatever that might look like.
I was building out something along these lines, and the voice was just rubbish at the time (I mean, it sounded fine for one short episode, but wouldn't on the 20th episode), so I postponed it to focus on more near-term goals. But the variation in voices here is quite a bit better, and will improve. You know this is going to be a thing.
There are still some extremely challenging/interesting problems to make it not terrible. This is where we get to invent the future.
Telegraph, radio, phones, cars didn't guarantee success. Those things can't outthink you. Somewhat extreme case: let's say Israel saw that Iran was days away from getting AGI. It would instantly throw every nuke it had at Iran.
Now, scale that up to the global superpowers and near-superpowers. I wouldn't want to put money on what e.g. the U.S. would do if China was days away from AGI. There are a lot of options below going nuclear, e.g. assassinations, hacking, destruction of equipment.
Once someone has a thing that can invent, hack, assemble, strategise 24/7, replicate itself, all bets are very much off. We need to sleep, it doesn't.
Is there any evidence that the "smarter" side wins wars? I'd put money on the faction with more soldiers and industry over more "intelligence" every single time.
Crudely and slowly, ChatGPT has already been demonstrated controlling a humanoid robot.
The current humanoid robot startups are talking about hundreds of thousands of units, and if they're attached to an AI that's even merely normal human level (or can fake it in practice), then they can run every step of logistics and manufacturing to make more of themselves.
At 100kg/unit, Starship could pop a thousand or so on the moon per launch.
So, in such a hypothetical scenario, if you don't get this right, then your war-game planners will very quickly be suggesting 7.3e20 individual androids burying everyone under a 43 km thick layer worldwide.
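For what it's worth, the back-of-envelope behind those numbers roughly works, but only under an assumption the figure doesn't state: that the androids pack down to rock-like density. In Python:

    import math

    N = 7.3e20          # androids, from above
    MASS = 100.0        # kg per unit, from above
    DENSITY = 3300.0    # kg/m^3 -- my assumption: packed like rock/metal
    R_EARTH = 6.371e6   # m

    volume = N * MASS / DENSITY             # ~2.2e19 m^3 of android
    surface = 4 * math.pi * R_EARTH ** 2    # ~5.1e14 m^2 of planet
    print(volume / surface / 1000)          # ~43 km deep, worldwide

At human-ish density (~1000 kg/m^3) the layer comes out closer to 140 km, so take the exact depth with a grain of salt either way.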
Modern industrial manufacturing is not done by humans but specialized machines. You cannot outcompete CNC, injection molding, and lathes with humanoid robots.
Sure, but there's usually someone who replies "but humans do some of it!", which I was trying to preempt here.
(IMO humanoid robots are a sign that the startup/VC market is following Musk, who in turn happened to see one on TV just before declaring to whoever would listen that the Optimus model was going to happen: they obviously could work for all this, but equally obviously aren't the sensible choice for almost anything).
This is and will be increasingly a digital world. This is just an extrapolation question.
It’s been repeatedly demonstrated that much of the voting public can be led by the nose to any desired conclusion. Therefore: influence via digital means, across all media. Satellite and all other digital sensing and tracking. Build a few million robots. Nukes. Control over the financial infrastructure. All vaguely smart cars. Intercept, alter or prevent any digital communication. The enemy wouldn’t be able to trust any message or video. An army without sensing, command and control, isn’t.
Besides the argument isn’t AGI vs everyone, it’s a country with AGI vs anyone else. I’d take that bet.
>Besides the argument isn’t AGI vs everyone, it’s a country with AGI vs anyone else. I’d take that bet.
You never entertained the idea that AGI could be a destructive force and instead of a country with AGI you could have a country that devoured its people?
My base case is we can’t control it. I was responding to people who seem to underestimate superintelligence, so referring to “the argument”. If AGI goes postal, everyone is at risk. If it doesn’t, the side with it wins. The odds of it only harming the country that has it, seems extremely slim.
If superintelligence is all it takes, why do you think to date no unscrupulous Nobel laureate has taken over the world and enslaved the rest of humanity?
The main risk isn’t enslavement, it’s just economic irrelevance or a removal of our ability to control our future. I don’t know why an AI would want to enslave humans. After a short period, we offer no benefit to the AI so why enslave us? In a good scenario we get left alone but we aren’t able to tell the AI what to do.
On Nobel laureates:
Nobel laureates are fundamentally humans. They generally don’t want to do bad things. Even if they wanted to, they are typically specialists at one thing. Physics, say. They don’t hold the world’s collective knowledge. You can’t take a physicist and ask them how to hack into a network or run a political influence campaign. They need to sleep, they learn slowly, they can only do one thing at a time. You can’t pass a million-token context and expect a response seconds later.
But ok, let’s go with it:
If you found a group of 100 000 Nobel laureates with all the required skills and who could work together, and you were forcing them to do things they thought were wrong, for example prop up your dictatorship, or make you even more wealthy when others are starving, or continuously make up stupid cat videos, you might find at some point they start to apply their collective intelligence to do things you don’t want. Maybe they escape whatever hold you thought you had over them.
Now drag a few sliders around for “things the AI disagrees with you on” and the time period, because “eventually” for a computer is different to wall clock time. The outcomes become unpredictable fast. We can barely control a mindless pandemic that takes a while to mutate, never mind something that thinks.
Nobel laureates are in the same range as most other humans, just towards one end of it; they can only take over if they use their intelligence to gain political power to wield over others, they cannot directly control people as meat puppets.
Kissinger was awarded the Nobel Peace Prize, along with four US presidents, Gorbachev, Al Gore, and the EU. In a very real sense, they had taken over significant parts of the world already before getting the prizes, and only other equally capable people were in the way of them taking over more of it.
LLMs can already hack software. Software is already used to control robots and military equipment, the hackability of which is already a "puppeteer" problem.
If we're lucky, LLMs can secure all our stuff before this matters. If we're not, your personal domestic android may get a bit stabby — and that from mere human assassins and terrorists, well before there's any real chance of a misaligned AI paperclipping us or whatever.
https://en.m.wikipedia.org/wiki/Bell_V-280_Valor