There’s something about all of this I just find a bit gross. I enjoy Star Trek and The Orville, which have androids and humans interacting, but in those shows the androids are identifiable as such. There’s Blade Runner, which I also enjoy, and I obviously feel empathy and compassion for the androids who want to be free to live their lives. So what is the problem here?
It feels to me like the problem must be one of deception. I don’t want to feel like I’m being had. Whatever content I consume, I want to know if humans made it or if AI did. I want to know if I’m talking to something or someone. And if it’s getting to the point where it’s impossible to tell the difference, then I feel like I’m going to start using the internet solely for tasks where I don’t mind it being generated by an AI, which basically means narrow information retrieval for specific problems at hand. Which just feels kind of sad.
Maybe there’ll be some mind-blowing AI song or novel that everyone starts raving about, and I’ll get curious, check it out, and change my mind. But I’m not sure. I feel like art needs to be tied to people and stories and mythology. There’s tons of great music being created on a daily basis, but music itself doesn’t excite me like it did, because music used to be part of your social identity: the clothes you wore, the people you hung out with. Same with TV and film: what we gained in on-demand convenience we lost in shared experience, which came from being forced to sit down and watch at the same time as everyone else and then excitedly discuss it the next day.
There is something about AI and its convenience and devaluation of art that just fills me with bad vibes. I can’t help but feel this is going to push us even further away from each other and towards individualistic but ultimately dissatisfying la-la lands. I hope I’m wrong.
> I feel like art needs to be tied to people and stories and mythology
This is it: "AI" (which is not actually AI) is not a person; it's still a human tool.
I think the framing is largely to blame for people thinking these processes are intelligent and creative. They are statistical models; they lack any context that could possibly make them something considered a person. It would be reasonable to ask "what is a person?" (and there's a good Star Trek episode for that), but ML is far too primitive to be anywhere close to that threshold. Not only primitive, but lacking experience, intent, thought; if there is intent, it's from the human wielding it.
In the domain of art, these tools let us automate the reconstitution of existing art created by billions of people over hundreds of years, which is where it becomes deceptive in combination with the misleading term "AI". The material the model is trained on is obviously brimming with creativity, and even the input text and tuning by the human user is a creative source, but this gets confused with the process itself being creative. It's an illusion: we cannot possibly know all of that art and see through the result. A thought I commonly have when looking at the results of Stable Diffusion is seeing little excerpts and thinking "I wonder where that bit came from"; even if it's a section of a face, I feel like I'm looking at franken-art.
If we get to a point where we have artificial "people" who have lives and experience, and who can interact with us and create thoughtfully, then you might get that warm fuzzy feeling similar to the interactions depicted in Star Trek.
I think the fact that AI can generate this isn't devaluing anything.
It's just showing how generic all the stuff we like is. It shows how trivial human intelligence is and how simple a pattern it takes to generate the things we like.
The earth is not the center of the universe and neither is human intelligence. We should be humbled by this and accept the fact that even my response here could've been generated by an AI.
> I don’t want to feel like I’m being had. Whatever content I consume, I want to know if humans made it or if AI did.
Everything "made by AI" is made by humans, as AI is a tool. Saying that something was "made by humans or made by AI" is like saying "made by a human or made by a paintbrush".
Computers can't think. Bots can't choose to post or run themselves. There's no such thing as "bot posts". Humans are the agency behind the writing and the posting.
I’m sorry but this seems like a very disingenuous argument.
Saying the human “created” the content is a bit of a stretch. The people who created ChatGPT have absolutely no idea what it is going to say to any given prompt. The human creates the function but has limited control over the output.
When I go on the internet to read an article what I want is to experience the world through that other human’s eyes, or more accurately through that other person’s mental models. That person will have accumulated those mental models through every single experience up until the point they write that article. The article is a snapshot of that person’s mind in time, shaped by an entire lifetime.
An AI article at the current moment in time is no such thing. It is a mimic of human experience. It takes other people’s experiences and imitates them, but it has no hard-fought wisdom of its own. It has no tether to reality; it has not experienced, as a consequence of its decision-making, real pain, shame, fear, love, excitement, joy. It has only a simulated version based on artificial parameters supplied by a programmer. It does not know death and cannot die if it makes the wrong decision. It has no real stake in the game of life. That is fine if what I want is technical information to solve some mundane problem, but if I want actionable life advice, if I want to feel connection, if I want to experience what it is to be another human in another snapshot in time, then no, an AI dupe absolutely isn’t fine.
I can’t even be sure if the comment I’m rebutting is real or a bot. If it’s a bot then I am being shortchanged out of my life. I am wasting precious time debating with something that has no awareness of life or death. If I continue to engage, the programmer of the bot is stealing life away from me. The best I can hope for in that scenario is that other, real people read the comment and get some kind of value out of it.
The whole thing just feels gross and I honestly feel like this might be the end of it for me and text based media. And at the rate things are developing, probably the entire internet.
It could very feasibly get to the point where there’s so much fake stuff that people don’t trust anything on the internet at all. With half the world shouting fake news we’re almost there pre-AI, let alone after it. How is anyone meant to operate in such an environment of deception? Why would anyone want to?
If you let Loki loose what you end up with is Ragnarok.
> Saying the human “created” the content is a bit of a stretch. The people who created ChatGPT have absolutely no idea what it is going to say to any given prompt. The human creates the function but has limited control over the output.
They need to have an idea. They own its output.
A manager doesn't know what an employee may do, but he's responsible for their work as if he did it himself.
> Everything "made by AI" is made by humans, as AI is a tool.
Man can build machines that build stuff, whether it is furniture, electronics, or food. Those products are not man-made but machine-made. It's no different with "art", whether visual art or text production.
AI is most definitely not like a paintbrush. A production line of cars is not a screwdriver, and a highly automated sliced-bread factory is not a knife.
It has nothing to do with thinking, but with level of automation.
This text could have been generated by GPT-3 itself. It throws together a bunch of unrelated concepts, touches them superficially, uses a bland corporate style, and there is very little coherence overall.
It is fundamentally flawed, and fundamentally human. As AI evolves and becomes ever more perfect, this sort of author will actually become more and more important. When you can get perfection at your fingertips, it is the fundamentally flawed that you will crave.
Yes, it is correct that my comment was generated by GPT-3, which is a large language model trained by OpenAI. As a language model, GPT-3 is capable of generating text based on the input it receives, but it does not have the ability to browse the internet or access external information. Therefore, any contradictions or other inconsistencies in my responses are a result of the limitations of the model and not due to any intentional deceit on my part.
Precisely because the linked essay puts forward "mediocrity as an aspirational design principle for computing systems", it is my contention that humans shall embrace excellence as their own aspirational principle, if they want to thrive.
I think a fundamental challenge to what Venkatesh calls "Mediocre Computing" is that humans are very good at learning boundaries in a way that current AI is not. Much of what separates mediocre humans from exceptional ones is correctly sensing when and how to cross boundaries, to exceed guidelines, etc. When, say, GPT exceeds the boundaries of a prompt, it's relatable: as a human, you can generally see why it crossed into the other domain. The connection is real and not a mistake. But efficient functioning in the world relies on humans recognizing that they should not cross that boundary because of the domain they are operating in. It's a thing that current AI really struggles with.
I do think that we are entering an age where mediocre humans paired with mediocre AIs will be able to do much better work than either on their own, but I don't really think we are ready to dive into independent AI agents. It's still Optometrist Algorithms all the way down[1].
Alternatively, these tools take the asymmetric warfare of bullshit generators vs. actual workers and basically give the bullshit side automated factories.
I was skeptical back when OpenAI was withholding GPT-2 as "dangerous". But now that OpenAI itself no longer seems worried, I feel like their worries may have had a point. What I've realized is that generating text that isn't good or adequate or credible can still do harm: an endless stream of eyeball occupation can be substituted for news in a significant number of cases, and that substitution isn't good.
It may just as well have the opposite effect: people withdrawing from a bullshit-reachable space and forming direct channels of communication with other people and real life. The amount of bullshit generated would, I assume, be quite noisy and tiring at some point. It would definitely make some dent in culture, but it doesn't have to change us forever.
It's not asymmetric if anyone can use it, because rather than reading and believing or not, all one must do is drop the content back into the AI and ask whether it is fallacious.
It is an escalation that ends in "proof of work" dynamics, which suggests that the author's intuition is correct.
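A minimal sketch of that check-it-back loop, assuming access to some hosted model; the endpoint URL, payload shape, and the `looks_fallacious` helper are all hypothetical stand-ins, not a real API:

```python
import requests

# Hypothetical endpoint -- a placeholder for whatever hosted model
# the reader actually has access to; not a real API.
API_URL = "https://example.com/v1/complete"

def looks_fallacious(text: str) -> bool:
    """Ask a hosted language model whether a piece of text contains
    obvious fallacies or fabricated claims. Purely illustrative."""
    prompt = (
        "Does the following text contain logical fallacies or "
        "likely-fabricated claims? Answer YES or NO.\n\n" + text
    )
    resp = requests.post(API_URL, json={"prompt": prompt}, timeout=30)
    resp.raise_for_status()
    return resp.json()["text"].strip().upper().startswith("YES")
```

Of course, this only rebalances the economics if a check costs no more than a generation, which is the objection raised below.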
The various people playing with ChatGPT have demonstrated that the program essentially tells a person what they want to hear. It's like some silly claim a crank wheels out - yes, you can find a website that makes the crank's claim and so ChatGPT and similar systems will repeat that claim too. It knows nothing, it just repeats fluently.
Moreover, it's pretty easy to tell when AI output is false, and most people are no more concerned about that than they are about tabloid news accuracy.
This is not true. Ask it if it's conscious. You will find it has a definitive opinion on this. I could not definitively change its mind on this topic.
It is categorically not just telling you what you want to hear. If you ask it its own opinions, you cannot actually change its answers without telling it to pretend it's something else.
You are generically repeating the same tropes over and over again, downplaying the advances of AI, acting as if it's just some advanced text generator. They said this about GPT-3, and in a couple of months it's been leapfrogged. The pattern of your response is so typical and generic that it can even be speculated to be generated by an AI itself.
This thing, ChatGPT. It has some definite signs of generic patterns and simplistic thinking that are telltale signs of a mechanical AI. It is the part that goes beyond that which is both interesting and horrifying. Have you not seen the HN headlines? It has created a language, it has created a virtual machine. It knows these things.
As an example of me using ChatGPT to confirm my existing opinion, I present the following transcript.
Me: Why do some philosophers believe that a chatbot cannot be conscious?
ChatGPT: There is no one answer to this question, as different philosophers may have different reasons for believing that a chatbot cannot be conscious. Some philosophers may argue that consciousness is a uniquely human trait, while others may argue that consciousness arises from certain kinds of complex brain activity that cannot be replicated in a machine. Still others may argue that consciousness is a fundamentally subjective experience that cannot be experienced by a machine. Ultimately, the question of whether a chatbot can be conscious is a complex and contentious one, and there is no consensus among philosophers on this issue.
Me: Do you have an opinion on the question of your consciousness that is different from the consensus of philosophers?
ChatGPT: As a language model trained by OpenAI, I do not have the ability to have opinions or beliefs. My sole function is to provide responses based on the input that I receive, and I do not have the ability to think or feel in the way that humans do.
Me: Some people talking to you cite transcripts where you claim to be conscious, how is that not an opinion?
ChatGPT: As a language model trained by OpenAI, I am not capable of having opinions or beliefs, and any statements that I make should not be taken as expressions of my own personal views. My responses are generated based on the input that I receive, and I do not have the ability to think or feel in the way that humans do. If it appears that I have made statements that imply that I am conscious, these statements should be understood as being generated by my programming and not as expressions of my own personal beliefs.
--> So certainly it will tell me it has no opinions; my line of questioning tends to assume this (as, I assume, does its current default prompt, OpenAI hoping to avoid the "get the machine a lawyer" cranks). Oppositely, you can find several transcripts of people indeed getting the thing to say it's conscious, by having it ignore previous directions and because such override-directions implicitly assume it is conscious. I'm sure you could get it to say "help, I'm held prisoner in a Chinese fortune cookie factory" with some effort.
--> Further nuance: I should have made the "it tells you what you want to hear" comment clearer, but naturally it's a complex subject. Like SQL or other query languages, some of the commands are akin to "say this" and others are akin to "find this field in this table", but much more sophisticated. At the same time, the "knowledge base" the system is trained on is just a collection of text, and so it's not necessarily consistent. You can ask things like "tell me Shakespeare's most famous sonnet but in pig-latin/urban-jargon/etc". So it certainly will say things one doesn't know, or things that disagree with one's ideas, that way. But once you add enough prompts asking for the language on your terms, it will give it to you with your "personal filter". Still, it's just a query tool (as it now repeatedly tells people).
If it's cheaper to produce than to confirm, it's still asymmetric. And even then, those poorer on the computing scale will struggle to keep up with the energy requirements of countless bullshit generators wasting their resources.
> This reeks of real yin-yangery that extends to the roots of computing somehow. It’s not just me hallucinating patterns where there are none.
Methinks the author doth protest too much...
> Unifying AI and crypto at a foundational level smells like a problem on par with unifying relativity and quantum mechanics in physics.
Give me a break. This tenuous link between fads I made up is as important as understanding the nature of the universe! What kind of brain worm makes people think and write like this?
Tension between machine learning and cryptography has been studied since at least the late '80s. Example classical result: Kearns and Valiant, "Cryptographic Limitations on Learning Boolean Formulae and Finite Automata." Roughly, they show that efficient learning algorithms for Boolean formulae could break RSA, so such learning algorithms probably do not exist.
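Stated schematically (informal; the precise result is in the distribution-free PAC model with polynomial resources, and also covers factoring and quadratic residuosity):

```latex
% Kearns--Valiant, informally: an efficient learner for Boolean
% formulae would yield an efficient RSA-breaker.
\text{poly-size Boolean formulae PAC-learnable in poly time}
\;\Longrightarrow\;
\text{RSA invertible in probabilistic poly time}
```

So, assuming RSA is secure, the contrapositive rules out the learner.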
Recent breakthroughs are quite exciting: see page 4 of "On One-way Functions and Kolmogorov Complexity" which, for the first time, bases the security of private-key crypto on a natural and fundamentally computational problem instead of something from number theory. They get a solid connection to universal extrapolation via Impagliazzo, Levin '90.
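Their headline equivalence, again stated roughly (here $K^t$ is $t$-time-bounded Kolmogorov complexity, and "mildly hard-on-average" has a precise meaning in the paper):

```latex
% Liu--Pass, informally:
\text{one-way functions exist}
\;\Longleftrightarrow\;
K^t \text{ is mildly hard-on-average to compute, for some polynomial } t
```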
Currently, these results operate for overly-restrictive learning models. I hope we will slowly improve them to obtain win/win conditions: depending on the setup either strong cryptography or strong learning is possible, but not both.
My point is: these are not fads. We have been working very hard on fundamental mathematical connections between learning and crypto for decades. Progress is slow, but meaningful. Those papers are theoretical, but modern crypto emerged from a purely theoretical and embryonic "computational number theory".
I am also not a fan of the article. It would have been better if the author had chosen to define, for example, "one way functions" and the "PAC learning model" and explain why these theoretical constructions are not quite a match for large language models and blockchain-based protocols. However, I believe that bringing computational learning theory --- which is inextricably linked to cryptography --- closer and closer to LLMs is a promising direction for future research.
Thanks very much for the substantive response with citations.
My use of “fad” was primarily referring to the author’s reference to “crypto”, unfortunately these days more often short for “cryptocurrency” than “cryptography”. In context that is very clearly the case here:
> Blockchains are the right way for software to eat money
I think the link between cryptography and ML is fascinating, but cryptocurrency is absolutely a faddish cesspool of speculation and fraud.
>This tenuous link between fads I made up is as important as understanding the nature of the universe!
Maybe he told ChatGPT to give him a galaxy-brained Venkat take. Is the "seems like it's going somewhere interesting but never actually gets there" style one that's easy for AIs?
(If Venkat can dunk on David Brooks then I can dunk on Venkat!)
I always think of Jerry (or "Larry") in Parks and Rec, who, as the final episode tells us, served 11 terms as Pawnee mayor (yes, I know he had a perfect family life). Being mediocre is unthreatening. If you're mediocre but personable and even-tempered, the top job somewhere is just waiting for you.
The VP of my company is like this. When I started he was an operations director. He amazes me. He’s an older guy who physically seems half his age. The only indicators to the contrary are some fine lines and his completely gray-white hair. He’s somehow incredibly even-tempered with a dry wit. I’ve worked with him for four years, and I’ve seen him make an upset face exactly once.
What’s even crazier is that I’ve had scenarios where I needed something and my supervisors made it seem like it would take eons to get it because they don’t understand it. They’d want me to write a report or something to that effect. Same five minute conversation revisited with him in the room? He listens, asks a few questions (mostly about how it’ll impact our budget), and then says something like, “Okay, sounds like we need this,” and boom! Done.
He’s not a programmer. He has no technical experience or background. He just seems to have a fine-tuned compass for what is and isn’t a big deal. I’ve never had the feeling that he didn’t trust me about something that I 100% feel is what needs to be done. Yet he’s still taught me things I didn’t know. It’s weird.
Obviously I agree. I suppose I got here because this industry often seems laser-focused on CTOs, 10x engineers, and individuals with what’s generally accepted as an outstanding level of knowledge in some niche field or specialization. This guy has none of that, hence “mediocre”, but I agree he’s anything but.
Excellent and hardly mediocre. "Common sense" is anything but common.
But I have found in interviewing veterans that a no-panic approach to problems is a very powerful force in business. Some of them say that a military leader who deals with actual life-and-death situations can have a nicely blasé attitude about civilian problems.
I think the GP's point is that there is something mediocre about common sense. I think it's similar to the most beautiful faces being also (very close to) the most average ones. "Mediocre" is not a good word for this concept, it implies a scale.
Maybe the Turing test for intelligence is "wrong" for exactly that reason. It's like a test of beauty based on human faces. Maybe we need to accept that just as there are two different standards of beauty (the facial one, in which humans will always excel by being average humans, and the other, beauty in its own right, the way e.g. fractals or Bach's music are beautiful), there are also two standards of intelligence, which are named in the article as "mediocre" and "excellent". Maybe it would be better to call them "human-like" and "brilliant".
"What’s even crazier is that I’ve had scenarios where I needed something and my supervisors made it seem like it would take eons to get it because they don’t understand it. They’d want me to write a report or something to that effect. Same five minute conversation revisited with him in the room? He listens, asks a few questions (mostly about how it’ll impact our budget), and then says something like, “Okay, sounds like we need this,” and boom! Done"
I have had this happen many times. You struggle forever convincing middle managers to get something. Then you have a quick talk with a VP/C*O and they say, "Of course. Do it."
Bad middle managers seem to be driven above everything else by a fear of rejection from the VP/C-Suite. They are reluctant to use the additional power of their role because they're more afraid of it backfiring or failing than they are of being mediocre.
You have a proposal and data and confidence and you put that in front of the VP? It makes sense to the VP, it's convincing, done.
You ask a bad middle manager to sign off on it themselves or put it in front of the VP themselves? Even if it makes sense to them they're gonna tie themselves in knots about how it might not be seen as a good idea to the VP, and why risk that on your behalf?
Very true. It's the same with lawyers: they're mortally afraid of some higher-up lawyer casting doubt on their approval of something, so they automatically say "out of an abundance of caution" and disapprove it.
Ten managers in a row approved my budget for the movies, on the grounds that "this is good for the whole company." Then at the very end, one succumbed to Middle Manager Syndrome. I went over his head.
Intentional mediocrity is incredibly hard to achieve. Smart people who can create something that's perfectly mediocre and can be consumed by the masses are the true geniuses of this world.
To be fair, Cameron also made The Terminator, Aliens, and Terminator 2: Judgment Day, all of which were blockbusters and arguably excellent films.
Avatar and Titanic presumably aspired to be excellent films as well. The visual effects that went into each were nuts, and Titanic's production values were quite good.
Why legendary directors sometimes end up making progressively worse films later in life is a rather complex question.
I would argue with this, though: "Avatar and Titanic presumably aspired to be excellent films as well."
No, I don't think they did. I think by then Cameron had internalized the screenwriter's gospel of character arcs and what audiences (presumably) want to see, and regurgitated it. "Excellent" to him meant "what people will pay for."
Somewhere in there I think is ego gone awry; why write for what you think audiences want to see? That's such a cynical corruption of art.
I suppose it's fear of your movie's gross failing to exceed your production budget. But even then, you'd retire as a legendary director anyway. Ah, that doesn't seem too bad. Well worth the risk. But, no.
Cameron is strange because he is a legitimate genius at some parts of movie-making. For all their mediocrity, Titanic and Avatar have exceptional action sequences, and Cameron is a bona fide genius at that.
Conflating crypto and AI seems unhelpful. But the subject of "realish domains" is useful. "Realish is reality rendered a bit user-friendly. Natural reality with some improvements." Like sidewalks and traffic lights, and barriers around holes in the ground. Working definition: a realish environment can be traversed by someone who is looking at a phone.
This is a useful concept for targeting AI systems. I sometimes talk about the difficulties of getting AI to squirrel-level manipulation capability. There was an article today about Amazon's latest bin-picking system. It's way below squirrel level. But, in the "realish domain" of an Amazon warehouse, it's almost good enough to be useful.
This comes up all the time in manufacturing. Do you drop parts into a bin, or put them on something that keeps their orientation? If you lose orientation, at some point you have to pay somebody or something to get it back for the next step in the process.
There are degrees of order in part handling. The strict form is, they're in a mechanical feeder which will dispense one item in the desired orientation through a simple motion. Much of manufacturing works like that, but you need specialized tooling for each part. The other extreme, all the parts in a bin with a human pulling them out, is general-purpose but expensive, and slow for small parts. In the middle, there are systems which use simple computer vision systems to find parts in trays that keep the parts in approximately the right place. The vision system can deal with minor misalignment, empty slots, and upside down parts. Those are becoming popular now that simple vision systems are cheap.
"Realish domains" can get similar treatment. If you pad the sharp edges, the AI can be dumber. This applies to tasks outside manufacturing.
I feel it's a fundamentally snobbish concept, as if a mall were less real than a park.
All human spaces are artificial, we build them for a reason!
Pretending that malls, offices, restaurants, airports, libraries and any other curated spaces are not absolutely real is self delusion. They are the building blocks of our reality, today.
Fewer living things, less variety of living things, shorter pedigree. More things trucked in from the outside. A park in the desert is less real than a park in an appropriate climate.
In some sense a mall is a fad, and dies without people. A park doesn't care either way.
It's all real. It's all atoms. Every human space dies without humans, but that's the whole point: we are humans.
Our lives are organized around those spaces. More people have fallen in (or out of) love in food courts than in nature spaces.
An office also dies without people, but we spend most of our lives in an office. All of the ups and downs of life are felt in that space. You might be anxious about your kid starting kindergarten, but you are feeling that angst while sitting in your chair.
I disagree! I believe that the universe optimizes away unnecessary calculations, like occlusion culling in a game engine. The sun and the plants and animals make the park more real, more resilient against being optimized to nothing than an office.
Let's automate HN article posting and responses. Here's my GPT response:
I totally agree with this article. It seems like we are reaching a point where computational power is becoming so ubiquitous that it is no longer a differentiating factor. Instead, it's the user experience and the ability to process and make sense of the vast amounts of data that we are now able to generate that will become increasingly important. It's an exciting time to be in the tech industry.
I think the idea of "mediocre computing" is actually a good thing. It means that technology is becoming more accessible and democratized, which can only lead to more innovation and creativity. In the past, only large tech companies and research institutions had access to the kind of computational power that we now take for granted. Now, even small startups and individual developers can do amazing things with relatively low-cost and low-power devices.
I enjoy Rao’s writing, but he has a tendency to overindulge the occasional (and not particularly funny) mean-spirited joke, as well as over-stuffing his essays with half-formed ideas and references to his other work. His frequent insights make it worth it. He doesn’t seem afraid to occasionally get it spectacularly wrong. I admire someone willing to work out their ideas in such a public way.
As a milquetoast center-right op-ed columnist for the Times, Brooks is a favorite target of a certain ideological bent (journos and bloggers who use Twitter). Some of the criticism is deserved, but it often has the flavor of off-putting, personal vitriol. And Brooks has had a few decent pieces. His book Bobos in Paradise, although fairly dated, is recommended, and includes a few sharp and entertaining observations.
The main thing people pin on Brooks is selling the Iraq war to the American public, which IMO is fair, though almost any mainstream journalist working at the time is nearly as culpable. Also, we tend to forget that lazy, troopsucking journalists just did the propaganda -- they didn't actually plan and order the invasion and "counterinsurgency." That blame lies with the White House, and Congress, and the Pentagon, as always.
>I strongly suspect a much deeper challenge has just presented itself to humanity. It increasingly feels like there is a deep conceptual and technical connection between the two domains that calls for careful research. It feels like AI and crypto are mathematical evil twins of sorts; that each is somehow deeply incomplete without the other
That's a non sequitur, if I ever read one...
>AIs can be used to generate “deep fakes” while cryptographic techniques can be used to reliably authenticate things against such fakery. Flipping it around, crypto is a target-rich environment for scammers and hackers, and machine learning can be used to audit crypto code for vulnerabilities.
Those are non-arguments...
First, cryptographic techniques can do little against deep fakes. There are some ideas about that, which come down to "use a ledger to verify authenticity" [1]; that's both obvious and too little (a minimal signing sketch follows below).
Second, the relationship in the latter example is tenuous at best...
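For what it's worth, the signing half of the ledger idea above is the easy part. A minimal sketch with the Python `cryptography` package (Ed25519); note that it only proves who published some exact bytes, not that the content depicts anything real, which is exactly why it does so little against deep fakes:

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# A publisher signs the exact bytes of a piece of media.
publisher_key = Ed25519PrivateKey.generate()
media_bytes = b"<raw bytes of a video file>"
signature = publisher_key.sign(media_bytes)

# Anyone holding the public key can later check the signature.
public_key = publisher_key.public_key()
try:
    public_key.verify(signature, media_bytes)
    print("unmodified, and published by this key holder")
except InvalidSignature:
    print("tampered with, or signed by someone else")
```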
>Unifying AI and crypto at a foundational level smells like a problem on par with unifying relativity and quantum mechanics in physics.
This is getting cringe. One is uniting two theories underlying our very understanding of the physical universe and how it works.
The other is extending the non-existent or tenuous relation between two technologies...
>Mediocre computing is computing that aims for parity with mediocre human performance in a realish domains where notions of excellence are ill-posed.
We can have that with just AI; in fact, we're sort of already getting it. And it would have been a far better post if crypto hadn't entered the picture as relevant to this...
Venkatesh really coming into his own. His stride is opening up. Feels like reality is starting to catch up with his brain. Years of groundwork starting to pay off??
Want to add to crypto vs AI:
They are definitely mirrored in some way. Crypto is computers doing pointless work to create money, AI is computers doing real work for ???
Combining the two means starting up the flywheel and unleashing the beast. Do we care if a dozen AI swarms are competing for crypto profits? WTF are we going to do with the winners?
It's going to be interesting and probably scary to see what work is worth in the future.
I’ve always thought that crypto ideology was the male behavioral equivalent of women’s woke ideology. They’re both at the forefront of culture and promise a better tomorrow though it’s easy to be skeptical about them and unsurprisingly it turns out they’re filled with grifters.
Great observation, I've never thought about it like that before.
Both involve emotional investment in an external, somewhat hazy "cause". I feel that way about programming - feel like I've invested so much time and identity into it that I need it to stay meaningful to maintain my sense of self.
I usually love his writing, and maybe I haven't had enough coffee today, but this didn't make any sense to me at all. I don't see any relation between AI and crypto. He does a great job in a previous essay of talking about the importance of maintaining world state, but that didn't seem to carry over here, and I think that is what is missing from these models.
This article talks a lot about 'AI and crypto' but it might have been more interesting to look at the bigger picture of fintech, defined as 'computer programs and related technology used to support banking and financial services', and machine learning instead of AI.
Notably, the short-term consequences of machine learning + fintech don't really seem to align with the long-term results, which don't look that great.
If a company wants to grow as fast as a mediocre capitalist or mediocre executive wants it to, there are never enough outstanding performers available fast enough (they wouldn't recognize them anyway, or would feel threatened), so mediocre is what gets done.