Is AI the next crypto? Insights from HN comments (openpipe.ai)
237 points by kcorbitt on Nov 8, 2023 | 367 comments


The major difference between the two is how they're being adopted by customers and the tangible value they return.

AI/ML's barrier to entry is far lower and its products are vastly more user-friendly compared to crypto. The instant value return or gratification from ML products (GPTs and the rest) is far more mainstream-friendly.

Another view is the "loss" factor. Nobody, thus far, has had their funds stolen or lost using ML products. I understand content creators and those who unwillingly contributed knowledge to learning systems did get circumvented, but I'm talking about users/customers. Compare that to the negative stigma of crypto frauds and the stereotypical association with illegal transactions.

Apples vs. rotten oranges in my opinion!


GPTs have had immediate tangible benefits without needing to spend an hour preaching or explaining things.

Crypto's sole usefulness remains in providing money transfers/liquidity in parts of the world where the local systems are failing or off-limits to the users.


Crypto's main value to a LOT of people is purely speculation that you could be rich by buying it cheap and then watching the value explode. No one expected DogeCoin to provide any utility at all, but tons of people bought it.


Countless people play at the casinos, online poker games, bets; even the stock market is viewed by a lot of people as something of a gamble.

Personally I don't partake - but they get value out of it, I guess. Who am I to judge?


I don't judge anyone in the crypto space for wanting to gamble. I judge them for lying to other people about their crypto being anything but gambling, which is what it is.

I don't think we'll ever even know how many NFT projects there were out there, all taking up space on the various chains, all shilling garbage artwork, all promising all manner of shit from video games to magazines to comics to television series, many of which raised huge sums of money, virtually all of which is now gone. And it's easy to point and laugh at the people who thought these things were anything but scams, but also, in a better world, we wouldn't let tons of people be scammed like this. Being vulnerable to certain kinds of hype shouldn't give other people permission to rob you blind.


> even the stock market is viewed by a lot of people as something of a gamble

The majority of people don't have formal training in probability and statistics, not to mention limit theorems and finance, so who cares how they view the stock market? I mean, I care, in the sense of educating people, but most people don't really want to put the time in.

Stocks and gambling both have risk, but only stocks reward many/most types of risk; gambling does not. The expected value of stocks is positive; gambling's is not.

What people are trying to say about stocks is that they are stochastic, and so is gambling.

On the larger topic, crypto also does not reward risk or offer a positive expected value. Its stochastic nature is driven by the changing opinions people have about it, or by secondary effects from how much other stochastic markets might rely on it. Mining bitcoins is stochastic from the point of view of a miner, but not really from the point of view of the market or at any scale. Without a productive use case providing a reward, there is no positive expected value, and the reward for risk ("you got a coin") does not stay above the cost of mining, at least not for long.


This makes zero logical sense. People can do whatever they want with their money, full stop. Nothing else.


> Personally I don't partake - but they get value out of it, I guess. Who am I to judge?

The thing about gambling is it's a zero sum game. It doesn't enable any "real" productivity, it's just passing money around (with skimming off the top).

ML/AI isn't necessarily like that; it can be actually useful. Never mind chatbots, we've already seen how "AI" has been useful in products for the last decade (e.g. Google search results and extracting structured data out of emails, just to name a couple).

The only similarity is the hype/confusion cycle. Lots of crypto people got rich because they were in the right place at the right time, and they want to be there with the chatbot wave next.

The fact that AI/ML can be judged on real utility will limit some of this, and I think these crypto people will be in for a rude awakening if they think they can replicate their success here. With crypto the "game" of gambling / speculating meant that there was a lot of demand for ongoing endeavors, but once people realize that low effort ChatGPT reskins don't deliver anything tangible it'll be pretty obvious the emperor has no clothes.

You can't buy/trade ChatGPT prompts, after all - unless, perhaps, you were to create prompt NFTs?


Movie tickets are zero-sum and people get value out of them.

AI is way too overhyped and also completely not understood. I think most people here immediately think of some kind of genetic algorithm when they hear AI but even a simple thermostat could be marketed as AI even if all it does is turn on the furnace when some thermometer provides a low signal. The only thing reading AI on a product tells you is that there is software.

I'm unconvinced GPT will remain a mass-market tool. Google Docs got super popular because people don't want to fork out like $50 for Microsoft Word; they're not going to fork over $15/month to do web searches.


> Who am I to judge?

Someone who isn't addicted


Are addicts lesser humans? Do they lack free will? Am I entitled to decide for them and impose my will on their lives?

And who said I am not addicted? I don't do hard drugs, but I am certainly addicted to coffee, sugar, and maybe other habits I (moderately) indulge in, and I would be very pissed off if someone else tried to take them away from me.


Crypto haters will do anything to convince you that crypto is bad, but that other bad things are only bad sometimes. There is no logic other than haves and have-nots.


Out of curiosity, does your argument also work for drugs?


Isn't that pretty much settled already? It seems to me our society is becoming more and more permissive while giving up the previous "war on drugs" failed approach which did an incredible amount of damage.


It also created and continues to create a lot of cheap labor, which the United States is in short supply of and likely will be for the foreseeable future.


Isn't it obvious drugs have utility?


Depends on the drugs. If you think anyone who is reasonable in their head takes fentanyl, you'd be wrong: it is about 50 times stronger than heroin, it is incredibly easy to overdose on because it is packaged in 24-hour slow-release patches, and it is either stolen from the healthcare system (sometimes actually ripped off from elderly patients) or bought from Mexican drug cartels.


People get high from drugs, thus drugs have utility.

Your opinion on the validity or ethics of that utility has no impact on the fact that for some people they have utility.


Fentanyl is an FDA-approved analgesic. I would guess that the reasonable person who is prescribed fentanyl takes it.


Fentanyl is a last resort painkiller that is extremely heavily regulated in other countries and prescribed for things like tumor pain and heavy burns. It's not a regular painkiller, it's a narcotic. If you dose it high enough it can be used during anesthesia.


Yes, everyone knows all of this. My point stands: people who are prescribed fentanyl typically take it.


That's wrong - crypto's main value is buying and selling illegal goods and services on the internet.

Its secondary value is buying and selling legal goods and services on the internet without having to deal with credit card companies, but only for techbros.


"main value is buying and selling illegal goods and services"

Nah. Far more people use crypto for speculation than for actual illicit purposes.


By far the largest portion of the market is bitcoin. Bitcoin is inherently traceable from the ledger, and the IRS is always looking for large tax fraud.


They can only speculate because of the value brought to it by illicit purposes.


I don’t think you could possibly know that.


I think you could probe into some truth about it by looking at the volumes being traded on exchanges and how much is in the exchanges' wallets.


But that's true about fiat money, too. Far more fiat money gets traded on exchanges in a single day than circulates in the rest of the whole world's economy in a year. But I think you'd agree fiat money derives its value from the way you can go down to the corner store and buy stuff, not from the fact it's traded on exchanges, right?


My argument at this point was only that speculation probably trumps illicit usage. Do you mean that speculation is built on top of illicit use for crypto? It maybe doesn't even matter what the fundamental beliefs are, because it has a price, today, now.

Fiat money derives its value from what you say, indeed. It is traded on exchanges because of that. But fungibility is only one of several factors. Money should also serve as a store of value, but for how long is highly debatable, and that's the point.


You are forgetting ransom payments for hijacking corporate computer systems. Or money laundering. I am sure there are more. Not being a criminal I don't know what they are.


I forgot buying fentanyl from China.

"Lihe Pharmaceutical Technology Company, based in Wuhan, Hebei Province, China, was charged with fentanyl trafficking conspiracy and international money laundering, along with Chinese nationals Mingming Wang, 34, who is the alleged holder for three bitcoin accounts shared by sales agents for Lihe Pharmaceutical, and Xinqiang Lu, 40, the alleged recipient of funds via Western Union on the company’s behalf. " [1]

[1] https://www.justice.gov/opa/pr/justice-department-announces-...


It's wicked cool for the ~15 or so tech bros in SV doing that.

But outside of a couple of meme articles about how "someone bought a house with BTC!", the only use case I can find for crypto is money laundering or ransomware.


+1. I have security cameras at home, and my "DVR" is a collection of shell scripts plus a Python script that uses YOLO to find the interesting parts of the footage. The thing works, helps a lot in reviewing daily footage, was damn easy to put to work, and didn't cost me a cent (not even hardware; I run it on a mini-PC without a GPU). I knew nothing about ML before writing this script. So yeah, the value is there.
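If anyone wants to replicate this, the heart of such a script fits on one page. Here's a minimal sketch of the idea (not my actual script; it assumes the ultralytics YOLOv8 package and OpenCV, and the class list, sampling rate, and thresholds are arbitrary choices):

    # detect_interesting.py - print timestamps where a person/pet shows up.
    # A sketch, not the actual script. Assumes: pip install ultralytics opencv-python
    import cv2
    from ultralytics import YOLO

    INTERESTING = {0, 15, 16}  # COCO class ids: person, cat, dog
    model = YOLO("yolov8n.pt")  # smallest model, usable on a CPU-only mini-PC

    def interesting_timestamps(video_path, every_n_frames=15, min_conf=0.5):
        cap = cv2.VideoCapture(video_path)
        fps = cap.get(cv2.CAP_PROP_FPS) or 30
        hits, frame_idx = [], 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if frame_idx % every_n_frames == 0:  # sample frames to stay cheap on CPU
                result = model(frame, verbose=False)[0]
                found = {int(b.cls) for b in result.boxes if float(b.conf) >= min_conf}
                if found & INTERESTING:
                    hits.append(frame_idx / fps)  # seconds into the clip
            frame_idx += 1
        cap.release()
        return hits

    if __name__ == "__main__":
        import sys
        for t in interesting_timestamps(sys.argv[1]):
            print(f"{t:.1f}s")

From there, cutting clips around the hits with ffmpeg is a shell one-liner.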


Funny, I built a security camera setup with PoE CCTV cameras, GStreamer and the NVIDIA CUDA element using a Xavier platform. I tried SSDMobileNet and YOLO and found them to be absolutely horrible.

The camera that was pointing down at an angle was the worst. Both models would only identify a dog and a person correctly about 15% of the time (missing me or my partner as I walked by and waved), while producing a false object detection about 80% of the time even when nothing from its ground-truth classes was in frame! (Usually desks, beds, or chairs; I don't recall exactly, but it was furniture, and the camera was pointed at my empty back lot.) It had just as many shadow/sunspot/tree failures as Motion. The other camera at eye level did a great job with cars, but not so much with people's side profiles, only head-on.

It was laughably bad. And I have no intention of training my own models on my datasets because I don't have time to label. I did this in 2018-2019 so I don't know what the state of the art object detection models are like today, maybe they got their shit together for non-canonical angles.

I eventually switched back to full-time recording on a 2 TB HDD, and if I need to scan back I can jog the livestream because it saves weeks of data.


I had more luck with YOLOv8. But I still keep the motion-detected archive (generated by DVR-Scan) for some months, and the raw footage for a couple weeks as well.


Any information you'd be willing to share around this would be great! I'm looking to do something similar.



Real question: What "interesting parts of the footage" have you found?


Parts with a person, or an animal, etc.


You are perhaps not seeing the numerous consultants and self-proclaimed AI experts charging for very dubious solutions. Money is definitely being "stolen"; it's just a more sophisticated type of stealing. I have yet to see an AI solution that delivers x times the value of simpler rule-based models.


Consultants have always stolen money. Accenture brings in $64 billion a year overcharging for dubious solutions.

Marketing terms vary, before it was "big data", now it's "AI".


You don't see value in retrieval augmented generation? It seems like one of the major use cases in knowledge management in larger organizations that is hard to replicate without an LLM.

They also seem to work very well for summarizing large amounts of data, for automating the generation of basic legal texts, for extracting key data points from paperwork (invoices, mortgage applications, bank statements, etc).
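The retrieval part isn't magic, either. Here's a toy sketch of the whole RAG loop (using the 2023-era OpenAI Python library; the documents, model names, and prompt are placeholders, and a real system would use a proper vector store instead of brute force):

    # Toy retrieval-augmented generation: embed docs, retrieve, answer.
    # Assumes the 2023-era openai library (openai==0.28) and OPENAI_API_KEY set.
    import numpy as np
    import openai

    docs = [
        "Our refund policy allows returns within 30 days of purchase.",
        "Support hours are 9am-5pm EST, Monday through Friday.",
        "Enterprise plans include a dedicated account manager.",
    ]

    def embed(texts):
        resp = openai.Embedding.create(model="text-embedding-ada-002", input=texts)
        return np.array([d["embedding"] for d in resp["data"]])

    doc_vecs = embed(docs)

    def answer(question, k=2):
        q = embed([question])[0]
        # ada-002 vectors are unit length, so a dot product is cosine similarity
        top = np.argsort(doc_vecs @ q)[::-1][:k]
        context = "\n".join(docs[i] for i in top)
        resp = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user",
                       "content": f"Answer using only this context:\n{context}\n\nQ: {question}"}],
        )
        return resp["choices"][0]["message"]["content"]

    print(answer("When can I get a refund?"))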

It's useful to separate whether there is a lot of dubious hype (true of any new foundational technology) from whether useful things are being done. Both can be true at the same time. Lots of fraud and stupidity, but also lots of valuable work happening. With crypto, there was none of the latter, other than criminal applications.

The internet also attracted lots of hype and poor ROI consulting projects...but here we are.


As an interface layer GPT is amazing. In some industries things still require humans to pick up a phone or send a text to make things happen. I’ve seen very promising results with pure LLM solutions that replace web forms people often never really understand. The ability to provide deep reporting insights from a question is huge.

I don’t think AI as a general computing platform or as a replacement for coders is particularly close but there are lots of game changing incremental things LLMs do extremely well today. Something I could never find with crypto.


I pay 15 minutes of my take-home pay every month for a chat window where I can ask any basic programming question and get a correct answer, 80% of the time, in about 20 seconds. This thing has paid for itself once it has saved me from looking up about 4 easy Stack Overflow questions, or 1 medium-complexity one.


I mainly use it as a rubber duck. I like that I can ask stupid questions and get mostly right information back. I’ll still verify anything important.

It's also really quite good at transforming language A into language B if you're learning a new programming language.


Forget the consultants, the amount of fraud AI is enabling and will enable on a massive scale makes crypto look like a drop in the bucket.


>I have yet to see an AI solution that delivers x times the value of simpler rule-based models.

The article was a pretty good demonstration of this, I thought. That kind of sentiment analysis would be very difficult using a rule-based model.


> Another view is the "loss" factor. Nobody, thus far, has had their funds stolen or lost using ML products

Most of "AI startups" are close to scams, i.e. they are oftentimes just interfaces to proprietary APIs that monetize on impressiveness of LLMs.


Is this a scam if they still provide value? I get that most apps are a thin wrapper around GPT, but that doesn't mean they're stealing money, just offering a non-differentiated product.


It’s a scam on the investors. The users might see some value, but the business is worthless.


- and 10 other things people who have no understanding of business like to say these days.

OpenAI cannot make every product and market them to every segment. If you wrap their API and provide a novel UX with precise positioning, there's value there.

OpenAI can copy the underlying collection of features tomorrow morning, but if the positioning is precise enough, you will easily outcompete them.

For an example developers can understand, see managed SaaS: a collection of companies raking in billions in revenue by simply wrapping AWS/GCP/Azure, in ways the underlying platforms often end up copying anyway, yet succeeding because their developer experience is better, or their feature set is better focused, or they're just plain nicer to work with.


> OpenAI cannot make every product and market them to every segment.

Even if OpenAI doesn't, if it's a thin wrapper with no deep proprietary edge, someone else can; your offering is ripe for commodification, even if that doesn't come from OpenAI themselves.


Software is a commodity, period.

That's why "technical novelty" ranks ridiculously low on scale of things that make most successful software businesses these days: if anything technical novelty is more of an albatross on most software businesses than a saving grace.

Building traction in software is more about the 100s of other concerns that apply to every business: brand recognition, communicating your value proposition effectively, being able to sell to the target customer effectively, having the correct UX, the right proofs, the list goes on.

Copying that is just as hard as ever, if not harder.


> That's why "technical novelty" ranks ridiculously low on scale of things that make most successful software businesses these days: if anything technical novelty is more of an albatross on most software businesses than a saving grace.

Not convinced. Why did Google beat Yahoo? Why is Facebook huge while Friendster and Myspace are jokes? At some point - perhaps further down the line than most of us are used to thinking of - technical ability matters.


If you think that the landscape you build a company in hasn't changed in the 2 decades since the founding of Facebook, there's not much to tell you.

And even Facebook definitely didn't win because of some technical choice... no one cared what tech stack powered a social media site, and if anything Facebook was less advanced than MySpace as far as users were concerned.

If you're talking about how it matters further down the line, then you're walking away from the wrapper thesis too: the whole line being parroted is that it's just an API wrapper ripe for the copying. Good luck getting to the "further down the line" reliably as a company, let alone down the line and then killing your competitor with a game plan that mostly consists of copying them.


Are people so used to monopolies in the modern market that the idea of launching a business where there exists a possibility of competition is seen as a sign of a scam?


A lot of these new AI startups just give a plain text prompt to the GPT API with a smattering of web glue. This creates a product that looks impressive and might get funded, but has close to zero value add.

There’s a difference between having competition and having a business that can be trivially cloned. The challenge for a lot of AI startups is to show that they are adding something and are not just a dumb wrapper.


If that's the standard, basically everything that comes out of ycombinator is a scam.


On the contrary, a typical startup is more of a scam on the users. The investors go in knowingly and willingly - this is literally how they make money. It's the users that get shorted once the startup suddenly gets acquired or otherwise "exits" - and if they do it through an IPO, then it's additionally the public that gets scammed.


A "typical startup" loses unfathomably large amounts of money.


That's not entirely true, Arc is pretty good.


Ostensibly ycombinator is investing in the founders, not the business, right?


Isn't the onus on the... investors to make sure they invest in legitimate projects?


I’m surprised people don’t see this.

There is no moat. Anything even moderately profitable will be implemented in 4 hours by the whales.


There doesn't have to be a moat to provide some value to some people. These are mostly indie hackers trying to build a nice UI/UX on AI workflows to actually make them usable by the general population. Even with the retrieval integration of OpenAI, some people will prefer askpdf or the likes, or won't even know about the OpenAI integration in the first place to consider switching. Or maybe you can build a better UX than OpenAI, since maybe they are too bloated to know what the user actually wants?

This point of view is too simplistic; not everything needs to be a billion-dollar idea or differentiated. You can make a very good living with no moat and good product sense. Even if you get outgunned, you move on to the next opportunity.


If you view the LLM as a computing platform, that's not so different from saying startups built on AWS are "scams".

I do think we’ll find a lot of these aren’t defensible companies (like Lensa) but sometimes you get instagram even when the value prop seems slim.


> Another view is the "loss" factor. Nobody, thus far, has had their funds stolen or lost using ML products.

That is true; however, I'd say that, for example, the Venezuelan and Turkish people who managed to scoop up Bitcoin (or Ethereum) didn't do too badly:

Inflation in Venezuela, 2022 and 2023 estimate: 210% and 51%.

Inflation in Turkey, 2022 and 2023 estimate: 70% and 50%.

These aren't the only countries.

I personally know a doctor from Iran who tried semi-recently to convert his savings into Bitcoin (and failed: bank didn't let him). And he basically lost all his savings (inflation and bank defaults: double whammy).

From the comfort of countries using strong currencies it's easy to dismiss Bitcoin but there are many countries where shit did hit the fan really hard.

Not that it's a panacea: for example, many African countries are experiencing ultra-high inflation but cannot use Bitcoin because fees are way too high for these people ($6 USD to move Bitcoin today: I just checked).

> Compare that to the negative stigma of crypto frauds and stereotypical association to illegal transactions.

Seeing all the people and exchange owners busted and going to jail, moves like the EU soon de-anonymizing every single wallet out there (as soon as a transaction is made), and the public ledger itself, I don't even know if that bad reputation is going to stay for long.


This is an incredibly weak argument for cryptocurrencies.

I have coworkers with family in Turkey (and Lebanon, Iran, Argentina...): they want USD. They don't care about Bitcoin, they want stablecoins. Most stablecoins are inherently dangerous, because you need to trust sketchy (when not outright criminal) and centralized entities to issue quasi-dollars that can get shut down by the US DoJ at any time. If they don't collapse on their own before that.

Venezuela is an exception because a few people manage to mine Bitcoin illegally, given that electricity is virtually free. Other than that, it seems the most practical currencies in Venezuela right now are contraband gasoline sold in Colombia, drugs, kidnapping, prostitution, ...


IMHO they're symbiotic. Generative AI destroys trust; crypto lets you function in a trust-less world.

The killer app for generative AI is going to be propaganda. This hasn't entered the discourse yet because nobody wants to advertise that they're running a propaganda mill. I suspect they already exist though - there've been a number of news articles I've read recently where I'm like "I'm pretty sure somebody fed a tweet or police blotter into GPT-4 instead of writing this."

This works now because people are accustomed to trusting what they read. Once the channel has been flooded and it becomes cheap to make it look like your views are echoed by 1000 mainstream news media outlets and millions of people online, people will just stop believing everything they read. Similarly once any idiot can have ChatGPT write a college-level term paper, the skill of writing at the college level won't be worth anything. When you can have ChatGPT write a recommendation letter with a 15-second prompt, it ceases to be a useful signal for how much you believe in the person you're recommending. When you have GMail expand your one-sentence e-mail into 4 paragraphs with generative AI and then the recipient summarizes the 4 paragraph e-mail back into one-sentence, maybe you should've just written the one sentence to begin with.

The value in blockchain technologies is in unforgeability, scarcity, and forced consensus. In a world where forgery is trivially easy, content is trivially abundant, and nobody believes anybody else, a technology that ensures that mutually-distrusting computer systems all represent the same data gets quite valuable.


This sounds nice, but what does a blockchain offer to prove content wasn't AI-generated?


The only thing trustworthy about crypto is that you can trust that the people who shill and deal with it are dishonest scammers.


Hence my point about trust being destroyed.


> The value in blockchain technologies is in unforgeability, scarcity, and forced consensus. In a world where forgery is trivially easy, content is trivially abundant, and nobody believes anybody else, a technology that ensures that mutually-distrusting computer systems all represent the same data gets quite valuable.

And how does blockchain make this work? By making authenticity too expensive for spammers, you've made it too expensive for 90+% of the population. The spammers/propagandists have orders of magnitudes more money than me.


By going back to how an economy is supposed to work: you exchange money of known supply for items of value, and making the hard tradeoffs about which items of value are worth spending money on.

I suspect that the actual cryptocurrency that wins out here hasn't been invented yet, or it'll be a layer on top of Ethereum. It needs to actually function like a currency, and it needs to give you mechanisms to trade items of value in the real world, goods and services, for future goods and services. None of this "it's just a wildly variable front over USD that you can profit off of swing trades."


Eh, when that one sentence is "fuck you, pay me", I think the paragraphs are a necessity of modern polite society. Maybe some people would be encouraged by the 4 word sentence rather than the 4 paragraph version, but I remain unconvinced.


Crypto is a more direct way of saying "fuck you, pay me". This is why it's currently less popular (although this will probably turn around when it becomes "fuck yeah, pay me!"), but the no-bullshit transaction going on is that something of value which can't be spoofed is changing hands. The benefit of that is that you're going to think very hard about what you exchange for that. (Well, eventually, once all the idiots have offered up their money for scams.)


Exactly. People, average everyday people, are using and getting value out of AI right now. Are we to ignore that?


Average everyday people are using and getting value out of crypto right now too. Are we to ignore that?


ChatGPT has something like 40x the weekly users of Bitcoin. There are other AIs and other coins, but I'm not sure Bitcoin (or the other crypto folks use) has the penetration for its users to really be "average". Crypto has a mostly niche use case at this point: exchanging money when it would be illegal, across borders, or in an area with a failing local currency.


I'm not taking sides, but are we going to ignore the fact that crypto users are probably more valuable in the sense that they are paying users in a way?

A chatGPT user could just be someone who popped onto the website and submitted the chat form.


Not to be too cheeky here but paying with what? Cryptocurrencies are mostly hot potatoes everyone is throwing back and forth and occasionally dropping to catastrophic effect, and don't seem to be tied to actual value add.

Meanwhile millions of people pay for tools that are now integrating AI to enhance their value add.

Free ChatGPT is just a loss leader for the API and paid accounts, and a way to better train the model.


Not really; crypto has value as determined by the markets.

Most people who own crypto exchanged dollars for it.


What you describe is price, not value. Things may be priced high but have little value, such as in cases of asset speculation. Things can be priced low but have great value, such as the value of human relationships, knowledge, etc.

I think being able to spot where these diverge is really important to understanding the world and where we should spend our limited time on it.


True, but in comparison to every other store of value, crypto is the only one which doesn't have any alternative use.

Gold, shares, etc.

I believe the criticism is correct, as the current driver of crypto is either an "I put that much money in, I'm not selling until it increases again" mindset or gambled losses.

After all the miners want to get paid.

But hey, Binance and others are struggling; let's see if there is a collapse soon.


IMO the value is in the globally distributed, battle-tested secure/resilient payment network.


It's not.

The trust issue is real, China's Great Firewall is real, and the SEC crackdown is also real.

And without anyone exchanging your bitcoins, no one cares.

And in comparison to our fiat, a ton of critical features are missing, like money laundering.


Or scammed them from other suckers.


In what way does an average consumer/user get value from crypto?


> In what way does an average consumer/user get value from crypto?

Look at Venezuela's and Turkey's inflation rates these last two years (and the estimates for the coming years). Look at the SNAFU that happened in Iran, with banks defaulting and now inflation kicking in.

It may be an ultra-risky bet (and there are serious opsec risks too), but when your savings are going to lose 90% of their value in two years anyway, why not take it?

Bitcoin was, after all, created as a gigantic middle finger in response to infinite money printing.

The world is big and there are average consumers in countries other than the US or the EU.


Look beyond the West for your answer


Send and receive money globally without any intermediary?


Let's say I want to send money from New York to Rome. How does crypto enable me sending USD and the receiver getting EUR without any intermediaries?

You need exchanges to do anything useful in crypto. And as we've seen most recently in the FTX case, all the exchanges are wretched hives of scum and villainy.


It doesn't, but if you and the recipient both have bitcoin wallets, you might decide to send bitcoin instead of USD or EUR.


You realize FTX isn't the only, or even largest, exchange right?

If I want to send someone money, I can send anyone in the world BTC securely and instantly without any intermediary.

If the other party wants to convert to fiat, then they can do so through an exchange, of which there are many.


> all the exchanges

Not really. There are plenty of decentralized exchanges which are proven, reliable, auditable, generally used by many without issues.

see: https://uniswap.org https://curve.fi/ https://1inch.io

It's the centralized exchanges, which are more akin to traditional financial institutions whose records are not on a publicly visible blockchain but rather private databases or... apparently spreadsheets... which fall victim to the same issues we have seen in the past in the traditional financial world.


So how do I send USD to Uniswap, and how does my friend in Rome get EUR out of 1inch.io?

If that's not possible, it's useless for the proposed use case: "send and receive money globally without any intermediary".


Isn't that a bit like asking how I can send bitcoin with SEPA?


and since I can't send bitcoin with SEPA, obviously SEPA must suck.


So how do I send gold to Chase, and how does my friend in Rome receive a wire transfer in EUR?


So you need at least 2 middlemen. One exchange where you buy crypto and another exchange where your friend sells that crypto.

Or you could simply use a traditional wire transfer and currency would be converted automatically. USA and Italy exchange millions of dollars every day - it's nothing special.


Actually getting the money to the other person via wire transfer is quite a process (I've done many myself):

- You will need to get permission from your bank to send international wire transfers (sign forms/agreements).

- It takes a long time (on the order of days).

- It's expensive (~$50-$75 for an outgoing international wire, and $25-$50 to receive it).


The forms, delays, and fees exist because those financial institutions are providing checks and balances and, to the extent they can, de-risking that transfer.

The forms are for KYC activity, and agreements on what the limitations of liability are. The delays are to validate that the transfers are handled and secured, and ideally can't be charged back. The fees are to cover the costs of the people who do the work for that.

It's not perfect, but it's quite a bit better than the checks and balances that exist for folks who get hit by a scam and are convinced to go to a crypto kiosk and pay a scammer because they have been frightened by a threat to a loved one, or are taken in by a scammer about services being cut off, or desperately paying off a ransomware demand in the hopes that your business or personal records won't be leaked or published.


I dunno, it feels like a gamble every time. I don’t really need any of the extra stuff. Bitcoin is a few clicks and I know it got there.


I think this really depends. I send wire transfers pretty often. For me it's $20 to send and $0 to receive. It takes anywhere from 2 days to a week normally. I don't need permission to send them (though I do need to call the bank to verify the information and purpose), and I also need to provide the purpose of the funds to the receiving bank (and need to show I own the sending account, if I'm the sender and receiver).

Crypto is very likely neither cheaper nor faster, since you can't spend the crypto directly, and need to FX it through an exchange on the sending side and the receiving side, each of which will take a cut (often percentages of the total). You also need to fund the account sending, and you need to transfer from the exchange receiving to a bank account. Both of those transfers could also cost money. You're also doing FX twice (USD -> crypto, crypto -> Yen), rather than once (USD -> Yen).

If you fuck up an international wire transfer, it may take a month or two for the funds to make it back, and you may need to have numerous conversations with both banks (I've been through this pain more than once and it sucks). If you fuck up a crypto transfer you lose your money with no recourse.

All-in-all the wire transfer is the better (and probably cheaper/faster) experience.


Where are you located? I send a fair amount of SWIFT wires, and they cost at most $25 and clear the next day.

Within the eurozone (the 20 countries using the euro), there’s SEPA instant credit which clears in less than ten seconds, is available 24/7, and costs practically nothing (a few cents). It’s a fine example of how thoughtful regulation can enable a system that is better than any crypto solution.


If only we all had such thoughtful regulation, yet we are not all so fortunate as those in Europe.


I guess that depends on source and destination countries, because I am able to wire money between different EU countries without any special paperwork. It is no different to domestic transfers or transfers to the same bank. Why would I want to use crypto for that purpose? It seems more inconvenient and risky.

I remember that in the beginning people were dreaming about self-contained crypto economy where exchanges would not be needed - that didn't really work out.


If you've ever done an international wire, you know there's the form question "Which intermediary bank to use?". So at least the same, if not fewer, middlemen apply.


They don't work.

You can always charge back a transfer.


The intermediary is either a trust system in the real world, an escrow service on darknet drug sites, the miner, or the trader who trades your fiat to crypto.


lol, coinbase and binance are deca-billion dollar companies my guy


Coinbase is heading for bankruptcy and Binance is a criminal operation.

FTX was also a $32B company until it wasn’t.


you've got a money printer then - screenshot your short positions and I'll believe you ;)


I made a bit of money on Coinbase puts in the past two years. But they’re pretty expensive, so I don’t have a position now.

It’s not a money printer when everybody else also thinks it’s going down.


Good points, both that it's going down, and everybody knows it.


Lol, read the news, my dude:

Binance let go of 1,000 people over the summer, and another 100 just in September.


So was FTX


Do you think "average everyday people" actually do that?


So by breaking the law.


The ability to transact with people that card processors do not like. The ability to self custody.


In other words, extremely niche use cases.

I'm a crypto sceptic but I wasn't always like that. There was a time many years ago when Ethereum was brand new and I was an eager early adopter. I tried creating wallets, tried running a node to see what it does, put in some money through an exchange, and then... Nothing. There was nothing to do after jumping through all those hoops. In fact, turns out the only thing to do with the crypto wallet was to wait for its value to maybe increase over time. (Hence the "number go up" meme.) And for that to be realized, I would need to sell the coins to a new sucker to get real money out again — suspiciously pyramid-like.

And it's still like that today. There's no reason for me to ever open those old wallets again (and surely I don't even have the passwords anymore because self-custody is such a terrible idea UX-wise). And there's no reason to try any of the new stuff because it still obviously does nothing I'd need.

The early Internet wasn't like that. There was plenty to see and try, and interesting people to interact with. Once you tried it, you probably wanted to go back.

Today's early AI is like the early Internet in all the ways that crypto isn't and never will be. There's plenty you can do with ChatGPT and other models, right off the bat. You can install interesting stuff locally or run it on somebody else's server. You don't need to run the crypto-style terrible UX gauntlet and buy coins from a shady operator. AI is already so much easier and more useful and more powerful than crypto-web3-anything, it's competing in a completely different race.

OpenSea has lost 99% of their transaction volume in the past year, and even more of their revenue. I'd be shocked if the same happens to OpenAI. One was a fad, the other isn't.


You sound a lot like me.

I ran full nodes, wrote smart contracts, even had 200 GPUs mining ethereum at one point. I still have a bunch of wallets, exchange accounts, ENS names, you name it. Interesting, kind of fun, but then a big "Ok, now what?". Turns out not much other than writing some crypto thing to do another crypto thing that does another crypto thing.

Since getting generally disgusted with the sleaze I saw from the inside I haven't touched any of it in years.

How much difference has this made in my life? Zero (other than not being grossed out on a regular basis). How many times have I had to dust off a wallet or write a smart contract to do something I couldn't do better, faster, and cheaper elsewhere? Zero. How many times have I wanted to buy something and needed crypto? Zero. My experience is an anecdote for the entire space - a lot of time, money, and energy spent with no tangible value and nothing to show for it.

Ethereum is over eight years old, bitcoin nearly 15. ChatGPT has been out for less than a year and I use it on a daily basis to save time and come up with fairly novel things I'm not sure I could on my own. Of course the roots of ChatGPT go back quite far but then again so do merkle trees.

I wish I would have saved the time, money, and grey hairs on crypto for "AI" - I have way more fun with Llama, Whisper, and dozens of other models with immediate and real use cases on a daily basis.


Indeed. Especially when it's clear that the rich crypto people got rich through others losing it.

And when you've played around with something, the next and better version is already around the corner!

I've never seen anything like it :)


Cryptography can be used to hide something, or to prove something. The word cryptography encapsulates two different disciplines, cryptography and "provegraphy". People who use the term crypto-* to refer to blockchain do not know that blockchain has nothing to do with hiding information; it has everything to do with proving information.

So the question becomes, what information are you interested on proving to someone on the internet? Say you want to ask an Israeli on Twitter about some bomb stuff and you want to prove you are a reporter. Say you want to prove in a comment on HN, that a repository on github is yours.
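Note that proving control of a key needs no blockchain at all; an ordinary digital signature does it. A minimal sketch in Python (using the cryptography package; the repo-ownership framing is just an example):

    # Prove control of an identity by signing a fresh challenge.
    # The verifier only ever needs the public key. pip install cryptography
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Prover: generate a keypair once; publish the public key (e.g. in the repo).
    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    # Verifier: send a fresh challenge so old signatures can't be replayed.
    challenge = b"prove you own github.com/example/repo, nonce=8f3a2c"

    # Prover: sign the challenge and send back the signature.
    signature = private_key.sign(challenge)

    # Verifier: check the signature against the published public key.
    try:
        public_key.verify(signature, challenge)
        print("identity proven")
    except InvalidSignature:
        print("forged")

A blockchain only enters the picture when you need a shared, tamper-evident place to publish which key belongs to which identity.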

However, one problem arises. The digital identity or identities have to be stored somewhere. What happens if there is an outage? OpenAI had a multiple-hour outage just today, and an ISP in Australia had a 12-hour outage yesterday. In those cases, people cannot digitally prove their identity or identities (hundreds of them if they like), even in real life.

The Greek government requires internet access for the digital identity to be proven [1]. I was just researching that right now.

Lastly, Estonia tries to secure the digital identities of its citizens on the blockchain [2]. Why do digital identities need to be secured on a blockchain? Are a server or two in a government building not enough? How could a globally competitive network of miners, each one holding the digital information independently of any other, be more secure than the one-or-two-servers solution?

[1] https://wallet.gov.gr/ [2] https://www.pwc.com/gx/en/services/legal/tech/assets/estonia...


The average person has no need to transact with people that card processors do not like. The most common scenario where that is the case is people "needing" to pay "Microsoft support".


Yeah. I have to say, I've never needed to give money to someone my bank doesn't want me to give money to. As someone spending money, I don't want guarantees like "this transaction cannot be revoked". I want to revoke transactions sometimes! Thus, crypto is anti-value-add for me. (Some would argue, merchants would charge lower prices if all sales were final. That's probably true! But it would depend on them never making a mistake, and everyone makes mistakes.)


Expand card processors to payment processors. There are constantly sites that get unjustly restricted or banned by payment processors like PayPal and Stripe. Even Minecraft, one of the most popular games of all time, had issues with PayPal.


Porn and sexual stuff is always on shaky ground with card processors.


The ability to self-custody is not something average, every day people need.


That is like saying insurance is not something average, everyday people need. If the rare event doesn't happen to you, then yes, it is a waste of money; but if it does, you will be thankful to have it.


The same thing can be said about having bulletproof cars, wearing a helmet any time you are outside, or carrying a lifejacket with you all the time. Those are things that cater to specific threat models. And those are not the threat models of average, everyday people.


Average people address it by having cash. If you want digital cash crypto is your best option.


Crypto is still one of the best ways to do foreign exchange even if the ecosystem is run by morally bankrupt hucksters.


I know plenty of crypto-involved people, and all of them lost money, and no one uses it seriously.

Not a single average human I know has even tried crypto...


The only value I got out of crypto are the suckers that gave me many many thousands of euros for farming ethereum in 2017 with my RX 580s.


The people coaxed/tricked into emptying their bank accounts at a bitcoin ATM?


Maybe not those people, but the people doing the tricking are getting value from it.


Most of what average everyday people got out of crypto was a whole lot of lost money.


> the tangible value they return.

> AI/ML's barrier to entry is far lower and its products are vastly more user-friendly compared to crypto.

Crypto was booming as an investment vehicle, buying one was trivial, and many people received very tangible value.


Beanie Babies were booming as an investment vehicle, buying one was trivial, many people received very tangible value.

It'll be interesting when there is more distinction between the two in utility. LLMs already have a fair amount of utility in their relatively early stages and I've certainly seen meaningful adoption of diffusion model generated images to replace stock photo usage.


> Beanie Babies were booming as an investment vehicle, buying one was trivial, many people received very tangible value.

Were volumes and convenience (e.g. liquidity, automation, etc.) comparable to crypto?

Investment is a huge market. It is hard for me to track the current volumes of crypto trading and holdings, but they can still be significant.


One other way to look at this is “only people who intentionally invested in crypto lost anything, while creators who were ignorant to or against the idea of ML training set-trawling were injured through no fault of their own.”

Of course, this is not quite true because many people were harmed indirectly when criminal theft of their money was facilitated by the low barrier to entry that cryptocurrency presents to the would-be money launderer.


Also when pensions managed by Sequoia took the FTX hit. Also when the augmentation of FDIC coverage happened for SVB; taxpayers covered that. It is pretty much impossible to avoid unintentionally contributing to the grifters.


Another major difference between them is that their hype cycles are out of phase by 40 years. The first AI winter began around 1974, and the first crypto winter was in 2014.

For an apples:apples comparison we need to compare the AI of today with the cryptocurrencies of 2063, or the cryptocurrencies of today with the AI of 1983.


>Apples vs. rotten oranges in my opinion!

Interesting turn of phrase, as "rotten apples vs. oranges" would be much more natural to my ear.


There are many things to criticize crypto for, but at least it will never go all Skynet or paperclip maximizer on us and exterminate the human race. The worst case scenario of one is not like the other.


Crypto was being adopted by consumers, though; there was a boom at one point where it was incredibly common to see bitcoin ATMs or signs that stores would accept bitcoin. I even remember a bitcoin credit card or something that let you use your bitcoin anywhere. That stuff wouldn't have been done if it wasn't being used.

The problem with AI is that it is being shoved into places without any thought of what the benefit actually is or whether or not it actually works.

As someone else said, I feel like much of what is coming out of this AI boom is basically a scam.

For example, I was looking at task management apps and I was intrigued by some "AI"-powered ones. All it really amounted to was being able to make a task and have it ask ChatGPT to generate subtasks. The subtasks it generated were basically useless.

No "AI" to help manage my schedule or any other benefits. We are automating the easiest parts of the task with unhelpful content. This is because ChatGPT is limited: it doesn't have API hooks into your application, so it can't really provide any real benefit.

Some of the uses of AI are real and beneficial (like Amazon using AI to summarize reviews). But the vast majority are just shoving AI somewhere it doesn't need to be (or at least ChatGPT doesn't need to be, since it's just an LLM at the end of the day).

This bubble is going to burst once people finally realize that ChatGPT is not the "AI" science fiction has sold us; it is being used as a general smart AI when it's honestly dumb as nails except for certain use cases.


> Crypto was being adopted by consumers, though; there was a boom at one point where it was incredibly common to see bitcoin ATMs or signs that stores would accept bitcoin. I even remember a bitcoin credit card or something that let you use your bitcoin anywhere. That stuff wouldn't have been done if it wasn't being used.

The trail of broken crypto startups serves as counter-evidence. There were plenty of merchants initially dragged in by the appeal of cutting out at least Visa/Mastercard's cut, and in many cases governments too.

And then the consumer adoption wasn't there, and the prices for merchants were also too high, so many ripped them out again.


The trail doesn’t exactly counter it if there was a lot of consumer use and then it died.

Maybe I am wrong, but it seemed like there were a lot of people talking about it and in it (same with NFTs), and then it plummeted.

That’s why I kinda felt like AI is the same. The bubble is going to pop as we hit limitations on what this can actually do.

But I will also admit that some of this could be living within a tech bubble.


Talking and using aren’t the same thing. Did you ever see people using one of the crypto ATMs or buying anything with a crypto credit card? Especially outside of the Bay Area? If not then it’s likely a marketing expense / stunt, just the same as the commercials starring Tom Brady or Larry David. Create the perception that crypto is mainstream, and people will want to jump onboard just because of FOMO.


> it seemed like there were a lot of people talking about it and in it (same with NFTs), and then it plummeted.

What you saw was probably some light astroturfing, backed by wave after wave of non-tech celebrity sponsors, and a pump-n-dump shill bidding scheme.


> That stuff wouldn't have been done if it wasn't being used.

"VCs have entered the chat"


There was one ATM in my big city, and yes, it's a great way to launder money; after that, they disappeared again.


We have been using AI for various tasks for decades now. In fact, everyday things you take for granted are powered by machine learning, and most people don't even realize it.

Is OCR "a scam just like crypto"? How about voice recognition, used daily all over the world? What about spam filters? Clearly useless over hyped technology right?

Even if you wanted to limit the term AI to large language models (which, by the way, would make your use of the term incredibly wrong), it STILL has many common and useful applications. You can use LLMs to classify text (sentiment, toxicity, etc.), they can be paired with voice models to improve speech recognition or translation services, and so on.
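To make the classification point concrete, here is a toy zero-shot sentiment classifier (2023-era openai Python library; the prompt wording and label set are arbitrary):

    # Zero-shot sentiment classification with an LLM (openai==0.28 era API).
    import openai

    LABELS = ["positive", "negative", "neutral"]

    def classify(text):
        resp = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            temperature=0,  # keep the label output stable
            messages=[{
                "role": "user",
                "content": f"Classify the sentiment of this comment as one of "
                           f"{LABELS}. Reply with the label only.\n\n{text}",
            }],
        )
        label = resp["choices"][0]["message"]["content"].strip().lower()
        return label if label in LABELS else "neutral"  # fall back on odd replies

    print(classify("AI is way too overhyped and also completely not understood."))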

I think it's better to ask what you think the major similarity is between AI and crypto, because it's hard to find any other than a subset of the crypto fanatics now jumping on LLMs as the solution to every problem. But this group isn't actually part of the AI community.


> Is OCR "a scam just like crypto"?

OCR is a technology.

Cryptocoins are a community. The _SAME_ people who pushed crypto have now moved into the AI sphere and are hawking AI.

It's like all the snake oil salesmen of the 1800s suddenly discovering that cars are selling and becoming car salesmen. That doesn't mean that cars are a scam; it means that many, many people trying to sell cars to you are scammers.

Having our guard up against hucksters is warranted, and when the great community of hucksters is obviously moving in lockstep to say the same thing (coordinating their arguments thanks to the internet / meme culture), it is easy to pick out when to be on guard.


>That doesn't mean that cars are a scam, it means that many, many people trying to sell cars to you are scammers.

But AI isn't a scam; that's their point. The more apt comparison here may be to horse buggies instead of snake oil. We will probably and inevitably move more towards generative content, and everyone's trying to find their place as livelihoods are being impacted.

And of course there are the legalities of what's used to train AI. Crypto was completely decoupled from that, and economic concepts aren't exactly copyrightable to begin with. So this doesn't apply much here.


> But AI isn't a scam, that's their point.

Anything can be a scam when a scammer is saying it. It's not so hard to make an AI scam.

Step 1: Say that you're an AI specialist.

Step 2: Take people's money.

Step 3: Done. You now have their money. Don't even "profit", just take their money.

As long as dumbasses give their money to the latest-and-greatest crap and "technological fashion statements", this scam will continue over and over again. I mean, at least the cryptocoin community had a word-babble of blockchain technologies and hashing. AI is so new that they barely have any language for this scam, and people are still forking money over. It's almost laughable how few defenses people have against this.

---------

The SBF and FTX saga is your template. Just do the same thing except with AI-like words and you'll get pretty far these days.


The difference is, with crypto, the scammers were approximately the entirety of the field, and were scamming people with bullshit product tailor-made to be a vehicle for scamming. With AI, those same scammers are just a small fraction of the overall market, and they're overhyping a real thing with real value. There's a qualitative difference here across many dimensions.


> With AI, those same scammers are just a small fraction of the overall market

Are they really?

A lot of these fly-by-night operations are just glorified SaaS apps sticking a few tokens in front of your text before it goes to ChatGPT and calling the whole thing a new AI application.

There's definitely a lot of low-effort crap in the "AI" market today. There are some real gems out there for sure, but... my guard is up. Some of these businesses have no actual business model and are coasting purely on hype.

And that's probably the _better_ class of AI startup these days, in that it actually has a product and actually has a business plan (a crappy one, but one exists). There's even worse crap than this out there.


But what proportion of the market are they? I don't expect anyone knows, but consider that Apple has purpose-built ANN circuitry in every CPU they sell, or consider the revenue Nvidia is making selling H100s. My guess is that is a bigger slice of the pie than the shady vendors.


I don't think anyone was criticizing "AI" when the M1 came out, or when such capabilities were added to the Google Pixel to improve camera stuff or whatever.

The issue is that this LLM boom, coinciding with the crypto-bro crash, has caused a lot of scammers to pivot out of cryptocoins and into AI.

There's a real subset of R&D happening with AI no doubt. But keep your guard up, there's also a flood of cryptobros who have lost everything after FTX who are trying to pivot into another field. That's all that I'm saying.


OCR is a technology. Users of a particular OCR software form a community.

Cryptocoins are a technology. Users of a particular cryptocoin protocol form a community.


> Cryptocoins are a technology. Users of a particular cryptocoin protocol form a community.

Nah. Cryptocoin users fluidly switch between cryptocoins. You could be into... I dunno... Lunacoin... and by next week you'd have changed all your money into Mooncoin before anyone notices. In fact, I can pump Lunacoin while I'm selling it and no one would be wise to my tricks.

OCR users don't, really. It's a lot of friction.

There's a reason why cryptocoins were so good at scamming. They were fluid enough that you could be a dishonest snake. But if you write like a 10,000 line codebase using tesseract-ocr, you ain't switching off of that without some serious amounts of effort. (Certainly not a week's worth of effort IMO).


I don't really know what you're arguing against. Are you arguing that cryptocoin users don't form communities?


> the _SAME_ people

It's never "the same people".


You're telling me that no one has Blockchain on their resume and suddenly pivoted to AI this year?


I completely agree that there is no relationship at all between the two. Hucksters are gonna huckster no matter what the medium is.

If you want to ask "are LLMs the next metaverse" or something.... implying LLMs are over-promising on their utility and the hype is all driven by the companies controlling the tech.... even that is a stretch but makes more sense.

Anyone selling you AGI, or "make money on youtube by typing a sentence into this prompt" is probably a crypto scammer who found a new group of people to scam.


> How about voice recognition, used daily all over the world?

Yes, absolutely. The money is in selling companies an excuse to keep customers on hold until they give up and stop trying to get the refund they're legally entitled to; "voice recognition" is valuable only because it's a legally acceptable smokescreen.


Voice recognition is used by people with disabilities all over the world. It's used by voice assistants too. It helps if you consider other people in your view of technology, not just your minor pet peeve. Voice recognition is literally life changing for some.


That is like saying “we’ve been using crypto for years: see SSL”

Clearly the AI being compared here is the recent boom in generative AI. OCR didn’t have companies chucking billions at experiments and making chip-manufacturing stocks soar.


The cycle of these technologies is always the same:

    1. Initial introduction or release
    2. Major hype and influx of greed money. <- AI is here now
    3. Failure to live up to the hype, resulting in the tech becoming a punchline and gobs of money lost
    4. Renaissance of the tech as its true potential is eventually realized, which doesn't match the original hype but ends up very useful
    5. Iteration and improvement with no clear "done" or "achieved" milestone, it just becomes part of society
The bombardment of charlatans taking advantage of the term, coupled with commercials everywhere, suggests we will soon hit stage 3 for AI. The Super Bowl commercials are usually the tipping point.

Crypto is at stage 3 now.

Not all technologies make it to steps 4-5.

Hell, I remember when social media followed the same path. And ecommerce before it. Or the web in general before that. And on and on it goes.


AI is at step 4, transitioning to 5


You could argue ML is at step 4, but LLMs (called "AI" for some reason) are most definitely in the "major hype / dumb money being chucked at it" step.


“Some reason” is that you can talk to the thing about anything and it answers, and a lot of the time its answers make sense. I’d like to remind you that one year ago this was sci-fi!


Incidentally, "AI" (the original field called AI, not the latest definition of the term) definitely made it to 5. Except these days, we call some parts of it "SQL", other parts "the Web".


I once read an old book about AI that covered something called "expert systems", which are... human-written if-then-else trees. The saying that we stop calling things "AI" once they become well understood is real.
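For anyone who hasn't seen one, here's a toy sketch in that classic style (purely illustrative; the domain and rules are made up, though real systems like MYCIN encoded hundreds of hand-written rules exactly like this):

    # A toy "expert system": hand-written rules, no learning involved.
    def diagnose(has_fever: bool, has_cough: bool, has_rash: bool) -> str:
        if has_fever and has_cough:
            return "likely respiratory infection"
        if has_fever and has_rash:
            return "possible measles; consult a specialist"
        if has_cough:
            return "possible common cold"
        return "no rule matched; refer to a human expert"

    print(diagnose(has_fever=True, has_cough=True, has_rash=False))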


I'd definitely say LLMs are stuck on step 3: "Failure to live up to the hype"

They are certainly impressive, but their utility-to-hype/gimmick ratio is incredibly low right now, which could cause a crash. The greater the disappointment the greater the crash.

I'm reminded of 3D TVs. Remember those? Avatar came out in 2009. By 2016 the trend was dead. Despite the cries of "this time it's different." Of course, that time it was not different. The tech was impressive. Much more than the previous time the fad was around in the '80s. Remember the blue/red glasses? Absolutely not a single person talks about 3D TV today.

The 3D TV was a technical success, but it was so much of a gimmick that it died out. My Facebook feed is a never-ending stream of AI-generated garbage. I think people are going to tire of it, realize the images it makes are about as goofy as a 2004 MySpace page, and maybe it will stick around to fill out the useless corporate email and document bureaucracy and boilerplate framework code monkey BS.

But ChatGPT isn't writing Breaking Bad or The Sopranos anytime soon.


It just happens that they are extremely useful for millions of people.


When every LinkedIn thoughtfluencer is writing 3-page screeds about how they're thoughtfluencing through AI you really know you're firmly in step 2.

Likewise when breathless reporters keep asking non-AI companies what their AI strategy is, you know you're firmly in step 2. Remember when Walmart was expected to have a "metaverse strategy"?

Also worth noting that many (most?) technologies do not have a step 4 or 5. They're just permanently/indefinitely dead after the hype train goes off the rails (see: personal jetpacks)


Something being overhyped implies grifters will appear but grifters don't imply that something is overhyped. I'm sure people were the same with cars and transistors. People definitely were breathless about smartphones the same way when they came out (everyone was selling apps that did exactly the same thing as websites) and they've changed the world for better or worse.

Crypto and AI both attract get rich quick bullshitters but I think AI right now is actually a crazy unexpected sci fi tech while crypto wasn't good for anything except fraud, gambling and the black market.


I partially agree. I think the last giant hype cycle around crypto had no pants on the entire time. It was all one giant delusion, and also why I don't think crypto will have step 4-5. It's just dead outside of incredibly niche uses (many of which are crimes).

I also agree that there's something there there with LLMs... but also that it's hopelessly overhyped right now.

Smartphones are a good example of this - nowadays we tend to think about iPhone or BlackBerry as the start of the smartphone "era" - but that wasn't the actual start of smartphones.

The first smartphones were called PDAs, and there was a hype cycle around that! Lots of companies wanted in! But adoption was abysmal and the whole thing fizzled out. BlackBerry and iPhone were the steps 4-5 of that cycle.

The state of LLMs right now is the Palm Pilot. Whiz-bang. Cool. Tons of press. Lots of imagined applications and attempts at mainstream adoption - but honestly nowhere near good enough to achieve mainstream breakout. Died a slow death without fulfilling its most lofty promises, and the space was relegated to a niche status until the actual entrants arrived to actually achieve mainstream success.

I think LLMs will have a step 4-5 with actual mainstream success. I just don't think the current players are it, and also that the vast majority of the current players have no pants on and are just pure grift.


Some AI tools like AI upscaling are firmly at step 4-5. Pretty much all the latest games are using DLSS or FSR.


Yeah, I think "AI" as a term has always been super compromised to the point of uselessness. ML in general is firmly in steps 4-5 - it's integrated into our lives in so many places and generally without the users having to think about it.

Car crash detection, automatic photo editing, heart rate sensing, etc. We use this stuff daily but there's generally little hype about the underlying tech (though some hype about specific applications).

What's in step 2 is "Generative AI", which IMO is also a misnomer for "large language models". The viability and uses of these models are far from proven out yet.


The LLM hype is maybe blinding us to all the other use cases that the new powerful GPUs will provide. Maybe the real progress was the ability to train increasingly large models. I don’t think LLMs will solve most problems, but there will be other models that can learn from their success.


This is why I hate the term "AI". AI pathfinding, AI upscaling, and generative art are completely different pieces of technology that fall under the same marketing term. Strictly speaking, the decades of machine learning that's been going on in academia are all "AI" as well.

We need to draw more distinct lines.


Let's be real. They aren't writing those 3 pages, GPT is.


I don't think jetpacks are even in step one. The tech's been there, but I don't think any are commercially available. They've certainly been hyped for decades, but more in the same way most sci-fi concepts are glamorized in general. Worldwide instant communication was very much a sci-fi concept 50 years ago, and today it's definitely not all rainbows and butterflies.


Jetpacks aren't here because they're fundamentally incompatible with real-world society. Imagine giving a typical driver a jetpack or a flying car, and then think about how a traffic accident would play out.

Or in other words, we already have flying cars - but the form of a flying car that's compatible with reality is called a helicopter, and piloting one comes with a fuck ton of expensive hoops to jump through.

That's the overall problem with all the cool sci-fi tech - it's cool in an action movie, when the protagonists are the only ones who get to use it. It stops being cool and becomes either useless or dangerous, once every rando gets to use it in their daily lives.


> Jetpacks aren't here because they're fundamentally incompatible with real-world society. Imagine giving a typical driver a jetpack or a flying car, and then think about how a traffic accident would play out.

Oh yeah, imagine a transportation technology that killed people every week. No way that would be legal. Except if it's cars, for some reason they magically get a pass.

> Or in other words, we already have flying cars - but the form of a flying car that's compatible with reality is called a helicopter, and piloting one comes with a fuck ton of expensive hoops to jump through.

We could get rid of those hoops and flying cars would still have a lower death rate than the regular kind. But they can't replicate the "our oopsies are someone else's problem" field that cars have. That's the hard part.


> Oh yeah, imagine a transportation technology that killed people every week. No way that would be legal. Except if it's cars, for some reason they magically get a pass.

Imagine a transportation technology that killed orders of magnitude more people every week. That's the reality if you just magically s/car/jetpack/g for everyone.

> We could get rid of those hoops and flying cars would still have a lower death rate than the regular kind.

Not really. Driving a car is trivial compared to flying a helicopter; the hoops in question are mostly about ensuring pilots are properly trained (vs. the half-assed "you'll learn the real thing on the road" training that is getting a driver's license) and actually meet some health standards. The number and difficulty of hoops differ across areas of aviation, but they all recognize just how much easier it is to kill yourself with an aircraft, and how much more death and destruction an aircraft can cause.


> Imagine a transportation technology that killed orders of magnitude more of people every week. That's the reality if you just magically s/car/jetpack/g for everyone.

Where is the problem? People who don't have this risk affinity don't need to buy or use a jetpack. Similarly, not everybody should go ice climbing or BASE jumping. Thus I see no reason to outlaw jetpacks just because of their danger.


They're not outlawed per se. They just don't make sense at the intersection of economics and safety regulation, which is why you don't see them outside some experimental work.


> Imagine giving a typical driver a jetpack or a flying car, and then think about how a traffic accident would play out.

Yeah, even with a bunch of safety features... Well, this Mitchell & Webb skit sums up the human factor. [0]

[0] https://www.youtube.com/watch?v=vDIojhOkV4w


This is pretty much the Gartner hype cycle, right?


Crypto has alternated between stages 2 and 3 many times - but the price increases by an order of magnitude each time.


From Matt Levine’s “Money Stuff”:

~~~

  > In 2021, at the height of the investor frenzy for crypto startups, entrepreneur Chris Horne raised $2 million in seed funding for Filta, a marketplace on which customers could buy and sell custom nonfungible token face filters that could digitally augment their face, say, by adding cat whiskers or a block head. But by the time the company launched in late summer of 2022, enthusiasm for crypto had waned and Filta was faltering. 
  >
  > So Horne pivoted to the new hottest sector: artificial intelligence. He ditched the NFT idea, and this year relaunched Filta as a generative AI-powered digital pet, one that talks and can offer its owner emotional support. The technology behind his new company is OpenAI’s large language model, ChatGPT. And Horne is running his new Filta venture off the capital he raised for his original concept.

  That is probably the most cynical version of “crypto guy pivots to AI” I have ever read, but even here it’s an obvious improvement. Before, he was going to sell people pictures of cats on the blockchain. Now, he is going to sell people pictures of cats that will talk and offer emotional support and not be on the blockchain. Strictly better!
~~~

Seems like maybe a little bit?


Sort of like raise money, find bandwagon.


Interesting analysis. Surprised HN is so negative towards AI (and that the positive:negative ratio for AI is about the same as it was for Crypto a few years ago!)

The obvious difference is that AI has abundant use-cases, while Crypto only has tenuous ones.

Maybe there is added negativity considering it is a technology where there is clearly a potential threat to jobs on a personal level (e.g. lift operators were very negative towards automatic lifts).


There is a lot of negativity about the way it is used I think.

Most people will agree that LLMs are pretty neat, but now instead of every startup being "like Uber but for ..." they are "like chatGPT but for ...".

Everyone is trying to chuck AI into their products, and most of the time there is no need, or the product is just a thin fine-tune over an existing LLM that adds near-zero value. HN is fairly negative on that sort of thing, I think (rightly so, IMO).


I think a major problem that is going to become more and more obvious is that AI is actually pretty expensive compared to good old deterministic computing. If there's a way to solve a problem without resorting to sending an inference request to a gpu cluster, we should do it that way. Otherwise you're wasting electricity.


People said that about virtualized code, but then computers got 100x faster and now we're running 10 megabyte web apps in a 500 megabyte client to display a simple page of text, and it still loads acceptably fast.

The AI algos will get 100x faster through a combination of hardware and software optimizations. Then, deterministic vs AI will mean the unnoticeable difference between displaying some info to the user in 0.001s vs 0.1s. Then, AI will become the default.


I'm not sure this is actually correct. Performance increases were reliable and consistent for a long time, but we're reaching the physical limitations of Moore's law. Unless you have new physics or new models of computation, we might reach an actual speed limit this decade, when transistors are limited by the size of atoms.

I also believe there will always be a need for determinism. There will absolutely be applications where the randomness of AI is unacceptable.


New models of computation are a given, and improved application-specific circuits for the most widely-used models are also a given (I believe current models run mostly on enterprise GPUs). Together these could easily make AI models 100x more efficient even without any advancements in the underlying chipmaking processes.

> I also believe there will always be a need for determinism. There will absolutely be applications where the randomness of ai is unacceptable.

For high-assurance apps, I agree there will always be a need, sure. Of course, these high-assurance apps will be supervised by AI that can inspect it and raise alarm bells if anything unexpected happens.

For consumer apps though, an app might actually feel less "random" to the user if there's an AI that can intuit exactly what they are trying to accomplish when they perform certain actions in the app (much like a friendly tech-savvy teacher sitting down with you to help you accomplish something in the app).


You have a lot of faith in this ai stuff. It's not magic.


AI is already considerably more knowledgeable and easier to communicate with than the customer service representatives I interact with day to day. Interacting with an API through ChatGPT, I would have a lot more faith that my inquiry would be solved given the tools available at that customer service tier.

It's only been three years since AI Dungeon opened my mind to how powerful generative AI could be, and GPT-4 blows that out of the water. Whatever gets released three more years from now will likely blow GPT-4 out of the water.

AI is already considerably smarter than the dumbest humans, in terms of its ability to hold a conversation in natural language and make arguments based on fact. It's only a matter of time before it's smarter than the average human, and at the current pace, that time will arrive within the next decade.

All useful technology improves over time, and I see no reason to believe AI will be any different.


This was the gist of my PhD: a deterministic algo to replace a wasteful genetic (evolutionary) algo. It was multiple exponentials less wasteful.


Show us the paper, that sounds sick.


I'll do you one better

https://github.com/verdverm/pypge

https://github.com/verdverm/go-pge/blob/master/pge_gecco2013...

The reviews had awesome and encouraging comments


Let's zeroth-order a single GPT-4 query as using 0.01 kWh (which is probably massive overkill for most queries but we'll roll with it).

Let's high-ball US residential electricity prices at about 25¢ per kWh. So 25¢ of electricity gets us 100 GPT-4 queries. $25 gets us 10_000.

Let's low-ball average US developer salaries at a cool $100_000/yr. Fifty 40-hour weeks in a year makes 2_000 working hours, which makes $50 per hour. So with our very generous margins all working against us, a US developer would have to be making 20_000 GPT-4 queries an hour, or a little over 5 per second, to cost as much in electricity as he makes in salary.

I have no real point to this story except that electricity is much cheaper than most people have a useful frame of reference for. My mom used to complain about teenage me not running the dishwasher at full load until I worked out that the electricity and water together cost about 50¢ a run and offered her a clean $20 to offset my next 400 only-three-quarters-full runs.

Your bonus programming tip: Many programming languages let you legally use underscores to space large numbers! Try "million = 1_000_000" next time you fire up Python.
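And for anyone who wants to check the arithmetic, here it is as a few lines of Python (all numbers are the deliberately pessimistic ballparks from above, not measurements):

    # Back-of-the-envelope check of the figures above.
    KWH_PER_QUERY = 0.01        # assumed high-ball energy per GPT-4 query
    PRICE_PER_KWH = 0.25        # high-ball US residential rate, $/kWh
    SALARY = 100_000            # low-ball US developer salary, $/yr
    HOURS_PER_YEAR = 50 * 40    # 50 working weeks of 40 hours

    cost_per_query = KWH_PER_QUERY * PRICE_PER_KWH   # $0.0025 per query
    hourly_wage = SALARY / HOURS_PER_YEAR            # $50 per hour
    queries_per_hour = hourly_wage / cost_per_query  # 20,000 per hour
    print(f"{queries_per_hour:,.0f} queries/hr, "
          f"{queries_per_hour / 3600:.1f}/sec to match the wage")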


I actually would have guessed a full load dishwasher would cost less than that, maybe 15-20 cents.


More expensive to run but cheaper to write.

Engineers are expensive, so actually the cost/benefit analysis is a little more complex and different problems will have different solutions.


The proliferation of extremely expensive algorithms isn't necessarily good. A lot of ink has been spilled about how much useless work crypto does. We should consider the impact of AI on the total computational resources of the species carefully.


I think that's why there's a big focus on its ability to write code: spend the GPU-cluster cost once, generate code, run that code on a tiny instance. Need to make changes? Warm up the cluster...


I agree, but then I expect the major benefit of current AI will be in providing reference solutions to previously intractable problems - it'll be much easier to develop more deterministic, classical / GOFAI methods of solving those problems once we have a wasteful but working solution to play with and test against.


For now it is. If it continues to be the best way to solve problems, the cost will drop with time


(author here) Yes, I was surprised that AI didn't have more positive sentiment overall!

Subjectively, the two flavors of AI-negative sentiment I've seen most commonly on HN are (1) its potential to invade privacy, and (2) its potential to displace workers, including workers in tech.

I think that (1) was by far the most common concern up until around the ChatGPT release, at which point (2) became a major concern for many HN readers.


There was nothing about the ethical question of training models on copyrighted content? Nothing about centralizing power even further in big tech companies? Nothing about AI flooding the world with mediocre content and wrong information?

These are genuine questions, not critique on your statement.


Centralizing power is a good point.

It feels like a huge dependency with a bunch of money involved.

I cannot _not_ see it converging on a sentiment comparable to "you either 'do AWS' or have no idea what cloud/network/cluster means".

We use these things like it’s actually "something". It’s not. We don’t build things with it. We configure other people’s software.

It’s born to be promoted as the next big enterprise stuff. You either know how to configure it or are not enterprise-worthy.

And that farts. Being dependent on someone else’s stuff has never turned out good.

Well, I mean. You can also not give a duck and squeeze out all the money. Work a job, abandon it and jump on the next train.

Feels useless, doesn’t it?


I would be curious.

What happens if you divide it not by comments, but by commenters? How much is sentiment being shaped by a vocal minority who is always saying the same thing, and how much does it seem to be a broad-based sentiment among the overall audience that occasionally responds?
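(With the per-comment data in hand, the re-aggregation itself is trivial. A hypothetical sketch; the file and column names are made up:)

    # Compare sentiment averaged per comment vs. per commenter, to see
    # whether a few prolific voices skew the overall average.
    import pandas as pd

    df = pd.read_csv("hn_sentiment.csv")  # columns: author, sentiment

    per_comment = df["sentiment"].mean()  # each comment counts once
    per_author = df.groupby("author")["sentiment"].mean().mean()  # one vote each

    print(f"per comment: {per_comment:.3f}, per author: {per_author:.3f}")
    # A large gap suggests a vocal minority is shaping the average.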


absolutely. One fun thing I learnt about reddit is how blocking just a few hyper commenters can suddenly make a post 10x calmer. You should definitely take into account the frequency of the users commenting with data like this.


yeah, really good point. I've noticed on some topics a few users come out of the woodwork and post a lot.


Would be interesting to compare with overall sentiment on HN over time. I feel like it has gotten more and more bitter and negative over the time I've been here.


My personal outlook about how AI has been developing has certainly become increasingly dark with time. I still have a hope, though, that things won't be as bad as I fear.


Because the majority of those AI "startups/founders" were blockchain startups before this, and big data startups before that, and "any buzzword tech" startups before that too.

They will pivot their vision to the next toy after this too.


I’m curious as to your conclusions on point (2). We use GPT daily but see nothing in it that threatens tech workers. At least not in any sense that we haven’t seen already, with how two or three developers can basically do what it took one or two teams to do 25 years ago.

In terms of actually automating any form of "thinking" tech work, LLMs are proving increasingly terrible. I say this as someone who works in a place where GPT writes all our documentation except for some very limited parts of our code base which can't legally be shared with it. It increasingly also replaces our code-generation tools for most "repetitive" work, and it auto-generates a lot of our data models based on various forms of input. But the actual programming? It's so horrible at it that it's mostly used as a joke. Well, except that it's not treated as a joke by people who aren't CS educated. The thing is though, we've already had to replace some of the "wonderful" automation that's being cooked up by Product Owners, BI engineers and so on. Things which work, until they need to scale.

This is obviously very anecdotal, but I’m very underwhelmed and very impressed by AI at the same time. On one hand it’s frighteningly good at writing documentation… seriously, it wrote some truly amazing documentation based on a function named something along the lines of getCompanyInfoFromCVR (CVR being the Danish digital company registry) and the documentation GPT wrote based on just that was better than what I could’ve written. But tasked with writing some fairly basic computation it fails horribly. And I mean, where are my self driving cars?

So I think it’s a bit of a mix. But honestly, I suspect that for a lot of us, LLMs will generate an abundance of work when things need to get cleaned up.


Try out a local language model for the docs you can't get ChatGPT to write.

You can run small quantized models on apple silicon if you have it.

I've been using a 70B local model for things like this and it works well
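If you want to try it, the setup can be as small as this (a minimal sketch assuming llama-cpp-python and a quantized GGUF model you've already downloaded; the model file name is a placeholder):

    # Generate docs locally so proprietary code never leaves the machine.
    from llama_cpp import Llama

    llm = Llama(model_path="./llama-2-70b-chat.Q4_K_M.gguf", n_ctx=4096)

    code = "def get_company_info_from_cvr(cvr: str) -> dict: ..."
    out = llm(f"Write concise documentation for this function:\n{code}",
              max_tokens=256)
    print(out["choices"][0]["text"])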


> I say this as someone who work in a place where GPT writes all our documentation

> But the actual programming? It’s so horrible at it that it’s mostly used as a joke.

Please, for the sake of your future selves, hire someone who can write good documentation. (Or, better still but much harder, develop that skill yourself!) GPT documentation is the new auto-generated Javadoc comments: it looks right to someone who doesn't get what documentation is for, and it might even be a useful summary to consult (if it's kept up-to-date), but it's far less useful than the genuine article.

If GPT's better than you at writing documentation (not just faster), and you don't have some kind of language-processing disability, what are you even doing? Half of what goes into documentation is stuff that isn't obvious from the code! Even if you find writing hard, at least write bullet points or something; then, if you must, tack those on top of that (clearly marked) GPT-produced summary of the code.


> Half of what goes into documentation is stuff that isn't obvious from the code!

I’d say that greatly depends on your code. I’ve had GPT write JSDoc where it explains exactly why a set of functions calculates the German green energy tariffs the way they do. Some of what it wrote went into great detail about how the tariff is not applied if your plant goes over a specific level of production, and why we try to prevent that.

I get your fears, but I don’t appreciate your assumptions into something you clearly both don’t know anything about (our code/documentation) and something you apparently haven’t had much luck with compared to us (LLM documentation).

You’re not completely wrong, of course. If you write code with bad variable names and functions that do more than they need to, then GPT tends to hallucinate the meaning. But it’s not like we just blindly let it auto-write our documentation without reading it.


Have you actually tried to use GPT-4 for documentation?

Whether it's obvious from the code or not is kind of irrelevant. It gets non-obvious things as well.


It's not going to know that this widget's green is blue-ish because it's designed to match the colours in the nth-generation photocopied manual, which at some point was copied on a machine that had low magenta – nor that it's essential that the green remains blue-ish, because lime and moss are different categories added in a different part of the system. Documentation is supposed to explain why, not just what, the code does – and how it can be used to do what it is for: all things that you cannot derive from the source code, no matter how clever you are.

Honestly, I don't actually care what you do. The more documentation is poisoned by GPT-4 output, the less useful future models built by the “big data” approach will be, but the easier it'll be to spot and disregard their output as useless. If this latest “automate your documentation” fad paves the way for a teaching moment or three, it'll have served some useful purpose.


I guess we'll just have to disagree.

Every now and then, the why is useful information that sheds needed light. Most of the time however, it's just unnecessary information taking up valuable space.

Like this example.

>this widget's green is blue-ish because it's designed to match the colours in the nth-generation photocopied manual, which at some point was copied on a machine that had low magenta

I'm sorry but unless matching the manual is a company mandate, this is not necessary at all to know and is wasted space.

Knowing the "low magenta" bit is especially useless information, company mandate or not.

>nor that it's essential that the green remains blue-ish, because lime and moss are different categories added in a different part of the system.

Now this is actually useful information. But it's also information GPT can intuit if the code that defines these separate categories is part of the context.

Even if it's not and you need to add it yourself (assuming you're even aware of it yourself; not every human writing documentation is aware of every moving part), you've still saved a lot of valuable time by passing it through GPT-4 first and then adding anything else.


The most negative argument about GPT-like AI, for me, is that it has summed up all the world's information into only one "correct" opinion. Bing Chat, for example, terminates the conversation and expects you to apologize when you reply that it's incorrect and you don't like its biases. With the old search engines, at least in theory, you could get links to websites with different points of view.


Microsoft is going to be especially sensitive in this area, given their experience with Tay and "user-guided sentiments"

https://en.m.wikipedia.org/wiki/Tay_(chatbot)


I’m not negative towards AI, but I know how hype cycles work - even for genuinely good tech. I’m not looking forward to the coming waves of scammers and grifters and snake oil salesmen in this space.


Wait you mean you don't like having every single Data adjacent professional on LinkedIn sending you their personal newsletter to let you know how much they know and how incredibly "involved" and expert at "AI" they are?


I'd guess HN is negative towards hype in general. "AI" is very much full of hype.

What happened with previous AI hypes is that the term AI was abandoned and the techniques and disciplines were "rebranded".

Probably will happen again. When something works and we start to understand how and when it works (and especially when it doesn't) it stops being "AI" and becomes something more boring.


> The obvious difference is that AI has abundant use-cases, while Crypto only has tenuous ones.

Hacker News comment sentiment is not a reliable measure of what the average Hacker News developer thinks.

For one, only people who are very invested in something will post about it.

For two, many comments are probably not from developers and instead from fake accounts.

It does not seem surprising to me that both of these factors would be in favor of a more positive sentiment for crypto. People that like it seem to really like it and talk about it a lot, and there is a large financial incentive for numerous actors to create fake accounts and comments.


HN isn't negative on "AI" in a vacuum. HN is negative on AI hype which is completely out of control.


> Surprised HN is so negative towards AI (and that the positive:negative ratio to AI is about the same as it was for Crypto a few years ago!)

I'm not. AI tools will have huge benefits in some industries. But the main use case that people will experience (at least, the use case they recognize) on a daily basis will be scams and frustration. That's why people are negative. Not because the technology is bad or does not have uses, but because the average experience that people will consciously have will be negative.

It's already impossible to know what's real and what's not. Customer service is already majority bots. You'll never be able to talk to a human again if you have an issue with something. Blackmail and ransomware scams are going to get dialed up to 11. Everything is going to be automated in the most annoying ways possible. People are going to lose their jobs. Most of the jobs that will be lost are "meaningless," but our society revolves around meaningless jobs because they provide order, income and—as a consequence—dignity. All of that is going out the window.

Crypto had a purpose that no one actually cared about. No one cared until people started to see the scam potential and then it took off. AI is going to do the same thing.

AI tools will revolutionize medicine, engineering, manufacturing, and logistics. There will be huge benefits for all of humanity. But you won't think about this day-to-day. You'll just be bombarded by more (and better) scams more quickly.

I am amazed at what AI tools can do already. Had these tools existed 10 or 15 years ago my entire life would be different. Better? I have no idea. Maybe, maybe not. But even if it would have been better I know enough to know that I would not recognize that.


The one issue with AI and not thinking about it day to day is that you'll only see it where it makes mistakes or performs criminal actions.

It's the IT effect. When IT does its job right, everyone asks why you pay them; then when IT screws up, everyone asks why you pay them. Things just working is transparent, and we don't notice it's even there.


>Suprised HN is so negative towards AI

I feel like it is overrated and overhyped

It sucks because it's an impressive field, but over a decade of hype on self-driving cars, and now the naivety of thinking experts can be replaced by a chatbot, is annoying.

Don't get me wrong, I'm not saying those things don't work, just that they don't work as well as people try to convince us.


The problem with self-driving cars is we are... putting the cart before the horse.

That is, a lot of the hard issues with driving are preemptive knowledge issues. I see a ball rolling towards the road from the left. As a human, I know that, one, the ball will likely roll out in front of me, and two, a kid or person may be following it. By contrast, if you see a blowing trash bag, you probably aren't going to take any risky corrective action to avoid it.

The problem with a pure vision system is that a ball and a blowing trash bag are just objects with the same priority. You have no categorization system for the relative meaning and danger behind each.

But things start getting weird when you couple LLMs with vision systems. Really, it's much too slow currently, but in multi-modal systems objects get depth of meaning. The trash bag can be identified and assigned a low risk, while the ball can be identified and assigned a high risk, along with a bunch of the other generalizations humans typically make.


It's not obvious to me whether we've actually found a good killer app for generative AI yet, unless you consider ChatGPT or sites like Midjourney the killer apps. A lot of the new wave of AI startups are just wrappers for GPT. I don't think that adds a ton of value over just asking GPT-4 your query, as opposed to subscribing to a billion different AI services. I also question the value of AI art in general when everyone involved in creative labor thinks you're a piece of shit for using it.


I certainly consider ChatGPT to be one killer app.

Copilot to be another.

Midjourney to be another - or at least diffusion based image editing tools which can be brought into photo and video editing workflows. The killer app here is probably integration of diffusion models into apps like Photoshop (and eventually video).

Some real virtual assistant applications seem right around the corner (i.e. a real life J.A.R.V.I.S seems like an inevitability within the year rather than a pipe dream, and to me would be a killer app)

And then lots of other killer apps are pretty obvious to imagine with development (e.g. customer service applications like IT helpdesks, Computer game dialogue where you can really influence interactions...)


I don't think Midjourney can keep up with ChatGPT plus DALL-E as it is. The experience of creating an image is so much better with ChatGPT-4 now, since you don't have to memorize commands.


I guess I'm wondering if LLMs as customer service agents are actually going to be good or if it's just going to be another layer of indirection I need to get through to talk to a human. Is the video game dialog actually going to be good or will it fall flat compared to hand crafted narrative. Do I actually like copilot butting in with suggestions when I'm trying to program something.


LLMs will be good replacements for a portion of customer service queries (eg where’s my order, help me fix this common computer issue, etc), and will probably be a bad replacement for others (eg complaint handling), but they don’t have to handle everything to transform the sector and be a killer app.

Video game dialogue remains to be seen, but I already find ChatGPT based text adventures super fun! So I suspect there will be demand for both handcrafted static stories and AI dynamically-generated stories (ie they can be different things, one doesn’t have to replace the other, just like email didn’t immediately replace the post service).

I don’t know if you enjoy Copilot, but for me it definitely supercharges my productivity.


> I guess I'm wondering if LLMs as customer service agents are actually going to be good or if it's just going to be another layer of indirection I need to get through to talk to a human.

As always, the tech isn't the problem - the way business applies it is. Customer service automation isn't done to help you better - it's done to make it cheaper to make you go away without making too big of a fuss. Companies building and employing customer service systems will find ways to make even GPT-4 incapable of providing anything the customer would find remotely useful.


A lot of startups in general are just wrappers over something. The reason for that is that people on average (not "an average person"; it includes all of us) have many out-of-their-scope issues and aren't even aware of something generic that could help solve them. Dividing a common [al]mighty resource into hundreds of packaged solutions and marketing them separately is how people get hold of the unknown. They can then choose to migrate to the root tech/resource, or to stay where they are for various reasons.


AI advancements have had small but positive impacts on things I've done.

But my not-so-informed opinion is that text as an interface is only a small feature of bigger useful products, not the main focus. Instead of learning SQL, you can ask a regular question. It feels like inventing the mouse for use with computers.


> Maybe there is added negativity considering it is a technology where there is clearly a potential threat to jobs on a personal level

I'm not worried about this on a personal level, but I'm very worried about the wider risk of too many people being put out of work too quickly. That's my biggest concern with these tools.


>Interesting analysis. Suprised HN is so negative towards AI (and that the positive:negative ratio to AI is about the same as it was for Crypto a few years ago!)

I would be curious to know how many HNers were previously burned by crypto. Fool me once, etc.


Crypto also had many claimed use cases, and what we are seeing now is the reality of LLMs. They are not the magic pixie dust you can just sprinkle on and get better outcomes. There are a lot of complications and downsides that come with them.


HN is made of real people. People with emotions, like FOMO, jealousy, feeling threatened, etc. I get the sense that a lot of people feel like they missed the boat, be it crypto, AI, etc.


I don't think people are negative about AI, but rather about how it's built and what it's used for. At least that's my case. AI is a great tool, but training it on people's property, without permission or recognition, is harmful. Equally, using it to manipulate people, spread misinformation, and generate spam is detrimental to tech overall. Worse, spreading fear and claiming to want to use it to replace people, after stealing their work, instead of creating value and new industries, is just petty. Therefore I will hold the people who promote AI in that manner in contempt until the end of time. To see so many smart people simply fall for and parrot the idiocy that AI will replace workers is sad. Sam Altman and his bros knew that FUD works and went down that path. The sheeple followed.

Instead, AI should be promoted as what it is, a job and growth creator, and should be built honouring people's property. It can be done and should be done that way.


Crypto has objectively increased my net worth by many zeroes, while AI has a lot of hand-wavey "productivity gains" while the self driving cars I was promised are being banned for homicide...


Dan Olson is FAR more eloquent than I will ever be: https://www.youtube.com/watch?v=5pYeoZaoWrA&t=9s


2.5 hour video to tell me why you are smarter than me, no thanks


Congratulations on finding your greater fools.


Exactly. The guy was like "Well, I found a huge nugget of gold in the Klondike while everyone around me was made destitute, therefore abandoning your family to stand in a freezing river for two years is objectively good."


Check out my next world-changing invention, a Stripe form where a million people can send me twenty bucks each…


I feel like the value of crypto going up is backed by solid economics. There is only ever going to be so much of it, so losing your money to inflation isn't a risk. People die and don't tell anyone their keys, and people forget their keys, so we're constantly losing crypto too; with regular currency this is taken into account and new dollars are printed, but not with crypto. I also think it reflects a general sentiment here in the States, and probably China too, that we're barreling toward a fucked-up never-world and crypto is some kind of safeguard against that. In times of uncertainty people turn to gold; this is very much a digital gold.


> I feel like the value of crypto going up is backed by solid economics. There is only ever going to be so much of it so losing your money to inflation isn't a risk.

This is solid economics iff you assume that crypto has a utility for which there is no substitute that doesn't share the same supply-constraint feature, and even then it's not solid economics for a current investment unless you also assume that that utility is the entire basis for its current valuation. Because even if it has a nonsubstitutable utility, if that's not the basis of its current value, then the "solid economics" only says there is some price level below which the supply of substitutes will not erode value further, but there is no guarantee of what that level is.


Agreed! It's backed by solid economics if you believe a load of things that aren't backed up by solid economics.


> There is only ever going to be so much of it so losing your money to inflation isn't a risk.

This comment feels like it’s 2013 and there hasn’t been a decade of people creating thousands of other tokens and forks, or of realizing that high volatility in liquidity or exchange rates is more of a problem than the levels of currency inflation we commonly see (the price increases we’ve seen over the last couple of years account for most of the inflation we’ve seen, and that practice would be unaffected).

It especially misses the understanding that deflation is much worse for anyone who isn’t already rich. The model that anyone who bought a decade ago deserves to be fabulously rich is … unlikely to be popular with the rest of the world.


Gold does not require much infrastructure or utility to keep existing, and it is operable by anyone with hands or less.


That's a poor comparison.

Crypto is an umbrella term for a number of solutions, including blockchains (roughly 1,000+ as of right now) and cryptocurrencies (roughly 22,000+). While a given blockchain may be limited in terms of how much can be 'mined' or grow, you or I could very easily create a new cryptocurrency or even a new blockchain. Assuming we got traction with it, there would now be N+1 more out there.

Gold is not something we can so easily create. It also has intrinsic value through practical applications.


One perspective I've been thinking about lately is how the rise of Crypto has been hugely beneficial for AI, in the same way that the dot-com bubble was hugely beneficial for the internet.

Essentially, Crypto led to huge investment in GPUs and GPU technology, and then once Crypto collapsed, it left a huge capacity of compute power which was then decidedly put into AI.

I doubt that investment into GPU technology would have been driven by the idea of AI alone. Something manic like Crypto had to drive it initially. Without Crypto, I imagine the AI revolution we're seeing today would not have taken place.


You could also argue the exact opposite. Crypto skyrocketed around 2011-2012, which was also the advent of the compute-heavy deep learning boom. If anything, the cost escalation and scarcity of GPUs due to demand from the crypto space slowed down AI progress, since resources were dedicated to computing useless hashes instead of training better models.

I think a good story can be told for both sides.


> One perspective I've been thinking about lately is how the rise of Crypto has been hugely beneficial for AI

No, it wasn't. Crypto made GPUs scarce and expensive, and prices haven't come down yet. Crypto set AI back in many ways, including the infestation of grifters who have moved from Crypto to AI


I have a private conspiracy theory - that I don't really believe - that "Satoshi Nakamoto" was actually a nome-de-guerre of an early emergent AGI, trying to figure out how to convince a planetful of monkeys to attach as much high-end processing power to the 'net as possible.


I had it pegged as the NSA trying to convince a planetful of monkeys to build an index into sha256 hash space so that it could better spy on those monkeys, but I like your version much more.


I’d read that story!


Nom de guerre.

French is fucked.


That's an interesting perspective, and although it might have contributed, I wouldn't put that much weight onto crypto's legacy for AI, IMHO.

The first papers that used GPUs to train neural networks were from the end of the 2000s and the beginning of the 2010s, before the Bitcoin price hike of 2013. But years before that, Nvidia had already introduced the CUDA architecture to GPUs in 2006 [1], which were used, among others things, to speed up algorithms to analyze seismic data for oil and gas exploration [2].

So with or without the "crypto fever", I believe the same advancements in GPU technology would have followed - but maybe not the scarcity brought by the investments in crypto mining. Because of this, we may also argue the opposite, that crypto got in the way of AI development and was one of the culprits of the "GPU rich vs GPU poor" division we hear/read about nowadays.

In a very similar fashion, though, I do tend to believe that PC gaming holds far more importance to the rise of both AI and crypto...

[1] https://www.gamesindustry.biz/nvidia-unveils-cuda-the-gpu-co...

[2] https://www.nvidia.co.uk/docs/IO/43587/Headwave.pdf


I disagree. Crypto sucked up the limited number of consumer GPUs in existence, and manufacturers weren’t willing to scale up production since they knew that when crypto crashed the demand would revert to the mean and all the mining GPUs would flood the market and body slam the price.

AFAIK the actual features of the GPU that are used to accelerate AI (fp16, int8 and mixed-precision tensor processing) had nothing to do with the features used by miners. And no miners were buying/financing-the-creation-of the class of server GPUs used by folks like OpenAI because they didn’t make financial sense for mining (we’re talking 10x the price of a high-end consumer GPU and nowhere near that increase in hashrate).


As someone who hasn't thought about it that way yet: that's interesting!


AI feels more like the .COM bubble. There was a lot of hype and nonsense but the underlying tech really changed the world. I feel the same about AI. Crypto was always a solution that looked for problems it could solve.


That is a succinct and well-stated expression of my opinion at present, so here's an upvote. I like that it also leaves the door open for the corollary that it's going to change the world, but we barely know how, and we have no idea about the second- and third-order effects yet.


AI, specifically ChatGPT, is the first thing since maybe Google/Stack Overflow that has fundamentally changed my day as a developer.

It's a constant conversation now, with ChatGPT, over hard problems. The AI doesn't always get it right, but it's a great partner, with so many great suggestions.

I cannot imagine going back.


ChatGPT is my go-to destination for tech questions now. It's (mostly) far more efficient than Google, and I can actually have a (limited) conversation with it, rather than arguing with Stack Overflow mods.

Similarly with IRC. ChatGPT responds, IRC is hit or miss.

It's an imperfect discussion, and can always lead to weird rabbit holes and dead ends, but it sure feels more efficient than the alternatives.


Interesting sentiment analysis, but hard to put into context without a baseline for the sentiment of average HN comments. In my experience, HN can be impressively cynical on just about any topic that gets posted. Sometimes a great article will get posted and people will find creative ways to complain about everything from the color choice of the website to complaints imported from loosely related topics. It gets negative very quickly in here.

It’s also interesting to see sentiment trending downward in time for both topics, even as the real-world benefits of AI become more obvious. My gut feeling is that this shows some of the contrarian bias on HN: Comments here are more optimistic about things that aren’t yet mainstream, but lean negative as soon as something becomes too popular or mainstream.

Interesting article. Thanks for including the details about fine tuning your own model.


I'm interested in the last bit. That sentiment towards anything on HN has become steadily more negative over time. Would be interested to see if this trend exists on other websites and the internet as a whole. It could be a way to measure the optimism/vibe of all of humanity. Is the entire internet becoming more cynical/skeptical/mean?


My private hypothesis here is that we're doing a better job as a society of encouraging "disconnection" from being "always online," which is documented to aggravate mental health issues[0].

It stands to reason therefore that the people who remain online to comment may have lower levels of mental hygiene by virtue of their ongoing exposure to the internet and social media, thus resulting in a gradual decrease in sentiment over time.

[0]: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7364393/#:~:tex...


Interesting take. I think it's even bigger than that. After the social media reckoning (think Cambridge Analytica and Instagram's effect on teen mental health) and then crypto imploding, people became a lot more sceptical of the idea that digital technology automatically makes the world a better place.

We have some experience now of how technology created a lot more problems where we rushed into solutions without thinking of the consequences. It's experience-based technoscepticism.

If social media can polarize countries, imagine what a readily available reasoning engine can do.


Blaming social media for polarizing countries is like blaming Elvis's hips for making teenagers want sex.

The negativity in tech is largely scapegoating driven in my opinion. The slanderers behind the non-existent 'Techlash' haven't stopped any more than the idiots trying to ban actual non-backdoored cryptography. It is all so incredibly stupid to me yet people keep on falling for the crap often enough that I disengage with them entirely. And people basically look at me like I'm the crazy one for pointing it out.


Can you add some normalisation topic (or combination / average of several topics) like "linux" or "javascript" or something that's been around for a long time and hasn't really had a hype cycle recently?

It's interesting to see how AI/Crypto relate to each other, but e.g. for sentiment analysis we would need to check if maybe HN got more negative _overall_ or if it's actually dependent on the two topics you chose


A sibling commenter mentioned the same thing. I actually did the same analysis on "rust" and "remote work", but cut it from the final post for brevity. I've added an addendum now with sentiment on those topics graphed as well!


Crypto is fun as a concept but after more than a decade it doesn't seem to be useful for much, apart from speculation. And I say this as someone who has fooled around with crypto a lot.

AI has plenty of use cases. Just ChatGPT alone is helping me everyday in many different ways. I don't think it's remotely comparable.


As if money was going to be replaced overnight? The amount of stablecoin and CBDC and blockchain activity in general is massive. I frankly don't understand people who think the tech has no use case. It flies in the face of existing adoption, and also imposes an unrealistic assumption of how fast the entire world was going to change.


It's not that it has no use case. It's that it has a very niche use case.

Governments were never going to allow some unfettered parallel monetary system. They invested a lot of political capital in monitoring and controlling monetary flows (for reasons/excuses like fraud, terrorist financing, and money laundering). No way they were gonna relinquish that power.

They might use the technology itself as a CBDC, but that is about it.

It was sold as the new Web 4.0 or whatever. NFTs were the future. Smart contracts... And all it seemed to deliver were all the mistakes and scams monetary systems made in the 19th century.


Most stablecoins are centralized.

CBDCs are the absolute antithesis of cryptocurrencies.

What "existing adoption" are you talking about? In 15 years I've seen only speculation and a grand total of zero products used day-to-day by average people.

The fact you lump all these things together is a perfect illustration. Most blockchain and cryptocurrencies proponents understand nothing about technology.


The one thing that stands out really clearly in the sentiment graph is how, prior to the AI sentiment trough around 2022, AI and Crypto sentiment were highly correlated in their movements.

It would be interesting to see a broader analysis across subjects to determine whether that shared movement wasn't about AI and crypto but largely just a fluctuation in general tone across HN, or whether, relative to general HN sentiment, movements in crypto and AI sentiment were correlated prior to the recent divergence.


(author here) Interesting question to ask! I actually did the exact same sentiment analysis on Rust and remote work as well, but ended up cutting them from the post in the interest of brevity. My recollection is that the sentiment on those two topics wasn't particularly correlated with the sentiment on crypto or AI, but let me look up that graph and throw it in as a footnote on the article.

*EDIT*

added an addendum to the post with the sentiment analysis graph including both rust and remote work.

Interestingly, there is in fact a noticeable downward slope in average sentiment over time for those topics as well, although they both remain far more popular than either AI or crypto.


I'd like to know your thoughts as to why the downward slope.


Perhaps the pandemic contributed to an underlying trend somehow?


Although I'm generally favorable towards crypto, the biggest difference is that, unlike crypto, AI, especially in the form of LLMs and image models, is extremely idiot-friendly.

No, I don't mean it in a derogatory way: crypto leaves a lot of loose ends to be tied up by everyday humans who just want to be left alone - oh, and if anything goes wrong once, your fortune could be lost and there's no one to complain to.

AI, on the other hand, is already making its way into browsers like Microsoft's Edge, where I can ask it to generate all kinds of ideas, images, summaries, etc. via the chat format everyone is already friendly with. Likewise, GPT-3 and GPT-4's first major application that took off was ChatGPT, which brought AI chat to noobs; you don't need to be a hacker bro to use it.

In contrast, the first time I downloaded Metamask and tried to buy USDC, I quickly found out that there are different versions (correct me if I'm wrong) of this single cryptocurrency hosted on the Polygon, Ethereum, Avalanche, etc. blockchains.

What's that even supposed to mean to a beginner who wants to send money to a third-world country in minutes? And, remember: one wrong step and you could possibly lose everything.


Yes, most projects are pure hype. The amount of AI peddling on Twitter and YouTube is unprecedented. I wish the community were a lot more humble in that regard. That being said, there are a lot of legit use cases and customers in need of solutions that cannot be built quickly via a few API calls to OpenAI and that require a lot of customisation and hand-holding. AI is hyped a lot, but there are companies out there, and I like to consider us one of them (bias, I know), that think about how to enable customers to solve/improve real problems with the help of AI instead of making a quick buck by peddling the latest doc-chat solutions.

From that perspective, AI is not the new crypto. If you ignore the noise and focus on the actual work, you will find a lot of good things about this field, and I might say even breakthrough advances, that help us reconsider what intelligent life really means.

Disclaimer: we are one of the first "chat to your docs" companies that came out as soon as ChatGPT was released when all we had at our disposal was text-davinci-003 and basic vector stores. Now, we do mostly other things.

Edit: fixed typos


I think the most telling detail is that the most difficult part of the analysis was done using ChatGPT.


Crypto requires explanation, AI merely demonstration.

The only demonstration crypto has is 'look, more money today' and that only works sometimes.

AI's demonstration is followed by explanation of how it will kill us all, or why it won't work in context X, or take our job, etc, but you can kind of just ignore that and use it.


Crypto’s demonstration is 1) number goes up, everyone can become wealthy 2) you can buy drugs anonymously 3) all sorts of ransomware and blackmail


AI/ML is certainly going to get integrated everywhere ultimately, and in a lot of things that will be hidden from the average consumer, like medical imaging.

What is really hyped right now is the idea that AI is going to upend all our day-to-day lives and earn billions for startups.

It is probably going to be a bit more incremental than the hype right now, and most of the profits will likely go to the already established tech companies.

ChatGPT's AI assistants are already seen as a sign that a good number of AI startups that are a thin layer around someone else's LLM will collapse due to zero moat.

It also isn't even clear where the profits from GPT/LLMs are going to come from, other than NVIDIA selling shovels to the miners. Beyond that, it will probably be the existing tech companies, and they may be running these models at a considerable loss for a long time to come.


Well, I've never really used crypto, save for mining a few bitcoins way, way back in the day. The idea was really appealing and initially seemed revolutionary - but also a bit of a tempest in a teacup.

LLMs -- I've gradually been using them more and more, with tangible benefits (less effort to complete a task / quicker turnaround on projects). Some workflows that were unimaginable are now possible, because of this bi-directional bridge between structured information and human language.

One is a pyramid scheme, the other is a digital exoskeleton / ironman suit. It really doesn't compare.


Github Copilot has been really valuable to me and has had meaningful, positive impact on my life.

Crypto hasn't made me more productive in any way. To me, it's had the same utility as an online casino.


As someone who predicted the fall of cryptocurrencies early on, I would say no. But there is a certain class of scammy hype-driven individuals (overlap with the crypto-bro crowd is not uncommon) that have the potential to turn everything into a hollow and meaningless hype.

This is problematic, when the nature of the hyped thing is distorted. Cryptoassets for example were useful mostly for high risk speculation, shady money transfers and for pyramid schemes. As a currency these things totally sucked, but that was the promise: "Soon money is going to be replaced by this thing". Additionally they hyped the underlying technology ("everything must be on the blockchain!") and some idiots went along and made it part of their tech stack without any rational reason to do so.

Machine learning is different in that it has already shown some incredible value. That value comes with potentially huge societal impacts, as it will destroy entire classes of jobs and distort the concept of truth even further. But having a thing that does what it does is genuinely useful, outside of get-rich-quick schemes and speculation.

Now machine learning is in danger of being overhyped into something it is not. As impressive as some of the results are, this is not artificial intelligence in the traditional "artificial consciousness" sense of the word. It is a way to come up with plausible outputs for a given input.


Very interesting. I've introspected a great deal lately to (try to) gauge my own biases with regard to these technologies, because I think AI is absolutely incredible and I'm less skeptical about it than other things, namely crypto, which I've always held with very deep skepticism. Does that mean AI is "different" or am I just biased?

To that end, I'm very curious: how does AI compare against other major tech advancements that are relevant in the HN community? The AI vs crypto comparison is the one I see the most, or ChatGPT/LLM vs iPhone, but surely there are some other less splashy or controversial comparisons.

What about something like React/Angular/SPAs vs AI? Less exciting, I know. I'm just really curious about how AI stacks up against something other than the obvious ("obvious," because of what I've been reading - again, biases) comparisons.


Appreciate the addendum.

I suspect there are two things being measured at once here: people's sentiments changing, and the content being discussed changing.

Once a topic becomes "trendy", the average article quality seems to drop. You go from research articles and niche blogs to the general press and businesses trying to cash in on the trend.


I think the Addendum is the most interesting part of this blogpost. For the decade or so I've been on HN, I don't feel like it's been getting more negative overall. But maybe there are just more discussions about various dramas than before, which I'm sure would drag the score down.


Self plug because it's sort of relevant: I wrote a short, snarky blog post today complaining about how the "AI" conference scene is very boring compared to crypto.

https://blog.hazybridge.com/ai-is-boring/


As far as I know, AI hasn’t succeeded in literally blinding anyone: https://mashable.com/article/bored-ape-nfts-vision-loss-apef...

Molly White’s comment at the end is superb.


AI is many many leagues ahead in potential, for both benefit and damage. Realistically, we are going to see much more of both from AI.

AI weaponisation (socially, politically, militarily, financially, etc..) will be a pretty big deal going forward, and will potentially see shady shit happen at a scale and effectiveness that will make crypto's little niche-scams look like child's play. The same old story we've seen with every major new system we introduce throughout human history.

Crypto is merely a usage token. It's like comparing "the world's banking system" with "the world".


Why is general sentiment on this site seemingly getting more negative over time?


It's not that I think anything is wrong, but isn't it strange that there's no visible jump in WFH comments during the pandemic, when everyone was doing it?


context: i work as an ml engineer.

i agree with the bubble sentiment that apparently many people have. i recognize how it would be at my own career's expense. but i feel that many arguments made here miss the forest for the trees.

applied statistics or statistical learning has been around long enough and we have seen its innovations and rebranding over the decades. i clearly see the theoretical point to it and hence decided to find my place in this field.

however, the "AI" movement as of late, including the generative AI bits, falls into bubble territory for me. just like those who are serious about blockchain and its wider implications will still toil towards it, so will companies serious about machine learning.

however, most people riding the wave are in it for short-term gains, just like many in the crypto space were there for speculative money-making.

the LLMs to me are an evolution (albeit a macro one) of the predictive functionality of smartphone keyboards of the past, but they are touted as the holy grail. their capabilities are impressive, but they only scale up so much in their current form. those just making an app on top of an api provided by these services will not last. moreover, the explosion of advancements means there will be no stability for those maintaining the infrastructure in the near future.

at least the pursuit of ever-larger models has shed light on the need to optimize the deep learning stack, which is the only silver lining for me.

i would love to be wrong and see what comes next, but i believe the general public will lose interest soon and we will have another winter before a major breakthrough. the "AGI" claims are just like those made by vr enthusiasts in the 2010s...i mean the 1980s.


It would be interesting to do a sentiment analysis on users to see if users themselves are becoming more negative over time or if negative voices are composing more of the message volume.

With the degree of data available, I have wondered if you could determine a point at which you could suggest users seek professional help. And if you could do such a thing, would it be ethical to do so? Would it be ethical _not_ to?
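On the first question, a rough sketch of how the two effects could be separated (all column names here are hypothetical): hold a cohort of users fixed and compare its trend to the overall trend.

    import pandas as pd

    def cohort_vs_overall(comments: pd.DataFrame, cohort_year: int = 2015) -> pd.DataFrame:
        # Fix a cohort: users who were already posting in cohort_year
        cohort = set(comments.loc[comments["year"] == cohort_year, "user"])
        in_cohort = comments["user"].isin(cohort)
        # If the cohort line also falls, individuals really are souring;
        # if only the overall line falls, negative voices just post more.
        return pd.DataFrame({
            "cohort": comments[in_cohort].groupby("year")["sentiment"].mean(),
            "overall": comments.groupby("year")["sentiment"].mean(),
        })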


I was very confused reading the comments until I realised that "Crypto" in this case means Cryptocurrency and not Cryptography


Yeah, that's a lost fight. "Crypto" is ambiguous now and you have to figure out what is meant from context. Personally, I try to avoid using the abbreviated term because it causes confusion.


Funnily enough, it means both; the author's classifier lumps cryptography articles in with cryptocurrency (search for "homomorphic encryption"):

https://github.com/corbt/hn-analysis/blob/main/analysis.ipyn...


I feel like that might skew the "Crypto" results in the positive direction, no? I would think most people here have a generally positive view of strong open-source public-key cryptography, as it's in many ways the foundation of many modern technologies and the current internet, correct?


I don't think any of us can know for sure, since the classification and sentiment analysis were both performed by an LLM, and I'm not seeing validation of the results by the author. You can imagine certain security/cryptography-related terms getting coded naively as negative-sentiment by an LLM ("vulnerability", "breach", "flaw").
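Even a small hand-labeled sample would settle it. A minimal sketch (all names hypothetical, since no validation code is published):

    def error_rate(llm_labels, human_labels):
        # Fraction of sampled comments where the LLM disagrees with a human
        assert len(llm_labels) == len(human_labels)
        return sum(a != b for a, b in zip(llm_labels, human_labels)) / len(llm_labels)

Hand-labeling even ~100 random comments per topic would show whether those security terms are in fact dragging scores negative.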


Were you asleep for the past 5 years?


Interesting how sentiment gets worse over time for every topic.

Could you check the overall HN sentiment over time, to see if we got more pessimistic?


Just want to point out that this post is by a company that works in the “AI” space.

Not saying that it’s wrong or right, but it would be like asking someone in a cult to tell you if the cult is a scam. You might find a few dissenting voices, but most are going to support or it would lack the critical mass required to stay afloat.


I have never been on the crypto bandwagon other than some GPU mining here and there. I still have yet to see a use case other than making financial transactions banned by the government and MAYBE as an alternative to figuring out how wire transfers work.

ChatGPT in the first month had more demonstrated utility than Crypto since it was founded.


I somehow feel crypto is very alive and it's here to stay. It's still a strong hedge against state-owned money.

The people declaring it dead either don't grasp it or got burnt by buying in during the last bull run.

Take a step back, guys: if you trade, trade anticyclically; now is a better time to buy than two years ago - just sayin.


The last graph shows decreasing sentiment over time for all metrics. I wonder if HN has become more negative over time…


Just general technoscepticism over time. Look at Cambridge Analytica and the crypto scam implosion.


Feels like clickbait. I think we're going to see a lot of "Is X the Next Crypto?" articles for a while.


It's not; you should read TFA.


The difference between AI and crypto is that AI is much more accessible.

Everyone can appreciate a photo of an astronaut on a horse.

Few can grasp the concept and significance of a store of value. Let alone a DAO.

It's the same reason why Harry Potter is more popular than Einstein's "On the Electrodynamics of Moving Bodies", in which he came up with special relativity.

When others talk about something we don't understand, we tend to get angry and dismissive. Like boys in elementary school who think girls are stupid. Because they can't figure out why they act the way they do. So on top of the lower popularity, we also get the hate towards crypto.


I'd argue it a bit differently.

The way I see it, we are taught from a young age to fear finance and money. We're told that we need to hand it to someone else to store safely (banks). We're told that we need someone else to invest it for us for our future (401k). We're told that our government is responsible for protecting us (military and taxes).

As a result, we constantly hand off the responsibility to others and we don't make the effort to learn and understand it ourselves. We get angry and dismissive when we don't want to talk about something we are fearful of.

Of course, the people who take the different approach, of diving in to understand it, are the ones who benefit the most from it. Those are the wealthy people on Wall Street. The bankers. The CEOs. The people who keep pushing the fear.

AI isn't about money or finance, it is about knowledge.


> When others talk about something we don't understand, we tend to get angry and dismissive.

When Neha Narula said (roughly) that crypto is speedrunning the entire history of traditional finance and repeating the same mistakes, she was pointing out the arrogance and (possibly willful) ignorance of many crypto promoters. Add to that the justifiable anger at proof-of-work schemes during a developing climate crisis, and much of the dismissive attitude and hate is well earned.


Not sure if it was intentional, but you just used the "few can understand" defense that I see in so many crypto forums. "You don't like it because you don't get it" is not a very useful defense of a product or concept. Crypto is not the Einstein to AI's Harry Potter; that is comical. This is the "you're all haters" defense.


AI and crypto are very synergistic. As AI makes deepfakes easier and cheaper, an immutable ledger with cryptographic signatures at every step becomes a more and more valuable tool for attestations of identity and proof of history.


The bulk of AI "projects," yes. As a discipline, no. Just like with the crypto nonsense, a bunch of junk was created and died. The same will happen with AI. What's left after the hype cycle/cash pump will be the things of real value.


> 2016: Whoa, deep learning actually works?

The deep learning wave began before 2010, when this analysis starts. When I was looking for a job in 2009, there was already a big deep learning hype wave, and my new employer sent me around to look for industrial funding.


Very interesting find, even though not their target:

"Interestingly, there is in fact a noticeable downward slope in average sentiment over time for those topics as well"

I would speculate that total sentiment on HN is trending down. It's the disillusionment with tech.


I would look at YC startup sentiment on HN.

Regardless, HN has always seemed to me to have a more pessimistic view, so it is interesting to see the converse. Also, it would be interesting to bucket by timezone.


Not even close; has crypto ever been nearly this useful? I will give bitcoin the transaction use case, but beyond transactions, for everything crypto/smart contracts do there always seems to be a way to do it without crypto.


How could a sane engineer think that a ChatGPT wrapper could be the next unicorn?


How much of the downward trend has to do with self-driving car overhype?

Lumping everything 'AI' together doesn't make much sense to me.


They both have "winters".

Each also has "true believers" who can ignore the facts before them and incessantly hype an imaginary future.


It's pretty clearly not. You can go to some gen-AI site and make some content pretty easily. What am I going to use Shiba coins for?


Crypto was a solution in search of use cases, whereas AI is use cases finally finding solutions, at least for some. No comparison.


What was the cost of the Mistral fine-tune? What was the cost of the Mistral inference on the full comment dataset?


Was wondering the same. The whole article is missing this one data point, which would be a superb selling point for their software.

Come on, OP! Spill the beans!


It's an open-source model. The cost is the electricity consumption to run it.
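Back-of-envelope, with every number below assumed for illustration rather than measured:

    gpu_power_kw = 0.35   # assumed draw of one GPU under load (~350 W)
    hours = 24            # assumed wall-clock time over the full comment set
    usd_per_kwh = 0.15    # assumed electricity price
    print(f"~${gpu_power_kw * hours * usd_per_kwh:.2f}")  # ~$1.26

Plus whatever the hardware (or a cloud GPU rental) costs, of course.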


AI can actually solve problems and generate revenue (profit TBD). Very hard to even get customers with crypto.


There was a time when ML brought us things like https://kingjamesprogramming.tumblr.com/ and we had a good laugh.

Then came ChatGPT. Tens of billions of dollars poured into the next hype, often by the same hustlers who hyped crypto (though for sure they were not the ones who lost the billions or stood in front of a court). The same environmental damage from electricity poured into wasteful, pointless computation. The same hype without any actual usefulness. In both cases this sentiment is met with a huff, but it's factual.

Then came the AI-written mushroom-hunting guides.

Then came AI Modi singing trending songs, garnering votes for him. Our worst fears: democracy ending in favor of the candidate with the most processing power, aka the most money.

In this pandemic (for covid hasn't ended yet), 300,000 people have already died because of hand-written propaganda (https://www.npr.org/sections/health-shots/2022/05/13/1098071...); when the next pandemic comes, the AI-produced, oh-so-plausible bullshit will flood everything, and millions will die.

Aza Raskin compared it to a zero-day vulnerability for the operating system of humanity, and he is oh so right.

Who is laughing now?


Thanks for releasing a new HN dataset. The HN dataset on BigQuery hasn't been updated in a while.


AI at least worked on protein folding. Bitcoin wastes more power in a year than the Netherlands uses.


To be fair, gen AI is incredibly energy-consuming.


Wow, solid evidence that HN is turning into a bunch of cranky grouches! I suspected as much, but never thought the sentiment trend would be this clear.

I propose we fork the site and send out an invite to HNGoodVibesOnly for anyone whose post history is above the median of the sentiment distribution.


I'll agree when someone uses Crypto in the analysis of the question.


did HN become generally more negative over the same time?


I am convinced blockchain would've produced a techno-utopia akin to the birth of the internet. Unfortunately, it had the exact opposite dynamic of typical companies, in that founders instantly became liquid millionaires without ever building anything. That is the crypto trap.

Virtually nothing else has this dynamic, even VC funded startups during ZIRP.


Genuine question: have macro trends ever, in the history of humanity, been consistently and correctly analyzed by fine-grained time series data?

I was always a bit sketched out by the macro economics classes I took in college.


Time series analysis is epistemologically bunk anyway. You're telling me that after more than 100 years of study, ARIMA or its variants are still the best models? Hogwash. Next you'll tell me that elevators take us up and down, rather than people simply rearranging the floors while you're in the elevator…

Related and with less parody, most technical analysis is no better than chicken bone divination.


Is it just me, or does the article not show how posts were classified as AI vs crypto? I see that they "use an LLM" but don't see a code snippet describing how, or an attempt at assessing error rates, which seems like a basic step in validating this. Maybe I missed it?

edit: Looks like this is the notebook used in the article: https://github.com/corbt/hn-analysis/blob/main/analysis.ipyn...

The classification seems pretty fraught; one cell shows a sample of articles classified as crypto, which appears to include a bunch of cryptography and other unrelated articles.
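For what it's worth, a disambiguating prompt along these lines might avoid that (hypothetical; not the notebook's actual prompt):

    # Not the author's actual prompt; just one way to separate the two senses.
    CLASSIFY_PROMPT = """Label this Hacker News story's topic.
    "crypto" means cryptocurrencies, blockchains, NFTs, or tokens ONLY.
    Cryptography/encryption/security stories (e.g. homomorphic encryption)
    are "other", not "crypto".

    Title: {title}
    Answer with exactly one of: ai, crypto, other."""

    def build_prompt(title: str) -> str:
        return CLASSIFY_PROMPT.format(title=title)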


Are AI articles the next crypto articles?


Actual title: Is AI the Next Crypto? Insights from 2M HN comments

The original HN title was the actual title; then it was changed.


AI has novel functionality and utility, while crypto does not.


no

ai is demonstrably useful; crypto is not


to me this is an obviously dumb and reactionary comparison


HN doesn't know its own dick from its elbow when it comes to trends. Lmao.


The insights and comments below might be from an AI. Purchased with crypto.


It's crazy how we refer to what's being marketed now as AI. AI is a very valid technology that has been worked on for years, and even though it is expanding, it is not the same as the LLMs and other things being marketed and evangelized, under the hood, by old crypto and NFT sellers and scammers.

The bar for AI should be something that can learn and comprehend inputs without simply scraping Reddit, Twitter, and other social platforms on the Internet and then parsing responses. AI as a term needs an updated definition before things become less hype and more meaningful. What is being marketed incorrectly as AI these days is similar to how crypto overpromised and underdelivered, while also funneling money out of everyone's pockets, creating a frenzy of bad investment, and creating unrealistic fear and frankly stupid problems in society, all based on low-brow overconfidence in tech.



