You can, but you shouldn't. Especially when the other people doing it are not, like, difficult to catch.
Laws that are enforced very inconsistently aren't exactly the sort of thing that you want in a society with rule of law. They are exactly the sort of thing you want in a society with rule by law, though...
I'd love for a precedent to be established, but I have zero confidence that it will actually be used for anything that doesn't come from China.
Given its very spotty track record, I have very little confidence that the US is actually a country where nobody is above the law. I'd be very pleasantly surprised if this actually goes the other way, but I would not bet on it. The entire discourse so far about TikTok has been about how it's bad because it's foreign media, not about a principled approach to why it and other things like it are bad.
What makes you think it'll turn around, and extend to other social media?
All new cars in Europe have been required by law to support DAB (digital radio) for the last ~5 years.
Pretty much all cars also support bluetooth, USB sticks, and some still have Aux in. Some support various internet radio/music (spotify etc). Most cars support Android Auto/Carplay, wired and wireless, giving you access to anything your phone supports.
In theory any old car can support DAB+, but car accessory manufacturers like to ask a ridiculous amount of money for car radios, so I doubt this change will happen faster than the car replacement rate.
DAB+ has been a complete failure so far. Its reception issues are even worse than FM's, and of the few people I know who have even heard of it, nobody cares. The beneficiaries of the DAB+ transition aren't the people listening to the radio, but the radio stations fighting for frequency space.
If you think DAB is a failure, then the "we" I meant is just unevenly distributed.
(norway and switzerland are both mountainous countries, which meant the same FM station had to maintain several different transmitters on numerous frequencies — if you're in a country flat enough to serve with vanilla sugar and hagelslag, might that have something to do with our divergent experiences?)
The average age of a Norwegian car is still 11 years (https://www.ssb.no/en/statbank/table/05528/tableViewLayout1/), not the five since DAB+ has been mandated. I don't know if Norway has a large car radio upgrading scene, but support is still far from guaranteed. Broadcasters may have switched away, but with the modern omnipresence of services like Spotify, I wouldn't be surprised if the cars not supporting DAB simply don't use their radio anymore. Based on the numbers I can find, 30% of Norwegian cars still can't receive DAB broadcasts and at the time of the switchover only a third of the cars on the road could even receive DAB transmissions. Percentages improve if you also count home radios (that's where the 97% number comes from) but it's a lot easier to install a new radio at home than it is to upgrade your car.
Furthermore, DAB transmits on an even higher frequency than FM, so mountainous areas will need more transmitters than with plain FM, not fewer. Sure, the combined digital streams DAB provides are used to reduce the number of transmitter installations, but that could've happened with FM too.
DAB is far from a failure. It'll eventually replace FM by mandate, because there's an incentive for governments to let more radio stations pay for broadcasting licenses. However, it's also far from a success at the moment. Access to streaming services such as Spotify, or the internet broadcasts of the radio stations themselves, has probably eased the transition as well.
I haven't listened to FM, AM, DAB, or anything broadcast since I've been able to connect my phone to my car (20 years?).
Why would I listen to what the station programmers decide (and possibly riddled with ads) when I can configure my phone to play whatever I want when I want?
I’ve never used radio in my current car. Yes, I connect my phone to my car and, if I don’t have cell reception, plenty of music in my library. That’s probably pretty normal.
True, although "back in the day" people used to memorize at what times during the day certain routes were busy, and they took alternative routes ("the back roads" in my area) to get around traffic that could be predicted.
It also has information on store closing times/dates (some stores are closed on random days of the week, or close early on others), unexpected detours (construction, previously announced road work), speed traps (crowdsourced), and more.
Some of it simply wasn't possible before the technology came along.
> True, although "back in the day" people used to memorize at what times during the day certain routes were busy, and they took alternative routes ("the back roads" in my area) to get around traffic that could be predicted.
I think "memorize" has the wrong connotation of rote memorization, like you were memorizing data from a table. I think it was more like being observant and learning from that.
> We've outsourced that to an app, too.
The technology lets you turn off your brain so it can atrophy.
In a big enough city that information is too dynamic to memorize. Car crashes, road work, sports events, presidential visits all caused their own microclimate that was not part of the everyday rush hour.
Slightly OT, but one thing I noticed further into the demo is how you were prompting.
Rather than saying “embed my projects in my portfolio site” you told it to “add an iframe with the src being the project url next to each project”. Similarly, instead of “make the projects look nice”, you told it to “use css transforms to …”
If I were a new developer starting today, it feels like I would hit a ceiling very quickly with tools like this. Basically it looks like a tool that can code for you if you are capable of writing the code yourself (given enough time). But questionably capable of writing code for you if you don’t know how to properly feed it leading information suggesting how to solve various problems/goals.
> Basically it looks like a tool that can code for you if you are capable of writing the code yourself (given enough time).
Yes, exactly. I use it the way I used to outsource tasks to junior developers. I describe what I need done and then I do code review.
I know roughly where I want to go and how to get there, like having a sink full of dirty dishes and visualizing an empty sink with all the dishes cleaned and put away, and I just instruct it to do the tedious bits.
But I try and watch how other people use it, and have a few other different styles that I employ sometimes as well.
> I use it the way I used to outsource tasks to junior developers.
Is this not concerning to you, in a broader sense? These interactions were incredibly formative for junior devs (they were for me years ago) - it's how we grew new senior devs. If we automate away the opportunity to train new senior devs, what happens to the future?
This is cool, but I wish it were integrated into tools already used for coding and writing rather than having it be a separate app.
This also demonstrates the type of things Google could do with Gemini integrated into Google Docs if they step up their game a bit.
Honestly I’m scratching my head on OpenAI’s desire to double down on building out their consumer B2C use cases rather than truly focussing on being the infrastructure/API provider for other services to plug into. If I had to make a prediction, I think OpenAI will end up being either an infrastructure provider OR a SaaS, but not both, in the long-term (5-10 yrs from now).
If they focus on just being an API provider, they will be in a market with (long term) razor-thin margins and high competition - most likely unable to build a deep moat. But if you can shape customers' habits to always type "chatgpt.com" into the browser whenever they want to use AI, then that's a very powerful moat. Those customers will also most likely be on a subscription basis, meaning much more flexibility in pricing and more rent for OpenAI (people using it less than what OpenAI calculates for subscription costs).
From Wikipedia, for those who don't know the term: “a concept in economics that describes a process in which new innovations replace and make obsolete older innovations.”
Ironically, I had to google it, and agree with the comment.
You should read The Innovator's Dilemma as well, as it goes into detail on this concept, basically explaining why and how technological disruption occurs from the point of view of the disruptor and disruptee.
> the type of things Google could do with Gemini integrated into Google Docs
Google already does have this in Google Docs (and all their products)? You can ask it questions about the current doc, select a paragraph and ask click on "rewrite", things like that. Has helped me get over writer's block at least a couple of times. Similarly for making slides etc. (It requires the paid subscription if you want to use it from a personal account.)
That's there too; see https://support.google.com/docs/answer/14206696 — you can click on the "Ask Gemini ⟡" and carry on a conversation, e.g. "summarize emails about <topic>" and use those to paste into the doc. (I haven't found all that much use for referencing other files though. But the "proper chat" is useful for saying things like "no actually I meant something more like: …" and carrying on.)
I wouldn't be surprised to see Apple add something like this to Pages and some of their other apps. Their approach to AI, from what we've seen so far, has been about integrating it into existing apps and experiences, rather than making a separate AI app. I have to imagine this is the way forward, and these standalone apps are basically tech demos for what is possible, rather than the end state for how it should be consumed by the masses.
I agree with you on where OpenAI will/should sit in 5-10 years. However, I don't think them building the occasional tool like this is unwarranted, as it helps them show the direction companies could/should head with integration into other tools. Before Microsoft made hardware full time, they would occasionally produce something (or partner with brands) to show a new feature Windows supports as a way to tell the OEMs out there, "this is what we want you to do and the direction we'd like the PC to head." The UMPC[0] was one attempt at this which didn't take off. Intel also did something like this with the NUC[1]. I view what OpenAI is doing as a similar concept, but applied to software.
Every app with a significant installed user base is adding AI features.
OP is lamenting that Cursor and OpenAI chose to create new apps instead of integrating with (someone else’s) existing apps. But this is a result of a need to be always fully unblocked.
Also, owning the app opens up greater financial potential down the line…
How many people use Pages these days? I don't think Apple even mentions the product at WWDC anymore. My guess is that most people either use the Microsoft suite as required by their employer or use cloud-based knowledge base/notes tools like Notion/Quip/Obsidian/Confluence etc. I doubt Apple thinks it worthwhile to invest in these products.
People who need to make the occasional document outside of work, who don’t need to invest in paying for Office, use iWork. I count myself in that list. I use Office at work (99% of that usage is Excel), but at home I use the iWork apps. Mostly Numbers, but Pages as well. I hear many of my friends and family doing the same, because it’s what they have, it’s good enough, and it’s free.
Few people outside of tech circles know what those other apps you mentioned are. I use Confluence at work, because it’s what my company uses. I also tried using it at home, but not for the same stuff I’d use Pages for. I use Obsidian at work to stay organized, but again, it doesn’t replace what I’d use Pages for, it’s more of a Notes competitor in my book. A lot of people don’t want their documents locked away in a Notion DB, and it’s not something I’d think to use if I’m looking to print something.
I went back and looked at the last WWDC video. Apple did mention the apps briefly, to say they have integrated Image Playgrounds, their AI image generation, into Pages, Keynote, and Numbers. With each major upgrade, the iWork apps usually get something. Office productivity isn’t exactly the center of innovation these days. The apps already do the things that 80% of users need.
75% of OpenAI's revenue is coming from their consumer business - the better question is the long term viability of their public API.
But if they believe they're going to reach AGI, it makes no sense to pigeonhole themselves to the interface of ChatGPT. Seems like a pretty sensible decision to maintain both.
75%? Thats astonishing to me. Where are you able to see those details?
It wouldn't surprise me if not a lot of enterprises are going through OpenAI's enterprise agreements - most already have a relationship with Microsoft in one capacity or another, so going through Azure just seems like the lowest-friction way to get access. If the many millions we spend on tokens through Azure to OpenAI are any indication of what other orgs are doing, I would expect consumers' $20/month to be a drop in the bucket.
This very good analysis estimates 73%, which includes team and enterprise. Given that enterprise access is limited and expensive, it seems Plus and Teams are mostly carrying this.
The whole financial breakdown is fascinating and I’m surprised to not see it circulating more.
Your source is a blog post by a polemic author whose own source is second-hand from the NYT, an organization that is in a lawsuit with OpenAI. I would rather have heard it from the horse's mouth. What financial information about OpenAI does the NYT have that I don't? Do they have privileged access to private org financials?
In my estimation, you're not qualified for this conversation.
It may be pretty minimal, but I can personally vouch for 20ish techies in my own social orbit whose businesses won't authorise or won't pay for OpenAI yet and who are paying out of their own pockets; I share an office with four of them.
Maybe the consumer side will slide as businesses pick up the tab?
Same here. I feel like Google's products have become such a labyrinth of features, settings, integrations, separate (but not really) products, that navigating them requires an expert. Sadly, I don't see a way back - each new additional feature or product is just bolted on top and adds more complexity. Given the corporate structure of Google, there's zero chance of an org-wide restructuring of the labyrinth.
Google isn't a startup, they aren't desperate to impress anyone. I don't even think they consider "AI" to be a product, which is probably correct. These AI enabled features are background processes that ideally integrate into products over time in ways that don't require you to explicitly know they're even there.
Given how widely used Google Docs is, for serious work, disrupting people's workflows is not a good thing. Google has no problem being second, they aren't going to die in the next three months just because people on Twitter say so.
The most amazing thing with NotebookLM is that it can turn your docs into a very high quality podcast of two people discussing the content of your docs.
It is a cool concept, but anyone who listens to enough podcasts knows that hosts have personalities and interests, and productions usually have their styles, focus and quality. These features make podcast channels unique and make you want to come back. That's why you may want to listen to podcast A instead of B even though they discuss the same topics. I doubt the Google thing will ever give us that -- likely just one hour of generic rambling that gets boring.
Finding signal in noise is not an easy job given the clip at which things are moving along. Whatever content creators need to do to deliver quality distilled content - I'm here for it.
This feature is cool as fuck, but I noticed that the podcasts it generates lose quite a lot of detail from the original article. Even longreads turn into 13-minute chunks.
I've only used the "Deep Dive" generator a few times, and I'm already sensing the audio equivalent of "youtube face" in the style — not saying that's inherently bad, but this is definitely early days for this kind of tool, so consider Deep Dive as it is today to be a GPT-2 demo of things to come.
Do you have a reference for the "Juggling dog" thing? I've heard it with "singing dog", but I never managed to find any "official" reference or explanation of the thing.
He meant singing dog, likely conflated due to his linguistic interest.
"Juggling dog" has only been expressed a single time previously in our corpus of humanity:
During the Middle Ages, however, church and state sometimes frowned more sternly on the juggler. "The duties of the king," said the edicts of the Sixth Council of Paris during the Middle Ages, "are to prevent theft, to punish adultery, and to refuse to maintain jongleurs."(4) What did these jugglers do to provoke the ire of churchmen? It is difficult to say with certainty, since the jongleurs were often jacks-of-all-trades. At times they were auxiliary performers who worked with troubadour poets in Europe, especially the south of France and Spain. The troubadours would write poetry, and the jongleurs would perform their verses to music. But troubadours often performed their own poetry, and jongleurs chanted street ballads they had picked up in their wanderings. Consequently, the terms "troubadour" and "jongleur" are often used interchangeably by their contemporaries.
These jongleurs might sing amorous songs or pantomime licentious actions. But they might be also jugglers, bear trainers, acrobats, sleight-of-hand artists or outright mountebanks. Historian Joseph Anglade remarks that in the high Middle Ages:
"We see the singer and strolling musician, who comes to the cabaret to perform; the mountebank-juggler, with his tricks of sleight-of-hand, who well represents the class of jongleurs for whom his name had become synonymous; and finally the acrobat, often accompanied by female dancers of easy morals, exhibiting to the gaping public the gaggle of animals he has dressed up — birds, monkeys, bears, savant dogs and counting cats — in a word, all the types found in fairs and circuses who come under the general name of jongleur.”(5)
-- http://www.arthurchandler.com/symbolism-of-juggling
I suspect what I heard was a deliberate modification of this sexist quote from Samuel Johnson, which I only found by this thread piquing my curiosity: "Sir, a woman's preaching is like a dog's walking on his hind legs. It is not done well; but you are surprised to find it done at all." - https://www.goodreads.com/quotes/252983-sir-a-woman-s-preach...
Trying to find where I got my version from, takes me back to my own comments on Hacker News from 8 months ago, and I couldn't remember where I got it from then either:
> "your dog is juggling, filing taxes, and baking a cake, and rather than be impressed it can do any of those things, you're complaining it drops some balls, misses some figures, and the cake recipe leaves a lot to be desired". - https://news.ycombinator.com/item?id=39170057
"Dogs were not aware of their shared interest in juggling until the invention of the internet, where like-minded canines would eventually congregate unto enclaves of specialty."
He is adapting one of Samuel Johnson's most famous quotations, about the astonishing sight of seeing a woman preaching - like a dog walking, it may not be done well, but it's astonishing to see it done at all.
ChatGPT itself is them copying their own API users, this is just them building out more features already built by users. My guess is they know they don't have a long term edge in models alone, so they are going to rely on expanding ChatGPT for better margins and to keep getting training data from users. They obviously want to control the platform, not integrate with other platforms
If I'm reading this right, it's been in VSCode as Copilot Chat for a fair bit now. I use it often; when they added context (provide extra files to reference, or even the entire @workspace if it's small enough), it was an absolute gamechanger.
Their API is unusable due to rate limits. My wife and I have both had ideas, started using it, and found other approaches after hitting rate limits. I tried adding more money to the account to increase the rate limits and it did not work. I imagine they see poor growth there because of this.
It's pretty trivial to get increased limits, I've used the API for a few consulting projects and got to tier 4 in a month. At that point you can burn near $200 a day and 2 million tokens per minute.
You only need 45 days to get tier 5 and if you have that many customers after 45 days you should just apply to YC lol.
Maybe you checked over a year ago, which was the Wild West at the time; they didn't even have the tier limits.
You need to use it for some time to get into their higher tiers of usage. I used to also have this problem and it annoyed me greatly, but once I got to usage tier 4 it never happened again (except for o1-preview but that just wastes tokens IMO).
LLM as a service is much easier to replicate than physical data centers and there's a much lower potential user base than consumers, so I'd imagine they're swimming upstream into B2C land in order to justify the valuation
Aren't we talking about, say, GitHub Copilot? That's integrated into Visual Studio/VSCode. I just started using it again as they've done some small upgrades, and the results can often be phenomenal. Like, I will visualize an entire block of code in my mind, and I'll type the first couple of characters and the entire block will just appear. I'm literally that predictable.
Copilot is only using GPT3.5 for most of the results though, seemingly. I'd be more excited if they would update the API they're using.
> Honestly I’m scratching my head on OpenAI’s desire to double down on building out their consumer B2C use cases rather than truly focussing on being the infrastructure/API provider for other services to plug into
I think it's because LLMs (and to some extent other modalities) tend to be "winner takes all." OpenAI doesn't have a long-term moat; their data and architecture are not wildly better than xAI, Google, MS, Meta, etc.
If they don't secure their position as #1 Chatbot I think they will eventually become #2, then #3, etc.
At the moment this feels like a 10x speed run of the browser wars: lots of competitors very quickly churning over who is "best" according to some metric, stuff getting baked into operating systems, freely licensed models.
How do you make money off a web browser, to justify the development costs? And what does that look like in an LLM?
LLMs are a more flexible platform than browsers. They can be prompted, finetuned or run locally. Even if a company wants to make their base model spit ads, it won't fly.
Depends how subtle they are about it, and what the rest of the ecosystem looks like.
Perhaps the ad/ad-blocker analogy would be: You can have the free genuinely open source LLM trained only on Wikipedia and out-of-copyright materials, or you can have one trained on current NYT articles and Elsevier publications that also subtly pushes you towards specific brand names or political parties that paid to sponsor the model.
Also consider SEO: every business wants to do that, nobody wants to use a search engine where the SEO teams won. We're already seeing people try to do SEO-type things to LLMs.
If (when) the advertisers "win" and some model is spitting out "Buy Acme TNT, for all your roadrunner-hunting needs! Special discount for coyotes!" on every other line, then I'd agree with you, it won't fly, people will switch. But it doesn't need to start quite so bold, the first steps on this path are already being attempted by marketers attempting to induce LLMs crawling their content to say more good things about their own stuff. I hope they fail, but I expect them to keep trying until they succeed.
Google and Facebook grew organically for a number of years before really opening the tap on ad intrusions into the UX. Once they did, a tsunami of money crashed over both, quarterly.
The LLM companies will have this moment too.
(But your post makes me want to put a negative prompt for Elsevier publications into my Custom Instructions, just in case)
There is huge choice in open models. People won't adopt one with ads baked in, unlike Google and Facebook, because now there are more options. There are 100K LLM finetunes on HuggingFace.
I've got some of them on my experimentation laptop. They're only good enough to be interesting, not good in comparison to the private models, and the number of fine-tunes doesn't help with that. In particular I've had Microsoft's Phi 3.5 for less than a week and yet I've already had at least 4 cases of it spouting wild nonsense unrelated to the prompt — and I don't even mean that it was simply wrong, I mean the response started off with Chinese and then acted like it was the early GPT-3 "Ada" model doing autocomplete.
One of my machines also has a copy of Firefox on it. Not used that in ages, either. But Firefox is closer in quality to Chrome, than any of the locally-runnable LLMs I've tried are to the private/hosted LLMs like 4o.
I suspect they are building their B2C products because it gives them better data to train on. It's a lot harder to control the quality of data when you have no idea how API inputs were produced, what the UI is like, or who the users are. You don't know the provenance of the data, or the context. Or even if multiple unrelated client products are being commingled through the same key.
If you control the UI, you have none of those problems.
> demonstrates the type of things Google could do with Gemini integrated into Google Docs
Or Microsoft!
> think OpenAI will end up being either an infrastructure provider OR a SaaS, but not both
Microsoft cut off OpenAI's ability to execute on the former by making Azure their exclusive cloud partner. Being an infrastructure provider with zero metal is doable, but it leaves obvious room for a competitor to optimise.
To be honest, I think they're having less success than it appears with their B2B offerings. A lot of cloud providers like AWS have their own services they sell through those channels, and I think a lot of businesses are finding those solutions to be cheaper and "good enough".
> but I wish it were integrated into tools already used for coding
Unless I'm missing something about Canvas, gh CoPilot Chat (which is basically ChatGPT?) integrates inline into IntelliJ. Start a chat from line numbers and it provides a diff before applying or refining.
Yea, I'm wondering the same. Is there any good resource to look up whether copilot follows the ChatGPT updates? I would be renewing my subscription, but it does not feel like it has improved similarly to how the new models have...
I check the GitHub blog[0] from time to time. They also have an RSS feed if you'd prefer that. There is also a waitlist for o1 access you may sign up for[1].
According to this (1), they are using the 4o model. And it looks like you'll be able to pick your model (2) starting with version 1.94, released this September.
I think this is already built into Microsoft's Office365 "CoPilot" (which I assume is a ChatGPT frontend). You can ask the AI to make changes to your Office documents.
But my subscription at $20/mo is a fraction of my API usage at $5/day (about $100/mo).
You can sell a lot more GPT services through a higher bandwidth channel — and OpenAI doesn’t give me a way to reach the same bandwidth through their user interface.
I only use Gemini in Colab perhaps 5% of the times I use Colab, yet it is nice to have.
I use Gemini, OpenAI, Claude, smaller models in Grok, and run small models locally using Ollama. I am getting to the point where I am thinking I would be better off choosing one (or two.)
You could have said the same for crypto/blockchain 3-4 years ago (or whenever it was at peak hype).
Eventually we realized what is and isn't possible or practical to use blockchain for. It didn't really live up to all the original hype years ago, but it's still a good technology to have around.
It's possible LLMs could follow a similar pattern, but who knows.
It created a speculative asset that some people are passionate about.
However, if you saw the homepage of HN during peak blockchain hype, being a speculative asset / digital currency was seen almost as a side effect of the underlying technology; it turns out that's pretty much all it ended up being useful for.
> Before things like this become widespread, can we get some etiquette (and legal awareness)?
I was watching a CNBC interview last week with the founder of https://mercor.com/ (backed by Benchmark, $250m valuation).
The founder was pitching that their company would take every employee's employment and payroll history (even from prior roles) and use that to make AI recommendations to employers on things like compensation, employee retention, employee performance, etc.
The majority of what the founder was describing would clearly be illegal if any human did it by hand. But somehow because a LLM is doing it, it becomes legal.
Specific example: In most states it's illegal for a company to ask job candidates what their salary was in prior roles. But suddenly it's no longer illegal if a big company like ADP feeds all the data into a LLM and query against the LLM instead of the raw dataset.
Copyright issues weren't enough to regulate LLMs. But I suspect once we start seeing LLMs used in HR, performance reviews, pay raise decisions, hiring decisions, etc., people will start to care.
> But somehow because a LLM is doing it, it becomes legal.
IANAL, but I believe it does not. As it was famously said way back in the day, a computer can never be held accountable, therefore a computer must never make a management decision.
There is always a human in the loop who makes the actual decision (even if that's just a formality), and if this decision is based on a flawed computer's recommendation, the flaws still apply. I think it was repeatedly proven that "company's computer says so" is not a legal defense.
Aren't credit score checks literally this? Credit scores are assigned to you based on feeding your personal data into some proprietary model. That score determines whether you get a loan, or a job, or a security clearance. There are policies that use hard cut-offs. How is that not exactly this?
It is exactly that (basically), and there are numerous ethical arguments around credit scores. But the credit score was in a somewhat unique position, because what predated it was obvious racism, sexism, and other types of discrimination.
For both companies and consumers, it was a step up. Now, I'm not sure if that's the case.
Today there's still many legal and moral qualms about using credit score for job applicants. It's illegal in many areas and highly scrutinized if a company does this.
Well, I haven't really kept track of this, but I believe some states (at least California and Illinois) prohibit the use of credit scores for employment decisions, and I think there was some legislation banning this that was approved by the House but hasn't passed the Senate, or something like that...
So, yeah, you're right that it's an issue - and chances are we'll see a wider bans on this (governments are extremely slow).
> take every employee's employment and payroll history (even from prior roles)
Selling and purchasing employment history is thankfully banned in a growing number of states. Their business prospects in the US will eventually shrink to zero.
I’m not a ML expert but I’m not sure a _language_ model makes sense here.
A model like that… It’s basically going to be a ZIP code to salary coefficient mapping on steroids (with more parameters). The model by itself is probably (IANAL) legal if it can no longer produce any data points for individuals, but whether using it for hiring purposes is legal certainly depends on the inputs: e.g. feed it a protected category (or data that strongly correlates with one, e.g. name -> gender) and it most likely won’t fare well in court.
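To make the proxy worry concrete: even a model that never sees a protected attribute directly can reproduce its effect through a correlated feature. A toy sketch, with every number, name, and mapping invented purely for illustration:

```python
# Hypothetical lookup tables standing in for learned coefficients.
ZIP_BASE = {"94103": 120_000, "63101": 70_000}   # ZIP code -> base salary
NAME_FLAG = {"Alice": 1, "Bob": 0}               # name-derived flag correlated with gender

def predicted_salary(zip_code, first_name):
    """Toy 'salary model': base by ZIP, minus a learned penalty on the proxy feature."""
    base = ZIP_BASE[zip_code]
    # The model never sees gender; the name-derived flag smuggles it in anyway.
    return base - 8_000 * NAME_FLAG[first_name]

print(predicted_salary("94103", "Alice"))  # 112000
print(predicted_salary("94103", "Bob"))    # 120000 -- same ZIP, different outcome
```

Same ZIP, different prediction, and the disparity came entirely from the proxy, which is exactly the pattern courts and regulators look for.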
You can read the code, but it’s complex and convoluted.
He referenced Backbone as a comparison point. If you compare Backbone’s source code to React’s source code, you’ll get what he means.
Basically, libraries like Backbone are small and simple enough that you can literally read the source code and fully understand how it works. Compare that to React, where the source code is an order of magnitude larger and the inner workings are very difficult to fully grasp.
The simplicity and ability to easily understand the source code obviously come with tradeoffs (e.g. with Backbone, to build a reasonably complex app you basically have to build your own framework on top of it, compared to React, which has more abstractions and is therefore more plug and play).
So it doesn't matter whether this practice is unique to TikTok. "What about all my competitors doing the same thing" isn't a viable defense in court.