Wasn't that Scott McNealy? (Though, if I recall your bio from previous HN posts, you'd know far better than me.)


Right, Schmidt actually said: "If you have something that you don't want anyone to know, maybe you shouldn't be doing it in the first place."

https://www.eff.org/deeplinks/2009/12/google-ceo-eric-schmid...



Just great, now I'm getting so old I'm mixing up my Sun Executives :-).

You are correct. I also believe Scott said something to the effect that IT was dead, although we all know how good a prediction that was.


There was a version of Streptococcus mutans developed back in 2000 that didn't produce tons of lactic acid and would have pretty much ended tooth decay. Iirc, it was built to outcompete the regular bacteria too. As far as I can tell there's been no progress in commercializing this; I assume because of the cost and complexity of FDA approval.

E.g., https://pubmed.ncbi.nlm.nih.gov/12369203/


Looks like that research led the lead author to found a company[0] and develop a probiotic tablet[1]. Seems like it could be worth trying.

[0]: https://www.dentistryiq.com/dentistry/oral-systemic-health/a...

[1]: https://probiorahealth.com/


[1]: https://probiorahealth.com/ resolves to Google, yet it shows in Google results, so I don't know if the HN hug of death has forced this change.

Last scan by the wayback machine.

https://web.archive.org/web/20230628125533/https://probiorah...


No way?! Great find, dude. I wonder if anyone can comment about it specifically, or if it might be worthwhile to start a thread soliciting users' experiences...


I've never tried it but I see cause for skepticism: the claim is that these beneficial bacteria will out-compete the harmful ones. But if that's true, why would it take 30 days for them to get established? One shot of Listerine to kill what's there now, then one batch of the good critters to get them started, and, if the claims are true, you should be set for life, right? So something doesn't add up.


Doesn't your body have a reservoir of your microbiome throughout your body (like in the appendix or whatever)? Is it that far-out that some of the mouth stuff makes it there as well? Keep in mind, these bacteria have probably adapted and co-evolved over our development as a species; they must be fairly hardy and well-positioned.

Not qualified to argue with your very logical position here, but I feel like it might be a longer-term transition, and there might still be hold-outs if it's only a one-and-done deal like you've described.

Mea culpa tho, I definitely want to believe


> I definitely want to believe

Me too. Nothing would make me happier than for someone to show me why I'm wrong here.


It totally makes sense as a real-life conspiracy theory too, although I'm not super familiar with the ADA's exploits. Obviously, dentists have an enormous amount to lose if something like this ever escaped the laboratory, so to speak.


That's true, but if it really were the case that you could stop cavities and gum disease by popping a pill that was already on the market I don't see any way they could stop it. I also think that there are a few ADA members who actually care about people's dental health, and if they thought that there was a conspiracy to suppress such a thing, they would have said so.

Like I said, nothing would make me happier than to be proven wrong about this. But right now my money is on the things-that-sound-too-good-to-be-true-usually-are theory.


It could be the case that this specific strain of good bacteria does not last long enough in the mouth for some reason. For example, it could mutate, or it might not be able to attach itself to the tooth surface for long enough.

On their website they claim that within 30 days the good bacteria will outcompete the bad ones. I don't think you can completely stop taking the tablets after those 30 days and keep its benefits. Those bacteria might die down over the course of a few months for various reasons.


What do you think "outcompete" means in the context of evolutionary biology? Something is going to set up shop in your mouth; it's just too attractive an environment to be left fallow. Whatever that ends up being without intervention has by definition outcompeted all the other contenders. So if the good bacteria don't persist, then by definition they have not "outcompeted" the competition.


Guess the main point is the environment itself changes depending on what you do and what you eat. So you need to constantly resupply the initial good bacteria for them to keep holding on


> you need to constantly resupply the initial good bacteria for them to keep holding on

Then unless your mouth ends up bacteria-free, the good bacteria are by definition not out-competing the bad ones.


Is there any reason you can't just propagate and cultivate them in a separate breeding receptacle for an unlimited supply, like people do with SCOBY or whatever for making sourdough and kombucha?


Zinc starves bacteria via a variety of means, and a high zinc intake will see high levels of zinc in the saliva and in the teeth, helping to keep bacterial levels down. But RDAs are highly conservative amounts for young healthy people, not old or ill people, which makes some RDAs woefully inadequate.

Very few products kill 100% of bacteria; even deionised water will still have fewer than 25 colony-forming bacteria per litre in it, although by virtue of being deionised it has less in it to help bacteria get established. Acidifying water will make it harder for pseudomonas to get established. But when scientists say they are searching for life on Mars or an asteroid, they are referring to bacteria, mainly bacillus, aka rod-shaped bacteria, which can survive radiation 100,000 times stronger than humans can and extreme cold like space; so global warming and melting ice at the poles present new viral and bacterial risks.

Diet can also reduce the body's own immune response. For example, calcium disodium ethylene diamine tetra-acetate, aka calcium disodium EDTA, found in a variety of products from makeup to foods like mayonnaise, chelates zinc, reducing zinc's ability to activate GPR39.

Zinc's inability to activate GPR39, whether from deficiency or from chemicals like the one mentioned above, creates a myriad of problems in human health, including reduced saliva production. [1]

There isn't anything wrong with using bacteria to outcompete other bacteria, but adaptations occur.

Phages are viruses that kill bacteria, something the Russians developed decades ago while the West went with antibiotics [2]. I would also consider Georgia as a medical destination for some conditions, as they are superior to Western options. Some of their doctors do scoff at the Western doctors!

You can find millions if not billions of phages in just 1 ml of seawater [3]. The problem with phages is that they take time to develop, so you could be dead before the bacterial strain is identified and a phage is developed; antibiotics are the fastest immediate response. The gold standard is antibiotics until the phages have been developed, which are then used as part of a treatment program, but you'll only get this from very expensive private healthcare.

Of course, drinking seawater when one goes surfing is a bit of pot luck, or a lucky dip, with regard to consuming phages. It makes me wonder if Surfers Against Sewage know about phages. [4]

So there are lots of different ways to tackle health problems, but medical experts can't always use them due to cost or simply lack of knowledge.

[1] https://www.mdpi.com/1422-0067/22/8/3872

[2] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6203130/

[3] https://www.bbc.com/future/article/20210115-the-viruses-that....

[4] https://www.sas.org.uk/


I've always had this idea for something you could put on your teeth before you start eating, like those trays people use for whitening but obviously more robust. I wonder why nobody has ever pursued this angle: stop the food from ever touching your teeth in the first place. I'm sure someone will be able to shoot this down in 3 seconds, but I fail to see why one couldn't simply create a flexible barrier to prevent all these problems. Sort of like a dental dam for eating.


I’ve had this same thought for years haha. Something like a thin dental guard


There are a few of these kinds of products, I wonder how they compare

https://www.lifeextension.com/vitamins-supplements/item02120...


I wonder if there is anything more to the reason that it has never gained any real traction? A lot of groups would stand to lose a lot of money if something like this were used by even half the population. This would mean a massive reduction in purchases of many dental products, visits to dentists, procedures needed by dentists, etc.


What would you call this lactic-acid-promoting cartel? Big Toothpaste? Big Plaque? Big Teeth?


You're trying to make a joke here, but please do some research about the dental lobbying groups. The ADA is no joke.

What OP is implying is not really tinfoil hat material. As one example, one of the reasons parroted (by Democrats, actually) that we won't ever have universal healthcare is that it's going to cause thousands of health insurance jobs to disappear.


Then why doesn't a country with a nationalized health care system do it? There are incentives in other countries that would encourage this, if it were possible.


My nationalized healthcare comment is unrelated to the existence of the dental industry in its current form. It was just an example to point out that protecting jobs is something our elected leaders are worried about when lobbyists are paying up (they're not really that worried about jobs disappearing due to automation or mergers or monopolies, etc.).

Regardless of the type of healthcare (nationalized, private, etc.), the dental health industry is still getting paid; the only difference is who is doing it. The original statement was that a breakthrough in preventive medicine would destroy a large portion of the bread and butter of the dental health industry, which would lead to it being only a fraction of what it is today. Industries fight tooth and nail to keep growing. Guess what they do when their existence is threatened.


> Regardless of the type of healthcare (nationalized, private, etc.), the dental health industry is still getting paid; the only difference is who is doing it.

No, one difference is who is doing it; another is the sums being paid. That's why the dental industry in countries with national health care fights to stay out of the general health system. The lucrative compensation of the practitioners leads people to want to go specifically for that. If it were part of national health care it would cease to be a sure path to accumulating loads of money; the incentive to become a dentist, as well as the power of current practitioners, would be similar to that of family doctors or orthopedic specialists. Spoiler alert: in countries with national health care their status is higher than the status of teachers in national school systems, but not by much.


I'm not saying there isn't such a country, but the countries with national health care that I know about don't or don't fully cover dental health (presumably because one can live in perfect health with a rotting mouth and not due to a strong dental healthcare lobby /s).


How oh how did our ancestors survive without the dental lobby!? How did the children with their rotten-ass teeth consume all their goodies and sodie pops with such poorly functioning teeth!? It's a complete mystery, one I'm sure is lost to the sands of the 1950s.


They died at a young age, miserably?


Most of our ancestors just ate much less refined sugar, so probably had less need for dental care in the first place.


> ...there was never yet philosopher that could endure the toothache patiently (shakespeare)

Why defer to a baseless "probably" when you can easily check what is known about the past? Wikipedia doesn't put the blame on refined sugar but rather points to farming as what correlated with dental issues. Are you familiar with the barber's pole? That's the sign for the location where our ancestors went to be relieved of their teeth.

https://en.wikipedia.org/wiki/Barber%27s_pole


I appreciate your response. I almost didn't even make the original comment because I knew I would get 'conspiracy nut' responses, as I did, lol. I promise, I'm the furthest thing from a conspiracy nut.


You think that the governments of the world with much poorer people wouldn't jump on something like this? Improving dental health dramatically improves outcomes all across the board.

If something works, someone, somewhere in the world would start using it. Hell, people are willing to use stuff that is flat out harmful simply because some people on the internet said so.

The big issue with bio things is that the human organism has a lot of variation, and a lot of cures sorta work for some people some of the time. Consequently, the bar for FDA clearance has to be set quite high.

(Two good recent examples: a woman died from oxalate overload from drinking green smoothies, and vitamin C and antioxidants can spur cancer growth. Does that mean that everybody should stop drinking green smoothies and taking vitamin C? Obviously no. But it shows that humans vary and that things aren't always straightforward.)


What's more likely: a vast cabal conspiring against a competing product, or the competing product just doesn't work as well as claimed? There aren't nearly as many grand conspiracies out there as there should be.


> it was built to outcompete the regular bacteria too

What if it invades the gut?


Very different conditions


My naïve thought: assuming it was effective, could outcompete acid-spewing species, and you had dosed a handful of the population, why would it not have been able to spread through the population?

Inoculated person X kisses two people, they go on to kiss two people, etc. Probably too simple a model, but I assume that kissing spreads all manner of microorganisms. How much do you need for the bacteria to take hold?


In the US, I believe that would not be legal under GINA (the Genetic Information Nondiscrimination Act of 2008): https://www.genome.gov/about-genomics/policy-issues/Genetic-...

(Edit: per tssva—LTD & life are generally state laws, GINA is health ins and employment)


GINA covers health insurance and employment decisions. It doesn't cover life or disability insurance.


Looking, you’re right. Those are regulated at the state level. But it does look like pretty much every state has a law.


> Looking, you’re right. Those are regulated at the state level. But it does look like pretty much every state has a law.

The first state I checked to validate that, my own, doesn't. The state-level laws, much like GINA, only cover employment and health insurance. There is a newly enacted law preventing consumer genetic testing companies from disclosing results without consumer consent, but nothing stopping an insurance company requiring consent for access or requiring their own genetic testing before issuing life or disability insurance. Based upon this I'm not comforted by the assertion that "pretty much every state has a law".


> preventing consumer genetic testing companies from disclosing results without consumer consent

Even if you didn’t consent, genetic genealogy can still be used to triangulate your genome from relatives of yours who do consent. This is still a manual process for now, but it’s very likely that a CODIS-like system to automate DNA triangulation for purposes of fingerprint search will be implemented soon. Only a small step from there to insurance companies being able to deny you coverage based on a “sub-clinical family history” of something.


My previous comment was more about not trusting the comment about state level protection than having an issue with that lack of protection. Life and disability insurance companies already deny coverage based upon "sub-clinical family history" of conditions. They do so based upon gathered family medical histories. They will also deny you coverage based upon a required medical examination. What is the issue with adding genetic screening to the list of tools?


> Life and disability insurance companies already deny coverage based upon "sub-clinical family history" of conditions. They do so based upon gathered family medical histories.

The term "sub-clinical" means "something that has not yet caused you any problems bad enough that you mention them to a doctor, and therefore never makes it into your medical history; and which also would not yet be revealed by a medical examination."

To be clear, a "sub-clinical family history", then, isn't information about your sub-clinical conditions attained from medical data about your family's clinical interactions (that would be a regular family history!); rather, it's information about your clinical or sub-clinical conditions, deduced through triangulation of your (potentially quite distant!) relatives' sub-clinical conditions, which were in turn discovered through genetic screening of those distant relatives, that they themselves did consent to, as some presumed-boilerplate when submitting their DNA to ancestry websites and the like.

There is currently no way for insurance companies to be aware of your "sub-clinical family history" besides just asking you. With automated triangulated genetic screening, they would have a way to get around asking you.


Which makes the process of dismantling or avoiding the regulation actually slightly easier because all you need is one state to defect in order to cause a precedent which then allows for a regulatory cascade.

It’s almost deterministic at this point, and you see how they did it for clawing back reproductive rights.

And this issue is obscure enough for a small enough current population, that you would not be able to actually build a robust counter protest in any kind of sensible way.

So really, all it would take is a handful of millionaires who care about this problem to throw, let's call it $10 million, at lobbying in order to make it go their direction.


Just another reason to work hard and succeed.


Liberalism is currently not succeeding.


(downvote all you want but look at the supreme court, Dobbs, and the inability of the democratic party to hold on to power at all--at some point the excuses need to end and the party needs to be judged on the basis of where we've actually wound up)


Sorry Lamont, I forgot the /s

And fully agree on the DP.


Yeah I was about 50/50 on if you were being sarcastic or not.

I caught several downvotes and you can't downvote replies, so I was talking to the audience on the second comment.


Life insurance companies already factor risk from family history without using a genetic database

https://havenlife.com/blog/family-medical-history-life-insur...

They also take into account smoking. Playing devil's advocate: if a car insurance company can charge you a higher premium because you are male, why shouldn't a life insurance company use your genetic code?


The US would have to care to enforce it first.


Interesting how close this statement is to "the US does not enforce care about health."

Or rather, "the US does not have health care."


The change you describe from N+1 to N+2 wasn’t made to defeat your hack. It was made because we got lots of complaints that people thought the filters were buggy/broken when they saw people who didn’t match their filters, and because they were seeing irrelevant people, lowering the chance of a match. The set of people who already liked you were being served out of a different service than regular recommendations and it was, iirc, just a Redis list until fixed. (More generally, we never purposely made the recommendation algo worse to increase boosts or because people would only stay if they didn’t meet someone during my era, even if everyone thought that’s what we did. I haven’t been involved in several years however.) In any case, sorry!


>More generally, we never purposely made the recommendation algo worse to increase boosts or because people would only stay if they didn’t meet someone during my era, even if everyone thought that’s what we did.

Would anyone admit this publicly? That's a surefire way to destroy your career or get sued.

It also begs the question: which performance indicators did you optimize for, if not engagement and retention?


The YIMBYs are asking that individual property owners be allowed to decide what to do with their own properties. Not letting government dictate what people do doesn't seem like "tyranny" or "expropriation" to me.


It changed its name from the Scholastic Aptitude Test in 1990. (Over time they’ve also eliminated some of the sections most correlated with IQ.)

There’s a good history of the SAT and where it succeeded and failed in its goal of making college admissions more fair: The Big Test by Nicholas Lemann.


What were the sections removed that were more correlated with IQ?


Is the creator thinking of doing another printing? If not can someone recommend a service that will print a high quality card deck given the PDF?


I print with some professional card printing companies; the pricing goes something like 10 decks for $500, 50 decks for $800, 100 decks for $1,200, and 300 decks for $1,500, and then it continues to drop.

Since I printed only 50, I think the price is too high, so I would rather give them away for free than charge an unreasonable price. If you are willing to pay the shipping cost, send me an email at b0000@fastmail.com; I still have a few left.


You can get the 50 deck rate for even a single deck (and there are often coupon codes) at https://www.printerstudio.com/unique-ideas/blank-playing-car... or https://www.artscow.com/photo-gifts/playingcards

Also, if you're trying to give this away, they both allow you to share a link to your design so other people can buy the cards direct.

----

edit: the sites may seem cheesy, but they're probably responsible for 95% of prototype card decks that professional designers print.

For other excellent non-Chinese, Buy America options, there are https://www.printplaygames.com/product-category/prototypes/c... , https://www.thegamecrafter.com/make/pricing#Cards and https://www.drivethrucards.com/joincards.php


For those kinds of costs, I'd print 300 for $1500. That's just $5 apiece. Sell them for $10, and you can afford to give away half of them.

I think this project would make a great Kickstarter. I don't think it would be hard to get 300 people interested in backing this. Shipping is probably going to be the biggest issue; find people on other continents to help you distribute it there. That can save a lot of money.


Maybe you could run all the decks as a kit on kickstarter?

I would love to buy all of them as a set, and I believe a lot of others would as well.


I am halfway done with the C deck, as we are switching to C soon, and I will set up a Kickstarter after; it should be done around December.

I want her to know why x[3] and 3[x] are the same thing.

    int x[3];
    2[x] = 5;   /* x[i] is defined as *(x + i), and addition commutes,
                   so 2[x] == *(2 + x) == *(x + 2) == x[2] */

    printf("%d %d\n", 2[x], x[2]);
A lot of people struggle with

    x = 5
    y = 6

    y = x

    x = 7
    print(y)
and

    x = [1,2]
    y = [3,4]

    y = x

    x.append(5)
    print(y)
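
For what it's worth, a minimal sketch of the answers, assuming standard Python semantics (assignment rebinds a name to an object, while append() mutates the one object both names share):

    x = [1, 2]
    y = x           # y is bound to the same list object as x
    print(y is x)   # True: two names, one object

    x.append(5)     # mutates that shared object
    print(y)        # [1, 2, 5]

    x = 7           # rebinds x to a new int; y still refers to the list
    print(y)        # [1, 2, 5]
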
There is something magical in understanding how the computer uses its memory; it's almost as if you walk out of a mist.

I think it will be very valuable to have a set of 4 decks: Python, machine code, Unix pipes and C, so that the decks complement each other. In the machine code deck there are a few cards that have pointers (e.g. https://punkx.org/4917/play.html#43), and they can be used to help with the C deck for example.

Then it's LISP.


Sounds awesome! I’ll pitch buying a bunch of these for work as well. Extremely good idea!


A cheap laser engraver could work well here.


Would you need 2-layered paper, with a different color underneath? Or would engraving the words directly onto card stock be legible?


Salganik MJ, Dodds PS, Watts DJ, "Experimental Study of Inequality and Unpredictability in an Artificial Cultural Market", Science (2006)

https://www.science.org/doi/10.1126/science.1121066


Thank you for finding this, I've been asked for the source before and could never dig it up!


I am not a lawyer, but I've had to argue about copyright with several.

In the United States, there are two bits of case law that are widely cited and relevant: Kelly v. Arriba Soft Corp. (9th Cir.) found that making thumbnails of images for use in a search engine was sufficiently "transformative" that it was OK. Another case, Perfect 10 (9th Cir.), found that thumbnails for image search and cached pages were also transformative.

OTOH, cases like Infinity Broad. Corp. v. Kirkwood found that retransmission of radio broadcasts over telephone lines is not transformative.

If I understand correctly, there are four parts to the US courts' test for transformativeness within fair use: (1) character of use, (2) creative nature of the work, (3) amount or substantiality of copying, and (4) market harm.

I'd think that training a neural network on artwork--including copyrighted stock photos--is almost certainly transformative. However, as you show, a neural network might be overtrained on a specific image and reproduce it too perfectly--that image probably wouldn't fall under fair use.

There are also questions of whether they violated the CFAA or some agreement by crawling the images (but hiQ v. LinkedIn makes it seem like it's very possible to do legally), and whether they reproduced Getty's logo in a way that violates trademarks (are they trying to use it in trade in a way where there could be confusion, though?)


Search engines don't create market harm for a work because they don't compete with it. In fact, they do the opposite: they advertise the work, making it more accessible and increasing exposure.

These AI tools on the other hand seem to do the exact opposite. They can (or could, if they got good enough) absolutely compete with a work, and therefore seem like they create substantial market harm. The character of use also seems vastly different; AI tools are creating images explicitly to be consumed, vs a search engine is basically just an index, and only shows the image in so far as it needs to make it discoverable.

So three of the four tests for fair use seem clearly against AI image generation, at least to me. The only test that possibly goes in favor of AI is the amount or substantiality of copying, but AIs can easily reproduce images, or if not entire images, other substantial subsets of a composition.

I just don't get how these could possibly be fair use.


As I see it, 3 of the 4 tests are strongly in OpenAI's favor; the 'market effect' is mixed.

(1) The use is highly transformative;

(2) the images used were offered to the anonymous browsing public (with watermarks);

(3) the end effect of training will only retain a tiny spectral distilled essence of any individual photo, or even a giant source corpus;

(4) there's a potential risk of market competition from the ultimate model output, for some uses – but that's also the most 'transformative' aspect.

Getty et al could potentially just ask creators of such models not to include their images – perhaps by blocking their crawling 'User-Agent' – and it might not make any real difference in the models.


I'm still not seeing the "transformative" argument: the point of transformation isn't "it is in a different format" but (to quote Wikipedia, which is, of course, dumb... I'm sorry ;P) where one "builds on a copyrighted work in a different manner or for a different purpose from the original". The reason a search engine thumbnail is transformative isn't because it has been transformed to make it smaller... it is because the purpose of the resulting use of the image is somewhat unrelated to the use the original author was going for when they made the original image. At issue here is then that, rather than using an original image from Getty Images, someone decided to take all of the images from Getty Images and churn them through some algorithm that generated an image that directly competed with the original images from Getty Images. So like, sure: if you really only narrowly want to talk about OpenAI, what they are themselves doing (training and distributing a model) might potentially be legal, but the people using the result would seem to be in serious hot water... oh, and actually, I think they run it all as a service, don't they? So no: I don't even think that defense works, as OpenAI is in some sense not even selling a model; they are merely directly competing with Getty Images to sell photos to people.


Autogenerated, often fantastical, never-seen-before AI images strike me as a paradigmatically 'transformative' use. It's novel. It's shocking to many practitioners how flexible & high-quality the images can be. It will unlock all sorts of new downstream creation.

The representation that feeds the generation is statistical, even to the point of being plausibly factual: these things/people/places/concepts can be abstractly represented as the balanced weights inside the model. And under US law, facts aren't copyrightable.

I could see a case being factored as: (1) the scraping/training/ephemeralization itself involves the usual copying of downloading/locally-processing images, like indexing, but all those 'copying' steps are fair-use protected, as science/transformative/de minimis/whatever; (2) any subsequent new-image generation no longer involves any 'copying', only new creation from distilled patterns of the entire training corpus, in which Getty retains no 'trace tincture' of copyright-control. So there are no specific acts of illegal copying to penalize.

Also, a human artist would be allowed to review related Getty/etc preview images, free on the web, to familiarize themself with a person or setting, before drawing it themself, with their own flair – as long as they don't copy it substantially. Why wouldn't an AI artist?


"AI artist" doesn't add any of its "own flair". It builds exclusively on past experience and work of humans. And it also directly completes with them without any thought of credit or compensation.

People are really underplaying how damaging this is going to be for the industry. It's going to completely decimate it. You can already see people using the names of artists in the DALL-E prompt to get "their" work for a few dollars, avoiding any copyright or social issues.

Artists will suddenly be competing with AI on price and time: why should we pay you a living wage when we can instantly generate something close enough?

Why would anyone try to create some new aesthetic or push anything further if their effort will be replicated next week when the model gets updated with new source data? Everything is gonna get stuck at the aesthetic of 2025 and before.

It's completely inhuman.


The synergistic effect of all the AI's inputs absolutely results in a unique new 'flair', with extensions, reversals, and mash-ups of styles just as in human-made artistic styles.

And AI "builds exclusively on past experience and work of humans" just like any young new human artist equally does. In many cases, you can even tell the different models' outputs apart, not by raw quality or glitches, but by hard-to-describe aesthetic tendencies.

I share your concern on the effect on human artists – both the market for their work, and even their morale, when learning, knowing that decades of practice will still be outproduced by seconds of computation.

But I don't think the genie will be put back in the bottle, by either expansive interpretation of existing copyright law, or even new laws.


Indeed the genie is out. And while we will get some interesting AI uses, ultimately this is degenerative tech. In the end we end up with less authentic, less unpredictable and less delightful art. Instead we get the perfectly-suited-to-us, predictable, mediocre stuff.

I said it in a comment above: yes, people build on the work of others, but they also bring lots of their own originality and intellect. Part of what people do is truly uniquely theirs, and piece by piece we progress as a whole.

The crucial detail is that AI learns only from visual patterns from the past and can't think at all, whereas humans learn from everything around them and think about it deeply.


I don’t believe we will lose the capability to create new original styles. If a prompter can describe the creation of a new style, the AI can create it. Using both iterations of image & text prompts, unique styles will come.

The thinking is still done by the human prompter.


The value of the image is in the human prompter (in the overall concept), but the overall style, the aesthetic, is stuck in the past. It's almost impossible to describe an aesthetic in text without referencing examples of that aesthetic. It's a case of one image saying more than a thousand words. It has to be seen.

I am not sure finding new aesthetics is even the playing field nowadays. It's probably not, because we've been stuck for decades. It's more about cyclic trends of things forgotten. So who cares. But this will just solidify that even more. But yeah, it has already happened, and since the tech will be firmly in private hands, everybody will just be exploited and pushed around by it instead of it helping anyone.


One could make the same case about humans; nobody works in a vacuum. Even though he used it in a pejorative sense, Sir Isaac Newton, the famous English scientist, once said, "If I have seen further, it is by standing on the shoulders of giants."

It could still be argued that humans developing their own style is just an intermixing of previous work that they've seen, combined in a different way, which is effectively exactly what these generative systems do.


Of course humans build on the work of other people, and what they do is partially a mashup. But their work is not only a replication of visual patterns. It's their thinking, their other non-visual experiences, their politics and world views, combined in their work. Often it's their life project.

To think that artists only mash up what was before them is quite obviously wrong.

But that's exactly the only thing this tech does.


I'd argue that reproducing an artefact such as a watermark is copying even more substantially than any human would; that human would at best be labelled as unoriginal or doing very derivative work, or be in violation of copyright.


Perhaps I’m misunderstanding your argument, but my counterexample would be: if a human digital artist transformed a Getty image, resulting in a fantastical, never-before-seen result, using software like Photoshop, that use would be no more defensible. If anything, the vast scale at which this occurs with AI makes it worse.


I think your hypothetical would depend on the character & extent of the transformation. Mere filters that leave the original recognizable? Probably an infringement. But creative application of transformations to express new ideas? Maybe not – especially if the derivative is a comment/parody on the original, that actually increases interest in it. Most art is a conversation with the past, reusing recognizable motifs & often even exact elements.

For example:

Andy Warhol died in 1987, 35 years ago. One of his 'Prince' collages dating to the early 80s used another photographer's photo, without permission. In 2019, one federal judge ruled that was not infringement. An appeals judge then said it was.

The Supreme Court has decided to take the case.

The US Copyright Office & Department of Justice agree with the photographer in briefs filed with the court... but the mere fact the Supreme Court took the case indicates they think there might be issues with the appeals court ruling. They might agree with the original judge!

Oral arguments come this October. See:

https://www.reuters.com/legal/litigation/us-backs-photograph...

So, when all the (possible) disputes over AI-training-on-copyrighted-images resolve – maybe in the 2030s or 2040s? – what will the laws say, & courts decide? It'll depend a lot on other specifics, & reasoning, that may not be evident now.


Thanks, that is a thorough and interesting reply.

I find legal disputes in fine art interesting, however—IANAL, of course—I understand that fine artists (Richard Prince comes to mind) are subject to very different copyright restrictions than graphic artists under commercial use.

It’s, as you said, up to courts to decide. But AI generated imagery is frequently commercial in nature (KFC, already). AI services are trained on unlicensed commercial stock images, and are able to reproduce enormous quantities of derivative images, and do so at a profit. I think that’s categorically different from a fine artist appropriating imagery in a single artwork or even series of artworks in an entirely different context.


These AI generated images are directly competing with stock images. AI tools are selling images to blogs and other customers that often would purchase stock images instead.

The "character of use" is not in favor of dall-e, it is a commercial use.

Copyright law does not require Getty to block user agents or ask them not to include their images.

Another issue here is that removing copyright management info like a watermark is a violation of the DMCA, separate from fair use or copyright infringement. These cases have statutory damages and attorneys fees awarded.


Whether something is directly competing for the same business would have to be evidenced, and copyright doesn't mean protection from all possible competition - it's just one factor weighed. And fair use protects many commercial uses, too, depending on proportion/character-of-original/etc.

But also, none of these images are direct, or even necessarily substantial, "copies" of other images. The generator learned from other images – the same as any human artist might.

No watermark has been removed; the bigger issue may be that the spectral watermark violates a trademark. (But, I doubt consumers are likely to be confused.)


"The generator learned from other images – the same as any human artist might."

A lot of people seem to make this comparison, but I don't think it's fair. It's wrong. A computer is capable of ingesting/processing and "learning" from images at a rate no human can possibly come close to matching. To elaborate, it is not actually learning in the way we normally think of it, as its "brain" is completely different from a human's brain. It is doing something entirely different that should have its own word. Human artists learn from other human artists' work. An AI does something else.

It's also worth noting that the art the AI was trained on was posted online when the technology didn't exist (or if it did in some form it was not in the state it is in now). So an artist having posted their art online for public consumption can't be equated with somehow consenting to its consumption by a web scraper / AI.


It's great that human artists learn from, & introduce into their work, influences other than just patterns seen in other works.

But it's also great that AI artists can learn from more examples in a few minutes than a human artist might see in lifetime.

To say that's "not actually learning in the way we normally think of it" is superficially true, but it doesn't mean it's "not actually learning", or necessarily any worse than typical learning. It's so new, & we barely understand fully how it works or what its limits are. It might be better in many relevant & valuable aspects!


Fair, I don't know what it's actually doing. I just know you can't equate it with anything a human does, and the use of the word "learn" is misleading, or vastly oversimplifies what is happening, to the point that it allows for false analogies.

That said, my main objection to this technology is that:

- The AI's work is based on human artists' work

- Companies are then profiting off of the AI's work

- The companies are indirectly?/directly? profiting off of artists' work

- The companies do not get artists consent or compensate them in any way

- The companies are essentially stealing from artists

Companies should be forced to obtain the creator's consent when using art to train their models.


It’s going to be interesting what the stock companies will do. Maybe they will make their own image generator. Perhaps we will see a case based on the new factor that is AI. An AI is not an artist; they can’t be conflated. A decent artist can churn out maybe 5-10 works if he is productive. AI can churn them out by the hundreds or thousands if needed. The process also isn’t the same.

Anyway it will be interesting to watch this space.


AI-generated images can't be copyrighted.


Given the iterative contribution of a artistically-talented human prompter, I'm not sure that precedent – set by the Copyright Office in the US, rather than a clear statute or court decision – will hold up. A court might decide differently, or a statutory update could overrule the copyright office, especially in cases where an individual output is the mix of human & AI effort.


I have a hard time agreeing with 3, given https://ibb.co/DzGR063


Aside from whether the image is copyrighted, the Getty watermark usage probably has a bunch of issues.


> Search engines don't create market harm for a work because they don't compete with it. In fact, they do the opposite: they advertise the work, making it more accessible and increasing exposure.

AMP, snippets, Knowledge Base and in-app browsers would like to have a word with you


Knowledge Base I grant you, but snippets are a crucial feature to trust a result is correct before clicking through.

AMP is completely unrelated so I'm not sure why you mention it. Website owners have to create a specific version of their own site for AMP to even work.


It seems it is possible to generate images which are very similar to existing stock photos if you feed Getty Images' descriptions into DALL-E.

I tried it with a distinctive banana image:

https://imgur.com/a/0OrIr6e


"very similar" insofar as it's following the narrow prompt, sure.

> Different runs can generate different size, orientation and placement of the bananas, as well as different shades of pink.

At that point it's definitely the curation causing any possible derivation. The image generator is innocently doing what you ask in an unbiased way.


Those bananas are completely different. There's no copyright infringement there. I could take a photo of a banana and photoshop it repeatedly onto a pink background. That would look just as similar, and there's no copyright problem there.

You can't copyright an idea.


Images are different, but it appears that DALL-E is inspired by the aesthetics and the layout of the copyrighted material.

Another example, picking a random image from the Getty Images site. "A young parkour flips through the city,guangzhou,china, - stock photo":

https://imgur.com/a/pPruwzA

The images are obviously different, but it appears that DALL-E maps the Getty Images description to a similar tone, similar perspective, similar background, and similar weather conditions. I'm sure there are thousands of possible backdrops in Guangzhou, and many ways to show a parkour flip. Even in the Google image search results there's more variance than in the output of DALL-E.

So you can't copyright an idea, but you can certainly scrape a copyrighted DB with image metadata, and use it to create your own product. My point is that DALL-E itself might be a derivative work of Getty Images and thousands of other online catalogs.


Interesting. Adding "stock photo" to the string generated that getty tag? That is probably the most attackable (alas easy to fix) part of the issue. It will be an interesting question how close to the original a picture has to be to be considered the same (I'm sure there's some case law) and maybe there's some new research to be done regarding how to recreate the training data images with the correct search string (I suppose one could build an ML model for that).

Fun times ahead


No, I didn't get the tag. But I suppose that Getty metadata as well as the images were used for training.


From what I understand, the actual process of fair use boils down to "the judge decides in his/her gut if the use is fair, and then writes up the analysis to justify coming to that conclusion." If you look at the recent SCOTUS opinion in Google v Oracle, you can see how two judges can look at the same facts and come to almost diametrically opposed fair use analyses. My further understanding is that generally the #1 overriding concern in fair use analysis is money, which means you're more likely to see analysis along Thomas's dissent than Breyer's opinion.

In this case, let me give a fair use analysis that is going to suggest that this isn't fair. Factor 1 weighs against fair use: it's not transformative because, well, transformative is extremely narrowly interpreted against fair use. Factor 2 weighs against fair use because, well, it's factor 2 and it weighs against fair use unless the underlying copyright was paper-thin in the first place. In factor 3, it's weighing against fair use because it's not copying the minimal amount of the original work to get what it needs (it copied the watermark after all!). And factor 4 of course weighs against fair use because you're essentially creating stock images which is naturally in the exact same market that a stock image provider is in.

If you wanted to write a fair use analysis that finds fair use, you'd argue instead that the work was transformative, and the amount copied also weighs in favor of fair use (thus converting factors 1 and 3 to weigh in favor of fair use). You might try to argue that it's a completely different market, but I'm incredibly skeptical that such an argument could win over both a district court and an appeals court (although Breyer's opinion in Google v Oracle did basically follow this thread of analysis, its repetition is unlikely since everyone wants to pretend that Google v Oracle has 0 impact to anything outside of software). Such an analysis is possible, but unlikely, since the unspoken factor of "could you have paid for this" tends to be the factor that wins out over everything else.

Note that we are going to have a SCOTUS case in the fall that will specifically explore transformative uses in the context of fair use: Warhol v Goldsmith (https://www.scotusblog.com/case-files/cases/andy-warhol-foun...). I'm not going to hold my breath that the use will be found fair, though.


Putting aside the core question of the legality of training on licensed material - what about the false advertising/copyright aspect that comes with slapping a "GettyImages" logo on some random nonsense generated by a "neural network"?


It's not worth discussing Getty so much. AI labs will collect a dataset to predict if an image is watermarked. They will crawl to index the Getty images to make sure they are not in the training set. Then retrain, and in 2 months the problem is solved. They can cut out a sizeable part of the training set without problem; the model will still be good.

They can also OCR the output to make sure there are no blacklisted words and use an index to skip all images that look too similar to the training data. Then the argument of copyright defenders is going to be weakened.
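
As a rough illustration of that near-duplicate index (purely a sketch, assuming the third-party Pillow and imagehash packages, not anything a lab actually uses):

    # Hypothetical filter: index perceptual hashes of images to exclude,
    # then reject any generated output that lands too close to one of them.
    from PIL import Image
    import imagehash

    def build_index(paths):
        return [imagehash.phash(Image.open(p)) for p in paths]

    def too_similar(candidate_path, index, max_distance=6):
        h = imagehash.phash(Image.open(candidate_path))
        # Hamming distance between 64-bit perceptual hashes
        return any(h - other <= max_distance for other in index)

A perceptual hash is robust to resizing and small edits, which is why something like it works better here than an exact hash.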

The fact that a prompt and curation are necessary also goes against the "AI works can't be copyrighted" narrative - it's generated by a human-AI team, so human work is part of the process.

The core of the issue I see is that human and AI both learn from the published media but an AI can both "see" and "draw" more than a human, so there is an important distinction there.


I understand that there are (both practical and theoretical) ways to reduce the chances of an AI generating an image that has copyrighted elements in it (such as the "GettyImages" logo).

I'm mostly curious about the legal aspects of having a black-box system that can - under some unknown circumstances - attach openly copyrighted or trademarked elements (such as a company logo) to a piece of work.


> (2) creative nature of the work

Is AI even capable of having a creative nature? All that I see is re-use of source images.


Or if not, their work for 43 of the 100 largest global polluters [0] or illegal corruption in Africa? [1]

[0] https://www.nytimes.com/2021/10/27/business/mckinsey-climate...

[1] https://www.nytimes.com/2018/06/26/world/africa/mckinsey-sou...


I hate it when "studies" assign pollution to the company that pulled the oil from the ground. It's so dishonest, and perhaps even worse, it provides zero valuable insight. If I go joyride in a private jet, I'm the polluter, not Exxon Mobil for pulling the oil from the ground.


I prefer addressing issues at the source. Mental gymnastics are for the guilty.


Right, the source here being the jet where the fuel is actually burned for energy.


Wrong. The jet is a matter of convenience. The fuel (source) being sufficiently inconvenient (expensive, illegal) would force alternative decisions.


Takes two to tango in this case.

The source is still where the fuel came from. You're talking about the sink it's ending up in.


Do you know oil has a lot of uses that are not purely transportation/electricity/heating related and are necessary for modern civilization?


Yeah, like the plastics that kill our environment on another vector.

Stop apologizing for oil companies. Your brain power would be better spent solving how to stop literally pouring gas on the fire.


Why not both?

