Hacker News | manholio's comments

It's an excellent idea if the conversion market can be bootstrapped. In France there are already firms offering the service.

Millions of cars already exist on the roads, and they have 90% of what an electric car needs. There is absolutely no point in sending them to the scrap heap and manufacturing brand-new vehicles just to add electric drivetrains, all for the good of the environment.


There are companies in the US specializing in EV conversion as well, but AFAIK they're all aiming at the high end of the market, where a $2000 incentive won't matter much. I'm thinking about folks like Zelectric converting old Porsche 911s: https://www.zelectricmotors.com

This bill appears to be backed by the Specialty Equipment Market Association, an industry group that represents the automotive aftermarket, so I guess they're attempting to do what you suggest: support more companies that aim to do such conversions.

https://sd25.senate.ca.gov/news/2023-02-02/senator-portantin...


The other problem is that there is no way a car could be converted for $2,000 in parts and labor.

Even saran-wrapping your car in PPF costs $2,000 nowadays; do you really expect an engine conversion for that?


The subsidy doesn't have to cover the full cost; it only has to make the conversion economically feasible. A 2014 Chevy Equinox with an EV conversion might make sense at $16k but not at $18k, for example.


Nobody said the incentive will cover 100% of the conversion cost. TFA quotes a conversion at around $14,000.


The $14,000 quote seems a bit low, but regardless, I predict it will plummet once EV motor production kicks into high gear and sodium-ion batteries hit $40/kWh at 200 Wh/kg in a few years (all of which is on CATL's production roadmap).
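A quick back-of-the-envelope check on what those roadmap figures would mean for a conversion pack. The 40 kWh pack size is my own assumption, picked as typical for a compact-car conversion, not something from the article:

```python
# Rough cost/weight of a conversion battery pack, using the sodium-ion
# roadmap figures quoted above ($40/kWh, 200 Wh/kg). Pack size is assumed.
PRICE_PER_KWH = 40.0       # USD/kWh
DENSITY_WH_PER_KG = 200.0  # Wh/kg

pack_kwh = 40.0            # assumed pack size for a compact car

cell_cost = pack_kwh * PRICE_PER_KWH             # 1600.0 USD
pack_mass = pack_kwh * 1000 / DENSITY_WH_PER_KG  # 200.0 kg

print(f"Cell cost: ${cell_cost:,.0f}, pack mass: {pack_mass:.0f} kg")
```

At those prices the cells become a small fraction of today's $14,000 conversion quote, which is why the labor and integration costs would dominate.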

Yeah, one of the biggest issues in the switchover to EVs isn't winning over new-car sales; it's that there will be two decades of ICE vehicles working their way through the various used-car price tiers.

If you jack up the gas price with a carbon tax, it becomes very regressive on people who can only afford a $1,000 crappy gas guzzler from circa 2010 or earlier, the halcyon days of the US's obsession with SUVs (now it is at least obsessed with crossovers, an improvement).

I'm hoping a drop-in cheapo conversion for $5000 becomes feasible in a few years and the Chinese start producing very cheap EVs that most people will prefer over a gas guzzling used car. I don't think the incumbent automakers in the US are interested in making a new car that can compete with the used car inventory.

Maybe scooters, ebikes, and other offbeat kinds of transportation can fill the market.


There's a point, though, to not allowing companies to sell highly modified cars as though the original safety ratings are still accurate.


The most annoying thing in my experience is not really the raw compilation times, but the lack of an incremental build feature (or only a very rudimentary one). If I'm debugging a function and make a small local change that does not trickle down to some generic type used throughout the project, then 1-second build times should be the norm, or better yet, edit & continue debugging.

It's beyond frustrating that any "i+=1" change requires relinking a 50 MB binary from scratch and rebuilding a good chunk of the Win32 crate for good measure. Until such enterprise features become available, high developer productivity in Rust remains elusive.


To be clear, Rust has an "incremental" compilation feature, and I believe it is enabled by default for debug builds.

I don't think it's enabled by default in release builds (because it might sacrifice perf too much?) and it doesn't make linking incremental.

Making the entire pipeline incremental, including release builds, probably requires some very fundamental changes to how our compilers function. I think Cranelift is making inroads in this direction by caching the results of compiling individual functions, but I know very little about it and might even be describing it incorrectly here in this comment.


As far as I remember, Energize C++ allowed you to do just that (and VC++ does a similar thing), and incremental compilation and linking in VC++ feel quite fast.


> It's beyond frustrating that any "i+=1" change requires relinking a 50 MB binary from scratch

It’s especially hard to solve this with a language like Rust, but I agree!

I’ve long wanted to experiment with a compiler architecture that could do fully incremental compilation, maybe down to function-level granularity. In the linked (debug) executable, use a malloc-style library to manage disk space. When a function changes, recompile it, free the old copy in the binary, allocate space for the new function, and update jump addresses. You’d need to cache a whole lot of the compiler’s context between invocations, but honestly that should be doable with a little database like LMDB.

Or alternately, we could run our compiler in “interactive mode” and leave all the type information and everything else resident in memory between compilation runs. When the compiler notices some functions have changed, it flushes the old function definitions, compiles the new ones, and updates everything, just like when the DOM updates and needs to recompute layout and styles.

A well-optimized incremental compiler should be able to handle an “i += 1” line change faster than my monitor’s refresh rate. It’s crazy that we still design compilers to do a mountain of processing work, generate a huge amount of state, and then, when they’re done, throw all that work out. Next time we run the compiler, we redo all of that work again. And the work is almost identical.

Unfortunately this would be a particularly difficult change to make in the rust compiler. Might want to experiment with a simpler language first to figure out the architecture and the fully incremental linker. It would be a super fun project though!
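As a toy sketch of the caching idea described above (not how rustc or Cranelift actually implement it), a compiler could key compiled artifacts on a hash of each function's source and redo work only for functions whose hash changed:

```python
import hashlib

# Toy model of function-granularity incremental compilation: a cache keyed
# by the hash of each function's source text; only changed functions are
# "recompiled". Real compilers must also track cross-function dependencies.

def compile_fn(name, src):
    # Stand-in for real codegen.
    return f"<machine code for {name}>"

cache = {}  # name -> (source hash, compiled artifact)

def incremental_build(functions):
    recompiled = []
    for name, src in functions.items():
        h = hashlib.sha256(src.encode()).hexdigest()
        if cache.get(name, (None, None))[0] != h:
            cache[name] = (h, compile_fn(name, src))
            recompiled.append(name)
    return recompiled

project = {"main": "fn main() { foo(); }", "foo": "fn foo() { let i = 0; }"}
first = incremental_build(project)          # cold build: ['main', 'foo']

project["foo"] = "fn foo() { let i = 1; }"  # the 'i += 1'-style edit
second = incremental_build(project)         # only 'foo' recompiles: ['foo']
```

The hard part, as noted, is everything this sketch ignores: dependency tracking across functions, generics/inlining, and patching the linked binary in place.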


Here's Energize C++ doing just that in 1993.

https://www.youtube.com/watch?v=yLZwLSzkH3E

VC++ has similar kind of support nowadays.


Most of the time, for most changes, you should just be relying on "cargo check" anyway. You don't need a full rebuild just to check for syntax issues. It runs very fast, finds almost all compile errors, and caches metadata for files that are unchanged.

Are you really running your test suite for every "i+=1" change on other languages?


> Are you really running your test suite for every "i+=1" change on other languages?

You don't have to run your test suite for a small bugfix (that's what CI is for), but you DO need to restart, reset the test case that triggers the code you are interested in, and step through it again. Rinse and repeat 20 or so times, with various data, etc. - at least that's my debug-heavy workflow. If any trivial recompile takes a minute or so, that's frustrating time spent waiting, as opposed to using something like a dynamic language to accomplish the same task.

So you would instinctively avoid Rust for any task that can be accomplished with Python or JS - a real shame, since it's very close to being a universal language.


Why yes, what could possibly go wrong with releasing the main constituent of acid rain in a haphazard attempt to cool down the Earth, a problem itself created by releasing uncontrolled quantities of other pollutants.

Sounds like the setup for a dystopia.


We're long past dystopia, then.

500 grams of SO2 is released by a 737 flying less than 100 miles.


That's something of a straw man. Air pollution is one thing. Deliberate experimentation with hazardous emissions, calculated to be scaled up to a global level with impacts on us all, is a completely different thing.


You have it exactly backwards.

Your semantics game is the "straw man." The atmosphere doesn't care what we call it, only what we dump in it.

https://news.ycombinator.com/item?id=34529944

>calculated to be scaled up on a global level

You say that like polluting industries don't try to scale up to a global level.


Industries don't try to scale up pollution. The pollution is a by-product of trying to scale up profits. And one wrong doesn't make the other a right.


>Industries don't try to scale up pollution. The pollution is a by-product of trying to scale up profits.

There's that word "try" again...

My exact point is that the atmosphere doesn't care about intent. It cares about emissions.

Humans care about intent. That's fine, but apparently the rule here is that you can hurt people (for amoral profit), but don't you dare try to help people!

My argument is that maybe we should rethink this ethical schema.


Not sure I follow you any more. In what way would you like to rethink this ethical schema? What schema?


The wildly disproportionate pollution vs geoengineering double standard that paralyzes us in the face of arguably the greatest existential threat to our species (perhaps second only to nuclear weapons).

>> apparently the rule here is that you can hurt people (for amoral profit), but don't you dare try to help people!


I think that's a big misunderstanding. Climate change is not exactly an immense existential threat to our species. It's threatening the livelihood and sustainability of life in almost all impoverished regions, and will cause the death and suffering of hundreds of millions, if not billions, of people.

The average 1-percenter who participates in this forum will probably see nothing more than a few tourist destinations changing, the need to crank up the AC a bit more, fewer options for skiing in winter, and the price of exotic fruits increasing.

The combination of climate change, loss of wildlife habitat, biogeochemical flows, ocean acidification, etc, could very well be a threat to our existence as a species, yes. Climate change is only one of them, and the injection of SO2 in the atmosphere only solves this one, with unclear and potentially negative consequences on all the other planetary boundaries.


Whataboutism


People really shouldn't comment before doing a bit of googling on the scale of things. I doubt you'd have made this comment if you'd known how much SO2 was being released and how much is normally released over the course of a day.


I indeed knew the scale, it's plastered all over this thread, but it's not relevant to the point being made.

The whole scheme is harebrained regardless of the amount; what depends on the amount is the harm it can cause, anywhere in the range from totally harmless (with perhaps some barely detectable local increases in rain acidity) to Venus-like global catastrophe.


Your stated point was acid rain. And the amounts are too low to cause that, making your point invalid.


My stated point was acid rain in the context of climate engineering; nothing was said about quantity. Small quantities will cause acid rain in undetectable amounts and have the same nil effect on climate, but presumably the end goal is to have a measurable effect on climate.

Since we already have SO2-induced acid rain on a warming planet, and there is no real reason to believe such intentional releases will be confined to the upper layers of the atmosphere, it follows that such climate engineering attempts will have a measurable effect on acid rain.

So the point is perfectly valid even if you won't follow it through to the logical conclusion. Just like these clowns here.


That just means people will buy daily drivers with small batteries and charge them nightly. Fifteen years out of a cheap compact car with near-zero operating costs sounds like a good deal; it still does not make sense to waste a battery cycle feeding 20 kWh into the grid for a dollar.


This is never going to happen at scale. Energy is fungible: one cannot compete on "quality" of energy, just on price. A kWh is a kWh, and the provider with the cheapest energy will always win in the marketplace. So any EV owner looking to make a profit is competing against large-scale industrial storage entities that:

- have scale and mass purchasing power, optimizing their battery purchasing and operational costs;

- have grid-scale, storage-oriented solutions tuned for maximum charge cycles and lifetime storage;

- use stationary batteries with no mass penalties, affording them low-density exotic chemistries (Na-ion) or non-battery storage systems.

Meanwhile, the EV owner has a mobility-optimized battery, tuned for maximum density, whose cycle count is only just comparable to the lifetime of the car. At market equilibrium, any revenue he extracts while serving the grid will reduce the useful lifetime of the battery and therefore depreciate his capital, turning his battery into a "spare parts consumable" - a major profit driver for most automakers, especially a custom-form-factor battery for a five-year-old vehicle that is no longer sold.
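To make the depreciation point concrete, here is a rough sketch of the per-kWh cost of wearing out each kind of battery. All figures are illustrative assumptions of mine, not quoted prices:

```python
# Cost of battery wear per kWh cycled through the grid.
# Each full cycle consumes 1/rated_cycles of the pack's life.
def cycle_cost_per_kwh(pack_cost_usd, capacity_kwh, rated_cycles):
    return pack_cost_usd / (capacity_kwh * rated_cycles)

# Illustrative numbers: a density-optimized EV pack vs. a cheaper,
# cycle-optimized stationary pack of the same capacity.
ev = cycle_cost_per_kwh(12_000, 60, 1_500)
grid = cycle_cost_per_kwh(6_000, 60, 8_000)

print(f"EV pack:   ${ev:.4f} per kWh cycled")    # ~0.1333
print(f"Grid pack: ${grid:.4f} per kWh cycled")  # 0.0125
```

Under these assumptions the EV owner pays roughly ten times more in battery wear per kWh than the grid-storage operator he is competing against, before any arbitrage revenue is counted.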

Never mind that the whole operational cost - changing the meter to a bidirectional one, making sure the vehicle is connected for extended periods of time, etc. - is probably not going to be worth the pennies you will earn.

Grid storage in EVs is a decade-old pipe dream. It will never make sense economically; it has been attempted multiple times and always failed. Just let it die.


My electricity cost is currently split around 1/3 generation (~10¢/kWh) and 2/3 distribution (~20¢/kWh). If the power from this scheme can avoid most of the costly distribution, e.g. I use it directly in my house and neighborhood, then it’s an economic win. This would be true even if the centralized generation were free.


It can't, because the distribution fee is an amortized cost of building and maintaining the distribution infrastructure. Since that infrastructure still needs to exist regardless of where you get your energy (and in fact needs to be upgraded to handle bidirectional consumers/producers), EV storage won't bypass it regardless of where the EV and the consumer are physically located.

If you consume what you store, then you will charge up at low (production prices + distribution fees) for the times when (production prices + distribution fees) are high. The second term is constant so you are arbitraging on production prices.
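The fee-cancellation argument can be written out numerically. The prices below are illustrative assumptions, loosely inspired by the generation/distribution split quoted earlier in the thread:

```python
# Savings from charging off-peak and consuming your own stored energy,
# with a flat distribution fee on every kWh drawn from the grid.
def savings_per_kwh(prod_low, prod_high, dist_fee):
    buy_offpeak = prod_low + dist_fee    # what you actually paid
    avoided_peak = prod_high + dist_fee  # what you would have paid
    # The fee appears on both sides and cancels out:
    return avoided_peak - buy_offpeak    # = prod_high - prod_low

# Illustrative USD/kWh figures (assumed, not real tariffs).
s = savings_per_kwh(prod_low=0.05, prod_high=0.15, dist_fee=0.20)
print(f"Savings: ${s:.2f}/kWh")  # only the production spread remains
```

However large the distribution fee, it drops out of the difference, so the arbitrage margin is capped by the production-price spread alone.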


> Energy is fungible: one cannot compete on "quality" of energy, just on price

It is fungible, but prices can vary according to limitations on supply. If the big company's capacity is maxed out and demand continues to increase, energy already acquired at lower cost and stored in the car can be sold for profit.


But if that limitation recurs often enough to justify investing in infrastructure to use EVs, then some other large-scale investor will close that arbitrage opportunity first.

Basically, the next-day/next-week energy markets, where EV owners could compete, will be saturated by grid-scale battery operators. Renewables will leave large gaps for seasonal energy needs - for example, two weeks of winter with no sun and no wind - but EVs cannot help there. So some on-demand non-renewables will need to spin up to cover that (i.e., the current main providers, after they become too expensive to run due to carbon pricing).


It'll always be of some marginal utility - the main purpose of the EV will always be to be a car. You can use it to store energy purchased at off-peak hours so you can avoid using the grid at those times, something that'll probably raise peak prices, because if you need the energy right then, you really need it.

So, EV owners may use their cars to help reduce their energy costs and supplement their PVs and fixed batteries (if any), but shouldn't expect a car to pay for itself like that.


But you will not use it if the depreciation on your car vastly exceeds what you could earn from the scheme.


I see, renewables will only leave gaps that fit your argument. You don't see any scenario where there could be brownouts during the day, say in the summer when AC usage is high?

And no, using cars for grid-scale storage has not been tried multiple times. The technology has never been available/feasible at a large scale before.


There is an economic case made above that explains why; you might want to try to follow it and respond on point instead of mindlessly nitpicking.

There have been large-scale trials of this idea; you never heard of them because (aside from the fact that you are arguing about a subject you know little about) they failed or are barely limping along.


It's about a law designed by humans to give everyone a chance to make a living and contribute to the common good. You think you have found a loophole in that law that lets you use that work for free and deny authors any compensation.

Bear in mind, AI is not making artists or creative types obsolete - that would be fair game, just like computers made human calculators obsolete. No, this is about abusing other people's work.


Copyright never guaranteed anyone compensation, nor is it a loophole to not pay for copyright just because you saw and learned from something in the public domain.

If using someone else work for learning is infringement, then that's going to cause a lot of difficulty for all artists. Try making a rock song without listening to rock, or paint some modern art without viewing it etc.


Copyright exists to protect human authors and promote creation, which in turn leads to learning - by other human beings. Algorithms are not learning; they are automated tools that consume creative works and output derivative works.

The loophole is to use copyrighted works for free despite no learning taking place - no human being observing and developing their skills based on that work - rather, an algorithm transforming those works into some other useful interpretation of them.


I’m not seeing a salient argument here. Something about an allegation of abuse?

Where is the abuse happening?


The crypto market is trash: a decade-old speculative bubble with zero real-world applications.


It actually has a nice use case as a breach canary. Keep an unprotected wallet stored on your machine with some BTC in it (like $200-500 worth), and if you notice a transfer out of it, you can be sure your machine is compromised. This method was recently posted by a fellow HN member; unfortunately, I forget where.


Ah, yes, $200-500 USD, an amount of money that surely no one will ever miss, and everyone would feel perfectly fine throwing away as a "breach canary".

Never mind that that's, what, about a week's wages for many people?


Then you have bigger worries than a PC breach.


Right, because The Poors don't have lives to live that also care about things like privacy. They just exist in a horrifying life-or-death, dog-eat-dog scramble every day in their slums. (/s)

Seriously, can you be any more out of touch and dismissive of the concerns of people who haven't been making six figures ever since they left college?


Well, here's my point of anecdata: despite Python being a very productive language and very easy to use, despite it existing for more than three decades and being perhaps the most widely learned language among aspiring programmers today, you hardly see any stand-alone application written in it. I can't name a single application I use daily that is written in Python. It seems to be strongly confined to the web server, where environments are well controlled via containers, runtime bugs impact a single page load, and fixes can be applied continuously. People don't ship Python standalone apps.

Based on the buggy and unstable Python desktop apps I have used, I have a strong suspicion that developing large applications in Python is strongly self-limiting after the initial sprint.


That example threw me off too, because both the numbers and the perspective are nonsensical. 90% of the energy draw of those data centers goes into things like inner video-encoding loops, SSD/memcache storage and retrieval, ML algorithms, etc.

But the vast majority of those 27,000 engineers do not work on such low-level routines; they work on things like the millions of lines of crufty Python that power AdSense analytics, which are essential for maintaining a revenue stream. Yes, development speed is very important, but it's mostly orthogonal to other operational costs if the right tools and architectures are employed.


Author here, that might actually make the comparison stronger: not all energy in a data center is wasted by the memory safety approach's drawbacks, meaning it makes less sense to optimize for memory safety overhead.

But if I'm being pro-Rust, I would also say that not all coding is affected by a memory safety approach's downsides; there are some domains where the borrow checker doesn't slow development velocity down at all.

Either way, I definitely agree that it's orthogonal to many operational costs. I'll mention this line of thought in the article. Thanks!


But they are not original works, they are wholly derived works of the training data set. Take that data set away and the algorithm is unable to produce a single original pixel.

The fact that the derivation involves millions of works as opposed to a single one is immaterial for the copyright issue.


If I take a million copyrighted images from magazines, cut them up with scissors, and make a single collage, I would expect the resulting image to be fair use. Fair use is an affirmative defense, like self-defense, where you justify your infringement.

People are treating this like it's a binary technical decision: either it is or isn't a violation. In reality, things are spectrums and judges judge. SD will likely be treated like a remix that sampled copyrighted work, but just a tiny bit of each work, and sufficiently transformed it to create a new work.


If I take a million copyrighted images from magazines, cut them with scissors, and make a single collage, I would expect the resulting image to be fair use.

That’s not how it works. Your collage would be fine if it was the only one, since you used magazines you bought. Where you’d get into trouble is if you started printing copies of your collage and distributing them. In that case, you’d be producing derived works and be on the hook for licenses from the original authors.


That’s not how fair use works. It’s not a binary switch where commercial derivatives automatically require licensing. Such a collage would be ruled transformative and non-competitive.

Me having bought the magazines also has nothing to do with anything. It would apply equally if they were gifted, free, or stolen.


That is not true. The dataset is needed, the same way that examples are used by a person learning to draw. But the dataset alone is not capable of producing images not derived from any part of it (and there are many examples of SD results that seem so far to be wholly original), so you can’t reduce stable diffusion to being only derived from the dataset. It may “remember” and generate parts of images in the dataset - but that is a bug, not a feature. With enough prompt tweaking, it may even generate a fairly good copy of pre-existing work - which was what the prompt requested, so responsibility should lie on the prompt writer, not on SD.

But the fact that it often generates new content, that didn’t exist before, or at least doesn’t breach the limits of fair use, goes against the argument made in the lawsuit.


The model can generate original images, yes, and those images might be fair use. But it can also generate near-verbatim copies of the source works or substantial parts thereof, so the model itself is not fair use; it's a wholly derivative work.

For example, if I publish a music remix tool with a massive database of existing music, creators might use it to create collages that are original and fall under fair use. But the tool itself is not, and it requires permission from the rights owners.


The training data set is indeed mandatory, but that doesn't make the resulting model a derivative in itself. In fact, the training is specifically designed to remove derivatives.


Go to stablediffusionweb.com and enter "a person like biden" into the box. You will see a picture exactly like President Biden. That picture will have been derived from the trained images of Joe Biden. That cannot be in dispute.


You've made some errors in reasoning.

First, there is a legal definition of a "derivative work" and there is an artistic notion of a "derivative work". If the two of us both draw a picture of the Statue of Liberty, artistically we have both derived our drawings from the original statue. However, neither drawing is legally considered a derivative work, either of the original sculpture or of the other drawing.

Let's think about a cartoonish caricature of Joe Biden. What "makes up" Joe Biden?

https://www.youtube.com/watch?v=QRu0lUxxVF4

To what extent are these "constituent parts" present in every image of Joe Biden? All of them? Is the latent space not something that is instead hidden in all images of Joe Biden? Can an image of Joe Biden be made by anyone that is not derived from these "high order" characteristics of what is recognizable as Joe Biden across a number of different renderings from disparate individuals?


I can draw Biden, yes, but SD can only draw Biden by deriving its output from the images on which it was trained. This is a simple tautology, because SD cannot draw Biden without having been trained on that data.

SD both creates derivative works and also sometimes creates pixel level copies from portions of the trained data.


Yes, and we are now using the artistic definition of “derived” and not the legal definition.

You cannot copyright “any image that resembles Joe Biden”.


This isn't about what can be copyrighted, but about the fact that copyrighted images are being used without following the legal requirements.


Can you draw Biden without ever having seen him or a picture of him? So why is it that you are not deriving but SD is?


Just because it generates an image that looks like Biden does not make that image a derivative, either.

You can draw Biden yourself if you're talented and it's not considered a derivative of anything.


The difference is that computers create perfect copies of images by default, people don't.

If a person creates a perfect copy of something it shows they have put thousands of hours of practice into training their skills and maybe dozens or even hundreds of hours into the replica.

When a computer generates a replica of something it's what it was designed to do. AI art is trying to replicate the human process, but it will always have the stink of "the computer could do this perfectly but we are telling it not to right now"

Take Chess as an example. We have Chess engines that can beat even the best human Chess players very consistently.

But we also have Chess engines designed to play against beginners, or at all levels of Chess play really.

We still have Human-only tournaments. Why? Why not allow a Chess Engine set to perform like a Grandmaster to compete in tournaments?

Because there would always be the suspicion that if it wins, it's because it cheated and played above its level when it needed to. Because that's always an option for a computer: to behave like a computer does.


You’re acting like the “computer” has a will of its own. Generating a perfect copy of an image would be a completely separate task from training a model for image generation.

There are no models I know of with the ability to generate an exact copy of an image from its training set unless it was solely trained on that image to the point it could. In that case I could argue the model’s purpose was to copy that image rather than learn concepts from a broad variety of images to the point it would be almost impossible to generate an exact copy.

I think a lot of the arguments revolving around AI image generators could benefit from the constituent parties reading up on how transformers work. It would at least make the criticisms more pointed and relevant, unlike the criticisms drawn in the linked article.


> There are no models I know of with the ability to generate an exact copy of an image from its training set

Is it "the model cannot possibly recreate an image from its training set perfectly" or is it "the model is extremely unlikely to recreate an image from its training set perfectly, but it could in theory"?

Because I am willing to bet it's the latter.

> You’re acting like the “computer” has a will of its own. Generating a perfect copy of an image would be a completely separate task from training a model for image generation.

Not my intent, of course I don't think computers have a will of their own. What I meant, obviously, is that it's always possible for a bad actor of a human to make the computer behave in a way that is detrimental to other humans and then justify it by saying "the computer did it, all I did is train the model".


In theory, you can:

- Open Microsoft Paint

- Make a blank 400 x 400 image

- Select a pixel and input an R,G,B value

- Repeat the last two steps

To reproduce a copyrighted work. I'm sure people have done this with e.g. pixel art images of copyrighted IP of Mario or Link. At 400x400, it would take 160,000 pixels to do this. At 1 second per pixel, a human being could do this in about a week.

Because people have the capability of doing this, and in fact we have proof that people have done so using tools such as MS Paint, AND because it is unlikely but possible that someone could reproduce protected IP using such a method, should we ban Microsoft Paint, or the paint tool, or the ability to input raw RGB values?
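For what it's worth, the arithmetic behind this thought experiment checks out:

```python
# Time to hand-paint a 400x400 image pixel by pixel, as described above.
width, height = 400, 400
seconds_per_pixel = 1

total_pixels = width * height                    # 160,000 pixels
hours = total_pixels * seconds_per_pixel / 3600  # ~44.4 hours

print(f"{total_pixels} pixels, about {hours:.0f} hours of clicking")
```

About 44 hours of non-stop clicking, which is indeed roughly a week if spread over normal working days.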


>The difference is that computers create perfect copies of images by default

Are we looking at the output of the same program? Because all of the output images I look at have eyes looking in different directions and things of horror in place of hands or ears, and they feature glasses melting into people's faces. And that's the good ones; the bad ones have multiple arms contorting out of odd places while bent at unnatural angles.


Storing and retrieving photos, files, music, exactly identical to how they were before, is what computers do.

Save a photo on your computer, open it in a browser or photo viewer, you will get that photo. That is the default behavior of computers. That is not in dispute, is it?

All of this machine learning stuff is trying to get them to not do that. To actually create something new that no one actually stored on them.

Hope that clears up the misunderstanding.


There is no need for rhetorical games. The actual issue is that Stable Diffusion does create derivatives of copyrighted works. In some cases the produced images contain pixel level details from the originals. [1]

[1] https://arxiv.org/pdf/2212.03860.pdf


> The actual issue is that Stable Diffusion does create derivatives of copyrighted works.

Nothing points to that; in fact, even on this website they had to lie about how Stable Diffusion actually works, maybe a sign that their argument isn't really solid enough.

> [1] https://arxiv.org/pdf/2212.03860.pdf

You realize those are considered defects of the model, right? Sure, this model isn't perfect and will be improved.


> You realize those are considered defects of the model right? Sure, this model isn't perfect.

You can call copying of the input a defect, but why are you simultaneously arguing that it doesn't occur?


I don't call these defects copying either, but overfitting characteristics. Usually they are there because there's a massive number of near-identical images in the training set.

It's both undesirable and not relevant to this kind of lawsuit.


Correction: if you draw a copy of Biden and it happens to overlap enough with someone’s copyright of a drawing or image of Biden, you did create a derivative (whether you knew it or not).


Is that really how copyright law works? Drawing something similar independently is considered a derivative even if there's no link to the original?

It's bad news for art websites themselves if that's the case...


No, that’s not how it works, at least in many countries. Unlike with patents, “parallel creation” is allowed. This was fought out in case law over photography decades ago: photographers would take images of the same subject, then someone else would, and they might incidentally capture a similar image for lots of reasons. Before ubiquitous photography in our pockets, when you had to have expensive equipment or carefully control the lighting in a portraiture studio to get great results, this happened, and people sued, as those with money to spare for lawyers are wont to do. Thus precedent has been established for much of this. You don’t see it a lot outside photography, but it’s not a new thing in art copyright law, and I think the necessity for the user to provide their own input and get different outcomes, outside of extremely sophisticated prompt editing, will be a significant fact in their favour.


So is your mental image of Joe Biden, unless you know him personally.


If I were to take the first word from a thousand books and use it to write my own would I be guilty of copyright violations?


Words have a special carve-out in copyright law and precedent. So much so that a whole other category of intellectual property, trademarks, exists to protect special words.

But back to your point: "if you were to take the first sentence from a thousand books and use it in your own book", then yes, based on my understanding of copyright (I am not a lawyer), you would be in violation of IP laws.


I doubt it would be a violation.

Specifically, fair use factor #3: "the amount and substantiality of the portion used in relation to the copyrighted work as a whole."

A single sentence being a copyright violation would make every book review in the world illegal.


This argument's pedantic and problematic for artists; take away a human's "dataset" and processes and they are also unable to produce a single original "pixel".


[flagged]


I've prepared a boiler-plate response for autistic nitpickers like yourself: https://cdn150.picsart.com/upscale-235459796047212.png?r1024...


I don't think that your phrasing is helpful or appropriate.


If I make software that randomly draws pixels on the screen then we can say for a fact that no copyrighted images were used.

If that software happens to output an image that is in violation of copyright then it is not the fault of the model. Also, if you ran this software in your home and did nothing with the image, then there's no violation of copyright either. It only becomes an issue when you choose to publish the image.
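The thought experiment above can be made concrete with a short sketch (the function name and image size are illustrative, and the "image" is just a grid of RGB tuples rather than an actual file):

```python
import random

def random_image(width, height, seed=None):
    """Generate a grid of random (R, G, B) pixel tuples.

    No training data or reference images are involved, so any
    resemblance to an existing copyrighted work is pure chance.
    """
    rng = random.Random(seed)
    return [
        [(rng.randrange(256), rng.randrange(256), rng.randrange(256))
         for _ in range(width)]
        for _ in range(height)
    ]

# Produce a 64x64 "image" of pure noise.
img = random_image(64, 64, seed=42)
```

Such a generator could in principle emit any image, including infringing ones, which is exactly why the violation, if any, would attach to publication of a particular output rather than to the software itself.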

The key part of copyright is when someone publishes an image as their own. That they copy an image doesn't matter at all. It's what they DO with the image that matters!

The courts will most likely make a similar distinction between the model, the outputs of the model, and when an individual publishes the outputs of the model. This would be that the copyright violation occurs when an individual publishes an image.

Now, if tools like Stable Diffusion are constantly putting users at risk of unknowingly violating copyrights then this tool becomes less appealing. In this case it would make commercial sense to help users know when they are in violation of copyright. It would also make sense to update our copyright catalogues to facilitate these kinds of fingerprints.


How is that any different from a new human artist who studies other artists' work to learn a style or technique? In fact, it used to be that the preferred way for painters to learn was to repeatedly copy the paintings of the masters.


What you and many others in the thread seem to be oblivious to is that algorithms are not people. Yes, it may come as a shock to autistic engineers, but the fact that a machine can do something similar to what a person does does not warrant it equal protection under the law.

Copyright, and laws in general, exist to protect the human members of society, not some abstract representation of them.


It seems like you're using "autistic" as an insult here. If that's not your intention, you might want to edit this comment to use different verbiage.


What do you mean? Autism is well established as a personality trait that diminishes empathy and the ability to understand other people's desires and emotions, while conferring a strong affinity for things, for example machines and algorithms.

Legislation is driven by people who are, on aggregate, not autistic. So it's entirely reasonable to presume that a person who doesn't understand how that process works is indeed autistic, especially if they suggest machines are subjects of law by analogy with human beings.

It's not that autists are bad people; they are just outliers in the political spectrum, as you can see from the complete disconnect between upvoted AI-related comments on Hacker News, where autistic engineers are clearly over-represented, and just about any venue where other professionals, such as painters or musicians, congregate. Just try suggesting to them that a corporation has the right to use their work for free and profit from it while leaving them unemployed, because the algorithm the corporation uses to exploit them is in some abstract sense similar to how their brain works. That position is so far out on the spectrum that presuming a personality peculiarity of the speaker is the most charitable interpretation.


So, is any sort of creation that relies upon copyrighted or patented works copyright infringement? Is any academic research or art that references brands or other creations illegal? This is such a clear case of fair use that it could be a textbook example.

