It's vastly simpler to run one cable through the ocean than to run heaps of high-voltage lines between all the remote renewable generation sites and the consumers. Even with solar built into a suburb, you need to build much thicker connections to move that power to other consumers than you did in the old generation system.
Distributed grids are complicated.
You'd think it would, too, since Apple openly advertises how good the newer hardware is at AI. No idea where the above user got the idea that they had no ability to do that.
AI wouldn't help their app in the slightest. Taking an anti-"AI" stance aligns with their business. It's a perfectly rational decision. That doesn't take anything away from it also being a heartfelt decision.
Bold of you to assume hating AI isn't the "Current Thing" among online artists.
Blind decision-making is bad. It's perfectly fine if the Procreate devs don't want to add AI features. I simply find it distasteful to present it as a moral position (as people are wont to do with so many things nowadays) or a groundbreaking perspective. It's a preference - which should be respected - but just a preference.
You might be surprised. Many students who use ChatGPT for assignments end up turning in code identical (or nearly identical) to other students who use ChatGPT.
Different in an exact string match, sure, but code copied and pasted from ChatGPT has a lot of similarities in the way it is (over-)commented. I've seen a lot of Python where the student who "authored" it cannot tell me how a method works or why it was implemented that way, despite the comments prefixed to every line in the file.
In my experience with ChatGPT, it usually removes most of my already-written comments when I ask questions about code I wrote myself, and it usually gives you outline comments instead. So unless you're a supporter of the self-documenting-code idea, I don't think ChatGPT over-comments.
It's obviously down to taste, but what I've seen over and over is a comment per line, which to me is excessive outside of something requested of absolute beginners.
That happens, and the model also can't decide whether it wants the comment on the line before the code or appended to the line itself, so when I see both styles within a single project it's another signal. People generally have a style that they stick with.
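To illustrate, here's a made-up Python snippet (not from any real student): the first function comments every line on the line above it, the second appends comments to the end of each line. Seeing both habits inside one small project is the kind of mixed signal I mean.

    # Style 1: a comment on the line before every statement
    def average(numbers):
        # Initialize the total to zero
        total = 0
        # Loop over every number in the list
        for n in numbers:
            # Add the current number to the total
            total += n
        # Divide the total by the count to get the average
        return total / len(numbers)

    # Style 2: comments appended to the end of every line
    def median(numbers):
        ordered = sorted(numbers)           # Sort the input list
        mid = len(ordered) // 2             # Find the middle index
        if len(ordered) % 2 == 1:           # Odd-length list: take the middle element
            return ordered[mid]
        return (ordered[mid - 1] + ordered[mid]) / 2  # Even length: average the two middle elements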
Yeah, but the prompt itself generally adds sufficient randomness to avoid the same verbatim answer each time.
As an example, just go ask it to write any sufficiently average function. Use different names and phrases for what the function should do; you'll generally get a different flavor of answer each time, even if the functions all output the same thing.
Sometimes the prompt even forces it to output the most naive implementation possible, due to the ordering or perceived priority of things within the requesting prompt.
It's fun to use as a tool to nudge it into what you want once you get the hang of the preconceptions it falls into.
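A quick way to try this yourself, assuming the OpenAI Python client (the model name and prompt wording below are just placeholders): send the same task phrased two different ways and compare the implementations that come back.

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Two phrasings of the "same" task; the wording often changes the flavor of the answer
    prompts = [
        "Write a Python function that returns the n-th Fibonacci number.",
        "I need a helper fib(n) that computes Fibonacci numbers iteratively, nothing fancy.",
    ]

    for p in prompts:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "user", "content": p}],
        )
        print(resp.choices[0].message.content)
        print("=" * 60)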
MOSS seems to be pretty good at finding multiple people using LLM-generated code and flagging them as copies of each other. I imagine it would also be a good idea to throw the assignment text into the few most popular LLMs and feed that in as well, but I don't know of anyone who has tried this.
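Not MOSS itself, but a rough sketch of the idea (the directory names and the 0.9 threshold are made up): generate reference solutions from a few LLMs, drop them in alongside the student submissions, and flag any pair that comes out suspiciously similar.

    from difflib import SequenceMatcher
    from itertools import combinations
    from pathlib import Path

    def load_sources(directory):
        # Map file name -> file contents for every .py file in the directory
        return {p.name: p.read_text() for p in Path(directory).glob("*.py")}

    submissions = load_sources("submissions")    # student code
    references = load_sources("llm_solutions")   # e.g. one solution per popular LLM
    corpus = {**submissions, **references}

    # Naive pairwise text similarity; MOSS does something far more robust than this
    for (name_a, src_a), (name_b, src_b) in combinations(corpus.items(), 2):
        ratio = SequenceMatcher(None, src_a, src_b).ratio()
        if ratio > 0.9:
            print(f"{name_a} vs {name_b}: {ratio:.2f} similarity")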
If Apple's store isn't found illegal, then Google will just switch to Apple's model: Android will lock out alternative stores completely in response.
They'll probably rename Android to something else and say it's a new, more secure OS.