I love how everyone on this thread is giving energy advice to the country that "makes the world's computer chips". Surely they have a couple of smart people on that island…
I reckon that solar chips are approaching potato chips in terms of (relative to CPU) technological sophistication ... IOW, TSMC has better things to do than panels.
AI empowers normal people to start building stuff. Of course it won’t be as elegant and it will be bad for learning. However these people would have never learned anything about coding in the first place.
Are we senior dev people a bit like carriage riders that complain about anyone being allowed to drive a car?
My spouse is a university professor. A lot of her students cheat using AI. I am sure that they could be using AI as a learning mechanism, but they observably aren't. Now, the motivations for using AI to pass a class are different, but I think it is important to recognize that there is a difference between using AI to build something and learn, and using AI to just build something.
Engineering is also the process of development and maintenance over time. While an AI tool might help you build something that functions, that's just the first step.
I am sure that there are people who leverage AI in such a way that they build a thing and also ask it a lot of questions about why it is built in a certain way, and seek to internalize that. I'd wager that this is a small minority.
Back in school, using a calculator was occasionally considered cheating, the purpose being to encourage learning. It would be absurd to ban calculators in the work environment; there, it's your responsibility as an employee to use them correctly. As you say, that's just the first step.
I'm sure at some point universities will figure out how to integrate AI into pedagogy in a way that works, rather than a blanket ban. It also doesn't surprise me that, until people figure out effective strategies, they say "no ChatGPT on your homework."
To learn effectively you need to challenge your knowledge regularly, elaborate on that knowledge and regularly practice retrieval.
Building things solely relying on AI is not effective for learning (if that is your goal) because you aren’t challenging your own knowledge/mental model, retrieving prior knowledge or elaborating on your existing knowledge.
The problem with using the current crop of LLMs for coding, if you're not a developer, is that they're leaky abstractions. If something goes wrong (as it usually will in software development), you'll need to understand the underlying tech.
In contrast, if you're a frontend developer, you don't need to know C++ even though browsers are implemented in it. If you're a C++ developer, you don't need to know assembly (unless you're working on JIT).
I am convinced AI tools for software development will improve to the point that non-devs will be able to build many apps now requiring professional developers[0]. It's just not there yet.
[0] We already had that. I've seen a lot of in-house apps for small businesses built using VBA/Excel/Access in Windows (and HyperCard etc on Mac). They've lost that power with the web, but it's clearly possible.
I've gotten a lot of mileage in my career by following this procedure:
1. Talk to people, read docs, skim the code. The first objective is to find out what we want the system to do.
2. Once we've reverse-engineered a sufficiently detailed specification, deeply analyze the code and find out how well (or more often poorly) it actually meets our goals.
3. Do whatever it takes to make the code line up with the specification. As simply as possible, but no simpler.
This recipe gets you to a place where the codebase is clean, there are fewer lines of code (and therefore fewer bugs, better development velocity, often better runtime performance). It's hard work but there is a ton of value to be gained from understanding the problem from first principles.
EDIT: You may think the engineering culture of your organization doesn't support this kind of work. That may be true, in which case it's incumbent upon you to change the culture. You can attempt this by using the above procedure to find a really nasty bug and kill it loudly and publicly. If this results in a bunch of pushback then your org is beyond repair and you should go work somewhere else.
How busy are the car parks by their dispatch center? Are the cars staying longer because people are working overtime? How many UPS trucks are visiting?
You buy traffic-flow data from ISPs at all tiers, so even though the flows are encrypted, knowing how much traffic goes to Macy's.com vs JCPenney.com gives you information you can act on.
We know this is being done, because of reports that say Netflix is X% of Internet traffic. The unredacted reports from those same data sources have much more detail. It's also why some apps that don't appear to have any business model are actually quite valuable.
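To make the idea concrete, here is a minimal sketch of how aggregate flow records could be turned into the kind of relative-traffic signal described above. The flow records, domains, and byte counts are entirely made up for illustration; real ISP flow data (e.g. NetFlow/IPFIX enriched with DNS or TLS SNI) is far messier, but the aggregation step is the same: payloads stay encrypted, yet destination plus volume already tells a story.

```python
# Hypothetical illustration: even with payloads encrypted, flow records
# (destination domain inferred from DNS/SNI, plus byte counts) reveal
# relative demand between competing sites.
from collections import defaultdict

# Made-up flow records an ISP-tier vantage point might sell: (domain, bytes)
flows = [
    ("macys.com", 120_000), ("jcpenney.com", 40_000),
    ("macys.com", 80_000), ("jcpenney.com", 60_000),
]

# Aggregate bytes per destination domain.
totals = defaultdict(int)
for domain, nbytes in flows:
    totals[domain] += nbytes

# Express each domain as a share of all observed traffic.
grand_total = sum(totals.values())
for domain, nbytes in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{domain}: {nbytes / grand_total:.0%} of observed traffic")
```

This is how a "Netflix is X% of Internet traffic" headline number is produced; the unredacted version simply keeps every domain in the table instead of the top few.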
If this were true, why haven't we seen it with manipulated pictures?
Maybe I'm not well informed, but there seem to be no examples of the issues you describe with photos.
I believe it’s actually worse than you think. People believe in narratives, in stories, in ideas. These spread.
It has been like this forever. Text, pictures, videos are merely ways to proliferate narratives. We dismiss even clear evidence if it doesn’t fit our beliefs and we actively look for proof for what we think is the truth.
If you want to "fight" back you need to start on the narrative level, not on the artifact level.
Hell - look at the fate of the rest of the world online. They're basically thrown to the fucking wolves.
Minorities are lynched around the world after viral forwards. Autocrats are stronger than ever and authoritarian regimes have flourished like never before.
Trust and safety tools are vastly stronger for English than for any other language. See the language-resource gap ("Lost in Translation," Nicholas and Bhatia).
In America, the political divide has reached unimaginable levels. People live in entirely different realities.
Images from the Democrats' side are dismissed as fakes, and lies take so long to discredit that the issue has moved on, tiring fact-checkers and the public.
The original fake news problem of Romanian advert farms focused entirely on conservative citizens.
There are. The fact that the example that comes to mind is a case from 30 years ago so heroic they made a movie about it hints this is pretty rare. There doesn't seem to be an Erin Brockovich for every type of problem, or even for every location in the US.
Because claims that one singular concept or idea can explain pretty much everything in life have, historically, been quite false. Worse, such claims tend to attract people who like to have one simple solution for everything, or one single thing that's the root of all evil, or one simple explanation for everything. The damage such people can inflict, especially when equipped with a grandiose enough concept, has (again, historically) been quite incredible.