Hungary, Turkey, India, Venezuela come to mind.
Poland and Brazil are also recent near misses, and South Korea had a recent oops of its own.
So, a well-trodden path.
Duolingo is a great kickstarter from zero knowledge; in my experience it also leads to very fast reading proficiency.
I'm convinced I can make sense of a newspaper within weeks and read novels within months in any language just using Duolingo (maybe longer if I also have to learn a different writing system). I did it with Danish years ago. I could read newspapers after a month and after four months I read a few novels at a slow but still enjoyable pace.
Or another example: my daughter was struggling with English at school and I bribed her to keep a 200-day Duolingo streak. That was enough to allow her to understand sources in English about her hobbies and interests, and now after a couple years her level is way above what is taught at school, she aces every test effortlessly.
Fluency... English is not my first language, but I can write, read and understand it pretty well. I even "think in English" at times, but I have never spent more than 4 consecutive days in an English-speaking country, and my fluency suffers a lot comparatively.
I can speak about anything, even give a talk or teach, but my accent is very thick, I'll stammer and stutter and stop here and there to find the right turn of phrase, or struggle to pronounce words that I have written a thousand times but never said aloud.
So in my opinion, if you want to speak well, you have to speak a lot, and I don't think Duolingo is very good for that.
This has been my experience with a recent attempt to guide an LLM to a complete implementation of a small internal tool.
I had in an hour what would have taken me 4 or 5 hours to write.
But after that, it was an endless loop of the LLM adding logging code to find some bug and failing to fix it, only to add more logging code and ineffectual changes and so on.
The problem is that even after it's lost at sea, it's still answering in a completely confident and self-assured tone, so by the time you decide to take matters into your own hands you might already be too far from a sane state and have an unfixable mess on your hands.
I guess I could go back to where it strayed and pick it up from there, but by now the experiment seems like a failure.
Back in early 2023 I tried to write a tool to do my taxes based on my broker's CSV files. Since I wasn't familiar with how the data was structured, I let the LLM lead me while building this in incremental steps. The result was not just buggy, it simply failed to detect the relationships in the data (multiple somewhat implicitly embedded tables that needed to be joined). Even after I pointed this out, it failed to handle it, getting stuck in the same kind of loop you described.
To this day, no LLM that I tried passed this task of leading the development while detecting the underlying structure of the data.
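To give a flavour, the layout sketched below is hypothetical (the blank-line separators, section titles and the "TradeID" key are invented for illustration, not my broker's actual format), but the kind of handling the LLMs never arrived at on their own looks roughly like this in Python:

    # Hypothetical sketch: a broker export that embeds several tables in one
    # CSV file, sections separated by blank lines, each with a title line and
    # its own header row. Split them apart, then join two of them on a shared
    # key. Section names and the "TradeID" key are made up for illustration.
    import csv
    import io

    def split_sections(path):
        """Return a dict mapping section title -> list of row dicts."""
        with open(path) as f:
            raw = f.read()
        sections = {}
        for chunk in raw.split("\n\n"):      # blank lines separate the embedded tables
            lines = [l for l in chunk.splitlines() if l.strip()]
            if not lines:
                continue
            title = lines[0].strip()         # first line names the section
            reader = csv.DictReader(io.StringIO("\n".join(lines[1:])))
            sections[title] = list(reader)
        return sections

    def join_on(key, left, right):
        """Naive inner join of two lists of row dicts on a shared key column."""
        index = {row[key]: row for row in right}
        return [{**row, **index[row[key]]} for row in left if row[key] in index]

    sections = split_sections("broker_export.csv")
    trades_with_fees = join_on("TradeID", sections["Trades"], sections["Fees"])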
At least in my experience, as soon as something goes a little wrong it just gets worse from there. The more of its confusion and contradictory information there is in the chat history, the worse it gets. It also has to make changes to the code, so you accumulate these spurious changes and the problem gets more confusing. I've had some luck starting over with a new chat and asking what is wrong, but if that doesn't work I just assume I'm on my own.
I've found that quality degrades really quickly after just the first reply, for some reason. They all seem heavily biased towards one-shot correct answers, and as you say, they go down the wrong path really quickly if you even get the first message slightly wrong.
I tend to restart chats from the beginning pretty much all the time, because of this.
I’ve also found this to be the case. Starting a new chat or Cursor composer session puts things back on the right track. Also, prompting is really important. A lot of people just seem to think they have some kind of oracle - “fix the bug” - how is anything supposed to work from that?
> But after that, it was an endless loop of the LLM adding logging code to find some bug and failing to fix it, only to add more logging code and ineffectual changes and so on. The problem is that even after it's lost at sea, it's still answering in a completely confident and self-assured tone, so by the time you decide to take matters into your own hands you might already be too far from a sane state and have an unfixable mess on your hands.
I wonder how much better or worse things would get if we took the human factor out of the loop. Give the LLM the ability to run tests and see the results, then let it iterate on its own output and branch off with different approaches, gradually increasing the temperature, etc.
Maybe it’d turn out that you need 10 LLMs running in parallel for an hour to fix something, or perhaps even a 100 would never stumble upon a solution for a particular type of problem. And even then I wonder, whether it’d get better if you fed it your entire codebase or the codebases of the entire libraries or frameworks that you use (though at that point you’re either training it yourself or are selectively finding and feeding the correct bits not to exceed the context).
Exploration of what’s possible and what’s not, identifying whether the weaknesses can or cannot be addressed.
A bit like how traditional autocomplete can streamline familiarising yourself with various libraries: a clear step ahead of having to dig through the documentation quite as much.
Maybe there’s a class of code problems that LLMs can be decent at solving, given the ability to iterate, verify solutions and what works or doesn’t, perhaps with 10x more compute than is utilized in the typical chat mode of interaction though.
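A rough sketch of what such a loop might look like (only the pytest invocation below is real; generate_patch and apply_patch stand in for whatever LLM client and workspace tooling you'd actually wire in):

    import subprocess

    def tests_pass(workdir):
        """Run the project's test suite and return (passed, output)."""
        result = subprocess.run(["pytest", "-q"], cwd=workdir,
                                capture_output=True, text=True)
        return result.returncode == 0, result.stdout + result.stderr

    def try_to_fix(task, workdir, generate_patch, apply_patch,
                   rounds=5, branches=4):
        """Hypothetical iterate-on-tests loop: several candidate patches per
        round, widening the sampling temperature as rounds keep failing."""
        passed, log = tests_pass(workdir)
        if passed:
            return workdir
        for r in range(rounds):
            temperature = 0.2 + 0.2 * r                      # gradually increase the temperature
            for _ in range(branches):
                patch = generate_patch(task, log, temperature)   # LLM call (stub)
                candidate = apply_patch(workdir, patch)          # isolated working copy (stub)
                passed, log = tests_pass(candidate)
                if passed:
                    return candidate                         # keep the branch that passes
        return None                                          # no branch found a fix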
Get more people into computer science. Knuth said early on in his career he thought he needed to make the computer faster or cheaper, but really it was about getting more users. Anyone can program, or at least try to and then learn about computer science.
Part of the skill in using these tools is recognizing when it spins off the rails and backtracking immediately. Most of the time something can be gleaned from that wrong approach which can then guide further attempts.
This is my experience with Aider. When I first started using it, I turned off the auto git commits, but I’ve since turned them back on because they serve as perfect rollback points. My personal style is to only commit once I have a feature fully working, but with Aider it's best to have it commit after each exchange.
I've gone 2-6 steps down a path before realizing this isn't going to work or the LLM is stuck in a loop. I just hard reset back to the first commit in that chain and either approach the task differently or skip it if it wasn't really that important.
You're not graded on getting the LLM to output perfect code; the point is to get the code into git and PR'd. If your LLM tooling doesn't automatically commit to git so you can trivially go back to "where it strayed", you need to find a better tool. (My current favorite is aider.)
It's a tool not a person. When was the last time you got mad at a hammer for being smug?
My wife has been receiving medical appointments, test results, event tickets, package tracking numbers and so on intended for a few old ladies for years.
I once tracked down the sons and nephews of one of them and told them, but they apparently thought I was some kind of scammer and stopped returning messages after a couple of exchanges, so to this day she still receives all of it.
I was studying Physics, not out of particular interest, just because it was challenging, so I was doing badly.
I then discovered a small room that had two unsupervised computers hooked up to some mysterious world-spanning network, made friends there, and ended up leaving Physics for Computer Science.
My first job and every job in my 20s came from people I met in that room getting jobs themselves and calling me to see if I would go work with them, or someone from the previous jobs calling me back. I've never done a real job interview or sent a CV.
But then I started a family and my social life plummeted. I'm also bad at really nurturing relationships that don't self-sustain, so in retrospect I can see how my career has ossified since then.
I don't totally regret it because even if I'm now underpaid and underemployed, I earn more than enough for my lifestyle and have loads of free time, so it balances the pang for greater things.
This is so myopic. I feel it's similar to the scrapping of Philosophy from the common high school curriculum here in Spain. It was thrown away as the uninteresting rants of beardy old men, to make space for things like some trite dabbling with Word and Excel billed as "digitalization".
So now things like History, Literature and many STEM subjects feel completely ungrounded. When my kids have some trouble understanding things and they ask me, the answer is very often something I learned studying Philosophy.
So now let's also not do any research on, for example, Social Networks. It's not as if they were a relevant aspect of modern life worthy of careful observation. Don't dare look too closely into what the overlords are doing.
I really wish basic philosophy and skepticism/propaganda techniques were taught in high school as a mandatory class. People need to learn to question propaganda and demand sources and logic when people try to "convince" them of "thangs and stuff". Too many youngsters trust the crap on the internet without question if it makes them "feel" good or "a part of something" or any other number of emotional responses to TikToks and Instagram tripe.
You get philosophy at any Catholic or Jewish school simply from your theology class and "skepticism/propaganda" is usually called "rhetorical analysis" and is taught to high school Sophomores in English class.
I can't speak to Protestant / "Christian" schools because theirs is a very different religion but a Catholic education is 12 years of moral philosophy and formal ethics. It's just filtered through the writings of prominent Catholic theologians whose ideas are informed by but ultimately separate from the bible.
Despite being an atheist, I think my theology classes were the most valuable ones I had growing up because, more so than any other class, even math, which I would go on to study, they taught me how to think for myself. I'm sad a secular version of it isn't taught everywhere as standard curriculum.
Yeah, 18 months ago I was struggling with a philosophy book and saw an ad on the street for a professor who does tutoring. I thought, "What the heck!" and signed up to see what happened. We meet every couple of weeks and it has been great. I don't in general find philosophy's answers particularly useful, but the questions and the habit of questioning have been great.
I do get though, why systems of power want to defund things like philosophy and sociology. Good questions and good data are two things that run counter to the willful exercise of power.
I want to say something along the lines of "they want to defund critical thinking in general" but I fear I'm becoming too extreme. I'll go with what you said instead.
I do not know how things look in New Zealand, but I would argue that, on average, very little critical thinking is taught in university or college, in either the humanities or STEM.
I think good professors that focus on that are the exception and not the rule, sadly.
> I do not know how things look in New Zealand, but I would argue that, on average, very little critical thinking is taught in university or college, in either the humanities or STEM.
I am in New Zealand
I have been through the university system here, and my family is still closely involved
A great deal of critical thinking happens in New Zealand universities
I'm a New Zealander. I got a degree in Philosophy and entered a Masters in Computer Science here. I definitely learned a lot about critical thinking. Both from humanities and sciences - I think the combination is probably superior to either in isolation
There is no unified "they". Instead I think you are identifying a difficult fork in the road of education: exploratory and associative free will versus collectively learned information up to disciplined obedience. A full society needs both! Neither is inherently better! It is indeed a difficult subject. People in the disciplined-obedience camp do sometimes prioritize their own ways for funding, and vice versa, for sure.
I don't think there's a "unified 'they'" in the sense of, say, there being a Stamp Out Critical Thinking Council with meetings Tuesdays and Thursdays. But I do think that people who rise through power systems while seeking control are going to be averse to critical thinking in their underlings and their social lessers. I think that's in part because most systems like that select for that kind of person, but also because it's in both their personal interest and that of the system they've hitched themselves to.
So the "they" here isn't a unified, coordinated group. But you'll rarely find those in structural problems. But there is a "they" in the sense that we can define a set of people who will act to oppose critical thinking either through direct self interest or class interest.
I don't know about that. China seems to have found a middle ground that allows for a pretense of exploratory and associative free-will ("special economic zones", China even has billionaires) with people in reality being one wrong move away from the usual sudden and drastic crackdown you'd associate with its style of government.
On the other hand, Western democracies largely seem to fund this kind of "exploratory and associative free-will" to the benefit of their aristocracies (i.e. wealthy people who hold a lot of social, economic and often political power but cannot actually directly control the government itself, despite often benefiting from selective enforcement). At the same time they are clearly aware that ideas like the state monopoly on violence (even in the US) or the "right of a country to defend itself" are vital to the state's continued existence, and that democracy is a threat to that ulterior motive if taken too seriously.
China seems to be an example of a "disciplined obedience" system adapting to its economic environment (more the international one than the internal one) whereas "the West" seems to provide examples of systems creating layers of misdirection to hide their inherent "disciplined obedience" based nature that ensures their self-preservation.
Yeah, I suspect few of them directly want that. But I also think few of them are inclined to appreciate the benefits of it.
But it does happen intentionally that way sometimes, for sure. E.g., the way the US's Dickey Amendment defunded gun research to prevent any inconvenient facts from coming to light.
Just for the sake of looking at what good intent might have caused this decision:
I would argue that, in periods of scarcity, it makes sense to prioritize public spending on what has a more tangible economic ROI. I recognize that I am extrapolating from the fact that STEM-related jobs tend to be more remunerative than those in the social sciences.
I could not find any literature regarding the ROI of research programs.
We aren't particularly in a period of scarcity. New Zealand's a rich country with a stable GDP.
Also, ROI is the wrong frame to use for government activity. If something has significant short-term ROI, then normal commercial capital's a good match. If it has large long-term ROI, then that's VC's domain. It's government's job to make investments in public goods, things that don't have ROI in the sense usually meant here.
I don't really agree with your characterisation of ROI.
Every potential decision outcome has a ROI (which we mostly don't know in advance, though we can often guess). The investments for which we need governments are those that won't ever work in the private sector due to misaligned incentives (no company will pay out of its own pocket to educate young children, not because there's no ROI overall but because it's much too diffuse -- the ROI for that company is extremely low).
People can disagree about whether government should also make some investments that would also be made by self-interested people/companies.
ROI is very much a business term of art. And that's how it's mostly used here in this little venture-capital supported niche.
I agree one can broaden it to mean something much vaguer, using it metaphorically. But A) I think we at least have to explicitly distinguish that, and B) I think casting societal questions in business terms is a perilously wrong frame, one easily leading to all sorts of errors of thought, ones that have negative social implications.
Just as an example, we could take Elon Musk's recent swipe at funding to fight homelessness. There he uses a common business framing and ends up with some conclusions that are deranged from the point of view of people actually working on the problem. Which wouldn't matter much if he were some random internet commenter, but here his error could have a significant body count.
If we define direct ROI as the canonical monetary gain that private entities tend to optimize for, and then _indirect ROI_ as something that might cause monetary gain through second- or nth-order consequences, then I was mostly referring to indirect ROI.
I tend to agree with you that public spending would be wasted or even misdirected if it optimized for direct ROI (e.g. the prison system, education, healthcare...).
I should have specified this better in my original response.
I've taught myself a lot of things over the course of my life and am a huge proponent of self-education, but a lot of the 'learning how to learn' had to happen in graduate school. There are few environments that provide the right combination of time, close involvement of experts and peers, the latitude to direct your research in a way that you find interesting and useful within the larger constraints of a project, the positive and negative feedback systems, the financial resources from grant funding, etc.
The negative feedback loops are particularly hard to set up by yourself. At some point, if you're going to be at the researcher level (construed broadly), you need help from others in developing sufficient depth, rigor and self-criticality. Others can poke holes in your thoughts with an ease that you probably can't muster on your own initially; after you've been through this a number of times you learn your weaknesses and can go through the process more easily. Similarly, the process of preparing for comprehensive exams in a PhD (or medical boards or whatever) is extremely helpful, but not something most people would do by themselves--the motivation to know a field very broadly and deeply, so you can explain all of this on the spot in front of 5 inquisitors, is given a big boost by the consequences of failure, which are not present in the local library.
The time is also a hard part. There are relatively few people with the resources to devote most of their time to learning outside of the classroom. I spent approximately 12,000 hours on my PhD (yes, some fraction of that was looking at failblog while hungover, etc., but not much). You could string that along at 10 hours a week, 50 weeks a year, which is a 'serious hobby', but it would take you 24 years. How much of the first year are you going to remember 24 years later? How will the field have changed?
> You wasted $150,000 on an education you coulda got for $1.50 in late fees at the public library.
Does the rest of the movie support that claim? Will Hunting had book smarts but required significant effort from several people to get him to the point where he was ready to meaningfully apply his intelligence.
I've hired a handful of folks who learned solely by self-study and while none of them required the level of support Will did, they all took significantly more effort to get to the point where they contributed productively than hires who attended university or had previously collaborated with experts.
Not saying that requires a degree, but even the most brilliant people benefit from collaborating with like minds.
Yeah, there's a lot of education you can't get just by reading books. Which is exactly why I ended up hiring a tutor.
Philosophy in particular is one long argument, 2500 years of new people showing up and saying, "Well, that guy's wrong and I'm right." So much of what I needed to know to make sense of philosophical arguments is either hugely scattered or not written down at all. It was vastly more efficient just to hire an expert.
That's not to argue for the $150k education; I wouldn't know. But I don't think that taking life advice from fictional characters is much better.
Assuming one has the self-motivation and ignores everything else that goes with attending a university. Most people aren't super geniuses who spend their days reading books from the library or online papers.
Most people who aren't self-motivated will almost completely stop studying anything new after university anyway, and will still end up far behind the motivated people. Far better if they were put in a situation where they were forced to learn how to motivate themselves and study of their own accord.
The forcing doesn't have to be particularly dramatic. One of the things I like having about a tutor is it "forces" me to make some progress on a regular basis. As a friend of mine put it, "Sometimes I need somebody to not disappoint."
Next door (PT), the right wing also wants to cut down on several things: social sciences, philosophy, sex ed. It's part of (1) a crusade against what is perceived to be "the communists brainwashing your kids", and (2) an idea that schools and universities ought only to teach what the market requires. So yes to STEM because it has economic value, and no to social sciences, arts, or philosophy because, according to The Market, they don't.
As far as I know, Freudian psychoanalysis has been dead academically for as long as I've been alive. Complaining about it is pretty close to complaining about how physics is stuffed with all these people who believe in the luminiferous aether. [1]
Were a lot of Freud's initial ideas discredited? Definitely. But the same is true of Isaac Newton, who was an alchemist and theologian. Consider his recipe for curing plague with toad amulets. [2]
The beginnings of any sort of knowledge are messy. But progress is possible.
Of course the hyperreal doesn’t exist — that’s inherent to the concept! That’s the entire point: representations of ostensible reality have become ungrounded in any physical truth, but cannot be said to have lost meaning because meaning in the world is primarily formed by social consensus. When that consensus breaks with any directly observable facts, weird stuff happens (like Gain-scented Febreeze, or cryptocurrency, or meme stocks): but these things cannot be dismissed as “meaningless” because they have the meaning imbued to them by social structures!
On to Deleuze: a body without organs is an excellent frame through which to understand a modern corporation (dendritic) or a hippie commune (ideally rhizomatic). That’s the point — it’s an organization operating without biological structures. Is this useful? Well, the primary goal of any organism is to continue existing…
Biopolitics is how politics affect biology: abortion and forced sterilization and healthcare. It obviously has some effect: is analyzing politics through this lens useful? Depends on your goals.
Smooth versus striated space is about the effects rigid schedules have on us. They have some effect, or vacations wouldn’t feel nice! Worth restructuring society over? I’m not ready to dismiss it out of hand.
…area studies are looking at what’s going on in your neighbourhood?
And moreover, I feel like you’re missing the point of analytic philosophy, despite having studied it: it does not derive absolute truth, but offers frameworks through which events and phenomena can be analyzed! These frameworks can be anywhere from fully explanatory to having no value at all for a given scenario, but their importance is through their engagement — and refutation is equally as important! Philosophy, more than any other field of study, is a continual dialogue about problems that can never be solved, hopefully finding new tools and perspectives along the way.
As someone with multiple degrees in philosophy... yes, there are obviously very eccentric views. Just steer clear of Continental Philosophy if it bothers you.
The idea that we aren't teaching children the works in the Modern and Analytic traditions is truly a shame; however, given the conflict with religiosity, it does not surprise me that public education programs avoid them for political reasons.
Have you read Discipline and Punish? Foucault can be hard to read in parts, but he did the research and it shows on every page. I disagree vehemently with his thesis, but there's no denying it has had an impact, and for a reason.
Freud and other psychodynamic theorists were basically the beginning of the idea within medicine and neurobehavioral sciences that people don't always have insight into their behavior.
Too much has been made of them on the basis of caricature and stereotype.
Maybe in the humanities it's still dominant but I don't get that sense.
It's always struck me as odd that people are fine idolizing and giving Nobel prizes to vague two-system theories of decision making ("fast and slow") but then turn around and act like Freud was the worst form of charlatan, when the former is just an empirically articulated form of the latter. That's an important difference, but fundamentally they are not all that far apart.
I feel silly defending Freud but sometimes I feel like the weird vitriol and animosity toward Freud is strange. As someone pointed out, it's like the general public getting angry with physicists for ever positing luminous ether, or getting angry with biologists for entertaining Lamarckian inheritance.
You shouldn't feel silly. The animosity towards Freud is completely understandable and, I think, pernicious. What people want to challenge is not Freud's crankery, but the destabilizing and widely accepted idea of a human subconscious. I see it most commonly in a certain kind of "rationalist" who doesn't want rational methods to extend to analyzing their own behavior.
What else is there to research about Social Networks? They’re bad, but people get addicted. Nothing else to it. Not sure why people should just get funding forever to constantly arrive at the same conclusion.
Just off the top of my head: 1) What specific mechanisms are used to hook people initially? 2) What specific mechanisms sustain or deepen the addiction? 3) What actual value do they provide to users? 4) Who specifically do they harm, and by how much? 5) Who uses them without harm? 6) What societal impacts, positive and negative, do they have?
And I could keep going, but you get the idea. Any one of those could be a hundred research projects.
Even if your sole goal was to regulate them out of existence, you'd need a lot more than "I think Facebook is bad". You'd at least need a solid enough definition of the problem to craft the ban in such a way that it stuck. But that's a very unlikely outcome, so most of the people working on this are looking to minimize harm while maximizing value, and that just requires a lot of detailed research. For example, compare Facebook vs Mastodon, or vs HN. Do we ban them all, because "social networks bad"?
I hate to break it to you, but spouting a little pop-science jargon plus some anecdotes is not "figuring it out" for the purposes of actually fixing anything. If it were, the "do your own research" people would have health care sorted out already.
But what policies work in regulating them? Street drugs are bad, and outright prohibition in the war on drugs has failed for many different reasons. Figuring out how to stop the organized crime and street gangs it incentivises is hard. Understanding how decriminalization does or doesn't work is hard. Does giving out free needles reduce harm by preventing disease, or increase harm by enabling use? What economic or social policies would indirectly help? Do high housing costs drive homelessness, which drives addiction, so we should all be YIMBYs, or does abuse lead to job loss, which leads to homelessness? The truth is very complex and it's hard to figure out how to fix it.
Social media is a similar type of problem. You probably can't outright ban them; in democratic countries there is too much demand and there would be backlash. Even if not, underground social media would arise, as it already does in countries where it is restricted. Can you regulate it? If so, what works? Certain ages? Restrict algorithmic curation? Change liability rules? Better educate people about the costs, benefits and good use? Enable more heavy-handed censorship and content filtration? Require real names and have strong libel laws that are actually enforced? What about foreign ownership or influence campaigns? Corporate advertising? Monopoly and anti-trust issues? What about standards around interoperability and federation? Should they be free, or require subscriptions?
The people behind this policy do not think there is "Tons of stuff to figure out."
They are like many commentators here: they (think they) know it all.
It is, I hope, the last gasp of the old Thatcherite guard. "There is no such thing as society... just individuals."
They see everything through a materialistic lens, are desperate to reduce taxes (it is a fetish), and are doing incredible damage to the infrastructure of our society.
Politics, sigh
The government they replaced (earnest left wing types) had some good ideas but were very centrist, paternal, and astoundingly incompetent
Came to add my input as a NZer but yeah, this sums it up.
Most annoyingly to me there is no true green party to vote for. The actual NZ green party's primary focus is on what would be labeled socialist outcomes by someone with a US perspective.
Really, if we want to make a case for addiction, there is clearly an addiction to claiming that new things are somehow addictive. It rests on a dodgy basis, backed mostly by sensationalism and pseudoscience that pathologizes anything that has to do with pleasure. Really, they are neo-puritans wearing a mask of science.
“What else is there to research about disease? They’re bad, but people get infected.”
These are things which shape our world, it’s worth understanding them at a level which doesn’t fit on a bumper sticker – and, right-wing mythology aside, the cost is not very high. Academics are cheap and their work almost always has spin-off benefits, even if that’s just providing a place for people to learn general research skills they take on to the workplace.
It's not that myopic though is it? Over the last 2 or so years breakthroughs in AI have given us access to a new level of technology. It's a rich seam to mine, so society is likely to benefit more from a state focussing its research on it instead of e.g. understanding Maori migration a few hundred years ago. A stronger economy leads to more funding for public services to support people alive today.
History has been pretty well uncovered; whether people listen to it and learn its lessons is another matter (schools certainly don't teach it unless it is about how the white male oppressors fucked over everyone and it is the cause of all the world's evils).
Most historical debate these days is also pretty subjective, egos versus egos for clicks and likes (and research money). Don't get me started on the subjective biases of social "science".
This is just absolutely factually wrong and betrays a total lack of understanding of the field. History manuscripts are released constantly that are investigating and discussing contents of the archive that have been sitting in a box unexamined since the time of their creation. Even if you take the outrageously limited view of history that it just exists to document the past, we make significant progress constantly.
There's also no research money in the field for egos to squabble over. Research grants for historians are regularly in the "couple of thousand dollars" range.
I wish I could agree, and I'm happy to be shot down, but I am not seeing anything that is not just a re-interpretation of current facts to make history sound nicer. There has certainly been nothing uncovered this century that has changed anything, and I mean anything, important about the current world, and the original article was about economic benefits to our country, of which there frankly are none. Subjective "research", IMHO, is a waste of taxpayer dollars when objective research is still underfunded.
Understanding how Māori and Pākehā react differently in different situations is crucial to good social services
If you do not study the society you live in how do you act in a socially positive way? How do you know what public services are even required if you refuse to look?
If there's not much money in the kitty to pay for the services it's academic. Far better to focus on potentially valuable tech so there is money to pay for things later, and do research then if there's any question how to spend it.
That's the kind of reasoning the USSR's Communist party embraced back then. It turns out that state planning of research doesn't work very well in the long run. In the short term it kind of does, because all you have to do is catch up with the state of the art in a handful of priority domains, but when those domains go stale you're screwed, because that's all the research you have.
I've seen my 15 year old use ChatGPT for her homework and I'm ok with most of what she does.
For example she tends to ask it for outlines instead of the whole thing, mostly to beat "white page paralysis" and also because it often provides some aspect she might have overlooked.
She tends to avoid asking for big paragraphs because she doesn't trust it with facts and also dislikes editing out the "annoying" AI style or prompting for style rewrites. But she will feed it phrases from her own writing that have gotten too tangled, to have them simplified.
Also, she will vary the balance of AI vs. her own effort according to the nature of the task and her respect for the teacher or subject:
Interesting work from an engaging lecturer? Light LLM touch or none. Malicious make-work, or readable Lorem Ipsum where the point is the format of the thing rather than the content? AI pap by the ton for you. I find it healthy and mature.
I've always seen a dilemma with increased bureaucracy driven by corruption prevention:
Systems too focused on defeating corruption as a main objective tend to miss their original intent, and become overly restrictive, to the point of having to rely on rule breaking to actually perform their function.
But once a particular rule is OK to break, every other rule is in jeopardy.
This way you end up with systems like the Spanish process for access to public jobs: extremely punishing for the participants, who are subject to humongous, nonsensical competitive examinations, ostensibly to select the very best strictly on their merits, yet still rife with corruption.
Programming doesn't happen in a vacuum, and experience and institutional knowledge can account for many orders of magnitude of performance. A trivial example/recent anecdote:
The other day, two of our juniors came to see me; they had been stumped by the wrong result of a very complex query for 2 hours. I didn't even look at the query, just scrolled down the results for 10 seconds and instantly knew exactly what was wrong. This is not because I'm better at SQL than them or some Carmack-level talent. It's because I've known the people in the results listing for basically all my life, so I instantly knew who didn't belong there and very probably why he was being wrongly selected.
Trivial, but 10 seconds vs. 4 man-hours is quite the improvement.
https://en.wikipedia.org/wiki/Tit_for_tat
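For anyone who doesn't want to click through: tit for tat just cooperates on the first move and then mirrors whatever the other player did in the previous round. A toy sketch against an always-defect opponent, using the standard prisoner's dilemma payoffs:

    # Tit for tat in an iterated prisoner's dilemma: cooperate first,
    # then copy the opponent's previous move.
    def tit_for_tat(my_history, their_history):
        return their_history[-1] if their_history else "cooperate"

    def always_defect(my_history, their_history):
        return "defect"

    # Row player's payoff for (my_move, their_move).
    PAYOFF = {("cooperate", "cooperate"): 3, ("cooperate", "defect"): 0,
              ("defect", "cooperate"): 5, ("defect", "defect"): 1}

    def play(a, b, rounds=10):
        ha, hb, score_a, score_b = [], [], 0, 0
        for _ in range(rounds):
            move_a, move_b = a(ha, hb), b(hb, ha)
            score_a += PAYOFF[(move_a, move_b)]
            score_b += PAYOFF[(move_b, move_a)]
            ha.append(move_a)
            hb.append(move_b)
        return score_a, score_b

    print(play(tit_for_tat, always_defect))   # (9, 14): exploited once, never again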