Massive plus one for Language Transfer. It's well presented, interesting, and kept me engaged. The whole concept is finding connections to languages you already know, and it gets you thinking in fuller, more complex thoughts and sentences really quickly.
The audio lessons are free on various podcast platforms / YouTube etc.
I've recently been enjoying Book Overflow https://www.youtube.com/@BookOverflowPod - they're reading and discussing really interesting (programming / software engineering) books, getting interviews with authors etc.
The AI tool's report shown to the dean with "85% match" will be used as "proof".
If you want more proof, you can take the essay, give it to ChatGPT, and say, "Please give me a report showing how this essay was written by an AI."
> If you want more proof, you can take the essay, give it to ChatGPT, and say, "Please give me a report showing how this essay was written by an AI."
And ChatGPT will happily argue whichever side you want to take. I just passed it a review I wrote a few years ago (with no AI/LLM or similar assistance), with the prompts "Prove that this was written by an AI/LLM: <review>" and "Prove that this was written by a human, not an AI/LLM: <review>", and got the following two conclusions:
> Without metadata or direct evidence, it is impossible to definitively prove this was written by an AI. However, based on the characteristics listed, there are signs that it might have been generated or significantly assisted by an AI.[1]
> While AI models like myself are capable of generating complex and well-written content, this specific review shows several hallmarks of human authorship, including nuanced critique, emotional depth, personalized anecdotes, and culturally specific references. Without external metadata or more concrete proof, it’s not possible to definitively claim this was written by a human, but the characteristics strongly suggest that it was.[2]
I think what you pointed out is exactly the problem. Administrators apparently don’t understand statistics and therefore can’t be trusted to utilize the outputs of statistical tools correctly.
I just played with townie AI for an hour or so... Very cool! Very fun.
There are still some glitches: occasionally the entire app code would get replaced by just the function the LLM was trying to update. I could fix it by telling it that's what had happened, and it would then fill everything in again... but waiting for the entire app to be rewritten each time was a bit annoying.
It got the initial concept of the app running very quickly, but then struggled with some CSS, saying it would try a different approach or apologising for missing things repeatedly... Eventually it told me it would try more radical approaches and wrote inline styles. I wonder if the single-file approach has limitations in that respect.
Very interesting, very fun to play with.
I'm kind of concerned about security in LLM-written apps - you can ask it to do things and it says yes, without really considering whether it's a good idea or not.
But cool!
And the more things that help fill the internet with small, independent, quirky, creative ideas, the better.
> I'm kind of concerned about security in LLM-written apps - you can ask it to do things and it says yes, without really considering whether it's a good idea or not.
Well, right. If I'm using an LLM to create code, I'm going to use all my skill and experience to review and shape the code to standards I'm ok with.
But for people with extremely limited experience, LLMs offer "create an app by talking!" - zero understanding required. So they won't know not to leak user PII in JSON responses, not to expose publicly writable endpoints, or to keep private keys for external services server-side and out of the code base... let alone anything more complex.
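To make the PII point concrete, here's a minimal hypothetical sketch (not from any real app): a naive handler serializes the whole user record, while a safer one whitelists only the fields meant to be public.

```python
import json

# Hypothetical user record, as an LLM-built app might store it.
user = {
    "id": 42,
    "name": "Ada",
    "email": "ada@example.com",       # PII
    "password_hash": "...",           # must never leave the server
    "stripe_secret_key": "sk_live_x", # external-service key, ditto
}

def naive_response(record):
    # Naive: serialize the whole record, leaking PII and secrets.
    return json.dumps(record)

# Safer: an explicit whitelist of public fields.
PUBLIC_FIELDS = {"id", "name"}

def safe_response(record):
    return json.dumps({k: v for k, v in record.items() if k in PUBLIC_FIELDS})
```

The field names here are invented for illustration; the point is that an inexperienced builder won't think to ask for the whitelisting version, and the LLM won't volunteer it.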
Also rich patrons supporting artists was how many of the greatest artworks of many civilisations were commissioned for thousands of years...
Is writing software closer to growing potatoes, designing the Sagrada Família, or writing the Clarinet Concerto?
I'd argue that writing a web browser is a lot closer in scope to writing a symphony than to building a house - at least in audience and durability. In a house's lifetime, maybe 100 people live in it, and perhaps 2000 people visit it, but a browser, or a symphony, will have an audience of millions.
The market is massively smaller. The impact massively larger.
Ask it to write a sitcom screenplay about a software team facing exactly this same situation, but where it goes disastrously wrong. Give the prompt enough info to make it parody your own company. Make sure it's not offensive.
Then use an AI image generator to generate cartoon pictures for the characters in the screenplay.
Then use an AI voice system to generate all the voices for it.
Then put it all together as a video and present it as an early storyboard concept, pitching it to them for funding to turn it into a full TV series.
This unfortunately has `docstring-to-markdown` as one of its dependencies, which is a pain to install behind corporate pip proxies because it's licensed under the GPL.
In the interim, check out basedpyright [1]. It's an up-to-date fork of pyright with some improvements, fewer arbitrary limitations, and it does not require npm to install.
I was thrilled to learn about basedpyright recently. It does a great job of filling in some of the parts of pyright that MS deemed a better fit for Pylance, which is a VS Code exclusive.
Easy to install with pipx.
As with pyright, I’ve noticed `--createstub` helps against slowness when working in modules that import large untyped packages.
Which is a very sensible decision given that VS Code is written in TypeScript, and it also means it can run on the web. Also, TypeScript is a much nicer and faster language than Python.