I tried this out on huggingface, and it has the same issue as every other multimodal AI OCR option (including MinerU, olmOCR, Gemini, ChatGPT, ...). It ignores pictures, charts, and other visual elements in a document, even though the models are pretty good at describing images and charts by themselves. What this means is that you can't use these tools yet to create fully accessible alternatives to PDFs.
I have a lot of success asking models such as Gemini to OCR the text, and then to describe any images in the document, including charts. I have it format the sections with XML-ish tags. This also works for tables.
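A minimal sketch of what that tagging approach could look like. The prompt wording, the tag names (`<text>`, `<image>`, `<table>`), and the sample output are all my own illustration, not a fixed schema or a real model response:

```python
import re

# Hypothetical prompt asking the model to segment its OCR output
# with XML-ish tags, as described above.
PROMPT = """OCR every page of this document.
Wrap plain text in <text>...</text>, describe each picture or chart
inside <image>...</image>, and transcribe tables inside <table>...</table>."""

# Example of the kind of tagged output a model might return (invented here):
sample_output = """<text>Quarterly revenue rose 12% year over year.</text>
<image>A bar chart comparing revenue across four quarters; Q4 is tallest.</image>
<table>Quarter | Revenue
Q1 | 1.0M
Q4 | 1.4M</table>"""

def split_sections(tagged: str):
    """Split tagged model output into (tag, body) pairs."""
    # The backreference \1 ensures each opening tag is matched by the
    # same closing tag; DOTALL lets table bodies span multiple lines.
    return re.findall(r"<(text|image|table)>(.*?)</\1>", tagged, re.DOTALL)

sections = split_sections(sample_output)
```

Once the output is split this way, the image descriptions can be dropped into `alt` text or a transcript, which is the part the plain OCR tools skip.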
There are separate contexts involved here: the coder, the compiler, the runtime, a person trying to understand the code (context of this article), etc. What's better for one context may not be better for another, and programming languages favor certain contexts over others.
In this case, since programming languages primarily favor making things easier for the compiler and have barely improved their design and usability in 50 years, both coders and readers should employ third-party tools to assist them. AI can help the reader understand the code and the coder generate clearer documentation and labels, on top of using linters, test-driven development, literate documentation practices, etc.
The linked articles seem to primarily criticize three things about cognitive load theory:
- It's difficult to measure and therefore hard or impossible to study empirically (the mark of a bad scientific theory).
- Its application to education and learning theory, where a lot of other techniques are better proven.
- The idea that it's a primary mechanism of human learning, which a lot of research has contradicted.
Though those points seem valid, this article does not concern itself deeply with this concept. The phrases "mental strain" or "limited short-term memory" could have been substituted for "cognitive load", and the points raised would still be valid. In effect, the article argues we should minimize the amount of things that need to be taken into consideration at any given point when reading (or writing) code. This claim is quite reasonable irrespective of the scientific basis of CLT, from which it takes its wording.
So I don't think your criticism is entirely relevant to this article, but raising it does help inform others about issues with the wording used, if they happen to want to learn more.
I think the criticism is relevant because TFA isn't the first to use the term "cognitive load" in the context of computing. It's a term thrown around quite often, so we should cross-reference its alleged meaning against the literature.
I myself find it to be a term that's effectively used as a thought-terminating cliche, sometimes as a way to defend a critic's preferred coding style and organization.
Hmm. Using a term from the formal scientific literature to back questionable arguments with the ruse of a scientific basis is a common issue. I pointed out that this article does not use the formal definition of the term, which you point out is itself an issue. Put that way, I agree.
I think the article could have used a different term, or made a clearer declaration of what it specifically meant by the term, to resolve this issue. Though I don't think it was done intentionally to deceive, since the article makes no mention of the formal literature or theory of "cognitive load" to back its arguments.
Students learn and understand college math better when the classes are contextualized (usually via engineering or biology, but you can also use everyday examples). See decades of research on situated learning and related approaches.
https://careerladdersproject.org/docs/Contextual%20Approache...
Contextualization was hugely important for me grasping math. I think one of the most dangerous things we do is relying on people who believe math is beautiful/interesting for its own sake to teach math.
My public high school offered a combined physics-math course (basically two different teachers and courses that coordinated with each other), and it was definitely an excellent way to learn calculus.
We learned derivatives for mechanics around the same time as limits for calc. So everything in calc was properly motivated. I think we moved into E&M around the same time as we got into integrals in calc. We had done basic integrals in the mechanics portion of physics, but got into it formally in calculus and into trickier applications in E&M.
IIRC, the class as a whole did very well on the AP exams. I’m often frustrated by courses that don’t offer similar motivation for math concepts. I think it makes the material far more interesting.
Nice. Once you see how acceleration, velocity, and position are related and how integration and differentiation describe them, what calculus is for becomes clear. After all, that's why Newton invented it. Not because he liked to sum infinite series.
Personally, I'd like to see something that supports easily creating and using different types of objects besides pages (such as events, books, recipes, etc.), like content types, fields, and views in WordPress or Drupal, ideally aligned with schema.org like https://www.drupal.org/project/schemadotorg
I think Hugo might support content types in YAML or something.
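For what it's worth, Hugo does support this to a degree: the `type` front matter field selects a template directory, and arbitrary custom fields are available to templates as params. A sketch, with the file path and field names being purely illustrative:

```yaml
# content/events/launch-party.md (illustrative path and fields)
---
title: "Launch Party"
date: 2024-06-01
type: "event"       # Hugo looks up templates under layouts/event/
location: "Berlin"  # custom field, readable in templates via .Params
---
```

There's no built-in schema validation or views layer like Drupal's, though, so the schema.org alignment would still be up to the template author.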
I've done a lot of experimentation in this space with ChatGPT-4 and also the Wolfram plugin. I've had mixed but generally good results when working through basic physics problems, though you have to be careful about how you prompt. In particular, you want to break the problem down into smaller bite-size chunks and eliminate ancillary information. Interestingly, even when it gets the math and algebra wrong, I still find it useful because it gives me hints about how to approach the exercise. Sometimes having several parallel conversations with the Wolfram plugin, for example, can set you on the right track. I expect there will be significant improvements in this arena in the short term.
That doesn't sound anything like a product suited for young learners, many of whom are unprepared to practice the finesse you're talking about and many of whom will want to put no more effort in than strictly necessary.
That sounds like something an especially patient autodidact might use to automate some busy work or help them explore the basics of a new topic, which is fine, but not what the article is trying to champion.
I believe your insights are accurate. Nevertheless, in the age of AI, it's evident that critical thinking remains indispensable. The value of a liberal arts education, which fosters the age-old practice of intellectual scrutiny, cannot be overstated, LLMs or not.
Educational researcher here. There's no such thing as a "science of reading." It's part of the highly politicized "reading wars" (see also the "math wars," which have been going on for decades). It's no coincidence that Republicans are pushing phonics as the end-all, be-all solution to teaching reading, and you can cherry-pick educational research studies that support or disconfirm various teaching strategies. Phonics has its place, contexts where it is appropriate and beneficial, but it is not the sole strategy that works or should be used in every context.
The good news is there are a lot of strategies that help with reading in various contexts. There's even research on how reading to dogs (or even robots) helps students with reading :)
I notice that the study you cite is measuring effectiveness in reading interventions. Obviously, that's where the data is coming from because we don't carefully track readers who learn successfully at a much earlier age.
However, I wonder if the ideal pedagogy would be different for younger students (maybe pre-K to 1st) who have less knowledge and smaller vocabularies? It's a bit tricky because a lot of the students who need intervention probably need remedial instruction in other areas too, but some of them may have been good students who struggled with reading.