Hacker News | edtechdev's comments

which explains why this tool requires a NEAR AI account to use


I mean, it's literally a repo belonging to NEAR AI.


I tried this out on huggingface, and it has the same issue as every other multimodal AI OCR option (including MinerU, olmOCR, Gemini, ChatGPT, ...). It ignores pictures, charts, and other visual elements in a document, even though the models are pretty good at describing images and charts by themselves. What this means is that you can't use these tools yet to create fully accessible alternatives to PDFs.


I have a lot of success asking models such as Gemini to OCR the text, and then to describe any images in the document, including charts. I have it format the sections with XML-ish tags. This also works for tables.
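A minimal sketch of that workflow in Python, with a mocked model reply (the tag names and the `extract_sections` helper are my own illustration, not any model's API):

```python
import re

# Prompt instructing a multimodal model to transcribe text and describe
# visuals, wrapping each section in XML-ish tags so the output is parseable.
OCR_PROMPT = (
    "Transcribe every piece of text in this document verbatim inside "
    "<text>...</text> tags. For each picture or chart, write a detailed "
    "description inside <figure>...</figure> tags. Render tables inside "
    "<table>...</table> tags, preserving rows and columns."
)

def extract_sections(model_output: str) -> dict[str, list[str]]:
    """Group the tagged sections of the model's reply by tag name."""
    sections: dict[str, list[str]] = {}
    for tag, body in re.findall(
        r"<(text|figure|table)>(.*?)</\1>", model_output, re.DOTALL
    ):
        sections.setdefault(tag, []).append(body.strip())
    return sections

# Example with a mocked model reply:
reply = "<text>Q3 revenue rose 4%.</text><figure>Bar chart of revenue by quarter.</figure>"
print(extract_sections(reply))
# {'text': ['Q3 revenue rose 4%.'], 'figure': ['Bar chart of revenue by quarter.']}
```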


Cognitive load isn't a valid or useful concept: https://edtechdev.wordpress.com/2009/11/16/cognitive-load-th... https://www.tandfonline.com/doi/full/10.1080/00131857.2024.2...

There are separate contexts involved here: the coder, the compiler, the runtime, a person trying to understand the code (context of this article), etc. What's better for one context may not be better for another, and programming languages favor certain contexts over others.

In this case, since programming languages primarily favor making things easier for the compiler and have barely improved their design and usability in 50 years, both coders and readers should employ third party tools to assist them. AI can help the reader understand the code and the coder generate clearer documentation and labels, on top of using linters, test driven development, literate documentation practices, etc.


The linked articles seem to primarily criticize three things about cognitive load theory:

- It's difficult to measure, and therefore hard or impossible to study empirically (the mark of a bad scientific theory).

- Its application to education and learning theory, where many other techniques are better proven.

- The idea that it's a primary mechanism of human learning, which a lot of research has shown otherwise.

Though those points seem valid, this article does not concern itself deeply with this concept. The phrases "mental strain" or "limited short-term memory" could have been substituted for "cognitive load", and the points raised would still be valid. In effect, the article argues we should minimize the number of things that need to be taken into consideration at any given point when reading (or writing) code. This claim is quite reasonable irrespective of the scientific basis of the cognitive load theory it takes its wording from.

So I don't think your criticism is entirely relevant to this article, but raising it does help inform others about issues with the wording used, should they want to learn more.


I think the criticism is relevant because TFA isn't the first to use the term "cognitive load" in the context of computing. It's a term thrown around quite often, so we should cross-reference its alleged meaning against the literature.

I myself find it to be a term that's effectively used as a thought-terminating cliche, sometimes as a way to defend a critic's preferred coding style and organization.


Hmm. Using a term from the formal science literature to loosely argue or back questionable arguments with the ruse of a scientific basis is a common issue. I pointed out that this article does not use the formal definition of the term; you point out that this is itself an issue. Put that way, I agree.

I think the article could have used a different term, or made a clearer declaration of what it specifically meant by the term, to resolve this issue. Though I don't think it was done intentionally to deceive, since the article makes no mention of the formal literature or theory of "cognitive load" to back its arguments.


Some research:

Students learn and understand college math more when the classes are contextualized (usually with engineering or biology, but everyday examples work too). See decades of research on situated learning and related approaches. https://careerladdersproject.org/docs/Contextual%20Approache...

Developmental courses can also be compressed https://blog.careertech.org/research-review-promising-practi...

Dual enrollment saves time and money and improves success. Let high school students take college math courses.

Corequisite remediation is the current best practice. Let students take regular (not remedial) math courses, and improve advising and support. https://ccrc.tc.columbia.edu/easyblog/future-of-corequisite-... https://strongstart.org/resource/corequisite-mathematics-too...


Contextualization was hugely important for me grasping math. I think one of the most dangerous things we do is relying on people who believe math is beautiful/interesting for its own sake to teach math.


There is that.

One approach to calculus is to teach it alongside Newtonian mechanics, with lots of experimental work. Those go together.

The problem is, it's too hard for high school teachers.[1]

[1] https://www.compadre.org/portal/pssc/pssc.cfm


My public high school offered a combined physics-math course (basically two different teachers and courses that coordinated with each other), and it was definitely an excellent way to learn calculus.

We learned derivatives for mechanics around the same time as limits for calc. So everything in calc was properly motivated. I think we moved into E&M around the same time as we got into integrals in calc. We had done basic integrals in the mechanics portion of physics, but got into it formally in calculus and into trickier applications in E&M.

IIRC, the class as a whole did very well on the AP exams. I’m often frustrated by courses that don’t offer similar motivation for math concepts. I think it makes the material far more interesting.


Nice. Once you see how acceleration, velocity, and position are related and how integration and differentiation describe them, what calculus is for becomes clear. After all, that's why Newton invented it. Not because he liked to sum infinite series.


This may also be true for some (admittedly also-mathy) Computer Science topics.


Good to see further development in this space. Would be interesting to see how it compares to Decap CMS https://decapcms.org/ and Static CMS https://www.staticcms.org/

Personally, I'd like to see something that makes it easy to create and use different types of objects besides pages (events, books, recipes, etc.), like content types, fields, and views in WordPress or Drupal, ideally aligned with schema.org like https://www.drupal.org/project/schemadotorg I think Hugo might support content types in YAML or something.


You can configure whatever content type you want with nested fields, lists, etc. [1]

Disclaimer: I used to work a lot with Drupal 10+ years ago. I more or less wanted the same kinds of features in Pages CMS.

[1]: https://pagescms.org/docs/configuration/
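For illustration, a content type for events might look roughly like this in a `.pages.yml` (the field names and types below are my own sketch; the configuration docs linked above have the exact schema):

```yaml
content:
  - name: events
    label: Events
    type: collection
    path: content/events
    fields:
      - { name: title, label: Title, type: string }
      - { name: date, label: Date, type: date }
      - { name: location, label: Location, type: string }
      - { name: body, label: Body, type: rich-text }
```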


Also try Tina CMS or ProcessWire.



I've done a lot of experimentation in this space with ChatGPT-4 and also the Wolfram plugin. I've had mixed but generally good results when working through basic physics problems, though you have to be careful about how you prompt. In particular, you want to break down the problem into smaller bite-sized chunks and eliminate ancillary information. Interestingly, even when it gets the math and algebra wrong, I still find it useful because it gives me hints about how to approach the exercise. Sometimes having several parallel conversations with the Wolfram plugin, for example, can set you on the right track. I expect there will be significant improvements in this arena in the short term.


That doesn't sound anything like a product suited for young learners, many of whom are unprepared to practice the finesse you're talking about and many of whom will want to put no more effort in than strictly necessary.

That sounds like something an especially patient autodidact might use to automate some busy work or help them explore the basics of a new topic, which is fine, but not what the article is trying to champion.


I believe your insights are accurate. Nevertheless, in the age of AI, it's evident that critical thinking remains indispensable. The value of a liberal arts education, which fosters the age-old practice of intellectual scrutiny, cannot be overstated, LLMs or not.


Archive version: https://archive.is/yNsyW


There are some activitypub apps that support nomadic identity like HubZilla and Streams: https://codeberg.org/streams/streams

Another non-activitypub alternative is nostr, where your identity is a public/private key pair: https://github.com/aljazceru/awesome-nostr
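As a rough sketch of how lightweight that identity is: a nostr private key is just 32 random bytes (what an `nsec` string encodes). Deriving the matching public key requires a secp256k1 library such as `coincurve`, which is omitted here:

```python
import secrets

# A nostr identity is a secp256k1 keypair. The private key is simply
# 32 random bytes, conventionally shown as 64 hex characters.
private_key = secrets.token_bytes(32)
private_key_hex = private_key.hex()

print(len(private_key_hex))  # 64
```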


MAA has a free, evidence-based instructional practices guide https://www.maa.org/programs-and-communities/curriculum%20re...


I'm not sure, but maybe GPT4All will eventually add support for it: https://gpt4all.io/


Educational researcher here. There's no such thing as a "science of reading." It's part of the highly politicized "reading wars" (see also the "math wars," which have been going on for decades). It's no coincidence that Republicans are pushing phonics as the end-all, be-all solution to teaching reading, and you can cherry-pick educational research studies that support or disconfirm various teaching strategies. Phonics has its place, contexts where it is appropriate and beneficial, but it is not the sole strategy that works or should be used in every context.

A recent meta-analysis https://www.researchgate.net/publication/338494581_Meta-Anal... and the What Works Clearinghouse have summaries of the evidence for different strategies for improving early reading skills: https://ies.ed.gov/ncee/wwc/practiceguide/14 Direct Instruction (also championed by one political side) is not an effective strategy: https://ies.ed.gov/ncee/wwc/EvidenceSnapshot/139

Here's just one post with a little more info on the political context: https://radicalscholarship.com/2023/01/14/does-the-science-o...

A bigger scandal is how states like Florida game the system to make their reading score rankings look higher: https://www.tampabay.com/opinion/2023/01/05/floridas-educati...

The good news is there are a lot of strategies that help with reading in various contexts. There's even research on how reading to dogs (or even robots) helps students with reading :)


I notice that the study you cite is measuring effectiveness in reading interventions. Obviously, that's where the data is coming from because we don't carefully track readers who learn successfully at a much earlier age.

However, I wonder if the ideal pedagogy would be different for younger students (maybe pre-K to 1st) who have less knowledge and smaller vocabularies? It's a bit tricky because a lot of the students who need intervention probably need remedial instruction in other areas too, but some of them may have been good students who struggled with reading.

