Simplifying programming with AI-tutors (edmigo.in)
48 points by sayonidroy 16 days ago | hide | past | favorite | 60 comments



The main issue with LLM-powered learning is that you need at least a surface-level understanding of the topic to recognise if the output is wrong.

That requires some basic knowledge, a specific mental model for interacting with LLMs, and a modicum of good taste in code to be able to correct for the code smells that LLMs fall victim to.

You can never let the LLM take the wheel; you need to be the driver who knows what you want. This works if, for example, you know how to do something in language X but need it in language Y, but it doesn't work if you have never been able to solve the problem before.

For something like leetcode, however, which is far from novel, there are so many correct answers in the training data that the risk is low.


I am so glad I got my basics in before this AI era, I feel sorry for the folks who are just learning to code now. AI can do all the basic tasks and frameworks so it will probably feel so futile for them to try to learn it all by themselves.


This.

I don't know why, but I find it easy to imagine a future where real programmers who understand both the craft and the art from its lowest fundamentals to the highest levels of abstraction are the equivalent of https://en.wikipedia.org/wiki/Low-background_steel.


I'm seeing this play out among young people I know who were trying to improve their English writing skills. Everything they post now in any formal or semi-formal context is via ChatGPT. Their self-confidence has taken a nosedive.


I feel programming is more about learning how to think and break problems down into smaller parts. That skill should always be useful.


i think textbooks are still good


It's like the skill needed for calculator usage. You need to be able to quickly estimate the rough order of magnitude, and the first and last couple of digits, to check that the result "looks right". Using a calculator to do 100 quick sums is a different skillset from doing 10 by hand.
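A quick sketch of that sanity-check habit in plain Python (the numbers here are made up for illustration): estimate the ballpark first, then only trust the exact figure if it lands inside it.

```python
import math

# 100 quick "sums": values that are all roughly 400 each (made-up data).
values = [387, 412, 395, 408, 391] * 20

exact = sum(values)            # what the calculator says
estimate = 400 * len(values)   # the mental ballpark: ~400 each, 100 items

# Same order of magnitude, and within a few percent of the estimate?
same_magnitude = math.floor(math.log10(exact)) == math.floor(math.log10(estimate))
close_enough = abs(exact - estimate) / estimate < 0.05
print(exact, estimate, same_magnitude and close_enough)
```

If the exact result fails either check, you mistyped something; that meta-check is the skill the calculator doesn't replace.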


Very very good point and often understated imo.

Yes, LLMs are okay-ish at JavaScript and Python, but if I ask one to generate some VHDL, it often just produces plain garbage. And without a basic understanding of hardware design you wouldn't even know what to look out for. It's just not possible to learn anything from it.


> you need to have at least a surface level understanding of the topic to recognise if the output is wrong.

Often overlooked point. The only reason I even consider LLMs a useful tool for my work is because for the things I request from them I do have the prerequisites to validate their output.


The classic "How Many R's in 'Strawberry'?" LLM test is amusing to students...

Still useless for trying to understand subtle complex concepts not previously well-defined in the data sets.

The numerous slop articles posted on YC that obviously were partially written with LLM help often make the alleged authors sound like they had a stroke. lol =3
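For reference, the "strawberry" check itself is trivial in code; the joke is that tokenization makes it hard for the model, not for the computer:

```python
# Count the letter the models famously miscount.
word = "strawberry"
print(word.count("r"))  # prints 3
```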


Agreed. But LLMs are pretty good at answering questions and helping students understand concepts from fixed datasets, i.e. courses. That is what we're trying to achieve at Edmigo.


Passively cramming information won't make a lesson stick for more than 3 months. Additionally, only 1/17 students have the discipline to succeed with self-directed studies.

I remain skeptical that the current well-formatted LLM nonsense will help students, and have concerns it may hinder long-term information recall.

We shall soon see I guess... Best of luck, =3


I totally agree that passive learning is not useful. So Edmigo uses a hands-on learning approach where students solve problems on their own alongside an AI guide


Curious where you found the 1/17 statistic?


It is an internal metric from a college that offers kids GED/remedial-high-school-level math prep classes for remote students. There are various reasons a student may be unable to physically attend classes, and remote 1-on-1 instruction is not in the budget. The rate for curriculum completion is a very low 1:17 for self-directed study.

The learning outcomes for students are often terrible on this path, though perhaps profitable if you charge for the tutoring service, I guess. Attrition rates are not something institutions proudly promote in public.

YC seems to down-vote anything that deals with a data-driven reality. lol =3

https://www.youtube.com/watch?v=GYh7smM6YpM


Interesting points. But not every student can afford or has access to group studies or human tutors, right?


In my country it is the law that the state must educate you to at least high-school level free of charge, regardless of the challenges one may face along the way (this protects those who are autistic, blind, deaf, poor, paralyzed, or incarcerated, etc.)

In introductory and undergraduate academics, the departments will usually offer unstructured TA lab hours for those who seek additional help understanding material. However, the reality is "A" level students do not require good teachers, as their grades are usually not correlated with instructor proficiency (often due to a 3 year head start from out-of-band after-school tutoring services prior to the conclusion of grade school.)

The danger of LLMs contaminating long-term memory recall through an erroneous primacy effect is a concern.

Have a nice day, =3


LLMs kind of are, but imo it never meets up for a


Very valid points. We've mitigated these issues by providing content we created ourselves and by running other pipelines to validate the tutor's answers.


Someone out there apparently is in a hell of a hurry to get rid of all the pesky professionals who are standing in the way of their digital aspirations.

Little do they stop to consider the fact that most of the time they don't have an effing clue what they want until someone conceptualizes and builds it for them.

If I simply built what people asked for, as opposed to trying to understand their needs and problems, that would lead to very awkward situations once someone actually tried to use the system.


Sorry for the very basic question, but, is DSA, in this context, Disabled Students' Allowance?


Data Structures and Algorithms.

Very common acronym here in India. All CS "students" just want to "crack" "DSA".

I hate this terminology and more so the underlying idea and motivation behind doing all this.

That CS is a rote mechanical thing to do for the end goal of getting a job.


I've yet to come across a "leetcoder" type who can write even basic code without constant help, even if it's mostly algorithmic in nature.


Data Structures and Algorithms


Using LLMs to learn about a topic you're not familiar with is very dangerous.


I tried learning PyTorch with ChatGPT and it's not so bad. It explains certain topics and provides examples. And if you run it side-by-side with a Python shell and type the examples yourself, you can see for yourself whether it hallucinates or not.


I've done the same, and it's decent, but in ML the minor bugs that ChatGPT introduces were awful for me to debug. It doesn't fail catastrophically; it just gave me really bad models. I basically got poor performance at the end for all of my models and didn't learn enough from it to debug the issues. At the end of the day, PyTorch was way simpler to just learn. Have you gotten good results in any models using ChatGPT with PyTorch?
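A minimal sketch, in plain Python rather than PyTorch, of the kind of silent bug being described here: forgetting to reset the accumulated gradient each step (the analogue of omitting `optimizer.zero_grad()`). The toy model and numbers are my own; nothing crashes, the model is just quietly bad.

```python
# Fit w in y = w*x to data generated from y = 2x, by gradient descent on
# squared error. One loop resets the gradient each step (correct); the
# other keeps accumulating stale gradients (the silent bug).
data = [(x, 2.0 * x) for x in (1.0, 2.0, 3.0)]
lr = 0.01

def train(reset_grad, epochs=100):
    w, grad = 0.0, 0.0
    last_epoch_err = 0.0
    for epoch in range(epochs):
        for x, y in data:
            if reset_grad:
                grad = 0.0                  # fresh gradient each step
            grad += 2.0 * (w * x - y) * x   # d/dw of (w*x - y)**2
            w -= lr * grad
            if epoch == epochs - 1:
                last_epoch_err = max(last_epoch_err, abs(w - 2.0))
    return w, last_epoch_err

w_ok, err_ok = train(reset_grad=True)
w_bad, err_bad = train(reset_grad=False)
print(f"with reset: w = {w_ok:.4f}")   # converges to ~2
print(f"no reset:   w = {w_bad:.4f}")  # oscillates, never settles near 2
```

Both runs finish without errors; only checking the learned weight against what you expect reveals the bug, which is exactly why these are so painful to debug while still learning the framework.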


Agreed! That's why Edmigo is backed by content we hand-crafted


I would not recommend that beginners use LLMs to learn to code.

LLMs are most helpful for developers who have already learned the craft.

Learn from CS50, understand the fundamentals well.

Then use Cursor or LLMs heavily, since you know what's going on.


We have high-quality materials that learners can learn from, very similar to CS50. The LLM is there to resolve doubts and help learners when stuck, which is currently impossible with static resources like CS50. Thoughts?


I get the idea, and I've considered using AI similarly, but one thing I realized: being stuck is often when you learn the most. (And figuring out how to un-stick yourself is one of the most enjoyable parts of engineering.)

Having the immediate aid of an LLM in a lot of ways also hurts your reasoning capabilities, in my experience. I'm curious how you would address this.


I believe that held in a pre-AI world. With AI you don't expect developers to dig through thousands of websites to find out why something isn't working; I will just ask ChatGPT because it saves me time. In the context of learning, students usually look at answers directly because they don't have many other options currently. With Edmigo, we're trying to give them hints instead of telling them the answer directly, so they can learn to solve problems on their own.


If you can't figure it out without the help of an LLM, you're gonna be in a world of hurt in a real world application where you're dealing with a mature codebase and aren't playing with toys you built yourself. Part of being a quality developer is learning how to get unstuck on your own.


As someone currently teaching CS50 at a high school, I can say the Harvard team has incorporated an AI into the CS50 learning process, albeit with helpful 'gates' to keep it educational.


The more I use this stuff, the less I am convinced it can be used as a tutor. Actually, I think the reverse is true. It could be used to generate bad code for code reviewers to spot!

Just recently I gave it an 80-line header-only C++20 code snippet to comment on. As the first item, it concluded that my header should be split into a .hpp/.cpp combination. And it proceeded to take one of my template functions and move the definition into the .cpp file... This is wrong and broken, because a template's definition must be visible to the translation units that instantiate it; now nobody can call that template.

And that is with gpt-4o, the latest flagship thingy... I am sure Claude users will now tell me their pet is more perfect.

Frankly, I think we are trying to beat a dead horse here. This is not helpful.


Gpt-4o, in my unprofessional opinion, is miles worse than the old gpt-4. I believe it has higher scores on benchmarks, but only because it spits out paragraphs and paragraphs of useless information.


>And that is with gpt-4o, the latest flagship thingy... I am sure Claude users will now tell me their pet is more perfect.

It's definitely much better than GPT-4o for most programming tasks, yes. And that's just the free version.


We've tried to fix these issues with Edmigo. You can give it a try!


I disagree with the title. Programming isn’t learning DSA.


We're starting with DSA. But we are planning to expand to other programming concepts soon


Online courses have stayed the same for decades. You read blogs, watch videos and you learn. That's it! But god forbid you have a doubt --- brace yourself to sift through endless comments and discussions.

We at Edmigo deeply believe that this should change!

So, we've built an AI-tutor powered comprehensive DSA course at edmigo.in

In addition to quality materials, you get a personal tutor that can resolve all your unique doubts and help you code on LeetCode.

This is our first attempt to figure out if we can simplify online education. If you also believe that learning should not be this difficult, please check Edmigo out and give us some feedback.

Thanks :)


Having done the MIT and Harvard CS courses on edX.org, I have to say that any course that doesn't put you in a GitHub Codespace with automated tests is a bad course.

I did them because I'm an external examiner for CS students and wanted to brush up on all the stuff you learn during your first years, and I was blown away by how good they were. It has been a long time, though, so maybe they've changed.

I’d worry about AI tutors considering how often they get things blatantly wrong. In our internal statistics and analytics on AI assisted programming it’s a very bad option for “junior” programmers. Basically it reduces productivity by quite a lot, and it also produces a lot more pull request rejections due to really bad code. On the flip-side, things like Copilot make experts in their field of programming sooooo much more efficient.

What is really worrying, though, is when LLMs get explanations completely wrong. That means they are teaching the actual engineering wrong, and it can be years before it gets “fixed”, if it ever does. I’ve worked in plenty of places where people would never be challenged on wrong knowledge, often because their co-workers simply didn’t know any better either.


> In our internal statistics and analytics on AI assisted programming it’s a very bad option for “junior” programmers. Basically it reduces productivity by quite a lot, it also produces a lot more pull request rejections due to really bad code. On the flip-side, things like co-pilot makes experts in their field of programming sooooo much more efficient.

There's been a study already: "AI"-assisted beginners learned about nothing compared to the control group of beginners. I think it was linked here on HN.

An LLM might help if it doesn't give you code but only answers short questions. Unless it's only as good as those support chatbots.


I must not have expressed myself clearly enough; sorry, English isn't my first language. What I meant was actually the opposite of what you said. AI is fancy auto-complete for programmers who know when to ignore it. It's terrible at answering questions; it will tell you the most amazing things about how computers work. I know it's not that different from Google programming, probably even more accurate, but the key difference we've found is that people tend to trust the AI more than what they find on Google.


We totally agree that AI giving you coding solutions doesn't help. So Edmigo never provides solutions unless explicitly requested. Instead it gives hints to help learners think.


It would get banned from StackOverflow then :)


Yeah but this is really a cottage industry. If companies didn't expect applicants to grind leetcode this ai tutor wouldn't exist. The tutor isn't there to actually teach or innovate education in any way (as the authors claim in another post). It's there to reduce the friction between unemployment and employment.


> brace yourself to sift through endless comments and discussions.

That's half the fun!


Haha! That's an interesting way to look at it


i've done a few on coursera, and they've added an AI to talk with, which seems OK.

can't say much about other sites.

GL with your course :)


Thanks for the feedback


What if it teaches some wrong content due to hallucination?


We have provided the LLMs with reference materials we created ourselves. Additionally, we are running validators to check the output. I agree that the LLM will still not be 100% accurate, but it should work pretty well.


If you have validators check that the output is near 100% accurate, you could just cut out the AI and just use the validators as tutors.


NextJS undefined error in the console log.


Oh! Can you please share the error?


I sometimes pasted my code into an LLM to check if it could be simplified. Sometimes it worked, sometimes not; sometimes it gave valid points.

Once it showed me a really nice Beautiful Soup optimization; I didn't know I could cast the soup to a string, or something.

This was obviously done on my private code. Legal issues more often prohibit the use of AI.


Yeah this is something we also experimented with. We realized we needed to configure the LLM for very specific use cases for it to work as expected


Loved the idea. I think the ability to solve problems on Edmigo itself would be great.


Thanks suryo



