Hacker News | new | past | comments | ask | show | jobs | submit | sky2224's comments | login

Chegg is a service many students used to get guidance and answers to homework problems for whatever courses they were taking. It was a sinking ship once GPT-4 came out, and GPT-5 was the final nail in the coffin.

I don't know any student who really uses it now.


No kidding, it took my CPU usage from 1% to 55% instantly, sheesh.

> I am suspicious of grifters and would like to find trustworthy advice.

If you want actual good advice: go to a doctor.

Don't go to a chiropractor, don't go to hackernews. Go to a doctor. You can either start with a physical therapist in your area or start with your primary care doctor to get a referral.

I'm assuming you're in the US, so I know it's expensive, but this will genuinely shorten your lifespan if you let it get significantly worse.


I feel like people genuinely don't understand what vibe coding means.

Just because you're using an LLM doesn't mean you're "vibe coding".

I regularly use LLMs at work, but I don't "vibe code", which is where you just say garbage to the model and blindly click accept on whatever it spits out.

I design, think about architecture, write out all of my thoughts, expected example inputs, expected example outputs, etc. I write out pretty extensive prompts that capture all of that, and then request for an improved prompt. I review that improved prompt to make sure it aligns with the requirements I've gathered.

I read the output like I'm doing a deep code review, and if I don't understand some code I make sure to figure it out before moving forward. I make sure that the change set is within the scope of the problem I'm trying to solve.

Excluding the pieces that augment the workflow, this is all the same stuff you would normally do. You're an engineer solving problems, and the domain you do it in happens to involve software and computers.

Writing out code has always been a means to an end. The productivity gains, if you actually give LLMs a shot and learn to use the tools, are real. So yes, pretty soon it's going to become expected at most places that you use the tools, the same way you've been expected to use a specific language, framework, or any other tool that greatly improves productivity.


Can you provide an example of how you actually prompt AI models? I get the feeling the difference among everyone's experiences has to do with prompting and expectation.



I find that the default Claude Code harness deals with ambiguity best right now, thanks to its questionnaire system. You can pose the core of the problem first and then specify only the implementation details that matter.

I wasn't implying that clever prompting needed to be used. I'm just trying to confirm that the person I was replying to isn't just saying what essentially amounts to "build me X".

When I write my prompts, I literally write an essay. I lay out constraints, design choices, examples, etc. If I already have a ticket that lays out the introduction, design considerations, acceptance criteria, and other important information, then I'll include that as well. I then take the prompt I've written and ask the model to improve it. I'll also try to include the most important bits at the end, since right now models seem to focus more on things referenced at the end of a prompt than at the beginning.

Once I do get output, I then review each piece of generated code as if I'm doing an in-depth code review.
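The prompt-assembly part of that workflow can be sketched in a few lines of Python (the function and section names are my own, purely illustrative); the one structural trick is that the acceptance criteria go last, where models currently pay the most attention:

```python
def build_prompt(ticket: str, constraints: list[str],
                 examples: list[str], acceptance_criteria: list[str]) -> str:
    """Assemble a long-form prompt, saving the most important
    material (the acceptance criteria) for the end."""
    sections = [
        "## Ticket\n" + ticket,
        "## Design constraints\n" + "\n".join(f"- {c}" for c in constraints),
        "## Examples\n" + "\n".join(examples),
        # Models currently weight the end of a prompt more heavily,
        # so the acceptance criteria go last.
        "## Acceptance criteria\n"
        + "\n".join(f"- {a}" for a in acceptance_criteria),
    ]
    return "\n\n".join(sections)
```

A first draft built this way then gets handed back to the model with a request to improve it before any code is generated.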


No one is saying “build x” and getting good results unless they didn’t have any expectations to begin with. What you describe is precisely right. Using the agents requires a short leash, clear requirements, and good context management (docs, skills, rules).

Some people (like me) still think that’s a fantastic tool; some people either don’t know how to do this or think the fact that you have to means the tools are bunk. Shrug.


fwiw, regarding getting sucked into YouTube Shorts: if you turn off your watch history, YouTube refuses to let Shorts work. It will literally say, "turn on your watch history to continue with shorts".

I’ve had my YouTube History off since before shorts were a thing and never experienced what you described.

Weird. I went to the Shorts page just now and it says what I described.

I just tried Chrome and Firefox on desktop, and the iOS YouTube app. All of them show me a message saying "Recommendations are off. Your watch history is off, and we rely on watch history to tailor your Shorts feed. You can change your setting at any time, or try searching for Shorts instead."

I'll also clarify: sometimes if you click on a Shorts video that you searched for manually, a few related videos will be queued, but then the feed will dry up and the watch-history message will display again.

Do you have leftover watch history from years ago that you've never cleared? Maybe Shorts is enabled because it uses that...?


> "Recommendations are off. Your watch history is off, and we rely on watch history to tailor your Shorts feed. You can change your setting at any time, or try searching for Shorts instead."

That's strange, because I don't even log in and I still get Shorts.


Even if you told it not to do this, it likely just searched the web and pulled this from GitHub.

If you actually wanted to test this, you'd need to run Claude Opus 4.6 locally, which is not really possible.


I used the API and disabled websearch. I did not use the web interface. Even if I did, it would have shown some "searching" message.

You can configure web search when using the API, so this is actually possible to test.
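For what it's worth, here's roughly what that looks like at the request level (the field names reflect my reading of the Messages API, and the model id is a placeholder): web search is a server-side tool the caller has to opt into via a `tools` array, so a payload that omits it can't search.

```python
# Hypothetical Messages API request body; the key point is the
# absence of a "tools" entry, so no server-side web search can run.
payload = {
    "model": "claude-opus-4-6",  # placeholder model id
    "max_tokens": 1024,
    "messages": [
        {"role": "user", "content": "Answer from your weights only."}
    ],
    # No "tools": [...] entry here, so there is nothing to search with.
}

assert "tools" not in payload
```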

I've spent a good amount of time playing Downwell.

The Metal Slug games are also on the App Store. They're a bit hard to play, but you can set infinite lives, and it's a fun way to kill some time while waiting around somewhere.


What are the numbers on companies indirectly using Claude via GitHub Copilot? Given that OpenAI, and by extension Microsoft, has folded to the will of the US Govt, I suspect that all Claude models have a pretty non-zero chance of being pulled from Copilot very soon.

I know many companies using Copilot have no contract work with the government, but this is the Trump admin we're talking about. They want to send a message.


I'm a little bit confused about the development workflow with this. It seems like you've developed a system for the AI model to essentially emit steps it can follow to interact with your app's UI and return information about its state, but are these generated steps saved anywhere? How do you ensure consistency for the same prompt submitted twice? How does this fit into deployment pipelines?

Also, maybe I'm misunderstanding what this library is for, but is this really a good application of AI? I'm reminded of Gherkin + Cypress, but without the test actually being embedded in the code. I feel like I'd rather use AI to write the BDD test in Gherkin than prompt the AI to figure out what test behavior I actually want.

I realize I'm coming off a bit pessimistic here, but I'm just trying to explain where my thoughts are so you can hopefully clarify things for me, because I feel like I'm missing something about what you're trying to accomplish with this project.
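For anyone unfamiliar with the comparison: in that setup, a Gherkin scenario *is* the test, checked into the repo and run by the test runner, e.g. (contents purely illustrative):

```gherkin
Feature: Login
  Scenario: Signing in with valid credentials
    Given I am on the login page
    When I submit a valid username and password
    Then I should see my dashboard
```

That's the property I'd want to keep: the expected behavior lives in the code base rather than in a prompt.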

