Ask HN: Transitioning from Web Development to AI–Where to Start?
2 points by connectsnk 4 months ago | 3 comments
I've been a web developer my whole life and currently work in a support role at a FAANG-like company. I want to move into AI but don’t know where to start or which specialization to focus on.

One thing I’m sure of: I don’t want to develop my own models but rather use existing ones effectively. Given my family responsibilities and where I am in life, I can’t afford to take a deep dive into advanced math.

How can I leverage my web dev and system design experience to transition into AI? What paths should I explore that don’t require heavy theoretical math but still let me work with AI meaningfully? Any practical wisdom or real-world advice would be greatly appreciated!

Here is a job description I copied from one of the job postings. I'm hoping to build a similar knowledge base:

- Experience with model fine-tuning and/or implementing RAG (Retrieval Augmented Generation) systems in production
- Experience with LLM frameworks such as LangGraph, LangChain, LlamaIndex, or similar orchestration tools
- Familiarity with different LLM providers (OpenAI, Anthropic, etc.) and their APIs
- Familiarity with the use of Open LLMs, either self-hosted or through a third-party provider, e.g. Amazon Bedrock
- Knowledge of LLM output validation and safety measures
- Experience with embeddings and semantic search implementations (a minimal sketch of this follows below)
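For the last item, here's a minimal sketch of embeddings-based semantic search using the OpenAI Python SDK. The model name, documents, and query are placeholders of mine, not anything from the posting:

    # Minimal semantic-search sketch: embed documents, embed the query,
    # rank by cosine similarity. Assumes OPENAI_API_KEY is set.
    import numpy as np
    from openai import OpenAI

    client = OpenAI()

    documents = [
        "How to reset your password",
        "Refund policy for annual plans",
        "Setting up two-factor authentication",
    ]

    def embed(texts):
        # One embedding vector per input string.
        resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
        return np.array([d.embedding for d in resp.data])

    doc_vectors = embed(documents)

    def search(query, top_k=2):
        q = embed([query])[0]
        # Cosine similarity between the query and every document.
        scores = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
        best = np.argsort(scores)[::-1][:top_k]
        return [(documents[i], float(scores[i])) for i in best]

    print(search("I forgot my login"))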




I'm also a career web application developer with a strong interest in AI, and I think you're onto something. AI apps today are where web apps were in 1995, and there's lots of opportunity to contribute to new kinds of apps and use cases.

I started out poking around with custom LLM tools and RAG about 2 years ago and have written a series of articles[0] on Medium that might give you some good working vocabulary as well as help you pick a specialization.

The TL;DR would be to think about how the web apps you already know how to build could benefit from LLM and RAG integrations, then build those yourself using the API from a commercial model (like OpenAI). Credits are cheap enough that you can build several apps (locally, on whatever web foundation you like) and test them out while you refine your skills.
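For concreteness, here's a minimal sketch of what "using the API from a commercial model" can look like with the OpenAI Python SDK. The model name and prompt are just my assumptions; a real app would wrap this in whatever web framework you already use:

    # One round-trip to the chat completions endpoint.
    # Assumes OPENAI_API_KEY is set in the environment.
    from openai import OpenAI

    client = OpenAI()

    def summarize(text: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system", "content": "Summarize the user's text in two sentences."},
                {"role": "user", "content": text},
            ],
        )
        return resp.choices[0].message.content

    if __name__ == "__main__":
        print(summarize("Paste any article text here..."))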

[0] Something from Nothing: A Painless Approach to Understanding AI https://medium.com/gitconnected/something-from-nothing-d755f...


Thanks for your response. How do AI engineers ensure LLM output validation and safety measures?


That's a huge topic. The short answer is that you can't control the output of the LLM. The idea of RAG is that, by inspecting the output of the LLM, you can use it to trigger tools ("tool calling") that pull supposedly correct data from the real world (like a database). That code, which is not a model but traditional programming code in a normal language, must be the arbiter of what is allowed in and out. The LLM's output is always statistical and never fully reliable or controllable.
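A rough sketch of that arbiter pattern using the OpenAI tool-calling API; the tool name, schema, and account check are hypothetical. The point is only that ordinary application code decides whether the model's proposed call actually runs:

    # The model only *proposes* a tool call; the app validates and executes it.
    import json
    from openai import OpenAI

    client = OpenAI()

    tools = [{
        "type": "function",
        "function": {
            "name": "get_account_balance",  # hypothetical tool
            "description": "Look up the balance for an account ID",
            "parameters": {
                "type": "object",
                "properties": {"account_id": {"type": "string"}},
                "required": ["account_id"],
            },
        },
    }]

    def get_account_balance(account_id: str) -> float:
        # Ordinary application code: real validation and authorization
        # happen here, not inside the model.
        if account_id != "acct_123":  # e.g. check it belongs to the signed-in user
            raise PermissionError("Not your account")
        return 1042.57

    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": "What's my balance on acct_123?"}],
        tools=tools,
    )

    msg = resp.choices[0].message
    if msg.tool_calls:  # the model asked to call a tool
        call = msg.tool_calls[0]
        args = json.loads(call.function.arguments)
        if call.function.name == "get_account_balance":  # the app is the gatekeeper
            print(get_account_balance(**args))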

A banking application is a good example. You might have a chat box that allows the customer to write "Transfer $1M by Zelle to Linda Smith." The LLM would probably return the correct tool call, but your actual app would not transfer funds you don't have, and that is what provides the "safety."
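A hypothetical guard for that banking example: whatever transfer the chatbot proposes, plain application code enforces the balance and limit checks.

    # These checks run no matter what the chatbot said.
    from dataclasses import dataclass

    @dataclass
    class Account:
        id: str
        balance: float

    PER_TRANSFER_LIMIT = 10_000.00  # made-up policy limit

    def execute_transfer(account: Account, amount: float, recipient: str) -> str:
        if amount <= 0:
            return "Rejected: amount must be positive."
        if amount > PER_TRANSFER_LIMIT:
            return "Rejected: exceeds per-transfer limit."
        if amount > account.balance:
            return "Rejected: insufficient funds."
        account.balance -= amount
        return f"Sent ${amount:,.2f} to {recipient}."

    # The LLM might propose {"amount": 1_000_000, "recipient": "Linda Smith"};
    # the guard, not the model, decides the outcome.
    print(execute_transfer(Account("acct_123", 5_000.00), 1_000_000, "Linda Smith"))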



