Ask HN: What projects are you creating with AI?
10 points by tmaly on April 20, 2023 | 16 comments
ChatGPT is a huge trend in the news.

But if you dig deeper, there are communities like Hugging Face creating all this amazing stuff for AI.

People are building on top of what they have created.

What projects are you creating or have created with AI? Can you share a link if the project is public?




I wrote a CLI that gives GPT a containerized shell with access only to the current working directory. I use it for file conversions and things like that. It will install the software it needs into the container.

https://github.com/drifting-in-space/botsh
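
A minimal sketch of the general idea, not the botsh implementation: the model proposes a single shell command, which is then run in a throwaway Docker container with only the current directory mounted. The model name, prompt, and base image are assumptions.

    # Rough sketch of "give the LLM a sandboxed shell"; not botsh's actual code.
    # Assumes the pre-1.0 openai Python package and a local Docker daemon.
    import os
    import subprocess
    import openai

    def run_in_sandbox(task: str) -> str:
        # Ask the model for one shell command that accomplishes the task.
        resp = openai.ChatCompletion.create(
            model="gpt-4",
            messages=[
                {"role": "system", "content": "Reply with one POSIX shell command, nothing else."},
                {"role": "user", "content": task},
            ],
        )
        command = resp["choices"][0]["message"]["content"].strip()

        # Run it in a disposable container that can only see the current working directory.
        out = subprocess.run(
            ["docker", "run", "--rm",
             "-v", f"{os.getcwd()}:/work", "-w", "/work",
             "ubuntu:22.04", "sh", "-c", command],
            capture_output=True, text=True,
        )
        return out.stdout or out.stderr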


cool stuff


I've been working on www.revision.ai since 2019

It's a study tool that turns documents/videos into fun flashcards, automatically.

Originally it used more traditional techniques, but from 2019 to 2021 it was built heavily on the Hugging Face ecosystem, with things like fine-tuned T5 models. That allowed us to create powerful flashcards for certain subjects, like psychology, more accurately than competitors, especially with specific document analysis tech (think OCR on steroids). ChatGPT really replaced that part though, and smaller models for things like identifying acronyms just don't make sense these days.

I have also worked with embeddings to automatically plan lessons. I think there's potential there. The problem is overcoming the human-crafted quality bonus, and now, with big players in edtech able to easily integrate ChatGPT into existing content, the competitive landscape is much tougher than when ML gave you a 6-month to 3-year lead. Art, creativity and innovation pale in comparison to distribution.


Where do you see tutoring going with AI, or even the ability to expand on topics a student would have trouble with in a regular set of lessons?


Basically I see three main directions: A) The natural conversational interface, with callbacks to earlier in the conversation, sets up the precursor for widespread customised oral exams.

At least for serious learners, this is a genuine way to test skills despite AI cheating options, and it harbours potential for skill upkeep too[1]. I have been predicting this for a long time now (2015+), and at recent tech conferences it's become clear this is beginning to happen.

B) Personalised addressing of knowledge gaps and interests can improve both motivation and success for students. An AI tutor can now robustly head towards Bloom's 2-sigma effect for learning (i.e. that private/small-group learning is much more effective than class-wide learning, largely due to adaptivity). The larger potential is to do class-wide work in which you identify widespread misunderstandings (e.g. Veritasium's PhD dissertation on using videos with misconceptions to teach better) and enable students to tackle them together[2].

C) Using AI to leverage a student's understanding of one topic/area (such as ideas from a game they love, like Red Alert 2) to teach another one (like how cells interact with threats), through things like comparisons and metaphors. This can be truly powerful for certain student groups.

An alternative here is using image generation AI to show certain concepts on a rolling, scroll-to-see kind of basis to try and tap into that human psychology. I was too early with this, as I spent a lot of time with GANs/early Diffusion when the tech was a bit naff. I did win 2nd place at a Cohere hackathon with "Learn Visually", which explored spatially-aligned AI-generated images, but like I said, that was before the real revolution, so the quality wasn't quite there.

[1] One of my controversial opinions is that most people are capable of 2-sigma+ performance on many skills, but fail to upkeep their knowledge or tackle problems regularly enough to ever obtain it, instead tackling problems repetitively or with cognitively-easy techniques, giving us a distorted view on human potential. Summer is particularly damaging.

[2] Because of social pressures to work and compete


I just started my first AI experiment yesterday; you can find it at https://doogle.app. Right now it forwards you to other AIs or services based on your request, but early next week it will go a bit deeper and will automatically create Quests (think forms, workflows and checklists) for the enquiries where it makes sense. You'll be able to either go through these Quests yourself, assign them to somebody you know, or publish a Quest for somebody else to get it done for you. You can also attach a cash reward that the person finishing the Quest can then either pay out in cash, redeem as a gift card, or donate!
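
For anyone curious what request routing like that might look like under the hood, a toy sketch (the destination list, prompt and model are made up for illustration; this is not doogle.app's code):

    # Toy request router in the spirit of "forward the user to the right AI/service".
    import openai

    DESTINATIONS = ["image generation", "code help", "travel booking", "general chat"]

    def route(request: str) -> str:
        resp = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system",
                 "content": "Classify the user's request into exactly one of: "
                            + ", ".join(DESTINATIONS) + ". Reply with the category only."},
                {"role": "user", "content": request},
            ],
            temperature=0,
        )
        choice = resp["choices"][0]["message"]["content"].strip().lower()
        return choice if choice in DESTINATIONS else "general chat"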


Smart RSS reader with content-based recommendation. The main model I am running is “make an embedding and apply classical ML” and I have that completely industrialized. The MiniLM embedding also clusters documents with the greatest of ease, whether you are looking for broad clusters (put Ukraine articles together) or narrow clusters (cluster the 3 articles about the Arsenal game from last week into one.)

I've sporadically gotten a fine-tuned transformer model to outperform it, but I don't have that industrialized yet; I expect to.
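
A bare-bones sketch of that "make an embedding and apply classical ML" pattern, assuming sentence-transformers for MiniLM and scikit-learn (not the poster's actual pipeline):

    # Embed articles with MiniLM, then use plain scikit-learn on top:
    # a classifier for recommendations and agglomerative clustering for grouping.
    from sentence_transformers import SentenceTransformer
    from sklearn.linear_model import LogisticRegression
    from sklearn.cluster import AgglomerativeClustering  # metric= needs sklearn >= 1.2

    model = SentenceTransformer("all-MiniLM-L6-v2")

    articles = ["Ukraine update ...", "Arsenal match report ...", "New LLM released ..."]
    labels = [0, 0, 1]  # e.g. 1 = user clicked/liked, 0 = ignored

    X = model.encode(articles)  # one dense vector per article

    # "Classical ML" recommender: any off-the-shelf classifier over the embeddings.
    clf = LogisticRegression(max_iter=1000).fit(X, labels)
    scores = clf.predict_proba(model.encode(["Another LLM benchmark story"]))[:, 1]

    # Clustering: loosen/tighten distance_threshold for broad vs. narrow clusters.
    clusters = AgglomerativeClustering(
        n_clusters=None, distance_threshold=1.0, metric="cosine", linkage="average"
    ).fit_predict(X)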


I'm currently creating a lot of private chatbots for people and organizations I follow online. I am scraping their content, from blog posts to articles and videos, and then storing it in vectors. Not for any commercial use, but to be able to chat with their content. In many cases this now replaces Googling for me, when I can't already solve the issue I'm having with ChatGPT out of the box.
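
The general "chat with scraped content" pattern is roughly this (a toy sketch assuming OpenAI embeddings and an in-memory cosine search; real setups usually use a vector database):

    # Embed chunks, retrieve by cosine similarity, stuff the best matches into the prompt.
    import numpy as np
    import openai

    chunks = ["First scraped blog post chunk ...", "Transcript of a video ..."]

    def embed(texts):
        resp = openai.Embedding.create(model="text-embedding-ada-002", input=texts)
        return np.array([d["embedding"] for d in resp["data"]])

    index = embed(chunks)

    def ask(question: str, k: int = 3) -> str:
        q = embed([question])[0]
        sims = index @ q / (np.linalg.norm(index, axis=1) * np.linalg.norm(q))
        context = "\n\n".join(chunks[i] for i in np.argsort(-sims)[:k])
        resp = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system", "content": "Answer using only the provided context."},
                {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
            ],
        )
        return resp["choices"][0]["message"]["content"]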


I am doing something similar. How well does the search work for you? Do you have any best practices for creating embeddings, e.g. creating hierarchical embeddings? How much do you clean the data from videos/podcasts?


I would say quite well! I'm currently spending 1-2 hours with Langchain and trying different approaches. I'm using a RecursiveCharacterTextSplitter with 1000-word chunks and 200 overlap, which may not be the best way of doing things, but for my purposes it works. I'm still struggling with videos, but I would say I am trying to get the data very clean, doing my own transcripts and removing pauses. I'm now also creating agents, which use the various vector archives to cross-reference data with each other. I'm not really sure where I am going with all this, but it is a lot of fun.
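
For reference, that splitter setup looks roughly like the following. Note that RecursiveCharacterTextSplitter's chunk_size counts characters by default, so word- or token-based chunks need a custom length_function; the Chroma/OpenAIEmbeddings choice here is just one assumption:

    # Chunk a transcript with LangChain's RecursiveCharacterTextSplitter,
    # then load the chunks into a vector store for similarity search.
    from langchain.text_splitter import RecursiveCharacterTextSplitter
    from langchain.embeddings import OpenAIEmbeddings
    from langchain.vectorstores import Chroma

    splitter = RecursiveCharacterTextSplitter(
        chunk_size=1000,   # characters by default; pass length_function=... for words/tokens
        chunk_overlap=200,
    )
    docs = splitter.create_documents([open("transcript.txt").read()])

    store = Chroma.from_documents(docs, OpenAIEmbeddings())
    hits = store.similarity_search("What did they say about embeddings?", k=4)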


I am just trying to get started with some of this. I found this repo tonight https://github.com/mayooear/gpt4-pdf-chatbot-langchain that seems interesting.


Using LLMs to help people generate and edit content directly in their notes - https://saga.so/ai


Just released https://www.chesswith.ai today. It's a basic chess app, but it lets you play against hundreds of famous historical/fictional characters.

The in-game dialogue is powered by OpenAI's ChatGPT API. This dialogue is always unique and tailored to the events happening on the board, making every game a one-of-a-kind experience.
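
A hypothetical sketch of how board-aware character dialogue could be prompted (not the site's actual code; the persona, FEN and move fields are illustrative):

    # Ask the ChatGPT API for in-character commentary that reacts to the latest move.
    import openai

    def character_line(persona: str, fen: str, last_move: str, blunder: bool) -> str:
        event = "Your opponent just blundered." if blunder else "A normal move was played."
        resp = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system",
                 "content": f"You are {persona} playing chess. Stay in character, one short line."},
                {"role": "user",
                 "content": f"Position (FEN): {fen}\nLast move: {last_move}\n{event} React."},
            ],
            max_tokens=60,
        )
        return resp["choices"][0]["message"]["content"].strip()

    print(character_line("Napoleon Bonaparte",
                         "rnbqkbnr/pppppppp/8/8/4P3/8/PPPP1PPP/RNBQKBNR b KQkq e3 0 1",
                         "e2e4", blunder=False))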


Hyper niche standardized test prep


what are you using for training data?


Desktop accessibility tools, not necessarily for disabled people.



