
I'm skeptical of the on-device AI. They crave edge compute, but I'm doubtful their chips can handle a 7B param model. Maybe ironically, with Microsoft's Phi-3 Mini 4K you can run this stuff on a CPU (rough sketch below), but today it's nowhere near good enough.

Impressive not technically, since nothing here is new, but because it's the first real implementation of "AI" for the average end consumer. You have semantic indexing, which lets Siri retrieve context for basically any query. You have image gen, which gives you emoji generation or messaging using genAI images. Text gen within emails. The UX is world class, as usual.

However, the GPT integration feels forced and, dare I say, even unnecessary. My guess is that they're really interested in the 4o voice model and expect OpenAI to remain the front-runner in the AI race.
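
For context, CPU inference with a small quantized model is already pretty accessible. Here's a minimal sketch using llama-cpp-python with a quantized Phi-3 Mini 4K GGUF; the file name, thread count, and prompt are placeholders I made up, not anything Apple or Microsoft ships:

    # Minimal sketch: CPU-only inference with a quantized Phi-3 Mini 4K build.
    # Assumes a local GGUF file; the path and settings below are illustrative.
    from llama_cpp import Llama

    llm = Llama(
        model_path="Phi-3-mini-4k-instruct-q4.gguf",  # hypothetical local file
        n_ctx=4096,    # the 4K context window
        n_threads=8,   # CPU threads, tune for your machine
    )

    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": "Summarize this email in one sentence: ..."}],
        max_tokens=128,
    )
    print(out["choices"][0]["message"]["content"])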


This is 100 percent doable. Building something like this at scale might be a pain, but locally it's fairly easy.


Very interesting. I'm building a RAG chatbot and I haven't done inline citations yet; I honestly thought it was a lot more complicated than just telling the LLM to cite with a number and then putting numbers next to the sources. I did something to that effect as kind of a joke and it worked, but the LLM didn't always listen. I thought either post-processing (checking cosine distance between answer sentences and retrieved chunks) or function calling would be the way to go.
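
For what it's worth, the post-processing route doesn't need much code. A minimal sketch with sentence-transformers, where the embedding model and similarity threshold are just placeholders I picked for illustration:

    # Sketch of post-hoc citation checking: embed each answer sentence and each retrieved
    # chunk, then attach the closest chunk as a numbered citation if it's similar enough.
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder embedding model

    def add_citations(answer_sentences, chunks, threshold=0.5):
        sent_emb = model.encode(answer_sentences, convert_to_tensor=True)
        chunk_emb = model.encode(chunks, convert_to_tensor=True)
        sims = util.cos_sim(sent_emb, chunk_emb)  # shape: [n_sentences, n_chunks]
        cited = []
        for i, sentence in enumerate(answer_sentences):
            best = int(sims[i].argmax())
            if float(sims[i][best]) >= threshold:
                cited.append(f"{sentence} [{best + 1}]")  # cite the 1-based chunk number
            else:
                cited.append(sentence)  # no chunk close enough, leave uncited
        return cited

The upside is that it doesn't depend on the LLM listening to the prompt; the downside is that the threshold is a judgment call.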


There are already a lot of open bibliographic databases: Semantic Scholar, OpenAlex, and to some extent Google Scholar. What researchers need is full-text analysis, which, with half of publications locked behind a paywall, is very tedious and complicated.


Thinking back, if LLMs are able to store and access memory, then RAG becomes useless. RAG is like a system that shoves bits into RAM (the context window) and asks the CPU (the LLM) to compute something. But if you expand the RAM to a ridiculous amount, or you use the HDD, it's no longer necessary to do that. RAG is a suboptimal way of having long-term memory. That being said, it is useful today, and when or if this problem gets solved is not easy to say. In the meantime, RAG is the way to go.


RAG is a fantastic solution and I think it's here to stay one way or another. Yes, the libraries surrounding it are lacking because the field is moving so fast, and yes, I'm mainly talking about LangChain. RAG is just one way of grounding; that being said, I think it's agent workflows that will really be the killer here. The idea that you can assist, or perhaps even replace, an entire task-fulfilling unit, i.e. a worker, with an LLM assisted by RAG is going to be revolutionary.

The only issue right now is the cost. You can bet that GPU performance will double every year, or even every 6 months according to Elon. RAG also addresses cost today by only retrieving relevant context (rough numbers sketched below); once LLMs get cheaper and context windows widen, which they will, RAG will be easier, dare I say trivial.

I would argue RAG is important today on its own and as a grounding, no pun intended, for agent workflows.
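
To make the cost point concrete, here's a rough sketch comparing prompt tokens when you stuff everything into context versus retrieving only the top-k chunks. The corpus, query, k, and model names are all made up for illustration:

    # Rough sketch of why retrieval cuts cost: count prompt tokens for the full corpus
    # vs. only the top-k most relevant chunks. Data, query, and k are made up.
    import tiktoken
    from sentence_transformers import SentenceTransformer, util

    enc = tiktoken.get_encoding("cl100k_base")
    embedder = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder embedding model

    chunks = ["...chunk 1...", "...chunk 2...", "...chunk 3..."]  # pretend this is thousands
    query = "What does the refund policy say?"

    k = 2
    sims = util.cos_sim(embedder.encode([query]), embedder.encode(chunks))[0]
    top_k = [chunks[int(i)] for i in sims.argsort(descending=True)[:k]]

    full_tokens = len(enc.encode("\n".join(chunks)))
    rag_tokens = len(enc.encode("\n".join(top_k)))
    print(f"whole corpus in the prompt: {full_tokens} tokens")
    print(f"top-{k} retrieved chunks: {rag_tokens} tokens")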


????? I just went through the entire table of suspended people and there were almost no far-left groups besides Antifa. Most of the suspended accounts were either far-right/alt-right, Trump-related, doxing-related, or spreading misinformation.


Very interesting article. Following the article's logic, you should be able to deduce a country's social mobility from the number of exotic items (such as supercars) you see in public. Furthermore, it becomes apparent that Europe has lower social mobility than the United States. You will seldom see exotic sports cars in the streets of Western Europe, because most of the wealth comes from well-established institutions and families.


Western Europe generally has higher social mobility than the US by most metrics (mobility between income cohorts, education level vs. parents, etc.). The US probably does have more people becoming really, _really_ rich, though, and also has slightly different social norms around money and the showing-off thereof.

(That said, these are mostly differences at the margins. The mean price of a new car isn't much different between the US and the wealthier EU countries, say; the upper middle class is happy enough to buy cars that cost more than an average annual household income in both places. It's once you get past the low six figures that people start getting squeamish.)


Social mobility is the wrong metric. Economic mobility in the US, the ability to increase your absolute income, is far higher than in Europe. Countries with compressed wages have high social mobility almost by definition, but low economic mobility. Since the US has very low wage compression compared to most countries, it will never be "socially mobile" but will have anomalously high wages for a large portion of the population.


You're right, I meant to say economic mobility.

