You mean for those who will just add it into their bookmark pile. You could have started yesterday and built a NN from scratch if you were truly interested.
that FAQ is accurate but (rightly) doesn't cover high-security deployments.
if I'm running the bridges local-to-the-client (I am, on my MacBook) it's not meaningfully any less e2ee. encryption happens in the matrix client (running on the laptop), the encrypted message is sent to the homeserver on localhost, the bridge (on localhost) grabs the encrypted message and decrypts it, then the bridge re-encrypts it and sends it to WhatsApp (or wherever). the content of the message is as secure over the wire with this approach as using first-party apps directly
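the hop described above can be sketched like this. a toy XOR "cipher" stands in for real E2EE (Olm/Megolm on the Matrix side, the Signal protocol on the WhatsApp side); every name and key here is illustrative, not the actual wire format:

```python
# Toy sketch of a local bridge hop -- XOR is NOT real crypto, it just
# marks where encrypt/decrypt happen in the described architecture.
from itertools import cycle
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, cycle(key)))

matrix_key = secrets.token_bytes(32)    # shared: client <-> local bridge
whatsapp_key = secrets.token_bytes(32)  # shared: bridge <-> remote network

def client_send(plaintext: bytes) -> bytes:
    # encryption happens in the Matrix client before anything leaves it
    return xor(plaintext, matrix_key)

def bridge_relay(ciphertext: bytes) -> bytes:
    # the bridge on localhost decrypts, then immediately re-encrypts for
    # the far side; plaintext only ever exists on the laptop
    plaintext = xor(ciphertext, matrix_key)
    return xor(plaintext, whatsapp_key)

wire = bridge_relay(client_send(b"hello"))
assert xor(wire, whatsapp_key) == b"hello"  # only ciphertext crosses the network
```

the point being: the plaintext never leaves localhost, so over-the-wire security matches the first-party apps.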
if one hosts their own bridges they're person-in-the-middling themselves and should take all the necessary precautions. if they're using beeper's hosted options they have to delegate read/write ability to beeper (though I think the signal and imessage bridges might be device-local), and beeper is clear about that.
RAG is limited in that sense, since the maximum amount of data you can send is still bounded by the number of tokens the LLM can process.
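a minimal sketch of that limit: retrieved chunks have to be packed into the model's context window, and anything past the budget gets dropped. token counts are approximated here by whitespace-split words; a real system would use the model's actual tokenizer:

```python
# Sketch of the RAG context-budget problem: ranked chunks are packed
# greedily until the token budget is exhausted; the rest is dropped.
def pack_context(chunks, budget_tokens=4096):
    selected, used = [], 0
    for chunk in chunks:  # assume chunks arrive ranked by relevance
        cost = len(chunk.split())  # crude stand-in for real tokenization
        if used + cost > budget_tokens:
            break  # everything past this point never reaches the LLM
        selected.append(chunk)
        used += cost
    return selected
```

so no matter how big the wiki is, the model only ever sees what fits.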
But if all you want is a search engine, that's a bit easier.
The problem is often that a huge wiki installation will have a lot of outdated data, which will still be an issue for an LLM. And if you had fixed the data, you might as well just search for the things you need, no?
I think it depends on what they want. A search is indeed an easy solution, but if they want a summarization or a generated, direct answer, then things get a little bit harder.
A solution that combines RAG and function calling could strike the right depth, but yeah, the context window is what will determine usefulness for user interaction.
I'd like to play with giving it more turns. When answering a question, the more interesting ones require searching, reading, then searching again, reading more, etc.
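that search-read-search-again loop could be sketched as below. `search()`, `read()`, and `llm_decide()` are hypothetical stand-ins, not a real API; the point is just the multi-turn structure:

```python
# Sketch of a multi-turn retrieval loop: each turn may refine the query
# based on what the previous reads turned up, until the model decides
# it has enough or the turn budget runs out.
def answer(question, search, read, llm_decide, max_turns=5):
    notes = []
    query = question
    for _ in range(max_turns):
        hits = search(query)
        notes.extend(read(h) for h in hits[:3])   # read top hits
        done, query = llm_decide(question, notes)  # stop, or refine query
        if done:
            break
    return notes
```

with a single turn this degenerates to plain RAG; the extra turns are what let it chase follow-up leads.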
You need to structure it in the form of "if the user says X, you say Y."
For example: if the user asks "where do I find red pants," say "we don't sell red pants, but paint can be found here"
The OP gave a quick example. You can take raw docs, generate a Q/A data set from them, and train on that. Generating the Q/A data set could be as simple as: taking the raw PDF, asking the LLM "what questions can I ask about this doc," and then feeding that into the fine tuning. BUT, and this is important, you need a human to look at the generated Q/A and make sure it is correct.
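A rough sketch of that pipeline, with `ask_llm()` as a hypothetical stand-in for a real model call. The human-review step is the part the comment says you must not skip:

```python
# Sketch: raw doc -> generated Q/A pairs -> human review -> training rows.
def build_qa_dataset(doc_text, ask_llm):
    # ask the model what questions the doc can answer, then answer each
    questions = ask_llm(f"What questions can I ask about this doc?\n\n{doc_text}")
    rows = []
    for q in questions:
        a = ask_llm(f"Answer from the doc only:\n\n{doc_text}\n\nQ: {q}")
        rows.append({"question": q, "answer": a, "reviewed": False})
    return rows

def human_review(rows, is_correct):
    # keep only pairs a person has checked -- never fine-tune on raw
    # generations, since the model can hallucinate both Q and A
    return [dict(r, reviewed=True) for r in rows if is_correct(r)]
```

Only the rows that survive `human_review` should go into the fine-tuning set.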
This is key. Don't forget: you can't beat a human deciding which facts and responses are the "right" ones you want your LLM to produce.
It seems like they're trying to solve a problem that they've observed. I know I'd like to read more on the "recognizes user intent and supports it with rich interactions" comment. Time for a manifesto?
Xterm feels faster to me than all alternatives I've tried (haven't tried kitty). I suspect it's input-to-screen latency. Also, crisp bitmap fonts.
Startup is instant too.
It has comparatively very good "fidelity", giving usable screen outputs with most applications without tinkering.
A factor of 2x in throughput is nothing to sweat over, I think, given that xterm is fast enough.
Haven't verified the benchmarks in your link, and haven't tried kitty, but I believe the bench did not test xterm with bitmap fonts, which seem significantly faster.
I'd also look at CPU rasterization performance. I often don't have graphics acceleration available (e.g. in a VM).
Just like with roads, more lanes means more cars. More flats can also mean more people moving to the cities.
I don't think anyone would look at Tokyo and be like, yeah more apartments is the solution. It's been what they have been doing forever, and the population and cost of the apartments just keep going up.
Tokyo actually builds a lot of new housing, which is why housing is affordable there. People think Tokyo is an expensive city -- and relatively speaking it is -- but it's a lot cheaper than the small US city I live in now.
Tokyo is a surprisingly affordable megacity because of housing availability.
My understanding is that one thing unique to Japan is that housing is not an investment vehicle for preserving wealth. Housing is a consumable, not an investment, and old houses actually lose value (most people prefer buying new). Part of it is that the country is earthquake-prone, so houses are ephemeral anyway.
Ever wondered why the population of Tokyo keeps going up? Hint: it's because of the apartments
> In the past half century, by investing in transit and allowing development, the city has added more housing units than the total number of units in New York City. It has remained affordable by becoming the world’s largest city. It has become the world’s largest city by remaining affordable.
> Those who want to live in Tokyo generally can afford to do so. There is little homelessness here. The city remains economically diverse, preserving broad access to urban amenities and opportunities. And because rent consumes a smaller share of income, people have more money for other things — or they can get by on smaller salaries — which helps to preserve the city’s vibrant fabric of small restaurants, businesses and craft workshops.