snowfield's comments

Just learn 30 papers

Or keep doom scrolling some other shit. Your choice.

At least this is relevant information for people who are actually interested.


You mean for those who will just add it into their bookmark pile. You could have started yesterday and built a NN from scratch if you were truly interested.

They are also directly incentivized not to talk shit about a company they hold a lot of stock in.

Still won't be E2E as per their FAQ


But at least you are in control of the computer where the decryption and re-encryption is happening.

They usually call it E2B (end to bridge)


that FAQ is accurate but (rightly) doesn't cover high-security deployments.

if I'm running the bridges local-to-the-client (I am, on my MacBook) it's not meaningfully any less e2ee. encryption happens in the matrix client (running on the laptop), the encrypted message is sent to the homeserver on localhost, the bridge (on localhost) grabs the encrypted message and decrypts it, then the bridge re-encrypts it and sends it to WhatsApp (or wherever). the content of the message is as secure over the wire with this approach as using first-party apps directly

if one hosts their own bridges they're person-in-the-middling themselves and should take all the necessary precautions. if they're using beeper's hosted options they have to delegate read/write ability to beeper (though I think the signal and imessage bridges might be device-local), and beeper is clear about that.
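
a rough toy sketch of that flow, using Fernet from the cryptography package as a stand-in for the real Olm/Megolm and WhatsApp crypto (the key names are made up), just to show that plaintext only ever exists on the laptop:

    # toy stand-in for the local-bridge flow; not the real Matrix or WhatsApp crypto
    from cryptography.fernet import Fernet

    matrix_key = Fernet(Fernet.generate_key())   # client <-> homeserver leg
    remote_key = Fernet(Fernet.generate_key())   # bridge <-> remote network leg

    # 1. the matrix client encrypts before anything leaves the app
    wire_msg = matrix_key.encrypt(b"hello from the laptop")

    # 2. the bridge on localhost decrypts it (the person-in-the-middle step)...
    plaintext = matrix_key.decrypt(wire_msg)

    # 3. ...and immediately re-encrypts for the remote network, still on the same host
    outgoing = remote_key.encrypt(plaintext)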


A lot of patients do need daily care / checkups, but I haven't seen exactly what kind of people they expect to be caring for


RAG is limited in that sense, since the maximum amount of data you can send is still bounded by the number of tokens the LLM can process.

But if all you want is a search engine, that's a bit easier.

The problem is often that a huge wiki installation will have a lot of outdated data, which will still be an issue for an LLM. And if you had fixed the data, you might as well just search for the things you need, no?
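
Quick sketch of why that limit bites (word count as a crude stand-in for a real tokenizer, chunks assumed pre-sorted by relevance): whatever the index finds still has to fit a fixed budget, so most of a big wiki never reaches the model on any single query.

    # retrieved chunks have to fit the model's context window
    def count_tokens(text: str) -> int:
        return len(text.split())      # crude stand-in for a real tokenizer

    def pack_context(chunks: list[str], budget: int = 4000) -> list[str]:
        packed, used = [], 0
        for chunk in chunks:          # assumed sorted best-first by the retriever
            cost = count_tokens(chunk)
            if used + cost > budget:
                break                 # everything past this point gets dropped
            packed.append(chunk)
            used += cost
        return packed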


I think it depends on what they want. A search is indeed an easy solution, but if they want a summarization or a generated, straight answer, then things get a little bit harder.


A solution that combines RAG and function calling could cover the necessary depth, but yeah, the context depth is what will determine how useful it is for user interaction.
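
Roughly the shape I mean, hand-waved: call_llm and search_wiki are placeholders for whatever model API and index you actually have, but the point is the model can ask for more retrievals instead of getting one fixed chunk dump up front.

    # RAG + function calling sketch; call_llm and search_wiki are placeholders
    from typing import Callable

    def answer(question: str,
               call_llm: Callable[[list[dict]], dict],
               search_wiki: Callable[[str], str],
               max_turns: int = 5) -> str:
        messages = [{"role": "user", "content": question}]
        for _ in range(max_turns):
            reply = call_llm(messages)                  # either a tool call or a final answer
            if reply.get("tool") == "search":
                hits = search_wiki(reply["query"])      # fetch more context on demand
                messages.append({"role": "tool", "content": hits})
            else:
                return reply["content"]
        return "gave up after too many turns"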


The LLM would have to be trained on the local data. Not impossible, but maybe too costly?


It sounds nice in theory but your dataset is most likely too small for the LLM to "learn" anything.


I'd like to play with giving it more turns. When answering a question, the more interesting ones require searching, reading, then searching again, reading more, etc.


This gets to the heart of it. Humans are good at keeping a working memory, as a group or individuals, as lore.


Given that we also have a huge shortage of nurses they're probably just thankful to not have to answer every single question everyone has all the time

It's the one AI application that is not going to replace any jobs


This is the simplistic view, but it might also create more confusion and therefore more questions too.


Do fine-tuning datasets need to be structured a specific way, or can you use unstructured data?


You need to structure it in the form of "if the user says X, you say Y."

For example: if the user asks "where do I find red pants," say "we don't sell red pants, but paint can be found here"

The OP gave a quick example. You can take raw docs, generate a Q/A data set from them, and train on that. Generating the Q/A data set could be as simple as taking the raw PDF, asking the LLM "what questions can I ask about this doc," and then feeding that into the fine tuning. BUT, and this is important, you need a human to look at the generated Q/A and make sure it is correct.

This is key. Don't forget: you can't beat a human deciding which facts and responses are the "right" ones you want your LLM to produce.
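
Hedged sketch of what that pipeline can look like on disk. The chat-style JSONL below is just one common convention for fine-tuning files, and ask_llm is a placeholder for whatever model drafts the candidate questions; every pair still needs that human review pass before training.

    # turn raw docs into a reviewable Q/A fine-tuning file (one convention of many)
    import json
    from typing import Callable

    def draft_qa_pairs(doc_text: str, ask_llm: Callable[[str], str]) -> list[dict]:
        # ask the model what the doc can answer, then answer each question from the doc
        questions = ask_llm("List questions a user could ask about:\n" + doc_text)
        return [
            {"messages": [
                {"role": "user", "content": q},
                {"role": "assistant",
                 "content": ask_llm("Answer '" + q + "' using only:\n" + doc_text)},
            ]}
            for q in questions.splitlines() if q.strip()
        ]

    def write_jsonl(pairs: list[dict], path: str = "train.jsonl") -> None:
        with open(path, "w") as f:
            for pair in pairs:        # a human reviews each pair before training on it
                f.write(json.dumps(pair) + "\n")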


I know it's crass, but why? Aren't terminals good enough?


It seems like they're trying to solve a problem that they've observed. I know I'd like to read more on the "recognizes user intent and supports it with rich interactions" comment. Time for a manifesto?


Perhaps I should. Don’t have a newsletter currently but RSS should work, or bookmarking the site and checking in every now and again :)


I wish somebody built an emulator as fast as xterm and as configurable as something like kitty. Until then, xterm it is.


Kitty is fast enough for me. Why is speed such a concern for you?


xterm has extremely poor throughput performance. kitty and most other well-designed terminals are at least 2x as fast as xterm. For example: https://sw.kovidgoyal.net/kitty/performance/#throughput

Of all tested terminals xterm is faster only than konsole.


Xterm feels faster to me than all alternatives I've tried (not kitty). I suspect it's input-to-screen latency. Also, crisp bitmap fonts.

Startup is instant too.

It has comparatively very good "fidelity", giving usable screen outputs with most applications without tinkering.

A factor of 2x in throughput is nothing to sweat about I think, given that xterm is fast enough.

I haven't verified the benchmarks in your link, and haven't tried kitty, but I believe the bench did not test xterm with bitmap fonts, which I believe are significantly faster.

I'd also look at CPU rasterization performance. I often don't have graphics acceleration available (in a VM).


You are of course welcome to use whatever terminal you like, just be aware that speed is not a reason to prefer xterm.


Have you tried WezTerm?


It lets you run commands on program output text incrementally instead of piping the initial command or re-running it.


I don't really agree with that.

Just like with roads, more lanes means more cars. More flats can also mean more people moving to the cities.

I don't think anyone would look at Tokyo and say, yeah, more apartments is the solution. That's what they have been doing forever, and the population and cost of the apartments just keep going up.


Tokyo actually builds a lot of new housing which accounts for why housing is affordable there. People think Tokyo is an expensive city -- and relatively speaking it is -- but it's a lot cheaper than the small US city I'm living in now.

Tokyo is a surprisingly affordable megacity because of housing availability.

(gift article, no ad wall) https://www.nytimes.com/2023/09/11/opinion/editorials/tokyo-...

My understanding is that one thing unique to Japan is that housing is not an investment vehicle for preserving wealth. Housing is a consumable, not an investment, and old houses actually lose value (most people prefer buying new). Part of it is due to the fact that the country is earthquake-prone, so houses are ephemeral anyway.


Ever wondered why the population of Tokyo keeps going up? Hint: it's because of the apartments

> In the past half century, by investing in transit and allowing development, the city has added more housing units than the total number of units in New York City. It has remained affordable by becoming the world’s largest city. It has become the world’s largest city by remaining affordable.

> Those who want to live in Tokyo generally can afford to do so. There is little homelessness here. The city remains economically diverse, preserving broad access to urban amenities and opportunities. And because rent consumes a smaller share of income, people have more money for other things — or they can get by on smaller salaries — which helps to preserve the city’s vibrant fabric of small restaurants, businesses and craft workshops.

https://archive.is/pLedf


So the solution is to stop demand. Ban employers from employing more people in cities.


Or ban people altogether.


> cost of the apartments just keep going up.

What's the average Tokyo rent rn?



Engineer angry with leadership

