sam_goody's comments

I do the same for swimming.

Teach them to put their head in the water, then to do the "dead man's float" for as long as they can. Without realizing it, they often start paddling - which is basically the stroke.

I have used this to great effect with dozens of kids, all of whom were swimming within a session or two.


> But I’m so sorry, I have a three-thirty.

I assume that means that she doesn't accept the proposal (lit. she has a meeting at 3:30), but I don't quite follow how that works.

Can someone break that down for me?


I have no clue either, but I'm feeling that may be the point.

BTW, I have seen something similar in several older texts.

One example is the Jewish lawbook "Toldos Adam V'Chava"[1]. Although many old religious texts have been reprinted with clear modern typesetting, no-one wanted to re-typeset this until recently because of the curse.

On the plus side, 800 years later we know exactly what the author wrote, even if it is hard to read.

[1]: https://hebrewbooks.org/10180


There was a clock on HN a while ago (which I cannot find now). It had modes where the clock updates only after a random number of seconds, or ticks at different points within each second, or ticks along to a MIDI tune as best it can while staying within 30 seconds of the real time.

So, while you can just set your clock forward 30 seconds and be done with it, there are some variations worth playing with for someone who likes thinking about this.

[My favorite mode of the referenced clock sets the clock ahead by a random amount between 1 and 5 minutes. Since you don't know how far off your clock is, you have to assume it is on time, and that gives you an average of 3 minutes of slack to actually show up on time.]
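That mode could be sketched roughly like this (the function name and the uniform-offset choice are my own invention, not from the original clock):

```python
import random
from datetime import datetime, timedelta

def displayed_time(true_time: datetime, rng: random.Random) -> datetime:
    """Show a time a random 1-5 minutes AHEAD of the true time.

    Because the wearer never knows the exact offset, the safe
    assumption is that the clock is correct, which builds in an
    average cushion of a few minutes.
    """
    offset_seconds = rng.uniform(1 * 60, 5 * 60)
    return true_time + timedelta(seconds=offset_seconds)
```

The offset would need to be re-drawn only occasionally (e.g. daily), or the wearer could learn the current bias and compensate.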


So, a company that doesn't feel like sharing all their secret sauce with Anthropic can run DeepSeek Coder on three of these for $9K, and it should be more or less the same experience.

Do I understand that right? It seems way too cheap.


Such a setup would likely work - assuming the software stack supports DS3 - but it wouldn't be able to serve as many requests in parallel as a classic GPU setup.

The main issue is that the RAM they're using here isn't the same as the memory in GPUs.


Time Machine does not back up your Desktop and other spots that might be essential in case you need a backup. iCloud does.

I know users who would prefer not to trust Apple for anything, and only pay for and use iCloud to back up the Desktop [and similar locations]. If they were to hear that their opt-in for iCloud means that Apple starts copying random things, they would not be happy.

[OT, I use Arq. But admit that iCloud is simpler, and it is not apples to apples.]

IMO, the fact that Apple backs up your keychain to the Mothership, and that this is a "default" behavior that re-enables itself when shut off, reflects an attitude that makes me very distrustful of Apple.


I have never had an email server be case sensitive, and I often use that for mail filtering: myuSer@ - the big "S" is for spam!

In line with that, I would expect the login not to be case sensitive when it accepts an email.
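A minimal sketch of that kind of login normalization (a hypothetical helper, not any particular server's code - note RFC 5321 technically allows a case-sensitive local part, but almost no mail server distinguishes them in practice):

```python
def normalize_login_email(email: str) -> str:
    """Case-fold an address for login lookup, so myuSer@ and
    myuser@ resolve to the same account."""
    local, _, domain = email.strip().rpartition("@")
    return f"{local.lower()}@{domain.lower()}"
```

The mail-filtering trick in the comment still works on delivery, since the server can keep the original casing in the To: header while matching the mailbox case-insensitively.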


What are the minimum and recommended amounts of RAM, hard disk space, CPU or GPU to run this locally?

As someone who just follows this stuff from afar, it is hard for me to conceptualize whether this is a SaaS-only model, or whether it means we are getting to the point where you can have an A1 model on a local machine.


Yes, of course you can run it on your local machine... But the architecture of this specific model makes it extremely inefficient to run locally for a single user. Here's why:

- to LOAD the model, you need at least 768GB of VRAM, which means 10xH100 GPUs or similar.

- to QUERY the model, it activates only about 37GB of weights at any given time, which means that each GPU can process 2 queries concurrently (37 * 2 < 80) - and the queries are very fast because of this.

So a single-user setup would involve a crazy expensive rack of 10 H100 GPUs that can essentially process 20 concurrent requests almost as quickly as it can process 1 request in single-user mode...

The result is that the model is extremely cheap to operate when served as SaaS, but ridiculously expensive for a single-user setup.
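A quick sanity check of that arithmetic (all figures are the commenter's assumptions, not official specs):

```python
# Back-of-the-envelope check of the numbers above.
total_weights_gb = 768       # memory needed to hold the full model
gpu_vram_gb = 80             # one H100
active_per_query_gb = 37     # weights activated per query (MoE)

gpus_needed = -(-total_weights_gb // gpu_vram_gb)     # ceiling division
queries_per_gpu = gpu_vram_gb // active_per_query_gb  # 37 * 2 < 80
concurrent = gpus_needed * queries_per_gpu
```

This is why the economics favor a provider: the rack's cost is roughly fixed whether it serves 1 query or 20 at a time.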


The whole model is 671B parameters. Downloadable from Huggingface as 163 LFS files of around 4.3GB each. Around ~700GB total.

Recommended RAM: more than most PCs have.
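The download estimate is consistent (the per-file size is an average, so this is only a ballpark):

```python
# Rough size check from the figures in the comment above.
lfs_files = 163
gb_per_file = 4.3
total_gb = lfs_files * gb_per_file  # ~700 GB, matching the estimate
```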


What front ends work with jujutsu? Do I have to start doing everything on the command line, or can I use existing clients such as Fork?

Kudos for the article. I have been seeing jj here and there, but this is the first one that made me want to try it.


AFAIK there is no polished standalone frontend for jj yet. https://github.com/gulbanana/gg comes closest but I ran into some quirks when using it on git compatible/colocated repos.

Someone has made a VSCode plugin but it's closed source and I believe it will be paid at some point? https://www.visualjj.com/

If you are willing to use a TUI, jj-fzf (https://github.com/tim-janik/jj-fzf) has been wonderful and development is extremely active too.

I exclusively use git through the GUI, but the jj CLI improves so much over the git CLI that I'm willing to live with the CLI for now.

Still hoping that the GUIs become more polished though, and also for IntelliJ IDEA integration.


I've been itching to use jj for real but the lack of a dedicated Neovim plugin killed my enthusiasm a bit.

Yes, regular git plugins sort of mostly work, but jj is different enough to introduce a lot of painful edges when you use them.


Richard Feynman, while still a student at Princeton, found an error in some well-known proof, and set himself a rule to double-check every theorem he would use.

I don't remember the details of the story (I read Surely You're Joking years ago), but I remember being amazed by how much time that policy must have cost him.

But now I wonder whether he hit dozens or hundreds of errors over the years.


I'd be careful taking anything from Surely You're Joking as fact, btw.

There's a good pop-sci video on why, here: https://youtu.be/TwKpj2ISQAc?si=bpZOBy9WBGQzi6sk


She is more complaining that some (most?) RPF "bros" act like jerks, are emotionally unstable, etc.

I guess the same can be said about many other kinds of "fanboi", and it has little to do with the facts in the book.


No, she's claiming that most of those stories were made up by a third guy with Daddy issues (if you want to be reductive).


You may want to watch this if Surely You're Joking, read years ago, is your main reference point: https://youtu.be/TwKpj2ISQAc

