Hacker News | krisoft's comments

> if you take the highly reciprocal data of a biological connectome and unroll it into a DAG, you suddenly see motifs popping up that look similar to what we find in AI

That sounds interesting. Where have you heard about that? Or is this your own research?


> you'd rather say Andreessen-Horowitz, which is just as arbitrary as a16z

Yes. I know Andreessen-Horowitz and I don’t know a16z. Reading the title I thought it would be about the cryptography serialisation specification. Turns out I was mixing it up with ASN.1.

> Their website is literally a16z.com

I see that now. Before this, if pressed, I would have guessed that they probably have a website, indeed. If you had twisted my arm, my guess would have been andersenhorovitz.com (yup, with the typos. I learned the correct spelling today from your comment.)

> exceedingly relevant for the HN audience

We contain multitudes.


> Yes. I know Andreessen-Horowitz and I don’t know a16z.

So the world needs to adapt to your knowledge, instead of you adapting to an often-used and well-known moniker?


They just want to sound technical.

> This study is based almost entirely on pre-existing "vignettes."

This is basically the only ethical way to approach the topic. First you verify performance on “vignettes”, as you say. Then, if the performance appears satisfactory, you can continue towards larger tests and more raw sensor modalities. If the results are still promising (both in that they statistically agree with the doctors, and in that where they disagree the AI’s actions turn out to be benign), you can go further. These phases take a lot of time and careful analysis. Only after that can we carefully design experiments where the AI works together with doctors. For example, an experiment where the AI offers suggestions for next steps to a doctor. These tests need to be constructed with great care by teams who are very familiar with medical ethics, statistics, and the pitfalls of human decision making. And if the results are still positive, only then can we move towards experiments where the humans supervise the AI less and the AI is more in the driving seat.

Basically, validating this ethically will take decades. So we can’t really fault the researchers for having taken only the first tentative step along this long journey.

> if the Internet somehow goes down at my hospital, the Doctor can still think, while LLM services cannot

Privacy, resiliency and scalability are all best served with local LLMs here.

> If the power goes out at the hospital, the Doctor can still operate, while even local LLMs cannot.

Generators would be the obvious answer there. If we can make machines which outperform human doctors in real-world conditions, providing generator-backed UPS power for said machines will be a no-brainer.

> You're going to need to improve the power efficiency of these models by at least two orders of magnitude before they're generally useful replacements of anything.

Why? Do you have numbers here or just feels?


But that does follow. The economics working is not some outside factor. If the robot “could do the task” but would cost more than paying a human to do the same task then the robot “does not work”. It is frequently because the robot would be too slow, or not reliable enough, or could only handle certain types of items. But ultimately all of these boil down to cost.

We have seen lab demos of robotic manipulation for decades. The reason they stay in the lab (when they do) and don’t become ubiquitous is that they are not good enough. In other words, they don’t work. The economics and “does it work” are not two separate concerns but one and the same.


It's a continuum, not binary. The same robot that doesn't financially "work" for replacing a manual scavenger sorting garbage in an African slum might be quite cost-effective sorting recycling in Switzerland, and would likely have a niche regardless of price if used to (say) sort biohazardous or radioactive materials. And there are already millions of robots out there assembling cars etc.

> One can take a great 70B model and have it run in only ~16GB with no loss in capability and the ability to keep training, but the last few years funding only went for "bigger".

Awesome. What is holding you back? What do you need the funding for?


Presumably $100m to train the 70B model? I think you're assuming that the author meant you can take an existing 70B model and run it in 16GB. But it stands to reason that "no loss in capability" means it had to be trained under those constraints.
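As a rough sanity check on that reading (my own back-of-envelope arithmetic, not anything from the thread): the memory footprint of the weights alone is the parameter count times the bits per weight, so fitting 70B parameters into ~16GB implies roughly 2-bit weights, which indeed suggests training under quantization constraints rather than compressing an existing 16-bit model after the fact.

```python
# Back-of-envelope: weight memory for a 70B-parameter model at
# several quantization bit-widths. Weights only -- KV cache and
# activations add more on top of these figures.
PARAMS = 70e9

def weight_gb(bits_per_param: float) -> float:
    """Gigabytes needed to store the weights alone."""
    return PARAMS * bits_per_param / 8 / 1e9

for bits in (16, 8, 4, 2):
    print(f"{bits:>2}-bit: {weight_gb(bits):6.1f} GB")
# 16-bit: 140 GB, 8-bit: 70 GB, 4-bit: 35 GB, 2-bit: 17.5 GB
```

Even 4-bit weights come to ~35GB, so a "no loss in capability" 16GB 70B model would need something close to 2 bits per weight.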

When an AI says things like that we call it “hallucination”.

You appear to have missed this part: “Candidates should answer THREE questions.”

> Animals have no ability to lie.

This is false. There are many documented cases of deception by animals. As an example, here is one where researchers observed monkeys suppressing their vocalisations when copulating with a non-dominant male: https://www.nature.com/articles/ncomms2468


> You appear to have missed this part: “Candidates should answer THREE questions.”

I didn't :p

> There are many documented cases of deception by animals.

Good point. I was addressing the quote on its own (mistaken) merits.


> Where do you see that?

There are multiple mentions of him being motivated by wanting to shoot deer for meat. It is a through line in the article.

> The article is about someone in Scotland who took up marksmanship as a hobby.

I wish it were so. With a bit more self-awareness the author could have said “I initially picked up a rifle to learn to hunt deer, but in doing so I learned how targets are scored and became interested in automating that process.” There is nothing wrong with that. But pretending that someone is doing all this coding to get better charcuterie is where it becomes frustrating yak shaving.


The guy is clearly an obsessive hyper-perfectionist: he's telling (or boasting) of taking a culinary obsession from reproducing fine-dining dishes (when most people are content mastering a few decent recipes) to building automated curing chambers and butchering whole animals. It's kind of obvious that this personality leads from any random objective into the deepest of rabbit holes, where everything is studied and annotated with the utmost precision. Funny as a clinical case; not sure I'd like to be around someone like this though :)

Point is that sub-millimeter precision when measuring rings is doing absolutely nothing to further his shooting skills to take down a tasty deer. To the contrary. Time is limited, and every minute spent perfecting this automation was not spent improving shooting skills by, you know, shooting. In other words, this may well have made him a worse shooter than he could have been. Nothing wrong with it, but let's call it for what it is.

A perfectionist defines a goal and then finds the perfect path to get there. He was just giving in to distractions and "perfectionist" is the wrong label.


It's not about submillimetre precision (OP here), it's about knowing if you can shoot well. The most common deer stalking certification in the UK (DSC1) involves three shooting tests from 20, 70, and 100m - if I don't care about 8/10 vs 9/10 shots from 25 yards, there is no way I am putting a shot within a 4" circle from 100 metres.

> every minute spent perfecting this automation was not spent improving shooting skills by, you know, shooting

I mention in the post that I had access to the range only 1-2 evenings a week, so there was no way I could improve my skills outside of these few hours.


Thanks for joining the discussion!

> if I don't care about 8/10 vs 9/10 shots from 25 yards, there is no way I am putting a shot within a 4" circle from 100 metres.

Totally with you there. Though isn't what counts in the end how close you were to the center? If you look with your eyes and it looks like it was in the 3rd ring, what does it matter whether it "technically" wasn't because it crossed half a millimetre into the next ring? It was surely a much better shot than one fully in the next ring, unless you actually want to go to the Olympics or are otherwise competing in the sport.

Don't get me wrong, I totally respect the challenge of automating the counting but that this actually helped your progress still seems doubtful to me.

> I mention in the post that I had access to the range only 1-2 evenings a week, so there was no way I could improve my skills outside of these few hours

Ok, fair. Though we can surely agree that the automation-building you did during all this extra time improved your coding skills rather than your shooting skills? (Which, again, is fantastic!)


> It is absolutely an abandoned pet.

That. Or the family fabricated the story for online fame.

Not saying that I have any evidence either way. Fundamentally it is an unverifiable feel-good story with great online “viral” potential. It might be a very lucky axolotl who got abandoned, found, and re-captured in the short window it could survive in the wild. It could also be a viral content strategy capturing eyeballs. In my, admittedly very jaded, guesstimate I would give the two options about equal chances.


Yes, I simply do not understand how these kinds of stories get past the editors, except that they are not important if wrong.

If this were some kind of crime, they would have withheld the information for a long while unless it was clearly correct.


We used to keep similar axolotls as pets. I believe the abandoned pet thing - they are fun for a while but then you wonder what to do with them. We donated ours and the tank to a school. They'd probably survive a while in the wild in the summer but I'm not sure they'd like winter much.

> TIL Scotland Yard is the Metropolitan Police.

It is not? But also it is.

You are right that when people say "Scotland Yard" they do frequently mean the whole Metropolitan Police. And you are also right that there is no other police entity (that I know of) which would be associated with that name.

But also, "Scotland Yard" was just the address of the original headquarters of the Metropolitan Police. Even then it wasn't the whole organisation, just the address of one of the buildings. Then they got a new headquarters and called it "New Scotland Yard". And to confuse matters further they repeated this multiple times. Which means there are 3 buildings which were called "New Scotland Yard" at various points in time.

And today of course the MET occupies far more real estate than just the famous "Scotland Yard". For example if you look at this FOI request[1] you can see that there were 226 other buildings the Metropolitan Police used in 2023. (Not counting covert/sensitive estate).

1: https://www.met.police.uk/foi-ai/metropolitan-police/disclos...

Scotland Yard was originally the name of the street on which the headquarters of the Metropolitan Police stood.


Right. What I meant is, until today I believed that "Scotland Yard" was an entirely different law enforcement agency from MET.


But worth enough to send a new probe?

Generally we don’t construct and maintain expensive scientific equipment just for the fun of it. There usually is some question or debate we expect them to answer or settle.


Worth enough is always debatable but here are some future proposed science goals for an interstellar probe: https://en.wikipedia.org/wiki/Interstellar_Probe_(spacecraft...

