Hacker News | garspin's comments

I implemented it in the browser. For demo & src, see https://svelte.dev/playground/2c1bf42e0d2a4cebb38b907fa7f90a...

Adjust separation, cohesion and alignment manually, or use the presets.
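For anyone unfamiliar with the three rules being adjusted, here is a minimal sketch of boids steering (in Python; the function name, data layout and weights are illustrative, not taken from the linked Svelte demo):

```python
# Minimal sketch of the three boids steering rules.
# Weights and dict layout are illustrative assumptions.

def steer(boid, neighbours, sep_w=1.5, coh_w=1.0, ali_w=1.0):
    """Return a (dx, dy) steering vector for one boid."""
    if not neighbours:
        return (0.0, 0.0)
    n = len(neighbours)
    # Cohesion: steer toward the neighbours' centre of mass.
    cx = sum(b["x"] for b in neighbours) / n - boid["x"]
    cy = sum(b["y"] for b in neighbours) / n - boid["y"]
    # Alignment: steer toward the neighbours' average velocity.
    ax = sum(b["vx"] for b in neighbours) / n - boid["vx"]
    ay = sum(b["vy"] for b in neighbours) / n - boid["vy"]
    # Separation: steer away from each nearby neighbour.
    sx = sum(boid["x"] - b["x"] for b in neighbours)
    sy = sum(boid["y"] - b["y"] for b in neighbours)
    return (sep_w * sx + coh_w * cx + ali_w * ax,
            sep_w * sy + coh_w * cy + ali_w * ay)
```

Tuning the three weights against each other is exactly what the sliders in the demo do.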


> Pulling that off is the social skills test.

Surely the most significant side effect of any F2F interview is that it evaluates social skills. An answer that the interviewer fully expects to be a lie just isn't required.

Anyway... "Why do you want to employ me over other candidates?"


Who said anything about lies? In my experience, the primary goal of F2F interviews is to evaluate social skills like communication.


The only goal is to evaluate social skills.

For people who lack social skills, the best way to pass those tests is to lie, which is pretty easy.


Google 'polarized training'. It's 80% really slow, conversational-pace jogging, and 20% really hard, struggle-to-breathe, flat-out sprinting. Just avoid the intermediate pace that's comfortable but unproductive.


Flying for months at a time in the stratosphere, Zephyr offers game-changing capabilities that will be transformative for mobile connectivity and earth observation.

As a payload agnostic platform, Zephyr can transform into a multi-functional tower in the sky to provide low latency 5G direct-to-device mobile connectivity services.


Agree. A better response to '80% chance that this is a known shoplifter' is for security to keep an eye on them. It should very quickly become apparent that it's a false positive (or not).


The problem is that this technology is not sold that way; it's sold as a way to detect shoplifters. It's also extremely important how the result is phrased and presented: even "80% chance that this is a known shoplifter" simply means "this is a known shoplifter" to a layperson. But even a 99.9% or 100% confidence might be wrong, so this isn't even an 80% chance: at best it's 80% times the statistical likelihood that a match at 100% confidence is not a false positive, and that can never reach a 100% chance.

The (psychologically) correct way to think of this is as a colleague making a claim and telling you how certain they are. But this tech is not sold as a colleague, it's sold as a machine that is better than humans. It's not a perfect super cop but that's how it is marketed and why people buy it.
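The gap between a system's reported confidence and the actual probability of a match can be made concrete with Bayes' rule. A minimal sketch (all numbers are illustrative assumptions, not vendor figures):

```python
def posterior_match(sensitivity, false_positive_rate, base_rate):
    """P(actual match | system flags a match), via Bayes' rule.
    All parameters are illustrative assumptions."""
    p_flag = (sensitivity * base_rate
              + false_positive_rate * (1 - base_rate))
    return sensitivity * base_rate / p_flag

# Even a system that flags 99% of known shoplifters with only a 1%
# false-positive rate gives mostly false alarms when known
# shoplifters are rare, e.g. 1 in 1000 shoppers:
p = posterior_match(0.99, 0.01, 0.001)
```

With those assumed numbers, a flagged person is a real match only about 9% of the time, which is why "keep an eye on them" beats "confront them".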


I am afraid the appeal-to-authority fallacy is one we as a society are about to be exposed to on a massive scale as a side effect of integrating AI into our daily lives.


The hardest-to-diagnose phone support I ever dealt with was an elderly relative in the DOS days trying to find another lost file...

"type in 'dir \documents' and press Enter. What comes up?".

"Nothing".

"Nothing at all?"

"Nothing happens".

After trying many variations of this, it eventually became apparent that he was pressing the Space Bar instead of the Enter key.


Surely consciousness is not black & white, but a scale. Rocks at one end & us(?) at the other.

There was a recent post suggesting that our thoughts are 98% unconscious.

It's not hard to imagine that there could be some animals with more than 0.01% consciousness. After all, we started at 0% conscious and evolved a little - other species are probably some way down that track.


> mind-bending D3

That's something that bugs me too. D3 is pretty amazing but is only accessible to coders.

> we’ve almost completely lost the art of offscreen rendering

markdown2.com is something I'm working on to address that. It lets you design markdown documents containing charts & diagrams, and then Save As HTML/PDF or SVG/JPG. There's also an API for generating them in bulk.


…with a headless browser, I suppose. Which is not ideal.


I am working on a zero-code content-rich document generator at https://markdown2.com/ It's a quick and easy way to generate & share content-rich documents in 30 secs. The sandbox https://markdown2.com/sandbox is a good demo of capabilities with some sample templates.

There's an API coming that will allow those customised docs to be generated at scale.


The point Marcus is making is that LLMs are solely next-word predictors, and intelligence requires far more than that.

LLMs will always have the same basic challenges - their 'kinda copy what a human has written' algorithm has hit its limit.

He is an advocate of symbolic learning. Getting that to work is a lot harder, but it's far more likely to result in genuine intelligence.
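To make the "next word predictor" framing concrete, here is a toy bigram model in Python. It is purely an illustration of predicting the most likely next word from counts; it is not how LLMs are actually built, and all names here are hypothetical:

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count which word follows which: a toy stand-in for the
    'predict the next word' framing, not a real LLM."""
    counts = defaultdict(Counter)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict(counts, word):
    """Return the most frequent follower of `word`, or None."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]
```

Whether scaling this idea up (with transformers instead of counts) can ever amount to intelligence is exactly the point of disagreement.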

