
Video demonstration (Whoosh Bottle Rocket by RamZland): https://youtube.com/watch?v=Lq_6-0Ra4Hk&t=55s

Thought experiment: what is the worst thing the employer could do, assuming a 2FA email setup?

Would it be “just” learning the password and taking a screenshot of the inbox and any open emails, or is it relatively easy to look at more?

Edit: right, also any email that gets written.


The risk isn't so much your employer gaining access to your email (though you may be risking the contents of emails you view from that machine getting saved and accessed by someone at the company). It's more that you've legally entangled things. If your employer is sued or investigated, a judge can issue a subpoena for them to turn over records. If those records show that employees accessed external accounts from work systems, the other side can now get a subpoena to access those accounts and any other devices that have accessed them.

I've seen this happen to friends. Their employer got sued and, as part of discovery, they had to hand over all of their personal devices because they hadn't kept church and state separate. It took them many months and significant legal expenses to get their stuff back. If you never access personal stuff from your work devices and never access work stuff from your personal devices, you'll never be in that position.

2FA only protects login. If you're already logged in, someone with access to the computer can just copy the session token, or instruct the already-running email client to dump all your emails to a local file.
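
A minimal sketch of the session-token point in Python, assuming a hypothetical cookie name and mail host (neither is from a real service):

    # Sketch: a session cookie copied from an already-authenticated browser
    # is accepted by the server as-is; the second factor never comes up.
    # "session_token" and the host are hypothetical placeholders.
    import requests

    session = requests.Session()
    session.cookies.set("session_token", "value-copied-from-the-browser")
    response = session.get("https://mail.example.com/inbox")
    print(response.status_code)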

> how different Coq users use Coq differently and express different needs

> applying Coq to do software verification

> encourage others to learn and use Coq

To be clear: this is why some people giggle when they read or hear the above.


If it silently crashed and started to output a static number, would this affect any systems negatively?

I asked this last time about the one in London, and was told that one of the checks is that the image has changed since the last run. Otherwise the data isn't used.

This prevents not only technical issues but also attacks, like someone blocking the camera or putting a static photo in front of it.
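
A minimal sketch of what such a check could look like (hashing each frame and discarding it if it matches the previous one; the function name is illustrative, not from any real system):

    # Only feed a camera frame into the entropy pool if it changed.
    import hashlib

    last_digest = None

    def frame_is_usable(frame_bytes: bytes) -> bool:
        """Return True if the frame differs from the previous run."""
        global last_digest
        digest = hashlib.sha256(frame_bytes).hexdigest()
        if digest == last_digest:
            return False  # static image: crash, blocked camera, or a photo
        last_digest = digest
        return True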


Interesting - sounds like it would have some negative effect then. Thanks for sharing.

Now I wonder about periodic offsets. E.g. if the lights are off at night, or the skies are overcast in winter, does that skew the results in some significant way? I seriously doubt it, though.


Not really. It's just one source of randomness among many. The entire point of having multiple sources is that they are redundant; you don't need them all.

Surely not. If you're seeding a PRNG from multiple sources of entropy, you generally concatenate them. Or if you were limited in bytes you'd XOR them.

This is why, in an app, you might seed with timestamp and process ID and /dev/urandom, in case any of them happen to be non-unique or unsupported.
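
A minimal sketch of that pattern (hash-combining the sources rather than plain concatenation, but the idea is the same; the sources are just the ones named above):

    # Combine several entropy sources into one seed; a single stuck or
    # unsupported source degrades the seed but doesn't zero it out.
    import hashlib
    import os
    import time

    sources = [
        str(time.time_ns()).encode(),  # timestamp
        str(os.getpid()).encode(),     # process ID
        os.urandom(32),                # OS entropy
    ]
    seed = hashlib.sha256(b"".join(sources)).digest()
    print(seed.hex())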


A random number (existing entropy) XORed with a static number (the crashed wall) is still a random number, methinks.
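
A quick illustration of that claim (just a sketch; the constant stands in for the stuck wall):

    # XORing uniform random bytes with a fixed constant is still uniform.
    import os

    random_bytes = os.urandom(16)   # existing entropy
    static = bytes([0x42] * 16)     # the "crashed wall" value
    mixed = bytes(a ^ b for a, b in zip(random_bytes, static))
    print(mixed.hex())              # as unpredictable as the input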

Probably not, unless it is their only source of entropy.

To be fair, we don’t need a new product; a Tesla can do the same[1].

With AIs becoming more powerful and expanding to new areas, it makes even more sense to avoid businesses that are consistently user hostile.

I wonder if anti-Tesla protests and related bad PR will contribute to increased consumer awareness around the topic.

[1]: e.g. it can crash into a Wile E. Coyote style wall on Autopilot: https://www.youtube.com/watch?v=IQJL3htsDyQ&t=899s


If there’s one thing about AI, it’s that you cannot avoid it. The idea that individuals can just “opt out” of plastic, sugar, artificial ingredients, factory farms, social media and all the other negative externalities that corporations push on us is a fantasy that governments and industry push on individuals to keep us distracted: https://magarshak.com/blog/?p=362

On HN, people hate on Web3 because of its limited upside. But really look at the downside dynamics of a technology! With Web3, you can only ever lose what you voluntarily put in (at great effort and slippage LOL). So that caps the downside. Millions of people who never got a crypto wallet and never sent their money to some shady exchange never lost a penny.

Now compare that to AI. No matter what you do, no matter how far you try to avoid it, millions will lose their jobs, get denied loans, be surveilled, possibly arrested for precrime, micromanaged and controlled, practically enslaved in order to survive and reproduce, etc.

It won’t even work to retreat into gated communities or grandfathered human-verified accounts, because defectors will run bots in their accounts, and their Neuralink cyborg hookups and Meta glasses, to gain an advantage and approach at least some of the advantages of the bots. Not to mention, of course, that the economic power and efficiency of botless communities will be laughably uncompetitive.

You won’t even be able to move away anywhere to escape it. You can see an early preview of that with the story of Ted Kaczynski, the Unabomber (google it). While the guy was clearly a disturbed maniac who sent explosives to people, as a mathematician following things to their logical conclusion he did sort of predict what would happen to everyone when technology reaches a certain point. AI just makes it so that you can’t escape.

If HN cared about AI’s unlimited downsides like it cares about Web3’s lack of large upsides, the sentiment here would be very different. But the time has not come yet. Set an alarm to check back on this comment in exactly 7 years.


> With Web3, you can only ever lose what you voluntarily put in (at great effort and slippage LOL). So that caps the downside.

Nitpick: that's not considering how it has turbocharged and even commodified certain types of crime, such as ransomware.


When I asked it to deconstruct "Babbage"[1] I got "Derived from Babba's place". Some others:

- phonenose: The ability to detect sounds or voices through the nose

- legpc: Acronym for Laptop Easy Personal Computer

- gitls: A command in Git to list files

- housefreezing: The action of hardening a house with cold

- uncleftish beholding[2]: The act of viewing something that is whole

In any case it's fun to play with and the UI is nice too.

Note: the title looks editorialized; it's currently "A AI etymology deconstructor – can guess fake words", but the website just says "deconstructor.".

[1]: https://en.wikipedia.org/wiki/Charles_Babbage

[2]: https://en.wikipedia.org/wiki/Uncleftish_Beholding


We’re fine with “The Big Friendly Giant” and the Sahara Desert (“desert desert”); “big LLM” could join the family of pleonasms.

https://en.m.wikipedia.org/wiki/Pleonasm


When it's a different language it's fine.

Dismissed; Big LLM will live on along with Big Data.

Well, big data for me was always clear -- when data sizes are too large for regular tools (ls, du, wc, vi, pandas).

I.e. when pretty much every tool or script I used before doesn't work anymore and I need a special tool (gsutil, bq, dask, slurm), it's a mind shift.


All bios are short with two emojis. Is this project picking random text from hardcoded LLM output?

https://github.com/LukeDunsMoto/BioCringe/blob/4d554d2bcc772...
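
For illustration, the suspected pattern would be something like this (the list contents here are made up, not taken from the linked repo):

    # Hypothetical: pick a random bio from a hardcoded list of
    # pre-generated (LLM-written) strings.
    import random

    BIOS = [
        "Coffee lover. Dream chaser. ☕✨",
        "Building the future, one bug at a time. 🚀🐛",
    ]
    print(random.choice(BIOS))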


> This site uses cookies. Not the tasty kind, just the ones that make the internet work. Deal with it?

Nope, there should be a “reject non-essential cookies” button as well if the site is being served in the EU.

Or don’t use non-essential cookies; trackers used on the website and their cookies are not essential.

