Hacker News new | past | comments | ask | show | jobs | submit | nomagicbullet's comments login

Thank you for introducing me to the Ventriloquist and Scarface. Absolutely bizarre characters. I love it.

I guess apparently the 2004 version did take after Tony Montana.

My initial reaction reading this (I had t heard the news) was disbelief. How could someone so accomplished and influential be driven to such despair over what seems to be an interpersonal workplace conflict? But the more I think about it, the more I get it. Many of us pour our identities into our work. It becomes more than a job. It’s our purpose, our sense of self. When that’s destabilized or taken away, the fallout can be devastating.

It’s a sad reminder of how vulnerable those who care deeply about their work can be in the face of toxic or unjust environments. Passion and commitment, when met with indifference or hostility, can push anyone to a breaking point.


>over what seems to be an interpersonal workplace conflict

Well, it started with an interpersonal workplace conflict, but ended up with him being forced to retire from his professorship and the entrepreneurial mentoring program he saw as his mission.


We have zero data or evidence to differentiate between "He was pushed out for being right about the ethics complaint" and "he was pushed out because he was wrong about the ethics complaint and did not take that well"

People are jumping to conclusions about who the aggrieved party is because they have a vague connection to the guy who made HowStuffWorks and zero connection to the counterparty. Especially on HN, people would rather believe that "the system is rigged against the brilliant individual" rather than "the individual can often be the problem".

Just as often as upper management is corrupt, a single individual who had NO PRIOR EVIDENCE OF NEGATIVE INTERACTION WITH ANYONE goes absolutely apeshit and attempts to destroy your life.

My mother is a well respected teacher. After about 4 years of working in a new school, one of the other teachers in her department seemingly got "triggered" and went utterly insane. He started fabricating ethics complaints, lying to administrators, and even went so far as to retain a lawyer to sue the school district to have her removed for his completely made up allegations. I read the complaint and I cannot believe a lawyer was willing to be paid to participate. It was pages of insane rantings, like manifesto level, full of misspellings and mistakes and made up entirely of outright lies. The internal investigation was terrifying, because it starts as "He said/She said". Luckily he was crazy enough to fill his allegations with things that were demonstrably disprovable with documentation, but without that, the school absolutely would have just let my mother go instead of fight it.

There was no "cause", no change in department policy that favored my mother over him, no change in pecking order, nothing. He just one day decided to go to war with her. He completely lost his connection to reality. Yet to his students, he continued to teach normally, and nothing seemed off.

We have no facts. We have no evidence. We likely never will. We should reserve judgement.


> Especially on HN, people would rather believe that "the system is rigged against the brilliant individual" rather than "the individual can often be the problem".

Slightly off topic question, but is this Silicon-Randian projection really particularly widespread on HN by and large or only for specific subset of topics?


Actually, my experience is, there is always a cause. But some causes could be very subtle or unexpected.


> It becomes more than a job. It’s our purpose, our sense of self.

Also, consider Brain made a quarter of 1 Billion dollars with the sale of his company. The man didn't have to work, it was his choice to work. I think this contributes even more to the feeling that work is your identity.


> Brain made a quarter of 1 Billion dollars

Incorrect. The company burned through venture capital for three years, then laid off 50% of workers and was sold in 2002 to vulture capital for ~$1MM with no cash trading hands, only a promissory note [1] to the previous underwater investors. No liquidity to founders.

The 2002 purchaser, Convex Group, scaled the company, took the company public via reverse IPO and sold it five years later to Discovery TV for $250MM. Seven years later, Discovery took a 82% loss, selling the company for $45MM.

[1] https://web.archive.org/web/20240717220914/https://genesis-c...


Oh interesting! The original article linked in the post really skips over all of those details and makes it seem like he sold the company for 250 million



[flagged]


That reading of the article is so poor it looks like you are trying to spin it intentionally.

The quote without your interpretation does not have any 'enormous ego' vibes.

One clear alternative interpretation is that he was being railroaded via office politics, wasn't equipped to deal with the hit to his image a firing would have, and didn't feel he had the energy to deal with it. No 'enormous ego' required.

Edit: were you involved in this? Your tone and interpretation made it seem so, and your username 'hulitu'... are you Dr. Li?

https://www.csc.ncsu.edu/people/hli83


FWIW Hulitu is a place.


The haughty projections about robots in the future with specific years seemed weirdly confident to me, too.


And yet we have them sooner.


Guess I need to pay more attention to the humanoid robots around me, and less attention to the fact that hotels offer less housekeeping services, restaurants offer smaller menus and less wait service, plumbers and electricians and other tradespeople have longer lead times…things that tend to happen when demand for work is increasing faster than supply of labor to perform the work.


Step one is telepresence for toil, allowing long distance labor arbitrage:

https://cybernews.com/ai-news/watney-robots-fold-your-laundr...

Step zero is purpose built robots, since the human form isn't necessary for most repetitive tasks:

https://www.aboutamazon.com/news/operations/amazon-robotics-...

What I was referring to though is that the Boston Dynamics dogs and Sony spaceman efforts of a decade ago have iterated faster than thought when those predictions were made:

https://cybernews.com/science/humanoid-robot-dance/

But here's an example where it's coming together, and this was the specific robot I had in mind when I said "they're here":

https://www.youtube.com/watch?v=Z3J250fr_V4


There are indications that the office space conflict was simply the last straw in a long line of interpersonal conflicts he had with other faculty.

For example, he frequently filed ethics complaints against other faculty after minor disagreements... putting their careers at risk over trivial matters.

He may have been well like by the internet but he was not well liked at the school, and the relative silence by faculty and students is pretty telling.


[flagged]


You do understand that the supposed genesis of this (in Mr. Brain's own words) was that he filed an ethics complaint about his boss because she decided to reassign some office space he wasn't using anymore?

In what world do you think that this sort of behavior is remotely acceptable? How would you feel if a co-worker filed a harassment claim against you (which could have serious career consequences for you) for moving their lunch bag in the office refrigerator? That's basically what he did. Being allowed to resign was letting him off easy; at most universities he could have been terminated for cause.

Again: Mr. Brain put a colleague's career in jeopardy over office space. Think about that really hard before you comment again.


> he filed an ethics complaint about his boss because she decided to reassign some office space he wasn't using anymore?

Where do you see that in[0]? In July, they wanted to take the EEP[1] (a program he ran for many years and was passionate about) meeting room for a new faculty member, and then somehow in September they decided to cancel the EEP program once he started complaining. He kept complaining until they ended up firing him (forced resignation) in October.[2]

[0]https://sites.google.com/view/marshallbrain/marshalls-last-e...?

[1]https://entrepreneurship.ncsu.edu/engineering-entrepreneurs-...

[2]"“You have three options: 1) Retirement, 2) Discontinuation, or 3) Separation. By continuing to argue I will take the path of "Discontinuation." "Discontinuation" means we will not renew your contract. By the end of business on Wednesday I will notify the university and the I&E team that the end of the semester is your last day. Everything ends at the end of the semester. To retire and to avoid "Discontinuation" you must send me your letter of resignation before the end of business Wednesday. If you prolong your argument I will make the "Separation" effective Wednesday, your email will be cut off, your office will be inaccessible, you will not finish the courses this semester. Everything ends Wednesday. To avoid an immediate separation you must not engage in the argument. If we agree to an amicable separation and you begin to argue later it will trigger an immediate separation.”"


NCSU receives federal funds. Do federal anti-bullying efforts apply to faculty as well as students? https://www.stopbullying.gov/resources/laws/federal

Do we know why the NCSU 14-year respected department head of ECE was replaced in 2023 by the person who later "exploded in fury" at Brain? https://ece.ncsu.edu/2023/department-head-dan-stancil-to-ste...

> After 14 years of outstanding leadership as head of our Department of Electrical and Computer Engineering (ECE), Dan Stancil will be stepping down from his position.


I’ve seen other very small bureaucracies who attract people who are great at politicking but are basically essentially evil and use the position for sadism every chance they get.


Wow, this is particularly disconcerting after seeing their No Retaliation clause in that thread. If they aren't willing to deal with ethics complaints in a non-prejudicial manner, then they shouldn't have such a policy.


somehow in September they decided to cancel the EEP program once he started complaining

The EEP program was not cancelled...Brain was simply removed from the program.

It is borderline gaslighting for him to claim that the EEP program was cancelled simply because he was no longer a part of it.

If he had filed an unmerited ethical complaint like this at a UC school like Cal or UCLA, he'd have been terminated for cause without the option of retiring gracefully (this actually happened to a professor at Berkeley while I was there).

Notably: it's been over a week since his death and none of his colleagues are defending him. There's no drama on campus from students despite the supposed injustice. The only people keeping this alive are a handful of people who liked his website. And that's because...there's more to the story than just the one-sided version of it people are parroting on the internet.


His response to everything is insane.

"Hey Brain, we want to use this space for <Thing>"

His response is to insist that they do not "need" the space, that "need" is a LIE, and that LIE requires an ethics inquiry!

What the shit? He then does the exact same thing for the email about them not recommending students for his program. "ABET says we aren't good enough, and your program is part of that, so we are going to go in a different direction" and again his reaction is insane!

"“Marshall - my colleague, my confidant, my advisor, my friend - you are over the line." He calls this retaliation!

Imagine going to your boss's boss, and nitpicking every single word of their communication to you, and then when he says "Hey uh you're a little out of your lane here" doubling down!

Now imagine believing it is unethical for your boss's boss to tell you that you are out of line! Imagine keeping your job after being such an unmitigated ass to an entire department and insisting you cannot possibly be wrong like human communication is some sort of logic system!

Sure is funny how much context has been removed from that email too!

Marshall Brain killed himself because he couldn't deal with perfectly valid college/educator administrative interaction! Poor guy must have lost his marbles.


> at most universities he could have been terminated for cause

Is that really true? In my country, there are employment laws to prevent the suffering of detriment post reporting of an incident, grievance, or violation, even if (especially if) the claim is found baseless.

It’s important for these regulations to exist because without them some people fear reporting true problems as they may lose their job, and the wrongdoers do more wrong in such a culture.

Processes should deal with baseless, frivolous, or even vexatious claims far in advance of any consideration of termination of employment.

But I’m clueless on US labor laws, apart from a general suspicion they’re relatively thin.


It wasn't over office space, it was over favoritism.

Are you being intentionally daft?


There is a specific accessibility setting to reduce motion and prevent videos from autoplaying on iOS.

Go to Settings > Accessibility > Motion and turn on "Reduce Motion"


Yeah this is super useful.

Also audio doesn’t autoplay on any websites. You have to interact with the page first for that to happen


> absolutely everyone hates doing their taxes and the whole shady process behind it all.

I like preparing and paying my taxes.

> The IRS should have to declare what they think you owe before you pay or submit any paperwork

Agree.


I agree, framing it as a mental model makes sense.

Here’s the issue: when a site rejects my password, I understand the potential reasons—wrong site, wrong account, or forgotten password update. But what does it mean when a passkey fails? How can I resolve this? Is it even fixable?

My lone attempt to use a passkey for login involved an unrecognized fingerprint authentication, leading to repeated failures and ultimately, a return to traditional passwords due to the opaque nature of passkeys.

For now, I’ll stick with what I understand.


> In a statement, the museum claimed that by “insidiously and maliciously [juxtaposing] the image of Michelangelo’s David with that of a model,” the publisher was “debasing, obfuscating, mortifying, and humiliating the high symbolic and identity value of the work of art and subjugating it for advertising and editorial promotion purposes.”


I very much liked this quote on choosing happiness:

> In the short term, you would be much happier if you accepted and admitted to yourself that the reason you don't have what you want is simply because you do not want it badly enough. The sooner you accept that, the happier you'll be. Then the next question is: Do you want to be happy or do you want to achieve what you want?


"Desire’s a contract you make to be unhappy until you get what you want." https://nav.al/desire


Am I missing something? The writer claims:

>The reason you don't have what you want is because you don't want it badly enough.

That is absurd. And I say that as someone who has nearly everything material I might want in life. Wanting something really, really badly doesn't ensure that you are going to get it.

Consider every failed startup. Those founders just didn't want it enough?

I don't get it, and I also don't get the second sentence:

>Then the next question is: Do you want to be happy or do you want to achieve what you want?

What if the thing you want to achieve is being happy? Why can't those be the same?

I feel like something is going way over my head here, but I'm not sure what it is.


I think the point is: if you truly want something, you'll get it or die trying. Being happy, then, is at some point accepting that you don't actually want the things you don't have yet.


That advice — given in many forms by many people — to me almost seems immoral to at this point. It manages to pack so much into a brief assertion, that people who don't achieve something just don't want it badly enough. It's meaningless and insulting to everyone who finds themselves in circumstances beyond their control, in whatever ways that might be obvious or subtle to other people not in the situation. It blames victims, it enshrines survivorship bias, and it relies on a sort of nondisprovable assertion, which in itself speaks volumes about its validity.

Of course, the author adds the end bit, about "do you want to be happy or do you want to achieve what you want", which ameliorates it a bit and makes it more accurate. But by the same token it makes it softer and murkier and less useful.

Then again, the whole essay is predicated by "these are simply the lies I tell myself to keep on living my life in good faith". I can understand where that statement is coming from, but then what use is it?

I appreciate the desire to put advice into the world, but too often I find it completely useless out of context. Or maybe even actively harmful. They're like Barnum statements that can be interpreted in all sorts of ways in any given situation.

I guess I just feel like we as a society need to stop taking advice from people based on their own personal philosophies, which have all sorts of self-justifying biases (regardless of how their life goes).


>Then again, the whole essay is predicated by "these are simply the lies I tell myself to keep on living my life in good faith".

I read the whole thing, and it seemed like the opposite of "the lies he tells himself."

Rather, it sounded like he was telling the truth as he sees it. I don't understand why he chose that title for his essay.


author here:

good liars deceive others great liars deceive themselves,

my goal was to call out that i do believe these are true, but i must acknowlege they are lies


It's good to remember that even if it seems like you have figured things out, there is still a lot left to learn, or even completely overturn.

Calling these "lies" is a way to keep a beginners mind, which is a path towards greater knowledge and discovery.


Self deprecating title to counterbalance life advice from a guy who is in his 20s

I'm not his target audience but I saw everything he wrote as true and wise


There is some nuance to the assertion that if one doesn't get something they didn't want it enough.

But, as pointed out in another comment, the idea is that you either get it or die trying. As long as you haven't given up, there's a chance you'll get what you want.

A more socially acceptable "spin" is to emphasize on perseverance. But, at least in my belief system, it's not only the physical act of perseverance that eventually gets you there, but also the mental aspect of actually really wanting it.

There are also many other nuances like people thinking they want something but instead they want something else (eg. people thinking they want a Lamborghini, but maybe they just want to be respected). And then there are cases where people want something but at the same time kind of want the opposite thing (eg. want a high status challenging job, but also want to live a peaceful and chill life.)

So yeah, with such nuances it's probably not a great idea to just randomly throw these ideas out there without some detailed follow up explanations, that part I agree with you.

But what I disagree is the "victim" part. Nobody is a "victim" of not reaching a goal unless they choose to be one. I mean, let's say for example a person fails to pass an exam because they had to take care of their sick mother -- well, good for them, because taking care of their mother is more important than the stated goal of getting good grades for the exam. Nobody is a victim here. Instead, everyone's free choices are respected. As long as we don't judge people for their lack of apparent, socially-acceptable "successes".


The idea of LLM-powered personal assistants sounds like a game-changer (eg. an AI buddy that not only helps with work tasks but also offers real-time advice and fact-checking during conversations)

Can't wait to see how this evolves. It has the potential of becoming as ubiquitous as a smartphone.


Sadly digital personal assistants are the biggest example of something that we've not yet figured out how to build safely given the threat of prompt injection.

If your assistant can perform actions on your behalf - even as simple as replying to an email - you can't risk that assistant being exposed to any potentially malicious text from an untrusted source that might contain instructions designed to subvert it.

So your assistant can't be trusted to summarize web pages. Or even to read messages in your inbox!

I wrote more about this problem - and provided a very disappointing partial proposed solution - here: https://simonwillison.net/2023/Apr/25/dual-llm-pattern/


It seems to me like there's sufficient value even if the assistant can't complete any actions fully (e.g., can compose but not send an email; or maybe not even compose). There's so much potential simply aiding with executive function: what to do, when to do it, acquiring any dependencies, handling partial work, helping break down tasks, and a great deal of potential with perception if the assistant is highly available and perceives what the human perceives.

(Rayban Stories could be pretty awesome for this, if they were hackable enough to actually prototype things on: https://hachyderm.io/@ianbicking/110833737363686936)

Imagine a personal assistant that was always ready to respond to the question "what should I do now?" – and of course enter dialog, not just dictate an action. That you could tell about all your tasks, but not just the tasks but also the _why_ of the tasks, giving it the chance to set or change something like a deadline on its own, or even simply discuss those deadlines.

Imagine you could co-develop a process with that assistant. Maybe there's times you like to do certain kinds of work... what are those? What features distinguish different kinds of work? If you have to do a certain kind of work, what do you need (time/place/mindset) to be successful? It doesn't need to be some magic algorithm, it can be a deliberative process that you engage in with your assistant, something conscious and explicit.

Maybe it helps both move through and construct to-do lists. You have an item on your list: either the item is very easy or the question is "what's the first thing you have to do to achieve that item?" – and the assistant has some idea (and can learn more) about what a good size of a task is for you personally. And now it's keeping this list of tasks and dependencies. It should be able to understand enough to mark subtasks complete if you complete the parent task. It can probably suggest items. If it has access to enough information – even if you have to put the information in explicitly – it can probably help you resume tasks by reestablishing all the context you need.

Like maybe all your assistant needs or should have is access to your clipboard (in and out), photos and screenshots, mic and speaker access (with a wake word), a library of notes and observations, and task initiation that isn't any more sophisticated than what you can do from a link (mailto:person?subject=...)


Completely agree - there's still a ton of interesting stuff we can explore with personal assistants if we're careful about it.

My concern is that prompt injection is the kind of vulnerability which you default to being vulnerable to if you don't understand it - so it's going to be really easy (and common) for people to build the unsafe assistants instead.


> So your assistant can't be trusted to summarize web pages

Under these circumstances people can't be trusted to summarized web pages either. Natural selection will weed out these "inapropriate" LLMs the same way inapropriate people are weeded out from e.g. companies by being fired. Models don't need to be perfect, just useful.


Unfortunately in this case they do need to be perfect. A model that reads a web page and then emails all of my private data to some attacker who put malicious instructions on that web page isn't useful.


So you are saying (human) personal assistants are not useful? I think many people disagree and most people would want to have one weren‘t it so expensive


couldn't you separate the agent responsibilities?

Couldn't you make it so the agent that summarizes your emails isn't the same as the agent that sends email, etc.



Sandbox/firewall it.


Fact checking has worked out so well in other places. I mean - social media sites and news outlets plus the groups that fund them being the primary funders of fact checking groups creates absolutely no conflict of interest at all.

Having some personal assistant developed by megacorp that helps me be a better worker for megacorp does sound game changing - but not in a good way.


Get ready for ultra personalized ads beamed directly into your brain.


And they don't even need to scan your eyeballs: https://www.youtube.com/watch?v=7bXJ_obaiYQ


I would get an extremely high value from an “Accountability Buddy” for ADHD. Has anyone seen anyone building this?

I think it would be pretty straightforward with basic memory and a good prompt.


Is there are Dreambooth equivalent for fine-tuning ChatGPT as there is for Stable Diffusion? I have to imagine that if we can add custom data to a DL text-to-image model, we should be able to do the same with a text-to-text one.

Edit to add: There are a number of Google Colabs for fine-tuning SD and I wonder if there are (or if it is technically feasible) to accomplish the same with other txt2txt models.



If you're running the text-generation-webui (https://github.com/oobabooga/text-generation-webui) it has the ability to train LoRAs.

It'll require a beefy GPU but I've seen some fun examples like someone training a LoRA on Skyrim books.


I own both Affinity Photo and Pixelmator Pro. I find the UI using Pixelmator to be more user friendly. This could be because I come from many years of using Photoshop and I found the way Affinity deals with layers and effects to be counterintuitive to me.

That being said, both products are solid and likely a matter of personal preference or specific feature needs.


Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: