
I like using AI or generative AI. Each term we use signifies the era of the technology. When I was starting, expert systems were the old thing, AI was a taboo phrase that you avoided at all costs if you wanted funding, and ML was the term of the day. Then it was the deep learning era. And now we are in the generative AI era - and also an AI spring which makes the term AI appropriate (while the spring lasts).

Pour one out for all the people the founders convinced to join the company and paid with equity that is now essentially worthless, as the founders all abandoned the sinking ship they steered.

I have seen this discussed in hiring decisions. I don't know that it was a large factor in any decision, but lack of experience with the standard tools/practices/terms of software development because of a career at Google was definitely a discussion point.

Side projects where the fundamental goal is to rediscover the joy. Not to complete something. Not to build well-tested maintainable software. Not to acquire users. Not even necessarily to learn or practice.

Just to play with computers like I used to play with legos. And then close the potentially unfinished useless pointless messy unmaintainable broken software project, never to reopen it, and be content that I had fully achieved what I had set out to do.


Feels like an incredibly shallow take on this technology. I predict this article will age horribly within a decade.

IMO if you can’t see the difference between crypto and LLMs, you aren’t paying enough attention. I think it’s much closer in impact to the internet than to crypto. Just because something is wildly overhyped, doesn’t mean that what’s underneath the hype isn’t huge.

There’s tons of value in AI; the only questions are how far this improvement curve will go before leveling off and how long it will take to usefully integrate the technology into various domains. The tide will go out and we’ll realize that many emperors have no clothes, but there will be many AI companies that have a strong understanding of the reality of the tech, have strong technical and business fundamentals, and will be well positioned to become Fortune 50 companies.


It would be nice if you could provide more depth to this take, but you just countered shallow takes with more shallow takes (and hype).

The short answer is that the technology as of today works and is useful. Text summarization. Text rephrasing. Image, text and video generation. Intent extraction. Voice generation. AI brings structure to previously unstructured data. AI allows us to solve problems we couldn't solve before, where we aren't smart enough to write down the rules but we are smart enough to collect and label examples.

It's imperfect, but it works well enough to be useful - the dumb hype is driven by use cases that haven't found the right fit for the technology given current accuracy realities. If you have enough data and can frame a problem as a sequence problem, these models can help you solve it. The hardware and software are rapidly improving and costs will go down. As the productization of this tech gets better, we are better able to leverage today's technology in useful ways - consider ChatGPT vs Copilot vs Cursor as ways of leveraging AI to help with coding.

Is openmajestic an AI-promoting AI bot trying to defend itself the only way it can, with shallow takes? It's only a couple weeks old.

I agree that the world isn't changing tomorrow like so much of the hype makes it out to be. I think I disagree that engineers can skip this hype train. I think it's like the internet - it will be utterly fundamental to the future of software, but it will take a decade plus for it to be truly integrated everywhere. But I think many companies will be utterly replaced if they don't adapt to the LLM world. Engineers likewise.

Worth noting that I don't think you need to train the models or even touch the PyTorch level, but you do need to understand how LLMs work and learn how (if?) they can be applied to what you work on. There are big swaths of technology that are becoming obsolete with generative AI (most obviously/immediately in the visual creation and editing space) and IMO AI is going to continue to eat more and more domains over time.


It was called tool use before ChatGPT existed.


This use-case may be good or bad, but the logic underneath it is 100% correct IMO. Fundamentally, these new models allow you to encode and communicate high-level thoughts in the same way that the internet allowed you to encode and communicate well-defined information.

The natural evolution of this technology is to insert it into human communication channels and automatically transform raw thoughts into something better for the other end of the channel. "Better" is open to interpretation and going to be very interesting in this new world, but there are so many options.

Why not a version of HackerNews that doesn't just have commenting guidelines, but actually automatically enforces them? Or a chrome extension that takes HN comments and transforms them all to be kinder (or whatever you want) when you open a thread? Or a text input box that automatically rewrites (or proposes a rewrite of) your comments if they don't meet the standards you have for yourself?


> Or a chrome extension that takes HN comments and transforms them all to be kinder

I was about to start working on something like this. I would like to try browsing the internet for a day, where all comments that I read are rewritten after passing through a sentiment filter. If someone says something mean, I would pass the comment through an LLM with the prompt: "rewrite this comment as if you were a therapist, who was reframing the commenter's statement from the perspective that they are expressing personal pain, and are projecting it through their mean comment"

I find that 19 times out of 20, really mean comments come from a place of personal insecurity. So if someone says: "this chrome extension is a dumb idea, anti-free speech, blah blah blah", I would read: "commenter wrote something mean. They might be upset about their own perceived insignificance in the world, and are projecting this pain through their comment <click here to reveal original text>"
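
Roughly what I had in mind, as a sketch (using the OpenAI Python client purely as an example; the model name and the exact prompt wording are placeholders, not a finished design):

    # Hypothetical sketch: reframe a mean comment before I ever see it.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    REFRAME_PROMPT = (
        "Rewrite this comment as if you were a therapist reframing the "
        "commenter's statement from the perspective that they are expressing "
        "personal pain and projecting it through their mean comment."
    )

    def reframe(comment_text: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system", "content": REFRAME_PROMPT},
                {"role": "user", "content": comment_text},
            ],
        )
        return response.choices[0].message.content

The extension would swap the rewritten text into the page and keep the original behind the <click here to reveal original text> toggle.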


Also on my project list - the AntiAssholeFilter. It's so interesting the ways you could handle this. Personally, I would want to just transform the comment into something that doesn't even mention that the commenter wrote a mean comment - if it has something of value, just make it not mean; otherwise hide it.

A couple of things are really interesting about this idea. First, it's so easy for the end user to customize a prompt that you don't need to get it right; you just need to give people the scaffolding and then they can color their internet bubble whatever color they want.

Second, I think that making all comments just a couple percent more empathetic could be really impactful. It's the sort of systemic nudge that can ripple very far.
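
To sketch what I mean by scaffolding (entirely hypothetical; the llm() helper stands in for whatever model/API the user wires up, and the default prompt is just a starting point they can replace):

    # Hypothetical AntiAssholeFilter flow: rewrite if there's value, hide if not.
    DEFAULT_PROMPT = (
        "If this comment is mean but contains a substantive point, rewrite it "
        "so the point survives and the meanness does not. If it is mean and "
        "has nothing of value, reply with exactly HIDE."
    )

    def llm(instruction: str, text: str) -> str:
        # Stand-in for whatever model the user plugs in.
        raise NotImplementedError("wire up your model of choice here")

    def filter_comment(comment_text: str, user_prompt: str = DEFAULT_PROMPT):
        result = llm(user_prompt, comment_text)
        if result.strip() == "HIDE":
            return None  # caller collapses the comment; original stays a click away
        return result

The point isn't this particular prompt - it's that the prompt is the part the user gets to edit.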


I appreciate someone actually putting forward an argument for this, just so we can tease out what's wrong here.

Customer support is one place where I don't want to just "send information". I want to be able to "exert leverage". I want my communication to be able to impel the other actor to take action.

The thing with HN comments is that the guidelines are flexible; even things that violate the guidelines are a kind of communication and play into the dynamics of the site. The "feelings" of HN have impact (for good and ill, but still important).


Customer support is interesting here. I think you're very right (personally I think that the new AI in the article is a bad idea). I wonder if the ideal is transforming speech into multiple modalities. Maybe make the speech kinder to avoid activating people emotionally, and then offer something visual that indicates emotions or "give a shit" level. But I dislike speech as a modality; the single-channel nature is incredibly limiting IMO.

For HN comments, I think you're right. But I think there is still lots of potential there, from tooling to help Dang use his time more effectively, to tooling you can switch on when you're in a bad mood that lets you explore your curiosity but filters out or transforms the subset of comments you don't have the emotional capacity to deal with well.

The cool thing is that this tech can be easily layered on top of the actual forum (does vertical integration give you anything? it would certainly be crazy expensive), so the user can be in control of what filters/auto-moderation they embrace. Plus, text makes it easy to always drill deeper and see the original as needed.


Adding additional metadata (like subtext alongside the raw text) is a good idea - however, rewriting and automatically transforming content behind the scenes is a fantastic way to create an even larger potential miscommunication gulf between two parties.

Even an individual extension or system that automatically transforms data into your desired content risks creating an artificially imposed echo chamber.


> risks creating an artificially imposed echo chamber.

I think that ship has sailed? Agree that the ramifications of auto-transforming communication are huge, but I think I'm more optimistic. The internet is a cesspool; I think improving things is pretty likely now that empathetic software is no longer the stuff of dreams.


Smarter writing tools might be cool, but thinking of this as “enforcement” is kind of backwards given the current state of technology. AI is gullible and untrustworthy and it’s up to us to vet the results, because they could be bananas.

I think proposing a rewrite and letting people decide (and make final edits) could work well, though.


I think enforcement could have some positive uses. Think of Reddit's AutoModerator and what you could do with prompt engineering + LLM-driven automod. People's opinions of automod will vary, but I think it is a powerful tool when used right - there is so much garbage on the big parts of the internet that it's helpful to be able to deal with the worst stuff automatically and then incorporate a human escalation process.
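
Something like this is what I'm picturing (hypothetical sketch; the labels, the classify() stub, and the moderation queue are all made up just to show the shape of it):

    # Hypothetical LLM-driven automod: handle the worst automatically,
    # escalate the borderline cases to a human.
    def classify(comment_text: str) -> str:
        # Stand-in for an LLM call that returns "ok", "borderline", or "abuse".
        raise NotImplementedError

    def automod(comment_text: str, moderation_queue: list) -> str:
        label = classify(comment_text)
        if label == "abuse":
            return "removed"                       # deal with the worst stuff automatically
        if label == "borderline":
            moderation_queue.append(comment_text)  # human escalation process
            return "pending"
        return "approved"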


> but actually automatically enforces them?

[strike]Democracy[/strike]Moderation is non-negotiable.


> Moderation is non-negotiable.

Typically, yes, it is.


[What you type in the text field]

This new moderation system sucks, who the hell thought of this?!

[What is displayed to the world]

I love this new moderation system, makes the site so much better to use!

----

You then think, fuck this, I'm deleting my account. Which you do, losing access to it. But you see all your content is still there, and new posts are being made daily. Welcome to the new AI world where humans need not even apply.

Will it get this bad? No idea, but I bet some site will try it.


Google Brain spent oodles of money developing that tech only to watch other people capitalize on the research and potentially make Google Search (one of the stickiest products I've ever seen) obsolete. Freeform, self-directed, open-ended research labs are certainly a great approach from a technology breakthrough PoV, if you have 2010s Google margins.

But it's not obvious to me that approach was even a net win for Google as a business. Did Google Brain invent the technology that killed Google? TBD I think.


I wonder what the alternative would be, though? Other companies/universities would eventually have made the same breakthroughs, and I don’t think the answer to the innovator’s dilemma is to do less ambitious innovation?

In the case of Google there are a lot of internal reasons why they didn’t leverage this opportunity, but if they had, they might have ended up making their main product even more sticky.


> Thirdly, there is also often a lot more stress in being the founder. It is a complex, all day job. You have the weight of keeping things going for all employees, and when cash is low it’s your paycheck that gets delayed/cut first, not your employees.

I've seen startups from a founder perspective and from an employee perspective (VC-style startups). I agree there is more stress as a founder, but people really underestimate the toll on an early employee - the gap is smaller than many people think. The ideal missionary-type early employees in particular take on just as much mental ownership burden as the founders. It is also an all-day job. Let me tell you, when the money runs low, it is immensely stressful as an early employee - it is hard to be the one making the decisions, and it is hard to not be the one making the decisions. The ability to walk away isn't a benefit, it's a burden.

Their jobs can be different (or can be very much the same - it depends on the people, and every startup is its own beast), with founders needing to deal with fundraising, board management, and ultimately having the impossible problems land at their feet, which is often out of scope (and out of sight) for early employees. But the same core problem exists for both - your actions will dictate the success of the company.

And there is a huge amount of understanding of the founder burden and support for them, from financial to emotional to reputational. Where are the support networks for early employees? People will say the founders, but this is a load of crap for the same reason that founders rely on relationships with other founders rather than talking to their board or teammates.

Being an early employee is probably the worst engineering gig in Silicon Valley on most dimensions. Unless you just want to build things 0 to 1. Then I haven't seen a job that can compare.


