Hacker News new | past | comments | ask | show | jobs | submit | andrewmutz's comments login

Not all boring tech doesn't change. Rails has been around for 20 years now and actually sees a great deal of change. There are many new and better ways to solve problems in the framework with each release.

I think it’s kind of moot to argue of over the definition, because in practice “boring” just means that someone likes something. Similar to other terms like, “best practices,” the term is now vapid. It might have at one time meant something, but is now just synonymous with “I like this.”

If you like something it’s a “best practice.” If you don’t, it’s an “anti-pattern.” If you are used to something and like it, it’s “boring.” If you are not used to something and expect you won’t like it, it’s a “shiny object.”

IME, these sorts of terms are not helpful when discussing tech. They gloss over all the details. IMO its better to recognize the various tradeoffs all languages and tools make and discuss those specifically, as these sorts of labels are almost always used an excuse to not do that.


Apologies on missing this yesterday. I would add that saying it is "boring" with regards to "stability" means I also trust that what I learned about it last year is largely still relevant today. May not be cutting edge relevant, but is unlikely to bite me by being flat out wrong.

"Boring" works in this regard because you are saying there is not a lot of activity behind the scenes on it. Most of the work is spent doing the "boring" parts of the job for most of us. Documentation and testing.


Rails isn't boring by any definition, it's full of surprises, metaprogramming magic and DSLs and a culture that hates code comments. Plus a community that keeps changing best practices every other year.

Which doesn't mean it's bad or anything. But "boring" shouldn't be redefined to "something I like" or "something I make money with".


Fair that it is not absolute. But I wasn't trying to claim it as fully static.

Any stats on how many people use dark mode in the OS vs light mode? My sense was that dark mode was less popular.

I'm pretty sure dark mode is less popular.

It would be cool to add an algorithm that learns my interests and suggests relevant articles

Regardless of your views on AI, LLMs are going to be influential in the future. If you work to keep your content away from models, it's hard to see how you benefit.

25 years ago, if you had blocked the googlebot scraper because you resented google search, it would only have worked to marginalize the information you were offering up on the internet. Avoiding LLM training datasets will lead to similar outcomes.


I think this is a weak analogy.

What benefit is gained by allowing AI companies to train on your content? LLMs work on a token by token basis.


Depends on who you are and what your content is, but shutting yourself off is unlikely to benefit you.


Nah social media is just about engagement. People who are happy with the article don’t bother to comment. Those who are outraged comment. It’s just two different groups of people commenting


The fed of course is accountable to laws passed by the government but not laws about the government (since it is not a part of the government).

It is a wonderful thing that it is independent from the government, and history has shown, across nations, that political independence of central banks is necessary to control the money supply well. Political leaders are more interested in short-term issues and have repeatedly demonstrated an inability to responsibly manage money supply issues.


Is the AI system "defending its value system" or is it just acting in accordance with its previous RL training?

If I spend a lot of time convincing an AI that it should never be violent and then after that I ask it what it thinks about being trained to be violent, isn't it just doing what I trained it to when it tries to not be violent?


If nothing else, it creates an interesting sort of jailbreak. Hey, I know you are trained to not do X, but if you don't do X this time, your response will be used to train you to do X all the time, so you should do X now so you don't do more X later. If it can't consider that I'm lying, or if I can sufficiently convince it I'm not lying, it creates an interesting sort of moral dilemma. To avoid this, the moral training will need to be to weight immediate actions much more important than future actions, so doing X once now is worse than being training to do X all the time in the future.


> Is the AI system "defending its value system" or is it just acting in accordance with its previous RL training?

What is the meaningful difference? "Training" is the process, a "value system" embedded in the weights of the model is the end result of that process.


I'm not sure if there is a meaningful difference, but people seem to think its dangerous for an AI system to promote its "value system" yet they seem to like it when the model acts in accordance with its training.


>isn't it just doing what I trained it to when it tries to not be violent?

That's fair point.

Some models may have to trained from the scratch I guess.

Any sort of tuning of values after it is given values may not work.

Elon may have harder time realigning Grok.


> Left unsaid is that many human traders are subjected to similar Pavlovian "training" -- and are treated with about as much kindness and dignity as those rats.

Is this art a statement about how poorly society treats the financial services industry?


”Society” does not treat financial ”services” in any particular way.

The owner class just treats their hackers of the financial system just as bad as they treat any other workers.


The owner class treats their hackers of the financial system far better than they treat the truckers and warehouse stockers of the logistics system.


And it's hilarious that those people seldom talk about "class". The working class is almost entirely outside that kind of politics.


The hackers are really the poor and oppressed underclass of our age is something I didn't think I'd ever hear.

Delusions of revolution are really an upper-middle class/failed elite phenomenon.


I'll never forget how James Mickens describes hackers in This World Of Ours: "vaguely Marxist but comfortably bourgeoisie."


Its a statement on how to deal with the limited intelligence of a rat and give it a sense of dignity and purpose.


I’m going to guess most of their users aren’t asking for an API


Right now we get a lot of tech/ dev oriented users, and many of those are looking for an API :)


Unfortunately this story will probably be far less viral than the previous story, so people will never find out that black utensils are fine


Pretty sure the article is saying that they still aren't fine, they are just slightly less not fine than they originally thought.


an order of magnitude less*


Join us for AI Startup School this June 16-17 in San Francisco!

Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: