It should be remembered that most aspects of culture developed because they have a purpose. In the case of cooking and eating good food, there are definitely practical benefits, largely psychological.
One part of it is about directing attention. If you cook for yourself, you pay more attention to what you're putting into your body, and in learning how different flavors come together you learn intuitions about taste and aesthetics. In directing your attention like this, cooking can also serve as a kind of meditation / mindfulness practice.
In knowing how to cook, you become able to cook for others, which is a very common way for people to connect. If a loved one is sick, making soup for them can make them feel loved and cared for, just as it can make you feel good about putting in effort to help them feel better. Especially with things you just have to wait out, like the flu, something like this is an excellent way of maintaining a connection. Conversely, in knowing how much effort it takes to make a good meal, you become more appreciative of meals others make for you.
And finally, in cooking with someone else you learn about them and about yourself, about subtle differences that you might not have encountered otherwise. In solving a relatively easy, low-stakes problem together, you gain a sense of closeness without much risk or cost.
Overall, cooking is a practice centered on ideas that are underappreciated by people too engrossed in "hustle culture" etc., so it's important to have it as a tool in today's world. Of course, everything that it provides can be found elsewhere, but these are the reasons it's so deeply ingrained in human culture. I think you would also struggle to find other things that give you all of the above, and more that I didn't go into, for so little investment. It's not that cooking makes you human or something, but cooking does help you to connect with a lot of the deeper parts of yourself that do.
> Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
I don't think an agent fighting in a video game really counts? There is quite a significant gap between an FPS and a missile launcher, and it would be a waste not to explore how these agents learn in FPS environments.
They intentionally included combat training in the dataset. It is in their Technical Report.
How can combat training not be interpreted as "principal purpose or implementation is to cause or directly facilitate injury to people"?
Do you believe the agent was trained to distinguish game from reality, and to refuse to operate when not in a game environment? No safety mechanisms were mentioned in the technical report.
This agent could be deployed on a weaponized quadcopter, or on Figure 01 [0] / Tesla Optimus [1] / Boston Dynamics' Atlas.
I think there's an inherent problem in the way we perceive AI bias. It's highlighted by Musk's claim to want to create a "TruthGPT" - we don't quite grasp the way humans handle concepts. When it comes to human thinking, "Truth" isn't really a thing.
Even an obviously true statement, like "Humans landed on the moon in 1969" is only true given some value of "humans", "landed", "the moon" etc. Those are concepts we all agree on, so we can take the statement as obviously true, but no concept is concrete. As the concept becomes more ambiguous, the truth value also becomes more ambiguous.
"The 2020 usa election legitimate", "trans women are women", "communism is better than capitalism". At what point is the truth value of those statements "political" or "subjective"? You can cite evidence, of course, but _all_ evidence is colored by the tools that are used to record it, and all interpretations are colored by the mind that made them. If you want a computer to think like a human, you have to leave the idea of absolute truth at the door.
> "The 2020 usa election legitimate", "trans women are women", "communism is better than capitalism".
Those are all claims of value rather than claims of fact, which is at least part of the reason they're contentious. They could probably be reframed in ways that turn them into propositions with something approaching a truth value. "Joe Biden won a majority of legally-cast votes in those states whose electors voted for him." "Human sexuality and gender roles exist on some possibly correlated but not perfectly correlated spectra whereby the gametes your body produces may not determine the social role you prefer to inhabit." "Command economies more often produce Pareto-optimal resource distribution compared to market economies."
Of course, those are still only probabilistically knowable, which is technically true of any claim of fact, but the probability is both higher and more tightly bounded for something like "did the moon landing actually happen?" As ChatGPT isn't human and can potentially do better than us with ambiguity, it could, in principle, actually give probabilistic answers. If it is, say, 90% certain JFK was killed by a lone gunman, it could answer that way 90% of the time and say something else 10% of the time, or simply declare the probability distribution as the answer to the question instead of "yes" or "no." Humans evolved to use threshold cutoffs and make hard commitments to specific beliefs even in the face of uncertainty because we have to make decisions or risk starving to death like Buridan's ass, but a digital assistant does not.
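To make that concrete, here's a toy sketch of the two options (the 90% figure and the `answer` helper are made up for illustration, not anything ChatGPT actually exposes):

```python
import random

def answer(p_lone_gunman=0.9):
    # Option 1: just declare the distribution instead of committing to a hard answer.
    declared = (f"P(lone gunman) = {p_lone_gunman:.0%}, "
                f"P(something else) = {1 - p_lone_gunman:.0%}")

    # Option 2: sample a hard answer in proportion to the confidence,
    # so across many askers the answers reflect the distribution.
    sampled = "lone gunman" if random.random() < p_lone_gunman else "not a lone gunman"
    return declared, sampled
```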
> As ChatGPT isn't human and can potentially do better than us with ambiguity, it could, in principle, actually give probabilistic answers
Surely it's the opposite? As ChatGPT isn't human, hasn't seen any video, visited any sites or had any experience of ballistics, and is simply inferring connections between "JFK", "gunmen" and "grassy knolls" and a question about probability from its model of human texts, it has no novel insight into the probability of a second gunman (but can hallucinate the probability on demand). You can get variety in answers by turning the temperature up, but the underlying distribution is the distribution of human writing on the subject included in its corpus, adjusted by answers rejected in training. And on a similar note, GPT is incapable of accepting or rejecting the legitimacy of a president as an emotional response because it has no emotions or even an internally consistent 'opinion' on presidents, but it is also very good at associating concepts like the 'legitimacy of the election' with semantically related statements like 'the majority of legally cast votes', so it absolutely can and will blur the boundaries between claims of fact [including those it has been taught to treat as false] and claims of value.
Projects like this try to get around the fundamental flaw in GPTs - namely that they do not have goals, plans, thought processes etc - without actually solving it, e.g. by having the AI write out its "goals" before continuing.
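Concretely, the "hack" amounts to something like this - a minimal sketch with a hypothetical `llm` callable, not any particular project's actual API:

```python
def agent_step(llm, objective, memory):
    # The "goals" are just more generated text prepended to the next prompt;
    # nothing in the model enforces that the completion actually follows them.
    plan = llm(f"Objective: {objective}\nState your goals and the next step.")
    memory.append(plan)
    result = llm(
        f"Objective: {objective}\n"
        "Plan so far:\n" + "\n".join(memory) + "\n"
        "Carry out the next step and report the result."
    )
    memory.append(result)
    return result
```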
But this is a hacky fix, and will never be reliable enough for consistent use. For that, more actual research is necessary, on how to simulate and model goals and trains of thought and have them interface with the world model provided by an LLM.
I feel like there's an implication here that the research should be in modeling architectures and training sets and other specialized machine learning. But there is research here: in natural language modeling of goals, plans, thoughts, processes, etc.
Obviously we don't know what paths will be most successful. But a path where critical drivers of AI (like goals) are modeled in a transparent and comprehensible manner seems like a very attractive direction to take. I'd much rather be able to read my AI agent's goals, plans, intermediate goals, self-analysis, etc., than have it all captured in a set of completely incomprehensible weights.
The AI would never say hello, but if you say hello to it, it will say hello back. Is that also a hack? Aren't you just describing everything about LLM behavior generally, not only something specific to goals/tasks? In that case, the nature of the thing is less interesting than the results we're able to get from it, and I wouldn't worry about this kind of purity test.
I mean, most people don't have the resources needed to build a model big enough that these types of behaviors emerge, so third-party add-ons are all we've got until Google/Microsoft/OAI drop something on us.
Part of the issue here is the massive amount of compute needed over what we're already spending. ToT is showing something like 10 to 20x the number of calls to get an answer, which, when you are compute-limited, is going to be a problem for deployment at scale. It's very likely we're going to have to wait for more/faster hardware.
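Rough back-of-envelope, with made-up branching numbers just to show where the multiplier comes from:

```python
b, d = 5, 2                  # candidate thoughts per step, number of reasoning steps
generate_calls = b * d       # one model call to propose each candidate thought
evaluate_calls = b * d       # one call to score each candidate
total = generate_calls + evaluate_calls
print(total)                 # 20 calls, vs 1-2 for a plain completion -> roughly 10-20x
```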
I think you're right about the framework-constructing.
The social games that make insincerity natural come more intuitively to neurotypicals, so they don't notice they're playing them. When they do, and try to analyse themselves, I don't think they get it right often, because they don't have to build up an understanding of said games from the ground up like we do.
This article isn't based on anything, really; it's some rules conjured out of nowhere to explain some anecdotes. Like you said, a framework for the falsehoods more than a tool to be more sincere.
My guess is it's free for now to gather users, and at some point they'll introduce a paywall. If it keeps working as well as it has so far, and remains private and free from ads, I really would not mind paying for this service.
Especially as it keeps the company's motives aligned with the users', i.e. providing good search results rather than showing as many ads as possible.
I've always thought computer science was the closest thing the real world had to magic, because the essence of software is always automation - you write the spell, and later you just have to invoke it and magic happens.
Whether the actual spell is written in arcane runes or Python or encoded as a language model doesn't matter; the essence is the same.
I think the semi-obsession we have with "correctness" in style is actually something programmers do a lot better than most other disciplines.
Arguing over indentation is functionally the same as arguing over Unix principles, or frameworks, or languages. The fact that we're always looking not just for a solution, but for the right solution, at every level of the process, is part of what makes programming so beautiful and enjoyable.
Law is actually a perfect example of the opposite; without that obsessive concern for correctness, the systems created are hopelessly confounding and difficult to navigate.