If used as an agent, given access to execute code, search web, use other form of tools, it could do potentially much more. And most productive usecases require access to such tools. If you want to automate things and get most of the modeel, you will have to give it ability to use tools.
E.g. it could have been trained to launch a delayed attack if context indicates it has access to execute code and given certain conditions, e.g. date, or other type of codeword that is input to it.
So if a malicious actor gets to a certain stage with an LLM where they are confident it will be able to reliably run this attack, all they have to do is open source it, wait for enough adoption and then use some of those methods to launch such attack. No one would be able to identify it since the weights are unreadable, but really somewhere in the weights this attack is just hiding and waiting to happen given correct pathway triggered.
But if it's specifically trained to react to a date in its context, it seems very doable. Or to a combination of otherwise seemingly innocent words or even a statement or topic. E.g. a malicious actor could make some certain notion go viral and agentic LLMs integrated with news headlines might react to that.
It seems like it would be very arbitrary to train it to behave like this.
Most agentic systems would provide a date in the prompt context.
For simplicity sake imagine a scenario like:
1. China develops LLM that is by far ahead of its competitors. Decides to attribute it to a small start up, lets them open source it. The LLM is specifically designed to be very efficient as being an agent.
2. Agentic usage starts to get more and more popular. It's very standard to have current todays' date and major news headlines given to the context.
3. The LLM was trained to given a certain range of date and certain headlines being provided in its context to execute a pre-trained snippet of code. For example China imposing a certain type of tariff (maybe I lack imagination here, and there can be something much more subtle).
4. At that point the agentic system will attempt to fish all data it can from all sources it's being ran within.
Now maybe it's not very practical, and it's extremely risky with current state of the LLMs. I don't think it's happening right now. And China has a lot of other tech available to it already that they could do much more harm (phones, robot vacuums), but I think there's still at least potential attack vectors like this and especially if the LLM became very reliable.
Decision of where to allocate money creates value. Imagine you have 500k to allocate. You can choose to invest in A or B, after analysis you realize A is a failing business, but B with 500k invested will create enormous value by producing product C. If you didn't allocate in B this company wouldn't have had enough money to produce C and succeed.
Yes, but this can still be true if the system works in a perverted way.
In your example, you should include how many $ go to the entity doing the allocation. If this is an insane amount of money, then maybe we are better off without finance and just figure out the optimal allocation in some other way.
But who will do that other way of optimally allocating?
To take a simple extreme edge case, let's imagine that there's a business that requires 500k to truly flourish, but if it gets that 500k, it will become $10B company within 5 years. If it doesn't get the 500k it will likely fail.
Only 1 person has noticed and believes that this business would have that potential. And let's say they either have 500k or they manage to convince others to get the 500k, how much of that $10B would they deserve?
There are many problems with individual ownership though. It is a whole large system where people constantly change. You need to have multiple owners and redundancy otherwise all the projects are dependent on one individual who might quit any time. Things happen in the past, people make mistakes and you start to incorporate processes to avoid it because people are and will always be imperfect, you end up with thise processes and bureaucracy.
They care about the wrong things. Ultimately everyone cares about something and then there is tons of things no one or that anybody doesn't care from simply because you have limited amount of possible care to give.
It's another country's culture. It's really hard to holistically judge what is right or wrong. If people want to be a workaholic, then I can't really judge their lifestyle.
That said, yes. If this is pressure from their society, they probably should revisit those mindsets. Especially when the birth rate right now really can't afford a higher mortality raet. Fortunately they are starting to in some sectors.
In elementary school I had a girl in class who the other girls made fun of. There was nothing physical. Boys kind of made fun of her as well, but what really stuck to her was the other girls. She did therapy, but even her therapist told her that she is a hopeless case. Which is obviously extremely unprofessional and terrible. She ended up taking her life in her 20s. It was just mental bullying by peers. It is very sad to think back at the time. There was absolutely nothing wrong with her to deserve this bullying, and peers did it as some sort of self esteem popularity type of thing.
I do remember school being this survival of the fittest type of thing as well. Some were naturally good at it, others not so much, different people handled it differently.
Depends how rest of his career experience has been. He was an engineer and he has been a founder. If the product is technical it very well makes sense for me that he is well beyond junior or mid level depending on overall experience.
I seem to have the same thing. I can't get double images or any tricks at all.
I tried putting folded paper between my eyes to divide the images in such a way that I would only see left side with my left eye, and right side with my right, and I can alternate between image from left and image from right, but I can never see the image at once, or right side when I'm using my left eye.
E.g. it could have been trained to launch a delayed attack if context indicates it has access to execute code and given certain conditions, e.g. date, or other type of codeword that is input to it.
So if a malicious actor gets to a certain stage with an LLM where they are confident it will be able to reliably run this attack, all they have to do is open source it, wait for enough adoption and then use some of those methods to launch such attack. No one would be able to identify it since the weights are unreadable, but really somewhere in the weights this attack is just hiding and waiting to happen given correct pathway triggered.
reply