I can only say that this is awesome. I just spent $10 and a handful of queries to bootstrap an app idea I've had for a while.
The basic idea is working; it handled everything for me.
From setting up the Node environment, to creating the directories and files, patching the files, running code, handling errors, and patching again.
From time to time it fails to detect its own faults, but when I pinpoint them, it gets it right most of the time.
And the UI is actually prettier than anything I would have crafted in a v1.
When this gets cheaper and better with each iteration, everybody will have a full dev team for a couple of bucks.
Imho it's stunning, yet what is happening here is super dangerous.
These videos may already be, and certainly will be, too realistic.
Our society is not prepared for this kind of reality-"bending" media. These hyperrealistic videos will be the reason for hate and murder. Evil actors will use them to influence elections on a global scale, create cults around virtual characters, and deny the rules of physics and human reason. And yet, there is no way for a person to instantly detect that they are watching a generated video. Maybe there is now, but in a year it will be indistinguishable from a real recording.
Are Apple and other phone/camera makers working on ways to "sign" a video to say it's an unedited video from a camera? Does this exist now? Is it possible?
I'm thinking of simple cryptographic signing of a file, rather than embedding watermarks into the content, but that's another option.
I don't think it will solve the fake video onslaught, but it could help.
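Roughly what I mean, as a minimal sketch in Python (using the `cryptography` package; the key handling, metadata fields, and placeholder bytes are my own assumptions, and keeping the private key locked inside the camera's hardware is the genuinely hard part I'm handwaving):

```python
# Sketch: the camera signs the video bytes together with capture metadata;
# anyone can later verify with the manufacturer's published public key.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# In a real camera this key would live in a tamper-resistant secure element.
camera_key = Ed25519PrivateKey.generate()
public_key = camera_key.public_key()

def sign_capture(video: bytes, metadata: dict) -> bytes:
    """Sign the video content and its metadata as one payload."""
    payload = video + json.dumps(metadata, sort_keys=True).encode()
    return camera_key.sign(payload)

def verify_capture(video: bytes, metadata: dict, signature: bytes) -> bool:
    payload = video + json.dumps(metadata, sort_keys=True).encode()
    try:
        public_key.verify(signature, payload)
        return True
    except InvalidSignature:
        return False

video = b"...raw sensor output..."  # placeholder
meta = {"time": "2023-12-01T12:00:00Z", "gps": [52.52, 13.405]}
sig = sign_capture(video, meta)
print(verify_capture(video, meta, sig))         # True: untouched file
print(verify_capture(video + b"x", meta, sig))  # False: any edit breaks it
```

The signing itself is the easy part; distributing and trusting the public keys is where it gets messy, as the next comment shows.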
Cute hack showing that it's kinda useless unless the user-facing UX does a better job of actually checking whether the certificate represents the manufacturer of the sensor (the guy just uses a self-signed cert with "Leica Camera AG" as the name). Clearly cryptography literacy is lagging behind...
https://hackaday.com/2023/11/30/falsified-photos-fooling-ado...
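For anyone wondering how trivial the subversion is, here's a sketch along the lines of that hack (Python's `cryptography` package; the validity period is made up): nothing stops you from minting a certificate that claims to be from Leica, and a UI that only displays the subject name, without validating the chain to a trusted root, will happily show it.

```python
# Anyone can self-issue a certificate "from" Leica Camera AG.
# Only chain validation back to a trusted root would catch this.
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.ec import generate_private_key, SECP256R1

key = generate_private_key(SECP256R1())
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Leica Camera AG")])
now = datetime.datetime.now(datetime.timezone.utc)

cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)  # self-signed: issuer == subject, no authority involved
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=365))
    .sign(key, hashes.SHA256())
)
# Content signed with `key` verifies against this cert, and the cert
# proudly names Leica; only the (missing) trust chain gives it away.
print(cert.subject.rfc4514_string())  # CN=Leica Camera AG
```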
This is what I think every time I hear about AI watermarking. If anything, convincing people that AI watermarking is a real, reliable thing is just gonna cause more harm, because bad actors who want to convince people something fake is real would obviously use simple subversion tactics. Then you have a bunch of people seeing that it passes the watermark check and concluding it must therefore be real.
I agree it's probably a losing battle, but maybe one worth fighting. If the metadata is signed too, you can also verify the time and place it was recorded. Of course, this requires closed/locked hardware and is still possible to spoof. Not ideal, but some assurances are better than a future where you can't trust anything.
Nikon has had digital signature ability in some of their flagship cameras since at least 2007, and maybe before then. The feature is used by law enforcement when documenting evidence. I assume other brands also have this available for the same reasons.
Which take millions of dollars and huge teams to make. These take one bored person, a sentence, and a few minutes to go from idea to posting on social media. That difference is the entire concern.
We already have hate and murder, evil actors influencing elections on a global scale, denial of physics and reason, and cults of personality. We also already have the ability to create realistic videos - not that it matters because for many people the bar of credulity isn't realism but simply confirming their priors. We already live in a world where TikTok memes and Facebook are the primary sources around which the masses base their reality, and that shit doesn't even take effort.
The only thing this changes is not needing to pay human beings for work.
Instead of calling for regulations, the big tech companies should run big campaigns educating the public, especially boomers, that they can no longer trust images, videos, and audio on the Internet. Put paid articles and ads about this in local newspapers around the world so even the least online people get educated about this.
Do we really want a world where we can't trust anything we see, hear, or read? Where people need to be educated not to trust their senses, the very things we use to interpret reality and the world around us?
I feel this kind of hypervigilance will be mentally exhausting, and not being able to trust your primary senses will have untold psychological effects.
You can trust what you see and hear around you. You might be able to trust information from a party you trust. You certainly shouldn't trust digital information from unknown entities with unknown agendas.
We're already in a world where "fake news" and "alt-facts" influence our daily lives and political outcomes.
What I see and hear around me is a minuscule fraction of the outside world. To have a shared understanding of reality, of what is happening in my town, my city, my state, my country, my continent, the world, requires much more than what is available in my immediate environment.
In the grand scheme of understanding the world at large, our immediate senses are not particularly valuable. So we _have_ to rely on other streams of information. And the trend is towards more of those streams being digital.
The existence of "fake news" and "alt facts" doesn't mean we should accept a further and dramatic worsening of our ability to have a shared reality. To accept that as an inevitability is defeatist and a kind of learned helplessness.
Have you seen the Adam Curtis documentary "Hypernormalisation"? It deals with some similar themes, but on a much smaller scale (at least it is smaller in the context of current and near future tech)
One absolutely should not trust what you see and hear around you. One cannot trust the opinions of others, one should not trust faith; one can only reliably develop critical analysis and employ secondary considerations to the best of one's ability, and then be cautious at every step. Trust and faith are relics of a time now gone, and it is time to realize it, to grow up and see reality.
I wonder if we’ll eventually see people abandoning the digital reality in favor of real-life, physical interactions wherever possible.
I recently had an issue with my mobile service provider, and I was insanely glad when I could interact with a friendly and competent shop clerk (I know I got lucky there) in a brick-and-mortar store instead of a chatbot stuck in a loop.
Yeah, I think it's a real possibility that people will disconnect from the digital world. Though I fear the human touch will become a luxury only afforded by the wealthy. If it becomes a point of distinction, people will charge extra for it, while the rest are left pleading with brainless chatbots.
No, it's not. We are not at the stage where reality is completely indistinguishable from fiction. We are still in the uncanny valley. Nothing is inevitable.
This is like trying to hide Photoshop from the public. Realistic AI generated videos and adversary-sponsored mass disinformation campaigns are 100% inevitable, even if the US labs stopped working on it today.
So, you might as well open access to it to blunt the effect, and make sure our own labs don't fall behind on the global stage.
That is reality, that is nature. The natural world is filled with camouflaged animals and plants that prey on one another via their camouflage. This is evolution, and those unable to discriminate reality from fiction will be the casualties, as they always have been since the dawn of life.
The naturalistic fallacy is weak at best, but this is one of the weirdest deployments of it I've encountered. It's not evolution, it's nothing like it.
If it's kill or be killed, we should do away with medicine right? Only the strong survive. Why are we saving the weak? Sorry but this argument is beyond silly
Deception is a key part of life, and the inability to discriminate fact from fiction is absolutely a key metric of success. Who said "kill or be killed"? Not I. It is survival or not, flourish or not, succeed or not.
But why must the deception take place? Evolution is natural; the development of AI-generated videos takes teams of people, years of effort, and millions of pounds. Why should those who are more easily deceived be culled? Do you believe that the future of technology is weeding out the weak? Do you believe the future of humanity is the existence of only those who can use the technologies we develop? You might very well find yourself in a position, a long time from now, where you are easily deceived by newer technologies that you are not familiar with.
Deception takes place because it can. I'm not the gatekeeper of it; I'm just acknowledging it and some of the secondary effects that will occur due to these inevitable technologies. I don't believe the future is anything other than a hope. That hope will require those future individuals to be very discriminating of their surroundings to survive, and "all surroundings" includes all of society's information and socialization, because that is filled with misinformation too. All of it is filled with misinformation right now, and it will just get more sophisticated. That's what I'm saying.
Sure. No amount of perception will let you see the financing of al-Qaeda or al-Nusra soldiers. You can't perceive your way out of your blindness. You need to reflect.
It will also reinforce whatever biases we already have. When facing ambiguous or unknowable situations, our first reaction is to go with "common sense" or "trusting our gut".
"Uh, Is that video of [insert your least favourite politician here] taking a bribe real or not? Well, I'm going to trust my instincts here..."
And no big tech company would run the ads you're suggesting, because they only make money when people use the systems that deliver the untrustworthy content.
Isn't the whole point of OP that we're currently watching the barrier to generating realistic assets go from "spend months grinding Photoshop tutorials" to "type what you want into this box and wait a few minutes"?
I still don't really know why we're doing this. What is the upside? Democratising Hollywood? At the expense of... enormous catastrophic disinformation and media manipulation.
Society voted with its money. Google refrained from launching its early chatbots and image-generation tools due to perceived risks of unsafe and misleading content being generated, and got beaten to the punch in the market. Of course now they'll launch early and often; the market has spoken.
Of course; but this is the current society, and attempts to reform it, e.g. communism, failed abjectly, so by evolutionary pressure, a capitalist society dominated by market forces is the best that we have.
There's no evidence that this fearmongering over safety is actually correct. The worst thing you can do is pummel an emerging technology into the grave because of misplaced fear.
Spend any amount of time on mainstream social media and you'll see AI-generated media being shared credulously. It's not a hypothetical risk, it's already happening.
Even if you're not convinced that it's dangerous, at the very least it's incredibly annoying.
If someone dumped a trailer full of trash in your garden, you're not going to say "oh well, market forces compelled them to do that".
Tbh, sometimes I wish the same would happen across the whole of Europe.
This platform is out of control. It's threatening democracy if it keeps on publishing fake news en masse with no control whatsoever. Let the platform die; it was a good time. But it's time for something new that keeps society intact.
Sounds interesting. I can only imagine what can be done if they can increase the resolution further so one can target tiny cell patches in the human body/living organism. Maybe even stop internal bleeding or more.
Feels like sickbay on the Enterprise. Being able to do this always reminds me I'm old and this is the future :D
Contrary to what this press release implies, radiation treatments are focused too. The basic idea is that you use a rotating beam source with the center of rotation at the exact spot you want to treat. The beam delivers only a fairly modest dose to most of the surrounding tissues because of its continual motion, but it always passes through the center, so the cumulative dose in that area is far higher.
It's not perfect, and acoustic waves should have fewer side effects, but we already have the ability to selectively target a specific location inside your body.
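A toy model of that geometry, if it helps (numpy, with made-up beam width and grid size): sweep a narrow beam through the isocenter from many angles and count hits per grid point. The center is inside every beam; off-axis points are only covered by the few angles whose strip happens to pass over them.

```python
# Toy model of rotational beam therapy: the beam always crosses the
# isocenter, so dose piles up there while surrounding tissue gets little.
import numpy as np

half_width, n_angles = 2.0, 180              # beam half-width (px), beam angles
ys, xs = np.mgrid[-100:101, -100:101]        # 201x201 grid, isocenter at (0, 0)
dose = np.zeros(xs.shape)

for theta in np.linspace(0.0, np.pi, n_angles, endpoint=False):
    # Perpendicular distance from each point to the beam line through the
    # origin at angle theta; points inside the strip get one dose unit.
    dist = np.abs(xs * np.sin(theta) - ys * np.cos(theta))
    dose += dist <= half_width

print("dose at isocenter:", dose[100, 100])    # hit by all 180 beam angles
print("dose 50 px off-axis:", dose[100, 150])  # hit by only a handful
```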
I've had lithotripsy to destroy kidney stones. If the patches of cancer are smaller than that, then I can imagine it might be more difficult.
The trick with lithotripsy is that the stones have to be visible on a regular x-ray machine in order to be able to steer the beams of sound to the correct target. I don't know how you would detect the correct targets with cancer cells, at least not in an immediate feedback system that could be used by the doctors while the patient was on the table.
This is so sad to hear! I wish him all the best and hope he can recover.
I think he is one of the most influential people of the last decades, not only regarding GNU and Free Software but technology overall. Sadly, at the same time, lots of people underestimate his work and foresight.
He has really been and still is an inspiration for me. Really all the best to him!!
Because web developers are sometimes lazy and copy code and think it will and should work on all devices. It would take a whole `if` statement not to do it.
I would agree. Too often they are folding like lawn chairs. I push back against a lot of ideas, but I have my limits too. In the case of adding all of this invasive fingerprinting, it’s not really acceptable.