Can we regulate China's and the USA's secret AI programs?
Nuclear non-proliferation treaties work because the nuclear industry is harder to hide, and it is expensive to produce and maintain nukes you will probably never need anyway. Once we are able to negotiate with China and the USA to stop their secret programs, then we can entertain the idea of regulating AI research.
I wish we could regulate based on evidence and data, not on feelings of fear.
We can regulate some uses of AI based on facts, like not allowing "smart dudes" to sell AI therapists, medics, and lawyers as commercial products, because where there is greed there will be less concern for safety, and human supervision will not exist.
Maybe you are very average, or your tools are hard to configure, or you are forced to work on other computers daily.
I have bad eyes, so it would be stupid for me to use regular font sizes instead of bigger fonts.
I also use a zoom tool whose default shortcuts are easy to remember but require two hands, so of course I change them so I can use them with one hand.
Most keyboard shortcuts use Ctrl, so I remap it to a more comfortable position (it would be confusing for others to use my computer, but this is not a shared machine).
I also use TTS software; the default voice speed is fine for the average person, but I maxed out the speed and can get through things much faster.
I have used Windows, Mac, and GNOME, but KDE is my system because I can configure it. For example, the Windows zoom tool had no way to reconfigure its keyboard shortcuts last time I checked, and that tool was supposedly created by people who think about accessibility.
Side story: my son is different. I installed a program for him, and hours later I asked him about the experience. He complained that some keys were not set the way he was used to, but he managed; it did not occur to him to check and change the keyboard shortcuts. Some people just prefer to suffer mental and physical pain and adapt themselves to the tool instead of adapting the tool to themselves (GNOME users are the supreme example).
Showing ads for things you already own; and these tech companies want to track us even more because then the ads would supposedly be "better".
But when Sony shits on its users, I am not surprised they get a lot of shit back; greedy assholes.
As an opposite example, I allow GOG to send me promotional emails; because they are not a scummy company, I can tolerate looking through the email to see what is new or what is discounted when I am in the mood.
>How are you so sure these users are actually bots? Just because someone disagrees with you about Russia or China doesn't mean that's evidence of a bot, no matter how stupid their opinion is.
If the account is new and promotes the Ruzzian narrative by denying reality, I can be 99% sure it is a paid person copy-pasting arguments from a KGB manual; the other 1% is a homo sovieticus with some free time.
> If the account is new and promotes the Ruzzian narrative by denying reality, I can be 99% sure it is a paid person copy-pasting arguments from a KGB manual; the other 1% is a homo sovieticus with some free time.
I'm not as certain about that as you are. The last time the US had a presidential election, it seemed like almost half the country was either absolutely bananas and out of their minds, or half the country were robots.
But reality turned out to be less exciting. People are just dumb and spew whatever propaganda they happen to come across "at the right time". The same is true for Russians as it is for Americans.
I think it's mostly a timing thing. It's one thing for someone to say something dumb, but it's another for someone to say it immediately on a new account. That, to me, screams bot behavior. Also if they have a laser focus: like if I open a Twitter account and every single tweet is some closely related propaganda point.
If I am not allowed to share my observations, will I be allowed to just provide some links instead and let the community inform themselves? Or are links to real news events also not allowed?
OK, I will use Wikipedia links. My problem is with Ruzzians (the ZZ refers to the Russians who support the invasion and war crimes) making new accounts and commenting here; we should not let these people spread misinformation here, or bring bullshit like "Russia is as bad/good as the USA". At least they should use a regular, years-old account so they risk a ban, like I am risking my account when debating them.
>Hallucinations are less of an issue because you can just dump all the source data in, whole codebases can go in for stuff ranging from a refactor to ‘write documentation of the API’.
Is there no risk? I mean, say for testing purposes we give the AI a giant CSV file and ask it to turn it into JSON: is the chance of error 0%? Because today we need to double-check when we ask an AI to transform some data or some code. There is the risk of it messing something up, and if it is not something that would crash immediately, you risk introducing a ton of new bugs by asking an AI to refactor instead of using some good tools (see the sketch below).
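For a plain format conversion like that, a deterministic script avoids the problem entirely. A minimal sketch in Python using only the standard library, assuming a simple comma-separated file with a header row (the file names are placeholders):

    import csv, json

    # Read every row of the CSV into a list of dicts keyed by the header row.
    with open("data.csv", newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))

    # Write the same data out as JSON; the output is identical on every run.
    with open("data.json", "w", encoding="utf-8") as f:
        json.dump(rows, f, indent=2)

No model involved, so there is nothing to double-check beyond the usual edge cases (quoting, encodings, ragged rows).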
But when you ask a model to rely on just the input data, you are (mostly) trying to tap into its reasoning side, not its knowledge side. Obviously, the kind of magical part is that some knowledge is still needed for the reasoning, and the model has it. But LLMs small and large are pretty good at the in-context stuff. It is precisely what they're trained on, and in fact it was kind of a surprise how well they seemed to generalize outside of this task in the first place.
What about it? "ammo depo sabotaged" does not yield any meaningful search results. If it were confirmed, certainly The Telegraph and the New York Times would shout it from the rooftops.
The GPS jamming is classified by the Finnish transport agency as a "side effect of Russia's anti-drone activities":
"Jamming GPS signals over the Baltic Sea is “most likely” a side effect of Russia's anti-drone activities, Traficom, the Finnish Transport and Communications Agency, said today.
“The interference intensified when Ukraine's drone attacks on Russia's energy infrastructure began in January 2024,” Traficom said in a press release.
Estonia also blames Russia for the signal jamming, but the Finnish agency doesn't agree with the Tallinn government in defining the interference as a hybrid attack."
I am wasting my time responding for the sake of others who might read here; I am not sure how incompetent you must be not to find something with Google, or not to be aware of the sabotage that has happened.
But let me assume good faith and that you are a poor Ruzzian kid with a Soviet mentality due to bad parents/grandparents.
You can do a Google search, like putting this text in the input box.
Use Google and find more, though I am 100% sure that, like a good Soviet, you will blame the CIA and Israel for fabricating the evidence.
There is enough evidence with names and photos of Ruzzian agents and their movements. There is evidence from Russians that Putin tried to blow up apartment buildings in Russia to achieve his goals, and yet you act surprised that Putin could do something bad in Europe, as if his crimes were limited to Ruzzia and the ex-USSR territories.
Sinking a ship in a river to screw over a country is a typical Soviet thing to do, the same as abandoning a ship full of explosives in a NATO port and blowing it up months later. You need to know the Soviet mindset, and then you will not be confused about whether they could think up and act on such terroristic plans.
Yeah, and on HN, these accounts created a few seconds before they start defending a terrorist regime should be flagged, and the IP should be blocked for a month.
The guy might claim he does not want to lose karma for his support of a terrorist regime, but IMO if you support terrorists then you should be "alpha" enough that your ego can take some karma hits.
>But people expect updates and changes. Let's not talk about "incomplete games being finished through updates", games get updated all the time now. Even for physical copies. Games now often install on the hardware and check for updates on first run.
How so? I do not expect a developer to add more content to a finished game for free. And AFAIK, in the games I own, the content updates are paid DLC; I can only think of No Man's Sky as an exception that added more free content. In fact, the trend is to have 50+ paid DLCs and milk the players for at least a decade.
I am not an X user, but I remember Elon making a lot of noise about bots. Did Elon fix the bot issues? Or was all that noise an attempt to get a discount?
>Was he complaining about bots before he signed? Genuinely can't remember.
I can't Google it for you, so from my memory: he complained that he was tricked by Twitter because a large number of users were bots, and then he promised that he would fix this issue after he got control. Bots and trolls are probably good for business, since other social media like Reddit also make it super easy to create tons of bot/troll accounts and spam the network.
He signed the most iron-clad corporate acquisition contract you could, for a price far above market rate, right before the tech crash, and waived every single due-diligence right that normal investors include specifically for things like "oh, what if the financials/users aren't what they seem?" or "what if the stock price substantially changes?".
It was an utterly ludicrous agreement to sign (though far eclipsed by the number of financial institutions that agreed to put up the money for it and are basically never going to get it back; a bunch of the managers and executives losing their bonuses over it is kind of funny, though).
> he complained that he was tricked by Twitter because a large number of users were bots, and then he promised that he would fix this issue after he got control
How would making noise about it after the price was agreed be “an attempt to get a discount”? If anything, it was an attempt to get out of the deal.
(And if I remember correctly, the argument failed because he had been complaining about bots prior to signing. Either way, it has nothing to do with discounts.)
Well, during the time he was trying to pull out of the deal, when he was citing the bots as an argument for why he should be able to, he was also using that as leverage to try to get Twitter to agree to a discount: the idea being "I might be able to get out of this deal completely, but let me buy at a 20-30% discount instead and I'll go quietly." So it was kind of both an attempt to get out and an attempt to get a discount.
The bot stuff was pretty transparently not a good-faith argument from Musk; the real issue was more likely that the markets had gone down since the offer, and what was already an overpay on day one was now a big overpay. The same dynamic made Twitter determined to hold him to the original offer: shareholders had basically demanded that the board take the offer to begin with, and with the down market (not to mention Musk himself publicly running the company down during his efforts to escape the deal) it was that much better a deal than they could otherwise hope to get.
IIRC he tried to back out of the deal on the premise that Twitter had over-represented its value by under-reporting bots. He stated something like he wanted proof of non-bot users before proceeding. When it went to court, the judge ruled he could get more info from Twitter. I don't remember what came next; I think Elon quietly dropped the point, and the next set of rulings held him to the already-signed purchase agreement.
Citations:
"Elon Musk says Twitter deal 'cannot move forward' until he has clarity on fake account numbers" [1]
"Musk was seeking information on essentially all of Twitter's account reviews and actions. Judge McCormick dubbed that request "absurdly broad," noting Twitter has already agreed to produce a "tremendous amount information.""[2]
The OpenAI APIs for GPT and DALL-E have issues like non-determinism, plus their special prompt injection where they add stuff to or modify your prompt (with no option to turn that off). This makes it impossible, as a developer, to do research or to debug variations of things.
>While that's true for their ChatGPT SaaS, the API they provide doesn't impose as many restrictions.
The same issues exist with the GPT API:
1. Non-reproducibility is there in the API too.
2. Even after we run a moderation check on the input prompt, sometimes GPT will produce "unsafe" output, accuse itself of "unsafe" stuff, and we get an error, but we still pay for GPT's "un-safeness". IMO, if GPT produces unsafe output, then I should not have to pay for its problems.
3. DALL-E gives no seed, so nothing is reproducible, and there is no option to opt out of their GPT modifying the prompt, so images are sometimes absurdly enhanced with an extreme amount of detail or extreme diversity, and you need to fight against their GPT enhancement.
What extra option do we have with the APIs that is actually useful?
Respectfully, this just seems like a few reasons LLMs are frustrating at the moment. Having said that, there are indeed seed and temperature parameters in the chat/assistant API that enable (much stronger) determinism. The reason it's not 100% guaranteed to be deterministic is that they may run the model across different hardware, and small hardware-level numerical differences can accumulate.
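As a rough sketch of what that looks like with the official openai Python client (the model name and prompt here are placeholders): pass temperature=0 and a fixed seed, then compare system_fingerprint across runs to see whether the backend configuration changed.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat model works
        messages=[{"role": "user", "content": "Say hello in JSON."}],
        temperature=0,        # removes sampling randomness
        seed=42,              # best-effort reproducibility, not a hard guarantee
    )

    print(resp.choices[0].message.content)
    print(resp.system_fingerprint)  # if this changes between runs, the backend changed

It won't be bit-identical every time, but with the same fingerprint and seed you usually get the same output back.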
With regard to DALL-E: that's a fair complaint, and I didn't realize they don't have a seed in their API. You should really try switching to an open model if you can; you'll have complete control. I recommend flux-schnell or flux-dev.
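For what it's worth, a minimal sketch of seeded generation with flux-schnell via the diffusers library (model id, prompt, and step count are illustrative, and it assumes a CUDA GPU with enough memory); the same seed gives the same image back:

    import torch
    from diffusers import FluxPipeline

    # Load the open-weights schnell variant; no server-side prompt rewriting involved.
    pipe = FluxPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
    ).to("cuda")

    image = pipe(
        "a lighthouse at dusk",
        num_inference_steps=4,   # schnell is distilled for very few steps
        guidance_scale=0.0,
        generator=torch.Generator("cpu").manual_seed(42),  # fixed seed -> reproducible output
    ).images[0]
    image.save("lighthouse.png")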
You are not allowed to tell anyone how much you make, so you might be in trouble if you're found out, yet the companies share this info without your consent. From my POV, make it all transparent.
I have had a career spanning over 30 years at this point. I've worked in businesses with 6 employees and F500 corporations. No one has ever told me that I can't tell anyone else how much I make.