> The risks in terms of privacy and copyright infringements are currently too high, State Secretary Alexandra van Huffelen (Digital Affairs) said in a draft proposal...
Seems reasonable to me.
> Van Huffelen added that she doesn’t want to write off generative AI use within the government entirely. She plans various experiments to see how government services can use the technology safely. The pilots should be ready by mid-2024, after which the government can draw up guidelines for the responsible use of AI. There will also be a training program for civil servants.
This is pretty much how I'd like the issue to be handled by governments. The technology is clearly potentially useful, but there is a lot we do not understand about it yet. Experimenting with it until we have a better understanding is the way to go, and starting mid-2024 is early enough. The article headline is slightly overdone, however: to me it suggests a permanent ban, while this is nothing of the sort.
I'm Dutch though, so maybe I'm biased.
As another Dutch person I'm actually disappointed that this is purely limited to generated images or text, given that the Childcare Benefits Scandal[0] also involved machine learning algorithms that encoded institutional biases against parts of our society (among many other structural issues, but let's focus on the one relevant to this topic).
But then again, I no longer live in the Netherlands so maybe I missed an earlier memo that the government decided to no longer do that kind of thing. I highly doubt it though.
I could be wrong, but afaik it involved "just" algorithms, not machine learning. That's a meaningful distinction, as traditional algorithms can be audited far more easily with regard to how they make their calls.
I know the Dutch government; I did a lot of dev work for them. These pilots mean a lot of taxpayer money.
Half of the pilots won't be completed and the other half won't be used.
There is no way that 6-7 months of experimenting will lead to different insights than we have now. It does mean a lot of expensive consultants and reports.
The only useful thing would be an on-premises solution, which I'm sure lots of startups are working on; not something these pilots will be able to manage in 6-7 months without a lot of luck (finding the right people) and cash.
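For illustration only, here is a minimal sketch of what such an on-premises setup could look like from the caller's side, assuming a self-hosted server that exposes an OpenAI-compatible chat API (as servers like llama.cpp or vLLM do); the endpoint URL and model name are placeholders, not anything these pilots have announced:

    # Hypothetical sketch: querying a self-hosted model over an
    # OpenAI-compatible HTTP API, so prompts never leave the
    # organisation's own network. Endpoint and model name are placeholders.
    import requests

    LOCAL_ENDPOINT = "http://llm.internal.example:8000/v1/chat/completions"

    def ask_local_model(prompt: str) -> str:
        resp = requests.post(
            LOCAL_ENDPOINT,
            json={
                "model": "local-model",  # whatever model the server hosts
                "messages": [{"role": "user", "content": prompt}],
            },
            timeout=60,
        )
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"]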
Yes, this seems like somebody seeing AI as an area they can "take over", complete with prestige and headcount.
In this case, every department of the government already has a legal team, a privacy and data use officer, etc., responsible for the use of any product. Why does this need to be centralised?
In my org, somebody got hold of 50 license seats for a popular code generating AI, and required users to fill in lengthy forms and provide regular feedback about their "use cases" and "experience". Somehow, this "pilot" never ends, because then their makework would end.
This becomes an issue if the departments are not consistent with one another.
Recently, a department at an American university was barred from opening new tenure positions for two years after it was found that its hiring process differed from the university's as a whole, in a way that could get the whole university into legal trouble. https://www.washington.edu/news/2023/10/31/university-takes-...
Getting ahead of it with a universal policy to set a minimum bar doesn't sound like the worst idea.
The real issue seems to be data governance, and people forgetting good data governance when they interact with AI. Does it matter whether you are asking ChatGPT to organize a list of PII or asking a random Reddit user to organize it? Both leak data to an unauthorized third party that can leak it elsewhere. If the model is entirely self-hosted and is treated like a database, so that any PII it is given means the model is treated as containing that PII forever (even if the average user can't find a way to extract it), then the privacy concerns are no different from those of any other self-hosted application that can store data. You will need to treat it as a data source where every user has either full admin access or no access, unlike a database, where you can delegate well-controlled partial access.
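To make that concrete, here is a minimal sketch of the all-or-nothing rule described above; the data-source names and clearance model are invented for illustration:

    # Because anything the model ingested might resurface in any completion,
    # a user may query it only if cleared for *every* source it has ever seen.
    # Source names are made up for illustration.
    MODEL_SOURCES = {"tax_records", "benefits_cases"}  # everything ever fed in

    def may_query(user_clearances: set[str]) -> bool:
        # Unlike a database, there is no row- or column-level access control.
        return MODEL_SOURCES <= user_clearances

    print(may_query({"tax_records"}))                    # False: no access
    print(may_query({"tax_records", "benefits_cases"}))  # True: full access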
On the copyright side, that seems more uniquely tied to current AI technology so the accusation of AI being a buzzword wouldn't apply.
Indeed, to me that's not a ban but a restriction on the use of possibly risky software until they have had time to draw up guidelines. Honestly, everyone working in a company or large organisation ought to have a similar stance: until we know for sure, don't use it.
It's not that hard to just not stick your sensitive data in. I find it absurd that people would even consider it.
I had a whole presentation today on how important it is to "scrub" your data before sticking it into a generative model, and I was just thinking "hell no", I'm not doing that: data that's supposed to be scrubbed shouldn't go anywhere near the model in the first place.
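For what it's worth, the "scrubbing" such presentations usually mean boils down to pattern-based redaction, roughly like the sketch below (the regexes are illustrative only; free-text PII such as names and addresses slips straight through, which is rather the point):

    # Naive regex-based "scrubbing" before a prompt leaves the organisation.
    # Patterns are illustrative; PII in free text is not caught, which is
    # why "just scrub it" is a weak control.
    import re

    PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "BSN": re.compile(r"\b\d{9}\b"),  # Dutch citizen service number
        "PHONE": re.compile(r"\b\+?\d[\d \-]{7,}\d\b"),
    }

    def scrub(text: str) -> str:
        for label, pattern in PATTERNS.items():
            text = pattern.sub(f"[{label}]", text)
        return text

    print(scrub("Mail j.devries@example.nl, BSN 123456782"))
    # -> "Mail [EMAIL], BSN [BSN]"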
It's reasonable - I think they just want to be able to get policy in place before we have politicians typing in state secrets like "How can I get Rutte to _____?" I wouldn't like it (also Dutch, hallo!).
It's a little bit more complicated than that: it was "the rules said so", and then they just used humans to check whether people had followed the rules. The government at the time was very hard on fraud after a couple of high-profile fraud cases involving immigrants, mostly Bulgarians falsely claiming welfare benefits.
Anyway, several ministers and other high officials pressured the welfare department to be hard on fraud, which led to the imposition of harsh fines for minor filing errors. The department also got into a bit of a witch hunt, where anyone being investigated for fraud was almost always presumed guilty. This, combined with the fact that most of the welfare recipients were not very educated, didn't speak Dutch all that well, and/or were under a lot of stress from being generally economically disadvantaged, led to some big fines levied against people who objectively didn't do all that much wrong. Several of them lost their homes or got deep into debt due to the fines and legal costs.
Simply claiming it was "the algorithm said so" skips over all the history about why the rules were written the way they were, and the human errors made along the way.
Yes, there definitely was, and it wasn't driven so much by the tech itself as by the need to be seen to be doing something about perceived fraud by minorities, which led to mass profiling and huge fall-out (still too little fall-out for those responsible, in my opinion). It was effectively racism under a thin veneer of plausible deniability provided by technology.
Tech by itself can be bad enough, tech wielded to achieve nebulous goals without any consideration for the human side of the effects is outright dangerous.
Most practitioners can tell you that naive anti-crime initiatives often end up targeting minorities. In many Western countries, minorities are significantly over-represented in crime and jail statistics (compared to the general population). Is that ipso facto racism or racial profiling?
This is exactly the kind of thinking that caused this whole thing to spiral out of control.
The little bit of fraud that was uncovered did not justify declaring a very large fraction of the Dutch population as fraudulent. I'm kind of surprised that your takeaway is exactly of the kind that has been proven to be utterly wrong and counterproductive during this whole affair. Yes, what you are advocating for is racism and racial profiling.
This being Dutch politicians dealing with technology it wouldn't surprise me if they completely miss the point. In fact by basically only mentioning ChatGPT they may have already.
You can do quite a lot of harm with very simple models, all you need is unclear rules, little oversight and no avenues for escalation. Which in fact did go wrong, terribly, plunging tens of thousands of households into bankruptcy. You don't need something capable of modelling human language to make things worse. At most a more powerful model can simply do more, but the risks are roughly the same.
And if neural networks writing obtuse language with little empathy following a complex set of rules are causing problems then I've got some bad news about Dutch officials.
This is a quick anecdote from California. A very recent paper by Google on the "AI Opportunity Agenda" starts out with very optimistic claims about AI for administering cities; the example given is "timing stoplights to increase traffic efficiency and reduce carbon emissions". Yet what is actually implemented now in the SF Bay Area is automatic license plate readers at the bridges 24x7, plus new Gov. Newsom-approved automatic speeding-ticket generators on major roads in the largest cities. Traffic cameras at intersections have been rolled out more and more over the last ten years.
How can an intelligent citizen hear "optimized traffic lights" in the industry white paper and see "automatic speeding tickets" in reality? What fraction of cars on the road adheres to the posted limit at all times? Let's be blunt: it is a money machine for local govt.
Other examples, like distributing public benefits or handling public appeals, could easily see this level of duplicity: one thing is said in PR, another thing entirely prevails in practice with citizens.
I claim this example is relevant to the Dutch concerns because a) the Netherlands is similarly diverse, crowded and high-tech, and b) national-level analysis has detected potential for abuse at the local government level on a wide scale.
It’s funny how language makes all the difference in telling stories like yours:
A vendor published a marketing flyer suggesting ways their new expensive technology could improve services typical of a potential customer.
The customer considered the marketing pitch in the flyer and found the ROI unconvincing in most cases, especially given that the technology remains largely unproven and would leave them at high risk of failure or cost surprises as an early adopter.
Nonetheless, they did see at least one place where the implementation was more mature and might pay for itself (perhaps even contribute revenue) while fulfilling an ongoing service mandate.
— —
Sounds like a dutiful government to me! Rather than being sucked into a flashy sales pitch by a vendor pitching an unproven and expensive product, they’ve stood firm and decided to mostly wait for others to absorb the early adopter risks, carefully stewarding public money.
That governments can sometimes do very stupid and irresponsible things doesn’t mean that they’re doing so every time.
I hate receiving speeding tickets as much as the next guy, but I am pro automated speeding tickets. I must be getting old.
- May encourage slower (safer) driving habits, fewer accidents
- Automates some police duties, frees law enforcement to deal with actual crimes
- City generates more money without raising taxes
These are all hypothetical, and obviously corruption and apathy pave over most of it, but in principle automated speeding tickets are a great use of "AI".
There seems to be a strange middle ground in which people think that speed limits should exist but that we shouldn't try too hard to enforce them, as if they just want a little bit of criminality, but not too much. If you are opposed to enforcing speed limits, then just get rid of them. We do not need "sometimes laws".
There is an expression in English: putting the cart before the horse. I think it applies here.
It is not "we wish things to be this way, so they are this way" .. the real world is not like that. For several thousand years, new technology like carts or horse-stirrups, have changed a balance of power between groups of humans who use those, and groups that do not; in this example, the cart for the agrarian, the stirrup for the hunting pack.
When new technology is implemented, the behavior of all humans at all times does not change instantly, nor does it change "because we want this". It changes in fact, in practice, over weeks or decades.
Right now, there is some agreement among humans driving cars. They drive in a certain style, on roads, with law enforcement and prices: costs for goods like tires or gas, and penalties like speeding tickets for exceeding a speed limit that is posted in a way considered fair and public... in most places.
When I learned to drive, the legend of the "hidden speed limit sign" was dinner table talk. The ability of a small town to generate revenue for its police force by waiting for people who were not fairly informed, or really just by the police lying, is common knowledge, because it is real.
Your comment makes it appear as if whole populations "want criminality", which is a conveniently simple, blame-the-driver point of view. It sounds fine in two sentences, but it does not hold up to deeper inspection of policy over time in real places. You might say "oh, we are not in some small town"; well, guess what: large cities have corruption, believe it. Whole countries have corruption, in fact.
This is the right decision for now. Government officials have access to lots of data that should not end up on OpenAI's servers. Long-term, they should have their own in-house solution, or a ministry of digital infrastructure.
I could understand a people deciding that it should be governed through purely human processes, and therefore that its government and civil servants must use only human brain power (and compassion) to govern and administer.
I would be afraid of missing out on some good, or even just rational, decision-making, but I think we can all agree that a 100% AI government or administration would be a bad idea. So how do we ensure that there is always a reasonable and informed human making the decision, even with the help of AI and computer models?
The wisdom of this decision should probably spread widely; consider, for example, the decision-making that has been done, and keeps being done, around things like climate change MODELS and COVID spreading MODELS.
This makes a lot of sense in my opinion. OpenAI has only satisfied the absolute minimum requirements for data controls. If you disable "Chat History & Training" (known as "Chat History" in the app), then quite a lot of functionality is blocked: specifically, voice interactions (speech-to-text works, but not the full touch-free voice mode) and GPTs.
I do have to compliment OpenAI on the fact that they have improved the situation with the new interface: browsing and image generation now work when "Chat History & Training" is disabled, whereas they did not a few weeks ago. Apart from the missing GPTs, ChatGPT appears quite usable with "Chat History & Training" disabled.
This needs to happen at the EU level. I am pleasantly surprised that companies, and now governments, are taking steps in the right direction to protect our data, even if only at a bare minimum. Next we need to protect our property.