Belgium legalises ethical hacking (law.kuleuven.be)
451 points by jruohonen on May 7, 2023 | 67 comments



> The new Belgian whistleblower law (Klokkenluiderswet) has changed the legal situation for ethical hacking in Belgium. A natural or legal person is now authorised to investigate organisations in Belgium for potential cybersecurity vulnerabilities, even if they have not consented to such investigations.

Cool. Though, Belgium will soon either have the most secure systems in the world, or no one will dare run open computer systems there. This will be fascinating to follow.

> The second condition mandates that ethical hackers report any uncovered cybersecurity vulnerability as soon as possible to the Centre for Cyber Security Belgium (CCB), which is the national computer security incident response team of Belgium.

I really hope they will take the opportunity and publish statistics about the reports.

> The final condition is an obligation for ethical hackers to not disclose information about the uncovered vulnerability to a broader public without the consent of the CCB.

Right, I agree with the OP's comment that this is a bummer. Why not let the organization decide this itself? Or require consent from either one of the two? This seems a bit fishy. I hope it works out.


Good. Make it law in every other country too. You would not believe the amount of duct tape holding systems together; crowd sourcing the inspections would at least get eyeballs on the problems, even if it caused an uptick in security incidents.

(Former pentester @matasano, though only for a little over a year.)

After witnessing the results of over 50 pentests, you’re dragged to the conclusion that (a) companies usually get pentests because they’re forced to by other companies, and (b) the security incidents that do happen tend not to affect the companies themselves.

By (b) I mean “security doesn’t matter,” in the sense that very few companies have ever died from security incidents. The cost is borne by the customers whose data is exposed, not the companies who allowed the breach.

Decriminalising ethical hacking will improve security, almost by definition. As you say, you're forced to secure your systems. This is probably a net positive, and hopefully the experiment in Belgium will show why.


> By (b) I mean “security doesn’t matter,” in the sense that very few companies have ever died from security incidents. The cost is borne by the customers whose data is exposed, not the companies who allowed the breach.

Shouldn't the companies be liable for millions of dollars in a sane justice system?


Know of any?


Technically not a case of hacking but still:

https://facebookuserprivacysettlement.com/


> By (b) I mean “security doesn’t matter,” in the sense that very few companies have ever died from security incidents.

Ashley Madison and Mt. Gox come to mind. I suspect LastPass will be added to that list soon.


Just checked: Ashley Madison is still in business and reached their highest (known) peak of users in 2019, about 4 years after the leak.


This raises the question: who's more stupid, the business that negligently screwed its customers, or the customers who came back afterwards?


There is an unstated implication that negligence doesn't happen twice.

I'd sooner assume the membership count comprises bots and repeat throwaway/voyeur accounts.


It wasn't that customers came back; aspiring women noticed the gap in the market left by the data leaks and chose to become sex workers, replacing the sock-puppet users with real women. The hack and leak was a blessing in disguise for Ashley Madison.

This has already been reported, but once you leave the corporate, sex-worker-exclusionary echo chamber it becomes more obvious how efficient that market is, from the supply side.

It would alter most of your assumptions and make a lot of things more obvious without needing to be "studied".


Compare and contrast Google's Project Zero's disclosure policy: https://googleprojectzero.blogspot.com/p/vulnerability-discl...

They don't wait for your permission to publish: you get 90 days to fix, plus 30 days from when the fix is ready, but if you don't cooperate, they're posting it anyway.

As I understand it they don't hack into someone else's system, but they might disclose a vulnerability in your software running on my device.
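Glossing the timeline above as date arithmetic (a hypothetical helper sketching a "90+30" policy reading, not Project Zero's actual tooling):

```python
from datetime import date, timedelta

def disclosure_date(reported: date, fixed: date = None) -> date:
    """Sketch of a '90+30' disclosure policy: the vendor gets 90 days
    to fix; if a fix lands within that window, details go public 30
    days after the fix; otherwise they go public at the 90-day mark."""
    deadline = reported + timedelta(days=90)
    if fixed is not None and fixed <= deadline:
        return fixed + timedelta(days=30)
    return deadline

print(disclosure_date(date(2023, 1, 1), date(2023, 2, 1)))  # 2023-03-03
print(disclosure_date(date(2023, 1, 1)))                    # 2023-04-01 (no fix)
```

Note that a prompt fix can actually move disclosure earlier than the 90-day deadline, which is the incentive the policy is built around.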


> Belgium will soon have the most secure systems in the world, or no one dares running open computer systems there.

There are three groups of people:

* People who hack for profit and want to exploit you.

* People who hack for fun and don't care about telling you.

* People who hack for fun and are polite enough to tell you about it afterwards.

#1 isn't going to be deterred by law, as hacking is illegal already. #2 might do it anyway, since the risk of getting caught is low, you just won't hear about it. #3 is the only group that this new law will affect, and the way it will affect them is that now they can tell you about it without fearing repercussions.

I don't see this making people more afraid to run open computer systems. The bad guys were always going to get to them anyway.


> Though, Belgium will soon have the most secure systems in the world, or no one dares running open computer systems there.

I doubt it'll deter anyone from running computer systems.

After all, hacking already goes unpunished if it crosses the right jurisdictional borders, or if the attacker can't be traced. And for most attacks both are true so the cops don't do anything.

The only difference this law makes is for in-country hackers who disclose their own identity.


> Why not let the organization self decide this? Or one-out-of-two?

I think there could be two different situations. In the first, the company doesn't want to let the public know about a vulnerability it had, so as not to damage its image, but the CCB gives permission after the vulnerability has been patched.

The second is that the company gives permission, but the CCB wants to wait, to first evaluate how many companies could have the same problem and get in touch with them. Think of a potentially widespread vulnerability like the one in Log4j.


The context here is "without authorisation of the owner of the system"; if the owner agrees to publication, the additional protections aren't needed.


It’s going to be interesting in the cases where security researchers cause denial-of-service conditions, corrupt data, or simply weaken or break a security control entirely, leaving systems and data wide open.


Looks like the new legal framework puts everyone at the mercy of CCB, a government body. Hope they have enough incentives and processes to do the right thing with all those 0-days that will flow to them.

Otherwise, this could undermine the public disclosure concept itself, a mechanism to force a stubborn vendor to fix their vulnerabilities.


Unicorn world. How lovely: "hackers" will start to give "vulnerabilities" for "free" to some "agencies"... I wonder how many will be kept secret for those "agencies'" own agendas.

How can you expect any trust in the shady world of "vulnerabilities"? Are you serious?


It is about deterrence. I've had more than one instance where I did not tell a government agency or an airport or whatever about a flaw in their system, because I did not want to deal with them potentially "shooting" the messenger.

Making certain things explicitly legal has its merits.


But the "process" is still "secret", and the whole issue is the shadow around all this; gov agencies or corps, same same. Back when I knew "computer security" people (that's when I learned that deliverable "security" is a fantasy), some said they were "pressured" by some corps to shut up, with significant help from the authorities.

There is only one way to do this safely: immediate full public disclosure. Corps and govs have to be ready for that at all times; this is a significant part of their job. The real issue is trust in the source of "vulnerabilities", because upon publication a report must be validated by competent people, and here we go again into the shadows, where things could be "delayed" indefinitely (I guess for as long as possible).


Ah, a crowd sourced NSA. Very clever.


Yes, a very cheap solution. Very typical, as security always ends up at the bottom of the budget plan.


Vulnerabilities are by definition dangerous. In some cases critically dangerous.

It is reasonable, common sense even, that vulnerabilities should not be publicised without a vetting process.


I've wanted this for the longest time. It makes a lot of sense, because any existing vulnerabilities are ALREADY vulnerabilities; if they're going to be exploited, it will be by bad actors trying to stay secret.

If you incentivize people [and young kids] to try hacking into their local government/company infrastructure by offering a reward or recognition in exchange, you're basically building a cybersecurity immune system for your entire country.

When Chinese/North Korean/Russian hackers are savagely going after hospital infrastructure and trade secrets in Western countries, this is the best step forward towards improving the upstream and training your own hackers in response to these kinds of events.


Some progress but with some notable weaknesses (a state institution determines whether public disclosure is appropriate).


Seems more than reasonable given that the law does not appear to exclude state institutions from being the target of hacking. If that provision did not exist you can bet the law would never apply to any security sensitive sector.


> If that provision did not exist you can bet the law would never apply to any security sensitive sector.

"Oh yeah, he is very sensitive in this regard. Please don't talk to him about his insecurities."


> The new Belgian whistleblower law only applies in Belgium.

I haven't read the PDF of the law, but from the article it doesn't sound like it is limited to Belgian citizens, only to the location. I hope there will be some pressure on the government organization to basically rubber-stamp the "disclosure allowed" authorization in a reasonable time if everything was reported to them, but I remain a bit sceptical.

Quite curious on the implications for the announced eu-west-3-bru-1a (AWS Brussels). I hope Amazon won't cancel it :)


I guess the people cheering this have not lived in Europe. Typically what happens is that some of the local hackers who naively trust the state and disclose their hacks will have the book thrown at them, either over an inconsequential technicality or because the authorities arbitrarily decide the hack was intended to cause harm or was not "proportionate", enabled by the vague wording of the law. Meanwhile, the actual criminals, who mostly aren't located in the EU to begin with, get off scot-free; in the majority of cases they can be traced back to Russia or somewhere else where local law enforcement can do nothing.


I once reported a leak on a government website to the National Cyber Security Centre, hoping to get a cool t-shirt out of it ("I hacked the Dutch government and all I got was this lousy t-shirt").

Turns out that system wasn't government but contracted out to the private sector. That got me into a lot of trouble since I reported a leak on a private company. Luckily I didn't get arrested or sued after explaining my intentions.

I have since not disclosed anything I find. Too much of a risk.


Ah yes, the National Cyber Security Centre is constantly in the news for arresting and suing people that report vulnerabilities to them /s

... seriously, what?


If you read my comment it will become clear to you that the National Cyber Security Centre did not have any intention of suing me, as the vulnerable system was not their responsibility.

The NCSC has not been in the news for arresting people and suing them.


I live in Europe and I don't know many stories, if any, of ethical hackers getting incarcerated.

In my youth in Italy, when I dabbled in "hacking", the stories going around on IRC at the time were that if you were ever nabbed hacking a server, you would get recruited by the local cyber police force (Polizia Postale).


French hacktivist Bluetouff was convicted for finding and reporting government files that were left unprotected on the internet:

https://www.silicon.fr/bluetouff-blogueur-condamne-recherche...

30 hours locked up in the police station, all equipment confiscated.


German hacktivist Lilith Wittmann was charged for responsibly disclosing a vulnerability in a server of a political party, but charges were dropped after a public outcry.


Does this mean anything for the legal protections of Belgian citizens who research security vulnerabilities in foreign, rather than domestic, systems?


No:

> The new Belgian whistleblower law only applies in Belgium. If a cybersecurity vulnerability concerns an IT system outside of Belgium, hacking might be covered by the rules of the country where the system is located.


That part is obvious, if you commit a crime somewhere else, then you commit it somewhere else. The question is whether Belgium protects you as their citizen, or doesn't.


Typically extradition requires dual criminality.

i.e. you can’t be extradited for something that is a crime in a foreign country but isn’t in yours.


So if a Belgian hacker is researching a Belgian company and a single server happens to be outside of Belgium territory, they're suddenly breaking the law?


Well, unfortunately, yes.

Belgium can’t give you a license to commit a crime in another country.


Surely the law should be pedantic here, no? Does the location where the server is physically located count, or the location where the company is registered?


Or what if a company buys a set of previously-used-in-Belgium IP addresses and now uses them in France?

Something like this happened on the cloud when they were running low on IPv4 addresses.


IP addresses are assigned to organizations, not countries.

There's nothing at all preventing me from geolocating my /32 of v6 to anywhere at all I want.

Or chopping it into smaller subnets and then allocating those wherever.
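The subnet-chopping point can be sketched with Python's stdlib `ipaddress` module (a minimal illustration using the IPv6 documentation prefix 2001:db8::/32 as a stand-in, not any real allocation):

```python
import ipaddress

# A /32 IPv6 allocation, the kind of block an organization might hold.
block = ipaddress.ip_network("2001:db8::/32")

# Chop it into /48s, a common per-site assignment size; each /48 could
# be announced or geolocated independently of the others.
subnets = list(block.subnets(new_prefix=48))
print(len(subnets))   # 65536
print(subnets[0])     # 2001:db8::/48
```

Each of those 2^16 subnets is under the holder's control, which is why geolocation databases can only ever chase where prefixes are announced, not where a country "owns" them.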


But the inverse seemingly works with regards to GDPR?

If a Belgian citizen in Belgium hacks my US server they are not protected by this Belgian law.

Yet if a Belgian citizen in Belgium visits my US server they are protected by GDPR?

How does that work then?


If a Belgian citizen in Belgium visits your US server for commercial purposes then international trade treaties apply, and those treaties model your business (which might just be a sole proprietorship) as having a Belgian subsidiary that is doing the actual commerce with them.

Same reason that if a country Y has a law against selling thing X, but no law against buying thing X, then you, outside of country Y, are still not allowed to sell+ship X to people in country Y. For purposes of commercial interactions with people in country Y, you're acting as a local subsidiary subject to those laws. (In fact, for tax reasons, you may not even be able to sell into many countries without having a real established domestic incorporated business in those countries.)

Note that this doesn't apply if there's a (multinational) import/export business involved — in which case, you have no obligation to avoid selling X into the country, because you're selling X to your own domestic country-Z arm of the importer/exporter. It's then the import/export business's duty to comply with laws about what can be sold in country Y (and to pay any import tariffs, etc.)


> Yet if a Belgian citizen in Belgium visits my US server they are protected by GDPR?

No, not if you and your server have no EU presence. There might be other reasons to comply, though.


Yes, they are. If a US company provides a service to an EU citizen, it must comply with EU regulations.

If an EU company provides a service to a US citizen, it must comply with US regulations (e.g. sales tax, etc.).


What you describe would be one of the "other reasons to comply" that I mentioned, where the EU can affect you and/or your business if you don't comply. Additional context was not provided so I made no assumptions.

An example: I don't have any legal obligation to the GDPR for my (hypothetical) US based blog as a US citizen, regardless of who visits it. But if I want to sell a t-shirt from the site, potentially to people in EU, failure to comply may result in the EU taking action to prevent me from doing business in the EU.


Why not? The US does this regularly.


So do other intelligence services, but that doesn't mean regular citizens are allowed to do this as well.

(And whether secret services are "allowed" to do so is a separate question.)


Any cloud datacenters in Belgium?


Google has a large datacenter there. (europe-west1)


eu-west-1 is in Ireland. There are currently no open AWS availability zones in Belgium.

Source: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-re...


That’s AWS you’re linking to. Google cloud europe-west1 is in St Ghislain, Belgium.

https://cloud.google.com/about/locations/


Interesting to point out that Euroclear and Euronext are based in Belgium.


Also SWIFT, the NATO HQ, SHAPE, and most EU institutions.


I'm divided on this one. On one hand, I can see a lot of good in this, because, well, I'm on HN.

On the other hand, I think people would find it weird if anybody were allowed to do that IRL with a physical building, so why allow it on the internet?

Given that the consequences of probing a website are smaller than those of breaking into an office, and the attack surface of a website is bigger, with potentially a larger cascade, I think I can find more arguments for than against.

But it's not so easy to answer.


The physical world and the Internet are completely different environments, and analogies don't transfer. In the physical world, attackers are resource-constrained, create evidence that allows for attribution, post-facto enforcement is mostly successful, attacks are mostly destructive, and security only ever has to clear a "good enough" bar.

Meanwhile, in the electronic world, many attacks can be easily scaled/automated so they're always happening, attribution is very hard, there's little post-facto enforcement, especially across borders, most attacks don't do much damage in and of themselves, and most security problems are logic bugs, which are yes/no affairs.

It's not particularly interesting if a "white hat" physical attacker demonstrates they could get into a safe with a drill over the course of a week. And you can probably walk down a street testing businesses' doors after hours without too many repercussions, if you're dressed nicely and don't fit the cops' stereotype of "criminal".

Also, there's a huge tendency in the digital security world for system owners to play up damages from minor break-ins or even mostly innocuous actions (e.g. port scans), to distract from the humiliation of having failed themselves. And if we did want to make punishment more in line with real-world analogs, then the penalty for most unauthorized access should be akin to misdemeanor trespassing.


Your last point makes a lot of sense actually, although pretty hard to convey to the general population.


Like with a physical business: if you can't guarantee proper security, you should not be in this business.

Companies cut costs on cyber security whenever they can, and if you try to expose it you can get sued. It's about time this ends. Hopefully everywhere, soon.


There are millions of small businesses that have an online presence and no way to protect themselves against a physical building penetration.

Heck, most administrations can't. A small town city council cannot be expected to have the budget or expertise to deal with someone trying to break in.


> Like with physical business, if you can't guarantee proper security - you should not be in this business.

What's an example of a business that can guarantee proper physical or computer security? I can't think of any. All I know about managing security is that you first acknowledge that you can't ever be completely safe. I'll assume you mean they guarantee that they have done due diligence to reduce their threat profile — which isn't really saying much either.


> so why allow it on the internet?

I find this weird also. The explanation I've been able to come up with is that there isn't prosecution on the internet the way there is IRL. You're just screwed if you got hacked by someone whose VPN provider didn't already have a tap order in place on that customer. Without deterrence, the only remaining option is to make the attack impossible in the first place.


Heh, wonder which hosting operations have data centres in Belgium?

Sounds like they'll be getting some new customers setting up proxy software. ;)

"The hackers are coming from inside Belgium, we can't do anything!"


I don't really know; it's more of a guess: Norway, Denmark, Finland, and Sweden.


I know everyone is high-fiving, but this, like most Euro tech laws, is so obscenely murky that it essentially opens the floodgates on anyone running servers over there.



