Lessons learnt by the NSA - never overestimate the skill level of your network admins.
Lessons learnt by Microsoft - never underestimate the loyalty of your Chinese Windows XP users; both XP and Win10 have 18% of the Chinese market.
Lessons learnt by the Chinese central government - the NSA is a partner, not a threat; they build tools which can make the coming annual China-US cyber security talks go smoothly.
I like to imagine that one of the developers on that team filed a tech debt item to do exactly this, was never able to get their manager to prioritize it, and is now pulling out their hair saying, "I told you so!"
Malware authors have budgets and schedules too. It's a business, probably more profitable than 90% of the startups in SV.
No, it's not, and it's pretty damn rude to make that claim in the presence of legitimate businesspeople.
I'll take an honest crook any day.
The typical moral distinction between a business and other entities which make money is that a business (presumably) makes it within the constraint of its counterparties enjoying the liberty of choice. This becomes a grey area when government enters the picture and removes liberties--which is why there is debate here about the legitimate role of government's monopoly on legitimate violence/aggression.
1 - note, a further distinction could be made between entities which create value, and those which transfer it.
That's not true. A business is a vehicle for making money, that's it. Most businesses do this by providing goods or services, but certainly not all - for example, financial traders that only manage their own funds, like the Renaissance Medallion fund.
Even here, we can still observe that in most cases (except those where government interferes, or perhaps with organized crime) the counterparty also enjoys a choice in whether or not they want to do the deal. I would argue that in cases where a counterparty has no choice, such a scheme should not be viewed as a legitimate business, as it historically would not have been.
I'd still argue that a business does not have to provide a good or service to be considered a business though. The Medallion Fund that I mentioned solely exists to make money for its owners - it does not provide any goods or services.
Hospitals just happened to be disproportionately affected by this attack because a lot of them have ineffective IT departments/management and never applied the MS17-010 patch.
Of course, these people are still felons and are likely responsible for millions of lost family photos, work and school documents, etc. They just aren't going out of their way to target hospitals.
This means that they knowingly or with reckless negligence unleashed such an attack on the world. If they had been more "scrupulous" criminals, they would have more narrowly tailored their attack on targets they believed deserved to be extorted or where such extortion would not interfere with life critical systems.
I'm not a lawyer, but if they were a nation state, I believe they would have violated the Geneva Convention's prohibition on attacking hospitals.
That said, I think this attack gives more weight to NSA critics that contend that their exploit research should be focused more on defense rather than offensive capabilities. Their carelessness combined with another group wanting to embarrass them is what allowed this indiscriminate attack to be inflicted on civilian infrastructure.
Old news. The more recent and much more insidious variant is calling the hospitals simply "valid targets".
Or in case of an unexpectedly intense media backlash, "a mistake".
Which is why we're currently in a situation where zero-days that the NSA knew full well would be leaked, and that could have been patched at least a month ahead of time, were left unpatched. The costs aren't significant enough to motivate them to respond to their failures.
People like to blame capitalistic incentives for not upgrading from Windows XP, but to me the failure to respond to this obvious outcome of the leaking of NSA malware is far more insidious. These sysadmins managing old systems were not prepared for state-financed malware to be released to C-level cyber criminals as a 'threat actor'.
The poor state of corporate information security has been exposed in the last few days, but even that sorry state is nothing compared to the failed responsibility of the US government to value its citizens over internal objectives. Which is an increasingly common narrative, and unsurprisingly a result of the unencumbered growth of the security state and, by proxy, the executive branch to whom they ultimately report.
Then, if the commanders force patients, by threat of violence, to stay as human shields, that's a further war crime. The responsibility for casualties here lies more with those using patients like this than with anyone else.
I'm not really advocating yes or no to bombing hospitals or schools to kill terrorist leaders hiding within - but your assertion is false. We will kill the hostages. All actual breaches involve a risk of % losses and that's baked into the decision to go in. Just a person somewhere trying to make a decision about the best outcome, for the "greater good".
Now before everyone buries me, total war is a rather rare military state, and was probably present only a select few times in the 20th century.
A small price to pay for not having Nazis stomp around my backyard (almost literally: there are remnants of a Nazi bunker not half a mile from where I live).
Edit: good book on the subject https://www.google.co.nz/amp/s/amp.theguardian.com/books/201...
Until today, there was nothing to apply if your computers were running XP or 2003. Guess which Windows versions are the most popular in UK hospitals? So I think your sentence should read like "Hospitals just happened to be disproportionately affected by this attack because they were forced to trust Microsoft would never put corporate profit before social responsibility."
"because a lot of them have ineffective IT departments/management and never applied the MS17-010 patch or are running ancient operating systems."
edit: And in fact, Microsoft did release a special XP hotfix for this vulnerability yesterday: https://blogs.technet.microsoft.com/msrc/2017/05/12/customer...
That doesn't tell a story of missing money or maintenance contracts. It tells of poor or even irresponsible and incompetent deployment procedures.
You shouldn't allow your CAT scanner to write over your patient records at a server. You shouldn't even have them in the same network segment.
So AFAICT 32-bit W10 can run most anything 32-bit XP can (likewise the 64-bit versions, though neither can run 16-bit programs), and IE11 can run most anything IE8 can (with minor configuration).
Is it software that relies on undocumented APIs? (I can't imagine why hospital software would require exotic methods of poking at the kernel or hardware).
Good luck finding a Windows 10 compatible PC that has ISA slots, for example. A lot of old custom hardware hooked right into the ISA bus.
In my experience, industrial software is often pretty poorly designed, so it wouldn't surprise me if it's more common in a hospital environment.
We're talking about medical equipment, such as CAT scanners, dialysis machines, radiation therapy devices, chemical analyzers and the like. Stuff where the computer interface could be an afterthought, added to a machine that was designed years ago with a physical knobs-and-dials type of user interface, and implemented and certified for a particular PC hardware generation. Then this interface PC becomes obsolete in 15 years even if the equipment itself would work for a hundred.
Other reasons for network connectivity include retrieving and sending image sequences and data files (basically the actual scans), which is done all day every day.
The more alarming part is the retrieving of raw data, which is the unreconstructed scan. This involves attaching a memory stick that is supposedly clean and uploading to that. Generally this stick is stuck into any old researcher PC and the files are offloaded. Vendors don't particularly like this, but getting 10-20 gig files off the scanner via the command line is pretty clunky at the best of times.
That the NHS has not done this is their actual failing and negligence. It doesn't take that much money to move such devices to a quarantined network.
I'd guess that most hospitals don't do in-house development for the software they use. They paid someone else for it, probably at "enterprise" rates; it's hard to blame them for not having the budget or desire to replace working systems with new shiny (complete with new bugs) every X years.
...are how the state-of-the-art is advanced in other industries? Imagine if the FAA's response to an air disaster was, "Never mind root causes, you just should've bought a newer plane".
With that, the GSN (Government Secure Network) is still a good ring-fence (that's outsourced as well), but once something gets inside, boom.
Now with the Trusts - they do have a local IT bod, and in the cases I dealt with, somebody who knew how a PC works and was enthusiastic, which is nice but also dangerous, and I had to deal with a few issues that were, as I call them, "enthusiastically driven". As such you have all these Trusts operating at some level as independents, and with a variety of results.
One case was an `IT manager` at a Trust who was posting on alt.ph.uk (a UK hacking usenet group) and offering up inside information about how they operated. That went nowhere, as the alt.ph.uk lot are a moral, ethical lot and health services are taboo, so it was rightly shot down, and equally the chap was soon in talks with the security services.
But with so many legacy systems, and an event-driven support mentality (again, Y2K being an exception), such events can and will happen. Sadly many Trusts lack provision to handle such issues and, as with many IT areas, are event-driven instead of being proactive. Indeed ITIL, the golden management love-in solution for support management, is event-driven, and many an implementation ticks all the ITIL boxes of compliance and yet still lacks proactive support. This, alas, mostly gets compared to firefighters pouring water on buildings so they won't catch fire, and is sadly pretty darn systemic in many an organization.
With that, the best anybody in IT can do is to flag up an issue in a documented way to cover their ass when the outlined event does transpire, to prevent unfair scapegoating. A sad situation to which many if not all IT support staff in all capacities can attest.
Ironically, DOS-based legacy systems with no networking and exotic ISA cards in some equally over-priced hardware still work, and the need to replace them does become moot; alas, that example gets projected upon other systems that are networked. But the whole health industry has many legacy setups that are expensive to replace, more so if they work, and the motivation to limit potential damage from future events, above and beyond backups, becomes a management issue that lacks a voice for budgets.
But making BTC hard to cash out is a hard problem. Although particular addresses can be blacklisted, mixing services are now mainstream. Some return fresh BTC from miners. Even so, it's problematic to mix humongous quantities. For example, the Sheep Marketplace owner/thief overwhelmed Bitcoin Fog with 10^5 BTC. The trail went dead after that, but he got busted while cashing out. His girlfriend was receiving huge international wire transfers, and could not explain where the money was coming from.
All this means that instead of PC Plod being unable to extradite the perps from eastern Europe, you get the serious players involved.
Similar attacks using other vulns or tooling are inevitable, but this one was probably much less impactful than it could have been, and the registration probably mitigated a lot of the damage.
Was it "orchestrated", or did the worm just spread randomly and opportunistically?
By the look and UX of the virus (yes there's a UX there too), they do seem to have a better grasp than most script kiddies, who usually can barely extend whatever script they've got.
The site itself doesn't seem to have enough ads, or well-placed enough ads, for income to be the goal. So I'm guessing it's a "proof I can do stuff" or "trophy room" blog, which doesn't care (HR and recruiting will happily use worse websites to judge candidate value, and trophy rooms get put in a room no one else wants, or a cabin so far from everyone that it doesn't have electricity).
Or maybe this story isn't really accurate and there was no accident...
And if it isn't the role of those agencies to defend the public health IT infrastructure, which agencies are responsible, if any?
Then, due to lax controls, the exploit got leaked and used by the ransomware developers.
Their culpability goes back a lot further than not noticing a kill switch.
Even in this case, though, you would think the NSA etc. would have to do less analysis of the payload, since they got to inspect and play with it for much longer than anyone else. Therefore they could waste less time on that and more quickly focus on the rest of the issue.
There are some three-letter agencies that do work on fighting malware, often by partnering with relevant companies like Microsoft (who was a major anti-malware player here too). I know the FBI does so publicly, and some government groups invite large companies to low-secrecy briefings on security.
But I've never heard a mention of the NSA 'fighting' malware that isn't obviously governmental. Even if they knew about the exploit, used the exploit instead of disclosing it, and are well-placed to fight it, I think that's just filed under 'not my department'.
Right now, looking at how the election scandals went, they are there at prosecution and have access that they are given willingly.
If anything they will learn to automatically disable any nodes that are clearly operating out of a public office building.
...see the problem?
Win7 has been rising again for months.
Win10 and WinXP are shrinking.
If you are suggesting that developers, regardless of whether they develop mobile apps or ransomware, will start relying less on DNS, I respectfully disagree.
Someone else in this thread commented how reliance on DNS makes systems "fragile". With that I strongly agree.
The same old assumptions will continue to be made, such as the one that DNS, specifically ICANN DNS, is always going to be used.
How to break unwanted software? Do not follow the assumptions.
For example, to break a very large quantity of shellcode change the name or location of the shell to something other than "/bin/sh".
Will shellcoders switch to a "robust statistical model" instead of hard coding "/bin/sh"?
Someone once said that programmers are lazy. Was he joking?
1. Yes, I know it may also break wanted third party software. When I first edited init.c, renamed and moved sh I was seeking to learn about dependencies. I expected things to break. That was the point: an experiment. I wanted to see what would break and what would not.
Even though the POSIX standard says:
> Applications should note that the standard PATH to the shell cannot be assumed to be either /bin/sh or /usr/bin/sh, and should be determined by interrogation of the PATH returned by `getconf PATH`, ensuring that the returned pathname is an absolute pathname and not a shell built-in.
> For example, to determine the location of the standard sh utility:
command -v sh
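A sketch of honoring that advice in Python, resolving sh from `getconf PATH` instead of hardcoding /bin/sh (this assumes a POSIX system where the `getconf` utility is available):

```python
import os
import subprocess

def find_posix_sh():
    """Locate sh on the standard PATH reported by `getconf PATH`,
    per the POSIX recommendation, instead of assuming /bin/sh."""
    std_path = subprocess.run(
        ["getconf", "PATH"], capture_output=True, text=True, check=True
    ).stdout.strip()
    for directory in std_path.split(os.pathsep):
        candidate = os.path.join(directory, "sh")
        if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
            return candidate
    raise FileNotFoundError("sh not found on the standard PATH")
```

Anything that hardcodes "/bin/sh" instead would break under the rename-the-shell experiment described above, which is exactly the point.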
Wow, +1 Insightful!
However, MalwareTech's sinkhole intervention has bought enough time for patches to be pushed out, so at this point it is absolutely imperative that everyone apply these patches as soon as possible.
Even though this fortunately turned out to be false, what if it had been true? Would the security researcher be held in any way accountable for activating the ransomware? If I were the author, I might be a bit more careful in the future before changing factors in the global environment that have the potential to adversely affect the malware's behavior, but of course I'm not a security researcher, so I really don't know.
 I suppose a domain could probably be made to appear unregistered after being registered - depending on the actual check performed - but there are other binary signals (e.g., the existence of a certain address or value in the bitcoin blockchain) that might not be so easy to reverse.
When there's a global infection spreading wildly and crippling essential organizations, you want everyone to act fast, not spend weeks making sure everything is perfect. If you see the malware connecting out to an unregistered domain, you just register it now. Whoever is first gets it, and the attacker could realize their mistake at any time. Even without knowing what this malware does with the connection, odds are 99.9% that the situation is better with the domain controlled by a security researcher than by a malware author. Punishing researchers if something done in good faith turned out badly would incentivize them to overanalyze everything and delay taking any potential beneficial action until it's too late.
So, if connection = successful, then we're being analyzed and don't execute.
If connection = unsuccessful, then we're on a real workstation, execute!
Then the scheme fell apart when someone registered that domain, so all connections = successful and malware will not execute (but machine still infected).
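The connect-or-detonate branch described above can be sketched in Python; the domain is the one quoted elsewhere in the thread, and this is an illustration of the logic, not the actual implementation:

```python
import socket

# Kill-switch domain quoted elsewhere in the thread.
KILL_SWITCH = "iuqerfsodp9ifjaposdfjhgosurijfaewrwergwea.com"

def should_detonate(domain=KILL_SWITCH, timeout=3.0):
    """If connecting to the domain succeeds, assume we're being analyzed
    (or the switch was tripped) and bail out; if it fails, proceed."""
    try:
        with socket.create_connection((domain, 80), timeout=timeout):
            return False  # connection = successful -> do not execute
    except OSError:
        return True       # connection = unsuccessful -> "real workstation", execute
```

Once the domain was registered and pointed at a live webserver, every new infection started taking the `return False` branch.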
But next time they'll likely use more domains, and more expensive ones, so that random security researchers can't just expense the registration on the corporate credit card. I know .ng costs 50k, but .np might be pretty comical to deploy if you're not really worried about a global off switch.
If the motivation is to have a killswitch, you don't want something expensive, because the attackers would then have to pay for it if they want to activate it for whatever reason.
A responsible researcher would have a fully isolated environment, cut off both from the corp net and from the internet. Then they would slowly begin to allow connections out as they confirm that's not how it's spreading...
If you're a bomb enthusiast or researcher, you'd absolutely be liable if you tried to defuse a bomb without being requested to by the police. This is no different because of the potential for massive collateral damage. You want to see what happens when the domain is registered? Resolve the DNS on your own network.
It's only when acting under government direction that you should be immunized from liability.
Why? I 10000% disagree that the government should be immunized from liability in the first place. This entire mess is a direct result of the NSA not being held accountable and hoarding vulnerabilities.
Your reliance on government == good, everyone else === bad is alarming to me.
That said, I do not believe either the government or a researcher should be held liable under the circumstances proposed in the hypothetical we are discussing.
I do believe the NSA should be required by law and policy to alert any and all software vendors to vulnerabilities they discover.
Got all riled up and then saw the username.
I am curious why you disagree with me, though.
A bit more econo-mathematically stated: we expect more good than bad to come of it if we indemnify them from whatever liability they may have had by accidentally triggering the mechanism. Perhaps because there are more smart people outside the government than inside it, just as there are more smart people outside any corporation than inside it, or anywhere really, because human ingenuity is widely distributed.
Good Samaritan laws only apply to emergency care rendered to people in need of it (and only if they don't refuse). You wouldn't even be covered if you grabbed someone's broken arm and tried to set it without permission. (That would actually be assault, for which you'd be liable.) You definitely wouldn't be covered if you unilaterally released a protein into the atmosphere because you suspected it would stop the flu. You'd probably get charged with using a WMD if it backfired and people got sick.
I think people who are arguing that it's a good Samaritan situation are simply being selfish, because they don't want to have to consider how their actions might impact others or act with restraint and professionalism.
There are plenty of ways to proceed with getting help from security consultants even if there is liability -- eg, confining their actions to a single network and being indemnified by the owner.
Globally poking a widespread infection without a care for what the infected prefer is emphatically not what Good Samaritan laws are meant to protect and should carry global liability.
Edit: To address the question under me, since I'm "posting too fast" --
My problem is that many of these FBI programs exist in a legal limbo -- the researchers are working with the government, but I'm not sure they have the kinds of immunization agreements that government contractors usually get (eg, that you have to sue the government not the contractor since the actions are taken under government authority because you're working for the government) nor that they have to observe the restrictions placed on government actions. Too much of cyber security exists in these (intentionally) gray areas.
I dislike this Wild West state of affairs and want the matter of liability and restrictions/accountability to be directly addressed, even if it's just making de jure the de facto situation. I think cybersecurity, as currently practiced, is probably ripe for some nasty lawsuits if a researcher screws up a situation like this.
Does anyone believe that if registering the DNS address had bricked the NHS systems, the NCSC would've taken the fall?
Since you seem to take the possibility seriously, what benefit would the authors of ransomware derive from that? Some sort of game theoretic red-wire to slow down forensics?
I think malware authors derive a game theoretic advantage by having tampering with DNS C2 systems result in data loss, because a non-trivial portion of people will prefer to pay and retrieve their data. Some of the frustration from that will be pointed at the people who actually tripped the switch.
Further, because of the current legal status, if a security researcher issues the command to the DNS C2 system that deletes the data (by messing with the DNS records), not the malware authors, they're quite possibly liable for the data loss, going to face hacking charges, etc. (Hacking charges because they knowingly issued commands to malware that gave them unauthorized access to computer systems.)
I don't believe that security researchers should be the ones making that call -- I think the only sane way to make it is through collective mechanisms like government.
It is said that there is no problem in computer science which cannot be solved by one more level of indirection.
Just make someone else responsible for it. Problem solved.
Read my other comments for a more nuanced view discussing how it would play out in the real world, with changes relegated to particular networks and trade groups making deals for systems under their control.
But the only groups that should be able to authorize decisions about other people's things (free of liability or possible prosecution) are groups under collective control, ie governments.
Imagine a person in a locked building with ticking bomb, authorities are nowhere in sight. People, including loved ones, are vulnerable.
Should that person just wait and do nothing for fear of a law that you, sir, SomeStupidPoint, had created, one that holds that person accountable if something goes wrong?
It's her life, and your law doesn't mean a thing; if anything it's downright unethical and tormenting. She has every right to try to defuse or shield the bomb.
Should the IT departments of Fortune 500 companies not try to respond and save their assets, or should they just wait for the authorities?
You think a ticking time bomb is a crime scene, I think of it as a self-defense situation that hasn't played out completely yet. She has full right to self-defense, successful or not.
Look, ma, words on paper, that must stop bad things from happening right? Ma? Maaaa?
> It's only when acting under government direction that you should be immunized from liability.
I also thought that you meant it as a satire.
Anticipating your next question, can I ask what kind of internet police he should have asked?
I don't think globally releasing changes meant to tweak malware is a good idea, because of jurisdictional issues and liability. Down thread, I suggest confining changes to a network and indemnification from network owners (who may in turn be indemnified by network subscribers).
In practice, this would look like trapping malware DNS queries to security researcher controlled servers at the Comcast network DNS level, rather than registering a global name for it, with the researchers being indemnified by Comcast (who likely is indemnified as part of your subscription agreement).
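A minimal sketch of that resolver-level trap, with a hypothetical blocklist and a TEST-NET address (192.0.2.53) standing in for the researcher-run sinkhole:

```python
SINKHOLE_IP = "192.0.2.53"  # placeholder for a researcher-controlled sinkhole

# Hypothetical blocklist an ISP resolver might consult before recursing.
MALWARE_DOMAINS = {
    "iuqerfsodp9ifjaposdfjhgosurijfaewrwergwea.com",
}

def resolve(name, upstream_lookup):
    """Answer queries for known malware C2 names with the sinkhole address;
    pass every other name through to the normal upstream resolver."""
    if name.lower().rstrip(".") in MALWARE_DOMAINS:
        return SINKHOLE_IP
    return upstream_lookup(name)
```

Infected machines behind that resolver would then see the kill-switch domain as live without any global registration taking place.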
This has well tread liability law behind it and moves us out of the situation where every random group feels free to potentially cause harm to hundreds of thousands or millions of computers across the globe because they're "good guys" and shoot from the hip.
I expect that network operators would quickly establish industry groups and a certification process for getting researchers to help protect their networks, and we would quickly be back to mostly the same situation, sans questionable legality. (They would likely be liable for occasional collateral damage and pay that out of industry membership dues to the group.)
A good Samaritan must know all workings of the malware, including trip wires, actual triggers and threat scales. And must know of any threats that get released (not yet sandboxed) as a consequence of his investigations.
A good Samaritan must seek and get all necessary approvals using a procedure and checklist. If such a procedure, checklist and inventory of authorities to seek approval from does not exist, the good Samaritan should immediately embark upon the task of defining one. Fully realizing that the mere creation of such a listing would require a constitutional amendment.
A good Samaritan should be able to look away from current mayhem (patient support systems, ambulances, public infrastructure collapsing) while above things are settled first.
On an existing command chain, sure, up to a point. The US doesn't have any control if someone at Kaspersky had done it, or some kid in Asia trying to understand how it works.
What you are proposing is going to make people on the good side disengage at the thought of prosecution. You would be shooting your soldiers for not successfully defending you. Worse, you would shoot the enemy of your enemy for triggering the common threat.
The control that you desire goes against nature. Unless you put every single human being in the Matrix, you won't have that control.
I think that would be the equivalent of an arsonist also leaving a water activated chemical at the scene of the fire, and then blaming the firemen for using water to put out the fire when it made the situation worse.
From the Talos Intelligence blog:
>The above subroutine attempts an HTTP GET to this domain, and if it fails, continues to carry out the infection. However if it succeeds, the subroutine exits.
It's not clear if the subroutine being shown is the main entry point, in which case return 0 exits (which is good for us), or if it's part of a larger framework that would be doing stuff later on (which is potentially bad for everyone, because it could decide to do other things if it finds that domain sinkholed).
The blog author checked on whether or not the domain name changes, but didn't specify any details about anything going on higher in the stack:
>All this code is doing is attempting to connect to the domain we registered and if the connection is not successful it ransoms the system, if it is successful the malware exits (this was not clear to me at first from the screenshot as I lacked the context of what the parent function may be doing with the results).
So my question is: how much knowledge did they have of the rest of the code when registering the domain? Would the analysis environment have provided more information if the malware had continued to run after realizing the domain was sinkholed?
Better in the hands of someone like this.
The ransomware prematurely quits if the domain resolves to an IP and a webserver listens on that IP.
"As of a little while ago (it is around 7:45 PM US Eastern on Mon 15 Sep 2003 as I write this), VeriSign added a wildcard A record to the .COM and .NET TLD DNS zones. The IP address returned is 220.127.116.11, which reverses to sitefinder.verisign.com. What that means in plain English is that most mis-typed domain names that would formerly have resulted in a helpful error message now results in a VeriSign advertising opportunity. For example, if my domain name was 'somecompany.com,' and somebody typed 'soemcompany.com' by mistake, they would get VeriSign's advertising."
I'm not the most diligent follower of security news, but I'm pretty sure that SMB network sharing is riddled with security vulnerabilities, latency issues, etc, and is generally wildly unsuitable for being left wide open to the entire internet. How could any institution with a competent IT department not have had this service firewalled off from the net for years?
1.) What is SMB? And is it easy to remove from systems by simply uninstalling it (like I have done)?
2.) Does WannaCry just land on a machine through a simple point-and-click exploit? Do they just enter a vulnerable IP address and they can plant the exploit on the machine and run it?
3.) I am aware that it also gets onto machines by people randomly clicking on shady e-mail attachments, but I am very curious about how it simply lands on computers with very little or no user stupidity at all?
I uninstalled SMB by going to Add or remove programs > Remove Windows features
This exploit worked in two stages. First, there was a massive email campaign. Then, when employees would click on the attachment, the malware would worm its way onto other computers on the local network using an exploit in the SMB file sharing stack (which originally came from leaked NSA malware). Then it would encrypt the user's files and demand the ransom.
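A rough way to answer the "is SMB exposed?" question for a given host is simply to attempt the TCP handshake on port 445; this only shows reachability from your vantage point, not whether the host's SMB stack is patched:

```python
import socket

def smb_reachable(host, port=445, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds,
    i.e. SMB is at least reachable from here."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Any machine where this returns True from the open internet is the kind of target the worm's SMB exploit could reach directly, no email attachment required.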
That's quite a high-abstraction-level programming thing to do, using a domain name's registration state as a boolean. Is that a regular thing?
They could've achieved the same sandbox detection effect by just registering the domain and pointing it at 18.104.22.168 or whatever. The non-sandboxed connections would still fail, and no one else could take the domain.
That would leave a paper trail, potentially revealing who's behind the malware.
I apologize and will try to do better next time.
Each condition is satisfied by a different domain lookup.
The malware could have just as easily used the registration of that domain as a flag to start deleting data, no?
It's probably easier, as you point out, to have the virus delete its keys and wipe itself out. (And has the added benefit of taking some forensic info with it.)
But in a marketing sense, blaming people interfering with your network for the lost data may make you safer, as many victims are likely to prefer you extorting them to the good guys causing data loss by stopping you.
Being a criminal is all about customer service.
Two domains, one defuses the ransomware, the other detonates it.
And one of the domains will be called redwire[randomchars].com, and the other bluewire[randomchars].com. Which one do you sinkhole, the red wire or the blue wire?
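The hypothetical two-domain scheme would look something like this (the domain names are made up for illustration; `resolves` returns True if a name currently resolves):

```python
def decide(resolves):
    """Hypothetical defuse/detonate scheme from the thread: one domain
    acts as an off switch, the other as a booby trap."""
    if resolves("defuse-example.com"):
        return "exit"    # off switch tripped: stop quietly
    if resolves("detonate-example.com"):
        return "wipe"    # booby trap tripped: destroy data
    return "ransom"      # neither registered: business as usual
```

A researcher sinkholing domains blindly would then be gambling on which wire they were cutting.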
The researcher in this case registered the domain right away because he had experience that doing so creates a positive result. Once that sort of thing starts creating bad results, researchers will start testing more carefully before grabbing domains.
Although it was only a thought, with what `cesarb` mentioned in mind.
It isn't an either-or proposition, and the psychology of the conflict is important. If you force your opponent to consider every possible move to be potentially dangerous, you slow them down by more than just the cost of the game with a domain name. And that's valuable.
Googling for "OODA Loop" might be helpful in thinking about this.
Well, I guess maybe they didn't want things to get too out of hand, and now if they want, they can be back up soon with that fixed.
And that's exactly what is so wrong about the NSA and others not being good stewards of their own bloody malware. A lot of these criminals would not be able to get their act together at this level without being partially funded by the three letter agencies. Think of it as an advanced form of script kiddies, they can use the tools and wrap them but they could not come up with those tools of their own accord.
They were clever enough to execute this attack, collect over £160k according to the last estimate I've seen (likely way more now), and achieve that in one day. You seem to underestimate them by assuming that this was simply missed. There are many potential scenarios where this is beneficial to the authors.
This guy is sort of a hero, IMO. Given that this is affecting healthcare systems, he might very well have actually saved a bunch of lives! I hope he slept well, totally deserved it :)
Uh, no. Here's an archived copy:
EDIT: After looking explicitly for it I found www.iuqerfsodp9ifjaposdfjhgosurijfaewrwergwea.com.
Despite Jones being the stereotypical name for people in Wales, the old Welsh language, Cymraeg, doesn't have a J, nor Z, K, V, or X (IIRC). Jones is an English loanword, brought over apparently with the Norman conquest (though derived from Hebrew).
The modern language of Wales is of course British English, with a ~100% use rate; however, approx 8% (but falling) of the population say when surveyed that they can speak Cymraeg fluently.
Yeah, I'm terrible at parties.
- insurance company which provides legal protection
The map is populating much faster now, maybe they integrated it with the URL?
Edit: I'm not so sure now. The whois record seems to suggest recent activity:
Domain Name: GWEA.COM
Registrar: 22NET, INC.
Sponsoring Registrar IANA ID: 1555
Whois Server: whois.22.cn
Referral URL: http://www.22.cn
Name Server: PK3.22.CN
Name Server: PK4.22.CN
Status: clientDeleteProhibited https://icann.org/epp#clientDeleteProhibited
Status: clientTransferProhibited https://icann.org/epp#clientTransferProhibited
Updated Date: 18-mar-2017
Creation Date: 17-mar-1999
Expiration Date: 17-mar-2018
The hacker, though, didn’t register the gwea.com domain name. On Friday morning, a 22-year-old UK security researcher known online as MalwareTech noticed the address in WannaCry’s code and found that it was still available. “I saw it wasn’t registered and thought, ‘I think I’ll have that,’” he says. He purchased it at NameCheap.com for $10.69, and [...] 
If it is, it seems to contradict the whois record.
> a dot-com address consisting of a long string of gobbledygook letters and numbers ending in “gwea.com”
jstoja mentions it above: iuqerfsodp9ifjaposdfjhgosurijfaewrwergwea.com