At best they took down 30% of hidden web services with published addresses on aggregator sites like the Hidden Wiki.
It's not that you don't know where it's hosted; it's that you don't know who is using it or where they are. That includes publishers with hosted content: even the host itself can't tell who or where they are. Five Eyes couldn't just drop a tap in front of Daniel's Hosting and see anything useful, just intermediate nodes with no idea what was on the other end (barring a traffic-correlation attack where they own both the first and last nodes in the circuit).
For example, Facebook runs an onion service for their site, and they don't need to hide the service itself. So they configure their Tor instances with no anonymity on the service side (HiddenServiceSingleHopMode 1) and get better performance (sketch below the list).
Such non-anonymous onion services can have many goals, for example:
* Reducing load on Tor exit nodes
* Providing users a secure, authenticated connection without depending on the CA system (assuming you got the URL through a secure channel the first time, you know only the key holder can provide service on that host).
* NAT traversal for services that otherwise have no need for anonymity
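For reference, the torrc for such a single-hop service is only a few lines. A sketch (the directory and ports are placeholders):

    SOCKSPort 0                                # non-anonymous mode requires disabling client use
    HiddenServiceNonAnonymousMode 1
    HiddenServiceSingleHopMode 1
    HiddenServiceDir /var/lib/tor/my_service
    HiddenServicePort 80 127.0.0.1:8080        # onion port 80 -> local web server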
30% is wrong, but there are other ways these metrics are extrapolated [0][1]. You can see a dip in that first chart; I didn't check the dates to see if it's related.
0 - https://metrics.torproject.org/hidserv-dir-onions-seen.html
1 - https://blog.torproject.org/some-statistics-about-onions
Here is your data about .onion websites: https://www.reddit.com/r/onions/comments/9yfwfb/my_personal_...
About 7000 .onion sites worked before, about 3100 after...
There are very few hidden services and most of them are... questionable.
The article cites child porn. That's about right.
It definitely referenced it when I first read the article.
I wonder what the motivation would be for removing that reference.
EDIT: Note that I am not user "runn1ng"; I am corroborating what they referenced about the contents of the article
EDIT2: They even mentioned three or so specific forums or websites, or something.
This adds speculation and sensationalism on top of an ill-informed article. They even described it as a 0-day vulnerability, when that's obviously not the case, as explained by the article itself. You can't simply trust the quality of the information in there.
There’s nothing which would cause an otherwise competent business leader to even realise they don’t understand the limits of any given security system, never mind knowing who to ask for advice.
In particular, they arguably lack the expertise to do remote hosting right. Also, as spurgu says, they don't want the stuff hosted locally. And if they thought it through, they wouldn't want the traffic coming back to their location either.
Given all that, they arguably figure that these shared-hosting sites must know what they're doing.
I also checked out your site. I hope to be at your level of expertise one day.
Even though you're new to all this: for others wanting to do this programmatically, there's Stem for Python, and I've written a library for Go [0]. It's such an easy self-hosted NAT traversal technique that I'm surprised it's not used more often in situations that don't require great bandwidth/latency (e.g. p2p chat).
0 - https://github.com/cretz/bine
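For anyone on the Python side, a minimal Stem sketch of the same trick, assuming a local Tor daemon with ControlPort 9051 enabled:

    from stem.control import Controller

    with Controller.from_port(port=9051) as controller:
        controller.authenticate()
        # publish an ephemeral onion service mapping onion port 80 to a
        # local server on 8080; it vanishes when this controller disconnects
        service = controller.create_ephemeral_hidden_service(
            {80: 8080}, await_publication=True)
        print('reachable at %s.onion' % service.service_id)
        input('press enter to tear it down')

No port forwarding or public IP needed; Tor handles the rendezvous.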
Talk to the Dread Pirate Roberts next time he's in the neighbourhood.
However, Tor is vulnerable to traffic analysis. And when you're running a server, adversaries can easily modulate/fingerprint the traffic, which facilitates that analysis. If you can see the signal, and have taps on major ASes, you can drill down to the server.
One can route Tor traffic for .onion servers through VPNs, or even through nested VPN chains. That makes it a little harder, because the hosting provider can't easily tell that it's Tor traffic. Also, one can run a private obfsproxy bridge, which isn't listed in Tor's public directories.
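Client-side, pointing at a private bridge is just a few torrc lines. A sketch, with the address, fingerprint, and cert as placeholders for your own bridge's values:

    UseBridges 1
    ClientTransportPlugin obfs4 exec /usr/bin/obfs4proxy
    Bridge obfs4 192.0.2.10:443 <FINGERPRINT> cert=<CERT> iat-mode=0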
Hosts get a name of the form <id>.onion, derived from the service's key, which clients use to connect to the service.
I'd forgotten that :(
And yes, v3 is a huge space. That is, orders of magnitude larger than even the entire IPv6 space. Which is itself humongous.
Edit: Oops. Had the numbers wrong at first. Onion v2 is an 80-bit space, matching the host bits of an IPv6 /48 (not a /64). Onion v3 is orders of magnitude greater than all of IPv6. It's like this, I think.
onion v2: ~1.21×10^24 [32^16; 16 base32 chars of a truncated SHA-1]
IPv6 /48 host part: ~1.21×10^24 [2^80] (which is why OnionCat works)
onion v3: ~1.16×10^77 [2^256; a 32-byte ed25519 key]
all IPv6: ~3.40×10^38 [2^128]
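The arithmetic is easy to sanity-check in Python:

    print(format(2**80, '.2e'))     # onion v2, and IPv6 /48 host bits: 1.21e+24
    print(format(2**256, '.2e'))    # onion v3 (ed25519 key space): 1.16e+77
    print(format(2**128, '.2e'))    # all of IPv6: 3.40e+38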
I actually did briefly wonder how the hostnames were assigned, but I posted right before bed.
Many would love having DNS for .onion addresses. And there's been much talk of the .onion domain, which is now a reserved special-use TLD (RFC 7686).
So by 'hosting' they mean being the rendezvous address?
Each of those .onion sites would have its own Tor entry guards, and would negotiate its own rendezvous points. An onion service, just like a Tor client, selects a few entry guards that it uses consistently, gradually replacing them with new ones over a period of weeks. But rendezvous points are picked fresh for each client-server connection.
"Chat lines" are hosted on telephone networks and inaccessible outside it.
Naturally there are still risks in using a hosting service, even when all the information is intended to be public. The owner might change the content (integrity of the data), and it might be removed (denial of service); that's the trade-off for the convenience of not having to host it yourself and the uptime of 24/7 servers.
Plus it has built-in anti-DoS properties as a side effect, so there's no need to centralize behind companies like Cloudflare.
There are huge benefits to hosting on Tor even if it's just a regular website. None of my websites hosted on Tor are illegal in my country.
But even with backups, the .onion private key has been compromised, so you can't come back with the same .onion address.
Given that Docker containers rely on kernel namespaces and cgroups for isolation, they're not as secure as full VMs. But they're far lighter, and much better isolated than these late .onions were on shared hosting. Alternatively, one could maybe use FreeBSD jails instead of Docker.
And about backups: I get his argument for not backing up. But maybe a setup with relatively fast rotation, and thorough deletion of old backups, would be secure enough. I'd use LUKS with dropbear for server FDE. That's still vulnerable, sure, but attackers would need to take some care while impounding the server. Also, I'd keep backups on another server, with only .onion-to-.onion connections (maybe OnionCat).
I do agree with you that VMs, at least, would be more secure. But hosting thousands of VMs takes substantial resources. And when there's free shared hosting available, few would understand enough to pay the premium.
I wouldn't have thought a darkweb hosting service should have any logs?
Though I'm not familiar with thttpd or Savant, after briefly looking them over they appear to be HTTP servers just like Apache or nginx.
What would make them more appealing for a dark web host? They don't seem particularly "dark-web-centric" from what I could read at face value, though most times dark web stuff has tons of other info that's not found at face value...
Not your parent, but it wouldn't surprise me to learn that "dark web sites" are using thttpd ... it's a very simple, lightweight, dependable web server. The major downside - the lack of SSL - is perhaps not an issue as you are running over an encrypted channel anyway.
If I just needed to throw something up - perhaps on a remote or throwaway host - thttpd would certainly be my first choice.
System logins, auditd, supporting services logs, etc all may provide clues as to what happened.
Censorship comes in many forms, and technological literacy is slim in the world when it comes to running a LAMP stack and knowing how to modify Apache; PHP/MySQL, forget about it.
6.5K pages is not a lot of data, especially for interactive and probably non-rich dynamic pages. Think of it as GeoCities in the '90s.
What probably happened was that a jail escape enabled a live shell, and that person killed it. From who knows whom.
As for running a Tor site, the most suspicious thing about it is being a Tor exit node.
I'm wondering why encrypted backup is not an option for them?
Now, I'm sure there's a ton of data that the NSA will be able to break with quantum computing, because people often do transmit data encrypted with an asymmetric cipher, e.g. PGP. But... there's already work on quantum-resistant asymmetric ciphers.
And TLS uses RSA, so maybe they can crack all the HTTPS traffic? Well, assuming today's algorithms are configured correctly, they employ perfect forward secrecy. That means the key-exchange algorithms never transmit or store the encryption key; rather, an ephemeral encryption key is recreated by both sides. Of course, even with PFS they'd get all the metadata and certs if they cracked RSA.
It is also provably secure unlike every other cryptosystem.
That's what Google did with CECPQ1: they combined "New Hope", a quantum-resistant algorithm, with traditional methods (X25519). That way, if New Hope is cracked, you'd still have to crack an industry-standard algorithm as well.
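The construction itself is simple: derive the session key from both a classical shared secret and a post-quantum one, so an attacker has to break both. A Python sketch (pq_encapsulate is a stand-in for a real NewHope/Kyber-style KEM, not an actual library call):

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    def pq_encapsulate():
        # placeholder for a post-quantum KEM: returns (ciphertext, shared_secret)
        ss = os.urandom(32)
        return b'<kem ciphertext>', ss

    client = X25519PrivateKey.generate()
    server = X25519PrivateKey.generate()
    classical_ss = client.exchange(server.public_key())   # classical X25519 secret
    _, pq_ss = pq_encapsulate()                           # post-quantum secret

    # mix both secrets; breaking only one reveals nothing about the session key
    session_key = HKDF(algorithm=hashes.SHA256(), length=32,
                       salt=None, info=b'hybrid kex').derive(classical_ss + pq_ss)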
If you're not doing that, best to stick with known-good encryption schemes.
That being said, you can implement a lot of encoding schemes, like you say, in ways that make it arbitrarily hard for people to decide what is junk and what is data, to the point of being nearly impossible to crack, especially if you XOR with random values: anyone with a data intercept would have a hard time rebuilding the data from the junk before they could even start decoding it.
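At the limit that's just a one-time pad. A tiny Python illustration of the xor-with-random idea (reusing the pad destroys the security, so it only works for data no longer than the pad):

    import os

    data = b'meet at dawn'
    pad = os.urandom(len(data))                         # truly random, used once
    cipher = bytes(a ^ b for a, b in zip(data, pad))    # indistinguishable from junk
    assert bytes(a ^ b for a, b in zip(cipher, pad)) == data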
Since this is a hosting provider for the dark web (and one that advertises that it doesn't do backups, logs, etc.), they don't know what the content of the sites or databases involves. Keeping backups of that content is an additional liability for their business.
Site owners for all hosting providers, including when you're running on clouds like AWS, should put thought into their backup strategies independently of guarantees from their provider.
For providers like AWS, you will likely use their storage options, such as S3, for backups, but you are taking a risk, especially if you're storing in the same region. Everything in this industry is a series of trade-offs; you need to be aware of which ones you're making at any given time.
>loudest 'Amen' in recorded history emanates from my cubicle
This seems to be the exploit they used?
I find this to be a very upsetting attempt at technical clickbaiting. Feels like a journalist trying to appeal to semitechnical readers with hackerman slang.
If it was known for a month, it's not an 0day.
Day zero starts counting from the time when either the developers/maintainers of the system or the general public are informed of the vulnerability. If the vulnerability has been discussed in some private forums, or exploited by the NSA for years, it doesn't matter; it's still day zero until it goes public (or is privately disclosed to, in this case, the maintainers of PHP). And since the attack came something like ~24 hours after disclosure, it's close enough to call it a zero-day attack.
My understanding was that 0day just meant that it was unpatched, not "publicly" known and being used for the first time "in the wild".
"A zero-day (also known as 0-day) vulnerability is a computer-software vulnerability that is unknown to those who would be interested in mitigating the vulnerability (including the vendor of the target software)" Patching and knowing about the vulnerability are different things.
And so, this is not 0day.
Interesting. If I were into this kind of activity I would love to have backups designed, implemented and maintained the "dark" way. It seems to me that by just running the services you are not 100% into this business. You'd have to come up with the whole "dark ops" approach.
The "dark" way is to assure that any and all traffic and interactions are anonymous as possible. Keeping a record of communications/behaviors is the direct antithesis of this.
The "dark" way is to never backup
I hashed for a month to get a vanity one, others have done longer. Did this hosting service allow custom private key uploads?
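For the curious, here's roughly what that hashing loop looks like for v3 addresses (v2 vanity mining brute-forced RSA keys against a truncated SHA-1 instead). A sketch using the cryptography package; each extra prefix character multiplies the expected work by 32:

    import base64, hashlib
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    def onion_v3(pub: bytes) -> str:
        # per rend-spec-v3: addr = base32(pubkey || checksum || version)
        checksum = hashlib.sha3_256(b'.onion checksum' + pub + b'\x03').digest()[:2]
        return base64.b32encode(pub + checksum + b'\x03').decode().lower() + '.onion'

    prefix = 'ab'    # 32x more work per extra character
    while True:
        key = Ed25519PrivateKey.generate()
        pub = key.public_key().public_bytes(
            serialization.Encoding.Raw, serialization.PublicFormat.Raw)
        addr = onion_v3(pub)
        if addr.startswith(prefix):
            print(addr)
            break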
Then when the onion service private key is breached, the site operator just changes the header and DNS record to a new, non-breached one?
As you say though, querying these does reduce anonymity.
My expectation is that a good number will look to an alternate service, though the ones who are willing to try again would be more likely to have recovery plans.
On startup you'd enter a password and decrypt a baked-in filesystem.
I guess this would still be possible to break into as long as the app is running, since the decryption key would be held in RAM, but as soon as the server goes offline the data would be inaccessible.
Then you host this, say, in Russia or China and you should be safe from USA authorities.
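A sketch of that scheme at the application level (load_blob is a hypothetical helper that reads the baked-in encrypted image; a real setup would more likely do this with LUKS at the block layer):

    import getpass
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM
    from cryptography.hazmat.primitives.kdf.scrypt import Scrypt

    salt = b'\x00' * 16                    # stored beside the ciphertext; can be public
    kdf = Scrypt(salt=salt, length=32, n=2**15, r=8, p=1)
    key = kdf.derive(getpass.getpass('passphrase: ').encode())

    aes = AESGCM(key)                      # key exists only in this process's RAM
    nonce, ciphertext = load_blob()        # hypothetical: read the baked-in image
    plaintext = aes.decrypt(nonce, ciphertext, None)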
Pedantic, but it is the preferred nomenclature today.
Moreover, its predecessors, Freedom Hosting and Freedom Hosting II, were both the biggest purveyors of CP when they were taken down. If it walks like a duck...
The motivation for hosting something like this is to hide what you're hosting in a bunch of noise. Legal or illegal. Your goal is to make it hard to gather metadata about the service. There are perfectly legitimate reasons to want that...
That said, it's a foregone conclusion that if you host a service like this, people who want to distribute child pornography are going to flock to it. If you don't police the content, that's even more true.
In fact, in the case of Freedom Hosting II, the host was allowing the pedos to circumvent the normal limitations of the platform, in theory because they were paying cold hard cash, as well as allowing scammers to operate on the platform, also for cold hard cash. It's been suggested that as much as 50% of the content hosted on Freedom Hosting II was child porn.
If you knowingly allow people to use your service to distribute child pornography or commit other crimes, you're every bit as culpable as your customers. This type of platform costs not-insignificant money to run -- it's a business. Nobody is hosting this many services out of a sense of altruism.
And honestly, in the case of CP, if you're hosting it, including running a service that facilitates the hosting of it, you deserve to serve consecutive life sentences in prison and whatever harm befalls those in prison who get found out as pedos.
I mean, they even believe this is a zero-day attack; I can't simply trust the quality of any information there.
Edit: they updated the link, here's the old link - https://nakedsecurity.sophos.com/2018/11/21/hacker-erases-65...
Freedom Hosting was the biggest service and it served CP.
Freedom Hosting II was the biggest service and it served CP.
Daniel's Hosting was the biggest service -- what do you think it served?
CP is the reason Operation DarkNet et al target these platforms for destruction.
That's the problem with prejudices, the people who have them can never see beyond them.
Also, I wasn't aware that "people who engage in or support child exploitation" were a protected class.
You're still trying to put words in my mouth, but by all means keep defending the despicable.
It's already a matter of law (in most western countries) that hosts have some amount of responsibility for what users of their platforms distribute and that's why other hosting services aren't structured this way.
What are your real motives?
See, I don't have to make a throwaway account, and I'll tell you right here that in my youth I had two different pedos in my life attempting to groom me into being victimized. I guess the nature of predation made me a target because of my vulnerability at the time. Luckily I got through that part of my life unhurt, but when I think about it -- just barely.
All those people whose pictures are being looked at did _not_ get away unhurt and those that consume such material deserve to rot.
That said, I agree with your point that these people need help and support and that blindly locking them away for looking at pictures on the internet is generally not a good use of resources.
I'm specifically talking about the latter case, even though I personally see no such distinction.
I will concede your point that rehabilitation is a possibility, but a crime was committed and that help and support can begin in prison.
Could begin in prison. In most countries' prisons, including in the US and most of Europe, it can't.
Consumption tells the filthy creatures making the material that it is in demand, and they make more. How the hell are you defending pedophilia?!
Maybe a mod can replace the link with the actual source.
PHP zero-day vulnerability.
Another time, I set my user agent to an XSS string, just because I was testing something, but forgot to set it back. A week later, I noticed that a colleague was baffled by a popup coming up each time he browsed the logs. Oops, that was my XSS user agent... I'm guessing a lot of places don't sanitize logs.
You'd be amazed how many places these things work :-)
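For what it's worth, the fix is cheap: escape anything user-controlled (like the User-Agent header) before rendering it into an HTML log view. A minimal Python illustration:

    import html

    user_agent = '<script>alert(1)</script>'    # hostile User-Agent string
    print(html.escape(user_agent))              # &lt;script&gt;alert(1)&lt;/script&gt;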