Check out https://medium.com/@grantm/obtaining-instant-breach-transpar... for some more info about how this might be possible.
It's still unclaimed I believe: http://ownme.ipredator.se
> PSA: the bitcoin piñata will be reduced by a large amount; the owner who lent the 10 BTC wants to spend 9 on useful projects
"This challenge started in February 2015, and will run until the above address no longer contains the 10 bitcoins it started with, or until we lose interest. In 2018 we will likely reuse most bitcoins for other projects."
So I'm not sure what happened frankly.
There’s a reason companies use bug bounty platforms instead of just having a bunch of bitcoins lying around.
Code can be found here: https://github.com/kale/image-cache-logger
Edited to add:
Here's the view key if you're interested. Just append it to the end of uriteller.io:
A crawler on AWS hit it two minutes after I posted it.
EDIT: I believe it's because the CSPRNG state ( https://metacpan.org/pod/Bytes::Random::Secure::Tiny ) was created before the process forked its workers, so they all shared the same initial state and generated the same token. I've reduced it to 1 worker pending a proper fix.
Sorry about that, and thanks for pointing it out.
In general I suspect if anything you'd be more likely to mess it up by reading bytes from /dev/urandom manually than by using a library.
Open the file and read from it. If neither the `open` nor the `read` fails, you have random bytes. In essence, the library for this already exists: `open` and `read`, which seems to be the same API surface as this library.
First, the author mentions that a `read` from urandom can be interrupted. I am unaware of any system where this is actually possible. And even if it were, the author's original code (and my description of an implementation) already works! The `read` call will return an error, and that error is handled. His "improved" code is simply an optimization around retrying from this device, but it's not an improvement in safety.
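The open/read approach described above really is this short. A sketch in Python (which already retries EINTR internally per PEP 475, so the loop below only guards against short reads, which read(2) permits in principle):

```python
import os

def urandom_bytes(n):
    """Read exactly n bytes from /dev/urandom, handling short reads."""
    buf = bytearray()
    fd = os.open('/dev/urandom', os.O_RDONLY)
    try:
        while len(buf) < n:
            chunk = os.read(fd, n - len(buf))
            if not chunk:
                raise OSError('unexpected EOF from /dev/urandom')
            buf += chunk
    finally:
        os.close(fd)
    return bytes(buf)

token = urandom_bytes(16)
print(len(token))  # 16
```

Any failure surfaces as an ordinary `OSError`, which is exactly the error handling the original code already had.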
His second argument is that /dev/urandom might not have enough randomness in it. This is, quite simply, not a concern for anyone not writing code for specific embedded devices or for extremely early in the kernel boot process. Anyone who is writing code for these environments is almost certainly already aware of these limitations. And even then, using a library like the one the GP is using doesn't actually help, since it's virtually guaranteed to just be reading bytes from `/dev/urandom` for its seed in the first place.
The rest go into situations that — quite frankly — border on ludicrous. If someone has replaced your `/dev/random` with `/dev/zero`, you have already lost and there is nothing you can or should reasonably do besides nuke the machine from orbit.
So instead, we use them to seed CSPRNGs, and use those for speed.
Care to share your stack?
It's mostly http://mojolicious.org/
SQLite for the database (for now; that'll change if it gets too big), and nginx as a reverse proxy in front of Mojolicious. CentOS 7.
Monitor the URL for access: whenever someone hits it, you receive an alert.
This gives you a real-time notification that a reverse engineer has looked at your firmware, along with an IP address. So you now “know” that someone might try to hack your device, and you know where from.
This is a billion dollar security play!
Jokes aside, it's a good concept for IoT firms that have a security advocate but no budget. It would help persuade people that hackers are targeting their devices/apps, backed by quantifiable data of degrading value.
I guess that’s all doable with the “private server with root access” under enterprise pricing. What a great way to precisely measure cover time. You could inject arbitrary URLs into an application to see if your API has been reverse engineered.
This is a signal, but not a game changer for security pros.
Having retrieved API secrets offensively, and overseen secret rotation defensively, I’d say it would be a game changer. It’s an excellent idea to automate this discovery with an alarm. The current discovery system is either an internally developed, half-baked version of this that comes from sophisticated logging, or manual oversight.
If the URL gets accessed at all, you will know your secret has been leaked.
Also Yahoo Slurp is crawling my email URLs. Sigh.
For added security, maybe it's better to hide the canary URL behind a bit.ly link? Someone might recognize your 3 URLs.
If seeing that the bit.ly URL redirects to a known-urlcanary domain would put you off visiting the URL, then seeing the raw known-urlcanary domain (not behind bit.ly) would also be enough to put you off visiting it.
MyURL.com/101/passwords /private /logins
Is the idea that you'd embed this in a way that it is automatically triggered? Or that you would leave it in plaintext somewhere and assume someone would eventually visit it if they were snooping around your stuff?
(But obviously it needs to be a hostname you're not already using for something else).
This means you don't break the system when you move IP address. Moreover, should you ever need to, you can round-robin the domain for either reliability or load-balancing (though I doubt that would be necessary).
Also, doesn't bit.ly access a URL to pull a title or generate a preview? This would send a click through to the canary as well.
But of course that heavily depends on the use case
You can also register your own domain and point it at my server and your canary will work just fine on that domain.
If you're playing at a high enough level that you've specifically blackholed URL Canary traffic by IP address, then you're a worthy adversary. And additionally, that is a splendid problem for my project to have.
On the enterprise tier you even get your own server with its own IP address, so there should be nothing linking it to URL Canary at all. (Although the enterprise tier is extremely expensive, and if anyone buys it I will probably panic).
If you check out https://canarytokens.org you will notice the ability to create several others (be notified when someone resolves an IP address, be notified when someone opens a file, be notified when someone views a QR code, etc)
Although the generated URLs don't have "canary token" in them.
How many attackers are going to click that link or any link for that matter? Seems the value prop of the product is based on the assumption that folks will click. Maybe it's a solid assumption. I just can't see the evidence for it.
I don't think it has commercial value really. But for social awareness, showing that mail, or notes, or storage providers aren't always as private as you'd hope, that's where the value is.
I see what he's trying to do, but I don't think this is the way. Mathematical proofs that verify that a payload has been observed, opened, or accessed do work: they are deterministic. They are also far more complex. This is trying to solve the problem in a simple way, but IMO it's still just as nondeterministic as having no solution at all.
It could go in backups, in your git repository, bug tracker, internal wiki, etc.
For an average non-techy, I don't know... they might want to put one in their diary?
If that's a problem for you then you can mitigate it a bit by checking the User-Agent and IP address in the alert email you receive.
Another way would be to put a fake user in your users database and then watch password leaks and see if it shows up. Then you know you've been breached. It should never happen but if it does it's good to know.
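The fake-user idea is a classic honeytoken. A minimal sketch with SQLite (matching the stack mentioned upthread; the schema and helper names here are hypothetical):

```python
import sqlite3
import secrets

# Hypothetical users table; the canary is an address that exists nowhere else.
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE users (email TEXT PRIMARY KEY, pw_hash TEXT)')

canary_email = f'canary-{secrets.token_hex(8)}@example.com'  # never used for real signups
conn.execute('INSERT INTO users VALUES (?, ?)', (canary_email, secrets.token_hex(16)))
conn.commit()

def dump_contains_canary(leaked_emails):
    """If the canary address shows up in a credential dump, the DB was exfiltrated."""
    return canary_email in set(leaked_emails)

print(dump_contains_canary(['alice@example.com']))                # False
print(dump_contains_canary(['alice@example.com', canary_email]))  # True
```

The key property is that the canary address is unique and unused, so any appearance of it in a leak can only have come from your database.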
Essentially, to do the same thing as URL Canary, you'd set up an action that only sends an email, and trigger it with a custom filter that scans your web server's access log for hits on a particular URL.
while echo -en "HTTP/1.1 200 OK\r\n..." | nc -l $IP $PORT; do
    cat $MESSAGE | sendmail -i -t
done
Canaries on the other hand are exclusively used by technical people, who won't mind developing and spinning up a tiny honeypot.