Page is unusable to me, if I try to opt-out through settings it just crashes :/
Nice way to force users to accept - not that I think this cookie terror banner practice makes any sense.
I don't know what your personal agenda is, but there's so much misinformation and hyperbole in your comment that I have to assume that this is personal for some reason!?
I've been meaning to write a proper post-mortem about all that, now that the dust has settled. But in the meantime, just quickly:
- I did not make billions. You're off by quite a few orders of magnitude. After taxes it was well below $500k.
- Nothing I did was illegal; that's how I got away with it.
- Coinhive was not ransomware. It did not encode/hide/steal data. In fact, it did not collect any data. Coinhive was a JavaScript library that you could put on your website to mine Monero.
- I did not operate it for "years". I was responsible for Coinhive for a total of 6 months.
- I did not organize a doxing campaign. There was no doxing of Brian Krebs. I had nothing to do with the response on the image board. They were angry, because Brian Krebs doxed all the wrong people and their response was kindness: donating to cancer research. In German Krebs = cancer, hence the slogan “Krebs ist scheiße” - “cancer is shit”.
- Troy Hunt did not "snatch away" the coinhive domain. I offered it to him.
In conclusion: I was naive. I had the best intentions with Coinhive. I saw it as a privacy preserving alternative for ads.
People in the beta phase (on that image board) loved the idea of leaving their browser window open for a few hours to gain access to premium features that you would otherwise have to buy. The miner was implemented on a separate page that clearly explained what was happening. The Coinhive API was expressly written for that purpose: attributing mined hashes to user IDs on your site. HN was very positive about it, too.[1]
The whole thing fell apart when website owners put the miner on their pages without telling users, and further, when script kiddies installed it on websites they did not own. I utterly failed to prevent embedding on hacked websites and to educate legitimate website owners on "the right way" to use it.
I only have access to the trade volume of the coinhive wallet addresses that were publicly known at the time, and to what the blockchain provides as information about them. How much money RF or SK or MM made compared to you is debatable. But as you were a shareholder of the company/companies behind it, it's reasonable to assume you got at least a fair share of their revenue.
If you want me to pull out a copy of the financial statements, I can do so. But it's against HN's guidelines so I'm asking for your permission first to disprove your statement.
> Nothing I did was illegal (...) Coinhive was not ransomware
At the time, it quickly became the 6th most common miner on the planet, primarily (> 99% of the transaction volume) being used in malware.
This was well known before you created coinhive, and it remained known during and after. The Malpedia entries should get you started [1] [2], but I've added lots of news sources, including German media from that time frame, just for the sake of argument [3] [4] [5] [6] [7] [8]
----------
I've posted troyhunt's analysis because it demonstrates how easily this could've been prevented. A simple correlation between Referer/Domain headers or URLs and the tokens would've been enough to figure out that a threat actor from China distributing malware very likely does not own an .edu or .gov website in the US, nor SCADA systems.
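For illustration, the kind of correlation check described above could have looked something like this. All names here (`tokenRegistry`, `isSuspicious`) are hypothetical; this is a sketch of the idea, not Coinhive's actual API:

```javascript
// Map each site token to the set of hostnames its owner registered.
const tokenRegistry = new Map([
  ["token-abc", new Set(["example-blog.com"])],
]);

// Flag a mining request whose Referer host was never registered for
// the token it carries — the mismatch troyhunt's analysis surfaced.
function isSuspicious(token, refererUrl) {
  const allowed = tokenRegistry.get(token);
  if (!allowed) return true; // unknown token: suspicious by default
  const host = new URL(refererUrl).hostname;
  return !allowed.has(host);
}

console.log(isSuspicious("token-abc", "https://example-blog.com/post")); // false
console.log(isSuspicious("token-abc", "https://some-university.edu/"));  // true
```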
As there was a financial benefit on your side, no damage payments to any of the affected parties, and no revoked transactions from malicious actors, I'd be right to assume an unethical motivation behind it.
> I did not organize a doxing campaign. There was no doxing of Brian Krebs.
As I know that you're still an admin on pr0gramm as the cha0s user, that's pretty much a useless archive link.
Nevertheless, I don't think you can say "There was no doxing of Brian Krebs" when you can still search for "brian krebs hurensohn" ("Hurensohn" is German for "son of a bitch") on pr0gramm today, with posts that have not been deleted and that still show his face with a big fat "Hurensohn" stamp on it. [9]
As I wrote in another comment, I also said that there are nice admins on the imageboard, like Gamb, and that they successfully turned that doxxing attempt around into something meaningful.
> I don't know what your personal agenda is, but there's so much misinformation and hyperbole in your comment that I have to assume that this is personal for some reason!?
This is not personal for me, at all. But I've observed what was going on, and I could not stay silent about the unethical things you built in the past.
For me, that destroyed all trust and good faith in you. The damage that you caused on a global scale with your product coinhive far exceeds what one person can make up for in a lifetime. And I think people should know about that before they execute your code and become victims of a fraudulent coin mining scheme.
Calling this hyperbole and misinformation is kind of ridiculous, given that antivirus signatures and everything else are easily discoverable with the term "coinhive". It's not like it's a secret or made up.
Your "portfolio page" is quite disrespectful and in line with your behaviour in this HN submission. You've made up too many blatantly obvious lies and are now stooping to provoking a reaction, because you have nothing better to say. I don't think anyone should trust you.
> Your "portfolio page" is quite disrespectful and in line with your behaviour in this HN submission.
Care to elaborate what is "disrespectful" about my own personal website? How did I offend you, specifically?
> You've made up too many blatantly obvious lies and are now stooping to provoking a reaction, because you have nothing better to say. I don't think anyone should trust you.
I've cited a lot of news articles, blog posts, insights, even malware databases from multiple globally known and trusted security vendors.
The original JavaScript engine “Impact” from 2010 is at the end of its life; the C rewrite “high_impact” is new and will (potentially) be around for as long as we have C compilers and some graphics API.
The JavaScript engine had a lot of workarounds for things that are no longer necessary, and some things that just don't work that well with modern browsers. Off the top of my head:
- nearest-neighbor scaling for pixel graphics wasn't possible, so images are scaled at load time, pixel by pixel[1]. Resizing the canvas after the initial load wasn't possible with this. Reading pixels from an image was a total shit show too, when Apple decided to internally double the Canvas2D resolution for their "retina" devices while still reporting the un-doubled resolution[2].
- vendor prefixes EVERYWHERE. Remember those? Fun times. Impact had its own mechanism to automatically resolve the canonical name[3]
- JS had no classes, so classes are implemented using some trickery[4]
- JS had no modules, so modules are implemented using some trickery[5]
- WebAudio wasn't a thing, so Impact used <Audio>, which was never meant for low-latency playback or multiple channels[6] and was generally extremely buggy[7]. WebAudio was supported in later Impact versions, but it's hacked in there. AudioContext unlocking, however, is not implemented correctly, because back then most browsers didn't need unlocking and there was no "official" mechanism for it (the canonical way now is ctx.resume() in a click handler). Also, browser vendors couldn't get their shit together, so Impact needed to handle loading sounds in different formats. Oh wait, Apple _still_ does not fully support Vorbis or Opus 14 years later.
- WebGL wasn't a thing, so Impact used the Canvas2D API for rendering, which is _still_ orders of magnitude slower than WebGL.
- Touch input wasn't standardized and mobile support in general was an afterthought.
- I made some (in hindsight) weird choices, like extending Number, Array and Object. Fun fact: Function.bind and Array.indexOf weren't supported by all browsers, so Impact has polyfills for these.
- Weltmeister (the editor) is a big piece of spaghetti, because I didn't know what I was doing.
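For the curious, the pre-ES6 "trickery" for classes generally meant hand-building an inheritance system out of prototypes. Here is a minimal sketch in that spirit; the `extend` helper and the `this.parent()` convention are illustrative of the technique, not Impact's actual `ig.Class` implementation:

```javascript
// Build a constructor whose prototype chains to `base`, wrapping
// overriding methods so they can call the overridden one via this.parent().
function extend(base, props) {
  const proto = Object.create(base.prototype);
  for (const key of Object.keys(props)) {
    const parentFn = base.prototype[key];
    proto[key] =
      typeof props[key] === "function" && typeof parentFn === "function"
        ? function (...args) {
            const prev = this.parent;
            this.parent = parentFn;        // expose the overridden method
            const ret = props[key].apply(this, args);
            this.parent = prev;            // restore afterwards
            return ret;
          }
        : props[key];
  }
  function Class(...args) {
    if (proto.init) proto.init.apply(this, args); // init() as constructor
  }
  Class.prototype = proto;
  Class.prototype.constructor = Class;
  return Class;
}

const Entity = extend(Object, {
  init(name) { this.name = name; },
  describe() { return "entity " + this.name; },
});
const Player = extend(Entity, {
  describe() { return this.parent() + " (player)"; },
});

console.log(new Player("p1").describe()); // "entity p1 (player)"
```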
Of course all of these shortcomings are fixable. I actually have the source for "Impact2" doing all that, with a completely new editor and bells and whistles. It was very close to release but I just couldn't push it over the finish line. I felt bad about this for a long time. I guess high_impact is my attempt at redemption :]
I loved Impact and paid for it back in the day, though I never ended up finishing the project I was working on. Did you ever throw the source for "Impact2" up anywhere? What's missing from it being releasable?
It does, but the main speedup comes from using WebGL instead of Canvas2D. Sadly, Canvas2D is still as slow as it ever was and I really wonder why.
Years back I wrote a standalone Canvas2D implementation[1] that outperforms browsers by a lot. Sure, it's missing some features (e.g. text shadows), but I can't think of any reason for browser implementations needing to be _that_ slow.
For things missing/hard in WebGL, is it performant enough to rely on the browser compositor to add a layer of text, or a layer of svg, over the WebGL?
I have some zig/wasm stuff working on canvas2D, but rendering things to bitmaps and adding them to canvas2d seems awfully slow, especially for animated svg.
Ah man, I'm still looking for a general canvas drop in replacement that would render using webgl or webgpu if supported. Closest I've found so far is pixi.js, but the rendering api is vastly different and documentation spotty, so it would take some doing to port things over. Plus no antialiasing it seems.
To be fair, they modified Impact _a lot_. In some of their development streams[1] you can see a heavily extended Weltmeister (Impact's level editor).
Imho, that's fantastic! I love to see devs being able to adapt the engine for their particular game. Likewise, high_impact shouldn't be seen as a “feature-complete” game engine, but rather as a convenient starting point.
In game development this isn't true, for better or worse. There is a lot of sunk-cost mindset in games: we just go with what we have, because we've already invested the time in it and we'll make it work by any means.
Of course you can. Not really sure why this still is tossed about. You just get a shiny turd with a lot less stink. I've made a career of taking people's turds and turning them into things they can now monetize (think old shitty video tape content prepped to be monetized with streaming).
I like this take, but you're saying something different I think, which is more along the lines of "Customers don't care about how the sausage is made".
You didn't polish a turd, you found something that could be valuable to someone and found a way to make it into a polished product, which is great.
But "You can't polish a turd" implies it's actually a turd and there's nothing valuable to be found or the necessary quality thresholds can't be met.
That's just crazy BS. Starting from open source code and adding specific features needed for a project is a very common strategy; it doesn't at all mean the tool wasn't good to begin with.
phoboslab was downplaying their own efforts by saying that the Cross Code team customised the Impact engine a bunch. My point was that no amount of customisation can turn a bad engine into a good one (you can't polish a turd), so phoboslab definitely deserves credit for building the base engine for Cross Code.
> no amount of customisation can turn a bad engine into a good one (you can't polish a turd)
At a risk of being off-topic and not contributing much to this particular conversation (as I doubt it's relevant to the point you're making), I'd like to note that I often actually find it preferable to "polish a turd" than to start from scratch. It's just much easier for my mind to start putting something that already exists into shape than to stop staring at a blank screen, and in turn it can save me a lot of time even if what I end up with is basically a ship of Theseus. Something something ADHD, I guess.
However, I'm perfectly aware this approach makes little sense anywhere you have to fight to justify the need to get rid of stuff that already "works" to your higher-ups ;)
I assume you're talking about the pack_map.c used during build? That just was written in C because I had the .map parser already lying around from another side project. If I had done it from scratch for this project it likely would have been JS or PHP, too.
For the past few years I've been building a B2B SaaS. Naturally, I chose my "get shit done" language, PHP, to do it. While the product itself is as boring as it gets, I've had stretches of good fun with it by building _everything_ myself.
Naturally the web frontend is written in PHP, so are the cronjobs, shell scripts, message queues, WebSocket server, client software, statistics and the server automation (think Ansible, but imperative). Sharing my own libraries between all these parts is extremely valuable. Owning the whole stack means I never have to deal with black boxes or fiddle with dependency management. The language just gets out of the way so you can solve your actual problems.
The fact that PHP is reasonably good at all these things is a big advantage. The performance is certainly good enough (never became a problem) and the long running server applications (e.g. the WebSocket servers) are rock solid.
Hell, in the PHP 5 era and before, it even HAD to: PHP was designed around "you get a brand new process every request", so a lot of things with memory went wrong if a process ran for too long.
This changed with PHP 7, when frameworks appeared that allowed an event-based model, and then PHP 8.1 added fibers to the core to support it more easily.
These days, event-based PHP has several well-supported frameworks, is stable and fast, and lets you use the same codebase on your WebSocket or other event system as on the rest of your app (sharing libraries is super cool, and so is using the same DTO on the sender and receiver, etc.).
Does it also run in the browser, so you can share type definitions and libraries and tools between client and server, and seamlessly perform isomorphic server side rendering?
If only there were a language that was as universally deployed and well supported and extremely optimized on both server AND client, and had all of the capabilities of PHP and many more...
Because today, using an anisomorphic language that can't run on both sides is like riding a bicycle with only one leg.
Oh, I would agree, but JavaScript is anything BUT stable in its environment, frameworks, etc. Trying to update a one-year-old PHP code base with Composer causes no issues 99% of the time.
Oh don't worry, some day JavaScript will be as stagnant as PHP, too. Just wait, time flies.
If it's a stable environment that runs old code you want, then if I had my way we'd all be programming interactive, distributed, network-extensible web apps in PostScript instead of JavaScript. That language has been stagnant far longer than PHP has even existed (PostScript appeared in 1982, 13 years before PHP 1.0 was released in 1995), and you can still print and view all of your old PostScript documents from 42 years ago.
HyperMedia browser with embedded "applets" in PostScript:
It's definitely the only worthwhile or practical language for isomorphic web app development.
Of course you could choose to run a WASM PHP interpreter in the browser, but you do you.
Or you could petition all the browser vendors to support PHP directly. Get back to me on that when you've sorted it all out and resolved all the political and technological and economic issues, and also implemented a better framework than SvelteKit with it.
Kinda related, I was exploring how much complexity you really need with https://qoaformat.org/ - compresses at 278 kbits/s, but is much simpler than even MP2.
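Part of what makes QOA so small is that its heart is just a 4-tap least-mean-squares (LMS) predictor. A rough JavaScript sketch of the predict/update steps, per my reading of the QOA spec (quantization, framing, and the reference encoder's weight seeding are omitted, so this is illustrative rather than spec-exact):

```javascript
// Tiny LMS predictor in the style of QOA: predict each sample from a
// weighted sum of the last 4 samples, then nudge the weights toward
// whatever reduces the residual.
class Lms {
  constructor() {
    this.history = [0, 0, 0, 0]; // last 4 reconstructed samples
    this.weights = [0, 0, 0, 0]; // filter weights (zeroed here for simplicity)
  }
  // Fixed-point prediction: weighted sum of history, scaled down by >> 13.
  predict() {
    let p = 0;
    for (let i = 0; i < 4; i++) p += this.weights[i] * this.history[i];
    return p >> 13;
  }
  // Adapt: move each weight by a fraction of the residual in the
  // direction of its history sample, then shift the new sample in.
  update(sample, residual) {
    const delta = residual >> 4;
    for (let i = 0; i < 4; i++) {
      this.weights[i] += this.history[i] < 0 ? -delta : delta;
    }
    this.history = [this.history[1], this.history[2], this.history[3], sample];
  }
}
```

In the real codec the residual is the quantized-then-dequantized prediction error, and the reference implementation seeds the weights with non-zero values; see the spec for the exact constants.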