If that's not "Hacker" News, what is?
However, I wonder why the OP didn't try to keep the vector graphics and use SVG, for instance? That would allow for infinite scalability. It is mentioned that "GPUs don't like vectors", but if the game doesn't change too often, it shouldn't make much of a difference?
For 1) I decided I'd rather just release an update with larger textures if these ones ever start to look dated. That way I get to keep the runtime code simple. Less code means fewer bugs. I don't want to spend a lot of time fielding support requests from users who hit edge cases in the rasterizer. As for not changing too often, that's true, but taking advantage of that means doing change tracking with dirty-rectangles or similar, which not only adds complexity but also feels like it would make performance less predictable.
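For anyone who hasn't done it, dirty-rectangle tracking amounts to roughly this sketch (illustrative only, not the game's actual code):

    # Minimal dirty-rectangle bookkeeping; rects are (x, y, w, h).
    # Only the damaged region gets re-rasterized each frame.

    def union(a, b):
        """Smallest rectangle covering both a and b."""
        x = min(a[0], b[0])
        y = min(a[1], b[1])
        right = max(a[0] + a[2], b[0] + b[2])
        bottom = max(a[1] + a[3], b[1] + b[3])
        return (x, y, right - x, bottom - y)

    dirty = None  # damage accumulated so far this frame

    def mark_dirty(rect):
        global dirty
        dirty = rect if dirty is None else union(dirty, rect)

    def end_frame(rasterize_region):
        global dirty
        if dirty is not None:
            rasterize_region(dirty)  # run the vector rasterizer here only
            dirty = None

Simple enough on its own, but every sprite move now has to remember to mark both its old and new bounds dirty, which is exactly the kind of bookkeeping that makes performance less predictable.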
And for 2) the game as it stands now is under 50MB so I didn't feel a pressing need to make it smaller, although a tiny executable would be cool in a satisfying, demoscene kind of way.
When Adobe hacked "HTML5 support" onto Animate, they did exactly the same thing the OP did. It renders every shape in the FLA to a sprite sheet and then draws it to a canvas tag. If you know ahead of time what will be drawn, this is the most reasonable thing you can do.
Even before HTML5 support, the AS3 Starling framework that let you "use the GPU" would pre-render all your vector assets to bitmap textures at runtime. And that was a framework built for Flex developers; if you were accustomed to building things on the timeline, you rendered on CPU, because that's where all of Flash's very particular rendering logic has to live.
Ruffle gets around this by tessellating every shape into a GPU-friendly triangle mesh. This lets us render stuff that's technically vectors on the GPU. But as you can imagine, this creates its own problems:
- Flash has very specific stroke-scaling rules. If a stroke is smaller than 1px, it gets rounded up to 1px. This is how Flash's "hairline stroke" feature works: it actually asks for a 1/20px stroke, and that gets rounded up to 1px at whatever the current scaling factor for that shape is. When your stroke is a polygon mesh, you can't vary the stroke width to match Flash without retessellating, so hairline strokes on shapes that stretch don't animate correctly.
- Likewise, any stretch of a stroke that changes the aspect ratio also distorts the stroke, since it's baked into the tessellated mesh. There's a minigolf game that does this to hairlines, and it will basically never look right in Ruffle.
- Tessellated vectors lose their smoothness, so we have to sort of guess what scale the art is drawn at and add enough detail polygons for things to render correctly (see the subdivision sketch just after this list). Most of the time we get it right. However, some movies do crazy things like store all the art at microscopic scale and blow it up. This provides a compression benefit, because it quantizes the points on the art with little visual difference in Flash Player. In Ruffle, however, the art becomes very low-poly.
- All the tessellation work takes a significant amount of time. There are certain movies (notably, a lot of Homestuck) that would hang the browser because of how much ridiculously complicated vector art was being processed before the movie even loaded. We had to actually limit how much art could be tessellated per frame, and expose that to movies as bytesLoaded, which is why Ruffle movies have preloaders even though we don't support streaming download.
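To make the tolerance-guessing concrete, here's the standard recursive way to flatten a quadratic Bezier into line segments - a sketch of the general technique, not Ruffle's actual tessellator:

    # Flatten a quadratic Bezier (p0, p1, p2) into line segments, subdividing
    # until the control point is close enough to the chord. Guess the display
    # scale too low and the tolerance is too coarse: the art goes low-poly.

    def flatten_quadratic(p0, p1, p2, tolerance, out):
        # Distance from the control point to the chord midpoint is a cheap
        # flatness estimate for quadratics.
        mx = (p0[0] + p2[0]) / 2 - p1[0]
        my = (p0[1] + p2[1]) / 2 - p1[1]
        if mx * mx + my * my <= tolerance * tolerance:
            out.append(p2)
            return
        # de Casteljau split at t = 0.5, then recurse on both halves.
        a = ((p0[0] + p1[0]) / 2, (p0[1] + p1[1]) / 2)
        b = ((p1[0] + p2[0]) / 2, (p1[1] + p2[1]) / 2)
        m = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
        flatten_quadratic(p0, a, m, tolerance, out)
        flatten_quadratic(m, b, p2, tolerance, out)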
There's another approach to drawing Beziers on GPU: drawing the hull of control points with a procedural texture that calculates the underlying polynomial. This is especially simple for quadratic curves (the ones with one control point), which is what all Flash timeline art is.
However, strokes are more complicated. You'd think we could just take the hull of the stroke and draw that as a fill, but you can't, because the offset of a Bezier curve is not itself a Bezier curve. Drawing the stroke in a pixel shader would make sense, except you still need to define a polygon mesh around your stroke with a reasonable texture coordinate system to make the math work. And the polygonal outlines of Bezier curves can get really funky; there's no obvious way to quickly say "here's a curve, now give me the polygon that encloses a 5px stroke around it". Remember how tessellation takes so long that it would hang Ruffle?
 I'm not sure if this is virtual or device pixels.
 Flash specifies vectors in fixed-point units called twips, 1/20 of a pixel. That's why the zoom factor on Flash movies was locked to 2000%: at 20x zoom, one twip maps to one pixel.
 Flash can draw cubics - the two-control-point curves you think of when you think Bezier - but only in response to an AS3 graphics command. We haven't implemented this in Ruffle yet.
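To sketch the procedural-texture idea from above: Loop and Blinn's technique assigns a quadratic's three control points the canonical coordinates (0,0), (0.5,0) and (1,1), lets the GPU interpolate them across the control triangle, and has the pixel shader test the sign of u^2 - v. A CPU version of that test, purely to illustrate - this isn't Ruffle code:

    # Loop-Blinn-style inside test for one quadratic Bezier triangle.
    # On the GPU, the interpolation below is what the rasterizer does for
    # free with vertex attributes; the last two lines are the pixel shader.

    def barycentric(p, a, b, c):
        """Barycentric coordinates of point p in triangle (a, b, c)."""
        det = (b[1] - c[1]) * (a[0] - c[0]) + (c[0] - b[0]) * (a[1] - c[1])
        l0 = ((b[1] - c[1]) * (p[0] - c[0]) + (c[0] - b[0]) * (p[1] - c[1])) / det
        l1 = ((c[1] - a[1]) * (p[0] - c[0]) + (a[0] - c[0]) * (p[1] - c[1])) / det
        return l0, l1, 1 - l0 - l1

    def inside_curve(p, p0, p1, p2):
        """True if p (within the control triangle) is on the filled side."""
        l0, l1, l2 = barycentric(p, p0, p1, p2)
        # Interpolate the canonical coords (0,0), (0.5,0), (1,1).
        u = 0.5 * l1 + 1.0 * l2
        v = 1.0 * l2
        return u * u - v <= 0  # sign flips depending on fill orientation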
Well fuck me, I had no idea. I figured that "simple 2D vectors" would be beyond piss-easy for modern GPUs. I'd never considered that it isn't the actual vector math that's accelerated, but rather the fast texture mapping of everything onto presumably comparatively simple geometry. You've just turned my view of the world upside down :(
They are, but on NVIDIA only... https://developer.nvidia.com/gpu-accelerated-path-rendering
But signed distance fields work well and easily on GPUs.
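A toy example of why they suit GPUs: every pixel evaluates the same small distance function independently, and antialiasing falls out of mapping distance to coverage. A CPU sketch of the idea:

    # Render an antialiased circle from its signed distance field.
    # On a GPU the per-pixel expression would be the pixel shader;
    # every pixel is computed independently of its neighbours.
    import math

    W, H, R = 64, 64, 24

    def circle_sdf(x, y):
        """Signed distance to a circle of radius R centered in the image."""
        return math.hypot(x - W / 2, y - H / 2) - R

    image = [
        [max(0.0, min(1.0, 0.5 - circle_sdf(x + 0.5, y + 0.5)))  # ~1px AA ramp
         for x in range(W)]
        for y in range(H)
    ]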
Here's a recent article and discussion about it:
Vector graphics on GPU (gasiulis.name)
It's definitely not a simple solution, but might enable you to do runtime rasterisation with a good framerate on the GPU rather than pre-rasterising all the vector art.
(OK, those links are for NVidia, but if they can do it I guess others can too?)
(also see https://www.researchgate.net/publication/262357352_GPU-accel... )
I've seen some font rendering work that has already embraced them for high perf rasterization.
Still, may be hard to get flash equivalency.
Your writing itself is great. A lot of writing linked by HN seems too verbose because people try to sound smart. Yours is succinct while engaging. Right length, just technical enough, and interesting. I really enjoyed this, thanks for writing it!
> Object-orientation is not fashionable right now in gamedev circles
Can you elaborate on this?
I watched this talk years ago: CppCon 2014: Mike Acton "Data-Oriented Design and C++"
I was very impressed by the ideas presented.
Java/C# can be used to write a program with "just a bunch of structs + static methods". Isn't that data-oriented enough? Yeah, I know we can do 72 levels of inheritance too... but ignore that for a moment.
Whether or not an ECS will actually be faster depends entirely on the type of game and access patterns of the inheritance structure. Further, it matters how you are doing inheritance (LTO can eliminate a lot of the pitfalls even if you use virtual function calls).
This is nothing against ECS. I just find the current claims of performance dubious and potentially flat out wrong given modern compiler optimizations and changes in CPU caches (mainly that they got a LOT bigger).
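For anyone unfamiliar with the layout difference being argued over, it's array-of-structs vs struct-of-arrays. A sketch with made-up names - and note Python boxes everything, so read it as standing in for C structs vs parallel arrays:

    # Array-of-structs vs struct-of-arrays: the layouts ECS debates are about.
    # Whether SoA actually wins depends on access patterns, as said above.
    from dataclasses import dataclass

    # "OO" layout: one object per entity, fields interleaved in memory.
    @dataclass
    class Entity:
        x: float
        y: float
        health: int
        name: str

    entities = [Entity(float(i), 0.0, 100, f"e{i}") for i in range(1000)]

    def move_aos(dt):
        for e in entities:       # drags health/name cache lines along too
            e.x += dt

    # "Data-oriented" layout: one tightly packed array per component.
    xs = [float(i) for i in range(1000)]
    ys = [0.0] * 1000

    def move_soa(dt):
        for i in range(len(xs)):  # streams through only the data it needs
            xs[i] += dt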
Ideally, compilers should be able to transform classes into ECS layout when generating code. Shouldn't be hard to have a pass before compilation, if we use clang.
(Edit: I love your games and used to play a lot when I was a kid)
It's an impressive piece of writing. You're definitely good at this.
Is it too long, too short, too technical, not technical enough? Boring? Interesting? Is it enlightening or does it just come off like content marketing?
Oh, and you forgot funny. This part made me laugh:
Flash stores its vector graphics in XML format. You might argue that XML is a poor choice for graphics data, but you weren't a product manager at Macromedia. Behold: ...
I enjoyed how you explained certain technical concepts succinctly and in simple language. For example, as someone who never used Flash, I appreciated how you explained the 1000 ft view of how Flash works. When I'm looking at a new tool or framework, I try to start by understanding the mental model behind it. Yet a simple explanation of it is often difficult to find.
I was also struck by how much technical work was involved here—you made it sound easy. How long did this project take you?
I second that. I find the better someone knows a subject, the harder it is for them to provide this perspective, making it all the more impressive.
Did you investigate Haxe or OpenFL?
Also, I think Flash was more expensive to maintain than Adobe’s other design applications because the Flash Player needed constant security patches. Adobe was usually happy to milk cash cow products, such as Director and Shockwave, transferring maintenance to remote teams in India.
I mean, you've probably learned a lot, but you could have learned to author games using a more modern tool than Flash instead, and that would have been useful in the future, right?
Looks like that feature was added in Animate CC, which I don't use, so I haven't tried it.
Looking at the docs, it looks like something I would have at least tried. Might have saved me writing the rasterizer, though I'd still have wanted to atlas everything together into bigger atlases (the simplest packing scheme is sketched below), and make binary animation files.
One downside vs the .fla parsing approach I went with is that even if you can script away the GUI clicking, you'd still have to have Flash open while the build script runs. I quite like having an exporter that is just a CLI program you can run without Flash, though it's not a massive deal.
I'd be interested to see what exactly Animate puts into the animation.json file it generates. The docs don't seem to mention it, and I couldn't find any example ones online.
Your tool looks great, by the way — have you made any games with it?
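In case anyone wonders what "atlas everything together" involves, the simplest scheme is shelf packing - a sketch with hypothetical names, not the author's actual exporter:

    # Naive shelf packer: place sprites left-to-right in rows ("shelves"),
    # starting a new shelf when the next sprite doesn't fit the row.

    def pack_shelves(sizes, atlas_width):
        """sizes: list of (w, h). Returns {index: (x, y)} placements."""
        placements = {}
        x = y = shelf_height = 0
        # Sorting tallest-first keeps the shelves reasonably dense.
        for i, (w, h) in sorted(enumerate(sizes), key=lambda s: -s[1][1]):
            if x + w > atlas_width:        # this sprite starts a new shelf
                x, y = 0, y + shelf_height
                shelf_height = 0
            placements[i] = (x, y)
            x += w
            shelf_height = max(shelf_height, h)
        return placements

    # e.g. pack_shelves([(64, 64), (32, 16), (128, 64)], atlas_width=256)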
Some other posts come loaded with all the tiny details, are too long to read in one session, and I end up missing even the key points because that second session never happens.
I wish the Hapland series was coming to Mac, but I respect your decision to snub Apple.
Also, the idea of using text/assembly as an intermediate format of binary files is pure genius. I will probably use that occasionally for similar use cases.
> I think if you're a big important game studio you don't have to stand for that and they give you a bulk upload tool, but I'm not one of those so I looked at the HTTP calls it was making, saved my login cookie to a file and wrote my own.
No way. Fucking hell, Valve. Sort your shit out. How many thousands of hours have been wasted because of this?
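The cookie-replay trick is roughly the following. The URL and form fields here are placeholders, not Steamworks' real API - the real ones come from watching the browser's network tab, as the author describes:

    # Sketch of replaying a logged-in browser session with `requests`.
    # Endpoint and field names below are PLACEHOLDERS; copy the actual
    # ones from your browser's network inspector.
    import json
    import requests

    session = requests.Session()
    with open("cookies.json") as f:          # login cookie saved from the browser
        session.cookies.update(json.load(f))

    for ach in json.load(open("achievements.json")):
        session.post(
            "https://partner.example.com/edit_achievement",  # placeholder URL
            data={"name": ach["id"], "displayName": ach["title"]},
        )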
> Although I developed the game mostly on my Mac, during development Apple invented this thing called “Notarization” where if you run any app on a new version of MacOS, it'll make a network request to Apple to ask if the app's developer pays Apple a yearly fee. If the developer does not pay Apple a yearly fee, MacOS will pop up a dialog strongly implying the app is a virus and refuse to start it.
No way. Fucking hell, Apple. Sort your shit out. How many thousands of dollars have been wasted because of this?
This is to prevent abuse. Some games will have 1000s of achievements as a feature, because gamers like games with lots of achievements (and because Valve kinda encourages that with some of their store features), so forcing people to do it manually is a way to decrease that kind of activity somewhat.
> By default, games are limited to 100 achievements at first. Reach out to us via the "Support" option at the top of this page if you have questions about this.
You can do it with a free Apple Developer account. You only need to pay $99 if you want to distribute it through the App Store.
Real question: can this be automated with Selenium? For a long time I was using very low-level Chrome DevTools libraries. Then I switched to Selenium. It is so impressive. It actually works, without the weird 1-5% failures that you waste time chasing and then paper over with small sleeps to make things run smoothly.
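For what it's worth, the usual cure for those sleep hacks is Selenium's explicit waits - a minimal sketch with the Python bindings:

    # Explicit waits poll for a condition instead of sleeping a fixed time,
    # which is what kills those flaky 1-5% failures.
    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC

    driver = webdriver.Chrome()
    driver.get("https://example.com")

    # Blocks until the element is clickable, up to 10 seconds.
    button = WebDriverWait(driver, 10).until(
        EC.element_to_be_clickable((By.ID, "submit"))
    )
    button.click()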
They all thought it was pretty shit, but to me it was more important that some random person across the globe genuinely picked up something I wrote, played it, and then wrote up a review of it.
I showed it to my parents and they immediately started pushing me to learn programming.. and well here I am now. Thanks Flash.
It blows my mind that no one has made a tool like that in so long. It had the same quality as Excel, where any non-technical person could just look up a tutorial, right click a bit, and make something usable. Why has the pinnacle of ease of use in game creation been achieved and never replicated since?
I believe that this is due to a tragedy of the commons in the software industry. We're all so creative, with so many dreams, but spend the majority of our lives working 40 hour weeks to outcompete each other and make rent. Then when someone wins the internet lottery, they succumb to their ego and do a bunch of stuff in business to hoard even more wealth, rather than working to reform the system which keeps so many others down.
I would very much like to write a programming language, a blogging tool, a web framework, a game, a game development tool, etc etc etc. But I never will. Most of the people reading this never will either.
So I endure. I meditate, I practice stoicism, I go to the gym. But I had to let my dreams die for my own self-preservation because opportunity cost grew to consume my entire psyche. I simply can't think about all of the other things I could be doing while I have to work. And all I do is work.
There doesn't seem to be much help coming in terms of UBI or having N people split a 40 hour workweek or forming artist communes for makers. Sure, we hear the occasional stories. But it's like we're all waiting around for a billionaire to liberate us, rather than creating a scalable cooperative system to sustain all of us right now today.
What I'm saying is that until we look beyond the technical challenges and see that the problem is subjugation, we'll never get free of the lackluster tools, because we'll never have time to make better ones.
I am so sick of the competition in most tech jobs / companies to climb the stupid ladder. Everyone just wants more more more and our society reinforces that. I _REALLY_ miss working on teams that didn't have that amount of competition causing mild dysfunction.
I'll take 40 hours over that kind of life in a heartbeat, thankyouverymuch. Although I am still trying very hard to make it so that I don't have to do even that much for the rest of my life.
In general, I don't disagree that it's sad that we spend such a huge proportion of our precious, finite time on our planet working. Most work without enjoyment because that's how you put food on the table.
On the other hand, the person who cracks this nut will enjoy unprecedented fame and their name in the history books for the rest of human civilization if they figure it out. (And no, it's not going to happen no matter how many 140-char political rants there are on the bird site. Very arguably, those are in fact the opposite of progress no matter which "side" you think you are on.)
You can’t have it both ways complaining about being left behind in comp while not participating in the grind that allows those comps to go higher.
But you have a point: at an 80k requirement you could easily freelance to earn that doing work for non-technical people - helping set up Shopify stores, etc. - and not need to constantly be on a learning grind.
There are 8 billion humans; the world doesn't need another hundred thousand programming languages or web frameworks or game development platforms, or even indie games or fiction or music or film, produced every year when the whim strikes someone's fancy. Even if we could make all that, who could even consume that much content?
Being a cog in a machine at least means you're probably doing something of value to someone instead of writing another book that nobody reads.
As of now we have no better solution to coordination failure except to create the AGI that will save/doom us.
The bigger thing would have been the web framework, or the programming language. But even those aren't quite big enough.
If I'm being honest, the biggest stuff is curing disease, eliminating wealth inequality.. biblical stuff. Truly I want to be an inventor and work to solve the very hardest problems.
So it feels undignified to be fixing someone's computer when I could be helping researchers to like, cure death and stuff.
But you're right, that act of helping someone is the very essence of our humanity and shouldn't be written off as something inconsequential just because it's not glorious. There is glory to be found though, a different kind of glory in that covenant.
And I totally agree about AGI.
Edit: as far as scarcity goes, I think that economics correctly attempts to allocate resources between people with unlimited wants and needs. It just never got as far as the endgame we're in now, where the primary scarce resource is time. That's an artificial scarcity, so we might have a chance to turn things around if we optimize for that instead of material wealth.
With that said, I’m consumed with FOMO and opportunity cost every day. I’ve missed the boat on being founder-level or early designer/engineer on so many projects that have gone on to raise Series B it’s mind numbing. Also, we were paid $0 bootstrapping our stuff in the first five years, and now finally we get paid, but $60k a year.
I'm saying this as a warning to all devs and designers out there. Do NOT do this. If you can get into Google or FAANG or whatever, build wealth that way. Maybe you'll feel like a sell-out, or that you never scratched your itch or did something good for the world, but then you can also remember that you can take a year off in Bali or whatever.
There are exceptions of course.
But most of us could stop working tomorrow, and aside from the lack of income, the scarcity of resources available to us would not change.
The real problems that cause hunger/poverty in our world have more to do with logistics and politics than a deficit of resources.
Who would grow your food? Who would make the tools and fertilizer that enable the farmers to grow your food? The robotics and software that enable those tools and fertilizer to be produced? Who would organize, transport, package, preserve and process the produce so that you can access it?
Apply the same questions to healthcare, education, transport, construction, entertainment, etc.
Some jobs are superfluous and arise out of coordination failure, that's true. Lawyers, administrators, salespeople and parts of finance and the government come to mind. But "most people could stop working" is an unreasonable assertion.
>> There are exceptions of course.
There are about 2M farmers and ranchers in the US. 22M health care professionals. 4M teachers. 2M truckers, 135k rail workers, 40k miners, etc.
Even at our highest employment levels, under half of the people in the US are employed. The rest just aren't counted as unemployed because they're either children, in school, retired, or simply not seeking employment.
Yes, I do believe that most jobs are superfluous - that many more people could not have to work without a material impact to the availability of goods.
Also the guy that fills the milk bottles. My brother used to work in a milk bottling plant, but he was not filling the bottles by hand. A machine filled the bottles, someone oversaw the machine, and my brother ran the chemical and biological tests to ensure the milk was safe to drink. And there were some additional people they called in case the machine broke (or to build a new machine).
You are forgetting to count a lot of people.
And sure, not every job currently held today is essential.
You can cut out the entire “entertainment” or whatever industry you think isn’t essential, but we still need to convince the people working the essential jobs to do it while everyone else free-rides.
> Yes, I do believe that most jobs are superfluous
I think you just don’t understand the purpose of many jobs. If the accountant for the local grocery store didn’t exist, what do you think would happen?
If your society focuses on supporting a population's needs, not much.
Stuck with this "work virtue" mindset, society would find a way to make it so half the people were paid to dig holes and the other half paid to fill them in, just so we could go on with this fixation on the need to work for a living.
From what I've seen, most programmers are indeed spending their time building experiments that have the primary goal of making money for the person that pays them - if they create true value that's more of a lucky side effect. And lots of those experiments fail.
Who will maintain buildings, infrastructure, essential services, government, etc.?
I think we could work towards fewer hours, and more people who don't need to work, over the long term though.
And so, if a majority of the less-than-half of the US population that works is holding unnecessary jobs (i.e. jobs only valuable to our economy)... how much of what we do is really valuable to us as a species?
Which is a roundabout way to say that scarcity is a mostly if not completely solved problem; working to survive is largely no longer necessary, just customary.
Likewise, you're also assuming that jobs that meet needs are more essential than jobs that meet wants, but the point of post-scarcity is that not only our needs are met, but our wants are as well. Picard can make himself a cup of earl grey whenever he wants, as opposed to sustaining himself with the recommended nutri-pack. One of my most fulfilling jobs was working for a t-shirt printing company - in theory, nothing we did was at all essential, but it was clearly very important to a lot of people's lives. Some of the most fascinating orders were memorial t-shirts being printed for people's funerals. Not essential at all, but I would argue very valuable to us as a species.
I doubt that's an example of a job/project that isn't just pure administrative burden. Even your example of a t-shirt printing company is more than just pure administrative burden - people need t-shirts, and people like their t-shirts to have art of some form. While art may be a luxury, it is also pleasant and required to some extent.
However, there are some roles that are pure administrative burden, and I tend to think the GP is correct that these roles are a lot more prevalent than we want to admit. I think the 80/20 rule could be applicable here - only 20% of the population produces 80% of the wealth (read: essential or luxury items) while the other 80% are stumbling over themselves in bureaucracy to maybe produce the remaining 20%.
Only in Silicon Valley, and that's probably a big part of the problem. In some of the highest paying, most creative occupations on the planet people are drooling at the feet of billionaires to save them while they slave away M-F.
Most people throughout history didn't look at the rich with anything but contempt. In Silicon Valley they're held up as geniuses on every subject. Look at how a subsection of the industry treated Elon Musk taking the axe to Twitter (politics aside).
Citation needed. I don't think wealth is the main cause of contempt.
It doesn't even have to come down to greed though.
If some meme stock speculation 1000x'd, my family could live off the gains in perpetuity and I could spend all my time working in a soup kitchen, doing good.
But if I instead figured out how to turn that into even more money, I could start a new soup kitchen and help even more people, doing even more good.
But if I instead figured out how to turn that into even more money, ...
It's effective altruism if they use the money in a smart way, that reasonably maximizes the good it does for others, instead of doing something stupid or just extremely suboptimal.
Ofc, there's an evil undercurrent to the "any non-technical person can use it". We ended up having to support a tool that some "business analyst" had written. My eyes bled.
On one hand, he was ignorant of everything we thought was mandatory for development: version control, code style, code structure. We, young C++/Python programmers, despised him.
But nothing we had (Qt, 2007-2008 web tech, etc.) made it possible to spit out a business UI in an hour.
To be honest, I still don't see anything comparable.
I think it's just around the corner for those types of applications to come back. And surprisingly, they will come hand in hand with note-taking outliners, not your run-of-the-mill no-code or low-code tools.
See for example Obsidian canvas, released this week. I see it as functionally similar to Visual Basic GUI editor, except simpler and easier to edit; and its layout is stored as JSON in a .canvas file. The open-source Logseq is adding similar visual layout templates together with data-recovery primitives.
They just need to add the core interactive data widgets (text field, text area, buttons) and expose the cards to a programming language in an end-user-accessible way, and you'll have the power of a visual language, except web-integrated and with a really simple way to add data (the outliner editor).
Thus the abundance of those browser wrappers such as Electron. The HTML/JS stack is far from trivial, but it really is truly portable.
So the new visualbasic has to be different, and I don't think newer ms tools cover this niche.
I can still get there with modern tools, but the defaults / conventions in VB are simply superior to anything I’ve worked with since.
Yet in this community of all the people who could make it happen, we can't make it happen! What are we missing? Time? Money? Co-ordination?
But imagine doing it in Tkinter and Python - probably the fewest steps to get a basic UI - and there is still so much to manage.
In vb you double clicked a button in the designer and got a place to type code. All the stuff about event loops, and not blocking the thread was dealt with for you. In Python if you try the same thing the app freezes continuously. Then you want to do simple everyday things like a date picker, or progress bar, and find you need to implement it yourself????
Then having built your app in Tkinter and Python, you want to share it with some colleagues...so where is the 'compile button'?
It is easier to implement a machine learning model in Python than it is to make it into a standalone app with a file importer and a 'go' button!
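To make the freezing point concrete: in Tkinter you have to push the work onto a thread yourself and poll from the event loop - exactly the boilerplate VB hid from you. A minimal sketch:

    # Why naive Tkinter freezes: long work on the main thread blocks the
    # event loop. The workaround: a worker thread plus polling via after().
    import threading
    import time
    import tkinter as tk
    from tkinter import ttk

    root = tk.Tk()
    bar = ttk.Progressbar(root, mode="indeterminate")
    bar.pack(padx=20, pady=20)
    done = threading.Event()

    def long_job():
        time.sleep(5)                # stand-in for the real work
        done.set()

    def poll():
        if done.is_set():
            bar.stop()               # finished; stop animating
        else:
            root.after(100, poll)    # check again in 100 ms; UI stays live

    def start():
        threading.Thread(target=long_job, daemon=True).start()
        bar.start(10)                # animate while the worker runs
        poll()

    tk.Button(root, text="Go", command=start).pack(pady=(0, 20))
    root.mainloop()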
The way translation is implemented today is horrifying.
Likewise for high-DPI. Why are they still using some linear scaling of icons instead of a super-resolution GAN and some caching?
Why can't an OS translate a single-language app on the fly using state-of-the-art translation neural nets and using visual attention to capture context? It's almost 2023 ffs, and people are still using some gettext nonsense?
I didn't get the hate. Maybe software engineering folks don't want coding to be fun and productive.
Rebol, Red, tcl/tk are good and easy programming languages, but kinda niche and obscure. I have no idea why computing is getting more complicated instead of easier.
But yes, it looks like the web was a trendsetter that pushed things into becoming harder without any good reason. Or maybe the web was just following the trend of the toolmakers all going out of business in the early 00s (thanks a lot, Microsoft).
FWIW it never went away¹ and targets standards-based runtimes these days. I'm sure there are many kids every month having the same kind of epiphanies with it!
That being said, because Adobe Animate is marketed as a tool for professionals rather than kids/enthusiasts, orders of magnitude more kids are having the kind of experience you had with Minecraft and Roblox instead.
Absolutely fantastic, yes! Roblox does allow you to make a game from scratch — even though the primitives are generally less primitive and the constraints much different — but my point is just that if you're wondering where the spirit of dj_mc_merlin's experiences went, two places where it can be found in spades are in the Roblox and Minecraft communities. (Source: Am a parent experiencing this vicariously via my kids.)
...with the caveat that once Roblox (the company) folds, all the games will be gone.
...and that since Roblox changes the code all the time, you have no idea whether your game will look the same in a year. Or even run.
So yes, while I understand what you mean by the spiritual successor.. Roblox takes everything that made Flash bad (i.e. dependence on a clunky binary that one day may not be supported), and makes it worse.
Source: worked in Roblox when a new shiny material pack was rolled out, which replaced existing materials in existing games, messing up the look of games where materials were used creatively.
True, and I guess Flash is a useful lesson in that sense, although no platforms I jammed with as a kid are around either. It makes me wonder if Godot apps written today will work on the web 20 years from now.
A lot of games today use an HTML-powered UI library from Coherent Labs which can still be authored using Adobe Animate (i.e. what the Flash editor turned into): https://coherent-labs.com/
PS At least that was the case 9-10 years ago when I was involved.
Anyways, the client ui team definitely used Actionscript, which I had to read from time to time as a server-side dev.
I just tried Adobe Animate recently and nothing worked out of the box or like flash used to.
I basically had to tinker until even basic animations somewhat worked.
And I even have quite some experience with EaselJS, the target framework, but that didn't help much.
The power of flash was the ease of use - that is gone.
I must admit, this isn't the most inspiring landing page for someone like me, who could be your exact target demographic (long-time Illy user and someone who likes to tinker with new web tools). I don't really 'get it' and (this is entirely personal) the design looks quite dated.
I'll bookmark it, share it and keep an eye on it though - Good luck!
The newer one written in C++ was first called "pocket edition" and targeted mobile. It's now called "Bedrock edition" and is also the version that has RTX support.
It'll be hard to move the modding community over to the Bedrock edition. I personally don't know how limited modding is in the latter.
Do you know if there's been any progress on preserving the older Macromedia games? I remember those starting to get flaky even in the Flash heyday.
I think there's a tendency to look back at these games as trash, but it was a pretty wildly experimental time. Indie games as we know them now weren't really a thing yet.
I am surprised Adobe did not manage to get Flash / Adobe Animate adapted to html5/js without people jumping ship.
I've made a fully animated pixel platformer tech demo type thing in it, it's somewhat capable.
And to be honest Unity is really easy at the tutorial level. Shipping a real game is the hard part.
For anyone not familiar with Flashpoint, it's a project to preserve old Flash games and animations and keep them playable on modern platforms. It's open source and includes a huge library (including, it looks like, Hapland 1-3).
Edit: Reading a bit further down the article, it looks like they were able to make some big improvements by building their own engine like supporting wide screen and higher FPS so that sort of answers my question!
Gotta be careful to avoid the porn games though. Flashpoint archived some.
Ruffle is the flash emulator
Or maybe it's just me reminiscing
Nothing really exists that beats Flash, partially because the switch to mobile killed Flash's momentum. It also led to the switch in Internet content from Flash animations/toons/games to Youtube/Patreon/Twitch videos. It was easier to make money with weekly videos than trying to get Adsense bucks by making a game every few months.
I remember back in the early 00s playing with the PHP Flash module. It was a really interesting environment that had a lot of promise. But, alas, it was doomed.
I was always surprised that it didn't get more traction. Though Typescript is leaps and bounds beyond where AS3 ever was
1. Clicked on your link
2. Moused over "Construct 3" in the header, clicked "Showcase."
3. Clicked on a game in the showcase at random ("Bunnicula in Rescuing Harold").
4. Page loaded with a graphic & a play button.
5. I tried clicking play & nothing happened.
6. Shortly after, Brave (Chrome) showed its "Page unresponsive" dialog.
There's not because the DOM isn't suited for complex animations. People manage to animate SVGs (see Greensock etc.), but it's a pain in the nether regions
Halfway through, the dev points out that the original devs' XML format is not efficient, but: "Hey, I'm not complaining, it makes my job easier."
Well sounds like the original developers of Flash made a good decision. If it makes it easier to parse the content at a later stage, I'm willing to call it a win!
1. Using text formats embedded in XML -- he took advantage of this when reverse engineering
2. Generating text ASM -- he said this aids debuggability, and allows using existing ASM tools that he didn't have to write
3. Generating C++ -- taking advantage of the type system, as mentioned, and a huge array of other tools (profilers and debuggers)
It's data-centric rather than code-centric. It's an interoperable and transparent architecture, not a monolithic one. (Which is not surprising, because Flash itself was made for the web.)
Related: The Internet Was Designed with a Narrow Waist https://www.oilshell.org/blog/2022/02/diagrams.html
A Sketch of the Biggest Idea in Software Architecture https://www.oilshell.org/blog/2022/03/backlog-arch.html
This was one of my favourite parts of the article. And the real payoff comes right after it: there's a screenshot with like thirty icons, and they're different enough that you can tell one from the other.
Ever since they made this move to every icon being a minimal shape with the 4 Google colors, I haven't been able to quickly differentiate between app icons. I have to read the labels half the time now cause they're so non-indicative of the app's purpose.
This sounds like a great title for a really good book.
Funnily enough, The Lost Art Of UI Design sounds like a potentially very mediocre book to me. One of those stretched-out "self-teaching" design guides that listicles will recommend you. It picks a couple of very specific rules of thumb, gives way too much importance to them, and somehow manages to water them down to 250 pages. Probably written either by someone who's a skilled writer but barely knows enough about the industry, or an experienced specialist in the field who unfortunately can't write to save their life.
Hell, I don't think they're original to Flash to begin with, more like borrowed from Photoshop when Adobe bought Macromedia (this is purely my speculation; but Photoshop is much older).
"oooh let's not put too many buttons to scare users away", "they got 22 inch screens, let's just waste space for no good reason, it looks modern and airy"
The crucial difference between then and now is the amount of whitespace.
Usage can't be it, because I use a ton of software that I despise and get depressed by, but only because I get paid to do so, or the app is so niche there isn't a viable replacement.
You'd have to actually talk to your users. But that requires genuine human investment, so it doesn't scale. It doesn't come in the form of a third-party SaaS with a pretty dashboard, generous free plan, and integration via single <script> tag.
In short: they could measure this, they just don't bother to.
I seriously think that's what's going on.
If you cared about what you built, then with clever hacks and bitwise performance tricks the Flash runtime could run your code efficiently. I remember developing an app which used Box2D, camera-based gesture control, sound effects and background music, all running simultaneously at 60 FPS. In other projects we used software-based 3D (à la Papervision 3D); Adobe dropped the ball, and Molehill/GPU-accelerated 2D/3D arrived too late. Perhaps it's not common knowledge, but we could develop true cross-platform apps, compiling for different targets (SWF, IPA for iOS and .app/.exe).
AS3 was a good language, and definitely reminds me of TS. Here’s a piece of code from 15 years ago: https://github.com/PureMVC/puremvc-as3-standard-framework/bl...
That letter from Steve Jobs destroyed it all, many talented developers left the Flash world at that time. It was a bit depressing to see all the (unjustified) mainstream hate for the Flash platform that started to appear at that time, which felt bad as there were many of us that put a lot of time, care and effort in creating amazing stuff with it. Thanks Flash!
You might not have seen it but the hate for Flash was well-established by then, solely due to Adobe’s gross mismanagement of their platform. Flash was what crashed and took out your entire browser session. Flash was what made drive-by compromises so easy and common. Flash was what popped up annoying “you should manually update” prompts because they sat out the automatic update trend for nearly two decades. Flash was also what made your entire computer visibly slow down because it was so inefficient - they weren’t an option on phones due to the UI differences but performance and memory alone would have ruled them out in any case.
People made cool things despite that but I attribute almost all of the blame for Flash’s demise to Adobe. They could have cut their executive bonus pool by 10% to hire a QA team and they’d have had a much better chance at long term success. Instead, they let the platform stagnate to the point where one of the early Chrome selling points was that when Flash crashed you’d only lose that page!
1. I bought a Flash license once. I naively filed bug reports for reproducible crashes in the player and IDE (the debugger was especially unstable), as well as compatibility and correctness issues with their various libraries. Each time, I got no reply before the next major release and then I got a generic “please pay $1,500 for the upgrade and let us know if we fixed it” message.
The fact that I now get to read and understand articles by this legend, and work in the same industry vertical as him... what a privilege!
Thanks for your work. Hugely inspirational to my career.
But the idea of having a single file that could bundle code, audio and graphics, that was dead easy to build and could run independent of specific browsers and operating systems - that is something magical. It is harder to build that kind of content today, and near impossible to deploy it at that scale, than it was 20 years ago.
Theoretically you could run a Flash file on a Linux system via Konqueror browser and it would be identical to that on Internet Explorer on Windows 98. This is something we have lost.
It is a shame that Adobe/Macromedia treated it like absolute trash. A free/open solution that had the same good properties would have been brilliant but it never happened. Instead we have the whole HTML5+ stack. I mean it is cool and has done some amazing things for the web, but it is also a massive pain to manage. One browser update and boom, something has broken - time to get digging on what happened! Now it works on Chrome but not Firefox!? Oh dear.
I don't miss Flash itself, I mean to be charitable, it was a mediocre product at best. But that vision implemented in a sensible fashion could have been wonderful. Could have allowed for the decentralized web to continue on a bit longer than it has.
Relative to what though? There was nothing else with its ubiquity, ease of use and deployment, and breadth of functionality.
It pioneered and enabled so many things that we take for granted on the web today.
Personally, as someone who used it extensively, I don't feel it was particularly buggy, especially relative to the alternatives or developing in the browser. It could have performance issues, if developers did bad things or pushed it too hard, but you could say that for just about anything.
As far as security, it was the single most ubiquitous runtime on the web, and thus was a prime target for hackers. Yes, it had security issues (although every web runtime did), but it's not like the rest of the web / browsers were particularly secure themselves.
fyi, I worked for Macromedia and work for Adobe now, but these are my personal opinions.
Maybe I am just remembering it differently, as this was more than a decade ago. But it seemed like almost every week, heck almost every day, another zero day would be discovered. Yes, this was the peak time of zero-day exploits, as every system was being hammered from all sides, but the Flash name came up far more often than anything else I recall, except for maybe general Windows XP/7 issues.
As for performance, that was a major issue personally. I recall a time when I had items nested a few layers deep so that I could have a character with movable joints, and the system was buckling under that workload. Going from fully rigged models running smooth as butter in 3DS Max to the same system crumbling under some basic vector graphics left a bad taste for me. It just always felt like it was dragging its feet compared with what these machines could achieve.
I don't recall any specific bugs, only the feeling of them. Sorry I am so light on specifics. One minor bug I do recall: as you rotated objects, there must have been a small rounding issue that meant items would slowly shrink as you rotated them multiple times while going about your process. It would take hundreds of such movements before it became noticeable; it was a funny little thing, but it would lead to some projects becoming a bit frustrating.
I say this as someone who absolutely loved Flash. I wish it had kept on going regardless of those issues. It just felt like it needed a Windows Vista to Windows 7 moment, where it was taken to task and the team got behind it to refine the project into something much leaner that still had the same functionality. Instead it felt like it was treated as that thing that would get barely any attention until it died from atrophy. That mobile was never properly addressed is probably the thing that ended up killing it so quickly.
As someone else said here, Flash felt like a vision of the future. Where computers were tools of creation made easy not just consumption.
It was a pioneer in so many things and I wish it was still a part of the web. It made interactive media distribution so dang easy! Build it in Flash, add a single line of HTML and BOOM - multi-platform audio visual experience that could scale like a champ!
Yeah, video kind of sucked; early YouTube performance is a good example, as it would bring my G4 Mini to its knees! But for everything else it was such a visionary idea that we have lost.
On a tangent.
This is something that I feel a lot of modern technology has forgotten. That vision of not just new tech but tech that is enabling even to those that have no interest in the base technology. Flash had that vision, not just of creation but of the ease with which it could be spread. HyperCard had that vision. Heck, BASIC had that vision. As Steve Jobs always pushed in the early 80s, the computer is a bicycle for the mind. Nowadays at times it can feel like the sedation of the mind.
Every time I see here on HN another link about Ruby, GCC, Clang, some JS framework, Tensor... something - I get it, in those fields it is all very cool and useful. But it is also very much for those that work in the weeds. Great for those that do, but it also feels so lacking in that "Wow!" factor that I think brought a lot of us into this field. I hope we rediscover that one day, and soon.
One of the tradeoffs of the time was that they had to work very hard to both crunch the binary size down for Flash itself, as well as Flash files, because people everywhere were extremely bandwidth-limited - when dialup was the most common method for connecting to the internet and not even 56k was necessarily ubiquitous at that. And of course, compiler technology hadn't come as far as it has nowadays.
The weight of all the decisions that made Flash able to succeed eventually worked against it, but for its time, it was an extremely good implementation that made the web as a whole more interactive.
I'm happy it's been superseded, but I think that it was the right product for its time.
There were some projects when I was running on a G3 iMac where I would drop the render screen down really small to see how it would perform on a more powerful machine. It felt like if they even managed to get decent GPU acceleration possible then they could have solved a lot of these issues.
The technical legacy of Flash made it a poor fit for touch screens. Apps filled that void with astounding speed.
This part is just a vague opinion I am riffing on - with mass-available video, an ocean of video content may have changed tastes enough that the demand for interactive content diminished somewhat. Social media also killed off the dedicated website for a lot of places that used this technology, and thus the demand for this stuff.
Once Google via Chrome closed the Flash hole, I do wonder if they will ever let anyone else try and get in on that space? Time will tell.
Unity kind of took its place in the game space.
Web pages - nothing. They're all using some boring non visual JS library.
(note: I'm a Ruffle dev)
But sadly, I haven't been able to make Crystal Saga work using Ruffle
I understand contributing to OSS is a pain (I often end up with half-implemented features in my own branch and never manage to merge them upstream), but he could have saved himself some trouble.
The tool that the author made reads Flash fla files and exports data from that.
Swf files contain binary data. The ActionScript code has been compiled into bytecode. The vector data is probably also represented in binary format.
Fla files, as pointed out in the OP blog post, contain XML data.
It sounds like OP is doing something quite different from what Ruffle does.
Ruffle tries to be a player for swf files.
OP wanted to export data from Fla XML to other formats, so that he could build executable games.
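(Even the SWF container announces its binary-ness in the first eight bytes - a quick sketch of reading the header:)

    # Peek at a .swf header: 3-byte signature, version byte, then the
    # uncompressed file length as a little-endian u32. Everything after
    # byte 8 may be zlib- or LZMA-compressed, depending on the signature.
    import struct

    with open("game.swf", "rb") as f:
        sig = f.read(3)             # b"FWS" raw, b"CWS" zlib, b"ZWS" LZMA
        version = f.read(1)[0]
        length = struct.unpack("<I", f.read(4))[0]

    print(sig, version, length)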
Ruffle still doesn't support ActionScript 3.0; I suspect the task is not that straightforward.