From their post mortem https://medium.com/@gabriel_20625/technical-lessons-from-bui...
> Performance is never a justification for anything if you haven’t measured it
This just isn't true. Anyone with a reasonable amount of experience is often able to look ahead - well before any code is written - and see how performance could be better if road A is taken instead of road B.
It's madness in that case to blindly choose a road without even thinking about performance - even if it is unmeasurable and in the future.
There's more in the same vein:
> Go back to step 2 and make sure you completely ignore performance, because you probably haven’t.
Now I have to imagine that these guys don't genuinely believe what they wrote - but in that case, ditch the words "never" and "completely ignore" and go for something more reasonable.
(Of course that would make the phrasing less dramatic and click-baity, which is maybe what it takes to get eyeballs these days)
I would love to get all these "Performance First! It's so easy" engineers in a room for a week. Hell, give 'em three weeks and they still wouldn't have a single line of code written. But they would still be arguing about the best way to deal with their upcoming 1 million hits per second.
Rule #1 of any business: Sales cures all. You know what's really hard to sell? A product that hasn't even made it out of the damn door yet.
"Steak before sizzle"
Get something done and working, even if performance sucks. Then do your performance logging and optimizations. Don't attempt to pre-optimize, because you may make a good situation worse.
That doesn't preclude selecting correct algorithms and such beforehand; ie - don't be stupid about it. But don't try to be clever before you know you need it, either.
Second thing I always think should be necessary for any software project:
Get your security design working first.
Don't try to patch in security after the fact; if this isn't at the top of your design and planning, it should be (assuming the project needs it or can be foreseen to need it). Too often I have witnessed the opposite - and it almost never works without major refactoring (or scrapping and starting over).
I like it!
> That doesn't preclude selecting correct algorithms and such beforehand; ie - don't be stupid about it. But don't try to be clever before you know you need it, either.
Exactly. "Avoid premature optimisation" doesn't mean "make it as slow as possible", it just means "use std::vector until it becomes an issue instead of hand-coding some arcane data structure in assembler".
Security, Durability, Availability.
From my perspective, that's the order of importance too, but any one without the other two and you're not going to gain or retain customers.
On your second point ... I'm not sure. Sony is still in business. Equifax and co are still printing money. OPM is still around. Security and trust don't seem so binary in practice.
The above applies to accessibility too, by the same reasoning.
Often if "performance sucks" then it isn't done and isn't working. Performance is a client requirement like any other.
Yet that is what I sometimes see when these performance discussions take place.
You know, sales is everything, and I agree that "there are two types of software: software that people complain about and software that nobody talks about", but some of us take some pride in doing stuff with decent performance rather than pure crap...
Myst was at the cutting edge of technology at the time. They had to write some weird custom extensions in order to jam color into HyperCard at all. (Myst was initially written in HyperCard!) One could argue that designing the game around static images that could be rendered offline was itself an optimization. It gave the illusion of an immersive 3D space within the very harsh limitations of early Macs.
Also, the couple of QuickTime videos used in the game were very carefully integrated into a surrounding static image to give the illusion of an entire live scene while only animating a small rectangle of it because that's all most computers could handle at the time.
Myst is a fantastic example of making very thoughtful performance decisions given the constraints at the time.
That they managed to pull it off for home computers in 1993 is nothing short of a performance focused mindset which led them to static images and QuickTime overlays. As soon as they had the ability, they released realMyst which was a realtime 3D version of the original, followed by realMyst Masterpiece in the last 5-10 years.
For that matter, Minecraft may be poorly optimized (it is) but without the effort spent on the chunking system that renders 16x16x(128/256) blocks as a single mesh, it wouldn't run at all. There's plenty of low hanging fruit (which may be more difficult to retrofit into the existing engine, or in the JVM itself) that other games have found and utilized, but without that initial performance optimization Minecraft wouldn't run at all.
Take as much pride as you want. Just remember to, you know, ship something from time to time.
You're not making games, you're selling entertainment. Focus on the latter first, and optimize accordingly.
I see this time and time again - beautifully architected and performant platforms that completely bomb because no one actually wants them.
I agree with that. And yet, Minecraft is possibly the worst example:
- It's an exception to the general wisdom that games do have to worry about performance in order to deliver fun. Just because Minecraft became popular despite its abysmal performance doesn't mean games can get away with that in general.
- Notch never expected Minecraft to become that popular. In a sense, Minecraft was an accident.
- Minecraft's performance was so bad, a lot of people had to install a mod called OptiFine just to be able to play it on their hardware.
That being said, I agree that performance might actually be integral to your product. Quake 3, for example, probably would have bombed if everyone was playing sub-10fps.
Either way, the priority is "figure out who wants your product and why." That helps determine when and where to focus your optimization efforts.
That's not true in the way that I think you're using video games as an example. Video games must be fun, being fun requires being performant enough, and for graphically demanding games that requires lots of optimizing. But there are tons of games that are not graphically demanding.
Game studios were one of the first to care about UX.
During the 80's and early 90's many kids got into games by starting as group testers after school.
It wasn't always the same group of kids.
Writing casual games with AAA tech is needless overengineering.
I don’t know about this. I generally end up building the things that have already been sold.
In other words—don’t go to extremes in any direction.
It sounds very much like this company took an extreme perf-first approach from the outset, and is now advocating a reactionary, overcompensated opposite extreme in their postmortem.
Except that's not the argument in the comment you're reacting to. Here's what the comment actually said:
> Anyone with a reasonable amount of experience is often able to look ahead - well before any code is written - and see how performance could be better if road A is taken instead of road B.
In other words, some optimizations come with no significant cost. As a banal, perhaps exaggerated example, if you're writing code that needs to store data in a collection and you know that it's going to perform random access to that data on a regular basis, you're not likely to pick a linked list over some kind of array.
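To make that banal example concrete, here's a quick timing sketch (my own illustration, not from the comment), using Python's array-backed `list` versus `collections.deque`, whose block-linked structure makes it a rough stand-in for a linked list:

```python
# Random access into an array-backed list vs. a (block-)linked structure.
import timeit
from collections import deque

n = 100_000
arr = list(range(n))   # contiguous array: O(1) indexing
dq = deque(range(n))   # linked blocks: O(n) indexing, walks from the nearest end

# Time 1,000 accesses to the middle element of each collection.
arr_time = timeit.timeit(lambda: arr[n // 2], number=1_000)
dq_time = timeit.timeit(lambda: dq[n // 2], number=1_000)

print(f"list:  {arr_time:.5f}s")
print(f"deque: {dq_time:.5f}s")  # typically orders of magnitude slower
```

No measurement was needed to predict which one wins here; that's the kind of "road A vs road B" call experience lets you make up front.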
What @abraae is criticizing is the practice of asserting "best practices" that are likely to be taken out of context, widely propagated and misinterpreted.
Knuth's famous assertion "premature optimization is the root of all evil" is an excellent example. People love quoting it, but few seem to take into account the context, to the point that it has been largely forgotten. Here's the quote in its context:
Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%.
To be clear, it's not Knuth's fault that people are taking his very rational and sensible advice and reducing it to "yeah, you should totally disregard performance when first writing your code". This is not about pointing fingers and assigning blame, it's about being responsible when offering advice. Knuth's advice was reasonable and nuanced, and it still got taken out of context and reduced to something different and arguably harmful. Now look at Gabriel's advice, which is arguably a lot easier to take out of its context and misinterpret, and imagine the consequences it could have.
That is what @abraae is reacting to.
EDIT: There's another thing that really, really bothers me about your reply:
> Rule #1 of any business: Sales cures all.
This, right here, is what I personally feel is wrong with software nowadays. It's how we end up with poorly written, bloated, bug-ridden, unstable, unreliable, insecure crap that we end up using anyway, because there's no better alternative.
See https://medium.com/@jaredpochtar/on-performance-and-software... and the full section on https://medium.com/@gabriel_20625/technical-lessons-from-bui...
> Most of the code we wrote was fast enough (<2ms) that making it faster wouldn’t be noticeable, so it would definitely be a waste of time to optimize. Making code that’s 0.1% of your runtime 100x faster only makes your latency <1% lower.
I read the post in the second link before writing my original reply, because I don't like jumping into a discussion without making sure I understand what's going on ;)
I get that your intention was the same as Knuth's. And yes, I agree that @abraae took your advice out of context. That was, I believe, his point and it's certainly mine.
Specifically, when you look at that numbered list describing your code-writing practices, numbers 1 and 3 stand out as hyperbolic. People *will* take that stuff out of context and pass it around. Hell, people are already having a hard time staying reasonable in this very discussion thread.
I can't speak for anyone else, but don't take my comments personally. You guys made your own product and ran with it, which is more than I can say for myself, and I respect you for that. I also agree with what you're trying to say. However, I stand by my criticism of the hyperbole, because I've seen people who take that stuff too literally and proceed to write crap.
While premature optimization is definitely a problem, in my experience not handling performance correctly before the app is in production usually results in having to fix logic bugs, refactor for performance, and add new features users are clamouring for, all at the same time.
We don't get many chances with users: you can't tell them you'll eventually fix these issues for them, you will simply lose them.
As with all things, there has to be a balance between the two extremes.
It seems to be within the set of concepts and suggestions that cannot fully sink in and be truly appreciated until some degree of failure or experience relating to them has occurred.
Yeah, they read it. They get it. But they still don't get it, if you know what I mean.
But wow - this one sure does take on some laser-etched permanence once it does get through.
A "justification" implies that there is a compromise being made in the name of performance. It's very reasonable to choose, in the first iteration of a program, to choose to disregard performance concerns in favor of readability, which is exactly the context of how they framed it.
When you make the big initial decisions based on readability, that informs all the rest of your choices in working on that piece of code. If you skip a few steps ahead because e.g. "we're making some extra database calls here, so we should grab all the data we need the first time instead", you'll find it much more difficult to optimize your way back to the most readable code possible.
Is it the only way to write software? Of course not. And it may make more sense for the systems and frameworks they use than for yours. But I for one admire their willingness to have a point of view and a principle when it comes to writing code. I don't think it's madness. And I don't think you're wrong either. I just don't think you would have been a good fit for their team. By putting their philosophy out there, they help make that clear. And likewise! I don't begrudge your philosophy, and I think it's often correct. But please don't accuse others of drama and clickbait for describing strong opinions.
They are saying that if you claim "this piece needs to be more performant", you need to measure it first, before choosing what to make better.
You are saying "with an idea in mind, you can optimize before you start and it will have better performance", but the prevailing business logic is: do not optimize until you know it's "the right thing".
AKA premature optimization is bad.
Did I misinterpret what you are saying?
As such there are very few black and white rules you can follow and succeed.
The correct approach is somewhere to the left of "premature optimization" and somewhere to the right of "don't even think about performance until you can measure it".
More experienced and talented people will be more successful at picking that sweet spot.
Telling a dev that the correct approach is to pay absolutely no attention to performance before it can be measured is ... crazy IMO. I would expect them to have at least thought about it, and have some opinions, just as they should not have put excessive effort into prematurely optimizing things they don't know are going to be a problem.
My point is - its grey, use your brains, don't buy into absolute rules.
A lot of assumptions there, and the real world is almost never that clear-cut.
The general advice of "don't optimize until you've measured" is still almost always correct.
The issue of course being that when designing code, various design choices can have significant performance implications, and can be very hard to change once implemented and in production.
> we were too focused on the technical side
Yes. Oh boy we were.
> should probably pick the most boring stack
Agreed. I'd worked on the Coffeescript compiler so it was "boring" for me, but you should pick whatever's "boring" for you— likely not Coffeescript
We don’t think any lesson here should be taken as an “always true” type thing.
"Hi, nice to see you guys here. This is my husband. He's a policeman"
"I would like to talk to my lawyer"
Because we know the world isn't black and white we're able to interpret these statements. It's human speech. Lots of the details are left out because conciseness is gold.
But what you mean is that you are able to interpret these statements.
Plenty of others, maybe less experienced, are not.
Next thing you know, they are spreading the word - "hey, did you know there's a new development methodology? The idea is that you completely ignore performance until after you've written some code. Yeah, you heard me - completely ignore! It sounds crazy, but that's how it works!"
I don't want you to think we didn't care about performance. We did— a lot. We just think most people do performance optimization badly.
There's a lot of superstition on the internet about how to structure your code so it's fast. In our experience, following this advice usually does not make your code fast. Most people have bad intuition about performance.
To counter this, you have to constantly measure the performance of your code.
There's 2 parts to this idea.
1. Most things are fast enough that optimizing them is not worth making the code less readable, and is a waste of time.
2. Measuring, rewriting, and repeating is the right way to do performance anyway, full stop. There's lots of superstition around performance, which is absurd because it's extraordinarily measurable.
For point 1:
We goalled on a clear and flexible style, which helped us move incredibly fast with very few people. See https://norvig.com/spell-correct.html for an example of a golden piece of code in this style. Rewriting for performance moves us away from that goal.
Most of the code we wrote was fast enough (<2ms) that making it faster wouldn't be noticeable, so it would definitely be a waste of time to optimize. Making code that's 0.1% of your runtime 100x faster only makes your latency <1% lower.
It's not because we're brilliant coders: we just realized that most of the code we wrote ran on arrays with few enough elements that even O(n^2) algos could run in <1ms. Some of the code we wrote ran on a server, where network was the dominant factor, and we could always just scale up our servers with money. Some of the code ran on click, where it could be as slow as 5 seconds without the user really caring.
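The "0.1% of runtime, 100x faster" arithmetic above is just Amdahl's law. A minimal sketch (my own numbers, not from the post):

```python
# Amdahl's-law arithmetic behind the "0.1% of runtime, 100x faster" claim.
def overall_speedup(fraction: float, local_speedup: float) -> float:
    """Total speedup when `fraction` of runtime is made `local_speedup`x faster."""
    return 1 / ((1 - fraction) + fraction / local_speedup)

# 0.1% of runtime made 100x faster barely moves overall latency.
s = overall_speedup(0.001, 100)
print(f"overall speedup:   {s:.5f}x")                 # ~1.00099x
print(f"latency reduction: {(1 - 1 / s) * 100:.3f}%")  # ~0.099%, well under 1%
```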
For point 2:
There's no such thing as “fast” code and “slow” code— you can put in the work to optimize any piece of code, and make it incrementally faster.
Because you can put in work to optimize any particular code, we either budgeted hours of engineering time for performance, or set performance goals. In either case, you want to spend your time optimizing the code that's the slowest, so you get the most “bang for your buck.”
Most people have terrible intuitions about performance. We certainly did— if we'd prematurely optimized the things we'd thought were going to be slow, we'd have wasted a ton of time.
I felt there were lots of superstitions about performance that were just wrong. For example, considering big-O time complexity was often wrong for us, because our n was usually pretty low, so we were dominated by constant factors.
Performance is incredibly measurable. You should always measure, rewrite, and repeat until you hit your perf budget. Otherwise you're just shooting in the dark.
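As a toy illustration of both points (my own example, not from the post): at small n, constant factors can make an O(n^2) routine competitive with its asymptotically "better" rival, and the only way to know is to measure.

```python
# Measure instead of guessing: duplicate-check two ways on a small input.
import timeit

def has_dup_quadratic(xs):
    # O(n^2): compare every pair.
    return any(x == y for i, x in enumerate(xs) for y in xs[i + 1:])

def has_dup_set(xs):
    # O(n): build a set and compare sizes.
    return len(set(xs)) != len(xs)

xs = list(range(8))  # small n, as in the parent comment

t_quad = timeit.timeit(lambda: has_dup_quadratic(xs), number=10_000)
t_set = timeit.timeit(lambda: has_dup_set(xs), number=10_000)
# Which wins at n=8 depends on your machine and interpreter; run it and see.
print(f"O(n^2): {t_quad * 1e6 / 10_000:.2f}us/call")
print(f"set:    {t_set * 1e6 / 10_000:.2f}us/call")
```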
Because we goalled on clean code, tearing and replacing whole subsystems for performance was relatively easy.
I'd like to hear more about this. I'm not sure if it was ever turned into a blog post? I don't understand how using JSX differently can eliminate the need for a visual editor for components.
- If they abstract a lot, they can target designers, but then they inevitably produce unmaintainable spaghetti code, because the tool does not know exactly where you are going
- If they abstract less, they can only target developers, but what's the point of developers using a UI when they have to know how to code in the first place?
The next step we are currently in: a lot of new languages compile to C. And it produces absolutely unreadable code. But who wants to read and write C when you can get the same speed with a much nicer language?
I agree visual editing may be more complicated than compiling / transpiling, but I believe we'll get there eventually, and people will no longer read and write css manually.
> We’re releasing it open source both so you can keep using it, and so we can share our ideas about how to build UI tools
Maybe there are no buyers, or the investors are on board with the open-source plan, who knows. But it's not always the right answer or even possible.
Software isn't exactly like buying groceries where you can go to a different grocer and get a different, equivalent solution. Instead, it's more like a bakery, where a baker has their own (in software terms, proprietary) recipe that they use. If that bakery goes away, their recipe might go too, and that would directly affect those who buy their bread, especially if it was a joint that sold to other businesses, as software normally is. If I were that baker, and I went bankrupt, I would probably give that recipe to my customers- after all, I can't profit off of it, but they still need it.
Unless I'm extremely mistaken, they haven't been bought by anyone else and won't be resuming operations ever. If they had been bought out, that would be different- their storefront would still exist, albeit somewhere else, likely- and people would still be able to get your products.
I am a believer in Right to Repair, and the overall principle here is similar- documentation of products is really helpful in fixing things, and if a company goes out of business, and has no recourse or any way to deliver support, it would be amazing for customers if they publicly released as much documentation as possible.
To be clear, I don't think that using an old thing forever is a good idea, but it often takes a long time to find something else that fits the bill perfectly.
(though unlike most of the startups on that blog, Pagedraw seems to be shutting down in the most open and responsible way)
> We’re moving on, but we’re very proud of the technology we’ve built.
My hypothesis is that unlike other mediums like paper or video, the elastic nature of the web makes understanding the positioning paradigms/box model/etc a necessity that no visual tool can really abstract. By the time you understand all that, it's just faster to write and maintain code.
Ultimately we didn't get all the way there, which is part of why we closed up shop
Feel free to hit me up if you want to know more.
They're just not popular with the HN crowd.
It's not a dealbreaker for most businesses but when you're competing on SEO, these little things matter
1. They tend to produce output that is not ideal to work with using another program (e.g. your editor of choice).
2. When you make changes to the output, the WYSIWYG editor might not understand it. This very much depends on what is changed, how, etc, but it could be a problem.
Both essentially force you to use the same software, and I think that scares people away (rightfully so I'd say).
The source code looks very interesting (and a bit unusual for a React app.) 
Looks like Coffeescript + "Coffeescript JSX", which I hadn't heard of before.
I didn't look too closely, but I didn't see how they're doing state management in the editor. No Redux in the package.json. I was hoping that I might be able to repurpose some of this for a new project I'm working on, but it all looks very foreign to me, and the file organization doesn't seem to follow any conventions that I'm familiar with.
> We just used global mutable state instead of the whole immutable Reduxy patterns people liked to enforce. It was simple and it worked great.
Also, are you sure you need react?
Integrating with ClojureScript ended up working, but it was a real pain. That's less on them than us though, so I won't fault them for not supporting every language which compiles to JS.
However, PageDraw never worked in Firefox. Perhaps now that it's going open source, this can be prioritized, but it was a serious issue for me, since I don't use anything else. Just for PageDraw, I installed Chromium; they had a desktop app, but it was much less stable than the site, in my experience.
PageDraw had some annoying bugs, as well. Invisible components in Figma would be imported as visible, but I could work around that. However, there were codegen issues with multi-states which required me to go into the code and make tweaks. I had reached out to Gabriel several times with reproduction cases and debug info, but never heard back.
Aside from that, a slew of UX issues (copied right from my notes):
* Add a "Select children" button when only one item is selected; it shows up when multiple are selected, but not one
* Add a "Delete branch" button which deletes the selected and all children
* Allow searching for component within project
* Allow viewing of an image's source, if it's imported from Figma
* Allow inter-project component references, so I don't have to keep every screen in the same project
* Allow importing of path-based icons with overlaps from Figma; currently, they're all broken
* Allow selecting multiple items with overlapping properties and setting them all at once
I love the idea of PageDraw and I wanted so badly for it to help us with our app development and converting Figma designs to React code which we can use from ClojureScript. Alas, the experience was viable for neither me nor my designer.
Thanks for trying, PageDraw devs, and thanks even more for open sourcing your work as part of this shut down. Best of luck in your future endeavors.
So it took them one year to shut it down? Is that a quick failure? I wonder why they didn't go any further. Was it funding related? Curious.
Kudos to the team - they might have been able to raise money to keep plugging away, but they've seen the writing on the wall.
I know nothing about their company or their team, but I do know that product-market fit is something you have, or you don't. There's no real middle ground, so if you don't have it, you need to madly iterate/pivot until you do, or cut your losses and move onto the next project.
Note: That page is clearly just their old homepage + the shutdown message; whatever it says was written pre-shutdown.
Hopefully these guys buck the trend, because it looks like there is some cool stuff in there.
I didn't find "vue" anywhere in the source except for the logo on the landing page. No branches besides master, so if there was any effort put into supporting Vue by the pagedraw team, it didn't make it to this open source release.
Still, it's great the source got released, thank you Pagedraw folks!
Some time ago someone asked if they were still in business, and they said they had some things planned.
I liked their idea, but well.
Looking into Draftbit and FramerX right now.