Pagedraw is shutting down and going open source (pagedraw.io)
343 points by jameslk on Feb 28, 2019 | 117 comments

One absolute rule for success in software is that there are no absolute rules.

From their post mortem https://medium.com/@gabriel_20625/technical-lessons-from-bui...

> Performance is never a justification for anything if you haven’t measured it

This just isn't true. Anyone with a reasonable amount of experience is often able to look ahead - well before any code is written - and see how performance could be better if road A is taken instead of road B.

It's madness in that case to blindly choose a road without even thinking about performance - even if it is unmeasurable and in the future.

There's more in the same vein:

> Go back to step 2 and make sure you completely ignore performance, because you probably haven’t.

Now I have to imagine that these guys don't genuinely believe what they wrote - but in that case, ditch the words "never" and "completely ignore" and go for something more reasonable.

(Of course that would make the phrasing less dramatic and click-baity, which is maybe what it takes to get eyeballs these days)

It's comments like this that make people hate HN. I agree with their writeup fully. They focused wayyyy too much on performance and not enough on an MVP, pivoting, marketing, growth, and staying agile.

I would love to get all these "Performance First! It's so easy" engineers in a room for a week. Hell, give 'em three weeks and they still wouldn't have a single line of code written. But they would still be arguing about the best way to deal with their upcoming 1 million hits per second.

Rule #1 of any business: Sales cures all. You know what's really hard to sell? A product that hasn't even made it out of the damn door yet.

How I've always put it:

"Steak before sizzle"

Get something done and working, even if performance sucks. Then do your performance logging and optimizations. Don't attempt to pre-optimize, because you may make a good situation worse.

That doesn't preclude selecting correct algorithms and such beforehand; i.e., don't be stupid about it. But don't try to be clever before you know you need it, either.

Second thing I always think should be necessary for any software project:

Get your security design working first.

Don't try to patch in security after the fact; if this isn't at the top of your design and planning, it should be (assuming the project needs it or can be foreseen to need it). Too often I have witnessed the opposite - and it almost never works without major refactoring (or scrapping and starting over).

> "Steak before sizzle"

I like it!

> That doesn't preclude selecting correct algorithms and such beforehand; i.e., don't be stupid about it. But don't try to be clever before you know you need it, either.

Exactly. "Avoid premature optimisation" doesn't mean "make it as slow as possible", it just means "use std::vector until it becomes an issue instead of hand-coding some arcane data structure in assembler".

I would personally expand it to this: Steak before sizzle, but don't get everyone sick by serving it raw...

Except that salespeople say "don't sell the steak, sell the sizzle!", so this could get confusing.

Isn't that the point, though? You sell the sizzle... but you can't deliver without the steak.

Actually, this is now possible, as Kubernetes makes it easy to deploy steakless applications. I think I heard that.

From an operations perspective, there are three critical things:

Security, Durability, Availability.

From my perspective, that's the order of importance too, but any one without the other two and you're not going to gain or retain customers.

Your order also captures permanence. If another actor gains access to the data, I can't revoke it. If I lose the data, I can't access it. If I can't access the data now, I can try again later.

On your second point... I'm not sure. Sony is still in business. Equifax and co are still printing money. OPM is still around. Security and trust don't seem so binary in practice.

>Get your security design working first.

The above applies to accessibility too, by the same reasoning.

> Get something done and working, even if performance sucks.

Often if "performance sucks" then it isn't done and isn't working. Performance is a client requirement like any other.

True, but when writing a 2D card game, aiming for an engine capable of doing 3D VR at 120fps is probably a bit too much.

Yet it is what I sometimes see when these performance discussions take place.

I once had a multi-day argument with the other 2 devs on the team about whether a set of MySQL queries and code would scale to thousands of reporting devices. My stance was that it didn't matter because if we didn't make the TWENTY devices we had work by next week, we'd lose that contract and pretty soon our jobs.

Well, as someone else said, it's not always black or white. Videogames MUST deliver performance; it's not only sales. Some desktop AND web applications could benefit from a performance boost here and there - and I mean being fast enough, not the abominations I've seen/heard about (minutes to generate Excel files and examples like that, just because there are "thousands" of rows in a database).

You know, sales is everything and I agree that "there are two types of software: software that people complain about and software that nobody talks about", but some of us take some pride in doing stuff with enough performance and not pure crap...

Minecraft is generally known to be fairly poorly optimised, and the graphics weren't great. Myst was mostly a series of static images. They are two of the best selling games of all time.

> Myst was mostly a series of static images.

Myst was at the cutting edge of technology at the time. They had to write some weird custom extensions in order to jam color into HyperCard at all. (Myst was initially written in HyperCard!) One could argue that designing the game around static images that could be rendered offline was itself an optimization. It gave the illusion of an immersive 3D space within the very harsh limitations of early Macs.

Also, the couple of QuickTime videos used in the game were very carefully integrated into a surrounding static image to give the illusion of an entire live scene while only animating a small rectangle of it because that's all most computers could handle at the time.

Myst is a fantastic example of making very thoughtful performance decisions given the constraints at the time.

Myst is not the example you think it is. It's a fully 3D world with moving parts like the library staircase or the redwoods or the train section.

That they managed to pull it off for home computers in 1993 is a testament to a performance-focused mindset, which led them to static images and QuickTime overlays. As soon as they had the ability, they released realMyst, which was a realtime 3D version of the original, followed by realMyst Masterpiece in the last 5-10 years.

For that matter, Minecraft may be poorly optimized (it is) but without the effort spent on the chunking system that renders 16x16x(128/256) blocks as a single mesh, it wouldn't run at all. There's plenty of low hanging fruit (which may be more difficult to retrofit into the existing engine, or in the JVM itself) that other games have found and utilized, but without that initial performance optimization Minecraft wouldn't run at all.

Yep... 100x this! What is it they say about pride and what it cometh before???

Take as much pride as you want. Just remember to, you know, ship something from time to time.

> Minecraft is generally known to be fairly poorly optimised, and the graphics weren't great

Perfect example.

You're not making games, you're selling entertainment. Focus on the latter first, and optimize accordingly.

I see this time and time again - beautifully architected and performant platforms that completely bomb because no one actually wants them.

> You're not making games, you're selling entertainment. Focus on the latter first, and optimize accordingly.

I agree with that. And yet, Minecraft is possibly the worst example:

- It's an exception to the general wisdom that games do have to worry about performance in order to deliver fun. Just because Minecraft became popular despite its abysmal performance doesn't mean games can get away with that in general.

- Notch never expected Minecraft to become that popular. In a sense, Minecraft was an accident.

- Minecraft's performance was so bad, a lot of people had to install a mod called OptiFine just to be able to play it on their hardware.

Oh, I do think performance matters - but that's usually only once you've discovered your underlying market. Minecraft's audience clearly didn't care about performance, so it would have been wasted effort to optimize in the early days.

That being said, I agree that performance might actually be integral to your product. Quake 3, for example, probably would have bombed if everyone was playing sub-10fps.

Either way, the priority is "figure out who wants your product and why." That helps determine when and where to focus your optimization efforts.

Guessing Minecraft has enough performance to be playable. Wolfenstein 3D? Doom? Pretty good performance anyone? World of Warcraft being playable on low end PCs? Come on... Not everything is black or white.

People say that, but it still runs playably on pretty ancient hardware; it's certainly easier to get running than most AAA titles.

> Videogames MUST deliver performance

That's not true in the way that I think you're using video games as an example. Video games must be fun, being fun requires being performant enough, and for graphically demanding games that requires lots of optimizing. But there are tons of games that are not graphically demanding.

Ehhh, I don't think your example disproves the idea you're responding to. Perhaps video games MUST deliver performance in the end, but often game prototypes are not performant at all for the sake of rapid experimentation, and are only optimized towards the end of development once all the systems are set in stone...which is what's being advocated for here.

In the old days, customers would never be able to get their hands on video game prototypes. You had one shot. Today, it's obviously a bit different, though many companies still treat it like one shot anyway.

Sure they would.

Game studios were one of the first areas to care for UX.

During the 80's and early 90's many kids got into games by starting as group testers after school.

At that point, they were insiders or even employees, not final customers, no? Games were released onto cartridges that couldn't be updated once the cartridges were manufactured and in the customer's hands.

Insiders, if you will. Studios would get the kids into a room to playtest ongoing development.

It wasn't always the same group of kids.

Yeah, but even in the old days the developers would still be prototyping. Internal stakeholders are still stakeholders, and it’s developers optimizing for things other than speed. Nintendo devs famously build nothing until Mario’s jump (the MVP) feels right.

Yeah, but my point was that once it was in a cartridge in a customer's home, there wasn't anything that the company could do to update it. They certainly didn't send their customers copies of the prototypes. Whether or not companies ate their own dog food doesn't change the crux of my point.

Yeah, but the original point you were responding to was endorsing the idea that “you should invest time in optimizing/performance last, and focus on making something people want to use first.” I’m saying your point that videogames have to be performant at the end is irrelevant to the point at hand because that’s exactly what the games industry does, they prototype without care to optimization to make something worth optimizing—i.e. something internal stakeholders reasonably believed there would be a market for.

That depends on the game.

Writing casual games with AAA tech is needless overengineering.

> You know what's really hard to sell? A product that hasn't even made it out of the damn door yet.

I don’t know about this. I generally end up building the things that have already been sold.

The commenter you’re replying to isn’t saying “performance first”, they’re just saying “performance matters”.

In other words—don’t go to extremes in any direction.

It sounds very much like this company took an extreme perf-first approach from the outset, and now advocates a reactionary, overcompensated opposite extreme in their postmortem.

Just apply some common sense and Pareto's rule. I.e., don't do something dumb just because there's no measurement - e.g., a loop that needlessly iterates a billion times.

> I would love to get all these "Performance First! It's so easy" engineers in a room for a week.

Except that's not the argument in the comment you're reacting to. Here's what the comment actually said:

> Anyone with a reasonable amount of experience is often able to look ahead - well before any code is written - and see how performance could be better if road A is taken instead of road B.

In other words, some optimizations come with no significant cost. As a banal, perhaps exaggerated example, if you're writing code that needs to store data in a collection and you know that it's going to perform random access to that data on a regular basis, you're not likely to pick a linked list over some kind of array.
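To put numbers on that banal example (a quick sketch of my own, using `collections.deque` as the closest stdlib stand-in for a linked list):

```python
import timeit
from collections import deque

n = 100_000
arr = list(range(n))      # contiguous array: O(1) random access
linked = deque(range(n))  # linked structure: O(n) access in the middle

# Repeatedly read the middle element of each container.
t_arr = timeit.timeit(lambda: arr[n // 2], number=10_000)
t_linked = timeit.timeit(lambda: linked[n // 2], number=10_000)

print(f"list:  {t_arr:.4f}s")
print(f"deque: {t_linked:.4f}s")  # much slower for middle access
```

You don't need a profiler to make that call; it follows directly from how the data structures are laid out.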

What @abraae is criticizing is the practice of asserting "best practices" that are likely to be taken out of context, widely propagated and misinterpreted.

Knuth's famous assertion "premature optimization is the root of all evil" is an excellent example. People love quoting it, but few seem to take into account the context, to the point that it has been largely forgotten. Here's the quote in its context:

> Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%.

To be clear, it's not Knuth's fault that people are taking his very rational and sensible advice and reducing it to "yeah, you should totally disregard performance when first writing your code". This is not about pointing fingers and assigning blame, it's about being responsible when offering advice. Knuth's advice was reasonable and nuanced, and it still got taken out of context and reduced to something different and arguably harmful. Now look at Gabriel's advice, which is arguably a lot easier to take out of its context and misinterpret, and imagine the consequences it could have.

That is what @abraae is reacting to.

EDIT: There's another thing that really, really bothers me about your reply:

> Rule #1 of any business: Sales cures all.

This, right here, is what I personally feel is wrong with software nowadays. It's how we end up with poorly written, bloated, bug-ridden, unstable, unreliable, insecure crap that we end up using anyway, because there's no better alternative.

Ironically, Knuth's full argument is exactly what we're saying, and what @abraae's taking out of context to criticize.

See https://medium.com/@jaredpochtar/on-performance-and-software... and the full section on https://medium.com/@gabriel_20625/technical-lessons-from-bui...

> Most of the code we wrote was fast enough (<2ms) that making it faster wouldn’t be noticeable, so it would definitely be a waste of time to optimize. Making code that’s 0.1% of your runtime 100x faster only makes your latency <1% lower.

> Ironically, Knuth's full argument is exactly what we're saying, and what @abraae's taking out of context to criticize.

See https://medium.com/@jaredpochtar/on-performance-and-software... and the full section on https://medium.com/@gabriel_20625/technical-lessons-from-bui...

I read the post in the second link before writing my original reply, because I don't like jumping into a discussion without making sure I understand what's going on ;)

I get that your intention was the same as Knuth's. And yes, I agree that @abraae took your advice out of context. That was, I believe, his point and it's certainly mine.

Specifically, when you look at that numbered list describing your code-writing practices, the numbers 1 and 3 stand out as hyperbolic. People *will* take that stuff out of context and pass it around. Hell, people are already having a hard time staying reasonable in this very discussion thread.

I can't speak for anyone else, but don't take my comments personally. You guys made your own product and ran with it, which is more than I can say for myself, and I respect you for that. I also agree with what you're trying to say. However, I stand by my criticism of the hyperbole, because I've seen people who take that stuff too literally and proceed to write crap.

Yeah, it's not super clear, but that list isn't of independent points: it's the steps we used to ensure a new feature is good code. "Ignore performance" is explicitly step 3 of 5, where step 5 is "then you optimize".

It is not that simple.

While premature optimization is definitely a problem, in my experience not handling performance correctly before the app is in prod usually results in having to fix logical bugs, refactor for performance, and add new features users are clamouring for, all at the same time.

We don't get a lot of chances with users; you can't tell them you'll eventually fix these issues - you will simply lose them.

As with all things, there has to be a balance between both ends.

Can your comment help anyone?

It seems to be within the set of concepts and suggestions that cannot fully sink in and be truly appreciated until some degree of failure or experience relating to them has occurred.

Yeah, they read it. They get it. But they still don't get it, if you know what I mean.

But wow - this one sure does take on some laser etched permanence once it does get through.

> Performance is never a justification for anything if you haven’t measured it

A "justification" implies that there is a compromise being made in the name of performance. It's very reasonable to choose, in the first iteration of a program, to disregard performance concerns in favor of readability, which is exactly the context in which they framed it.

When you make the big initial decisions based on readability, that informs all the rest of your choices in working on that piece of code. If you skip a few steps ahead because e.g. "we're making some extra database calls here, so we should grab all the data we need the first time instead", you'll find it much more difficult to optimize your way back to the most readable code possible.

Is it the only way to write software? Of course not. And it may make more sense for the systems and frameworks they use than for yours. But I for one admire their willingness to have a point of view and a principle when it comes to writing code. I don't think it's madness. And I don't think you're wrong either. I just don't think you would have been a good fit for their team. By putting their philosophy out there, they help make that clear. And likewise! I don't begrudge your philosophy, and I think it's often correct. But please don't accuse others of drama and clickbait for describing strong opinions.

I think you missed the gist?

They are saying that if you say "this piece needs to be more performant", you need to measure it first, before making your choice of what to improve.

You are saying "you can, with an idea in mind, optimize before you go and it will have better performance", but the current prevailing business logic is: do not optimize until you know it's "the right thing".

AKA premature optimization is bad.

Did I mis-interpret what you are saying?

I'm saying there are very few real black and whites, and everything in life and in software is more kind of grey.

As such there are very few black and white rules you can follow and succeed.

The correct approach is somewhere to the left of "premature optimization" and somewhere to the right of "don't even think about performance until you can measure it".

More experienced and talented people will be more successful at picking that sweet spot.

Telling a dev that the correct approach is to pay absolutely no attention to performance before it can be measured is ... crazy IMO. I would expect them to have at least thought about it, and have some opinions, just as they should not have put excessive effort into prematurely optimizing things they don't know are going to be a problem.

My point is - its grey, use your brains, don't buy into absolute rules.

Agree with you fully. People take these blanket generalizations and make them rules and then use them to ignorantly judge someone's decisions. I've worked on projects where performance needed to be addressed up front or else it just wasn't viable.

I think GP is saying, if you are at a point in development where you can choose A or B, and A has foreseeable better performance or scalability, all else being equal choose A.

A lot of assumptions there and real world is almost never that clear-cut.

The general advice of "don't optimize until you've measured" is still almost always correct.

> The general advice of "don't optimize until you've measured" is still almost always correct.

The issue of course being that when designing code, various design choices can have significant performance implications, and can be very hard to change once implemented and in production.

Only 20k LOC in 3 years is impressive. And I agree that you should watch out for premature optimization and premature abstraction. What I found most interesting was using CoffeeScript that compiles to JS that compiles to React that compiles to JS. They might have been too focused on the technical side. You should probably pick the most boring and proven tech stack and focus more on the actual problem.

Thank you! I'm one of the cofounders— keeping LoC low was one of our primary measures of code quality.

> we were too focused on the technical side

Yes. Oh boy we were.

> should probably pick the most boring stack

Agreed. I'd worked in the CoffeeScript compiler so it was "boring" for me, but you should pick whatever's "boring" for you - likely not CoffeeScript.

Would you say the same thing if they were using Typescript instead of Coffeescript?

An actual problem might be how to generate a responsive design for "mobile" and different screen sizes. That's what you should focus on. The problem with picking cool new tools and languages is that you might end up solving dev/sys-ops issues, learning how to use them, and developing additional tooling, instead of solving actual problems. Unless you think it's a really good fit for the actual problem domain and thus worth the extra investment.

Paragraph 3

We don’t think any lesson here should be taken as an “always true” type thing.

Which is entirely inconsistent with the statement "Performance is never a justification for anything if you haven’t measured it".

Dude. This is just how people talk. It's called hyperbole and it's a rhetorical technique. Like "never talk to the cops without a lawyer". Really? Never?

"Hi, nice to see you guys here. This is my husband. He's a policeman"

"I would like to talk to my lawyer"

Because we know the world isn't black and white we're able to interpret these statements. It's human speech. Lots of the details are left out because conciseness is gold.

Yeah, I get that, I really do.

But what you mean is that you are able to interpret these statements.

Plenty of others, maybe less experienced, are not.

Next thing you know, they are spreading the word - "hey, did you know there's a new development methodology? The idea is that you completely ignore performance until after you've written some code. Yeah, you heard me - completely ignore!. It sounds crazy, but that's how it works!"

Hi— Jared from Pagedraw here. (Stunned that this is the top comment on HN.) I stand by what we wrote 100%, but to add some color:

I don't want you to think we didn't care about performance. We did— a lot. We just think most people do performance optimization badly.

There's a lot of superstition on the internet about how to structure your code so it's fast. In our experience, following this advice usually does not make your code fast. Most people have bad intuition about performance.

To counter this, you have to constantly measure the performance of your code.

There's 2 parts to this idea.

1. Most things are fast enough that optimizing them is not worth making the code less readable, and is a waste of time.

2. Measuring, rewriting, and repeating is the right way to do performance anyway, full stop. There's lots of superstition around performance, which is absurd because it's extraordinarily measurable.

For point 1:

We goalled on a clear and flexible style, which helped us move incredibly fast with very few people. See https://norvig.com/spell-correct.html for an example of a golden piece of code in this style. Rewriting for performance moves us away from that goal.

Most of the code we wrote was fast enough (<2ms) that making it faster wouldn't be noticeable, so it would definitely be a waste of time to optimize. Making code that's 0.1% of your runtime 100x faster only makes your latency <1% lower.

It's not because we're brilliant coders: we just realized that most of the code we wrote ran on arrays with few enough elements that even O(n^2) algos could run in <1ms. Some of the code we wrote ran on a server, where network was the dominant factor, and we could always just scale up our servers with money. Some of the code ran on click, where it could be as slow as 5 seconds without the user really caring.
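To give a feel for why small n makes big-O arguments moot (toy numbers of my own, not Pagedraw's measurements): even a naive quadratic scan over a UI-sized array finishes in well under a millisecond.

```python
import time

def has_duplicates_quadratic(items):
    # O(n^2) pairwise scan -- the "wrong" algorithm by big-O standards.
    for i, a in enumerate(items):
        for b in items[i + 1:]:
            if a == b:
                return True
    return False

items = list(range(50))  # the size of array UI code typically touches

start = time.perf_counter()
result = has_duplicates_quadratic(items)
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"duplicates: {result}, took {elapsed_ms:.3f} ms")
```

At this scale the constant factors and the surrounding I/O dominate; swapping in a hash set would be unmeasurable to the user.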

For point 2:

There's no such thing as “fast” code and “slow” code— you can put in the work to optimize any piece of code, and make it incrementally faster.

Because you can put in work to optimize any particular code, we either budgeted hours of engineering time for performance, or set performance goals. In either case, you want to spend your time optimizing the code that's the slowest, so you get the most “bang for your buck.”

Most people have terrible intuitions about performance. We certainly did - if we'd prematurely optimized the things we'd thought were going to be slow, we'd have wasted a ton of time.

I felt there were lots of superstitions about performance that were just wrong. For example, considering big-O time complexity was often wrong for us, because our n was usually pretty low, so we were dominated by constant factors.

Performance is incredibly measurable. You should always measure, rewrite, and repeat until you hit your perf budget. Otherwise you're just shooting in the dark.
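A toy illustration of that loop (hypothetical names, not Pagedraw's actual code): write the clear version first, measure it against a budget, and only rewrite the piece that measurably blows it.

```python
import timeit

PERF_BUDGET_MS = 5.0  # hypothetical per-call budget

def dedupe_v1(rows):
    # First pass: the clearest possible code. Measure before judging it.
    out = []
    for r in rows:
        if r not in out:  # O(n) membership check per row
            out.append(r)
    return out

def dedupe_v2(rows):
    # Rewritten only after v1 measurably missed the budget.
    seen, out = set(), []
    for r in rows:
        if r not in seen:
            seen.add(r)
            out.append(r)
    return out

rows = list(range(3000)) * 2

ms_v1 = timeit.timeit(lambda: dedupe_v1(rows), number=1) * 1000
ms_v2 = timeit.timeit(lambda: dedupe_v2(rows), number=1) * 1000
print(f"v1: {ms_v1:.2f} ms (budget {PERF_BUDGET_MS} ms), v2: {ms_v2:.2f} ms")
```

The point is the workflow, not the set: the rewrite is justified by a measurement against an explicit budget, not by intuition.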

Because we goalled on clean code, tearing and replacing whole subsystems for performance was relatively easy.

> Ultimately, we think Pagedraw is the wrong product. We think you can get 90% of the benefits of Pagedraw by just using JSX better. Our findings on this will be controversial, as they go entirely against the current “best practices,” so we’ll save them for a later blog post.

I'd like to hear more about this. I'm not sure if it was ever turned into a blog post? I don't understand how using JSX differently can eliminate the need for a visual editor for components.

You've probably seen their blog post from other comments, which doesn't directly address the issue. They essentially made a visual editor for developers - this is the crux of the issue. It's easy for developers to add, preview, and edit components without a purpose-built visual editor. They appear to suggest that following some of the "technical lessons" in their blog post is what these devs really need - not a visual editor. Having said that, a visual editor would be great for designers who don't want to get their hands dirty with coding.

That's exactly the whole problem with web editors:

- If they abstract a lot, you can target designers, but then it inevitably produces unmaintainable spaghetti code because it does not know exactly where you are going

- If you abstract less, you can only target developers, but what's the point for developers to use a UI when they have to know how to code in the first place?

That's true for now, but I believe we will get there eventually. Who is writing assembly code anymore? You can argue that compilers produce unmaintainable spaghetti assembly code, which is certainly true. But almost no one will look at the assembly output.

The next step is the one we are currently in: a lot of new languages compile to C. And it produces absolutely unreadable code. But who wants to read and write C when you can get the same speed with a much nicer language?

I agree visual editing may be more complicated than compiling/transpiling, but I believe we'll get there eventually, and people will no longer read and write CSS manually.

I like the subtle nod through capitalization in the phrase "It’s been An Incredible Journey".

Though unlike an Incredible Journey, this is the right way to shut down: Open up your code and contribute what you did to society.

I think it isn't meant so much as a gift to society as it is a best-efforts gesture to your existing users regarding their sites and investment:

> We’re releasing it open source both so you can keep using it, and so we can share our ideas about how to build UI tools

It was both of those two things— we wanted to do right by our users who trusted us with a dependency, but also because we think we did some things differently than your average WYSIWYG builder, and wanted those ideas to help move the standard forward.

Maybe, maybe not. Did they have investors? The code is an asset. Can they sell the code/IP for any significant amount of money? If so, they should do that and pay back the investors/reduce their losses.

Maybe there are no buyers, or the investors are on board with the open-source plan, who knows. But it's not always the right answer or even possible.

This is a very narrow view. The code is not just a company asset, it's a customer asset as well, and their needs ought to be considered as well. Have you ever had software you were using just stop working because remote servers failed or something of the sort? What about if they never fixed that problem?

Software isn't exactly like buying groceries where you can go to a different grocer and get a different, equivalent solution. Instead, it's more like a bakery, where a baker has their own (in software terms, proprietary) recipe that they use. If that bakery goes away, their recipe might go too, and that would directly affect those who buy their bread, especially if it was a joint that sold to other businesses, as software normally is. If I were that baker, and I went bankrupt, I would probably give that recipe to my customers- after all, I can't profit off of it, but they still need it.

Unless I'm extremely mistaken, they haven't been bought by anyone else and won't be resuming operations ever. If they had been bought out, that would be different - their storefront would still exist, albeit likely somewhere else - and people would still be able to get their products.

I am a believer in Right to Repair, and the overall principle here is similar- documentation of products is really helpful in fixing things, and if a company goes out of business, and has no recourse or any way to deliver support, it would be amazing for customers if they publicly released as much documentation as possible.

To be clear, I don't think that using an old thing forever is a good idea, but it often takes a long time to find something else that fits the bill perfectly.

For those that haven't seen it: https://ourincrediblejourney.tumblr.com/

(though unlike most of the startups on that blog, Pagedraw seems to be shutting down in the most open and responsible way)

That’s a fairly depressing read. While I think it’s understandable the founders run off with a ton of money, it’s not really a great state of affairs.

Thank you! I was worried this would be too subtle :D :D

Is there also a hidden meaning in the second We're here:

> We’re moving on, but We’re very proud of the technology we’ve built.

lol no, I wrote that and it's a typo :P

More details on the shutdown here, with some good insights: https://medium.com/@gabriel_20625/technical-lessons-from-bui...

We've been writing HTML for a couple of decades now. There have been dozens (hundreds?) of WYSIWYG editors and nothing has really caught on.

My hypothesis is that unlike other mediums like paper or video, the elastic nature of the web makes understanding the positioning paradigms/box model/etc a necessity that no visual tool can really abstract. By the time you understand all that, it's just faster to write and maintain code.

We thought so too. The key idea in Pagedraw is you could design in the position:absolute world designers think in, and we'd translate it into the Flexbox world apps need.

Ultimately we didn't get all the way there, which is part of why we closed up shop
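To make the idea concrete, here's a toy sketch of that kind of translation (illustrative only, not Pagedraw's actual algorithm): take children with absolute coordinates and, when they form a non-overlapping left-to-right row, re-express the same layout as a flex row with margins.

```javascript
// Toy illustration of the absolute -> flexbox idea (NOT Pagedraw's real
// algorithm): if absolutely-positioned children form a non-overlapping
// left-to-right row, express the same layout as flexbox with margins.
function absoluteToFlexRow(children) {
  // Sort by x so horizontal gaps between siblings can be measured.
  const sorted = [...children].sort((a, b) => a.x - b.x);
  for (let i = 1; i < sorted.length; i++) {
    // Bail out if two children overlap horizontally: not a simple row.
    if (sorted[i].x < sorted[i - 1].x + sorted[i - 1].width) return null;
  }
  return {
    container: { display: 'flex', flexDirection: 'row' },
    items: sorted.map((c, i) => ({
      width: c.width,
      // The gap to the previous sibling becomes a left margin;
      // the first child's margin is its offset from the container edge.
      marginLeft: i === 0 ? c.x : c.x - (sorted[i - 1].x + sorted[i - 1].width),
    })),
  };
}
```

The hard part, of course, is everything this sketch punts on: overlapping elements, nesting, resizing behavior, and all the cases that aren't a simple row.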

there is another way to tackle this but it takes a lot more work than a DnD editor can deliver:

Feel free to hit me up if you want to know more.

SquareSpace, Wix, Shopify, and Elementor (a WordPress plugin) have all caught on. The first three are more than page-builders, but the page-builder is an important component.

They're just not popular with the HN crowd.

So true, I have seen total beginners create nice looking pages with the tools you've mentioned, but if they stay in it for the long haul they find themselves limited by these tools - after which they either gain the technical skills required and move away from the page-builder or hire a dev. That's why I personally won't use such tools. They make it quicker to begin with, but I'll inevitably find myself fighting against the tool.

I used Elementor for a recent site but it churns out such bloated code that I was worried my performance was taking a hit. Ended up writing everything from scratch in HTML and site performance has improved by 25%+

It's not a dealbreaker for most businesses but when you're competing on SEO, these little things matter

What kind of beginners? If someone wants to make websites for a living, of course they need to get the skills. If they just want one website, those tools are often enough.

I'm not referring to individuals who want to make money as web devs. What I'm referring to is the entrepreneur who lacks technical skills but is able to put up a nice little website using page-builders, and after a while wants to take it further and add features not supported by the page-builder.

Have they caught on with pros, or with people who do not know HTML?

I think there are also two other problems with WYSIWYG editors that haven't really been solved:

1. They tend to produce output that is not ideal to work with using another program (e.g. your editor of choice).

2. When you make changes to the output, the WYSIWYG editor might not understand it. This very much depends on what is changed, how, etc, but it could be a problem.

Both essentially force you to use the same software, and I think that scares people away (rightfully so I'd say).

Sorry to hear about the shutdown, and it looks like it was a pretty useful tool. Great that they open sourced it.

The source code looks very interesting (and a bit unusual for a React app.) [1]

Looks like Coffeescript + "Coffeescript JSX", which I hadn't heard of before.

I didn't look too closely, but I didn't see how they're doing state management in the editor. No Redux in the package.json [2]. I was hoping that I might be able to repurpose some of this for a new project I'm working on, but it all looks very foreign to me [3], and the file organization doesn't seem to follow any conventions that I'm familiar with.

[1] https://github.com/Pagedraw/pagedraw/tree/master/src

[2] https://github.com/Pagedraw/pagedraw/blob/master/package.jso...

[3] https://github.com/Pagedraw/pagedraw/blob/master/src/editor/...

http://www.paulgraham.com/avg.html is how we felt about Coffeescript + JSX

Frankly, picking a non-conventional language in order to associate yourself with "true hackers" is PG's least useful advice. I am not even sure if he would still agree today. Hackers use the language they are most comfortable with and that gets things done. Everything else is secondary. I wrote a blog post on the subject [1].

[1]: https://shubhamjain.co/2018/12/01/why-paul-graham-wrong/

Yeah we didn't do it for the hipster cred— we did it because I agree with PG that some languages are better than others, and Coffeescript is just better than Javascript.

The reason they didn't use redux was explained in the medium post.[1]

> We just used global mutable state instead of the whole immutable Reduxy patterns people liked to enforce. It was simple and it worked great.

[1] https://medium.com/@gabriel_20625/technical-lessons-from-bui...
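For anyone who hasn't seen this style, a rough sketch of what it looks like (illustrative names, not Pagedraw's actual code): one global mutable document object plus a single change-notification hook, instead of actions, reducers, and immutable copies.

```javascript
// Hedged sketch of the "global mutable state, no Redux" pattern the post
// describes. Names here are illustrative, not Pagedraw's actual code.
const doc = { blocks: [], selection: null }; // single global mutable store
const listeners = [];

// UI code subscribes once and re-renders from `doc` whenever it changes.
function onChange(fn) {
  listeners.push(fn);
}

// Mutate freely inside the callback, then notify the UI once. Simple,
// at the cost of Redux-style time travel and devtools integration.
function update(mutator) {
  mutator(doc);
  listeners.forEach((fn) => fn(doc));
}
```

Usage would be something like `update(d => d.blocks.push(newBlock))` from anywhere in the editor, with React components re-rendering in the `onChange` callback.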

I'm actually really disappointed that I found out about them because they're shutting down. This looks perfect to me because I've been wanting to make some sites fast but I don't have the time to expand my React skills currently. Does anyone have a good recommendation for an alternative?

I'm looking forward to Modulz. https://www.modulz.app/

They’re making a desktop app (that will be released as open source as they shut down the website). Maybe it would fill your needs?

Use the open-source desktop app they released.

Interesting, is the desktop app on their Github the final version?

Yes, it's the final app from Pagedraw... but we encourage forks! There's some good stuff in there :)

Thank you for releasing all your hard work to the community!!

React Studio is free: https://reactstudio.com

Framer X is supposed to have a similar feature set, but Mac only.

Also, are you sure you need react?

I'm open to any language! My skill in front-end development isn't great. I only know a bit here and there to get by. I've been trying to make a small webapp tied to Amazon Sagemaker but I struggle to make the site presentable where it doesn't look like it's made in the 90s....

If your app doesn't require complex interactions with immediate feedback I would suggest using static HTML or a server rendered MVC framework. React is pretty overkill for simple apps, and probably the most difficult entry point for front end development. Anyone that says react is a good way to learn software development is lying or trying to sell you something. :)

I tried using PageDraw for LetsBet when we ported our ClojureScript + React codebase to desktop with Electron. All of our designs are done in Figma, by our designer, so we were trying to rely on the Figma project importer. Unfortunately, the importer was very unreliable, bringing in something that looked like a Van Gogh-like representation of what had been designed. Basically, our designer would have to use Pagedraw's editor to build the UI, but it simply lacked the feature set compared to Figma.

Integrating with ClojureScript ended up working, but it was a real pain. That's less on them than us though, so I won't fault them for not supporting every language which compiles to JS.

However, PageDraw never worked in Firefox. Perhaps now that it's going open source, this can be prioritized, but it was a serious issue for me, since I don't use anything else. Just for PageDraw, I installed Chromium; they had a desktop app, but it was much less stable than the site, in my experience.

PageDraw had some annoying bugs, as well. Invisible components in Figma would be imported as visible, but I could work around that. However, there were codegen issues with multi-states which required me to go into the code and make tweaks. I had reached out to Gabriel several times with reproduction cases and debug info, but never heard back.

Aside from that, a slew of UX issues (copied right from my notes):

* Add a "Select children" button when only one item is selected; it shows up when multiple are selected, but not one

* Add a "Delete branch" button which deletes the selected and all children

* Allow searching for component within project

* Allow viewing of an image's source, if it's imported from Figma

* Allow inter-project component references, so I can not have every screen in the same project

* Allow importing of path-based icons with overlaps from Figma; currently, they're all broken

* Allow selecting multiple items with overlapping properties and setting them all at once

I love the idea of PageDraw and I wanted so badly for it to help us with our app development and converting Figma designs to React code which we can use from ClojureScript. Alas, the experience was viable for neither me nor my designer.

Thanks for trying, PageDraw devs, and thanks even more for open sourcing your work as part of this shut down. Best of luck in your future endeavors.

Interesting that they launched pagedraw exactly 1 year ago (Feb 26th, 2018) and I remember reading that launch post [0]

So it took them 1 year to shut it down? Is that a quick failure? Wonder why they didn't go any further. Was it funding related? Curious.

[0] https://news.ycombinator.com/item?id=16467387

I think a year is more than enough to figure out if you have the wrong product.

Kudos to the team - they might have been able to raise money to keep plugging away, but they've seen the writing on the wall.

I know nothing about their company or their team, but I do know that product-market fit is something you have, or you don't. There's no real middle ground, so if you don't have it, you need to madly iterate/pivot until you do, or cut your losses and move onto the next project.

Never heard of them, but love the concept. Perhaps the right thing would be for someone like Sketch to incorporate this and make it mature and maintained. They could charge a nice extra for doing so.

Very interested in the vuejs implementation. How soon is soon, now that it's moving to open source?

Odds are, never. If some volunteer open-source developer decides to pick it up, it might happen, but as the company's shutting down, any "soon" promised by them in the past is obviously now a "never".

Note: That page is clearly just their old homepage + the shutdown message; whatever it says was written pre-shutdown.

This is correct. We wanted to preserve the pre-shutdown homepage, but there is no future development planned.

Sad but true, having seen a similar process up close. It takes just a fraction of the work to bundle all the tech up for open source, but at the point of shutdown everyone is usually just toast... there's sadness in it too. Most people can't put in the day or two required to jettison it into open source land right away, and then in a month or two the rot has already set in.

Hopefully these guys buck the trend, because there looks like there is some cool stuff in there.

The link to the source is right on the page? https://github.com/Pagedraw/pagedraw

Me too :)

I didn't find "vue" anywhere in the source except for the logo in the landing page. No branches besides master, so if there was any effort put in to supporting Vue by the pagedraw team it didn't make it to this open source release.

Still, it's great the source got released, thank you Pagedraw folks!


Some time ago someone asked if they were still in business and they said they had planned some things.

I liked their idea, but well.

Looking into Draftbit and FramerX right now.

The really funny thing is that the same cycle repeats in JS land that we saw in Java land a decade ago. XD I can't even put into words how hilarious this is. We need visual editors. We don't need visual editors. We need visual editors. We don't need visual editors... Etc

We pretty explicitly said "hey, the web is a regression from the 90s... let's just make a company that fixes that"

Looks like they just did not find the type of investors who would help them all the way to the finish line. I hate to see products that are trying to solve hard problems but fall short due to funding.

I'd love to have a similar product, but one that produces only the React element tree in JSON form.

Was this like a wysiwyg editor for badly performing web applications?
