This is so inspirationally horrible that someone spent hours remaking it the way it should have been: http://studentweb.infotech.monash.edu/~wlay/FIT1012/muse-dem....
Original: 1496 lines, 77.9kB
Your version: 104 lines, 4.75kB
I thought we'd progressed beyond the state of a decade ago, where Dreamweaver or what-have-you would build you a cumbersome and baroque HTML splooge to match whatever you had done in the designer, but I guess we haven't advanced that far. Just goes to show that front-end devs are still as necessary as ever, I suppose.
For many purposes, optimizing the HTML nerd out of the process is a much bigger win than a 20k download (don't forget gzip) is a loss.
I know this is going to get me downvotes, but I think the dogmatic "HTML shall be written by hand!!1" attitude all over this thread is just people clinging to the past.
And then there are the higher level issues. What happens when you get a bug report about how the page is rendered in a specific browser/OS? Do you want to wade through 1500 lines of html or 100 lines? Which do you think will be easier and faster to fix? Which do you think will be easier to inspect for correctness from the start? What happens when you need to figure out why your page is rendering too slowly? Which is easier to analyze, which is easier to speed up? What happens when you want to change the design? What happens when you want to take the design and use it as a UI for a web-app?
Using a tool that generates such crappy HTML may allow an inexperienced person to create a web page with a decent appearance, and it may even save an experienced designer a few minutes upfront. But over the lifecycle of a project it ends up being an enormous drain. If you're an enthusiastic teen putting up a web page for your mother's knitting circle, it's fantastic! But this is not in any sense a truly professional tool.
For example, there is no big difference in functionality if the Adobe web site takes a few extra milliseconds to load. Customers just want to buy a product and get on their way. And Adobe is a big company... This is even more true for their customers: thousands of small websites that just want to publish content as quickly as possible.
We don't write HTML/JS/CSS by hand because it's fun, that's for sure. We focus on code-reuse and delayed loading plus AJAX (plus gzip, compress, CDN, cache, etc.) because the customer focuses on speed and the CEO focuses on the bottom line.
And, not to mention, no business owner wants to get locked into a solution; they want their "data" transferable and standardized to at least some extent.
The code this produces is so horrible you've lost all the time spent with it. Muse dies out, and your time spent in it dies with it.
> And, not to mention, no business owner wants to get locked into a solution; they want their "data" transferable and standardized to at least some extent.
Still, it absolutely defies reason that a business would need to hire someone specifically to edit a bit of text or change a font on a web site. Once a credible product along these lines hits the market, the terms HTML and CSS will disappear from the overwhelming majority of job descriptions, forever.
That's only weird to those who think open standards are the only viable standards.
> MS Office is the de facto standard
> your data is not vendor-locked when using it
This is, however, mostly thanks to people reverse engineering Microsoft's original binary file formats, and MS was not really happy about this to begin with. If they could have prevented it, they would have done so (and they tried). Even the newer OOXML is not entirely documented and prevents free implementations due to patents (which, no matter what Microsoft may claim, is the exact opposite of an open standard).
Also, while this conversion might work fine for simple, small documents (or other files), the more complex and larger your files become, the more impossible it becomes to convert them without a major hassle, which brings us back to your misunderstanding of 'vendor lock-in'. The term doesn't necessarily mean that it's impossible to switch to alternatives; it also applies when measures are taken to make it as hard as possible to switch without investing heavily in time and money.
As a side note, I am not attacking MS Office specifically. It's just the best example for showing all that is wrong with closed standards and proprietary file formats.
In the history you're clinging to, you completely ignore that there was no real alternative. Open standards weren't in a viable state. Furthermore, competition from closed standards has forced open standards to shape up.
I definitely want and wish for tools that can allow me to design websites without having to know what IE developers were thinking the day they decided to have some fun.
The reason vim, emacs and the like are still used a lot isn't nostalgia; it's that they work great for the user.
The issue with all the WYSIWYG editors is that they take the "a web site can be anything and everyone can be a designer" approach, which is completely make-believe. Until someone makes some form of WYSIWYM editor aimed at the web designer, all other approaches are worthless because they are basically just Word fancied up.
Most graphic design classes include a field trip to see how typographers worked for newspapers 60 years ago, and to learn why their job was important - and such a day will come for designers who write code by hand, too.
But till then, design and code still matters.
Second, HTML is for telling the browser how to lay out content. When you use Muse or Dreamweaver or iWeb or whatever, you're essentially "scripting" your HTML in a proprietary GUI. When that GUI changes or disappears, how will you maintain that page? Yes, by hand.
All WYSIWYG GUIs should seek to output human-maintainable code, at the very least. It isn't a performance issue.
~ $ curl -s 'http://studentweb.infotech.monash.edu/~wlay/FIT1012/muse-demo/' | gzip | wc -c
~ $ curl -s 'http://muse.adobe.com/index.html' | gzip | wc -c
I thought to check this because of ridiculousfish's old article (note the FAQ at the bottom, "Isn't that a humungous flood of markup?"): http://ridiculousfish.com/blog/archives/2009/06/01/roundy/#f...
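For what it's worth, the same kind of comparison can be sketched offline with Python's standard library. The two payloads below are made-up stand-ins for the real pages (the live URLs from the curl commands aren't fetched here), but they illustrate why gzip narrows the gap without closing it: repetitive wrapper markup compresses well, yet the leaner page still wins.

```python
import gzip

# Two toy payloads standing in for hand-written vs. generated markup.
# These are illustrative strings, not the actual Muse output.
lean = b"<ul>" + b"<li>item</li>" * 10 + b"</ul>"
bloated = b"<div class='wrap'><div class='grp'><p>item</p></div></div>" * 10

for label, html in (("lean", lean), ("bloated", bloated)):
    raw = len(html)
    compressed = len(gzip.compress(html))
    print(f"{label}: {raw} bytes raw, {compressed} bytes gzipped")
```

The gzipped sizes end up far closer than the raw sizes, which is exactly the "don't forget gzip" point above - and also why raw line counts overstate the cost somewhat.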
Even small changes in response times can have significant effects. Google found that moving from a 10-result page loading in 0.4 seconds to a 30-result page loading in 0.9 seconds decreased traffic and ad revenues by 20% (Linden 2006).
I really do not want to see that 1MB picture of your dog mascot, but I was actually interested in your 5KB product description... Oh well, if you don't care about your site, you probably don't care about selling your product either.
As for the rest of us, we don't have to be concerned until Muse starts learning the quirks and features of CSS faster than we do.
They're using a hidden div full of <img> elements to load the hover images before they're requested by an actual hover. It's all the way at the bottom of their code:
[Removed for brevity]
Sprite sheets are another option (using the sliding doors technique), but they're a bit more ungainly. They would save a couple HTTP requests, but that extent of optimization isn't necessary on most sites. Unless I've already combined all my stylesheets into one file, I certainly wouldn't start combining images.
What really matters is perceptible lag for the user, and either technique works just as well for that.
I find Adobe's technique kind of neat, and I'll probably use it in some of my future websites.
As I noted the other day, “Almost no one would look inside, say, an EPS file and harrumph, ‘Well, that’s not how I’d write PostScript’–but they absolutely do that
Sure, if you want a personal homepage, or to put up an ode to your dog Scruffy on the internet, tools like this may very well work for you. But for those use cases, we've already had capable tools for years, many of which produce cleaner code than this.
For anything bigger/more professional, dirty/bloated HTML/CSS means a few things:
- Bigger downloads for your visitors, wasting their bandwidth. But whatever, you're not paying for that, right?
- More data transfer for you and your host. That you do pay for.
- Increased latency (sometimes massively) and decreased accessibility for people with slower connections. Given the awesomeness of American broadband performance, that means most of your customers/visitors. More latency = more bounces = fewer visitors buying stuff from you, reading your ad copy, etc etc.
- Increased rendering times. See: bounce rate.
And these aren't negligible effects. The difference between a 10KB and a 100KB file is very significant, and you don't need millions of uniques a month to notice it.
Dirty EPS files are not the same thing - because unclean PostScript suffers from none of these deficiencies except bigger downloads - which for EPS files has little to no consequence in the typical use case.
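To put rough numbers on the latency point, here is a back-of-the-envelope sketch in Python. The connection speeds are illustrative assumptions (a slow DSL-ish link and a faster cable-ish one), not measurements, and the model ignores round-trip latency and protocol overhead, so real numbers would be worse:

```python
def transfer_ms(size_kb: float, mbps: float) -> float:
    """Idealized transfer time in milliseconds for a payload of
    size_kb kilobytes over a link of mbps megabits per second."""
    bits = size_kb * 1024 * 8
    return bits / (mbps * 1_000_000) * 1000

for size in (10, 100):
    for speed in (1.5, 10):  # assumed link speeds, purely illustrative
        print(f"{size}KB at {speed}Mbps: {transfer_ms(size, speed):.0f}ms")
```

Even in this idealized model the 100KB page costs a half-second or so on the slow link - squarely in the range where the Google numbers quoted above say traffic starts dropping.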
Will Google design their homepage with Muse? No, they want to hand optimize it. Will people design websites for small businesses that get accessed a handful of times per day with something like Muse, when its output gets better? Yes. It might not make nerd-sense, but for a lot of sites it does make business-sense to have an increased page size in return for saving a lot of design and development time. Think of it this way: of all the things a person designing a website for a small company might spend their time on, is reducing page size the most profit increasing activity?
You also should keep in mind that Muse is a beta and the Muse site itself was actually written with Muse.
In this case, I think Adobe deserves the benefit of the doubt. They're trying to showcase this product with hopes that people like us will come to the table and give constructive feedback rather than point to the past and say, "See, I told you! Damn Adobe!".
Adobe could be learning a ton about their product from this response. If they say to themselves, "Those Developers just want to say, 'See, I told you! damn Adobe!'", they will have wasted a major opportunity to better their product.
Their target market is obviously not developers, and there is a lot of feedback here I would just chuck as interesting but meaningless to the product. There is some, though, that will directly impact whether an engineer laughs when his non-tech friend asks about Muse, or shrugs and says "It's not perfect, but if your needs are light, it'll work" (just like I was telling my father-in-law Dreamweaver would work for the genealogy-stories CD he wants to make for the family).
Adobe needs to get the product to that second response.
> Certain kinds of human creativity and expertise cannot be reproduced by machines. […] [machine's] music can never be the Eroica or “This Land is Your Land,” because there is no algorithm with the creative and life experience of Beethoven or Woody Guthrie.
Of course, I agree with the practical point that no current machine can do human art. Because of that, we can't currently automatically extract the semantics of an image, or even convert a PostScript document into clean HTML. So it doesn't affect the conclusion for the foreseeable future.
But one can't seriously believe there's no algorithm behind an artist's art without believing in some kind of ghost controlling her brain - a ghost that somehow doesn't run an algorithm. As far as I know, there is no such ghost. It very much looks like our cognition (including our art) is entirely the product of physical processes, even though it definitely doesn't feel like it.
Now, I reckon art is not just the product of some internal algorithm running in isolation from the rest of the world. We're highly interactive beings, and our output mostly depends on our input. But there is some kind of algorithm doing all these interactions, though it is likely incredibly complex.
My point is, I wouldn't lose hope of automating something that currently only humans can do. Take spam filters, for example. With very little knowledge, they can take out spam with stunning accuracy. But if no one had told me about Bayesian filters, I would likely have tried to make the computer parse the whole e-mail like a human, then given up, thinking that only humans can understand those e-mails well enough to filter them (note my mistaken assumption that the spam filter must somehow acquire some high-level understanding of the e-mail to do its job).
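The Bayesian-filter point can be made concrete in a few lines of Python. This is a toy naive Bayes classifier trained on a made-up two-message corpus (the word lists below are invented for illustration); the striking part is how little "understanding" it needs:

```python
import math
from collections import Counter

# Tiny made-up training corpus: word counts per class.
spam_words = Counter("buy cheap pills buy now cheap".split())
ham_words = Counter("meeting notes attached see you at the meeting".split())

def log_score(words, counts, prior=0.5):
    """Log-probability of a message under one class, with add-one smoothing."""
    total = sum(counts.values())
    vocab = len(set(spam_words) | set(ham_words))
    score = math.log(prior)
    for w in words:
        score += math.log((counts[w] + 1) / (total + vocab))
    return score

def classify(message):
    """Pick whichever class gives the message a higher score."""
    words = message.lower().split()
    spam = log_score(words, spam_words)
    ham = log_score(words, ham_words)
    return "spam" if spam > ham else "ham"

print(classify("buy cheap pills now"))     # classified as spam
print(classify("see you at the meeting"))  # classified as ham
```

No parsing, no grammar, no semantics - just word counts and Bayes' rule, which is exactly why the "only humans can do this" intuition failed here.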
I can't find the page right now, but it had samples and they sounded quite good.
Maybe this? http://artsites.ucsc.edu/faculty/cope/experiments.htm
Management: "We need you to change all the buttons on this web site and enlarge the logo."
Web dev: "OK, these pages were puked out of Muse. Do you have the Muse project files and a copy of Muse for me to install?"
Management: "Muse? What's that? No, we don't have the source files. Can't you just edit the page?"
Web dev: "Sure. After I drink this bottle of Scotch and look for other job postings."
So guess what? HTML is still source code, not a machine language.
Your metaphor would be great, if it wasn't for all the times developers (myself included) have been asked to dig into assembly or bytecode to fix something when the higher level source file had been lost to the ages.
Compared to those, editing machine generated html is a treat.
I'm going to mention something else: fluidity.
It is very easy to patch together a good-looking design in Photoshop. But the Photoshopped content does not move. The window cannot be resized. The divs do not flow; they are not resized or shrunk as the user drags the window handles.
Also, while CSS has many flaws, it does provide separation of presentation from content.
Many authors prefer writing entire books in LaTeX. Do you know why? It's the What-I-See-Is-What-I-Mean style of writing, which is far better: from a single LaTeX source you can generate HTML, PDF files and optimized ePubs for whatever medium you desire, and have it look exactly as you intended.
HTML is also about developing input-interfaces.
I did work with Delphi, and with the editor in NetBeans called Matisse, which does a great job at defining Swing interfaces. I also worked with Visual Studio and with Adobe Flex Builder -- nothing comes close to the ease with which I work with HTML.
Heck, I'm working on a desktop client right now that just embeds a webkit view with native-hooks, simply because I don't feel like learning yet another sucky GUI-toolkit that will disappoint me in one way or another.
Also, I started my webdev career by using Adobe Dreamweaver. It was one of the dumbest things I ever did.
This was with a single vendor spec that was fairly comprehensive. The web is a loose spec with multiple vendors.
The software used is Business Catalyst (http://businesscatalyst.com/), an enterprise CMS (all-in-one solution urgh) Adobe bought 2-3 years ago. And yes, the semantics are very bad.
In fact, I disliked this piece of software so much, I wrote a post about it a few months ago:
We did read your review and took it to heart - it wasn't easy, as you were very direct. However, there is a lot of truth in your review and there's a lot for us to improve. And believe me, that's our goal - to improve the product until customers love it and folks like you give us much better reviews :) - because you would actually like the product.
I'd love to chat with you, try to show you where we're going, and get your perspective on what we should do differently. Your feedback would be very important to us.
If you're interested, please contact me at email@example.com
There were a lot of positive articles online when I was evaluating BC. But I later found out that most of them were written by various BC partners (a good business decision there) - which also meant that they were a bit biased and skipped over the bad parts.
I'll send you an email. I'm pretty sure Adobe is able to make it better (I'm an avid Adobe fan, mind you).
<!-- BC_OBNW -->
And if you view a screenshot of the HTML at the top of the page and compare that with the BC website:
When you think about it, it is very possible that the website's theme/HTML/CSS was built using Muse at one point, but then transformed into a BC theme/template.
BC can host any HTML files, we don't enforce a specific markup or template constraints.
I'm willing to bet everybody on this thread writes their HTML by hand. How many of you have a framework for programmatically writing HTML? How many have a framework for writing HTML as broadly as Muse can?
Also, HTML is not ASM, it's a very high level language. The ability to code a scant few hundred lines and nevertheless have that become a full-featured, rich, beautiful, and complete layout for a web page or a web app UI is tremendously powerful, and it's no wonder that high-end web devs and designers take advantage of that. Until web design "compilers" approach anywhere within spitting distance (even within a factor of 2) of hand-coded designs in terms of maintainability, cross-browser compatibility, performance, and code size, that's not going to change.
Well, compilers may be good for languages such as C++. But for languages such as Python and Ruby, the executed code is just plain terrible compared to hand-written ASM. I don't see anyone writing Ruby code and complaining about that waste, though...
People (including me) like to delude themselves, though: "I'm not a web/database/embedded developer. Here's a tool that will give me a quick win." But in the end, because you find yourself debugging the mess that it generates, you have to rewrite by hand anyway, meaning that in the long run the quick win takes twice as long as just sitting down and learning how to do it. BTW, I think DSLs could be the next thing that leads people down this path. A great idea but...
Current and past implementations are or have been bad, sure, but why can’t it work in principle?
Which means asking questions like "this is 36 pixels high: is it a heading, or is this just a landing page with big text?" etc. So maybe I lack imagination, but I don't see it being that easy a problem to solve from a UI point of view.
Yeah, you would still be able to just ignore all that – just like you can adjust each headline individually in InDesign – but that would only mean that you are an incompetent user of the software. A good WYSIWYG editor wouldn’t always have to produce great markup but a competent user of the software should be able to easily make it produce great markup.
I’m really not that sure why adding a GUI for semantics would be so hard, especially since existing concepts like paragraph or list styles already map very well to HTML concepts.
Then there is the fact that something like an InDesign document doesn't really have any concept of separating the style from the actual content. For the time being at least, humans are much better at logically ordering and marking up text and then adding the style with CSS, unlike programs, which add a whole lot of extra markup and CSS to do the same thing.
Those are just a few of the things which make WYSIWYG work a lot better for print than it ever could for the web - and there are more... These things will probably be somewhat resolved as time goes on (i.e. with wider support for @font-face and so on), but I think that hand-written HTML will be better for a long time to come.
Being able to flexibly test (e.g. different fonts, different resolutions, different browsers) would certainly be one of the requirements, as would be the implicit assumption that pixel perfect designs are not possible.
As for style and content? InDesign does actually separate them to a degree. It’s not perfect but neither is HTML at that task. The concept of character, paragraph, list and table styles is central to InDesign. It’s what makes InDesign so great. It even has a bare bones text input mode where all you do is type completely unstyled text. It’s then easy to apply all the different styles you made to your text.
I think a truly powerful WYSIWYG HTML editor is eminently possible; someone only has to dare to do it.
Every time my artist friend wants to put their portfolio online, I suggest they start by writing some simple markup, and half an hour later I'm explaining the finer points of the box model while they whinge about how they'd rather be using Adobe Illustrator. The truth is that there will always be a group of people who can visualize what they want to create better than they can describe it verbally or symbolically.
The right way to do a GUI HTML editor is to provide some templates and some customizations.
Until then, deal with it.
We've already trod this path, and it led us to dark places.
Almost the entire page is duplicated within a `<!--[if lt IE 9]>` block...
HTML is high-level and designed for humans to read and write. It maps directly to the concepts it conveys.
I think the more appropriate comparison is to those software generators, or whatever they are called, in which you drag and drop buttons and text fields and have arrows (or something) to convey action.
You don't see a lot of good software made with those, now, do you?
HTML is not Assembly. It's not even C. It's Python, or Ruby.
In the end, humans are a lot better at describing things than algorithms, and I think that's the biggest problem with WYSIWYG HTML editors.
I think compounding all this inefficiency is how they have "<!-- group -->" on EVERY SINGLE DIV.
Someone should use the Tilt extension to visualise this page in 3D (http://hacks.mozilla.org/2011/07/tilt-visualize-your-web-pag...) and screen cap it.
Or, you could have "What You See Is What You Want, Based On What I Think Is A Good Way To Do This (But You Can Also Choose)"
Things like resolution independence, mobile/small-display support, gestural interaction support, impaired-senses accessibility, graceful degradation, and multi-browser compatibility all go into that, IMO. It may not be identical on every system, but it will look good and be usable.
I can't really see WYSIWYW,BOWITIAGWTDT(BYCAC) being the next sales paradigm shifting quantum leap of buzzwordology though.
One passage consisted of a string of empty divs of the form <div class="wrap"></div>, which a decent 'code generator' would simply have omitted. In fact, the bulk of the bloat appears to be divs which add nothing to the page.
I strongly suspect one previous version of this code generated tables, then they got the "tables used for layout are bad" memo and did a simplistic translation to nested divs.
Perhaps, to be charitable, they have debugging turned on.
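Stripping those no-op wrappers is the kind of trivial post-processing pass a generator could run before emitting output. A minimal sketch in Python (the function name and regex are mine, not anything Muse does, and a real pass would also have to verify that no CSS rule or script depends on the removed divs):

```python
import re

def strip_empty_divs(html: str) -> str:
    """Repeatedly remove divs that contain nothing but whitespace.
    Assumption: no stylesheet or script depends on these empty wrappers."""
    empty = re.compile(r"<div\b[^>]*>\s*</div>")
    while empty.search(html):
        html = empty.sub("", html)
    return html

bloated = '<div class="wrap"><div class="wrap"></div><p>content</p></div>'
print(strip_empty_divs(bloated))  # <div class="wrap"><p>content</p></div>
```

The loop matters: removing an inner empty div can leave its parent empty, so the pass repeats until nothing changes.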
Q. How many times do you inspect the compiled assembly to see what it generated?
One day these tools will be perfect, it's just a shame for me that Adobe have copied what I've done for the second time:
<div id="so-you-can-div-while-you-div">This site built in Muse (code name) by Adobe®</div>
I don't know what the output is like, though.
So instead of Adobe automagically converting my design to HTML/CSS/images, you get humans from India (or wherever) doing it by hand.
For $100 - $200 you can get clean, cross-browser, easily tweakable HTML/CSS and images. And you get it in 24 hrs.
I've used www.psd2html.com a few times and the quality was quite high.
I don't get that CSS snobbery. Tables do work in even older browsers. That's something you want from a tool that generates code for people who don't want to, or can't, put up with writing HTML themselves. For them the product has to work and look good in most browsers.
You really are idiots if you're worried about 'that much text' going over the internet. You have no concept of data.
Not to mention that when a browser doesn't render the page correctly, some people don't like wading through thousands of lines of div-soup when they could be debugging one hundred and fifty lines of markup instead...