> most websites have customers, we have users.
Interesting thought: Do Facebook, Google, etc have "users" and not "customers" for their consumer products?
For many people, leaving a digital platform can have bad effects on their social life. It's arguably not as bad as moving out of the country, but it's still a practically impossible hurdle for a lot of people.
Advertisers are the customers. Non-advertisers are the product. The goal is not to provide a good service to the product, it's to drive engagement of the product up so customers are happy.
To Google’s credit, YouTube Red, for instance, goes in that direction. Same for G Suite customers, Google Play users, etc.
Of course the power balance is tipped toward the companies that are willing to pay more, but that’s a balance, not a one-sided relationship.
For Facebook I think the picture is darker, but then that’s Facebook, after all.
Advertisers are the customers of the Facebook Ads product line (Business Manager, Boost, Campaigns, Facebook Pixel).
Users of Facebook Social Networking services are prospects tracked by Facebook.
- the tracking and targeting tools
- the social network
Users aren't the product. Interfaces for targeting segments based on users' data are one of the two products Facebook offers.
Insofar as they can be anthropomorphized, they don't even truly care about whatever their 'customers' are or their well-being, so long as the bottom line is optimized across time.
The rank and file humans that make up the functions of a corporation might be moral and may influence the corporation to make 'irrational' choices due to human morality, but that is not the rule.
I had a pretty spirited argument, ongoing for about a week, with my friend, who was my co-founder at the time, where my position was similar to yours. (Note: this is all in the context of U.S. law and government.) His argument was that it's impossible for any such entity to be amoral because, whether a corporation or a person, both are treated as persons or entities, meaning that they can express their will. So once a corporation reaches a certain point, it can indeed be moral, because it goes beyond a legal instrument and is directed by a collective of staff, leadership, and/or stakeholders. He would have said that in the case of a solo entrepreneur's company, it would just be an abstracted will of the entrepreneur. I eventually got to the point where I couldn't argue against the legal precedent of corporate personhood. I think there is a point when a corporation outgrows its founder and becomes self-sustaining, where its corporate culture dictates its "morals".
This is especially true when the corporation outgrows the control of its original founders who may have had a moral vision.
In the end the analogy I am making is that a corporation has reins, and as it grows it becomes increasingly unwieldy for the handlers (= staff) to direct it where it does not want to go (away from profit).
It is of course easy to have a corporation act morally when doing so overlaps with the optimization of its objective function. I don't consider such a happy occurrence to qualify as truly moral, however.
- Neal Stephenson, Snow Crash
That's nonsense. It is absolutely not "practically impossible" to leave Facebook, and anybody saying so is exaggerating its importance and being melodramatic.
On the other hand, it can be literally impossible for a person to legally leave their country.
All of these things can be worked around, but the alternatives are slower paced and less efficient and leave you ultimately out of the loop. So it's not a problem of imagination, it's a problem of wanting to be able to keep up with what's going on.
Now that I am settled in, my usage has wound down a lot. I have found a comfortable rhythm with the activities and things I need, but should I ever want to discover new experiences or join social gatherings, I would have to bring it up once again.
If only that sentiment were contagious. The status quo with every government department where I live seems to be "fk you, we're the government we do what we want." I wish more departments would take a "well our users have no choice so let's try not to suck too bad" attitude.
The quintessential examples are transportation departments and urban planners. A lot of them are aware of newer developments in those fields, and know that the current paradigms are problematic and misguided. But rocking the boat will just preserve the status quo and leave the person without a job and without references for any future one, especially if the change is potentially unpopular.
It all comes down to the same thing, though, which is the importance of making the site useful.
Does YouTube have the ability for authors to attach or display a text version of their video?
2) I think the videos were thrown up on YouTube in a "why not" sort of way, with minimal effort. Notice how the whole channel is just that specific event over that couple of days. I'm glad they posted them, but it was primarily an in-person event.
In reality, captioning isn’t that hard. Very few people are willing to go to the trouble of uploading a video but won’t caption it.
What obligation? They're not even obligated to put the video up; it's just a gesture of public service. Imposing an obligation to add captions on people who upload videos for free consumption is like giving a beggar part of your lunch out of generosity and having him complain that he likes his sandwiches with more cheese.
Not to specifically liken the deaf and hard of hearing to beggars, since all YouTube viewers are beggars in my analogy. However, the point I'm getting at is that when someone does you a favor like upload a video for free for your consumption you should generally be grateful and do what you can with it, not complain and demand more. It's just not the right attitude.
It's like FOSS. You can ask the developer for more, but not demand it.
I think if we want more captions from videos submitted without remuneration, it's gonna need to be automated. Have the machine do the work. I think YouTube already does this, now that I think about it.
EDIT: Added some more on the analogy in case people thought I had something against deaf people.
In fact I would be very surprised if people were not already using gig platforms like that for captioning already.
It’s more like you’re giving a sandwich to every beggar except the Deaf one. And eventually that Deaf beggar will starve.
Deaf people have significantly less access to public discourse. I think it’s the responsibility of every person engaged in public discourse to make their content accessible.
Transcripts are useful beyond hearing-impaired people, but the cost realistically must be balanced against the organization's other responsibilities...
No, actually it doesn't. You start with the automatic one that youtube creates and then edit it.
It takes about twice as long as watching the video (i.e. about 2 minutes for each minute of video).
Source: I've started doing this for videos I upload.
Yes, if your audio quality is bad, and youtube can't understand anything so you have to start from nothing it would be significant effort, as you say. But if your audio is clear, it's really not that hard.
If you want perfection (line breaks in logical places, not too much text at a time on the screen, captions synchronized perfectly with the speaker), the time goes up to about 4 or 5x, which is still not "significant effort".
This isn't really true, on YouTube it's basically a check box. Admittedly the quality isn't always perfect but it's trivial and better than nothing.
Affine. Affine linear space, my friend.
Actually their captions are the karaoke format and you can do decent typesetting. I hope for a future when all video has captions with CSS styling rather than the baked in format of TV news.
A colleague of mine is heavily involved in them and gets very animated championing them. It all sounds very interesting when he talks about them, but it doesn't tend to stick in my mind.
Maybe if you're an expert at it. Transcribing, then timing the subtitle seems like a lot of work.
Have you even tried the auto captions? YouTube is pretty good at it lately. I watched the first 2 minutes of the video with auto captions and it's about 80-90% accurate.
You can also add a transcript or a link to a transcript in the video description.
that is genuinely pretty cool and a good compromise between manually adding timestamps to captions and dealing with the hilarious but often very incorrect auto-captioning.
Youtube can do some automated transcription but it can be buggy.
For funsies, I recently did the transcription of a video for a creator I enjoy. There was a blog post based on the video but it wasn't a 1:1 match so it had to be tweaked by hand. It was time consuming.
Here is the (2016) blog post introducing the posters: https://accessibility.blog.gov.uk/2016/09/02/dos-and-donts-o...
My feeling is that gov.uk has got a lot better since, but it's low-information-density by design. This is absolutely the right thing to do for a lot of their users in a lot of cases, and doubly so when targeting users who might not speak good English for whatever reason. But I still feel that information meant for professionals could be presented in a more useful way.
(I know a few people who worked on various parts of it and for what it’s worth: they’re all legit. They care, and they’re good. It should come as no surprise that the site turned out the way it did, if they’re hiring these sorts of people.)
Having looked up the relevant part, let me transcribe it for you:
> "One of the first things that struck Jen and me as we entered the GDS office on an upper floor of an old office building high above a busy London street was a large sheet of butcher paper covering the picture window in the lobby. In the paper was a small cutout through which you could see the people on the street below. The cutout had a large arrow pointing to it, labeled "Users", reminding everyone when they walked in just whom the unit was meant to serve."
I found it really drove home the service's users-first approach. Following that anecdote he elaborates on the service's "10 commandments" (the 10 GDS Design Principles). They're also quite interesting, but I'm sure a search engine lookup can help you there. I really recommend the book if you want an optimist's outlook on the future of technology, government and economy.
I don't know if it was recorded or not, but if it was it's an interesting insight into getting things done in the UK government.
I imagine working for a government is interesting from an incentives perspective too, simply because the optimisation is not about profit in the market sense.
Related: GDS (the GOV.UK people) was a huge inspiration to us from the start. Both 18F and USDS (another Federal Gov org focused on digital transformation) have strong connections to GDS, and there's lots of conversation between the groups.
Specifically the posts like this https://gdstechnology.blog.gov.uk/2016/09/19/why-we-use-prog...
My only complaint is that they're sometimes too dogmatic about the one question per page thing, which is actually a pretty stupid UX commandment as soon as you get to complex logic flows.
I think they've relaxed a bit on that though, I didn't notice it last time I was doing my company tax return.
One of the "downsides" of it being a government site is that it needs to be usable for everyone. Which can be a bit frustrating for power users.
Except I knew which type I had to submit, and it was telling me the wrong one. I was obviously answering one question wrong. But I couldn't go back and couldn't see my previous answers; I could only start again.
That's basically a ten page glorified wizard with no back button, which is actually a big UX no-no. Must have gone through the damn thing 5 times before I finally got the right form.
A PDF document can be dated and versioned, you download a report from April 2010 and you can keep that as a document. You can then download a newer one if you want, the older version is still available and largely immutable. Archiving a dynamic web page is a lot harder.
How is it any harder, let alone a lot harder? Save a timestamped copy of the file. Save it as a web archive if you're worried about assets disappearing. Hell, pretty much every platform lets you save pages as PDFs natively; good luck doing the opposite.
Want to have a forum as an optional /extra/ on an otherwise static page? Maybe OK, but please have a way of getting that forum as a different page (which is updated on some basis, maybe per post, maybe with a cooldown of maximum 1 generation every X or only during low server load, etc).
So can an HTML page.
> you download a report from April 2010 and you can keep that as a document... Archiving a dynamic web page is a lot harder.
Right click, save page as... Am I missing something?
In addition saving web pages is a huge pain with all the scripts and CSS to save as well, so then you need to compress that.
Lastly, saving and browsing saved HTML is a pain on mobile and similar devices without a specialized tool.
I don’t understand this argument at all. The exact same thing can be said for HTML pages.
> In addition saving web pages is a huge pain with all the scripts and CSS to save as well, so then you need to compress that.
That’s what the webarchive format is for.
Part of the reason may be that some browsers download PDFs for display in the user's native reader. Though, even in-browser PDF readers like Firefox on desktop have a toolbar with a prominent save button. The use of PDF is a signal to the user that the document can be saved.
The success of this depends on how the page is written. If it fetches some content dynamically you may not get a full and accurate picture of the contents of the page as viewed at the time the article was available live.
I remember Obama said something along these lines to tech leaders: "Your job is a lot easier when you only have a select group of customers to please, but when you have to cater to everybody and every interest group, things are a lot harder." Paper-based documents will continue to be used for a decade or two more.
Why can't we just have both? Too much manpower in auditing or publishing? Well, that is what tech is for: automate it. The tech should be catering to its users, which is not just the public but civil servants as well (I can't believe I just wrote that), not trying to force the tech on everyone else.
P.S. - I thought PDF/A was an open standard. Why is everyone suggesting PDF is a closed format?
In general the landscape of PDF compatibility has improved a lot, but it's still a lot worse than HTML.
Is there a test suite for PDF/A compatibility?
I hope not. I look forward to the death of paper.
Guess what libraries provide? Free book and computer access. Nothing changes.
Paper is an amazing medium for transmitting information: cheap, battery-free, easily copied or lended, transferable, DRM-free, non-proprietary, no monthly subscription needed...
Now books, easily copied?
When's the last time you tried to share a copy of a book with a friend? Or did you buy a blank one and copy it by hand? While that's a laudable example of dedication, it's neither easy nor convenient.
Cheapness is likewise debatable; printing out a 300-page book is fairly expensive.
Most other concerns are concerns only if you use broken-by-design sources.
They look great printed on paper, but they're not very transferable.
I know there is software out there that tries to parse tables out of PDF documents, but in my experience you'll still end up making manual adjustments afterwards to correct what the parser could not infer.
That's the main reason why I support this motion. At least make the data both PDF + HTML, so you have options.
Some people provide their latex, which improves matters, but most people don't.
Funnily, Donald Knuth's AoCP fascicles ship as postscript files, which Preview.app converts to PDFs that can't be copied from.
This relies on what should be a reasonable assumption that the latex source is provided.
I mean, it's a small screen.
A proof of concept is here: https://thelocalyarn.com/excursus/secretary/ On every page at that micro-site, you can view the Pollen markup source, or the PDF version. The PDF and the HTML are generated at the same time, from the same source.
Another example is my blog The Notepad (https://thenotepad.org/). Both sites have links to their source on Github.
You also end up with this awkward two-step between the format and the tools. If some capability is missing, you wait for the format to define a way to do it, then the toolmakers to support it; or less ideally, the toolmakers define their own incompatible ways of doing it without waiting for consensus.
This covers the problems I have with Markdown. It has wide support, but because vanilla Markdown only covers a 1995-era subset of HTML, there are all kinds of things (footnotes, figures, formatted code blocks, etc.) that people want it to do. Any given editor or CMS or site generator will support 95% of your preferred flavor's way of doing things and disagree with your other tools about the last 5%.
The difference with Pollen is that it isn't a format or a markup specification; it's a programming environment. So you design the markup, and you tell it how to get from the source markup to your target format. The format is yours and the implementation is yours; they are one and the same.
It is a bit more work, true, but it's less work in Pollen than it would be in any other environment because it does parsing for you and applies your transformations in a logical, ordered way.
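The "design the markup, then tell it how to reach each target format" idea can be illustrated without Pollen itself. Here is a toy sketch in Python (the document structure and tag names are made up for illustration, and this is far simpler than what Pollen actually does): one structured source, two independently generated presentations.

```python
# A toy single-source document: a list of (tag, text) pairs.
doc = [
    ("h1", "Annual report"),
    ("p", "Revenue grew 12% this year."),
]

def to_html(doc):
    # Render each node as an HTML element.
    return "\n".join(f"<{tag}>{text}</{tag}>" for tag, text in doc)

def to_latex(doc):
    # Render the same nodes with LaTeX conventions instead.
    render = {
        "h1": lambda t: "\\section{" + t + "}",
        "p": lambda t: t,
    }
    return "\n\n".join(render[tag](text) for tag, text in doc)

print(to_html(doc))
print(to_latex(doc))
```

Both outputs come from the same source tree, so a content fix propagates to every presentation automatically, which is the point being made about separating content from presentation.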
Adobe Acrobat Reader is now only supported on macOS and Windows. And only Acrobat Reader fully supports all the varieties of forms in various PDF specs. I think it's a big problem for government to in effect require the use of a monolithic and proprietary operating system to fill out government forms.
And then the whole state of signing PDFs is a confusing mess. Learning how to create or buy a certificate is irritatingly confusing, and so is figuring out where the certificate goes and how to use it. Search for guides and you get completely different instructions depending on platform and Acrobat version. The latest versions of Acrobat Reader let you do a thing called signing a document, but it doesn't use certificates and doesn't let you add a password to prevent modification; you can only add a text/image/drawing of a "signature". No doubt this confusing feature exists because certificate-based signing is so difficult. And verifying digital signatures isn't something people know how to do: technical companies I work with won't accept them unless the digital signature includes a visible human signature and the PDF security options allow printing the PDF!
* If your internal workflow is designing documents then the conversion to a web document will be clunky at best.
* Every end-user device has a PDF reader and their web browser probably opens it transparently.
* You need the PDF either way because the website will never be the authoritative source, so now you have the problem of making sure they don't get out of sync.
* You probably have a few internal graphic designers that can do wonders with print but it's unlikely you have an internal web development team.
* It's much easier to make a PDF accessible than a website. (Before you disagree remember that the offices we're talking about likely don't have a dedicated web designer) You can be sure it will print correctly, and for most users it's automatically offline.
I'm pretty sure the team at Gov.uk wrote a pretty decent article on this topic.
1. The web is a medium where you cannot (within reason) exercise too much control over the design of things. You lay out general guidelines that tell the browser how to render whatever content is thrown at it. I won't pretend this gets us predictable design, but at least it's uniform, which is good for human consumption.
2. The web is a medium that lends itself to structural document construction in such a way that it is easy to use and reuse the content in ways not anticipated. This is good for machine consumption, which in turn empowers the human.
These arguments may seem philosophical, but they do have real-world effects. Once I managed to explain it in ways the customer understood, they were always eager to be of assistance in the endeavour.
Zooming PDFs on mobile can be a pain but it's certainly less of a pain than using some fixed width site from the early 2000s with a jQuery menu you can only operate with a mouse and links so small that a mouse would fat finger them.
Machine consumption of the documents is one of those benefits that sounds cool but really just boils down to SEO because very few other things are going to be scraping your site. Certainly worth something but it's usually not high on the priority list for governments and search bots scrape PDFs anyway.
I think the main point in favor of the web is that you can have multiple presentation layers for the same underlying content and you can improve presentation independently of content. Not necessarily that it's reusable because that's just templates and a style guide but that you can backport all your fixes.
All the issues you stated stem from the fact that people don't want to change. Just because it's not the easiest way doesn't mean it's not a good one.
Everyone that can design a PDF can learn to design a nice HTML document.
I don't think it's that people don't want to change, you're still going to need print designs regardless and it's much easier to host a print design than it is to convert a website into one.
> Everyone that can design a PDF can learn to design a nice HTML document.
I mean I wish that was true. I think people can learn to write content in a web compatible way but a designer that spends all their time in PS and ID isn't going to suddenly be able to crank out high quality web pages.
Hopefully not forever.
Let's take perhaps the simplest most important accessibility feature:
How do I increase the font size without making the content absurdly wide, while viewing a PDF?
You'd have the same problem trying to read a regular book through a mobile-phone-sized window. That doesn't make books an unsuitable vehicle for document delivery.
It's not a developers job to tell me whether or not my device is suitable for reading their information because they can't be bothered to make it available as HTML.
Certainly not if the publishers took that attitude. And what you consider suitable simply isn't relevant if half the audience is now using a mobile device.
What we really should have done for our governments is figured out the 20 or so essential layouts and pre-coded CSS for them, but always allowed the government to fallback to custom HTML / CSS.
Normal human beings understand pages, in the papery sense of the word. They don't understand viewports. It takes an enormous leap of abstraction to reason about a document that could be any shape or size. Responsive design is a fundamentally unintuitive process. A lot of professional designers fail to understand this, even after years of designing for the web.
Documents are generally created and laid out for print and email distribution, and often have very carefully laid out data tables, charts, and page-number referencing (as well as less important cosmetic formatting). Simply changing the file format isn't going to solve the accessibility problems and will create new formatting/compatibility issues. And changing the workflow of every member of the civil service to require an accessible HTML generation tool in place of word processors, for anything that might later be made public, isn't a trivial undertaking either. Said tool is probably going to want to convert those nice HTML files back to PDF or Word for restricted email distribution anyway...
Yes, but it's not a fully ACCESSIBLE document, which is half of the point of the article.
You can write everything in Markdown and export to your org's Word template and have HTML, LaTeX, etc. export.
These people aren't programmers, most of them will only have a loose grasp of what a file format is. What they need are tools that don't require a programming background.
This might be considered a positive.
I set myself low-bar goals for measuring government engagement. The Australian whole-of-government portal is abjectly awful; it has well-done 2FA but continually nags me with badly designed 'do we have your permission' and 'remember you're talking to government' interstitials.
The state government planning site uses a web design method which is simply unworkable on touch: the 'permit us to say we want you to agree to terms and conditions' overlay won't scroll when the underlying page does, so the [agree] button can't be pressed because it's off-screen. Gak!
"They’re not designed for reading on screens"
That's a subjective distinction.
"It’s harder to track their use"
Not relevant for most .gov use cases. Also, sounds like something that isn't a user problem.
"They cause difficulties for navigation and orientation"
So does responsive design that makes discovery of content difficult in many scenarios, especially atypical scenarios.
"They can be hard for some users to access"
So can HTML where a poor accessibility process is in place.
"They’re less likely to be kept up to date"
Conversely, they make it easier for a consumer to understand when changes take place, and they encourage a stronger release process.
"They’re hard to reuse"
That may or may not be a bad thing.
Also, I'm not sure how 'not designed for reading on screens' is subjective. PDFs are paper-document oriented and don't display well on anything other than a large monitor in virtually every case I've seen. When delivering content in a web browser, why on earth would someone prefer to view a PDF over a reasonably well laid out HTML version?
I think PDFs are common in gov't websites because much of the internal culture is paper-document centred, and having those nicely printed PDFs solve internal problems for gov't employees, not because they are actually any better for the users of the website.
Canadian Gov't websites are full of PDF content too. :(
... they can be easily created from popular applications that people are already using to author and share documents."
This appears under the heading "Why do people use PDFs?"
However I would have listed this as the sole reason that documents should be distributed as HTML. The reasoning is simple.
Imagine a hypothetical where one has a choice of distributing documents in two formats, A and B, and there are particular advantages to each format. As such, some users prefer format A, while others prefer format B. Not to mention those users who would like to have both formats available.
In the hypothetical, users can easily convert from format A to B however converting from format B to A is difficult.
Assuming one can distribute the documents in format A, it makes no sense to distribute only in format B: users who prefer format A will be unhappy, since they cannot easily convert B to A.
Distributing in format A keeps users who like format B happy because they can easily convert from A to B.
Think of it this way. If you're publishing a pdf, then you master the formatting using your word processor (latex, word, what have you). On the other hand, if you're publishing on a responsive web site, then you really ought to have a content management system to guide you through the requirements of the platform. It's a significantly higher hurdle, both for the authors and the platform owners.
As a bioinformatician, big tables inside pdfs are essentially useless. What's the point of a few hundred rows worth of a table if you can't manipulate it with whatever tool you prefer?
Moreover, I'm writing my thesis and dealing with many pdfs from the 90s in most of which I can't just highlight and copy text so I need to type it out like a savage. Is it guaranteed that today's pdfs will be easy to handle for future people?
In my opinion, publishing should be done in plain text and .tsv files, and the onus of displaying it on screen should fall on the editor (isn't that their job anyway?).
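The case for .tsv over PDF tables can be made concrete. A minimal sketch, with hypothetical gene names and values, of what a plain .tsv enables with nothing but the standard library, compared with a table locked inside a PDF:

```python
import csv
import io

# A hypothetical supplementary table, as it might ship in a plain .tsv file.
tsv_data = "gene\texpression\nBRCA1\t12.5\nTP53\t8.1\nEGFR\t20.3\n"

# One call turns it into usable records (a real file would be opened instead).
rows = list(csv.DictReader(io.StringIO(tsv_data), delimiter="\t"))

# ...which any downstream tool can then filter, sort, or join.
highly_expressed = [r["gene"] for r in rows if float(r["expression"]) > 10]
print(highly_expressed)
```

The same table inside a PDF would first need a table-extraction tool plus manual cleanup, as noted above, before any of this is possible.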
For bonus points, simply distribute the latex file so that the users can convert and read it in whatever alternative format they prefer besides html.
Or basically, semantic focus rather than presentation.
Using PDF means having to deal with crappy PDF software, hurting accessibility and scriptability and adding needless overhead on the other end.
I wish these systems were designed by sane and benevolent programmers, rather than Pointy Haired Bureaucrats.
( https://www.dataprotection.ie/docs/raise-a-concern-Form/m/17... )
You might think what does sport have to do with gov website UX? But it's all politics - GDS delivered and it is making the other politicians and gov officials look bad so they are being undermined.
That's my reading of that anyway - no good deed goes unpunished.
I like making HTML reports (rmarkdown), but sharing them requires telling people to download then open them in a browser. Google drive, for example, happily just shows you a preview but then if you click on the file you get raw html. Customers just don't understand.
PDF however, is absolutely fine to move around as a single chunk, but has problems in almost every other way.
There is MHTML, but sadly it fails the second bit because AFAIK Chrome dropped its support and FF and Safari can't open it natively. Apple has its own WebArchive format, and Firefox's MAF extension generated MAFF files, but it's not compatible with newer versions.
I wish there was an agreed-upon standard format that had cross-browser support.
Does HTML with data URIs meet these criteria?
Technically, all this is fine - the problem comes entirely from not having a nice agreed way of opening them. Which makes it more frustrating.
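For what it's worth, the data-URI approach is easy to sketch: every external asset is base64-encoded and embedded directly in the HTML, so the result is one self-contained file. A minimal example (the image bytes here are a fake placeholder, not a valid PNG):

```python
import base64

# Hypothetical asset bytes; in practice you'd read them from e.g. chart.png.
image_bytes = b"\x89PNG\r\n\x1a\nfake-image-data"

# Encode the asset as a data URI so no separate file has to travel with the page.
data_uri = "data:image/png;base64," + base64.b64encode(image_bytes).decode("ascii")

html = (
    "<!DOCTYPE html>\n"
    "<html><body>\n"
    "<h1>Self-contained report</h1>\n"
    f'<img src="{data_uri}" alt="embedded chart">\n'
    "</body></html>\n"
)

# `html` is now a single document: no folder of assets to lose track of.
print(len(html))
```

The trade-off is that base64 inflates asset size by about a third and nothing is shared between documents, which is presumably why no browser made this the default save format.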
That, in chrome, generates an html file and another folder. I would need to tell users to download the html file, download the folder, put them in the same place and open it in chrome. If I put it on google drive to share it, they'll just be shown raw html unless they then download and open it.
I also tested it on a page using chrome and it didn't properly load the pictures.
I'd be surprised if there isn't a similar Chrome feature. Chrome and Firefox usually have all the bells and whistles.
Web CMS were invented (among many other reasons) to let non technical people write and edit content directly inside the browser and publish it. I wonder if gov.uk doesn't have a CMS or their authors don't want to use it.
HTML in the browser is the best tool for consuming documents; we can read and write on any device, bookmark documents or chapter headings, resize and style at will in a device independent manner, enhance documents and make them interactive, even add videos if required. The best part is that the source is all stored in plain text and is version controlled without requiring Sharepoint or similar.
I always look up to the fixed width font model used by IETF RFCs. They are extremely readable and searchable and last for a long time.
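That fixed-width convention is trivial to reproduce: hard-wrap body text at a fixed column limit (RFCs traditionally keep lines at or under 72 characters). A sketch with the standard library, using the well-known RFC 2119 boilerplate as sample text:

```python
import textwrap

paragraph = (
    "The key words MUST, MUST NOT, REQUIRED, SHALL, SHALL NOT, SHOULD, "
    "SHOULD NOT, RECOMMENDED, MAY, and OPTIONAL in this document are to "
    "be interpreted as described in RFC 2119."
)

# Hard-wrap to 72 columns, the traditional RFC line width.
lines = textwrap.wrap(paragraph, width=72)
for line in lines:
    print(line)
```

Because every line fits a fixed-width terminal or printout, the text renders identically everywhere and stays grep-able forever, which is much of why RFCs have aged so well.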
Someone mentions "blockchain" in 3... 2... 1...
Not a stable format as in still gets updated? Very few formats are stable by that yardstick, and PDF certainly is not one: https://www.iso.org/standard/63534.html
> Will a browser in 2028 be able to render those pages correctly ?
Correctly as in "in a way which can be consumed", most certainly. A modern browser can consume and render 20-year-old websites just fine.
> If the content needs to be available for a longer time PDF/A would be a much better choice than HTML.
It really is not. PDF is a very complex format and an absolute bear to manipulate and extract data from.
Why not? Modern browsers can still render pages from the 1990's.
Still, there is so much out there that is available only in one of the MS Office formats, and Gov.UK is apparently doing better than that. So there is actually some cause for celebration here, IMHO.
PDF/A can certainly accept, and a government policy can require the use of, accessibility features and also even digital signatures.
However, gov.uk already does both, so it's not an actual issue here.
Having hard pagination and consistent layout is, for me, a cognitive gain, especially for longer documents, say, 20+ pages. (I frequently read 500+ page docs.)
Other formats such as ePub are frequently compact in space utilisation, but again, the free-flowing text lacks the mnemonic framing of even a basic print book, let alone the expertise of a masterpiece of layout & typography such as Tufte.
Not that HTML isn't well-suited to other cases, or that PDFs can't be awful. But there's a place.
And the only option to disable it is to click through and install a browser add-on to opt out.
This doesn't seem very GDPR-friendly.
See https://github.com/alphagov/govuk_frontend_toolkit/blob/cf1c... and https://github.com/alphagov/govuk_frontend_toolkit/blob/cf1c...
The fact that they use Piwik on a couple of pages they consider sensitive (usually to do with payment) shows that even gov.uk know this cannot be relied upon to fully hide things.
It shouldn't be there and I've raised a complaint with the ICO.
Note that the page says that it can be configured to strip all that info, not that it does by default. One would have to look at each page to see how this is configured. And it could still be wrong to switch this on by default, under the GDPR.
But they do not, on that very page. I resized the page up to full screen and then back again in a WWW browser and all that happened is that huge areas of whitespace opened and closed around the text, which remained word-wrapped in exactly the same places.
And it's clearly you who has some axe to grind. I merely point out that the behaviour of the very page itself is not as the article describes the operation of that WWW site. It does not behave as advertised, and does not change to suit the size of my device. The headline remains word-wrapped after the word "should", for example, and huge areas of whitespace open and close around it.
I think the OP's point was that they wanted the "responsiveness" to not have an upper limit on width: that if (for example) they had a 4K monitor 3,840 pixels wide and maximized the browser window, they wanted the site layout to reflow to use all 3,840 pixels of width (minus window decorations) to lay out content. As it is now, on a monitor that wide the site will only ever use 1,023 pixels of width to lay itself out, leaving large margins of unused space on the sides.
A bit more research shows this CSS declaration on a <div> with the class of "container": max-width: 964px;.
Turning off that one CSS declaration allows the site to widen to fit a maximized browser window.
"If you need to convert files from one markup format into another, pandoc is your swiss-army knife. Pandoc can convert documents in (several dialects of) Markdown, reStructuredText, textile, HTML, DocBook, LaTeX, MediaWiki markup, TWiki markup, TikiWiki markup, Creole 1.0, Vimwiki markup, OPML, Emacs Org-Mode, Emacs Muse, txt2tags, Microsoft Word docx, LibreOffice ODT, EPUB, or Haddock markup to:
- HTML formats: XHTML, HTML5, and HTML slide shows using Slidy, reveal.js, Slideous, S5, or DZSlides
- Word processor formats: Microsoft Word docx, OpenOffice/LibreOffice ODT, OpenDocument XML, Microsoft PowerPoint
- Ebooks: EPUB version 2 or 3, FictionBook2
- Documentation formats: DocBook version 4 or 5, TEI Simple, GNU TexInfo, Groff man, Groff ms, Haddock markup
- Page layout formats: LaTeX, ConTeXt, LaTeX Beamer slides
- PDF via pdflatex, xelatex, lualatex, pdfroff, wkhtml2pdf, prince, or weasyprint
- Lightweight markup formats: Markdown (including CommonMark and GitHub-flavored Markdown), reStructuredText, AsciiDoc, Emacs Org-Mode, Emacs Muse, Textile, txt2tags, MediaWiki markup, DokuWiki markup, TikiWiki markup, TWiki markup, Vimwiki markup, and ZimWiki markup
- Custom formats: custom writers can be written in lua."
PDF is terrible because you cannot easily or sensibly parse out even the text. That also makes it hard to diff two versions and see what changed.
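For contrast, two revisions of an HTML source can be compared with ordinary line-oriented tools, with no text-extraction step. A small sketch (the filenames and content are hypothetical):

```shell
# Two hypothetical revisions of a published HTML document
printf '<p>The tax rate is 20%%.</p>\n' > policy_v1.html
printf '<p>The tax rate is 21%%.</p>\n' > policy_v2.html

# diff works directly on the plain-text source and pinpoints the change;
# with PDF you would first need a lossy text-extraction step
diff -u policy_v1.html policy_v2.html
```

This is also why plain-text formats fit naturally into version control, while binary or container formats like PDF generally need special tooling to review changes.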