Kind of like when I first dragged a window across multiple monitors in the 90s. We are so used to content being stuck in artificial containers that it's kind of crazy when it transcends them.
This is a privacy leak. I have a 24" screen, and I don't keep the browser window maximized because it would be too big. I presume other people do too, and I'm pretty sure most have a preferred size and position.
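The geometry values a page can read (window.screenX, window.screenY, window.outerWidth, window.outerHeight) make a usable fingerprint component. A minimal sketch of the idea, with the geometry passed in as plain values instead of read from a real window, and a simple djb2-style hash standing in for whatever a real tracker would use:

```javascript
// Sketch: folding window geometry into a fingerprint component.
// The field names mirror the real browser properties; here they are
// plain values so the hashing step can run anywhere.
function geometryFingerprint(geom) {
  const parts = [geom.screenX, geom.screenY, geom.outerWidth, geom.outerHeight];
  // djb2-style string hash over the joined values
  let hash = 5381;
  for (const ch of parts.join(',')) {
    hash = ((hash * 33) ^ ch.charCodeAt(0)) >>> 0;
  }
  return hash.toString(16);
}

// Two users with identical browsers but different window placement
// yield different fingerprint components:
const alice = geometryFingerprint({ screenX: 120, screenY: 80, outerWidth: 1400, outerHeight: 900 });
const bob   = geometryFingerprint({ screenX: 0,   screenY: 0,  outerWidth: 1920, outerHeight: 1080 });
console.log(alice, bob);
```

The point is that a custom, stable window position is exactly the kind of high-entropy, slowly-changing value trackers like.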
Allowing a site to execute JS in your browser is equal to trusting them, like it or not, and browser vendors are definitely in the business of adding new APIs rather than reducing attack surfaces.
But sometimes the content does not render at all, or the page layout is broken beyond recognition. Then I'll try temporarily allowing JS from the site's home domain (I've noticed most sites these days bundle JS from 5+ domains, most of which are analytics and social networking services). For maybe 80% of the sites that do not work without JS, this fixes the issue and I am able to read the content. It takes maybe two seconds to temporarily whitelist the domain and reload the page.
The rest are generally pages that assume the analytics library is always present in the page's JS scope and then crash when it is not, leaving the content unreadable because the JS layout code never runs. A quick peek at the JS console while the page is loading generally reveals what the issue is.
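The crash pattern is easy to reproduce: page code calls a blocked analytics global unconditionally, the ReferenceError aborts the script, and the rendering step after it never runs. A sketch, with a hypothetical `analytics` global and the DOM work abstracted into a callback:

```javascript
// Fragile version: assumes the analytics script loaded.
function trackAndRender(content, render) {
  analytics.pageview(); // throws ReferenceError when the script is blocked
  render(content);      // never reached: the content stays unreadable
}

// Defensive version: the layout code runs whether or not analytics exists.
function trackAndRenderSafe(content, render) {
  if (typeof analytics !== 'undefined') {
    analytics.pageview();
  }
  render(content); // rendering no longer depends on the tracker loading
}
```

A one-line `typeof` guard is all it would take for these pages to degrade gracefully.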
Sometimes I just ignore those pages, but if I really badly want to see the content, I can launch a one-off incognito window for the page and let it execute with all the JS tracking and social network code allowed. This solves the issues for almost all remaining pages. If problems persist, these are generally pages that are simply broken – maybe the page only works with some specific browser, like Google Chrome (I use Firefox), to start with.
And even then, I would say 70% of the time, some critical piece of content on a website does not work with JS disabled, be it images, text, or video.
If you want to watch video, you're out of luck. If you want to use a web app, you're out of luck. But if you just want to consume text content, the majority of the web just works, and a lot faster too.
(I've never been able to get NoScript to work right, it's always given me problems. Perhaps part of the problem is NoScript?)
What percentage of websites is that, though? Whether it is feasible depends, of course, on your browsing habits. I don't click social media stuff; I participate only if I really want to or if I am part of a community.
The majority of sites I visit are either regular revisits (rules are easily set up then) or random browsing where security & privacy by default is good.
I never used NoScript, but I am a bit of a uMatrix fan. There I can easily allow things. NoScript looked super complicated.
Then there's the occasional 'funny photo' site which won't work until you enable 15 different sources - in which case, I just pop open Chrome if I really want to see that funny photo.
Many years ago I tried to create a public DB/wiki telling us which things we need to turn on to get a page to work, but it got abandoned before I really got started.
Back in the 90's, Microsoft and Netscape were all too happy to give JS developers the world with almost no regard for security consequences.
We've spent the last 20 years trying to fix their mistakes.
Web developers have always pushed for more access to information about the user and their environment. Browser and tool developers are happy to provide that access. There's always some use case that sounds reasonable, but you're right that it's just a security issue waiting to happen.
These holes are also being talked about in the new Wayland display server on Linux. Warping a mouse pointer, color picking, knowing your apps place on the desktop are all security violations. They are being very careful with that stuff because it's an insecure free for all with X.
Every time I upload an attachment to gmail or a picture to facebook, I wonder how secure things are. Those seem to require user action, but do they really?
I know the quip about how in IT paranoia is not a sickness but a job requirement, but damn it...
As for web developers pushing for more information: no surprises there, it's so they can more precisely fine-tune the layout of the "app" (notice how they refer to what used to be called a site with a term that used to denote something running locally).
That it also can be used to fingerprint the computer, and by extension the user, is a side effect, not a goal.
Custom sized ones make you easier to identify only if you don't change them.
Personally I don't care either, just thought you might want to know!
It seems like the child windows are 'special', perhaps the web page can obtain the relative coordinates of these child panels?
I don't think the method you're describing exists. If you want child coordinates relative to parent coordinates, you would use both the parent and child's absolute coordinates.
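Computing it from the two absolute positions might look like this sketch, where the screenX/screenY pairs stand in for what the parent window and a child it opened via window.open() would each report:

```javascript
// Sketch: there is no direct "child position relative to parent" API;
// a page can only derive it from two absolute screen positions.
function relativePosition(parentPos, childPos) {
  return {
    x: childPos.screenX - parentPos.screenX,
    y: childPos.screenY - parentPos.screenY,
  };
}

console.log(relativePosition({ screenX: 100, screenY: 50 },
                             { screenX: 340, screenY: 210 }));
// → { x: 240, y: 160 }
```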
edit: I found the original site that I linked to this. Courtesy of archive.org: http://web.archive.org/web/20120214090814/http://blog.insicd...
The NeWS window system let you reshape the entire framebuffer to any orientation or clipping, as well as individual windows and sub-windows, in the late 1980's.
Why shouldn't I be able to lay down on my side next to a laptop and read a web page off the screen sideways, or adjust the rotation of the window to match the inclination of my pillow?
The other thing I want the window manager to support (which NeWS couldn't do since PostScript only supports 2D affine transforms) is estimating my head position relative to the screen, and projecting the window in perspective so it looks rectangular from an oblique viewing angle, so I can watch a movie on an extra screen off to the side.
If you could rotate windows around on the screen, Browser Ball should be able to use the laptop accelerometer to detect the true direction of gravity, and bounce the ball accordingly as you turned the windows and the laptop itself around.
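A sketch of the coordinate math that would be involved, assuming a hypothetical window manager that reports each window's rotation angle; in a browser the raw gravity vector would come from a `devicemotion` event's accelerationIncludingGravity:

```javascript
// Sketch: map a device gravity reading into a rotated window's
// coordinate frame, as Browser Ball would need. `windowAngleRad` is
// assumed to come from a (hypothetical) window manager API.
function gravityInWindow(gravity, windowAngleRad) {
  const cos = Math.cos(-windowAngleRad);
  const sin = Math.sin(-windowAngleRad);
  return {
    x: gravity.x * cos - gravity.y * sin,
    y: gravity.x * sin + gravity.y * cos,
  };
}

// A window rotated 90 degrees sees "down" along its x axis instead of y:
const g = gravityInWindow({ x: 0, y: -9.81 }, Math.PI / 2);
// g.x ≈ -9.81, g.y ≈ 0
```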
Unfortunately Apple stopped putting accelerometers in laptops with SSDs, since their only purpose was to park the hard drive heads during a fall.
You can do a lot of fun things by modeling and tracking the positions of multiple people's heads and multiple devices in augmented reality! Here are two people using two tablets, two laptops, and a desktop computer together with Pantomime: 
The pseudo-billboard idea is cute, but I wonder how it fares for readability on non-HD screens (font rendering in 3D space may impede it, and then there are viewing angles). I'd love to see it done anyway; it's a cute idea.
Another 'this is <year>' rant: we should have way more sensors on USB or I2C and have our laptops integrate with the real world more.
aQuery: http://donhopkins.com/mediawiki/index.php/AQuery
Morgan Dixon's work is truly breathtaking and eye opening, and I would love for that to be a core part of a scriptable hybrid Screen Scraping / Accessibility API approach.
Screen scraping techniques are very powerful, but have limitations.
Accessibility APIs are very powerful, but have different limitations.
Think of it like augmented reality for virtualizing desktop user interfaces. The beauty of Morgan's Prefab is how it works across different platforms and web browsers, over virtual desktops, and how it can control, sample, measure, modify, augment and recompose guis of existing unmodified applications, even dynamic language translation, so they're much more accessible and easier to use!
James Landay replies:
This is right up the alley of UW CSE grad student Morgan Dixon. You might want to also look at his papers.
Don emails Morgan Dixon:
Morgan, your work is brilliant, and it really impresses me how far you've gone with it, how well it works, and how many things you can do with it!
I checked out your web site and videos, and they provoked a lot of thought so I have lots of questions and comments.
I really like the UI Customization stuff, and also the sideviews!
Users could literally drag controls out of live applications, plug them together into their own "stacks", configure and train and graphically customize them, and hook them together with other desktop apps, web apps and services!
For example, I'd like to make a direct manipulation pie menu editor, that let you just drag controls out of apps and drop them into your own pie menus, that you can inject into any application, or use in your own guis. If you dragged a slider out of an app into the slice of a pie menu, it could rotate it around to the slice direction, so that the distance you moved from the menu center controlled the slider!
While I'm at it, here's some stuff I'm writing about the jQuery Pie Menus.
Web Site: Morgan Dixon's Home Page.
Web Site: Prefab: The Pixel-Based Reverse Engineering Toolkit.
Video: Prefab: What if We Could Modify Any Interface? Target aware pointing techniques, bubble cursor, sticky icons, adding advanced behaviors to existing interfaces, independent of the tools used to implement those interfaces, platform agnostic enhancements, same Prefab code works on Windows and Mac, and across remote desktops, widget state awareness, widget transition tracking, side views, parameter preview spectrums for multi-parameter space exploration, prefab implements parameter spectrum preview interfaces for both unmodified Gimp and Photoshop: http://www.youtube.com/watch?v=lju6IIteg9Q
PDF: A General-Purpose Target-Aware Pointing Enhancement Using Pixel-Level Analysis of Graphical Interfaces. Morgan Dixon, James Fogarty, and Jacob O. Wobbrock. (2012). Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. CHI '12. ACM, New York, NY, 3167-3176. 23%.
Video: Content and Hierarchy in Prefab: What if anybody could modify any interface? Reverse engineering guis from their pixels, addresses hierarchy and content, identifying hierarchical tree structure, recognizing text, stencil based tutorials, adaptive gui visualization, ephemeral adaptation technique for arbitrary desktop interfaces, dynamic interface language translation, UI customization, re-rendering widgets, Skype favorite widgets tab: http://www.youtube.com/watch?v=w4S5ZtnaUKE
PDF: Content and Hierarchy in Pixel-Based Methods for Reverse-Engineering Interface Structure. Morgan Dixon, Daniel Leventhal, and James Fogarty. (2011). Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. CHI '11. ACM, New York, NY, 969-978. 26%.
Video: Sliding Widgets, States, and Styles in Prefab. Adapting desktop interfaces for touch screen use, with sliding widgets, slow fine tuned pointing with magnification, simulating rollover to reveal tooltips:
Video: A General-Purpose Bubble Cursor. A general purpose target aware pointing enhancement, target editor:
PDF: Prefab: Implementing Advanced Behaviors Using Pixel-Based Reverse Engineering of Interface Structure. Morgan Dixon and James Fogarty. (2010). Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. CHI '10. ACM, New York, NY, 1525-1534. 22%
PDF: Prefab: What if Every GUI Were Open-Source? Morgan Dixon and James Fogarty. (2010). Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. CHI '10. ACM, New York, NY, 851-854.
Morgan Dixon's Research Statement: http://morgandixon.net/morgan-dixon-research-statement.pdf
Community-Driven Interface Tools
Today, most interfaces are designed by teams of people who are collocated and highly skilled. Moreover, any changes to an interface are implemented by the original developers and designers who own the source code. In contrast, I envision a future where distributed online communities rapidly construct and improve interfaces. Similar to the Wikipedia editing process, I hope to explore new interface design tools that fully democratize the design of interfaces. Wikipedia provides static content, and so people can collectively author articles using a very basic Wiki editor. However, community-driven interface tools will require a combination of sophisticated programming-by-demonstration techniques, crowdsourcing and social systems, interaction design, software engineering strategies, and interactive machine learning.
The way jQuery widgets can encapsulate native and browser specific widgets with a platform agnostic api, you could develop high level aQuery widgets like "video player" that knew how to control and adapt many different video player apps across different platforms (youtube or vimeo in browser, vlc on windows or mac desktop, quicktime on mac, windows media player on windows, etc). Then you can build much higher level apps out of widgets like that.
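In code, that adapter idea might look like the following sketch; `VideoPlayer` and the per-backend adapters are illustrative names for the pattern, not real aQuery API:

```javascript
// Sketch: one high-level "video player" widget API, backed by
// per-platform adapters, so app code never cares which player is underneath.
class VideoPlayer {
  constructor(adapter) { this.adapter = adapter; }
  play()  { return this.adapter.play(); }
  pause() { return this.adapter.pause(); }
}

// Hypothetical backends: a desktop player driven out-of-process,
// and an in-page embedded player driven through its JS API.
const vlcAdapter = {
  play:  () => 'vlc: sent "play" to the desktop app',
  pause: () => 'vlc: sent "pause" to the desktop app',
};
const youtubeAdapter = {
  play:  () => 'youtube: called the embedded player\'s play function',
  pause: () => 'youtube: called the embedded player\'s pause function',
};

// The same high-level app code drives either backend:
for (const adapter of [vlcAdapter, youtubeAdapter]) {
  const player = new VideoPlayer(adapter);
  console.log(player.play());
}
```

The higher-level app composes `VideoPlayer` widgets without knowing whether the backend is a browser embed or a desktop process, which is exactly the jQuery-style encapsulation being described.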
Target aware pointing is one of many great techniques he shows can be layered on top of existing interfaces, without modifying them.
His research statement sums up where it's leading: Imagine wikipedia for sharing gui mods!
Berkeley Systems (the flying toaster screen saver company) made one of the first screen readers for the Mac in 1989 and Windows in 1994. https://en.wikipedia.org/wiki/OutSpoken
Richard Potter, Ben Shneiderman and Ben Bederson wrote a paper called Pixel Data Access for End-User Programming and Graphical Macros, which references a lot of earlier work. https://www.cs.umd.edu/~ben/papers/Potter1999Pixel.pdf
This is really scary to you? I get that you can do fingerprints, and honestly there's a LOT more than just browser window position/size in them, but "really scary"?
Maybe we need to stop exaggerating this sort of stuff if we want people to take us seriously.
This tester uses it I believe: https://panopticlick.eff.org/
Maybe it only recognizes American addresses or something, but I don't know any of those.
edit: Ok I got it working by putting in Amsterdam, which happens to be a city and not an address.