

Understanding the iPhone 6 Plus Screen - elo
http://oleb.net/blog/2014/11/iphone-6-plus-screen/

======
userbinator
Does anyone wonder what's with the odd 414x736 resolution? I can't say I've
ever heard of that resolution before; 360x640 at 3x would fit the 1080x1920
native resolution perfectly, as would 540x960 at 2x. Apple's "Think
Different" mentality at play here?

~~~
refulgentis
Word on the Apple rumor street was that the display was supposed to be a full
3x 414x736 (1242x2208), but the display yield was too poor.
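For reference, the arithmetic works out exactly (a quick sketch; the point and panel sizes are from the article, the 1.15 factor is just division):

```python
# iPhone 6 Plus: UIKit renders at 3x the logical point size,
# then the result is downsampled to the physical 1080x1920 panel.
points = (414, 736)                       # logical points reported to apps
render = (points[0] * 3, points[1] * 3)   # @3x render target
panel = (1080, 1920)                      # physical panel resolution

print(render)                 # (1242, 2208)
print(render[0] / panel[0])   # 1.15 -- horizontal downsampling factor
print(render[1] / panel[1])   # 1.15 -- vertical matches exactly
```

So the rumored 1242x2208 panel would have been exactly the @3x render target, with no resampling step at all.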

~~~
ChuckMcM
That would be my guess as well: a compromise, with no sapphire and a fudged
screen. That said, my takeaway from this article was that once your screen
resolution gets above 300 ppi, additional resolution is not nearly as
important as it is below that.

~~~
Bud
The sapphire is a completely separate issue and has nothing whatever to do
with the screen resolution.

In addition, it's not even clear yet that a sapphire screen on an iPhone (as
distinct from a watch) would be an improvement, because sapphire apparently
shatters more easily. That is why Apple is reportedly working on a screen
that bonds a layer of sapphire on top (sapphire is harder, so more
scratch-resistant) to a layer of glass below, for durability.

------
jordanthoms
iOS still doesn't have proper resolution independence, hence hacks like this.
There's no reason to do post-scaling on a rendered image unless you have
poor software that can't render at the proper resolution in the first place.

If anyone else did this they'd be pilloried for the battery-wasting,
font-destroying hack that it is, but Apple gets away with it.

This is true on the Retina MacBooks as well: when I set mine to the "looks
like 1680" or "looks like 1920" setting, fonts get blurry due to this
scaling. If the software were better, it would render at the native
resolution and fonts would look as good as they do at the 2x setting (where
the numbers work out so that you can scale without a loss of fidelity).

~~~
ridiculous_fish
I worked on resolution independence on OS X over a period of years. I think
the above comment is not made with full appreciation of the difficulty of the
problem. Supporting arbitrary resolutions well is very difficult.

The underlying frameworks (e.g. CoreGraphics) are absolutely capable of
rendering at arbitrary resolutions, and have been for a long time. It's not
particularly hard: you interpose an affine transform which converts from local
to device coordinates, apply it to all your geometry, and it falls out.

But "rendering at arbitrary resolutions" is not the same as "looks good at
arbitrary resolutions." For example, a line 1 px wide at 1x will be 1.5 pixels
wide at 1.5x. That means at least one edge must be aligned on a partial pixel,
and there's the potential for weird antialiasing effects. There's also a
behavior change: you cannot redraw half a pixel, so now "dirtying" a pixel
requires redrawing more components at 1.5x than at 1x, which can cause
performance problems and other bugs. And there's also a question of which
pixels get the partial alignment: do you round in a direction? If so, which
one? Or maybe you don't round and you have two pixels that are a quarter
covered?
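The partial-pixel problem is easy to make concrete (a toy calculation, nothing framework-specific):

```python
# A 1-point-wide line starting at point x=3 occupies [3, 4) in local
# coordinates. At a 1.5x scale it occupies [4.5, 6.0) in device pixels:
scale = 1.5
left, right = 3 * scale, 4 * scale
print(left, right)            # 4.5 6.0

# Device pixel 4 is only half covered -- the renderer must either
# antialias it (a gray fringe) or round the edge to a whole pixel.
coverage_of_pixel_4 = min(right, 5) - max(left, 4)
print(coverage_of_pixel_4)    # 0.5
```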

Which leads to the problem of centering. I wish to center a bitmap image
within a button's border. The image is 101 logical points high, and the border
is 200 logical points high. With a 2x scale factor, I can center absolutely,
and still be aligned to device pixels. With a 1x, 1.25, 1.33, etc. scale
factor, centering will align me on a partial pixel, which looks like crap. So
I have to round. Which way? If the goal is "make it look good," then the
answer is "whichever way looks good," which depends on the visual style of the
bezel, i.e. whether the top or bottom has more visual weight. So now we need
hinting.
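The centering example above reduces to a few lines of arithmetic (same numbers as in the text):

```python
# Centering a 101-point image in a 200-point container leaves a
# 49.5-point offset. Whether that lands on a device pixel boundary
# depends entirely on the scale factor:
image, container = 101, 200
offset_points = (container - image) / 2   # 49.5

for scale in (1.0, 1.25, 1.33, 2.0):
    offset_pixels = offset_points * scale
    aligned = offset_pixels == int(offset_pixels)
    print(scale, offset_pixels, "aligned" if aligned else "partial pixel")
```

Only the 2x case comes out aligned (49.5 points is 99 whole device pixels); every non-integral factor lands the image edge on a fraction of a pixel.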

And that's where things start to get really nasty. In order to make things
look good at arbitrary resolutions, we want to round to device pixels. But the
rounding direction is not a local question! Consider what happens if we have
two visual elements abutting in logical coordinates, and they round in
opposite directions: now there's a device pixel between them. That's very
visible: you get a pixel crack! So you have to coordinate rounding.
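The crack is just inconsistent rounding of a shared edge (a minimal sketch):

```python
import math

# Two elements abut at logical x=7. At 1.5x the shared edge falls
# at device x=10.5. If the left element rounds its right edge down
# and the right element rounds its left edge up, device pixel 10
# belongs to neither element -- a visible one-pixel crack.
scale = 1.5
edge = 7 * scale                    # 10.5
left_right_edge = math.floor(edge)  # 10
right_left_edge = math.ceil(edge)   # 11
print(right_left_edge - left_right_edge)   # 1 uncovered device pixel
```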

WPF is a good example of a framework that attempted resolution independence
and encountered this problem. Initially it had the "SnapsToDevicePixels"
property, which triggers rounding behavior at draw time. But draw time is too
late, because of the "abutting elements rounding in opposite directions"
problem. So they introduced the "UseLayoutRounding" property, which
does...something. And the guidance is basically "turn it on and see if it
helps, if not, disable it." Great.

The web also has this problem in spades. Websites break in all sorts of fun
ways when you zoom in or out. We tolerate this because, frankly, the bar is
super-low for websites.

As I see it, the two options are:

1. Make everything vectors. You'll have to choose between weird antialiasing
artifacts and potential pixel cracks; either way things will look bad. And
you'll encounter bitmaps eventually, and have to deal with the necessities of
resampling and pixel aligning at that point.

2. Scale only to integral sizes, and resample. You'll avoid antialiasing and
pixel alignment issues, but pay a performance penalty, and things may look
slightly blurry.
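Option 2 in miniature (a toy 1-D box-filter resample; the 1.15 factor is the 6 Plus's, everything else is made up for illustration):

```python
# Render a crisp black bar at an integral scale, then downsample it
# by a non-integral factor. The edges, sharp in the source, come out
# as intermediate gray values -- the "slightly blurry" trade-off.
def downsample(pixels, factor):
    # average the source samples that fall inside each destination pixel
    out = []
    n = int(len(pixels) / factor)
    for i in range(n):
        lo, hi = i * factor, (i + 1) * factor
        total, weight = 0.0, 0.0
        j = int(lo)
        while j < hi:
            w = min(j + 1, hi) - max(j, lo)  # overlap of source pixel j
            total += pixels[j] * w
            weight += w
            j += 1
        out.append(total / weight)
    return out

src = [0] * 6 + [255] * 6 + [0] * 11   # crisp bar, 23 source samples
dst = downsample(src, 1.15)
print(dst)   # edge pixels are now fractional grays, not pure 0/255
```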

So which option is better? #1 has the potential for the highest-quality
output, but at a significant price: developers must test their apps at more
scale factors, and the failure mode is ugly drawing artifacts or outright
bugs. #2 is more utilitarian: the output is not as nice, and you incur a perf
penalty, but that's borne by the system instead of the apps, and the overall
system is more consistent. #2 is also more forward looking: if you expect that
pixel densities will continue to increase, then resampling artifacts will
eventually be indiscernible, but a pixel crack will always be visible.

Apple took the practical and forward-looking approach to this problem. I can't
fault them: to my knowledge, nobody has successfully implemented true
resolution independence in a framework with wide adoption. If they have, I'd
love to know how!

~~~
grey-area
_Make everything vectors._

IMHO this is what Apple should have done a long time ago, certainly on iOS.
Yes, it has issues, but those issues are mostly faced by the OS vendor _once_,
and they have been successfully dealt with in the case of font rendering, for
example with antialiasing and hinting. You don't hear anyone decrying fonts
being vector formats and asking to go back to pixel fonts nowadays, and
resolutions are only going to increase.

The OS should be handling rendering, hinting, and caching of raster
representations based on vector data. As you point out this is a hard problem,
and Apple are uniquely qualified to handle it. Instead, Apple have punted on
it: developers are actively encouraged to produce raster representations of
each asset, which break with every new device that comes out (and as you point
out, this doesn't just apply to fonts).

Controls like buttons or custom ones should be drawn in code, not included as
raster assets, and the iconography for things like buttons, launch screens,
branding or other app-unique assets could be stored as the original vector, as
it should be. If you want to put a raster image on a button, well, sometimes
that's not going to look good.

On top of the problems caused by resampling, what Apple have done, instead of
choosing the vector route, is create an ever-growing headache for developers
and themselves by focussing on raster assets and requiring pre-rendered bitmap
representations of assets like icons, which usually start as vectors anyway.
How many different pre-rendered sizes of iOS icon are we up to now, 20? Where
does it end? This is not practical or forward-looking; it's a hack which
imposes real costs on developers, customers and Apple. There is a huge amount
of work in creating and maintaining all the different launch screen, icon and
other assets now required to be rendered at three or more different sizes for
Apple devices, and this is bloating every single iOS app with multiple
megabytes of assets which could be generated on the fly from their vector
representations. Over the entire App Store, that alone is a _huge_ waste of
bandwidth for Apple and customers.

So there are lots of reasons to aim for real resolution independence. From
the point of view of developers, Apple chose a quick hack (which has devolved
into a hacky mess over time) over a more difficult, forward-looking solution
that anticipates that screen sizes and formats are going to multiply until it
is impossible to cater to all of them with raster designs.

~~~
notjosh
It's relatively feasible today with (UI|NS)BezierPath and CAShapeLayer. It's
not perfect in every situation, as CAShapeLayer is optimised for performance,
but in the vast majority of cases it's worked well for me. Especially when
paired with tools like PaintCode.

The tools are there from Apple, but I don't think the hardware until fairly
recently could really justify the performance hit - instead Apple chose file
size over performance. These days, though, in the majority of cases the
performance hit is negligible (especially when compared to the dev/design
time of generating @3x assets months or years later).

~~~
grey-area
_It's relatively feasible today with (UI|NS)BezierPath and CAShapeLayer._

That's interesting, I might try that for launch assets, though it can't be
used for app icons. I'd love to just drop in an SVG and let the OS scale my
icon.

Unfortunately the default path (including many default assets like icons and
launch screens) is to produce tens of raster files generated from the original
vectors and bundle them with the app, which is wasteful, painful and prone to
break with new hardware.

------
0x0
Would it take a jailbreak to trick uikit into rendering at native resolution?

What would be the downside? Larger UI elements and less available space on the
screen? Could a @2x instead of @3x mode work or would that result in super
tiny "bad hidpi" UI?
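The @2x arithmetic suggests the latter (a rough sketch; the "23% smaller" figure is just the ratio of pixels-per-point, ignoring any other adjustments the OS might make):

```python
# At @2x on the same 1080x1920 panel, apps would see 540x960 points
# instead of 414x736 -- each point gets fewer physical pixels, so
# UI laid out in fixed point sizes shrinks physically.
panel = (1080, 1920)
at_2x = (panel[0] // 2, panel[1] // 2)
print(at_2x)              # (540, 960)

shrink = 414 / 540        # physical size ratio vs. the shipped mode
print(round(shrink, 3))   # 0.767 -- controls roughly 23% smaller
```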

~~~
wingerlang
There are at least two released jailbreak tweaks that does something related
to this [0][1]. Although I don't know exactly what they actually do,
technically, as I have not had the interest in these specific screen details.

[0] [http://www.idownloadblog.com/2014/11/07/littlebrother-iphone...](http://www.idownloadblog.com/2014/11/07/littlebrother-iphone-6-plus-scaling-landscape-home-screen-older-iphones/)

[1] [http://www.idownloadblog.com/2014/11/06/upscale-change-resol...](http://www.idownloadblog.com/2014/11/06/upscale-change-resolution-jailbroken-iphone-iphone-6-plus-5s-6/)

~~~
0x0
Hm, those two tweaks appear to be for the iPhone 5/5s only. I'm more
interested in the capabilities of the 3x screen on the 6+.

I wonder if this whole downsampling business is happening because the 6+
didn't get a "full resolution" screen in time, so they had to go with
off-the-shelf 1080p panels? Related to the GT Advanced scandal, maybe?

------
TillE
The scaling makes sense for legacy apps, but I can't understand why they don't
present a 1920x1080 screen for everything else.

Hasn't iOS had tools for building resolution-independent apps ever since the
iPhone 5 was released?

~~~
0x0
Probably because 1080p as a @3x screen gives a smaller effective viewport than
the lesser models, which would make it look weaker?
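The numbers back that up (a quick check; the 375x667 figure for the smaller iPhone 6 is its standard logical size, not from the article):

```python
# Driving the 1080x1920 panel natively at @3x would give apps only
# 360x640 points -- less logical space than the 414x736 the 6 Plus
# actually reports, and even less than the smaller iPhone 6.
panel = (1080, 1920)
native_3x = (panel[0] // 3, panel[1] // 3)
iphone6 = (375, 667)     # logical points on the 4.7" iPhone 6
reported = (414, 736)    # what the 6 Plus actually reports

print(native_3x)                    # (360, 640)
print(native_3x[0] < iphone6[0])    # True: narrower viewport than the 6
print(native_3x[0] < reported[0])   # True
```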

