Understanding the iPhone 6 Plus Screen (oleb.net)
170 points by elo 697 days ago | 54 comments



Does anyone wonder what's with the odd 414x736 resolution? I can't say I've ever heard of that resolution before; 360x640 at 3x would fit the 1080x1920 native panel perfectly, as would 540x960 at 2x. Apple's "Think Different" mentality at play here?


Word on the Apple rumor street was that the display was supposed to be a full 3x 414x736 (1242x2208), but the display yield was too poor.


I didn't hear any rumors like that other than Gruber's very-late-in-the-game speculation (on the potential resolution, not the yield). If there were any, they were more than outpaced by rumors that the larger iPhone would feature a 1080p (or lower!) resolution.

Here's one from all the way back in October '13 that predicted the 1080p screen on the "5.7 inch" phone:

http://www.displaysearchblog.com/2013/10/in-2014-apple-will-...

I think they have made a conscious decision to bet on display scaling technology at high PPIs, so they have the flexibility to always buy the best displays they can get manufactured at the price point they need to hit.


That would be my guess as well: a compromise, with no sapphire and a fudged screen. That said, my takeaway from this article was that once your screen resolution gets above 300 ppi, the exact resolution matters far less than it does below that threshold.


The sapphire is a completely separate issue and has nothing whatever to do with the screen resolution.

In addition, it's not even clear yet that a sapphire screen on an iPhone (as distinct from a watch) would be an improvement, because sapphire apparently shatters more easily. That's why Apple is reportedly working on a screen that bonds a sapphire layer on top (harder, so more scratch-resistant) to a glass layer underneath (for durability).


> 360x640 at 3x would fit the 1080x1920 native resolution perfectly

But then you'd end up with a comically large UI at 6" with no space for any content because everything's so huge on the screen. The entire point of the larger phone is to have more content area.


A change of 15% or less each direction is hardly the difference between 'good' and 'comical'.

If the entire point of this phone is to fit more on the screen it's doing a pretty bad job of it.


iOS still doesn't have proper resolution independence, hence hacks like this. There's no reason to be doing post scaling on a rendered image unless you have poor software which can't render at the proper resolution in the first place.

If anyone else did this they'd be pilloried for it being the total battery-wasting, font-destroying hack that it is, but Apple gets away with it.

This is true on the MacBook Retinas also - when I set mine to "looks like 1680" or "looks like 1920", fonts get blurry due to this scaling. If the software were better, it would render at the native resolution and fonts would look as good as they do at the 2x setting (where the numbers work out such that you can scale without loss of fidelity).


In Apple's defense, they've been working on full resolution independence since 2005 or so, with a developer setting available from Leopard (2007) on that let you set the scale factor anywhere between 1x and 3x. However, due to the complexity of the issue, Carbon apps, and probably a lot of other things, it never worked really well. I played around with it with each new OSX release until they removed it (in Lion or Mountain Lion, I think).

Basically, I think they never got it working well enough and just fixed it at 1x and 2x.

Here's a screenshot from the utility they had for this. http://i.stack.imgur.com/0NToe.png


I worked on resolution independence on OS X over a period of years. I think the above comment is not made with full appreciation of the difficulty of the problem. Supporting arbitrary resolutions well is very difficult.

The underlying frameworks (e.g. CoreGraphics) are absolutely capable of rendering at arbitrary resolutions, and have been for a long time. It's not particularly hard: you interpose an affine transform which converts from local to device coordinates, apply it to all your geometry, and it falls out.
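
A minimal Swift sketch of that interposition (the sizes and drawing are my own illustration, not framework internals):

    import CoreGraphics

    // Render the same logical geometry at an arbitrary scale by interposing
    // a local-to-device affine transform on the context.
    func render(atScale scale: CGFloat) -> CGContext? {
        let logicalSize = CGSize(width: 200, height: 100)   // local (point) coordinates
        guard let ctx = CGContext(data: nil,
                                  width: Int((logicalSize.width * scale).rounded(.up)),
                                  height: Int((logicalSize.height * scale).rounded(.up)),
                                  bitsPerComponent: 8, bytesPerRow: 0,
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
        else { return nil }

        // The interposed transform: everything after this is specified in
        // logical points; the CTM maps it to device pixels.
        ctx.scaleBy(x: scale, y: scale)

        // Scale-agnostic drawing code: a "1 point" line, wherever it lands.
        ctx.setLineWidth(1.0)
        ctx.move(to: CGPoint(x: 10, y: 50))
        ctx.addLine(to: CGPoint(x: 190, y: 50))
        ctx.strokePath()
        return ctx
    }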

But "rendering at arbitrary resolutions" is not the same as "looks good at arbitrary resolutions." For example, a line 1 px wide at 1x will be 1.5 pixels wide at 1.5x. That means at least one edge must be aligned on a partial pixel, and there's the potential for weird antialiasing effects. There's also a behavior change: you cannot redraw half a pixel, so now "dirtying" a pixel requires redrawing more components at 1.5x than at 1x, which can cause performance problems and other bugs. And there's also a question of which pixels get the partial alignment: do you round in a direction? If so, which one? Or maybe you don't round and you have two pixels that are a quarter covered?

Which leads to the problem of centering. I wish to center a bitmap image within a button's border. The image is 101 logical points high, and the border is 200 logical points high. With a 2x scale factor, I can center absolutely, and still be aligned to device pixels. With a 1x, 1.25, 1.33, etc. scale factor, centering will align me on a partial pixel, which looks like crap. So I have to round. Which way? If the goal is "make it look good," then the answer is "whichever way looks good," which depends on the visual style of the bezel, i.e. whether the top or bottom has more visual weight. So now we need hinting.
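
The arithmetic is easy to check (a throwaway sketch of the same example):

    // Center a 101-point image in a 200-point container and see where the
    // image's edge lands in device pixels at various scale factors.
    for scale in [1.0, 1.25, 1.33, 2.0] {
        let originPts = (200.0 - 101.0) / 2            // 49.5 points
        let originPx = originPts * scale               // device-pixel position
        let aligned = originPx == originPx.rounded()
        print("\(scale)x: edge at \(originPx) px, pixel-aligned: \(aligned)")
    }
    // 2.0x: edge at 99.0 px  -> aligned, can center exactly
    // 1.0x: edge at 49.5 px  -> partial pixel; must round, but which way?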

And that's where things start to get really nasty. In order to make things look good at arbitrary resolutions, we want to round to device pixels. But the rounding direction is not a local question! Consider what happens if we have two visual elements abutting in logical coordinates, and they round in opposite directions: now there's a device pixel between them. That's very visible: you get a pixel crack! So you have to coordinate rounding.
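
A tiny sketch of how a crack opens, assuming two abutting 40-point views whose edges round independently:

    // Two views abut in logical coordinates: [0, 40) and [40, 80) points.
    // At 1.33x the shared edge sits at 53.2 device pixels. If each view
    // rounds its own edges independently, a device pixel goes unclaimed:
    let sharedEdgePx = 40.0 * 1.33                        // 53.2
    let leftViewsRightEdge = sharedEdgePx.rounded(.down)  // 53
    let rightViewsLeftEdge = sharedEdgePx.rounded(.up)    // 54
    // The pixel column between 53 and 54 belongs to neither view: a visible crack.
    print(rightViewsLeftEdge - leftViewsRightEdge)        // 1.0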

WPF is a good example of a framework that attempted resolution independence and encountered this problem. Initially it has the "SnapsToDevicePixels" property, which triggers rounding behavior at draw time. But draw time is too late, because of the "abutting elements rounding in opposite directions" problem. So they introduced the "UseLayoutRounding" property, which does...something. And the guidance is basically "turn it on and see if it helps, if not, disable it." Great.

The web also has this problem in spades. Websites break in all sorts of fun ways when you zoom in or out. We tolerate this because, frankly, the bar is super-low for websites.

As I see it, the two options are:

1. Make everything vectors. You'll have to choose between weird antialiasing artifacts and potential pixel cracks; either way things will look bad. And you'll encounter bitmaps eventually, and have to deal with the necessities of resampling and pixel aligning at that point.

2. Scale only to integral sizes, and resample. You'll avoid antialiasing and pixel alignment issues, but pay a performance penalty, and things may look slightly blurry.

So which option is better? #1 has the potential for the highest-quality output, but at a significant price: developers must test their apps at more scale factors, and the failure mode is ugly drawing artifacts or outright bugs. #2 is more utilitarian: the output is not as nice, and you incur a perf penalty, but that's borne by the system instead of the apps, and the overall system is more consistent. #2 is also more forward looking: if you expect that pixel densities will continue to increase, then resampling artifacts will eventually be indiscernible, but a pixel crack will always be visible.

Apple took the practical and forward-looking approach to this problem. I can't fault them: to my knowledge, nobody has successfully implemented true resolution independence in a framework with wide adoption. If they have, I'd love to know how!


I think the OSX and iOS approach with @2x and @3x is the only sane way to go, and is genius in its simplicity. It really works out well even for old apps that aren't updated (just double), while Windows' crazy DPI slider never worked properly. And you mention WPF, which for the longest time had crazy blurring artifacts even in its text rendering.

People often say "just use SVG/PDF/vectors for all the things", but look online and you'll find many good arguments against it, not least that icons and text at small pixel sizes really need to be "pixeled" manually to look good.


While I only looked at the performance implications back when, here are my 2¢:

As far as I can tell, all the problems you describe boil down to "In order to make things look good at arbitrary resolutions, we want to round to device pixels."

As TFA shows, with the iPhone 6+, you can't. Round to device pixels, that is. I wrote about this when the 6+ came out[1]; it's good to see it confirmed empirically (awesome job by Ole, incidentally!).

So there no longer is a choice between your options 1 and 2: #2 is out the window, since no integral scaling is available.

Of course, to make #1 work properly, what they need to do is remove all the places where snapping to device pixels is done, because otherwise you get pixel cracks, and those are noticeable.

The anti-aliasing effects are apparently theoretical at this point: yes you can create test cases to show them, but in practice the resolution is high enough that it no longer matters. From TFA:

"I am actually surprised how little of an issue the automatic downsampling is in practice. As I mentioned, I simply don’t notice anything of the effects I have illustrated here in real life."

Or as John Gruber put it: "Its 401 PPI display is the first display I’ve ever used on which, no matter how close I hold it to my eyes, I can’t perceive the pixels."[2]

I do think it makes more sense to do the scaling directly in CoreGraphics, disabling all the features that make snapping to the pixel grid possible and letting the pixels fall where they may. It would certainly be less work overall and use less memory. I can only speculate as to why it wasn't implemented this way; my guess is that getting all the pixel-snapping out is not that easy, and possibly it was a rush job (as suggested elsewhere) due to last-minute unavailability of the higher-resolution display.
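
For concreteness, the numbers involved (my arithmetic, using TFA's figures):

    // iPhone 6+ pipeline vs. rendering direct-to-panel (widths only).
    let logicalWidth = 414.0                        // points
    let panelWidth   = 1080.0                       // physical pixels
    let framebuffer  = logicalWidth * 3.0           // 1242 px, as rendered by UIKit
    let downsample   = panelWidth / framebuffer     // ~0.87: every frame resampled
    let directScale  = panelWidth / logicalWidth    // ~2.6087: the "draw once" factor
    print(framebuffer, downsample, directScale)     // 1242.0 0.869... 2.608...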

In the future, I would expect either (a) an actual 3x display or (b) the display stack adapted to your (1) choice.

[1] http://blog.metaobject.com/2014/09/iphone-6-plus-and-end-of-...

[2] http://daringfireball.net/2014/09/the_iphones_6


> Make everything vectors.

IMHO this is what Apple should have done a long time ago, certainly on iOS. Yes, it has issues, but those issues are mostly faced by the OS vendor once, and they have been dealt with successfully in the case of font rendering, for example, with antialiasing, hinting, etc. You don't hear anyone decrying fonts being vector formats and asking to go back to pixel fonts nowadays, and resolutions are only going to increase.

The OS should be handling rendering, hinting, and caching of raster representations based on vector data. As you point out, this is a hard problem, and Apple are uniquely qualified to handle it. Instead Apple have punted on this, and developers are actively encouraged to produce raster representations of each asset, which break with every new device that comes out (and as you point out, this doesn't just apply to fonts).

Controls like buttons or custom ones should be drawn in code, not included as raster assets, and the iconography for things like buttons, launch screens, branding or other app-unique assets could be stored as the original vector, as it should be. If you want to put a raster image on a button, well, sometimes that's not going to look good.

On top of the problems caused by resampling, what Apple have done instead of choosing the vector route is create an ever-growing headache for developers and themselves by focussing on raster assets and requiring pre-rendered bitmap representations of assets like icons, which usually start as vectors anyway. How many different pre-rendered sizes of iOS icon are we up to now, 20? Where does it end? This is not practical or forward-looking; it's a hack which imposes real costs on developers, customers and Apple. There is a huge amount of work in creating and maintaining all the different launch screen, icon and other assets now required at three or more different sizes for Apple devices, and it bloats every single iOS app with multiple megabytes of assets that could be generated on the fly from their vector representations. Over the entire App Store, that alone is a huge waste of bandwidth for Apple and customers.

So there are lots of reasons to aim for real resolution independence. From the point of view of developers, Apple chose a quick hack (which has devolved into a hacky mess over time) over a more difficult, forward-looking solution that anticipates screen sizes and formats multiplying until it is impossible to cater to all of them with raster designs.


For what it’s worth, Xcode 6 allows you to provide a single PDF file per asset. Xcode will then render the PDF into bitmaps at all required sizes _at build time_ and include the rendered images in your app bundle. Of course, you will probably have to make a new build when new devices come out.

It’s not resolution independence, but it has the potential to make asset production much less painful. I don’t know if this approach also works for app icons or only for things like button and toolbar images.


Thanks, I didn't know that; I'll be exploring that feature. It'd be nice if they did that for icons too - using a script to generate 20 different sizes is getting old. Unfortunately it looks like it is only supported for images at present.

Of course it'd be better if this was done at run time on the device, and bitmap representations cached as required (eventually they probably wouldn't be, even if initially they are).
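
For reference, a sketch of the kind of resizing script mentioned above, using ImageIO and CoreGraphics (the paths and size list are illustrative, not an Apple-prescribed set):

    import Foundation
    import CoreGraphics
    import ImageIO

    // Downscale a master icon to a list of target sizes.
    let master = URL(fileURLWithPath: "Icon-master.png")
    let sizes = [29, 40, 58, 76, 80, 120, 152, 180, 1024]

    guard let source = CGImageSourceCreateWithURL(master as CFURL, nil),
          let image = CGImageSourceCreateImageAtIndex(source, 0, nil)
    else { fatalError("cannot read \(master.path)") }

    for size in sizes {
        guard let ctx = CGContext(data: nil, width: size, height: size,
                                  bitsPerComponent: 8, bytesPerRow: 0,
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
        else { continue }
        ctx.interpolationQuality = .high
        ctx.draw(image, in: CGRect(x: 0, y: 0, width: size, height: size))

        // Write each rendition out as Icon-<size>.png.
        let out = URL(fileURLWithPath: "Icon-\(size).png") as CFURL
        guard let resized = ctx.makeImage(),
              let dest = CGImageDestinationCreateWithURL(out, "public.png" as CFString, 1, nil)
        else { continue }
        CGImageDestinationAddImage(dest, resized, nil)
        CGImageDestinationFinalize(dest)
    }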


Apple is at least providing some tools to help with this - for instance, you can specify a storyboard whose initial view controller will then be used as the app’s launch screen; this avoids the problem of separate launch images for all the screen sizes, resolutions and orientations your app supports.

http://oleb.net/blog/2014/08/replacing-launch-images-with-st...

I seem to recall Marco Arment took a bit of pride in having 100% procedurally generated graphics in Overcast, so that he didn't have to create/ship any bitmapped images whatsoever.


It's relatively feasible today with (UI|NS)BezierPath and CAShapeLayer. It's not perfect in every situation, as CAShapeLayer is optimised for performance, but in the vast majority of cases it's worked well for me. Especially when paired with tools like PaintCode.

The tools are there from Apple, but until fairly recently I don't think the hardware could really justify the performance hit - so developers chose file size over performance. These days, in the majority of cases, the performance hit is negligible (especially compared to the dev/design time of generating @3x assets years or months later).
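
A minimal sketch of the approach (the view and styling here are illustrative, not anyone's production code):

    import UIKit

    // A circular badge drawn as vectors: crisp at 1x, 2x, 3x, or the 6+'s
    // downsampled ~2.6x, with no @Nx bitmap assets involved.
    final class BadgeView: UIView {
        override class var layerClass: AnyClass { CAShapeLayer.self }

        override func layoutSubviews() {
            super.layoutSubviews()
            let shape = layer as! CAShapeLayer
            // The path is specified in points; Core Animation rasterizes it
            // at whatever scale factor the screen reports.
            shape.path = UIBezierPath(ovalIn: bounds.insetBy(dx: 2, dy: 2)).cgPath
            shape.fillColor = UIColor.blue.cgColor
            shape.strokeColor = UIColor.white.cgColor
            shape.lineWidth = 2
        }
    }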


> It's relatively feasible today with (UI|NS)BezierPath and CAShapeLayer.

That's interesting, I might try that for launch assets, though it can't be used for app icons. I'd love to just drop in an SVG and let the OS scale my icon.

Unfortunately the default path (including many default assets like icons and launch screens) is to produce tens of raster files generated from the original vectors and bundle them with the app which is wasteful, painful and prone to break with new hardware.


Effectively procedural generation of all the graphics, which could make for some tiny apps (see the demoscene for a great example), and it's certainly a great idea that should be used more, but there's still at times a need to get pixel-perfect images of the right resolution due to the horrible effects that resampling has on the edge-cases. Ultimately the display is still digital/pixel-based, not analogue/vector-based unlike how CRTs used to work; and many things we view on the displays, like photos, are still pixel-based.


It's certainly a trade-off, I wouldn't pretend otherwise, but the vast majority of assets I produce, at least, are vector, and they are then translated to various sizes of bitmap image - which is wasteful, painful and ultimately futile, as there are still issues whenever a new device comes out (like the iPhone 6).

The pixels of the screen are now so small they have almost disappeared, so I think there's an argument we have reached the place where going mostly vector makes sense.

Clearly some assets will always be bitmaps (photos), but so many others are not well represented as bitmaps (icons, text etc) - content is fine as bitmaps but controls and chrome are usually better as vector.


Photos seem like a good example of pixel-based data, but they're really not: they are very often rescaled by an arbitrary real factor relative to the device pixels. Antialiasing hurts vector images much more often than photographs.


This is very interesting. I did not know that raster images are still used.

Currently, as a Windows programmer, I have not used raster images in years.


I agree 100% with grey-area. But the trouble with vector artwork is the lack of an efficient widely-accepted storage format. SVG is full-featured but not particularly compact, and it's difficult to write a rendering engine which handles all its features correctly. There is such a big impedance mismatch between what SVG specifies and what GPUs can render without a lot of CPU-side preprocessing.

Since font technology is so old and so efficiently implemented, the obvious thing to do is to extend the same technology to rendering color fonts AND color vector artwork, such as icons. But Apple is going the wrong way on that too: they added emoji support to their fonts by embedding PNG images. Yuck. This article has links which explain more about the MS proposal, which is much better: https://en.wikipedia.org/wiki/OpenType#Color The glyphs can be built with layers, each of which has a color and a vector path. This is what we need.

And then we need the Khronos Group, and then the GPU vendors, to focus on accelerating exactly the kinds of vector paths which are found in fonts. This would give us a better solution for rendering fonts and, at the same time, a way of rendering resolution-independent icons and other application artwork. And the font format would constrain icon designers to create icons that are efficient for the GPU to render: as few layers as possible, and without all the SVG features that make rendering so difficult.

In the meantime, it's a nice coincidence that the vogue of the moment is going towards monochrome icons. That means that the use of icon fonts (such as Font Awesome, and others) can go on for a few years even without adding color. But we can depend on there being a backlash eventually: we have color screens and are using them to show monochrome UIs? So the technology needs to be ready by the time the designers get back around to that way of thinking again.


The icons could be rendered at install time and stored forever as raster data on the device.


What about Android? The layout managers there attempt to get good placement of items, dealing with these issues.

Agreed it's a hard problem and that comment was certainly reductionist, but at the same time it's depressing to have this be the issue that makes fonts blurry - the hardware is fine but the software side is letting it down.


My understanding is that Android uses essentially the same strategy as iOS: don't attempt to support full resolution independence, but instead enumerate a set of supported DPIs. See the DisplayMetrics class: there's an enumeration of supported densities, and the 'density' property itself is noted as being a "gross" measure that is rounded to nice values, like 1.5.

Of course, extra space can be absorbed into the screen size instead of its density. Android and iOS support this via their respective mechanisms, like autoresizing masks or autolayout on iOS, and I guess layout managers on Android.

If you were to make a new device with unusual screen metrics, you'd have a few options for app compatibility. You could report an unusual density, and risk ugly antialiasing or pixel cracks. Alternatively, you could report an unusual size, and get bad layouts, where lots of apps have overly small UI elements and excessive unused space. Or you could compromise: report sane (but inaccurate) sizes and densities, and resample. This maximizes compatibility, at some quality and performance cost.

(The idealized option is true resolution independence, where apps look great at any density and size. To my knowledge, nobody has achieved that.)

I figure Apple realized that the iPhone 6+ would be very heavily reviewed, and so wanted to ensure that existing apps would work really well on it. That's what their scaling approach enables.


No, the DPI buckets are only for pre-baked assets (images, basically). Everything else is fully resolution independent. This was used for the original N7, for example, which didn't actually fit into either the MDPI or the HDPI bucket. It instead had a ratio of 1.33125, and yet everything but images is rendered unscaled.


Interesting. So what would happen if you drew, say, a 100 dip box on an N7? Would two edges just be blurry?


On Android, drawing happens directly on the output framebuffer, using pixel values. So if you were directly drawing a box, you'd have to convert 100dip into a px value and round it in some direction.

But you wouldn't actually do that apart from in rare circumstances - instead, you'd create a View, size it to 100dip, and give it to a layout telling it to position it in say the center of the screen. When it's displayed, the layout manager will decide how big to actually draw it, taking the density into account.

Android doesn't have the issues with pixel cracks etc. that you are talking about, since the views are all stored in layout managers, which operate at the native device resolution and calculate the final positioning. So they can see that an element would be 10.5px, display it at 10px, and move the next element up by 1px so everything works out.


I worked on autolayout, which does exactly what you describe: attempt to derive a pixel-aligned layout from abstract positioning rules. It was very difficult, so I'm skeptical that Android has found a way to just make "everything work out." The mathematics requires some sort of tradeoff.

For example, say we have 10 views, alternating black and white, each 10 dips wide, packed abutting into a container of width 100 dips. At 1.33x scale, each view wishes to have 13.3 device pixels. How do Layout Managers solve this?

1. Round each view down to 13 pixels, and repeatedly "move the next element up" by 1 px? Then you must account for .3 * 10 = 3 unclaimed pixels at the end of your container. That's going to be quite visible.

2. Round views based on their fractional pixel position, i.e. seven views down to 13 pixels, and three views up to 14? Now your stripes have different widths, which can be visible (especially if they have high-frequency content), and can also cause a "creeping" effect when they animate, if their edges are rounded independently.

3. Don't round at all, and position views on fractional pixel boundaries. This yields fuzzy edges.

Honest question - how would layout managers position views in this scenario?
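
For concreteness, a throwaway sketch of the arithmetic behind options 1 and 2, using the 1.33125 factor mentioned upthread:

    let scale = 1.33125                                 // the N7 factor
    // Option 1: floor every 10-dip view and pack end to end.
    let floored = Array(repeating: (10.0 * scale).rounded(.down), count: 10)
    let unclaimed = 100.0 * scale - floored.reduce(0, +)   // 133.125 - 130 = 3.125 px

    // Option 2: round each view's *edges* by fractional position.
    let edges = (0...10).map { (Double($0) * 10.0 * scale).rounded() }
    let widths = zip(edges.dropFirst(), edges).map { $0.0 - $0.1 }
    print(unclaimed, widths)   // 3.125 px unclaimed; widths come out a mix of 13s and 14s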


Android converts dips to pixels and rounds them at view inflation time, then works with pixels from then on. So the view would load its width of 10 dip, convert it to pixels and round it (13 pixels), and then measure itself to that size. The fractional part is completely lost, never to be seen again.

So in that example there would indeed be 3 unclaimed pixels at the end of the container.

For things like hairline borders you typically just specify that they are however many pixels wide. It's perfectly valid to say width=100dip, height=1px. You can mix & match at will.

For the scenario where you just want to have a bunch of elements all fit on one row you usually just give them all the same weight and let the parent view divide up the space rather than saying parent=100, children each =10.


> at the same time it's depressing to have this be the issue that makes fonts blurry

Wait, have you actually seen blurry fonts on the iPhone 6 Plus?

Do you know of any photograph which demonstrates them?


Somebody with abnormally good eyesight probably could if they knew what they were looking for.


I'm asking for a photo, even with a microscope. I'm really curious, because I still believe the difference can't be seen for the text sizes that are actually readable by humans that don't use magnifying glasses.


Yes, I always believed Apple programmers were aware of all the potential problems.

There are also much older implementations: PostScript has "only" 595 x 842 "logical" pixels for a whole A4 page, but that's just the first step in working with whatever resolution the printer has.

And NeXT started with http://en.wikipedia.org/wiki/Display_PostScript and Apple has http://en.wikipedia.org/wiki/Quartz_2D

To competently argue the topic it's good to understand more than "this number is different than that number."


> For example, a line 1 px wide at 1x will be 1.5 pixels wide at 1.5x. That means at least one edge must be aligned on a partial pixel, and there's the potential for weird antialiasing effects.

True, BUT you still have that problem on the 6+ with everything being scaled by 15%. In fact, the problem is vastly worse. Instead of the occasional line being 2px instead of 3px, you now have every line being blurry. And you can solve the 2px vs. 3px line thing quite easily by letting devs choose between specifying in pixels vs. points, which is what Android does. The problems you talk about are all problems that only exist if you exclusively work in points. They all go away when you can also use pixels.

And the density is not at all high enough to mask this. It's very visible as soon as you pick up the device, and the dividing lines in things like the calculator app are hilariously uneven and awful.


If you use pixel units for drawing your UI, you have essentially two alternatives:

(1) Leave it to developers to make sure that their code correctly handles different DPIs. This results in every app having to produce its own solution for some very difficult problems, such as those ridiculous_fish mentions elsewhere in the thread, or else look terrible on some devices. (Yes, some of this can be encapsulated in libraries, but you'll virtually always do some custom drawing that requires manual handling.)

(2) Virtualize the pixels, so that depending on the DPI of the screen you're displaying on, a "pixel" does not necessarily correspond to a single device pixel.

Generally what happens is that you try #1, realize that many or most developers screwed it up, and shift to #2.

See CSS "pixels" for an example of this phenomenon.


No, there's actually a 3rd solution which is what Android uses:

(3) Expose both pixels and dips. Both are first-class units in Android. The 2D drawing API works exclusively in pixels, but the layout commonly works in DIPs instead.

Developers then do their layouts in dips but custom 2D drawing in pixels. Or really most anything in code is using pixels, dips are largely contained in the XML layout files.


Thanks for the great post. Despite your background and very complete explanation, you're going to get a lot of, "Couldn't you just...?" responses. :-)


"If anyone else did this they'd be pilloried for it being the total battery wasting font-destroying hack that it is, but Apple gets away with it."

Except we're talking about a phone that sits at or near the top of every battery-life shootout chart. And the pickiest screen-specific shootouts fail to mention any difference in font detail between iPhone models. Some hacks work.


It looks like the plan for resolution-independence on Apple platforms is "get the DPI so high that you can set the screen size to whatever you want". And I'm OK with that. Not everything can be represented with vector graphics.


Are you sure? I think iOS is resolution-independent, they just chose to do this since a 3x scale factor is more convenient.

EDIT: It also sounds like it was originally going to be the correct resolution, they just couldn't get the yields. If that's the case, perhaps they made it downscale at the last minute and used a 1080p panel?


> It also sounds like it was originally going to be the correct resolution, they just couldn't get the yields.

I strongly assume that the lower resolution was indeed Plan B, and that it's the result of another twist in the saga of Apple's relationships with the mobile component suppliers.

On a not-really-unrelated note, the Samsung Galaxy Note 4's screen is apparently pretty stellar in all respects, not just resolution: http://www.displaymate.com/Galaxy_Note4_ShootOut_1.htm


Why should 3x be any more convenient than 2.5x or 2.2x in a properly designed system? It's not any more difficult in Android (although images are prescaled in size buckets for efficiency and IQ reasons).


A 1 dip line is sharp at 1x or 3x, but requires antialiasing at 2.5x or 2.2x.
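
A quick arithmetic check (assuming the line starts at a whole-dip origin):

    // A 1-dip line: where does its far edge land in device pixels?
    for scale in [1.0, 2.2, 2.5, 3.0] {
        let edgePx = 1.0 * scale
        print("\(scale)x: \(edgePx) px -> \(edgePx == edgePx.rounded() ? "sharp" : "antialiased")")
    }
    // 1.0x and 3.0x land on whole pixels; 2.2x and 2.5x land mid-pixel.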


Regarding blurry fonts on your MacBook Retina: it only makes sense to set the resolution to match the number of hardware pixels. You cannot change the resolution of the hardware display, so any resolution other than the native one will look blurry, as the system has to resample to match the hardware. This has nothing to do with OSX -- OSX can render fonts at any resolution.

... You can easily connect your Macbook Retina to an external monitor of any resolution. OSX is not limited to your Macbook Retina resolution.

On the iPhone 6+, Apple could easily draw everything at 2.6x resolution to match the display, but I guess they feel that using 3x plus downsampling simplifies the UI-design process. From a technical point of view it is an unnecessary hack.


The problem here is that you are conflating resolution with how big things are. They aren't the same.

I want the resolution on my MacBook to always be 2880x1800. But I want to be able to make things bigger or smaller while still rendering at that resolution, so things don't have to be resampled (and in the case of the iPhone 6+, the default/only setting requires this resampling).

The problem is that when I set it to 'looks like 1680', rather than just rendering things natively at 2880x1800 but a little bit smaller, we get into all these scaling hacks (rendering at 3360x2100 and then scaling down to 2880x1800). That means you lose the accurate font rendering and clarity you had; everything gets a slight blur because it's been scaled down.
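
The numbers behind that mode, for concreteness (my arithmetic, not Apple documentation):

    // "Looks like 1680x1050" on a 2880x1800 panel:
    let rendered = (w: 1680.0 * 2, h: 1050.0 * 2)   // 3360x2100 backing store
    let panel    = (w: 2880.0, h: 1800.0)
    let factor   = panel.w / rendered.w             // ~0.857 (6/7), applied to every frame
    print(rendered, panel, factor)
    // At "Best for Retina" (1440x900), the backing store is exactly 2880x1800
    // and no resampling happens.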


For this reason I much prefer how fonts look in Safari when I zoom the page out while running at 'Best for Retina', instead of running at 'looks like 1440x900' and leaving the fonts at their normal scale.


Would it take a jailbreak to trick uikit into rendering at native resolution?

What would be the downside? Larger UI elements and less available space on the screen? Could a @2x instead of @3x mode work or would that result in super tiny "bad hidpi" UI?


There are at least two released jailbreak tweaks that do something related to this [0][1], although I don't know exactly what they do technically, as I haven't dug into these specific screen details.

[0] http://www.idownloadblog.com/2014/11/07/littlebrother-iphone... [1] http://www.idownloadblog.com/2014/11/06/upscale-change-resol...


Hm, those two tweaks appear to be for the iPhone 5/5s only. I'm more interested in the capabilities of the 3x screen on the 6+.

I wonder if this whole downsampling business is happening because the 6+ didn't get a "full resolution" screen in time and so they had to go with off-the-shelf 1080p screens? Related to the GT scandal maybe?


The scaling makes sense for legacy apps, but I can't understand why they don't present a 1920x1080 screen for everything else.

Hasn't iOS had tools for building resolution-independent apps ever since the iPhone 5 was released?


Probably because 1080p as a @3x screen gives a smaller effective viewport than the lesser models, which would make it look weaker?


No, iOS has had tools for building apps with flexible canvases, that has very little to do with the resolution of the screen these days.



