In almost all of the public VR critiques I have made, I comment about aliasing. Aliasing is the technical term for when a signal is undersampled, and it shows up in many ways across graphics (and audio), but in VR this is seen most characteristically as "shimmering" imagery when you move your head around, even slightly. This doesn't need to happen! When you do everything right, the world feels rock solid as you look around. The photos in the Oculus 360 Photos app are properly filtered -- everything should ideally feel that solid.
In an ideal world, this would be turned into a slick article with before-and-after looping animated GIFs of each point I am making, along with screenshots of which checkboxes you need to hit in Unity to do the various things. Maybe someone in devrel can find the time...
All together in one place now, the formula for avoiding aliasing, from the most important:
Generate mip maps for every texture being used, and set GL_LINEAR_MIPMAP_LINEAR filtering. This is easy, but missed by so many titles it makes me cringe.
If you dynamically generate textures, there is a little more excuse for it, but glGenerateMipmap() is not a crazy thing to do every frame if necessary.
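For concreteness, this whole first item is a couple of GL calls. A minimal sketch, assuming a GLES 2.0 context and a texture that has already been uploaded with glTexImage2D():

    #include <GLES2/gl2.h>

    void setup_trilinear(GLuint tex)
    {
        glBindTexture(GL_TEXTURE_2D, tex);
        /* Trilinear filtering: blend between the two nearest mip levels. */
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,
                        GL_LINEAR_MIPMAP_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        /* Build the full mip chain; cheap enough to repeat per frame
           for dynamically generated textures if you have to. */
        glGenerateMipmap(GL_TEXTURE_2D);
    }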
Use MSAA for rendering. Not using MSAA should be an immediate fail for any app submission, but I still see some apps with full jaggy edges. We currently default to 4x MSAA on Mali and 2x MSAA on Adreno because there is a modest performance cost going to 4x there. We may want to go 4x everywhere, and it should certainly be the first knob turned to improve quality on a Qualcomm-based S7, long before considering an increase in eye buffer resolution.
In general, using MSAA means you can't use a deferred rendering engine. You don't want to for performance reasons, anyway. On PC, you can get by throwing lots of resources at it, but not on mobile.
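In a native app, the simplest place to ask for MSAA is EGL config selection (Unity exposes this as an anti-aliasing quality setting instead, and GearVR eye buffers usually get their samples via EXT_multisampled_render_to_texture, so take this as the generic illustration). A sketch, assuming an initialized EGLDisplay:

    #include <EGL/egl.h>

    EGLConfig choose_msaa_config(EGLDisplay display)
    {
        const EGLint attribs[] = {
            EGL_RED_SIZE, 8, EGL_GREEN_SIZE, 8, EGL_BLUE_SIZE, 8,
            EGL_DEPTH_SIZE, 24,
            EGL_SAMPLE_BUFFERS, 1,   /* enable multisampling */
            EGL_SAMPLES, 4,          /* 4x MSAA */
            EGL_NONE
        };
        EGLConfig config = NULL;
        EGLint num_configs = 0;
        eglChooseConfig(display, attribs, &config, 1, &num_configs);
        return num_configs > 0 ? config : NULL;
    }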
Even 4x MSAA still leaves you with edges that have a bit of crawl, since it only gives you two bits of blending quality. For characters and the environment you just live with it, but for simple things like a floating UI panel, you can usually arrange to use alpha blending to get eight bits of blending quality, which is essentially perfect. Blending requires sorting, and ensuring a 0 alpha border around your images is a little harder than it first appears, since the size of a border gets halved with each mip level you descend. Depending on how shrunk down a piece of imagery may get, you may want to leave an eight or more pixel cleared alpha border around it. To avoid unexpected fringe colors around the graphics, make sure that the colors stay consistent all the way to the edge, even where the alpha channel is 0. It is very common to see blended graphics with a dark fringe around them because the texture went to 0 0 0 0 right at the outline, rather than just cutting it out in the alpha channel. Adding an alpha border also fixes the common problem of not getting CLAMP_TO_EDGE set properly on UI tiles, since if it fades to invisible at the edges, it doesn't matter if you are wrapping half a texel from the other side.
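The border trick is easy to get wrong in a build pipeline, so here is a rough sketch of the idea (the function and its names are invented for illustration): pad an RGBA8 image with texels whose alpha is 0 but whose RGB is clamped from the nearest source texel, so mip averaging never drags in black.

    #include <stdint.h>
    #include <stdlib.h>

    uint8_t *add_alpha_border(const uint8_t *src, int w, int h, int border)
    {
        int ow = w + 2 * border, oh = h + 2 * border;
        uint8_t *out = malloc((size_t)ow * oh * 4);
        for (int y = 0; y < oh; y++) {
            for (int x = 0; x < ow; x++) {
                /* Clamp back into the source image to pick the RGB. */
                int sx = x - border, sy = y - border;
                int cx = sx < 0 ? 0 : (sx >= w ? w - 1 : sx);
                int cy = sy < 0 ? 0 : (sy >= h ? h - 1 : sy);
                const uint8_t *s = src + (cy * w + cx) * 4;
                uint8_t *d = out + (y * ow + x) * 4;
                d[0] = s[0]; d[1] = s[1]; d[2] = s[2];
                /* Keep source alpha inside, force 0 in the border. */
                d[3] = (sx == cx && sy == cy) ? s[3] : 0;
            }
        }
        return out;
    }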
Don't use techniques like alpha testing or anything with a discard in the fragment shader unless you are absolutely sure that the contribution to the frame buffer has reached zero before the pixels are discarded. The best case for the standard cutout uses is blending, but if the sorting isn't feasible, alpha-to-coverage (GL_SAMPLE_ALPHA_TO_COVERAGE) is a middle ground, as sketched below.
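Turning it on is a single enable, assuming you are already rendering into a multisampled buffer; a sketch:

    #include <GLES2/gl2.h>

    void draw_cutout(void)
    {
        /* The alpha output of the fragment shader drives the MSAA
           coverage mask instead of a hard discard. */
        glEnable(GL_SAMPLE_ALPHA_TO_COVERAGE);
        /* ... draw the cutout geometry here ... */
        glDisable(GL_SAMPLE_ALPHA_TO_COVERAGE);
    }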
Avoid geometry that can alias. The classic case is things like power lines -- you can model a nice drooping catenary of thin little tubes or crossed ribbons, but as it recedes into the distance it will rapidly turn into a scattered, aliasing mess of pixels along the line with only the 1 or 2 bits of blending you get from MSAA. There are techniques for turning such things into blends, but the easiest thing to do is just avoid it in the design phase. Anything very thin should be considered carefully.
Don't try to do accurate dynamic shadows on GearVR. Dynamic shadows are rarely aliasing-free and high quality even in AAA PC titles; cutting the resolution by a factor of 16 and using a single sample so they run at a reasonable speed on GearVR makes them hopeless. Go back to Quake 3 style and just blend a blurry blob underneath moving things, unless you really, really know what you are doing.
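A blob shadow is just a dark, blurry radial texture on a ground-hugging quad, multiplied into the frame buffer. A minimal sketch, with blob_tex and the quad drawing left as placeholders:

    #include <GLES2/gl2.h>

    void draw_blob_shadow(GLuint blob_tex)
    {
        glEnable(GL_BLEND);
        /* dst * src: white texels leave the ground untouched, the
           dark blurry center darkens it. */
        glBlendFunc(GL_DST_COLOR, GL_ZERO);
        glBindTexture(GL_TEXTURE_2D, blob_tex);
        glDepthMask(GL_FALSE);   /* decals shouldn't write depth */
        /* draw_ground_quad(under_the_character); */
        glDepthMask(GL_TRUE);
        glDisable(GL_BLEND);
    }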
Don't use specular highlights on bump maps. This is hard to get really right even with significant resources. Specular at the geometry level (still calculated in the fragment shader, not the vertex level!) is usually ok, and also a powerful stereoscopic cue. This applies to any graphics calculation done in shaders beyond just sampling a texture -- if it has any frequency component greater than the pixel rate (a clamp is infinite frequency!), there will be aliasing. Think very carefully before doing anything clever in a shader.
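To be explicit about what "specular at the geometry level" means, here is a GLSL ES 1.00 fragment shader sketch (variable names are illustrative): the normal is interpolated from the vertexes rather than fetched from a bump map, but the highlight is still evaluated per fragment.

    static const char *spec_frag_src =
        "precision mediump float;\n"
        "varying vec3 v_normal;\n"      /* interpolated vertex normal */
        "varying vec3 v_view_dir;\n"
        "uniform vec3 u_light_dir;\n"
        "void main() {\n"
        "    vec3 n = normalize(v_normal);\n"
        "    vec3 h = normalize(u_light_dir + normalize(v_view_dir));\n"
        "    float spec = pow(max(dot(n, h), 0.0), 32.0);\n"
        "    gl_FragColor = vec4(vec3(spec), 1.0);\n"
        "}\n";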
Use gamma correct rendering. This is usually talked about in game dev circles as part of "physically based rendering", and it does matter for correct lighting calculations, but it isn't as widely appreciated that it matters for texture sampling and MSAA even when no lighting at all is done. For photos and other continuous tone images this barely matters at all, but for high contrast line art it is still important. The most critical high contrast line art is text. It is hard to ease into this: you need to convert your window, all textures, and any frame buffer objects you use to the correct sRGB formats. Some formats, like 4444, don't have an equivalent sRGB version, so it may involve going to a full 32 bit format.
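On the texture side, the change is a one-word internal format swap (GLES 3.0 shown; the window surface additionally wants an sRGB colorspace, e.g. via EGL_KHR_gl_colorspace where available). A sketch:

    #include <GLES3/gl3.h>

    GLuint create_srgb_texture(int w, int h, const void *rgba_pixels)
    {
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        /* GL_SRGB8_ALPHA8 instead of GL_RGBA8: the sampler now
           linearizes on read, so filtering, blending, and the MSAA
           resolve all happen in linear space. */
        glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB8_ALPHA8, w, h, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, rgba_pixels);
        glGenerateMipmap(GL_TEXTURE_2D);
        return tex;
    }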
Avoid unshared vertexes. A model edge that abruptly changes between two surfaces has the same aliasing characteristics as a silhouette at the edge of a model; you only get the MSAA bits of blending. If textures are wrapped completely around a model, and all the vertex normals and other attributes are shared, you only get the edge effects along the "pelting lines" where you have no choice but to have unmatched texture coordinates. It costs geometry, so it isn't always advisable, but even seemingly hard edged things like a cube will look better if you have a small bevel with shared vertexes crossing all the edges, rather than a knife-sharp 90 degree angle. If a surface is using baked lighting, the geometry doesn't need normals, and you are fine wrapping around hard angles as long as the texture coordinates are shared.
Trilinear filtering on textures is not what you ideally want -- it blends a too-aliased sample together with a too-blurry sample to try to limit the negatives of each. High contrast textures, like, say, the floor grates in Doom 3, or the white shutters in Tuscany, can still visibly alias even with trilinear. Reducing the contrast in the image by "prefiltering" it can fix the problem, or you can programmatically add an LOD bias.
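Core GLES has no per-texture LOD bias parameter as far as I know, but the shader can bias the mip selection itself; a GLSL ES 1.00 sketch, where a positive bias picks a blurrier, less-aliased level:

    static const char *biased_frag_src =
        "precision mediump float;\n"
        "varying vec2 v_uv;\n"
        "uniform sampler2D u_tex;\n"
        "void main() {\n"
        "    /* The third argument biases the computed LOD. */\n"
        "    gl_FragColor = texture2D(u_tex, v_uv, 0.75);\n"
        "}\n";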
Once aliasing is under control, you can start looking at optimizing quality.
Thank you! Reading the title, I was interested. Then I saw the domain and thought "ugh". Then I remembered that Carmack is with Oculus, and Facebook bought Oculus, making it a natural place for him to post, and was conflicted. Conflict resolved!
It's not a natural place for him to post. He was bought. From what I've seen of the guy, the natural place for him to post would be in a .plan file somewhere.
There's nothing wrong with posting a technical article on Facebook, especially if you're too busy being a graphical programming genius to set up your own blog somewhere. I don't see it as any different than posting on Medium...besides making a choice as to which company you're going to support? But seriously, who cares?
The only thing wrong with it is that you generally have to have a Facebook account to read Facebook content, which means you have to give them your real name. It appears that this month Facebook is allowing people to read this particular post, but they aggressively put up the sign-up wall wherever they think it will generate the most conversions. You certainly cannot see John Carmack's past posts without a Facebook account. Facebook is not the open web.
Technically you can violate the ToS and set up a fake account, but that's a bit of an inconvenience.
Whether or not a post is available to "the public" (signed in or not) is completely the choice of the author, not some mysterious committee at Facebook.
Have you tried to read a page on facebook without an account? It's intended to be an almost unusable experience.
On loading a page, I get a modal popup obscuring all content insisting I "register" or "login", with a "not now" option. Then when you click "not now", the bottom third of the screen is obscured by a fixed overlay.
It is loosely describable as readable, but in practice it's a fight to read it. The element has a randomly generated id, so it can't easily be blocked with uBlock or similar.
I agree Facebook is a bad place for placing public content. Note though that it's currently a lot more bearable if you disable JavaScript for Facebook. As someone who doesn't use Facebook at all it's easy for me to keep it disabled with NoScript. Then I just see content, no popups.
And what happens when facebook decides otherwise? Because it seems they already have with their strategy of curating content and information to maximise engagement.
> There's nothing wrong with posting a technical article on Facebook, especially if you're too busy being a graphical programming genius to set up your own blog somewhere
Well, he could have posted on the Oculus blog[1] - as the subject matter would have been very relevant.
> I don't see it as any different than posting on Medium
Medium doesn't try to coerce readers into signing up for an account. Nor does it track you across domains. Nor does it smash a lengthy article into 1/4 page width.
> But seriously, who cares?
A lot of folks, myself included. Not only was it difficult to read due to the formatting, but many have strong objections to Facebook in general, causing Carmack's thoughts to have less reach.
Facebook is not a blogging platform, it never has been a blogging platform, and it never will be a blogging platform. It's simply not set up to be conducive to blog content. I wager this is the reason Facebook decided not to use Facebook for their own blog posts[2].
As far as I know, Medium's core business is not the mining and sale of user data, with its major intrusions and breaches of privacy -- much less so a self-hosted, or even non-self-hosted, blog.
I've always been a big fan of including the command you use to get the output you're quoting - people sometimes pick up even subtle tricks or flags from doing this.
The Rift and Vive also use AMOLED screens, so I don't think it's the pixel tech that's the issue. LEDs can turn on and off extremely fast. It's probably that the screen's electronics were made for power-sipping mobile devices running at most 60 FPS and not 90+ FPS gaming.
OLED is the issue. Look into the black smear problem on Oculus DK2 and how they had to half-way fix it by overdriving for a frame and then falling back.
CV1 of the Rift headset is OLED and has the same issue. They have to run the panels at a notch above minimum brightness to work around it and as a result they lose out on true blacks.
None of those threads mention the Rift CV1, the Vive, or the Samsung Galaxy S6 or S7 (the ones that fit in the current Gear VR headsets). I tried the test image from this thread http://forum.xda-developers.com/showthread.php?t=2765793 on my S6 and didn't see the issue. That thread and this one https://forums.oculus.com/community/discussion/10924/true-bl... say it's only an issue around areas where the screen is actually black, so keeping things very dark would seem to be worse?
I think I wasn't clear. It isn't a notch above minimum brightness as in running the panel very dim.
It is a notch above minimum as in: minimum is pure black, and instead of ever allowing pure black they set it to one notch higher, because pure black causes the smearing.
It isn't a visible issue in the Rift CV1 or Vive because they do that as a workaround.
What's the problem with specular lighting on bump maps? I think I see what the issue can be, specifically, that when the bump map is sampled at less than 1:1 texels-to-pixels, the normal chosen will 'shimmer' between the possible texel values. But can't one use mipmaps for bump maps as well to reduce this?
I do this a lot of the time, too. And I believe it is ridiculous to have to do it...
Yes, I'm grateful that browsers today make it easy to pwn the DOM.
And it is disgusting that this is often necessary to avoid spammy website behavior. People who don't understand browser internals are forced to contend with popups and forced registration to unveil content indexed by search engines.
Yep I'm looking at you Quora.
Thanks for forcing my mother to register to read the text promised by her google query. Long live ExpertSexchange 3.0.
Yeah, the Google query text is an oft-broken contract.
I presume there is some German word that describes the feeling of searching, seeing what you want right there, bolded in the result text, and doing a CMD-F to search for it on the page, only to hear the plaintive beep of 0/0 results.
Upvoted you, but the substantive point surely is that Facebook is a bad web citizen for doing this. OTOH every sane person already has a (negative) opinion about Facebook and this is not the forum to air those.
That article is shared to 'Public': it should be visible to non-logged-in users. Not sure what changed — could be an issue with excessive crawlers…
I’m quite convinced that Facebook is perfectly comfortable letting the minority who are not comfortable having an account read a public post -- especially when it’s about engineering. Those are usually on the company blog (without a log-in block); Carmack probably was just using the internal tool and didn’t want to bother.
It’s probably less high on the priority list than “give affordable internet access to 1 billion people”, but in an ideal world, posts that are meant for “everyone” should not be gated.
Unfortunately there is an annoying box covering up 1/3 of the page asking you to log in or sign up. It does not "block" any content, in that you can still read all of it, but it is highly annoying (and I assume purposely so).
Facebook employees generally communicate internal results or opinion with Facebook posts, a lot like this, using a Company-only privacy setting. Posts like this one sound like something he posted internally first, and just reposted, or changed the privacy setting.
I wouldn’t read any more into it than marginal laziness.
Metrics-driven design and development.
1) team gets told it has to increase sign ups
2) Product already too popular to go anywhere meaningful with its content/features
3) Only hope left is to make the experience worse for non-users in the hope of irritating them into becoming users.
> It’s probably less high on the priority list than “give affordable internet access to 1 billion people”,
Both this awful login form and the cynical project to get the 3rd world online via Facebook are part of the same goal.
> as Facebook cozies up closer and closer to Oculus and gang.
My understanding is that Facebook has “geared up” Oculus: added a lot of staff. I would assume that a majority of engineers who are working on VR were originally Facebook hires; it’s certainly the case for things like back-end / scaling.
It’s not the most visible part of the company, but when Mark says publicly that a project is a priority, that means that the team can have a lot of engineers if they need to.
Like it or not, when someone is a public figure, their posts on Facebook get LOTS of views. A large percentage of internet users ONLY read Facebook these days.
Carmack's profile has almost 10,000 followers and several hundred 'friends', each of whom will have been pinged about the new post immediately. Even a popular blog can't reach readers that instantly. So you can see the attraction.
Also, on Facebook there is the concept of 'Notes', which are better for long-form writing and display in their own large overlay div. Not sure why Carmack didn't use it.
I am not a fan of Facebook, but from a technical point of view the platform does have its merits.
The character count of a random sentence ("Don't try to do accurate dynamic shadows on GearVR. Dynamic shadows") puts it at 68 characters. The Elements of Typographic Style Applied to the Web suggests 66 characters per line is ideal: http://webtypography.net/2.1.2.
This is possibly why their textual content is narrow. It's designed to make it more readable.
Your browser window is just too wide. With such an oversized browser window, a substantial proportion of content on the web, perhaps the majority, is going to have an absurdly wide measure to the point of extreme illegibility. I highly recommend narrowing your window; as a general rule it will dramatically improve web typography.
HTML/CSS technology are really bad at giving page authors easy control over precise text layout. If they make the CSS code short and use percentages, then everything will end up too wide in large browser windows, and too narrow in small browser windows. But if they use fixed widths or maximum widths, it’s easy to screw up and break the layout in some browsers.
P.S. The author directly addresses your point on the linked page; did you read it?
“From a typographical perspective, the most appropriate method is to set box width in ems (elastic layout) as it ensures the measure is always set to the typographer’s specification. Setting box width as a percentage (liquid layout) gives the typographer approximate control over measure but also allows the reader to adjust the layout to suit his or her comfort. This website has been designed with liquid layout to afford readers this control.”
> I highly recommend narrowing your window; as a general rule it will dramatically improve web typography.
That will have the side effect of knocking most sites into "responsive" mode, giving you less of a desktop experience than you believe you're getting.
My monitor is 1920x1080, a fairly standard desktop monitor resolution. Narrowing it back to 1024x768 or something will give you a poorer experience on most modern websites.
I do believe this "typographical style" is more relevant for printed material with smaller font sizes.
Carmack: Sometimes you read HN. If you are reading this, please consider writing on a blog or something neutral.
It is extremely disgusting for your readers to have to log in, or else have half of their screen occupied with spam.
There are people out there who prefer not to report everything they read -- when, where, how, and for how long -- to a big multinational associated with the US intelligence services.
"Extremely disgusting" sounds like an overreaction. "Not ideal" maybe, or "not helpful to those of us who don't want to read things in Facebook for privacy or security reasons". But "extremely disgusting" is over-exaggerating the importance of the choice of medium for a text post.
I was just suggesting that people might take the parent poster more seriously if he didn't go straight from 0 to 100 on the pain-meter.
If it was an attempt at humor, it certainly seems like people here didn't enjoy it much.
In general hyperbole for emphasis is fine, but blowing the badness of something out of proportion is a common way that people polarize and shut down otherwise healthy discussions, often unintentionally. HN is a forum to foster discussion, so we should try to avoid doing things that shut it down.
I know it's not an either-or situation -- but in my opinion this is a step up from Twitter. Technical discussions on Twitter are super frustrating to follow. And it's almost impossible to have intelligent conversations in 140 chars.
It is a funny way to phrase that. If you own a meaningful stake in a business, do you call it your company? I think most people would have referred to it as Palmer's company. However, Carmack did have an ownership stake in Oculus.
New project: A script that extracts public posts from Facebook for a given user, and formats them for consumption via RSS, or generates content/markdown to be served by Jekyll/Lektor. I should get on that...