The x86 PlayStation 4 could signal a sea-change in the console industry (arstechnica.com)
130 points by zoowar on Apr 2, 2012 | hide | past | favorite | 99 comments

The rumor that they are attempting to block the used game market through a sort of DRM scheme is disappointing. While I've only ever bought 1 or 2 used games in the past, the act itself is a huge "F* YOU" to consumers and leaves a bad taste in the mouth.

No one likes to feel controlled or compelled to act in a certain way, which is why pretty much all DRM schemes have been broken and why the MP3 became so incredibly popular.

I feel like game producers are making the same argument for the second-hand game market as the MPAA makes for pirating movies: Every used game sale is a lost new game sale and blocking used games is better than innovating or changing the pricing/sales structure.

"I feel like game producers are making the same argument for the second-hand game market as the MPAA makes for pirating movies:"

Actually, they're making the exact same argument. The Entertainment Software Association (ESA), which is the video gaming industry's equivalent of the MPAA, also sponsored PIPA and SOPA.

The gaming industry's involvement in that legislation managed to fly under the radar, for the most part, while everyone was focused on "Hollywood." In reality, Big Gaming and Big Entertainment are strikingly similar industries, with strikingly similar politics.

I honestly lost my faith in Sony respecting their customers when they removed a feature I paid for (PS3 Linux compatibility) from my hardware, then leaked all my personal data through poor (non-existent?) security.

Add attempts at suing reverse engineers for posting decryption keys (mathematically derived, no less) to that list.

I think for me it was when they rootkitted loads of customers' machines by putting malware onto their music CDs.

At one time, book publishers lobbied to make it illegal to sell used books. That was a time in which governments were still able to reason their way to a balance between intellectual property owners' rights and the public interest.

The MP3 format became popular because it offers a good balance between audio quality and file size, which allowed people to share them easily on P2P networks back when broadband internet was more a novelty than the norm. WAV files have been around forever without DRM.

The MP3 format became popular before P2P networks - they were being shared on FTP sites first.

There are different degrees of popularity. The number of people who shared files over FTP and Usenet pales in comparison to the number of P2P users. I think it's fair to say that Napster, and then P2P, made MP3 ubiquitous.

I think Winamp, which predated Napster by a couple of years, was really the turning point that made MP3 popular. Really MP3 made P2P networks popular, rather than the other way around.

My 4 GB hard disk was full of MP3s one or two years before Napster appeared.

It was the same for lots of my friends.

It depends. You also can't resell iOS games, but their price point is also an order of magnitude lower than the $50 of console titles.

The article fails to mention the audience this is good news for: PC gamers. If next generation consoles are x86 based, expect to see future games being more widely available on PC, and, better yet, expect the "best" versions of those games (in terms of graphics, features, etc.) to be the PC versions. The only catch is if the game developers hold back on PC releases due to fears of piracy, but on the whole this probably will still mean many more releases on PC.

That said, most flagship titles (Halo, Metal Gear, Final Fantasy, etc.) will probably stick to a single console due to contractual obligations.

What? The instruction set matters very little. Pretty much all games are written in C++ and last time I checked there are C++ compilers targeting pretty much any architecture in existence. The graphics API is more important than the instruction set.

How can I get a ticket to the magical land of milk and honey you come from, where C++ performs well on all architectures without extra work?

As other posters have mentioned, weird processors (notably freaks like the Cell) can require a lot of knowledge of the instruction set and chip quirks (see http://www.insomniacgames.com/category/research-development/ for a lot of good info on this), even requiring you to throw away your nice virtual object hierarchies and things to make the SPUs happy ( http://www.insomniacgames.com/wp-content/uploads/2011/06/GDC... ).

Also, just because a compiler targets an architecture doesn't mean that it can optimize code for it worth a damn.

EDIT: oh god it may even lie about its program counter ( http://www.insomniacgames.com/wp-content/uploads/2009/08/gdc... )

Apart from the April 1st angle, let's take your point seriously:

Actually, the toolchain for the PS3, with its Cell processor and unconventional memory/processing model, was a major, major pain.

Elsewhere, Itanium's EPIC architecture failed for lack of sufficiently smart compilers.

The Cell processor was very different, architecturally. I can easily see games originally designed for the PS3 requiring significant effort to port to the 360 and PC. (While the 360 used a PowerPC processor that actually reused parts of the Cell design, it was architecturally not that different from typical x86 multicores.)

For video games, the CPU architecture is still very important.

Remember when Halo came out on the Mac, and it was running slow as molasses on higher-performance hardware than what the game was originally written for? That's in part because many of the optimizations that make code run faster on x86 did the opposite on PowerPC. And any code that relied on SIMD instructions would likely have to be rewritten from scratch.

I'm not particularly familiar with the Cell architecture, but based on what I do know I suspect that going between it and x86 would be an even bigger headache.

The graphics API is more important for what? Gameplay? AI? Load times? Physics? Or did you mean rendering triangles, because indeed graphics APIs are very important for that.

I think it is extremely naive to assume that the CPU's underlying architecture and microarchitecture are utterly mundane details abstracted away by the C++ compiler, ESPECIALLY in the case of consoles. Consoles are underpowered by today's standards, and making today's games run on them requires deftly maximizing both the GPU and the CPU, not a mindset of "well, C++ will take care of the CPU stuff". In case you hadn't figured it out, common C/C++ components from Havok Physics down to little old memcpy() and strcmp() are all hand-optimized for their platforms -- sometimes with multiple implementations that use various ISA extensions. Try stepping through them in a debugger sometime.

Additionally, current-generation console hardware has a big-endian byte order, which can make importing resources a pain at times.

Surely, /surely/ this is the least of their concerns. Since when did we get so sloppy as to assume byte ordering in files? When was the last time your PNG loader failed on x86 because the file has big-endian fields?

In the console programming world, cycles are precious. You don't tend to waste time doing things 'just in case', you're much more likely to assume a best-case scenario and engineer things that way.

In terms of endianness, it's not a huge problem - the toolchain normally copes with this, as assets are built individually for each target platform. This is what we did last time I worked on a cross-platform game anyway.

The graphics are indeed a big deal, but C++ compilers and libraries are not unified.

The first Xbox was a Pentium III PC with a GeForce 3 Ti, all on a single board in a nice case (that's debatable, but I liked it), yet I don't think there were more ports than there are today.

I think it's gotten significantly better as time goes on. Going back to the SNES/Genesis days, ports (either direction) would often be an entirely different code base, so it'd be the same game only in spirit.

I also don't recall very many PSX/DC/N64 --> PC ports. MGS and FF7 would have been some of the few that I remember from those days.

It's definitely better today than it was 10 years ago.

Ask EA Sports fans about that. These games were annual PC releases and they have been dropped completely.

Console makers are likely to continue subsidizing the hardware. So we'll just end up having cheaper playstations and xboxes.

I think very few people go out and buy a powerful graphics card. A console is simply a much more consumer friendly product that also happens to be a better bang for the buck in terms of hardware.

If the rumors outlined in the Ars article are true, there will be nothing to subsidize -- the console manufacturers are going for the Nintendo model after having their asses handed to them for 5 years while Nintendo turned a profit on 2-year-old tech from launch.

It makes the most sense for Microsoft to go the compatibility route a la Apple and iOS -- Apple proves it works, middleware companies will love it (Crytek, Epic/Unreal, Unigine), and gamers won't need to skip a beat.

Here is an example, where the OS might lose to a console:

An OS is like the TSA at an airport: it has to check everything, and that slows down the queue of people going through quite a lot. It's not that the airplanes themselves are faster or slower (GPU, CPU).

Not the case on consoles, where this is your responsibility: your game is tested by QA, and later by the console manufacturer itself, until it passes. But there's no "TSA" there (or not to the same extent).

For example (this might be a bit out of date, but it used to be valid at least going back to Windows 2000): you have an index buffer with 32-bit indices. On a console, you know it's your responsibility to make sure the indices are valid, and no checks are done. On a PC, due to security restrictions, the OS must make this check.

Final Fantasy XIII and XIII-2 have been released for both PS3 and Xbox 360. Maybe the game developers are starting to get the upper hand when it comes to contractual exclusives?

It's not that game developers are indentured servants (with the potential exception of first party studios like Naughty Dog or Lionhead, owned by Sony and Microsoft respectively), it's that the console manufacturers give big piles of money to good game developers in return for platform exclusivity.

So it's basically a business case thing, and a matter of leverage. Big, successful studios - say, Rockstar, makers of GTA - have a very strong bargaining position, and they know they can sell lots of units on all platforms. Whereas a weaker developer might be happy to take a boatload of money for an exclusive, as it de-risks the development somewhat for them.

I'd say it would be a bigger boon for non-Windows PC gaming. Especially if the new PlayStation uses OpenGL or a variant that can be easily backported.

The article clearly mentions that the Xbox is still going to use PowerPC, and that the PlayStation may use a modified x86 processor.

And even if the new PlayStation leaves the processor as is, the OS it's going to use won't be Windows -- perhaps some variant of Linux. That may mean more games on Linux (however unlikely), but definitely no new games for Windows PCs.

I think you missed the point. The point is that if the processor is x86-based and the GPU is basically an off-the-shelf AMD GPU, porting code from the PS4 to the PC will be a much easier process than it is today, and will help game makers provide better feature parity between the console and PC versions.

The article fails to mention that the graphics quality of the games made for these consoles could be a lot more advanced than what's available for the PC then, because developers get to write games directly for that specific hardware. John Carmack has said that the DirectX/OpenGL layers can slow down performance by 4x-10x, for example.

Is it really that much? I guess abstraction comes at a big performance cost.

You'll have to forgive my skepticism, but can you link to where John Carmack has ever made such a statement?

What I don't understand is: if the performance is being slowed by 4x to 10x, what is it being compared to? I doubt anyone is actually coding GPU assembly for the complex 3D scenes all the big games require. If there are no alternatives to these APIs for letting game devs create the results they need, then it's a lot like comparing apples to oranges.

John Carmack: So we don't work directly with DX 11, but from the people that I talk with that are working with that, they (say) it might [have] some improvements, but it is still quite a thick layer of stuff between you and the hardware. NVIDIA has done some direct hardware address implementations where you can bypass most of the OpenGL overhead, and other ways to bypass some of the hidden state of OpenGL. Those things are good and useful, but what I most want to see is direct surfacing of the memory. It's all memory there at some point, and the worst thing that kills Rage on the PC is texture updates. Where on the consoles we just say "we are going to update this one pixel here," we just store it there as a pointer. On the PC it has to go through the massive texture update routine, and it takes tens of thousands of times [longer] if you just want to update one little piece. You start to amortize that overhead when you update larger blocks of textures, and AMD actually went and implemented a multi-texture update specifically for id Tech 5 so you can batch up and eliminate some of the overhead by saying "I need to update these 50 small things here," but still it's very inefficient. So I'm hoping that as we look forward, especially with Intel integrated graphics [where] it is the main memory, there is no reason we shouldn't be looking at that. With AMD and NVIDIA there are still issues of different memory banking arrangements and complicated things that they hide in their drivers, but we are moving towards integrated memory on a lot of things. I hope we wind up being able to say "give me a pointer, give me a pitch, give me a swizzle format," and let me do things managing it with fences myself and we'll be able to do a better job.


Yes, in layman's terms: on the console you talk to the hardware as your buddy -- you trust each other, and you get business done more smoothly.

On an OS, there is no buddy, no friendship. The OS (kernel) does not trust you by default, and there is no friendly way to get things done quickly like there is on consoles.

You can't schedule a DMA from user space. You can't batch that many draw calls into one submission. You can't inspect in detail what the system is doing. You can't talk to any device directly; you have to go through the OS.

Now, consoles have moved closer to that model and become "less" friendly in that respect. But then again, security matters much more these days, especially with online games...

The newest Nvidia stuff is offering some of that, though it may be specific to CUDA.

The thing is, the bandwidth mismatch between the GPU and its own memory versus system memory to GPU memory is crazy. There seems to be extra latency involved there too.

As long as we plug a special-purpose video card into an expansion bus, I suspect the best performance will come from manually-scheduled transfers.

Why can't you use CUDA/OpenCL to force updates to specific video memory?

I guess trying to mix DirectX/OpenGL and CUDA/OpenCL would be a pain. One is at a completely different abstraction layer than the other.

Not to mention that I don't know of any GPU manufacturer that releases any of that super-low-level information.

ATI/AMD has released GPU programming manuals and open-source drivers, so presumably the information is out there.

I could be wrong about the open source drivers, but last time I looked into it (about two years ago), the lowest level interface was still a binary blob on top of which the open source driver sat.

As for programming manuals, any of the really low level ones I've seen are about 4 years old now. If you know of any up-to-date ones, I'd love to see them out of personal interest, but I've yet to find any myself.


Where are the pixels?

TV manufacturers are currently delivering 1080P sets in volume and at low prices. Nobody can really get out of a commodity play, and the future of bigger HDTVs is kind of pointless.

Apple is already delivering 1080p content, and the iPad 3 delivers better than that in our hands. (3D is a red herring.)

So what's next? Higher resolution TV screens. IMAX in the home. iPad functionality on the walls.

We are, in my opinion, going to continue to see a steady rise in the resolution of our TV screens (and change in what they do), and the content resolution will need to increase to match. That content will increasingly be delivered over IP.

Apple is rumoured to be entering the TV market, and we know it won't be with a me-too commodity product. So why wouldn't they launch with a higher resolution screen, just as they did with the iPhone and iPad 3? They can control the delivery of media through the Apple TV, and with the iPad already in the lounge gaming on the higher resolution TV is essentially ready to go.

So for me the NextGen gaming devices better launch alongside new TVs (Sony can do this), and with stunningly detailed graphics, or we will rightly yawn at their arrival and stick with our computers, iPads and iPhones.

Heck - if Sony, Nintendo & Microsoft continue with their very slow release cycles for the gaming machines, then the next generation may well be the last - and we'll be driving big screen games using iPads and other tablets.

The future of the living room is absolutely rooted in some form of forthcoming disruption. However, no one has meaningfully disrupted TV since the 80s when cable, video tapes and game consoles all hit mass market appeal at the same time.

If the rumors here are to be believed, innovation among console makers is waning, so you're right in assuming the inevitable TV disruption is coming from elsewhere.

It could come from Apple, but my money is on a startup.

TVs, ultimately, are just monitors. They process a signal that comes from an external source. The first product that meaningfully augments the signal, regardless of source (antenna, cable, satellite, streaming, game console) is the true disruptor.

Tailoring your TV watching experience to your web browsing, social, taste, and purchase histories is where TV will ultimately be disrupted. When you walk into the living room to watch Mad Men, the TV should know it and adjust the in-screen Twitter feed accordingly. It should hear you laughing at It's Always Sunny in Philadelphia and suggest other shows enjoyed by people who laughed at the same joke. No man should ever watch a commercial for feminine hygiene again. No woman should watch a beer commercial that objectifies women again. And when I see Don Draper wearing a slick hat, I should be able to pause the show so I can buy it.

As far as I can tell, the only thing stopping this gazillion dollar disruption is that we can't get signal providers to play well with device makers. Apple has made this work with cell phones, so there's every reason to believe they could do it to TV, but I think a startup that figures out how to augment the signal without the provider detecting and blocking it has a chance to become the next Apple.

When you walk into the living room to watch Mad Men, the TV should know it and adjust the in-screen Twitter feed accordingly. It should hear you laughing at It's Always Sunny in Philadelphia and suggest other shows enjoyed by people who laughed at the same joke.

That sounds awful. The issue at hand when compared to video games is that television and film are not interactive- people have tried time and time again to make them so, but I honestly think that is a mistake. I don't want to read instant Twitter reactions to Don Draper's latest verbal beat-down of Peggy, I want to watch it. There is nothing wrong with television and film being a one-way experience.

It could come from Apple, but my money is on a startup.

I doubt it, simply because television is an extremely expensive medium. The existing players make it more expensive than it should be, but creating TV shows will always cost a lot. That's why the moves by the likes of Netflix into original programming are particularly fascinating.

You're absolutely right that the passive TV peg shouldn't be forced into the active entertainment hole.

However, when I'm watching a sporting event, I find myself looking down at my phone or tablet and looking for reactions from my favorite sports writers - and then I miss a play and I get frustrated. I can't be the only one having this experience.

Also, GetGlue is proving that people want to make their entertainment social - they want to scream what they're watching to all of their friends.

I'm not saying it shouldn't be passive, but it should be doing a lot more than it is. When something on the screen elicits a reaction from me -- be it a need to hear someone's opinion on it, a laugh, or a desire to make a purchase -- the TV should immediately provide an outlet for that reaction without getting in the way of the experience.

I know the startup working to disrupt television delivery. I interviewed with them. Very, very cool folks.

>Tailoring your TV watching experience to your web browsing, social, taste, and purchase histories is where TV will ultimately be disrupted. When you walk into the living room to watch Mad Men, the TV should know it and adjust the in-screen Twitter feed accordingly. It should hear you laughing at It's Always Sunny in Philadelphia and suggest other shows enjoyed by people who laughed at the same joke.

Actually, this just sounds creepy. I want my machines to do what I demand of them, not try to plant ideas in my head.

When you walk into the living room to watch Mad Men, the TV should know it and adjust the in-screen Twitter feed accordingly.

No thank you. This product you describe is abhorrent to me.

No man should ever watch a commercial for feminine hygiene again. No woman should watch a beer commercial that objectifies women again.

The Taliban would approve of that. Except for the part about the beer.

It's not about censorship, it's about targeted advertising. Google doesn't offer up adsense ads for maxi pads because it knows I'm a man.

This scheme speaks for itself.

It doesn't matter what you or I think it's "about".

Yeah, it kinda does, because, you know, its my scheme.

So what?

What matters is the behavior of the system in reality, in the present and in the future, not the intent of the original designer at some point in time.

Here is something scary: a quarter of Netflix streaming in the US is done via the Wii, which does 480p and goes over component (analog) cables at best.


Why is this scary?

What I find scary is that so many people seem to feel dissatisfied with TV at 1080p. I can't make out any pixels at that resolution unless the source material has high-contrast pixel edges (i.e. not camera captured images) and the screen is taking up most of my visual field.

It is scary because all the talk (see the article) is about how 1080p is the bare minimum requirement for the new consoles.

What the Wii/Netflix usage shows is that many people don't care about 1080p or even 720p; they don't even connect the video digitally. Even the audio is analog! (In my opinion, audio fidelity is more important than picture fidelity.)

I think audio is fine in analog. The frequencies involved are a thousand times lower, so distortion isn't a big problem.

> Apple is rumoured to be entering the TV market, and we know it won't be with a me-too commodity product.

I have a feeling they'll stick with the Apple TV. They might pioneer some connective technology to make it easier to plug into a TV, but television refresh cycles are much longer than what I think Apple is comfortable with.

This is a clear departure from the old Sony. Sony used to be all about locking developers and content providers into proprietary platforms, raising platform value through exclusives and discouraging cross-platform development. They started off with cash cows based on standards such as Trinitron and the Walkman, and got cocky after the big success of going it alone with the PS1 and PS2. However, where they were only looking at professional content, Apple one-upped them by lowering barriers for developers, opening the floodgates, and lowering content prices. Is this the next step for Sony reinventing itself under Hirai-san? Whereas other Japanese CE companies are rapidly exiting the consumer business, Sony certainly is putting up a fight.

Wonder what this means for a Steam console, particularly if PC games get more love as a result of consoles being x86. I suppose it's good news in the end, with less time wasted on cross-platform shenanigans and more time spent on making games.

I wonder if Sony is basing this decision on negotiations with Valve.

Think about that possibility for a second. What would be the motivating factors? Sony has gotten the shit beaten out of them relentlessly in terms of multiplayer/online gaming. PSN has been shuttered for weeks at a time. Partially outsourcing this component to a group like Valve would fix many of their problems, but in return Valve would probably want their whole portfolio of content to be accessible.

If Sony goes x86, it'd be a shame to make such a dramatic shift and not capitalize on a content neutral platform.

Sony's already been dabbling in Steam support. This move would not surprise me.

Wouldn't it have to have extensive support for the Win32 and DirectX APIs?

That would mean support from Microsoft (which would make it basically an Xbox) or Transgaming.

Sony has always had terrible developer tools and developer support. Hopefully this might enable some of the tools available on PCs to become available on the Playstation.

Huh? Maybe you're thinking of the PS2 days or earlier. I'm a PS3 developer, and Sony's tools and developer support are better than Microsoft's or Nintendo's. Enough so that we have asked Microsoft to replicate some aspects of what Sony does for the next generation.

Sony's tools support got a lot better after they bought SN in 2005, but it wasn't that long ago that you couldn't do things like source-level debugging on an SPU (I assume you can now). Compare that to something like Pix/remote source-level debugging, which the 360 has had since day 1, and the difference is quite clear.

SPU debugging has been available since at least 2007 going by archived email conversations I have on the topic, and I recall it working even before that.

Sony's tools stopped being annoying around 2007 and have been excellent the last few years. In some ways they surpass what's available on the Xbox 360 -- Tuner and GPAD are more flexible than PIX and the debugger isn't tied to the massive anchor that is Visual Studio.

Will next-generation consoles still use optical discs, or will games be delivered on flash?

I would think producing 50 GB+ flash media for games would be prohibitively expensive compared to optical discs, especially as they can continue to add layers to Blu-ray for more storage. As long as the disc read speeds are faster than the PS3's (2x), I think it's still an advantage.

On a different note, I wish the PS3 would adopt the 360's way of installing the entire game to the hard drive. I can't imagine either Sony or Microsoft will put an SSD in their next-generation console, but a 500 GB hard drive shouldn't be too costly.

The rumors that I've seen suggest that they will support optical disks with an emphasis on on-demand downloads.

A disk is a lot cheaper to stamp and print than flash or some type of ROM would be.

Optical drives are dirt cheap, and a single Blu-ray can hold 25+ GB of data. Try pushing that down a 1 Mb/s home connection.

Discs are cheap, but disc drives are ~$80. (Consoles are sold at a loss, so this comes directly out of profits.) I don't know how big PS4 games will be, but flash storage could cost 25 cents per GB.

But yeah, I think you're right that optical discs will still end up being cheaper.

Why do we need physical media? I'd prefer to download them myself. I usually buy games in the Playstation store if they are available there. Yes, of course, this would mean that you probably can't sell your games after you're done with them, but that's not a big issue for me personally.

You're making several anecdotal assumptions based on your personal situation and extrapolating it to the entire target market for the Playstation 4:

1. Everyone has a fast internet connection to the point where downloading a full BluRay disc of content (~25GB) is a non-issue.

2. Everyone has an Internet connection where downloading up to 25GB of content for a single game will not go over their transfer caps, possibly incurring usurious overage charges.

3. Everyone that will buy a Playstation 4 will be well-off enough that they don't need to resell their games after they have played them in order to afford the next game that they want to play (effectively only paying the difference between the original cost and the resale amount).

It works well enough for Steam.

Consider also that we're talking 2013 onwards. Connections continue to improve.

Not in rural, or even a lot of suburban, America. Telecom monopolies and all that. My mother has 300 kbps DSL, which she has had for a decade, and that's from Verizon, because they have an internet monopoly in her area.

Unless we get reform on who can lay fiber, or some national fiber program in the States, only big cities stand a chance of seeing meaningful increases in bandwidth this decade.

If the new games were made available at a cheaper price up front, I could live without the ability to trade in (at a price point of say €30 vs €50 currently). It might be viable as they aren't sharing a slice with the high street retailers.

Media that are too enormous for HDD storage or DSL transfer seem to be a good way to make piracy less practical.

From the business perspective, we all understand the rumored incremental hardware updates here from Microsoft and Sony; Nintendo ate everyone's lunch for 5 years with last-gen, 3-year old tech[1] that cost little to make while Microsoft and Sony hemorrhaged money until what seems like last year... not to mention the huge technical hurdle for devs and middleware companies trying to reach parity on each platform.

WIN: If Microsoft goes the incremental route with the same Xbox OS a la Apple's iOS evolution, they keep middleware compatibility with larger graphical budgets for teams to play with... EVERYONE likes this, and every dev team that has shipped an Xbox 360 game hits the ground running immediately -- possibly opening the door to dual Xbox 360 and Xbox 720 releases of their games (one with lower-quality textures and rougher geometric assets, the other with the higher-end business) -- I am sweeping my hands across a fairly complex problem here, but you get the point. With compatibility intact, there are interesting things that can happen here that we've seen work in the iOS ecosystem (maybe not for triple-A devs, but for budget-minded games this could work wonderfully, especially as digital downloads and indie devs grow).

LOSS: Sony has to make a clean cut and scuttle the last 8 years moving onto an x86 design, new OS, new middleware all new QA cycles and retraining all the PS3 devs.

WIN: Sony now matches the same arch as the Xbox-next (same CPU optimizations apply and same GPU family) and we can stop getting these damn port-style games that never take advantage of the extra Blu-ray room or cell processing power.

LOSS: No revolutionary boost in performance if the hardware rumors are true. I have no doubt the bump in specialized/tuned hardware will give us real-time versions of the Unreal Engine 4 demos (of that smoking guy that beats up mechs) BUT, at 4K resolution and 120Hz for 3D? Not for every game. I imagine we will see something akin to what we have now, with most games running at 720p and the few rare tuned ones actually running at 1080p. I would guess next-gen will offer 4K resolution on some titles and 3D on others while running at 1080p, or some such combination. Obviously if it is a simpler game like a Wave Race, we'll get both, but for Gears of War 15 and Mass Effect 32, I doubt it.

WIN: Faster game dev cycles means we get some more impressive titles on the improved hardware/experience sooner. Don't have to wait 2 years post-launch for the first real good game.

LOSS: Next-generation consoles are going to be engineered to be all-in-one entertainment hubs... TV cable tuning, DVR, streaming, apps, social, games, mobile tie-ins, etc. etc... this will make jumping between consoles harder as you stake your claim in the Microsoft or Sony (or possibly Nintendo -- yet to be seen if they can pull off a network) camp and build up your life/existence within their walled garden. This generation it was nothing more than rep points; next generation (especially Microsoft) will be throwing every Facebook-esque psychologically engineered trick in the book to keep you ON their platform, consuming content through their channel with your credit card in tow, sharing your pictures with friends and building your character's rep from game to game to game, movie to movie and app to app.

I sincerely doubt we (the harder-core folks) will have all 3 next-gen systems this time around; most of us will have just 1 (probably Xbox-next), as the cost of flipping between them will be bigger than just powering on Console X to play game Y for a few hours.

LOSS x2: Double the resolution on the Kinect-next, rumored eye-tracking, multi-person tracking and more accurate voice controls? I can't even imagine what is in store (e.g. "You don't seem to be enjoying this episode of Lost, would you like to skip to the next episode or can we recommend Battlestar Galactica?" -- say "Start Mass Effect, invite Scotty32, Soldier load-out" -- experience {episode of Law and Order, McDonald's Billboard in background... notification popup...} "Just emailed you a buy-1-get-1-free BigMac coupon for McDonalds! The nearest one is 2 blocks from you... yummy! Say 'Pause' to pause what you are watching and go grab lunch!")

You get the idea...

[1] http://en.wikipedia.org/wiki/Wii#Technical_specifications

No one's lunch was eaten by the Wii. I think by now it's pretty much been concluded that the Wii was a complement to the PS3/Xbox demographic (that is, people interested in PS3/Xbox also bought a Wii), and that the Wii itself brought a lot of new players into the console gaming world. Not much, if any, actual cannibalization of sales occurred.

Besides ignoring game sales, that also ignores the fact that the Wii sold the most during the early part of its release. This year the Xbox 360 and PS3 are on pace to massively outsell the Wii, and have been doing so for a while. This is the part of the cycle where hardware sales are most profitable, so who really has the last laugh?

Besides, with the PS3 it was mission accomplished for Sony, as they used it as a large bargaining chip to win the Blu-ray format war.

Looking at 10 year stock trends, you can see the real story. Nintendo had a massive stock surge after the release of the Wii, but now they have fallen to pre-Wii levels. Sony has been steady throughout.

Wii sales were always profitable, because Nintendo did not subsidize the Wii. More importantly, sales late in a console's life cycle are worth a lot less, because the companies get a cut of game and accessory sales, and someone who has a console for longer probably buys more games, or at least more new games. People who buy late have a lot of great cheap games to keep them entertained, whereas someone who had the console from the beginning has probably already played those games and is therefore interested in new titles.

Which is why the subsidy decreases over time.

Well stated, and hints at what I think a lot of folks are surmising: the platform has become more important than the box.

For generation after generation, console manufacturers basically asked us to give up the old box -- and our entire library with it -- and move onto the next one. Developers have to be retrained, consumers have to be resold, and all at a higher price point.

This was all well and good until things like XBox Live started becoming integral to XBox 360, and all the services attached to the online experience became central components of the total experience of ownership.

This is, indeed, the Apple iOS model. The devices might get incremental upgrades every so often -- and it's not inconceivable that we might start seeing incremental yearly refreshes on XBox and PS systems someday, the same way we do with iPhones and iPads. But the core OSes and platforms will remain fairly stable.

I disagree with you on LOSS x2: Kinect v.next.

Future generations of Kinect will bring astonishing changes to the world of gaming. You assume the new capabilities will be used for "malicious" ends, which is possible, but that's a bad way to judge a technology. That's like saying the next-gen Intel CPUs will be 2x faster, so people will use them to spam twice as much or crack passwords twice as fast.

The fact of the matter is Kinect will continue to shape the future of gaming. The hardware is just not good enough for hardcore gamers currently. Once that barrier is gone, I think it can create a much more immersive gaming experience that would benefit all gamers.

"Once that barrier is gone I think it can create a much more immersive gaming experience that would benefit all gamers."

I wonder about this. I think the lack of tactile feedback, as well as the higher-intensity play style, will never sit right with all games or gamers, but I do think there will be genres that become synonymous with the Kinect and become better than they ever were--and more than dancing and fitness. What I hope is that developers recognize this and don't just shove Kinect down our throats, but use it when it makes sense to do so.

There should have been a winky-face after that heading; you are absolutely right that the tech will take us forward in fascinating ways, I was trying to be somewhat snarky about some of the lesser-benefits-to-life enhancements we will see; surely there will be brilliant applications as well.

When you say Nintendo "ate everyone's lunch for 5 years," do you mean in sales or in profit? My impression is that XBox 360 and PS3 are much more popular consoles (but this is just based on anecdotal evidence).

Both actually, hence the eating of the lunches:

  Global Sales
  1. Wii ~95 million[1]
  2. Xbox 360 ~66 million[2]
  3. PS3 ~62 million[3]
Nintendo has made a profit on Wii hardware sales since launch, roughly $50 on average across the US, EU and Japan[4]. It took Sony until 2010 (the tail end of 2009) to become profitable on the PS3's hardware[5], while Microsoft most likely hit profitability in 2007 (2006 was a wash[6], with 2008 showing profits for the entire division[7]).

Given that the Wii wasn't much more than a 50% performance bump over the GameCube[8], which launched 5 years prior, it was amazing how such a conservative approach with an ingenious, untapped interaction mechanism sold like it was going out of style for years.

In a lot of ways the Wii was a lot like the first iPhone -- they did something no one else had thought to do, did it well enough that even hardcore gamers thought it was cool and made it accessible to everyone.

I agree that (especially) in my circles the Wii is a paperweight, but I have a feeling the market it sold so ravenously to was brand-new gamers and non-gamers... people I don't interact with on a regular social basis, which is why the Xbox seems so much more popular to me.

I don't think Microsoft wants to be outdone by Nintendo again, especially since it is clear MS knows how to do the software side better than anyone, and Sony cannot afford not to follow suit... I am actually excited because all these next-gen devices LOOK to have AMD platforms in them (CPU/GPU), which will make middleware normalization more formal and hopefully make squeezing life out of the platforms even longer and more effective than the current cycle (which blew my mind with how long it lasted).

  [1] http://en.wikipedia.org/wiki/Wii#System_sales
  [2] http://en.wikipedia.org/wiki/Xbox_360#Reception_and_sales
  [3] http://en.wikipedia.org/wiki/PlayStation_3#Sales_and_production_costs
  [4] http://askville.amazon.com/big-profit-margin-Nintendo-Wii-rivals/AnswerViewer.do?requestId=31918760
  [5] http://www.joystiq.com/2010/05/13/ps3-proved-profitable-in-last-sony-fiscal-year/
  [6] http://www.techspot.com/news/23612-microsoft-makes-tiny-profit-on-xbox-360-hardware.html
  [7] http://www.mcvuk.com/news/read/xbox-celebrates-profitable-year-as-360-hardware-sales-rise/015046
  [8] http://www.pvcmuseum.com/games/vs/nintendo-wii-vs-gamecube-specs.htm

I personally bought a Wii, because it's not just a PC with a joypad. Most of the games for ps3 and xbox360 come out on PC, too and if need be, I can connect any controller I like to my box. The Wii however, was something new and different. They pushed things in a new direction and I really liked it. Prior to that, the last console I bought was a Sega MegaDrive.

Find someone with an Xbox and play Braid. I think it might be available on Steam now too. What a game...

It's been on Steam for years; I bought it long ago.

And while it's a great game I don't think this brings anything to the discussion (at least unless you elaborate).

You are ignoring game sales; the average Xbox 360/PS3 user would have bought many times more games than the average Wii user.

Please provide some reference links -- without them, comments like this are too easily influenced by personal experience (what you, your friends and your social circle see/do, versus everyone in each country).

The PS3 and 360 are, without a doubt, used on a more consistent basis by their owners compared to the Wii. But in terms of sales, Nintendo blew everyone else away this generation, in both the console and portable markets.

Seems like good news for PC gamers, bad news for nVidia, though. I don't even see the risk of the fast-moving hardware development cycles we had in the past, since PCs won't need to catch up to consoles in terms of performance.

My only remaining hope is that PC gamers won't have to cope with console ports compromising gameplay. But maybe that's more down to market share than architecture -- still, hope dies last!

I have to say I'm rather disappointed in arstechnica right now. This article is nothing more than a riff on a rumor (read the first 2 paragraphs carefully). I think that's wholly irresponsible on their part.

Classic 1st April :)
