Spec analysis: XBox720 vs PS4 (eurogamer.net)
84 points by Strom on Feb 18, 2013 | 73 comments



Why does the Xbox have HDMI in? My only guess is that they are getting into the business of HDMI switching, so the Xbox can sit behind any DVRs or other media boxes you have. If they do this, they might have an interesting remote control to unveil (either physical or app-layer).


Funny coincidence: the guy who originally cracked the original Xbox's DRM also implemented a way to overlay graphics on an HDCP-encrypted HDMI stream without decrypting the source video (and thus claims to be non-infringing of the copy protection).

http://www.bunniestudios.com/blog/?p=2117


Would it be worthwhile just to get away from a shared bus (USB) for the Kinect->Console connection?


There is a dedicated Kinect In though.

It seems more likely that they are trying to push the Xbox towards being a media hub, and to get people used to always consuming media through their Xbox.


You mean sit in front of my cable box and let me play games between commercials ;)


That's actually a very good market if you think about it. I usually pull up a game of Ruzzle or Words w/ Friends on my phone during commercial breaks. If Microsoft could feed you content this way, it might be a good moneymaker for them.


It would piss off a lot of advertisers, though.


Or they could work together: beat this new level and earn a coupon to try new SURGE™!


Who'll promptly switch to (or add) advertising on the Xbox, which sends more cash Microsoft's way.


I don't even need to think about it. That would be HUGE.

The biggest problem with consoles right now is what I call TTP (time to play), which is the time between switching the device on and playing a game. For the old consoles, e.g. the SNES, it was seconds. For today's consoles it is minutes. The iPad is back down to seconds.

As people get busier, the ability to bust out a quick five minutes of gameplay is going to be so important.


Targeting the short-attention-span, immediate-gratification market is definitely a win.


Given how common multi-platform releases have become, I think it makes sense to just get behind x86.

Sony had quite a bit more powerful hardware in the last generation, but when I had the opportunity to compare games side by side I didn't notice a difference large enough to prefer one console over the other. Most likely the extra headache for developers and the higher cost of developing their own hardware (in partnership with IBM last time) are leading Sony in a different direction this time around.


My friend in gamedev explained it to me once. The problem with the PS3 is that it's a somewhat esoteric platform. The GPU there is worse than in the Xbox 360, but there are 6 SPUs. On the other hand, writing for them is far from a pleasant task. You have to do manual fiddling with DMA: a slow bus, async requests to a 64k local store (IIRC) where you must send everything in and get results back via DMA some time later, a 64k load max, so you must do partial screen updates, syncs, etc. That makes the whole execution path on the PS3 quite different from other platforms. To see good-looking PS3 games, you have to look at games developed exclusively for the PS3. PS3 ports always look worse, because no one devotes the 2x or more extra time it takes to write "proper" renderers for them. The mid-level PSGL isn't recommended by Sony itself; you're supposed to use GCM, where you manually put bytecode into a command buffer in the GPU's memory. And for the SPUs, assembly is the way to go...
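For a sense of what that manual DMA fiddling looks like, here is a minimal double-buffered read loop written against the Cell SDK's MFC intrinsics as I remember them (spu_mfcio.h). Treat it as a sketch: the chunk size, the alignment, and the assumption that the data length is a multiple of the chunk size are all simplifications, and results would normally be DMAed back out with mfc_put in the same style.

    /* Sketch of SPU-side streaming: while one chunk is being crunched,
     * the next one is already in flight from main memory. */
    #include <spu_mfcio.h>

    #define CHUNK 16384   /* one DMA request; assumed to divide 'total' evenly */

    static volatile char buf[2][CHUNK] __attribute__((aligned(128)));

    void process_stream(unsigned long long ea, unsigned int total)
    {
        unsigned int cur = 0, offset = 0;

        /* Prefetch the first chunk from effective address 'ea', tag 0. */
        mfc_get(buf[0], ea, CHUNK, 0, 0, 0);

        while (offset < total) {
            unsigned int next = cur ^ 1;

            /* Kick off the fetch of the next chunk before we block. */
            if (offset + CHUNK < total)
                mfc_get(buf[next], ea + offset + CHUNK, CHUNK, next, 0, 0);

            /* Block until the chunk we are about to use has landed. */
            mfc_write_tag_mask(1 << cur);
            mfc_read_tag_status_all();

            /* ... crunch buf[cur] in local store here ... */

            offset += CHUNK;
            cur = next;
        }
    }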

tl;dr: the PS3 is not the most developer-friendly platform, you could say...

But it has its own potential. Check out Naughty Dog's Uncharted. The difference between U1 and U3 is huge. Same hardware, years of development: a good engine that leverages the potential of complex underlying hardware takes a lot of time.


The Xbox 360 is actually dramatically faster than the PS3 [1] where it really matters: the GPU.

    Triangle Setup
    Xbox 360 - 500 Million Triangles/sec 
    PS3 - 250 Million Triangles/sec

    Pixel Shader Processing with 16 Filtered Texels Per Cycle (Pixel ALU x Clock) 
    Xbox 360 - 24.0 Billion Pixels/sec
    PS3 - 16.0 Billion Pixels/sec 
The only reason people think (to some extent correctly) that the PS3 is more powerful than the Xbox 360 is the PS3's Cell processors. If you sum up the PS3's total FLOPS, the Cell processors do give it a total advantage in FLOPS. But this is very misleading: obviously FLOPS are only useful if the architecture enables you to put them toward what you actually need.

Almost nobody used the "cell processors" to their potential initially, and for very good reason [2]. The bottom line is that they were incredibly difficult to program in any way useful for graphics. But more than that, they are limited in very specific ways [3]: they won't increase triangle throughput, and they don't directly increase pixel throughput.

In some extremely rare cases, game engines found a way to use them for graphics (like Battlefield 3's Frostbite 2 engine, for the deferred lighting passes). But by the time engineers found a way to leverage this extremely complex architecture, it was already too late. If you actually look at most Xbox 360 vs PS3 games side by side, the Xbox 360 often looks much better.

In this case, though (the next-gen Xbox and PlayStation), the situation is reversed: Sony's GPU is legitimately faster, and by a large margin. Not only that, but having a full 4 GB of fast RAM is a HUGE advantage for Sony. High-performance 3D rendering is inherently a bandwidth hog, because everything on the screen must be pushed through the pipes every single frame. Having a fast 32 MB cache doesn't really help you if you want to consistently and smoothly render more than 32 MB of content on the screen. And just as the last-generation PS3 had an overly complex architecture that made it difficult to reach its full potential, it seems Xbox and PlayStation are swapping roles there as well this generation.

[1] http://forums.gametrailers.com/viewtopic.php?f=23&t=7947...

[2] http://www.videogamer.com/ps3/saints_row_2/news/two_or_three...

[3] http://stackoverflow.com/questions/1355827/what-does-program...


None of the consoles was ever used to its full potential. The Xbox 360's CPU wasn't really used to its potential either, if you look at its theoretical numerical throughput compared to the actual throughput.

The GPU in the PS3 isn't as powerful as the Xbox 360's GPU, but there are a number of great libraries available that permit augmentation of the GPU with the SPUs.

From experience, architectures that are designed first for the PS3 run very well on the Xbox 360, but architectures that are designed for the Xbox 360 (or, worse, the PC) will not perform well on the PS3, which punishes you for pointer chasing. The tile-based lighting in BF3 that you mention is interesting because the Xbox 360 CPU did not have enough horsepower to crunch the lighting data, so they had to leverage the GPU to augment it. This was challenging because the Xbox 360 GPU was a DX9.0c+-era part and wasn't intended for GPGPU computation. Colin Barre-Brisebois spoke at SIGGRAPH about how this was done: http://publications.dice.se/attachments/BF3_NFS_WhiteBarreBr...

I really like developing for the PS3. It's challenging to code for initially but the rewards are definitely there if you try.


>Almost nobody used "cell processors", and for very good reason.

Seeing that you edited in a citation from a Volition AP: I guess if you trust sources like that, there is no argument possible.

>In some extremely rare cases, some game engines found a way to use them for graphics (like Battlefield 3's Frostbite 2 engine towards the deferred lighting passes). But by the time engineers found a way to leverage this extremely complex architecture, it was already too late.

You mean like 2007's Uncharted?


It's too late in this cycle to be a fanboy. It is very well established that the Cell processors are bandwidth-starved and hard to feed. Not impossible, but legitimately hard. This is an architectural fact, not something to be debated anymore.

The PS3's internal architecture was poorly balanced. This hasn't really been a secret for a long time, for anyone who can take off the hype goggles. It sounds like the PS4 will not have this problem. It will be interesting to see if Microsoft makes the same mistake; it really won't be a console-gaming win to stick 8 GB of slower RAM in the box, then suck away 2 or 3 GB for other purposes. (Probably still won't be as poorly balanced as the PS3, though.)


Each console has its strengths and weaknesses. Working on multiplatform games, I find the 360 version is, more often than not, CPU-bound while the PS3 version is very often GPU-bound. But in reality there's very little in it. They're both horribly memory-bound, and the 360's Achilles' heel is actually the lack of disc space.

But look at something like Halo 4. It's had all of Microsoft's drive behind it. Pretty, yes. Significantly better than anything the PS3 could do? Not really, no.


It's hard to code for the SPUs. Game programming in general is hard. Yet all first-party studios and most high-profile third-party ones used SPU code, either their own or from the libraries that Sony provides.

And I agree, the PS3 was not a nice architecture compared to the 360. The PS2 was not nice compared to the original Xbox either. However, that just doesn't justify idiotic claims like "nobody used the SPUs", etc.


Given that all x86 processors spend non-trivial energy translating the x86 ISA to an internal RISC format, it would be a competitive advantage to expose the "raw" internal ISA, which game developers would target if it could buy them 2% more performance. Standardization does not really work for games in the same way that it works in most industries.


But it's that translation that makes x86 so successful. If you remove that layer, you also lock yourself into whatever internal instruction set is used.

Even this being a console, it's still useful to be able to change the internal ISA in a future revision, without breaking compatibility.


It's a shame the PS3's Cell failed so badly. In terms of theoretical general-purpose performance per transistor it was a great design, but it was stuck in a no man's land between general-purpose CPUs and GPUs, each of which is more specialized. However, you can see some of the Cell's ideas live on in the Xeon Phi and elsewhere.


Eh. It failed because it didn't solve problems devs actually had, and made many formerly easy things harder. Most technical decisions made for political reasons end up in the same bucket.


Agreed, but the point was not to make developers' lives easier. If it had been, they would have put a monster multicore x86 CPU in the PS3. The idea was to get maximum performance for minimum hardware cost (that is, a minimum number of transistors) even if it made developers' lives harder, both because the console market typically involves an initial subsidy of the hardware by the manufacturer and to minimize the need for cooling hardware in the machine.

And that is why it was so hard to program--things that the hardware used to do for you, you now had to do in software (manual DMA, caching, coherency). If you were smart, you could do it better than the hardware implementations (special purpose vs. general purpose), or at least as well. In this respect it was like a GPU--but people are more familiar with how GPUs work, and they dominate their restrictive niche quite well already. There was no niche to fill between hard-to-program, special-purpose, graphics-specialized GPUs and easy to program, general-purpose, transistor-wasting traditional CPUs outside of, say, physics engines (and supercomputers). The Cell was neither special purpose nor general purpose.

Sony bet that super-smart game developers would create great things out of hard-to-program machines like they always had in the past (e.g. the PS1 and PS2)--after all, the machine had a ten year lifecycle, so developers would eventually have the skillz. However, the return wasn't worth it, for the most part--gamers care about graphics, and that wasn't where the Cell could add a lot of value due to the unbalanced architecture of the PS3 and the amazing rate of progress made by GPUs. So the Cell was used for tasks that a general purpose CPU could do better (if less efficiently), but it was still harder to program than a GPCPU, and it was overkill for CPU tasks (i.e. the game engine) which don't require a lot of horsepower in a game, and don't expose much parallelism.

However, Intel is targeting the Xeon Phi at areas the Cell was good at (lots of parallelism, lots of unpredictable branching--mainly supercomputing), albeit with hardware which is slightly easier to program. I think Sony hit it slightly off the mark, trading a bit too much complexity in software for savings in hardware. There is definitely a sweet spot. They would have been better off with a more powerful CPU and fewer SPEs, or SPEs which were slower but had a real cache.


I've never programmed one, but I'm curious: which tasks did it make harder?


How did AMD get both of these? Is it because they can offer CPU/GPU combos, which is more attractive than trying to combine solutions from Intel and Nvidia? That would be good; but what worries me is that they are so desperate that they have offered an unsustainable price. Presumably there's background to this I've not seen. What are the rumours?


I recall reading that when Microsoft selected IBM and AMD to design the CPU and GPU respectively for the Xbox 360, they wanted (and received) the IP so they could produce the chips themselves. The original Xbox was dependent on an Intel processor which they did not have the same rights to and could not, for example, build on a smaller or cheaper process as the console's life ran on.

I'd imagine that combining that with the fact that the market for powerful number-crunching processors is narrower now that IBM has taken itself out of the consumer processor game would mean they don't have many vendors to choose from for the CPU/GPU. ARM processors are coming along, but I think they are still a ways off from the speculated performance of what is going into the next generation of consoles.


So was Intel even part of the tendering process for this new generation of consoles? Or could AMD jack up their margins knowing they were the only option?

As a PC gamer, I'm glad both consoles are going with x86 and AMD. I'd assume (please correct me) this will mean more console ports of higher quality for the PC given the shared architecture.

Secondly, I presume these massive contracts will keep AMD's CPU division in business for a while. It's been a long time since AMD had CPUs on par with Intel's performance, but at least they'll remain in the desktop market space.


I thought the problem with the original Xbox was the nVidia GPU, and that it was nVidia that refused to do a process shrink.


It is rumored that Nvidia felt burned by the contract on the Xbox and they didn't want to bear the risk of a shrink.


My guess is because of the lower price, especially on the CPU side. Both Sony and Microsoft know this is not the golden age of consoles anymore, and no matter what they add in them, people are not going to want to pay a lot of money for them.

I think Nvidia won the design for Valve's own Steambox, though. Not sure what CPU they're using there. Maybe Intel? I have a feeling Valve won't care as much about the price, considering you get much cheaper games on Steam than on PS and Xbox, and they'd rather go for higher performance and launch at a later date, too.


Would it not make sense for the Valve Steambox to run on ARM, since they're cooperating with Nvidia (and just launched Steam for Linux)? I thought the Nvidia Tegra 3 was considered quite the powerful chip.


Porting games becomes much more difficult when things are running on a different architecture.


Is that an issue for modern games / engines? I would think most people who drop down to assembly also have a C level fallback - and people have been porting games to x64 from x86 already?


> Is that an issue for modern games / engines? I would think most people who drop down to assembly also have a C level fallback - and people have been porting games to x64 from x86 already?

Language doesn't matter so much as the API - you can write against DirectX for both the Xbox 360 and the PC, making appropriate changes. "Porting" to/from the [for example] PS3 is a much bigger ordeal.


Right, so since they're already running Steam and several high profile games on Linux, going for ARM should be aaaaalmost as easy as a recompile?


The ARM/Linux world is somewhat fragmented right now. There's vanilla ARM/Linux/glibc which has no drivers (vanilla Linux cannot use Android drivers). There's Android, which has drivers but has completely different APIs than desktop Linux. Then there's franken-Linux which tries to shim desktopish APIs on top of Android drivers. Which one should Steam target?


It's not like people are installing Linux on Android tablets all over the place and Valve needs to support those users. Valve doesn't need to worry about those devices. They only need to worry about the ones coming out with that Nvidia chip, for which Nvidia will definitely provide the drivers. So what you talked about is completely a non-issue here.


Then there's a question about how many months or years it would take Nvidia to definitely write those drivers. But yeah, if Valve wanted to lead the development of a new ecosystem at any cost I guess they could do it.


What's interesting is that, by desktop x86 standards, CPU power in these consoles is going to be rather anemic. AMD's "Jaguar" cores are optimized for power, not performance, and at only 1.6GHz max single-threaded performance (which is still important in games) is probably going to fall short of even what today's base-model 11" MacBook Air is capable of (a 1.7GHz i5, and the i5 core has a lot higher IPC than any core of AMD's). And that's at launch; how will these CPUs compare to laptop/desktop CPUs just a few years after launch? Not well.

But the design will allow for small, sleek and quiet boxes.

It's clear that the competition they are really targeting here, from a hardware perspective, is tablets, Apple TVs and anything ARM-based. Those AMD cores will still wipe the floor with any current ARM designs, especially when coupled with an AMD GPU unit. The designers of both platforms have apparently decided that PC gaming isn't worth competing with; it's device-based gaming (and Google's and Apple's game/app stores) that they are targeting with this form factor.

Meanwhile the gap between what a PC can render and what a console can render is only going to get a lot wider this generation.

Or maybe we all just need Nvidia's GRID solution to become reality and disrupt the whole model (e.g. end the hardware race by just putting huge racks of GPUs and CPUs near the edge of the cloud and running your games as VMs on any handy screen or device).


If I'm not too confused, historically consoles have always had an edge at launch, even if small and specialized. This time I see none of that; it's off-the-shelf current hardware. Feels weird. AMD platforms are good on heavily multithreaded code, so I hope game devs will be able (with the help of a nice lib or not) to take advantage of that.


Their GPU will most likely trounce that of the 11" MBA. The focus in consoles is heavily on the GPU, and that capability will be put to good use.


Aren't those AMD cores even slower than Atom? How will it "wipe the floor with ARM chips" when Cortex A15 is significantly faster than Atom?


Brazos is already faster than Atom, so Jaguar should be much faster than Atom.


Which one allows me to sell my games when done?

That's the only spec I care about.


I'm of two minds. Steam doesn't allow for resale, but they also don't have mandatory price floors. I can't imagine that anybody at Microsoft or Sony really thinks that the future lies in ever more expensive "AAA" titles with no outlet for actual price discrimination. They've got their eyes on iOS and Android (both corrosive markets for a whole different reason).


Performance specs don't sell games. Look at the Wii.


The Wii doesn't really sell games either. The attach rate for the console is poor in comparison to its competition. Even though the Wii sold 30% more units worldwide than the Xbox 360, it only sold 13% more games worldwide. In North America, the biggest market for console games, the total number of Wii games sold is roughly equal to the total number of Xbox 360 games sold.

Title sales for the Wii: http://www.vgchartz.com/platform/2/wii/
Title sales for the Xbox 360: http://www.vgchartz.com/platform/7/xbox-360/
Console sales for the Wii: http://en.wikipedia.org/wiki/Wii#Sales
Console sales for the Xbox 360: http://en.wikipedia.org/wiki/Xbox_360#Reception_and_sales


He's talking about the possibility that one or both may prevent used games sales by tying games to the console account they're first used on.


The Wii U is, so far, a huge failure, probably for many different reasons (the name is a big one), but one of the main ones is likely the poor specs: it's just slightly faster than the old Xbox 360 and PlayStation 3.

If it were a "real" next-gen console with specs similar to those of the upcoming Xbox and Playstation, instead of just essentially being a last-gen console with a fancy controller, a ton of people would care.


What the fuck are they thinking with that name? Wii as well, for that matter.


The Wii has a low game attach rate compared to XBox360/PS3. People buy the Wii for its gimmick and maybe some Mario, but stop there.


To add to that point, the Xbox is now selling twice as many games as the Wii (which has a much larger installed console base).


I'm sort of sad they didn't use heterogeneous cores for these. 2 Steamroller threads with 4 Jaguar threads would be pretty interesting. Given that game developers would know the core count, they could just lock threads to the appropriate core, which they do anyway. Same shared memory space as normal, unlike the Cell's SPUs, but you get throughput when you want it or relatively high single-threaded performance when you want that.
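Pinning is trivial when you know the topology up front. Here's a rough sketch of that kind of thread pinning, assuming a Linux/glibc-style pthreads environment purely for illustration; console SDKs expose their own (different) affinity calls, and the core index here is arbitrary.

    /* Pin a worker thread to a fixed core, e.g. one of the "throughput"
     * cores in a hypothetical Steamroller + Jaguar layout. */
    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>

    static void *worker(void *arg)
    {
        /* Hypothetical workload: in a game this might be a job-queue
         * consumer dedicated to animation or particle updates. */
        (void)arg;
        return NULL;
    }

    int main(void)
    {
        pthread_t t;
        cpu_set_t set;

        pthread_create(&t, NULL, worker, NULL);

        CPU_ZERO(&set);
        CPU_SET(2, &set);   /* arbitrary: core 2 */
        if (pthread_setaffinity_np(t, sizeof(set), &set) != 0)
            fprintf(stderr, "failed to set affinity\n");

        pthread_join(t, NULL);
        return 0;
    }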


Weird that they don't mention Gaikai... I think that's going to be the real edge for the PS4.

Of course it depends on a fictional and amazing future network, but considering all relevant information I think that it will most likely be the winning edge for the PS4 in the next decade.

Sony is going to fucking print money when they have a subscription for a streaming gaming service.


The rumored PS4 specs are total overkill for Gaikai. I could imagine Sony using Gaikai for game demos and PS2/3 backwards compatibility, but not "mainstream" gaming.


I'm really disappointed that these consoles are still fighting an arms race. I was hoping these boxen would really take to the cloud and finally figure out streaming, but I guess we're just not there yet.


Why? It seems like a completely ridiculous idea (latency-sensitive, incredible waste of bandwidth). I honestly can't figure out what the purported benefit is.


But it's the cloud! /s


It's simply not feasible for a majority of countries and their broadband. Also, you would need to design games with lag in mind, because otherwise there's noticeable lag even at 40 or so ms. You'd need the client to be doing some feedback work.


Game developers will almost always take more memory over faster memory, as long as the slower one's bandwidth is still sufficient.

Bandwidth of storage is abysmal by comparison.


Not really:

http://forum.beyond3d.com/showthread.php?t=62108

>Usable memory amount is very much tied to available memory bandwidth. More bandwidth allows the games to access more memory. So it's kind of counterintuitive to swap faster smaller memory to a slower larger one. More available memory means that I want to access more memory, but in reality the slower bandwidth allows me to access less. So the percentage of accessible memory drops radically.

In this case, if the next Xbox really is at ~60 GB/s and the PS4 at 200+, that's a much larger disparity than in the current generation. For perspective: at 60 fps, 60 GB/s bounds you at roughly 1 GB of data touched per frame, versus over 3 GB at 200+ GB/s.


I thought Sony was going to use the full OpenGL API for PS4?


It is an ongoing myth that consoles use OpenGL and only Microsoft is the bad boy.

The only console that had some kind of OpenGL support was the PS3, with OpenGL ES 2.0 but using Cg instead of the OpenGL ES Shading Language, which most developers did not use anyway, preferring libGCM.

The others just have an OpenGL-like API in terms of how they work, but that is all.

Most game studios don't care anyway, because what is important is to make a game, regardless how.


I know the PS3 doesn't really use OpenGL ES 2.0, but I heard a rumor they might use some native OpenGL 4+ for the PS4. We'll see in a couple of days, I guess, if they even intend to give out that information now.


Ah OK, thanks for pointing it out.


IIRC they've exposed access to OpenGL on the PS3 but nobody uses it due to performance/feature set issues.


There doesn't seem to be anything official about how it's implemented under the hood, but the widespread assumption is that OpenGL on the PS3 is an emulation layer on top of the native graphics API, so performance tweaks are easier if you use the native API directly.


If I am not mistaken, even the engine provided by Sony in their SDK does not use it.

FireEngine if memory does not fail me.


PhyreEngine


Thanks!


I think that's because they added OpenGL ES 2.0 on top later, to make it easier for developers, but developers just preferred to use Sony's own custom APIs.


Microsoft should advertise their next system to the audience they have and not the audience they want. They should start by calling the system the Xbox 420 instead of the Xbox 720.



