Status of the project is that it's coming along well, after a long period of inactivity. Many games get to title screens, though they don't draw much. I'm really hoping AMD gets its shit together and releases Mantle soon, as that will make emulating the GPU-related things significantly easier.
Hopefully you are not in the US, but they are pretty embedded in every country at this point.
If your feelings are hurt by someone not wanting to get hammered with idiotic requests/e-mails about a project that is very clearly in alpha, you should maybe reconsider your ISP payments.
I can understand the tone in the README.md because a lot of 'gamers', even the technically oriented ones, expect a lot from the word 'emulator'.
Different GPUs work very differently internally, and the 360 GPU is probably vastly different internally from what's in the Xbox One, despite both being made by the same vendor. The amount of game-specific GPU optimization, the enormous EDRAM bandwidth, and everything else that has to be reproduced for emulation to work is staggering. That even ignores the performance characteristics of the Xbox 360 CPU - it outperforms even the highest-end Haswells today in very specific scenarios (and Microsoft has no hope of emulating this on the weak Xbone CPU). Props to the OP for starting such a daunting project.
If you look at Dolphin's system requirements, the Xbox One does not quite have the power to emulate even a GameCube - a console that, like the Xbox 360, has an IBM PowerPC CPU and an ATI/AMD GPU, but from 2001 instead of 2005. It could never hope to emulate an Xbox 360. It makes me wonder what kind of regular PC hardware will eventually be needed to run something like Halo 4.
i kind of agree and disagree...
i agree with the idea that the GPU is a big problem, but i don't think the implementation of hardware features like EDRAM, memexport, half-float textures etc. is especially problematic vs. the general case of the unified memory architecture...
most of those unique features like the EDRAM and memexport were relevant to hardware-specific optimisations at the time - modern GPUs can produce equivalent functionality by ignoring the performance characteristics and relying on their horsepower. EDRAM is a very good example of something that you just don't need an equivalent for - you can emulate all that functionality with regular VRAM just fine.
memexport and similar functionality relying on the unified memory architecture is a little more tricky. the real problem i'd imagine would come from the resulting 'tricks' used for performance and flexibility when feeding the GPU from the CPU side - e.g. being able to memcpy into a vertex buffer or blob of shader parameters instead of going through the DX-like interface, which was a genuine and useful optimisation. although iirc MS put some limits on this by failing your cert if you did anything outside of their approved list of workarounds for DirectX performance issues... something which is theoretically very easy to check for in many cases
i can't really see how to work around that without creating a quite complicated and expensive layer around the memory emulation. for the other features (including memexport) workarounds are possible - if unperformant. memexport is only 'easy' because it is an optimisation provided to do something you could already do, just in a much faster way... (and something which is now 'standard' since SM4)
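to make that memory-emulation layer a bit more concrete: one common general-purpose trick (i'm not claiming any particular emulator does exactly this) is to write-protect the guest pages that back a GPU resource, catch the game's writes, and lazily re-upload the dirty ranges before the next draw. a rough C sketch - mark_pages_readonly/mark_pages_writable/upload_to_gpu are hypothetical stand-ins for mprotect()/VirtualProtect() and the host graphics API:

    #include <stdbool.h>
    #include <stdint.h>

    #define PAGE_SIZE 4096u
    #define NUM_PAGES 65536u                 /* 256 MiB of tracked guest memory */

    /* hypothetical host-side helpers (page protection + graphics upload) */
    void mark_pages_readonly(uint32_t first_page, uint32_t count);
    void mark_pages_writable(uint32_t first_page, uint32_t count);
    void upload_to_gpu(uint32_t guest_addr, uint32_t size);

    static bool dirty[NUM_PAGES];

    /* called from the host fault handler when the game writes a protected page */
    void on_write_fault(uint32_t guest_addr)
    {
        uint32_t page = guest_addr / PAGE_SIZE;
        dirty[page] = true;
        mark_pages_writable(page, 1);        /* let the game's memcpy proceed */
    }

    /* called before each draw that sources data from this guest range */
    void before_draw(uint32_t buf_addr, uint32_t buf_size)
    {
        uint32_t first = buf_addr / PAGE_SIZE;
        uint32_t last  = (buf_addr + buf_size - 1) / PAGE_SIZE;
        for (uint32_t p = first; p <= last; p++) {
            if (!dirty[p])
                continue;
            upload_to_gpu(p * PAGE_SIZE, PAGE_SIZE);   /* copy the page into the VRAM copy */
            dirty[p] = false;
            mark_pages_readonly(p, 1);                 /* re-arm the write trap */
        }
    }

it works, but every game memcpy now costs a page fault plus an upload, which is exactly the kind of complicated and expensive layer i mean.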
also, as an aside, i think it's worth remembering that whilst the PPC CPUs had lots of advantages for fast execution, their memory read/write performance was abysmal compared to contemporaneous Intel PC CPUs, and many features like branch prediction (it just predicted 'always true', iirc) and out-of-order execution were also quite far behind the Intel PC counterparts of the time. Today that gap is even larger - and although it is certainly true that other things have become slower, these were never the bottleneck in my experience... it was almost always memory and the poor size and performance of the cache.
I'd assume this makes it difficult.
Here is a more general article on the topic:
Whereas basically all consoles expose low-level system stuff to the game, such as memory-mapped IO, low-level GPU commands, TLBs, DMA, etc. None of this can be emulated without a large amount of overhead.
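To make the overhead concrete, here's a rough sketch (not taken from any real emulator - guest_load32, gpu_read32 and the address ranges are made up) of what every single guest memory access ends up looking like once memory-mapped IO is in the picture:

    #include <stdint.h>
    #include <string.h>

    #define RAM_SIZE      (16u * 1024 * 1024)   /* shrunk for the sketch */
    #define GPU_MMIO_BASE 0x7FC80000u           /* made-up register window */
    #define GPU_MMIO_SIZE 0x00010000u

    static uint8_t guest_ram[RAM_SIZE];

    /* hypothetical device model for the GPU's registers */
    uint32_t gpu_read32(uint32_t offset);

    /* every emulated load pays for this dispatch; real hardware just issues the load */
    uint32_t guest_load32(uint32_t addr)
    {
        if (addr - GPU_MMIO_BASE < GPU_MMIO_SIZE)       /* device register? */
            return gpu_read32(addr - GPU_MMIO_BASE);

        if (addr + 4 <= RAM_SIZE) {                     /* plain RAM */
            uint32_t v;
            memcpy(&v, &guest_ram[addr], sizeof v);
            return v;                                   /* (endian swap omitted here) */
        }
        return 0;  /* unmapped - a real emulator would raise a guest bus error */
    }

Multiply that by every load and store the game executes and the overhead adds up quickly, even before DMA and TLB behaviour enter the picture.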
At least for 6th and especially 7th generation consoles, the timings are so complex that games can't really depend on them, so cycle-accurate emulation of the clocks of the different parts isn't generally needed. And it's possible that the Xbox 360 and PS3 require games to go through their kernel for a lot of hardware access; I don't know.
Intel and AMD added additional instructions that reduced the chance that information running on one VM could be leaked to another VM. This is the foundation of hypervisors, which are low-level systems designed specifically to give a managed interface to the host OS, but importantly these are still virtualized VMs, meaning the guest sees the same computer architecture as the hardware. (a little hand-wavey but correct enough for the sake of discussion)
Transmeta designed cores that weren't strictly x86, for example, but the technology is more like RISC vs. CISC. By transforming the CISC instructions into equivalent RISC instructions on the fly, the underlying processor is RISC. This is already true of (almost?) every modern CPU; they execute microcode. Transmeta was one of the first to do this. I'm not sure, but Transmeta may have performed instruction reordering in their pipeline at the microcode level whereas others did this at the opcode level. I'm not aware of any instance where they used this to simultaneously provide multiple architectures on the same silicon, although at a glance it seems plausible. It would have been very expensive to build multiple ISAs into the same core, especially when the demand for such technology is nonexistent. By scrapping the transistors that would have been used to support multiple ISAs, you can use that space for better pipelines, SIMD, or more cores, or simply increase the yield, conserve power, and/or make the processor more efficient.
Any of those options would be better, so I don't believe any of these mythical multi-ISA processors exist.
The bottom line is that for the Xbox One to support Xbox 360 code, they would have to emulate everything and there simply aren't enough CPU cycles to make that happen.
Since I'm on a roll, the biggest disappointment was that the Xbox 360 didn't emulate the PlayStation. Now obviously the Xbox 360 is made by Microsoft and the PS is made by Sony, but the idea isn't so extreme. A company called Connectix created a PS emulator for the Mac, the Virtual Game Station (VGS). The Macs of the day used a different ISA than the PS, so the emulator had to translate the MIPS code in addition to emulating the BIOS and peripherals, and it still ran well. Sony took them to court and lost. The interesting twist is that Microsoft later bought Connectix, and a part of that company lives on in the Virtual PC virtualization software now made by Microsoft. Sony apparently bought the PS emulator itself and killed it, but imagine if that had gone to Microsoft instead. The Xbox 360 uses the same PowerPC ISA as those Macs, so in theory it could have run a 360 version of VGS. Gamers who didn't have a PS2 might have been able to play their PS games on Microsoft hardware. Microsoft would have gotten hardware sales and Sony would have received money for game licenses.
For this generation, Microsoft would have done well for itself by acquiring OnLive or building out its own server-side gaming system, as Sony has done by purchasing Gaikai. This would have given the Xbox One the ability to play Xbox 360 games over a remote-desktop-style link. I think if the public backlash against the online offerings hadn't been so boisterous, we might have seen a service like that at launch instead of the watered-down version they scrambled to produce.
The key future-proofing component of the Xbox One is the ability to run parts of the game in the cloud. This is why the slower core of the Xbox One shouldn't be seen as limiting. Games can be written to push complex calculations to a server farm while the local core handles more pedestrian chores. Extending that idea further, we may yet see Xbox 360 emulation. The Xbox One is poised to win the battle this generation if these long-term strategies are given time to mature and be fully realized. The PS4 has some short-term appeal, but the gap between Microsoft and Sony isn't as wide as the gap between those two companies and Nintendo.
The only platform where a PSX emulator might not have done full dynarec/interpretation would have been the PSP, and that's unlikely to actually exist for various reasons.
But many are. E.g. QEMU can emulate PPC on x86.
Things that kill you are graphics and sound: texture formats in particular (which you don't have the CPU horsepower to convert) and audio (the 360 has a ton of voices in hardware, and that is difficult to emulate in software).
Personally, I don't think it's impossible. But it'd take the right people a couple of years to make it actually work.
It wouldn't be as simple as a pure recompilation, but it's a conceivable amount of work. It would probably be more work than it's worth for many titles, but I'm surprised at least the Big Games don't have support.
I did this many years ago when I learned about the existence of the x86 instruction set as a stepping stone towards understanding/making interpreters, virtual machines and compilers. I recommend anyone do it as a learning exercise.
This stuff is incredibly simple at its core, but because it is 'low level' there is a common misconception that it is somehow hard or complicated...
Once i knew it was just simple instructions, registers and a few flags coupled with a memory model, it was obvious how to achieve... you write C functions for the various flavours of ADD, SUB, LEA, MOV, FSTP, ADDPS etc., and by iterating through the stream of bytes and decoding them the same way the CPU does (this is always described in the CPU manual, in my experience) you call the right ones in the right sequence. you use some appropriate blob of memory for your registers, flags and other CPU state, and some big array of bytes for your emulated memory...
this is what an emulator is at the simplest level: an interpreter for CPU instructions. (of course, implementing the instructions might necessitate that you do more - e.g. emulating memory, the BIOS or more...)
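as a rough illustration (a made-up three-instruction machine, nothing like a real ISA), the core loop really is just fetch, decode with a switch, execute:

    #include <stdint.h>
    #include <stdio.h>

    /* toy machine: 4 registers, a tiny memory, three opcodes */
    enum { OP_LOADI = 0x01, OP_ADD = 0x02, OP_HALT = 0xFF };

    static uint32_t regs[4];
    static uint8_t  mem[256] = {
        OP_LOADI, 0, 5,      /* r0 = 5       */
        OP_LOADI, 1, 7,      /* r1 = 7       */
        OP_ADD,   0, 1,      /* r0 = r0 + r1 */
        OP_HALT
    };

    int main(void)
    {
        uint32_t pc = 0;
        for (;;) {
            uint8_t op = mem[pc++];              /* fetch */
            switch (op) {                        /* decode */
            case OP_LOADI: {                     /* execute */
                uint8_t r   = mem[pc++];
                uint8_t imm = mem[pc++];
                regs[r] = imm;
                break;
            }
            case OP_ADD: {
                uint8_t d = mem[pc++];
                uint8_t s = mem[pc++];
                regs[d] += regs[s];
                break;
            }
            case OP_HALT:
                printf("r0 = %u\n", regs[0]);    /* prints r0 = 12 */
                return 0;
            default:
                printf("bad opcode %02x at %u\n", op, pc - 1);
                return 1;
            }
        }
    }

a real CPU is hundreds of instructions plus flags, exceptions and so on, but they're all just more cases in that switch (or, for speed, translated blocks instead of a switch).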
And this generation it seems that Sony is trying to provide PS3 emulation using their cloud gaming system.
Some of the benefits of back compat to platform holders:
1) Encourages customers to upgrade sooner and to choose the compatible platform.
2) Increases the roster of games during the first few years of the new console's life.
3) Extends sales of the previous generation hardware and software. (Software because it can be played on both old and new hardware, and hardware because there's more new software available.)
In other words, it's a way to get you to pay for games you already own but can't play on your new console.
More details: http://en.wikipedia.org/wiki/Binary_translation#Dynamic_bina...
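Roughly, the idea is: instead of decoding each guest instruction every time it runs, you translate a whole block of guest code once, cache the result keyed by the guest PC, and jump straight to the cached host code on later visits. A toy sketch in C where a 'translated block' is just a function pointer and translate_block is a hypothetical stand-in for the real code generator:

    #include <stddef.h>
    #include <stdint.h>

    typedef struct {
        uint32_t pc;
        uint32_t gpr[32];
    } CpuState;

    /* a translated block runs some guest instructions natively and
       returns the guest PC to continue from */
    typedef uint32_t (*TranslatedBlock)(CpuState *cpu);

    /* hypothetical translator: decodes guest code at guest_pc and emits host code */
    TranslatedBlock translate_block(uint32_t guest_pc);

    #define CACHE_SLOTS 4096u
    static struct { uint32_t guest_pc; TranslatedBlock code; } cache[CACHE_SLOTS];

    void run(CpuState *cpu)
    {
        for (;;) {
            size_t slot = cpu->pc % CACHE_SLOTS;
            if (cache[slot].code == NULL || cache[slot].guest_pc != cpu->pc) {
                cache[slot].guest_pc = cpu->pc;            /* miss: translate once */
                cache[slot].code     = translate_block(cpu->pc);
            }
            cpu->pc = cache[slot].code(cpu);               /* hit: run host code */
        }
    }

The win is that hot loops only pay the translation cost once; the hard part (generating correct host code for every guest instruction) is hidden inside translate_block.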
Then there's the problem that PowerPC is big-endian and x86 is little-endian, so you potentially add additional processing for network and file-system code as well (models, textures, sounds, etc.), in addition to any magic numbers that may be used in the codebase.
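As a small example of what that extra processing looks like: every 32-bit value the game stored big-endian (in a file, a network packet or a magic number) has to be byte-swapped before little-endian x86 code can use it. be32_to_host and the 'MODL' magic below are made up for illustration:

    #include <stdint.h>

    /* PowerPC stores the most significant byte first; x86 expects it last */
    static uint32_t be32_to_host(const uint8_t *p)
    {
        return (uint32_t)p[0] << 24 |
               (uint32_t)p[1] << 16 |
               (uint32_t)p[2] <<  8 |
               (uint32_t)p[3];
    }

    /* e.g. checking a big-endian magic number at the start of a game asset */
    int looks_like_model_file(const uint8_t *data)
    {
        return be32_to_host(data) == 0x4D4F444CU;   /* 'MODL', hypothetical */
    }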
While emulation is possible, the performance would be abysmal. Just take a look at Game Boy emulators for the PC: they use massive amounts of CPU due to the overhead needed to emulate the processor, graphics, sound, etc. Trying to play a game like Call of Duty or Grand Theft Auto emulated from PPC to x86 would be sluggish at best.
The PS2 is an incredibly weird architecture, though. Lots of different processors requiring really strict synchronization in weird ways. It's not that the raw horsepower wasn't there for emulation (it has been there since ~2004), but that it's just strange enough that emulating it is brutally difficult.
I mean, I obsess over odd stuff too, I'd guess most of us do, but it's important to keep a sense of humor about it.
In theory, the emulation of a system, in and of itself, is not illegal. The copying or unlicensed use thereof is.