Building a PS1 style retro 3D renderer (david-colson.com)
376 points by bwidlar 47 days ago | 82 comments



This is an excellent article, with good insight into the particularities of the hardware. I particularly admire the explanation of why the polygons snap to the pixel grid, and why that's not the fault of floating-point mathematics but rather the rasterizer itself. One thing the article approaches, but doesn't quite get to the bottom of, and which is the crux of all the PS1's 'flaws', is that its GPU was fundamentally 2-dimensional. That's why things snap to pixels, that's why the textures do not have a depth component and are thus not perspective-correct, that's why there is no z-buffer, because it had no z-coordinate. Essentially, every single PS1 game 'faked' 3D by using the GTE (which was separate from the GPU) to project 3D points and polygons into a 2D rasterizer. And it worked quite well! For the time.
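
To make that split concrete, here's a minimal sketch of the kind of projection the GTE performs before handing 2D coordinates to the rasterizer (plain floats for clarity; the real GTE works in fixed point, and the names here are purely illustrative):

    /* Illustrative only: project a 3D point to integer screen coordinates,
       the way a GTE-style transform feeds a purely 2D rasterizer.
       Assumes p.z > 0 (no clipping shown). */
    typedef struct { float x, y, z; } Vec3;
    typedef struct { int x, y; } ScreenPoint;

    ScreenPoint project(Vec3 p, float focal_len, int cx, int cy) {
        ScreenPoint s;
        s.x = cx + (int)(p.x * focal_len / p.z); /* z is used here...        */
        s.y = cy + (int)(p.y * focal_len / p.z);
        return s;                                /* ...and then thrown away. */
    }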

Another thought - I also wrote a similar renderer under similar constraints a while back, just for funsies, and I'm wondering what would have come of it if I'd thought "I should write an article about this". I generally don't think to write articles about every side project I do; what's the impulse that makes someone decide to detail their work to the internet?


> what's the impulse that makes someone decide to detail their work to the internet?

When it comes to writing, I think the biggest factor that determines if, what, and how you write is the audience you imagine. When you write, you sort of picture yourself telling the story to someone. And the way that scene plays out in your mind depends entirely on who that person is.

If your mental image of the audience is "some random Internet user whose interests are different from mine", then you can't imagine them standing there patiently while you laboriously walk through all the details of your renderer. So you don't.

But if you imagine an audience that is "another retro renderer enthusiast who has similar projects", then you can imagine them getting hyped up by what you're saying and gleaning bits of useful stuff from it that they can apply to their own side projects. It almost feels wrong to not write it and let them down.


Adding to this, if you imagine the audience is _you_, then you might feel motivated to write the article you _wished_ you had before embarking on your project. That audience falls under your second one ("another retro renderer enthusiast who has similar projects"), but maybe is even more targeted.

One advantage of this is being able to look back on previous projects in the future, as a form of documentation. That helps me pick a project back up when I haven't worked on it in a while. True, you don't need to publish the documentation publicly, but since I wrote it, putting it up publicly is almost no effort now that I have a blog.


I'd add that there is a third category, "curious technology enthusiasts", who just like to read a well-written article on some new topic.

I personally am not a "retro renderer enthusiast", nor do I have a similar project, but this article was a joy to read.


That's a good audience too. You'll note that choosing this one significantly changes how you'd write: You would naturally find yourself including a lot more context and background information because you don't assume the reader has it already.

Either audience choice is fine, I just think it's interesting how it impacts the resulting writing.


> Essentially, every single PS1 game 'faked' 3D by using the GTE (which was separate from the GPU) to project 3D points and polygons into a 2D rasterizer.

While it's true that the PS1 handles perspective projection through the GTE (as the GPU is a primitive rasterizer, after all), I don't think it's fair to call it 'faked 3D' since, in my understanding, 3D projection is just another fundamental stage of the graphics pipeline (our displays are 2D grids of pixels, so we need to transform the 3D world into something that can be displayed there). Other consoles like the Nintendo 64 provided more capability on the GPU side (the RCP), which made it possible to move part of the matrix operations away from the CPU chip.

Be that as it may, I've seen the 'fake 3D' claim before and I'm starting to wonder if I'm missing something in my understanding of this technology; maybe someone can offer a third opinion?

P.S. I'm the one who wrote https://www.copetti.org/writings/consoles/playstation which is referenced at the bottom of the article (under 'Other reading'). I'm glad the author found it helpful (or interesting)!


Fake/real is not the most precise terminology, but I think it gets the important point across, so I’ll use it.

In a "real 3D" rasterizer, you interpolate the depth coordinate for each individual pixel. You need this value to perform two steps in the pixel pipeline that are required to get the correct look: First, you use the depth coordinate the perform depth testing (reject pixels that should be hidden). Then, you use it to perform a perspective correct texture lookup (make straight lines on the texture obey the laws of perspective).

If you throw out the depth coordinate anywhere earlier in the pipeline, as the PS1 did, you can't do either of those things, so you get artifacts where objects flicker and warp when rotating. The rasterizer really needs to be "3D aware" right up until writing out the final pixel values.

Note that both artifacts can be lessened, to some degree at least, with workarounds. For depth testing, you simply sort the polygons (which doesn’t work in every case, as triangles can overlap in cycles). And for the texture lookup problem, you subdivide the polygons (this helps because the vertex calculations are perspective correct, but it's expensive and you'd need infinite subdivision to fully solve the problem).

As an aside, the PS1 had another issue where the rasterizer didn’t support subpixel precision, which also leads to artifacts. But IMO that’s mostly unrelated to the 3D coordinate problems — 2D games need that as well to get smooth movement.


> Be that as it may, I've seen the 'fake 3D' claim before and I'm starting to wonder if I'm missing something in my understanding of this technology; maybe someone can offer a third opinion?

I agree that all 3D is 'faked' in the sense that it's drawn to a 2D grid of lights eventually; I guess it just depends on where in the pipeline you make that fundamental transformation. I only meant faked in that the GPU itself is 2D, whereas the GTE is just math. So it's kinda like doing CSS transformations to make a web page all 3D-looking (a gross oversimplification, I know, but nevertheless). Though, semantically, I guess the '3D' part of a modern GPU is also 'just math'. Well, now you've got me thinking about it harder...

Wonderful article by the way! I've read yours a few times myself.


I could see an argument being made for 'fake 3D' pretty strongly here. DF Retro has a few videos comparing various consoles, and one of the things that the PS1 and Saturn both got wrong, IIRC, was that textures would warp*, because the 3D look was being provided by affine transforms (as I understand it; I may be misremembering the details) rather than perspective-correct rasterization.

So, like, the vertices are in the right spots, but the texture mapping is off. So it kinda depends on whether you need texture mapping to be perspective-correct to consider a given example of 3D "real" vs "fake". Dark Forces also has perspective warping that happens in its engine when you look up/down, as an example.

* https://youtu.be/VutzIK3DqZE?t=341 DF Retro analyzes ports of Tomb Raider


Exactly, straight lines on a texture should obey perspective (think rows of bricks on a wall). I think this wasn't as widely understood back then, but notably, id software already got it right a few years earlier in the original Wolfenstein.

Great link. The artifacts are especially apparent when the walls are near the camera, for example at this timestamp: https://www.youtube.com/watch?v=VutzIK3DqZE&t=398s And note that here, the seemingly flat walls are already heavily subdivided into smaller polygons to lessen the impact of the problem. (Why they wouldn't then make use of those additional polygons to include more geometry details for free is beyond me, however.)


The way you've worded this makes it sound as though programmers of the era were somehow unaware of perspective mapping, which I can assure you is not the case at all. Affine texturing was a tradeoff of performance, nothing more.

Wolfenstein's correct texture mapping is due to it using raycasting in a 2D plane and rendering scaled vertical strips of texture, which just happens to be perspective correct because you only ever have surfaces at 90deg angles in the vertical.

As to why those extra triangles weren't used for detailing, it's likely because that would take up extra level data. Tessellation of an existing triangle into smaller triangles doesn't.


From the horses' mouths (minus Abrash/Carmack): 'HandmadeCon 2016 - History of Software Texture Mapping in Games' https://www.youtube.com/watch?v=xn76r0JxqNM

Wolfenstein and Doom are 100% correct because Carmack 'cheated' by deciding to never look down/up or draw slopes :-) so the whole game is drawn with 'lines of constant Z'. Or as they put it:

Chris Hecker (Microsoft/Maxis/etc): "That's a classic Carmack thing, which is like: fuck those general problems, I'm gonna solve this other problem perfectly."

John Miles (ORIGIN/Miles Design/etc): "It's all about not doing the math; we were still at a point in time when you won by not doing the math."


The GTE truncated too early and in too many places, and there is a register for adding a fraction to the post-perspective, pre-truncate result so it rounds instead. I didn't implement this back in the day, but I thought about scaling the target dimensions in the GTE (watching for overflow) to get 1 bit of fraction back to refine the screen coordinate.
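
For the truncate-vs-round distinction, here's the arithmetic in a tiny C sketch (16.16 fixed point chosen for illustration; this isn't actual GTE register usage):

    #include <stdint.h>

    /* 16.16 fixed point: truncation just drops the fraction;
       adding half a unit first gives round-to-nearest. */
    #define FRAC_BITS 16
    #define HALF      (1 << (FRAC_BITS - 1))

    static int32_t fx_truncate(int32_t v) { return v >> FRAC_BITS; }
    static int32_t fx_round(int32_t v)    { return (v + HALF) >> FRAC_BITS; }

    /* e.g. for 123.75: fx_truncate gives 123, fx_round gives 124, so a
       vertex lands on the nearer pixel instead of always snapping down. */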


> what's the impulse that makes someone decide to detail their work to the internet?

The company I work for considers this kind of writing to be fundamental to being considered a principal engineer. I can't imagine they invented this idea; it's probably relatively common in the startup world.

While this might not have been the author's motivation, having a popular tech blog certainly doesn't hurt one's employment prospects.


>that's why there is no z-buffer, because it had no z-coordinate

A z-buffer is just an array of values you can sample, often as a texture. There are no inherent coordinates, unlike, say, vertices. The z-buffer is a 2D map anyway; the offset into that buffer would be the x and y.

Or do you just mean there's no hardware accelerated z-buffer?
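
To make the "just an array of values" framing concrete, here's a minimal software depth-test sketch (hypothetical helper, not anything the PS1 could do in hardware):

    #include <stdbool.h>

    #define W 320
    #define H 240

    /* A z-buffer really is just a 2D array addressed by (x, y).
       Assumes zbuf is cleared to a large "far" value each frame. */
    static float zbuf[H][W];

    /* Returns true if a fragment at (x, y) with depth z is visible,
       recording its depth so later, farther fragments get rejected. */
    static bool depth_test(int x, int y, float z) {
        if (z >= zbuf[y][x]) return false; /* something nearer is already there */
        zbuf[y][x] = z;
        return true;
    }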


In early 3D hardware, the Z buffer was a bit more special cased than it is now, where you quite literally allocate a texture. In either case, the interesting (and expensive to emulate in software) part is of course using the depth buffer to perform depth testing, that is, rejecting pixels that would be hidden.


Sure, the depth was not calculated or tested in hardware. But it's not because buffers did not have a z-coordinate.


Love these "tiny" 128x128 textures when meanwhile, anyone doing development for the contemporaneous Nintendo 64 gets something like 44x44@16bpp color, or 64x64@8bpp grayscale. The maximum texture size is 4 KB with some major caveats (e.g. it drops to 2 KB if you use indexed color).

The N64 is, in a lot of ways, on paper, a more powerful system than the PS1. Better CPU, depth buffer / Z test, perspective-correct texturing, antialiasing, subpixel precision, etc. And yet the PS1 completely dominated the N64 in the marketplace and in terms of longevity, a big part of that being how cheap it was to manufacture CDs. And so people lovingly remember PS1-style graphics, but there isn't as much of a resurgence of N64-style graphics.

Another cute thing to note about PS1 is how it renders dark clouds on-screen. You might be familiar with the "add" blend mode for things like lens flares. Well, the PS1 also had a subtract blend mode, which has some unusual effects... basically, anything partially obscured by a dark cloud gets darker but also more colorful.
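
Roughly, the subtract mode is a saturating per-channel subtraction of the primitive's color from what's already in the framebuffer; this little sketch shows the idea (illustrative only, not the exact GPU blend semantics):

    #include <stdint.h>

    typedef struct { uint8_t r, g, b; } Color;

    /* dst = max(dst - src, 0), per channel. Subtracting a grey "cloud"
       shifts the ratios between the remaining channel values, which is
       why partially covered pixels can look darker yet more saturated. */
    static uint8_t sub_sat(uint8_t dst, uint8_t src) {
        int v = (int)dst - (int)src;
        return (uint8_t)(v < 0 ? 0 : v);
    }

    static Color blend_subtract(Color dst, Color src) {
        Color out = { sub_sat(dst.r, src.r),
                      sub_sat(dst.g, src.g),
                      sub_sat(dst.b, src.b) };
        return out;
    }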


At the time, I hated how ugly PS1 games looked, and wished they were nice and smooth like the N64. Now I realize that N64 textures weren't smooth; they were blurry.


The textures were blurry, and so was the image itself, due to its poor digital-to-analog converter. The latter can be addressed with HDMI mod boards now available for the N64.


Square Enix still has new copies of the PS1 version of Final Fantasy 9 on their web store. They used to have more PS1 FF titles, but I guess they finally stopped printing them sometime in the last couple of years. I doubt Nintendo made an N64 cart post-2003. CDs are just so dirt cheap to print that it made sense to keep printing big sellers 20 years on.

PS1 rendering also has a bit more charm than N64 with its affine texture warping. N64 was just blurry.


It's funny how the particular constraints of a system can matter in different ways the designers didn't have a solid handle on at first.


As an aside, one thing that's interesting and sad to me about WebGL is that WebGL + modern browsers are significantly more powerful than the PS1 and Nintendo 64, but we haven't had _anything_ even _remotely_ close to the completeness of an N64 or PS1 game. Lots of small experiments and impressive demoscene-style demos, but nothing in WebGL that takes advantage of this potential power. Frankly, even compared to the completeness of Flash games, there are very few WebGL projects that come close.

This article and output would make a great WebGL project, and they could embed real demos of it running right in the site.

What's missing that prevents us from building these types of complete projects in WebGL?


Monetization is the answer. Nobody is paying $50 for web games, there's no in-app purchase system, no web gift cards in stores, and the exploitative interstitial ad networks that enable free mobile games to make boatloads of money don't exist on the web.

That said, there is an ecosystem of web games and you might be surprised by the sophistication of some of them. https://venge.io/ is a good example, try https://poki.com/ for a sampling of more. https://playcanvas.com/ is the most advanced WebGL-specific engine.


> Nobody is paying $50 for web games

Unless a AAA studio releases a main title for the web, that will remain true, since there are no $50 web games to buy in the first place. I'd guess most non-AAA game sales on Steam aren't near $50 either, but both can still be extremely lucrative.

> there's no in-app purchase system

90% of browsers support the Payment Request Web API and you're also free to make your own integrations to your payment processors directly yourself (which isn't always true of consoles/app stores).

> no web gift cards in stores

You can piggyback off the traditional payment backends (e.g. Google Play) and use their cards, or use your own (like the game-specific Roblox/Fortnite cards in stores).

> and the exploitative interstitial ad networks that enable free mobile games to make boatloads of money don't exist on the web

I don't think you saw tons of interstitial ad companies for mobile before there were mobile games for which they made sense, either. After all, it's not like the web is incapable of delivering ads; there's just no point in delivering these types of ads without that type of content.

.

I think the real reason is distribution pains. By the time you make a game impressive enough to garner strong sales, it becomes a PITA to distribute via the browser. Large amounts of persistent storage are a pain to manage in browsers (if you can even get the amount you want), and delivering an entire game or designing it to be completely streamable is a ton of cost and overhead vs just serving persistent differential app updates via traditional means. You get less control of the environment the game is run in, and what you get in return is worse performance and feature capability out of the gate. Compare this to any other distribution means, where you're able to manage distribution, use the full capabilities of the device, and target more long-term-stable platforms with less overhead.


When I looked at it, I found that incentive ad networks want to use their blobs on mobile devices. They feel that the closed binaries protect them from ad fraud.

Google does offer a webgame interstitial ad network. You'll need volume for this.


I think you're right that the biggest technical reason is local storage. But I disagree about the difficulty of streaming. When properly implemented it comes with the huge advantage of drastically reducing install/patch/load times as well. Engine developers have been lazy and allowed install/patch/load times to get totally out of control. I think the game industry has been ignoring the huge benefit that reducing those times would bring them even on traditional platforms.


It's actually difficult to get smooth, realtime performance from WebGL apps; there are just too many moving pieces.

I've written a lot of code for PS1, Dreamcast, GameCube, PS2, Xbox 360 and PS3, and except for the last two, what they had going for them was complete, absolute predictability. Your game was effectively running in realtime mode.

In a browser, you have a lot of jitter due to things out of your control: garbage collection, CPU frequency scaling, that kind of stuff. So you see jerkiness, and that really breaks the feel of a game.

It will take a while to polish the browser tech stack to this level, and there seems to be no market to encourage that.


There have been lots of complete games built targeting the web browser.

Crosscode is pure HTML5 according to the devs. No WebGL at all. Games like Quake have been ported to run in the browser (and did so ~10 years ago).

I'm sure there are a lot more complete games out there targeting web browsers, but it isn't immediately apparent that's what they are using. A lot of the newer games I play feel like they are running in a web browser.


I think there are a lot of complete indie WebGL games, but they aren't nearly as well known as console titles from the late 90s. And that's because the market and technology are much more mature since then: demand for retro games like that is lower since the technology isn't state-of-the-art anymore, and supply is way up.


I have been reading up on it a bit for a side project and it's great, but resources for troubleshooting are slim, and it's not easily compatible with modern frameworks, so people tend to use particular libraries for everything and finding vanilla WebGL material is a bit trickier.


A WebGL game is hard to monetize, especially if you compare it with other options.


Nothing but lack of desire afaik.

Iirc, Godot and Unity can both target WebGL.


I worked on the MGS port to PC (from PS1), and I did a little restoration trick to remove the "jittering": I would save any calculation from the GTE (our emulation of it) into a huge table of floats, `float gte_results[65536]`. For example, if after some translation the x was 123.43, I knew it would use only 123, but I would save gte_results[123] = 123.43 - and later, when the triangle is drawn (with integer coordinates), I'd look it back up as gte_results[123]. So ... it worked, kind of :) - https://youtu.be/Fpep7oOGNfU?t=1442
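
In code form, the trick looks roughly like this (a minimal sketch with hypothetical names, not the actual MGS port source):

    /* Keep the precise sub-pixel result keyed by the truncated coordinate,
       then recover it when the triangle is drawn with integer coordinates. */
    static float gte_results[65536];

    /* Called when the GTE emulation produces a screen-space coordinate. */
    static int emit_coord(float precise) {
        int i = (int)precise;              /* what the 2D rasterizer will see */
        gte_results[i & 0xFFFF] = precise; /* remember the precise value      */
        return i;
    }

    /* Called at draw time to get the sub-pixel value back. */
    static float recover_coord(int i) {
        return gte_results[i & 0xFFFF];
    }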


This is basically how PGXP works too.


really - wow! - cool :) - Need to check it out! Thanks!!!


This was great, and took me back to the late 80s/early 90s when I was studying Michael Abrash and building software renderers before the first hardware 3D accelerators appeared on the market.

While I eventually figured out that the reason my textures looked fucked up, especially in long corridors, was that they didn't take perspective into account and thus needed an extra set of per-row and per-pixel interpolations (which killed the frame rate), I never realized until this article that my Gouraud shading was actually suffering from the same issue, because it was so much less noticeable to the eye.


Interestingly, even the depth buffer is wrong if you interpolate Z linearly, but it's such a subtle effect, it's almost imperceptible.

IIRC, games back then also used to do perspective-correct interpolation every 8 or 16 pixels, and linearly in between, which is a good compromise (you always have to interpolate U and V; what they wanted to avoid was the division, which is more expensive).
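
A rough sketch of that every-N-pixels compromise (the general idea only, not any particular engine's inner loop; assumes u/z and 1/z and their per-pixel screen-space deltas have already been computed for the scanline):

    /* One division per 16-pixel span endpoint instead of one per pixel;
       u is interpolated linearly within each span. */
    #define SPAN 16

    void scanline_u_spans(float u_over_z, float one_over_z,
                          float du_over_z, float done_over_z,
                          int width, float *out_u) {
        float u_left = u_over_z / one_over_z;
        for (int x0 = 0; x0 < width; x0 += SPAN) {
            int x1 = (x0 + SPAN < width) ? x0 + SPAN : width;
            /* u/z and 1/z are linear in screen space, so step them to x1
               and do the expensive division only there. */
            float u_right = (u_over_z + du_over_z * x1)
                          / (one_over_z + done_over_z * x1);
            for (int x = x0; x < x1; x++) {
                float t = (float)(x - x0) / (float)(x1 - x0);
                out_u[x] = u_left + t * (u_right - u_left);
            }
            u_left = u_right;
        }
    }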

I've written about this here, https://gabrielgambetta.com/computer-graphics-from-scratch/1..., including some detailed examples of how the math works. Very fun topic :)


God, your statement about the zbuffer is true too, and as you say it was too imperceptible to notice - though if I'd looked carefully there would surely have been some strange effects where polygons intersected.

I didn't even think about the zbuffer and the shading when I wrote my engine - I was too preoccupied by the weirdness of the textures.


Also, please can you mail a copy of your book back to 1988 for me. Thanks.


One missing effect is dithering. Unless my memory fails me, it was quite prevalent on the PS1.


Indeed - it was built-in hardware functionality that developers (or GameShark users!) could enable/disable on a case-by-case basis https://www.chrismcovell.com/psxdither.html
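
For the curious, here's roughly what that kind of ordered dithering looks like when truncating 8-bit channels down to the 5 bits per channel of a 15-bit framebuffer (the offset matrix below is illustrative, not claimed to be the exact hardware values; see the linked page for the real behaviour):

    #include <stdint.h>

    /* Small 4x4 offset matrix, roughly centered around zero. */
    static const int dither4x4[4][4] = {
        { -4,  0, -3,  1 },
        {  2, -2,  3, -1 },
        { -3,  1, -4,  0 },
        {  3, -1,  2, -2 },
    };

    /* Add a position-dependent offset before dropping the low 3 bits,
       so the truncation error becomes a fine pattern instead of banding. */
    static uint8_t dither_8_to_5(uint8_t c, int x, int y) {
        int v = c + dither4x4[y & 3][x & 3];
        if (v < 0)   v = 0;
        if (v > 255) v = 255;
        return (uint8_t)(v >> 3);
    }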


There was a feeling when playing the PS1 like everything was being pushed to the limit, only just under control. Textures swimming and popping and little gaps appearing in the world as you explored it. Much like saturation on an audio track, that appearance of being close to the edge or even a little beyond it added somehow to the excitement. A force too powerful for the medium to contain.


Retro Gamer (UK magazine) did a great piece on indie devs going for this look. I think this Vice article says something similar:

https://www.vice.com/en/article/3an385/were-in-the-beginning...


Back when I was a 3D artist, this was my favorite kind of thing to model - low-poly things. I loved being as efficient as I could with my polygons. I'll make sure to follow the development. Great job so far.


Any modern indie games that go for the PS1 look? Tons of indie games pay homage to the 8-bit era, but I don't recall much nostalgia for warped textures.


I think 8-bit/16-bit retro games were kind of at the apex of 2D sprite game design, and many of them were quite lovely, hence the nostalgia.

The PS1 3D games look worse when you look back at them, because they were not really the next logical increment of 2D gaming, but the first step into 3D. So more like the Pong, Asteroids, etc of the 3D era. They were exciting at the time as a glimpse of where things were going, but objectively they looked pretty bad.


I agree. Retro 2D games are amazing. The SNK games are works of art. Modern 2D games don't quite feel the same. I don't even understand why.


Devil Daggers is one of the few games I have time for. Mostly because I rarely survive more than 3 minutes and a world record run is on the order of 10 minutes. But, Steam says I have over 20 hours in the game.

Video Review: https://www.youtube.com/watch?v=-jaTKi-1rz4


Amongst retro gamers there is universal agreement that 5th-generation console games (specifically 3D games) aged the worst.


BallisticNG has tried really hard to get the look and feel of the PSX Wipeout games, including an optional PSX shader that adds vertex wobble.

https://neognosis.games/ballisticng/


Valheim is an indie game with a PS1-style aesthetic built using Unity.

https://www.valheimgame.com/home/#anchor4


Kinda.

I know that's what the developers claim, but the game looks so much better than a PS1 game. The only PS1 aesthetic the game can honestly claim to have is low-res textures. It doesn't have the weird texture warping, shallow draw distances, rigid animations, or really anything else that we associate with the PS1.


I don't see it. Those screenshots look beyond what even a PS2 can do.


The Chameleon is another recent stealth-game that has PS1 inspired aesthetics: https://www.merlinogames.com/the-chameleon


The Haunted PS1 Demo Disks are a great place to start: https://hauntedps1.itch.io/


It doesn’t have the warped textures but DUSK is probably the closest thing to a PS1 style indie game I’ve seen.


DUSK is evocative of 90s PC FPS games, not PS1 games. There is a rather decently sized community of indies who use the PS1 style, mostly on itch.io.


there's been a bit of a surge of low-poly fake-PS1-wobbly-vertices horror games lately


I wonder if people try to retrofit post-PS2 ideas onto PS1 hardware. Or new techniques onto older hardware, from when the knowledge wasn't there or hadn't spread yet.


The most obvious ones I can think of are triangle strips and mipmaps. The 60fps demo that came with Ridge Racer Type 4 had mipmaps.

I made an internal "cel-shading" tech demo back in 1998, tracking the polygon edges and then using line primitives to outline the objects but there was no game to apply it to.

There were bump/normal mapping demos around then too, where a palette entry was assigned to a vector (so 256 vectors max), lit, and the texture applied additively (or subtractively).

On a personal project I'm attempting a mesh-shader-like approach (although you could say it's really a PS2 VU1 workflow): transform and sort a cluster on the scratchpad, then triple-buffer the output in memory, ready to be kicked off to the GPU in slice mode. It might end up slower than the old brute-force OTZ approach, but it's fun to figure this stuff out.


If you like this sort of thing, you might like this remake of the "Death Stranding" trailer, in a PS1 style[0].

I'm not sure what tools were used to achieve the look, but the outcome is great.

[0]: https://www.youtube.com/watch?v=iTgJHU3MB24


Cool - reminded me of this guy's project [1]. I had to mission about to find the channel again because I forgot the name, but his videos are (were? hmm it's been a while) really cool.

[1] https://www.youtube.com/watch?v=IPzl9FmKVjI


I actually sought out techniques to do very similar in Unity for a Saturn/PS1 inspired 3D platformer I’m developing.

It’s such a cool aesthetic, I’m surprised it hasn’t ‘caught on’ yet as a modern design fad.


It has definitely caught on with the indies. Specifically the Haunted PS1 genre.


I just want to say this is a beautiful and inspiring project so far. I'm looking forward to seeing how it evolves.


A friend and I were talking about this some time ago.

He brought up an interesting point: one thing that made, for example, Silent Hill extra scary was the janky PS1 graphics.


>fantasy console-based inspired by PS1 era technology

Oh yes I've been waiting for this!


Yes, but: why have these characteristics, especially the perspective-incorrect textures (which look terrible, and do not provide an efficiency advantage in the context of a "fantasy console")? Why not just write for the PS1 instead (if you must have these traits)?


There are plenty of people who find those characteristics aesthetically pleasing. I enjoy the affine texture mapping effect and vertex jitter when they're used effectively - it depends on the game. For example, in Silent Hill these sorts of effects enhance the eerie atmosphere, giving it a more warped and dream-like feel.

As for developing directly for the PS1, there are many practical reasons not to, like ease of development and actually being able to release/sell your game on modern storefronts.


In Ridge Racer, the texture warping and geometry raster gave a heightened sense of raw speed. Later versions on subsequent platforms (or emulated ones that correct this technical fault) felt way too smooth and much less intense.

E.g. around 1:47 the tunnel walls go zigzag in a very obvious way, but the effect is present in most textures, only more subtly: https://youtu.be/WLJqWtnFkdY


someone should make a racing game where the vertices get more "unstable" the faster you go...


The PS1 is extremely limited in terms of hardware. This is about liking the aesthetic but being able to create without the limitations the PS1 put on us.

The PS1 has 2MB of RAM. The average consumer laptop these days has 8GB; my dev laptop has 16GB. And that's just RAM. Video memory, similarly. That allows us a lot more physical space in our game areas while still retaining this aesthetic, if we choose - more enemies or objects on the screen, more complexity in AI, etc.

None of these have to do with the visual aesthetic, but they are all creative restrictions we no longer have to deal with as creators.

I'm a Homebrew dev for the SEGA Saturn, whose architecture is actually way more of a pain in the ass to dev for than the PS1 ever was - but I do it specifically for the challenge and joy of learning the ins and outs of the hardware.

That being said, in no way would I want to take on a project specifically for that architecture, especially because, if the project is successful in any way, porting it to another platform would be a nightmare.

Much better to use something like Unity, or - if you're a 'total control' kinda person, SDL, etc.


I think it depends on what you want to do.

I use Unity extensively, and adding a PS1 filter to an existing game takes a $15 asset. The only real downside is you're stuck in the Unity-verse.

I would absolutely love for Godot to have a robust asset store (yes, paid assets; I've no problem paying for good code). Let's hope Godot 4 is nice.


Again, as I said: use whatever toolchain you'd like - it's really the fact that we're not limited to 2MB of RAM that matters.

Unity just works for me. Whatever works for you is great. I have never once found the 'Unity-verse' to limit or hamper my creativity or ability to do anything I wanted, and, in fact, I have found it more empowering than the dozens of other game IDEs I've worked with in 20+ years of game dev. Since Unity provides me with more than I could possibly need, and I've been using it since I had it on my PowerMac G5 back when that was still a thing, I never foresee myself swapping IDEs for any reason. Especially with a backlog of 10+ years of projects backed up.


Oh, don't get me wrong. I love Unity. I'm actually coming out with my first larger-scale Unity games next year. But I'm a bit afraid of having all my skills locked into one platform controlled by a single for-profit company, so for my next side project I really want to use anything else. Preferably a fully open-source solution like Godot.

Can you explain this - you have games planned?

>Especially with a backlog of 10+ years of projects backed up.


I have an incredibly unique project that I'm honestly very proud of, and I'm very surprised how far I've managed to get with it on my own.

For better or worse, that project is going to require a decent budget to organize, as hardware is a big part of some of the subprojects. Unity is at the core of both the interactive and less interactive aspects of the project.

It'll need to be pretty much a full-time gig for 6-9 months to pull off the stability the project will require to be safely scaled to the mammoth size it'll need to be.

Knowing that, I've saved dozens of prototypes over the years which represent different important iterations of the technology, and as technology grows some of those demos and ideas become more or less important.

I expect this year I'll finally have the finances to truly get started on some serious demos, with the expectation/assumption that I will actually be going into full-time development shortly after the demos are released. I expect the demos to be fairly mind-blowing if executed better than they could have been, say, five years ago, given the technological and budget limitations back then; and I've held out on speaking to investors or doing crowdfunding until I absolutely knew everything was there.

Since...well, the main chunk of this project has always been...not a video game - Unity allowed me the environment to create, I guess...something incredibly unique, because it was more of a sandbox to me than a game engine. I'm interested to see where I can take the project when I'm not hampered by the tech.


Make sure to post something here when you have a prototype, I'm not exactly sure what you're building, but anything that's out of the ordinary is always of interest to me.

I'm considering taking off a few months myself to finish my side projects.


Why do it at all? Brian Eno had an answer for that: https://www.goodreads.com/quotes/649039-whatever-you-now-fin...

I've considered doing this - digging out PS1 homebrew tools and targeting libRetro. I even worked on PS1 games just a bit, a long time ago.

Then I looked at actually doing it and, oh man, it's a lot of work to get started! If I were a younger man with more free time I could pull it off. But at the moment I need to keep my side projects smaller in scope.

As far as OP goes, setting up a renderer like this requires digging out some unusual techniques. But, they aren’t hard to implement once you figure them out. And, then you can go back to the speed and convenience of modern gamedev environments.


The aesthetic itself has value, not just for the nostalgia but because it can evoke a particularly creepy vibe when used correctly. The low-poly designs have a surreal and sometimes uncanny-valley quality to them, the texture warping is unsettling, the fog and dithering need no explanation.

Highly recommend the Haunted PS1 Game Demo Disc 2021[0] if you're into horror, indie games, and the PS1 aesthetic.

[0] https://hauntedps1.itch.io/demodisc2021


Because it's fun.


Developing against this seems much easier than making an actual PS1 game, if you want to get that retro look.



