
Article is from 2021, but the game launched today.


It's a very risky industry. It's quite common for a studio to sink a ton of money into a game and earn very little back, basically wasting everything. Game series that consistently make revenue are even rarer, so they're extremely coddled while they last, though it is expected that eventually they will die, too. The rare successes cover the costs of the common failures.


It's a lot like the VR hype cycle of the past 5 years. Maybe there will be more of an actual change from the AI stuff, though.


I used to believe this and then I got into the area. It depends on the area, of course, but it turns out that the cost of the house itself is quite significant. House construction costs run $200-500 per square foot, putting even a medium-sized house at a quarter of a million dollars or more. When you look at the cost of empty plots of land versus similar plots with houses on them, you'll see that the housed plots cost about the same as the empty plots plus the construction cost of a similar-sized house. In the areas where I looked, the cost of the house dominated, such that the land value is about 20% of the total value of the plot. Even the variance that occurs at that level can be further explained by the value of potential future plots of land on the space -- a plot of land that has one house but could hold a second (for whatever reason) is more valuable than an otherwise-identical plot that can only support one house.
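To put rough numbers on that decomposition, here's a back-of-envelope sketch in C#; the figures are purely illustrative, not from any actual listing:

    using System;

    class LandValueSketch
    {
        static void Main()
        {
            // Illustrative figures only: a 2,000 sq ft house at $200/sq ft.
            double squareFeet = 2000;
            double costPerSquareFoot = 200;
            double constructionCost = squareFeet * costPerSquareFoot;     // $400,000

            // Asking price of a comparable plot that already has a house (made up).
            double housedPlotPrice = 500_000;

            // The claim: housed plot ~= empty plot + construction cost,
            // so the implied land value is the residual.
            double impliedLandValue = housedPlotPrice - constructionCost; // $100,000
            double landShare = impliedLandValue / housedPlotPrice;        // 0.20

            Console.WriteLine($"Implied land value: {impliedLandValue:C0} ({landShare:P0} of total)");
        }
    }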


It has more to do with how game engines are built, which makes embeddability the most important criterion.

Most game engines are written as a large chunk of C++ code that runs each frame's timing and the big subsystems like physics, particles, sound, and the scenegraph. All the important engineers will be dedicated to working on this engine, and it behaves like a framework. The "game logic" is generally considered to be a minority of the code, and because less-technical people generally author it, it gets written in a higher-level "scripting" language.

This creates some serious constraints on that language. It must be possible to call into the scripts from the main engine and the overhead must be low. The scripts often have to make calls into the data structures of the main engine (e.g. for physics queries) and the overhead of that should be low as well. It should also be possible to control the scripting language's GC because the main engine is pretty timing-sensitive. Memory consumption is pretty important as well.

All these requirements point towards two implementations specifically: Lua (and LuaJIT), and Mono. Those two runtimes go out of their way to be easy and fast to embed. A third option, which a lot of engines pick, is to write their own scripting language so they control everything about it. Any other language you can think of (with the possible exception of Haxe) will have some major hurdle that prevents easy embedding. The fact that you can compile multiple languages to Mono bytecode pushes some folks in that direction; if you're planning to write a lot of code in the scripting engine (not all of them do! See: Unreal) that's nice flexibility to have.
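To picture the script side of that boundary, here's a rough C# sketch of what game logic on an embedded Mono runtime tends to look like (Unity-style); the engine types and method names are hypothetical stand-ins, not any particular engine's API:

    // Hypothetical game-logic "script" running on an embedded Mono/.NET runtime.
    // The C++ engine instantiates this class and calls Update() once per frame;
    // EnginePhysics.Raycast stands in for a binding that crosses back into native
    // engine code, which is why call overhead in both directions matters so much.
    public struct RaycastHit
    {
        public float Distance;
        public int ColliderId;
    }

    public static class EnginePhysics
    {
        // In a real engine this would be an internal call / P/Invoke into C++.
        public static bool Raycast(float ox, float oy, float oz,
                                   float dx, float dy, float dz,
                                   out RaycastHit hit)
        {
            hit = default;   // placeholder so the sketch compiles standalone
            return false;
        }
    }

    public class EnemyController
    {
        private float _cooldown;

        // Called by the engine every frame. Allocating here would feed the GC
        // every 16 ms, which is exactly what the timing-sensitive core wants to avoid.
        public void Update(float dt)
        {
            _cooldown -= dt;
            if (_cooldown > 0f) return;

            if (EnginePhysics.Raycast(0f, 1f, 0f, 0f, 0f, 1f, out var hit) &&
                hit.Distance < 10f)
            {
                _cooldown = 1.5f;   // "fire" and reset; the game logic stays in the script
            }
        }
    }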


You're right, and it's got more layers than that. C# does have value types, which are not boxed, and using them judiciously can avoid garbage. However, they are a more recent addition to the language (which started as a lame Java clone), and so the standard library tends not to know about them. Really trivial operations will allocate hundreds of bytes of garbage for no good reason. Example: iterating over a Dictionary. Or, IIRC, getting the current time. They've been cleaning up these functions to not create garbage over time, of course, but it's never fast enough for my taste and leads to some truly awful workarounds.
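For what it's worth, the Dictionary case is narrower than it sounds (and the replies below dispute it): in current C#, the allocation shows up when the struct enumerator gets boxed behind an interface, and Unity's old Mono compiler boxed it even in a plain foreach. A minimal sketch:

    using System.Collections.Generic;

    class DictionaryIterationSketch
    {
        static int SumDirect(Dictionary<int, int> d)
        {
            int sum = 0;
            // foreach over the Dictionary itself uses the value-type
            // Dictionary<K,V>.Enumerator, so no heap allocation on modern runtimes.
            foreach (var kv in d) sum += kv.Value;
            return sum;
        }

        static int SumViaInterface(IEnumerable<KeyValuePair<int, int>> d)
        {
            int sum = 0;
            // Going through IEnumerable<T> forces GetEnumerator() to box the struct
            // enumerator: one small heap allocation per call. Unity's old Mono
            // compiler also boxed it in the direct case above, which is where this
            // class of complaint mostly comes from.
            foreach (var kv in d) sum += kv.Value;
            return sum;
        }
    }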


C# had value types and pointers from the very beginning. These are not a recent addition. The standard library does know about them. However, not until C# 2.0, which introduced generics, were collections able to avoid boxing value types.

There are some cases where allocations are made when they could have been avoided. Iterating over a dictionary via its IEnumerable interface creates a single IEnumerator object. Async methods, tuples, delegates, and lambda expressions also allocate memory, as do string literals. It is possible to have struct-based iterators and disposers. There are some recently added mitigations, such as ValueTask, ValueTuple, function pointers, ref structs, and conversion of string literals to read-only spans, that eliminate allocations.

DateTime is a value type and doesn't allocate memory. Getting the current time does not allocate memory.

With the recent additions around ref types and Span<>, C# provides a lot of type-safe ways to avoid garbage collection. You can always use pointers if need be.
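As a small illustration of those newer allocation-free tools, a sketch assuming .NET Core 2.1+ (for the span-based int.Parse overload and stackalloc into a Span):

    using System;

    class SpanSketch
    {
        // Sums comma-separated integers without allocating any substrings or
        // arrays: slicing a ReadOnlySpan<char> is just an offset plus a length.
        static int SumCsv(ReadOnlySpan<char> csv)
        {
            int sum = 0;
            while (!csv.IsEmpty)
            {
                int comma = csv.IndexOf(',');
                ReadOnlySpan<char> field = comma < 0 ? csv : csv.Slice(0, comma);
                sum += int.Parse(field);   // span-based overload, no string created
                csv = comma < 0 ? ReadOnlySpan<char>.Empty : csv.Slice(comma + 1);
            }
            return sum;
        }

        static void Main()
        {
            // stackalloc into a Span<T>: scratch memory that never touches the GC.
            Span<char> buffer = stackalloc char[16];
            "1,2,3,40".AsSpan().CopyTo(buffer);
            Console.WriteLine(SumCsv(buffer.Slice(0, 8)));   // prints 46
        }
    }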


And to put that in context, 2.0 was around 2003, iirc


It was 2005.

But even then, arrays of value types were available since C# 1.0 (2001).


Value types have existed in .NET since forever, version 1.0.

Besides, the runtime has also supported C++ (Managed Extensions for C++) since version 1.0, and any language can tap into it.


I had one, and I wrote a lot of code on it. There was pretty much nothing else on the market that could do web browsing in that portable of a form factor, and at that price point. Steve was somewhat right to worry about them -- in an alternate history they would have taken over as the dominant form of computing, creating a whole evolutionary tree of tiny laptops, converged with keyboarded mobile phones, at all price points and build qualities.

The 11" Air I got to replace the Eee was much better in every single way. One note is that the thin-and-long of the Air was much more portable than the chunky but stubbier Eee. It just fits better into the kinds of bags we all have. The thinness wars started then.


Yes, I've heard this said many times about sales people. If they're not visibly wasting money, people are reluctant to hire them because they either aren't good (and thus have no money to waste), or won't be hungry (because they've saved the money they earned by not wasting it). So conspicuous consumption becomes a way to signal that the sales person is capable of reliably generating large incomes.


It's fun to be back in the age where every few years you want to upgrade your computer because the new ones are so much faster, not because the old one is worn out.

20% faster isn't enough to make me regret my M1 purchase, but after one or two more 20% speed gains I'll feel like upgrading to the latest is going to be worth it.


I'd want Wirth's law[1] to not hold true regardless of where we are in the hardware scale.

I recently tried using a 486 + CRT monitor setup. The speed at which your keystrokes appear on the screen is simply astounding. I'd like us to pay more attention to how software is written and to debloating the layers of abstraction with the benefit of hindsight.

[1] https://en.wikipedia.org/wiki/Wirth%27s_law


> I recently tried using a 486 + CRT monitor setup. The speed at which your keystrokes appear on the screen is simply astounding.

Use a fast editor like Sublime Text or even something like JetBrains IDEA with the zero-latency mode enabled and the editor delay will be single-digit milliseconds.

Get a modern 120Hz or more monitor (search for gaming monitor) with low latency and your display lag will be less than 10ms.

More info here: https://pavelfatin.com/typing-with-pleasure/


I’ve been using Sublime Text since the Aztecs and it is really fast, but not as fast as what I described earlier. Yep, using it on macOS with 120Hz Liquid Retina (M1 MBP 14 inch).

I’ll check JetBrains’ zero-latency mode. I didn’t know that existed, cool.


The example here still neglects the reality that the stack processing the input significantly increases the latency.

The (likely) PS/2 keyboard on the 486 will have lower input latency, as it doesn't have the multiple complex stacks (both hardware and software) that each key input needs to travel through. People say you can't feel it, but people are regularly incorrect.

I feel like Rick (C-137) when I say, you gotta experience low latency on something like the original pong machine (from 1972) to know what you're missing out on. Once you've used this, the wireless mice and USB keyboards will feel slow and laggy even in low latency software.


Downvoted to zero points again, apparently not contributing to the discussion.


I can't understand why people like those tiny wireless keyboards and mice they have to recharge and pair. When you're at a workstation just plug in one dongle and hook up all of it. It's so easy.


The most responsive computer I’ve ever used was a Mac 512ke upgraded with 2.5 MB of RAM. I booted and loaded applications all off a RAMdisk and every input was instantaneous, including launching applications.


It's the OS and event-driven model, and polling-based USB vs interrupt-based PS/2.

The nearly unnoticeable latency is a small price to pay for progress.


I remember reading that 1/10 of a second response time is "interactive"

I wonder how many systems don't qualify as interactive?

I don't think typing into the amazon search bar qualifies.


Boot Linux and use bash on a modern computer. Do you think the keystrokes are slower?


The problem isn't even the computer most of the time; it's more often the display. Many modern flat panel displays have far more latency than even a crappy analog CRT, and yes, the difference is often definitely noticeable if you have any experience with computers. Some of the most modern ones can be the worst offenders, too, given the amount of signal processing a modern display does.

To add to the earlier example: I do some retro gaming on an old Windows 98 box with a CRT, and the lack of latency when moving the mouse in first-person shooters of that era, compared to today, can be incredible.
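To put rough numbers on why that's perceptible, here's a small C# sketch that adds up an end-to-end latency budget; the per-stage figures are illustrative ballpark values in the spirit of the latency measurements people have published (e.g. the pavelfatin article linked upthread), not numbers for any specific setup:

    using System;
    using System.Linq;

    class LatencyBudget
    {
        static void Print(string name, (string stage, double ms)[] pipeline)
        {
            // A plain sum of per-stage delays; real pipelines also have variable
            // queueing, but the totals show the order of magnitude.
            Console.WriteLine($"{name}: {pipeline.Sum(p => p.ms):F0} ms");
        }

        static void Main()
        {
            Print("USB keyboard + LCD (illustrative)", new[]
            {
                ("keyboard scan/debounce", 5.0),
                ("USB polling interval",   4.0),   // ~125 Hz default HID polling
                ("OS input + compositor", 15.0),   // often a frame or two
                ("display processing",    15.0),   // scaler/overdrive on many panels
            });

            Print("PS/2 keyboard + CRT (illustrative)", new[]
            {
                ("keyboard scan/debounce", 5.0),
                ("PS/2 interrupt",         1.0),
                ("OS input + draw",        8.0),
                ("CRT scanout",            1.0),   // essentially no processing delay
            });
        }
    }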


Same reason most people playing FPS games today use a 144Hz+ monitor, ideally with black frame insertion. The difference is night and day.


That's analog for you. You essentially have a speed-of-light connection from the frame buffer to the electron gun, with no processing involved. As fast as the bits can be read from the frame buffer, they appear on the screen.


Dan's famous article on this subject has benchmarks: https://danluu.com/keyboard-latency/


I've tried FreeBSD (without a desktop) on a ThinkPad X1. I must admit, it is pretty good.


Right, and this is quite impressive, since they haven't yet been able to go from 5nm to 3nm. So, by the time the M3 is out, it will be a great update for the M1. (And the M1 will still be awesome for people who don't need the bump.)


IDK that I'd count on node shrinks to provide anywhere near the performance/power-saving bumps that they used to. We are pretty close to the point where smaller nodes will mean more power consumption with the same design, merely due to the fact that smaller nodes mean more leakage from electron tunneling.


This has been the story for years and we just keep solving the problems. Perhaps one day you’ll be right, but it won’t be N5 to N3.


For as long as I can remember, every new Apple computer was "up to" 5... 10... 15... times faster than the one before. The use of "up to" is a clever marketing ploy. Because if all those were actual figures then, with all that compound multiplication, the current laptops must be about a million times faster than the one I had 20 or so years ago.


Well, even my phone feels about a million times faster than the desktop machine with an old Pentium chip (somewhere in the mid-hundreds of megahertz) that I was still using 20 years ago.


Even 20 years ago, Notepad used to open in a flash on my desktop. The Notes app on my phone however...


definitely feels a million times faster than an iMac circa 2000 with its 333MHz CPU. I mean, we are talking 100,000x performance at least.


Meanwhile my main computers are all from around 2010; even for hobby 3D graphics coding I will never be able to saturate the GPUs with my designer skills.


> for hobby 3D graphics coding I will never be able to saturate the GPUs with my designer skills

Yeah, but the latest versions of everyday apps (especially web browsers) require newer GPUs/CPUs.

Sadly, the coding style of popular software these days is not focused on performance & optimization.


Web browser 3D APIs are a bad example, because WebGL 2.0 is unaware of what happened after 2011 in GPU hardware, while WebGPU is targeting 2015 hardware.


That doesn't seem new. Just taking the default single-thread benchmark from AnandTech, the CPUs that are each sequentially 20% slower, starting from the current record holder, the Intel Core i9-12900K (Q4 2021), are the Ryzen 9 5900X (Q4 2020), the Core i9-10900K (Q2 2020), the Core i3-7350K (Q1 2017), and the Core i5-6500 (Q3 2015). Four compounding 20% steps (1.2^4 ≈ 2.07) make that a doubling in performance in about 6.25 years. So you needn't have waited for your fruity savior to bring you biennial 20% performance increases.


I think you got that wrong. Speed isn't a bottleneck anymore and you won't get a much better experience. This is the same as with phones.

Soon, they'll have to be more creative for people to consider buying every few years.

I wish the MacBook Air would get ProMotion.


> I think you got that wrong. Speed isn't a bottleneck anymore and you won't get a much better experience. This is the same as with phones.

Well... I'm pretty sure that this is Hacker News. And for many developers, any performance improvement is great.

I first got an M1 Air, which was fantastic. Faster than my 3700X workstation for building Rust projects, while it was passively cooled and portable. Despite the awesome performance of the M1 Air, I upgraded to an M1 Pro when it came out. Moving from 4 performance + 4 efficiency cores to 8 performance + 2 efficiency cores was yet another awesome upgrade, giving again much quicker builds.

I work on machine learning stuff, so the AMX matrix co-processing unit in the M1 was really great for training small networks (sometimes convolution networks are still great for NLP + being able to train locally is nice for development). Then the M1 Pro/Max had double the AMX units, so it's again a great step forward.

Getting 20% YoY improvements will definitely make me very happy (and I bet many other developers).


Depends what you do. For compiling/building or video/photo editing, the speed makes a big difference. M1s cut Xcode build times in half compared to i9s. Docker builds are faster too. Shaving off a few minutes here and there makes a big difference in ROI when you factor it over a year of use. Let's say it saves you 15 minutes a day. That's about 62 hours a year.


Small things add up. Freezing a track in Ableton, for instance, is substantially faster. This makes a huge difference to the workflow. When you're in a creative state of flow, stopping several seconds just for a track to freeze can take you completely out of your zone.


Completely agree here. Not only that, but I get faster compiles on my M1 Mac than on my i7 Mac, _and_ I can compile on battery on my M1 and still have it last all day.


I am a Linux user.

For me lagging on Windows is not only noticeable but also maddening.

Typically my old laptop with a 3 year old processor and half the memory is snappier running a mainstream distro with KDE Plasma than a brand new one running Windows.


> Speed isn't a bottleneck anymore

stares in Mojang's poorly optimised Java

(Also, as other people have pointed out, it absolutely can be when compiling, using Photoshop, doing billions of Prolog inferences, etc.)


>Soon, they'll have to be more creative for people to consider buying every few years.

No, they will just stop the "security" updates and force you to buy a new one.


Higher speed at the same power usage translates to better battery life.


I spend a lot of my (non-work) time doing photo editing using Adobe products while traveling.

I'll happily take a laptop that is 10x as fast as my current one...


This is written in such a way that it seems like it makes sense, but it ultimately takes its conclusion as an axiom and is thus essentially meaningless.

Gerrymandering is to draw districts according to political party affiliations. So of course if you only divide up your districts by political party affiliation (in rather extreme ways) you will discover that doing so doesn't lead to stable political systems. No shit, that's the problem with gerrymandering.

Any serious analysis of this problem needs to confront the realities of geographical location and migration.


> Gerrymandering is to draw districts according to political party affiliations.

That's not really what it is. It's to draw districts in order to give a particular party an advantage. Usually the way to do that is not to draw them along party lines, but to mix voters of different affiliations in the same district such that your party comes out with a majority in as many districts as possible.


The practice of using political party affiliation as a criterion at all for drawing district borders inevitably leads to gerrymandering. This much does seem apparent from the article, which is why the solution, to _not_ do that, seems mysterious in its absence.

