yakaccount4's comments

Three generated frames sounds like a lot of lag, probably a sickening amount for many games. The magic of "Blackwell flip metering" hasn't quite been described yet.

It’s 3 extrapolated frames, not interpolated, so it would be reduced lag at the expense of greater pop-in.

There’s also the new Reflex 2, which uses reprojection based on mouse motion to generate frames. That should also help, but it likely has the same drawback.


Digital Foundry just covered this. 3x and 4x both add additional latency on top of 2x.

https://youtu.be/xpzufsxtZpA?si=hZZlX-g_nueAd7-Q


"Frame generation (FG)" was not a feature in DLSS 2 - the subthread starter was speculating about MFG (in DLSS 4) having worse latency than FG (in DLSS 3), on the basis that more interpolated frames means being more frames behind.

To me this sounds not quite right, because while yes, you'll technically be more frames behind, those frames are also each presented for a correspondingly shorter period. There seems to be no further detail available on this, however, so people have pivoted to the human equivalent of LLM hallucinations (non-sequiturs and making shit up, then not being able to support it, while being 100% convinced they are able to and are doing so).
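The "shorter period" point can be put in back-of-envelope numbers. This is a sketch with made-up figures, assuming interpolation buffers one native frame and that the output interval is divided evenly; it is not NVIDIA's documented pipeline:

```python
# Sketch: with interpolation, the newest native frame is held back while the
# generated frames between it and the previous native frame are displayed.
def interpolation_hold_ms(native_fps: float, generated_per_native: int) -> float:
    """How long the newest native frame waits before it is shown."""
    native_frame_ms = 1000.0 / native_fps
    slots = generated_per_native + 1            # generated frames + the native one
    presented_ms = native_frame_ms / slots      # each frame's share of the interval
    return presented_ms * generated_per_native  # wait while generated frames show

# At 60fps native: 2x mode (1 generated) holds ~8.3ms, 4x (3 generated) ~12.5ms,
# i.e. tripling the generated frames does not triple the hold time.
print(round(interpolation_hold_ms(60, 1), 1), round(interpolation_hold_ms(60, 3), 1))
```

Under that assumption the extra delay grows sublinearly with the number of generated frames, which is why "more frames behind" wouldn't directly translate to proportionally more lag.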


Nobody is talking about DLSS 2 here so I don't know where that came from. The 2x, 3x, and 4x in my post are the number of generated frames. So 2x == DLSS 3, and 3x and 4x are then part of the new MFG in DLSS 4.

Digital Foundry has actual measurements, so whether or not that matches your intuition is irrelevant. But I think the part you forgot is that generating the frames still takes time in and of itself, and you then need to still present those at a consistent rate for motion smoothness.


Watched their coverage; not much in the way of details that would explain the (slightly) increased latency. Your speculation about why MFG takes longer makes sense to me, although I have trouble picturing how exactly the puzzle all fits together. Will have to wait for more in-depth coverage.

Seems like I misunderstood your notation.

> Digital Foundry has actual measurements, so whether or not that matches your intuition is irrelevant.

I mean, it's pretty relevant to me. Will watch it later then.


Yeah, in hindsight I should have figured it was more generated frames presented at lower frame times (a shorter period each).

The Digital Foundry initial impressions are promising, but for me, with a 144Hz monitor and a preference for V-Sync with an FPS cap slightly below, I'm not sure using 3x or 4x mode will be desirable, since that would seemingly make your input lag comparable to 30fps. It seems like these modes are best used with extremely high refresh rate monitors (pushing 240Hz+).


This is true, but it's worth noting that 3x was only 5ms of additional latency beyond the original FG, and 4x was 7ms, so the difference in latency between DLSS 3 FG and DLSS 4 MFG is probably imperceptible for most people.

I just saw the Digital Foundry results, and that's honestly really good.

I'm guessing users will self-tune to 2x/3x/4x based on their V-Sync preference then.


Yeah, but it means MFG still has the same fundamental problem as FG: the latency hit is largest in the only scenario where it's meaningfully useful. That is, at a low native 15-45fps, the impact of an additional frame of buffering combined with the low initial FPS means the latency hit is relatively huge.

So Nvidia's example of taking Cyberpunk from 28fps to 200+ or whatever doesn't actually work. It'll still have the sluggish, watery responsiveness of ~20fps even though it'll look smooth.
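Rough arithmetic for why a low base frame rate hurts most. The assumptions here are mine: interpolation buffers one native frame, and the ~7ms 4x figure from the Digital Foundry measurements quoted upthread is treated as a flat adder; these are illustrative numbers, not measured pipeline data:

```python
# Added input latency = one buffered native frame + the per-mode MFG overhead.
def added_latency_ms(native_fps: float, mfg_extra_ms: float) -> float:
    buffered_frame_ms = 1000.0 / native_fps  # wait for the next native frame
    return buffered_frame_ms + mfg_extra_ms

# At 28fps native in 4x mode: ~35.7ms of buffering + ~7ms = ~42.7ms added,
# so the output can read 200+fps while the input still feels like sub-30fps.
print(round(added_latency_ms(28, 7), 1))
# At 90fps native the same mode only adds ~18.1ms.
print(round(added_latency_ms(90, 7), 1))
```

The buffering term dominates at low native FPS, which is the same point made above: the penalty is biggest exactly where frame generation is most tempting.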


> It’s 3 extrapolated frames not interpolated. So would be reduced lag at the expense of greater pop-in.

It's certainly not reduced lag relative to native rendering. It might be reduced relative to DLSS 3 frame gen, though.



This isn't relevant to what I said?

> It’s 3 extrapolated frames not interpolated.

Do you have a source for this? It doesn't sound like a very good idea. Not that I think there's additional latency, mind you, but that wouldn't be because it's not interpolation.


Interpolation means you have frame 1 and frame 2, now compute the interstitial steps between these two.

Extrapolation means you have frame 1, and sometime in the future you'll get a frame 2. But until then, take the training data and the current frame and "guess" what the next few frames will be.

Interpolation requires you to already have the final state that follows the added frames; extrapolation means you don't yet know what the final state will be, but you keep drawing until you get there.

You shouldn't get additional latency from generating, assuming it's not slowing down the traditional rendering pipeline.
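The distinction can be shown with a toy 1-D sketch. This is hypothetical and nothing like the actual DLSS models; it only illustrates why interpolation needs the next real frame up front while extrapolation doesn't:

```python
def interpolate(frame_a: float, frame_b: float, steps: int) -> list[float]:
    """Intermediate values between two *known* frames; requires frame_b first."""
    return [frame_a + (frame_b - frame_a) * i / (steps + 1)
            for i in range(1, steps + 1)]

def extrapolate(frame_a: float, velocity: float, steps: int) -> list[float]:
    """Guessed future values from the last known frame and its motion."""
    return [frame_a + velocity * i for i in range(1, steps + 1)]

# An object at position 0 in frame 1 and position 4 in frame 2, 3 generated frames:
print(interpolate(0.0, 4.0, 3))  # needs frame 2 up front, hence buffering/latency
print(extrapolate(0.0, 1.0, 3))  # no waiting, but wrong if the motion changes
```

Both produce the same in-between positions when motion is steady; they only diverge when the next real frame breaks the guess, which is where extrapolation's artifacts come from.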


I understand this - doesn't address anything of what I said.


This is not even about the same technology the person I replied to was talking about in the quoted section (this is Reflex 2, not MFG).


Could you please point out where on that page it says anything about "extrapolation"? I searched for the (beginning of the) word directly and even gave all the text a skim, and didn't catch anything of the sort.

The literal word doesn't have to be there in order to imply that it's extrapolation instead of interpolation. By your logic, there is no implication of interpolation versus extrapolation either. Nvidia simply won't use such terms, I believe.

They did specify [0] that it was intermediate frames they were generating, back when the first version of frame generation was announced with DLSS 3, which does translate to interpolation. It's only natural to assume MFG is the same, just with more than a single intermediate frame being generated.

It is also just plain unsound to think that it'd not be interpolation: extrapolating frames into the future inevitably means that predicted future sometimes not coming to pass, and serious artifacts every couple of frames. This is just nonsense.

I checked through (the autogenerated subtitles of) the entire keynote as well, zero mentions there either. I did catch Linus from Linus Tech Tips saying "extrapolation" in his coverage [1], but that was clearly meant colloquially. Maybe that's where OP was coming from?

I will give you that they seem to intentionally avoid the word interpolation, and it is reasonable to think then that they'd avoid the word extrapolation too. But then, that's why I asked the person above. If they can point out where on that page I should look for a paragraph that supports what they were saying, not with a literal mention of the word but otherwise, it would be good to know.

[0] https://www.nvidia.com/en-us/geforce/news/dlss3-ai-powered-n...

[1] https://youtu.be/3a8dScJg6O0?t=345


MFG is almost certainly still interpolation. I'm guessing Reflex 2 is more akin to extrapolation, and that might be getting the media to cross wires?

Reflex 2 seems to be asynchronous reprojection [0]. I'm not quite sure how the two techs fit together when both are enabled, but clearly it works fine at least. Hopefully there will be more coverage about these later.

[0] https://youtu.be/7qzJHUbAkZw?t=316


Jensen Huang said during his keynote that you get 3 AI generated frames when rendering a native frame.

This doesn't imply "extrapolation" instead of interpolation.

Deploying some sort of TPM remote attestation for DRM requires every component from every vendor to play nice, so I don't think you'll ever see that rolled out for Windows.

I would guess that the actual push for TPM is to have 'better' BitLocker, and Passkey support.

In practice, the default BitLocker+TPM configuration isn't that great (no user entropy/PIN, and dTPM is basically worthless).

I have no actual understanding of how the TPM is involved in Windows Hello/WebAuthn/passkeys or whatever, but at a glance, biometrics without a TEE seems like a very weak link.


I figured it’s more about ensuring the kernel and boot loading and OS are 100% unmodified by attackers/malware.

If that helps with bitlocker or passkeys or whatever that’s great. But I assume at its base it’s a pure integrity play.

I would think that would also let you know the public key stuff used to communicate with hardware authentication like a fingerprint reader is secure too, but I don’t know how that stuff works well enough to know if that’s true.


The TPM can measure the Secure Boot state for later reporting (attestation), but when it comes to DRM, knowing the firmware and kernel are valid isn't a terribly interesting bit of information when the configuration of the OS and installed applications is really the important part.

As far as I know there’s no real scalable way for that to work in the Windows ecosystem.


That makes sense to me. It just doesn’t seem that useful for DRM, seems like kind of a reach.

Especially in modern systems where the graphics card could do all of it and so the host PC never has access to the decrypted data or keys in the first place.


I think there's a large pool of good talent that can't fully be trusted to do the right thing (not slack off), but can be employed successfully with sufficient guardrails (return to office).

Loosely related anecdote why I feel so:

For a while right after Apple started its mandatory RTO and ramped up Caffe Macs, the soda "grab n go" sections had an honor system - you paid your $1.25 for a Coke and went on your way. Apparently there was a _lot_ of petty theft of drinks, so they have a person with an iPad check you off now, and I assume theft has basically plummeted since far fewer employees are brazen enough to just barge off with one without checking off their name.

So maybe the RTO is effective in some way? Or maybe the real lesson is that the soda should have been free to begin with.


You have to.... pay... for soda... at Apple? What the hell?


Same at Nvidia. But it's subsidized. The argument I heard is it reduces food waste.


Call it the Apple tax


You have to pay for food and drink. It’s not like Google etc.


Sorry; not understanding your outrage here. You expect your work to provide you with free soda? Or you expect a FAANG company in particular to do so?


There is a classic text [1] about exactly this, the free soda.

In short, when a company starts to pay attention to such petty details as the cost of soda, which must be a rounding error in the cost of running an office of a decent engineering company, it's a signal that the culture has changed. With that change, best engineering talent often leaves towards places where priorities are still aligned with lofty goals, not bean-counting.

(Disclaimer: of the last 10 companies I've worked for, only two did not offer free soda, due to being 100% remote.)

[1]: https://steveblank.com/2009/12/21/the-elves-leave-middle-ear...


I worked at the KFC/YUM! headquarters for 10 years. When I started, we had free soda in the lobby of the building. It was great for those late afternoon doldrums and a group of us would often walk down 5 flights of stairs to get a pick-me-up.

About 3 years before I left they removed the soda machines. My understanding was that it only cost about $30k / year and most of that was for cups & lids. We even had an executive that was willing to pay for it out of their budget. No go.

It turns out that the catering company that supplied food to the building didn't like losing out on the soda money. So they told KFC/YUM! to remove the free soda option, and they did. It really was the beginning of the end.

It's not so much that the soda was gone; it was the thought of it. That free soda actually solved quite a few programming problems, or at least allowed us to solve them on our way downstairs. It also let us work harder/later than we normally would by giving us that afternoon journey. Its positive effect was much greater than its financial cost.


Thanks for the explanation. It makes sense, but is entirely foreign to me (at least for soda). In my history of 20 years of software/DBA in the Australian mining/construction industry, most have had free coffee but none have had free soda. Though two did free Friday-afternoon beer and pizza.

My current place has free instant coffee (until it runs out) and everyone who wants to push for more than that is viewed with 'tall poppy syndrome'.


Yes, free basic cheap commodity drinks and snacks are bare minimum for a FAANG-class employer, and every other FAANG-class employer I'm familiar with goes well above and beyond that.

Even my first job out of college, as a boilerroom MSP sysadmin earning $40k in Manhattan, gave us free soda (the only benefit, lol).

I mean, even if you work as a mechanic at a body shop, there is often free coffee in an old pot in the break room, it's not that crazy.

But outside of that context, no of course I don't expect/care if my employer provides me with free soda. I don't even drink it.

It just seems weirdly cheapskate for a supposed FAANG-class employer.


I'd rather receive benefits in cash than paying some vendor tens of thousands of dollars to stock useless items.


I actually kind of agree, as I am not even a soda drinker, but that was not the point. The point is the message it sends. And it really doesn't cost that much on a per-employee basis -- I would guess some employees might consume a six pack per day, valued at $5, and some others might consume nothing, valued at $0, so average that to maybe $3 or $4. This is less than that employee already paid out of pocket to commute to work. And having some basic food/drink taken care of centrally is more efficient in terms of saving time stocking up, going and buying more during breaks, etc.

But most of all, it's just the message. All the other "nice" employers do it, and even some pretty "basic" employers do (not surprising; this is a very cheap perk, only a little more expensive than breakroom coffee), so if you don't, you look like a cheapskate. Like what else are you cheaping out on?

I also don't drink much coffee, but would see it as a red flag if they cut costs by getting rid of the coffee in the break room, and that's a red flag that would hold even if I was working as a retail cashier (that's not an industry where free break room coffee is standard, but it's not unusual, so while I wouldn't care if a company never offered it, if I was a cashier at a company that had it, and then they took it away, the message would be obvious -- we are preparing to cut costs at the expense of your work life, our company is no longer growing, jump ship if you can)


Sure, you could cap the expense per employee so it's a non-issue. I was thinking about benefits in general that nobody cares about or uses. But even if they use it, snacks/food can get expensive real quick.

https://www.businessinsider.com/elon-musk-free-cafeteria-lun...


I remember when Oracle took over Sun and they brought in the free soda drinks where you can grab a can and they're constantly replenished. I brought in a backpack and filled it then took it home. After a week of doing that I had all these soda cans to drink but I realized I didn't like soda anymore. Larry is the best.


> I think there's a large pool of good talent that can't fully be trusted to do the right thing (not slack off)

Ever worked in an office? At any corporation size, no one works 8 hours a day, and if you want to slack, there are plenty of opportunities in the office as well. Hell, you can spend the whole day there without doing ANYTHING.

So that is a non-point for me, slackers gonna slack.


I knew a guy whose daily routine was something like:

9:25 - Arrival (cafeteria stopped serving breakfast at 9:30).

9:30-10:15 - Breakfast

10:15-11:30 - A lot of walking around the office, having business-sounding conversations with others but really getting nothing done.

11:30-1:00 - Out for lunch.

1:00-3:00 - Sneak out to car in the parking lot for a nap.

3:00-3:30 - Daily standup. Vaguely talk about how he’s “merging code” or “updating library dependencies” this week.

3:30-4:00 - At desk working.

4:00-5:30 - Doing his walk-around-talk-arounds through the office hallways again.

5:30-6:00 - At his desk again!

6:00 - Cafeteria starts serving dinner. Grabs a bunch of food, throws it in his backpack, and heads to the parking lot.

This went on for years. The other teammates joked that he must have blackmail dirt on the boss because it was obvious to everyone that he wasn’t doing anything and nobody seemed to care. He was there when I joined the company and was there when I left, years later. He may very well be still there today, merging code.


There are less serious versions of the same thing that people might unintentionally do.

09:25 - arrive.

09:30 - 10:00; breakfast

10:00 - 10:30; standup

10:30 - 11:00; Coffee after standup

11:00 - 11:30; checking emails before lunch

11:30 - 13:00; lunch

13:00 - 13:30; food coma, better just check some emails

13:30 - 14:00; coffee and a chat

14:00 - 15:00; maybe some actual work

15:00 - 15:45; someone needs help, they come over and have a chat

15:45 - 16:30; maybe some more work

16:30 - 17:00 wrap up, go home, urgently write some emails and expect a response before you arrive 09:25 tomorrow.

I've seen this, a lot of this.


Cokes are $2.25 now


I'm a coke zero guy


I'm sorry but the example you gave for your argument makes no sense at all. Most people here have tasks and daily meetings where these tasks are discussed or at least briefly mentioned (in other words, your "iPad guy" is already there).

If someone is slacking, it is immediately visible. Can you abuse the system by providing fake explanations for why one task takes so long? Of course, but you can do the same whether you work remotely or not, and this can also be easily verified in both cases.


When is the Swift rewrite?


I don't plan on buying any form of spinning media ever again in my life.


While I agree, there is nothing less satisfying than backing up to an NVMe external drive.

I still have some large spinning HDDs as a secondary backup, and it's so much more satisfying. I could listen to them work all day! Probably rooted in my brain from childhood.


Wait a few weeks and the digital version of the spinning media will magically appear thanks to the magic of an eye-patch, peg-leg, and parrot included service in a bay.


This is sad indeed, but it was already infested with low quality trolls before the Elon Musk purchase. The entire Jonathon Scott clown-fiesta was utterly embarrassing.


I don't particularly care, but it's lazy. It's nicer if people try and pick something unique and unambiguous in the domain, so searches don't clash. Obviously, that's not always possible, but in this case, I'm not giving them the benefit of the doubt.


Which would you recommend is better to get started with: MiSTer or Analogue Pocket (non dev edition)?

The workflow of testing new cores on AP if you don't have Developer Pocket seems kind of nasty.


The workflow is actually the exact same for the dev and retail Pockets. Both have an exposed JTAG port, but for the retail Pocket, you just need to remove the "hump" on the lower back. You want to make sure you get an official USB Blaster (https://www.mouser.com/ProductDetail/Terasic-Technologies/P0...), because they negotiate a lower voltage level with the Pocket (someone blew their JTAG port using a cheap one).

For a beginner, the Pocket is definitely easier. With MiSTer, you're building a bunch of other framework code that isn't your own, and that adds a lot of iteration time. I also think the Pocket APIs are more refined, but also more limited. I've already written some info about this - https://www.reddit.com/r/fpgagaming/comments/1318jsr/tamagot...


My gawker account used it.


There's nothing on the official website or GitHub that indicates what this software is, other than a cropped screenshot that looks like vscode with a prompt pop up over it.

Edit: I still can't figure out if this is just some sarcastic joke software or something.

