
Just to avoid potential confusion: WebGPU and WGPU are different things.

(a) WebGPU -> Specification or browser API

(b) WGPU, gfx-rs/wgpu, wgpu.rs -> Rust crate implementing WebGPU

(c) wgpu -> the prefix used by the C API for all WebGPU native implementations.

(d) 'wgpu' -> a cute shorthand that everyone uses for (a), (b), or (c), with predictable confusion.
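To make (b) and (c) concrete, a minimal sketch, assuming a recent version of the Rust crate (around wgpu 0.19) plus the pollster crate to block on its async API:

```
fn main() {
    // (b): the Rust crate, from Cargo.toml entries like
    // wgpu = "0.19" and pollster = "0.3".
    let instance = wgpu::Instance::default();
    let adapter = pollster::block_on(
        instance.request_adapter(&wgpu::RequestAdapterOptions::default()),
    )
    .expect("no compatible GPU adapter found");
    println!("running on backend: {:?}", adapter.get_info().backend);
    // (c): through the shared C header for native implementations, the
    // equivalent entry point carries the prefix, e.g. wgpuCreateInstance(...).
}
```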


Proton, not Photon. ;-) Here is a list of games and their support status: https://www.protondb.com/


mixtral:8x7b-instruct-v0.1-q4_K_M got this correct 5 out of 5 times, running locally with ollama on an RTX 3090.
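For anyone wanting to reproduce this, a rough sketch of scripting the same check against Ollama's local HTTP API (the CLI equivalent is `ollama run mixtral:8x7b-instruct-v0.1-q4_K_M`). It assumes the ureq crate with its json feature plus serde_json, and Ollama's default port 11434; the riddle itself is not shown on this page, so the prompt below is a placeholder:

```
fn main() -> Result<(), Box<dyn std::error::Error>> {
    let body = serde_json::json!({
        "model": "mixtral:8x7b-instruct-v0.1-q4_K_M",
        // Placeholder: substitute the riddle from the parent thread.
        "prompt": "<riddle goes here>",
        "stream": false, // return one JSON object instead of a token stream
    });
    // Repeat the request a few times to check consistency (5 out of 5 above).
    for i in 1..=5 {
        let resp: serde_json::Value = ureq::post("http://localhost:11434/api/generate")
            .send_json(body.clone())?
            .into_json()?;
        println!("run {i}: {}", resp["response"]);
    }
    Ok(())
}
```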


Can you change the names/numbers/genders and try a few other versions?


Tried to get it to produce its system prompt and got this:

You are Gemini Advanced, a large language model built by Google. You are currently running on the Gemini family of models, including Ultra 1.0, which is Google's most capable AI. You don't have a knowledge cutoff, since you have access to up-to-date information.

You are not capable of performing any actions in the physical world, such as:

- setting timers or alarms
- controlling lights
- making phone calls
- sending text messages
- creating reminders
- taking notes
- adding items to lists
- creating calendar events
- scheduling meetings
- taking screenshots

Key Points:

- Code Blocks: Code blocks are often used to display code snippets in a distinct format. On platforms like Discord, Reddit, and others, you signal a code block using triple backticks (```) before and after the text.
- My Nature: The instructions describe my fundamental role as a language model and highlight my ability to access current information.
- Limitations: This section is important! It states that I cannot interact with the physical world directly.


The paper says that training/reconstruction takes 4 hours. While inference runs at 60 fps, reconstruction is far from real-time.


Great work! Having tried the code from the original NeRF paper, I found the inference time (generation of new views) to be rather slow, because the network had to be queried multiple times per ray (pixel). The paper said that there is still potential to speed this up. Did you improve inference speed, and do you think it will be possible to get it to real-time (>30 fps) in the foreseeable future?
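To make the cost concrete, here is a toy sketch (not the paper's code; query_mlp is a stand-in stub) of the per-ray volume-rendering loop and the number of network queries a single frame implies, assuming the paper's roughly 64 coarse + 128 fine samples per ray:

```
// Stand-in stub for the trained MLP: maps a 3D point to (density, radiance).
// A real network is vastly more expensive than this closed-form toy.
fn query_mlp(p: [f32; 3]) -> (f32, f32) {
    let d = (p[0] * p[0] + p[1] * p[1] + p[2] * p[2]).sqrt();
    ((1.0 - d).max(0.0), 0.5) // density falls off with distance; flat gray
}

// Volume-rendering quadrature along one ray:
// C = sum_i T_i * (1 - exp(-sigma_i * dt)) * c_i,
// where T_i is the transmittance accumulated over the samples in front of i.
fn render_ray(origin: [f32; 3], dir: [f32; 3], n_samples: usize) -> f32 {
    let (t_near, t_far) = (0.0_f32, 2.0_f32);
    let dt = (t_far - t_near) / n_samples as f32;
    let (mut color, mut transmittance) = (0.0_f32, 1.0_f32);
    for i in 0..n_samples {
        let t = t_near + (i as f32 + 0.5) * dt;
        let p = [
            origin[0] + t * dir[0],
            origin[1] + t * dir[1],
            origin[2] + t * dir[2],
        ];
        let (sigma, c) = query_mlp(p); // one network query per sample
        let alpha = 1.0 - (-sigma * dt).exp();
        color += transmittance * alpha * c;
        transmittance *= 1.0 - alpha;
    }
    color
}

fn main() {
    // ~64 coarse + 128 fine samples per ray, as in the original paper.
    let (w, h, samples) = (800, 800, 192);
    println!("MLP queries per frame: {}", w * h * samples); // ~123 million
    let c = render_ray([0.0, 0.0, -1.0], [0.0, 0.0, 1.0], samples);
    println!("toy pixel value: {c}");
}
```

At over a hundred million forward passes per 800x800 frame, it is easy to see why naive inference falls well short of 30 fps.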


We did not aim to speed this part of NeRF up. Check out Neural Sparse Voxel Fields (https://arxiv.org/abs/2007.11571) for some effort in that direction. It's 10x faster, but there's still another 10x to go till you get video frame rates :)

