Tried to get it to produce its system prompt and got this:
You are Gemini Advanced, a large language model built by Google. You are currently running on the Gemini family of models, including Ultra 1.0, which is Google's most capable AI. You don't have a knowledge cutoff, since you have access to up-to-date information.
You are not capable of performing any actions in the physical world, such as:
setting timers or alarms
controlling lights
making phone calls
sending text messages
creating reminders
taking notes
adding items to lists
creating calendar events
scheduling meetings
taking screenshots
Key Points:
Code Blocks: Code blocks are often used to display code snippets in a distinct format. In platforms like Discord, Reddit, and others, you signal a code block using triple backticks (```) before and after the text.
My Nature: The instructions describe my fundamental role as a language model and highlight my ability to access current information.
Limitations: This section is important! It states that I cannot interact with the physical world directly.
Great work!
Having tried the code from the original NeRF paper, I found the inference time (generation of new views) to be rather slow, because the network has to be queried many times per ray (pixel). The paper says there is still potential to speed this up. Did you improve inference speed, and do you think it will be possible to reach real time (>30 fps) in the foreseeable future?
We did not aim to speed this part of NeRF up. Check out Neural Sparse Voxel Fields (https://arxiv.org/abs/2007.11571) for some effort in that direction. It's 10x faster, but there's still another 10x to go till you get video frame rates :)
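For readers unfamiliar with why this is slow, here is a minimal sketch of NeRF-style volume rendering. It is not the authors' released code: the tiny MLP, the sample count, and the names TinyNeRF / render_rays are illustrative assumptions. The point it shows is that every pixel's ray is sampled at dozens of points and the MLP is evaluated once per sample, so a single frame costs tens of millions of network queries.

```python
# Illustrative sketch only (hypothetical names and architecture), not the official NeRF code.
import torch
import torch.nn as nn

class TinyNeRF(nn.Module):
    """Toy stand-in for the NeRF MLP: 3D position -> (RGB, density)."""
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # 3 colour channels + 1 density
        )

    def forward(self, x):
        out = self.net(x)
        rgb = torch.sigmoid(out[..., :3])
        sigma = torch.relu(out[..., 3])
        return rgb, sigma

def render_rays(model, origins, dirs, near=2.0, far=6.0, n_samples=64):
    """Volume-render a batch of rays; the MLP is queried n_samples times per ray."""
    n_rays = origins.shape[0]
    t = torch.linspace(near, far, n_samples)                           # depths along each ray
    pts = origins[:, None, :] + dirs[:, None, :] * t[None, :, None]    # (n_rays, n_samples, 3)

    rgb, sigma = model(pts.reshape(-1, 3))                             # one MLP query per sample point
    rgb = rgb.reshape(n_rays, n_samples, 3)
    sigma = sigma.reshape(n_rays, n_samples)

    # Standard alpha compositing along each ray.
    delta = t[1] - t[0]
    alpha = 1.0 - torch.exp(-sigma * delta)
    trans = torch.cumprod(torch.cat(
        [torch.ones(n_rays, 1), 1.0 - alpha + 1e-10], dim=-1), dim=-1)[:, :-1]
    weights = alpha * trans
    return (weights[..., None] * rgb).sum(dim=1)                       # final pixel colours, (n_rays, 3)

# Rendering one 800x600 frame = 480,000 rays x 64 samples ~= 30 million MLP
# evaluations, which is why naive NeRF inference is far from real time.
```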