That's a great concise implementation! I'm glad you also went with the parse-and-interpret-an-AST approach: it seems like the right design. Your `Reconstruct` type is a great idea! I might copy that.
I can't understand why they invest so much in gaming. Gaming on such small devices is too uncomfortable, both for your eyes and your posture. In addition, with cloud gaming you can basically run any PC/console game on any device, provided you have a good enough connection (e.g. Nvidia GeForce NOW, Xbox Cloud Gaming, etc.)
But I wonder what percentage of these mobile gamers are using iPhones.
AFAIK developing countries dominate mobile gaming for the simple reason that people there cannot afford a PS/Xbone/PC. Ergo, no iPhone for them to begin with.
A surprising amount. Genshin and many gacha games are GPU-intensive, even on such a small screen.
It might be because they code very inefficiently, and they might also be doing a ton of crypto and anti-hacking calculations in the background, but these games are surprisingly taxing on the phones they run on.
Because gaming is one of the last remaining markets that Apple can dominate and profit from. Microsoft spent a lot of money on Activision to claw back against Apple.
If your eyes hurt, then switch to literally any other Apple device: Apple TV, Mac, iPad, iPhone… they all have access to Apple Arcade. When cloud gaming streams asset objects and not raw video, then it’d be a contender. As of now, it’s too expensive and is losing to mobile games.
The other comments are right in saying mobile gaming is huge.
But, if you are referring specifically to the new GPU with accelerated raytracing, I want to add something more: remember that for many years now, every innovation Apple has produced has secretly been building towards AR!
For this reason it makes perfect sense: raytracing is SUPER important for the kind of rendering needed to make AR/VR look realistic.
This is not surprising, because GPU support in GGML is said to be preliminary, and the library is optimized for running on CPUs.
Judging by the times reported by other people, using the GPU with GGML instead of the CPU still provides a speed improvement, but only a small one.
Nevertheless, I appreciated that after following the project's instructions exactly, everything was up and running and ready to test within a few minutes.
Past attempts to set up the whole environment needed to run such models have required much more work.
Beyond monads, HKTs can be used to easily write type-level functional programs [1]. This can, for example, help with writing type-level parsers for other languages.
A real-world use case could be parsing raw GraphQL query strings and automatically inferring the returned types based on a common schema, without using special code generators.
For instance, you could come up with some magic function `gql_parsed` like:
doc = gql_parsed`query GetUser { user { name }}`
where `doc` is inferred as something like:
Doc<Query<{GetUser:{user:{name:string}}}>>
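To make that concrete, here is a minimal TypeScript sketch (all names are made up for illustration, this is not a real library): a template literal type that only understands the toy shape `query Name { field }` with a single field, but it shows how the inferred type can be driven entirely by the query string.

    // Toy type-level "parser": handles only `query Name { field }` with one field.
    type ParseQuery<S extends string> =
      S extends `query ${infer Name} { ${infer Field} }`
        ? { [K in Name]: { [F in Field]: string } }
        : never;

    // A real implementation would likely expose a tagged template like `gql_parsed`;
    // a plain function keeps the sketch short while still inferring the literal type.
    declare function gqlParsed<S extends string>(query: S): ParseQuery<S>;

    // Inferred as { GetUser: { user: string } } under this toy grammar.
    const doc = gqlParsed("query GetUser { user }");

Extending this to nested selections, and to looking up field types in a schema instead of defaulting to `string`, is what the real-world version would need (and is where the type checker starts to sweat).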
Seems very limited. I wonder if the same can be achieved with just Stable Diffusion and neighboring latent walks with very small steps. On the other hand, the interpolation techniques with GigaGAN txt2img produce much higher quality “videos” than this.
Imagine the impact of such a crash on the whole of humanity in the coming years. The API goes down, suddenly all the bots crash, and half the world is stuck for a few hours.
A containerized version of this thing would definitely be useful, as it installs global packages and assumes a lot of preinstalled binaries. The node image won't work alone though; you'll also need Python, pip, git, and a C++ compiler.
Yeah, I've been wanting containers for these types of projects for a while now. Conda is fine if you're already involved in the ML/Python ecosystem, and as an outsider to that world I guess I have no right to complain (Conda is actually not all that hard to learn, all things considered), but boy would it be nice if I could just install Docker, run `docker run cool_project/ml_wizardry`, and have a demo up and running in my web browser instantly.