Yes, current EVs are heavy. It's not at all clear that this will hold as solid-state batteries evolve to become standard. It is highly possible that EVs will soon be lighter than comparable ICE vehicles [1].
No no no. Sure, there might be a future where solid-state batteries become the standard for electric vehicles, but you cannot cite Donut Lab's announcement from this month as evidence. There is no credible evidence they've achieved the holy grail of batteries; until they actually deliver these motorcycles and people independently verify them, it's just a claim.
Time will tell on their battery, especially if the bike they're putting it on delivers. I think the overall point could be that there's active R&D in trying to find geopolitically sustainable materials, and lowering the weight of materials used.
If you want proper answers, yes. If you want to rely on whatever Reddit or TikTok says about the book, then I guess at that point you're fine with hallucinations and with others doing the thinking for you anyway. Hence the issues brought up in the article.
I wouldn't trust an LLM for anything more than the most basic questions if it didn't actually have text to cite.
You don’t need any rights to execute the feature. The user owns the book. The app lets the user feed the book into an LLM, as is absolutely their right, and asks questions.
1. The user doesn't own the book, the user has a revocable license to the book. Amazon has no qualms about taking away books that people have bought
2. I doubt the Kindle version of the LLM will run locally. Is Amazon repurposing the author-provided files, or will the users' device upload the text of the book?
You agree that we should own our digital content but it sounds like you don’t want this particular capability because… fuck Amazon.
I can totally understand that sentiment but I don’t think giving up end user capabilities to spite Amazon is logically aligned with wanting ownership of digital media.
> All these weird mental gymnastics to argue that users should have less rights
We probably agree more than not. But users getting more rights isn't universally good. To evaluate an argument fully, one must consider the externalities involved.
I work on a much easier problem (physics-based character animation) after spending a few years in motion planning, and I haven’t really seen anything to suggest that the problem is going to be solved any time soon by collecting more data.
"We present Dreamer 4, a scalable agent that learns to solve control tasks by imagination training inside of a fast and accurate world model. ... By training inside of its world model, Dreamer 4 is the first agent to obtain diamonds in Minecraft purely from offline data, aligning it with applications such as robotics where online interaction is often impractical."
In other words, it learns by watching, i.e., by having more data of a certain type.
I am pushing the optimism a bit of course, but currently we can see many demos of robots doing basic tasks, and it seems like it is quite easy nowadays to do this with the data driven approach.
The problem becomes complicated once the large discrete objects are not actuated. Even worse if the large discrete objects are not consistently observable because of occlusions or other sensor limitations. And almost impossible if the large discrete objects are actuated by other agents with potentially adversarial goals.
Self-driving cars, an application in which the physics is simple and arguably two-dimensional, have taken more than a decade to get to a deployable solution.
Next to zero cognition was involved in the process. There's some kind of hierarchy of thought in the way my mind/brain/body processed the task. I did cognitively decide to get the beer, but I was focused on something at work and continued to think about that in great detail as the rest of me did all of the motion planning and articulation required to get up, walk through two doorways, open the door on the fridge, grab a beer, close the door, walk back and crack the beer as I was sitting down.
Basically zero thought in that entire sequence.
I think what's happening today with all of this stuff is ultimately like me trying to play Fur Elise on piano. I don't have a piano. I don't know how to play one. I'm going to be all brain in that entire process and it's going to be awful.
We need to learn how to use the data we have to train these layers of abstraction that allow us to effectively compress tons of sophistication into 'get a beer'.
I think this is an interesting direction, but I think that step 2 of this would be to formulate some conjectures about the geometry of other LLMs, or testable hypotheses about how information flows wrt character counting. Even checking some intermediate training weights of Haiku would be interesting, so they’d still be working off of the same architecture.
The biology metaphor they make is interesting, because I think a biologist would be the first to tell you that you need more than one datapoint.
The issue with Swift IS the type theory. Constraint solvers are by definition going to be harder to reason about and have longer execution time than just declaring the type.
These companies are also biased towards solutions that will more-or-less trap you in a heavily agent-based workflow.
I’m surprised/disappointed that I haven’t seen any papers out of the programming languages community about how to integrate agentic coding with compilers/type system features/etc. They really need to step up, otherwise there’s going to be a lot of unnecessary CO2 produced by tools like this.
I kind of do this by making the LLM run my linter, which has typed lint rules.
The only way I can get any decent code out of them for TypeScript is by having, no joke, 60 ESLint plugins. It forces them to write actually decent code, although it takes them forever.
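For concreteness, a type-aware setup along these lines might use an ESLint flat config with typed rules. This is a minimal sketch, not the commenter's actual config; the `typescript-eslint` package and the specific rules named here are assumptions about what such a setup could look like:

```js
// eslint.config.js — minimal sketch of type-aware linting
import tseslint from 'typescript-eslint';

export default tseslint.config(
  // Rulesets that require type information, not just syntax
  ...tseslint.configs.recommendedTypeChecked,
  {
    languageOptions: {
      // Let the parser load the project's tsconfig for type info
      parserOptions: { projectService: true },
    },
    rules: {
      // Typed rules catch whole classes of LLM mistakes that
      // plain syntactic linting misses:
      '@typescript-eslint/no-floating-promises': 'error',
      '@typescript-eslint/no-unsafe-assignment': 'error',
      '@typescript-eslint/no-misused-promises': 'error',
    },
  },
);
```

Feeding the linter's output back to the model on each edit is what forces it to converge on code the type checker actually accepts.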
It’s kind of a bummer because this is the exact same playbook as DirectX, which ended up being a giant headache for the games industry, and now everyone is falling for it again.
I would be curious to see whether it's a common opinion that DirectX was a bad thing for the games industry. It was preceded by a patchwork of messy graphics/audio/input APIs, many of them proprietary, and when it started to gain prominence, Linux gaming was mostly a mirage.
A lot of people still choose to build games on Direct3D 11 or even 9 for convenience, and now thanks to Proton games built that way run fine on Linux and Steam Deck. Plus technologies like shadercross and mojoshader mean that those HLSL shaders are fairly portable, though that comes at the cost of a pile of weird hacks.
One good thing is that one of the console vendors now supports Vulkan, so building your game around Vulkan gives you a head start on console and means your game will run on Windows, Linux and Mac (though the last one requires some effort via something like MoltenVK) - but this is a relatively new thing. It's great to see either way, since in the past the consoles all used bespoke graphics APIs (except XBox, which used customized DirectX).
An OpenGL-based renderer would have historically been even more of an albatross when porting to consoles than DX, since (aside from some short-lived, semi-broken support on PS3) native high-performance OpenGL has never been a feature on anything other than Linux and Mac. In comparison DirectX has been native on XBox since the beginning, and that was a boon in the XBox 360 era when it was the dominant console.
IMO historically picking a graphics API has always been about tradeoffs, and realities favored DirectX until at least the end of the XBox 360 era, if not longer than that.
While Switch supports Vulkan, if you really want to take advantage of Switch hardware, NVN is the way to go, or make use of the Nintendo Vulkan extensions that are only available on the Switch.
Usually it is an opinion held by folks without a background in the industry.
Back in my "want to do games" phase, and also during my Demoscene days, going to GameDev.net, Flipcode, or the IGDA forums, or attending GDCE, this was never something fellow coders complained about.
Rather, it was how to do cool stuff with specific hardware, or gameplay ideas; mastering various proprietary systems was also seen as a skill.
DirectX carried the games industry forward because there weren't alternatives. OpenGL was lagging, and Vulkan didn't exist yet. I hope everyone moves to Vulkan, but DX was ultimately a net positive.
It is FOSS folks complaining about proprietary APIs; there is a dissonance between the two communities.
Game developers care about IP, how to make it go beyond games, getting a publisher deal, and gameplay; the proprietary APIs are just a set of plugins on a middleware engine, in-house or external, and that's that.
Also, there is a whole set of companies whose main business is porting games, which is how several studios got their foot in the door before coming up with their own ideas, as a means to get experience and recognition in the industry; they are thankful each platform is something else.
Finally, anyone claiming Khronos APIs are portable never had the pleasure of using extensions or dealing with driver and shader compiler bugs.
It is only a headache for FOSS folks; the games industry embraces proprietary APIs. It isn't the elephant-sized problem FOSS culture makes it out to be, as anyone who has ever attended game development conferences or Demoscene parties can tell you.
Yeah, DirectX ended up being a giant headache, but there were times in its history when it was the easiest API to use and very high-performance. DirectX came about because the alternatives at the time were, frankly, awful.
OpenGL (the main competitor to DirectX) really wasn't that bad in the fixed-function days. Everything fell apart when NVIDIA and AMD came up with their own standards for GPU programming.
DirectX was nice in that the documentation and example/sample code were excellent.
The fixed-function version of OpenGL was not thread-safe and relied on global state. It made for some super fun bugs when different libraries set different flags and then assumed they knew which state the OpenGL runtime was in the next time they tried to render something.
What's stopping you from using ONNX models on other platforms? A hardware agnostic abstraction to make it easier for consumers to actually use their inference capable hardware seems like a good idea, and exactly the kind of stuff I think an operating system should provide.
> Call the Windows ML APIs to initialize EPs [Execution Providers], and then load any ONNX model and start inferencing in just a few lines of code.
i exclusively use ONNX models across platforms for CPU inference. it's usually the fastest option on CPU. hacking on ONNX graphs is super easy, too...i make my own uint8 output ONNX embedding models