For Rust tests, can Maelstrom be combined with nextest [0]? (Maelstrom provides the cargo maelstrom command, and nextest provides the cargo nextest command, with no obvious way to compose them)
I guess an environment variable specifying which test command to run (very low-level), or something like cargo maelstrom --nextest, would work (but then how would that compose with other test runners?)
Now,
> It's fast. In most cases, Maelstrom is faster than cargo test, even without using clustering.
That's surprising. Why is this the case?
Will Maelstrom without clustering (running on a single machine) be faster than nextest as well? (Nextest is also faster than bare cargo test [1])
Would combining nextest with Maelstrom bring further performance benefits, or is Maelstrom already doing whatever nextest does to speed things up?
Google's monorepo, and it's not even close - primarily for the tooling:
* Creating a mutable snapshot of the entire codebase takes a second or two.
* Builds are perfectly reproducible, and happen on build clusters. Entire C++ servers with hundreds of thousands of lines of code can be built from scratch in a minute or two tops.
* The build config language is really simple and concise (rough sketch after this list).
* Code search across the entire codebase is instant.
* File history loads in an instant.
* Line-by-line blame loads in a few seconds.
* Nearly all files in supported languages have instant symbol lookup.
* There's a consistent style enforced by a shared culture, auto-linters, and presubmits.
* Shortcuts for deep-linking to a file/version/line make sharing code easy-peasy.
* A ton of presubmit checks ensure uniform code/test quality.
* Code reviews are required, and so is pairing tests with code changes.
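To give a flavor of that build config language: Bazel, the open-source cousin of Google's internal Blaze, uses essentially the same syntax. A rough sketch of a BUILD file for a small C++ target (target and file names made up by me):

    # Declares a library and a binary that depends on it.
    cc_library(
        name = "frobber",
        srcs = ["frobber.cc"],
        hdrs = ["frobber.h"],
    )

    cc_binary(
        name = "frobber_server",
        srcs = ["main.cc"],
        deps = [":frobber"],
    )

Each target just declares its sources and direct dependencies, and the build system derives the whole build graph from that, which is a big part of why reproducible clustered builds are tractable.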
>While the investigation cleared Altman to reclaim his board seat, he said he “did learn a lot from this experience,” expressing remorse for one incident in particular involving a board member he did not name.
>That appeared to be a reference to former OpenAI director Helen Toner, a researcher at the Center for Security and Emerging Technology, a Georgetown think tank. After she published a research analysis that criticized the speed of OpenAI’s product launch decisions, Altman reportedly tried to remove her from the board. “I think I could have handled that situation with more grace and care—I apologize for that,” he said.
Eventually you get to a point where the AI simply doesn't know how to do something: it can't fix a certain bug or implement a feature correctly. At that point you're left doing more and more prompt crafting… or you can just fix the problem yourself. If you can't do it yourself, you're screwed.
I wonder if the future will just be software cobbled together from shitty AI code that no one understands, with long loops and deep call stacks and abstractions on top of abstractions, while tech priests take a prompt-and-pray approach to eventually building something that does kind of what they want.
Or to hell with priests! Build some temple where users themselves can come leave prompts for the general AI to hear and maybe put out a fix for some app running on their tablet devices.
No Catia. Here is my story. While I mostly do software, at some point I needed to design a piece of equipment from scratch. At the time I happened to have access to Solidworks, and without any prior experience in mechanical CAD I was able to do it in about a month. Parts, assemblies, subassemblies, regular CNC parts, sheet metal, welds, fastening. Stellar. Despite its absolutely insane power, the tool is super easy to use for a complete noob. I later tried to do something like this in FreeCad - never fucking again.
Could you help me understand what it means to "pump entropy out of a system"?
I asked ChatGPT and it claims "It is generally not possible to "pump" entropy out of a system in the same way that it can be added to a system. This is because the second law of thermodynamics states that the total entropy of a closed system will always tend to increase over time."
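For what it's worth, ChatGPT's answer conflates closed and open systems. The second law only forbids the total entropy (system plus surroundings) from decreasing; an open system can export entropy just fine by dumping heat into its environment, which is exactly what a refrigerator does. A minimal worked example in LaTeX notation, with illustrative numbers chosen by me:

    % Heat Q is pulled from the cold interior at temperature T_c;
    % Q plus the compressor work W is rejected to the room at T_h.
    \Delta S_{\mathrm{sys}} = -\frac{Q}{T_c}, \qquad
    \Delta S_{\mathrm{env}} = +\frac{Q + W}{T_h}, \qquad
    \Delta S_{\mathrm{tot}} = \Delta S_{\mathrm{sys}} + \Delta S_{\mathrm{env}} \ge 0
    % With Q = 300 J, T_c = 275 K, T_h = 300 K, W = 60 J:
    %   \Delta S_{\mathrm{sys}} \approx -1.09 J/K
    %   \Delta S_{\mathrm{env}} = +1.20 J/K
    %   \Delta S_{\mathrm{tot}} \approx +0.11 J/K

The fridge's interior really does lose entropy (it gets "pumped out"), but only because the room gains more. What the second law forbids is pumping entropy out of everything at once.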
This is quite stunning. OpenAI's valuation is quoted as $3B. Stability AI just popped up 6 months ago and is able to raise rounds in a very tough economy at a $1B valuation. The real differentiator between the two is that Stability AI has taken it upon itself to democratize AI, ignoring all social wokeness and thereby multiplying its efforts manyfold through community labor. OpenAI, meanwhile, is still deeply caught up in all kinds of social/ethical/philosophical dilemmas, as well as Apple-esque secrecy as devotional offerings to Steve Jobs.
With this level of funding and community following, Stability AI is on track to dethrone OpenAI in almost no time.
At places like Starbucks, I wonder what they do behind the scenes to make sure the network is secure. If I go into a mom-and-pop coffee shop and ask for the WiFi password, that's one thing. But with every Starbucks running standardized WiFi infra in its stores, I wonder what sorts of things happen behind the scenes to keep everything secure. If consumers are worried about WiFi at these places, I bet Starbucks IT obsesses over it.
I'm building a job board for the ADHD version of this. It's not quite 100% ready, but I've received more interest in it than in anything else I've ever built (1,300 candidate signups in 48 hours). It's to the point that I'm scrambling to supply enough jobs because they keep getting filled.
Calling it "Wildcard People" removed the gatekeeping/"medical" aspect while still setting expectations about how these candidates end up doing their work.