2. Brand quality went down to the level of no-name dropshipped goods (or below it) in order to keep dollar prices for goods roughly the same.
3. Cost of real estate spiked incredibly, putting even more pressure on retail margins.
4. American logistics is seriously good; in cities, 24-hour delivery can be common even for small or inexpensive things.
5. Once there wasn't a quality or convenience reason to prefer the shopping mall, many retailers were already quietly struggling; they just hadn't run out of cash yet.
6. The pandemic hit right as all these things converged and accelerated all of them, and there was no real recovering from that.
It allows the code to be fully public domain, so you can use it anywhere, while very strongly discouraging random people from forking it, patching it, etc. Even so, the tests that are most applicable to ensuring that SQLite has been built correctly on a new compiler/architecture/environment are open source (this is great!), while those that ensure SQLite has been implemented correctly are proprietary (you only need those if you want to extend SQLite's functionality to do something different).
This gives the authors a business model built on contracted support for the product, and it keeps SQLite intact as a product/brand without having to compete with an army of consultants trying to make money off of it, startups wanting to fork it, rename it, and sell it to you, etc.
It's pretty smart and has, for a quarter century, resulted in a high quality piece of software that is sustainable to produce and maintain.
My 10-hour drives usually have two stops of 30 minutes to an hour each, for food. Unfortunately, stopping at a restaurant for a meal mostly doesn't leave the vehicle in a location that has a charger. Other parts of the world may differ, but the infrastructure to "just spend 15 minutes charging" whenever you want is not there.
Something to consider is that in a secure environment like LANL, and especially for a non-standard or one-off process, it's likely that there is no computer system that everyone has access to with all the information.
It would not be unusual for the person being told to write the process document to be brought into a room, be shown written or electronic materials in the room, take notes in a provided notebook, have that notebook handed over after the meeting for a (non-technical) security review, then receive the notebook pages some days or weeks later and have those notes be what is used to develop the document. Security culture is good for security but bad for error-free processes involving people.
Yes, but my junior coworkers don't reliably do edge-case testing for user errors either, unless specifically tasked to do so - likely with a checklist of the specific kinds of user errors they need to check for.
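For what it's worth, that checklist translates pretty directly into a table-driven test. A minimal sketch in Rust - the function and error cases are made up for illustration, not from any real codebase:

    // Hypothetical input parser; the test table below *is* the checklist of
    // user errors: empty input, whitespace, junk, negatives, overflow.
    #[derive(Debug, PartialEq)]
    enum ParseError { Empty, NotANumber, OutOfRange }

    fn parse_age(input: &str) -> Result<u8, ParseError> {
        let trimmed = input.trim();
        if trimmed.is_empty() {
            return Err(ParseError::Empty);
        }
        let n: i64 = trimmed.parse().map_err(|_| ParseError::NotANumber)?;
        if n < 0 || n > 130 {
            return Err(ParseError::OutOfRange);
        }
        Ok(n as u8)
    }

    #[test]
    fn user_error_checklist() {
        let cases = [
            ("", Err(ParseError::Empty)),
            ("   ", Err(ParseError::Empty)),
            ("abc", Err(ParseError::NotANumber)),
            ("-1", Err(ParseError::OutOfRange)),
            ("999", Err(ParseError::OutOfRange)),
            (" 42 ", Ok(42)),
        ];
        for (input, expected) in cases {
            assert_eq!(parse_age(input), expected, "input: {input:?}");
        }
    }

The point isn't the parser; it's that once the error categories are written down like this, neither a junior nor a model can quietly skip half of them.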
And it turns out the quality of output you get from both the humans and the models is highly correlated with the quality of the specification you write before you start coding.
Letting a model run amok within the constraints of your spec is actually great for specification development! You get instant feedback on what you specified wrongly or underspecified. On top of this, you learn how to write specifications where critical information that needs to be used together isn't spread across thousands of pages - thinking about context windows when writing documentation is useful for both human and AI consumers.
The best specification is code. English is a very poor approximation.
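One way to read that: the same requirement, written as an executable check, stops being ambiguous. A small illustrative sketch (the function, bounds, and invariant are invented, not anyone's actual spec):

    // English spec: "a discount must never make the price negative or raise it."
    // The same requirement as code is unambiguous and machine-checkable.
    fn apply_discount(price_cents: u64, discount_percent: u8) -> u64 {
        let pct = u64::from(discount_percent.min(100));
        price_cents - price_cents * pct / 100
    }

    // The "specification" is the invariant itself, checked across inputs.
    #[test]
    fn discount_never_leaves_the_allowed_range() {
        for price in [0u64, 1, 99, 10_000, 1_000_000] {
            for pct in [0u8, 1, 50, 100, 255] {
                assert!(apply_discount(price, pct) <= price);
            }
        }
    }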
I can’t get past the fact that by the time I write up an adequate spec and review the agent’s code, I probably could have done it myself by hand. It’s not like typing was ever remotely close to the slow part.
AI, agents, etc are insanely useful for enhancing my knowledge and getting me there faster.
My theory is that this (juniors unable to get in) is generally how industries/jobs die and phase out in a healthy manner that causes the least pain to their workers. I've seen this happen to a number of other industries through people I know, and when an industry phases out this way it's generally less disruptive to the people in it.
The seniors, who have less leeway to change course (it's harder as you get older in general, large sunk costs, etc), maintain their positions, and the disruption occurs at the usual "retirement rate", meaning the industry shrinks a bit each year. They don't get much in the way of pay rises, etc, but normally they have some buffer from earlier times so are willing to wear being in a dying field. Staff aren't replaced, but on the whole those who remain still have marginal long-term value (e.g. domain knowledge on the job that keeps them somewhat respected there, or a "that guy was around when they had to do that; show respect" kind of thing).
The juniors move to other industries where the price signal shows value and strong demand remains (e.g. locally for me that's trades but YMMV). They don't have the sunk cost and have time on their side to pivot.
If done right, the disruption to people's lives can be small and most of the gains of the tech can still come out. My fear is that the AI wave will happen fast but only in certain domains (the worst case for SWEs), meaning the adjustment will be hard-hitting without appropriate support mechanisms (i.e. most of society doesn't feel it, so they don't care). On average individual people aren't that adaptable, but over generations society is.
Of course, you can do this in good conditions. The really powerful part of what TrueTime brings is how the system degrades when something goes wrong.
If everyone is synced to +/- 20ns, that's great. Then when someone flies over your datacenter with a GPS jammer (purposeful or accidental), this needs to not be a bad day where suddenly database transactions happen out of order, or you have an outage.
The other benefit of building this uncertainty into the underlying software design is that you don't have to have your entire infrastructure on the same hardware stack. If you have one datacenter that's 20 years old, has no GPS infrastructure, and operates purely on NTP, it can still run the same software, just much more slowly. You might even keep some of this around for testing - and now you have ongoing data showing what will happen to your distributed system if GPS were to go away in a chunk of the world for some sustained period of time.
And in a brighter future, if we're able to synchronize everyone's clocks to +/- 1ns, the intervals just get smaller and we see improved performance without having to rethink the entire design.
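For anyone who hasn't seen the mechanism: TrueTime reports time as an interval [earliest, latest] instead of a point, and commit-wait simply waits out the uncertainty before a timestamp is allowed to matter. A rough sketch of the shape of it - this is not Google's API, just an illustration:

    use std::time::{Duration, Instant};

    // Time reported as an interval rather than a point: the true time is
    // guaranteed to lie somewhere in [earliest, latest].
    struct TTInterval {
        earliest: Instant,
        latest: Instant,
    }

    // Stand-in clock source. A well-disciplined GPS/atomic setup might report
    // a few tens of nanoseconds of uncertainty; an NTP-only datacenter might
    // report milliseconds. Same code path, just a wider interval.
    fn now_with_uncertainty(epsilon: Duration) -> TTInterval {
        let now = Instant::now();
        TTInterval {
            earliest: now - epsilon,
            latest: now + epsilon,
        }
    }

    // Commit-wait, roughly: pick the commit timestamp at the top of the
    // interval, then spin until even the pessimistic "earliest" bound has
    // passed it. After that, no node can think the commit is in its future.
    fn commit_wait(epsilon: Duration) -> Instant {
        let commit_ts = now_with_uncertainty(epsilon).latest;
        while now_with_uncertainty(epsilon).earliest < commit_ts {
            std::thread::yield_now();
        }
        commit_ts
    }

    fn main() {
        // Tight clocks: a short wait. Sloppy clocks: the same logic, just slower.
        let ts = commit_wait(Duration::from_micros(20));
        println!("commit visible at {:?}", ts);
    }

That's the whole trick: the wait shrinks or grows with the measured uncertainty, so degraded time sources cost you latency rather than correctness.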
> Then when someone flies over your datacenter with a GPS jammer (purposeful or accidental), this needs to not be a bad day where suddenly database transactions happen out of order, or you have an outage.
Most NTP/PTP appliances have internal OCXO or rubidium clocks with holdover (even for several days).
If time is that important to you then you'll have them, plus perhaps some fibre connections to other sites that are hopefully out of range of the jamming.
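Rough numbers for the holdover point, using illustrative stability figures rather than any vendor's spec sheet:

    fn main() {
        // Illustrative free-run drift rates (fractional frequency error),
        // not vendor specs: a decent OCXO around 1e-9, rubidium around 1e-11.
        let holdover_seconds = 3.0 * 24.0 * 3600.0; // three days without GPS

        for (name, drift_rate) in [("OCXO", 1e-9), ("rubidium", 1e-11)] {
            // Accumulated time error ~= drift rate * elapsed time.
            let error_us = drift_rate * holdover_seconds * 1e6;
            println!("{name}: ~{error_us:.1} µs of error after 3 days");
        }
    }

So with those assumed rates you're looking at roughly hundreds of microseconds for the OCXO and a few microseconds for rubidium over a multi-day jamming event - which is exactly why the interval-based design above matters: the uncertainty grows, the waits grow, and nothing breaks.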
> fibre connections to other sites that are hopefully out of range of the jamming.
I guess it's not inconceivable that eventually there's a global clock network using a White-Rabbit-like protocol over dedicated fibre. But if you have to worry about GPS jamming you probably have to worry about undersea cable cutting too.
Now, try to use two or more libraries that expose data structures with bitfields, where they have all chosen different crates for this (or even the same crate but different, non-ABI-compatible versions of it).
There's a ton of standardization work that really should be done before these are safe for library APIs. It's mostly fine to just write an application that uses one of these crates, though.
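To make the pain concrete, a contrived sketch (the types are written by hand here to stand in for what two different bitfield crates would generate): both libraries model the same 16-bit register, but the types are unrelated as far as the compiler is concerned, so the only interchange format is the raw integer.

    mod lib_a {
        // Imagine this was generated by bitfield crate A.
        pub struct StatusReg(pub u16);
        impl StatusReg {
            pub fn ready(&self) -> bool { self.0 & 0x0001 != 0 }
            pub fn error_code(&self) -> u8 { ((self.0 >> 1) & 0x0F) as u8 }
        }
    }

    mod lib_b {
        // Imagine this was generated by bitfield crate B - same layout,
        // different type, different accessor names.
        pub struct Status(pub u16);
        impl Status {
            pub fn is_ready(&self) -> bool { self.0 & 0x0001 != 0 }
        }
    }

    fn main() {
        let from_a = lib_a::StatusReg(0b0000_0000_0000_0111);

        // No common trait, no From impl: you round-trip through the raw
        // integer and hope both crates agree on bit order and layout.
        let as_b = lib_b::Status(from_a.0);

        assert_eq!(from_a.ready(), as_b.is_ready());
        println!("error code per lib_a: {}", from_a.error_code());
    }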
2. Can modern GPU hardware efficiently make system calls? (if you can do this, you can eventually build just about anything, treating the CPU as just another subordinate processor).
3. At what order-of-magnitude size might being GPU-native break down? (Can CUDA dynamically load new code modules into an existing process? That used to be problematic years ago)
Thinking about what's possible, this looks like an exceptionally fun project. Congrats on working on an idea that seems crazy at first glance but looks more and more possible the more you think about it. Still, it's all a gamble whether it'll perform well enough to be worth writing applications this way.
1. The GPU owns the control loop and only sparingly kicks to the CPU when it can't do something itself (rough sketch of that request/response loop below).
2. Yes
3. We're still investigating the limitations. A lot of them are hardware-dependent; obviously data center cards have higher limits and more capability than desktop cards.
Thanks! It is super fun trailblazing and realizing more of the pieces are there than everybody expects.
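For anyone curious what "the GPU owns the control loop and only sparingly kicks to the CPU" tends to look like, the usual shape is a persistent kernel plus a shared request mailbox/queue that a CPU service thread polls. The sketch below stands in a plain thread for the GPU side so it runs anywhere; the real thing would poll the same kind of mailbox from device code over pinned or unified memory.

    use std::sync::atomic::{AtomicU32, Ordering};
    use std::sync::Arc;
    use std::thread;

    // States for a single-slot "syscall" mailbox. On real hardware this slot
    // would live in memory visible to both GPU and CPU.
    const EMPTY: u32 = 0;
    const REQUEST: u32 = 1;
    const DONE: u32 = 2;

    struct Mailbox {
        state: AtomicU32,
        arg: AtomicU32,    // request payload posted by the "device"
        result: AtomicU32, // response written by the CPU service thread
    }

    fn main() {
        let mb = Arc::new(Mailbox {
            state: AtomicU32::new(EMPTY),
            arg: AtomicU32::new(0),
            result: AtomicU32::new(0),
        });

        // "CPU side": a subordinate service loop handling requests the device
        // can't perform itself (file I/O, networking, allocation, ...).
        let cpu = {
            let mb = Arc::clone(&mb);
            thread::spawn(move || loop {
                if mb.state.load(Ordering::Acquire) == REQUEST {
                    let arg = mb.arg.load(Ordering::Relaxed);
                    if arg == u32::MAX {
                        break; // shutdown request
                    }
                    println!("CPU servicing request: {arg}");
                    mb.result.store(arg * 2, Ordering::Relaxed);
                    mb.state.store(DONE, Ordering::Release);
                }
                std::hint::spin_loop();
            })
        };

        // "GPU side": owns the control loop, does its own work, and only
        // occasionally posts a request and spins until the CPU answers
        // (a persistent kernel can't block, so it polls).
        for i in 1..=3u32 {
            mb.arg.store(i, Ordering::Relaxed);
            mb.state.store(REQUEST, Ordering::Release);
            while mb.state.load(Ordering::Acquire) != DONE {
                std::hint::spin_loop();
            }
            let r = mb.result.load(Ordering::Relaxed);
            mb.state.store(EMPTY, Ordering::Release);
            println!("device got result: {r}");
        }

        // Tell the service loop to exit.
        mb.arg.store(u32::MAX, Ordering::Relaxed);
        mb.state.store(REQUEST, Ordering::Release);
        cpu.join().unwrap();
    }

The roles are inverted from the usual offload model: the "device" drives, and the host is just a peripheral that answers the occasional request.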
The US is way, way behind in banking P2P technology / fintech adoption. In many parts of Asia, even uneducated street vendors now accept digital payments via mobile phones (that's how easy it is). See - https://www.forbes.com/sites/pennylee/2024/04/17/the-us-lags... and
I would rather not have the kind of "financial innovation" that requires non-free apps running on non-free operating systems on locked down hardware. These apps, by design, track how people spend their money.
Traditional banks have about as much data about how you spend your money as any modern fintech. The banking system is non-free, locked down and centralized to begin with. How you access it is just a matter of cosmetics and policies.
> These apps, by design, track how people spend their money.
That depends - in India, for example, I am free to use (1) a private company's app (like PayTM, Google Pay, PaisePe, etc.), (2) a Government app, or (3) my bank's app to make digital payments using the Unified Payments Interface (UPI) - or all three. And if I don't want to use any mobile app, I can still make offline payments through my mobile phone over USSD - https://razorpay.com/blog/how-to-make-offline-upi-payments/ ...
(You are right, though, that it is prone to abuse in the absence of strong privacy and data protection laws - digital payments do allow new forms of surveillance capitalism for corporations and new avenues of authoritarian control for the government.)
These things are amazing for maintenance programming on very large codebases (think 50-100 million lines of code or more, where the people who wrote the code no longer work there and it's not open source, so "just google it or check Stack Overflow" isn't even an option at all).
A huge amount of effort goes into just searching for what relevant APIs are meant to be used without reinventing things that already exist in other parts of the codebase. I can send ten different instantiations of an agent off to go find me patterns already in use in code that should be applied to this spot but aren't yet. It can also search through a bug database quite well and look for the exact kinds of mistakes that the last ten years of people just like me made solving problems just like the one I'm currently working on. And it finds a lot.
Is this better than having the engineer who wrote the code and knows it very well? Hell no. But you don't always have that. And at the largest scale you really can't, because it's too large to fit in any one person's memory. So it certainly does devolve to searching and reading and summarizing for a lot of the time.