You might find it worth upgrading to 10 Gbps if you continue down this road. The Mikrotik CRS-309 has served me well, along with a couple of Intel X520-DA2 NICs. I believe those NICs can do iSCSI natively and hand the session off to the operating system via iBFT.
SFP28 might be cheap enough now too, I'm not sure...
You can download the rootfs, extract it to a ramdisk, and just run in memory. This is fast for everything. Unfortunately, memory just got super expensive. Fortunately, Linux requires ~no memory to do many useful things.
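A minimal sketch of what that looks like, assuming a small distro rootfs tarball (the URL and paths here are placeholders, not real endpoints). On most Linux systems `/dev/shm` is already a tmpfs, so you can stage the rootfs in RAM without a root-only mount; the privileged steps are shown as comments.

```shell
# Stage a rootfs entirely in RAM. /dev/shm is tmpfs on most Linux systems,
# so no extra mount is needed; alternatively (as root):
#   mount -t tmpfs -o size=2G tmpfs /mnt/ramroot
RAMROOT=/dev/shm/ramroot
mkdir -p "$RAMROOT"

# Fetch and unpack a minimal rootfs, e.g. an Alpine minirootfs tarball
# (placeholder URL -- substitute your distro's actual download link):
#   curl -fsSL https://example.com/minirootfs.tar.gz | tar -xz -C "$RAMROOT"

# Then pivot into the in-memory system (requires root, or a rootless
# user-namespace variant via unshare):
#   chroot "$RAMROOT" /bin/sh

echo "ramdisk root staged at $RAMROOT"
```

Everything under the chroot then lives in RAM, so reads and writes are fast and nothing touches disk; the trade-off is that the whole filesystem is lost on power-off.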
One thing I don’t understand about this viewpoint (which I understand isn’t your own): why does one benefit so tremendously from getting there a month before competitors? I’m sure having a month of superintelligence with no competition would be lucrative, but do they think achieving superintelligence first will impede competitors from also achieving it a month later?
A week of superintelligence should be enough to take over the world, or at least sabotage your competitors. And even if someone else gets there a week later, they'll be permanently one week behind the curve (until the AI hits some physical limit, I suppose).
What if the competitor's architecture can produce tokens twice as fast? What if the competitor secures a one-month exclusivity deal on Nvidia's next generation?
A month with a superintelligence at your disposal could be quite impactful, especially if you're willing to break the law / normal operating decorum in the pursuit of protecting what you have. A superintelligence, if wielded so, could destroy your competitors in a great many ways, ranging from the relatively benign (simply outcompeting them) to exploiting them and tearing them apart from the inside.
A genuine superintelligence is a very, very scary thing to have under the control of one person or organisation.
If I interpret "a machine superintelligence" as "a classroom of 300IQ humans," I'm not really sure how this is true? You still have material and energy constraints, you can't think your way out of those.
For the concrete problem we're discussing, you can hack your competitors out of existence, replace all of your knowledge workers to shed costs, hyperoptimise your logistics, etc. It's not just intelligence, it's speed and scale.
Bostrom's Superintelligence (2014) is a bit of a dreary read, and I didn't finish it, but it pulls no punches about the leverage that a superintelligence might have in our highly-connected world.
> For the concrete problem we're discussing, you can hack your competitors out of existence, replace all of your knowledge workers to shed costs, hyperoptimise your logistics, etc. It's not just intelligence, it's speed and scale.
For the concrete problem we're discussing, that hypothetical belongs in a Marvel movie, not reality. In the real world, you can't 'hack your competitors out of existence', and you'll be going to prison very quickly for trying this sort of thing.
> especially if you're willing to break the law / normal operating decorum
in my original post. If you have a superintelligence, you have something that can find and take advantage of every exploitation vector in parallel - technical, social, bureaucratic - and use that to destroy a company from the inside. A superintelligence that is subservient to its operator is an informational superweapon.
I agree that this sounds fanciful, but you can see what existing cyberattacks can do to organisations; it does not take that much imagination to gauge how much worse it could be when the process can be automated and scaled.
> A superintelligence that is subservient to its operator is an informational superweapon.
The five dollar wrench attack will put an end to that operator's use of an informational superweapon.
> I agree that this sounds fanciful, but you can see what existing cyberattacks can do to organisations
What can it do? Generally, a minor disruption to operations.
It consistently does a lot less than what law enforcement can do to you if you start messing with other rich people's money while having enough of a presence to own a superintelligence and a trillion-dollar data center.
Within a day - well before any legal or societal force could intervene - a superintelligence could make its way into every part of an organisation's internal network and tear it apart from the inside.
Conventional hackers are limited by the serial nature of their work - finding breaches, exploiting them, conducting further exploration of the network, trying not to get detected - in ways that a superintelligence would not be. The latter could be a hundred times as effective, a hundred times as fast, and a hundred times more parallel.
I agree that this is unlikely to happen because the societal bill would come due in time, but my point is that a month's lead is enough to do significant and lasting damage.
Assuming it can't super-hack all computer systems and cripple competing SI incubation to extend its lead indefinitely.
The assumption would be that, during its lead time, the superintelligence at least takes a small lead and undermines any paths a later-arriving superintelligence could take to interfere with its goals, which naturally includes stopping competing SIs from becoming powerful enough to undermine it.
So, assuming the superintelligence has goals and works towards them, it will initially try to solidify its own power. Iterating on that small lead, assuming it's the smartest superintelligence [1], should be enough to win. The scary part is that, assuming no guardrails [2], it's going to be as ruthless as possible in achieving those goals. That does not necessarily mean it will appear ruthless, just as ruthless as it judges optimal.
1. Being so smart, one of its chores would have been reinvesting in making itself smarter than the competition, and being smarter than its makers, it has a good chance of actually realising those self-improvements.
2. In the internal balancing of goals sense not the don't feed the mogwai after midnight sense.
Are there any situations you would compare this to historically?
To me, the obvious comparison seems to be Docker. Their tooling revolutionized software development and made cgroups and containerization accessible to the masses. Yet they generally seem to have failed to extract payment from users, even with managed service opportunities.
It seems to me that there are substantial obstacles to monetizing a project licensed with even a weaker OSS license like MIT. I think this is especially true for projects that don’t have managed service / “open core” potential.
Any gratis project you rely on runs the risk that it will no longer be provided gratis. That alone is not a strong basis for making decisions.
It's a shame that VCs have turned a $200MM/year business into something perceived as a failure. Who cares if the VCs didn't get a large return, or if the software's outsized impact didn't quite fully capture the value created? $200MM/yr without aggressive R&D or operational costs could be an incredibly healthy business.
Maybe we should stop trying to build so many billion dollar/year businesses and work on more sustainable models.
I haven’t followed Docker’s case in particular, but how much investment was required to get it to that point? If it’s a case of “How do you become a millionaire? Start as a billionaire and invest in Docker”, then the perception may have some basis.
I’m not sure the GP did mean that, but I agree it’s a much better solution than maintaining an out-of-tree kernel module, which is generally a really bad idea
This seems like a classic "rationalism vs empiricism" issue, and it was interesting for me to learn that Galen (famous / influential ancient Roman physician) wrote a lot about that.
With respect to the weapons programs, I'm not a historian, but I was not under the impression that the US stopped development of these weapons unilaterally or out of good will. My understanding is that it was due to a mixture of not perceiving a need or use for the capabilities, along with formal or informal international cooperation eliminating the need for deterrence.
Just a couple of thoughts since it seems like the next issues in this space are rapidly arriving or already here.
As far as I've read the literature from the 60s and 70s, tactical nukes were eventually eliminated in order to assuage western Europe's concerns that large portions of their countries would be turned into irradiated wastelands for decades / centuries if war erupted between the US and USSR.
It was also the product of perceived overmatch on both sides -- the Soviets believed they had superior mass of armored formations (and they did), while the US and allies believed they had technological supremacy (and they did). Ergo, neither needed tactical nukes.
It didn't hurt that eliminating them also improved both sides' standing in the eyes of the then vehemently anti-nuclear European movements.
Offensive bio and chemical weapon limitation is a more nuanced decision.
In both cases, their primary use was either local mass lethality or terrain denial, neither of which was important in the then-gelling American doctrines of maneuver.
The sole use case they seemed viable for was industry denial (e.g. contaminate a high capital cost industrial center), a task at which strategic sized nuclear weapons were equally adept (and more easily stored). So, if you had to have strategic nuclear weapons for deterrence, and they were capable of the same task, why have fiddly bio and chemical weapons?
But in both cases there was also a constant radiant pressure of scientists and the public campaigning against them, and being unwilling to work on or tolerate them.
Absent that, who knows how history would have turned out? Normalization is a powerful opinion shifter.