For every major outage that makes the news, there are thousands of smaller problems that users never notice because of CDNs, failover, graceful-ish service degradation, autoscaling and/or overprovisioning, and fallbacks. Services can be reliable even when the software and hardware that backs them is not.
In other words, "reliability" is a property of the whole system, including the human operators, not just the code. The code can actually be pretty bad, yet the entire system can still be reliable in terms of being available and functional when the users want to use it.
That's what FULL AFK botting in games taught me. These are kinda random % values, but in my experience it went like this:
Automating 80% of the stuff is easy and can be done in an hour.
90% starts getting tricky.
95% requires days of testing and writing various scripts.
100% is additionally limited by the bot API; it would require writing your own tooling, talking to the bot via e.g. the file system, and handling things like client/bot crashes, maybe networking too, damn... (a watchdog sketch follows below)
But yeah, having to do something once a day for 10 minutes instead of playing 10 hours is more than good enough.
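For the crash handling, what I ended up with (roughly) is an external watchdog that restarts the client whenever the bot stops reporting progress through a file. A minimal sketch in Python; the client command, heartbeat path, and timeouts are made-up placeholders, not any real bot API:

    import os
    import subprocess
    import time

    CLIENT_CMD = ["./game_client", "--profile", "botchar"]  # hypothetical launcher
    HEARTBEAT_FILE = "/tmp/bot_heartbeat"  # the in-game script touches this while it makes progress
    MAX_SILENCE = 120                      # seconds without a heartbeat before we assume a freeze

    def heartbeat_age() -> float:
        """Seconds since the bot last reported progress via the file system."""
        try:
            return time.time() - os.path.getmtime(HEARTBEAT_FILE)
        except OSError:
            return float("inf")            # no heartbeat yet counts as stale

    def run_forever() -> None:
        while True:
            client = subprocess.Popen(CLIENT_CMD)
            # Keep the client alive until it exits or the bot stops making progress.
            while client.poll() is None and heartbeat_age() < MAX_SILENCE:
                time.sleep(5)
            client.kill()                  # crashed, frozen, or disconnected: kill and relaunch
            time.sleep(10)                 # small backoff before the next attempt

    if __name__ == "__main__":
        run_forever()

The in-game script only has to touch HEARTBEAT_FILE each time it completes a cycle, so the watchdog needs nothing but the file system to tell whether things are still alive.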
Just to clarify: those are private servers where everybody's botting.
I don't think you need that clarification. The greed of many game developers knows no bounds: artificial delays that force you to spend money to work around them, ridiculous drop rates, etc. I, personally, would never judge someone badly for botting, as long as it doesn't affect other players.
Cheating (such as aimbots, wallhacks, radars, nethacks, etc.) is a totally different story though.
Some random thoughts because I'm kinda busy
The game is great and insanely addictive (seriously, I can talk about this more if you're interested), but the community on private servers is insanely toxic and uneducated (I don't really like putting it that way, but it's seriously something you tend not to experience in other games) due to... well, that's an interesting topic.
This game has a relatively high barrier to entry because it's roughly a 23-year-old game with A LOT of content, even more if you don't use spoilers for quests/missions.
If you don't play on private servers, then you have to buy a "premium account" to play the game to its full potential, which costs something like 10 euro a month.
Dying in this game is very punishing, but it used to be worse a decade ago, because nowadays you can buy ingame_currency relatively cheaply. Dying a decade ago without "protections" could waste something like 2-3 hours of exp plus some items; add to that the fact that on PvP servers you can be killed by anyone "just because" :P
This game is also interesting from a programming perspective -
Private servers are an initiative by programmers to recreate the game engine and host their own servers. Nowadays it's a mature effort.
One of the most popular private server engines: https://github.com/otland/forgottenserver
It's a really impressive project.
When it comes to botting with FULL AFK scripts:
You have to write code which goes to the exp place, doesn't get stuck anywhere, exps there for hours (casting spells, drinking health/mana potions, looting corpses, running around more or less complex spawns (maps)), goes back to the city when you run out of potions, sells the loot, takes the cash to the bank, and repeats.
Many things can go wrong during that cycle - like another player blocking your "road" to the exp spot for a minute, or your character dying to PKs (players), lag, and maaaaaaaany others.
Your script has to consider a lot of those different conditions and trade-offs (sometimes you don't want to kill the monsters between your city and your spawn because it would take 10 minutes, so you just want to run through them, but then you can get "trapped" (surrounded) by them, so you have to account for that).
It's not easy to do this well enough that you can start the bot on a VPS and have it keep running for a day, a week, or longer.
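To give a feel for the shape of it, the whole thing ends up as a small state machine around a main loop. A rough sketch in Python; the Bot class and every method on it are hypothetical stand-ins, not any real bot API:

    import random
    from dataclasses import dataclass

    # Minimal simulation of the full-AFK cycle. Every method here is a
    # hypothetical stand-in for whatever the real bot API exposes.

    @dataclass
    class Bot:
        potions: int = 100

        def walk_to(self, place: str) -> bool:
            # The path can be blocked by other players or monsters for a while.
            return random.random() > 0.05

        def fight_and_loot(self) -> None:
            self.potions -= random.randint(5, 15)  # spells + potions + looting

        def is_trapped(self) -> bool:
            return random.random() < 0.02          # surrounded on the spawn

        def is_dead(self) -> bool:
            return random.random() < 0.01          # PK, lag, bad luck

    def hunt_cycle(bot: Bot) -> None:
        while bot.potions > 20:                    # keep a reserve for the walk back
            if not bot.walk_to("spawn"):
                continue                           # wait and retry the blocked path
            bot.fight_and_loot()
            if bot.is_trapped() or bot.is_dead():
                break                              # relogging/regearing handled by the outer loop

    def main_loop(bot: Bot, cycles: int) -> None:
        for _ in range(cycles):
            bot.walk_to("city")
            # sell loot, deposit cash, buy potions back up...
            bot.potions = 100
            hunt_cycle(bot)

    main_loop(Bot(), cycles=3)

The loop itself is the easy part; the days of work go into the real versions of walk_to and fight_and_loot and into all the edge cases listed above.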
Some of the reasons are:
1) Most applications use very reliable databases as their store. Back in the day, programmers used to write their own one-off B-tree libraries; that practice stopped with msql/MySQL, Jet, dbm, and SQLite.
(A holdover from that era is interviewers asking you to roll your own sort algorithm, which you should almost never do for production apps.)
2) Some software is fuzzed.
3) Some software has managed memory.
4) Some software has tests.
Formal verification is huge these days, I would like to think.
EDIT: I now think it was French, source https://www.prover.com/portfolio-items/ratp-paris/
> Formal methods researchers who are really keen on rigorous checking and proof should identify and concentrate on the most critical areas of a large software system, for example, synchronisation and mutual exclusion protocols, dynamic resource allocation, and reconfiguration strategies for recovery from partial system failure. It is known that these are areas where obscure time-dependent errors, deadlocks and livelocks (thrashing) can lurk untestable for many years, and then trigger a failure costing many millions. It is possible that proof methods and model checking are now sufficiently advanced that a good formal methodologist could occasionally detect such obscure latent errors before they occur in practice. Publication of such an achievement would be a major milestone in the acceptance of formal methods in solving the most critical problems of software
I feel like this presages the way TLA+ (in particular) has had a relative surge among practitioners.
Reliability doesn't require mathematical proof - this goes for everything from airplanes to your kitchen faucet to software. Software became more and more reliable through testing, errors, fixes/patches, etc. Also, many times good enough is good enough; we don't need perfection for most situations.
Also, it's a subjective call. For some, software is an unreliable mess and we need to throw everything out and rebuild it from scratch - see the Haskell, Rust, etc. threads, and the never-ending arguments against side effects, mutation, pointers, etc. For most, these tools are good enough and get the job done.
Formal verification is only as good as your model, anyway. If the problem comes from outside your model, formal verification won't save you.
The level he works at might be higher level though. It sounds like control systems for airplanes are fairly modular.
Software is highly unusual in that proofs can even be deployed. Most highly safe processes can't even use proofs. How do you prove that a dam won't fail? It is impossible.
Me shitposting is not a life-or-death situation, so a bug here or there is fine.
Few algorithms are as battle-tested as the sorting algorithm from the Java standard library. I can't even estimate how many calls are made to this function every day. The fact that we can still find bugs in it, no matter how obscure, suggests to me that we simply can't write correct nontrivial software without proof.
The victory isn't in showing that this specific algorithm is incorrect in some fairly obscure way. It's in the fact that even software that's this battle-tested is still not fully reliable.
But does it matter?
I also don't understand formal methods well, but wouldn't they also be subject to bugs? It seems like all we can do is make software less unreliable.
That depends on the application. It probably doesn't most of the time. My point is not that it is always economical to use formal methods, just that I suspect their cost is overestimated compared to the benefit (because most programmers are simply unfamiliar with FM).
> I also don't understand formal methods well, but wouldn't they also be subject to bugs?
Yes, they are. There are multiple classes of errors FM are vulnerable to.
1. In FM you prove your code correct with respect to a specification. But you can make errors in the specification too. The arguments for FM are: a) the spec is much smaller than the program, so the opportunity for error is lower, and spec errors are more obvious; b) often if your spec is wrong, you'll find out when you try to prove your algorithm; and c) combining testing with FM is very likely to uncover whatever errors are left (there's a small sketch of a buggy spec after this list).
2. If your proof of correctness is not mechanically verified, then you may still have trivial implementation errors like mixing up variables and the like.
3. If it is mechanically verified, then theoretically there's still opportunity for errors. The verifier might contain a bug, the compiler might contain a bug, the hardware you run the algorithm on might contain a bug, cosmic radiation may flip a bit during the verification/compilation process, invalidating the correctness, etc. I can soapbox about the implications of this but I will only do that when prompted to :)
So indeed, there is no method that gives you zero probability of failure. But you can get the probability orders of magnitude lower, which may be worth it depending on what you're working on.
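To make point 1 concrete without a full FM toolchain, you can think of a property-based test as a tiny executable spec. A sketch using Hypothesis (my own toy, not from the thread): the first property looks like a spec for sorting but is itself buggy, because it never requires the output to be a permutation of the input, so it happily passes for a broken sort.

    from collections import Counter
    from hypothesis import given, strategies as st

    def my_sort(xs):
        return sorted(set(xs))      # deliberately buggy: silently drops duplicates

    # "Spec" no. 1 is itself wrong: it only demands that the output is ordered.
    @given(st.lists(st.integers()))
    def test_output_is_ordered(xs):
        out = my_sort(xs)
        assert all(a <= b for a, b in zip(out, out[1:]))   # green despite the bug

    # "Spec" no. 2 adds the missing clause: the output must be a permutation of the input.
    @given(st.lists(st.integers()))
    def test_sorted_permutation(xs):
        out = my_sort(xs)
        assert all(a <= b for a, b in zip(out, out[1:]))
        assert Counter(out) == Counter(xs)                 # red: duplicates are lost

Running the second property finds a counterexample like [0, 0] almost immediately, which is the "combining testing with a spec" point in miniature.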
This appears to be entirely normal for scientific papers.
Can anyone tell me how this foolishness came to be considered acceptable among scientists, and why it persists?
In any case nowadays I think there's no excuse for this practice continuing. It's far too common for a paper to get sent around, or put on the author's personal website, with no associated date at all.
Doesn't matter. If the author was thinking of a specific year, they'd say the year. If they say "20 years ago", everything that was "20 years ago" this year is also "20 years ago" a couple years from now.
In general, I'd say people are sloppy about this because publishing is like a super-public form of notarization, which has the effect of placing the name of the article into public indices and databases, most importantly the list of contents of that journal. It's not that people don't care about the date—it's that they have no difficulty finding out what it was.
1996 is in the URL.
This is less a science thing and more an Internet culture thing, IMO.
Online documentation norms have changed, with parameterized URLs full of computed data.
PS: in this case: it turns out that the paper was published in the International Symposium of Formal Methods Europe, 1996: https://link.springer.com/chapter/10.1007/3-540-60973-3_77
It would still be best to include the date inside the artifact itself instead of relying on external and contextual metadata like you are describing. As long as we're at it, documents should also ideally include their DOI.
Second: Programs got so reliable by killing the bugs that made them most unreliable. That is, if a program has two bugs, one of which 10% of people hit, and one of which 0.0001% of people hit, the 10% bug gets fixed. The program is still buggy, but it's buggy in a way that most people never encounter.
And third: For (many kinds of) formal proof, you need a formal specification. Much of software is somewhat discovered. (What's the right user interaction flow for this, anyway? Let's try some ideas out and see.) If you write the formal specification before you have really discovered what the software should be, then you can have formally-proven software that does less-than-optimal things.
And then you can have bugs in the specification. (Hello, MCAS!) Formally proving that you correctly implemented an incorrect spec does not make the software more correct.
I don't think it'd require twice the effort unless you insisted on a full and complete formal proof of the entire program, which most contemporary advocates of formal methods do not want (and this paper basically concurs). Rather, see TLA+, where the advice is to model the critical parts, or the concurrency model, or the synchronization model of your distributed system. You can elide so many details that it's nowhere near the same level of work as actually writing the full program, yet it can still inform your design so that you spend less time reworking later.
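You can get the flavor of that kind of modeling even without TLA+ by brute-forcing a toy model's interleavings in ordinary code. A sketch in Python (not TLC, and the model is deliberately trivial): two processes do a non-atomic read-increment-write on a shared counter, and we check an invariant over every interleaving.

    # Two "processes" each do a non-atomic read-increment-write on a shared counter.
    # Exhaustively check every interleaving against the invariant "final value == 2".

    def steps(pid):
        return [("read", pid), ("write", pid)]

    def run(schedule):
        shared, local = 0, {}
        for op, pid in schedule:
            if op == "read":
                local[pid] = shared
            else:
                shared = local[pid] + 1
        return shared

    def interleavings(a, b):
        # All merges of two step sequences that preserve each sequence's own order.
        if not a:
            yield list(b)
            return
        if not b:
            yield list(a)
            return
        for rest in interleavings(a[1:], b):
            yield [a[0]] + rest
        for rest in interleavings(a, b[1:]):
            yield [b[0]] + rest

    bad = [s for s in interleavings(steps(1), steps(2)) if run(s) != 2]
    print(f"{len(bad)} of 6 interleavings lose an update")   # 4 of 6 violate the invariant

Everything about the real system (network, storage, retries) is elided; the point is that the lost-update design flaw shows up here before any production code exists.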
OTOH, I've seen lots of programs where the testing comprises at least as much effort as the actual coding, if not some small integer multiple of it. But no one is advocating for removing tests (though some people strongly despise, if not outright hate, certain testing approaches).
OTOH, the less you prove correct, the more bugs you leave, and you get people asking "How did software get so reliable with so little proof?"
On the third hand (I think that's the number I'm up to now), types are a kind of proof, and I really do want those. So I think there's some kind of curve, with no proof giving you lots of bugs (which are expensive to fix, and bad for your product's PR), and full proof taking forever and costing a ton, and somewhere in between being the sweet spot.
And where is that sweet spot? I would say at least type systems, but less than full formal proof. Somewhere in between, which leaves a lot of room and therefore is not much help. Worse, where exactly the sweet spot is probably depends on several things, which means we can't give any simple answer.
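To make the "types are a kind of proof" end of that curve concrete: even an optional checker like mypy discharges a small theorem at every annotated call site. A toy example (mine, not from the paper; the functions are made up):

    from typing import Optional

    def parse_port(raw: str) -> Optional[int]:
        return int(raw) if raw.isdigit() else None

    def connect(host: str, port: int) -> str:
        return f"{host}:{port}"

    # mypy rejects the next line if uncommented: Optional[int] is not int, so the
    # "port might be missing" case provably isn't handled yet.
    # connect("example.com", parse_port("80x"))

    port = parse_port("8080")
    if port is not None:          # the check narrows Optional[int] to int
        print(connect("example.com", port))

The is-not-None check is what narrows the type and completes the little "proof" the checker demands, without anything like full formal verification.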
Software testing is exactly the same. The more tests you have, the more evidence you build up that the theory that the software is incorrect in some way may be wrong. If you find out about a new way it could still be wrong, you write a test to prove that wrong. And then if you are right about that, you fix the software.
> The more tests you have, the more evidence you build up that the theory that the software is incorrect in some way may be wrong.
Or you could actually use the software. You can have 100% unit tests passing and a completely broken integration.
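Concretely (a made-up toy, not anyone's real code): the unit test below is green because the mock encodes the same wrong assumption as the caller, while the real collaborator returns something else entirely, so the integrated system is broken anyway.

    import unittest
    from unittest.mock import Mock

    class User:
        """Real collaborator's return type."""
        def __init__(self, user_id, name):
            self.id, self.name = user_id, name

    def fetch_user(user_id):
        return User(user_id, "alice")             # the real dependency returns an object

    def greeting(fetch, user_id):
        user_id, name = fetch(user_id)            # caller wrongly expects a 2-tuple
        return f"hello {name}"

    class GreetingTest(unittest.TestCase):
        def test_greeting(self):
            fake = Mock(return_value=(1, "alice"))              # mock repeats the wrong assumption
            self.assertEqual(greeting(fake, 1), "hello alice")  # green

    if __name__ == "__main__":
        unittest.main()
    # Meanwhile greeting(fetch_user, 1) raises TypeError at runtime:
    # the test suite is 100% green, the integration is 100% broken.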
He's not exactly wrong with that analogy, but I think that in practice such a utopia does not exist.
As far as I know, this is not something available to other forms of engineering.
There was a point a few years ago when a GTK+ dev was complaining that he was the only one left working on the project. A single person for a whole GUI lib...
And GTK+ is the basis of so many FOSS it's not even funny...
The main advantage of open source for me isn't that all bugs are found. It's that when I encounter a bug I can check the code, report it, fix it in my fork and get it fixed in the main repo eventually.
With closed source you have to make do with workarounds.
They may not be able to fix it, but just noticing there is a problem is a big first step.
Regardless of that, none of the many billion-dollar companies that relied on it invested anything into improving it.
If most/all CAD designs were open source, you can imagine there would be projects underway to automatically "find all screws in the world that are too short to hold the design load". With enough projects of that nature, standards across the entire field would improve.
"Linus's law" is refered to as "Linus's fallacy" now.
It is a tool invented by bureaucrats for the sole purpose of job creation.
If society keeps becoming more bureaucratic, soon enough, someone will invent the concept of unit testing for formal proofs... There is no limit to how ridiculous this can get. The Fed's mandate is job creation, and job creation is just what it'll do! Any job! Mostly useless jobs!
Since the beginning of computer software (and computer hardware) the solutions that were not reliable enough for practical purposes were discarded.
Hence, popular software has fewer bugs and becomes more popular. Using popular software in an unusual way will trigger unfound bugs.
Security bugs are special, as nobody triggers them by default. There is no selection pressure to make them disappear.
Bugs are like noise energy on a continuous spectrum, where frequency corresponds to the type of bug.
When you write software you generate the initial signal, which depends on what you are doing and on your ability (awareness of different types of bugs, etc.).
Then, during further stages of testing, these bugs tend to be filtered out. Some types of bugs are attenuated very efficiently; for example, a bug that stops your application from compiling has very little chance of being shipped...
Some bugs are only detected (as signal) in certain circumstances - for example, when users run a detector with different characteristics from the one the developer used.
You don't generally want to invest in attenuating all frequencies. What you do is ignore the frequencies that don't matter (the ones that pass through and cause no harm) and focus on the harmful ones.
In the end, the best way to reduce the output noise of the system is usually to reduce the input noise (i.e. the signal produced by the developer).
Social Processes and Proofs of Theorems
Even rigorous mathematics requires social processes to become 'reliable', i.e. truthful.
With a proof, doubly so, since you need a specification you can prove your implementation against. In my time in industry (YMMV) I don't think I've ever seen a specification formal enough to be useful in that sense.
I think we're in a specification crisis, where behavior is unspecified and the documentation diverges from it as well. APIs break regularly and services are down daily. It's a miracle the whole house of cards hasn't come down yet.
If you account for survivorship bias, I would bet that software is not very reliable.
The tree or tributary structure of software suggests that deeper, more fundamental, more general bugs will be revealed first and frequently. And logically will be fixed at the root, rather than hacking at the myriad leaves... right...?
Beyond that, see SPARK, where you're limited to a provable subset of the Ada language (the subset has been extended with each version), so you can write assertions and it will attempt to prove (as part of compilation) that they hold. For example, you could have a function that should always return a positive integer, but whose body does something like this (not fluent in Ada, so the details may be off, but the principle is right):
    function Foo (A : Integer; B : Integer) return Integer with
       Post => Foo'Result > 0;

    function Foo (A : Integer; B : Integer) return Integer is
    begin
       return A - B;  --  A - B can be <= 0, so the prover rejects this body
    end Foo;
I'm not sure I'd try to use Z Notation at this point, given that there are other systems out there that seem to fill the same niche but with better tooling. Event-B and TLA+ are the two I've explored the most.
> The seL4® Microkernel. Security is no excuse for bad performance. The benchmark for performance. The world's most highly assured OS kernel. Open source & community-supported under the seL4 Foundation.
I do however like the strong emphasis on (manual) testing.
> So it seems safer to take an entirely fresh copy of the existing code, and modify that instead. Over a period of years there arise a whole family of such near-clones, extending over several generations. Each of them is a quick and efficient solution to an immediate problem; but over time they create additional problems of maintenance of the large volumes of code. For example, if a change is made in one version of the clone, it is quite difficult even to decide whether it should be propagated to the other versions, so it usually isn't. The expense arises when the same error or deficiency has to be detected and corrected again in the other versions.
Note his use of the phrase "it seems safer", he doesn't say it is safer and at the end explains what the actual risk/expense is with the copy/paste approach.
And at the end he discusses the issues with over-engineering in summary:
> The limitation of over-engineering as a safety technique is that the extra weight and volume may begin to contribute to the very problem that it was intended to solve. No-one knows how much of the volume of code of a large system is due to over-engineering, or how much this costs in terms of reliability. In general safety engineering, it is not unknown for catastrophes to be caused by the very measures that are introduced to avoid them.
They will tell you that current software is basically crap and they can't understand how the majority tolerates this state of affairs.
Programs or algorithms only guarantee that they can go from one state to another state.
Somehow computer programs have become conflated with reliability and proof, possibly because of their close adjacency to mathematics.