Why are developers of popular database solutions so reluctant to write secure-by-design software? You would think that some basic form of authentication would be implemented in any internet-facing service. But here we are, after the MongoDB fiasco, still left with thousands of vulnerable services because someone didn't bother to implement basic auth.
Because these tools follow the Unix philosophy of building single-purpose tools. There is a wide variety of authentication measures that can be composed with databases to secure them. There is really no need to build authentication into the database itself, and in fact doing so would violate a don't-repeat-yourself ethos.
> I'm convinced that unit tests don't usually find bugs.
They don't; they test whether the API contract the developer had in mind is still valid.
> IMO, most bugs are edge cases that were an oversight in the design. If the dev didn't handle the case in code they're not going to know to test for it.
You don't write tests to find bugs (in 98% of cases), but you can write tests for bugs you've found.
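For instance (a hypothetical example, using JUnit 5): once a bug report shows stacked discounts pushing a price negative, the fix can ship together with a test that pins that exact case:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class DiscountRegressionTest {

    // Hypothetical fix: stacked coupons used to push the total negative;
    // the repaired version clamps the result at zero.
    static long discountedPriceCents(long priceCents, int discountPercent) {
        long discounted = priceCents - priceCents * discountPercent / 100;
        return Math.max(0, discounted);
    }

    // Regression test written *after* the bug was found: it pins the
    // reported input so the same mistake can't quietly come back.
    @Test
    void stackedDiscountsNeverGoNegative() {
        assertEquals(0, discountedPriceCents(999, 150));
    }
}
```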
> Fuzzing is a much better approach.
If you're writing an I/O-intensive thing, such as a JSON parser, then yes. For the 80% of software that is CRUD, probably not.
> The project with high code coverage is basically shit and has so many bugs that we regularly cull anything marked less than "medium" severity as "not worth fixing". This project was written by a team that loves "patterns", so you can find all sorts of gems like Aspect-Oriented Programming, CQRS, N-tier, etc. well mixed into a featureless grey Java EE goo. We get so many bug reports that it's someone's job to go through them.
You are blaming tests for bad design choices. With the patterns you describe, unit tests only get you so far; integration tests are what help you prevent bad deployments.
> The other project with no tests is a dream to work on. Not a single file over a few hundred lines, everything linted and well documented. Almost no methods that don't fit on the screen, no recursion. No bullshit "layering" or "patterns". I can't remember the last time we had a bug report, since our monitoring picks up any exception client- and server-side. Every bug I've worked on was identified by our monitoring and fixed before anyone noticed.
So how many exceptions were raised due to bad deploys? Code review only gets you so far.
> If your developers are great then tests would hardly fail and be fairly useless, and if they're terrible tests don't save you.
Whether tests fail has little to do with devs being "great" or not. Developers must have the capability of quickly testing the system without manual work, in order to be more effective and ship new features faster. If the tests are one-sided (only unit tests, or only integration tests), then this will get you only so far, but it still gets you that far.
Don't abandon good development practices only because you saw a terrible Java EE application.
Tests are a pattern. And patterns are the bread and butter of the mediocre. That's not to say that patterns or tests are bad, but high-calibre guys know when to use which tool. As a tool, unit testing is almost useless.
Low-calibre guys don't have any feel for what they're doing. They just use the tools and patterns they were taught to use, all the time. This goes for engineers, managers, and other disciplines alike.
I've seen people on a factory floor treating my test instructions for a device I built as some kind of gospel. I had a new girl, who had no idea I designed said gadget, telling me off for not doing the testing exactly the way the instruction manual I wrote says.
The same thing happened with patterns and unit tests. You have hordes of stupid people following the mantra to the letter because they don't actually understand the intent. Any workplace where testing is part of their 'culture' signals to me that it's full of mediocre devs who were whipped into some kind of productivity by overbearing use of patterns. It's a good way to get work done with mediocre devs, but good devs are just stifled by it and avoid places that force it.
I find unit tests to be _most_ useful in very particular cases: when a given function I'm writing has a set of inputs/outputs that I'm aiming for. Things like parsing a URL into its components, or writing a class that must match a particular interface. I need to make sure the function works anyway, so I can either test it manually, or I can take a few extra moments and commit those test cases to version control.
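A minimal sketch of the first case (JUnit 5, assuming the parsing delegates to java.net.URI; the names are illustrative):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.net.URI;
import java.util.Map;
import org.junit.jupiter.api.Test;

class UrlParsingTest {

    @Test
    void knownInputsParseIntoExpectedHosts() {
        // The same input/output pairs I'd otherwise check by hand,
        // committed so they re-run on every change.
        Map<String, String> cases = Map.of(
                "https://example.com/a", "example.com",
                "http://localhost:8080/", "localhost",
                "ftp://files.example.org/pub", "files.example.org");

        cases.forEach((input, expectedHost) ->
                assertEquals(expectedHost, URI.create(input).getHost()));
    }
}
```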
For more complex items, I'm much more interested in higher level black-box integration tests.
That's a great example of why unit testing is mostly useless.
Having an expected input/output set when writing something like a parser is standard practice. Turning that set into unit tests is worthless, for a few reasons.
1: You will design your code to make them all pass. A unit test is useless if it always passes. When your test framework comes back saying x/x (100%) of tests have passed, you are receiving ZERO information as to the validity of your system.
2: You wrote the unit tests with the same assumptions, biases, and limitations as the code they're testing. If you have a fundamental misunderstanding of what the system should do, it will manifest in both the code and the test. This is true of most unit tests - they are tautological.
3: While doing all of the above and achieving almost zero additional utility, you had to fragment your logic in a way that lends itself to unit testing. More than likely, that's not the most readable or understandable way said code could have been written. You sacrificed clarity for unit testability. Metrics like test code coverage unintentionally steer developers toward writing unreadable, tangled code.
The only use case for unit testing here would be if this parser were a component that gets frequently updated by many people, or a component that gets implemented anew for different configurations. But at this point I'm just talking about regression testing, and there are many ways to do that other than unit testing.
Your complaint about tests always passing only makes sense if you ignore negative tests. Good tests will also check that bad/incorrect input results in predictable behaviour - e.g. invalid input into a parser doesn't parse.
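A negative test can be as small as this (JUnit 5, again with java.net.URI as the stand-in parser):

```java
import static org.junit.jupiter.api.Assertions.assertThrows;

import java.net.URI;
import java.net.URISyntaxException;
import org.junit.jupiter.api.Test;

class NegativeParseTest {

    @Test
    void invalidInputFailsPredictably() {
        // The assertion here is that parsing *fails*, and fails loudly,
        // instead of producing a half-parsed result.
        assertThrows(URISyntaxException.class,
                () -> new URI("http://exa mple.com")); // space is illegal
    }
}
```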
> While doing all of the above and achieving almost zero additional utility, you had to fragment your logic in a way that lends itself to unit testing
Another way to look at it is that unit testing forces you to structure your code to be more composable, which is a win. The amount of intrusive access or changes you need to grant to test code is language-dependent.
> The only use case for unit testing here would be if this parser were a component that gets frequently updated by many people, or a component that gets implemented anew for different configurations. But at this point I'm just talking about regression testing, and there are many ways to do that other than unit testing.
And yet successful large-scale projects like LLVM use unit-testing. Not exclusively but it's a part of their overall strategy to ensure code quality. Sure, for very small-scale projects with a few constant team members it can be overkill. Those aren't particularly interesting scenarios because you're facing fewer organizational challenges. The fact of the matter is that despite all the hand-wringing about how it's not useful, unit tests are the inevitable addition to any codebase that has a non-trivial number of contributors, changes and/or lines of code.
The applicability of unit testing to your particular cases varies greatly across languages & runtimes.
For URL parsing, some runtimes/frameworks already have that implemented. E.g. in .NET the Uri class allows getting the scheme/host/port/path/segments, and there's a separate ParseQueryString utility method to parse the query part of the URI.
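Java has a close equivalent in the standard library; a quick illustration (the .NET Uri class exposes the same kind of properties):

```java
import java.net.URI;

public class UriComponents {
    public static void main(String[] args) {
        URI uri = URI.create("https://example.com:8080/docs/page?q=1&lang=en");
        // Each component comes pre-parsed; no hand-rolled parser to unit test.
        System.out.println(uri.getScheme()); // https
        System.out.println(uri.getHost());   // example.com
        System.out.println(uri.getPort());   // 8080
        System.out.println(uri.getPath());   // /docs/page
        System.out.println(uri.getQuery());  // q=1&lang=en
    }
}
```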
To ensure a class conforms to an interface, the majority of strongly-typed OO languages have interfaces in their type systems. If you use that but fail to implement an interface, or some part of it, your code just won't compile.
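In Java, for instance, the conformance check is free at compile time (hypothetical names):

```java
import java.net.URI;

// The contract: anything claiming to be a UrlParser must provide parse().
interface UrlParser {
    URI parse(String raw);
}

// Omitting parse(), or changing its signature, is a compile error,
// so no test is needed to verify that the class matches the interface.
class SimpleUrlParser implements UrlParser {
    @Override
    public URI parse(String raw) {
        return URI.create(raw);
    }
}
```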
Indeed. Tests allow new members of the team to confidently make changes. I've seen codebases that had near-zero tests and were also a total mess, with one change somewhere breaking a hundred things 30 levels down the stack. We'd find the issue only in production, along with an enraged customer.
Tests are not a replacement for good developers, they are just a tool for contract validation and a regression safety net.
> Developers must have the capability of quickly testing the system without manual work
Running unit tests is hardly quick, especially if you have to compile them. End-to-end tests are even worse in this regard.
> They don't; they test whether the API contract the developer had in mind is still valid.
If you're always breaking the API, then that's a sign that the API is too complex and poorly designed. The API should be the closest thing you have to being set in stone. Linus Torvalds has many rants on breaking the Linux kernel's API (which, also, has no real unit tests).
It's also really easy to tell if you're breaking the API. Are you touching that API code path at this time? Then yes, you're probably breaking the API. Unless there was a preexisting bug that you are fixing (in which case, the unit test failed) then you are, by definition, breaking the API, assuming your API truly is doing one logical, self-contained thing at a time as any good API should.
edit: As an aside, I'd like to point out that POSIX, C11, jQuery, etc. are littered with deprecated API calls, such as gets(). This is almost always the correct thing to do. Deprecate broken interfaces and create new interfaces that fix the issues. Attempting to fix a broken API by introducing optional "modes" or parameters, or by altering the response, is certain to cause bugs in the consumers of the interface.
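In Java terms, the same pattern looks roughly like this (hypothetical API):

```java
public class Reports {

    /**
     * Broken interface: silently truncates names longer than 8 characters.
     * Kept around so existing callers don't break, but marked deprecated.
     *
     * @deprecated use {@link #renderReportV2(String)} instead.
     */
    @Deprecated
    public static String renderReport(String name) {
        return name.substring(0, Math.min(8, name.length()));
    }

    /** Replacement interface that fixes the issue outright, instead of
     *  bolting an optional "mode" parameter onto the old call. */
    public static String renderReportV2(String name) {
        return name;
    }
}
```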
> Don't abandon good development practices
Unit tests are a tool. There are cases where they make sense, where they are trivial to implement and benefit you greatly at the same time. Then there are cases where implementing a unit test will take an entire day with marginal benefit and the code will be entirely rewritten next year anyway (literally all web development everywhere). It doesn't make sense to spend man-months and man-years writing and maintaining unit tests when the app will get tossed out and rewritten in LatestFad Framework almost as soon as you write the test.
It's really nice to see more and more awareness of Zero Trust, and specifically Google's BeyondCorp whitepaper. If you're looking to experiment with this model yourself, check out the following open source projects. While they might not implement everything in Google's BeyondCorp paper yet, they are pretty close to the full thing, and they address many of the issues raised in the comments.
> On-chain scaling is not sustainable. If you want to handle as many transactions as, for example, Visa, you would need 1 GB blocks every 10 minutes, which would make the whole blockchain heavily centralized, because regular users wouldn't be able to host full nodes to validate payments.
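(Back-of-the-envelope on that figure, with assumed numbers: at ~7,000 transactions per second, a commonly cited Visa-scale rate, and ~250 bytes per transaction, a 10-minute block works out to 7,000 tx/s × 600 s × 250 B ≈ 1.05 GB.)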
The way things are today and the way they need to be to ensure a censorship-resistant, secure public ledger are not the same. Core is working toward that: security is the first priority, with performance optimizations after that.
How will you add liquidity to LTSE? From the little detail given, it seems like the market will move very sluggishly, because trading volume is "conceptually" limited.
The LTSE is part of what’s called the National Market System, so our stocks still trade on other exchanges. We don’t limit the ability of traders to access the stock.