I had a Nexus 5: reliable, reasonably fast, snappy under most normal usage patterns.
I wanted an SD card and wireless charging, so I upgraded to a Note 4: 50% more RAM, faster CPU, faster GPU, etc. It felt way slower and crashed fairly often. Annoyingly, the home button was quite sluggish. Even more annoying, despite the 50% more RAM it was very aggressive about killing background apps, making multitasking almost useless: whenever you switched apps it had to restart the app, MUCH more so than my Nexus 5. I tracked it down, and apparently it's some benchmark optimization. Android updates (security or otherwise) were quite rare, and the phone was about a year behind on Android releases.
I switched the Note 4 to CyanogenMod. The phone was much snappier and much more consistent (no random lag), and it stopped crashing. Oh, and the home button was much quicker as well.
In short, I highly recommend upgrading any old hardware you have to LineageOS (CyanogenMod, continued).
If you have static analysis tools available, you're writing a larger application (more than 10k LOC), and you intend to maintain it for longer than a couple of months, the initial investment is definitely worth it. Better than static analysis is static typing in a language with an expressive type system, such as Scala. You simply cannot have NPEs with Option types.
tl;dr: Used static analysis on a 1.5M LOC codebase to reduce bugginess tremendously. Use of static analysis and statically typed languages is advised.
P.S.: I hope Scala implemented optional types better than Java, where the optional types themselves are nullable :)
It's interesting how many bugs can be found by static analysis. The problem with C and C++ is that the domain of 'normal' operations is much larger than in Java, leading to more false positives and/or fewer true positives in static analysis.
If they're not building with -Wall -Wextra -Werror, are they really going to make use of PVS-Studio's output?
(Also, as a member of the Rust Evangelism Strike Force, I am obligated to point out that a lot of errors PVS-Studio catches are just straight-up not possible in Rust. In order to avoid getting too off-topic here, here's a comment I made on Reddit on a similar post from PVS-Studio about two years ago: https://www.reddit.com/comments/3aehq5//cscj2w5/)
This is very likely. But do not forget that a static analyzer is not just warnings; it is also infrastructure. For example, PVS-Studio offers:
- Saving and loading of analysis results, which allows overnight checks: the analyzer scans during the night and has the results ready for you in the morning.
- Interactive filtering of the analysis results (the log file) in the PVS-Studio window: by diagnostic number, file name, or a keyword in the diagnostic text.
- The BlameNotifier utility, which sends e-mail notifications to developers about bugs that PVS-Studio found during a nightly run.
- Mass suppression: the ability to suppress all old messages raised for legacy code, so that the analyzer reports 0 warnings. You can always go back to the suppressed messages later. This feature lets you seamlessly integrate PVS-Studio into your development process and focus only on errors found in new code. Details: https://www.viva64.com/en/m/0032/
- Relative paths in report files, so they can be viewed on different machines.
- The CLMonitoring feature, which allows analyzing projects that have no Visual Studio files (.sln/.vcxproj); where CLMonitoring is not enough, PVS-Studio can be integrated into a Makefile-based build system manually.
- pvs-studio-analyzer, a utility similar to CLMonitoring, but for Linux.
- The possibility to exclude files from the analysis by name, folder, or mask, and to run the analysis only on files modified during the last N days.
- Integration with SonarQube, an open-source platform designed for continuous analysis and measurement of code quality.
- and so on
Most of the other things sound like tools for fixing erroneous code after it has landed. If you're using -Werror and you have something that makes sure the code builds before merging it to master/trunk, you can't commit erroneous code in the first place.
You can contact me at email@example.com if you are interested.
The numbers here may also be something like that. Nice blog post; it may even make developers adopt something like cppcheck, but most people won't buy this.
To describe the 27,000 detections as errors is a little misleading, and I think that's what the grandparent post is discussing.
I've heard people say that Coverity gives many false positives and Cppcheck gives few. I've heard the opposite: that it is impossible to use Cppcheck because of the huge number of false positives, while Coverity is doing great. I've heard that Coverity gives more false positives than PVS-Studio, and vice versa. And so on and so forth. What is the reason for such differences?
Projects have their own styles of writing and their own kinds of macros. It is these macros and peculiarities of style that become the main source of false positives. This is why the first impression of a code analyzer depends on luck, not on how good the analyzer is. If the analyzer doesn't like a self-made my_assert(), it will issue 10,000 false positives.
So there is no point in talking abstractly about the number of false positives. Yes, you may be unlucky and get a lot of false positives. However, static code analyzers allow you to configure them. In articles I have shown many times that the simplest configuration of the analyzer can greatly reduce the number of false positives. Example: https://www.viva64.com/en/b/0496/#ID0ENNAC
...if you can grant them the premise that the 3.3% sampled represents the project as a whole so that the ratio will project out to 27k overall.
That's a few million sample lines tested.
Nevertheless I agree with you: there is no obvious way to do a better estimation, so the words "fair" and "transparent" seem rightly in place there. Of course, only if Karpov does not try to mislead his readers deliberately by choosing subprojects by some criterion that correlates with the error rate.
The author doesn't name names, but ends the article with "[...] did I mention EFL is the basis of all applications on Tizen?".
I don't know whether EFL is still the basis for all applications on Tizen (I think they changed technologies several times; are they now on .NET?), but that, plus the SDK churn and Samsung's reputation for generally low software quality and severe security issues, doesn't really instill confidence in the platform.
95% of Tizen is a huge hairball of needless nested and overlapping indirections (custom IPC layers, serialization formats, "management" daemons, libraries to interface with them, etc.), implemented in varying C styles that are uniform only in their pervasive messiness and broken English; deep beneath all this you have the open-source libraries doing the actual useful work. You could try to make similar claims about Android, but the percentage would be much lower and the code quality is much higher. I wish I had a real metric to point at (maybe this article suffices?), but the code in Tizen is truly awful.
Also, they don't give a shit about their bug tracker; I've had system-breaking bugs, with patches, sitting in "new" status for over a year. Because I don't work at Samsung, I can't contribute.
The second one (free() of a stack pointer): oof. Depending on the malloc implementation, you'll get a no-op, a crash, an abort, a warning to stderr, or, worst of all, a silent munge of stack variables.
Computer screens are meant for sitting very close, and TVs are meant for sitting back a ways... so the viewing experience will be a bit "off" (due to pixel densities) if you use either for the other purpose.
Also, most computer screens don't have coax inputs, usually have only one HDMI input, etc...
No, companies don't really care about avoiding bugs. No one at Samsung will risk delaying the release of the next plastic crap product to fix problems. Why would they? Defects even encourage customers to buy a new device, because they're used to getting no bug-fix updates. This is the worst possible company for Viva64 to go to.
It seems like C++ developers either don't give a shit about code quality, or they do and don't make many mistakes anymore. Either way, there is little demand for a static analyzer. It won't get easier for Viva64 in the future. A few years ago, compilers and IDEs started providing warnings similar to PVS-Studio's, though not quite as sophisticated yet. Code-level testing is beginning to be a thing in proprietary software. Clang offers valgrind-like sanitizers for different classes of bugs that even PVS-Studio cannot detect.
TIL that Viva64 has over 20 employees. The passive-aggressive blog posts always made it look like one or two people running PVS-Studio as a side project. I read those posts regularly, but I don't remember them announcing any new features or improvements in PVS-Studio itself. Why should I pay annually if they don't spend the money on improving the product?
I am not even sure PVS-Studio is worth the money. They don't list prices on the website, and they defend their business decisions in the FAQ (https://www.viva64.com/en/order-faq/) in a very unprofessional way. After reading it, it feels like I might get ripped off.
Well, this is like claiming that authors writing in English don't care about grammar. There are too many people doing too many different things with the language -- from texting LOLs to writing academic research -- to generalize. But speaking from experience with one ~5M line industrial C++ codebase continually developed since the '90s, we care enough about quality to have a whole set of measures, automated and cultural, supporting it.
We've looked at static analyzers several times, but none of the evaluations found issues worth the price. One turned up things like a static expression computing a bitmask where the same flag was listed multiple times. Which is nice, but it made the tool feel more like lint in terms of the differential value added for us. If we didn't have things like Valgrind's memcheck running on continuous integration, it might be a different story.
This is standard practice. PVS-Studio is a B2B solution; there are many details to be discussed.
For individual developers we offer the following: "How to use PVS-Studio for Free" - https://www.viva64.com/en/b/0457/
And "Handing out PVS-Studio Analyzer Licenses to Security Experts" - https://www.viva64.com/en/b/0510/
"Handing out PVS-Studio Analyzer Licenses to Security Experts" - sorry, tl;dr
A lot of companies don't do that. Last I checked, Coverity was the same, and likely six figures for large codebases.
At this point, for a commercial project, I'd just use the Clang static analyzer and cppcheck, then run the tests under Clang's AddressSanitizer. That's likely better than what most people do now.
And ensure that builds are clean, with no warnings. :( Most projects emit thousands of warnings when compiling. That just can't be good.
Some software teams take pride in their work; it's unfortunate that consumers don't have an easy way to identify those teams when buying a product.
PVS-Studio Release History: https://www.viva64.com/en/m/0010/
PVS-Studio project - 10 years of failures and successes: https://www.viva64.com/en/b/0465/
How PVS-Studio does the bug search: methods and technologies: https://www.viva64.com/en/b/0466/
I don't mean once-a-year posts that no one reads because they are too long. I mean regular, valuable information about the product.
Including posts describing the product. For example:
PVS-Studio as a plugin for SonarQube - https://www.viva64.com/en/b/0513/
Support of Visual Studio 2017 and Roslyn 2.0 in PVS-Studio: sometimes it's not that easy to use ready-made solutions as it may seem - https://www.viva64.com/en/b/0503/
The way static analyzers fight against false positives, and why they do it - https://www.viva64.com/en/b/0488/
Why I Dislike Synthetic Tests - https://www.viva64.com/en/b/0471/
Integrating PVS-Studio into Eclipse CDT (Linux) - https://www.viva64.com/en/b/0458/
Integrating PVS-Studio into Anjuta DevStudio (Linux) - https://www.viva64.com/en/b/0459/
Issues we faced when renewing PVS-Studio user interface - https://www.viva64.com/en/b/0450/
and so on
I can also offer a presentation:
PVS-Studio static code analyzer for C, C++ and C# (2017) - https://youtu.be/kmqF130pQW8
I wish. That last part should rather be: "and don't make a lot of mistakes anymore". Even the gurus out there are not completely free of mistakes. As for the first group you refer to: what might look like not giving a shit is often actually just a lack of understanding or training. As in: not actively not giving a shit, but simply not knowing they are doing it wrong.