27000 errors in the Tizen operating system (viva64.com)
178 points by matthewwarren on July 12, 2017 | 78 comments



Tizen: Summing Up - https://www.viva64.com/en/b/0522/


With Samsung's Android record, why would anyone want more Samsung-written code on their phone?

I had a Nexus 5: reliable, reasonably fast, and snappy under most normal usage patterns.

I wanted an SD card and wireless charging, so I upgraded to a Note 4. 50% more RAM, a faster CPU, a faster GPU, etc. It felt way slower, and crashed fairly often. Annoyingly, the home button was quite sluggish. Even more annoying: despite the 50% more RAM, it was very aggressive about killing background apps, making multitasking almost useless. Whenever you switched apps it would have to restart the app, MUCH more so than my Nexus 5. I tracked it down, and apparently it's some benchmark optimization. Android updates (security or otherwise) were quite rare, and it was about a year behind on Android releases.

I switched the Note 4 to CyanogenMod. The phone was much snappier, much more consistent (no random lag), and stopped crashing. Oh, and the home button was much quicker as well.


I had a similar experience with a Samsung Galaxy S4, which would run out of battery in under 5 hours on any given day, and which was dropped from major updates after two versions. Switching to CyanogenMod, even the nightly version, extended battery life to ~9 hours and greatly improved performance. It also allowed access to the next version of Android, native skinning, and easy customization (with a literal toggle in the dev settings for root).

In short, I highly recommend upgrading any old hardware you have to LineageOS (CyanogenMod, continued).


This sounds so familiar. Some 6 years ago, I joined a team of engineers developing a large (1.5M LOC) classic Java/Spring/Hibernate application for administration in higher education. The software was buggy and loads of time was wasted on bug-fixing, so I introduced the FindBugs static analysis package. Result: >6000 (suspected) bugs. It took a couple of days of blunt perseverance by the whole team to clean up the mess and bring it back to normal proportions, but the result was a stable, well-functioning system.

If you have static analysis tools available, you're writing a larger application (more than 10k LOC), and you intend to maintain it for longer than a couple of months, it is definitely worth the initial investment. Better than static analysis is static typing in a language with an expressive type system, such as Scala. You simply cannot have NPEs with Option types.

tl;dr: Used static analysis on a 1.5M LOC codebase to reduce bugginess tremendously. Use of static analysis and statically typed languages is advised.


The first time we ran static analysis on the C++ code base of a specific project, it found ~20 minor issues and a couple of medium-severity ones that were mitigated by other checks. However, we also used 3rd-party open source C libraries, where it found thousands of problems. I'm talking about serious issues: memory leaks, invalid memory access, etc.

P.S.: I hope Scala implemented optional types better than Java, where the optional types themselves are nullable :)


Well, technically you could write `val opt: Option[Int] = null`, but I've never seen it used and would seriously consider sending the author back to a Scala for Dummies course.

It's interesting how many bugs can be found by static analysis. The problem with C and C++ is that the domain of 'normal' operations is much larger than in Java, leading to more false positives and/or fewer true positives in static analysis.


Optional types will eventually become value types in Java; you just need to wait until Java 10 for it.


We ran PVS-Studio against FreeRADIUS. It found things that were missed by Coverity, the Clang analyzer, and cppcheck. Sadly, they don't respond to my queries about paying for it.

http://freeradius.org/security/pvs-studio.html


Every time I see a PVS-Studio article I wonder how many of the errors would be caught by the compiler itself. In my experience, things like the same expression on both sides of a less-than sign, or returned pointers to local variables, are caught by -Wall -Wextra in GCC or Clang.
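
For instance, a minimal sketch (hypothetical warn.c; the names are made up) of the two bug classes mentioned above, both flagged by GCC with -Wall -Wextra:

    /* pointer to an expired stack slot; GCC: -Wreturn-local-addr */
    int *dangling(void) {
        int local = 42;
        return &local;
    }

    /* same expression on both sides of '<'; GCC/Clang: -Wtautological-compare */
    int always_false(int a) {
        return a < a;
    }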

If they're not building with -Wall -Wextra -Werror, are they really going to make use of PVS-Studio's output?

(Also, as a member of the Rust Evangelism Strike Force, I am obligated to point out that a lot of errors PVS-Studio catches are just straight-up not possible in Rust. In order to avoid getting too off-topic here, here's a comment I made on Reddit on a similar post from PVS-Studio about two years ago: https://www.reddit.com/comments/3aehq5//cscj2w5/)


> If they're not building with -Wall -Wextra -Werror, are they really going to make use of PVS-Studio's output?

This is very likely. Don't forget that a static analyzer is not just warnings; it is also infrastructure.

For example, PVS-Studio offers:

- Saving and loading of analysis results, which allows overnight checks: the analyzer scans during the night and provides you with the results in the morning.

- Interactive filtering of the analysis results (the log file) in the PVS-Studio window: by diagnostic number, file name, or a keyword in the text of the diagnostic.

- The BlameNotifier utility, which sends e-mail notifications to developers about bugs that PVS-Studio found during a nightly run.

- Mass suppression: the ability to suppress all old messages raised for legacy code, so that the analyzer reports 0 warnings. You can always go back to the suppressed messages later. This feature allows you to seamlessly integrate PVS-Studio into your development process and focus only on errors found in new code. Details: https://www.viva64.com/en/m/0032/

- Relative paths in report files to view them on different machines.

- The CLMonitoring feature, which allows analyzing projects that have no Visual Studio files (.sln/.vcxproj); if the CLMonitoring functionality is not enough, it is possible to integrate PVS-Studio into a Makefile-based build system manually.

- pvs-studio-analyzer - a utility similar to CLMonitoring, but working under Linux.

- The possibility to exclude files from the analysis by name, folder, or mask, or to run the analysis only on files modified during the last N days.

- Integration with SonarQube, an open source platform designed for continuous analysis and measurement of code quality.

- and so on


I don't remember reading about any of that even though I am a regular reader of your "checked projects" series. All of the relevant information on your web site is hidden in some wall of text. Please consider hiring someone who knows how to sell something.


It isn't hidden, really. I followed the hyperlink at the top to the main page of the site, and the bullet-point list that you are responding to is right there, on that very first page. It is under a heading entitled "Main Features of PVS-Studio".

* https://www.viva64.com/en/pvs-studio/


"Main Features of PVS-Studio" is one of those walls of text that I meant.


Suppressing old errors is definitely useful.

Most of the other things sound like tools to fix erroneous code after it's landed. If you're using -Werror, and you have something that makes sure the code builds before merging it to master / trunk, you can't commit erroneous code in the first place.


Compiler warnings are good, but they are not enough :). Specialized static analysis tools such as PVS-Studio will always outpace compilers in diagnostic capabilities and in the flexibility of dealing with false positives. That's actually how analyzer developers make money. Proof: https://www.viva64.com/en/inspections/#ID0EAPAG


Any chance you want to try PVS-Studio on QEMU? It's about one million lines of code, so not huge but definitely nontrivial. We use static analysis and have reached almost zero defects.

You can contact me at pbonzini@redhat.com if you are interested.



The number of errors reminds me of Pylint giving me errors. Some of them were like "'a' is not a valid variable name". Maybe the numbers are like that. (Pylint that is not configured sanely, i.e. left with the default config, really does give you thousands of errors and warnings for a fairly large code base, and some of them are not errors but coding style.)

The numbers here may also be something like that. Nice blog post; it may even make developers adopt something like cppcheck, but most people won't buy this.


I switched to flake8. Pylint's default config is crazy.


It's not just the config; Pylint assumes it's possible to statically parse code and make some simple assumptions about things like class/module members, which only works for the most straightforward Python code. It issues constant warnings for many third-party packages - the one that finally broke it for us was protobuf, where it complains on every access to any of the generated classes. In the end we moved to flake8 as well, which finds probably 90% of the issues that Pylint did, but at < 1% of the false-positive rate.


It's a lot like that. PVS-Studio ran against FreeBSD and produced a ton of false positives.


The reason for this is described in the article "How to find 56 potential vulnerabilities in FreeBSD code in one evening": https://www.viva64.com/en/b/0496/#ID0ENNAC . It explains that this is not scary and that the analyzer settings can be customized.


Running any sufficiently advanced static analyzer on any large codebase will generate a ton of false positives.


So what? Qualitatively, it feels like PVS-Studio's false-positive rate is higher than Coverity's. And far higher than a modern free compiler's -Wall.

To describe the 27,000 detections as errors is a little misleading, and I think that's what the grandparent post is discussing.


It is not the way you think, and it is not nice to mislead people. It all depends on the project.

I've heard people say that Coverity gives many false positives and Cppcheck gives few. I've heard the opposite: that it is impossible to use Cppcheck because of the huge number of false positives, but that Coverity is doing great. I've heard that Coverity gives more false positives than PVS-Studio. And vice versa. And so on and so forth. What is the reason for such differences?

Projects have their own styles of writing and their own kinds of macros. It is macros and peculiarities of style that become the main source of false positives. This is why the first impression of a code analyzer depends on luck, and not on the coolness of the analyzer. If the analyzer doesn't understand a self-made my_assert(), it will issue 10,000 false positives (see the sketch below).

So there is no point in talking abstractly about the number of false positives. Yes, you can get unlucky and there can be a lot of false positives. However, static code analyzers allow you to configure them. In articles I have shown many times that the simplest configuration of the analyzer can greatly reduce the number of false positives. Example: https://www.viva64.com/en/b/0496/#ID0ENNAC
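
A minimal sketch of the my_assert() situation (my_assert, my_fatal, use, and struct packet are all made-up names):

    #include <stddef.h>

    struct packet { const char *data; };

    void my_fatal(const char *msg);   /* logs and aborts, but the analyzer cannot see that */
    void use(const char *data);

    /* homegrown assert: nothing tells the analyzer it never returns on failure */
    #define my_assert(cond) do { if (!(cond)) my_fatal(#cond); } while (0)

    void handle(struct packet *p) {
        my_assert(p != NULL);
        use(p->data);   /* analyzer may still warn "p may be NULL": a false positive */
    }

Telling the analyzer that my_fatal() never returns makes this whole class of false positives disappear at once.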


If you write good code, there won't be that many FPs.


> As you can see, the calculations are absolutely fair and transparent.

...if you can grant them the premise that the 3.3% sampled represents the project as a whole so that the ratio will project out to 27k overall.


Sampling 3.3% of the lines in a 72-million-line codebase, especially across so many sub-projects, is a large enough sample to make claims like this with decent certainty.

That's a few million sample lines tested.
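
For reference, the arithmetic behind the headline number (assuming the roughly 900 bugs the article reports for the sampled portion): 3.3% of 72M lines is about 2.4M lines analyzed, and 900 / 0.033 ≈ 27,000 projected errors.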


Oh... We can ask about the way the sub-projects were selected, and whether they can be called a representative sample.

Nevertheless, I agree with you: there is no obvious way to do a better estimation, so the words "fair" and "transparent" seem right in place there. Of course, only if Karpov does not deliberately mislead his readers by choosing sub-projects by some criterion that correlates with error rate.


Even if the sampling was rigorously random and no projects were dropped after starting, there would still likely be a bias towards small, "long tail" projects. It seems plausible that these types of projects would have more errors (or vice versa).


Did they include Linux kernel or other 3rd party code in those 72 million lines? If yes then that is one buggy kernel.


The mere mention of EFL brought this article back to my mind: https://what.thedailywtf.com/topic/15001/enlightened

The author names no names but ends the article with "[...] did I mention EFL is the basis of all applications on Tizen?".

I don't know whether EFL is still the base for all applications on Tizen (I think they have changed technologies several times; they are now on .NET?), but that, plus the SDK churn and Samsung's reputation for generally low software quality and severe security issues, doesn't really instill confidence in the platform.


I've read a lot of Tizen code, and EFL is by far one of the more professional and well-engineered components. And that is while realizing that, compared to many open-source libs, EFL is rather a goofy oddball.

95% of Tizen is a huge hairball of needless nested and overlapping indirections (custom IPC layers, serialization formats, "management" daemons, libraries to interface with them, etc.) implemented in varying (except for the pervasive messiness and broken English) C styles; deep beneath all this you have the open-source libraries doing the actual useful work. You could try to make some similar claims about Android, but the percentage would be much lower and the code quality is much higher. I wish I had some real metric to point at (maybe this article suffices?), but the code in Tizen is truly awful.

Also they don't give a shit about their bug tracker; I've had system-breaking bugs, with patches, in "new" status for over a year. Because I don't work at Samsung, I can't contribute.


I'm dreaming of a language which prevents bugs a priori, so that there is no need for static analysis. I know Rust is out there, but I'm not sure all state-of-the-art knowledge is getting into the language specification. Is there some programming-language geek here who could confirm that this is the case, and that all safety features of Ada (except those with non-optional run-time trade-offs) are included?


Samsung doesn't have a brilliant track record for security...


And yet they want to be the go-to IoT company... kind of scary.


Security is invisible.



Those alloca issues are a real concern! Especially given that there's no way of knowing with any certainty that it has failed; if it fails, it's not specified what behaviour will occur.


The first alloca (alloca in a loop) is in a test, which makes it less concerning IME.

The second (free() of a stack pointer): oof. Depending on the malloc implementation, you'll get a noop, a crash, an abort, a warning to stderr, or, worst of all, a silent munging of stack vars.
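
For anyone who hasn't run into these, a minimal hypothetical sketch of both patterns:

    #include <alloca.h>
    #include <stdlib.h>
    #include <string.h>

    /* pattern 1: alloca in a loop -- the memory is only released on function
       return, so the stack grows by 4 KB per iteration and a large n smashes it */
    void alloca_in_loop(int n) {
        for (int i = 0; i < n; i++) {
            char *buf = alloca(4096);
            memset(buf, 0, 4096);
        }
    }

    /* pattern 2: free() of a stack pointer -- undefined behavior, since buf
       never came from malloc */
    void free_of_stack_pointer(void) {
        char *buf = alloca(64);
        free(buf);
    }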


Yeah, but you also don't want to allocate too many of these and smash the stack.


The series continues: Exploring Microoptimizations Using Tizen Code as an Example - https://www.viva64.com/en/b/0520/


I will never buy a Samsung product that includes software written by them. They simply don't care about quality.


Do any Samsung products not contain software written by them?


Are there similar security analyses of the Android OS source?



It's on my "smart TV" and there's a 50/50 chance that when I turn on the TV it crashes. On the plus side it's helping me stop watching so much TV.


This sort of thing is exactly why I'm very unwilling to buy a smart tv. It's getting harder to find decent dumb ones, though.


LG's WebOS is surprisingly solid. I was all set to use my TV as a monitor and plug in my Roku for the "smart" bits, but I abandoned that idea shortly after I turned it on for the first time.


I went from an LG with Google TV to an LG with WebOS and I was seriously surprised at how much I liked it. For fun I plugged a mouse into the TV and even that worked great. They actually made it a first-class input option. The UI overall is responsive and looks nice. I'm pretty happy with it.


Why not just get a big computer screen?


Pixel densities and price are major factors here (computer screens being far more expensive per inch than a TV).

Computer screens are meant for sitting very close, and TVs are meant for sitting back a ways... so the viewing experience will be "off" a bit (due to pixel densities) if you use either for the other purpose.

Also, most computer screens don't have coax inputs, usually have only one HDMI input, etc...


Eh, what would a 50" TV do with coax?


Receive ATSC or QAM TV.


Receive over-the-air channels


Nobody makes an acceptably priced 40-50" computer screen that I can put in my living room.


Dunno, Iiyama has a decent 40" 4K for 400-500€ that does the job for me.


I need almost double that size unless I want to watch TV like a peasant....


Oh, huh, hadn't seen that before. That's actually an option, potentially.


Why not a projector?


Because I don't want to close the curtains (actually, I don't even have curtains currently, but assuming I did) and make the room dark whenever I want to watch TV.


Fair enough, but most HD projectors are > 3000 lumens (and start at ~400 USD/EUR), which is generally sufficient in daylight.


Vizio E series? Mine does the same thing. Powers on, sits for 15 seconds, then shows a "cannot connect" message that can't be fixed by anything other than turning it off and waiting it out.


Yes! I have a Samsung Vizio. So it's not just mine. When I turn on the screen, there is an 80% chance that the TV can't connect to the internet. I have to switch from the most-recently used app to another and back (say, Netflix to Amazon to Netflix) to force the TV to kick start its internet connection. It seems like the TV or its apps get stuck in some idle mode when the screen is off.


I thought Vizio was a private brand not related to Samsung.


Samsung and Vizio are completely separate entities.


Yep. My bad. I knew my Samsung TV runs Tizen but didn't know if it was a "Vizio" model [sic]. Assuming we were still talking about Samsung TVs because the parent article was about Samsung's Tizen OS, I searched for "samsung and vizio" and saw some images that looked like my Samsung TV. :)


No Samsung. I half-jokingly say it's because it's been compromised by the CIA (which doesn't sound too far-fetched now) :P


Yes, PVS-Studio is very good and useful for avoiding bugs.

No, companies don't really care about avoiding bugs. No one at Samsung will risk delaying the release of the next plastic crap product by fixing any problems. Why would they? Defects will even encourage customers to buy a new device because they are used to getting no bug fix updates. This is the worst possible company for Viva64 to go to.

It seems like C++ developers either don't give a shit about code quality or they do, and don't make many mistakes anymore. Either way there is little demand for a static analyzer. It won't get easier for Viva64 in the future. A few years ago, compilers and IDEs started providing warnings similar to PVS-Studio's, although not quite as sophisticated yet. Code-level testing is beginning to be a thing in proprietary software. Clang offers Valgrind-like sanitizers for different classes of bugs that even PVS-Studio cannot detect.

TIL that Viva64 has over 20 employees. The passive-aggressive blog posts always made it look like one or two people running PVS-Studio as a side project. I read those posts regularly, but I don't remember them announcing any new features or improvements in PVS-Studio itself. Why should I pay annually if they don't spend the money on improving the product?

I am not even sure if PVS-Studio is worth the money. They don't have prices on the web site and defend their business decisions in the FAQ (https://www.viva64.com/en/order-faq/) in a very unprofessional way. After reading that, it feels like I might get ripped off.


> It seems like C++ developers either don't give a shit about code quality or they do, and don't make many mistakes anymore. Either way there is little demand for a static analyzer

Well, this is like claiming that authors writing in English don't care about grammar. There are too many people doing too many different things with the language -- from texting LOLs to writing academic research -- to generalize. But speaking from experience with one ~5M line industrial C++ codebase continually developed since the '90s, we care enough about quality to have a whole set of measures, automated and cultural, to support it.

We've looked at static analyzers several times but none of the evals have found issues worth the price. It turned up things like a static expression to compute a bitmask where the same flag was being listed multiple times. Which is nice, but made it feel more like a lint tool in terms of differential value added for us. If we didn't have things like valgrind memcheck on continuous integration, it might be a different story.
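
Something like this hypothetical sketch, where the duplicate is harmless as written but probably a typo:

    enum { FLAG_READ = 1 << 0, FLAG_WRITE = 1 << 1, FLAG_SYNC = 1 << 2 };

    /* FLAG_READ appears twice; the second was likely meant to be FLAG_SYNC */
    static const unsigned MASK = FLAG_READ | FLAG_WRITE | FLAG_READ;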


> They don't have prices on the web site

This is standard practice. PVS-Studio is a B2B solution. There are many details to be discussed.

For individual developers, we offer the following: "How to use PVS-Studio for Free" - https://www.viva64.com/en/b/0457/

And "Handing out PVS-Studio Analyzer Licenses to Security Experts" - https://www.viva64.com/en/b/0510/


"How to use PVS-Studio for Free" is ridiculous.

"Handing out PVS-Studio Analyzer Licenses to Security Experts" - sorry, tl;dr


> I am not even sure if PVS-Studio is worth the money. They don't have prices on the web site

A lot of companies don't do that. Last I checked, Coverity was the same. And likely 6 figures for large code bases.

At this point, for a commercial project, I'd just use the Clang analyzer and cppcheck, then use the Clang address sanitizer when running tests. That's likely better than what most people do now.
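
For example, a minimal hypothetical use-after-free (uaf.c is a made-up file) that a static check can miss but the address sanitizer catches at run time:

    #include <stdlib.h>

    int main(void) {
        int *p = malloc(sizeof *p);
        if (!p) return 1;
        *p = 1;
        free(p);
        return *p;   /* ASan aborts here with a heap-use-after-free report */
    }

    /* build and run: clang -g -fsanitize=address uaf.c && ./a.out */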

And ensure that builds are clean, with no warnings. :( Most projects produce thousands of warnings when compiling. That just can't be good.


I've worked on consumer electronics; code quality is up to the technical leads. Asserts had to be fixed, warnings were treated as errors, and all memory problems were to be fixed, full stop. We had crash-dump reporting in place; more than 10 hits and the issue got investigated and fixed.

Some software teams take pride in their work, it is unfortunate consumers don't have an easy way to ID those teams when buying a product.


> I read those posts regularly, but I don't remember them announcing any new features or improvements in PVS-Studio itself.

PVS-Studio Release History: https://www.viva64.com/en/m/0010/

PVS-Studio project - 10 years of failures and successes: https://www.viva64.com/en/b/0465/

How PVS-Studio does the bug search: methods and technologies: https://www.viva64.com/en/b/0466/


That's a changelog.

I don't mean once-a-year posts that no one reads because they are too long. I mean regular, valuable information about the product.


I think you do not follow our articles carefully :). We have a lot of diverse publications in our blog: https://www.viva64.com/en/b/

Including ones describing the product. For example:

PVS-Studio as a plugin for SonarQube - https://www.viva64.com/en/b/0513/

Support of Visual Studio 2017 and Roslyn 2.0 in PVS-Studio: sometimes it's not that easy to use ready-made solutions as it may seem - https://www.viva64.com/en/b/0503/

The way static analyzers fight against false positives, and why they do it - https://www.viva64.com/en/b/0488/

Why I Dislike Synthetic Tests - https://www.viva64.com/en/b/0471/

Integrating PVS-Studio into Eclipse CDT (Linux) - https://www.viva64.com/en/b/0458/

Integrating PVS-Studio into Anjuta DevStudio (Linux) - https://www.viva64.com/en/b/0459/

Issues we faced when renewing PVS-Studio user interface - https://www.viva64.com/en/b/0450/

and so on

I can also offer a presentation: PVS-Studio static code analyzer for C, C++ and C# (2017) - https://youtu.be/kmqF130pQW8


> It seems like C++ developers either don't give a shit about code quality or they do, and don't make many mistakes anymore

I wish. That last part should rather be: "and don't make a lot of mistakes anymore". Even the gurus out there are not completely without mistakes. As for the first group you refer to: what might look like not giving a shit is often actually simply a lack of understanding or training, it seems. As in: not actively not giving a shit, but rather simply not knowing they are doing it wrong.



