Android Phones Can Be Hacked Remotely by Viewing Malicious PNG Image (android.com)
237 points by air7 40 days ago | 177 comments



It's kind of ridiculous that my Google Nexus 5X, which was released just over 3 years ago, will not receive updates to patch vulnerabilities like this anymore.


Most important isn't really the release date, but the date it was last sold to the public.

A member of the public should expect to be able to buy a new phone and use it for 3 years without exposing their nude pics to blackmailers.

So far, that isn't the case in the Android world.


> Most important isn't really the release date, but the date it was last sold to the public.

Agreed. I picked mine up 2 years ago and am now forced to upgrade to a newer model or install a third-party version of Android like LineageOS.


It's worse than that for other high-end phones. I bought the first Razer Phone from the Microsoft Store 1 year and 4 months ago (Nov 24, 2017). The last security patch was from July 5, 2018.

From what I understand, the CAT Phones (with built-in FLIR, etc) get even fewer updates.


Absolutely agree - there should be regulation demanding a 5-year period of security updates (or something similar).

Check out https://www.lineageos.org or one of the other distros out there - and get that loaded up.


Do you really want legislators defining what is and what is not a ‘security’ issue?


That’s literally one of the jobs of government, to step in when the private sector does not regulate itself well enough to protect consumer interests. It’s not about wanting the government to step in, it’s about having no other recourse.


I'm not sure any of us has a right to speak for every consumer. I live in a country where the majority would likely prefer cheaper devices without any security guarantees. Forcing producers to provide 5(?)-years updates will make prices rise, and it could be against the interests of a large segment of consumers. The only regulation which I believe would be beneficial for all is obligatory transparency. There should be clear warnings like "The producer expects you to replace this device in 2 years, and will not support it after that", or "This producer doesn't promise anything in regard to this device - use at your own risk".


You wrote: "Forcing producers to provide 5(?)-years updates will make prices rise"...

I don't think so: it will oblige makers to standardize processes and software across phones, i.e. only very basic device-specific drivers, then the same OS and libs on all phones (with just different themes).

Then Android security updates can be pushed directly by Google at no cost. Just like for computers - and phones are computers with very few different features (input device, GSM chip).

My HP or Dell computer is not more expensive when Microsoft or Debian is pushing security updates.

In the end, unifying processes and software brings costs down.


I almost loved this idea at first sight, but on second thought I'm not sure I see how it can work. Will you mandate Apple to make iOS installable on Huawei's devices? If not, why? If yes, how? Who will bear responsibility if SailfishOS won't run on Samsung's hardware? Do you really think an API standard can be enshrined in law, and how do you plan to make it safe from corporate abuse?

Btw, if Microsoft is pushing security updates, you did pay an additional cost for the license. If it's Debian, then quite possibly you still paid something to MS (if your computer was bought with an OEM version), or in the better case other corporations and individuals pay for it (mostly to reduce Microsoft's power).


>I'm not sure any of us has a right to speak for every consumer.

Yet almost every civilization on Earth has already decided that we, the masses, _do_ have a right to speak for everyone when it constitutes a common good. In the US, you have to wear a seatbelt in most states. Your food is regulated by the FDA. Your cars must meet certain safety standards, as does your home. This list goes on and on.


You are using undefined terms to define undefined terms: "common good" is not something obvious in this particular situation. Food is a poor example here because toxicity is more or less the common denominator, while the safety of a smartphone has a very different value for someone whose life is immersed in digital services and whose income lets them see +/-50 dollars as a small variation, versus someone who uses only WhatsApp and a weather service and needs to save every penny. In fact, the majority of Android phones in the world right now are somewhat vulnerable, and it doesn't seem that the people who use all this cheap stuff are eager for change. You can already be safer, if that's your priority, by using more expensive brands (Apple, Samsung) which offer longer-term updates.

P.S. My food is not regulated by the FDA; not everybody registered on HN is from the States.


Just because a lot of people do it doesn't mean it's the right choice though. Most Republicans seem to disagree from what I can tell.


>Just because a lot of people do it doesn't mean it's the right choice though

Who said that it was?

The FDA, as it is known today, came into being in response to the public outrage at the shockingly unhygienic conditions in the Chicago stockyards that were described in Upton Sinclair’s book “The Jungle." Building codes exist to protect public health and welfare.

These weren't arbitrary decisions; they were made in response to real issues. Laws in general limit personal freedoms to protect society and the public. Also, some personal freedoms infringe on the freedoms of others. If murder were legal you would gain personal freedom, but your victims would lose theirs. I may want to have cows in my backyard, but my neighbors may have a few legitimate issues with that.

The fact is that the world has decided you are wrong, and for good reason. I don't need to prove why; you need to prove why everyone else is apparently incorrect.

>Most Republicans seem to disagree from what I can tell.

Uh huh, until we start talking about what you do with your body or who you want to marry. Many conservatives do believe in personal freedom over government rule, yes, but the best conditions are always brought about by balancing those, never in favoring one completely over the other.


"The producer expects this device to be unsafe after 2 years" would be more accurate :)


Do you think companies are going to choose "security" issues wisely? Do you have an actual solution that doesn't involve government, doesn't have the companies deciding themselves, and that the general public can do?


The past decade has seen an explosion in software being put and used everywhere. With that comes an explosion of bugs that are exploited. Literally hundreds of millions of people have had all their shit stolen from numerous services that have a laissez-faire approach to application security. It's like getting into an automobile accident; you're basically guaranteed to get into at least one in your lifetime. If you've used the Internet, private data of yours is virtually guaranteed to be leaked by at least one service you use.

I'm not a fan at all of excessive government overreach, but the private tech sector is utterly incapable of policing itself because a) they don't give a shit, and b) no one is holding them accountable enough (you could argue shareholders should, but there's rarely an impact on bottom lines when security breaches happen). The only thing that will make them care is an impartial 3rd party that can force them to care.


They don't need to. For example in the UK, goods sold need to be of "satisfactory quality" at the time of sale, and if in breach then the seller has to make it good for up to six years after sale, depending on the expected market lifetime of the product.

Something like that is all that's required in primary legislation.

What is missing is a finding that a sufficiently severe security vulnerability present at time of sale falls short of the expected standard. The general concept could be enforced by a court ruling setting precedent or by still quite generic legislation.

Finally it would be up to the courts to decide on a case-by-case basis what constitutes "sufficiently severe" in specific cases. That's no different to how everything else in law works.


Yes, that seems like a very good idea, the market seems to be incapable of regulating itself on security issues.


Yes, I really do want the government I pay for to at least attempt to protect me from pervasive, daily threats.

"Do you really want legislators deciding what is and is not 'reckless' driving?"

Yup!


What is your proposal for fixing the fact that devices are not updated these days?


In an ideal world, my preference would be for the government to enforce some sort of standardized driver interface and user-modifiability guidelines, such that users have the ability to update their own devices.


Get a new device and pick a manufacturer that provides updates.


So daydream. Such a company doesn't exist.


Apple comes closer than most. The most recent unsupported device is the 5C, which was released 5 years ago and discontinued in 2016. Their end of support appears to be mostly in line with major architecture changes and not "too old, too hard, just buy a new one" as appears to be the case with Android.



You do realize that phone is only available for preorder? They may not even be around in 3 or 5 years. The OP likely suggested Apple (5s is in its 6th year of updates).


>You do realize that phone is only available for preorder?

Yes, it is only for preorder, but the devkit already exists [0].

> They may not even be around in 3 or 5 years.

If they manage to make the final version of the phone (which is quite likely), it won't matter anymore, since the OS is just GNU/Linux. Correspondingly, updates don't depend so much on the phone manufacturer.

[0] https://puri.sm/posts/how-we-designed-the-librem-5-dev-kit-w...


So how does that solve a problem lineage os doesn't?

You're buying into a phone that should get continued support for years to come, assuming the company survives and they don't end up finding the maintenance burden utterly untenable (once they're on model version 3, I can't see it being the easiest thing to support the older models - Apple does, but Apple's huge). It seems to be relatively underpowered (though that's really up to how many resources the system uses), and I can't imagine it will age well. There will be little to no mainstream app support (they seem to be touting "just run it in a browser" as a solution to that - I thought we left those dark ages).

It seems like a neat, niche product, like a raspberry pi phone, but it doesn't give me a lot of hope for the future.


> So how does that solve a problem lineage os doesn't?

LineageOS depends on proprietary drivers and/or undocumented devices, which will never get any updates. If there is a security bug in your WiFi driver, game over. This phone is being designed to work with free software, so anyone could do updates.

>assuming the company survives, and

Again, free software gives us the freedom to do updates ourselves (or pay someone to do them). GNU/Linux supports most old computers quite well.

>There will be little to no mainstream app support

"We will test the capabilities of powering Anbox or Shashlik to allow users the ability to run Android applications within PureOS on the Librem 5", https://puri.sm/faq/

>It seems to be relatively underpowered

True, but one has to start somewhere. Personally, I think this phone brings a lot of hope for the future.


Currently, that phone doesn't exist.


What about OnePlus? I haven't been in the Android ecosystem for a few years but they always seemed to be on par if not better than Google.


You pack a lot of fallacies into one sentence! False dichotomy, boogie man with the bonus of scare quotes. Mandating security updates for some amount of time after a product is sold isn't 'legislators defining security issues'.


Google has already addressed the issue with Android One. Android One certified devices are guaranteed at least two years of security updates. Most of the manufacturers already have such devices available.


Any advice, caveats, or other thoughts on this process?


No, that's not strong enough. It should be indefinite (or the owner has a right to damages) UNLESS the entire spec and interface of a device is completely, comprehensively, and publicly documented from the silicon up, and the device either lacks software integrity checking or has it fully under the owner's control (e.g. purge the OEM public key, replace it with your own). This should apply to all products containing microprocessors and software to execute, and to burned-in ROMs too (since that software in ROM should be user-writable/replaceable, this should discourage the use of burned-in ROMs). It should apply to the end product, so the whole car, TV, washing machine, vacuum cleaner, cellphone, game console, Intel CPUs and chipsets, etc, must have its microcontroller interfaces and specs fully and publicly documented, or damages could be awarded later once exploits appear. This should tamp down on IoT fridges and can openers too, as what OEM wants to either document their IP xor expose themselves to potentially unlimited civil liabilities?


It's crazy how this is even an issue in Android when it has been a solved problem in desktop Linux for decades. I can install Ubuntu on any ancient laptop and have daily security updates for life. The only explanation I can think of is that Google planned this on purpose so users would be forced to buy new devices every few years.


It's nothing to do with Linux. It's because desktops use standardised discoverable hardware with mostly documented peripheral interfaces. ARM phones do not.


That is really only a problem for low-level drivers and kernel-level issues. There is nothing stopping OS vendors from fixing this without updating the drivers.


There is for Linux because it doesn't have a stable driver ABI. The drivers and the kernel are inseparable.


I never said otherwise. But the most critical android vulnerabilities are almost always fixable in userspace.


At the risk of sounding uninformed, does this mean the Year Of The ARM Desktop will have these same issues with updates and standardised hardware, or is it just a mobile SOC thing?


If you can force phone manufacturers to standardize chargers, the same is possible for other issues.


Google did not "plan this on purpose" - if anything, they managed to deal with the general crappiness of ARM hardware somewhat better than other Linux-on-mobile efforts did, which is why Android became dominant in the first place.


Dictated obsolescence.

Knowingly and willingly exposing your customers who are unable or unwilling to buy a new device to hacking.


The unfortunate reality is that most users don't care about security. Google can continue to get away with only 3 years of security updates. Another reason why I am considering going back to an iPhone.


They don't care about security because they have never been affected by having an Android security patch level too old, and they don't know anybody who has. And they probably won't, honestly.

I find it very concerning that I can't download an update to fix this particular vulnerability without help from the manufacturer of my device. PNG has nothing to do with drivers, after all. And yet, I doubt I'm going to get pwned over this one. Mitigations to the rescue.


I disagree. Users may not care about security in the same way that technical people do, but they do expect the data on their phones to be private (nudes, in particular). I think the reason why the current situation is generally accepted is that nobody has exploited one of these vulnerabilities to extract user data on a large scale, yet.


I thought that it still _might_ receive security updates, but now it's at Google's mercy. Or is the fix impossible without a whole OS update for some reason?

Either way it is ridiculous indeed. I have a 5X. It certainly pushes me to install LineageOS sooner. A side question: does anyone use Plasma Mobile on their 5X? - https://www.plasma-mobile.org/neon-arch-reference-rootfs/


If you follow the monthly Android Security Bulletin, there has been a long stream of fixes for remote-code-execution vulnerabilities in codec implementations. It got big in 2015 when the Stagefright-branded vulnerability got some publicity, and they have continued testing and fixing an endless stream of codec vulns ever since.

There was also a Bluetooth RCE in this month's bulletin. And in November there was yet another round of WiFi pwnage, this time in Qualcomm WiFi (CVE-2018-11905). It's all quite bad.


And this is exactly the reason why you need vendor support to apply fixes. Half of those fixes are in binary blobs, so if you don't update those (e.g you rely only on AOSP/Lineage updates), you're screwed.


Apart from the fact that the only answer right now for security-conscious Android buyers is "get a Pixel" (I'm exaggerating; flagships from a few vendors are good to go for 1-2 years), this is once again a reminder of the damage done by binary blobs, here directly to consumers (they have non-updatable security holes).


Actually Android One devices are also guaranteed (inasmuch as the tiny text at the bottom of the Android One website [1] says) three years of security updates

> Monthly security updates to be supported for at least 3 years after initial phone release.

(scroll down to the bottom of the page and look for the double asterisks footnote. I don't know why they want to hide what is IMO the best reason to get an Android One device)

[1]: https://www.android.com/one/


Good point. They previously removed this language from the website; glad they put it back. So yeah, buying an Android One at launch is good advice, for the 3 years of security support.

It's still a short window IMHO, and it's bad for the planet to throw away working devices after 3 years.


The long-term solution is to move away from binary blobs. The PostmarketOS project is trying to achieve this for at least a few hardware models, including most notably the LG Nexus 5.


I've managed to find the list [1] of bulletins, but do you happen to know how one can subscribe to receive updates through email?

[1] https://source.android.com/security/bulletin

edit: in hindsight, i could have just looked at the article link rather than googling - but I still can't work out how to subscribe for updates.



This is great!

Unfortunately, there don't seem to be any bulletins later than March 2018 (irrespective of whether one joins the group or not).


Do any of these have public PoCs?


>> We encourage all users to update to the latest version of Android where possible.

Good luck with that.


for those who don't speak google, they approximately said 'buy a new phone'


My phone gets the latest Android updates the day they are released even though it is a discontinued phone and not from Google.

The issue isn't Google or Android. It is phone manufacturers.


Heh. I got the Galaxy Nexus when it was the flagship, and I never got a single update. I got the Nexus 5X a couple of years ago, and Google is failing to provide this update for that phone.

If Google can't even get it right on their own devices, they can't be shifting the blame to manufacturers.

The only remotely-safe choice is an iPhone.


> The issue isn't Google or Android. It is phone manufacturers.

To be fair, Google only guarantees support of even the Pixel for 3 years... so it still kind of is.


AND carriers. Don't forget the carriers.


It's fine, Google updates everything through Google Play Services now.


Is this sarcasm or serious? If serious, how does this work in practice on older phones that don't get OS updates but do get Play Store updates?

I see this mentioned a lot on reddit, but unless there are no security updates happening through android itself it seems ill informed.


I deleted a couple of responses to this but the short version is that no, Skia (the affected component) is not being updated through the Play Store Framework. Flippant responses to security issues are not useful.


Thanks! I was 99% sure the common reddit response was wrongheaded.

I'll make this point explicitly when I see it, that phones may get some updates, but that old phones don't get the security patches at an os level.

Am not on android so I didn't want to be mistaken in how it worked.


I don't have the info on Project Treble (Android 8 and up) - this changes things as the "Android Framework" is separated out from the rest of the phone firmware.

It's not clear if this would include kernel and driver updates (I don't think so), but it's very likely things like Skia and codecs would be able to be updated faster. Very happy to be corrected.

It also looks like not all devices running 8.0 were required to be Treble compliant. Hi, Samsung.


Gotcha. So for 8.0 devices onwards the issue will be less severe. Still some unpatched bugs, but a smaller portion of them.


I jailbroke my Android tablet. El Goog won't let me update it unless I flash a new ROM. Because how dare you control your own device!


Flippant responses to security issues aren't helpful.

In your case, (I presume) you are both the owner and the only user with physical access, and "you control your own device" is unique.

In my case I'm the owner, but other people may have temporary physical access, and the identity of the "you" is therefore somewhat ambiguous. As owner I want protection against one of the possible values of "you", see?


Whilst that makes sense for many versions of secure boot, it doesn't make sense for blocking updates to a flashed ROM.

If the owner of the device flashed their ROM, they want updates. If someone other than the owner flashed the ROM, the true owner of the device is screwed anyway. Blocking updates won't save the true owner. Instead, this seems nothing more than lockdown for the sake of control.


As I understand it, the flashing breaks a chain of trust. Once a device has booted from software written by others, your own software can never again be truly confident that it is in full control.

It was written to flash by untrusted software; it may have been modified on the way such that it thinks it's unmodified - such that the signatures match when it checks itself - but actual execution uses a modification made by that untrusted code.

It's a bit like that paper about modifying the compiler. https://www.win.tue.nl/~aeb/linux/hh/thompson/trust.html

From the owner's point of view, being able to update the ROM is the manufacturer's statement that nothing done by a user with physical access survives.


This is actually not the case. The owner is always made aware of whether the system has been opened up to "untrusted" software or not (they get a warning at boot if it has been), but an "untrusted" system can still be OTA-updated if the original OS image has been preserved as-is. This is often combined with some amount of custom modification by separately installing an "overlay"-based solution such as Magisk. But a full-custom ROM cannot be updated in this way, because the manufacturer's OTA update is monolithic and effectively replaces the original stock ROM!


AIUI, an OTA update (or USB update) is effectively carried out by software that was booted from the same flash memory to which the untrusted software was written.

I accept that the manufacturer's OTA update is intended to be monolithic, is designed to be monolithic, but what assurance do I (the owner) have that the software that was flashed by a physical user actually flashes its replacement monolithically? That it leaves nothing behind?

EDIT: on further reflection, it seems possible to design a phone that provides such an assurance. That any monolithic OTA update actually has to be monolithic, even if untrusted software is in control of the main CPU. But I wouldn't want to bet that any/many/most phones built today actually offer that guarantee.


Android system updates are monolithic OS images. So what are you going to do exactly, replace the user's custom ROM with a stock system?


> replace the user's custom ROM with a stock system?

Yes. I can't see any technical reason why Google disallows factory resetting my device without re-flashing a ROM.


I have a Moto G6 Forge purchased a couple months ago, and it lists no updates available with the last update Jan 1 2019.

This is just the android ecosystem these days, don't count on security updates?

(This phone is actually my first smart phone, I can't even explain why I had a flip phone for so long... except this. But I'm still figuring out what to expect from em.)


If you want the quickest Android updates, you'll need either a carrier unlocked Google Pixel, or an Android One device. These are devices where there are fewer hurdles between a patch and your device.

If, for example, you have a Samsung device from Verizon, it goes something like this: Google releases new source code containing patch. Samsung takes a few months to roll it into their updates, and sends it to Verizon to Q&A. Verizon either pushes back or accepts it after some time. That whole process takes far longer than it should, partly because they are mixing security and features in the same updates.


Thanks, so I gather it's normal to expect significant delays in security patches for most Android phones, with some exceptions.

People are talking about how _old_ phones don't get patches at all, but even most _new_ phones have "zero-day" vulnerabilities (cause zero day lasts for months apparently) for significant periods.

What a world! How is this okay?


How it's OK is likely because there hasn't been a huge category-5 fuck up yet. Or at least none that we know of. Some of this is probably from sheer luck. Some of it is probably from some decent security design baked in like sandboxing and the like. The first time it becomes public that 80% or more of Android users just had their personal info stolen, it'll suddenly become not okay. Especially if that user group contains significant amounts of celebrities and government types.

It's not all doom and gloom though - there have been improvements. Different partitioning schemes, breaking core services out of firmware and into the Play Store, etc. It's just that thus far Samsung, the biggest Android player by far, has decided not to implement all of them.


I have a Moto x4 which is an Android One device. I got the security patches a few days ago.


Yeah, I didn't realize I had to pay extra for a phone that would receive security patches. Oops.


Shouldn't all code be gradually updated to use Rust libraries when they exist? Is it a stupid idea?

I know it's a lot of work, but the Android/Chrome teams are big, and this is important enough work that could improve the security of the systems.


It's more like Rust is still young and there are nowhere near as many people with the same level of skill as there are C/C++ programmers. As an example, just think of the amount of work that's gone into libavcodec/libavformat; it will take years to replicate and reach the same level of functionality.

Eventually it will happen though (at least I hope so).


Here we're talking about PNG format, for which a codec already exists.

I was asking about the integration work, exporting high quality Rust libraries with a C API and calling from Skia for example.


This has actually been happening for Firefox [1]. Several parser/codec libraries have already been ported for exactly this reason.

1. https://wiki.mozilla.org/Oxidation


It's cool, I knew this is happening, but didn't know the code name for it. The big thing will be when WebRender is shipping to Firefox.

The plan for C++ / Rust interop sounds amazing.

Anyways the main question is whether other companies should reuse the work already done by the great Mozilla team.


What's exciting about the interop plan? I'm currently working on a project that requires including some C headers in Rust. Trouble is the bindgen library doesn't recognize inline functions and I need to create wrappers for those.


Well... Rust has some safety features, but 1. so do others (Go is a Google project...), 2. it helps with some things and less with others. So yeah, if you have the resources, using Rust/Go/whatever and taking a security-first view is sensible. It's not a silver bullet, but it's probably helpful.


Actually Rust _is_ a silver bullet, that's why it's so exciting.

System level programming memory safety was a dream for a long time.

Go is not capable of this at all, so it's out of question for replacing low level libraries.


Why isn't Go capable of system level programming memory safety?


It does not actually provide memory safety in concurrent scenarios, only for purely sequential code. Given the predicted use-case scenarios for Go, that's pretty much a truck-sized hole in the language's purported guarantees, especially in comparison with Rust.


It has a runtime with a GC. It's capable of safety, but not the low level performance/predictability of the Rust approach, and is also harder to embed than a Rust approach.

This is not a disqualifier for most use cases, but is for some.


Well, the problem with Go, I believe, is that as soon as you have to write high-performance code you disable the garbage collector, which removes a lot of the safety features.


Technically that's a very fine idea. I write codecs in Rust, and it's working great for this use case.

But a complete rewrite is a hard sell where just fuzzing and patching existing code seems like a quicker fix and not rocking the boat too much.


It's an interesting question whether fuzzing or replacing is the better way to go. Newer Rust libraries can have other (unknown) problems, that's why I was interested in which is the statistically better solution for improving the code base.

But from what you wrote, it sounds more like what's simpler to implement for the team gets done.


I think Skia is dynamically linked with a C++ API.


Is there a test to see if my device is vulnerable?


I remember a similar bug with GIFs years ago. It was possible to embed HTML inside a GIF and browsers would actually render the HTML.

It was great for stealing people's sessions on forums using a malformed avatar.


There was also the Windows Metafile mayhem [0].

[0]: https://en.wikipedia.org/wiki/Windows_Metafile_vulnerability


yes! that one was a doozy.


Seems the vulnerability was in Skia, thankfully, rather than libpng.


Is this another backdoor, or am I being paranoid again? Because this type of exploit is like the ones you see in the movies - we start laughing ironically and go "c'mon now, that's insane... hack every phone using an image? Not in a billion."


It’s notoriously hard to make parsers bulletproof against malicious input.

For a toy example, imagine the format says [number of bytes to come] [the bytes]. Allocate an array with the specified size and then copy the rest of the message into it. Right?

Now you’ve got a remote buffer overflow. How about [start sentinel byte] [up to N data bytes] [end sentinel byte]? Just put more than the allowed bytes before the end sentinel.

Unless a format is designed to facilitate secure implementation (which would include being very simple), there are going to be at least a few vulnerable implementations. Is it a backdoor? Read the patch yourself, decide whether the original authors and reviewers knew what they were doing.
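
To make the first toy pattern above concrete, here is a minimal C sketch of that vulnerable shape - a hypothetical length-prefixed format and hypothetical function names, not the actual Skia or libpng code:

  #include <stdint.h>
  #include <stdlib.h>
  #include <string.h>

  /* Toy format: [4-byte length][payload]. The allocation trusts the header,
     the copy trusts the wire - the mismatch is the bug. */
  int parse_chunk(const uint8_t *msg, size_t msg_len, uint8_t **out)
  {
      if (msg_len < 4)
          return -1;

      uint32_t declared_len;
      memcpy(&declared_len, msg, 4);        /* length claimed by the sender */

      uint8_t *buf = malloc(declared_len);  /* sized from the header... */
      if (!buf)
          return -1;

      /* ...but filled with however many bytes actually arrived. If
         msg_len - 4 > declared_len, this writes past the end of buf:
         a classic remote heap overflow. */
      memcpy(buf, msg + 4, msg_len - 4);

      *out = buf;
      return 0;
  }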


> For a toy example, imagine the format says [number of bytes to come] [the bytes]. Allocate an array with the specified size and then copy the rest of the message into it. Right? Now you’ve got a remote buffer overflow.

Well then you're doing it wrong. You copy [number of bytes to come] bytes, not [rest of message] size bytes, only once you've validated that [rest of message size] is not too large to begin with. That is a completely toy example. A decent programmer would never trust the remote end to actually just send [number of bytes to come] bytes.
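
A sketch of the checked version this comment is describing (MAX_CHUNK_LEN is an arbitrary cap made up for the example):

  #include <stdint.h>
  #include <stdlib.h>
  #include <string.h>

  #define MAX_CHUNK_LEN (16u * 1024 * 1024)   /* example sanity cap */

  int parse_chunk_checked(const uint8_t *msg, size_t msg_len, uint8_t **out)
  {
      if (msg_len < 4)
          return -1;

      uint32_t declared_len;
      memcpy(&declared_len, msg, 4);

      /* Reject a header that claims more bytes than actually arrived,
         and cap it so a hostile header can't force a huge allocation. */
      if (declared_len > msg_len - 4 || declared_len > MAX_CHUNK_LEN)
          return -1;

      uint8_t *buf = malloc(declared_len ? declared_len : 1);
      if (!buf)
          return -1;

      memcpy(buf, msg + 4, declared_len);   /* copy exactly what was declared */

      *out = buf;
      return 0;
  }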


> A decent programmer would never...

You might want to Google for ImageTragick. Or heartbleed. Or shell shock. Or Java deserialization. Or flash. But please, tell me more about what a decent programmer would never do.


Decent programmers are also context dependent. For example, when ImageMagick's delegate functionality was envisioned (before 1999 - that's the earliest page I was able to quickly find that mentions the feature), IT security was not taken that seriously. Mitnick had basically just been arrested (1995), and multi-user systems were all guarded and watched closely by bastard operators from hell, like dragons sitting on a mountain of gold.

And PHP was seen as harmless fun.

And malware was thought of as boot sector viruses and even though the first worm (1988, the Morris) worked exactly like what people are harping about (buffer overrun), no one cared, because C++ was slow to compile, and managed runtimes are evil.

And still no one cares. It takes some competence to stop using C, but it's easier to simply shout at those who point out the obvious.

And when someone brings out the good old Therac-25 everyone starts to fall silent. (Even if healthcare accreditation is very expensive, it's expensive exactly because the whole IT field is still full of unverifiable code.)


> For example when ImageMagick's delegate functionality was envisioned ... IT security was not taken that seriously

Yes it might have been easy to get away with writing the OP's faulty specification into code back then. After 25 years of people screaming about the dangers of buffer/integer overflows, it's very hard for me to think that a serious C/C++ programmer could look at that specification and not see the issue before blindly committing it to code today. I suppose it could happen.

OpenSSL (which everyone else here is crying about as an example) is also over 20 years old.

> And still no one cares. It takes some competence to stop using C, but it's easier to simply shout at those who point out the obvious.

Well imagine trying to rewrite the Linux kernel today in a memory safe language, without losing any support for any of the current target architectures. Or NetBSD even more so for that matter. Nobody has convinced me it could be done, and as of yet, it hasn't been.

Throwing away C and C++ for good is fine - if you don't want to do much right now in the way of embedded programming, or working with sophisticated game engines (e.g. UE4), just for fun.

If you're not confident of the quality of the code, then maybe don't deliver it to all and sundry saying it's fit for purpose. And if you are confident, then check it again. That's all I'm saying.


> Nobody has convinced me it could be done, and as of yet, it hasn't been.

Agreed. And even if it could be done, it would of course come at an enormous cost to the current pace of driver and other updates.

Google is trying something with Fuchsia though.

And there are experiments similar to how Linux itself got started. (Redox OS comes to mind.)


> Google is trying something with Fuchsia though. And there are experiments similar to how Linux itself got started. (Redox OS comes to mind.)

Aware of both of these, but yeah, wake me up when either is close to being even a niche competitor - for example, as complete and accessible as Haiku currently is.


What's Haiku's offering? It's BeOS resurrected. It's in C++. (Which - IMHO - has too much baggage, and still lacking sane module support.) In development since 2009 and still lacking SMP, hm.


Oh, nothing on a technical level, I agree with your criticisms there. Just in the sense that it's available now. It has package management and you can install applications, compile applications, run applications etc. right now. Unlike Fuchsia and Redox AFAIK.

Most of the BSD's didn't have decent working SMP for a period longer than that either.

I think it's kind of sad that it takes a company the size of Google to create something like Fuchsia, which we can't be sure will ever see the light of day for most users, even given the vast pools of code available in Linux and BSD they could reference.

Hardware has grown exponentially in complexity since Linux was initially developed of course, which makes the job of writing a new OS much tougher. But you could probably find 20-30 different hobby OSes over on osdev.org that are more complete and accessible for the end user than Fuchsia or Redox right now, even if they are just more boring POSIX clones.


> Hardware has grown exponentially in complexity since Linux was initially developed of course, which makes the job of writing a new OS much tougher.

Though, interestingly, hardware also got a lot smarter (because it usually has some kind of microprocessor and RTOS on it), so in theory interfacing with it should be easier. (Serial buses, simple enumeration, simple addressing; we have a lot of good abstractions, no need to fiddle with registers, just use a PCI-E, DMA, or I2C library.)

But of course at the same time just supporting these smart standards is hard (because initialization of things is not trivial, even just getting DDR RAM to work requires tinkering with MSRs, and ACPI is its own special hell).


> In development since 2009 and still lacking SMP, hm.

Er ... what? Where are you getting this information? Haiku has been in development since 2002, and has SMP support since ... 2003/2004?

NUMA support is a little shaky, but I know users who have run Haiku on 32-core Ryzens with no issues. So we definitely have SMP support.


Wikipedia. And late night misreading. Sorry for causing confusion!

Now re-reading it, the article clearly states that development started in 2001, and that it has rudimentary SMP support.


Yes, I don't know why, the SMP support is far from "rudimentary". I don't think we ever had a Giant-lock, and our scheduler is pretty good, too (runqueue-based, O(1) w.r.t. threads, O(log N) with respect to cores, fully utilizes ACPI topology information.)


And yet, look at all the billions (trillions?) of lines of code in the world which do not have a horrible security flaw. Heartbleed is notable because of its rarity. OpenSSL is over 500,000 lines of C, written by 500 contributors, and somehow it generally works pretty well.

Maybe the phrase "a decent programmer would never" unfairly assigns blame to an individual's skill, but I think it would be fair to say that the programmers who designed and implemented these bugs were not doing decent work on the day that they made these mistakes.

If our standards for C programmers don't include bounds checking of array access, what exactly do they include?


The whole problem is that you only need one mistake in all those billions/trillions of LOC's, and your systems get 0wned. I'm pretty sure that "our standards for C programmers" don't include "programmer is guaranteed not to make any mistake, EVAR", whereas that's an entirely appropriate standard for automated, language-based security.


See the related article posted from Microsoft today; Microsoft: 70 percent of all security bugs are memory safety issues

  https://news.ycombinator.com/item?id=19138602
If we could get rid of that 70% that would be great, but it still leaves programmers doing other stupid shit in the other 30% of cases.


You and I evidently have different definitions of "decent programmer" then. It sounds like your standards are much lower.


Unless by "decent programmer" you mean a non-human programmer, he's right.

This might be shocking to a flawless supercomputer AI such as yourself but humans make mistakes.


Look back at how the original problem is specified.

> imagine the format says [number of bytes to come] [the bytes]. Allocate an array with the specified size and then copy the rest of the message into it. Right?

If your client came to you and said that was the specification, you'd be negligent to fail to point out the obvious flaw with the logic, wouldn't you?


IIRC Heartbleed was basically exactly that.


Heartbleed triggered a major audit and a fork (LibreSSL) of the OpenSSL codebase, wherein much of the code was found to be of less than stellar quality:

"Unfortunately, the OpenSSL basket was being watched somewhat less than very carefully. Yeah, it has bugs, but surely somebody else will fix them. And worst case scenario, since everybody uses the same library, everybody will be affected by the bugs. Nobody wants to be alone".

https://www.openbsd.org/papers/eurobsdcon2014-libressl.html

So I'm pretty sure the one example you have picked here reinforces my point.


you... don't have much real world experience with coding, do you. Big teams, huge codebases that nobody groks completely, weird legacy code etc. All these create tons of sub-optimal implementations on various levels.


If there were easily exploitable buffer overflows in one of the weird, legacy C++ programs I and my team have to maintain, and we didn't take steps to protect that program from abuse, ambulances and fire engines for nearly 8 million people would never be called to their destinations nor arrive on time.

Is that real world enough for you?


That is entirely fallacious reasoning. Your programs could be full of exploitable bugs (and they probably are) yet nobody knows or cares enough to exploit them. OpenSSL had trivial errors unchecked and presumably unexploited for years.


Or maybe we know the program we inherited has flaws, and we don't connect it/publish its source code to the internet claiming it's fit for general purpose use, and we strive every day to provide new mitigations and defense in depth to prevent its exploitation.

But you go ahead and believe it's because of dumb luck if you like.


You made a simple If-this-then-that claim that was a non-sequitur, that's my only point.

> we strive everyday to provide new mitigations and defense in depth to prevent its exploitation.

I'm fairly confident that even if you didn't do that it wouldn't make much of a difference, because the likelihood of anyone attempting to exploit your particular niche program is quite low.

> But you go ahead and believe it's because of dumb luck if you like.

I didn't say it was dumb luck. The fact of the matter is that most software isn't hardened in the way you describe, for simple economical reasons.


Sorry where is the non-sequitur? The previous poster purported that I mustn't have much "real world" experience in coding, and I responded that I and my team wouldn't be entrusted to successfully maintain a program and network that coordinates emergency services first responders over a large land mass and number of people if that was the case.

If you honestly think that emergency services infrastructure is less of a valuable target to a rogue operator than Internet Widgets Inc. or whatever then I think we are done here for now.

This is all a huge over-reaction to my initial point, which was: the format that the OP proposed was busted, and any good C/C++ programmer should notice that immediately rather than implement it blindly. If they can't do that then, yes, definitely stick to a managed language. Which I agree is good advice anyway.

Thank you and have a nice day.


> Sorry where is the non-sequitur?

It is in your original statement (abbreviated):

If there were easily exploited buffer overflow in <our program>, ambulances and fire engines <for our customers> would never be called to their destinations nor arrive on time.

One doesn't follow from the other. Your statement implies there are bad actors in existence that would actually exploit such flaws in order to disrupt the service. That may or may not be true, therefore it does not follow.

> The previous poster purported that I mustn't have much "real world" experience in coding, and I responded that I and my team wouldn't be entrusted to successfully maintain a program and network that coordinates emergency services first responders over a large land mass and number of people if that was the case.

That's not what I actually replied to, but it's also a non-sequitur. It would be entirely plausible that you could be a reckless band of cowboy coders that happens to be able to produce a working product that is nevertheless full of theoretically exploitable flaws that just happen to not get exploited, because nobody cares. Also, whoever "entrusted" you may themselves be entirely reckless or incompetent.

I'm not saying that any of this is the case, I'm simply saying you can't logically deduce one from the other. Therefore it doesn't really work as an argument.


> One doesn't follow from the other. Your statement implies there are bad actors in existence that would actually exploit such flaws in order to disrupt the service.

Rubbish. If the service in question was directly accessible from a public network and its source code known and published, it would be attacked in less than 5 minutes. That is the assumption that causes millions of dollars to be spent on securing the application and the environment every year. We are not prepared to take that risk, and we secure appropriately and expensively.

> Also, whoever "entrusted" you may themselves be entirely reckless or incompetent.

That is how employment works in the wider world of Internet Widgets etc. but not in critical government regulated infrastructure like emergency services.

Yes, we are unable to prove 100% that the application has no externally exploitable bugs, despite our best efforts, so we don't claim that it is secure and go and make it accessible to untrusted sources of input.


> If the service in question was directly accessible from a public network and its source code known and published, it would be attacked in less than 5 minutes.

You just keep going with the non-sequitur, but now you're also moving the goalpost. Your original statement didn't contain anything about "directly accessible from a public network" (which I assume isn't the case) or "source code known and published" (which you said wasn't the case).

And it is still a non-sequitur. Why would anybody expend non-trivial resources to find an exploitable flaw in an open-source codebase to actually bring down an emergency service? There's no reasonable way to profit from that. If you're some foreign hacker, you might want to find the exploit, but you wouldn't attack right away. Again, I'm not saying nobody would possibly do it, but one does not automatically follow from the other.

Heartbleed really is the best example on how a critical piece of software infrastructure (including a lot of government regulated infrastructure) had a publicly visible flaw that went undetected (and presumably unexploited) for several years.

> That is how employment works in the wider world of Internet Widgets etc. but not in critical government regulated infrastructure like emergency services.

I don't know man, I'm here arguing with this guy who doesn't seem to grasp a basic concept in logical reasoning, yet he's working on critical government infrastructure.


> Heartbleed really is the best example on how a critical piece of software infrastructure (including a lot of government regulated infrastructure) had a publicly visible flaw that went undetected (and presumably unexploited) for several years.

It's already been discussed in this thread how old OpenSSL is - it dates back to 1998, when the risk of buffer overflows was not as well understood or publicised as it is now. It has also been addressed that code audits of OpenSSL were done as a result, which has led to both forks and patches.

In the context of the new, one liner graphics format proposed by the OP in 2019 that had an obvious bug written straight into its specification, this thread has now reached absurd levels.

> Again, I'm not saying nobody would possibly do it, but one does not automatically follow from the other.

Ok. I'll tell my customer we don't need all the firewalls, IDS, code inspection, risk assessment and hardening because zeroname on HN told me it was OK.

> I don't know man, I'm here arguing with this guy who doesn't seem to grasp a basic concept in logical reasoning, yet he's working on critical government infrastructure.

I'm not at all offended by your ridiculous insults. I feel rewarded every day that my team and I are working on something worthwhile, and also the white-knuckled fear that goes with it on occasion when we worry whether we've missed something or not.

Let's just both be thankful you don't live in an area where my team provides emergency services then.

Again, have a nice day, "man".


> It's already been discussed in this thread how old OpenSSL is...

That's completely irrelevant to the point I am making. I'm using it as a counterexample to your reasoning that if code for (your) critical infrastructure was published, any obvious flaws in it would almost immediately be discovered and exploited. That's completely orthogonal to how old or poorly written the codebase is.

> Ok. I'll tell my customer we don't need all the firewalls, IDS, code inspection, risk assessment and hardening because zeroname on HN told me it was OK.

Well, I didn't. You're clearly not only unwilling to engage in logical reasoning, you also seem to lack basic reading comprehension.

> I'm not at all offended by your ridiculous insults...

I don't mean it as an insult, but I'm really losing my patience here. Your ego is so tied to being right that you can't admit to having made a little logical blunder there. Logical fallacies are actually really common, there's no shame in them. In fact, I'm prone to arguing the same If-this-then-that fallacies myself.

Either way, if you really care to be right, it doesn't matter what your job is or who gave it to you and how smart and diligent your team is. You just have to get your propositional logic correct.

Simple example:

False: If I leave the door unlocked, people will come and steal my stuff. (Non Sequitur)

True: If I leave the door unlocked, a thief will have it much easier to come in and steal my stuff.


From wikipedia: the Heartbleed bug was introduced into OpenSSL in 2012 and publicly disclosed in April 2014.

The Morris worm was released in 1988.

People are prone to make logical errors and to overlook stuff, even when they are aware of the risks. Thinking is hard, and every little bit of help is useful, either from peers or from tools.

Do not take what zeroname wrote as an insult. He simply pointed out the mistake you made (indeed, what you wrote does not follow - it may be possible, even probable, but it does not follow automatically).

You can look at all this as a peer review, just not for code, but for thought/logic argumentation.


"You're clearly not only unwilling to engage in logical reasoning, you also seem to lack basic reading comprehension."

- that is not simply pointing out mistakes. Those are ridiculous insults that have no place on HN. I didn't think zeroname came off better in that exchange at all, in terms of who was trying to communicate rather than just appear superior. There was a crossed wire, and condescending lectures in basic logic mixed with insults did not help.


>A decent programmer would never trust the remote end to actually just send [number of bytes to come] bytes.

Well, I applaud your optimism, anyways, but programmers of almost any skill level make egregious security mistakes, especially when working on a deadline. Maybe not everyone would make this mistake, but managing trust while writing optimal code is very tricky.

LZO is a fantastic example of a library that had an "obvious" security flaw for many years, with a fairly trivial integer overflow bug, due to how the decoder dealt with the variable length int encoding.
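
For illustration only - this is the general shape of that bug class, not the actual LZO code - a length/offset bounds check can look perfectly reasonable and still be defeated by 32-bit wraparound:

  #include <stdint.h>
  #include <string.h>

  /* 'offset' and 'len' both come from attacker-controlled input. */
  int copy_field(uint8_t *dst, uint32_t dst_size,
                 const uint8_t *src, uint32_t offset, uint32_t len)
  {
      /* Looks safe, but offset + len is 32-bit arithmetic: with, say,
         offset = 16 and len = 0xFFFFFFF8 the sum wraps to 8, the check
         passes, and the memcpy below runs far past the buffer. */
      if (offset + len > dst_size)
          return -1;

      memcpy(dst + offset, src, len);
      return 0;
  }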


> A decent programmer would never [...]

And that kind of attitude is why we still have security bugs.


> And that kind of attitude is why we still have security bugs.

Absolute poppycock. We could all program in completely memory safe languages tomorrow with only one signed integer type and crap programmers would still find a way to write security holes into their programs.

I'm in no way condoning the use of C/C++ forever, the writing is on the wall for those languages, as much as I love them.

But programmers have to learn first and foremost to take responsibility; if you're writing code that runs with elevated privileges then BE CAREFUL. If you're writing code that is reading data from an untrusted source (disk, network or otherwise) then BE CAREFUL. Hell, being careful even if you think the data source can be trusted is a good starting point - defensive programming 101.

We cannot blame our tools forever, but we can improve them.


"Be careful" is not very actionable. Here are some things you can actually do:

- Perform periodic external security reviews

- Use fuzzing for all uncontrolled/user-inputs (a minimal harness sketch follows this list)

- Use static analysis tools

- Maintain a security bounty program

- Send your employees to security training

- Use a memory safe programming language
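
To make the fuzzing item above concrete, here is a minimal libFuzzer-style harness in C; parse_image is a hypothetical entry point standing in for whatever parser handles untrusted input (build with clang -g -O1 -fsanitize=fuzzer,address):

  #include <stdint.h>
  #include <stddef.h>

  /* Hypothetical parser under test - substitute your own entry point. */
  int parse_image(const uint8_t *data, size_t size);

  /* libFuzzer calls this repeatedly with mutated inputs; with ASan enabled,
     any out-of-bounds access in the parser aborts with a report and a
     reproducer file. */
  int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
  {
      parse_image(data, size);
      return 0;   /* non-zero return values are reserved by libFuzzer */
  }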


I'd add "adopt an information-theoretic approach" to security analysis. (Which is basically what taint analyzers perform.) Think through how systems/components/libraries/functions can and do interface with each other, and try to secure these points. (Make them type safe, make them strict, report meaningful errors ["expected this but got this" is a million times better than "invalid input"], so they will be easy to maintain and make even more secure.) Try to extract out these parameters as much as possible so you can avoid possible impedance mismatches across the interfaces.

Also, checklists. Checklists are good. And an inventory of used components, and their versions. (This makes it easy to do a CVE review from time to time, and then to automate the review eventually, so only the list maintenance will remain manual.)

Defense in depth, but not through obscurity. (There usually are low hanging fruits. Enforce use of password managers, invest in centralized credential storage, don't overdo password expiration and 2FA. Security training is also a good idea, but the real goal is to nurture a security aware office/team culture.)

Social engineering [or just plain old laziness] is still a serious threat.

Timeboxing. Set aside 1-2 days every month to work on meaningful security-conscious goals. So try to lay the fundamentals for that library upgrade that has been overdue for years, try to make systems reproducible (also good for DR), try to add a few simple validations here and there against local file inclusion (or whatever comes up during the month, or during the checklist review).

Also, accepting that maintaining network-facing systems has an inherent ongoing cost. (Unless you want your iToaster to eventually end up as part of a botnet.) Sometimes we have to let things go, and accept that some business models (or hobbyist projects) are not worth doing sanely and securely.


That is a nice list and all, and I would add that if you can't use a memory safe programming language then you should look closely at the compiler flags in use as well.

But if you need all that to spot the obvious issue in the OP's original specification ... then wow.


> But programmers have to learn first and foremost to take responsibility;

That's not going to happen as long as all our licenses include something like "This software is distributed WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE."

> if you're writing code that ...

Being careful (I'd add: learn how to be careful) is a good platitude, though might as well just say so without the ifs. Especially for open source, you don't always know the context in which your program will run. There's responsibility of the dev, but also responsibility of downstream users. When companies wrap a cli program behind a web server they're asking for trouble.

Not failing on the data parse example above is expected of competent C/C++ programmers. I do think these days if you're going to use such languages outside of well-defined low-security-impact targets like non-networked games you're morally obligated to be better than competent. I can't change what exists though.

Managed languages help so much. I do think Java achieved its mission statement of dragging many C++ programmers halfway to Common Lisp (if halfway is a GC) and preventing many security bugs, but if more people were to go all the way, I think Naggum points to a big obstacle I don't know how to overcome: "What has to go before Common Lisp will conquer the world is the belief that passing unadorned machine words around is safe just because some external force has «approved» the exchange of those machine words."


Everyone makes mistakes, but not the ones their tools make impossible. Yes, if everyone working on it were perfect then you would be correct. Thankfully that is not a requirement for any profession.


There are parts of some jobs where perfection is required. People working at height don't drop anything (and there are cases where safety lines aren't possible). Surgeons don't get to just say "oops, my finger slipped" when holding a scalpel near your heart.

In "From the Earth to the Moon" [1], someone explains that the main problem of the Apollo 1 fire was that nobody thought to label that test "hazardous". Sitting on the ground was supposed to be safe.

We should expect perfection when writing C code, but we should also be clear that writing C is a hazardous activity, and not to be taken lightly. My surgeon is a human being, so I don't expect him to never make a mistake, but he's also a skilled and careful professional, and I do expect him to never make a 101-level mistake while holding a scalpel.

[1]: https://www.youtube.com/watch?v=J_HZr2otkg4


"There are cases where safety lines aren't possible" in programming, too, but such cases should be as rare as possible. Coding C/C++ software in this day and age (or software in any other language lacking extensive safety properties, for that matter) is the IT equivalent of working at height and without safety lines 100% of your working life, purely for shits and giggles.


I've made this point before too. Not only is perfection expected elsewhere, it's routinely achieved. Not always, but routinely. My example is a Cirque du Soleil performance. The routine absence of screw-ups points to some actionable advice, too: practice and train, study, have competence rankings.


New tools = New security bugs we still need to find.

If you know your tools, there is less risk that you produce security bugs. You know the side effects of all the commands and functions. If you use a black box and trust it fully, you will (unknowingly) find ways to create new security holes no one ever thought of before.

I'm not saying that we should not innovate, just that we should not rush headfirst into the blue sky thinking everything is fine simply because we use the shiny new thing where supposedly we can't ever introduce bugs.


Agreed. Though using conceptually better tools (parser generators) usually has real gains. (Even if it makes the system a bit more rigid. You usually can't just put an "if" in the middle of generated code. But this seems like a sane trade off.)


Unfortunately there are a lot of scenarios where C/C++ are still the only practical choices. No other language even comes close to their ubiquity AND performance at the same time. I'm sure the day will come though.

And there are those working on mitigating some of the worst issues with those languages; maybe someday those efforts will bear practical fruit as well.

https://github.com/Microsoft/checkedc
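For the curious, a rough sketch of what Checked C's bounds annotations look like, going by the project's documentation (the _Array_ptr/count syntax); the exact details may differ between compiler releases:

    #include <stddef.h>

    /* The bounds annotation ties the pointer to its length, so the
       compiler can prove or insert bounds checks on every access. */
    int sum(_Array_ptr<int> values : count(n), size_t n) {
        int total = 0;
        for (size_t i = 0; i < n; i++)
            total += values[i];   /* an out-of-bounds index is rejected or trapped */
        return total;
    }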


Even though you can still screw up as a programmer with a better tool, you should still pick that tool if it reduces the security risk by some number of percent over another tool. (As a Swede would say, "Think of the percentage.")

So why doesn't everyone go with the better tool? A large part of the problem is that experienced programmers encourage new programmers to use the older, less secure tools. I guess in a way to stay relevant.

Just this past week there has been an article about C programming almost every day on the front page here at Hacker News.


> Even though you can still screw up as a programmer with a better tool, you should still pick that tool if it reduces the security risk by some number of percent over another tool.

I agree in general, but not necessarily if the "better tool" doesn't run on or generate code for your target platform, or doesn't meet your performance requirements, or memory constraints, or the requirement to interface with other languages via a common ABI, etc.

And, in those situations, you need to BE CAREFUL.

> A large part of the problem is that experienced programmers encourage new programmers to use the older, less secure tools. I guess in a way to stay relevant.

Oh I get it. Blame the older generation who wrote the platforms & tools that gave you a job in the first place. If that doesn't work, blame the tools. Blame anything but yourself for writing shit code. I see.


> Blame the older generation who wrote the platforms & tools that gave you a job in the first place.

Sure, why not. Almost all of them write "shit code", by your definition.

https://www.cvedetails.com/vulnerability-list/vendor_id-72/p...

https://www.cvedetails.com/vulnerability-list/vendor_id-33/p...

https://www.cvedetails.com/vulnerability-list/vendor_id-97/p...

Plenty of people claim they can write secure C code, and 99% of them are rookies that have learned the rules but not their own limitations, IMO.


Sure, there are always practical limitations, but I think that is less of a problem today than it used to be.

We have newer languages, and a lot of languages have also gotten better at interfacing with lower-level libraries.

Experienced programmers can be a huge asset but at the same time a curse; there is no contradiction there. And I'm not arguing for a revolution that throws out all that has been gained in software, I'm just saying that new projects should leave the old tools behind.

I am, of course, also guilty of proselytizing bad ideas and writing bad code.


No, decent programmers know that no one is perfect, so they design systems such that even when the most experienced programmers make a mistake, that mistake is less likely to become an exploit. This is the entire premise of Chrome's architecture. Check the CVEs and you'll see it has about the same number of bugs as every other browser, but around 10x fewer exploits, because it's designed under the premise that bugs happen.


A decent programmer or team of programmers can select a good architecture, and also not pretend that it is okay to treat a 4096-byte buffer as if it were 4097 bytes long.
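To make that concrete, a hypothetical sketch of the classic off-by-one and its fix (the names and sizes are illustrative):

    #include <stdio.h>
    #include <string.h>

    #define BUF_SIZE 4096

    /* Buggy: if len == BUF_SIZE, the terminator lands at buf[4096],
       one byte past the end of the 4096-byte buffer. */
    void copy_bad(char buf[BUF_SIZE], const char *src, size_t len) {
        memcpy(buf, src, len);
        buf[len] = '\0';
    }

    /* Fixed: reserve room for the terminator and reject oversized input. */
    int copy_ok(char buf[BUF_SIZE], const char *src, size_t len) {
        if (len >= BUF_SIZE) {
            fprintf(stderr, "expected at most %d bytes, got %zu\n", BUF_SIZE - 1, len);
            return -1;
        }
        memcpy(buf, src, len);
        buf[len] = '\0';
        return 0;
    }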


Chrome did sandboxing as fast as they could because they had no better alternative, because we lack better libraries for parsing.

Decent programmers also try to err on the side of safety, use the right tool for the job, and so on. And parsers should not be written by hand; instead, the parser code should be generated by a battle-tested library/framework. (Or built with parser combinators, etc.)


Well, it's only hard if you do it in hard mode (in hand-written C/C++), just like making anything bulletproof is hard there. We (as a field) have known for decades how to do it safely; people just elect to do it unsafely for the usual reasons.


There is a decent argument that line-of-business apps in unmanaged languages are taking silly risks, but this is the operating system. If the platform has already spent the processing power, application developers are going to be pretty constrained.


IIRC, one of the first iPhone jailbreaks used malformed images as an attack vector to get root.


Yup, it was TIFF in the iPhone case


There was an image based exploit for the PSP as well.


PNG as well, iirc


There have been remote-control exploits from viewing a web page (classic), a PDF, a Word document, receiving a text message, just being connected to the internet, etc. And that's just the beginning; then come all the indirect attacks like stealing your domain, phone number, or identity for loans, etc.


Yeah, I'd say this is paranoid -- it turns out that file format decoders have been extremely insecure forever, and Google has been doing more than anyone to advance the fuzzing of public libraries to fix that. If they wanted to put a backdoor somewhere, they would not put it somewhere their own automated tools would detect and publish it.
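For reference, this is roughly what a libFuzzer/OSS-Fuzz-style harness for a decoder looks like; `decode_image` is a hypothetical stand-in for whatever parser entry point is being exercised, not the actual Skia/PNG code:

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical decoder under test; link against the real library. */
    extern int decode_image(const uint8_t *data, size_t size);

    /* libFuzzer calls this repeatedly with mutated inputs; any crash or
       sanitizer report becomes a reproducible test case. */
    int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
        decode_image(data, size);
        return 0;
    }

Built with something like "clang -g -O1 -fsanitize=fuzzer,address harness.c decoder.c", the fuzzer mutates a seed corpus of valid images and flags memory errors as it goes.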


Semi-OT: is there already a shellcode for this? The Chromecast should be vulnerable to this too, given it's Android based?


Chromecast was Chrome OS, which is not Android. (And I doubt that's changed.)



...which probably uses Skia as well.


For sure. This post freaked me out because my main machine now is a Pixelbook.


Is this going to be used as an opportunity to get us all to buy new phones much earlier than we normally would?


I woke up this AM to an update on my Pixel 2, so at least Google is keeping on top of their own phones (even though the Pixel 2 is one gen old now).


If I have a Samsung Tab S3, do I need to wait until Samsung releases an update to fix this?

edit: N/M, I found the updates on Samsung's page that address these

https://security.samsungmobile.com/securityUpdate.smsb



