Report on the technical vulnerabilities found in Huawei 5G [pdf] (finitestate.io)
152 points by UkiahSmith 6 days ago | 88 comments





"On dozens of occasions, Huawei engineers disguised known unsafe functions (such as memcpy) as the “safe” version (memcpy_s) by creating wrapper functions with the “safe” name but none of the safety checks. This leads to thousands of vulnerable conditions in their code."

Things like this are everywhere. Just stupid programmers, or deliberate method?


This is the "expected" outcome at most big tech organizations. I doubt there was malice or stupidity from the actual people coding this (the malice still exists, it's just a few layers up).

You have two kinds of elite at the top: a bunch of very smart people, completely disconnected from the daily work, setting up automated code checks and style rules. Let's call those the architects/Engineering VPs. Then you also have a bunch of short-term revenue people on a completely different side of the company, who own all the money and set the targets. Let's call those the Product VPs.

The daily work will be full of agile cargo cult (that is, all the meetings agile imposes, without any of the actual communication), with people on the same team reporting to different chains: engineers having to abide by crazy, ineffective coding style rules (usually because nobody cared to provide the tools to help achieve the code quality required), and product/managers abiding by the short-term revenue churn.

And since the Product org is the one that decides whether target goals were met for bonuses and promotions, you see why the actual coders might do that bullshit (which one ever got you a promotion: "I contributed to hugely announced feature X" or "I followed all the best practices set up by architects I have never met and prevented a security report two years in the future from dragging the company through the mud"?)

Most companies denounce the problems of having silos, but silos actually make the smaller orgs/teams responsible for their entire stack and tools. Investors hate it because it is more expensive, but in the end that is the real cost. A very vertical big corp will be cheaper, but the reason it is cheaper is that less work is being done, and it always ends up like this because it ignores the human nature of the people it is made of.


> Engineers having to abide by the crazy ineffective coding style

This is usually a sign of "metrics-driven management". They are then surprised that all their measures of "code quality" and "compliance" on their awesome dashboards and analytics show 100% while the actual product continues to spew forth bugs and disappoint their customers.

Any rules perceived as stupid will naturally be worked around. "You get what you measure."


...and that right there is Huawei, Boeing, Exxon, BP, and a thousand other big corporate dumpster fires in a nutshell.

Dumpster fires are everywhere, not just at large companies: contractors, restaurants, marriages, etc.

I would argue that the natural state of things is variable levels of dumpster fire.

True, but greenshifting, CYA-driven development, and derailment by metrics come naturally with a multi-layered middle-manager organization.

He didn't say anything about disasters being exclusive to big companies.

Agreed. I didn't mean to imply that I thought they said it was only big companies.

But isn't it pretty common in public discourse to focus on large corporations/capitalism, etc.? I only hoped to offer a note to keep this conversation from falling into that.

Thanks for following up on that, good catch!


I think the analysis glosses over compile time checks.

Take the "VOS_memcpy_s" function they condemn for not checking one of the arguments on page 35.

If the source contains a check that the destination length is no less than the source length, for instance in an assert(), and the compiler is able to conclude that will always be the case, it is perfectly justified in not generating code for the check.

The trouble is that the check code would also be missing if the "production code" was compiled with asserts disabled.

To make the conclusion they do, they have to have checked that:

A) All implementations lack the check and all are in not-link-time-optimized library locations (in which case asserts were probably disabled),

or

B) All distributed implementations (because VOS_memcpy_s was either static inline or #defined) lack the check, and at least one of them is called with arguments where the check does not hold or cannot be determined at compile time (in which case asserts were probably also disabled).

If they cannot prove either of these two points, it may just be that the code is in fact correct, and that the compiler was able to determine this and did not need to emit the checks.

For all their bragging about how state of the art their tools are, I see no indication that they have analyzed that deeply and comprehensively.

That said, Huawei's code is still of atrocious quality.

Or as the C-team calls it: "Industry Best Practice"


For both A and B, why do they need to check all locations? Wouldn't finding a single location be enough to prove that it's happening?

Also in A and B you seem to be saying it's difficult for them to see whether all the locations lack the check. But it sounds to me like their static analysis makes that fairly easy, and that that's what they found.

You are saying that VOS_memcpy_s's source code could be safe but have its safety checks hidden by the compiler, if the compiler has strong optimizations enabled while asserts are also enabled and every single call to VOS_memcpy_s has an assert before it. That seems like a pretty rare configuration, because AFAIK asserts are usually disabled when strong optimizations are enabled. And it seems unlikely to me that Huawei would put an assert before every single call to VOS_memcpy_s, for multiple reasons: (1) because Huawei is sloppy, and (2) because there's no reason to put an assert before VOS_memcpy_s if it has safety guarantees. So I conclude there's very little likelihood there are any checks in VOS_memcpy_s's source code.


No, you would put the assert inside the VOS_memcpy_s() implementation, and you would not make that a library function, but a static inline function, so the compiler can see the arguments every time it is called.
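A minimal sketch of what that would look like (my guess at the shape, not Huawei's actual source):

    #include <assert.h>
    #include <string.h>

    /* The check lives inside the wrapper, as an assert; static inline so
     * the compiler sees the arguments at every call site. */
    static inline int vos_memcpy_s_sketch(void *dest, size_t destsz,
                                          const void *src, size_t count)
    {
        assert(dest != NULL && src != NULL);
        assert(count <= destsz);      /* the bounds check in question */
        memcpy(dest, src, count);
        return 0;
    }

When a call site passes, say, sizeof(buf) for both destsz and count, the compiler can prove the asserts always hold and emits no checking code; and if NDEBUG is defined, the checks vanish everywhere, which is the other possible explanation for the missing code.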

As for disabling asserts in the code you ship: That is such a quaint 1980s thing to do, and people should have stopped doing it a long time ago.

First, asserts are usually free, in the sense that the values checked are already in registers.

Second, any assert the compiler can evaluate is free, because no code will be generated.

Third, asserts can often enable the compiler to produce better and faster code, because it knows more about the values it produces code for.

And this is not theory: Nobody has ever accused Varnish Cache of being slow, despite the fact that approx 10% of all source lines are asserts which cannot be compiled out.

My money is on Huawei's function doing the check with an assert which was compiled out for deliverable code.

Why else would they have the function in the first place?

The fact that code is generated for the function as shown indicates that it was not a static inline function, because the compiler would have reduced that to just the regular memcpy call; so it was probably in a library.

So yeah: Industry Best Practice Incompetence.


>As for disabling asserts in the code you ship: That is such a quaint 1980'ies thing to do, and people should have stopped doing it long time ago.

SQLite makes all its asserts no-ops in release mode[1].

Chrome disables DCHECK() in release mode[2], and it's used many times[3].

RE2 has debug only checking[4].

In Rust debug mode, signed integer overflow panics; in release mode it wraps[5].

Firefox disables MOZ_ASSERT() in release mode[6], and it's used many times[7].

I doubt Huawei is more modern and less 1980s than all these other projects.

>My money is on Huaweis function doing the check with an assert which was compiled out for deliverable code.

Are you saying it was compiled out because asserts were disabled, or because the compiler could reason about the arguments at every callsite? It seems unlikely to me that the compiler could reason about the arguments at every callsite.

[1] https://www.sqlite.org/assert.html

[2] https://chromium.googlesource.com/chromium/src/+/master/styl...

[3] https://cs.chromium.org/search/?q=DCHECK%5C(+case:yes&sq=pac...

[4] https://github.com/google/re2/blob/master/util/logging.h

[5] http://huonw.github.io/blog/2016/04/myths-and-legends-about-...

[6] https://dxr.mozilla.org/mozilla-central/source/mfbt/Assertio...

[7] https://dxr.mozilla.org/mozilla-central/search?q=MOZ_ASSERT(...


Yes, the FOSS industry needs to grow up about software quality.

It is natural to have very expensive checks in development and test environments; they aid development and QA to no end, and of course they should be disabled in production.

But the run-of-the-mill asserts which document integer ranges, pointer non-null-ness, data structure relationships, and so on cost nothing in performance and should be left in production code.
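A trivial example of the kind of assert meant here (an assumed illustration, not Varnish code):

    #include <assert.h>

    struct ring {
        unsigned head, tail, size;      /* invariant: head, tail < size */
    };

    static unsigned ring_used(const struct ring *r)
    {
        assert(r != NULL);              /* pointer non-null-ness */
        assert(r->size > 0);            /* integer range */
        assert(r->head < r->size);      /* data structure relationship */
        assert(r->tail < r->size);
        return (r->head + r->size - r->tail) % r->size;
    }

The values are already loaded for the return expression, so the checks add essentially nothing.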

And as I said: We developed Varnish Cache to prove this point, and we overwhelmingly have: One bad CVE since 2006, and "only" about 20% of all web-traffic goes through Varnish at some point or other.

Re Huawei:

The report claims that there were no checks at all, and I'm saying they have not documented evidence for that claim, because the lack of checking code could be due either to asserts being disabled in production code or to the compiler eliminating the checks as dead code.


This is highly disingenuous. They had no source code access and are just looking at whatever their static analyzer tool spits out when looking at optimized assembly for these specific functions.

Counting references to libc functions and then suggesting these are "unsafe" because they are not calls to the exactly-as-unsafe Microsoft _s incompatibility functions is the definition of stupidity.


Indeed. memcpy_s is certainly the definition of stupidity (as I heard someone once say, "that's what the _s stands for") because there is nothing "unsafe" about memcpy() as opposed to e.g. the infamous gets(). It copies exactly the number of bytes you tell it to, no more and no less.

The length calculation always has to be done before the copy. Introducing an extra check in the way memcpy_s does is futile if you didn't get it right in the first place, and if you did, like you should, then it's nothing but bureaucratic bloat that can even make "safety" worse (because now there are two length checks, the one that actually does the work and makes the decisions, and this implicit one that someone in the future updating the code might miss.)
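To make that concrete, a made-up example of the pattern being described (names and context are mine):

    #include <string.h>

    /* The caller must already know and validate the length before copying. */
    int append_payload(unsigned char *buf, size_t bufsz, size_t used,
                       const unsigned char *payload, size_t len)
    {
        if (used > bufsz || len > bufsz - used)   /* the check that matters */
            return -1;
        memcpy(buf + used, payload, len);         /* plain memcpy is fine here */
        /* memcpy_s(buf + used, bufsz - used, payload, len) would merely
         * repeat the same comparison a second time. */
        return 0;
    }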

Almost all of the _s shit is just pure paranoia-FUD propagated by basically clueless "security" experts, mainly at Microsoft. To see the depths of this insanity, consider that they even have a "safe coding" standard where memcpy() is "banned", but memmove() is not, and there is no memmove_s() --- despite the fact that with the exception of overlapping areas, memmove() behaves exactly the same as memcpy(). You could simply search-and-replace all occurrences of the latter with the former, and suddenly code that is considered "unsafe" is not.


That sounds more like "hey, the QA/code review kicked your code back and said you have to use memcpy_s, not just memcpy. We're on a deadline, fix this so we can get it back to them by [short deadline]."

Kind of like melamine letting something pass protein content testing.


Except that forcing use of memcpy_s does nothing to actually fix the real problem. Enforce arbitrary rules, get arbitrary results...

So extremely, criminally awful?

If they copied or used code that relied on using the safe memcpy_s, instead of having to change that code to use the unsafe versions, this could just be a proxy layer that lets that code run. This might be done so it's easier to keep the other code up to date if it's getting updated elsewhere.

Right, but why not take thirty seconds to do the appropriate checks in the wrappers?

Because that might add a bunch of code that may or may not be useful. It might be that this is inlined everywhere and so would bump some of the code to a different alignment. Perhaps it's timing critical and this saved a few calls. Perhaps they use static analysis only. Or they might have debug versions where they do use the safety checks, and for production versions it's preprocessed out.

There could be plenty of different reasons.


> Or they might have debug versions where they do use the safety checks and then for production versions it's preprocessed out.

Why would you do that? Most attacks will probably target your production environment rather than your development environment.


One could fuzz test the debug versions to ensure they're safe, and they really need the extra performance for prod. Perhaps they are confident that is enough testing? We know it can never be 100%.

Because it's far from being thirty seconds (as seen in the post by ploxiln here):

http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1967.htm

The most interesting quote regarding these "safer" interfaces from the report:

"The design of the Bounds checking interfaces, though well-intentioned, suffers from far too many problems to correct. Using the APIs has been seen to lead to worse quality, less secure software than relying on established approaches or modern technologies. ... Therefore, we propose that Annex K be either removed from the next revision of the C standard, or deprecated and then removed."


I don’t understand. Your link is talking about the difficulty of using these functions. But here, the functions are already being used. What’s missing is implementations of them. They chose to implement these functions without the security checks, but implementing them with the security checks is not hard.

That whole article is basically saying that those extra checks are useless anyway in correctly written code, because the error-handling paths they introduce can never be reached or tested:

On the other hand, in code that does check for and attempts to handle errors reported by the APIs, the new error handling paths tend to be poorly tested (if at all) because the runtime-constraint violation can typically be triggered only once, the first time it is found and before it's fixed. After the flaw is removed, the handling code can no longer be tested and, as the code evolves, can become a source of defects in the program.


It's possible that they are calling the functions, but passing incorrect length arguments so frequently that they are best ignored.

> Therefore, we propose that Annex K be either removed from the next revision of the C standard, or deprecated and then removed.

This irked me, because along with useless junk like memcpy_s, Annex K had memset_s, which in particular (and unlike memset(3)) was guaranteed not to be optimized out by the compiler.

Meanwhile Spectre has made it more important than ever for programs to clear any sensitive data out of memory as soon as it's no longer needed, so we actually need that. And it's troublesome to either have half a dozen ifdefs for all the different platform-specific functions that do the same thing, with as many possibilities to call one of them wrong, or have to pick up a dependency on a crypto library just to get e.g. sodium_memzero().
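One of the usual workarounds, as a sketch (assuming neither memset_s nor explicit_bzero is available): route the call through a volatile function pointer so the compiler cannot prove the final memset is a dead store and delete it.

    #include <string.h>

    /* A volatile function pointer: the compiler must assume it may point
     * anywhere at call time, so the call cannot be optimized away. */
    static void *(*const volatile memset_fn)(void *, int, size_t) = memset;

    static void secure_memzero(void *p, size_t n)
    {
        memset_fn(p, 0, n);
    }

Variations on this are roughly what the platform-specific helpers boil down to; as the reply below notes, it is a compiler barrier only, not a memory barrier.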


Not arguing with your "useless junk" attitude. Useless junk is glibc and its FORTIFY_SOURCE "maybe we catch it, maybe not" attempt.

But sodium_memzero() is not safer than memset_s. It only provides a compiler barrier, but no memory barrier. AFAIK only my safeclib memset_s is "safe", but I don't provide a clflush there. Maybe I should.


All the more reason there should be a function in the C standard that does it properly and uniformly across platforms.

To actually fix this, you have to call memcpy_s instead of memcpy, and then not create the wrapper, which seems like almost the same amount of work, but slightly less. So why create the wrapper? Their library didn't have a memcpy_s function?

Huawei is actually one of the very few vendors which do provide the Annex K functions. I can only name Cisco, Intel, Embarcadero, HP and Microsoft who also do so. But Intel very badly.

The report is right to call out Huawei engineers for bypassing these checks. They probably had the same attitude to bounds checks as all open source developers and industry practitioners.


Performance reasons maybe? Just spitballing here.

> Just stupid programmers or method?

Likely just stupid. I can easily see:

Manager: "Hey, review kicked your code back. You need to use the foo_s versions."

Minion: "But the foo_s versions are in libsecure and the system doesn't build and link that. I need someone with the authority to add it to the build to do that."

Manager: "Not my problem. Get your code clean by the end of the day."

Minion: <decides that renaming the function locally looks pretty good right now>


I bet that's just the usual Chinese manufacturing quality in general; the actual backdoor is just the hardcoded SSH key.

And that's their enterprise infrastructure equipment; can you even imagine what their Android firmware must look like? Holy shit.



It seems odd to have a report with a 5G label and then find information in it about core routers. Yes, they play a part, but it strikes me as odd that there is no focus on mobile switching, gateways, HLRs, signaling vulnerabilities, etc.

When looking at the boxes they compare:

Huawei: CloudEngine 12800 scales up to 2000 * 25 GBit or 500 * 40 GBit

Juniper: EX4650 Ethernet Switch scales up to 48 * 25 GBit

With all due respect, this is comparing apples with orange seeds. A carrier-class switch is more complex than an enterprise switch and has a much larger attack surface.


It's probably a shared codebase; I wouldn't be surprised if they had a monolithic OS and just had specific daemons to run things.

Shared codebase + more offload ASIC processors is exactly what I'd expect

Exactly.

Most network forwarding happens on ASICs; it's the only reasonable way to get performance.


If there were an average of 128 CVE vulnerabilities rated 10.0 in Android, Google Home, Google Glass, Google Mesh... and they also had a track record of getting more insecure over time with new updates, would you feel safe in their self-driving car?

Company culture around this has everything to do with it, so looking at their best-selling products is good testimony to their lack of security focus.


> While we cannot prove malicious intent through a technical analysis, we can concretely state that 55% of tested devices had at least one potential backdoor

And: "In virtually all categories we examined, Huawei devices were found to be less secure than those from other vendors making similar devices."

I was wondering how bad this was against industry standards, considering there have been articles about TP-Link and other Internet-of-Trash devices that have a lot of unpatched issues.

The article seems to suggest Huawei is exceptionally bad at patching and security best practices.


It's a low bar, but apparently they managed to not clear it.

Which seem to include Juniper and Arista, and not the other base station providers listed in the review, such as Nokia, Ericsson, ZTE, or Samsung.

Not since the days of Microsoft astro-turfing have I seen such a campaign against a tech company. Some of these metrics are very subjective, and I've seen so many bone-headed security bugs in equipment over the years (Cisco, Nortel, TP-Link, D-Link...) that I really have difficulty believing the narrative of state-sponsored malice.

Regardless, Huawei needs to clean up their practices. Ineptitude isn't a defense and this level of sloppiness is unacceptable; we have too much at stake with these essential networks, and I think legislation of security practices and standards in these products should be considered.


> Regardless, Huawei needs to clean up their practices.

Sure. So does every crappy home router manufacturer, Android manufacturer, security cam manufacturer etc: singling out Huawei is retarded.

Personally I suspect the biggest issue is that the US doesn't wish to lose their own NSA phone/txt backdoor access (thumbs in everyone's pies). They have access to most US TCP/IP traffic, so they don't need to target router manufacturers.


Yes, but this is actually an opportunity for Huawei. If I were them, I would take these findings and hire a team of hackers/security researchers to rip their products apart, fix the issues, and then put them up for independent comparison with competitors. They have all the resources one could dream of to be the best at security. It would be nice if someone would set a high bar.

Some of these items seem like they need more analysis, and like the report was rushed. We agree as a community that hard-coded default passwords are bad when left unchanged, but that implies the device doesn't require them to be changed on first boot. Basic setup processes often require these credentials to be changed after the first provisioning.

I would expect a report which produces claims about such comprehensive backdooring and negligence to at least demonstrate how that behavior would play out in the real world. It seems much more like they did static analysis of the binaries, identified any strings with default passwords, and then reported on them. That's okay to do, but you can't conclude that's a problem until you confirm shoddy behavior after provisioning and deployment in practice.


The process and practice findings are the most damning in my opinion. This isn't an "oops, we were rushing to market"; this is a problem with values, and in my mind it damns the whole organization.

Basic security tooling to catch hard-coded keys of any kind needs to be in the process. I bet shops like Cisco/Linksys, Netgear and others at least make some attempt at this in their CI process.


The Huawei user account hardcoded in the firmware with sudoers permissions for insmod/modprobe is really blatant. I would have guessed the Chinese would at least attempt to hide their way in?

So they sell enterprise infrastructure equipment with such an obvious backdoor? It seems the US boycott of Huawei was fully justified after all; I didn't really believe it before.


That doesn't have to be a backdoor. Remember the wave of bugs caused by hardcoded secrets in Cisco hardware?

Might just be some leftover debugging account, or one for servicing.


>That doesn't have to be a backdoor.

It is a backdoor. The definition of the term doesn't require intent.


It would be both more subtle and practical had Huawei modified sshd to accept any key that had been signed by their CA.

That would have been slightly harder to detect.


How could that be harder to detect?

The report says that Huawei devices have unsafe functions like “memcpy” and "strcpy". Is this a coding preference or dangerous practice? To what extent can these examples reflect on code quality?

strcpy() - sure, it is fairly often used unsafely; strlcpy() or snprintf() or similar should generally be used. But memcpy()? It takes an explicit length, which you have to calculate. What's the alternative? memcpy_s() does part of the check for you, but you end up writing more code around it. It's trying too hard and not achieving a net positive. I have seen "use of unsafe function memcpy()" show up in some dumb security scans recently, and it's a strange development. There is lots of C code where avoiding memcpy() would be quite awkward and really not help anything at all; it's just a fundamental operation.
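A contrived example of the distinction (my own illustration, not from the report):

    #include <stdio.h>
    #include <string.h>

    void copy_name(char *dst, size_t dstsz, const char *src)
    {
        /* strcpy(dst, src) overflows dst whenever src is longer than dst */
        snprintf(dst, dstsz, "%s", src);   /* truncates instead */
    }

    void copy_block(void *dst, size_t dstsz, const void *src, size_t n)
    {
        if (n > dstsz)          /* the length check you have to do anyway */
            n = dstsz;
        memcpy(dst, src, n);    /* memcpy itself copies exactly n bytes */
    }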

Here's a dry technical report on how the new-ish _s variants are not really helpful: http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1967.htm


This technical report is nonsensical, and asks for the continuation of widespread insecure practices. The author (the glibc maintainer) should be removed from the committee for spreading such harm. He simply doesn't like secure practices, and continues to use only sometimes-checked calls. If he had some technical competence he would implement the needed safety checks, separated into compile-time (no performance penalty) and runtime (when the compiler doesn't know). He only does the first, and leaves all the dynamic cases unchecked. But then he would stumble over the inability of gcc to properly handle compile-time expressions.

See eg. https://rurban.github.io/safeclib/doc/safec-3.5/index.html


But is the use of strcpy and memcpy worthy of a global boycott and witch-hunt?

Unless you know the context they are used in it doesn't say anything.

It doesn't say that those particular uses are dangerous. But it says that their overall standards don't prohibit the use of potentially dangerous methods.

In a non-critical application that kind of stuff is usually fine. But for critical infrastructure there are normally rules that prohibit potentially unsafe methods like those entirely.

Huawei would need to be more transparent to show that their practices are secure and that these uses are wrapped in some kind of protective framework. But until that happens it's reasonable to be skeptical from a security (and stability) perspective.


Yeah... but it's security theatre.

They will just use memcpy_s with the dest len and the copy len set to the same variable. Or strncpy with the limit set to strlen(src), etc. Then these guys will tell you it's suddenly using 'modern security practices'.
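For illustration, the kind of call patterns meant here (hypothetical examples, not from the report):

    #include <string.h>

    void theatre(char *dst, const char *src,
                 unsigned char *out, const unsigned char *in, size_t len)
    {
        /* limit derived from the source, so it cannot catch an overflow: */
        strncpy(dst, src, strlen(src) + 1);

        /* dest size and copy count are the same variable, so a
         * memcpy_s-style check of "len <= len" is always true: */
        memcpy(out, in, len);   /* memcpy_s(out, len, in, len) would pass */
    }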

Conversely, depending on the code, strcpy/memcpy can be 100% safe.

I think these guys are selling static analysis, so they find themselves using these oversimplified metrics... it's a shame because it looks like there was no lack of real issues.


If I ever saw a project where someone wrote a fake wrapper around an insecure function that gave the illusion that it had proper checks in place (which is what's being described here) instead of using the actual function I would be concerned.

And if I ever saw code where the size parameters weren't legit (as in someone used the same variable for both) I would also be concerned unless proper checks were taking place elsewhere. But it is a bad smell.

That's the only point in that particular finding. They did detail other ways in which the whole development standards seem bad.

And, yes, you can always shoot your own foot, but it's still best not to aim directly at it.



It's a pretty good indication of how easy it's going to be to find a memory corruption vulnerability. There are very few C programmers I would trust to write secure code at all (none of them would ever touch those functions), let alone for critical infrastructure equipment like this.

I'm so glad this isn't about how "5G is dangerous zomg".

We're lucky to get this sort of report!

(In case you were looking for an article about the safety of 5G -> https://medium.com/@tomsparks/is-5g-dangerous-405a19e9ea88)


It's hard to say to what extent this "assessment" is honestly backed by solid evidence, or whether it was written first and foremost for political reasons. The broad claims, apparent lack of factual evidence (only statements about what has been discovered), and rather overarching nature of the whole report sure do suggest it could be the latter. Pure speculation on my own part, of course.

I have not gone through the full report (yet), but while reading just a few sections my sales-pitch-bullshit-meter went on full tilt, at least a few times.

The timing of this report is also peculiar, to say the least. Who is Finite State? What is their track record, to date? Who owns the company? What business relationships/interests do the company and its owners have?

Yes, I realize/know these are suggestive questions. Questions that should nonetheless have satisfying/assuring answers, in order to take this report seriously.

There's a lot at stake here, so at least examining the validity of this report/summary isn't just a luxury.

Are the factual findings behind this report publicly available?


They did it using their analysis automation software (iotasphere) and the list of devices is in appendix A.

Someone knowledgeable about forensics with access to the same devices probably can validate those claims.


I sure hope that someone will. I do not have access to these devices, nor do I have any inclination to buy their commercial product in order to verify their claims.

Maybe their product is brilliant, and/or maybe they did a lot of manual verification of their automated scans, in which case their report could be accurate (though still hard to verify).

That said, my experience with both static analysis and runtime security scanners is less than stellar. They are without a doubt useful tools, if not indispensable. But they usually generate more leads for further (manual) analysis, rather than generating (accurate) final verdicts.

Granted that pen-testing is a different animal (but one I have more experience with), I have often had to explain to customers that the reports they got from security auditing companies were full of false conclusions. More often than not because those companies were relying/trusting too much on automatically generated results, in addition to trying to sell additional services. So, forgive me if I'm skeptical.

Finite State apparently has a stake of their own here (promotion of their product), which makes me even less inclined to give them the benefit of the doubt.

As I mentioned before, this is of course all speculation on my part. Still, until Finite State publishes their factual/detailed scan results, I believe I have every reason to consider that this might either be a politically motivated report, or more likely an attempt to use/abuse the current political climate to promote their own product and services.


• 29% of all devices tested had at least one default username and password stored in the firmware, enabling access to the device if administrators don’t change these credentials.

• We identified 76 instances of firmware where the device was, by default, configured such that a root user with a hard-coded password could log in over the SSH protocol, providing for default backdoor access.

• 8 different firmware images were found to have pre-computed authorized_keys hard coded into the firmware, enabling backdoor access to the holder of the private key.

• 424 different firmware images contained hardcoded private SSH keys, which can enable a man-in-the-middle to manipulate and/or decrypt traffic going to the device.

What a witch hunt... This is state of the art in the industry. Everybody does it like that. No intelligence agency has to be involved at all; it's basic negligence. If you're behind a NAT, your device is unlikely to be attacked via these vectors.


Negligent or malicious, no matter which, I would not want to deploy devices that had hardcoded ssh private keys.

Default passwords can be dealt with.


How many companies would be left on the market if a default password, a hardcoded account, or sloppy programming were enough to put them out of business, if we treated them all the same?

A default password and hardcoding a private key are two very different things.

Yes, they are. I'm just pointing out that default passwords were common practice a couple of years ago, and one of the companies caught with a hardcoded private key is Cisco. If we treated them all like Huawei, we wouldn't have much infrastructure left. Which means the whole affair around Huawei is blown out of proportion.

I am reminded of the difference between "he attacked them because he was evil" and "he attacked them because he was insane". While this might make some difference in how you feel about it, either way you need to lock them up (albeit perhaps in different places).

Whether they are grievously incompetent at security or intentionally installing backdoors might make a difference in how we feel about it, but in neither case should we allow their products to be used.


Exactly, every vendor has default passwords. If you are doing things right, all default credentials are changed or disabled, and access to any service is firewalled off or, better yet, only accessible out of band on a dedicated management network.

This is changing. I've seen a number of ISPs that no longer have default passwords. Each router or modem has a random password string set per device; it's printed out and pasted as a sticker on the modem (or some print it directly on the plastic). A lot of big-name devices do this now too.

Sure it's a password written on the device, but it's random, you need physical access to see it, and people who are security conscious can change it.

This bad practice isn't excusable, especially not for a company as big as Huawei, not if they want to be taken seriously.


It's definitely a good development that ISPs have started to deploy routers and modems with randomized passwords. However, please do keep in mind the deployment of consumer equipment and enterprise hardware is different. Or at least it should be, in theory.

Enterprise equipment is usually not supposed to be just dropped into place, without oversight. It usually needs proper configuration/management, by qualified people.

Whether this also happens in practice can be a different story altogether. Still, the security of enterprise equipment usually involves more policy and procedure than it does with consumer equipment. With the latter, security has to come more or less by default, because the people handling the devices usually have little expert knowledge.


From what I have seen where I live, printed passwords on things like home routers and VDSL/fiber modems provided by major ISPs are for 802.11 stuff (WiFi passwords) and not for the device's management interface. This may have changed since I last looked into it a few years ago, though. There was also the whole Netgear router "backdoor" port thingy (a device shipped by a major ISP) which I actually had to exploit to recover my password after forgetting it once, which was kind of amusing.

Juniper doesn't. There is no password on the device when you power it on.

When you get a new device, in order to save your initial configuration on it, you have to set a password.

Cisco used to ship with zero config on their devices and part of the setup process was setting a password as well.


Cisco never requires a password to be set. IOS prompted for a password during the easy configuration, but if you dropped a config in via console or TFTP you could configure it without a password.

Later versions did not allow passwordless SSH but still allowed it via telnet. Cisco's ACI platform enforces a password on the initial account; then, with some smarts, you could disable it in OpenLDAP.


If you consoled into easy mode you definitely could set a password there, or skip it. I still don't get why they don't make you set at least one password, but some of it also stems from the architecture of the OS, AFAIK.

So would a "different password for each device, programmed at the factory" approach (as Comcast/Xfinity/AT&T and others currently do, at least in the US) be better? It would be better in the sense that only Huawei would know the default password for that piece of hardware [if they were to log it].

It is not the bug now but the bug in the future. It is a question of whether you can trust a country that respects nothing human.

Can you even run a campaign against anything once they take over your information infrastructure?

Yes, the NSA may do it, but compare the NSA with the PLA... what is your leverage? Ever hear of the word totalitarian...



