A member of the public should expect to be able to buy a new phone and use it for 3 years without exposing their nude pics to blackmailers.
So far, that isn't the case in the Android world.
Agreed. I picked mine up 2 years ago and am now forced to upgrade to a newer model or install a third-party version of Android like LineageOS.
From what I understand, the CAT Phones (with built-in FLIR, etc.) get even fewer updates.
Check out https://www.lineageos.org or one of the other distributions out there - and get that loaded up.
I don't think so: it will oblige makers to standardize processes and software across phones, i.e. a thin layer of device-specific drivers, then the same OS and libraries on all phones (with just different themes).
Then Android security updates can be pushed directly by Google at no cost. Just like for computers - and phones are computers with only a few different features (input device, GSM chip).
My HP or Dell computer is not more expensive when Microsoft or Debian is pushing security updates.
In the end, unifying processes and software brings costs down.
Btw, if Microsoft is pushing security updates, you did pay an additional cost for the license. If it's Debian, then quite possibly you still paid something to MS (if your computer was bought with an OEM version); in the better case, other corporations and individuals pay for it (mostly to reduce Microsoft's power).
Yet almost every civilization on Earth has already decided that we, the masses, _do_ have a right to speak for everyone when it constitutes a common good. In the US, you have to wear a seatbelt in most states. Your food is regulated by the FDA. Your cars must meet certain safety standards, as does your home. This list goes on and on.
P.S. My food is not regulated by the FDA; not everybody registered on HN is from the States.
Who said that it was?
The FDA, as it is known today, came into being in response to the public outrage at the shockingly unhygienic conditions in the Chicago stockyards that were described in Upton Sinclair’s book “The Jungle." Building codes exist to protect public health and welfare.
These weren't arbitrary decisions; they were made in response to real issues. Laws in general limit personal freedoms to protect society and the public. Also, some personal freedoms infringe on the freedoms of others. If murder were legal you would gain personal freedom, but your victims would lose theirs. I may want to have cows in my backyard, but my neighbors may have a few legitimate issues with that.
The fact is that the world has decided you are wrong, and for good reason. I don't need to prove why; you need to prove why everyone else is apparently incorrect.
>Most Republicans seem to disagree from what I can tell.
Uh huh, until we start talking about what you do with your body or who you want to marry. Many conservatives do believe in personal freedom over government rule, yes, but the best conditions are always brought about by balancing those, never in favoring one completely over the other.
I'm not a fan at all of excessive government overreach, but the private tech sector is utterly incapable of policing itself, because a) they don't give a shit, and b) no one is holding them accountable enough (you could argue shareholders should, but there's rarely an impact to bottom lines when security breaches happen). The only thing that will make them care is an impartial third party that can force them to care.
Something like that is all that's required in primary legislation.
What is missing is a finding that a sufficiently severe security vulnerability present at time of sale falls short of the expected standard. The general concept could be enforced by a court ruling setting precedent or by still quite generic legislation.
Finally it would be up to the courts to decide on a case-by-case basis what constitutes "sufficiently severe" in specific cases. That's no different to how everything else in law works.
"Do you really want legislators deciding what is and is not 'reckless' driving?"
Yes, it is only for preorder, but the devkit already exists.
> They may not even be around in 3 or 5 years.
If they manage to make the final version of the phone (which is quite likely), it won't matter anymore, since the OS is just GNU/Linux - so updates won't depend nearly as much on the phone manufacturer.
You're buying into a phone that should get continued support for years to come, assuming the company survives and doesn't end up finding the maintenance burden utterly untenable (once they're on model version 3, I can't see it being the easiest thing to support the older models - Apple does, but Apple's huge). It seems to be relatively underpowered (though that really depends on how many resources the system uses), and I can't imagine it will age well. There will be little to no mainstream app support (they seem to be touting "just run it in a browser" as a solution to that - I thought we left those dark ages).
It seems like a neat, niche product, like a Raspberry Pi phone, but it doesn't give me a lot of hope for the future.
LineageOS depends on proprietary drivers and/or undocumented devices, which will never get any updates. If there is a security bug in your WiFi driver, game over. This phone is being designed to work with free software, so anyone could do updates.
>assuming the company survives, and
Again, free software gives us the freedom to do updates ourselves (or pay to someone to get them). GNU/Linux supports most of the old computers quite well.
>There will be little to no mainstream app support
"We will test the capabilities of powering Anbox or Shashlik to allow users the ability to run Android applications within PureOS on the Librem 5", https://puri.sm/faq/
>It seems to be relatively underpowered
True, but one has to start somewhere. Personally, I think this phone brings a lot of hope for the future.
Knowingly and willingly exposing your customers who are unable or unwilling to buy a new device to hacking.
I find it very concerning that I can't download an update to fix this particular vulnerability without help from the manufacturer of my device. PNG has nothing to do with drivers, after all.
And yet, I doubt I'm going to get pwned over this one. Mitigations to the rescue.
Either way it is ridiculous indeed. I have a 5X. It certainly pushes me to install LineageOS sooner. A side question: does anyone use Plasma Mobile on their 5X? - https://www.plasma-mobile.org/neon-arch-reference-rootfs/
There was also a Bluetooth RCE in this month's bulletin. And in November there was yet another round of WiFi pwnage, this time in Qualcomm WiFi (CVE-2018-11905). It's all quite bad.
> Monthly security updates to be supported for at least 3 years after initial phone release.
(scroll down to the bottom of the page and look for the double asterisks footnote. I don't know why they want to hide what is IMO the best reason to get an Android One device)
It's still a short window IMHO, and it's bad for the planet to throw away working devices after 3 years.
edit: in hindsight, I could have just looked at the article link rather than googling - but I still can't work out how to subscribe for updates.
Unfortunately, there don't seem to be any bulletins later than March 2018 (irrespective of whether one joins the group or not).
Good luck with that.
The issue isn't Google or Android. It is phone manufacturers.
If Google can't even get it right on their own devices, they can't be shifting the blame to manufacturers.
The only remotely-safe choice is an iPhone.
To be fair, Google only guarantees support of even the Pixel for 3 years... so it still kind of is.
I see this mentioned a lot on Reddit, but unless there really are no security updates happening through Android itself, it seems ill informed.
I'll make this point explicitly when I see it: phones may get some updates, but old phones don't get the security patches at an OS level.
I'm not on Android, so I didn't want to be mistaken about how it works.
It's not clear if this would include kernel and driver updates (I don't think so), but it's very likely that things like Skia and codecs could be updated faster. Very happy to be corrected.
It also looks like not all devices running 8.0 were required to be Treble compliant. Hi, Samsung.
In your case, (I presume) you are both the owner and the only user with physical access, and "you control your own device" is unique.
In my case I'm the owner, but other people may have temporary physical access, and the identity of the "you" is therefore somewhat ambiguous. As owner I want protection against one of the possible values of "you", see?
If the owner of the device flashed their ROM, they want updates. If someone other than the owner flashed the ROM, the true owner of the device is screwed anyway. Blocking updates won't save the true owner. Instead, this seems nothing more than lockdown for the sake of control.
It was written to flash by untrusted software; it may have been modified on the way such that it thinks it's unmodified - the signatures match when it checks itself - but actual execution uses a modification made by that untrusted code.
It's a bit like that paper about modifying the compiler. https://www.win.tue.nl/~aeb/linux/hh/thompson/trust.html
From the owner's point of view, being able to update the ROM is the manufacturer's statement that nothing done by a user with physical access survives.
I accept that the manufacturer's OTA update is intended to be monolithic, is designed to be monolithic, but what assurance do I (the owner) have that the software that was flashed by a physical user actually flashes its replacement monolithically? That it leaves nothing behind?
EDIT: on further reflection, it seems possible to design a phone that provides such an assurance - one where any monolithic OTA update actually has to be monolithic, even if untrusted software is in control of the main CPU. But I wouldn't want to bet that any/many/most phones built today actually offer that guarantee.
Yes. I can't see any technical reason why Google disallows factory resetting my device without re-flashing a ROM.
This is just the Android ecosystem these days - don't count on security updates?
(This phone is actually my first smartphone; I can't even explain why I had a flip phone for so long... except this. But I'm still figuring out what to expect from 'em.)
If, for example, you have a Samsung device from Verizon, it goes something like this: Google releases new source code containing the patch. Samsung takes a few months to roll it into their updates, and sends it to Verizon for QA. Verizon either pushes back or accepts it after some time. That whole process takes far longer than it should, partly because they mix security and features in the same updates.
People are talking about how _old_ phones don't get patches at all, but even most _new_ phones have "zero-day" vulnerabilities (cause zero day lasts for months apparently) for significant periods.
What a world! How is this okay?
It's not all doom and gloom though - there have been improvements. Different partitioning schemes, breaking core services out of firmware and into the Play Store, etc. It's just that thus far Samsung, the biggest Android player by far, has decided not to implement all of them.
I know it's a lot of work, but the Android/Chrome teams are big, and this is important enough work that it could improve the security of these systems.
Eventually it will happen though (at least I hope so).
I was asking about the integration work: exporting high-quality Rust libraries with a C API and calling them from Skia, for example.
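Concretely, I imagine the C side of such an interface would look something like this (the function and header names here are made up for illustration; on the Rust side it would be exported with #[no_mangle] pub extern "C"):

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical declaration for a decoder implemented in a Rust
       library and exposed over a plain C ABI. Skia, or any C/C++
       caller, would link it like any other C library. */
    int32_t rustlib_decode(const uint8_t *input, size_t input_len,
                           uint8_t *output, size_t output_cap,
                           size_t *output_len);

The caller only ever sees a boring C function; all the memory safety lives behind that boundary.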
The plan for C++ / Rust interop sounds amazing.
Anyways the main question is whether other companies should reuse the work already done by the great Mozilla team.
Memory safety in systems-level programming was a dream for a long time.
Go is not capable of this at all (its runtime and GC get in the way), so it's out of the question for replacing low-level libraries.
That's not a disqualifier for most use cases, but it is for some.
But a complete rewrite is a hard sell when just fuzzing and patching the existing code seems like a quicker fix that doesn't rock the boat too much.
But from what you wrote, it sounds more like what's simpler to implement for the team gets done.
It was great for stealing people's sessions on forums using a malformed avatar.
For a toy example, imagine the format says [number of bytes to come] [the bytes]. Allocate an array with the specified size and then copy the rest of the message into it. Right?
Now you’ve got a remote buffer overflow. How about [start sentinel byte] [up to N data bytes] [end sentinel byte]? Just put more than the allowed bytes before the end sentinel.
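Concretely, the naive decoder might look like this (a C sketch, names made up; note there is no check at all that the declared count matches what arrived):

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    /* The naive reading of the spec: size the buffer from the declared
       count, then copy "the rest of the message" - with no check that
       the two agree. A sender that declares 4 bytes but sends 4000
       overflows the heap allocation. */
    uint8_t *decode_naive(const uint8_t *msg, size_t msg_len) {
        uint8_t declared = msg[0];         /* [number of bytes to come] */
        uint8_t *buf = malloc(declared);
        memcpy(buf, msg + 1, msg_len - 1); /* copies "the rest" - overflow */
        return buf;
    }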
Unless a format is designed to facilitate secure implementation (which would include being very simple), there are going to be at least a few vulnerable implementations. Is it a backdoor? Read the patch yourself, decide whether the original authors and reviewers knew what they were doing.
Well then you're doing it wrong. You copy [number of bytes to come] bytes, not [rest of message] bytes, and only after you've validated that [number of bytes to come] isn't larger than what actually arrived. That is a completely toy example. A decent programmer would never trust the remote end to actually send just [number of bytes to come] bytes.
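Sketching the checked version (same made-up format as the sketch above):

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    /* Validate the declared count against what actually arrived, and
       copy exactly that many bytes - never "the rest of the message". */
    uint8_t *decode_checked(const uint8_t *msg, size_t msg_len) {
        if (msg_len < 1)
            return NULL;                /* no length byte at all */
        size_t declared = msg[0];
        if (declared > msg_len - 1)
            return NULL;                /* sender lied about the length */
        uint8_t *buf = malloc(declared);
        if (buf == NULL)
            return NULL;
        memcpy(buf, msg + 1, declared); /* copy the declared count only */
        return buf;
    }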
You might want to Google for ImageTragick. Or Heartbleed. Or Shellshock. Or Java deserialization. Or Flash.
But please, tell me more about what a decent programmer would never do.
And PHP was seen as harmless fun.
And malware was thought of as boot-sector viruses, and even though the first worm (the Morris worm, 1988) worked exactly like what people are harping about (a buffer overrun), no one cared, because C++ was slow to compile and managed runtimes were evil.
And still no one cares. It takes some competence to stop using C, but it's easier to simply shout at those who point out the obvious.
And when someone brings out the good old Therac-25 everyone starts to fall silent. (Even if healthcare accreditation is very expensive, it's expensive exactly because the whole IT field is still full of unverifiable code.)
Yes, it might have been easy to get away with writing the OP's faulty specification into code back then. After 25 years of people screaming about the dangers of buffer/integer overflows, it's very hard for me to think that a serious C/C++ programmer could look at that specification today and not see the issue before blindly committing it to code. I suppose it could happen.
OpenSSL (which everyone else here is crying about as an example) is also over 20 years old.
> And still no one cares. It takes some competence to stop using C, but it's easier to simply shout at those who point out the obvious.
Well imagine trying to rewrite the Linux kernel today in a memory safe language, without losing any support for any of the current target architectures. Or NetBSD even more so for that matter. Nobody has convinced me it could be done, and as of yet, it hasn't been.
Throwing away C and C++ for good is fine - if you don't want to do much right now in the way of embedded programming, or work with sophisticated game engines (e.g. UE4), even just for fun.
If you're not confident of the quality of the code, then maybe don't deliver it to all and sundry saying it's fit for purpose. And if you are confident, then check it again.
That's all I'm saying.
Agreed. And even if it could be done, it would of course come at an enormous cost to the current pace of driver and other updates.
Google is trying something with Fuchsia though.
And there are experiments similar to how Linux itself got started. (Redox OS comes to mind.)
Aware of both of these, but yeah, wake me up when either is close to being even a niche competitor - say, as complete and accessible as Haiku currently is.
Most of the BSDs didn't have decent working SMP for a period longer than that either.
I think it's kind of sad that it takes a company the size of Google to create something like Fuchsia, which we can't be sure will ever see the light of day for most users, even given the vast pools of code available in Linux and BSD they could reference.
Hardware has grown exponentially in complexity since Linux was initially developed, of course, which makes the job of writing a new OS much tougher. But you could probably find 20-30 different hobby OSes over on osdev.org that are more complete and accessible for the end user than Fuchsia or Redox right now, even if they are just more boring POSIX clones.
Though, interestingly, hardware also got a lot smarter (usually it has some kind of microprocessor and RTOS on it), so in theory interfacing with it should be easier. (Serial buses, simple enumeration, simple addressing - we have a lot of good abstractions; no need to fiddle with registers, just use a PCI-E, DMA, or I2C library.)
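For what it's worth, talking to such a smart peripheral through the kernel's abstraction really is short these days. A rough sketch using Linux's i2c-dev interface (the bus path, 0x48 address, and register are made-up example values):

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <linux/i2c-dev.h>

    int main(void) {
        int fd = open("/dev/i2c-1", O_RDWR);          /* example bus */
        if (fd < 0 || ioctl(fd, I2C_SLAVE, 0x48) < 0) /* example address */
            return 1;
        uint8_t reg = 0x00, val;                      /* example register */
        if (write(fd, &reg, 1) == 1 && read(fd, &val, 1) == 1)
            printf("register 0x%02x = 0x%02x\n", reg, val);
        close(fd);
        return 0;
    }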
But of course, at the same time, just supporting these smart standards is hard (because initialization of things is not trivial; even just getting DDR RAM to work requires tinkering with MSRs, and ACPI is its own special hell).
Er ... what? Where are you getting this information? Haiku has been in development since 2002, and has SMP support since ... 2003/2004?
NUMA support is a little shaky, but I know users who have run Haiku on 32-core Ryzens with no issues. So we definitely have SMP support.
Now re-reading it, the article clearly states that development started in 2001, and that it has rudimentary SMP support.
Maybe the phrase "a decent programmer would never" unfairly assigns blame to an individual's skill, but I think it would be fair to say that the programmers who designed and implemented these bugs were not doing decent work on the day that they made these mistakes.
If our standards for C programmers don't include bounds checking of array access, what exactly do they include?
This might be shocking to a flawless supercomputer AI such as yourself but humans make mistakes.
> imagine the format says [number of bytes to come] [the bytes]. Allocate an array with the specified size and then copy the rest of the message into it. Right?
If your client came to you and said that was the specification, you'd be negligent to fail to point out the obvious flaw with the logic, wouldn't you?
"Unfortunately, the OpenSSL basket was being watched somewhat less than very carefully. Yeah, it has bugs, but surely somebody else will fix them. And worst case scenario, since everybody uses the same library, everybody will be affected by the bugs. Nobody wants to be alone".
So I'm pretty sure the one example you have picked here reinforces my point.
Is that real world enough for you?
But you go ahead and believe it's because of dumb luck if you like.
> we strive everyday to provide new mitigations and defense in depth to prevent its exploitation.
I'm fairly confident that even if you didn't do that it wouldn't make much of a difference, because the likelihood of anyone attempting to exploit your particular niche program is quite low.
> But you go ahead and believe it's because of dumb luck if you like.
I didn't say it was dumb luck. The fact of the matter is that most software isn't hardened in the way you describe, for simple economical reasons.
If you honestly think that emergency services infrastructure is less of a valuable target to a rogue operator than Internet Widgets Inc. or whatever then I think we are done here for now.
This is all a huge over-reaction to my initial point, which was: the format that the OP proposed was busted, and any good C/C++ programmer should notice that immediately rather than implement it blindly. If they can't do that then, yes, definitely stick to a managed language. Which I agree is good advice anyway.
Thank you and have a nice day.
It is in your original statement (abbreviated):
If there were easily exploited buffer overflow in <our program>, ambulances and fire engines <for our customers> would never be called to their destinations nor arrive on time.
One doesn't follow from the other. Your statement implies there are bad actors in existence that would actually exploit such flaws in order to disrupt the service. That may or may not be true, therefore it does not follow.
> The previous poster purported that I mustn't have much "real world" experience in coding, and I responded that I and my team wouldn't be entrusted to successfully maintain a program and network that coordinates emergency services first responders over a large land mass and number of people if that was the case.
That's not what I actually replied to, but it's also a non-sequitur. It would be entirely plausible that you could be a reckless band of cowboy coders that happens to be able to produce a working product that is nevertheless full of theoretically exploitable flaws that just happen to not get exploited, because nobody cares. Also, whoever "entrusted" you may themselves be entirely reckless or incompetent.
I'm not saying that any of this is the case, I'm simply saying you can't logically deduce one from the other. Therefore it doesn't really work as an argument.
Rubbish. If the service in question was directly accessible from a public network and its source code known and published, it would be attacked in less than 5 minutes. That is the assumption that causes millions of dollars to be spent on securing the application and the environment every year. We are not prepared to take that risk, and we secure appropriately and expensively.
> Also, whoever "entrusted" you may themselves be entirely reckless or incompetent.
That is how employment works in the wider world of Internet Widgets etc. but not in critical government regulated infrastructure like emergency services.
Yes we are unable to prove 100% that the application has no externally exploitable bugs, despite our best efforts, so we don't claim that it is and go and make it accessible to untrusted sources of input.
You just keep going with the non-sequitur, but now you're also moving the goalpost. Your original statement didn't contain anything about "directly accessible from a public network" (which I assume isn't the case) or "source code known and published" (which you said wasn't the case).
And it is still a non-sequitur. Why would anybody expend non-trivial resources to find an exploitable flaw in an open-source codebase just to bring down an emergency service? There's no reasonable way to profit from that. If you're some foreign hacker, you might want to find the exploit, but you wouldn't attack right away. Again, I'm not saying nobody would possibly do it, but one does not automatically follow from the other.
Heartbleed really is the best example of how a critical piece of software infrastructure (including a lot of government-regulated infrastructure) had a publicly visible flaw that went undetected (and presumably unexploited) for several years.
> That is how employment works in the wider world of Internet Widgets etc. but not in critical government regulated infrastructure like emergency services.
I don't know man, I'm here arguing with this guy who doesn't seem to grasp a basic concept in logical reasoning, yet he's working on critical government infrastructure.
It's already been discussed in this thread how old OpenSSL is - it dates back to 1998, when the risk of buffer overflows was not as well understood or publicised as it is now. It has also been addressed that code audits of OpenSSL were done as a result, which has led to both forks and patches.
In the context of the new, one liner graphics format proposed by the OP in 2019 that had an obvious bug written straight into its specification, this thread has now reached absurd levels.
> Again, I'm not saying nobody would possibly do it, but one does not automatically follow from the other.
Ok. I'll tell my customer we don't need all the firewalls, IDS, code inspection, risk assessment and hardening because zeroname on HN told me it was OK.
> I don't know man, I'm here arguing with this guy who doesn't seem to grasp a basic concept in logical reasoning, yet he's working on critical government infrastructure.
I'm not at all offended by your ridiculous insults. I feel rewarded every day that my team and I are working on something worthwhile - and also the white-knuckled fear that goes with it on occasion, when we worry whether we've missed something or not.
Let's just both be thankful you don't live in an area where my team provides emergency services then.
Again, have a nice day, "man".
That's completely irrelevant to the point I am making. I'm using it as a counterexample to your reasoning that if code for (your) critical infrastructure was published, any obvious flaws in it would almost immediately be discovered and exploited. That's completely orthogonal to how old or poorly written the codebase is.
> Ok. I'll tell my customer we don't need all the firewalls, IDS, code inspection, risk assessment and hardening because zeroname on HN told me it was OK.
Well, I didn't. You're clearly not only unwilling to engage in logical reasoning, you also seem to lack basic reading comprehension.
> I'm not at all offended by your ridiculous insults...
I don't mean it as an insult, but I'm really losing my patience here. Your ego is so tied to being right that you can't admit to having made a little logical blunder there. Logical fallacies are actually really common, there's no shame in them. In fact, I'm prone to arguing the same If-this-then-that fallacies myself.
Either way, if you really care to be right, it doesn't matter what your job is or who gave it to you and how smart and diligent your team is. You just have to get your propositional logic correct.
False: If I leave the door unlocked, people will come and steal my stuff. (Non Sequitur)
True: If I leave the door unlocked, a thief will have it much easier to come in and steal my stuff.
The Morris worm was released in 1988.
People are prone to make logical errors and to overlook stuff, even when they are aware of the risks. Thinking is hard, and every little bit of help is useful, either from peers or from tools.
Do not take what zeroname wrote as an insult. He simply pointed out the mistake you made (indeed, it does not follow what you wrote - it may be possible, and probable, but it does not follow automatically).
You can look at all this as a peer review, just not for code, but for thought/logic argumentation.
That is not simply pointing out mistakes; those are ridiculous insults that have no place on HN. I didn't think zeroname came off better in that exchange at all, in terms of who was trying to communicate rather than just appear superior. There was a crossed wire, and condescending lectures in basic logic mixed with insults did not help.
Well, I applaud your optimism, anyways, but programmers of almost any skill level make egregious security mistakes, especially when working on a deadline. Maybe not everyone would make this mistake, but managing trust while writing optimal code is very tricky.
LZO is a fantastic example of a library that had an "obvious" security flaw for many years, with a fairly trivial integer overflow bug, due to how the decoder dealt with the variable length int encoding.
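The shape of it was roughly this (a simplified sketch, not LZO's actual code): the decoder accumulates a run-length count byte by byte, with no overflow or end-of-input check:

    #include <stdint.h>

    /* Simplified sketch of the pattern: a run-length count accumulated
       byte by byte. Two bugs at once: nothing stops *p from running
       past the end of the input, and enough zero bytes wrap the
       32-bit count around. */
    uint32_t read_count(const uint8_t **p) {
        uint32_t n = 0;
        while (**p == 0) {  /* attacker controls the number of zero bytes */
            n += 255;       /* can overflow: n wraps back toward zero */
            (*p)++;
        }
        return n + *(*p)++;
    }

An allocation sized from the wrapped count is then far too small for the data the rest of the decoder writes into it.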
And that kind of attitude is why we still have security bugs.
Absolute poppycock. We could all program in completely memory-safe languages tomorrow, with only one signed integer type, and crap programmers would still find a way to write security holes into their programs.
I'm in no way condoning the use of C/C++ forever, the writing is on the wall for those languages, as much as I love them.
But programmers have to learn first and foremost to take responsibility; if you're writing code that runs with elevated privileges then BE CAREFUL. If you're writing code that is reading data from an untrusted source (disk, network or otherwise) then BE CAREFUL. Hell, being careful even if you think the data source can be trusted is a good starting point - defensive programming 101.
We cannot blame our tools forever, but we can improve them.
- Perform periodic external security reviews
- Use fuzzing for all uncontrolled/user-inputs
- Use static analysis tools
- Maintain a security bounty program
- Send your employees to security training
- Use a memory safe programming language
Also, checklists. Checklists are good. And an inventory of used components, and their versions. (This makes it easy to do a CVE review from time to time, and then to automate the review eventually, so only the list maintenance will remain manual.)
Defense in depth, but not through obscurity. (There usually are low hanging fruits. Enforce use of password managers, invest in centralized credential storage, don't overdo password expiration and 2FA. Security training is also a good idea, but the real goal is to nurture a security aware office/team culture.)
Social engineering [or just plain old laziness] is still a serious threat.
Timeboxing. Set aside 1-2 days every month to work on meaningful security-conscious goals: try to lay the groundwork for that library upgrade that has been overdue for years, try to make systems reproducible (also good for DR), try to add a few simple validations here and there against local file inclusion (or whatever comes up during the month, or during the checklist review).
Also, accepting that maintaining network-facing systems has an inherent ongoing cost. (Unless you want your iToaster to eventually end up as part of a botnet.) Sometimes we have to let things go and accept that some business models (or hobbyist projects) are not worth doing sanely and securely.
But if you need all that to spot the obvious issue in the OP's original specification ... then wow.
That's not going to happen as long as all our licenses include something like "This software is distributed WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE."
> if you're writing code that ...
Being careful (I'd add: learning how to be careful) is a good platitude, though you might as well just say so without the ifs. Especially for open source, you don't always know the context in which your program will run. There's responsibility on the dev, but also responsibility on downstream users. When companies wrap a CLI program behind a web server, they're asking for trouble.
Not failing on the data parse example above is expected of competent C/C++ programmers. I do think these days if you're going to use such languages outside of well-defined low-security-impact targets like non-networked games you're morally obligated to be better than competent. I can't change what exists though.
Managed languages help so much. I do think Java achieved its mission statement of dragging many C++ programmers halfway to Common Lisp (if halfway is a GC) and preventing many security bugs, but if more people were to go all the way, I think Naggum points to a big obstacle I don't know how to overcome: "What has to go before Common Lisp will conquer the world is the belief that passing unadorned machine words around is safe just because some external force has «approved» the exchange of those machine words."
In "From the Earth to the Moon" , someone explains that the main problem of the Apollo 1 fire was that nobody thought to label that test "hazardous". Sitting on the ground was supposed to be safe.
We shouldn't expect perfection when writing C code, but we should be clear that writing C is a hazardous activity, not to be taken lightly. My surgeon is a human being, so I don't expect him to never make a mistake, but he's also a skilled and careful professional, and I do expect him to never make a 101-level mistake while holding a scalpel.
If you know your tools, there is less risk that you produce security bugs: you know the side effects of all the commands and functions. If you use a black box and trust it fully, you will (unknowingly) find ways to make new security holes no one ever thought of before.
I'm not saying that we should not innovate, just that we should not rush head first into the blue sky thinking everything is fine just because we use the shiny new thing where we supposedly can't ever write bugs.
And there are those working on mitigating some of the worst issues with those languages, maybe someday they will bear practical fruit as well.
So why doesn't everyone go with the better tool? A large problem is that experienced programmers encourage new programmers to use the older & less secure tools. I guess in a way to stay relevant.
Just this past week there has been an article about C programming almost every day on the front page here at Hacker News.
I agree in general, but not necessarily if the "better tool" doesn't run on or generate code for your target platform, or doesn't meet your performance requirements, or memory constraints, or the requirements to interface with other languages via a common ABI etc, etc etc.
And, in those situations, you need to BE CAREFUL.
> A large problem is that experienced programmers encourage new programmers to use the older & less secure tools. I guess in a way to stay relevant.
Oh I get it. Blame the older generation who wrote the platforms & tools that gave you a job in the first place. If that doesn't work, blame the tools. Blame anything but yourself for writing shit code. I see.
Sure, why not. Almost all of them write "shit code", by your definition.
Plenty of people claim they can write secure C code, and 99% of them are rookies that have learned the rules but not their own limitations, IMO.
We have newer languages, and lots of existing languages have also gotten better at interfacing with lower-level libraries.
Experienced programmers can be a huge asset, but at the same time a curse; there is no contradiction there. And I'm not arguing for a revolution to throw out all of what has been gained in software, I just say that new projects should leave the old tools behind.
I am of course also guilty of proselytizing bad ideas & writing bad code.
Decent programmers also try to err on the side of safety, use the right tool for the job, and so on. And parsers should not be written directly by hand; instead the parser code should be generated by a battle-tested library/framework (or built with parser combinators, etc.).
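Not a parser generator, but the same spirit: if you do end up hand-rolling in C anyway, funnel every read through one small, well-tested helper instead of scattering raw pointer arithmetic around. A rough sketch (names made up):

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    typedef struct { const uint8_t *p, *end; } cursor;

    /* Every read from untrusted input goes through this one function,
       so the bounds check lives in exactly one place. */
    static bool cur_read(cursor *c, void *out, size_t n) {
        if ((size_t)(c->end - c->p) < n)
            return false;        /* truncated input: fail, never over-read */
        memcpy(out, c->p, n);
        c->p += n;
        return true;
    }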
edit: N/M, I found the updates on Samsung's page that address these.