I suspect you are a human who also has a bot running on the same account name. If not, you are using a tool. Failing that, you are an avid digital archaeologist.
This used to be a thing - I remember my father excitedly configuring a made-to-order laptop from ZipZoomFly[0] back in the day. The market wasn't kind to them, though: the ecosystem around replaceable laptop parts never matured to the point where it was competitive with proprietary designs, and standards constantly changed because of the form factor's constraints, so the dream of just replacing a single part never materialized.
Closest thing to that dream now is the Framework laptop, which does have replaceable parts.
Resellers of Clevo barebones offer a fair bit of flexibility to spec the system to order. It's not full freedom to mix and match, but still quite flexible. The price is that it is far less sleek, bulkier and heavier than most other laptops.
I don't know that this CVE would be trivial to knock out.
My CVSS score for this is as follows:
CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:L/A:L (I said "Low" for both the integrity and availability impact, since I don't know whether the DoS issue is real)
That reads out to a "Medium" CVE.
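For reference, the base-score arithmetic behind that vector can be sketched out. This is a minimal illustration of the scope-unchanged case only, using the metric weights from the CVSS v3.1 specification, not a full vector parser:

```python
def roundup(x):
    # "Roundup" as defined in the CVSS v3.1 spec: ceiling to one decimal,
    # using fixed-point arithmetic to sidestep floating-point noise.
    i = int(round(x * 100000))
    return i / 100000.0 if i % 10000 == 0 else (i // 10000 + 1) / 10.0

def base_score(av, ac, pr, ui, c, i, a):
    # c/i/a weights: None=0.0, Low=0.22, High=0.56
    iss = 1 - (1 - c) * (1 - i) * (1 - a)
    impact = 6.42 * iss                        # scope unchanged
    exploitability = 8.22 * av * ac * pr * ui
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

# AV:N/AC:L/PR:N/UI:N with C:N, I:L, A:L
score = base_score(av=0.85, ac=0.77, pr=0.85, ui=0.85, c=0.0, i=0.22, a=0.22)
print(score)  # 6.5, which lands in the 4.0-6.9 "Medium" band
```

Swapping in C:H/I:H/A:H with the same exploitability metrics gives the familiar 9.8 Critical, which is a decent sanity check on the weights.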
I have, in the past, worked with some banks, and they want every CVE with a CVSSv3 score of 4.0 or higher enumerated and either remediated or covered by a remediation plan.
Maybe you're significantly better than I am at this, but I am hesitant to look at any CVE and declare that it isn't a problem for how I have configured my software. Unless I have really dug into the issue, I get nervous saying a CVE is not going to affect my software.
I don't think they're criticizing - I think it's an observation.
It makes a lot of sense, and we're early-ish in the tech cycle. The manual, Google, and ChatGPT are all just tools in the toolbelt. If you (an expert) are giving this advice, it should become mainstream soon-ish.
I think this is where personal problem-solving skills matter. I use ChatGPT to kick off a lot of new ideas or projects with unfamiliar tools or libraries, but the result isn't always good. From there, a good developer will take the information from the AI tool and supplement it by digging into current documentation.
If you can't distinguish bad from good with LLMs, you might as well be throwing crap at the wall hoping it will stick.
>If you can't distinguish bad from good with LLMs, you might as well be throwing crap at the wall hoping it will stick.
This is why I think LLMs are more of a tool for the expert rather than for the novice.
They give more speedup the more experience one has on the subject in question. An experienced dev can usually spot bad advice with little effort, while a junior dev might believe almost any advice due to the lack of experience to question things. The same goes for asking the right questions.
This is where I tell younger people thinking about getting into computer science or development that there is still a huge need for those skills. I think AI is a long way off from replacing problem-solving skills. Most of us who have had the (dis)pleasure of repeatedly reworking and building on our prompts to get close to what we're looking for will be familiar with this. Without the general problem-solving skills we've developed, at best we luck out and get just the right solution; more likely we end up with something that only gets partway to what we actually need. Solutions will often be inefficient or subtly wrong in ways that still require knowledge of the technology/language the LLM is producing.

I even tell my teenage son that if he really does enjoy coding and wishes to pursue it as a career, he should go for it. I shouldn't be, but I'm constantly astounded by the number of people who take output from an LLM without checking it for validity.
It kept war out of central Europe from 1945 until 2022 (I'm not 100% sure we shouldn't count Georgia/Bosnia/Chechnya/Kosovo, so I'll say "central Europe").
I don't think two nuclear-armed powers have ever declared war on each other - despite two nuclear-armed powers currently being in active conflict (India and China) and another few being incredibly geopolitically unfriendly (India/Pakistan and Israel/Iran).
The whole idea behind MAD initially was that if Russia decided to get ideas in Europe, the Western powers would stop them with a nuclear curtain. That's why France has a "warning shot" nuclear doctrine, and the US hasn't ruled out Nuclear First Strike.
IMO, for what it was trying to stop, it worked. Ask people in China and India - it seems to be working for them as well.
EDIT: as an amendment to this: would Russia have been so bold as to invade Ukraine if the 1994 surrender of Ukraine's nuclear arsenal hadn't happened?
Like I said. A lack of war will be taken as direct evidence that it works (not any other causes). And the only way to disprove it conclusively is if we all wipe each other out.
I guess my issue with your statement is that it seems almost impossible to disprove short of someone doing the unthinkable. We have history to suggest (note: not prove) that MAD works.
It’s kinda like economics. We can’t really prove anything in economics works the way we think it does, but we have a bunch of REALLY GOOD suggestions to support our hypotheses.
MAD isn’t a natural law - it’s a social construct, very much like economics.
Ping ID is "SAML" - they actually don't comply with the spec. If you remove the Bearer element from the SAMLRequest, you should be on your way. Ask me how I know.
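For what it's worth, here is a rough sketch of that workaround (assuming "Bearer" really is the local name of the element to strip; the namespace handling and sample usage here are my own guesses, not taken from a real Ping ID exchange):

```python
import xml.etree.ElementTree as ET

def strip_bearer(saml_request_xml: str) -> str:
    """Drop any element whose local name is 'Bearer' from the request."""
    root = ET.fromstring(saml_request_xml)
    for parent in root.iter():
        # Iterate over a copy so removals don't upset the loop.
        for child in list(parent):
            # child.tag looks like '{namespace-uri}LocalName'; match on the
            # local name so any namespace prefix is tolerated.
            if child.tag.rsplit("}", 1)[-1] == "Bearer":
                parent.remove(child)
    return ET.tostring(root, encoding="unicode")
```

You'd run the decoded SAMLRequest (inflated first, if it came over the Redirect binding) through something like this before re-encoding and sending it on.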