Idk. I just visited the site (McMaster) for like 2 minutes and found a few annoying things. I filtered for cotton (O-rings). Nothing happened for about 4 seconds after the click; then it chose something else to filter on.
Next, the animation that opens the filtering menu bugs out, and dragging it down triggers a page refresh.
I 100% agree. It's weird. I use the 'Dash to Panel' GNOME extension, and to me that should be the default.
Beyond that, I really like how GNOME merges the app bar with the maximize/minimize/close buttons, the title bar, etc., at least in most applications.
The reason the current macro is so complex is that it supports mixed types while avoiding (failing on) integer promotion bugs. A version supporting arguments of all the same type would be just as trivial as in C++ (albeit relying on GCC extensions like statement expressions).
This is addressed if you chase down the original kernel mailing list thread (the "flamewar" linked in the article). It was important to Linus that the macro actually be min/max and not min_slong, max_uint, etc., so that people couldn't "accidentally" use the untyped version. In other words, he was trying quite hard to "force" people to use these macros.
If you were then to ask "why doesn't min/max just implement a switch on each primitive type?", I think on some level it does just that.
Maybe I'm off, but to me the gist of the expression problem can be explained by contrasting how code extensibility is achieved in OOP/FP.
OOP Approach with interface/inheritance:
Easy: Adding new types (variants) of a base class/interface.
Hard: Adding new functionality to the base class/interface, as it requires implementing it in all existing types.
FP Approach with Discriminated Unions:
Easy: Adding new functions. Create a function and match on the DU; the compiler ensures all cases are handled.
Hard: Adding new types to the DU, as it requires updating all existing exhaustive pattern matches throughout the codebase.
Here's some Kotlin code. Kotlin is great because it can do both really well.
// Object-Oriented Approach
interface Shape {
    fun area(): Double
    fun perimeter(): Double
}

class Circle(val radius: Double) : Shape {
    override fun area() = Math.PI * radius * radius
    override fun perimeter() = 2 * Math.PI * radius
}

class Rectangle(val width: Double, val height: Double) : Shape {
    override fun area() = width * height
    override fun perimeter() = 2 * (width + height)
}

// Easy to add new shape
class Triangle(val a: Double, val b: Double, val c: Double) : Shape {
    override fun area(): Double {
        val s = (a + b + c) / 2
        return Math.sqrt(s * (s - a) * (s - b) * (s - c))
    }
    override fun perimeter() = a + b + c
}

// Hard to add new function (need to modify all existing shapes)
// interface Shape {
//     fun area(): Double
//     fun perimeter(): Double
//     fun draw(): String // New function
// }
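
// (Hedged aside, purely for illustration: Kotlin's extension functions give a
// partial escape hatch here. `describe` is a made-up operation bolted onto the
// existing Shape interface without editing any of the classes above; the cost
// is that the compiler can no longer check exhaustiveness, so an `else` branch
// is required.)
fun Shape.describe(): String = when (this) {
    is Circle -> "circle r=$radius"
    is Rectangle -> "rectangle ${width}x$height"
    else -> "some other shape" // unknown Shape subtypes silently fall through here
}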
// Functional Approach
sealed class ShapeFP {
    data class CircleFP(val radius: Double) : ShapeFP()
    data class RectangleFP(val width: Double, val height: Double) : ShapeFP()
}

fun area(shape: ShapeFP): Double = when (shape) {
    is ShapeFP.CircleFP -> Math.PI * shape.radius * shape.radius
    is ShapeFP.RectangleFP -> shape.width * shape.height
}

fun perimeter(shape: ShapeFP): Double = when (shape) {
    is ShapeFP.CircleFP -> 2 * Math.PI * shape.radius
    is ShapeFP.RectangleFP -> 2 * (shape.width + shape.height)
}

// Easy to add new function
fun draw(shape: ShapeFP): String = when (shape) {
    is ShapeFP.CircleFP -> "O"
    is ShapeFP.RectangleFP -> "[]"
}

// Hard to add new shape (need to update all existing functions)
// sealed class ShapeFP {
//     data class CircleFP(val radius: Double) : ShapeFP()
//     data class RectangleFP(val width: Double, val height: Double) : ShapeFP()
//     data class TriangleFP(val a: Double, val b: Double, val c: Double) : ShapeFP()
// }
However, the definitions make the two choices (adding new types vs. adding new functions/operations) sound like a toss-up.
It's the fact that I tend to find the FP/DU approach so much more frequently useful for my own/my team's code that makes me wonder if I'm missing something.
Perhaps the important distinction I've been missing is in Wikipedia's definition:
"The goal is to define a data abstraction that is extensible both in its representations and its behaviors, where one can add new representations and new behaviors to the data abstraction, without recompiling existing code, and while retaining static type safety (e.g., no casts)."
... but when I'm working on my own/team's code, it is perfectly sensible to recompile the code constantly.
The reason it matters less than people intuitively think is precisely that the choice hardly matters when you're in full control of both the operations and the types anyhow, and that's actually the most common case. Generally you are "composing" in library code, that is, just using it, not extending the library itself.
When you are extending, you actually want to choose the correct approach depending on what you need; picking the wrong one is painful in either direction.
Personally I think one of the reasons sum types are greeted with such "oh my gosh where have you been all my life" reactions is precisely that we had type extension as our only option for so long. If we had had only sum types, and type extension was given to us for the first time in obscure languages 20 years ago and they only really started getting popular in the last 5 or so, I think they'd be considered in much the same way. Just as in a world with only screwdrivers, the invention of the hammer would be hailed as a revolution... and in a world with only hammers, the invention of the screwdriver would be hailed as a revolution. But in both cases the real mystery is how the hypothetical world got that far in the first place.
Not that they aren't useful; consider what it means that I'm analogizing them to something like a hammer and a screwdriver, not an oil filter remover or something. It is weird that we were missing one of them for as long as we were in the mainstream.
And I've known about them for, let's see, at least fifteen years, and I've definitely gotten over my "oh my gosh I must use these for everything" phase.
Though I do wonder as well how many people encountered them in their "spread their wings" phase and happened to be learning about sum types just as their general programming skill was leveling up in general, and conflate the two. When you learn how to use both skillfully, I really feel like the differences collapse quite a bit. I see so, so much bad code with type-based extension, but it's not because they're using type-based extension, but just that it's bad code, regardless. Of course bad type-extension code is worse than good sum types code, but there's still times and places for both approaches when you know what you're doing with both.
What do you do when you are handed a DLL with publicly exposed types? You can't recompile someone else's DLL without the source, but the public non-sealed types are totally open to inherit from; it's just a question of how useful the inheritance would actually be if there's not a logical public interface (as provided by the original designer).
Maybe you can read all the public fields, but you can't actually modify them or create functions that modify the object. You then have to wrap the instances and fight to bridge every little behavior between their code and yours.
To me this is evidence that one core tenet of OOP, "open for extension", is in practice meaningless.
At the risk of sounding like a douchebag, I honestly believe there's A LOT of incompetence in the tech world, and it permeates all layers: security companies, AV companies, OS companies, etc.
I really blame the whole power structure. It used to look like the engineers had the power, but over the last 10 years tech has been turned upside down and exploited like any other industry, controlled by opportunistic and greedy people. Everything is about making money and shipping features; the engineering is lost.
Would you rather tick compliance boxes easily or think deeply about your critical path? Would you rather pay 100k for a skilled engineer or hire 5 cheaper (new) ones? Would you rather sell your hardware now, even if it means pushing a feature-incomplete, buggy app that ruins the experience for many, many customers? Will you listen to your engineers?
I also blame us, the software engineers; we are way too easily bossed around by these types of people who have no clue. Have professional integrity: tests are not optional or something that can be cut, they're part of the job. Gradual rollouts, feature toggles, fallbacks/watchdogs, etc. are basic tools everyone should know.
I know people really dislike how Apple restricts your freedom to use their software in any way they don't intend. But this is one of the times where they shine.
Apple recognised that kernel extensions brought all sorts of trouble for users, such as instability and crashes, and presented a juicy attack surface. They deprecated and eventually disallowed kernel extensions, supplanting them with a system extensions framework that provides interfaces for VPN functionality, EDR agents, etc.
A CrowdStrike agent using this interface couldn't panic or boot-loop macOS due to a bug in its code.
> I know people really dislike how Apple restricts your freedom to use their software in any way they don't intend. But this is one of the times where they shine.
Yes, the problem here is that the system owners had too much control over their systems.
No, no, that's the EXACT OPPOSITE of what happened. The problem is Crowdstrike had too much control of systems -- arguing that we should instead give that control to Apple is just swapping out who's holding the gun.
> arguing that we should instead give that control to Apple is just swapping out who's holding the gun.
apple wrote the OS, in this scenario they're already holding a nuke, and getting the gun out of crowdstrike's hands is in fact a win.
it is self-evident that 300 countries having nukes is less safe than 5 countries having them. Getting nukes (kernel modules) out of the hands of randos is a good thing even if the OS vendor still has kernel access (which they couldn't possibly not have) and might have problems of their own. IDK why that's even worthy of having to be stated.
don't let the perfect be the enemy of the good; incremental improvement in the state of things is still improvement. there is a silly amount of black-and-white thinking around "popular" targets like apple and nvidia (see: anything to do with the open-firmware-driver) etc.
"sure google is taking all your personal data and using it to target ads to your web searches, but apple also has sponsored/promoted apps in the app store!" is a similarly trite level of discourse that is nonetheless tolerated when it's targeted at the right brand.
This is good nuance to add to the conversation, thanks.
I think in most cases you have to trust some group of parties. As an individual you likely don't have enough time and expertise to fully validate everything that runs on your hardware.
Do you trust the OSS community, hardware vendors, OS vendors like IBM, Apple, M$, do you trust third party vendors like Crowdstrike?
For me, I prefer to minimize the number of parties I have to trust, and my trust is based on historical track record. I don't mind paying and giving up functionality.
Even if you've trusted too many people, and been burned, we should design our systems such that you can revoke that trust after the fact and become un-burned.
Having to boot into safe mode and remove the file is a pretty clumsy remediation. Better would be to boot into some kind of trust-management interface and distrust CrowdStrike updates dated after July 17, then rebuild your system accordingly (this wouldn't be difficult to implement with nix).
Of course you can only benefit from that approach if you trust the end user a bit more than we typically do. Physical access should always be enough to access the trust-management interface; anything else is just another vector for spooky action at a distance.
It is some mix of priorities along the frontier, with Apple being on the significantly controlling end, such that I wouldn't want to bother. Your trust should also be based on prediction, and giving a major company even more control over what your systems are allowed to do has historically been bad and only gets worse. Even if Apple is properly ethical now (I'm skeptical; I think they've found a decently sized niche and that most of their users wouldn't drop them even if they moved to significantly higher levels of telemetry, due in part to being a status good), there's little reason to give them that power in perpetuity. Removing that control when it is abused hasn't gone well in the past.
Microsoft is also trying to make drivers and the like safer with HVCI, WDAC, ELAM, and similar efforts.
But given that a large part of their moat is backwards compatibility, very few of those things are on by default, and even then they probably wouldn't have prevented this scenario.
These customers wouldn't be able to do that in time frames measured in anything but decades and/or they would risk going bankrupt attempting to switch.
Microsoft has far more leverage than they choose to exert, for various reasons.
I can't run a 10-year-old game on my Mac, but I can run a 30-year-old game on my Windows 11 box. Microsoft prioritizes backwards compatibility for older software.
With Apple you just need to be an Apple customer; they do a good job of crashing computers with their macOS updates, like Sonoma. I remember my first MacBook Pro Retina couldn't go to sleep because it wouldn't wake up until Apple decided to release a fix for it. Good thing they don't make server OSes.
I remember fearing every OS X update because, until they switched to just shipping read-only partition images, you had a considerable chance of hitting a bug in Installer.app that resulted in an infinite loop... (the bug existed from ~10.6 until they switched to image-based updates...)
30 years ago would be 1994. Were there any 32-bit Windows games in 1994 other than the version of FreeCell included with Win32s?
16-bit games (for DOS or Windows) won't run natively under Windows 11 because there's no 32-bit version of Windows 11 and switching a 64-bit CPU back to legacy mode to get access to the 16-bit execution modes is painful.
Maybe. Have you tried? 30-year-old games often did not implement delta timing, so they advance ridiculously fast on modern processors. Or they required a memory mode not supported by modern Windows (see real mode, expanded memory, protected mode), requiring DOSBox or another emulator to run today.
Well - recognition where it's due - that actually looks pretty great. (Assuming that, contrary to prior behavior, they actually support it, and fix bugs without breaking backwards compatibility every release, and don't keep swapping it out for newer frameworks, etc etc)
> I also blame us, the software engineers; we are way too easily bossed around by these types of people who have no clue. Have professional integrity: tests are not optional or something that can be cut, they're part of the job.
Then maybe most of what's done in the "tech-industry" isn't, in any real sense, "engineering"?
I'd argue the areas where there's actual "engineering" in software are the least discussed; an example being hard real-time systems for engine control units, ABS systems, etc.
That _has_ to work, unlike the latest CRUD/React thingy whose "engineering" process is cargo-culting whatever framework is cool now, plus subjective nonsense like "code smells" and whatever design pattern is "needed" for "scale" or some such crap.
Perhaps actual engineering approaches could be applied to software development at large, but it wouldn't look like what most programmers do, day to day, now.
How is mission-critical software designed, tested, and QA'd? Why not try those approaches?
Amen to that. Software Engineering as a discipline badly suffers from not incorporating well-known methods from Systems Engineering for preventing these kinds of disasters.
> How is mission-critical software designed, tested, and QA'd? Why not try those approaches?
Ultimately, because it is more expensive and slower to do things correctly. Though I would argue that while you lose speed initially with activities like actually thinking through your requirements and your verification and validation strategies, you gain speed later when you're iterating on a correct system implementation, because you have established extremely valuable guardrails that keep you focused and on the right track.
At the end of the day, the real failure is in the risk estimation of the damage done when these kinds of systems fail. We foolishly think that this kind of widespread disastrous failure is less likely than it really is, or the damage won't be as bad. If we accurately quantified that risk, many more systems we build would fall under the rigor of proper engineering practices.
Accountability would drive this. Engineering liability codes are a thing, trade liability codes are a thing. If you do work that isn't up to code, and harm results, you're liable. Nobody is holding us software developers accountable, so it's no wonder these things continue to happen.
"Listen to the engineers?" The problem is that there are no engineers, in the proper sense of the term. What there are is tons and tons of software developers who are all too happy to be lax about security and safe designs for their own convenience and fight back hard against security analysts and QA when called out on it.
Engineers can be lazy and greedy, too. But at least they should better understand the risks of cutting corners.
> Have professional integrity: tests are not optional or something that can be cut, they're part of the job. Gradual rollouts, feature toggles, fallbacks/watchdogs, etc. are basic tools everyone should know.
In my career, my solution for this has been to just include doing things "the right way" as part of the estimate, and not give management a "cut corners" option to select. Cutting corners not only adds more risk, but rarely saves time anyway, since you inevitably have to manually roll things back or do it over.
Sigh, I've tried this. So management reassigned the work to a dev who was happy to ship a simulacrum of the thing that, at best, doesn't work or, at worst, is full of security holes and gives incorrect results. And this makes management happy because something shipped! Metrics go up!
And then they ask why, exactly, did the senior engineer say this would take so long? Why always so difficult?
I don't know that incompetence is the best way to describe the forces at play but I agree with your sentiment.
There is always tension between business people and engineering. The engineers want things to be perfect and safe, because we're the ones who have to fix the resulting issues during nights and weekends.
The business people are interested in getting features released, and don't always understand the risks of pushing arbitrary dates.
It's a tradeoff that is well managed in healthy organizations where the two sides and leadership communicate effectively.
> The engineers want things to be perfect and safe, because we're the ones who have to fix the resulting issues during nights and weekends. The business people are interested in getting features released, and don't always understand the risks of pushing arbitrary dates.
Isn't this issue a vindication of the engineering approach to management, where you try _not_ to brick thousands of computers just to meet some internal deadline faster?
> There is always tension between business people and engineering.
Really? I think this situation (and the situation with Boeing!) shows that the tension is ultimately between responsibility and irresponsibility.
It cannot be said that this is a win for short-sighted and incompetent business people, can it?
If people don't understand the risks they shouldn't be making the decisions.
I think this is especially true in businesses where the thing you are selling is literally your ability to do good engineering. In the case of Boeing the fundamental thing customers care about is the "goodness" of the actual plane (for example the quality, the value for money, etc). In the case of Crowdstrike people wanted high quality software to protect their computers.
Yeah, good point. If you buy a carton of milk and it's gone off you shrug and go back to the store. If you're sitting in a jet plane at 30,000ft and the door goes for a walk... Twilight Zone. (And if the airline's security contractor sends a message to all the planes to turn off their engines... words fail. It's not... I can't joke about it. Too soon.)
Yes. I have been working in the tech industry since the early aughts and I have never seen it so weak on engineer-led firms. Something really happened and the industry flipped.
In most companies, businesspeople without any real software dev experience control the purse strings. Such people should never run companies that sell life-or-death software.
The reality is there is plenty of space in the software industry to trade off velocity against "competent" software engineering. Take Instagram as an example. No one is going to die if e.g. a bug causes someone's IG photo upload to only appear in a proper subset of the feeds where it should appear.
In the civil engineering world, at least in Europe, the lead engineer signs papers that make him liable if a bridge or a building structure collapses on its own. Civil engineers face literal prison time if they do sloppy work.
In the software engineering world, we have TOSs that deny any liability if the software fails. Why?
It makes my blood boil to think that the heads of CrowdStrike will maybe get a slap on the wrist and everything will slowly continue as usual once the machines get fixed.
Let's think about this for a second. I agree to some extent with what you are trying to say; I just think there's a critical thing missing from your consideration, and that is usage of a product outside its intended purpose/marketing.
Civil engineers build bridges knowing that civilians will use them and that structural failure can cause deaths. The line of responsibility is clear.
For software companies (like CrowdStrike (CS)), it MAY BE less straightforward.
A relevant real-world example is the use of consumer drones in military conflicts. Companies like DJI design and market their drones for civilian use, such as photography. However, these drones have been repurposed in conflict zones, like Ukraine, to carry explosives. If such a drone malfunctioned during military use, it would be unreasonable to hold DJI accountable, as this usage clearly falls outside the product's intended purpose and marketing.
The liability depends on the guarantees they make. If they market it as AV for critical infrastructure, such as healthcare (it seems like they do: https://www.crowdstrike.com/platform/), then by all means it's reasonable to hold them accountable.
However, software companies should be able to sell products as long as they're clear about what the limitations are, and those limitations need to be clearly communicated to customers.
We have those TOSs in the software world because it would be prohibitively expensive to make all software as reliable as a publicly used bridge. For those who died as a direct result of CrowdStrike, that's where the litigious nature of the US becomes a rare plus. And CrowdStrike will lose a lot of customers over this. It isn't perfect, but the market will arbitrate CrowdStrike's future in the coming months and years.
We’re definitely in a moment. I’ve seen a large shift away from discipline in the field. People don’t seem to care about professionalism or “good work”.
I mean back in the mid teens we had the whole “move fast and break things” motif. I think that quickly morphed into “be agile” because no one actually felt good about breaking things.
We don’t really have any software engineering leaders these days. It would be nice if one stood up and said “stop being awful. Let’s be professionals and earn our money.” Like, let’s create our own oath.
> We don’t really have any software engineering leaders these days. It would be nice if one stood up and said “stop being awful. Let’s be professionals and earn our money.”
I assume you realize that you don't get very far in many companies when you do that. I'm not humble-bragging, but I used to say just this over the past 10-15 years, even when in senior/leadership positions, and it ended up giving me a reputation of "oh, gedy is difficult", and you get sidelined by more "helpful" junior devs and managers who are willing to sling shit over the wall to please product. It's really not worth it.
It’s a matter of getting a critical mass of people who do that. In other words, changing the general culture. I’m lucky to work at a company that more or less has that culture.
Yeah I’ve found this is largely cultural, and it needs to come from the top.
The best orgs have a gnarly, time-wisened engineer in a VP role who somehow is also a good people person, and who pushes engineering quality above all else, both up and down the chain. It's a very, very rare combination.
> We’re definitely in a moment. I’ve seen a large shift away from discipline in the field. People don’t seem to care about professionalism or “good work”.
Agreed. Thinking back to my experience at a company like Sun: every build was tested on every combination of hardware and OS releases (and probably patch levels, I don't remember). This took a long time and a very large number of machines running the entire test suites. After all of that passed OK, the release would be rolled out internally for dogfooding.
To me that's the base level of responsibility an engineering organization must have.
Here, apparently, CrowdStrike let a code change through with little to no testing and immediately pushed it out to the entire world! And this from a product that is effectively a backdoor into every host. What could go wrong? YOLO, right?
This mindset is why I grow to hate what the tech industry has become.
As an infra guy, it seems like all my biggest fights at work lately have been about quality. Long abandoned dependencies that never get updated, little to no testing, constant push to take things to prod before they're ready. Not to mention all the security issues that get shrugged off in the name of convenience.
I find both management and devs are to blame. For some reason the amazingly knowledgeable developers I read on here daily are never to be found at work.
Yes. I’ve had the same experience. Literally have had engineers get upset with me when I asked them to consider optimizing code or refactor out complexity. “Yeah we’ll do it in a follow up, this needs to ship now,” is what I always end up hearing. We’re not their technical leads but we get pulled into a lot of PRs because we have oversight on a lot of areas of the codebase. From our purview, it’s just constantly deteriorating.
IMO, if you want to write code for anything mission critical you should need some kind of state certification, especially when you are writing code for stuff that is used by govt., hospitals, finance etc.
Not certification, licensure. That can and will be taken away if you violate the code of ethics. Which in this case means the code of conduct dictated to you by your industry instead of whatever you find ethical.
Like a license to be a doctor, lawyer, or civil engineer.
There’s - perhaps rightfully, but certainly predictably - a lot of software engineers in this thread moaning about how evil management makes poor engineers cut corners. Great, licensure addresses that. You don’t cut corners if doing so and getting caught means you never get to work in your field again. Any threat management can bring to the table is not as bad as that. And management is far less likely to even try if they can’t just replace you with a less scrupulous engineer (and there are many, many unscrupulous engineers) because there aren’t any because they’re all subject to the same code of ethics. Licensure gives engineers leverage.
I think that could cause a huge shift away from contributing to or being the maintainer of open source software. It would be too risky if those standards were applied and they couldn't use the standard "as is, no warranties" disclaimers.
Actually, no it wouldn't, as the licensure would likely be tied to providing the service on a paid basis to others. You could write or maintain any codebase you want. Once you start consuming it for an employer, though, the licensure kicks in.
Paid/subsidized maintainers may be a different story, though. But a professional SWE absolutely should have some kind of teeth and stake to wield when resisting pushes from management to "just do the unethical/dangerous thing".
I might have misunderstood. I took it to mean that engineers would be responsible for all code they write - the same as another engineer may be liable for any bridge they build - which would mean the "as is", "no warranty", "not fit for any purpose" clauses common to OSS would no longer apply, as those clearly skirt around the fact that you made a tool to do a specific thing, and harming your computer isn't the intended outcome.
You can already enforce responsibility via contract but sure, some kind of licensing board that can revoke a license so you can no longer practice as a SWE would help with pushback against client/employer pressure. In a global market though it may be difficult to present this as a positive compared to overseas resources once they get fed up with it. It would probably need either regulation, or the private equivalent - insurance companies finding a real, quantifiable risk to apply to premiums.
Trouble is, a bridge built by a licensed engineer stands in its location and can't be moved or duplicated. Software, however, is routinely duplicated and copied to places that might not be suitable for its original purpose.
I’d be ok with this so long as 1) there are rules about what constitutes properly built software and 2) there are protections for engineers who adhere to these rules
Far from being douchey, I think you've hit the nail on the head.
No one is perfect, we're all incompetent to some extent. You've written shitty code, I've definitely written shitty code. There's little time or consideration given to going back and improving things. Unless you're lucky enough to have financial support while working on a FOSS project where writing quality software is actually prioritized.
I get the appeal software developers have to start from scratch and write their own kernel, or OS, etc. And then you realize that working with modern hardware is just as messy.
We all stack our own house of cards upon another. Unless we tear it all down and start again with a sane stable structure, events like this will keep happening.
I think you are correct that many SWEs are incompetent. I definitely am. I wish I had the time and passion to go through a complete self-training in CS fundamentals using open course resources.
> I honestly believe there's A LOT of incompetence in the tech-world
I can understand why. An engineer with expertise in one area can be a dunce in another; the line between concerns can be blurry; and expectations continue to change. Finding the right people with the right expertise is hard.
100%. What we've seen in the last couple of decades is the march of normies into the technosphere, to the detriment of the prior natives.
We've essentially watched digital colonialism, and it certainly peaks with Elon Musk's wealth and ego attempting to buy up the digital marketplace of ideas.
Applying rigorous engineering principles is not something I see developers doing often. Whether it's incompetence on their part or pressure from 'imbecile MBAs and marketers' doesn't matter. They are software developers, not engineers. Engineers in most countries have to belong to a professional body and meet specific standards before they can practice as professionals. Any asshat can call themselves a 'software engineer', the current situation being a prime example - or was this a marketing decision?
You're making the title out to be more than it is. This won't get solved by more certification. The checkbox of having certified security is what allowed it to happen in the first place.
No. Engineering means something. This is a software 'engineering' problem. If the field wants the nomenclature, then it behooves it to apply rigour to who can call themselves an engineer or architect. Blaming middle management is missing the wood for the trees. The root cause was a bad patch. That is development's fault, and no one else's. As to why this fault could happen, well, the design of Windows should be scrutinised. Again, middle management isn't really to blame here: software architects and engineers design the infrastructure, and they choose to use Windows for a variety of reasons.
The point I'm trying to make here is that blaming "MBAs and marketing" shifts blame and misses the wood for the trees. The OP is on the holier-than-thou "engineer" trip. They are not engineers.
I think engineering only means something because of culture. It all starts from the collective culture of people who define and decide what principles are to be followed and why. All the certifications and licensing that are prerequisites to becoming an engineer are outcomes of the culture that defined them.
Today we have pockets of code produced by one culture linked (literally) with pockets of code produced by a completely different one, and somehow we expect the final result to adhere to the most principled and disciplined culture.
Not entirely true. The company I worked for, a major network equipment provider, had a customer user group that had self-organised to take turns being the first customer to deploy major new software builds. It mostly worked well.
Maybe a key innovation would be to apply backpropagation to optimize the crossover process itself. Instead of random crossover, compute the gradient of the crossover operation.
For each potential combination, "learn" (via normal backprop) how different ways of crossing over impact overall network performance. Then use this to guide the selection of optimal crossover points and methods.
This "gradient-optimized crossover" would be a search process in itself, aiming to find the best way to combine specific parts of networks to maximize improvement of the whole. It could make "leaps" instead of small incremental steps, thanks to the exploratory genetic algorithm.
I'm currently trying out Claude 3 (Opus) side by side with ChatGPT (mostly using 4o, but I have premium).
So far they're pretty much on par; sometimes Claude gets it better, sometimes ChatGPT.
I will say the cases where Claude did better were technical in nature. But... still experimenting.
I find Claude tends to be better at creative writing and to provide more thoughtful answers. Claude also tends to write more elegant code than GPT, but that code tends to be incorrect slightly more often as well. It tends to get confused by questions that aren't clearly worded, though, which GPT handles in stride.
I've found Claude useless for writing purposes (even rubber-duck brainstorming), because it eventually but inevitably makes everything more and more sesquipedalian, ignoring all instructions to the contrary, until every response is just a garbage mess of purple prose rephrasing the same thing over and over again.
I don't know what the deal is, but it's a failure state I've seen consistently enough that I suspect it has to be some kind of issue at the intersection of training material and the long context window.
I wanted to like Claude but for all my trying I could not get even Opus to understand brevity. I found myself repeating variations of "do not over-explain. just give brief answers until I ask for more" over and over until I cancelled my subscription in frustration.
I am sure there is a technical skill in getting Claude to shut the hell up and answer, but I shouldn't have to suss out its arcane secrets. There should be a checkbox.
Thank you for introducing me to "sesquipedalian", a word I've never seen before in over 20 years of venturing in the anglosphere, but one which I, as a native speaker of an Awful (and very sesquipedalian) Language, instantly fell in love with. :)
One time I asked about reading/filtering JSON in Azure SQL. Claude suggested a feature I didn't know of, OPENJSON. ChatGPT did not, but used a more general SQL technique, the CTE.
Another time I asked about terror attacks in France. Here Claude immediately summarized the motives behind them, whereas ChatGPT didn't.
Lastly, I asked for a summary of the Dune book, as I read it a few years ago and wanted to read Dune Messiah (after watching part 2 of the 2024 movie, which concludes the first Dune book). Here ChatGPT was more structured (which I liked) and detailed, whereas Claude's summary was more fluent but left out important details (I specifically said spoilers were OK).
Claude doesn't have access to internet search or plotting. ChatGPT seems more mature, with access to Wolfram Alpha, LaTeX for rendering math, matplotlib for making plots, etc.
In my case, code that runs is more convincing than code that doesn't.
Also it's useful to ask questions that you already know the answer to, in order to understand its limits and how it fails. In that case, "better" means more accurate and appropriate.
A teacher once told me there are three kinds of questions. The first is factual: a valid answer might be a number or the details of a documented event.. lots of computer things or science knowledge. The second asks purely for an opinion.. "Do you like house music?".. there is no correct answer, it is an opinion. But the third might be called a "well-reasoned judgement".. that is often in the realm of decisions.. there are factors, not everything about it is known.. goals or culture outside of the question might shape the acceptable answers.. law certainly.. lots of business things..
Extending that to an LLM, perhaps language translation sits as another type on top of those three.. translating a question or answer into another spoken language.. or via an intermediate model of some kind.. but that is going "meta"..
The point is, there are different kinds of questions and answers, and they don't all fit in the same buckets when "testing" whether an LLM is better..