Every week we get a new AI that according to the AI-goodness-benchmarks is 20% better than the old AI, yet the utility of these latest SOTA models is only marginally higher than the first ChatGPT version released to the public a few years back.
These things have the reasoning skills of a toddler, yet we keep fine-tuning their writing style to be more and more authoritative - this one is only missing the font and color scheme, other than that the output is formatted exactly like a research paper.
There were two step changes: ChatGPT/GPT-3.5, and GPT-4. Everything after feels incremental. But that's perhaps understandable. GPT-4 established just how many tasks could be done by such models: approximately anything that involves, or could be adjusted to involve, text. That was the categorical milestone that GPT-4 crossed. Everything since then has been about slowly increasing model capabilities, which translates into which tasks can be done in practice, reliably, to acceptable standards. Gradual improvement is all that's left now.
Basically what progress on everything ever looks like.
The next huge jump will have to again make a qualitative change, such as enabling AI to handle a new class of tasks - tasks that fundamentally cannot be represented in text form in a sensible fashion.
But they are already multi-modal. The Google one can do live streaming video understanding with a conversational in-out prompt. You can literally walk around with your camera and just chat about the world. No text to be seen (although perhaps under the covers it is translating everything to text, but the point is the user sees no text)
Fair, but OpenAI was doing that half a year ago (though with limited access; I myself got it maybe a month ago), and I haven't seen it translate into anything in practice yet, so I feel like it (and multimodality in general) must be at a GPT-3 level of ability at this point.
But I do expect the next qualitative change to come from this area. It feels exactly like what is needed, but it somehow isn't there just yet.
Just yesterday I did my first Deep Research with OpenAI on a topic I know well.
I have to say I am really underwhelmed. It sounds all authoritative and the structure is good. It all sounds and feels substantial on the surface but the content is really poor.
Now people will blame me and say: you have to get the prompt right! Maybe. But then at the very least put a disclaimer on your highly professional sounding dossier.
I think it's bound to underwhelm the experts. What this does is go through a number of public search results (I think it's Google search for now; it could be an internal corpus), and hence it skips all the paywalled and proprietary data that is not directly accessible via Google. It can produce great output, but it's limited by the sources it can access. If you know more, because you understand the subject better and know sources that aren't indexed by Google yet, you'll be ahead of it. Moreover, there's a possibility that most Google-surfaced results are dumbed-down, simplified versions written to appeal to a wider audience.
This sounds like a good thing! Sounds like "it's professional sounding" is becoming less effective as a means of persuasion, which means we'll have much less fallacious logic floating around and will ultimately get back to our human roots.
Not true at all. The original ChatGPT was useless other than as a curious entertainment app.
Perplexity, OTOH, has almost completely replaced Google for me now. I'm asking it dozens of questions per day, all for free because that's how cheap it is for them to run.
The emergence of reliable tool use last year is what has sky-rocketed the utility of LLMs. That has made search and multi-step agents feasible, and by extension applications like Deep Research.
If your goal is to replace one unreliable source of information (Google first page) with another, sure - we may be there. I'd argue GPT-3.5 already outperformed Google for a significant number of queries. The only difference between then and now is that now the context window is large enough that we can afford to paste into the prompt what we hope are a few relevant files.
Yet what's essentially "cat [62 random files we googled] > prompt.txt" is now being confidently presented with academic language as "62 sources". This rubs me the wrong way. Maybe this time the new AI really is so much better than the old AI that it justifies using that sort of language, but I've seen this pattern enough times that I can be confident that's not the case.
> Yet what's essentially "cat [62 random files we googled] > prompt.txt" is now being confidently presented with academic language as "62 sources".
That's not a very charitable take.
I recently quizzed Perplexity (Pro) on a niche political issue in my niche country, and it compared favorably with a special purpose-built RAG on exactly that news coverage (it was faster and more fluent, info content was the same). As I am personally familiar with these topics I was able to manually verify that both were correct.
Outside these tests I haven't used Perplexity a lot yet, but so far it does look capable of surfacing relevant and correct info.
> all for free because that's how cheap it is for them to run.
No, these AI companies are burning through huge amounts of cash to keep the thing running. They're competing for market share - the real question is will anyone ever pay for this? I'm not convinced they will.
> They're competing for market share - the real question is will anyone ever pay for this?
The leadership of every 'AI' company will be looking to go public and cash out well before this question ever has to be answered. At this point, we all know the deal. Once they're publicly traded, the quality of the product goes to crap while fees get ratcheted up every which way.
The question of "will people pay" is answered - OpenAI alone is at something like $4 billion in ARR. There are also smaller players (relatively) with impressive revenue, many of whom are profitable.
There are plenty of open questions in the AI space around unit economics, defensibility, regulatory risks, and more. "Will people pay for this" isn't one of them.
Yeah, I don't get OP's take. ChatGPT 3.5 was basically just a novelty, albeit an exciting one. The models we've gotten since have ingrained themselves into my workflows as productivity multipliers. They are significantly better and more useful (and multimodal) than what we had in 2022, not just marginally better.
I use these models to aid bleeding edge ml research every day. Sonnet can make huge changes and bug fixes to my code (that does stuff nobody else has tried in this way before) whereas GPT 3.5 Turbo couldn’t even repeat a given code block without dropping variables and breaking things. O1 can reason through very complex model designs and signal processing stuff even I have a hard time wrapping my head around.
On the other hand, if you try to solve a problem by creating the code with AI only, and it misses just one thing, it can take more time to debug that problem than to write the code from scratch. Understanding a larger piece of AI-written code is sometimes just as hard as, or harder than, constructing the solution to your problem yourself.
As someone who's been using OpenAI's ChatGPT every day for work, I tested Perplexity's free Deep Research feature today and I was blown away by how good it is. It's unlike anything I've seen over at OpenAI and have tested all of their models. I have canceled my OpenAI monthly subscription.
Every time I see a comment about someone getting excited about some new AI thing, I want to go try it and see for myself, but I can't think of a real-world use case at the right level of difficulty that would impress me.
This would be acceptable if it meant adding one more shader, but with "modern" graphics APIs forcing us to sometimes have thousands of permutations for the same shader, every variant you add multiplies that count by 2x.
We also don't have an infinite amount of time to work on each shader. You profile on the hardware you care about, and if the choice you've made is slower on some imaginary future processor, so be it - hopefully that processor is faster enough that this doesn't matter.
A scary amount of critical FOSS projects are just some guy's hobby, and a hobbyist will often work on problems they find interesting, regardless of whether or not they're important for the "community".
If you're coming home from work and have one hour of free time, do you really want to spend it investigating why Meta+Shift+F11 doesn't work in your program under Wayland on Womperloo Linux 11.4?
That's the #1 reason why people need to be paid to work on important software. In a corporate environment (with somewhat reasonable management) boring but important work will land in someone's Jira ticket and they'll be paid to get it done.
OH, trust me - corporate environments are not at all conducive to making sure bugs get fixed for Womperloo 11.4 users. Corporate environments incentivize working on flashy, highly visible work. Bugfixing edge cases is the opposite of that, and usually falls by the wayside (unless it's the kind of company where quality is a metric that is enforced, which is rare).
The flipside of open source maintainers being unpaid (or, at least, their pay not being linked to their output) is that they have time to care deeply about quality, and many really do. Linus Torvalds is not going to sit idly by while an update is shipped out that breaks Womperloo, I'll tell you that.
> OH, trust me - corporate environments are not at all conducive to making sure bugs get fixed for Womperloo 11.4 users.
That's true, but Meta+Shift+F11 will usually be fixed if it breaks on Windows, and ironically it will very likely work on Womperloo 11.4 as well, because Wine runs basically all Windows software perfectly, unless that software makes it an explicit goal not to run under Wine.
> Linus Torvalds is not going to sit idly by while an update is shipped out that breaks Womperloo, I'll tell you that.
Linux is a great example - it's arguably the most successful FOSS project, and that's in no small part due to the fact that Linus, as well as a ton of other programmers, gets paid a lot of money to work on it.
https://blog.hiler.eu/win32-the-only-stable-abi/ makes a good point that the only really stable ABI on Linux is Win32, especially if you step an inch off the "we have the source and build from it every time" train.
Linus keeps the kernel ABI as stable as he can, but that's only a small part of Linux.
I would argue that he cared deeply about quality long before he was ever making money from Linux. But we're splitting hairs over a tangential argument at this point.
Unfortunately our CEO at a 5000 person company that heavily uses Slack, still hasn't activated their Slack account, so I don't have much to finetune the bot on.
It's not the only technical control they have - every single datapoint an app can gather is ultimately provided by the OS. They could let you disable access to metrics that have proven to be useful for fingerprinting.
They could also attempt to block known tracking code - all games with IronSource ads will run the same tracker binary, byte for byte. There are a lot of things they could do but don't, since in the mainstream they already have a pretty good reputation when it comes to privacy.
Alright, just give me 5 minutes to quickly inspect the likely over 100,000,000 lines of code that either go directly into a ROM or are part of the build tooling, and reverse engineer however many binary blobs are involved in the process.
The "you can just read the code" mindset is completely unrealistic, even for software that's orders of magnitudes smaller. If the issue at hand is entering my Google password, I'd rather do it in a ROM built by Google.
I'm glad the writer is enjoying Firefox, but posts like these make me realize we've forgotten how to use computers.
Built-in screenshot tool? "You don't need to install extensions"! Really? Dude, just press Print Screen - it even works outside a browser, if one day you have to be put through the unspeakable torture of using a native application! Single-use burner emails are also nice and all, but why exactly does this have to be linked to my browser?
Of course switching browsers is going to be a big deal when you actively go out of your way to lock yourself into your browser's "ecosystem", but better Mozilla than Google, I guess.
Moreover, Firefox makes it super easy to screenshot individual elements on a webpage, such as photos, by automatically determining the screenshot boundaries, which means I don't have to manually drag the screenshot area.
Fair point, but I highly doubt that this is how it's used most of the time. And taking 20 screenshots and compositing them in mspaint can be a meditative experience.
I would put money on it being the most common use case. It's certainly the reason I installed a screenshot extension for Chrome. I like the nod towards humour you've added at the end there, but it's probably time to cash out your chips and accept your losses.
I see what you mean, but that tool allows you to take a picture of the entire vertical length of the page. You can't easily (i.e.: with one click) do that with the print screen key.
How exactly do you use what looks like work-planning software non-commercially?
As for the license, it's their code and they can release it under whatever license they want, but they obviously shouldn't call it open source. Usually companies do this sort of thing to take advantage of FOSS's reputation, but in this case it just looks like ignorance to me.
> Usually companies do this sort of thing to take advantage of FOSS's reputation
I would say they do it because it conveys to the average person that they can get the source code and modify it if they want to. This whole source-available etc. nonsense is just confusing for everyone.
Without having a deeper look into it - could be a replacement for any non-profit or bigger sports club or whatever org that uses Slack or Zulip or whatever now.
But that's about everything that comes to my mind...
Good point. I suppose many people, like me, would not think about that... but... IANAL, but at least in Germany I think there's often some correlation between "not profit-oriented" and "no commercial purpose" - I mean, every time you let someone pay for membership in your club, it could be seen as commercial, but mostly it isn't.
None of us gets to say that there are some commercial purposes that are OK and some that are not. You have to go by what it says. Or put it this way: some day, someone who wants to use it against you can and will go by what it says, and they will be right and win that argument.
This license is really pretty bad, because while they try to allow educational use, educational use is itself usually also commercial use. If you use it in a class that you charge for, if a school that charges tuition uses it, if a youtuber even so much as uses it in a video that has either ads or a sponsor... those are all commercial uses of the software.
Relying on the rights-holder to just not pursue it, ever, including next year when the rights-holder is some new owner, is just gambling.
Trying to carve out non-commercial is just misguided and ultimately self-defeating in my opinion. It is better than purely traditionally closed software, but ultimately really not by much.
The primary value is: if you happen to rely on this software and can't avoid it, then having any form of access to the source is better than being helpless before the usual black box. At least you can figure out the mysteries that aren't fully documented, and maybe even debug or customize it.
If that's what the license says, fair enough, but that's not how I parsed it. This is not a software-specific license so it's not as clear as say the GPL, where there's very explicit language for source code, object code, compilation, execution, distribution, ...
Here's an excerpt from the license:
1. Subject to the terms and conditions of this Public License, the Licensor hereby grants You a worldwide, royalty-free, non-sublicensable, non-exclusive, irrevocable license to exercise the Licensed Rights in the Licensed Material to:

   A. reproduce and Share the Licensed Material, in whole or in part, for NonCommercial purposes only; and

   B. produce, reproduce, and Share Adapted Material for NonCommercial purposes only.
I don't understand what I'm reading in the slightest, I wouldn't touch this with a ten foot pole.
I can "produce", "reproduce" and "share" the licensed material. I'm definitely not sharing it, so if "running the code" is allowed at all, it must fall under the "produce" or "reproduce" categories. The text is pretty clear that you can only "produce" and "reproduce" it for NonCommercial purposes as well, so what does that leave me with?
Yeah, I'm with you. CC is not well suited for something that can be run - unlike an image or music, where it's pretty clear what's happening when you use it.
That's yet another example of why this non-commercial clause has been bogus since the very beginning of CC, and is particularly ill-suited to software code: no one can clearly define what commercial means, or what perimeter it applies to.
Selling the code? (you're a software vendor) You could say it's covered/forbidden by the license.
Selling the service the code provides when it is running? (you're a PaaS) You could say that too.
Selling anything unrelated to the code and the running app (say, oranges), but using the app to organise privately within a corporation? (you could be a shop owner installing the software for yourself and your team within your own building) 1/ the license says nothing about it, 2/ if it were covered and forbidden, how would it even be enforceable?
> That's yet another example of why this non-commercial clause has been bogus since the very beginning of CC, and is particularly ill-suited to software code: no one can clearly define what commercial means, or what perimeter it applies to.
Agree with you on this one, and I'd go a step further: CC licences in general are a poor fit for software.
> Selling anything unrelated to the code and the running app (say, oranges), but using the app to organise privately within a corporation? (you could be a shop owner installing the software for yourself and your team within your own building)
Excerpt (that I think is most relevant, but it's definitely a nuanced issue):
> uses by for-profit companies are typically considered more commercial [...] one exception to this pattern is in relation to uses by individuals that are personal or private in nature
Based on this, I think the common agreement would be that this is commercial use.
> how would it even be enforceable?
That's not a point for ignoring the license. If you download pirated movies, games, or other software, it's very unlikely you'll get caught, but you're still committing a crime.
However, in this case it actually can be enforceable. If the organization is e.g. a startup that raises venture funding or is getting acquired, legal due diligence will involve examination of all licences for software used.
That's not what I understand from these pages (which only reinforces that even to CC, NonCommercial is not a clear criterion).
They also define NonCommercial as "not primarily intended for or directed towards commercial advantage or monetary compensation", which perfectly matches my 3rd case above.
For instance, you perfectly can print and display an NC image as a poster in your professional office, it's not "commercial".
> That's not a point for ignoring the license.
It's definitely an argument to ignore this part of the license: an unenforceable item is effectively void.
> If you download pirated movies, games, or other software, it's very unlikely you'll get caught, but you're still committing a crime.
Beware, that's different here. Downloading/uploading pirated items is illegal. Here, the NonCommercial clause is so ambiguous that even CC doesn't know how to put it. So its enforcement is even more delicate and open to interpretation.
Planning work is not the work, it's something around the work, similar to a poster (that could very well present information valuable to the work, but still not be the work you're selling in the end).
You should never write code that's impossible to understand without fancy IDE features. If you're writing such code, the best thing you can do for yourself long term is switch to a text editor without LSP (read: Notepad) right now, which will force you to start writing sane code.
This is true for any language, but it's especially true for C++, where most large codebases have tons of invisible code flying around - implicit casts, weird overloads, destructors, all of these possibly virtual calls, possibly over type-erased objects accessed via smart pointers, possibly over many threads - if you want to stand any chance of even beginning to reason about all that, you NEED to see the actual, concrete, scientific types of things.
I code Rust just fine without any fancy IDE; you should give it a shot. The languages I find hardest to code without fancy IDE features are C and C++, due to their implicit casts. Rust is typically easy to code without IDE features due to its strong type system, lifetimes and few implicit casts.
Rust is one of my favorite new languages, but this is just wrong.
> few implicit casts
Just because it doesn't (often) implicitly convert/pun raw types doesn't mean it has "few implicit casts". Rust has large amounts of implicit conversion behavior (e.g. deref coercion, implicit into) and semi-implicit behavior (e.g. even a regular explicit ".into()" distances the conversion behavior from the target type in the code). The affordances offered by these features are significant - I like using them in many cases - but it's not exactly turning over a new leaf re: explicitness.
Without good editor support for e.g. figuring out which "into" implementation is being called by a "return x.into()" statement, working in large and unfamiliar Rust codebases can be just as much of a chore as rawdogging C++ in no-plugins vim.
Like so many Rust features, it's not breaking with specific semantics available in prior languages in its niche (C++); rather, it's providing the same or similar semantics in a much more consciously designed and user focused way.
> lifetimes
How do lifetimes help (or interact with) IDE-less coding friendliness? These seem orthogonal to me.
Lastly, I think Rust macros are the best pro-IDE argument here. Compared to C/C++, the lower effort required (and higher quality of tooling available) to quickly expand or parse Rust macros means that IDE support for macro-heavy code tends to be much better, and much better out of the box without editor customization, in Rust. That's not an endorsement of macro-everything-all-the-time, just an observation re: IDE support.
Have you actually tried coding Rust without IDE support? I have. I code C and Rust professionally with basically only syntax highlighting.
As for how lifetimes help? One of the more annoying parts of coding C is to constantly have to look up who owns a returned pointer. Should it be freed or not?
And I do not find into() to be an issue in practice.
While the C language has a lot of bad implicit casts that should have never been allowed, mainly those involving unsigned types, and which have been inherited by its derivatives, implicit casts as a programming language feature are extremely useful when used in the right way.
Implicit casts are the only reason for the existence of the object-oriented programming languages, where any object can be implicitly cast to any type from which it inherits, so it can be passed as an argument to any function that expects an argument of that type, including member functions.
The whole purpose of inheritance is to allow the programmer to use implicit casts. Otherwise, one would just declare a structure member of the class from which one would inherit in the OOP style, plus a virtual function table pointer, and one could write a program identical to the OOP program, but in a much more verbose way.
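To make the comparison concrete, here's roughly what that hand-desugared version looks like in plain C (just a sketch, with made-up names):

    #include <stdio.h>

    /* What inheritance desugars to in plain C: an embedded base struct
       plus a virtual function table pointer. Names are illustrative. */
    typedef struct Shape Shape;

    typedef struct {
        double (*area)(const Shape *self);  /* the "virtual" function */
    } ShapeVtbl;

    struct Shape {
        const ShapeVtbl *vtbl;
    };

    typedef struct {
        Shape  base;    /* "inherits" from Shape: must come first */
        double radius;
    } Circle;

    static double circle_area(const Shape *self) {
        /* the cast an OOP language would insert implicitly */
        const Circle *c = (const Circle *)self;
        return 3.14159265358979 * c->radius * c->radius;
    }

    static const ShapeVtbl circle_vtbl = { circle_area };

    /* Any function taking a Shape* works on a Circle*, but every
       conversion in and out has to be written by hand. */
    static double total_area(const Shape *s) {
        return s->vtbl->area(s);
    }

An OOP language essentially generates all of this boilerplate, including the casts, for you.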
(In the C language, not only the implicit mixed signed-unsigned casts are bad, but also any implicit unsigned-unsigned casts, because there are 2 interpretations of "unsigned" frequently used in programs: as non-negative numbers or as modular numbers. The direction of the casts that do not lose information is reversed between the 2 interpretations: for non-negative numbers it is safe to cast only to a wider type, but for modular numbers it is safe to cast only to a narrower type.

Moreover, there are also other interpretations of "unsigned", i.e. as binary polynomials or as binary polynomial residues, which cannot be inter-converted with numbers. For all these 4 interpretations there are distinct machine instructions in the instruction sets of popular CPUs, e.g. in the x86-64 and AArch64 ISAs, which may be used in C programs through compiler intrinsics.

Even worse, the latest C standards specify that the overflow behavior of "unsigned" is that of modular numbers, while the implicit casts of "unsigned" are those of non-negative numbers. This inconsistency guarantees the existence of perfectly legal C programs, without any undefined behavior, which nonetheless compute incorrect "unsigned" values, regardless of which interpretation was intended for "unsigned".)
> Otherwise, one would just declare a structure member of the class from which one would inherit in the OOP style, plus a virtual function table pointer, and one could write a program identical to the OOP program, but in a much more verbose way.
No, you don't have to do that. Once you start thinking about memory and manually managing it, you'll figure out there are simpler, better ways to structure your program than a deep class hierarchy with a gazillion heap-allocated objects, each with a distinct lifetime, all pointing at each other.
Here's a trivial example. Say you're writing a JSON parser - if you approach it with an OOP mindset, you would probably make a JSONValue class, maybe subclass it with JSONNumber/String/Object/Array. You would walk over the input string and heap allocate JSONValues as you go. The problems with this are:
1. Each allocation can be very slow as it can enter the kernel
2. Each allocation is a possible failure point, so the number of failure points scales linearly with input size.
3. When you free the structure, you must walk over the entire tree and free each object one by one.
4. The output of this function has poor memory locality, as the allocator can return blocks that are far apart in memory.
There's an alternate approach that solves all these problems. If you're thinking about the lifetimes of your data, you would notice that this entire data structure is used and discarded at once, so you allocate a single big buffer for all the nodes. You keep a pointer to the head of that buffer, and when you need a new node, you stick it in there and advance the pointer by its size. When you're done you return the first node, which also happens to be the start of the buffer.
Now you have a single point of failure - the buffer allocation, your program is way faster, you only need to free one thing when you're done, and your values are tightly packed in memory, so whatever is using its output will be faster as well. You've spent just a little time thinking about memory and now you have a vastly superior program in every single aspect, and you're happy.
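For anyone who hasn't seen the pattern, here's a minimal sketch of such an arena in C (illustrative names, no growth handling, only basic alignment):

    #include <stddef.h>
    #include <stdlib.h>

    /* Minimal bump allocator ("arena"): one big upfront allocation,
       nodes carved out by advancing an offset. */
    typedef struct {
        char  *base;  /* start of the buffer - the only thing to free */
        size_t used;  /* bytes handed out so far */
        size_t cap;   /* total buffer size */
    } Arena;

    static int arena_init(Arena *a, size_t cap) {
        a->base = malloc(cap);  /* the single point of failure */
        a->used = 0;
        a->cap  = cap;
        return a->base != NULL;
    }

    static void *arena_alloc(Arena *a, size_t size) {
        /* round up so every node is aligned for any type */
        size_t step = (size + _Alignof(max_align_t) - 1)
                      & ~(_Alignof(max_align_t) - 1);
        if (a->cap - a->used < step) return NULL;  /* arena exhausted */
        void *p = a->base + a->used;
        a->used += step;
        return p;
    }

    static void arena_free(Arena *a) {
        free(a->base);  /* frees every node at once */
    }

The parser calls arena_alloc for every node and arena_free exactly once at the end; every pointer between nodes stays valid until then.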
Memory arenas are a nice concept but I wouldn't say they're necessarily an improvement in every possible situation. They increase complexity, make reasoning about the code and lifetimes harder and can lead to very nasty memory bugs. Definitely something to use with caution and not just blindly by default.
Reasoning about the lifetimes of objects in an arena is as simple as it gets - there's only one lifetime, and pointers between everything allocated on the arena are perfectly safe. The complexity of figuring out what's going on, with respect to the number of objects and links between them, is O(1).
There's no universal "God pattern" that you can throw at every problem. I used arenas as an example as I didn't want to write a zero-substance "OOP bad" post, but my point wasn't that instead of always using OOP+inheritance you should always use an arena, it was that if you think about your memory, more often than not there's a vastly superior layout than a bunch of heap objects glued together by prayers and smart pointers.
That's all nice and fun until you want to pass stuff around and some objects might outlive the arena. Do you keep the whole arena around, do you copy, do you forget to do anything at all and spend a few days debugging weird memory bugs in prod?
"Non-negative" unsigneds can be validly cast to smaller types. That's why saturating_cast() exists. There are modular numbers where casting to a smaller value is likewise unsafe at a logical level. Your LCRNG won't give you the right period when downcast, even if the modulus value is unchanged.
inheritance isn't required for object oriented programming. the primary facet of oop is hiding implementation details behind functions that manipulate that data.
adding values to a dict via add() and removing them via remove() should not expose to the caller whether the underlying implementation is an array of hash-indexed linked lists or what. the implementation can be changed safely.
inheritance is orthogonal to object orientation. or rather, inheritance requires oop, but oop does not require inheritance.
golang lacks inheritance while remaining oop, for instance, instead using interfaces that allow any type implicitly implementing the specified interface to be used.
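to make that concrete in C (a sketch with made-up names): all the hiding you need is an opaque type behind a fixed set of functions, no inheritance anywhere.

    #include <stdlib.h>
    #include <string.h>

    /* Public interface (would live in dict.h): the struct is opaque,
       so callers can never depend on its layout. */
    typedef struct Dict Dict;
    Dict *dict_new(void);
    void  dict_add(Dict *d, const char *key, int value);

    /* Private implementation (dict.c): today a flat array; tomorrow
       hash-indexed linked lists, without breaking any caller. */
    struct Dict {
        struct { char key[32]; int value; } items[64];
        size_t count;
    };

    Dict *dict_new(void) { return calloc(1, sizeof(Dict)); }

    void dict_add(Dict *d, const char *key, int value) {
        if (d->count < 64) {
            strncpy(d->items[d->count].key, key, 31);
            d->items[d->count].value = value;
            d->count++;
        }
    }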
"Hiding implementation details" means the same as "hiding the actual data type of an object", which means the same as "performing an implicit cast whenever the object is passed as an argument to a function".
Using different words does not necessarily designate different things. Most things that are promoted at a certain time by fashions, like OOP, abuse terminology by giving new names to old things in the attempt of appearing more revolutionary than they really are.
Most classic works about OOP define OOP by the use of inheritance and of virtual functions a.k.a. dynamic polymorphism. Both features have been introduced by SIMULA 67 and popularized by Smalltalk, the grandparents of all OOP languages.
When these 2 features are removed, what remains from OOP are the so-called abstract data types, like in CLU or Alphard, where you have data types that are defined by the list of functions that can process values of that type, but without inheritance and with only static polymorphism (a.k.a. overloading).
The example given by you for hiding an implementation is not OOP, but it is just the plain use of modules, like in the early versions of Ada, Mesa or Modula, which did not have any OOP features, but they had modules, which can export types or functions whose implementations are hidden.
Because all 3 programming language concepts - modules, abstract data types and OOP - have as an important goal preventing access to implementation details, there is some overlap between them, but they are nonetheless distinct enough that they should not be confused.
Modules are the most general mechanism for hiding implementation details, so they should have been included in any programming language, but the authors of most OOP languages, especially in the past, have believed that the hiding provided by granting access to private structure a.k.a. class members only to member functions is good enough for this purpose. However this leads sometimes to awkward programs where some classes are defined only for the purpose of hiding things, for which real modules would have been more convenient, so many more recent versions of OOP languages have added modules in some form or another.
I'll readily admit the languages were marketed that way, but would argue inheritance was a functional, but poor, imitation of dynamic message dispatch. Interfaces, structural typing, or even simply swapping out object types in a language with dynamic types does better for enabling function-based message passing than inheritance does, as they avoid the myriad pitfalls and limitations associated with the technique.
Dynamic dispatch can be accomplished in any language with a function type by using a structure full of functions to dispatch the incoming invocations, as Linux does in C to implement its file systems.
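A sketch of the pattern in C (names are illustrative, not the actual kernel API):

    #include <stdio.h>

    /* Dynamic dispatch via a struct of function pointers, in the
       spirit of the kernel's file_operations tables. */
    typedef struct {
        int (*open)(const char *path);
        int (*read)(int fd, char *buf, int len);
    } fs_ops;

    static int ext4_open(const char *path) {
        printf("ext4: open %s\n", path);
        return 3;
    }

    static int ext4_read(int fd, char *buf, int len) {
        (void)fd; (void)buf;
        return len;  /* pretend we read len bytes */
    }

    static const fs_ops ext4_ops = { ext4_open, ext4_read };

    /* The caller dispatches through the table and never needs to
       know which filesystem implementation is behind it. */
    static int vfs_open(const fs_ops *ops, const char *path) {
        return ops->open(path);
    }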
I am actually OK with the conversions in C and think they are quite convenient.
Unsigned in C is modular. I am not sure what you mean by "the latest C standards specify"; this did not change. I also do not understand what you mean by "the implicit casts of unsigned are those of non-negative numbers". This seems wrong. If you convert to a larger unsigned type, the value is unchanged, and if you convert to a smaller one, it is reduced modulo.
In older C standards, the overflow of unsigned numbers was undefined.
In recent C standards, it has been defined that unsigned numbers behave with respect to the arithmetic operations as modular numbers, which never overflow.
The implicit casts of C unsigned numbers are from narrower to wider types, e.g. from "unsigned short" to "unsigned" or from "unsigned" to "unsigned long".
These implicit casts are correct for non-negative numbers, because all values that can be represented as e.g. "unsigned short" are included among those represented by "unsigned" and they are preserved by the implicit casts.
However, these implicit casts are incorrect for modular numbers, because they attempt to compute the inverse of a non-invertible function.
For instance, if you have an "unsigned char" that is a modular number with the value "3", it is incorrect to convert it to an "unsigned short" modular number with the value "3", because the same "unsigned char" "3" corresponds also to 255 other "unsigned short" values, i.e. to 259, 515, 781, 1027 and so on.
If you have some very weird reason to convert a number modulo 256 to a number modulo 65536 by choosing a certain number among those with the same residue modulo 256, then you must do this explicitly, because it is not an information-preserving conversion.
If on the other hand you interpret a C "unsigned" as a non-negative number, then the implicit casts are OK, but you must add everywhere explicit checks for unsigned overflow around the arithmetic operations, otherwise you will obtain erroneous results.
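A tiny C program makes the asymmetry visible (the printed values are easy to check by hand):

    #include <stdio.h>

    int main(void) {
        unsigned char  c = 3;   /* as a non-negative number: exactly 3 */
        unsigned short s = c;   /* implicit widening preserves the value: 3 */

        /* As a residue mod 256, though, "3" equally stands for 259,
           515, 781, ...; the widening silently picks one of the 256
           unsigned short values with that residue. */
        unsigned short t = 259;
        unsigned char  u = (unsigned char)t;  /* narrowing reduces mod 256: 3 */

        printf("%d %d %d\n", s, t, u);  /* prints: 3 259 3 */
        return 0;
    }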
The C89 standard has "A computation involving unsigned operands can never overflow, because a result that cannot be represented by the resulting unsigned integer type is reduced modulo the number that is one greater than the largest value that can be represented by the resulting unsigned integer type".
You can find a copy here: https://web.archive.org/web/20200909074736if_/https://www.pd...
Mathematically, there is no clearly defined way to map from one residue system in modular arithmetic to the next, so there is no "correct" or "incorrect" way. Mapping to the smallest integer in the equivalence class makes a lot of sense though, as it maps corresponding integers to themselves when going to a larger type, and the reverse operation is then the inverse, which is exactly what C does.
We've forgotten how to do it - the idea of dragging a button offends our modern sensibilities. You can't just drag a button, what about the layout?! What about responsive design, how will it look on a 300x200 screen and an 8k one? What about scaling? Reactivity?
Yes, and most of these problems can be mitigated very well by implementing some sort of layout constraint system. Xcode does it (AutoLayout); however, it's not nearly as pleasant and straightforward to use as the old VB form designer.
Visual Studio's form editor had decent solutions for that. And most application developers don't care about tiny or huge screens anyway, applications will just be broken if you resize them too much. The software stack they're using should allow them to make the design work on any form factor and resolution, but most of the time nobody cares about those edge cases.
And even then, UI designers worked in abstract units, and changing screen density changed the size of the elements (if you used bitmaps you would suffer, but that would be on you). VB6 paid for my first apartment in Brazil.