I'm glad the writer is enjoying Firefox, but posts like these make me realize we've forgotten how to use computers.
Built-in screenshot tool? "You don't need to install extensions"! Really? Dude, just press Print Screen; it even works outside the browser, in case one day you have to be put through the unspeakable torture of using a native application! Single-use burner emails are also nice and all, but why exactly does this have to be tied to my browser?
Of course switching browsers is gonna be a big deal when you actively go out of your way to lock yourself into your browser's "ecosystem", but better Mozilla than Google, I guess.
Moreover, Firefox makes it super easy to screenshot individual elements on a webpage, such as photos, by automatically determining the screenshot boundaries, which means I don't have to manually drag the screenshot area.
Fair point, but I highly doubt that this is how it's used most of the time. And taking 20 screenshots and compositing them in mspaint can be a meditative experience.
I would put money on it being the most common use case. It's certainly the reason I installed a screenshot extension for Chrome. I like the nod towards humour you've added at the end there, but it's probably time to cash out your chips and accept your losses.
I see what you mean, but that tool allows you to take a picture of the entire vertical length of the page. You can't easily (i.e., with one click) do that with the Print Screen key.
How exactly do you use what looks like work-planning software non-commercially?
As for the license, it's their code and they can release it under whatever license they want, but they obviously shouldn't call it open source. Usually companies do this sort of thing to take advantage of FOSS's reputation, but in this case it just looks like ignorance to me.
> Usually companies do this sort of thing to take advantage of FOSS's reputation
I would say they do it because it conveys to the average person that they can get the source code and modify it if they want to. This whole source-available, etc. nonsense is just confusing for everyone.
Without having a deeper look into it - could be a replacement for any non-profit or bigger sports club or whatever org that uses Slack or Zulip or whatever now.
But that's about everything that comes to my mind...
Good point. I suppose many people, like me, would not think about that... but... IANAL, but at least in Germany I think there's often some correlation between "not profit-oriented" and "no commercial purpose" - I mean, every time you let someone pay for membership in your club, it can be seen as commercial, but mostly it isn't.
None of us gets to say that there are some commercial purposes that are ok and some that are not. You have to go by what it says. Or put it this way: some day someone who wants to use it against you can and will go by what it says, and they will be right and win that argument.
This license is really pretty bad, because while they try to allow educational use, educational use is itself usually also commercial use. If you use it in a class that you charge for, if a school that charges tuition uses it, if a youtuber even so much as uses it in a video that has either ads or a sponsor... those are all commercial uses of the software.
Relying on the rights-holder to just not pursue it, ever, including next year when the rights-holder is some new owner, is just gambling.
Trying to carve out non-commercial is just misguided and ultimately self-defeating in my opinion. It is better than purely traditionally closed software, but ultimately really not by much.
The primary value is: if you happen to rely on this software and can't avoid it, then having any form of access to the source is better than being helpless in front of the usual black box. At least you can work out and document the mysteries that aren't officially documented, let alone maybe being able to debug or customize.
If that's what the license says, fair enough, but that's not how I parsed it. This is not a software-specific license so it's not as clear as say the GPL, where there's very explicit language for source code, object code, compilation, execution, distribution, ...
Here's an excerpt from the license:
- 1. Subject to the terms and conditions of this Public License, the Licensor hereby grants You a worldwide, royalty-free, non-sublicensable, non-exclusive, irrevocable license to exercise the Licensed Rights in the Licensed Material to:
- A. reproduce and Share the Licensed Material, in whole or in part, for NonCommercial purposes only; and
- B. produce, reproduce, and Share Adapted Material for NonCommercial purposes only.
I don't understand what I'm reading in the slightest, I wouldn't touch this with a ten foot pole.
I can "produce", "reproduce" and "share" the licensed material. I'm definitely not sharing it, so if "running the code" is allowed at all, it must fall under the "produce" or "reproduce" categories. The text is pretty clear that you can only "produce" and "reproduce" it for NonCommercial purposes as well, so what does that leave me with?
Yeah, I'm with you. CC is not well suited for something that can be run - unlike an image or music, where it's pretty clear what's happening when you use it.
That's yet another example of why this non-commercial clause has been bogus since the very beginning of CC, and is particularly ill-suited to software code: no one is able to define clearly what commercial means, or what scope it applies to.
Selling the code? (you're a software vendor) You could say it's covered/forbidden by the license.
Selling the service the code provides when it is running? (you're a PaaS) You could say that too.
Selling anything unrelated to the code and the running app (say, oranges), but using the app to organise privately within a corporation? (you could be a shop owner installing the software for yourself and your team within your own building) 1/ the license says nothing about it, 2/ if it were covered and forbidden, how would it even be enforceable?
> That's yet another example of why this non-commercial clause has been bogus since the very beginning of CC, and is particularly ill-suited to software code: no one is able to define clearly what commercial means, or what scope it applies to.
Agree with you on this one, and I'd go a step further: CC licences in general are a poor fit for software.
> Selling anything unrelated to the code and the running app (say, oranges), but using the app to organise privately within a corporation? (you could be a shop owner installing the software for yourself and your team within your own building)
Excerpt (that I think is most relevant, but it's definitely a nuanced issue):
> uses by for-profit companies are typically considered more commercial [...] one exception to this pattern is in relation to uses by individuals that are personal or private in nature
Based on this, I think the common agreement would be that this is commercial use.
> how would it even be enforceable?
That's not an argument for ignoring the license. If you download pirated movies, games, or other software, it's very unlikely you'll get caught, but you're still committing a crime.
However, in this case it actually can be enforceable. If the organization is e.g. a startup that raises venture funding or is getting acquired, legal due diligence will involve examination of all licences for software used.
That's not what I understand from these pages (which only reinforces that even to CC, NonCommercial is not a clear criterion).
They also describe NonCommercial as “not primarily intended for or directed towards commercial advantage or monetary compensation”, which perfectly matches my 3rd case above.
For instance, you can perfectly well print and display an NC image as a poster in your professional office; it's not "commercial".
> That's not an argument for ignoring the license.
It's definitely an argument to ignore this part of the license: an unenforceable item is effectively void.
> If you download pirated movies, games, or other software, it's very unlikely you'll get caught, but you're still committing a crime.
Beware, that's different here. Downloading/uploading pirated items is illegal. Here, the NonCommercial clause is so ambiguous that even CC doesn't know how to pin it down. So its enforcement is even more delicate and open to interpretation.
Planning work is not the work, it's something around the work, similar to a poster (that could very well present information valuable to the work, but still not be the work you're selling in the end).
You should never write code that's impossible to understand without fancy IDE features. If you're writing such code, the best thing you can do for yourself long term is switch to a text editor without LSP (read: Notepad) right now, which will force you to start writing sane code.
This is true for any language, but it's especially true for C++, where most large codebases have tons of invisible code flying around - implicit casts, weird overloads, destructors, all of these possibly virtual calls, possibly over type-erased objects accessed via smart pointers, possibly over many threads - if you want to stand any chance of even beginning to reason about all that, you NEED to see the actual, concrete types of things.
I code Rust just fine without any fancy IDE; you should give it a shot. The languages I find hardest to code without fancy IDE features are C and C++, due to their implicit casts. Rust is typically easy to code without IDE features thanks to its strong type system, lifetimes, and few implicit casts.
Rust is one of my favorite new languages, but this is just wrong.
> few implicit casts
Just because it doesn't (often) implicitly convert/pun raw types doesn't mean it has "few implicit casts". Rust has a large amount of implicit conversion behavior (e.g. deref coercion, implicit into), and semi-implicit behavior (e.g. even a regular explicit ".into()" distances the conversion behavior from the target type in code). The affordances offered by these features are significant--I like using them in many cases--but it's not exactly turning over a new leaf re: explicitness.
Without good editor support for e.g. figuring out which "into" implementation is being called by a "return x.into()" statement, working in large and unfamiliar Rust codebases can be just as much of a chore as rawdogging C++ in no-plugins vim.
Like so many Rust features, it's not breaking with specific semantics available in prior languages in its niche (C++); rather, it's providing the same or similar semantics in a much more consciously designed and user focused way.
> lifetimes
How do lifetimes help (or interact with) IDE-less coding friendliness? These seem orthogonal to me.
Lastly, I think Rust macros are the best pro-IDE argument here. Compared to C/C++, the lower effort required (and higher quality of tooling available) to quickly expand or parse Rust macros means that IDE support for macro-heavy code tends to be much better, and much better out of the box without editor customization, in Rust. That's not an endorsement of macro-everything-all-the-time, just an observation re: IDE support.
Have you actually tried coding Rust without IDE support? I have. I code C and Rust professionally with basically only syntax highlighting.
As for how lifetimes help? One of the more annoying parts of coding C is to constantly have to look up who owns a returned pointer. Should it be freed or not?
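To illustrate what I mean (hypothetical functions, just a sketch): these two C signatures look alike, but the ownership rules are opposite, and nothing in the type system tells you which is which - you have to go read the docs or the implementation.

    #include <stdlib.h>
    #include <string.h>

    /* Returns a heap copy: the caller owns the result and must free() it. */
    char *copy_name(const char *s) {
        char *p = malloc(strlen(s) + 1);
        if (p) strcpy(p, s);
        return p;
    }

    /* Returns a pointer into static storage: the caller must NOT free it. */
    const char *default_name(void) {
        static const char name[] = "anonymous";
        return name;
    }

In Rust the same distinction is visible in the signature itself (an owned String vs a borrowed &str), which is exactly the kind of thing that makes reading unfamiliar code without an IDE easier.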
And I do not find into() to be an issue in practice.
While the C language has a lot of bad implicit casts that should have never been allowed, mainly those involving unsigned types, which have been inherited by its derivatives, implicit casts as a programming language feature are extremely useful when used in the right way.
Implicit casts are the only reason for the existence of object-oriented programming languages, where any object can be implicitly cast to any type from which it inherits, so it can be passed as an argument to any function that expects an argument of that type, including member functions.
The whole purpose of inheritance is to allow the programmer to use implicit casts. Otherwise, one would just declare a structure member of the class from which one would inherit in the OOP style and a virtual function table pointer, and one could write a program identical to the OOP program, but in a much more verbose way.
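To spell that out, here is a minimal sketch of the "verbose way" in plain C (made-up names, not from any real codebase): the base struct is embedded as the first member, the vtable pointer is explicit, and every "upcast" has to be written by hand where an OOP language would insert the implicit cast for you.

    #include <stdio.h>

    struct shape_vtable;

    /* "Base class": data plus an explicit virtual function table pointer. */
    struct shape {
        const struct shape_vtable *vt;
        double x, y;
    };

    struct shape_vtable {
        double (*area)(const struct shape *self);
    };

    /* "Derived class": the base struct is embedded as the first member, so a
       pointer to a circle is, at the same address, also a pointer to a shape. */
    struct circle {
        struct shape base;
        double radius;
    };

    static double circle_area(const struct shape *self) {
        const struct circle *c = (const struct circle *)self;  /* explicit "downcast" */
        return 3.14159265358979 * c->radius * c->radius;
    }

    static const struct shape_vtable circle_vtable = { circle_area };

    static void print_area(const struct shape *s) {
        printf("area = %f\n", s->vt->area(s));
    }

    int main(void) {
        struct circle c = { { &circle_vtable, 0.0, 0.0 }, 2.0 };
        print_area(&c.base);   /* the "upcast" an OOP language would do implicitly */
        return 0;
    }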
(In the C language, not only are the implicit mixed signed-unsigned casts bad, but any implicit unsigned-unsigned casts are bad as well, because there are 2 interpretations of "unsigned" frequently used in programs, as either non-negative numbers or as modular numbers, and the direction of the casts that do not lose information is reversed for the 2 interpretations, i.e. for non-negative numbers it is safe to cast only to a wider type, but for modular numbers it is safe to cast only to a narrower type. Moreover, there are also other interpretations of "unsigned", i.e. as binary polynomials or as binary polynomial residues, which cannot be inter-converted with numbers. For all these 4 interpretations, there are distinct machine instructions in the instruction sets of popular CPUs, e.g. in the x86-64 and AArch64 ISAs, which may be used in C programs through compiler intrinsics. Even worse is that the latest C standards specify that the overflow behavior of "unsigned" is that of modular numbers, while the implicit casts of "unsigned" are those of non-negative numbers. This inconsistency guarantees the existence of perfectly legal C programs, without any undefined behavior, which nonetheless compute incorrect "unsigned" values, regardless of which interpretation was intended for "unsigned".)
> Otherwise, one would just declare a structure member of the class from which one would inherit in the OOP style and a virtual function table pointer, and one could write a program identical to the OOP program, but in a much more verbose way.
No, you don't have to do that. Once you start thinking about memory and manually managing it, you'll figure out there are simpler, better ways to structure your program than having a deep class hierarchy with a gazillion heap-allocated objects, each with a distinct lifetime, all pointing at each other.
Here's a trivial example. Say you're writing a JSON parser - if you approach it with an OOP mindset, you would probably make a JSONValue class, maybe subclass it with JSONNumber/String/Object/Array. You would walk over the input string and heap allocate JSONValues as you go. The problems with this are:
1. Each allocation can be very slow as it can enter the kernel
2. Each allocation is a possible failure point, so the number of failure points scales linearly with input size.
3. When you free the structure, you must walk over the entire tree and free each object one by one.
4. The output of this function is suboptimal, as the memory allocator can return allocations that are far apart in memory.
There's an alternate approach that solves all these problems. If you're thinking about the lifetimes of your data, you would notice that this entire data structure is used and discarded at once, so you allocate a single big buffer for all the nodes. You keep a pointer to the head of that buffer, and when you need a new node, you stick it in there and advance the pointer by its size. When you're done you return the first node, which also happens to be the start of the buffer.
Now you have a single point of failure - the buffer allocation, your program is way faster, you only need to free one thing when you're done, and your values are tightly packed in memory, so whatever is using its output will be faster as well. You've spent just a little time thinking about memory and now you have a vastly superior program in every single aspect, and you're happy.
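A minimal sketch of that bump-allocating arena in C (hypothetical names, ignoring alignment and growth for brevity):

    #include <stdlib.h>

    /* One big buffer, filled front to back; everything in it is freed at once. */
    struct arena {
        char  *base;
        size_t used;
        size_t capacity;
    };

    static int arena_init(struct arena *a, size_t capacity) {
        a->base = malloc(capacity);          /* the single possible failure point */
        a->used = 0;
        a->capacity = capacity;
        return a->base != NULL;
    }

    static void *arena_alloc(struct arena *a, size_t size) {
        /* Real code would round size up for alignment and grow or chain buffers. */
        if (a->used + size > a->capacity)
            return NULL;
        void *p = a->base + a->used;
        a->used += size;
        return p;
    }

    static void arena_free(struct arena *a) {
        free(a->base);                       /* tears down every node in one call */
    }

The parser calls arena_alloc once per node, the nodes come out packed in allocation order, and arena_free drops the whole tree without walking it.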
Memory arenas are a nice concept but I wouldn't say they're necessarily an improvement in every possible situation. They increase complexity, make reasoning about the code and lifetimes harder and can lead to very nasty memory bugs. Definitely something to use with caution and not just blindly by default.
Reasoning about the lifetimes of objects in an arena is as simple as it gets - there's only one lifetime, and pointers between everything allocated on the arena are perfectly safe. The complexity of figuring out what's going on, with respect to the number of objects and the links between them, is O(1).
There's no universal "God pattern" that you can throw at every problem. I used arenas as an example as I didn't want to write a zero-substance "OOP bad" post, but my point wasn't that instead of always using OOP+inheritance you should always use an arena, it was that if you think about your memory, more often than not there's a vastly superior layout than a bunch of heap objects glued together by prayers and smart pointers.
That's all nice and fun until you want to pass stuff around and some objects might outlive the arena. Do you keep the whole arena around, do you copy, do you forget to do anything at all and spend a few days debugging weird memory bugs in prod?
"Non-negative" unsigneds can be validly cast to smaller types. That's why saturating_cast() exists. There are modular numbers where casting to a smaller value is likewise unsafe at a logical level. Your LCRNG won't give you the right period when downcast, even if the modulus value is unchanged.
inheritance isn't required for object oriented programming. the primary facet of oop is hiding implementation details behind functions that manipulate that data.
adding values to a dict via add() and removing them via remove() should not expose to the caller if the underlying implementation is an array of hash indexed linked lists or what. the implementation can be changed safely.
inheritance is orthogonal to object orientation. or rather, inheritance requires oop, but oop does not require inheritance.
golang lacks inheritance while remaining oop, for instance, instead using interfaces that allow any type implicitly implementing the specified interface to be used.
"Hiding implementation details" means the same as "hiding the actual data type of an object", which means the same as "performing an implicit cast whenever the object is passed as an argument to a function".
Using different words does not necessarily designate different things. Most things that are promoted at a certain time by fashions, like OOP, abuse terminology by giving new names to old things in the attempt of appearing more revolutionary than they really are.
Most classic works about OOP define OOP by the use of inheritance and of virtual functions a.k.a. dynamic polymorphism. Both features have been introduced by SIMULA 67 and popularized by Smalltalk, the grandparents of all OOP languages.
When these 2 features are removed, what remains from OOP are the so-called abstract data types, like in CLU or Alphard, where you have data types that are defined by the list of functions that can process values of that type, but without inheritance and with only static polymorphism (a.k.a. overloading).
The example given by you for hiding an implementation is not OOP, but it is just the plain use of modules, like in the early versions of Ada, Mesa or Modula, which did not have any OOP features, but they had modules, which can export types or functions whose implementations are hidden.
Because all 3 programming language concepts, modules, abstract data types and OOP have as an important goal preventing the access to implementation details, there is some overlap between them, but they are nonetheless distinct enough so that they should not be confused.
Modules are the most general mechanism for hiding implementation details, so they should have been included in any programming language, but the authors of most OOP languages, especially in the past, have believed that the hiding provided by granting access to private structure a.k.a. class members only to member functions is good enough for this purpose. However this leads sometimes to awkward programs where some classes are defined only for the purpose of hiding things, for which real modules would have been more convenient, so many more recent versions of OOP languages have added modules in some form or another.
I'll readily admit the languages were marketed that way, but would argue inheritance was a functional, but poor, imitation of dynamic message dispatch. Interfaces, structural typing, or even simply swapping out object types in a language with dynamic types does better for enabling function-based message passing than inheritance does, as they avoid the myriad pitfalls and limitations associated with the technique.
Dynamic dispatch can be accomplished in any language with a function type by using a structure full of functions to dispatch the incoming invocations, as Linux does in C to implement its file systems.
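Something like this toy sketch, loosely in the spirit of the kernel's struct file_operations (not actual kernel code, names made up): the "object" is just a context pointer plus a table of function pointers, and the caller dispatches through the table with no inheritance in sight.

    #include <stdio.h>
    #include <string.h>

    /* One dispatch table per "file system". */
    struct fs_ops {
        const char *name;
        size_t (*read)(void *ctx, char *buf, size_t n);
    };

    static size_t ramfs_read(void *ctx, char *buf, size_t n) {
        const char *data = ctx;              /* ctx is the backing string here */
        size_t len = strlen(data);
        if (len > n) len = n;
        memcpy(buf, data, len);
        return len;
    }

    static size_t nullfs_read(void *ctx, char *buf, size_t n) {
        (void)ctx; (void)buf; (void)n;
        return 0;                            /* nothing to read, ever */
    }

    static const struct fs_ops ramfs  = { "ramfs",  ramfs_read  };
    static const struct fs_ops nullfs = { "nullfs", nullfs_read };

    /* Dynamic dispatch: the concrete implementation is picked at runtime. */
    static void dump(const struct fs_ops *ops, void *ctx) {
        char buf[64];
        size_t n = ops->read(ctx, buf, sizeof buf - 1);
        buf[n] = '\0';
        printf("%s: \"%s\"\n", ops->name, buf);
    }

    int main(void) {
        dump(&ramfs, "hello");
        dump(&nullfs, NULL);
        return 0;
    }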
I am actually ok with the conversions in C and think they are quite convenient.
Unsigned in C is modular. I am not sure what you mean by the "latest C standards specify". This did not change. I also do not understand what you mean by the "implicit cast of unsigned are those of non-negative numbers". This seems wrong. If you convert to a larger unsigned type, the value is unchanged and if you convert to a smaller, it is reduced modulo.
In older C standards, the overflow of unsigned numbers was undefined.
In recent C standards, it has been defined that unsigned numbers behave with respect to the arithmetic operations as modular numbers, which never overflow.
The implicit casts of C unsigned numbers are from narrower to wider types, e.g. from "unsigned short" to "unsigned" or from "unsigned" to "unsigned long".
These implicit casts are correct for non-negative numbers, because all values that can be represented as e.g. "unsigned short" are included among those represented by "unsigned" and they are preserved by the implicit casts.
However, these implicit casts are incorrect for modular numbers, because they attempt to compute the inverse of a non-invertible function.
For instance, if you have an "unsigned char" that is a modular number with the value "3", it is incorrect to convert it to an "unsigned short" modular number with the value "3", because the same "unsigned char" "3" corresponds also to 255 other "unsigned short" values, i.e. to 259, 515, 771, 1027 and so on.
If for some very weird reason you want to convert a number modulo 256 to a number modulo 65536 by choosing a certain number among those with the same residue modulo 256, then you must do this explicitly, because it is not an information-preserving conversion.
If on the other hand you interpret a C "unsigned" as a non-negative number, then the implicit casts are OK, but you must add everywhere explicit checks for unsigned overflow around the arithmetic operations, otherwise you will obtain erroneous results.
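A small C example of that mismatch (assuming the usual 32-bit unsigned int and 64-bit unsigned long long):

    #include <stdio.h>

    int main(void) {
        unsigned int a = 0xFFFFFFFFu;   /* non-negative reading: 4294967295
                                           modular reading:      -1 (mod 2^32) */
        unsigned long long b = a;       /* implicit widening cast */

        /* Non-negative reading: fine, the value 4294967295 is preserved. */
        printf("%llu\n", b);            /* 4294967295 */

        /* Modular reading: not fine; -1 (mod 2^64) is 18446744073709551615,
           so the "same" modular number silently changed meaning.            */
        printf("%llu\n", 0ULL - 1ULL);  /* 18446744073709551615 */
        return 0;
    }

Which of the candidates congruent to a mod 2^32 the widening "should" pick depends entirely on whether you read the value as a count or as a residue, which is exactly the ambiguity described above.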
The C89 standard has "A computation involving unsigned operands can never overflow, because a result that cannot be represented by the resulting unsigned integer type is reduced modulo the number that is one greater than the largest value that can be represented by the resulting unsigned integer type"
You can find a copy here: https://web.archive.org/web/20200909074736if_/https://www.pd...
Mathematically, there is no clearly defined way how one would have to map from one residue system in modular arithmetic to the next, so there is no "correct" or "incorrect" way. Mapping to the smallest integer in the equivalence class makes a lot of sense though, as it maps corresponding integers to themselves when going to a larger type, and the reverse operation is then the inverse, and this is exactly what C does.
We've forgotten how to do it - the idea of dragging a button offends our modern sensibilities. You can't just drag a button, what about the layout?! What about responsive design, how will it look on a 300x200 screen and a 8k one? What about scaling? Reactivity?
Yes, and most of these problems can be very well mitigated by just implementing some sort of a layout constraint system. Xcode does it (AutoLayout), however, it's not nearly as pleasant and straightforward to use as the old VB form designer.
Visual Studio's form editor had decent solutions for that. And most application developers don't care about tiny or huge screens anyway, applications will just be broken if you resize them too much. The software stack they're using should allow them to make the design work on any form factor and resolution, but most of the time nobody cares about those edge cases.
And even then, UI designers worked in abstract units, and changing screen density changed the size of the elements (if you used bitmaps you would suffer, but that would be on you). VB6 paid for my first apartment in Brazil.
You can use modern MSVC or Clang with an old C runtime/Windows SDK. It's a pain in the ass since new compilers are way stricter with what they compile, so you get a bunch of warnings, but it will work.
It doesn't matter how many layers of Python you use to obfuscate what an LLM actually is, as long as the prompt and the data you're operating on are part of the same token stream, prompt injection will exist in one form or another.
I imagine that with native tokens for planning and reflection empowering the models I'm referring to, it is something like a search space where we've enabled new reasoning capabilities by allowing multiple progressions of gradient descent that leverage partial success in ways that weren't previously possible. Lipstick or not, this is a new pig.
1. I wonder if we need to start discussing "Prompt Injection" security for humans. Maybe Fox and Far Right marketing is a form of human Prompt Injection hack.
2. Maybe a better model for how future "Prompt Injection" will work: hacking an AI will be more about 'convincing it', kind of like how humans have to be 'convinced', like with propaganda.
3. Snow Crash had the human hacking virus based on language patterns from ancient Sumerian. Humans and machines can both be hacked by language. Maybe more research into hacking AI will give some insight into how to hack humans.
To use a narrow interpretation of "prompt injection", it comes from how all data is one undifferentiated stream. The LLM [0] isn't designed to detect self/other, let alone higher-level constructs like truth/untruth, consistent/contradictory, a theory of mind for other entities, or whether you trust the motives of those entities.
So I'd say the human equivalent of LLM prompt injection is whispering in the ear of a dreaming person to try to influence what they dream about.
That said, I take some solace in the idea that humans have been trying to hack other humans for thousands of years, so it's not as novel a problem as it first appears.
[0] Importantly, this is not to be confused with characters that human readers may perceive inside LLM output, where we can read all sorts of qualities, including ones we know the author-LLM does not possess.
My biggest problem with node-based programming interfaces is the absurd number of nodes required for even fairly simple expressions, e.g. `b*b-4*a*c` is 9 characters in most textual languages, but it would require 9 nodes in most visual scripting systems.
I imagine you could have an arbitrary "expression" node with N inputs and a textfield, but I've never seen it done and it still feels like a bigger hassle than punching out the expression in a textual language.
Well to be clear, my model (that i'm trying to make, it doesn't exist yet) is very mixed with traditional text editing.
I agree with you, which is why mine is more of a text editor augmented by nodes. Notably, each node is a variable-scope window into text. I'm imagining a single function, more so than individual AST elements. The node can then have multiple inputs and outputs (or relationships of varying types, as i'm imagining) similar to how Blender can have many fields, inputs and outputs.
The variable scoping would mean you can make a single node as large or as small as you need. Including a whole file, or a single expression within a single function, etc. The goal of this would be to visually reduce unrelated clutter, such that relationships should be clear and not dizzying.
I want the nodes to visually represent how i normally work. Which is to say i have a text editor open and often i'm only looking at a single function in the file. Then i jump in and out of the function to related functions. Similarly those are text editor nodes as well, and so the chain continues.
I should stress, i really enjoy my text editor (Helix). I'm trying to add onto that UX ultimately, rather than replace it entirely. Reduce the things i think/hope i don't care about -- ie unrelated functions in a file -- and add things like visual relationships. Imagine an aggressive traditional text editor setting which folds all code you're not using. But with a slightly different representation which hopefully adds to the experience.
Sidenote, another motivation for me is to leave the Terminal. I've been in the Terminal for 20 years now, but just leaving it is not worth it on its own. I want to toy with graphical representations that would be difficult in a Terminal. Something to justify its existence when compared to something as easy and flexible as the Terminal is.
He mentioned in the article that the corruption happens at a seemingly random spot in the middle of a large buffer, and you can only have a HW breakpoint on 4 addresses in x86-64.
Reproduce the corruption under rr. Replay the rr trace. Replay is totally deterministic, so you can just seek to the end of the trace, set a hardware breakpoint on the damaged stack location, and reverse-continue until you find the culprit.
rr only works on Linux, and Windows TTD was released after this blog post was published. Also, the huge slowdown from time travel debuggers can sometimes make tricky bugs like this much harder to reproduce.
I would certainly try with a reverse debugger if I had one, but where the repro instructions are "run this big complex interactive program for 10 minutes" I wouldn't be super confident about successfully recording a repro. At least in my experience with rr the slowdown is enough to make that painful, especially if you need to do multiple "chaos mode" runs to get a timing sensitive bug to trigger. It might still be worth spending time trying to get a faster repro case to make reverse debug a bit more tractable.