These young people have seen less in their lives, and a much larger percentage of their lives was filled with the advertising of the AI companies. So it's no wonder that they are a little slower to see the limitations of the AI models.
“All J verbs (functions and operators) have the same priority and associate right-to-left. For example, a * b + c is equivalent to a * (b + c), not (a * b) + c.”
Your point about not needing operator precedence still stands, though.
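For readers who haven't used J, here is a minimal Python sketch (my illustration, not anything from J's actual implementation) of what "equal precedence, right-to-left" means in practice:

```python
# Evaluate a flat token list right-to-left, J-style: every operator
# has the same precedence and associates rightward.
# (Toy illustration only -- real J verbs are far richer than this.)

OPS = {"+": lambda a, b: a + b,
       "*": lambda a, b: a * b,
       "-": lambda a, b: a - b}

def eval_right_to_left(tokens):
    """a * b + c is evaluated as a * (b + c)."""
    if len(tokens) == 1:
        return tokens[0]
    # Peel off the leftmost operand and operator; everything to the
    # right is evaluated first (right association).
    left, op, rest = tokens[0], tokens[1], tokens[2:]
    return OPS[op](left, eval_right_to_left(rest))

print(eval_right_to_left([2, "*", 3, "+", 4]))  # 2 * (3 + 4) = 14
# Contrast with Python's usual precedence: 2 * 3 + 4 == 10
```

The payoff of this scheme is exactly the parent's point: the evaluator never needs a precedence table at all.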
> Maybe it's rose-tinted goggles for me, but I feel like the internet was a better place (and encouraged a healthy relationship with technology) when it was a place you "went" instead of something that's with you all the time, always connected.
In my experience the difference was rather that a lot of internet users of this generation were radically pro-privacy and very opposed to any centralized service (they were willing to teach you a lot about setting up your own internet applications, websites, or web applications, on the condition that you didn't use a centralized web platform and deleted your accounts there).
> It's also good to remember how much breakfast regularly costs now. £15-20 is quite common at mid-range places - £10 of yesteryear is exceedingly rare
There exist hotels where breakfast is still very cheap (but the rooms are accordingly more expensive). The reason is that for business travelers, the budgets for meals are really tight (anything above the budget you have to pay yourself), but the maximum allowed cost for hotel rooms is typically much less tight.
To accommodate such business travelers (though these are not the only guests), the hotel makes the breakfast really cheap but the room accordingly more expensive (yet still within the typical budget of business travelers), so that such customers can claim more of their travel expenses from their employer.
> It's been a common wisdom now for decades that open source is more secure.
This is not true.
The problem rather is that the managers of many companies don't allow their programmers to apply their knowledge about security - the programmers are instead expected to crank out new features.
The United States has no motive in the constitution or otherwise to let anyone in who behaves in a hostile manner to the country, its people, or its government.
It's basic rationality. To argue otherwise is to argue that the US has no right to defend itself against external hostile attackers. Utter absurdity. What's the point of a country if it must allow anyone and everyone to enter?
Criticizing the government is not hostility. It's wanting to move towards a better country. This is EXACTLY what the 1st amendment is intended to protect. Whether the legal system decides it applies here is one question, but there are heaps of documents and communications between founding fathers and other figures making this clear. Many of those folks were immigrants themselves. So the idea that it wouldn't apply to legal immigrants is wildly out of line with the founding ethos of the country.
I think on average, outside perspectives are less well-informed than inside ones. It's a decent first-pass filter for quality, despite its inaccuracy.
I see this frequently as an engineer: my pet peeve is the "can't we just..." from someone who has no idea how the system works. Occasionally they're correct that we could make a trivial change to make something work... But most times, that "just" is hand-waving away days/weeks of effort. On the other hand, when "can't we just ..." is uttered by someone else on the same team, they're usually correct that the change is indeed trivial.
In this case, "outside" vs "inside" is actually a good proxy for how informed or accurate the opinion actually is.
Another good example is the stereotypical "expert in a field who thinks their expertise trivially transfers to unrelated fields".
To put it more simply: the distinction exists because outsiders are very frequently blind to the internal complexity of something (a system, an idea, etc), but are still willing to confidently assert their ideas anyway, leading to a frequent association of "outsider" with "poorly-formed opinions".
> The United States has no motive in the constitution or otherwise to let anyone in who behaves in a hostile manner to the country, it's people, or its government.
Here we are back at the same argument that I just made:
The definition of what counts as "hostile" is very arbitrary and can be defined to suit your political agenda.
> In fact, most modern languages are designed with little to no necessary backtracking and simple parsing, Go and Rust being noteworthy examples.
But to understand how to design grammars for languages that are easy to parse, in my opinion you have to dive quite deeply into parsing theory to understand which subtle aspects make parsing complicated.
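One concrete example of such a subtlety (my illustration, not the parent's): the classic dangling-else ambiguity. The grammar `stmt := "if" expr stmt ["else" stmt] | ident` is ambiguous, and most hand-written parsers resolve it by convention rather than by the grammar itself:

```python
# Toy recursive-descent parser showing the dangling-"else" ambiguity.
# "if a if b x else y" has two readings; greedy parsing binds the
# "else" to the *nearest* "if", which is the usual convention.

def parse_stmt(tokens, pos=0):
    """Return (tree, next_pos). Tokens are plain strings; the
    condition is assumed to be a single token for simplicity."""
    if tokens[pos] == "if":
        cond, pos = tokens[pos + 1], pos + 2
        then, pos = parse_stmt(tokens, pos)
        els = None
        if pos < len(tokens) and tokens[pos] == "else":
            els, pos = parse_stmt(tokens, pos + 1)  # greedy: nearest if wins
        return ("if", cond, then, els), pos
    return tokens[pos], pos + 1

tree, _ = parse_stmt("if a if b x else y".split())
print(tree)  # ('if', 'a', ('if', 'b', 'x', 'y'), None)
```

The point is that nothing in the grammar forces this reading; it's the parser's greedy choice that picks one of the two valid parse trees, which is exactly the kind of subtlety parsing theory makes precise.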
For personal context, so you understand where I'm coming from: I'm working on my own language, a curly-brace C-style language where the syntax is not that fancy (at least not in that regard). I want my language to look familiar to most programmers, so I'm deliberately sticking close to established norms.
I'm thankfully past the parsing stage, and so far I haven't really encountered many issues with ambiguity, but when I did, I was able to fix them.
Also, in certain cases I'm quite liberal with allowing omission of parentheses and other such control tokens, which I know leads to some cases where either the code is ambiguous (as in there's no strictly defined way the compiler is supposed to interpret it) or valid code fails to parse.
So far I have not tackled this issue, as it can always be fixed by the programmer manually adding back those parens for example. I know this is not up to professional standards, but I like the cleanliness of the syntax and simplicity of the compiler, and the issue is always fixable for me later. So this is a firm TODO for me.
Additionally, I have some features planned that would crowd up the syntax space in a way that I think would probably need some academic chops to fix, but I'm kinda holding off on those, as they are not central to the main gimmick™ and I want to release this thing in a reasonable timeframe.
I don't really have much of a formal education in this, other than reading a few tutorials and looking through a few implementations.
Btw, besides just parsing, there are other concerns in modern languages - IDE support (files should be parseable independently, etc.), error recovery, readable errors, autocomplete hints - which I'm not sure are addressed in depth in the dragon book. These are features I do want.
My two cents is that for a simple modern language, you can get quite far with zero semantic model, while with stuff like C++ (with macros), my brain would boil at the thought of having to write a decent IDE backend.
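To make the error-recovery concern concrete, here's a toy sketch of panic-mode recovery (a common textbook technique, not any particular compiler's implementation): on a parse error, record a diagnostic, then skip ahead to a synchronization token such as `;` so the parser can keep going and report multiple errors per file:

```python
# Panic-mode error recovery in a toy statement parser. A "statement"
# here is just  ident ';' . On error we skip tokens until a sync
# token, so one bad token doesn't kill the whole parse.

SYNC = {";", "}"}

def parse_stmts(tokens):
    errors, stmts, pos = [], [], 0
    while pos < len(tokens):
        if tokens[pos].isidentifier():
            if pos + 1 < len(tokens) and tokens[pos + 1] == ";":
                stmts.append(tokens[pos])
                pos += 2
                continue
            errors.append(f"missing ';' after {tokens[pos]!r}")
        else:
            errors.append(f"unexpected token {tokens[pos]!r}")
        while pos < len(tokens) and tokens[pos] not in SYNC:
            pos += 1                      # panic: skip to a sync point
        pos += 1                          # consume the sync token

    return stmts, errors

stmts, errors = parse_stmts(["a", ";", "?", "b", ";", "c", ";"])
print(stmts)   # ['a', 'c']
print(errors)  # one error for the '?', and parsing continued afterwards
```

The same idea scales up: real compilers pick sync sets per grammar rule, but the principle of "report, resynchronize, continue" is the core of readable multi-error output.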
Colour of bits isn't a property of the bits themselves. It's provenance - facts about the history of the thing.
There may be no trace from pure noise back to the original work, but you didn't get that particular noise randomly; you in fact got it from the original work.
Once you understand that law cares less about the thing itself, and more about the causal chain that led to it, it stops seeming magical and becomes perfectly reasonable.
(Also, FWIW, it's not that far conceptually from code = data, but there's still tons of technical people who can't comprehend the fact that there is no code/data distinction in reality. "Code" vs "data" too isn't a property of bits, it's only a matter of perspective.)
In this particular case there's also the simpler, more technical/mathematical argument: you cannot possibly just "accidentally" have that exact noise. Getting those specific bits, instead of any other sequence from a space of random numbers that long, requires you to expend effort at least equivalent to possessing the exact copyrighted work that happens to fall out of the XOR exercise.
Except there are two people, B and C, each holding a noise vector. The only thing you can prove is that if you XOR both noise vectors together, you get a copyrighted work.
Both people will say they are innocent and that the other person used the other's noise vector and the copyrighted work to produce their noise vector.
> Both people will say they are innocent and that the other person used the other's noise vector and the copyrighted work to produce their noise vector.
Simple: because you can't find out who tells the truth, simply jail both. :-)
If they are all posting their noise vectors up on xor-music.com, sure. If they have valid reasons for making available a specific 'noise' vector (maybe they can prove it decrypts to something useful), then probably not.
Judges and juries don't need guilt to be mathematically proven; they just have to be pretty sure.
Yes. Or at least hint at it, at which point someone will probably volunteer or let slip some information that gives you a rough shape of the causal chain, at which point you know where to dig and pressure further, and eventually convince someone to confess or get a warrant to be sure.
If the prosecuting side has a reason to care that much, it doesn't matter whether it's 10 or 100 people - in fact, if it's 100 people, the original source is in deeper shit because this is now obviously not just personal use, but distribution.
Sounds nice on paper but it becomes exponentially difficult when many people are involved, and some groups of the vectors XOR'ed together also demonstrably result in legal content.
There is no trace from a dead body back to the original act of killing, but police regularly manage to link them anyway (at least when the body had a large enough bank account).
They do this by means such as "questioning people" and "finding evidence". For example, if you have a file on your computer describing your plan to use XOR to infringe copyright, that would be considered "evidence".
This glosses over the fact that the first crime is considerably messy, while the other is extremely clean and can be committed where the law cannot see without a warrant.
> This is because legal people want something to exist that does not physically exist.
No law exists "physically".
Besides: even in computer science the situation is more complicated, as is explained in the linked articles. Relevant excerpt from the first linked article:
"Child pornography is an interesting case because I find myself, and I think many people in the computing community will find themselves, on the opposite side of the Colourful/Colour-blind gap from where I would normally be. In copyright I spend a lot of time explaining why Colour doesn't exist and it doesn't matter where the bits came from. But when it comes to child pornography, I think maybe Colour should make a difference - if we're going to ban it at all, it should matter where it came from. Whether any children were actually involved, who did or didn't give consent, in short: what Colour the bits are. The other side takes the opposite tack: child pornography is dangerous by its very existence, and it doesn't matter where it came from. They're claiming that whether some bits are child pornography or not, and if so, whether they're illegal or not, should be entirely determined by (strictly a function of) the bits themselves. Legality, at least under the obscenity law, should not involve Colour distinctions.
[...]
The computer science applications of Colour seem to be mostly specific to security. Suppose your computer is infected with a worm or virus. You want to disinfect it. What do you do? You boot it up from original write-protected install media. Sure, you have a copy of the operating system on the drive already, but you can't use that copy - it's the wrong Colour. Then you go through a process of replacing files, maybe examining files, swapping disks around and carefully write-protecting them; throughout, you're maintaining information on the Colour of each part of the system and each disk until you've isolated the questionable files and everything else is known to be the "not infected with virus" Colour. Note that developers of Web applications in Perl use a similar scorekeeping system to keep track of which bits are "tainted" by influence from user input.
When we use Colour like that to protect ourselves against viruses or malicious input, we're using the Colour to conservatively approximate a difficult or impossible to compute function of the bits. Either our operating system is infected, or it is not. A given sequence of bits either is an infected file or isn't, and the same sequence of bits will always be either infected or not. Disinfecting a file changes the bits. Infected or not is a function, not a Colour. The trouble is that because any of our files might be infected including the tools we would use to test for infection, we can't reliably compute the "is infected" function, so we use Colour to approximate "is infected" with something that we can compute and manage - namely "might be infected". Note that "might be infected" is not a function; the same file can be "might be infected" or "not (might be infected)" depending on where it came from. That is a Colour.
[...]
Random numbers have a Colour different from that of non-random numbers. [...]
Note my terminology - I spoke of "randomly generated" numbers. Conscientious cryptographers refuse to use the term "random numbers". They'll persistently and annoyingly correct you to say "randomly generated numbers" instead, because it's not the numbers that are or are not random, it's the source of the numbers that is or is not random. If you have numbers that are supposed to come from a random source and you start testing them to make sure they're really "random", and you throw out the ones that seem not to be, then you end up reducing the Shannon entropy of the source, violating the constraints of the one-time pad if that's relevant to your application, and generally harming security. I just threw a bunch of math terms at you in that sentence and I don't plan to explain them here, but all cryptographers understand that it's not the numbers that matter when you're talking about randomness. What matters is where the numbers came from - that is, exactly, their Colour.
So if we think we understand cryptography, we ought to be able to understand that Colour is something real even though it is also true that bits by themselves do not have Colour. I think it's time for computer people to take Colour more seriously - if only so that we can better explain to the lawyers why they must give up their dream of enforcing Colour inside Friend Computer, where Colour does not and cannot exist."
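The Perl taint mechanism mentioned in the excerpt can be sketched in a few lines of Python (a toy illustration, not Perl's actual implementation): the "Colour" travels as metadata beside the bytes, never inside them:

```python
# Taint tracking as a Colour: identical strings can carry different
# taint flags depending on where they came from, and the flag
# propagates through operations that mix data.

class Tainted:
    def __init__(self, value, tainted):
        self.value, self.tainted = value, tainted

    def __add__(self, other):
        # Combining anything with tainted data yields tainted data.
        return Tainted(self.value + other.value,
                       self.tainted or other.tainted)

def from_user(s):        # anything from user input starts tainted
    return Tainted(s, True)

def from_config(s):      # trusted source: untainted
    return Tainted(s, False)

query = from_config("SELECT * FROM t WHERE id = ") + from_user("42")
same_bytes = from_config("SELECT * FROM t WHERE id = 42")

assert query.value == same_bytes.value           # identical bits...
assert query.tainted and not same_bytes.tainted  # ...different Colour
```

The two final assertions are the whole point of the article's argument: "might be malicious" is not a function of the bytes, only of their history, so it has to be tracked out-of-band.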
> But then, pushing regular languages theory into the curriculum, just to rush over it so you can use them for parsing is way worse.
At least in the typical curriculum of German universities, the students already know the whole theory of regular languages quite well from their theoretical computer science lectures, so in a compiler lecture the lecturer can indeed rush over this topic, because it is just repetition.