I remember adding this to a school's website I developed back in the day.
And I actually think, odd technical choices aside, it's not a bad idea. PICS is far preferable to governments trying to insert themselves into the internet and ban encryption, or private companies trying to maintain infinite lists of "good" and "bad" stuff.
I imagine that the porn sites would be pretty happy to add the content ratings too, since they don't want people who have opted out to visit. You could add a bit of government policing there if you wanted, forcing websites to properly add metadata, and forcing schools to set the filtering up, and that would be fine by me.
The university I work at wanted to make a video archive of performance art openly available on the web. Some of this can be 'challenging' (their preferred term) so we wanted to ensure people didn't just stumble into it. On the other hand we didn't want any form of mediated access which is a barrier for potential users, and also a maintenance burden.
I remembered PICS and hoped something similar had stepped in since it seemed an ideal solution: we describe our content, users make a choice based on that metadata. However we found nothing.
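For context, the self-description PICS envisaged was just a label embedded in each page. A hedged example of what an RSACi label looked like, reconstructed from memory of the spec (the URL in `for` is a placeholder):

```
<meta http-equiv="PICS-Label" content='
 (PICS-1.1 "http://www.rsac.org/ratingsv01.html"
  l gen true
  for "http://example.org/archive/"
  r (n 2 s 0 v 0 l 0))'>
```

Here `n`, `s`, `v` and `l` are RSACi's nudity, sex, violence and language categories, each on a 0-4 scale, so a user agent only had to compare four numbers against the user's settings.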
(We ended up adding a ~7 second explanation / warning at the start of every access copy, which is ... ok)
I suppose this is brainstorming for a problem that's already been solved, but I wonder if the flags for how much a viewer can see could be user-modified and stored in a cookie. A few seconds before the video starts, the viewer would get a reminder of what their preferences are and what the video's rating is. If the video is "more challenging" than the user's settings, then of course the video won't play. And if the user hasn't set anything, they can just click "Skip the configuration, show me the video".
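A minimal sketch of that idea, assuming a made-up cookie format ("v=1&n=0&s=0&l=0") and category names loosely modelled on RSACi's 0-4 scale; none of this is a real API:

```python
# Viewer preferences: maximum acceptable level per category.
DEFAULT_PREFS = {"violence": 0, "nudity": 0, "sex": 0, "language": 0}
SHORT_NAMES = {"v": "violence", "n": "nudity", "s": "sex", "l": "language"}

def parse_prefs_cookie(cookie):
    """Return the viewer's maximum acceptable levels, or None when the
    cookie was never set (i.e. the viewer never configured anything)."""
    if not cookie:
        return None
    prefs = dict(DEFAULT_PREFS)
    for pair in cookie.split("&"):
        key, _, value = pair.partition("=")
        if key in SHORT_NAMES:
            prefs[SHORT_NAMES[key]] = int(value)
    return prefs

def may_play(video_rating, prefs):
    """Play only if the video exceeds none of the viewer's levels.
    An unconfigured viewer gets the 'Skip the configuration' path."""
    if prefs is None:
        return True
    return all(video_rating.get(cat, 0) <= level
               for cat, level in prefs.items())
```

The pre-roll reminder would then just render `prefs` next to `video_rating` before calling `may_play`.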
I still kinda fail to see how this would work in practice. For instance, I'd set it up so that a child could see full nudity (let's say Greek art or primitive warriors are fine) but no sex (no masturbation, no intercourse).
A site would then have to tag each page with the relevant metadata, including user-generated content, and decide whether any specific picture is nudity but not sex. But then my definition of what counts as sex will probably not match the site's. Someone fully clothed rubbing against a table should be banned for me but would probably pass without a tag, etc.
Basically, the very concept of having tags accurately convey the nuances of what's happening in a page or piece of media is complicated, and having that happen at the internet level looks to me like an unsolvable problem.
Some would say "better something than nothing", but I think we quickly fall into problematic cases when, for instance, Muslim libraries start to rely on it to block profane pictures, where a decently maintained whitelist would work better.
No ratings system can be perfect. But with precisely defined categories, it could work.
User-generated content could be specifically tagged as such, with the parent being able to choose between “allow all”, “allow if has rating and it matches the standard allowed levels”, or “block all”.
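That three-way choice is simple enough to sketch; the names and dict shapes below are invented for illustration:

```python
# The three parental policies for user-generated content.
ALLOW_ALL, ALLOW_IF_RATED, BLOCK_ALL = "allow all", "allow if rated", "block all"

def ugc_allowed(policy, rating, allowed_levels):
    """rating is a dict of category -> level, or None when the content
    carries no rating at all."""
    if policy == ALLOW_ALL:
        return True
    if policy == BLOCK_ALL:
        return False
    # ALLOW_IF_RATED: unrated content is blocked; rated content must
    # stay within the standard allowed levels in every category.
    if rating is None:
        return False
    return all(rating.get(cat, 0) <= level
               for cat, level in allowed_levels.items())
```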
To me the intractable part is that we still rely on either:
- A: content creators to properly/genuinely tag their content
- B: a central entity to tag each piece of content
To clear the simple case first: the B route is basically akin to the whitelisting efforts we have now, and going PICS or not doesn't look to me to bring much to the table. If anything, PICS needs more management on the individual side than the current MDM approach where, for instance, the school admins have a whitelist of sites available from within the school network.
Going the A route, we still hit every moderation issue ever met by a user-generated content system. The question becomes: is the PICS system better than a bespoke filtering system?
I'm not sure, because:
Depending on the site I'll probably want different options (I might want exceptions to the "no sex" filtering on medical discussion sites for instance).
The site would be offering very tailored and specific options, set to reasonable values by default. Site-specific schemes apparently could be implemented in PICS, but then I'd have to manage them in my browser's PICS options, and I might have no idea what half of them even mean, whereas the site would have in-context information on exactly what those options do.
As funny as it is, having the filtering options bound to accounts instead of the browser could yield better results. I could have "personas" depending on what I intend to look for, which would be a pain with browsers (especially site by site).
All in all, I have the feeling it's complicated either way, and PICS trying to be clever and simple brings different pitfalls by being the intermediary.
>Basically, the very concept of having tags to accurately convey the nuances of what's happening in a page or media is complicated, and have that happen at the internet level looks to me like an unsolvable problem.
SOTA computer vision systems are capable of handling this automatically. I've implemented an augmented GPT-4 solution that categorizes nude material into several categories and it outperforms the average human tagger.
The question is: Do you want this to be opt-in or opt-out? If you block all websites which don't offer a rating, then you would probably block way too much, and if you block only websites which are rated adult according to the system, then it is unclear why foreign porn websites couldn't simply refuse to rate their sites, creating a competitive advantage.
For PICS, once you enable it in the browser (which isn't normally the default) then it does indeed block all websites unless they have content ratings. Which seems like reasonable default behaviour, provided that schools / concerned parents / etc are aware that they need to press a button to make the web "safe" for their kids. It doesn't interfere with what adults can do.
The most interesting thing for me as a European is looking at the difference in levels between sex and violence according to RSACi. Look at level 1, for instance: on one side we have passionate kisses (with tongue, I guess?) and on the other we're already killing animals. At level 2 we are already killing people, while sex level 2 is merely clothed sexual touching. Beating someone? Bah, not even contemplated.
Yeah, it seems like here in America, violence doesn't come close to sexual acts unless it has real gore.
That being said, the levels don't have to be comparable to each other. Level 2 violence doesn't have to be morally equivalent to level 2 sexual acts. It just needs to be consistent within its own scale. If they were supposed to be morally equivalent, you wouldn't need 4 separate level sliders.
As a father, I always found violence to be much more awkward to watch with the family than sex. Sex is easy to explain in a way that even young children can understand. Violence less so.
Fair enough, that's a point of view I did not consider. But still, the sex scale could perfectly well be made more comparable by, for example, putting something like "forced or violent sex" at level 4, just as we have death with gore at level 4 for violence.
And I know we are talking about a '90s rating system, but I lived through that era and it's something I always thought, even back in the day. For example, 80% of the movies with guns, bullets and blood AND a sex scene never show women's breasts or genitalia, not even for a split second.
I think killing of any kind is obviously more "titillating" than any mild or even moderate sexual content, to someone not already exposed to and desensitized to it.
Since I've been exposed to violent movies (and shows, and videogames) for 25+ years, I can neither remember nor imagine how it feels for someone seeing murder on TV for the first time, but for as long as I can remember, it's been the case that a similar-looking scene could feel cheesy, boring, neutral, or emotionally devastating, depending on the story, setting, tone and other contextual elements. For example, scenes from Saving Private Ryan or The Band of Brothers are still burned into my mind, having both moved and disturbed me greatly as a teenager - while plenty of other movies featured similar ones, and watching them I'd just yawn.
You're looking at the ratings backwards. The sex rating is less sensitive than violence because violence is less tolerated. The sex rating is also more nuanced as sex/intimacy is more nuanced than violence. The ratings in fact rate violence as super bad.
I remember growing up puzzling over some of these corners of the OS I didn’t understand. I understood its purpose but not how it worked. The ODBC configs in Control Panel were another one. I didn’t know at the time it would be 10-20 years before some of my curiosities were satisfied.
What has happened before will happen again. What has been done before will be done again.
It will start with Mozilla's new AI[1] that will soon be deciding on whether ecommerce offers and reviews are genuine, displaying accusations for the ones it decides shouldn't be taken into consideration by Firefox users. We all know it'll only be a matter of time before topics such as inclusivity, vulgarity, and at least all of the others IE's Content Advisor covered are added back into the mix.
One aspect that concerns me even more is the profiling and labeling of individuals.
I have a friend who is - as he puts it himself - "even beyond the [autism] spectrum". He recently showed me how often he's falsely accused on Reddit and Discord of being a bot or of having used ChatGPT due to the way he writes. It happens so much and in such aggressive tones that I'm surprised he's still bothering to engage with others on the internet at all. Very sad to see.
It wouldn't surprise me if this AI picked up on his strange writing style (hard to describe... very structured) and decided his reviews or whatever consistently aren't genuine or human, branding them accordingly. He wouldn't even be aware that his Amazon reviews are being invalidated, and that spending any effort on them is pointless, since he doesn't use Firefox himself.
There are only two things Mozilla could do to remedy this kind of situation: have their AI also track and profile people with possible mental/neurodevelopmental disorders, or have a system where people themselves can prove to Mozilla that they're human and can't help writing in that manner due to the way their brain is wired. Neither of which, I'm sure we can all agree, would be acceptable in the slightest.
This is a really messed up development and it's been making my friend really anxious about what other things will be coming in the future.
Thank you for talking about your friend. As someone on the autism spectrum, I am worried about a future where most of the humans would become so used to ChatGPT-enhanced human responses, that they might find my broken approach of using language as being another bot response, or as coming from someone who happens to be an incompetent user of ChatGPT. I am not guessing which is worse, but to have your attempt at communicating being deemed suspicious does hurt.
This is one of the aspects of ChatGPT that concerns me a lot. It will result in a greater mistrust of others (it already is doing this) as people have greater difficulty telling what's real and what's not.
I could see how the default position could become that "everyone is fake until proven otherwise", at which point, meaningful online discussions become very difficult or impossible.
I’m genuinely delighted by the “here’s a demo, except it’s only served through a protocol unsupported by the browsers you’d want to try it on” thing. It fits in so beautifully.
Reminds me of how in 2012 I tried to install IE6 on a Windows 2000 Server machine, to improve my experience over IE5 for the purposes of making a simple HTA GUI for some simple scripting, but even once I could find a copy of the IE6 installer, it no longer worked, presumably because it depended on something on the public internet that was no longer there in some way. So I coped with IE5. (And I think it was 5.0, not even 5.5.) I don’t know exactly when that machine was finally decommissioned, but it was definitely gone by 2020.
I had to find a copy of 'IE6 Full' on archive.org in the process of making this blog post. Most copies of IE6 are web installers.
I have to say I'm really quite irritated by Microsoft's habit of deleting old software from their download site, forcing people to get it from potentially untrustworthy third parties. I'm fairly sure there's some downloads on there that are just gone for good.
And, ah, wow. I was not expecting someone to mention IE HTAs here. Welcome to the club. ;) Another thing for the blog queue in the future, though frankly the number of IE-specific technologies is so large I should probably break it out into a separate series.
I found a full installer, but it still didn’t work. ¯\_(ツ)_/¯
Actually it’s coming back to me now, I’m confident it was IE 5.0 that was installed, because once I failed to install IE 6, I tried to install 5.5, and failed at that too.
Really enjoying your series, by the way. So far I’ve known at least a little about all of them save GateKeeper (though I have come across it as a general concept), but I’m impressed with the depth you’re delving to each time. I hope you’re enjoying the process plenty too!
You both could look for official (checksummed) MSDN or Office CD images from the era, or for other software that stated its support for specific Windows versions and had to bundle the important system component updates it depended on. Those installers could still assume offline installation, and work on older Windows.
This author has some great stuff, minutiae of internet history I would have never thought to care about all made quite interesting! Another good read: https://www.devever.net/~hl/gatekeeper
I was intrigued by that censorship organization in the late '90s as well, and actually subscribed to the mailing list of such a ratings bureau in London. As you'd expect, it was full of Tipper Gore-like figures. Extremely comical and crazy. https://en.m.wikipedia.org/wiki/Parents_Music_Resource_Cente...
> The hearing was held on September 19, 1985, when representatives from the PMRC, three musicians—Dee Snider, Frank Zappa, John Denver—and Senators Paula Hawkins, Al Gore, and others
Yes, yes, the two senators' wives absolutely should not be mentioned. It's not like they were involved in the hearing or anything...
Surprisingly, a system of vendor-provided ratings is now used by Freedesktop in their AppStream metadata[0], named OARS[1]. But it's a lot simpler than the custom DSL that IE used for content rating.
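If memory serves, an OARS rating in AppStream metadata looks roughly like this; the `content_attribute` ids and the none/mild/moderate/intense values are real OARS vocabulary, but the surrounding component is abbreviated and the id/name are placeholders:

```
<component type="desktop-application">
  <id>org.example.Game</id>
  <content_rating type="oars-1.1">
    <content_attribute id="violence-cartoon">mild</content_attribute>
    <content_attribute id="language-profanity">moderate</content_attribute>
  </content_rating>
</component>
```

So instead of numeric levels on four fixed scales, it's a flat list of fairly specific attributes, each with one of four intensities, and software centers map those to age recommendations.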
Note that OARS is at risk of being abandoned¹ because people threatened to get the developer banned from GitHub² for giving app developers the ability to tag their app as having homosexual content (because said content is illegal in some countries.)
> You see, the PICS standard doesn't just define support for one particular ratings scheme (RSACi). Oh, no. Instead, the PICS standard defines an entire DSL for defining custom ratings schemes.
More seriously though, I'd love to know how the whole thing came about, and what was the Lisp connection/lineage that made the DSL use S-expressions.
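For reference, a rating-system definition in that DSL looks roughly like this; this is reconstructed from memory of the PICS services spec with placeholder URLs and category names, so treat the details as approximate:

```
((PICS-version 1.1)
 (rating-system "http://example.org/ratings/")
 (rating-service "http://example.org/v1.0/")
 (name "Example Rating System")
 (category
  (transmit-as "v")
  (name "Violence")
  (label (name "None") (value 0))
  (label (name "Gratuitous") (value 1))))
```

Each category declares a short code to transmit in labels plus its named levels, which is how IE could load third-party .rat files and present arbitrary sliders.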
Also, it seems the Microsoft ecosystem has a bunch of such oddities popping up in unexpected places. IIRC, at some point Windows used (or is still using) some Prolog derivative for declarative networking configuration and troubleshooting?
If you look at it with enough vagueness, I've seen some form of this at every job I've had: someone has a problem to solve, and has an idea that is fun to implement, unnecessarily flexible, and possibly Turing-complete. It's a way of getting the "I did a cool hack" dopamine hits that we nerds love so much, while still doing our jobs. Depending on how well they did at keeping their actual goal in mind, you might wind up with a very useful embedded scripting engine, or a very bad ad hoc build system for a project that did not need its own build system. (I've seen both.)
I would expect there's a large overlap between people who think Lisp and/or Prolog are cool, and people who are prone to doing this sort of thing. Non-trivial knowledge of either of those correlates strongly with being more into programming than the average developer.
> Windows NT networking installation and configuration is controlled by the file NCPA.CPL, which the user sees as the Networks icon within the Control Panel's main window. The bulk of this DLL is written in C++, but it also contains a simplified Prolog interpreter known as "Small Prolog," written by Henri de Feraudy during the late 1980s and put into the public domain. It is available through the C User's Group (Lawrence, KS).
A connector where both of the mating parts are the same. See the linked IBM data connector for probably the most common historical example (incidentally, it is not specific to Token Ring, but part of IBM's structured cabling system, and thus a direct competitor to WE/AT&T's modular jacks popularly called RJ-whatever). A more contemporary example is the Anderson Powerpole, used for various DC power applications.
A prime example of how it's better to not have anything than to do it badly. I wonder how many parents mistakenly thought setting this was all that was needed to keep their kids safe online. Possibly persuaded (knowingly or unknowingly) by said kids.
The elementary school I attended thought this system (+an additional list of blacklisted sites) without a supervisor password was enough to keep students from accessing non-age-appropriate material on school computers. It sure didn't stop me.