On top of that they monitor for profanity. WHY? It doesn't harm anyone and lets people express themselves how they want.
If I were a parent I wouldn't want some random people at a for-profit company spying on my child. That's my job.
Do kids still read 1984?
From the school's perspective, that's likely still considered a success. Lawyers are creative, and school systems (as an extension of the government) are rich targets.
If someone is naive enough to use the school communication system for those things, and the monitoring software gets triggered and an intervention occurs, then the school has documented proof that they did what they could. If someone uses the school system and the monitoring software misses the signs, then it's the software's fault and not theirs. If someone is shrewd enough not to use the school's system for that because they know it's being monitored, well, then you can't blame the school for not seeing messages it had no access to.
Only the first of those three scenarios does much to prevent things from happening, and it's the least likely one. But all three limit the potential to somehow get a multi-million dollar negligence judgement out of the school if something bad actually happens.
So many of these disturbing developments, that people are quick to attribute to an intentional campaign to desensitize people to government surveillance, are really caused by people trying to CYA. That doesn't make them OK, not by a long shot, but if you misunderstand their causes you're not going to get anywhere in pushing back against them.
I want to throw out, though, that every fascist and totalitarian movement in the 20th century also had nothing but the best of intentions and was guided by truly moral people who genuinely had the best interests of the people in mind. So saying someone is a good person with good intentions doesn't mean nothing; it's actually a huge red flag.
Also this is obviously a third party interception of private communications between two parties without consent. It's wiretapping. Wiretapping laws need to be extended to cover textual representations of conversational discourse.
So the Hanlon razor can be abused ...
Hanlon's Razor, paraphrased, dictates that you always underestimate your enemy and remain lethargic. Of course it can be abused.
How on Earth this crap came to be regarded as a piece of "wisdom" is beyond me.
The strategically sound approach is, in the absence of evidence to the contrary, to always assume the actual outcome was the intent, and respond accordingly.
It's fairly true on small-scale interpersonal relations. Still true _sometimes_ on large scale, but that shouldn't make it excusable or remove accountability from the culprits.
This is a great maxim and deserves to have a name. Maybe Nolnah's razor?
For most people, I would argue, the most strategically sound wisdom is that which is most likely to be correct. For this, Hanlon fits the bill. While absolute paranoia is safer, benefit of the doubt is morally better and more rewarding.
If you believe your enemy's actions can be adequately explained by stupidity, I wonder on what basis you've declared them your enemy in the first place.
Efficacy and root cause analysis are disregarded because management doesn't care about x, they care about not being blamed for x.
I'm more and more certain much of this is the result of post-modernism. God is dead. We can no longer expect the fear of hell or desire for heaven to keep us in line, so the state must become the omniscient god watching over us.
I wish more people would see our police state through that lens, because hell and heaven never kept people in line and the state will do no better. It's all just metric-driven management... in the long-run, nothing really keeps the masses in line.
Amen. What keeps a society stable for a long time is allowing the masses to go out of line on a regular basis without too much collateral damage. It's like letting some steam escape from a boiling pot so it doesn't boil over.
This is essentially what elections are for: to let people revolt against government power without resorting to violence. But if elections are perceived as not valid, or as ineffective, they can no longer release that tension... and to your point, eventually the masses will get out of line one way or another. The longer it takes, the messier it will be.
Authoritarian governments are not even really intended to create long-term stability. They are corrupt in a very essential way: the people running them know they can't last forever, but they don't care as long as they can give themselves a great life, and put off the explosion until after their (natural, comfortable, long-delayed) death.
But surely that's a strawman. No one thinks religion would stop all evil or all people, but there certainly seems to be some kind of psychological effect from being brought up in such a framework. It can be observed, for instance, in the latent guilt and residual fear of hell and judgement that people feel when they leave certain Christian sects, which is totally absent from, and baffling to, those not from such a cultural background.
I think it's eminently plausible that SOME people are being 'held in line' through explicit or implicit fear of rules/authority/sanctity/hell/punishment/purity/etc. provided by religious and social structures.
Even if you use DDG, you go to a site with GA on it, Google's CDN, maybe Facebook too: anything with those icons to share with a social media site.
Even searching for symptoms. I was searching for something my girlfriend mentioned the other day and now wondering if I'll start seeing some kind of ad related to the issue.
It's not good.
I eventually became a moral nihilist in theory, but someone who recognizes the importance of values in practice (like a classical liberal who opposes gambling). And to tell you the truth, I don't know the proper, working way to instill values and a work ethic (especially the latter) in people or kids. I, for one, suspect that I came to appreciate a respectful manner through seeing others shoot themselves in the foot by maintaining a rebellious attitude, and through feeling disgusted with poor manners at times.
Right-wing millennials who aspire to be good parents often came to appreciate family values/religious values the same way I did: by rejecting "degeneracy".
Note: I am not saying that right-wingers are morally superior.
Every ban invites testing no matter the consequences. Counter-intellectual cultural engineering can mitigate inquisitiveness in the short-term, at the cost of leaving the state blind and vulnerable to disruption.
Yet many of them completely miss the problem that somebody has to objectively decide what the "right cause" is, and rarely will most people agree on the causes being "right" or "wrong".
Not so much support, I'd say, but rather indifference, or having "gotten used to it" to the point that it doesn't seem like a problem in their view. It is much closer to apathy and ignorance.
There is a bit of a generational gap at play here. Most millennials were brought up in rather comfortable conditions. Food, health, a stable life, education: for them this is how the world has always been and always will be. Fascism, world wars, dictatorships: all that is distant news on TV (which they don't even watch or own) or in history books (books? what are those?).
Also, with the never-ending facebook newsfeed, that automagically shows them things that they already agree with, fewer and fewer want to leave that bubble.
In other words, millennials are OK with total surveillance because they do not fathom the real implications of what a surveillance state is going to be. They have nothing to "compare it to", no intuition, nothing to relate to. For them it will "never happen".
... which means when it happens, it will be really, really bad. It is akin to having early symptoms of cancer, but if you have never been close to such tragedy in real life, you just go on ignoring the symptoms, go on until it is too late.
I guess it is the fate of every generation, some kind of three-generation "gap":
1st generation - experiences it, puts in laws and organizations to prevent it from happening in future (examples are United Nations after WW2, nuclear non-proliferation treaties between US and Soviet Union, etc.)
2nd generation - the happy/golden generation that still remembers the lessons of the 1st one and enjoys the safeguards instituted after those events, which are keeping them safe.
3rd generation - decadence and calm before the storm. No one is alive from 1st generation to tell them the horrors of the world wars first hand. 2nd generation are "old timers" and essentially ignored. Personal experience of the 3rd generation is comfort, convenience and never ending entertainment.
For lack of a better metaphor: Winter is Coming.
It's called the Three Generation Rule, and it's no small part of why I've personally taken it upon myself to try to find a way to make sure any kids or friends I come into semi-regular contact with acquire an appreciation for how current infrastructure was arrived at, and understand that none of it is a given.
Unfortunately, I'm not the most successful at it, because I'm pretty sure taking the time to collate all this information on the infrastructural pedigree of modern life in your head tends to come with a tendency toward hermitude.
Also coincidentally, hardly anyone seems to want to know about it.
1) That would make the 3rd generation of the previous cycle (one iteration ago) the generation that experienced WW1 - they'd hardly be ignorant of the horrors of world war.
2) Boomers (your 2nd generation) went through Vietnam and the cold war, so I'm not sure they often felt like the UN was safeguarding them from, well, anything.
3) Boomers also seem to be indifferent and apathetic about climate change, so I'm not sure how that really fits into the narrative.
The most likely answer is that most people just feel indifferent about things that feel out of their control. Boomers likely didn't spend every waking minute thinking about how close they came, at several points, to being ended by thermonuclear war.
Hell, even the greatest generation (well, except for Murrow, obviously) let McCarthy run rampant right after WW2 until he tried to go after the Army.
I'm not really sure if it's the pressure of social media, or the post-9/11, 24/7 breaking-news cycle always trying to manipulate audiences with emotional triggers.
That seems to suggest to me that you're saying that if the primacy of the authoritarian-religious-mindset had stayed around, such an effect wouldn't have happened.
Now, I can respect that there are a minority of relatively liberal religious traditions, but it seems to me that surveillance and monitoring and censorship would be something that you would predict would be pushed by such an authoritarian religious mindset had it stayed around, not caused by a crumbling of its authority.
Surely the common thread here is authoritarianism, and neither religion nor its downfall in the post-modern era; the reason it's happening now is purely because we have the technological means.
When my eldest was in primary school in a very small town, they implemented a surveillance system that I (and a lot of other parents) objected greatly to. There was no other school in the area, and home-schooling was not an option for us.
It took a year or so, but we resolved the situation by moving to another city.
The school is really at a loss here for retribution: in many instances the students are required to use the school's equipment for completing assignments and projects. So the worst they can really do is "restrict" usage, but that isn't really any different from what they already do. So it's a bit of a headache for me and the kid(s)? A small price to pay for demonstrating the hilarious shortcomings of implementing a technological solution to a purely human problem, one that requires specially trained humans to fix.
(Of course I'd make sure the kid(s) give me their blessing before messing with the school)
On another note: I have extensive experience working for school districts. This is absolutely a CYA attempt with no foresight into what the outcome of this terrible experiment will be. These solutions tend to be hacked together, easy to circumvent, and poorly implemented. While I have no direct experience with the product mentioned, I do have experience with school-focused solutions. We had to pay the extra money for purely commercial solutions to get anything that was worth the money.
Or rather, if you’re going to make a joke, it had better be as part of a substantive comment that exists to do more than just tell a joke.
"I wonder if I'm gay.", "I read that if you drink bleach it could kill you", or just meta "what happens if I put the phrase XYZ here?".
1984 it is indeed.
What's been implicitly added is "For adults only".
Whatever rights exist, the under-18 class of "children" (except when they aren't!) don't have them.
Happened to someone I know. She had a well-paying job but her parents basically stole all the income, naturally making it harder to move out with no savings.
My school in Australia did it completely transparently through a company named "CyberHound", of all things. It advertises in a similar way ("protect the children from mental health issues") by running an MITM on all SSL traffic over your school network to inspect all the push notifications, webpages, et cetera sent to your students' devices. The difference is this was transparent and consensual.
The thing is, I would not expect any institution's internal network to be un-logged, be that the company I work for, or a school or government. Passive logging of internet sessions and metadata is totally acceptable; it's this kind of analysis and information sharing that can be really harmful.
Although I suppose I only say that because my school actually had mental health support that was very visibly available, and the tracking was quite easily circumvented.
1: Root certificate we have to install on devices
2: All school machines have an "I agree to acceptable use" prompt on login and we have to install certificates ourselves.
I even wrote back in 2015 about analyzing your email (school in my case): https://austingwalters.com/analyzing-email-data/
The truth is companies will do anything to mitigate risk. Knowing, or even thinking you can know, what someone will do can mitigate that risk. The potential savings can literally be greater than your company's entire net worth.
So, systems like this will continue to get more pervasive.
The war against profanity is strictly a Puritan, I mean American, thing.
In most sensible countries in Europe (read: those still not devoutly religious), profanity on the "state-run" TV stations is a normative part of life (e.g.: "helvete" or "fan" in Swedish - https://youtu.be/4ofbqaLiPe4).
But I agree that this is probably one of the worst ways a school could help reduce these kind of problems.
I wonder why parenting is the one that always triggers this kind of response. "You have to be a parent to have an opinion about parenting." What if I had said "if I worked for Gaggle" or "if I was a teacher". Would you have the same response?
Haven’t observed such a universal perspective shift across other fields of ‘expertise’, nor do those perspectives seem to shift until after more experience, while with parenting it seems to often shift on first realization (“at first sight”).
So no, “if I were a teacher” or “if I worked in medicine” and the like don’t seem as likely to be subject to change.
// Just to be clear: not saying this as a parent. I am not a parent.
The argument you are making is basically identical to the old "guns don't kill people" argument, which has actually been proven completely false. Similarly, minimal suicide prevention measures (like fences on bridges, or again, lack of access to guns), while they can usually be circumvented by the extremely determined, do usually prevent suicides.
How do you measure the efficacy of a fence on a bridge? By the number of suicides by jumping off a bridge or by looking at the suicide numbers in aggregate? Of course it would reduce suicide by jumping but is it actually reducing suicide generally or only a specific method?
England changed from coal gas to natural gas. That prevented one very common method, and it led to a drop in total suicide rates.
It took a while for method substitution to happen.
We also saw similar drops when catalytic convertors were added to cars in the UK.
One of the important parts of reducing access to means and methods is to cause people to switch to less lethal methods. Removing access to coproxamol (in the UK) saved lives because people switched to other meds. Any overdose is dangerous, but some overdoses are less likely to be lethal if medical attention is sought quickly.
England changed the quantities of paracetamol that people could buy. This link only talks about self-poisoning (so it doesn't address the method substitution) but it does talk about characteristics of some people who chose this method: did they go to buy the meds or did they use what was in the home? Were they able to buy large quantities or did the legislation work? What was the length of time between having the initial thought of wanting to overdose and then carrying out the act? http://cebmh.warne.ox.ac.uk/csr/resparacet.html
At the moment one of the strongest recommendations we can make for suicide prevention is to reduce access to means and methods, because that has clear evidence to support it.
You can hear Professor Nav Kapur talk about it here: https://www.youtube.com/watch?v=iWPEVhrWZS0&t=415s
The NCISH will probably have more information or links to research about method substitution: https://sites.manchester.ac.uk/ncish/
It's important that removing access to means and methods is not the only thing we do! It's important, but it's only a part of a package of suicide prevention measures.
> How do you measure the efficacy of a fence on a bridge? By the number of suicides by jumping off a bridge or by looking at the suicide numbers in aggregate?
You don't look at "suicides"; you look at self-inflicted deaths. You look at self-inflicted deaths from people jumping from high places, and you look at the total number of self-inflicted deaths in the area. So far we strongly think that fencing off places like multi-story car parks saves lives and reduces the total number of self-inflicted deaths.
> Among the many banned words and phrases on Gaggle’s list are "suicide," "kill myself," "want to die," "hurt me,” "drunk," and "heroin."
Children growing up in this sort of environment will come to expect it from their government.
I wonder if they have any data at all that shows they can actually prevent catastrophes with this sort of system or if this is just a placebo with dangerous social side effects?
Anyway, yeah, restricting a few key phrases is amateur hour. It doesn’t actually help, and kids will just find ways around it anyway. Wait until they discover Unicode homographs!
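To make the homograph point concrete, here's a minimal sketch in Python (the filter and its word list are invented for illustration, not Gaggle's actual implementation) of how a single look-alike character defeats exact-match filtering:

```python
# Hypothetical naive keyword filter, for illustration only.
BANNED = {"suicide", "drunk", "heroin"}

def naive_flag(message: str) -> bool:
    """Flag a message if any banned word appears verbatim."""
    words = message.lower().split()
    return any(w.strip('.,!?') in BANNED for w in words)

# Plain ASCII gets caught...
print(naive_flag("i feel drunk"))     # True

# ...but swap the Latin 'u' (U+0075) for Cyrillic 'у' (U+0443)
# and the exact string match silently fails.
print(naive_flag("i feel dr\u0443nk"))  # False
```

Kids don't even need to know what Unicode is; one shared screenshot of a working substitution is enough to spread it through a whole school.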
It's totally true though - no amount of word filtering is ever going to serve a real, tangible purpose for people who are having a consensual conversation between each other. Further, if AIM or ICQ back in the day had filtered my conversations, my friends and I would have just moved to one of the other many services that _don't_ use such filtering. It was bad enough when they started showing automatic previews of URLs in messages, the last thing I need is something actually modifying or acting on the text content of the messages.
Public school systems and employees are the government.
Once kids come into HS/Middle School, at least now in 2019/2020, most if not all use personal emails and external chatting apps (most likely Messenger, if not Instagram Messages or Snapchat). Compounded with pseudonyms, it's hard for any AI to determine who's chatting with whom... and even if FB/Snap knew who it was, I'm somewhat certain they wouldn't do anything.
However, in Elementary school, email and Hangouts was the way to go (experience from siblings and myself around 5 years ago), so I guess Gaggle's "AI" can determine if elementary and some middle school kids are a "threat".
Regardless, kids nowadays are really into "self-deprecating jokes": "I want to KMS" or other abusive/harmful messages that a good majority use as jokes (in my experience), without any wrong intentions. I think determining which is a credible threat and which isn't will be extremely hard for any AI or NLP model; it's hard even for humans who aren't teens.
It's hard for a human to determine who's chatting who, but not impossible. It'd just take too much attention away from other more important things like teaching.
Getting an AI to determine who's chatting who? It's a lot easier than you think. I recommend you read everything Snowden's released. Or watch Joe Rogan's podcast-interview with him.
Your phone goes with you when you go places. That is literally all that is needed to determine who's chatting who, particularly with using Facebook or Snapchat or TikTok or whatever spyware kids mistake for trendy social apps these days.
> even if FB/Snap knew who it was, I'm somewhat certain they wouldn't do anything.
Maybe, maybe not. You don't know the laws in your country. Even if you did know the laws in your country, people and businesses break laws all the time.
You might not yet see the danger in selling an advertisement to you. I highly recommend you pay attention in any psychology or statistics classes you have.
Sometimes you are joking, but the person you are joking with isn't. Think about that next time you jokingly tell someone to KYS because they got a 65 on a quiz.
Just so you know, they don’t use the AI to determine who is who, just the context of the text. They know exactly who is who despite pseudonyms because of device IDs and tracking cookies and all the rest. The “who is who” is the easy part.
NLP is pretty advanced with things like BERT and T5, but even then, I imagine that accurate threat detection will be hard for neural nets.
There has to be a way to reach so-called "troubled" kids without creating a culture of fear around expressing difficult emotions. This is part of a larger trend of parenting surveillance tech (e.g. Life360) that I find quite disturbing.
Schools seem much more willing to pay for newfangled tech rather than more trained counselors, for example. Outsourcing content moderation to low-paid contract workers is what gave us Facebook. I fear that these technologies will put a nice band-aid on the problem of student mental health while actually making it worse.
The bigger problem is that unlike previous similar boondoggles (see D.A.R.E.), this might have lasting consequences by acclimating a generation to constant surveillance.
This is an absolutely disgusting level of surveillance on developing minds, and trains them to think it's normal to live in a virtual panopticon. It's easy to force societal change by poisoning the impressionable minds of children and simply waiting.
All this does is get kids used to being watched.
On the other hand, I also try to avoid tying my school online identity to my personal online identity, and leave my school account for school. That also means not doing anything personal on district provided chromebooks. However, I only really know to do this due to knowledge in technology focused areas, not something most students have.
My main issue with this sort of tracking is that the students are very loosely told what it is tracked and how; at our school, students were told that the chromebooks used gaggle and not much more than that.
Overall, students and parents should definitely be better taught as to what occurs with surveillance, and in my opinion the current level of surveillance is extremely excessive.
How much does Gaggle Safety Management cost?
Gaggle Safety Management truly is a one-of-kind solution that shouldn’t be mistaken for a less expensive or free alternative. We understand that no two school districts are alike, so our per student pricing is based on your specific requirements as well as the size of your student population. Pricing for other Gaggle solutions also can be customized to the needs of your school or district.
Perhaps a better question to ask is "How much will it cost your school or district if you don't use Gaggle Safety Management?"
A lot of their webpage seems to be riding on fear, as is visible when just visiting their homepage.
They conducted a trial run of the surveillance service at one middle school for a couple months, whereupon they claimed that it prevented two suicides. A suicide per month in a single middle school isn't remotely plausible.
Now of course the kids know about the surveillance. It was announced in the newspaper. But for years already, parents have been teaching our kids about dealing with the internet: Don't write anything that you wouldn't want to see on the front page of the New York Times. Don't admit to being a member of a hated group. Don't criticize governments that are capable of carrying out censorship beyond their borders. Assume that all "private" information will be stolen or sold.
My kids have told me that every kid at school has figured out how to install a VPN on their cellphone, so they can bypass the content blockers. They know that the VPNs themselves aren't secure. They don't use the Chromebooks except to look up school assignments (which are themselves surveilled via anti-plagiarism service).
Unfortunately the short term desire for entertainment, and to be part of a community, outweighs their long term concerns for security and privacy. But at least from the standpoint of being informed, they're quite well informed.
The software is really worrying, and this is one reason why.
Risk prediction for suicide is really hard. Currently the risk prediction tools (created and used by health care professionals) are bad, and there's strong advice that these should not be used to predict future suicide or self harm, and should not be used to determine who to offer treatment to or who to discharge from treatment. And these tools are somewhat more sophisticated than "has the person used the word suicide?"
It could be stripped down to a few bullet points, in order to make it easier to understand. The notable features are the right to know what personal information is being stored, and a right to be forgotten. It should include a provision that a person can't be compelled to sign away these rights in return for access to public services.
Schools will prepare students for accepting global surveillance and to counteract that parents will prepare kids for living in cyberpunk underground. I don't think either of those extremes are good for the kids themselves...
Someday I will help lead a school whose software is written by students and whose policies are debated by student government, faculty senate, and the PTA.
How do the schools and Gaggle have access to all of this info anyway? I assume they have school accounts that are provided by the school and therefore the legal property of the school since they are the ones paying for the MSFT 365 accounts? But what about Gmail? I don't use it but I thought it was free? I take it there is some corporate version that is paid for?
They need to add things like going outside, storm, thunder, lightning, raincoat and so forth to the list, since getting struck by lightning is about 600 times more likely nationwide than being shot at school.
So, we're paying close attention to exotic scenarios we'll likely never experience personally, and getting worked up over them. Meanwhile the various elephants in our rooms, the ones that are likely to be actual problems for us (heart disease, dying in a car crash, cancer, etc.) are drab and gray, so we ignore them. I mean who wants to solve real problems? BORRINGGG!
Suddenly just now while typing I realized that
1) an exotic, unlikely scenario
2) that we engage closely with
...is precisely the paradigm of all fiction, fantasy and entertainment. What is a video game other than an exotic scenario you'll probably never experience in real life, that you engage with and try to "get into?"
How about in ancient times? Sure - even back in Greece or in Shakespeare's time, "the play was the thing," namely an exotic scenario (the inside of the royal castle perhaps, which most people will never see) and "getting worked up over it" might be more aptly described by your drama teacher as sharing in the tension, drama and catharsis.
Even our ideas about the future tend to center around exotic utopias, or exotic apocalypses (the writer John Michael Greer has talked about this) whereas in reality things will probably just continue to slowly get shittier, more annoying, less comfortable, less convenient etc., and ironically it'll probably be that way precisely because we've got our heads up our asses with fantasy (including things like "Hey there was a shooting; let's implement 1984.") and can't seem to reach a basic consensus about what reality is and how to keep our house in order. BORRINGGG!!!
Anyway if we're using news of a school shooting as entertainment -- "Riveting tension!" says the New York Times. "So sad!" says the Washington Post -- then suddenly instead of a fictional universe with actors, we're using real people, real victims, for our entertainment. Which feels like a wrong to me.
Huge tangent, sorry. Didn't expect to be writing in this direction today.
The CCP state, Facebook, Google... now Gaggle.
I hope there's some studies of the damage such things do to people.
How much do they lose the ability to control their own lives?
How does this change their attitude to other people?
Respect for authority?
Most of all, this ought to be opt-in, opt-in only, and if you opted-in you can get out again. My reading of the articles says it's none of these.
But I cannot help thinking of my younger self toying with that Gaggle AI :) I mean, as a teenager (old enough here to have been through the "no-future" 80's), my friends and I would have pushed that Gaggle AI where no AI has gone before :)
What can they do to the kids who write "inappropriate" stuff? They cannot prevent the kids from doing it again; it is free speech during private conversations.
This reminds me of when my cousin and I created a secret language, "fefe", that was easy enough to produce and speak, but too confusing to understand when listened to. Kids are going to just do that: invent new words, or give old ones new meanings, and challenge Gaggle.
If you can call that AI, why not a hashmap? :p
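Taking the quip literally: with the banned list quoted from the article, the "AI" the commenter imagines really is just a set lookup. A hypothetical sketch (the function and its behavior are my own invention, not Gaggle's code):

```python
# Keyword "threat detection" as a plain hash-set lookup.
# The phrase list is the one quoted in the article; everything
# else here is invented for illustration.
BANNED_PHRASES = {"suicide", "kill myself", "want to die",
                  "hurt me", "drunk", "heroin"}

def flag(message: str) -> set:
    """Return the banned phrases appearing verbatim in the message."""
    text = message.lower()
    return {phrase for phrase in BANNED_PHRASES if phrase in text}

# A joking message trips the filter exactly like a serious one:
print(flag("lol I want to die, that quiz was brutal"))  # {'want to die'}
# ...while anything rephrased sails straight through:
print(flag("I read that if you drink bleach it could kill you"))  # set()
```

Which is the commenter's whole point: substring matching has no notion of intent, context, or paraphrase, so it over-flags jokes and under-flags anything said in other words.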
"Believe happy thoughts! You are happy! Smile please!"
I think the solution lies outside the tech world and in face-to-face solutions: train teachers to spot problems and red flags, and possibly have a tally system. Are several teachers reporting issues? Is a student missing classes, showing up in long sleeves, withdrawn, depressed? Are grades slipping, is the student unable to focus? Set up a counseling meeting and get to the bottom of it.
An abstract and mind-you poorly run AI is just a nightmare scenario. I feel sorry for any students going through the system now.
You might also be surprised to know that your employer can also log everything you do with company property. Not that I agree with it but the legal precedent is there and children in school have far fewer rights than others.
Aside from the massive overriding issues of such pervasive surveillance, this practice seems additionally stupid and counterproductive.
As soon as kids figure out that their language is being monitored for certain phrases -- and this is an instant give-away -- they'll simply avoid using those phrases and develop a new code, rendering the entire system useless (as well as abhorrent).
Gaggle's management are clearly not the sharpest tools in the shed, making these kinds of basic and stupid opsec errors...
However, the software was poorly written, full of bugs, would frequently crash, and was generally ineffective. It wasn't long until we (kids) figured out its weaknesses and had full control of it: we could force it to crash, uninstall it, and we even took over its command and control server.
Mind you, this was 20 years ago. Today's kids are up against more mature software, but I tend to feel that whoever makes these things is generally a bottom-of-the-barrel outsourced devshop, so it's probably all still very low quality. Couple that with kids being generally more tech-savvy these days, and with clueless school administrators -- kids are generally smarter than you think and need a lot less protecting. Information spreads at lightning speed in a school. I'm sure they all know all about this software and how to get around it.
IMO, that's the biggest problem here. Until we stop treating children the way we treated slaves, people of color, women, and LGBTQ+ people in the past, we will get horrible systems like this. We need to recognize that "it is self-evident that all people are created equal", and that includes children too.
I'm a pretty young user by HN standards and I still keep in touch with under-18s. I know a few who have been surveilled at every step and prohibited from using most modern technologies the way other teenagers do. This created way more problems than it solved, and it was particularly prevalent in the U.S. I, on the other hand, never had any rules imposed, and I think I'm in tech only because of that. In Poland, where I live, monitoring children's use of technology is almost unheard of. I haven't really heard of any problematic situations that monitoring would have solved; on the other hand, I know a few horror stories from the U.S. where monitoring was used.
(a) a culture of authority and draconian measures where adolescents aren't respected and the very people who are supposed to look after them and set an example are instead their enemies and not to be trusted
(b) a culture where significant numbers of adolescents think it's OK to commit extreme acts that harm themselves or others.
Who needs social structures when you can just outsource every problem to a company and throw the ones that don't fit into some variant of prison or mental ward?
This has come up for my family this past week. When my son came home from school and said he feels uncomfortable with all of the video cameras in the hallways I was shocked that this was a thing. He said he even felt uncomfortable going to the washrooms as he couldn't be sure there weren't cameras.
The measures of surveillance employed without notifying the parents were shocking to me--with no option to give feedback or opt out? I haven't even looked into other forms of monitoring like what's shared in the article. But I'm going to find out.
For example, administrators using GoGuardian can get 'Smart Alerts' for 'self-harm' and other objectionable material. They'll receive an email that includes a screenshot of the material in question, and for G Suite, it spans Google search, Docs, and effectively any Google service.
That said, I just tried to pull up a link for GoGuardian and noticed they have this disclaimer:
"Please note: Smart Alerts for the "Self-Harm" category is no longer available for new customers. For more information, please reach out to your sales representative."
Incidentally, over the past few years I've been doing an audit of my password manager and couldn't log into the Gaggle account any longer, so I suppose accounts are deleted after a few years. Hopefully that's current policy as well, as I think most people would agree that what you write as a teenager may not reflect well on you later.
I also feel like this CYA policy may backfire. Before, they could rightfully say they weren't the responsible party. Now that they are using taxpayer funds to "prevent" these things, I feel like they have put themselves in a position to be responsible if it fails. I am sure there is some legal doctrine term for what I am trying to say.
> I also feel like this CYA policy may backfire. Before, they could rightfully say they weren't the responsible party. Now that they are using taxpayer funds to "prevent" these things, I feel like they have put themselves in a position to be responsible if it fails. I am sure there is some legal doctrine term for what I am trying to say.
This is a really interesting perspective and I'm curious if someone knowledgeable on the subject would comment. If public education is a right, can my child be denied that right if I refuse to consent to a license? If the child consents, is that not an invalid contract?
1) This is no way to treat kids that are already suffering from a dysfunctional society created by adults.
2) Even worse, to these kids, this is only going to normalize the idea of the surveillance state. But maybe that's the point.
I just read a great New Yorker piece on a security company that sold FUD. This seems like another example of FUD for sale, very similar to what some security companies out there do.
I'm assuming that most progressive college campuses don't use these ridiculously invasive tools (the college I went to ran a fairly unrestricted internal and guest network), but perhaps that's being too optimistic.
All demonstrating that school (and govt) officials are not even aware of the need for privacy.
Let the kids be kids.
It went about the way you'd expect.
Bark's response to DHH made me fucking nauseous. How detached from reality do you have to be not to realize the insular effects this will have on children? How insulated is one's life that they would be unable to conceive of the obvious and likely immediate effect these tools will have on teenagers who are already struggling with identity and discovery?!
Can't wait until Bark announces the feature where they'll mail you weekly to tell you if your kid is likely to be gay (which, as any LGBT person on social media will tell you, is incredibly easy. Facebook has been accidentally outing people via association for a decade or more, and I can't imagine how that is amplified with full access to the screen, messages, raw data.) Just disgusting.
EDIT: Also, wow, he's really trying to drop "prevented" statistics in another Twitter thread. You're monitoring 50M students and have only caught 320-some predators despite pervasive, constant monitoring? I don't even find that impressive.
The testimonies too, HOW IS THIS REAL -- "I only get notifications if there are items of concern (sex, depression, bullying, profanity, etc.) Totally worth it!"
As a last plea, young adults are human beings with autonomy. Trying to suppress that autonomy has always had the same effect, every generation has tried.
Meanwhile, I can confidently say there is a VERY STRONG chance I would not have made it through high school under pervasive, constant surveillance. Being an overly anxious boy coming to terms with his sexuality in god damn Kansas is hard enough; having to worry about every electronic action being reported to my school or parents probably would have made the thoughts of suicide unsuppressible, and I wouldn't have had the VERY small community that I sort of accidentally discovered via Facebook -- again, because there are so many indicators that someone may be LGBT, even just from what friends you have in common, etc.
Basically, this is a scam where a company gets rich by convincing school districts to abusively surveil their students with no real benefit.