Gaggle monitors the work and communications of almost 5M students in the US (buzzfeednews.com)
333 points by minimaxir 20 days ago | 155 comments



This is completely absurd and I'm convinced it solves zero problems. The kids don't know they're being spied on until they do something "wrong". Gaggle (a terrible name, btw) claims it prevents suicides and school shootings. This might "work" at first, until kids know they are being spied on. Then they know not to send an email saying "I'm going to kill people". All you're doing is preventing communication over school-provided technology and eroding students' trust in the school. You're not actually preventing a shooting. Kids can still do whatever they want and there's nothing you can do to stop it.

On top of that, they monitor for profanity. WHY? It doesn't harm anyone and lets people express themselves how they want.

If I were a parent, I wouldn't want some random people at a for-profit company spying on my child. That's my job.

Do kids still read 1984?


> All you're doing is preventing communication over school-provided technology and eroding students' trust in the school. You're not actually preventing a shooting. Kids can still do whatever they want and there's nothing you can do to stop it.

From the school's perspective, that's likely still considered a success. Lawyers are creative, and school systems (as an extension of the government) are rich targets.

If someone is naive enough to use the school communication system for those things, and the monitoring software gets triggered and intervention occurs, then the school has documented proof that it did what it could. If someone uses the school system and the monitoring software misses the signs, then it's the software's fault and not theirs. If someone is shrewd enough not to use the school's system because they know it's being monitored, well, then you can't blame the school for not seeing messages it had no access to.

Only the first of those three scenarios does much to prevent anything from happening, and it's the least likely one. But all three limit the potential for a multi-million-dollar negligence judgement against the school if something bad actually happens.


We need an America-specific corollary to Hanlon's razor: "Never attribute to malice that which can be explained by an attempt to avoid liability in a maximally litigious society."

So many of these disturbing developments, that people are quick to attribute to an intentional campaign to desensitize people to government surveillance, are really caused by people trying to CYA. That doesn't make them OK, not by a long shot, but if you misunderstand their causes you're not going to get anywhere in pushing back against them.


I agree 100% that this is about CYA and not a true interest in preventing bad things. However, it is a bad thing in itself: in addition to profanity, they police illicit sexual contact, including normal underage high school sex, attempts, or discussion of it. What a nightmare to have that sort of thing monitored by some private quasi-police security agency (even worse, a private company masquerading as a police agency that doesn't have to follow the Bill of Rights).

I want to throw out, though, that every fascist and totalitarian movement in the 20th century also had nothing but the best of intentions and was guided by people who genuinely believed they had the best interests of the people in mind. So saying someone is a good person with good intentions doesn't mean much; if anything, it's a huge red flag.

Also, this is obviously third-party interception of private communications between two parties without consent. It's wiretapping. Wiretapping laws need to be extended to cover textual representations of conversational discourse.


In France, some people use the defense: "We'd rather be judged stupid than criminal"...

So Hanlon's razor can be abused...

https://www.nouvelobs.com/opinions/20071004.OBS7990/malhonne...


What do you mean, it "can be abused?"

Hanlon's razor, paraphrased, dictates that you always underestimate your enemy and remain passive. Of course it can be abused.

How on Earth this crap came to be regarded as a piece of "wisdom" is beyond me.

The strategically sound approach is, in the absence of evidence to the contrary, to always assume the actual outcome was the intent, and respond accordingly.


>How on Earth this crap came to be regarded as a piece of "wisdom" is beyond me.

It's fairly true on small-scale interpersonal relations. Still true _sometimes_ on large scale, but that shouldn't make it excusable or remove accountability from the culprits.


> in the absence of evidence to the contrary, always assume the actual outcome was the intent

This is a great maxim and deserves to have a name. Maybe Nolnah's razor?


An even more strategic position would be to assume the outcome was the intent even in spite of evidence to the contrary (which could be faked). And to assume that the worst potential outcome, rather than the actual outcome, was the intent. Those changes are both more strategically sound, so long as we define most strategically sound as 'most resistant to attack'.

For most people, I would argue, the most strategically sound wisdom is that which is most likely to be correct. For this, Hanlon fits the bill. While absolute paranoia is safer, benefit of the doubt is morally better and more rewarding.

If you believe your enemy's actions can be adequately explained by stupidity, I wonder on what basis you've declared them your enemy in the first place.


We do not live in a super-litigious society. You're just brainwashed.


Blaming the software and then shrugging "well, we did everything we could" seems to be becoming a pattern for deflecting accountability. Worrying.


It doesn't just apply to software; it works for any system. The pattern is obvious: management hears "x is a problem", they go out and buy the cheapest thing that claims to address x, then they wash their hands and say "look, we handled x".

Efficacy and root cause analysis are disregarded because management doesn't care about x, they care about not being blamed for x.


Has a school ever been successfully held liable for a school shooting?


How much of this is typical social indoctrination? Teach children early that they are constantly spied on and they'll accept it in their cars, on their phones, in their houses. Total surveillance state.

I'm more and more certain much of this is the result of post-modernism. God is dead. We can no longer expect the fear of hell or desire for heaven to keep us in line, so the state must become the omniscient god watching over us.


> I'm more and more certain much of this is the result of post-modernism. God is dead. We can no longer expect the fear of hell or desire for heaven to keep us in line, so the state must become the omniscient god watching over us.

I wish more people would see our police state through that lens, because hell and heaven never kept people in line and the state will do no better. It's all just metric-driven management... in the long-run, nothing really keeps the masses in line.


> in the long-run, nothing really keeps the masses in line.

Amen. What keeps a society stable for a long time is allowing the masses to go out of line on a regular basis without too much collateral damage. It's like letting some steam escape from a boiling pot so it doesn't boil over.

This is essentially what elections are for: to let people revolt against government power without resorting to violence. But if elections are perceived as not valid, or as ineffective, they can no longer release that tension... and to your point, eventually the masses will get out of line one way or another. The longer it takes, the messier it will be.

Authoritarian governments are not even really intended to create long-term stability. They are corrupt in a very essential way: the people running them know they can't last forever, but they don't care as long as they can give themselves a great life, and put off the explosion until after their (natural, comfortable, long-delayed) death.


Incentive structures and monitoring don't influence people's behavior?


People still murdered and committed adultery and broke the Sabbath even after they'd been told that an omniscient being would definitely know and punish them. I think that was the poster's point, anyway.


(not personally religious, or really for surveillance in schools, would probably teach my children to build their skills circumventing it to be honest)

But surely that's a strawman. No one thinks religion would stop all evil or all people, but there certainly seems to be some psychological effect from being brought up in such a framework. It can be observed, for instance, in the latent guilt and residual fear of hell and judgement that people carry when they leave certain Christian sects, which is totally absent in, and baffling to, those not from such a cultural background.

I think it's eminently plausible that SOME people are being 'held in line' through explicit or implicit fear of rules/authority/sanctity/hell/punishment/purity/etc. provided by religious and social structures.


I suspect I am. Do you not think about what you search for on the internet before you do it? Sometimes I really have to think, "do I want google to know I want to know about this?"

Even if you use DDG, you go to a site with GA on it, a Google CDN, maybe Facebook too, anything with those icons to share with a social media site.

Even searching for symptoms: I was searching for something my girlfriend mentioned the other day, and now I'm wondering if I'll start seeing some kind of ad related to the issue.

It's not good.


Anecdotally, as someone who was brought up in a very religious household and went to an Islamic school for boys over the summer for multiple years, I can't disagree enough. Young religious people tend to be cynical about everything from morality and values to willpower and work ethic. If you impose a religious lifestyle on your son, there's a good chance he puts on an act in front of you.

I eventually became a moral nihilist in theory, but someone who recognizes the importance of values in practice (like a classical liberal who is against gambling). And to tell you the truth, I don't know a proper, working way to instill values and a work ethic (especially the latter) in people or kids. I, for one, suspect that I came to appreciate a respectful manner by seeing others shoot themselves in the foot by maintaining a rebellious attitude, and by feeling disgusted with poor manners at times.

Right-wing Millennials who aspire to be good parents often came to appreciate family values/religious values the same way I did: by rejecting "degeneracy".

Note: I am not saying that right-wingers are morally superior.


Choosing short term, surreptitious pleasure while ignoring potential punishments after death is nothing like typing a curse word in Google and immediately getting visual feedback that the indiscretion was flagged.


Of course it does, but the state's control is limited to outward behavior in monitored settings. Nothing can forcibly change our internal and individual values.

Every ban invites testing no matter the consequences. Counter-intellectual cultural engineering can mitigate inquisitiveness in the short-term, at the cost of leaving the state blind and vulnerable to disruption.


I mean, more and more Millennials [0] support censorship and surveillance when it's for the "right cause". Maybe that's the result of having grown up with the pressure of social media?

Yet many of them completely miss the problem that somebody has to decide what the "right cause" is, and people will rarely agree on which causes are "right" or "wrong".

[0] https://medium.com/@nayafia/why-do-millennials-support-gover...


> increasingly more Millenials [0] support censorship and surveillance

Not so much support, I'd say, but rather indifference, or being "used to it" to the point that it doesn't seem like a problem to them. It's much closer to apathy and ignorance.

There's a bit of a generational gap at play here. Most millennials were brought up in rather comfortable conditions. Food, health, a stable life, education: for them this is how the world always has been and always will be. Fascism, world wars, dictatorships: all that is distant news on TV (which they don't even watch or own) or in history books (books? what are those?).

Also, with the never-ending Facebook newsfeed that automagically shows them things they already agree with, fewer and fewer want to leave that bubble.

In other words, millennials are OK with total surveillance because they cannot fathom the real implications of what a surveillance state is going to be. They have nothing to "compare it to", no intuition, nothing to relate to. For them it will "never happen".

... which means when it happens, it will be really, really bad. It is akin to having early symptoms of cancer: if you have never been close to such a tragedy in real life, you just go on ignoring the symptoms until it is too late.

I guess it is the fate of every generation, some kind of three-generation "gap":

1st generation - experiences it, puts in place laws and organizations to prevent it from happening in the future (examples: the United Nations after WW2, nuclear non-proliferation treaties between the US and the Soviet Union, etc.)

2nd generation - the happy / golden generation that still remembers the lessons of the 1st one and enjoys the safeguards, instituted after those events, that are keeping them safe.

3rd generation - decadence and calm before the storm. No one from the 1st generation is alive to tell them the horrors of the world wars first-hand. The 2nd generation are "old-timers" and essentially ignored. The personal experience of the 3rd generation is comfort, convenience, and never-ending entertainment.

For the lack of a better metaphor: Winter is Coming.


This is not an uncommon observation; it even has a section on Atomic Rockets, of all places.

http://www.projectrho.com/public_html/rocket/colonysite.php#...

It's called the Three Generation Rule, and it's no small part of why I've personally taken it upon myself to try to make sure any kids or friends I come into semi-regular contact with acquire an appreciation for how our current infrastructure was arrived at, and understand that none of it is a given.

Unfortunately, I'm not the most successful at it, because I'm pretty sure taking the time to collate all this information about the infrastructural pedigree of modern life in your head tends to come with a tendency toward hermitude.

Also coincidentally, hardly anyone seems to want to know about it.


“Hard times create strong men. Strong men create good times. Good times create weak men. And, weak men create hard times.“


Ehh, I'm not so sure this analogy really holds.

1) That would make the previous 3rd generation, one cycle before this current iteration, the generation that experienced WW1; they'd hardly be ignorant of the horrors of world war.

2) Boomers (your 2nd generation) went through Vietnam and the cold war, so I'm not sure they often felt like the UN was safeguarding them from, well, anything.

3) Boomers also seem to be indifferent and apathetic about climate change, so I'm not sure how that really fits into the narrative.

The most likely answer is that most people just feel indifferent about things that feel out of their control. Boomers likely didn't spend every waking minute thinking about how close they came, at several points, to being ended by thermonuclear war.

Hell, even the greatest generation (well, except for Murrow, obviously) let McCarthy run rampant right after WW2, until he tried to go after the Army.


> with the pressure of social media

I'm not really sure if it's the pressure of social media, or the post-9/11, 24/7 breaking-news cycle that is always trying to manipulate audiences with emotional triggers.


Really? The result of post-modernism?

That suggests you're saying that if the primacy of the authoritarian-religious mindset had stayed around, such an effect wouldn't have happened.

Now, I can respect that there is a minority of relatively liberal religious traditions, but it seems to me that surveillance, monitoring, and censorship are exactly what you would predict an authoritarian religious mindset would push had it stayed around, not something caused by the crumbling of its authority.

Surely the common thread here is authoritarianism, not religion or its downfall in the post-modern era, and the reason it's happening now is purely that we have the technological means.


I think the OP meant that the conservative backlash to post-modernism was to go for authoritarianism. It used to be that conservatives felt the fear of God would keep society decent. With that gone, they need to reach for something else.


Intentional or not, that's precisely what will happen.


If I were a student and knew about that, I'd purposefully put a variety of words and phrases that might get flagged into every email.


And if I were your parent, I'd seriously look into moving you to a less abusive school.


In small towns there's often no choice. There's only one high school in Santa Fe.


Yes, this is true. Sometimes doing right for your kids requires a great deal of sacrifice.

When my eldest was in primary school in a very small town, they implemented a surveillance system that I (and a lot of other parents) objected greatly to. There was no other school in the area, and home-schooling was not an option for us.

It took a year or so, but we resolved the situation by moving to another city.


Time to do a report on Scunthorpe


Funny story: back in 2000, a new boss starts at a UK govt agency with a student focus, right around university entrance time. He wants to stamp some authority and show he knows better than the last boss. For context, he was a Londoner in Scotland who obviously knew better than the great unwashed north of the border. Pretty much the first thing he does is look at the flagged-words list for incoming emails. Why he felt this level of micromanaging would endear him to his Scottish team is lost to time. Anyway, he snorted and, with a smirk, berated the admin team for leaving out "cunt" and told them to add it. The admin team said nothing, did what they were told, and waited... Over the weekend over 10,000 emails were flagged and had to be manually read by the small admin team, who were used to a small number every week. Our London friend handed in his notice within 6 months :)
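The failure mode in the anecdote above is the classic naive substring filter (the same one behind the Scunthorpe problem mentioned upthread): any message containing a banned string anywhere gets flagged, including inside innocent words. A minimal sketch, with a purely illustrative word list (not the agency's or Gaggle's actual list):

```python
# Naive substring-based content filter, of the kind described above.
# The word list here is illustrative only.
FLAGGED_WORDS = {"cunt", "kill"}

def is_flagged(message: str) -> bool:
    """Flag a message if any banned word appears anywhere as a substring."""
    text = message.lower()
    return any(word in text for word in FLAGGED_WORDS)

# Substring matching produces classic false positives:
print(is_flagged("Our class trip to Scunthorpe"))  # True: "cunt" inside "Scunthorpe"
print(is_flagged("What a skillful essay"))         # True: "kill" inside "skillful"
print(is_flagged("A perfectly polite email"))      # False
```

A real filter would at least match on word boundaries (e.g. a regex using `\b`), which avoids the Scunthorpe case but still can't distinguish banter, quotation, or irony from intent.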


Does the word have a different meaning in Scotland or is it just not considered offensive?


In Scotland, the north of England, and Australia, you'll often hear 'cunt' used as a term of endearment between good friends. It's a banter thing.


Then you'd be committing a "violation" and get a warning. Presumably there'd be some in-school punishment for repeat violations.


In this context, I'd be encouraging my children to do this (and certainly take the blame for them); false positives ftw! It'd be a great, teachable moment where I can discuss many sociological, technological, and ethical issues at once. On top of that I can teach them a thing or two about opsec.

The school is really at a loss here for retribution: in many instances, students are required to use the school's equipment to complete assignments and projects. So the worst they can really do is "restrict" usage, but that isn't really any different from what they already do. So it's a bit of a headache for me and the kid(s)? A small price to pay for demonstrating the hilarious shortcomings of applying a technological solution to a purely human problem, one that requires specially trained humans to fix.

(Of course I'd make sure the kid(s) give me their blessing before messing with the school)

On another note: I have extensive experience working for school districts. This is absolutely a CYA attempt with no foresight into the outcome of this terrible experiment. These solutions tend to be hacked together, easy to circumvent, and poorly implemented. While I have no direct experience with the product mentioned, I do have experience with school-focused solutions; we had to pay extra for purely commercial solutions to get anything that was worth the money.


This is not a false positive, attempts to subvert or sabotage surveillance clearly indicate latent antisocial tendencies and should be punished to the full extent of the law. Please report yourself to the nearest Malcontent Utilization Facility, citizen!


While this might sound like a joke or an exaggeration, this kind of authoritarianism is spreading like wildfire in the US.


I got it. Not sure why you're being downvoted.


Hacker News doesn’t like jokes.

Or rather, if you’re going to make a joke, it had better be as part of a substantive comment that exists to do more than just tell a joke.


That wasn't a joke of the, "Why did the chicken cross the road?" sort. Satire can be used to illustrate a point, often more effectively than plain-old words; hence, the comment is no longer grayed out.


Technically, only the second part was intended as a joke. The first part is how I honestly expect the school to take such interference.


There's plenty of space for triggering false positives.

"I wonder if I'm gay.", "I read that if you drink bleach it could kill you", or just meta "what happens if I put the phrase XYZ here?".


Sounds like a good opportunity for a civics lesson. Today's subject: mass civil disobedience.


Given the article's description of the company's hiring process, it seems like it would be straightforward for a group of high school students to become ‘Safety Representatives’ and… act creatively.


Well said. As the article states, it is indeed a solution in search of a problem. Also, don't discount the possibility that the school may need to be seen doing something (meaningful or not).

1984 it is indeed.


Welcome to the modern mentality of alleged criminals being guilty until proven innocent. We have our Bill of Rights for a reason here in America but some lessons are easily forgotten.


> We have our Bill of Rights for a reason here in America

What's been implicitly added is "For adults only".

Whatever the case, the under-18 class of "children" (except when they aren't!) don't have rights.


In many (most? all?) states, children are effectively still property. For example, in many states it's legal for parents to take their children's wages, even if the children are high-school graduates with a work permit, all based on age.

Happened to someone I know. She had a well-paying job, but her parents basically took all the income, which naturally made it harder to move out with no savings.


It's really disturbing that someone's parents would steal their child's money even when it's legally acquired.


Exactly. And if you bring up a generation of kids who have been normalized to the idea of constant surveillance and censorship, they're less likely to assert their rights as adults. Many of them will even learn the lesson that surveillance and censorship are good things that save lives. It's insidious and disgusting.


The monitoring of children is such an insidious topic because it is so easy for the average reasonable person to justify. I know my own mother has felt parental guilt push her to location-track me and my young sister, because, what if something happens? Children generally get less independence and fewer rights than adults, and we do this because it is always adults who have to be responsible for their wellbeing. So it is very easy to justify this kind of monitoring, and at least here, there are likely legal obligations for schools to do so.

My school in Australia did it completely transparently, through a company named "CyberHound" of all things. It advertises in a similar way ("protect the children from mental health issues") by running a MITM[1] on all SSL traffic over the school network to inspect all the push notifications, webpages, et cetera sent to students' devices. The difference is that this was transparent and consented to[2].

The thing is, I would not expect any institution's internal network to be un-logged, be that the company I work for, a school, or a government. Passive logging of internet sessions and metadata is totally acceptable; it's this kind of analysis and information sharing that can be really harmful.

Although I suppose I only say that because my school actually had mental health support that was very visibly available, and the tracking was quite easily circumvented.

1: Via a root certificate we have to install on devices

2: All school machines have an "I agree to acceptable use" prompt on login and we have to install certificates ourselves.


I kind of can't help myself since I'm in a similar space...

https://hnprofile.com/author_profiles?utf8=%E2%9C%93&search=...

I even wrote back in 2015 about analyzing your email (school in my case): https://austingwalters.com/analyzing-email-data/

The truth is companies will do anything to mitigate risk. Knowing, or even thinking you can know, what someone might do can mitigate that risk. The potential savings can literally be greater than your company's entire net worth.

So, systems like this will continue to get more pervasive.


>On top of that they monitor for profanity. WHY? It doesn't harm anyone and lets people express themselves how they want.

The war against profanity is strictly a Puritan, I mean American, thing.

In most sensible countries in Europe (read: those no longer devoutly religious), profanity on the "state-run" TV stations is a normal part of life (e.g. "helvete" or "fan" in Swedish: https://youtu.be/4ofbqaLiPe4).


The worst of it is that kids learn being monitored is normal and "a good thing". Something far too many adults are already comfortable with.


In the current climate, if you can create a product a school can point to as proof that it is doing everything for the safety of your kids, you can probably name your own price.

But I agree that this is probably one of the worst ways a school could try to reduce these kinds of problems.


1984 is boring. Point them to Demolition Man and Minority Report instead.


It is pretty well established that people are allowed to speak their mind in our society, but they better be ready for the consequences!


I’ve observed that “if I were a parent” may be a nonsense phrase. I’m not aware of any parents whose beliefs across the board still match their pre-parenthood preconceptions of how they’d think about their kids.


Just because people often report the experience of massive perspectival shifts after becoming a parent doesn't actually elevate their authority on the matter, or invalidate the opinions/beliefs of non-parents. That's just a straight up fallacy. I'm not saying the average parent doesn't have relatively more authority on matters of parenting than the average non-parent, but there's nothing inherent about becoming a parent that justifies radically elevating the epistemic status of their beliefs about parenting.


I knew someone would say this. I almost used "as a parent" instead. Hacker News, and the internet in general, is full of people offering opinions about things they aren't experts in, and that's okay.

I wonder why parenting is the one topic that always triggers this kind of response: "You have to be a parent to have an opinion about parenting." What if I had said "if I worked for Gaggle" or "if I were a teacher"? Would you have the same response?


I think it's a combination of being tired of constantly feeling judged on a very personal part of your life, and a feeling that the dependency and inconvenience a child, especially a small child, adds to your life is unmatched by almost anything else, giving others little experience to draw from when criticizing or otherwise "trying to help". Even when it comes from other parents, unsolicited "advice" can be annoying, because every child is different and every parent likes to parent differently.


I have observed the above re: parents vs. not-yet-parents.

I haven’t observed such a universal perspective shift across other fields of ‘expertise’, nor do those perspectives seem to shift until after more experience, while with parenting it often seems to shift at first realization (“at first sight”).

So no, “if I were a teacher” or “if I worked in medicine” and the like don’t seem as likely to be subject to change.

// Just to be clear: not saying this as a parent. I am not a parent.


There needs to be a name for the reverse Gell-Mann Amnesia Effect. I.e., "You don't have the right piece of paper/you're from the wrong background/insert other personal attack here, therefore you're not qualified to comment." It's an annoying, cheap way to dismiss a perfectly good argument via ad-hominem means rather than actual reasoning.


On the bright side (well, less dark side really) at least it teaches the kids a very important lesson about putting things in writing while they are young and stakes are low.


"Give me six lines written by the most polite kid, and our contractor's child safety system will find enough in them to expel him."


Look, I agree 100% that this tech is creepy, invasive, and bad, but your argument that it doesn't prevent bad behavior (the extreme of which is school shootings) doesn't seem true at face value (honestly, I have no idea, but neither do you). Yes, of course very determined and very smart kids might still get away with it, but there is a very good chance it actually decreases the rate of these incidents significantly.

The argument you are making is basically identical to the old "guns don't kill people" argument, which has actually been proven completely false. Similarly, minimal suicide-prevention measures (like fences on bridges, or again, lack of access to guns), while they can usually be circumvented by the extremely determined, do usually prevent suicides.


> The argument you are making is basically identical to the old "guns don't kill people" argument, which has actually been proven completely false. Similarly, minimal suicide-prevention measures (like fences on bridges, or again, lack of access to guns), while they can usually be circumvented by the extremely determined, do usually prevent suicides.

How do you measure the efficacy of a fence on a bridge? By the number of suicides by jumping off that bridge, or by looking at suicide numbers in aggregate? Of course it would reduce suicide by jumping, but is it actually reducing suicide generally, or only a specific method?


We have a couple of natural experiments.

England changed from coal gas to natural gas. That prevented one very common method, and it led to a drop in total suicide rates.

It took a while for method substitution to happen.

We also saw similar drops when catalytic convertors were added to cars in the UK.

One of the important parts of reducing access to means and methods is to cause people to switch to less lethal methods. Removing access to co-proxamol (in the UK) saved lives because people switched to other meds. Any overdose is dangerous, but some overdoses are less likely to be lethal if medical attention is sought quickly.

England changed the quantities of paracetamol that people could buy. This link only talks about self-poisoning (so it doesn't address method substitution), but it does discuss the characteristics of some people who chose this method: did they go out to buy the meds, or did they use what was in the home? Were they able to buy large quantities, or did the legislation work? How long was it between having the initial thought of wanting to overdose and carrying out the act? http://cebmh.warne.ox.ac.uk/csr/resparacet.html

At the moment one of the strongest recommendations we can make for suicide prevention is to reduce access to means and methods, because that has clear evidence to support it.

You can hear Professor Nav Kapur talk about it here: https://www.youtube.com/watch?v=iWPEVhrWZS0&t=415s

The NCISH will probably have more information or links to research about method substitution: https://sites.manchester.ac.uk/ncish/

It's important that removing access to means and methods is not the only thing we do! It's important, but it's only a part of a package of suicide prevention measures.

> How do you measure the efficacy of a fence on a bridge? By the number of suicides by jumping off a bridge or by looking at the suicide numbers in aggregate?

You don't look at "suicides", you look at self-inflicted deaths. You look at self-inflicted deaths from people jumping from high places, and you look at the total number of self-inflicted deaths in the area. So far we strongly think that fencing off places like multi-story car parks saves lives and reduces the total number of self-inflicted deaths.


Hard not to compare this to surveillance in mainland China.

> Among the many banned words and phrases on Gaggle's list are "suicide," "kill myself," "want to die," "hurt me," "drunk," and "heroin."

Children growing up in this sort of environment will come to expect it from their government.

I wonder if they have any data at all that shows they can actually prevent catastrophes with this sort of system or if this is just a placebo with dangerous social side effects?


Also, have they ever actually seen kids communicate? With the way the world seems to be, my friends and I joked a lot about wanting to die (and still do). We don’t actually mean it, it’s just a really strong way of saying that shit’s messed up and it’s all rather absurd. The joke itself also implies irreverence. It’s usually taboo to joke about dying, so doing so is a bit subversive. It can also be used as pure ironic detachment. For example: [1]. I don’t know if current youths make jokes like this, but my friends and I do.

Anyway, yeah, restricting a few key phrases is amateur hour. It doesn’t actually help, and kids will just find ways around it anyway. Wait until they discover Unicode homographs!

[1] https://knowyourmeme.com/memes/like-this-image-to-die-instan...
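The homograph point is easy to demonstrate. Here's a toy sketch (not Gaggle's actual implementation; the phrase list is invented) of a naive substring blocklist, and how a single Cyrillic look-alike letter slips past it:

```python
# Toy illustration of a naive phrase blocklist (invented phrases,
# NOT Gaggle's real word list or matching logic).
BLOCKLIST = {"want to die", "kill myself"}

def is_flagged(message: str) -> bool:
    # Lowercase the message and check for any blocklisted substring.
    text = message.lower()
    return any(phrase in text for phrase in BLOCKLIST)

plain = "i want to die lol"
# Swap Latin 'i' for Cyrillic 'i' (U+0456): visually near-identical,
# but a different code point, so the substring no longer matches.
homograph = plain.replace("i", "\u0456")

print(is_flagged(plain))      # True
print(is_flagged(homograph))  # False -- the filter never sees a match
```

A robust filter would need a Unicode confusables mapping on top of normalization, since NFKC normalization alone does not fold Cyrillic "\u0456" back to Latin "i".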


hahaha! I am laughing so much at these images right now. Genius.

It's totally true though - no amount of word filtering is ever going to serve a real, tangible purpose for people who are having a consensual conversation between each other. Further, if AIM or ICQ back in the day had filtered my conversations, my friends and I would have just moved to one of the other many services that _don't_ use such filtering. It was bad enough when they started showing automatic previews of URLs in messages, the last thing I need is something actually modifying or acting on the text content of the messages.


>Children growing up in this sort of environment will come to expect it from their government.

Public school systems and employees are the government.


Of course they don't have data. None of this creeping Orwellian shit is ever justified with real, meaningful data because that's not how the people who support it think, nor do the people selling it care.


They aren't banned. They trigger intervention. Because schools don't want kids killing themselves.


This made me laugh. I'm currently in high school and we use GSuite (we also used GSuite in elementary/middle school, for that matter), but I'm not 100% sure if we use Gaggle. But even if we did:

Once kids come into HS/Middle School at least now in 2019/2020, most if not all use personal emails and external chatting apps (most likely Messenger, if not Instagram Messages or Snapchat). Compounded with pseudonyms, it's hard for any AI to determine who's chatting who... and even if FB/Snap knew who it was, I'm somewhat certain they wouldn't do anything.

However, in Elementary school, email and Hangouts was the way to go (experience from siblings and myself around 5 years ago), so I guess Gaggle's "AI" can determine if elementary and some middle school kids are a "threat".

Regardless, kids nowadays are really into "self-deprecating jokes": "I want to KMS" or other abusive/harmful messages, which a good majority use as jokes (in my experience) without any harmful intent. I think it will be extremely hard for any AI or NLP model to tell which is a credible threat and which isn't; that's hard even for humans who aren't teens.


> Compounded with pseudonyms, it's hard for any AI to determine who's chatting who... and even if FB/Snap knew who it was, I'm somewhat certain they wouldn't do anything.

It's hard for a human to determine who's chatting who, but not impossible. It'd just take too much attention away from other more important things like teaching.

Getting an AI to determine who's chatting who? It's a lot easier than you think. I recommend you read everything Snowden's released. Or watch Joe Rogan's podcast interview with him.

Your phone goes with you when you go places. That is literally all that is needed to determine who's chatting who, particularly with using Facebook or Snapchat or TikTok or whatever spyware kids mistake for trendy social apps these days.

> even if FB/Snap knew who it was, I'm somewhat certain they wouldn't do anything.

Maybe, maybe not. You don't know the laws in your country. Even if you did know the laws in your country, people and businesses break laws all the time.

You might not yet see the danger in selling an advertisement to you. I highly recommend you pay attention in any psychology or statistics classes you have.


The students at my school used to joke and say KMS and KYS all the time, too... until one of those kids actually did. And then they stopped saying that to each other almost over night.

Sometimes you are joking, but the person you are joking with isn't. Think about that next time you jokingly tell someone to KYS because they got a 65 on a quiz.


You're implying that an environment where people don't jokingly reference suicide would have led to a better outcome. Is there evidence for this?


So you have evidence that telling people to kill themselves is safe?


I remember when this happened at a neighboring school; teachers and students moved on like nothing had happened, since the school was so competitive.


Excellent! So hopefully this means that it's not desensitizing kids to surveillance, just teaching them how to avoid it since they know it's there.

Good times.


Haven't been in school for a while but this was my first thought. Kids social lives exist far from the realm of the Official School Account™️


> Compounded with pseudonyms, it's hard for any AI to determine who's chatting who

Just so you know, they don’t use the AI to determine who is who, just the context of the text. They know exactly who is who despite pseudonyms because of device IDs and tracking cookies and all the rest. The “who is who” is the easy part.


You're assuming the AI in question has access to such information. A state actor might, but not random company XYZ. It's certainly possible, but I doubt this company is equipped to do so; there's not an easy way to show the school what additional value this provides. Unless they could find a way to "sell it", such a capability would cost the company significantly more, likely for no additional revenue.


The "random company" is running the software the users are using. They are hooked into the account management system.


I'm a high school student as well, and I can confirm what Surya (what's the chance I'd bump into you on a random HN thread ;) ) is saying.

NLP stuff is pretty advanced with things like BERT and T5, but even then, I imagine that doing accurate threat detection will be hard for neural nets.


> Gaggle is one of the biggest players in today’s new economy of student surveillance, in which student work and behavior are scrutinized for indicators of violence or a mental health crisis, and profanity and sexuality are policed.

There has to be a way to reach so-called "troubled" kids without creating a culture of fear around expressing difficult emotions. This is part of a larger trend of parental surveillance tech (e.g., Life360) that I find quite disturbing.

Schools seem much more willing to pay for newfangled tech rather than more trained counselors, for example. Outsourcing content moderation to low-paid contract workers is what gave us Facebook. I fear that these technologies will put a nice band-aid on the problem of student mental health while actually making it worse.


This is a combination of CYA (“we did everything we could, we even surveilled all their communications”), and tech “magic” mentality: where new, unproven technologies are sold as an easy fix for complex societal issues. 10 years from now we’ll read about how none of this actually helped any kid in real crisis - but it’ll make a few school administrators feel good and in control for a few years.

The bigger problem is that unlike previous similar boondoggles (see D.A.R.E.), this might have lasting consequences by acclimating a generation to constant surveillance.


I’d not be at all surprised to learn D.A.R.E. went a long way to vilifying drug users.


Per the article, this tech is used so counselors can reach out to kids who aren't reaching out to counselors.


I try to have a charitable interpretation of other people's actions, but I can't help but feel like this is the old trick of using "think of the children" to force the leading edge of a wedge on something reprehensible.

This is an absolutely disgusting level of surveillance on developing minds, and trains them to think it's normal to live in a virtual panopticon. It's easy to force societal change by poisoning the impressionable minds of children and simply waiting.


It's not even meaningful. I read through the article and it's just scanning word docs and anything else that goes through your school email account. How many kids are dumb enough to threaten harm upon themselves or others, or otherwise share explicit imagery on their school email account...

All this does is get kids used to being watched.


I am currently attending a school at which GSuite and Gaggle are both used. I agree that Gaggle is extreme, especially the "three strikes" system.

On the other hand, I also try to avoid tying my school online identity to my personal online identity, and leave my school account for school. That also means not doing anything personal on district provided chromebooks. However, I only really know to do this due to knowledge in technology focused areas, not something most students have.

My main issue with this sort of tracking is that students are only very loosely told what is tracked and how; at our school, students were told that the chromebooks used Gaggle and not much more than that.

Overall, students and parents should definitely be better taught as to what occurs with surveillance, and in my opinion the current level of surveillance is extremely excessive.


From Gaggle's FAQ (https://www.gaggle.net/frequently-asked-questions/):

How much does Gaggle Safety Management cost?

Gaggle Safety Management truly is a one-of-kind solution that shouldn’t be mistaken for a less expensive or free alternative. We understand that no two school districts are alike, so our per student pricing is based on your specific requirements as well as the size of your student population. Pricing for other Gaggle solutions also can be customized to the needs of your school or district.

Perhaps a better question to ask is "How much will it cost your school or district if you don't use Gaggle Safety Management?" (emphasis mine)

A lot of their webpage seems to be riding on fear, as is visible when just visiting their homepage.


My school district started giving out a Chromebook to each kid, and has these things locked down pretty tight. Kids are forbidden from installing their own software. (We tried installing Python / Jupyter). They also hired a surveillance company, maybe the one mentioned in the article.

They conducted a trial run of the surveillance service at one middle school for a couple months, whereupon they claimed that it prevented two suicides. A suicide per month in a single middle school isn't remotely plausible.

Now of course the kids know about the surveillance. It was announced in the newspaper. But parents have been teaching our kids how to deal with the internet for years already: Don't write anything that you wouldn't want to see on the front page of the New York Times. Don't admit to being a member of a hated group. Don't criticize governments that are capable of carrying out censorship beyond their borders. Assume that all "private" information will be stolen or sold.

My kids have told me that every kid at school has figured out how to install a VPN on their cellphone, so they can bypass the content blockers. They know that the VPNs themselves aren't secure. They don't use the Chromebooks except to look up school assignments (which are themselves surveilled via anti-plagiarism service).

Unfortunately the short term desire for entertainment, and to be part of a community, outweighs their long term concerns for security and privacy. But at least from the standpoint of being informed, they're quite well informed.


> whereupon they claimed that it prevented two suicides. A suicide per month in a single middle school isn't remotely plausible.

The software is really worrying, and this is one reason why.

Risk prediction for suicide is really hard. Currently the risk prediction tools (created and used by health care professionals) are bad, and there's strong advice that these should not be used to predict future suicide or self harm, and should not be used to determine who to offer treatment to or who to discharge from treatment. And these tools are somewhat more sophisticated than "has the person used the word suicide?"

https://www.nice.org.uk/donotdo/do-not-use-risk-assessment-t...

https://www.nice.org.uk/donotdo/do-not-use-risk-assessment-t...


In my view a serious problem is that a person becomes "labeled" as a suicide risk, and has to bear that cross for the rest of their lives. It could get you kicked out of school, barred from employment, and so forth.


These addon services are troubling, but I don’t even want my child using a chrome book (really google account) either, which are at best unnecessary. What are our rights in this regard?


I've thought of an idea: The city council could pass a local ordinance similar to GDPR, for companies or public institutions that operate within the city limits.

It could be stripped down to a few bullet points, in order to make it easier to understand. The notable features are the right to know what personal information is being stored, and a right to be forgotten. It should include a provision, that a person can't be compelled to sign away these rights in return for access to public services.


I wonder what parents can do to prepare kids for that -- teach them to never post anything personal where the school could be watching (so nowhere), create a fake public persona to avoid closer scrutiny, and so on?

Schools will prepare students for accepting global surveillance and to counteract that parents will prepare kids for living in cyberpunk underground. I don't think either of those extremes are good for the kids themselves...


Honestly, I'd flood their system with words, phrases and various images from the internet.


So much for education for democratic citizenship.

Someday I will help lead a school whose software is written by students and whose policies are debated by student government, faculty senate, and the PTA.


This is so utterly horrifying I can't even put into words my disgust. Although, I probably should have seen it coming.

How do the schools and Gaggle have access to all of this info anyway? I assume they have school accounts that are provided by the school and therefore the legal property of the school since they are the ones paying for the MSFT 365 accounts? But what about Gmail? I don't use it but I thought it was free? I take it there is some corporate version that is paid for?



Profiting off of fear-mongering. But I suppose there's a certain amount of sales genius (albeit evil) in manipulating faint-hearted administrators to buy your product.

They need to add things like going outside, storm, thunder, lightning, raincoat and so forth to the list, since getting struck by lightning is about 600 times more likely nationwide than being shot at school.


Lightning doesn't make global news.


Right, because, I suppose, it's 600X more commonplace than school shootings!

So, we're paying close attention to exotic scenarios we'll likely never experience personally, and getting worked up over them. Meanwhile the various elephants in our rooms, the ones that are likely to be actual problems for us (heart disease, dying in a car crash, cancer, etc.) are drab and gray, so we ignore them. I mean who wants to solve real problems? BORRINGGG!

Suddenly just now while typing I realized that 1) an exotic, unlikely scenario 2) that we engage closely with ...is precisely the paradigm of all fiction, fantasy and entertainment. What is a video game other than an exotic scenario you'll probably never experience in real life, that you engage with and try to "get into?"

How about in ancient times? Sure - even back in Greece or in Shakespeare's time, "the play was the thing," namely an exotic scenario (the inside of the royal castle perhaps, which most people will never see) and "getting worked up over it" might be more aptly described by your drama teacher as sharing in the tension, drama and catharsis.

Even our ideas about the future tend to center around exotic utopias, or exotic apocalypses (the writer John Michael Greer has talked about this) whereas in reality things will probably just continue to slowly get shittier, more annoying, less comfortable, less convenient etc., and ironically it'll probably be that way precisely because we've got our heads up our asses with fantasy (including things like "Hey there was a shooting; let's implement 1984.") and can't seem to reach a basic consensus about what reality is and how to keep our house in order. BORRINGGG!!!

Anyway if we're using news of a school shooting as entertainment -- "Riveting tension!" says the New York Times. "So sad!" says the Washington Post -- then suddenly instead of a fictional universe with actors, we're using real people, real victims, for our entertainment. Which feels like a wrong to me.

Huge tangent, sorry. Didn't expect to be writing in this direction today.


A really sad state of affairs when we are monitoring all our children's communications. It's the same as those parents installing spyware on their kids' devices: an attempt to alleviate the symptoms rather than the causes: loneliness, lack of attention, lack of parenting skills, poverty, lack of funds for good teachers, ...


I find it interesting how this sort of thing is spreading so far.

The CCP state, Facebook, Google... now Gaggle.

I hope there's some studies of the damage such things do to people.

How much do they lose the ability to control their own lives?

How does this change their attitude to other people?

Respect for authority?

Most of all, this ought to be opt-in, opt-in only, and if you opted-in you can get out again. My reading of the articles says it's none of these.


It would be great if a hacker could get this system to create a bunch of false positives to the extent that the schools could no longer justify the $60k price tag on this. Also, isn't that enough money to hire an additional school psychologist? This is not a problem software can fix.


This is awful. Privacy should be a fundamental right.

But I cannot help thinking of my younger self, toying with that Gaggle AI :) I mean, as a teenager (old enough here to have been through the "no-future" 80's), me and my friends would have pushed that Gaggle AI where no AI has gone before :)

What can they do to the kids who write "inappropriate" stuff? They cannot prevent the kids from doing it again; it is free speech during private conversations.

This reminds me of how my cousin and I created a secret language, "fefe", that was easy enough to process and speak, but too confusing to understand when listened to. Kids are going to just do that: invent new words, or give old ones new meanings, and challenge Gaggle.


They say it's AI but it seems like just a hash map to me.


Most AI seems to be either that, human workers pretending to be AI, or some half-baked, misapplied and not working deep neural network. I have a huge respect for companies that know about linear regression.


And what is a neural network but just interconnected weighted functions?

If you can call that AI, why not a hashmap? :p
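That reading is easy to make concrete. Here's a minimal sketch of what such a "hash map AI" might look like (all words, weights, and names are invented for illustration, not taken from Gaggle):

```python
# Hypothetical "severity" table -- a plain dict, i.e., a hash map.
# The words and weights are made up for this sketch.
SEVERITY = {"suicide": 3, "heroin": 3, "drunk": 2, "damn": 1}

def scan(message: str) -> int:
    """Return the highest severity of any word found in the message."""
    # Strip trailing punctuation, lowercase, and look each word up in O(1).
    words = (w.strip(".,!?").lower() for w in message.split())
    return max((SEVERITY.get(w, 0) for w in words), default=0)

print(scan("I got so drunk last night!"))  # 2
print(scan("see you at practice"))         # 0
```

Every word is a constant-time dictionary lookup; nothing in a system like this requires machine learning, which is the commenters' point.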


How is this legal? I'm not doubting the school's intentions but this is a massive breach of privacy, a sort of dystopic 1984 scenario.

"Believe happy thoughts! You are happy! Smile please!"

I think the solution lies outside the tech world and in face-to-face solutions; train teachers to spot problems and red flags, and possibly have a tally system. Are several teachers reporting issues? A student missing classes, showing up in long sleeves, withdrawn, depressed? Grades slipping, unable to focus? Set a counseling meeting and get to the bottom of it.

An abstract and, mind you, poorly run AI is just a nightmare scenario. I feel sorry for any students going through the system now.


> How is this legal? I'm not doubting the school's intentions but this is a massive breach of privacy, a sort of dystopic 1984 scenario.

You might also be surprised to know that your employer can also log everything you do with company property. Not that I agree with it but the legal precedent is there and children in school have far fewer rights than others.


>>student emails are scanned in real time, and those that violate the rules are blocked from their intended recipients.

Aside from the massive overriding issues of such pervasive surveillance, this practice seems additionally stupid and counterproductive.

As soon as kids figure out that their language is being monitored for certain phrases -- and this is an instant give-away -- they'll simply avoid using those phrases and develop a new code, rendering the entire system useless (as well as abhorrent).

Gaggle's mgt are clearly not the sharpest tools in the shed, making these kinds of basic and stupid opsec errors...


Back when I was in school, they had software called "Foolproof" installed on all the computers that largely did the same thing. In addition, it prevented system-level changes to the computers and crippled them in other ways, like preventing installation of applications and hooking into Windows to implement all sorts of restrictions. It also did content monitoring and would regularly take screenshots of whatever you were doing.

However, the software was poorly written, full of bugs, would frequently crash, and was generally ineffective. It wasn't long until we (kids) figured out its weaknesses and had full control of it -- we could force it to crash, uninstall it, and we even took over its command and control server.

Mind you, this was 20 years ago. Today's kids are up against more mature software, but I tend to feel whoever makes these things is generally a bottom-of-the-barrel outsourced devshop, so it's probably all still very low quality. Couple that with kids being generally more tech savvy these days and clueless school administrators -- kids are generally smarter than you think and need a lot less protecting. Information spreads at lightning speed in a school. I'm sure they all know all about this software and how to get around it.


> “[There’s] a very consistent and long-standing belief that children have fewer rights to their own communications, and to their own inner thoughts and to their own practices,”

IMO, that's the biggest problem here. Until we stop treating children the way we treated slaves, people of color, women, and LGBTQ+ people in the past, we will get horrible systems like this. We need to recognize that "it is self-evident that all people are created equal", and that includes children too.

I'm a pretty young user by HN standards and I still keep in touch with under18s. I know a few who have been surveilled at every step and prohibited from using most modern technologies like other teenagers do. This created way more problems than it solved. This was particularly prevalent in the U.S. On the other hand, I never had any rules imposed, and I think I'm in tech only because of that. In Poland where I live, monitoring children's use of technology is almost unheard of. I haven't really heard of any problematic situations that monitoring would solve. On the other hand, I know a few horror stories from the U.S. where monitoring was used.


One of these days, someone is going to suggest the radical idea that there might be some connection between

(a) a culture of authority and draconian measures where adolescents aren't respected and the very people who are supposed to look after them and set an example are instead their enemies and not to be trusted

and

(b) a culture where significant numbers of adolescents think it's OK to commit extreme acts that harm themselves or others.


Don't tell them that they're the problem, because they (and their investors) are making loads of money by positioning themselves as the "solution."


It's part of a broader pattern of factory-lining everything in the industrialized world. Any solution that is not codified into a law or structure of some kind is "illegible" to those who only see structures.

Who needs social structures when you can just outsource every problem to a company and throw the ones that don't fit into some variant of prison or mental ward?


When I was going through school I was taught that this is the sort of thing that happens in the GDR--the surveillance state. We watched videos/documentaries on this. These are the tools of totalitarian states is what we were taught. The technology has changed but the end result hasn't--it's still chilling.

This has come up for my family this past week. When my son came home from school and said he feels uncomfortable with all of the video cameras in the hallways I was shocked that this was a thing. He said he even felt uncomfortable going to the washrooms as he couldn't be sure there weren't cameras.

The measures of surveillance employed without notifying the parents was shocking to me--with no option to give feedback or opt out? I haven't even looked into other forms of monitoring like what's shared in the article. But I'm going to find out.


While Gaggle is indeed crazy intrusive, they're not really alone in this world of K12 monitoring. GoGuardian and Lightspeed Systems are doing the same thing, perhaps to a lesser degree, but maybe not.

For example, administrators using GoGuardian can get 'Smart Alerts' for 'self-harm' and other objectionable material. They'll receive an email that includes a screenshot of the material in question, and for G Suite, it spans Google search, Docs, and effectively any Google service.

However, that said, I just tried to pull up a link for GoGuardian and noticed they have this disclaimer:

"Please note: Smart Alerts for the "Self-Harm" category is no longer available for new customers. For more information, please reach out to your sales representative."


My high school gave all students Gaggle email addresses 10-15 years ago. The account was used as part of an assignment in a computer class to send some emails, though I think the teachers expected you to continue using the account for personal things. I don't recall being told the account was being watched by anyone. Makes me glad I never used it for anything beyond the assignment.

Incidentally, over the past few years I've been doing an audit of my password manager and couldn't log into the Gaggle account any longer, so I suppose the addresses are deleted after a few years. Hopefully that's current policy as well as I think most people would agree that what you write as a teenager may not reflect well on you later.


I am a childless tin-foil hatter so I am asking out of ignorant curiosity- is there any way to avoid tracking on your child? Does refusing to agree to Google/Gaggles/MS/Etc's ToS, EULA or Privacy policy mean the public schools can refuse your child education? Is there some different ToS and privacy policy for minors forced into using these services?

I also feel like this CYA policy may backfire. Before they could rightfully say they weren't the responsible party. Now that they are using tax payer funds to "prevent" these things I feel like they put themselves in a position to be responsible if it fails. I am sure there is some legal doctrine term for what I am trying to say.


> I am a childless tin-foil hatter so I am asking out of ignorant curiosity- is there any way to avoid tracking on your child? Does refusing to agree to Google/Gaggles/MS/Etc's ToS, EULA or Privacy policy mean the public schools can refuse your child education? Is there some different ToS and privacy policy for minors forced into using these services?

> I also feel like this CYA policy may backfire. Before they could rightfully say they weren't the responsible party. Now that they are using tax payer funds to "prevent" these things I feel like they put themselves in a position to be responsible if it fails. I am sure there is some legal doctrine term for what I am trying to say.

This is a really interesting perspective and I'm curious if someone knowledgeable on the subject would comment. If public education is a right, can my child be denied that right if I refuse to consent to a license? If the child consents, is that not an invalid contract?


Two brief thoughts

1) This is no way to treat kids that are already suffering from a dysfunctional society created by adults.

2) Even worse, to these kids, this is only going to normalize the idea of the surveillance state. But maybe that's the point.


This feels like an extension of how the work world is being run increasingly. More and more process, policies and automated decisions and less common sense. Good to teach children as early as possible to fall in line :(


> Gaggle touts itself as a tantalizingly simple solution to a diverse set of horrors. It claims to have saved hundreds of lives from suicide during the 2018–19 school year.

I just read a great New Yorker piece on a security company that sold FUD[0]. This seems like another example of FUD for sale, very similar to some security companies out there.

[0] https://www.newyorker.com/magazine/2019/11/04/a-cybersecurit...


Somewhat of a tangent, but I can't help thinking about it – since workplace surveillance has now also been normalized in the US, four years of college might be the only place where young American adults will have the ability to be exposed to a place where they can say or do anything on the Internet without Big Brother looking over their shoulder.

I'm assuming that most progressive college campuses don't use these ridiculously invasive tools (the college I went to ran a fairly unrestricted internal and guest network), but perhaps that's being too optimistic.


Note the politics of funding such surveillance as a justification for not needing to restrict access to weapons.


Ethics and privacy rights notwithstanding, how can we help educate schools that software like Gaggle is more fraudulent than helpful? It's quite clear to me that all this product does is maintain a blacklist of controversial words and flag any content passed through it that contains them, calling this AI to make it sound more advanced than it is. False advertising to a community that is generally not very tech savvy. Are parents and school boards really comfortable with $10/hour contract employees looking through their children's content to decide its obscenity? With the money these school districts spend on scammy software, they could be recruiting health experts. I'm confident this company justifies its product's existence by taking a secondary school kid's joke about killing himself or herself out of context, flagging it, and claiming they prevented a suicide to boost their metrics.


I was not happy to learn recently that our school was using Google Sites and Chromebooks. We weren't asked or even notified about that. Gaggle is another large step in the wrong direction.

All demonstrating that school (and govt) officials are not even aware of the need for privacy.


Foucault was astute in observing that schools mirror the architecture and processes of prisons.


This is completely unnecessary. Has our world really come to a complete dragnet? Anything in the name of safety? Innocent as long as you keep proving yourself innocent. This is no longer the land of the free and the home of the brave.

Let the kids be kids.


Seeing this headline reminded me of that Lysol TV ad. It all sounds like a noble cause until they briefly mention "and partnering with a smart thermometer company". And bam, there's your kids' data being sucked up by a medical company...


Sex and violence and curse words have never been allowed at school. Kids can do all these things on their personal Gmail/WhatsApp if they want.


Sounds like something out of a Black Mirror episode.


Heh, I came here looking for this comment. That's because it literally is! "Arkangel" follows a very young girl through to young adulthood. When she was very young, she was part of a pilot program that allowed her mom to see what she was seeing at all times, filter visual and auditory content, etc.

It went about the way you'd expect.

Bark's response to DHH [1] made me fucking literally nauseous. How detached from reality do you have to be to not realize the insular effects that this will have on children? How insulated is one's life that they would be unable to conceive of the obvious and likely immediate effect these tools will have on teenagers who are already struggling with identity and discovery?!

Can't wait until Bark announces the feature where they'll mail you weekly to tell you if your kid is likely to be gay (which, as any LGBT person on social media will tell you, is incredibly easy. Facebook has been accidentally outing people via association for a decade or more, and I can't imagine how that is amplified with full access to the screen, messages, raw data.) Just disgusting.

[1]: https://mobile.twitter.com/brandonhilkert/status/11892177623...

EDIT: Also, wow, he's really trying to drop "prevented" statistics in another Twitter thread. It's not even impressive: you're monitoring 50M students and have only caught 320-some predators despite pervasive, constant monitoring?

The testimonies too, HOW IS THIS REAL -- "I only get notifications if there are items of concern (sex, depression, bullying, profanity, etc.) Totally worth it!" -Bark Mom


I can't seem to edit my comment any further (Thanks HN for just returning 200 on the edit and trashing the content of my update! This is the third time I've typed it, thanks a lot!)

As a last plea, young adults are human beings with autonomy. Trying to suppress that autonomy has always had the same effect, every generation has tried.

Meanwhile, I can confidently say that there is a VERY STRONG chance I would not have made it through high school with pervasive, constant surveillance. Being an overly anxious boy coming to terms with his sexuality in god damn Kansas is hard enough; having to worry that every electronic action would be reported to my school or parents probably would've made the thoughts of suicide unsuppressible, and I wouldn't have had the VERY small community that I sort of accidentally discovered via Facebook – again, because there are so many indicators of someone possibly being LGBT even just from what friends you have in common, etc.


You have to wonder about the slimy people who advocated for and approved this crap in the school systems. Only in America.


oh my god what a world.

Basically, this is a scam where a company gets rich by convincing school districts to abusively surveil their students with no real benefit.


DHH really laid into Bark this week: https://mobile.twitter.com/dhh/status/1189235145192157184


Creepy. Who's working on homeschooling?


but i was just writing a paper on the social impact of Beck's mid-90s hit, "Loser"


Piazza is doing the same thing.


Have to get them ready for the police state



