Hacker News

My school district started giving each kid a Chromebook, and has them locked down pretty tight. Kids are forbidden from installing their own software (we tried installing Python / Jupyter). They also hired a surveillance company, maybe the one mentioned in the article.

They conducted a trial run of the surveillance service at one middle school for a couple of months, after which they claimed it had prevented two suicides. A suicide per month in a single middle school isn't remotely plausible.
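A rough back-of-the-envelope calculation shows how far off that claim is from the background rate. The enrollment figure and the youth suicide rate below are my own assumptions for illustration, not numbers from the district:

```python
# Expected suicides in one middle school over a two-month trial.
# Both inputs are illustrative assumptions, not official figures.
students = 600                 # assumed enrollment of a single middle school
annual_rate = 2 / 100_000      # assumed annual suicide rate for that age group
months = 2

expected = students * annual_rate * (months / 12)
print(f"Expected suicides in {months} months: {expected:.4f}")
# On the order of 0.002 -- claiming two prevented in that window implies
# a rate roughly a thousand times the background rate.
```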

Now, of course, the kids know about the surveillance. It was announced in the newspaper. But for years, parents have already been teaching our kids about dealing with the internet: Don't write anything that you wouldn't want to see on the front page of the New York Times. Don't admit to being a member of a hated group. Don't criticize governments that are capable of carrying out censorship beyond their borders. Assume that all "private" information will be stolen or sold.

My kids have told me that every kid at school has figured out how to install a VPN on their cellphone, so they can bypass the content blockers. They know that the VPNs themselves aren't secure. They don't use the Chromebooks except to look up school assignments (which are themselves surveilled via an anti-plagiarism service).

Unfortunately, the short-term desire for entertainment, and to be part of a community, outweighs their long-term concerns about security and privacy. But at least from the standpoint of being informed, they're quite well informed.




> whereupon they claimed that it prevented two suicides. A suicide per month in a single middle school isn't remotely plausible.

The software is really worrying, and this is one reason why.

Risk prediction for suicide is really hard. The current risk-prediction tools (created and used by health care professionals) are poor, and there's strong advice that they should not be used to predict future suicide or self-harm, and should not be used to determine whom to offer treatment to or whom to discharge from treatment. And those tools are somewhat more sophisticated than "has the person used the word suicide?"

https://www.nice.org.uk/donotdo/do-not-use-risk-assessment-t...

https://www.nice.org.uk/donotdo/do-not-use-risk-assessment-t...
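The base-rate problem behind that advice can be sketched with Bayes' rule. Assuming a hypothetical screening tool with 80% sensitivity and 90% specificity, and a prevalence of 1 in 10,000 in the screened population (all three numbers invented for illustration), the positive predictive value is tiny:

```python
# Base-rate fallacy: even a decent-looking classifier is nearly useless
# when the event is rare. All numbers are illustrative assumptions.
sensitivity = 0.80      # P(flagged | at risk)
specificity = 0.90      # P(not flagged | not at risk)
base_rate = 1 / 10_000  # assumed prevalence in the screened population

p_flag = sensitivity * base_rate + (1 - specificity) * (1 - base_rate)
ppv = sensitivity * base_rate / p_flag
print(f"P(at risk | flagged) = {ppv:.4%}")
# Roughly 0.08%: over 99.9% of flagged students would be false positives.
```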


In my view, a serious problem is that a person becomes "labeled" as a suicide risk and has to bear that cross for the rest of their life. It could get them kicked out of school, barred from employment, and so forth.


These add-on services are troubling, but I don't even want my child using a Chromebook (really, a Google account) in the first place, which is at best unnecessary. What are our rights in this regard?


I've thought of an idea: The city council could pass a local ordinance similar to GDPR, for companies or public institutions that operate within the city limits.

It could be stripped down to a few bullet points to make it easier to understand. The notable features are the right to know what personal information is being stored, and a right to be forgotten. It should include a provision that a person can't be compelled to sign away these rights in return for access to public services.



