I can say firsthand they are excellent people, both technically and also morally. They are careful about security and protecting people and data. There is the usual email protection for virus scanning, spam blocking, abuse alerting, archiving of legal items, and the like.
In my direct experience, the entire chain of command up to and including the UCB CTO is solid. So I hope Napolitano steps up and explains what's happening now.
In the meantime if any UC Berkeley people want to learn how to use GPG for encrypting email, and VPNs for encrypting traffic, I will donate pro bono hours.
But, I think people need to be aware that it's not just email: "The intrusive device is capable of capturing and analyzing all network traffic to and from the Berkeley campus, and has enough local storage to save over 30 days of all this data ("full packet capture"). This can be presumed to include your email, all the websites you visit, all the data you receive from off campus or data you send off campus."
Is there anything we can do to protect ourselves from, say, posting something into a HN forum? Or any chat program? Or FB chat, etc...?
> 30 days of ... "full packet capture"
Also known as "XKEYSCORE".
Also notably, a system like this in such a privileged place on the network has the potential to go beyond simple surveillance: it can be used for offensive purposes by injecting malicious content, especially if control of the system falls into the wrong hands. The risks introduced by a tap like this are rarely considered; people tend to focus on passive data surveillance as the primary issue. But as a student or faculty member, I'd be more concerned about the distribution of malware.
Many foreign actors (basically, China) could exploit it to steal research information from students and faculty. PGP emails wouldn't protect against a rootkit.
TLS or other encryption. However, if they [UCOP, vendor, other] get your key(s) they will be able to read archived traffic.
By "trusted" I mean something you reasonably control, not things like privateinternetaccess/the next cheap vpn/FREE TORRENT SUPER PRIVACY STEALING NSA VPN/etc.
I say this with only the barest understanding of how the Tor browser interfaces with your computer, the local routers, and the internet at large.
If someone is knowledgeable, please let us all know: can the UC archive your traffic if you use Tor? If not, this is a good workaround while the courts take their time.
Otherwise, use TLS for everything; it seems like HN and all the chat programs do already.
Given Napolitano's pedigree in AZ and at DHS, I think the best explanation you can expect, if any, is that it's either to protect children and/or national security, without any details.
Btw, how did she end up as the head of UC? Her record is like a mini-Bush in action. You can't make it up: https://en.wikipedia.org/wiki/Janet_Napolitano#Walmart.E2.80...
Do you seriously think that, choosing between a new spectrometer and a network snooping device, she would pick the spectrometer over her DHS security contractors?
I can only suppose that the UC search committee somehow ended up with only Hoover and Napolitano on the short list, and as Hoover hadn't returned multiple phone calls...
Napolitano seems to me like a competent mainstream (“centrist Democrat”) politician, not unlike most of the people who would be considered for a post like UC President. She’s not my personal favorite, and I disagree with her about many policy topics, but in a lineup of US ex-governors, she’s easily in the top quarter. By contrast, Bush was an absolute disaster in every way.
Generally, the communist witch hunts of the 50's, 60's and 70's (especially the House Un-American Activities Committee) aren't considered a good thing. Apparently now they are.
Not likely; she is the former head of the Department of Homeland Security.
Could you explain why this should be necessary when email communication between the servers and clients is already encrypted via SSL/TLS? Can they bypass that? (And if so how?)
You're trusting the server too much. While TLS serves an important role, it only protects the link, not the data.
End-to-end encryption (which PGP/GPG provides) is important for the same reason bittorrent is so successful: it protects the data, not the host. When bittorrent was new, a common concern was that you might be getting the data from anybody, including people who are malicious. This concern was a product of traditional download methods where you had to trust the person sending the data ("only download from reputable sources"). Bittorrent bypassed that problem by providing hashes, so the data itself could be verified regardless of how you got it.
Securing a host or connection suffers from the same problem. Just like how reputable fileservers can still be hacked or strong-armed into serving incorrect data, the server at the end of the TLS connection can be compromised.
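The hash-verification idea can be sketched in a few lines. This is a toy illustration (the chunk and hash here are made up), but the principle is what bittorrent does per piece: trust the hash, not the source.

```python
import hashlib

def verify_chunk(data: bytes, expected_sha256: str) -> bool:
    """Accept data from any peer, trusted or not, as long as it
    matches the hash published out-of-band."""
    return hashlib.sha256(data).hexdigest() == expected_sha256

# Publisher side: compute the hash and distribute it through a trusted channel
chunk = b"some file contents"
published = hashlib.sha256(chunk).hexdigest()

# Downloader side: where the bytes came from no longer matters
assert verify_chunk(b"some file contents", published)      # accepted
assert not verify_chunk(b"tampered contents", published)   # rejected
```

The same logic is why a PGP signature on an email body protects you even when the mail server in the middle is compromised.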
That depends on the people at the TLS endpoints who see the plaintext. If they are collaborators, they simply build in access to the plaintext and we call it "Prism". If they are refuseniks, it might be necessary to use a national security letter.
As a rule of thumb, if you care enough that it should not be seen by others, encrypt before it is sent, before it is written to disk. You have no control over a stolen laptop, over what the backup servers (assuming remote backup), etc... If it is private, encrypt. There's a bit of a learning curve, but once you do it often enough it just becomes part of the workflow.
Good tools you should use on a regular basis: KeepassX, gpg, Enigmail
Please read the parent's question, asking why it would be necessary to encrypt when using SSL/TLS. I explain why, and yes, I do go a little further into more general cases, because I'm assuming that if they don't understand the need to encrypt in this specific case, they could benefit from, and hopefully appreciate, understanding the more general cases as well.
SSL/TLS protects the data in motion from people monitoring the network by illegitimately MITMing users.
GPG protects data at rest from server operators reading things they should not.
RFC2487 Section 5:
A publicly-referenced SMTP server MUST NOT require use of the
STARTTLS extension in order to deliver mail locally. This rule
prevents the STARTTLS extension from damaging the interoperability of
the Internet's SMTP infrastructure. A publicly-referenced SMTP server
is an SMTP server which runs on port 25 of an Internet host listed in
the MX record (or A record if an MX record is not present) for the
domain name on the right hand side of an Internet mail address.
Yes, I am aware of the RFC2119 meaning of "MUST NOT." In reality, nothing prevents the servers from disallowing that downgrade, except that they may not be interoperable with other servers on the internet. If the operator of the server wishes to make that tradeoff, then requiring STARTTLS is an option.
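For anyone who wants to check this themselves, here's a rough sketch in Python. The parsing helper is testable offline; the commented-out smtplib snippet shows how you might probe a live server (the server name is a placeholder, and the EHLO lines below are illustrative, not captured from a real host):

```python
import smtplib

def advertises_starttls(ehlo_keyword_lines):
    """Given the keyword lines of an EHLO response, report whether
    the server offers the STARTTLS extension (RFC 3207)."""
    return any(line.split()[0].upper() == "STARTTLS"
               for line in ehlo_keyword_lines if line.strip())

# Illustrative EHLO keyword lines, not from a real server
assert advertises_starttls(["SIZE 35882577", "STARTTLS", "8BITMIME"])
assert not advertises_starttls(["SIZE 35882577", "8BITMIME"])

# Against a live MX (requires network access):
# with smtplib.SMTP("mx.example.com", 25, timeout=10) as s:
#     s.ehlo()
#     print(s.has_extn("starttls"))
```

Note that even when STARTTLS is advertised, an active attacker on the path can strip it from the EHLO response, which is exactly the downgrade the RFC language is about.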
That is what actually happened when I tried it.
You may also test using that address on checktls
I receive emails from whitehouse.gov and other .gov sites just fine, so they seem to be ok with it.
Also, my self signed CA cert and self signed server cert regenerate every hour for those that do not support PFS. My DH primes regenerate periodically. If you are someone that needs to assure you are talking to me, you don't need a CA. Instead, you test the connection to me from multiple ISP's before and after you send me an email.
For real security, you would encrypt the payload. Maybe an encrypted 7zip inside a PGP encrypted message.
Suppose Alice (email@example.com) emails Bob (firstname.lastname@example.org). If Alice sends Bob an email without PGP encryption, the text of those emails is stored on both Google's and Yahoo's servers, allowing either of those companies to read the email (or a government if served with a subpoena, or others -- possibly the general public -- in the event of a data breach). This is true regardless of whether it was sent with TLS at every point in the chain.
On the other hand, if Alice uses PGP to actually encrypt the text of the email with Bob's and her public key, nobody except Alice and Bob themselves (not even Yahoo or Google, who have the message stored on their servers) can read the message.
It's an important distinction -- as major consumer messaging products like Apple's iMessage, WhatsApp, etc. have recently started to implement this end-to-end encryption, where even the companies themselves are unable to read the messages their users send, the FBI and other federal agencies have started to complain about their newly limited ability to spy. That's unlike when only TLS was used and they could just serve an NSL to Apple/Google/etc. to make them give up your message without your knowledge (with the company's only other option being to shut down their business, more or less).
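The distinction can be made concrete with a toy model. The XOR "cipher" below is a stand-in for real cryptography (it is NOT secure); the point is only who holds a key that recovers the plaintext:

```python
from itertools import cycle

def toy_cipher(data: bytes, key: bytes) -> bytes:
    # XOR stand-in for real encryption -- illustration only, NOT secure
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

message = b"meet at noon"

# Transport-only (TLS-style): each hop decrypts, so the provider
# stores and can read the plaintext
link_key = b"link"
on_wire = toy_cipher(message, link_key)
stored_at_provider_tls = toy_cipher(on_wire, link_key)
assert stored_at_provider_tls == message        # provider sees plaintext

# End-to-end (PGP-style): only Bob's key recovers the text, so the
# provider stores only ciphertext
bob_key = b"bobs-secret"
stored_at_provider_e2e = toy_cipher(message, bob_key)
assert stored_at_provider_e2e != message        # provider sees ciphertext
assert toy_cipher(stored_at_provider_e2e, bob_key) == message
```

In the real protocols the "keys" are asymmetric (Alice encrypts to Bob's public key), but the who-can-read-what property is the same.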
I'd be very surprised if correctly configured SMTP using TLS is intercepted.
If I use SMTP to send via gmail with SSL/TLS over port 587, the headers show it hopping through 5 stanford.edu servers, and finally `spf=softfail (google.com: domain of transitioning email@example.com does not designate (a stanford IP) as permitted sender)`.
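If you want to inspect that kind of routing yourself, Python's stdlib `email` module can pull out the `Received` chain and the authentication results. The headers below are fabricated to resemble the situation described, not real captured data:

```python
import email

# Hypothetical headers modeled on the comment above; the hostnames,
# IPs, and addresses are illustrative only
raw = (
    "Authentication-Results: mx.google.com; spf=softfail\n"
    "Received: from smtp3.stanford.edu (smtp3.stanford.edu [171.67.219.83])\n"
    "Received: from smtp1.stanford.edu (smtp1.stanford.edu [171.67.219.81])\n"
    "From: email@example.com\n"
    "Subject: relay test\n"
    "\n"
    "body\n"
)

msg = email.message_from_string(raw)
hops = msg.get_all("Received") or []           # newest hop first
stanford_hops = [h for h in hops if "stanford.edu" in h]
spf_result = msg.get("Authentication-Results", "")

print(len(stanford_hops), "stanford.edu hops;", spf_result)
```

Walking the `Received` headers top to bottom traces the message backwards from your inbox to the originating server, which is how you spot unexpected relays.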
Stripping TLS is unforgivable. Shame on them.
Been some time since I used a desktop email client, so I'm unsure if clients have now been upgraded to reject sending mails to/through servers that don't do STARTTLS. But I'm guessing no.
It's sometimes quite helpful to build on the codebase that invented SSL.
Honestly, if you think that's bad, try getting a server online (in the datacenter!). The network admins take their firewall rules very seriously (as well they should).
For me, the biggest pain is the two (yes, two) backup services that must be installed for my department.
This doesn't make any sense - either you were sending them via Stanford or you weren't. If you weren't, how can they be relayed via Stanford?
One of the benefits of working in academia in CS has been the absence of top-down corporate IT control -- no MDM on your mobile devices, no third-parties having root on your devices, the ability to have a static IP, etc.
1. incredible freedom to work on whatever I want
2. a real focus on prototypes and curiosity, with less pressure to "ship working artifacts"
3. explicit recognition that "learning new skills" is part of the job. When I was in industry, if I needed an antenna designed, I would just go hire an RF engineering firm. Now I can spend time hacking on it myself, which I really enjoy
4. Absence of corporate bullshit, which has traditionally included tremendous IT freedom, no centralized control of messaging / official PR line, etc.
5. Generally no IP bullshit unless you want it -- everyone loves open-source, etc. AMP is a real leader here.
Things that are a challenge:
1. project management is often lacking -- it can be hard to focus, and the freedom to explore is a double-edged sword of "it's been 18 months, what papers have we actually finished?"
2. The pay isn't great -- CS postdocs make $60-70k/year. Fortunately I can consult on the side to pad this out.
3. A lot of your inputs are 23-year-old graduate students, who are smarter than you, enthusiastic, and inexperienced. This can be a fantastic source of new creative ideas, but sometimes they also decide to switch from julia to rust 80% of the way through a project. Sometimes you wish you could just hire that RF engineer and get shit done (see above)
tl;dr : I get to build the coolest shit with the most amazing people helping me and learn a ton of new things, in an environment which generally shares my values.
It would be bad enough if you were, say, provided a laptop to use for work, but I'm guessing they want you to install that on your personal machines too, no? Unconscionable.
 - https://en.wikipedia.org/wiki/Janet_Napolitano
Condoleezza Rice on the board of Dropbox, a mainstream file sharing/syncing service where some of our "papers and effects" are kept. https://en.wikipedia.org/wiki/Dropbox_%28service%29#2014-201...
There's a thing called regulatory capture: "Regulatory capture is a form of political corruption that occurs when a regulatory agency, created to act in the public interest, instead advances the commercial or political concerns of special interest groups that dominate the industry or sector it is charged with regulating." https://en.wikipedia.org/wiki/Regulatory_capture
This seems like that in reverse. I wonder what kind of capture it should be called.
> Janet Napolitano [was] the Secretary of Homeland Security under Barack Obama.
That qualifies her w.r.t. surveillance and exercise of power. Sounds a lot like her approach to the UC system! Barack Obama, while a Democrat, commands a fleet of extrajudicial killing machines that have been used to murder American citizens without trial. It was during Napolitano's time at DHS that drones for surveillance and control (read: killing) became cemented as governmental policy.
Elsewhere here commenters mentioned John Yoo and Condi Rice for their roles in enabling W's torture policy. How is Napolitano's support for drones (and many other things - http://www.foxnews.com/opinion/2013/07/17/janet-napolitanos-...) different?
> - The intrusive device is capable of capturing and analyzing all network traffic to and from the Berkeley campus, and has enough local storage to save over 30 days of all this data ("full packet capture"). This can be presumed to include your email, all the websites you visit, all the data you receive from off campus or data you send off campus.
Just how expensive was that system?!
I'm not the president of a major university, but how is this justified in the least?
You've actually dramatically understated it, I think. The UC doesn't exist through the good graces of the state. The UC's good graces are sufficient to make them exist under the state constitution absent any other part of government. If the legislature or governor wanted to shut them down, the only way they could do it is if they could get enough other regents to vote for it. The UC system has a great deal more power than the DMV. For one, the legislature is explicitly not allowed to regulate them outside a few ways involving funding. Also, the UC has its own state level police agency, that is controlled by UC, not any other part of the state government.
The DMV is involved in some of the things CHP does, but CHP doesn't report to the DMV.
UCPD reports to UC.
UC governance is pretty fuckin' wild.
Is the "uproar" because of the capturing itself or because the captures are being sent to/monitored by an external third-party?
Unfortunately, many folks do not have clear knowledge about exactly what gets monitored and what capabilities the monitoring allows.
In the USA, at least, "owners" of a network can do anything they want with the traffic going through it including providing fake certs for the purpose of clear-text search of https traffic, storing communications for however long they want, and providing these communications to whomever they please.
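One partial defense against a fake-cert MITM is to pin the server certificate's fingerprint yourself rather than trusting whatever CA chain the network presents. A minimal sketch (the placeholder bytes below are not a real certificate; the commented-out lines show how you might fetch a live one):

```python
import hashlib
import ssl

def fingerprint(der_cert: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der_cert).hexdigest()

def matches_pin(der_cert: bytes, pinned: str) -> bool:
    """True only if the presented cert is byte-identical to the one
    you pinned -- a MITM's substituted cert will not match."""
    return fingerprint(der_cert) == pinned

# Dummy bytes standing in for a real DER certificate
der = b"\x30\x82placeholder-not-a-real-cert"
pin = fingerprint(der)
assert matches_pin(der, pin)
assert not matches_pin(b"attacker-substituted cert", pin)

# In practice you would fetch the live cert and compare (needs network):
# pem = ssl.get_server_certificate(("example.edu", 443))
# print(fingerprint(ssl.PEM_cert_to_DER_cert(pem)))
```

Pinning only helps if you obtained the fingerprint over a channel the network owner doesn't control (e.g. from a different network), and it breaks legitimately whenever the server rotates its certificate.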
So much for a "free country".
I was going to type a response, then remembered the guideline against introducing classic flamewar topics. I'll just say this is myopic at best.
If you want to protect yourself, then protect yourself. You live in a country where this is even possible.
My point is that people (the public) don't actually understand the extent and implications of network monitoring, and almost no one is speaking out about it.
As an example consider that there are folks on this very thread asking about whether or not https is "safe" on a network which is owned by someone else. If there is a lack of knowledge _HERE_, imagine what the situation is for everyone else who doesn't even know what a MITM "attack" is. You can't "protect yourself" if you don't know what the threat is.
I am glad that the UC community is raising a stink about this.
If you work for someone else, they can do whatever they want with your communications on their devices.
I don't work at Berkeley, but my guess is their system can do far more than record emails, which would make it extremely pervasive.
I agree with regard to companies, but I think it's a bit less clear for a university in general. Yes, the faculty and staff are clearly employees, and so in that sense it's reasonable to expect their internet usage will be monitored. But on the flip side, one of the historical aspects of universities is supposedly intellectual freedom. And to the extent that faculty might feel a chilling effect on their intellectual freedom if they know or suspect that everything they do is subject to this type of monitoring, the monitoring might be less acceptable.
When it comes to students, this argument is perhaps less clear. The students are not employees. Yes it's true they have all clicked "I Agree" to the university's network terms of service (but with all TOS's, how many of them actually read it?). But again, universities are ostensibly havens for intellectual freedom; what would pervasive monitoring do to that?
So, I think you are technically and legally correct, but I wonder if that justification is in the long-term best interests of universities as institutions of learning and exploration.
If I were a professor, I'd want to work somewhere where I feel safe and secure, because it's eerie having someone standing over your shoulder, let alone 24/7.
You mentioned intellectual freedom which is a feeling of security. I find that feeling misleading in the US with the NSA's new data center in Utah collecting our information.
Still it would be wise of Napolitano to reverse course but I think it's unlikely given her background.
Imagine the landlord of an apartment complex secretly collecting complete packet captures of the residents' network traffic.
Students have significantly reduced rights as they are not considered tenants so much as "members" of a university.
...in the US...
> they can do whatever they want with your communications on their devices.
Two band-aids are available for circumventing that too. They are called "VPN routing" and "Tor".
Deep Session Inspection®. Decode and analyze content in real-time, no matter how deeply embedded it is. The Deep Session Inspection engine sees every single packet that traverses the network, reassembles those packets into session buffers in RAM, and recursively decodes and analyzes the protocols, applications and content objects in those session buffers in real-time - while the sessions are occurring. This allows XPS to “see deeper” into applications and, in particular, the content that’s flowing over the network.
Detect and Investigate Retrospectively. Investigate what attackers have done in the past. By collecting and storing rich content-level metadata from both the network and the endpoint, XPS provides a lighter, faster and less expensive way to analyze historical data.
VENDOR: We promise that we are doing all of the stuff that the law says we're supposed to do to protect the covered data that you're sending us. Look, see the pretty pictures of a fancy data center in our marketing materials?
CUSTOMER: OK, that's good enough for us! (checks off box on list)
In the first email, which is basically loaded with innuendo and short on actual facts, it states:
UCOP defends their actions by relying on secret legal determinations and painting lurid pictures of "advanced persistent threat actors" from which we must be kept safe. They further promise not to invade our privacy unnecessarily, while the same time implementing systems designed to do exactly that.
Then the next email says:
A network security breach was discovered at the UCLA Medical Center around June 2015.
UCOP began monitoring of campus in networks around August 2015.
ONLY AFTER this monitoring, on August 27, 2015, did UCOP issue a new cybersecurity policy online under the heading of "Coordinated Monitoring Threat Response." The policy describes how UCOP would initiate "Coordinated Monitoring" of campus networks even though it is believed that such monitoring was already underway prior to the announcement of the new policy.
So first they were drumming up conspiracy theories about "supposed" threats to the network, and then in the second email they outline that there actually was a breach of their network.
I guess the real issue is they have no idea who the vendor is and what exactly they're doing with their data. The good news is it appears they're only holding up to 30 days of data, but it isn't clear what happens after the 30 days.
I would be more concerned about the lack of transparency with what they intend to do with the data and who the hell the vendor actually is. Nothing like having some shadowy government vendor snooping around your network and storing and analyzing your data without letting you know what they're doing.
Thanks. For me the takeaway is: don't use Google Apps for Education for anything, except to take advantage of it by uploading your encrypted data to the nice "unlimited" Google Drive space or sending PGP mails.
Is this really abusable in this fashion?
"Oh hey look, snapchat traffic!"
Who did the installation, and how were they coerced into secrecy?
The article says information is sent directly to the vendor? Who is the vendor?
The article mentions "attorney-client privilege." Which counsel? Do they work for the state?
UC CIO Tom Andiola is said to have promised that the monitoring equipment would be removed and disclosed.
Then other UC senior management retracted that promise. Has anyone followed up with Tom Andiola? Hasn't he got a system-wide trust problem now? How is he taking being hung out to dry this way?
And, on top of all that, how does a UC president hire a contractor in secret? How is that legal? And how do you think it happened? Was Janet Napolitano really that concerned "for the children," or was this a sweetheart deal with the seeds sown back at DHS?
- These are University resources?
- Any corporation you work for would be monitoring their email systems and letting users know that. Why are these Univ. resources owned by the people?
- There's many legitimate reasons for monitoring. Security, legal defense, etc..
I'll admit trying to sneak this through over objections was probably not handled in the best way from a PR perspective.
And if you're privacy minded you should already know - most privacy debates that make the news - are moot.
If you have something truly secret - you must encrypt - so no man in the middle can read. If you think you have privacy sending any unencrypted email over any public network you're dreaming.
Assume that anyone with the key - has your data. Ask the guys that lost millions with Mt. Gox.
In the past, those with these abilities fought hard to protect those who didn't.
When did that change?
But when I send that data over public networks unencrypted, I willingly give up that privacy.
Pretending like the transmitted data was safe in the first place, before they set up these servers - is a lie, and only plays into the hands of those who would snoop on you.
You may willingly give up privacy, but many users do so unwittingly. Their ignorance does not make the current state of affairs just.
>Pretending like the transmitted data was safe in the first place, before they set up these servers - is a lie, and only plays into the hands of those who would snoop on you
Ignorance of privacy concerns on public networks is not the same as pretending the transmitted data was safe in the first place.
The article and controversy are attempting to call attention to the mechanisms in place that are a threat to privacy of all users. Your response was "well anyone who doesn't want their data exposed should know better." This is a disturbing attitude.
The general public is at least somewhat aware of these problems. People pushing a surveillance agenda (esp. marketing) often extrapolate this awareness, suggesting that people are surrendering privacy willingly, either because they don't care or as part of a "trade" for services.
In reality, people often feel powerless. Without the necessary technical knowledge to create alternatives, the aggressively pushed surveillance option seems like the only choice.
> "well anyone who doesn't want their data exposed should know better."
That's classic victim blaming.
I think the common "this isn't surprising" response has some victim blaming in it, too, when it becomes a thought-terminating cliche. Jacob Appelbaum is probably right in his interpretation: that saying something isn't surprising is a coping mechanism that probably means "I can't do anything about it". Unfortunately, sometimes it's used to shut down discussion.
Why wouldn't these Univ. resources be owned by the people?
Check the title of UniversityOfCalifornia.edu:
>University of California | The only world-class public research university for, by and of California.
The University is established by the California constitution.
Article 9. Section. 9. (a) The University of California shall constitute a public
trust, to be administered by the existing corporation known as "The
Regents of the University of California," with full powers of
organization and government, subject only to such legislative control
as may be necessary to insure the security of its funds and
compliance with the terms of the endowments of the university and
such competitive bidding procedures as may be made applicable to the
university by statute for the letting of construction contracts,
sales of real property, and purchasing of materials, goods, and
Also, I imagine some of the faculty would have concerns beyond their personal privacy; things like academic freedom.
Mooted by what? The fragility of privacy itself?
No judicial review. At all.
> On Dec. 7, 2015, several UC Berkeley faculty heard that UCOP had hired an outside vendor to operate network monitoring equipment at all campuses beginning as early as August 2015.
Full packet captures of network traffic are extremely, extremely valuable in post-compromise incident investigation and in incident detection, as they allow for vastly more complex analysis of traffic (done post-hoc with more complicated logic) than is practical on the wire. There's absolutely no need to invoke the NSA here: multiple private vendors offer these systems. It sounds like UC went with Fidelis, another major provider is RSA NetWitness (now part of EMC). These systems are fairly common on corporate networks, the main thing that limits their installation is cost: just the storage becomes rather costly at large scales.
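A back-of-envelope calculation shows why storage dominates the cost. The link speed and utilization below are assumptions for illustration, not figures from UC:

```python
# Hypothetical 10 Gbit/s campus uplink at 25% average utilization;
# both numbers are assumptions for the sake of the estimate.
link_bps = 10e9          # 10 Gbit/s
utilization = 0.25
seconds_per_day = 86_400
days = 30

bytes_per_day = link_bps * utilization / 8 * seconds_per_day
total_tb = bytes_per_day * days / 1e12

print(f"{bytes_per_day / 1e12:.1f} TB/day, {total_tb:.0f} TB for 30 days")
# -> 27.0 TB/day, 810 TB for 30 days
```

Under those assumptions you'd need on the order of a petabyte of storage for a 30-day rolling capture at a single large campus, before any redundancy or indexing overhead.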
Invoking attorney-client privilege on matters related to security is pretty common in the private world. The reason for this is that any security investigations and reports are subject to legal discovery and may be used to establish liability in the event that someone sues you for a matter related to a cybersecurity incident. The primary way to protect this information is to place the cybersecurity function under legal counsel so that all security work is work-product of an attorney and so under privilege. This is a recommended best practice in the security compliance community. Public institutions do this less frequently for the reason that it is often prevented or superseded by the relevant public record/accountability law, or unnecessary due to some type of immunity for example, but this may not be the case in California.
All in all, nothing here strikes me as particularly unusual practice for a large organization. What I do see is that UC has made several massive mistakes in implementation:
1. It must be completely clear to users that they have no expectation of privacy when using organizational networks. Unfortunately, many users do not realize this, and many organizations do not sufficiently communicate it. All users of organizational networks should sign an agreement to ensure that they are aware that they have no expectation of privacy. This is already legally true in, as far as I know, all cases, but there is an ethical obligation, I think, to go further than that.
2. Universities present a particularly tricky situation because there is a captive audience of users who rely on the university network for their personal usage. Ideally this should be 100% segregated from the institutional network, I believe, but I have work experience in a small university's IT and I can tell you how difficult this is to manage - and I can imagine that the problems at the scale of even a single UC campus are so much greater. They can and definitely should work harder to balance network management against the privacy of their captive users.
3. It appears that there are inadequate controls in place (or at least disclosed) to protect this data. I am very uncomfortable with the involvement of a third-party without thorough documentation of their controls in place and their liability in the event of misuse. There must also be further internal controls - both technical and administrative - to guard against misuses. Simply asserting that the data is only used for security is not sufficient, set actual controls to ensure this and establish how violations will be handled.
4. Creating fragmentation within the IT org is very common in universities but still a terrible idea. All levels of IT and security operations should be 100% on board with security mechanisms used, which appears to not be the case here.
A couple of auxiliary thoughts:
- If they are intercepting SSL (which may be a good idea for a corporate network, there are several factors to weigh against each other) this will of course be limited to computers that they manage.
- Tivoli BigFix, as mentioned elsewhere, is a common and rather good endpoint security solution. A similar competitor is Cisco NAC. These aren't scary NSA codewords, they're commercial products that many corporations use to ensure that all computers on a protected network meet a minimum security configuration. Whether or not they are appropriate in the ways that some universities use them is a very touchy issue, I don't think that they are, but that means that potentially much more costly (and inconvenient) controls will need to be in place.
- Universities need to carefully manage the fact that they are often not perceived as corporate orgs in terms of their network practices, although they usually behave like them. There are certainly complications at universities. Open communication of policies and procedures will help to alleviate this, as well as good network management (once again, complete isolation of residential and administrative networks should be the goal).
The monitoring may be routine and innocent. But it shouldn't be secret, and it shouldn't be in stark violation of the University's own (stated) policies.
"University employees who operate and support electronic communications resources regularly monitor transmissions for the purpose of ensuring reliability and security of University electronic communications resources and services (see Section V.B, Security Practices), and in that process might observe certain transactional information or the contents of electronic communications."
This is followed by standard restrictions (only for valid purposes, controls to protect information, etc). They provide a stronger assurance of privacy than I would expect, but still leave plenty of latitude for this activity.
Edit: copied and pasted from a PDF. never again.....
Further edit: V-B is really interesting and requires user permission when "it is necessary to examine suspect electronic communications records beyond routine practices." This is kind of a strange rule, but particularly since they're using a third-party vendor, all of this would easily fall under routine.
It's important to note that this policy primarily discusses "disclosure," which I can't see this being considered. It does go to a third party, but one contracted for internal purposes.
Conflicts a bit with the product's advertised description: "The Deep Session Inspection engine sees every single packet that traverses the network, reassembles those packets into session buffers in RAM, and recursively decodes and analyzes the protocols, applications and content objects in those session buffers in real-time - while the sessions are occurring. This allows XPS to “see deeper” into applications and, in particular, the content that’s flowing over the network."
Boiled down: "University employees are not permitted to seek out... content when not germane to system operations and support", juxtaposed against, "XPS reassembles those packets... and content objects... to "see deeper" into applications and, in particular, the content that's flowing over the network."
Or do we go all bureaucratic-NSA-legalese and say that, "all content is [could be] germane to some kind of ethereal 'persistent threat'". And that "'automated packet inspection' based on human-generated rulesets is different from a human 'seeking out' content." If so, then there shouldn't even be a privacy carve-out, as there's nothing from which to carve. And the first sentence in the policy is entirely meaningless.
Edit: Content included. Content is where the bad things go, of course, and a lot of the features that come out of full packet data are the ability to do things like auto-detonation of executable files for malware detection. Extremely content-based detection heuristics.
And is that really the (required) 'least invasive' mechanism to achieve a properly functioning network?
IV - A - "The University does not examine or disclose electronic communications records without the holder’s consent."
IV - B - "The University shall permit the examination or disclosure of electronic communications records without the consent of the holder of such records only: (i) when required by and consistent with law; (ii) when there is substantiated reason (as defined in Appendix A, Definitions) to believe that violations of law or of University policies listed in Appendix C, Policies Relating to Access Without Consent, have taken place; (iii) when there are compelling circumstances as defined in Appendix A, Definitions; or (iv) under time-dependent, critical operational circumstances as defined in Appendix A, Definitions."
IV - B1c - "In addition, California law requires state agencies and the California State University to enable users to terminate an electronic communications transaction without leaving personal data (see Appendix B, References). All electronic communications systems and services in which the University is a partner with a state agency or the California State University must conform to this requirement.
In no case shall electronic communications that contain personally identifiable information about individuals, including data collected by the use of "cookies" or otherwise automatically gathered, be sold or distributed to third parties without the explicit permission of the individual. "
IV - C2b - "In the process of such monitoring, any unavoidable examination of electronic communications (including transactional information) shall be limited to the least invasive degree of inspection required to perform such duties. This exception does not exempt systems personnel from the prohibition (see Section IV.A, Introduction) against disclosure of personal or confidential information.
Except as provided above, systems personnel shall not intentionally search the contents of electronic communications or transactional information for violations of law or policy."
Go troll someplace else.
If not, the society is not a democracy or the citizens are idiots.
The etymology of "idiot": from "idiota" (Late Latin) an "uneducated or ignorant person", and from "idiotes" (Greek) a "layman, person lacking professional skill".
Now, vote wisely.