MIT is just moving to IPv6.
Actually... MIT forcing an entire generation of future engineers to deal with IPv6... That will literally push innovation.
The "just" is incorrect. There are four bullets on the slide: DHCP, IPV6, private IPV4/NAT, and firewall. From the diagram it looks to me like even if you move to IPV6 (which, as others have noted, MIT has not yet rolled out, so at this point you can't), you will still be behind the firewall, so setting up a service visible to the Internet will still be more difficult than it used to be.
I totally understand the need for a campus-wide firewall. The MIT network is a juicy target for botnets, and individual students are not good enough at running security on their own computers. The old approach to IP assignment was that you needed to get your IP approved and made routable by IS&T anyway, and if they detected botnet activity on your computer, they'd manually intervene and make it unroutable again. That sounds like a lot of work.
If computers end up with firewalled but publicly routable IPv6 addresses, that sounds perfect.
If they detect bad activity, they blacklist your MAC address so you can't connect. This is no different under the new scheme, and has nothing to do with NAT.
But I have my doubts.
MIT was always going to be the last place on earth to go total IPv6
Forcing people to use anything is never a good way to promote innovation.
I went to MIT for my undergrad and doctoral studies. One of the main reasons I chose MIT over other schools was the ease of availability of static IP addresses, unlimited symmetric gigabit bandwidth, no port restrictions, and other things. I even mentioned this in my undergrad application essay. I built a lot of things with it and learned a lot in my time there. I probably learned more outside of classes than in classes, and I think that's one of the distinguishing aspects of MIT culture.
Of course it is. That's how innovation happens. They are focused on overcoming a constraint of the system they operate within. In this case, it will be to get around the limitations of the private IPv4 network, or to make the upcoming IPv6 network easier and more appealing to use.
Most innovations are to overcome some sort of limitation, whether that is with a man-made system or just the laws of nature as we currently understand them. Unbounded innovation hardly ever occurs and usually results in some shitty mobile game.
Now that's not to say MIT IS&T isn't behaving extraordinarily shitty here. But this won't stifle innovation, just refocus it. Whether that's towards a more worthy goal is certainly up for debate.
What if I'm a biology expert and want to run a server to demo something cool? I should be spending my time doing innovation in biology.
What if I'm a deep learning enthusiast and came up with something cool to demo? I should be spending my time hacking at that.
What if I'm a physics student and want to start a blog?
The majority of MIT students are awesome innovators, but most are not innovators in TCP/IP. Forcing a bunch of people who are not networking specialists and sysadmins to deal with the lack of IPv6 support in the rest of the world is not going to promote innovation where it needs to be.
NAT is technological friction. Telling people that "they should not do something" is also friction.
What about even just learning to write apps in your own time? I built dozens of demo websites while I was a student there. A few were slashdotted. Having IPv4 addresses and access to bandwidth from my bedroom was a massive blessing.
The problem isn't configuring IPv6. The problem is without IPv4 you cannot easily make a server that is guaranteed accessible from anywhere in the world, by anybody. That's not a problem that most MIT students are in a position to solve or innovate in a short time. At least not without spending money for an AWS instance that frankly most undergrads don't have the money for.
Innovating in making an HTML5 app to demo your cool bioinformatics project on the other hand is a weekend hackathon deal.
And servers are basically free at MIT. You can just pick up parts off of reuse when labs throw away old hardware and assemble them yourself. Plug the result into the gigabit ethernet socket in your bedroom, get an IPv4 address, and you're up and running in less than an hour.
You're the one who said people should be spending their time doing innovation in biology, hacking on deep learning, and starting a blog, rather than learning how to configure and deal with networking.
If anyone wants to do those things, they can do them now and not have to worry about installing and configuring servers and dealing with IP networking. Or they can work on innovating in networking.
The fact is, however, that no one is going to say "if my server wasn't IPv6-only, then people would read my blog". They'll most likely be saying that because there's a lot of other content to consume on the Internet, and attention is limited. And if that is the case, they can get a $5 a month DigitalOcean instance, or one from any number of other providers, accessible over IPv4 and IPv6, and serve both protocols to the entire Internet.
Also, chances are, the audience for a physics student's blog is most likely at some university, which has a good chance of having a working IPv6 stack; especially now that MIT is going whole hog on IPv6.
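For what it's worth, serving both protocols from a single cheap VPS is two listen directives in nginx; a minimal sketch (the domain and path are placeholders):

```
server {
    listen 80;          # IPv4
    listen [::]:80;     # IPv6 (nginx binds v6-only by default, so keep both lines)
    server_name blog.example.edu;   # placeholder hostname
    root /var/www/blog;
}
```

With that in place, v4-only and v6-only visitors both reach the same content, and DNS just needs an A and an AAAA record.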
Two words: seat belts. The auto industry fought this tooth and nail, as did parts of the general public. But once this painful transition was accomplished, it resulted in a big improvement in automotive safety.
Other examples include EPA regulations that forced out the use of hazardous chemicals and processes. This, in turn, also produced a notable series of entirely improved processes: better for the environment, and often cheaper costs and/or better end results (although not universally, to be sure). The potential for innovation had been present, but these mature industries had to be forced into innovation. The very concept that R&D might improve their bottom line as well as their externalized costs was practically foreign.
You're arguing against a straw man. This is about the promotion of innovation. Yes, seat belts were innovative, and left to their own course were going nowhere. Some innovations face adoption challenges that go beyond the mere "meh" of "non-consumption". They are major social battles, requiring large and multi-faceted campaigns to succeed and overcome social inertia. E.g. for seat belts: regulations on manufacturing, laws and law enforcement campaigns, public education and marketing, etc.
Google is one of the worst offenders when it comes to dragging their butt on IPv6. Their cloud offerings have zero support, and some of their default Linux cloud images ship with IPv6 disabled in the kernel so that even if you run network virtualization software you don't get it. Much of their front-facing stuff supports it, but users can't actually use it for anything.
Microsoft comes next. Azure has no IPv6 to speak of.
Amazon is finally rolling it out. "Second tier" VPSes like Digital Ocean, Linode, and Vultr have had it for aeons.
They actually rolled IPv6 support out with their latest firmware.
I'm at sea at the moment, so I can only find the play store note:
"IPv6: Enable IPv6 on your Google Wifi."
(I'm busy and don't have time to deal with this BS. I just need internet access that works.)
Also, what is this whole "Comcast did not play nice with them" trope? I've dealt with Comcast many times, and used Asus, TP-Link, Cisco (the DPC3010 modems are my favorite) and others with them without issue. They aren't even a factor in your internal network and whether or not IPv4 or IPv6 works in it...
At my startup's house:
- Using the Comcast default combined modem+router works.
- Using the Google Wifi AP with a DPC3010 resulted in the DPC3010 being bricked upon connecting to the cable line. This is after the DPC3010 was working fine at my home. That DPC3010 no longer works, even at home, and only gets a power light upon startup.
- Using the Google Wifi AP with a TP-Link modem works flawlessly. No IPv6 though.
- 3 other routers that work at home do not work at my startup's house with the TP-Link modem. Asus RT router gets a fake 10. address from Comcast, Xiaomi router stops responding after several hours, Cisco router gets no DHCP lease whatsoever. All three work at home, all flawlessly.
Comcast is unwilling to debug, saying that unless I use their official combined modem+router they will not provide support.
It probably is firewalling all incoming IPv6 connections (?). The Google Wifi app has no settings about this. Googling and stackoverflowing for 10 minutes didn't find a solution. That's when I gave up.
Sorry Google and the IPv6 community but I have more important things I need to be working on than dealing with this BS that never works the first time. Back to IPv4 and port forwards, which is working fine for me right now, and will let me get back to my work. :-/
If I come up with a super-awesome computer vision algorithm and want to run a server in my dorm room to demo it, being forced to use IPv6-only when the school has enough IPv4 addresses is a stupid annoyance and will only reduce the number of people that can reach the website. Running on AWS or other IaaS service isn't an option for many students without much cash.
I would be really happy to have only ipv6 addresses in my VPC, as that would make connecting up multiple VPCs much easier since I know their ip space won't overlap.
Free ADSL, which has been providing IPv6 access via 6rd for like 10 years, even started deployment of IPv6-only DSLAMs in April.
This makes the absence of IPv6 on major platforms and sites very visible. There is no excuse not to have IPv6 today, especially for market leaders, and the lack thereof is definitely a showstopper WRT services we choose to use.
As an anecdote, we have seen some constantly increasing traffic over IPv6 in our logs, and our customers are definitely not on the technical side, very far from it.
Again, the thing I would like to see is being able to either peer only ipv6 for VPCs, or have a VPC that is ipv6 only. That to me will greatly increase flexibility and simplicity if I'm ok with an ipv6-only deployment
Security via network segmentation. IMO, a NAT gateway is a good place to lock down and put in a network security appliance to track/block all the unwanted connections.
NAT is terrible from a network engineering perspective; it was mostly a patchwork to deal with the rapid expansion of the internet and the shortage of IPv4 space. IPv6 brings a lot of cool technology to the table, like path MTU discovery, header extensions, and proper anycast. It also makes dealing with subnets and network segments a lot more sane and scalable.
NAT is very useful to the network engineer. NAT lets you turn one network into a different one on a simple one-to-one translations basis.
However, NAT got mixed up with masquerading to the point where when someone says NAT, people assume you also mean masquerading, when in fact the two are different concepts. NAT when you are doing many-to-one translation is a disaster in many ways, but it works just well enough (and face it, the alternatives don't exist).
One-to-one NAT translation is fine, although it still breaks a lot of things, especially on the IPv6 side (like path MTU discovery).
My guess is that IPv6 is the ISDN of the 21st century... an intermediate step between two networking paradigms, one being IPv4 and the other being something we haven't seen yet. IPv6 will appeal to specialists but will never, by itself, see wide adoption. The fact is that NAT works for 99.99% of users, and works very well.
You're free to see these as orthogonal problems, but if enough people think as you do (moving IPv4 to NAT without first deploying IPv6), then the Internet as we know it is dead.
*Consumer, and Enterprises in the US
Then why are they doing NAT?
Edit: Although moving to NAT might push innovation too, or at least some clever hacks. Speaking of that, if you want to use a relay to do NAT traversal, then is TCP over ARQ (on UDP) as bad as TCP over TCP?
Because they sold their IPv4 addresses.
Normally, ISPs get a /32, from which they hand out /48s to their customers. And, with pretty much zero paperwork, an ISP can get a second /32 (usually adjacent to their first /32 so they can summarize as a /31).
So - an ISP might get 2001:1868::/32 and then hand off 2001:1868:0209::/48 to a customer.
Because a /48 allows 2^16 or 65K networks, each network containing (effectively) an infinite number of hosts, pretty much every single-region company can be effectively served with a single /48. The /32 allows the ISP to have 65K customers (each of which has 65K networks).
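The arithmetic above is easy to sanity-check with Python's stdlib `ipaddress` module (prefixes taken from this comment, purely illustrative):

```python
import ipaddress

# The ISP's /32 and one customer /48 carved out of it
isp = ipaddress.ip_network("2001:1868::/32")
cust = ipaddress.ip_network("2001:1868:209::/48")

# A /32 contains 2^(48-32) = 65,536 /48s: one per customer
customers = isp.num_addresses // cust.num_addresses
print(customers)            # 65536

# Each /48 contains 2^(64-48) = 65,536 /64 networks
per_customer_nets = 2 ** (64 - 48)
print(per_customer_nets)    # 65536

# And each /64 holds 2^64 host addresses -- effectively infinite
print(2 ** 64)              # 18446744073709551616
```

So the "65K customers, each with 65K networks" claim falls straight out of the prefix lengths.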
What on earth is MIT going to do with a 2603:4000::/24? I'd love to hear the story behind why they got such a large block.
edit: according to https://www.arin.net/fees/fee_schedule.html this is considered a "medium" (WTF?) allocation with a cost of $4k/year.
A large ISP entity like Comcast or AT&T can now have say a single /16 or /24 allocation and, pretty much no matter how much they subdivide up their regional routing, routing to AT&T can easily be coalesced and summarized, and every end customer can still get a /64 till pretty much the end of time.
I'm trying to grok why MIT went for a /24 instead of a /32. Because they could?
If they had requested a boring /48 IPv6 allocation (that anybody can have just by asking) - they would have had 2^16 /64 networks, and each network could have had basically an infinite number of hosts.
But, this is the IPv6 world, so I would have expected MIT to claim they were a LIR (Local Internet Registry - equivalent of a small ISP or larger) - and asked for a /32 - which would have given them 2^32 networks - or 4 Billion networks to work with. They probably would have assigned the networks by segmenting them on a per site basis - so each site would have had a /48 assigned, so they could have up to 65K sites, each site having 65K networks, each network having (effectively) infinite number of hosts. That is, a /32 would have been far, far, far larger than their /8 was. Easier to manage as well (no VLSM - nothing ever smaller than a /64) And, keep in mind, with a single, no contest request, they could have gotten the /32 adjacent to theirs (another 4 Billion networks, or 65K /48s) so they could aggregate on a /31.
Instead, they've asked for a /24. And I'm just darn intrigued as to why they think they can make use of such an address space. If they weren't constrained with their /8, then a /32 would have been far more than they ever required. (And odds are a /48 would have been sufficient with even modest address management).
I mean, I work with really large mesh networks, millions of nodes, some of our subnets have 20K nodes each on them - we roll out /48s like they are nothing, and even after deploying a couple hundred customers over 10 years, and 25 million nodes, I think we've used up maybe 1500 /48s.
BTW - this doesn't even take into account that they can use RFC 4193 up the wazoo for all sorts of interesting non-globally routable experimental internal networks.
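RFC 4193 ULAs are trivial to mint, too. A quick sketch of the prefix-generation step (the RFC's formal algorithm hashes a timestamp and an EUI-64; the random 40-bit Global ID below is the commonly used shortcut):

```python
import ipaddress
import secrets

def make_ula_prefix() -> ipaddress.IPv6Network:
    """Build a non-globally-routable ULA /48 from a random 40-bit Global ID."""
    global_id = secrets.randbits(40)
    # fd (fc00::/7 with the L bit set) + 40-bit Global ID = the 48-bit prefix;
    # shift left 80 bits to place it at the top of a 128-bit address
    prefix_int = (0xFD << 40 | global_id) << 80
    return ipaddress.IPv6Network((prefix_int, 48))

prefix = make_ula_prefix()
print(prefix)   # e.g. fd5c:1a2b:3c4d::/48 (random each run)
```

Every run yields a fresh /48 inside fc00::/7 with 65K subnets to play with, which is exactly the "experimental internal networks up the wazoo" use case.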
I"m just hoping someone from MIT is reading HN and will clue us in.
There are 2^21 /24s in the existing Global Unicast Space. About 2 million. A large number, to be sure. But if every entity in the world MIT's size asked for a /24, we'd start burning through them pretty quickly....
Now, anyone who wants to can ask for a /48, and they are automatically given it, and they are able to create 2^16 networks, each network with an effectively infinite number of hosts on it. And there are so many /48s in the (current) Global Unicast space (prefixes starting with 001) that everyone on earth could be granted 4000 of them. And in reality, ISPs have already started conserving, handing out /56s instead of /48s to most consumers/SoHo, further increasing that number by 256x.
And that's just the 001 prefix. Based on what we learn about allocating space there, new policies can be devised for the other 6 prefixes available. We aren't going to run out of IPv6 address space.
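Those numbers check out on the back of an envelope (the population figure is my assumption, not from the comment):

```python
# /48s in 2000::/3, the current Global Unicast space (binary prefix 001):
# 45 free bits between the /3 and the /48 boundary
slash48s = 2 ** (48 - 3)
world_population = 8_000_000_000  # rough 2020s figure, my assumption

print(slash48s)                       # 35184372088832 (~35 trillion)
print(slash48s // world_population)   # 4398 -- roughly the "4000 each" claim
print(2 ** (56 - 48))                 # 256 /56s per /48
```

So "4000 /48s per person" is, if anything, slightly conservative, and a /48 splits into 256 /56s, not 255.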
Also, the allocation policies are much different. In IPv4, allocations are made to be as small as possible (while still meeting immediate needs) in order to conserve addresses. The cost is that the routing table grows like crazy, since network operators need to keep coming back year after year (or sooner) to get more address blocks.
In IPv6, allocations are made to cover the network operator's long-term needs so that they won't have to get a second, non-contiguous block. By "wasting" address space you cut down on routing table size, since even very large networks only need to announce a small number of routes.
Even with this "waste", we've only allocated a tiny amount of IPv6 space. None of the RIRs have had to go back to IANA for new IPv6 space since 2006, even with allocations ramping up in the last few years. (And IANA only gives the RIRs a /12 at a time.) We seem to be fine for the foreseeable future.
But even if somehow we do start using up too much v6 address space, the good news is we can always change allocation practices. Let's say in 2040 we realize we've used up way more of the IPv6 space than we expected we would by then. The RIRs can always change their policies, favoring conservation over smaller routing tables like they used to with IPv4. I seriously doubt that will happen within any timeframe we can reasonably make predictions about, but if it does, we'd have a lot more flexibility than we did with IPv4 depletion.
On Telekom Germany every customer gets a /56.
Fun fact: their routers (Speedport) can't do prefix delegation to other routers. You need to buy one from another vendor.
We are humans. We're not rational beings, we're rationalizing beings. We like to pretend we are better but there is a valid reason that they say science progresses one death at a time.
MIT's admins are just admins. Same mentality you find in all the rest of academia, and they're actively hostile to everything that distinguishes MIT.
We got faster service with fewer restrictions (no P2P service filters) for like $10/mo with student pricing, and still with fixed IPs.
I'm not sure what innovative service a university can provide in this space in 2017.
which meant about 1000 rooms sharing 100MBit internet access. This was somewhat mitigated by local DC++ networks in each housing area to keep piracy downloads off the shared link.
Also, by acting as the ISP, it gets to be more protective of its log data than an outside provider might be.
It's not that an outside ISP couldn't provide the same level of service and speed at the same or less cost, while protecting the interests of the MIT community, it's just that I don't expect it's as likely.
Then later in the article:
> NAT deployment doesn't benefit the Institute in any way, other than to make things more difficult.
Possibly ignorant question-- could this choice be influenced by the inescapable rise of cheap IoT devices flowing in from China?
I mean if a freshman arrives with a desktop rig they bought purposely to use as an experimental server, and they explicitly register the software using a web form, you can imagine a very loosey-goosey relationship between students and IT built on good faith.
But if a freshman unloads their luggage and a few dozen random internet-connected baubles drop out and start joining the network, what is IT supposed to do? Especially considering MIT probably does a lot of research for DoD...
I just wonder why MIT didn't give more time to move and why it doesn't provide a replacement in eg cloud credits.
Disclaimer: I wasn't actually party to any of this, I heard it second hand, corrections welcome.
Andrew was pretty dated and virtually everyone I knew was already forwarding their emails through their Gmail anyways. Transitioning to Google Apps was basically cutting out the middle-man while adding on some interesting functionality.
1. Now everyone also had a better calendar system and file sharing with Google Drive within the CMU namespace. This meant that clubs, for example, could create Google Forms that were restricted to actual CMU students with CMU email addresses, and they could use that guarantee to do more interesting things like adding information from the Directory on top of those emails without having to ask for it (such as Major and School).
2. We also got a way to allow people to use their Andrew logins on any student-made website with simple OAuth2 that restricted to using the @andrew.cmu.edu emails on Google Accounts. Before this, apps like ScheduleMan and the StuGov apps (which I developed on when I was there) had to get special permission to use Shibboleth and there were a bunch of restrictions for having those certs including having to run on CMU infrastructure.
Now, any student can create applications with the same login guarantees of only being accessed by CMU students and allow for one-click login and registration (since most relevant information can be obtained by looking up their email in the Directory), vastly increasing the usability of these apps.
In my view, that transition was definitely a net-positive to the CMU ecosystem, in both usability and development.
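The domain restriction the parent describes boils down to one claim check on the verified Google ID token. A minimal sketch in Python (the `claims` dict is whatever your OAuth library returns after signature verification; the function name is mine, the domain is from the comment):

```python
def is_cmu_account(claims: dict) -> bool:
    """Accept only Google Workspace accounts in the andrew.cmu.edu domain.

    The `hd` (hosted domain) claim is only present for Workspace accounts,
    so checking it also rejects lookalike personal @gmail.com addresses.
    """
    return (
        claims.get("hd") == "andrew.cmu.edu"
        and claims.get("email_verified", False)
        and claims.get("email", "").endswith("@andrew.cmu.edu")
    )

print(is_cmu_account({"hd": "andrew.cmu.edu",
                      "email": "jdoe@andrew.cmu.edu",
                      "email_verified": True}))   # True
print(is_cmu_account({"email": "jdoe@gmail.com",
                      "email_verified": True}))   # False
```

That one guard is the entire "only CMU students can log in" guarantee, which is why it's so much lighter-weight than the old Shibboleth cert process.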
For all of the cool things I got to do (troubleshoot a breakin at the South Pole, send the RIAA a DMCA takedown notice when they stole our content (absolutely the highlight of my career), etc.), we spent the vast majority of our time on nonsense. We processed dumb breakins by the hundreds, had to enforce DMCA takedowns, and the like.
I'm also all for innovation and giving people the freedom to deploy services and innovate, but I would have killed to deploy all IPs by default behind NAT/firewalls and work with researchers to help them understand their responsibilities before giving them public IPs.
These are two separate things.
There is no security difference between "route port 80 of one of our public IPs through to my NATted address" and "open port 80 for my public address".
The public addresses are easier to administrate, troubleshoot, log, etc.
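To make the parent's point concrete: the two setups differ by one address rewrite, not by exposure. An illustrative nftables fragment (all addresses are documentation-range placeholders, not from the thread):

```
# "Open port 80 for my public address": one stateful filter rule.
table inet fw {
    chain forward {
        type filter hook forward priority 0; policy drop;
        ct state established,related accept
        tcp dport 80 ip6 daddr 2001:db8::80 accept
    }
}

# "Route port 80 through to my NATted address": the same filter rule is
# still required -- the NAT table only adds a rewrite step on top.
table ip nat {
    chain prerouting {
        type nat hook prerouting priority -100;
        tcp dport 80 ip daddr 203.0.113.10 dnat to 192.168.1.80
    }
}
```

Either way, exactly the traffic you permit gets in; the NAT variant just gives you an extra translation to administer, log through, and troubleshoot.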
Also ironically, I can't reach any of the newly NAT'd networks from my XVM instance. I bet the XVM maintainers haven't been warned about the NAT.
Edit: In the late 80s the Morris worm was launched from MIT, but the network admins of the 80s didn't overreact like this. I wonder why.
Things I got to try at MIT that would be a lot harder on AWS:
- set up a TOR exit node
- set up a single-system image cluster across five 1U servers
- set up a ZFS box with RAID-Z and dm-crypt
- played with a real lisp machine
- put a raspberry pi on the internet (something I did for several projects)
NAT deployment doesn't benefit the Institute in any way,...
I have often had changes foisted upon me that when I looked at them I could see no benefit. In every instance the 'benefit' I didn't see was one that I typically didn't approve of and so hadn't listed in my set of 'possible benefits'.
From reading the article though it sounds like MIT has had a very open and loosely (if at all) documented set of features around network access. And in today's world network access is many things more than it was 10 years ago. But perhaps the process of going through and documenting all of the things they do was 'too expensive' compared to setting it up the way the institution wanted it to work and then dealing with any fallout as it arose.
Another in a series of signs that the Internet is moving from science project to critical infrastructure.
Besides, what kind of controlled unclassified information could possibly be residing on dormitory networks?
I'm terrified of IPv6 for this exact reason - it assumes every device will have direct access to the Internet. As we've seen with security cameras and other IoT devices, they just aren't designed to protect themselves and are easily hackable, so making them accessible to the wider Internet is crazy. Firewall options for IPv6 for most home routers are limited at best at the moment... I for one am quite happy to have all my local devices happily running behind an IPv4 NAT (in addition to a firewall), knowing they can't be targeted directly without some sort of concerted effort.
I completely agree in principle with the old adage "NAT is not a firewall", but in reality it often effectively works as one for many consumers, many of whom probably don't have the technical knowledge to understand the protection it has indirectly afforded them for the past however many years.
Also MIT hasn't even rolled out IPv6 yet...
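FWIW, the incidental protection NAT provides is essentially one stateful rule in the IPv6 world. A minimal sketch of what a sane home router's IPv6 firewall should ship with by default (nftables syntax, my illustration):

```
table ip6 filter {
    chain forward {
        type filter hook forward priority 0; policy drop;
        # Allow replies to connections initiated from inside --
        # the same behavior a NAT box gives you for free
        ct state established,related accept
        # Keep path MTU discovery working; IPv6 breaks badly without it
        icmpv6 type packet-too-big accept
    }
}
```

Unsolicited inbound connections to the cameras and light bulbs get dropped exactly as with NAT, while the devices keep globally routable addresses. The real worry is the "limited at best" router firmware, not the protocol.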
I run our servers on public IP addresses, behind a firewall. Troubleshooting and debugging is made much easier, and there's never any conflict with VPNs etc.
> It's likely that the only difference is that you'd also have to specify what ports you want exposed to the outside world
Port 80, please. With NAT, you can't offer that to more than one computer.
"SIPB (MIT's volunteer student computing group) offers free "cloud hosting" to anyone with an Athena account.
Reverse proxies like nginx also have plain TCP support, so you can easily run several services.
Not to mention - who runs it? It needs to be trusted to terminate TLS or do 5-tuple proxying based on the SNI destination (not all clients send SNI). Also if the MIT student is doing something akin to protocol level development it's possible a middle proxy will prevent them from doing their work.
There is also the hassle factor. You may stop people from ever trying something because of the added hoops they must go through.
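To make the earlier nginx suggestion concrete: the stream module can route raw TLS connections by SNI without terminating them, which sidesteps part of the trust problem (though not the no-SNI clients). A sketch with placeholder hostnames and backends:

```
stream {
    map $ssl_preread_server_name $backend {
        alice.example.edu  10.0.0.11:443;   # placeholder student box
        bob.example.edu    10.0.0.12:443;
        default            10.0.0.1:443;
    }
    server {
        listen 443;
        ssl_preread on;       # peek at the ClientHello SNI; TLS is not terminated
        proxy_pass $backend;
    }
}
```

Since the proxy never holds any private keys, whoever runs it can't read the traffic; but it is still a middlebox, so protocol-level experiments behind it remain constrained.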
That's a hack to work around a shortage of IP addresses. Why would I use one, when I don't have that shortage?
Actually I have been experimenting with this for my pet projects. The downside is that it's relatively slow, but getting a "global" address is a click (well, a few lines of config) away...
In fact, you shouldn't run them both 
0 - https://riseup.net/en/security/network-security/tor/onionser...
I do, but then I also hosted hidden services, relays and exit nodes...
All of Course 6 ignored them, and I don't believe they had any impact with Athena. Certainly they would have been upset by faculty writing the root password on all the whiteboards.
In fact they had nothing to do with IP allocation (I doubt they knew what TCP was). I wonder what bureaucratic maneuvering gave them control of that!
In the age of AWS, maybe it's time the elites see how the other half lives?
MIT, by holding on to a class A address block for so long, has set a bad example. Much of the 16 million addresses they owned were never really used. How many other universities have access to this kind of wasted luxury? Stanford returned theirs ages ago.
Sure, IPv6 was inevitable, but if Ford, MIT, and the rest had not been so greedy, the rest of the world would not have grown up behind a NATed internet. I share an IP with thousands of others; every third Google search throws up a captcha..
Unfortunately, IPv6 deployment is still below 20% (as measured by Google, https://www.google.com/intl/en/ipv6/statistics.html), so a publicly-accessible IPv6 address is not yet sufficient.
What's blocking me is router firmware. It can do IPv6, but only as an afterthought. Sadly, no level of adoption is going to fix that, until I buy a new router.
A router that fixes all this is at least 120. That is too much for me. I tried dd-wrt, but that doesn't fix the first router on the chain.
MIT had a very non-authoritarian, egalitarian culture, as Richard Stallman described it:
"I went to a school [Harvard] with a computer science department that was probably like most of them. There were some professors that were in charge of what was supposed to be done, and there were people who decided who could use what. There was a shortage of terminals for most people, but a lot of the professors had terminals of their own in their offices, which was wasteful, but typical of their attitude. When I visited the Artificial Intelligence lab at MIT I found a spirit that was refreshingly different from that. For example: there, the terminals was thought of as belonging to everyone, and professors locked them up in their offices on pain of finding their doors broken down. I was actually shown a cart with a big block of iron on it, that had been used to break down the door of one professors office, when he had the gall to lock up a terminal." (https://www.gnu.org/philosophy/stallman-kth.html)
In 2004, the MIT AI Lab was "upgraded" to the new Stata Center building, an unwieldy, Frank Gehry-designed monument to a recent MIT president's ego, and the antithesis of what it replaced, Building 20. Building 20 was a utilitarian construction from WW2 with no pretenses of becoming a prized or permanent spot on campus. Instead, its residents helped it organically acquire a character of its own, as Wikipedia describes well:
'Due to Building 20's origins as a temporary structure, researchers and other occupants felt free to modify their environment at will. As described by MIT professor Paul Penfield, "Its 'temporary nature' permitted its occupants to abuse it in ways that would not be tolerated in a permanent building. If you wanted to run a wire from one lab to another, you didn't ask anybody's permission — you just got out a screwdriver and poked a hole through the wall." [...] MIT professor Jerome Y. Lettvin once quipped, "You might regard it as the womb of the Institute. It is kind of messy, but by God it is procreative!" [...] Because of its various inconveniences, Building 20 was never considered to be prime space, in spite of its location in the central campus. As a result, Building 20 served as an "incubator" for all sorts of start-up or experimental research, teaching, or student groups. [...] Building 20 was the home of the Tech Model Railroad Club, where many aspects of what later became the hacker culture developed [not to mention pranksters and lock pickers, as well].'
Sadly, the TMRC's elaborate railroad, which exhibited interesting pre-miniaturization computation, didn't survive the dismantling of Building 20 and was eventually replaced with modern components. I also hear the Stata Center has two spires, one maddeningly named after Bill Gates, separating the two fiefdoms of computer science at MIT in glass-paneled offices meant to flatter status-conscious administrative types. Since Frank Gehry's architecture is proprietary and depends on strict tolerances, there's scant building modification going on.
That's why I think you can see these network changes as a tragic continuation of a destruction of the historical character of MIT, even though they may also be necessary.
More info about Building 20: