By installing NAT, MIT stifles innovation (achernya.com)
272 points by catherinezng on June 26, 2017 | hide | past | favorite | 176 comments

A lot of fuss, but if you look at the presentation slide in the middle of the page (https://4.bp.blogspot.com/-PyyPpTv1p7g/WU7hMEBnm4I/AAAAAAAAE... for reference), it is clear that MIT is not stifling anything or silencing anyone.

MIT is just moving to IPv6.

Actually... MIT forcing an entire generation of future engineers to deal with IPv6... That will literally push innovation.

Yes, yes, it’s been three months since the provost triumphantly announced that IPv6 will finally be coming to the campus network, and communication about IPv6 on campus goes back much longer than that, but we have yet to see it in a single building (beyond the buildings where students have set up their own tunnels, and who knows if those will even work with the new network). Meanwhile, the NAT was deployed in twelve buildings with same-day notice and no prior communication, leaving some student groups with unreachable servers. If this had anything to do with pushing innovation, don’t you think the priorities and communication pattern would have been a bit different?

> MIT is just moving to IPv6.

The "just" is incorrect. There are four bullets on the slide: DHCP, IPv6, private IPv4/NAT, and firewall. From the diagram it looks to me like even if you move to IPv6 (which, as others have noted, MIT has not yet rolled out, so at this point you can't), you will still be behind the firewall, so setting up a service visible to the Internet will still be more difficult than it used to be.

The post isn't objecting to the firewall, though.

I totally understand the need for a campus-wide firewall. The MIT network is a juicy target for botnets, and individual students are not good enough at running security on their own computers. The old approach to IP assignment was that you needed to get your IP approved and made routable by IS&T anyway, and if they detected botnet activity on your computer, they'd manually intervene and make it unroutable again. That sounds like a lot of work.

If computers end up with firewalled but publicly routable IPv6 addresses, that sounds perfect.

Even in the old approach, you got publicly routable addresses over DHCP. The approval was for static addresses only, and was very fast, because you were literally on the same network as the DHCP addresses.

If they detect bad activity, they blacklist your MAC address so you can't connect. This is no different under the new scheme, and has nothing to do with NAT.

Independent living communities controlled this process using an automated registration system. In my living group I was elected to the position democratically, attended a training seminar, and passed down the knowledge to my successor. I hope they continue to give the students the same autonomy.

But I have my doubts. http://pilot2021.com/index.html

MIT doesn't have IPv6 everywhere yet? That's lame.

MIT just sold off half of its class A subnet.

MIT was always going to be the last place on earth to go total IPv6

They'll get to IPv6 faster than the DOD, I guarantee it.

Even Verizon wireline hasn't deployed IPv6....

Are you saying that owning an entire class A of IPv4 addresses was a disincentive to go IPv6? ;)

No. IPv6 is great in concept but the world just isn't ready for it yet. Even our Google Wifi access points don't support IPv6 in their latest firmware, so I have no way of using IPv6 even though Comcast supports it. AWS IPv6 support has been sketchy until only this year. Many parts of the world are happily dancing with their IPv4 NAT and their sysadmins have no incentives to support IPv6 whatsoever.

Forcing people to use anything is never a good way to promote innovation.

I went to MIT for my undergrad and doctoral studies. One of the main reasons I chose MIT over other schools was the ease of availability of static IP addresses, unlimited symmetric gigabit bandwidth, no port restrictions, and other things. I even mentioned this in my undergrad application essay. I built a lot of things with it and learned a lot in my time there. I probably learned more outside of classes than in classes, and I think that's one of the distinguishing aspects of MIT culture.

> Forcing people to use anything is never a good way to promote innovation.

Of course it is. That's how innovation happens. They are focused on overcoming a constraint of the system they operate within. In this case, it will be to get around the limitations of the private IPv4 network, or to make the upcoming IPv6 network easier and more appealing to use.

Most innovations are to overcome some sort of limitation, whether that is with a man-made system or just the laws of nature as we currently understand them. Unbounded innovation hardly ever occurs and usually results in some shitty mobile game.

Now that's not to say MIT IS&T isn't behaving extraordinarily shitty here. But this won't stifle innovation, just refocus it. Whether that's towards a more worthy goal is certainly up for debate.

Innovation will happen, but it is heavily misdirected.

What if I'm a biology expert and want to run a server to demo something cool? I should be spending my time doing innovation in biology.

What if I'm a deep learning enthusiast and came up with something cool to demo? I should be spending my time hacking at that.

What if I'm a physics student and want to start a blog?

The majority of MIT students are awesome innovators, but most are not innovators in TCP/IP. Forcing a bunch of people who are not networking specialists and sysadmins to deal with the lack of IPv6 support in the rest of the world is not going to promote innovation where it needs to be.

Running a server isn't demoing something cool in biology, coming up with something cool to demo in deep learning, or starting a blog. If these people are going to spend their time doing innovation in biology, hacking on deep learning, or writing about physics, they shouldn't be spending their time running a server or configuring IP.

Why not? It's 2017, and it's super-easy to spin up a server and code something cool to demo something you did in any of these fields. Or spin up an MVP of a random product idea you have. Never tell anyone that "they should not be doing something" when they want to. That too stifles innovation. The idea is to create a low-friction path for people to do what they actually want to do.

NAT is technological friction. Telling people that "they should not do something" is also friction.
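To put a number on that "low-friction path": with a publicly routable address, a demo server really is a few lines of stdlib Python. (The handler and page contents here are invented for illustration; this is a sketch, not anyone's actual setup.)

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class DemoHandler(BaseHTTPRequestHandler):
    """Serve a single static page -- enough to demo a class project."""
    def do_GET(self):
        body = b"<h1>My cool demo</h1>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

def serve(port=0):
    """Start the server on a background thread; returns (server, port)."""
    server = HTTPServer(("", port), DemoHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server, server.server_address[1]
```

With a routable IPv4 address, that port is reachable from anywhere; behind the new NAT, it only works on campus.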

What about even just learning to write apps in your own time? I built dozens of demo websites while I was a student there. A few were slashdotted. Having IPv4 addresses and access to bandwidth from my bedroom was a massive blessing.

The problem isn't configuring IPv6. The problem is without IPv4 you cannot easily make a server that is guaranteed accessible from anywhere in the world, by anybody. That's not a problem that most MIT students are in a position to solve or innovate in a short time. At least not without spending money for an AWS instance that frankly most undergrads don't have the money for.

Innovating in making an HTML5 app to demo your cool bioinformatics project on the other hand is a weekend hackathon deal.

And servers are basically free at MIT. You can just pick up and assemble them up off of reuse when labs throw away various parts. Plug them into your gigabit ethernet socket in your bedroom, get an IPv4 address, and you're up and running in less than an hour.

Why not?

You're the one who said people should be spending their time doing innovation in biology, hacking on deep learning, and starting a blog, rather than learning how to configure and deal with networking.

If anyone wants to do those things, they can do them now and not have to worry about installing and configuring servers and dealing with IP networking. Or they can work on innovating in networking.

The fact is, however, that no one is going to say "if my server weren't IPv6-only, then people would read my blog". If no one reads their blog, it will most likely be because there's a lot of other content to consume on the Internet and attention is limited. And if that is the case, they can get a $5 a month Digital Ocean instance, or one from any number of other providers, accessible over IPv4 and IPv6, and serve both protocols to the entire Internet.

Also, chances are, the audience for a physics student's blog is most likely at some university, which has a good chance of having a working IPv6 stack; especially now that MIT is going whole hog on IPv6.

It still makes no sense that forcing researchers and students at MIT to access the Internet through a NAT has anything to do with innovation in IPv6, though, unless you count all the innovative ways students are going to come up with to get around the IPv4 NAT... which is a pretty pointless exercise.

Forcing people to use anything is never a good way to promote innovation.

Two words: seat belts. The auto industry fought this tooth and nail, as did parts of the general public. But once this painful transition was accomplished, it resulted in a big improvement in automotive safety.

Other examples include EPA regulations that forced out the use of hazardous chemicals and processes. This, in turn, also produced a notable series of entirely improved processes: better for the environment, and often cheaper costs and/or better end results (although not universally, to be sure). The potential for innovation had been present, but these mature industries had to be forced into innovation. The very concept that R&D might improve their bottom line as well as their externalized costs was practically foreign.

Creating seat belts is innovation. Creating regulation requiring cars to have them and passing laws that people have to use them is not innovative.

Quoth the parent to my post (emphasis mine):

> Forcing people to use anything is never a good way to promote innovation.

You're arguing against a straw man. This is about the promotion of innovation. Yes, seat belts were innovative, and left to their own course were going nowhere. Some innovations face adoption challenges that go beyond the mere "meh" of "non-consumption". They are major social battles, requiring large and multi-faceted campaigns to succeed and overcome social inertia. E.g. for seat belts: regulations on manufacturing, laws and law enforcement campaigns, public education and marketing, etc.

I took "promote innovation" to mean "promote the creation of innovations" not "promote the adoption of innovations."

In the case of IPv6 I disagree. Every system administrator should have someone constantly pestering them about it 24/7. It's very important to the future of the Internet that it gets rolled out, otherwise we end up with a fundamentally asymmetrical Internet where endpoint devices are incapable of forming direct links with other endpoint devices. (Yes you can NAT hole punch but that is never going to be reliable without enough address space to fully connect the graph.)
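(To make the hole-punching aside concrete: the trick is just that both peers send to each other's public endpoint at roughly the same time, so each NAT sees an outbound flow before any inbound packet arrives. A toy sketch of the core loop, with the rendezvous/STUN step omitted and everything assumed to be on localhost:)

```python
import socket

def punch(sock, peer, payload=b"punch", attempts=3):
    """Send to the peer's public endpoint while waiting for its packets.

    Behind real NATs the outbound datagrams create the mappings that let
    the peer's datagrams in; with no mapping, unsolicited inbound packets
    are simply dropped, which is why both sides must transmit.
    """
    sock.settimeout(1.0)
    for _ in range(attempts):
        sock.sendto(payload, peer)
        try:
            return sock.recvfrom(1024)  # (data, addr) once the path opens
        except socket.timeout:
            continue
    return None, None
```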

Google is one of the worst offenders when it comes to dragging their butt on IPv6. Their cloud offerings have zero support, and some of their default Linux cloud images ship with IPv6 disabled in the kernel so that even if you run network virtualization software you don't get it. Much of their front-facing stuff supports it, but users can't actually use it for anything.

Microsoft comes next. Azure has no IPv6 to speak of.

Amazon is finally rolling it out. "Second tier" VPSes like Digital Ocean, Linode, and Vultr have had it for aeons.

> Even our Google Wifi access points don't support IPv6 in their latest firmware

They actually rolled IPv6 support out with their latest firmware.

I'm at sea at the moment, so I can only find the play store note: https://play.google.com/store/apps/details?id=com.google.and...


"IPv6: Enable IPv6 on your Google Wifi."

If your access point won't pass IPv6 traffic - you should consider another AP vendor. I have plenty of older network gear that cannot do IPv6 - but it passes the traffic along unmolested.

I tried several AP vendors (Asus, TP-Link, Cisco) and Comcast did not play nice with them. Random disconnects all the time. Only Google's worked without a hitch. But alas, no IPv6.

(I'm busy and don't have time to deal with this BS. I just need internet access that works.)

That isn't what Aloha was getting at, why is a layer 2 device (in this case a WiFi AP) even interacting with IP addressing? You should be able to run whatever you want, whether that be IPv4, IPv6, or your own custom protocol using raw ethernet frames (of which there are quite a few).

Also, what is this whole "Comcast did not play nice with them" trope? I've dealt with Comcast many times, and used Asus, TP-Link, Cisco (the DPC3010 modems are my favorite) and others with them without issue. They aren't even a factor in your internal network and whether or not IPv4 or IPv6 works in it...

So at home my Comcast connection works fine with most modems and routers without any problems.

At my startup's house:

- Using the Comcast default combined modem+router works.

- Using the Google Wifi AP with a DPC3010 resulted in the DPC3010 being bricked upon connecting to the cable line. This is after the DPC3010 was working fine at my home. That DPC3010 no longer works, even at home, and only gets a power light upon startup.

- Using the Google Wifi AP with a TP-Link modem works flawlessly. No IPv6 though.

- 3 other routers that work at home do not work at my startup's house with the TP-Link modem. Asus RT router gets a fake 10. address from Comcast, Xiaomi router stops responding after several hours, Cisco router gets no DHCP lease whatsoever. All three work at home, all flawlessly.

Comcast is unwilling to debug, saying that unless I use their official combined modem+router they will not provide support.

I still think the combined router/modem is the issue. I've worked with one of the combined router/modems with a Comcast business connection before. Very buggy, and getting static IPv6 and IPv4 addresses is a herculean task. The Cisco DPC3010 has its own oddities. I would suggest looking at some of the newer cable modems that Comcast supports. If you need static IPv4, though, you have to use Comcast's equipment.

The combined one works, it's just devoid of features I want, hence the separate modem and router. As for the router the Google one is the only one that seems to work at my startup's place. The other routers I tried only work on my home Comcast line.

Honestly I would ditch the Comcast provided router, and bring your own cable modem and router. I can't count all the number of issues I've had with Verizon and Comcast provided equipment.

I am almost positive I saw IPv6 in the release notes for the latest Google Wifi app update, am I crazy?

Thanks! I enabled IPv6 on my Google Wifi router. I see a Global inet6 addr in my ifconfig. IPv6 websites work. I can ping6 my machine from the outside. I can ssh -6 into my machine from other machines on the LAN. But unfortunately I still cannot ssh -6 into my machine from an outside computer.

It probably is firewalling all incoming IPv6 connections (?). The Google Wifi app has no settings about this. Googling and stackoverflowing for 10 minutes didn't find a solution. That's when I give up.

Sorry Google and the IPv6 community but I have more important things I need to be working on than dealing with this BS that never works the first time. Back to IPv4 and port forwards, which is working fine for me right now, and will let me get back to my work. :-/
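For anyone else stuck at the same step: the "works on the LAN, dead from outside" symptom can at least be narrowed down with a plain TCP connect test pinned to IPv6. (Generic sketch; the host and port are whatever you're testing, and this is not a Google Wifi feature.)

```python
import socket

def reachable6(host, port, timeout=3.0):
    """Attempt a TCP connect over IPv6 only. False means DNS, routing,
    or a firewall is eating the connection before it reaches the host."""
    try:
        infos = socket.getaddrinfo(host, port, socket.AF_INET6,
                                   socket.SOCK_STREAM)
    except socket.gaierror:
        return False
    for family, type_, proto, _, addr in infos:
        s = socket.socket(family, type_, proto)
        s.settimeout(timeout)
        try:
            s.connect(addr)
            return True
        except OSError:
            continue
        finally:
            s.close()
    return False
```

Run it from a machine on the LAN and from one outside; if only the outside call fails, the gateway firewall (not your server) is the culprit.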

When MIT rolled out IPv4, the world wasn't ready for it either.

"Rolled out" is different from "forced to use".

If I come up with a super-awesome computer vision algorithm and want to run a server in my dorm room to demo it, being forced to use IPv6-only when the school has enough IPv4 addresses is a stupid annoyance and will only reduce the number of people that can reach the website. Running on AWS or other IaaS service isn't an option for many students without much cash.

dheera really gets it! I can cruft decommissioned (but working) hardware from trash piles in loading docks; I cannot cruft AWS credit.

Google is known to be against new and safe technologies. Google Finance still uses Flash, and Android has the worst IPv6 support despite it being based on Linux. What's your point?

On the other hand, selling its IP4 blocks may do more to slow down the move to IPv6.

Indeed! If anyone needs to feel the squeeze for IPv4 to make a move to IPv6, it is AWS... which MIT is conveniently selling the IPv4 addresses to!

AWS does support ipv6 everywhere; the problem is that many consumers do not (I can't access ipv6 on my current provider w/o doing work on my side, for example) and so the need for public ipv4 is going to continue for years.

I would be really happy to have only ipv6 addresses in my VPC, as that would make connecting up multiple VPCs much easier since I know their ip space won't overlap.

The situation over here is quite the opposite: if you're using the regular plans of the major ISPs (which is most people), you have IPv6.

Free ADSL, which has been providing IPv6 access via 6rd for like 10 years, even started deployment of IPv6 only DSLAMs in April.

This makes the absence of IPv6 on major platforms and sites very visible. There is no excuse not to have IPv6 today, especially for market leaders, and its absence is definitely a showstopper WRT services we choose to use.

As an anecdote, we have seen some constantly increasing traffic over IPv6 in our logs, and our customers are definitely not on the technical side, very far from it.


Here in the US, Verizon wireline has been dragging their feet on a IPv6 deployment for years.

That's new. Last time I tried to get an IPv6 address for an EC2 instance it was either impossible or you had to set up this complicated virtual network thing depending on where your EC2 instance was physically hosted.

Yup, they rolled out IPv6 support across several services in January 2017: https://aws.amazon.com/blogs/aws/aws-ipv6-update-global-supp...

AWS will give each VPC a /56, and each subnet a /64


Again, the thing I would like to see is being able to either peer only ipv6 for VPCs, or have a VPC that is ipv6-only. That to me will greatly increase flexibility and simplicity if I'm OK with an ipv6-only deployment.

Well, lately AWS (even EC2) supports IPv6. It would be cool if they would enforce it, i.e. IPv6-only internally, with access only via some kind of edge router.

Sometimes I feel that forcing all IoT devices, typical laptops, and phones behind NAT is actually safer for the Internet as a whole.

Security via network segmentation. IMO, a NAT gateway is a good place to lock down and put in a network security appliance to track/block all the unwanted connections.

Except NAT neither segments the network nor locks it down. Those things are done by a router and firewall. Proper security for IoT devices can be achieved by A) writing more secure software for IoT devices and B) having a proper firewall solution with sane defaults. Using NAT as a tool to masquerade your IP address is not secure. See NAT hole punching for example [1].

NAT is terrible from a network engineering perspective; it was mostly a patchwork to deal with the rapid expansion of the Internet and the shortage of IPv4 space. IPv6 brings a lot of cool technology to the table, like path MTU discovery[2], header extensions[3] and proper anycast[4]. It also makes dealing with subnets and network segments a lot more sane and scalable.

[1] https://en.wikipedia.org/wiki/Hole_punching_(networking) [2] https://tools.ietf.org/html/rfc1981 [3] https://www.cisco.com/en/US/technologies/tk648/tk872/technol... [4] https://en.wikipedia.org/wiki/Anycast

> NAT is terrible from a network engineering perspective, it was mostly a patchwork to deal with the rapid expansion of the internet and the shortage of IPv4 space

NAT is very useful to the network engineer. NAT lets you turn one network into a different one on a simple one-to-one translations basis.

However, NAT got mixed up with masquerading to the point where when someone says NAT, they assume you also mean masquerading, when in fact the two are different concepts. NAT when you are doing many-to-one translations is a disaster in many ways, but it works just well enough (and face it, the alternatives don't exist).
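The many-to-one case is where things fall apart for inbound traffic. A toy model (all addresses and ports invented) of why masquerading drops unsolicited inbound packets, while one-to-one NAT is just a reversible address map:

```python
import itertools

PUBLIC_IP = "203.0.113.1"        # made-up WAN address of the gateway
_ports = itertools.count(61000)  # made-up translated-port pool
conntrack = {}                   # (inside_ip, inside_port) -> public port

def masquerade_out(src_ip, src_port):
    """Many-to-one: rewrite an outbound flow onto the shared public IP,
    inventing and remembering a translated source port per flow."""
    key = (src_ip, src_port)
    if key not in conntrack:
        conntrack[key] = next(_ports)
    return PUBLIC_IP, conntrack[key]

def masquerade_in(public_port):
    """Inbound is deliverable only if an outbound flow created state;
    unsolicited packets match nothing and are dropped."""
    for key, port in conntrack.items():
        if port == public_port:
            return key
    return None  # this is why running a server behind masquerading fails
```

One-to-one NAT needs none of this per-flow state: a plain address-to-address mapping works in both directions.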

Ah well, I should have been more clear in my comment. PAT especially is a disaster.

One-to-one NAT translation is fine, although it still breaks a lot of things, especially on the IPv6 side (like path MTU discovery).

Has any security threat ever relied on NAT hole punching from the outside in? The only cases I can think of involve defective gateway firmware, and IPv6 is hardly a panacea for that.

My guess is that IPv6 is the ISDN of the 21st century... an intermediate step between two networking paradigms, one being IPv4 and the other being something we haven't seen yet. IPv6 will appeal to specialists but will never, by itself, see wide adoption. The fact is that NAT works for 99.99% of users, and works very well.

By a couple weeks, perhaps. Before IANA ran out of IPv4 space in 2011, they allocated a /8 roughly once a month:


Moving IPv4 to NAT and moving to IPv6 seems orthogonal.

IPv6 is the standard way to preserve end-to-end connectivity when IPv4 NAT breaks it.

You're free to see these as orthogonal problems, but if enough people think as you do (moving IPv4 to NAT without first deploying IPv6), then the Internet as we know it is dead.

Why do you say that? They're converging with the way the rest of the world* does things.

*Consumer, and Enterprises in the US

Not if you're selling your IPv4.

> Actually... MIT forcing an entire generation of future engineers to deal with IPv6

Then why are they doing NAT?

Edit: Although moving to NAT might push innovation too, or at least some clever hacks. Speaking of that, if you want to use a relay to do NAT traversal, is TCP over ARQ (on UDP) as bad as TCP over TCP?

They're doing NAT for IPv4. The IPv6 network space listed in the slide is publicly routable and almost certainly untranslated. It's possible it's firewalled though.

> Then why are they doing NAT?

Because they sold their IPv4 addresses.

Indeed! The only reason they're doing this is clearly that they want to sell even more IPv4 addresses, because even the current /9 is more than enough to go around the campus.

Wow - 2603:4000::/24. That's the largest block of IPv6 addresses I'm aware of being handed out to a single entity.

Normally, ISPs get a /32, from which they hand out /48s to their customers. And, with pretty much zero paperwork, an ISP can get a second /32 (usually adjacent to their first /32 so they can summarize as a /31).

So - an ISP might get 2001:1868::/32 and then hand off 2001:1868:0209::/48 to a customer.

Because a /48 allows 2^16 or 65K networks, each network containing (effectively) an infinite number of hosts, pretty much any single-region company can be effectively served with a single /48. The /32 allows the ISP to have 65K customers (each of which has 65K networks).
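The arithmetic checks out with Python's ipaddress module (using the example prefixes above):

```python
import ipaddress

isp = ipaddress.ip_network("2001:1868::/32")           # the ISP's block
customer = ipaddress.ip_network("2001:1868:209::/48")  # one customer's /48

isp_48s = 2 ** (48 - isp.prefixlen)            # /48s the ISP can hand out
customer_64s = 2 ** (64 - customer.prefixlen)  # /64 networks per customer
print(isp_48s, customer_64s)                   # 65536 65536
```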

What on earth is MIT going to do with a 2603:4000::/24? I'd love to hear the story behind why they got such a large block.

edit: according to https://www.arin.net/fees/fee_schedule.html this is considered a "medium" (WTF?) allocation with a cost of $4k/year.

It has nothing to do with the number of addresses, and everything to do with making it easier to divide things up for routing.

A large ISP like Comcast or AT&T can now have, say, a single /16 or /24 allocation, and pretty much no matter how much they subdivide their regional routing, routes to AT&T can easily be coalesced and summarized, and every end customer can still get a /64 until pretty much the end of time.

I totally understand why Comcast, AT&T, Verizon and other service providers would want /16s. They are continent-wide providers with millions of customers (millions of sites).

I'm trying to grok why MIT went for a /24 instead of a /32. Because they could?

Because they are replacing their /8. They want to make sure they are never constrained.

With their IPv4 /8 they had 2^24 IPv4 addresses to work with, or, from a network perspective, 2^16 (65K) /24 networks, each network containing no more than 254 hosts.

If they had requested a boring /48 IPv6 allocation (that anybody can have just by asking) - they would have had 2^16 /64 networks, and each network could have had basically an infinite number of hosts.

But this is the IPv6 world, so I would have expected MIT to claim they were an LIR (Local Internet Registry, the equivalent of a small ISP or larger) and ask for a /32, which would have given them 2^32 networks, or 4 billion networks to work with. They probably would have assigned the networks by segmenting them on a per-site basis, so each site would have had a /48 assigned; they could have up to 65K sites, each site having 65K networks, each network having (effectively) an infinite number of hosts. That is, a /32 would have been far, far, far larger than their /8 was. Easier to manage as well (no VLSM, nothing ever smaller than a /64). And keep in mind, with a single, no-contest request, they could have gotten the /32 adjacent to theirs (another 4 billion networks, or 65K /48s) so they could aggregate on a /31.

Instead, they've asked for a /24. And I'm just darn intrigued as to why they think they can make use of such an address space. If they weren't constrained with their /8, then a /32 would have been far more than they ever required. (And odds are a /48 would have been sufficient with even modest address management).

I mean, I work with really large mesh networks, millions of nodes, some of our subnets have 20K nodes each on them - we roll out /48s like they are nothing, and even after deploying a couple hundred customers over 10 years, and 25 million nodes, I think we've used up maybe 1500 /48s.

BTW - this doesn't even take into account that they can use RFC 4193 up the wazoo for all sorts of interesting non-globally routable experimental internal networks.

I'm just hoping someone from MIT is reading HN and will clue us in.

They want to sell it in a couple of centuries.

I don't see what the problem is. The IPv6 address space is so large it doesn't matter at all.

That's true when we are talking about /48s. There are enough /48s to hand out to everyone (and then everyone gets 65K networks to work with). In the current Global Unicast space (prefixes starting with 001) there are 2^45 /48s. That is an effectively unlimited number, as there are about 2^33 people on earth, so everyone could get 2^12 /48s.

There are 2^21 /24s in the existing Global Unicast space. About 2 million. A large number, to be sure. But if every entity in the world of MIT's size asked for a /24, we'd start burning through them pretty quickly....
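Same back-of-the-envelope math in code (2000::/3, prefixes starting with binary 001, is the current Global Unicast block):

```python
GU_PREFIX_LEN = 3  # 2000::/3

num_48s = 2 ** (48 - GU_PREFIX_LEN)   # 2**45 /48s
num_24s = 2 ** (24 - GU_PREFIX_LEN)   # 2**21, about 2 million /24s
per_person = num_48s // 2 ** 33       # ~2**33 people -> /48s per person
print(num_24s, per_person)            # 2097152 4096
```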

Isn't that what they once said about IPv4 when they were divvying it up?

Orders of magnitude are completely different, of course. When MIT got its /8 (one of just 255 available), it gave them the ability to create 2^16 networks with just 254 hosts on each one.

Now, anyone who wants to can ask for a /48, and they are automatically given it, and they are able to create 2^16 networks, each network with an effectively infinite number of hosts on it. And there are so many /48s in the (current) Global Unicast space (prefixes starting with 001) that everyone on earth could be granted 4000 of them. And, in reality, ISPs have already started conserving, handing out /56s instead of /48s to most consumers/SoHo, further increasing that number by 256x.

And that's just the 001 prefix. Based on what we learn about allocating space there, new policies can be devised for the other 6 prefixes available. We aren't going to run out of IPv6 address space.

Sometimes I wonder if we'll have an IPv6 shortage eventually. There aren't all that many /16 networks to go around.

The difference is that very, very few organizations require blocks that enormous in IPv6. And the ones that do will most likely never need to ask for more. (For comparison, Comcast, the largest home ISP in the US, has a /20.) But in IPv4, a /16 (or smaller blocks adding up to one) is nothing for even a mid-sized ISP.

Also, the allocation policies are much different. In IPv4, allocations are made to be as small as possible (while still meeting immediate needs) in order to conserve addresses. The cost is that the routing table grows like crazy, since network operators need to keep coming back year after year (or sooner) to get more address blocks.

In IPv6, allocations are made to cover the network operator's long-term needs so that they won't have to get a second, non-contiguous block. By "wasting" address space you cut down on routing table size, since even very large networks only need to announce a small number of routes.

Even with this "waste", we've only allocated a tiny amount of IPv6 space. None of the RIRs have had to go back to IANA for new IPv6 space since 2006[1], even with allocations ramping up in the last few years. (And IANA only gives the RIRs a /12 at a time.) We seem to be fine for the foreseeable future.

But even if somehow we do start using up too much v6 address space, the good news is we can always change allocation practices. Let's say in 2040 we realize we've used up way more of the IPv6 space than we expected we would by then. The RIRs can always change their policies, favoring conservation over smaller routing tables like they used to with IPv4. I seriously doubt that will happen within any timeframe we can reasonably make predictions about, but if it does, we'd have a lot more flexibility than we did with IPv4 depletion.

[1] https://www.iana.org/assignments/ipv6-unicast-address-assign...

> and every end customer can still get a /64 till pretty much the end of time.

On Telekom Germany every customer gets a /56. Fun fact: their routers (Speedport) can't do prefix delegation to other routers; you need to buy one from another vendor.

Deutsche Telekom (AS3320) has a 2003::/19.

Definitely makes sense for SPs (Service Providers) like Deutsche Telekom to pick up larger blocks - /16s even for the big ones. But MIT is more likely classified as an LIR (Local Internet Registry) - /32s are more appropriate for them.

MIT is selling its IPv4 space to fund its transition to IPv6. Didn't see this link anywhere in the article.


"The Library has the entire Net 18 address space registered at many hundreds of publishers of licensed e-resources. With no prior notice, we have been forced into non-compliance with our licenses with every such provider." I wonder what would happen if the publishers actually sued MIT and Amazon, with maybe an injunction preventing Amazon from using the space.

That kind of statement is simply an institutional prelude to internal negotiations for an increased departmental budget.

Oh no, they have to tell the publishers their new subnet. The drama.

Projects have broken over this, with no real gain for implementing it. You would expect more rationality from MIT, of all places.

No, no I would not expect more rationality. I am an alum. I am long since out of academia and have no idea what is behind this. However, if I had to guess, I bet there is some inner politics. Just 'cause it is MIT doesn't mean it is free from human nature. And, truthfully, I've seen some petty things get all sorts of dramatic.

We are humans. We're not rational beings, we're rationalizing beings. We like to pretend we are better but there is a valid reason that they say science progresses one death at a time.

I wonder if they felt like IPv4 addresses were a depreciating commodity and selling them in 5 years would not have brought in as much cash?

MIT's faculty and students are MIT faculty and MIT students.

MIT's admins are just admins. Same mentality you find in all the rest of academia, and they're actively hostile to everything that distinguishes MIT.

I moved to student housing in Sweden in 2004 when they had aging network infrastructure (all 100 MBit but that also applied to the shared links to the housing areas[0]), and by the next year they just ditched the school-sponsored network and moved to making students pay for third party internet (distribution to rooms was still Ethernet-based but now with a citywide fiber backhaul run by the municipal power company shared by regular apartment buildings).

We got faster service with fewer restrictions (no P2P service filters) for like $10/mo with student pricing, and still with fixed IPs.

I'm not sure what innovative service a university can provide in this space in 2017.

[0]which meant about 1000 rooms sharing 100MBit internet access. This was somewhat mitigated by local DC++ networks in each housing area to keep piracy downloads off the shared link.

Do you have to pay $10 per server you set up?

Also, by acting as the ISP, it also gets to be more protective of its log data than an outside provider might be.

It's not that an outside ISP couldn't provide the same level of service and speed at the same or less cost, while protecting the interests of the MIT community, it's just that I don't expect it's as likely.

I don't recall the pricing for extra IPs but it was probably something like that. I just had a stack of servers built out of donated parts in my dorm room hosting different services on VHosts. At one point I was pushing 3 TB/mo in outgoing bandwidth on that $10/mo plan.

It's not about what innovative service a university can provide: it's about what kind of innovation the university can help empower its students to make.

> Instead of being renumbered into publicly-accessible IP ranges, IS&T is moving all of campus into RFC-1918 10/8 addresses, and enforcing the campus firewall, which will be made up of Palo Alto 7050 devices, which are best known for their deep-packet inspection feature, App-ID.

Then later in the article:

> NAT deployment doesn't benefit the Institute in any way, other than to make things more difficult.

Possibly ignorant question-- could this choice be influenced by the inescapable rise of cheap IoT devices flowing in from China?

I mean, if a freshman arrives with a desktop rig they bought purposely to use as an experimental server, and they explicitly register the software using a web form, you can imagine a very loosey-goosey relationship between students and IT built on good faith.

But if a freshman unloads their luggage and a few dozen random internet-connected baubles drop out and start joining the network, what is IT supposed to do? Especially considering MIT probably does a lot of research for DoD...

I'm all for supporting innovation and community services, but I think author is not mentioning other possible causes, like DMCAs, malware and spam (including unintended), which could have damaged the reputation.

I just wonder why MIT didn't give more time to move, and why it doesn't provide a replacement in, e.g., cloud credits.

If it went down like it did at CMU, IT polarized into a camp that wanted to maintain the traditional stack and a camp that wanted to tear it down and replace it with contemporary cloud services. When the latter won, they wasted no time in salting the earth of the former's territory.

Disclaimer: I wasn't actually party to any of this, I heard it second hand, corrections welcome.

I feel like that is different though. In fact, that transition increased innovation capability in my view, as someone who wrote some of the largest-reaching on-campus applications at CMU during my time there.

Andrew was pretty dated and virtually everyone I knew was already forwarding their emails through their Gmail anyways. Transitioning to Google Apps was basically cutting out the middle-man while adding on some interesting functionality.

1. Now everyone also had a better calendar system and file sharing with Google Drive within the CMU namespace. This meant that clubs, for example, could create Google Forms that were restricted to actual CMU students with CMU email addresses, and they could use that guarantee to do more interesting things like adding information from the Directory on top of those emails without having to ask for it (such as Major and School).

2. We also got a way to allow people to use their Andrew logins on any student-made website with simple OAuth2 that restricted to using the @andrew.cmu.edu emails on Google Accounts. Before this, apps like ScheduleMan and the StuGov apps (which I developed on when I was there) had to get special permission to use Shibboleth and there were a bunch of restrictions for having those certs including having to run on CMU infrastructure.

Now, any student can create applications with the same login guarantees of only being accessed by CMU students and allow for one-click login and registration (since most relevant information can be obtained by looking up their email in the Directory), vastly increasing the usability of these apps.

In my view, that transition was definitely a net-positive to the CMU ecosystem, in both usability and development.

So why did the latter win?

The non-CS departments weren't huge fans of Andrew when I was there. Instead of learning to use it, they would work around it. I suspect the tension eventually came to a boiling point.

Many years ago in a past life, I worked on the network security team at the University of Chicago. We had a similar policy (and they may still for all I know) of just being able to requisition publicly routable IPs and run whatever you wanted on them with no default firewall rules applied at the border. Not for nothing did we call this a "target rich environment".

For all of the cool things I got to do (troubleshoot a breakin at the South Pole, send the RIAA a DMCA takedown notice when they stole our content (absolutely the highlight of my career), etc.), we spent the vast majority of our time on nonsense. We processed dumb breakins by the hundreds, had to enforce DMCA takedowns, and the like.

I'm also all for innovation and giving people the freedom to deploy services and innovate, but I would have killed to deploy all IPs by default behind NAT/firewalls and work with researchers to help them understand their responsibilities before giving them public IPs.

> behind NAT/firewalls

These are two separate things.

There is no security difference between "route port 80 of one of our public IPs through to my NATted address" and "open port 80 for my public address".

The public addresses are easier to administrate, troubleshoot, log, etc.

I agree. I meant the / to indicate that either one or a combination would have been helpful in different ways, but I certainly could have phrased it better.

Good question indeed. Why didn't MIT give more time to move, and in fact gave absolutely no notice, such that WMBR, the campus radio station, had to scramble to get their online radio back online again?

Just reading the story now, but I can confirm that college radio stations (and radio stations in general) are built to run and then be forgotten. My campus still uses a rather old copy of Simian which relies not only on IPv4, but also on filepaths shorter than 255 characters.

SIPB (MIT's volunteer student computing group) offers free "cloud hosting" to anyone with an Athena account.


Honestly that sounds like a better alternative, than hosting on bare metal.

By default the service maxes out at 512 MB of RAM (yes, the service was invented in a different decade, way before AWS was cool). So if I want anything more than that, I'm better off running my own server.

Also ironically, I can't reach any of the newly NAT'd networks from my XVM instance. I bet the XVM maintainers haven't been warned about the NAT.

Did not know that. Honestly, MIT's IT department should have given stakeholders more notice about this. I've never had the chance to work on a public-facing network (my alma mater, GMU, has a mostly NATed network).

Isn't innovation worth more than the cost of malware, spam, and DMCAs?

Edit: In the late 80s the Morris worm was launched from MIT, but the network admins of the 80s didn't overreact like this. I wonder why.

MIT is run by corporate shills now driven by profits, and IS&T is just another bureaucracy with its own interests to justify existence.

Did CIDR even exist in the 1980s, let alone NAT? (They are basically selling off half of their "Class A" block to AWS now)

Cloud credits aren't a replacement for local servers.

Things I got to try at MIT that would be a lot harder on AWS:

- set up a TOR exit node

- set up a single-system image cluster across five 1U servers

- set up a ZFS box with RAID-Z and dm-crypt

- played with a real lisp machine

- put a raspberry pi on the internet (something I did for several projects)

But it still just doesn't make any sense to me. They can just firewall the entire campus network, and firewalling can very well be done without NAT...

At a certain level of firewalling you get the disadvantages of NAT anyway: for example, if you block all inbound traffic, or even just HTTP(S).

From the article:

NAT deployment doesn't benefit the Institute in any way,...

I have often had changes foisted upon me that when I looked at them I could see no benefit. In every instance the 'benefit' I didn't see was one that I typically didn't approve of and so hadn't listed in my set of 'possible benefits'.

From reading the article though it sounds like MIT has had a very open and loosely (if at all) documented set of features around network access. And in today's world network access is many things more than it was 10 years ago. But perhaps the process of going through and documenting all of the things they do was 'too expensive' compared to setting it up the way the institution wanted it to work and then dealing with any fallout as it arose.

Another in a series of signs that the Internet is moving from science project to critical infrastructure.

The benefit is that they can use the proceeds of the sale of the IP block to upgrade their infrastructure.

Agreed and it doesn't seem to be one that the author considered.

I wonder if any of this is related to the new NIST Standards[1], which have to be followed by research labs who receive government funding. I could see MIT, already having to retrofit a lot of their research networks, also changing around the network architecture in other places as well.


Nah, NAT doesn't provide the security; firewalling does.

Besides, what kind of controlled unclassified information could possibly be residing on dormitory networks?

Sorry, but NAT provides practical security for all but the most pedantic use cases.

I'm terrified of IPv6 for this exact reason - it assumes every device will have direct access to the Internet. As we've seen with security cameras and other IoT devices, they just aren't designed to protect themselves and are easily hackable, so making them accessible to the wider Internet is crazy. Firewall options for IPv6 for most home routers are limited at best at the moment... I for one am quite happy to have all my local devices happily running behind an IPv4 NAT (in addition to a firewall), knowing they can't be targeted directly without some sort of concerted effort.

IPv6 supports NAT...

True, but I think its fair to say the crazy wide proliferation of IPv4 NAT in consumer routers wouldn't have been nearly as large if IPv4 address exhaustion hadn't been a very real thing forcing hands. IPv6, at least for the foreseeable future, doesn't have this motivating factor for using NAT.

I completely agree in principle with the old adage "NAT is not a firewall", but in reality it often effectively works as one for many consumers, many of whom probably don't have the technical knowledge to understand the protection it has indirectly afforded them for the past however many years.

PDF is titled "Protecting Controlled Unclassified Information in Nonfederal Information Systems and Organizations"

Author got it wrong. MIT wants you to use IPv6.

How can MIT people use IPv6 when it hasn't been rolled out on campus yet? And how does it make any sense to put the campus (Ethernet) network behind a NAT, when MIT still has half of the address space it had before, which is more than plenty to go around?

NAT will make future sales of IPv4 blocks easier? As you say, MIT doesn't need all those 8 million IP addresses, and eventually will adopt IPv6 anyway. Might as well sell surplus v4 space while it's still valuable.

But why is MIT making money on those addresses more valuable than giving its students an opportunity to experiment and innovate? Why would selling those "unused" spaces to Amazon further the cause of IPv6?

Also MIT hasn't even rolled out IPv6 yet...

Would the logic not be that the money will be spent on more useful things for students?

Perhaps. I haven't seen public discussion on the plans to use the fund though, other than vague promise that it will be used on Internet things.

> Net proceeds from the sale will cover our network upgrade costs, and the remainder will provide a source of endowed funding for the Institute to use in furthering its academic and research mission.

Source: https://gist.github.com/simonster/e22e50cd52b7dffcf5a4db2b8e...

When the two are in direct conflict, is it more important for MIT to maximize the value it can extract out of its properties, or for it to promote adoption of improvements to technology?

kudos to this well-researched post. As a student with a server in a dorm room, I really hope they don't take away my public IP address.

There are advantages to being on a private network behind a firewall ... and they could still offer a DNS name and routing to your computer if it was on a private network. It's likely that the only difference is that you'd also have to specify what ports you want exposed to the outside world. This is a win for you from a security perspective - having additional layers of security won't hurt you.

NAT is not an additional layer of security.

I run our servers on public IP addresses, behind a firewall. Troubleshooting and debugging is made much easier, and there's never any conflict with VPNs etc.

> It's likely that the only difference is that you'd also have to specify what ports you want exposed to the outside world

Port 80, please. With NAT, you can't offer that to more than one computer.

You nailed it! Students love that they can just spin up a whole new web server, no questions asked. I certainly won't be where I am on sysadmin-type skills without the kind of tinkering that the un-NAT'd network affords.

That's great and all, but the majority of us have lived in an IPv4 NAT world for most of our lives. A previous poster even mentioned that a student group runs a cloud hosting service on campus, making some of this moot:

"SIPB (MIT's volunteer student computing group) offers free "cloud hosting" to anyone with an Athena account. http://xvm.mit.edu/"

But you can: a simple reverse proxy lets the same port be used for multiple servers, picking one based on hostname or query (for HTTP).

Reverse proxies like nginx also have plain TCP support, so you can easily run several services.
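To sketch what that hostname-based picking looks like (the hostnames and backend ports below are made up for illustration; a real deployment would just use nginx), here is a toy Host-header dispatcher in Python:

```python
import socket
import threading

# Hypothetical mapping of Host header -> backend server (illustrative only)
BACKENDS = {
    b"alice.example.edu": ("127.0.0.1", 8001),
    b"bob.example.edu": ("127.0.0.1", 8002),
}

def pick_backend(request_head: bytes):
    """Return the backend for the request's Host header, or None if unknown."""
    for line in request_head.split(b"\r\n"):
        if line.lower().startswith(b"host:"):
            host = line.split(b":", 1)[1].strip().split(b":")[0]
            return BACKENDS.get(host)
    return None

def handle(client: socket.socket) -> None:
    head = client.recv(65536)            # toy assumption: headers fit in one read
    backend = pick_backend(head)
    if backend is None:
        client.sendall(b"HTTP/1.1 502 Bad Gateway\r\n\r\n")
        client.close()
        return
    upstream = socket.create_connection(backend)
    upstream.sendall(head)               # forward the request to the backend
    while chunk := upstream.recv(65536):
        client.sendall(chunk)            # relay the response back to the client
    upstream.close()
    client.close()

def serve(port: int = 80) -> None:
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", port))
    srv.listen()
    while True:
        conn, _ = srv.accept()
        threading.Thread(target=handle, args=(conn,), daemon=True).start()
```

This is only a sketch of the dispatch idea; it ignores keep-alive, request bodies, and TLS, which is part of why the downthread objections about scaling and trust are fair.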

You have to scale the reverse proxy, and you've added another point of failure.

Not to mention - who runs it? It needs to be trusted to terminate TLS or do 5-tuple proxying based on the SNI destination (not all clients send SNI). Also if the MIT student is doing something akin to protocol level development it's possible a middle proxy will prevent them from doing their work.
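For the curious, a middle proxy can pick a destination without terminating TLS by peeking at the ClientHello's SNI, which is mostly offset arithmetic. A rough, non-production Python sketch (it assumes a single well-formed record and skips the error handling a real proxy needs):

```python
def extract_sni(data: bytes):
    """Best-effort SNI extraction from a raw TLS ClientHello record.

    Returns the server name, or None (e.g. for clients that don't send SNI).
    Toy code: assumes the whole ClientHello arrived in one record; a real
    proxy must handle fragmentation and malformed input much more carefully.
    """
    if len(data) < 6 or data[0] != 0x16 or data[5] != 0x01:
        return None                    # not a TLS handshake / ClientHello
    pos = 9                            # record header (5) + handshake header (4)
    pos += 2 + 32                      # client_version + random
    pos += 1 + data[pos]               # session_id
    pos += 2 + int.from_bytes(data[pos:pos + 2], "big")  # cipher_suites
    pos += 1 + data[pos]               # compression_methods
    end = pos + 2 + int.from_bytes(data[pos:pos + 2], "big")
    pos += 2
    while pos + 4 <= end:              # walk the extensions
        ext_type = int.from_bytes(data[pos:pos + 2], "big")
        ext_len = int.from_bytes(data[pos + 2:pos + 4], "big")
        pos += 4
        if ext_type == 0:              # server_name extension
            name_len = int.from_bytes(data[pos + 3:pos + 5], "big")
            return data[pos + 5:pos + 5 + name_len].decode("ascii")
        pos += ext_len
    return None
```

And as noted above, this simply returns None for clients that omit SNI, at which point the proxy has nothing to route on.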

There is also the hassle factor. You may stop people from ever trying something because of the added hoops they must go through.

So, now I have to run a reverse proxy -- another point of failure, another thing to debug when something's not working.

That's a hack to work around a shortage of IP addresses. Why would I use one, when I don't have that shortage?

Well, you can have tons of static IP addresses, or funding to support MIT and future IT upgrades.

NAT is not security at all.

Wish I had access to such resources during my education! Too bad they are breaking their own system.

Perhaps doing it over .onion?

Actually I have been experimenting with this for my pet projects. The downside is that it's relatively slow, but getting a "global" address is a click (well, a few lines of config) away...

But that would just be ridiculous, considering that experimenting with Tor relays is like a favorite student pastime...

You don't need to run a Tor relay in order to run a hidden service. I threw in this idea because it's a dead simple way (cheap/free, and you don't have to coordinate with anyone) to get your stuff publicly accessible, not least for experimental purposes.

> You don't need to run a Tor relay in order to run a hidden service

In fact, you shouldn't run them both [0]

0 - https://riseup.net/en/security/network-security/tor/onionser...

Sure, but I'm just pointing out an example where MIT students get to be actors as well as playwrights, whereas now one must follow the prescribed lines and mustn't be too naughty.

And hosting websites is? I don't see the average student doing that either.

I do, but then I also hosted hidden services, relays and exit nodes...

Well, an average student isn't going to do anything interesting. The MIT I know works to enable its most resourceful and enterprising students, and is not satisfied with just enabling "being average".

Yes! An average student can learn to host their server very easily with public addresses, and that was how I got started.

When I was at the Institute (80s) the IT services were a barrier to computation. They had their big 390/VM system used for accounting and some course 15 stuff. One intern digitized the Mens et Manus logo and IT excitedly trumpeted that they had done so -- jeez, it had been in a font on the Xerox XGP at the AI lab for what, 15 years at that point?

All of course 6 ignored them, and I don't believe they had any impact with Athena. Certainly they would have been upset by faculty writing the root password on all the whiteboards.

In fact they had nothing to do with IP allocation (I doubt they knew what TCP was). I wonder what bureaucratic maneuvering gave them control of that!

At UCSD we not only got a public IPv4 address for each device but also an automatic *.dynamic.ucsd.edu subdomain assignment based on the device hostname. Came in handy for my Raspberry Pi.

Yep, that's the way it's traditionally been at MIT as well. The DHCP hosts get things like DHCP-ipaddressspelledoutinenglish.mit.edu. The fear is that it's all going away.
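As an illustration of that naming scheme, spelling out an address is simple. (The exact format below is a guess for illustration, not IS&T's actual convention:)

```python
ONES = ["zero", "one", "two", "three", "four", "five", "six", "seven",
        "eight", "nine", "ten", "eleven", "twelve", "thirteen", "fourteen",
        "fifteen", "sixteen", "seventeen", "eighteen", "nineteen"]
TENS = ["", "", "twenty", "thirty", "forty", "fifty",
        "sixty", "seventy", "eighty", "ninety"]

def octet_to_words(n: int) -> str:
    """Spell out a 0-255 octet in hyphenated English."""
    if n < 20:
        return ONES[n]
    if n < 100:
        tens, ones = divmod(n, 10)
        return TENS[tens] + ("-" + ONES[ones] if ones else "")
    hundreds, rest = divmod(n, 100)
    out = ONES[hundreds] + "-hundred"
    return out + ("-" + octet_to_words(rest) if rest else "")

def dhcp_hostname(ip: str) -> str:
    """Build a hypothetical dhcp-<spelled-out-address>.mit.edu style name."""
    return ("dhcp-"
            + "-".join(octet_to_words(int(o)) for o in ip.split("."))
            + ".mit.edu")
```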

Good luck with name collisions.

My college didn't have campus-wide internet; you had access to a limited, firewalled internet "protected" by Fortinet, so I can't feel empathy for the MIT alumni, since you can work perfectly well without those tools.

Sorry that your college doesn't see Internet access the same way MIT does. But don't you think that by making its campus network more unnecessarily restrictive, MIT is setting a bad example?

Not OP, but not really.

In the age of AWS, maybe it's time the elites see how the other half lives?

Not OP, but I am happy my uni does not see the internet as MIT does and acts far more responsibly.

MIT, by holding on to a class A address block for so long, has set a bad example. Most of the 16 million addresses they own were never really used. How many other universities have access to this kind of wasted luxury? Stanford returned theirs ages ago.

Sure, IPv6 was inevitable, but if Ford, MIT, and the rest had not been so greedy, the rest of the world would not have grown up behind a NATed internet. I share an IP with thousands of others; every third Google search throws up a captcha.

sounds like UC Berkeley

Who sold the IP addresses to Amazon? That part was not clear to me.

MIT did. They owned that IP block.

Does MIT have IPv6? If so just use that.

Author here -- MIT does not currently have IPv6. Although MIT did receive a /24 IPv6 allocation, https://whois.arin.net/rest/net/NET6-2603-4000-1, it's not routable everywhere on campus yet.

Unfortunately, IPv6 deployment is still below 20% (as measured by Google, https://www.google.com/intl/en/ipv6/statistics.html) so a publicly accessible IPv6 address is not yet sufficient.

20% is starting to reach critical mass.

Those are the fast moving and the new parts of the internet. I bet that in a few years IPv6 deployment will slow down as the remaining systems become older, less maintained or even opinionated (like me). For many years to come some kind of IPv4 connectivity will remain a requirement to access all of the internet/web.

I'm not switching my private network. This has nothing to do with wider adoption, nor do I have issues with IPv6 as a protocol.

What's blocking me is router firmware. It can do IPv6, but only as an afterthought. Sadly, no level of adoption is going to fix that, until I buy a new router.

Time for a new router, then, and by "new" I mean any produced in the last ten years. I have several old routers in a junk drawer that only do 10/100 and even they support it.

It supports it, but DHCP and the firewall are much less configurable. Things are exacerbated by my being behind 2 routers.

A router that fixes all this is at least 120. That is too much for me. I tried dd-wrt, but that doesn't fix the first router on the chain.

I think MIT is in the process of rolling out IPv6 officially. They only got an allocation early last year, and I think right now it is being used on the VPN network.

For everyone talking about this being merely a question of technical updates, it might help to see this in the bigger picture of a pattern of changes going on at MIT.

MIT had a very non-authoritarian, egalitarian culture, as Richard Stallman described it:

"I went to a school [Harvard] with a computer science department that was probably like most of them. There were some professors that were in charge of what was supposed to be done, and there were people who decided who could use what. There was a shortage of terminals for most people, but a lot of the professors had terminals of their own in their offices, which was wasteful, but typical of their attitude. When I visited the Artificial Intelligence lab at MIT I found a spirit that was refreshingly different from that. For example: there, the terminals was thought of as belonging to everyone, and professors locked them up in their offices on pain of finding their doors broken down. I was actually shown a cart with a big block of iron on it, that had been used to break down the door of one professors office, when he had the gall to lock up a terminal." (https://www.gnu.org/philosophy/stallman-kth.html)

In 2004, the MIT AI Lab was "upgraded" to the new Stata Center building, an unwieldy, Frank Gehry-designed monument to a recent MIT president's ego, and the antithesis of what it replaced, Building 20. Building 20 was a utilitarian construction from WW2 with no pretenses of becoming a prized or permanent spot on campus. Instead, its residents helped it organically acquire a character of its own, as Wikipedia describes well:

'Due to Building 20's origins as a temporary structure, researchers and other occupants felt free to modify their environment at will. As described by MIT professor Paul Penfield, "Its 'temporary nature' permitted its occupants to abuse it in ways that would not be tolerated in a permanent building. If you wanted to run a wire from one lab to another, you didn't ask anybody's permission — you just got out a screwdriver and poked a hole through the wall." [...] MIT professor Jerome Y. Lettvin once quipped, "You might regard it as the womb of the Institute. It is kind of messy, but by God it is procreative!" [...] Because of its various inconveniences, Building 20 was never considered to be prime space, in spite of its location in the central campus. As a result, Building 20 served as an "incubator" for all sorts of start-up or experimental research, teaching, or student groups. [...] Building 20 was the home of the Tech Model Railroad Club, where many aspects of what later became the hacker culture developed [not to mention pranksters and lock pickers, as well].'

Sadly, the TMRC's elaborate railroad, which exhibited interesting pre-miniaturization computation, didn't survive the dismantling of Building 20 and was eventually replaced with modern components. I also hear the Stata Center has two spires, one maddeningly named after Bill Gates, separating the two fiefdoms of computer science at MIT in glass-paneled offices meant to flatter status-conscious administrative types. Since Frank Gehry's architecture is proprietary and depends on strict tolerances, there's scant building modification going on.

That's why I think you can see these network changes as a tragic continuation of a destruction of the historical character of MIT, even though they may also be necessary.

More info about Building 20:




It's called progress; it's been a long time since the Trojan Room coffee pot.

Maybe they'll offer 1:1 NAT on-request?

Why does this blog require javascript?

Unfortunately because of Google. Dunno why they did that, it doesn't make it any better; but it's not really the author's fault either.

It's not obligatory to use Google products for your blog.

AFAIK only some Blogger themes require JS.

Here's an archived copy that doesn't require JS:


thank you

That's the blogger platform

MIT wasn't running NAT before now? WTF? Talk about a security nightmare.

I might be weird but I always think it's funny to hear what the elites complain about. It's like hearing Yalies complain about their CS department or someone at Harvard complain about the food - completely divorced from the rest of us.
