Last time I was at the MIT media lab for a conference, I was able to get an unfirewalled external ip address from their wireless network. It was amazing. I briefly streamed live audio of a talk via shoutcast, but of course, nobody uses that stuff anymore. It really makes you ponder what a cloudless internet would be like.
A little over a year ago, my university gave up their generous range of IP addresses. You could plug in ethernet and not just get an internet-routable IP (albeit firewalled from incoming traffic from the internet), you would even be assigned a subdomain off of the school's .edu domain. It was great. Students ran servers in their dorms. Clubs ran servers. Some professors ran servers. Even though you couldn't listen on a socket for incoming traffic from the internet by default, it was unfirewalled internally. I had to briefly live in student housing, and I was able to connect to my server through the school's http proxy using corkscrew. There were so many cool uses for it. Students were encouraged to run web servers if they needed hosting. It was also much faster.
I think CMU still provides fairly decent network services.
It was an insanely useful utility provided to students, and any serious engineering school should do it.
Even your comment about streaming seems like mainly nostalgia vs. reality, as you kind of admit. Any modern streaming infrastructure works from behind a NAT, since most people are behind one these days.
I was referring to people with broken IPv6 setups, mostly Teredo and 6to4 (old Mac OS X versions), for whom IPv4 worked but IPv6 was not actually routable. At the time, publishing AAAA records would lock out all of those users.
He was referring to your lack of qualification in the above. An IPv4 user can indeed access Google, which is an IPv6 enabled site.
A user who only has IPv4 connectivity relies on their software recognizing this when accessing an IPv6 enabled site. Software's failure to do so is one of the main reasons for the slow rollout of IPv6.
No, I think that our future is isolated carrier-grade NAT, connecting to cloud services exclusively. And that makes me sad.
I had to disable ipv6 on my Linode, because Cloudflare, Google, and Facebook do not seem to trust any ipv6 traffic coming from Linode. Only using ipv4 has fixed everything.
And, as for ip4 blocks being obsolete, I still think it's wrong to sell them off. Universities were endowed with these. If they don't have a use for them, they should make them available for someone who does. Not just sell them to be used for some corporation's infrastructure.
I hate how so many people on hackernews seem to think that not selling these ipv4 addresses is a "waste". People are only looking at this in purely financial terms. I consider it to be wasteful to sell them to Microsoft or Amazon or whatever. No matter how much money they offer. Universities are supposed to use the gifts they are given to try to better society.
Aren't these ip addresses going to just be used for things like NAT traversal? Why doesn't MIT make them available for some non profit to offer that service as a public utility?
If a flight school has an operating runway and hangar, should they just make all their students use flight simulators and demolish their runway?
Should an agricultural school sell off their farmland to real estate developers? If their students really weren't benefiting from their property, then they should donate it to some organization that could carry on the spirit of what they were supposed to be doing with it.
If your university doesn't have a lot more demand for IP addresses then yeah -- it makes sense that they'd just hold on to their network space, continue to NAT their DHCP pools and keep doing what they're doing. That's not an indictment of IPv6, it just means that the work to deploy IPv6 doesn't provide any benefits to them. However, newcomers and networks that foresee expansion can't just "stay the course" -- there just isn't enough IPv4 for them -- and do need to bite the IPv6 bullet.
Deploying IPv6 does indeed require work in terms of management (all tools that assume the length/format of an IP address need to be updated, netadmins need to learn and deploy RA guard and friends) and your network silicon needs to do IPv6 in the fastpath -- but even with all these pain points, carrier-grade NAT'ing IPv4 is nevertheless more painful, expensive, and slow.
Forward-thinking network operators have been taking advantage of the LTE and DOCSIS 3 transitions to enable native IPv6 everywhere, and Facebook is moving to IPv6-only infrastructure -- and it's paid off! About 80% of network traffic (to Akamai) from T-Mobile and Verizon is IPv6 (from Comcast it's lower -- around 50%): https://www.akamai.com/uk/en/our-thinking/state-of-the-inter...
The future is IPv6 -- it's just not quite equally distributed yet.
Isn't selling them off to someone who will pay money for them pretty much the definition of making them available for someone who has a use for them?
It's not like you can't buy an IPv4 address if you want one because Amazon and Google own them all.
We just have this situation where a few universities, the US DoD, and a handful of corporations own a disproportionate number of IP addresses because they were early to the game. They all got cheap IPv4 addresses. So HP owns two /8 blocks while Google does not.
I'm not seeing a situation where donating millions of IPv4 addresses to someone is going to improve society relative to Amazon forking over a bunch of dollars to a university.
Universities are uniquely positioned to not be constrained by market forces. Part of the reason that universities get privileged access to things, is because they are seen as working in the public interest. Government and business and wealthy donors throw money at universities, because they want to see innovation and public good come from it.
Maybe that's idealistic of me, and universities are just a kind of business, and only that. MIT already gives the public free hosting, with an externally routable, unfirewalled ipv4 or ipv6 address, to anyone who comes on campus. I don't think any other universities in the world do that. And MIT does this at great expense. (this is how Aaron Swartz was able to download so much of JSTOR)
It makes no financial sense for MIT to be doing this. Perhaps they should lock down their network as much as possible, and probably outsource its management.
MIT's network policies are not purely about what makes financial sense. The Institute was one of the birthplaces of the internet, and they see the ongoing development of it as one of their core missions. MIT has a close relationship with W3C, which is located a stone's throw away. Their network is multi-homed to the max; I think they peer with ALL the backbone providers that operate in the US. The school has ludicrous amounts of internet bandwidth; I think it's now measured in terabits per second. After all, they have to be able to cope with thousands of students watching Netflix along with all the research!
And just in case there is a REAL internet collapse... they have an emergency info site at mit.NET that is hosted far away from the campus. That would let them get some information out even if Cambridge were wiped off the map and all the name servers for .edu were to fail.
ISPs don't care. They would rather everyone just use carrier-grade NAT.
In the meantime, you should feel bad every time you make this argument because you're helping to make the world worse and enabling everyone else to just kick the can down the road.
This seems to be a "feature" of owning your own modem rather than renting one of theirs. I hesitate to even try to get this fixed via their tech support.
I guess if I want IPv6 connectivity, I'm either going to have to get a tunnel from Hurricane Electric (SIXXS is shutting down), or carve out some IPv6 space from my Linode and tunnel it to my home network.
But if you want a subnet, you can make a second DHCPv6 request with prefix delegation metadata, and the server will establish a route for a full /64 subnet to your existing IP and return it to you. Then you can hand these out however you want (though they won't give you bigger than a /64 so you can do internal subnet routing).
The only annoyance is that none of the existing Linux distros support this yet as part of their network client integration, so you have to script it yourself around dhclient (see the -P argument), which is a little hairy if you want to get the hooks right (I punted and just did it once manually and left dhclient running).
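If you're scripting this by hand, the subnet math is at least easy to sanity-check with Python's standard `ipaddress` module. A sketch using the 2001:db8::/32 documentation space as a stand-in for whatever /60 your ISP actually delegates:

```python
import ipaddress

# Stand-in for an ISP-delegated prefix (2001:db8::/32 is documentation space)
delegated = ipaddress.IPv6Network("2001:db8:0:10::/60")

# A /60 delegation carves into 2^(64-60) = 16 routable /64 subnets
subnets = list(delegated.subnets(new_prefix=64))
print(len(subnets))    # 16
print(subnets[0])      # 2001:db8:0:10::/64
print(subnets[-1])     # 2001:db8:0:1f::/64
```

Each of those /64s can then be handed to a different internal network segment.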
Getting it to work with the ER-POE was non-intuitive but not as bad as I was expecting. I'm even getting a /60...
Most people will read this, I think, and conclude that a Linux client won't accept a v6 address from a router that has obtained its delegation via DHCP-PD.
I think OpenWRT has all you need for this. At least I think that's exactly what I did with my ISP config on my own router just with the LuCi web interface.
From what I remember hearing, Comcast was giving /60s thru PD.
The /128 sounds perfectly normal, and is used as a "point to point" link between the ISP and your modem. Over this, DHCP prefix delegation typically gives you a whole second subnet of /60 or so. If your modem isn't handling DHCP-PD, then all you see is 1 measly address even though they've likely allocated and pushed 295,147,905,179,352,825,856 addresses to you.
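That giant number checks out: a /60 leaves 128 - 60 = 68 host bits. A quick check with Python's `ipaddress` module (using a documentation prefix as a stand-in):

```python
import ipaddress

# A /60 delegation leaves 128 - 60 = 68 host bits
delegated = ipaddress.IPv6Network("2001:db8::/60")
print(delegated.num_addresses)   # 295147905179352825856
print(2 ** 68)                   # same number
```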
I'm now using a Ubiquiti EdgeRouter POE, and maybe I should give it another shot. I'm not convinced I'll actually get a prefix from Comcast, though.
I don't have any contacts myself, I'm just a customer and interested observer.
I tunnel all traffic through Comcast via a VPN, though, because I consider them a hostile network, so I don't especially care.
I regularly ssh straight into my suspended WoL Macbook over v6, and never went out of my way to set up a single bit of it. A half dozen new technologies working perfectly out of the box!
I'm planning on doing exactly this using openvpn as my transport (because I already have an extensive ipv4 openvpn system). Has anyone tried it? I'm not interested so much in how to do it, which I think I know, but in anecdotes along the lines of "it makes openvpn crash" or "linode told me I was being abusive and cut me off" or whatever.
I've had ipv6 for decades I guess. I even got one of the Hurricane Electric "sage" tee shirts last decade. But when sixxs goes down I lose ipv6 connectivity for the first time since the DSL days at the turn of the century.
CloudFlare etc certainly don't do this with SixXS IPv6 addresses, or any traditional residential provider I'm aware of.
* T-mobile: ipv4
* Office: ipv4
* Verizon: ipv4
* University: ipv4
* Linode: ipv6 (but flagged as hostile)
I'm not being backward about ipv4, I'm really not. But we're in denial about the fact that we still live in an ipv4 world. And I don't like seeing the last vestiges of non-profit ipv4 address space being liquidated.
Granted, I have business class so that I don't violate their ToS by hosting stuff and to get a dedicated IP.
This caused me to switch ISPs, and now I have a static /48. That's much nicer.
If I tried streaming a conference on youtube, facebook, or periscope, my stream could be muted or stopped because someone plays copyrighted music or something.
They are... indirectly. MIT can do far more to assist students and researchers using the money they raise by selling off IP addresses than they can by holding on to the addresses in case 16 million students all decide that they want to host their own podcasts.
Yes, universities should dole out as many IP addresses as students want. Where else will students get an opportunity to build something with that infrastructure?
Maybe the university could use their infrastructure to create a non-profit ISP that serves some underutilized market.
If I were endowing land to a university, I would want them to use it, or make it available for the public good. Not just sell it off.
Mind you, a /8 block might sell for maybe $3-$7 per address, or $50M on the conservative side.
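The arithmetic behind that estimate, for what it's worth (the $3-$7 per-address range is this comment's assumption, not a market quote):

```python
block = 2 ** 24            # addresses in a /8: 16,777,216
low, high = 3, 7           # assumed dollars per address
print(block)               # 16777216
print(block * low)         # 50331648  -> ~$50M on the low end
print(block * high)        # 117440512 -> ~$117M on the high end
```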
How about providing services to the general public in some way, then? Transfer them to a non-profit trust that will allocate them to anyone who would do something important with them.
If a university had extra land, I would argue that they provide it to someone who could give it back to the public in some way.
It is because universities are supposed to do this in the first place, that they are endowed with things.
All I'm hearing in response is the unjustified assertion that the IPv4 addresses are more useful somehow. I'd like to hear why the IPv4 addresses are so useful, and why you need 17M of them, and how they'll improve society more than $50M of research or education would.
If IPv4 addresses are useful to Microsoft or Amazon, then perhaps they can be useful to the general public, in some way. Indeed, MIT already provides IPv4 hosting to anyone who physically shows up at their campus. Maybe MIT could grant their IP addresses to a non-profit that would find a way to improve society.
It's debatable what universities should be doing. I've talked about this with several people who work in grant writing and administration at well-respected universities. What I have been told is that it is generally the job of universities to educate certain people, with the end goal of improving society and the world. That is what justifies universities hand-picking who they educate, i.e. affirmative action. Picking out students from poor and marginalized communities and targeting them for education would arguably do more good for society as a whole. Universities played a part in ending slavery by demonstrating that black people could indeed compete with white people intellectually. This mission is explicitly laid out in the charters of many universities.
But, I'm aware of how indirect that is. Not everyone will agree. And certainly not all, if not most universities are like this. I think a lot of universities are just educational and research facilities.
It sounds like the argument is that IPv4 addresses are useful to ISPs and cloud providers, therefore they must be useful to MIT. Somehow. The question of why IPv4 addresses would be useful to MIT is completely dodged.
I am also reading the comment about what universities should be doing and it sounds contradictory. You say the job of a university is to "educate certain people", but then complain that some universities are "just educational and research facilities". It feels like complaining that a bakery is "merely a provider of baked goods".
Education and research are part of the university's mission. Making the world a better place isn't the mission, it's the desired outcome of education and research. So giving grants to students so they can attend classes furthers the university's mission, and giving grants to faculty so they can do research furthers the university's mission, but providing IPv4 addresses to some vague hypothetical project is not part of MIT's mission.
Manchester Uni (circa ~2001) as part of JANET (https://en.wikipedia.org/wiki/JANET) had unmetered, unfiltered 100Mbit ethernet ports straight into all the dorm rooms, each with a dynamic public IP on the end. A publicly routable 100MBit line back then was a big deal (the fastest connection I could get at my parents was a bonded ISDN line).
It was thanks to that that I first started playing around with linux (RedHat 5 or 6 back then iirc). I remember assembling a server box with cheap PC parts, hosting an FTP (...posting the Dyn domain for it to alt.2600 and later getting in trouble after the uni received a letter from Universal :), running audiogalaxy satellite 24/7, setting up an IRC server for me and my friends to co-ordinate Quake 2 games (or counter strike - can't remember), trying (and failing) to do my own email with qmail and all sorts of other fun stuff I've forgotten. Good times.
MIT student body size: ~11,200
Unique IPs on this block: ~16.7 MILLION.
That's ~1500 unique IPs per student.
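Those figures are easy to reproduce (the ~11,200 headcount is the parent comment's number):

```python
mit_students = 11_200        # approximate student body, from the parent comment
block = 2 ** 24              # a /8 block
per_student = block // mit_students
print(block)         # 16777216
print(per_student)   # 1497
```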
>Students ran servers in their dorms. Clubs ran servers. Some professors ran servers.
Well, they're still keeping a whole boatload and IPv6 is here, so you're lamenting a scenario that isn't actually happening. MIT students will not have a shortage of IPs for such activities.
Sorry, but we're way past the point of any rational argument for reserving 16 million ipv4 IPs for such a tiny student body. IPv4 hoarding is counter-productive. It keeps organizations from moving to v6 in a timely fashion and over-empowers organizations that moved quickly during the IP goldrush way back when. It's not merit-based and it's just dirty politics as far as I'm concerned. We have v6 now, and setting up v6-to-v4 gateways is trivial for v4-only destinations. If anything, if you love hacking around and setting up boxen, then you should be thrilled at what v6 is offering and the amount of addresses you can trivially get, not the opposite. It's legitimately democratizing due to the lack of scarcity, while v4 allocation is nothing but representative of the dirty politics and dirty economics of scarcity.
I doubt Vint Cerf is crying in his cornflakes about MIT giving up v4 addresses, and I would assume he's thrilled to hear about v6 getting more traction:
Some researchers wanted a 128-bit space for the binary address, Cerf (recalled) ... But others said, "That's crazy," because it's far larger than necessary, and they suggested a much smaller space. Cerf finally settled on a 32-bit space that was incorporated into IPv4 and provided a respectable 4.3 billion separate addresses.
"It's enough to do an experiment," he said. "The problem is the experiment never ended."
MIT is only giving up half of this mother lode, so each student is merely allocated ~700. MIT also claims 14M of those IPs have never been used in internet history.
Good. I didn't realize how many ip addresses they have. I think that universities that are endowed with finite resources like land or ip addresses have a responsibility to make them available to students and fellows, and by extension, to improve society.
Selling everything to the highest bidder isn't something I want to see done.
Like most large institutions, MIT doesn't give public IP addresses to most client devices (desktops, laptops, phones, and tablets); they get DHCP addresses in the 10.x.x.x range behind a NAT because it's a bit easier to firewall them that way. But real IP addresses are available if you need them to run services.
There were so many cool projects that their network facilitated. I went to this exhibition, after the network "upgrade", and spoke to a student whose wearable device sends its sensor readings over HTTP to a node.js server hosted on a VPS. He was having latency issues.
Our web server is on a subdomain of the .edu (lug.udel.edu) and is totally accessible from the internet. It even listens for incoming connections. We run a mail server, a mirror, and an IRC server on there. It's great because even alumni can connect from wherever. I think there is a central firewall for every connection coming in, but I haven't run into problems for my uses.
In fact, every system registered for an IP on the UD intranet is assigned a public, world-accessible IP on the 128.175.0.0/16 subnet and can listen for incoming connections. Even wireless devices.
Now, I've got 5 static IPs from Verizon FiOS on some ancient grandfathered plan. Years ago, they "changed" the addresses, prompting me to ask, "what part of 'static' is not clear?"
I spent much of the day troubleshooting with their support before I figured out to ask them to tell me what five static IPs they had listed for us in their system. It turns out they had changed with no notification, and even their internal people didn't know. I spent more time trying to get our old IP addresses back, as some of our vendors had IP restrictions and some of our partners accessed one of our internal servers. That turned out to "not be possible", so I spent Monday filling out forms to update to the new IPs everywhere else.
I think I still have one of their senior VPs' cell phone numbers in my phone from the late calls that night, including a couple while I was having a party at my house.
Having an ipv6 prefix is useless to me (as someone that is behind the abomination called CGNAT / 'Dual Stack Lite').
T-Mobile LTE has been exclusively IPv6 on most phones for some time now. I'm pretty sure other mobile providers have done the same or are in the process of doing so.
All my Android devices have supported IPv6 as long as I've had it running on my network. I don't have an iOS device but since T-Mobile sells them I assume they work with IPv6.
My iphone works perfectly with T-Mobile's ipv6 network -- it gets a v6 address of its own and also anyone who tethers to it also gets v6 address.
How hard have you looked? My iPhone has native IPv6 on Verizon.
I'm from Germany. We have three big players here: Vodafone (my corporate mobile), Deutsche Telekom, EPlus (I think that is owned by/known as Orange elsewhere?).
Vodafone: No ipv6 address
Telekom: Can't test
EPlus: No ipv6 address
Google's ipv6 test (I checked with 'ip addr' before and don't get any ipv6 address other than the link-local one): Left SIM is Vodafone, right one is a reseller for EPlus.
In other words: Maybe, maybe ONE of the three mobile networks hands out ipv6 in Germany (I doubt it though?). For me and everyone around me, ipv6 services are unreachable from any phone and accessing ipv6 services from random machines (friend's house, hotel network) is highly unlikely. Like .. I'd give it a 10% chance to work and my home network is ipv6 only for years now, so it's not like I haven't tried..
tl;dr MIT is selling off half of 18.0.0.0/8 (8 million IPs)
I guess that's good for them, but it's still a dick behavior.
Off-offtopic: The cable modem broke down once. We called the guy, he picked it up 20 minutes later, soldered the broken capacitor and we had our internet back in 2 hours. It was on a Saturday.
He didn't have to do any of that, he had monopoly on broadband on the entire area. Small ISPs are the best.
I'm guessing part of it is because they only preside over a small area that there's a human element to it, they must feel responsible for the internet of these people, and they're close enough to them (in the hierarchy) that that actually matters.
edit: Looks like a lot more than just 18.145/16, based on https://whois.arin.net/rest/org/AT-88-Z/nets
So they don't have 18.0.0.0/8, but they do have 18.0.0.0/9?
Could someone explain this? I feel like I'm missing something (not the 2^16 addresses, the "#unicode code points")
edit: From wikipedia:
"Unicode comprises 1,114,112 code points in the range 0x0 to 0x10FFFF. The Unicode code space is divided into seventeen planes (the basic multilingual plane, and 16 supplementary planes), each with 65,536 (= 2^16) code points."
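So the comparison is: a /16 holds 2^16 = 65,536 addresses, exactly one Unicode plane's worth of code points. A quick check (the 198.18.0.0/16 benchmarking range is just a stand-in example):

```python
import ipaddress

slash16 = ipaddress.IPv4Network("198.18.0.0/16")  # stand-in /16
print(slash16.num_addresses)     # 65536
print(0x10FFFF + 1)              # 1114112 code points in Unicode
print((0x10FFFF + 1) // 65536)   # 17 planes
```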
The worst part about getting 16/8 from Compaq was that it wasn't contiguous address space, so you couldn't use a single netmask for both. :(
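A sketch of the alignment problem (assuming the other block was HP's pre-existing 15/8, per the mention elsewhere in the thread that HP holds two /8s): a /7 must start on an even first octet, so 15/8 and 16/8, though numerically adjacent, can never be summarized by one prefix. Python's `ipaddress` module shows the two blocks refusing to collapse:

```python
import ipaddress

a = ipaddress.IPv4Network("15.0.0.0/8")  # assumed pre-existing HP block
b = ipaddress.IPv4Network("16.0.0.0/8")  # the block from Compaq/DEC

# Their /7 supernets differ (14.0.0.0/7 vs 16.0.0.0/7), so no single
# netmask covers both; collapse_addresses leaves them separate:
merged = list(ipaddress.collapse_addresses([a, b]))
print(merged)  # [IPv4Network('15.0.0.0/8'), IPv4Network('16.0.0.0/8')]
```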
Stanford had a /8 like MIT but they sold that off [edit: or gave it back] a while back.
Back when the internet was still young, you would go to ARIN (the American Registry for Internet Numbers) and say "I think I need 256 special numbers" and they would give you a block of 256 of them. Something like 192.0.2.0 - 192.0.2.255. If you asked for about 65,000 special numbers they might give you 198.18.0.0 through 198.18.255.255.
MIT (the Massachusetts Institute of Technology in Cambridge, MA, USA) was one of the first people to ask for special numbers and so asked for over 16,000,000 of them (this was 1/256 of all of them) and so MIT was given 18.0.0.0 - 18.255.255.255.
Now today, all of the special numbers have been given to people and companies. This makes each special number extra special and large groups of special numbers very extra special. Since MIT had 16,000,000 of the special numbers all next to each other, they decided to sell half of their special numbers for a lot of money.
Edit: Simple English: https://xkcd.com/simplewriter/
Before CIDR you couldn't route or advertise rando netmasks like a /9 or a /20. There were no steps between a /16 and a /8 so if you convinced (someone, forget who) that you "need" more than 65K ip addresses (and MIT is big enough to need more, theoretically) then you got a /8.
A lot of noobs think ipv4 rolled out with CIDR; not so. For "a long time" people had to make do with classful routing.
From memory, isn't RIP v1 only classful routing compatible? Been awhile since I used something that old.
I don't think they specifically asked for 16 million of them. Back at the time, that was the unit in which they were doled out. https://en.wikipedia.org/wiki/Classful_network#Background:
"Originally, a 32-bit IPv4 address was logically subdivided into the network number field, the most significant 8 bits of an address, which specified the particular network a host was attached to, and the local address, also called rest field (the rest of the address), which uniquely identifies a host connected to that network. This format was sufficient at a time when only a few large networks existed, such as the ARPANET, which was assigned the network number 10, and before the wide proliferation of local area networks (LANs). As a consequence of this architecture, the address space supported only a low number (254) of independent networks, and it became clear very early on that this would not be enough."
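Under those pre-CIDR rules, the block size followed mechanically from the first octet. A small sketch of the classful lookup:

```python
def classful_prefix(first_octet: int) -> int:
    """Implied prefix length under pre-CIDR classful addressing."""
    if first_octet < 128:
        return 8    # class A: 2**24 addresses
    if first_octet < 192:
        return 16   # class B: 2**16 addresses
    if first_octet < 224:
        return 24   # class C: 2**8 addresses
    raise ValueError("class D/E: multicast or reserved")

print(classful_prefix(18))    # 8 -- MIT's net 18 came as a full class A
print(classful_prefix(128))   # 16
print(classful_prefix(203))   # 24
```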
I think it's also more like the special big blocks of special numbers were given away without thinking much when the internet was young.
Just wait until every cell in your body has its own IP.
A human body has roughly 3.7x10^13 cells; let's be conservative and round that up to 4x10^13. There are currently 7.49 billion people in the world. Let's be conservative again and round that up to 8 billion.
Multiplying these numbers gives us 3.2x10^23 cells. The IPv6 Global Unicast address space (2000::/3) is 125 bits wide, for 2^125 = 4.3x10^37 combinations. This gives us 1.33x10^14 addresses per cell.
Of course, thinking about IPv6 in terms of number of addresses is wrong, because you should be thinking in terms of number of networks (/64s).
That gives us 2^(125-64) = 2.3x10^18 networks, or 0.000007 networks per cell, which still works out to about 288 million networks per person.
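Re-running the arithmetic (the ~4x10^13 cells-per-person figure is the rounded input the multiplication implies):

```python
people = 8e9              # population, rounded up
cells_per_person = 4e13   # ~3.7e13 human cells, rounded up (assumed input)
cells = people * cells_per_person   # 3.2e23 cells total

unicast_bits = 125                  # 2000::/3 leaves 125 free bits
addrs_per_cell = 2 ** unicast_bits / cells
nets_per_person = 2 ** (unicast_bits - 64) / people

print(f"{cells:.2g}")            # 3.2e+23
print(f"{addrs_per_cell:.3g}")   # 1.33e+14
print(f"{nets_per_person:.3g}")  # 2.88e+08
```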
I think we're okay.
EDIT: Fixed formatting.