Hacker News
Google Fiber is live in Kansas City, real-world speeds at 700 Mbps (arstechnica.com)
282 points by zoowar on Nov 14, 2012 | 146 comments



I think one incredible result of this is that it may raise expectations in the mind of the common consumer. This is very important for the coming data limit war in mobile and landline data caps, a war that consumers are already losing.

(I want to be clear: Speculation ahead.)

Mobile phone providers and internet providers (in America) seem to be setting data caps artificially low and saying that it covers the most common use cases (98% of customers) so it shouldn't be something we have to worry about.

Except that I imagine these very low caps are a long play to justify price increases when using 4 or 8 or 20+ GB per month on your phone becomes the norm, or 600GB/month on your desktop becomes the norm.

Hopefully Google (and other companies) can show consumers in America (and perhaps elsewhere) the possibilities of large amounts of data over the wire TODAY, so that it's not a slow climb to higher caps but a more abrupt shift, one that leaves far more complaints at Comcast's and Verizon's door.

They won't respond to power-user consumers; they don't care. One only hopes they'll respond to mass consumers using their competition, even if it is merely tangential competition for the most part.


I'm reminded of Fake Steve Jobs' dig at AT&T's short-term thinking:

http://www.fakesteve.net/2009/12/a-not-so-brief-chat-with-ra...

"Now there was a lot of demand for [the "Meet the Beatles"] record — so much that the plant that printed the records could not keep up. Now here’s the lesson. Do you think the guys who were running Capitol Records said, Gee whiz, the kids are buying up this record at such a crazy pace that our printing plant can’t keep up — we’d better find a way to slow things down. Maybe we can create an incentive that would discourage people from buying the record. Do you think they said that? No, they did not. What they did was, they went out and found another printing plant. And another one and another one, until they could make as many records as people wanted.

[AT&T CEO] Randall is like, Okay, I get your point. I’m like, You know what, I don’t think you do, because if you did, we wouldn’t be sitting here having this conversation, would we?"


I don't think the analogy works, because you're already an AT&T customer. AT&T building out capacity doesn't necessarily lead to more sales; the price/GB drops and they don't necessarily make more money, whereas building more printing plants does lead to more sales. Unless of course their offering gets better than Verizon's, and people start switching.


A better analogy is airplanes and air travel. We have the technology to build supersonic passenger jets, but the economics don't work: leisure travelers are unable or unwilling to pay more for their ticket to get to their destination twice as fast. Their time isn't valuable enough to justify the added expense. Yet business executives are willing to pay for supersonic business jets because their time is that valuable. Supersonic private jets are already under development.

Consumers would love faster broadband, but they can't afford to pay more for it. Businesses can, and they have a variety of options available.


Isn't Google Fiber showing us that the cost isn't as extreme as the providers would have us believe?


Google Fiber is a disruption play in a limited market. While not a loss leader, this type of roll out isn't cheap, especially for existing plants.

That being said, I'm jealous, so very jealous of Kansas right now.


Is Google's roll-out significantly more expensive than any other type of facilities-based build-out?

In fact Google's product is rational: If you are going to bother to build out in a competitor's territory, you want to get all of your competitor's customers to switch, so you need a disruptive product.


My understanding is that the new build out is cheaper than traditional HFC builds. The difference here is incumbent providers would need to abandon large swaths of their infrastructure to compete with a disruption play in Kansas.

Google is spot on to roll out FTTH, but existing plants aren't going to disappear any time soon, and likely will not react to the disruption play at large.


Yes. That's the point. Google has already stated that their fiber program is NOT a loss leader, as many have speculated.


Actually, the Concorde was operated profitably for several years, mostly through a dramatic increase in ticket prices which their customers did not really care about.

Market research had revealed that many customers thought Concorde was more expensive than it actually was; thus ticket prices were progressively raised to match these perceptions.[36] It is reported that British Airways then ran Concorde at a profit, unlike their French counterpart.[127][128] http://en.wikipedia.org/wiki/Concorde

As to network capacity, it's actually fairly cheap; prices are based around perceived value, not cost. Just look at text messaging plans. That, and cable competing with Hulu/Netflix.


> A better analogy is airplanes and air travel.

An even better analogy is that the companies that build airplanes also build cars. They have a decent margin on cars but a much lower margin on airplanes. So, even if they could make planes that were larger and faster, they profit more from keeping people driving their cars.


I see what you mean, that AT&T's income does not scale with customer usage, though AT&T did lose (existing and potential) customers due to their network problems.


Could it have been that previously the fundamental value of data limits was low because of low smartphone market penetration, but now that all consumers have smartphones, demand for bandwidth has skyrocketed, and before telecommunication companies can catch up with infrastructure upgrades, the value of existing bandwidth has gone up, and therefore so has the price?


>Except that I imagine these very low caps are a long play to justify price increases when using 4 or 8 or 20+ GB per month on your phone becomes the norm, or 600GB/month on your desktop becomes the norm.

I work for a mobile phone provider in the US. I'm at the bottom of the totem pole, so I'm not really privy to what the people at the top are thinking (and don't think what I'm saying is in any way official), but I don't think this is true. We really do have bandwidth constraints, and it takes a small fortune plus a couple years to slog through the process of putting up a new cell site.

The reality is if we don't have caps some percentage of our customers is going bittorrent twenty HD movies at the same time, completely swamping that leg of our network and causing everyone else's cat videos to start buffering. If we double our capacity they'll just double the number of movies they download at the same time. We can't win by adding capacity.

The part that pisses me off about all this is everyone I know who compulsively downloads a bunch of movies hardly ever actually watches them. It's some kind of pack rat syndrome.


That doesn't quite make sense.

Mr Torrent has no real-time requirement. So, shape his traffic when other customers are streaming.

At 3am, when nobody is using your network, torrenting HD movies is no problem.
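That kind of shaping is commonly built on a token bucket per traffic class. A rough sketch of the idea in Python (the rates and class names are made up for illustration; real gear does this in the data plane):

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: tokens refill at `rate` bytes/sec,
    and a packet passes only while enough tokens remain."""
    def __init__(self, rate, capacity):
        self.rate = rate          # refill rate, bytes per second
        self.capacity = capacity  # max burst size, bytes
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, nbytes):
        now = time.monotonic()
        # refill proportionally to elapsed time, capped at the burst size
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True
        return False  # over the limit: queue or drop, so the sender backs off

# Bulk (torrent) traffic gets a small bucket; interactive traffic a big one.
bulk = TokenBucket(rate=125_000, capacity=64_000)               # ~1 Mbps
interactive = TokenBucket(rate=12_500_000, capacity=1_000_000)  # ~100 Mbps
```

At 3am the bulk class could simply be handed a bigger rate, which is exactly the off-peak scheme suggested above.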


I don't know if we can shape the traffic on the RF side.


You can shape packet flows at the GGSN. GGSNs are edge routers and that's where features like traffic shaping are usually available.


Companies don't need excuses to justify price increases; it's a fair approximation to say that they just always set their prices at whatever level maximizes their revenue. Now, they might be looking at data caps as a method of price discrimination, but that's something else.


This is the only reason Google is able to take advantage of this situation, and the disruption will be massive. Being this provider may be the highest-synergy position Google could occupy.


What's interesting is that increases in bandwidth lead to corresponding increases in innovation.

Facebook and other Web 2.0 companies could not exist without DSL/cable/3G/4G connectivity - photos would take too long to load, for example.

Beyond increases in resolution and fidelity in multimedia, I wonder what would be truly innovative with this sort of speed. Any ideas?

For example, it would be possible to do sub-minute DNA analysis in the cloud by quickly reading a tissue sample.


Think about what a Chromebook would be capable of with always-on 700Mbps.

They could sell that thing with practically nothing inside -- no hard drive, barely any RAM, no processor to speak of, just a screen and a keyboard -- and it could outperform the best desktops that money can buy, just by offloading computation to Google's massive data centers in real time.

Of course, in the short term latency will still be an issue. So more realistic would be to just get rid of the hard drive (why bother? It's practically the same delay to access the nearest data center's SSD as your own), but keep the RAM and GPU, and enough of the CPU to keep things running smoothly. But I definitely expect to see computation offloaded to the cloud as Internet speeds ramp up. It would be super cool to be writing a Python script to test something out and just type in a call to MapReduce or whatever on gigs of data, and have it Just Work in the cloud in real time.


And then something like Plan 9 could finally be for the masses. Just less elegantly implemented by Google than what Bell Labs did back in the 80s.

Come to think of it; perhaps Rob Pike and Ken Thompson are already working on this in Google's secret lab? Maybe using and improving more of their old design than anybody will realize. As PG discovered in 1995 (see Beating the Averages), when your product runs on your servers, you can use whatever technology works best to implement it.


I so wish they were hacking on a proof of concept right now.


Re: your Python comment, we already have this ;) http://www.picloud.com/ - I don't usually write comments like this, but I used it about a month ago and was thoroughly impressed (no affiliation). Great for stuff where you just want to run a bunch of Python in parallel and not worry about wrangling instances or sysadmin stuff.

When I tried it, jobs up to 40 of their virtual cores (they load share across EC2s) would execute immediately, while more than that triggered job queuing.
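The shape of that API is essentially a remote `map`. As a minimal local stand-in (a thread pool here plays the role of their EC2 workers; the function is just an example):

```python
from concurrent.futures import ThreadPoolExecutor

def square(x):
    return x * x

# picloud-style usage: hand a function and a dataset to a pool of
# workers and collect results, without touching any instances yourself
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(square, range(10)))

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

The difference with a real service is only where the workers live, which is exactly the point of the parent's "Just Work in the cloud" wish.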


I fear latency is always going to be the issue that stops this working across arbitrary networks, with the amount of buffering that goes on. How many hops is Google Fiber to the nearest cloud services provider?


I would imagine very few, everyone wants to peer with Google.


I imagine some form of predictive prefetching combined with intelligent buffering could minimize the effects of latency for many tasks.


You're still bound by the laws of physics. Nothing real time is ever going to get to you faster than a piece of fiber can carry it.


Latency to access a SSD is calculated in nanoseconds, whereas latency across networks is typically calculated in terms of milliseconds. Order of magnitude difference here.

That being said, to an end user, the difference between 100 nanoseconds and 100 milliseconds is - probably - very small.


The difference is small for a single file. Then they try to start an app that loads 1000 files on startup (say, a game or Rails...), and those milliseconds turn into seconds, while the nanoseconds turn into still mostly imperceptible microseconds.
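Back-of-the-envelope, with assumed figures of ~100 µs per local SSD access and ~10 ms per network round trip:

```python
files = 1000             # app opens 1000 small files at startup
ssd_access_s = 100e-6    # assumed ~100 microseconds per local SSD read
net_rtt_s = 10e-3        # assumed ~10 ms round trip to the datacenter

local_total = files * ssd_access_s   # 0.1 s: imperceptible
remote_total = files * net_rtt_s     # 10 s: painful, if the reads are serial
print(local_total, remote_total)
```

The gap only closes if the app batches or pipelines its reads instead of issuing them one at a time.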


A nanosecond is 1 billionth of a second. A millisecond is 1 thousandth of a second.

1/1000000000 vs. 1/1000

Six orders of magnitude.

Edit: your point may hold true as I'm not sure SSD access or seek times are in the 10ns range.


Hmm, I thought it was more like 10s to 100s of microseconds of latency for a SSD.

I've seen estimates that Google fiber latency (when plugged in, to the nearest Google datacenter) could be anywhere from microseconds to 10s of milliseconds, but I don't have any reliable sources and I don't have any expertise here myself.

I'm hoping that someone who does know might chime in?


> why bother? It's practically the same delay to access the nearest data center's SSD as your own

Except it's not. And 700Mbps is still only about 1/3 of the throughput of current cheap-ish laptop SSDs, and an order of magnitude less than higher-end server-grade SSD setups...

We've got quite a way to go yet, and for many types of applications, latency will remain a reason for local storage "forever" for the simple reason of speed of light.
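The throughput gap is easy to check; the SSD figures below are rough 2012-era assumptions chosen to match the ratios above:

```python
link_Mbps = 700
link_MBps = link_Mbps / 8      # 87.5 MB/s over Google Fiber
laptop_ssd_MBps = 250          # assumed cheap-ish laptop SSD, sequential read
server_ssd_MBps = 1000         # assumed high-end server SSD setup

print(laptop_ssd_MBps / link_MBps)  # ~2.9x: the link is about 1/3 of a laptop SSD
print(server_ssd_MBps / link_MBps)  # ~11x: the "order of magnitude" above
```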


You can already do everything you've described (assuming you're in Kansas).


Right, but not on $50 worth of hardware. I guess what I'm picturing is companies like Google selling super cheap keyboard+screen combos along with a "cloud" plan, where you get access to so many terabytes of memory and so many cores 24/7 with perhaps leeway to use more cores as desired (but you pay for what you get).


If you want to compute a lot, the cloud is always more expensive than your own hardware.

Cloud only pays off if you're doing something major short-term (your load is spikes and then drops off for a while), or you're just plain lazy.


Always 'on' mobile is a real game changer, outside of bandwidth increases.

To do DNA analysis on the spot like that would require some more improvements in sequencing technology before you could call the upload to a service the bottleneck.


That is a great suggestion about the DNA analysis stuff. I think with bandwidth like this, alongside the huge jumps that hardware is making, something like that will be possible in the future.

This is actually part of my YC application: to bring advancements in technology (i.e., bandwidth, interwebnets) to scientific data. With core facilities running huge numbers of samples, and doing it in a consistent manner that is shared in real time, we could really accelerate scientific advancement.


For a transition process, until the rest of the world catches up, Google could pay[0] users for allowing some part of their hard drives to be used as a cache for YouTube, cloud services, or just some of the more popular websites with throughput problems. With such bandwidth, the users could become the cloud.

[0] - or lower the pricing, or something.


Video conferencing would probably be a lot more fun in ubiquitous HD+ quality.


Does anyone know why video conferencing is so poor now? Even on 30mbit/10mbit connections on both ends, we still get drops and lags a few times per 20 min call on both skype and google voice. It seems like latency is the real issue, though larger uplinks on your average person's connection will surely help some too.


When you stream a movie in HD, it buffers up about 30 seconds worth of data, so if the connection slows down, you won't notice.

When doing a live video conference, you can only have about a 1 to 2 second buffer before the connection becomes unusably laggy.


1-2 seconds? Hardly - Humans start noticing at about a tenth of a second in lag, and it's unusable at beyond a half second. When you're videoconferencing, all of your latency is eaten by the speed of light - You've got no time for buffering, you've barely got time for pre- and post-processing.
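Speed-of-light arithmetic bears this out (the distance is an assumed coast-to-coast figure; light in fiber travels at roughly 2/3 of c):

```python
fiber_km_per_s = 200_000   # light in fiber, ~2/3 the vacuum speed of light
distance_km = 4000         # assumed: roughly US coast to coast

one_way_ms = distance_km / fiber_km_per_s * 1000
rtt_ms = 2 * one_way_ms
print(rtt_ms)  # ~40 ms round trip, before any encoding, queueing, or routing
```

That alone eats most of the ~100 ms budget before lag becomes noticeable, which is why a buffer of even one second is out of the question for live conversation.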


>"We just got it today and I’ve been stuck in front of my laptop for the last few hours," Mike Demarais, founder of Threedee, told Ars. "It’s unbelievable. I’m probably not going to leave the house."

Is it just me, or does this read incredibly similarly to an Onion article?


Perhaps you've been reading too much Onion. The Onion's style is a deadpan delivery imitating the predominant formatting and cadence of the popular press (e.g., USA Today).


The reporter didn't mention that I have been sitting in this house without internet for over a week. I need to catch up on a lot of work.


Mike,

Can you comment on Ben Barreth? I read the article but I don't understand how this could be sustainable. Mortgaging your 401K in the hopes of getting donations on a charity project seems like a no-win situation. And I'm not speaking strictly from a financial perspective. Can you give us some more perspective on this? Or if Ben is out there, maybe he can comment. Thanks!


I'll get Ben on here tomorrow, but he has a few ideas to make this sustainable. The housing market here is really affordable and there are already some companies interested in sponsoring the house, so it's not as crazy as it sounds. I believe his current focus is finding investors to buy other homes in the area who are down for the cause.


I'm open to any ideas. So far:

1) Rent one of the rooms on Airbnb? Need 16+ days a month rented to break even. Maybe push Fiber tourism?

2) Get corporate sponsorship. Ask 10 small-to-mid businesses to sponsor $100/month for the house. Stick their logos all over the walls and the website.

3) When Google comes out with the business plan, install a new fiber drop and host a mini data center in the home for non-production clients (I imagine I'd have to compete with a bunch of ppl in KC for this service, once Google opens those doors).


Oh, it wasn't a personal comment. It was the mixture of writing style and slightly less-so, the subject. It specifically reminded me of this article:

http://www.theonion.com/articles/last-male-heir-to-bloodline...


Reminds me of when I got cable as a teenager. I was really, really psyched; it's hard to imagine now. I literally went out, got myself a bean bag chair, and planted myself in front of the TV for an entire weekend to marvel at having ~40 channels to watch.


It's probably not quite as insane as it sounds until the rest of the internet catches up. I had 650 Mbps at Georgia Tech and realized most websites can deliver nowhere near that speed.


One particular action won't demand that kind of speed, that's for sure. However, think of scenarios where there are multiple users down- and uploading stuff. For example, you and your family are watching TV via an IPTV provider (multiple TV sets in the house, each streaming HD video); at the same time you might have a BitTorrent server somewhere in the basement and numerous mobile devices (tablets, phones). The demand can break 50Mbps for one technically advanced family.

Now, if you think of businesses, then the demand for data throughput might be 10x greater or more. Depends on the type of the business and number of users (users also being servers).


Yeah, my brief experience with a gigabit connection in school was pretty underwhelming. I could break 50 MB/sec down with BitTorrent, but a lot of websites couldn't even hit 1 MB/sec, and even CDNs rarely broke 10 MB/sec.


BitTorrent should be utilized more in building the next-generation high-speed Interweb. Deliver gigs of multimedia in microseconds.


Or they limit the bandwidth per IP because there is really no need for that kind of insane bandwidth.


We'll find a need, have no worries about that. It's human nature, we always find ways to use 110% of the resources available.


Something tells me gatech and google have different links to the internet....


Universities and research institutes are also connected to private peering links such as ESNet[1] or Internet2[2]. This is actually a huge benefit because these links are typically uncongested and you can get much more reliable transmission on them (and hence consistent latency and faster transfer speeds). Other countries/continents also have their own private academic networks that peer with the US ones.

I think I read somewhere that they are upgrading the ESNet backbone to be 100Gbit/s and GEANT's[3] to be 1000 Gbit/s, so I wouldn't immediately write off University/academic networking as automatically inferior to Google's.

Honestly it's hard to tell without knowing any of the details of how much capacity Google is provisioning in Kansas City (and how much contention there is). How much backbone fiber is laid to KS/MO - it doesn't seem to be where existing big-scale connectivity is (unlike say, DC Metro area where a ton of datacenters are located or London/Paris/Amsterdam).

[1] http://www.es.net/

[2] http://www.internet2.edu/

[3] http://www.geant.net/pages/home.aspx


I'm not sure if it's true or not, but when I went to school there, the OIT people bragged that GT had the second fastest internet connection to the Pentagon.


I've currently got gigabit fiber internet where I live (a tiny segment of the Minneapolis market) and although it is fast I can't say that it has dramatically improved much of my online experiences. Downloading happens much faster in most cases, but I don't do a whole lot of downloading and often times the rates are limited by the provider of the content. Websites seem to load up about as fast as they used to. The rest of the internet needs to catch up before fiber connections can truly be utilized.


People had cable and DSL for years before YouTube showed up.


Fiber connections won't be utilized until more people have them.


Unfortunately, the telecom companies may be able to keep Google out of larger markets with zoning laws and political power. I sincerely hope to be proven wrong, however.


Most of what the telcos have going for them are barriers to entry and a nice duopoly.

The barriers they have to entry might be significant to you or me, but they are pretty trivial to Google, especially in the wired space. In the wireless space they still need to overcome the insane bidding for spectrum that the duopoly permits.

Google needs KC to prove that the $70 price for fibre is not 'below market'; once they've rolled out a profitable fibre service @ $70 per month, they can roll into markets full of regulatory capture, and the FTC will be largely unable to say that they are using their search monopoly to undercut telcos.

If they don't offer phone service the telco is even more screwed as that's where most of the regulatory power is.


I used to live in a city which had only one ISP. They have a deal with the city that restricts competitors from building infrastructure (just a rumor, but competing ISPs just so happen to be available in smaller surrounding cities 5-10 miles away).

I'm not sure if this is common in other cities, but it really wouldn't surprise me one bit.


When the majority of voters are clamoring for Google Fiber or the equivalent, things would change in a hurry, to ensure politicians' re-elections. Putting Google Fiber somewhere creates demand elsewhere.


Yeah, rolling out fiber in the easiest city doesn't prove much.


Honest question - what makes it uniquely easy to roll out in KC vs. other cities?


The politicians in KC greatly supported the movement in an effort to boost entrepreneurship in KC. They are working on a program called LaunchKC (http://launchkc.com/) to foster tech innovation.


Does anyone do anything that would greatly benefit from a connection like this in their home or have any examples of things that aren't possible now and these speeds will make possible (businesses, technology etc)? I can only really think of backup that will be greatly affected.

I currently have 75/20 in my apartment and I don't think I've been in a situation where I needed more speed or was frustrated about my speeds. The problem for consumer broadband seems to be that while there are people with good connections, there are still tens of millions scraping by with 1 or 2mbps, which is holding their internet enjoyment back a great deal. My parents are living with around 1.5mbps (with fluctuations), making even basic streaming difficult. Hopefully the fact that Google is managing to do this will make ISPs step up their game, although I'm not sure what Google is going for here (proof of tech? entering the consumer ISP space?).


> Does anyone do anything that would greatly benefit from a connection like this

Replacing optical media with high speed lines would allow some funky new market mechanics like...

Games:

- Instead of downloading the full 12Gb+ game, simply load levels/textures as needed

- Distributed game session hosting where the players temporarily and reliably become the host of a shard

- Something like OnLive could be high res/low image compression/ultra low latency

- Download 12Gb+ games on a whim and pay as you go for playing it - stop playing/paying for it and the provider can cancel the game's certificate and it gets pulled from the system. Want it back? Download it again in a minute, save games and all

Movies:

- Make way for 4k video via net

- Make way for low compression 1080p video via net

- Make way for every person in the household streaming 1080p...while video chatting in 1080p, while playing an RTS on an OnLive-like service...while backing up their files to a remote server. Etc.

The future is going to need some big pipes.
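Adding up that last scenario with assumed per-stream bitrates shows how quickly one household gets there (all figures illustrative):

```python
# assumed bitrates in Mbps
hd_stream = 8        # one 1080p video stream
hd_chat = 5          # 1080p video chat
cloud_gaming = 15    # OnLive-style streamed RTS
backup_upload = 20   # remote backup running in the background

# four people each streaming and chatting, plus gaming and backups
household = 4 * (hd_stream + hd_chat) + cloud_gaming + backup_upload
print(household)  # 87 Mbps sustained, before 4k or low-compression video
```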


> Instead of downloading the full 12Gb+ game, simply load levels/textures as needed

Blizzard has already executed this impressively for several of their titles. Starcraft II is an 8GB download, but after about 400MB you can actually start the game. It'll then continue downloading non-invasively in the background while loading needed data on demand. Same goes for World of Warcraft, and probably other titles as well.


Continuing on the game theme, I recall that Rage originally had hundreds of gigabytes of textures, and they pared it down to the point that it would fit on a BluRay. That would not be necessary with such a fat pipe. How many design decisions have been based on the space limitations of BluRays, or, worse yet, DVDs?

And 1080p is nothing. It's already outmatched by the retina display on a Macbook Pro. Right now the utility of higher resolution displays is limited by the lack of availability of media, since net connections are too slow and BluRay is the best you can do with optical media, but that bottleneck would go away with higher speeds.


"- Distributed game session hosting where the players temporarily and reliably become the host of a shard"

I think that's how Halo's online gameplay has worked for a while. http://halo.wikia.com/wiki/Connection_Host


That's how most Xbox games work. There was a bit of an uproar about it when the service started. Players were miffed at paying to play games they already bought, and not even getting dedicated servers. That quickly died down.


> That quickly died down.

Not really. Host advantage is still a thing that happens in a lot of multiplayer games, and peer-to-peer multiplayer has never come close to the same experience as a dedicated server.


Oh, it's definitely still bullshit. You're right, it can ruin games directly, and even indirectly as others may assume you have an advantage because you're the host and quit. Dedicated servers are indeed simply better on games with more than two players.

I just meant the outrage over it seems to be much less a thing now. Now it's just accepted that this is how it should be done. I think like afterburner suggested in the sister comment to this one, those who were seriously offput by it have left and that's that.


It died down because the players who cared left?


I think most people just swallowed it. Sure, some went exclusively PC/PS3, but I don't think that many quit because of it.

Personally, I dropped XBL years ago because I was tired of children eager to sling racial and homophobic slurs. Playing games online just wasn't fun to me. I went on to play a lot of TF2, but virtually no one there uses headset mics either. That, to me, was the appeal of Xbox Live: everyone had a mic. We were supposed to communicate and make the games fun on a whole new level. Instead, well, no. That didn't happen.


Download 12Gb+ games on a whim...

Sort of. 12GB (I presume you meant GB) still takes a minute or two to move around locally - storage isn't fast enough to flush 12GB in seconds.

Now, if your device just had comical amounts of RAM, you could manage it.
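Rough numbers (the SSD write rate is an assumption for a 2012-era mid-range drive):

```python
size_GB = 12
link_Mbps = 700
ssd_write_MBps = 300   # assumed mid-range SSD sequential write speed

download_s = size_GB * 8000 / link_Mbps    # ~137 s to pull over the link
flush_s = size_GB * 1000 / ssd_write_MBps  # 40 s just to write it to disk
print(download_s, flush_s)
```

So the disk adds noticeable time but isn't the dominant cost at 700Mbps; it only becomes the bottleneck once the link gets several times faster.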


How is ~12GB of RAM comical? It's a lot easier for the common consumer to get 16 gigabytes of RAM today than it is to get the kind of network speeds we're talking about here.


16GB of RAM is only $55-$65 depending on the speed, and 64GB SSDs are around the same price. Compare that to the price of one of those 12GB games, and you can see that local storage bottlenecks need not be a problem for gamers.


The PC in front of me has 24GB - it should have had 32GB, but one of the modules was dodgy and I never got around to replacing it.


Pay-as-you go gaming is actually a really interesting idea.


It is a little bit strange that in this community there seem to be a surprising number of opinions floating around in the spirit of "Who needs more than 640k of RAM?". I find it obvious that more storage and faster network connectivity both immediately benefit consumers. Almost everyone produces or consumes some form of digital media, and with ongoing improvements in the quality of these products, the requirements for comfortably storing and transferring media are rising too.

I think that a lot of people, from kids to the elderly, share their digital photos or videos, and in my opinion it clearly improves the experience when you upload somewhere, or drag-and-drop your 500 vacation pictures to your aunt in your favorite IM and BOOM - they are already transferred, instead of waiting for minutes or even hours.


As the pipe gets thicker the client can get thinner. So services like this allow us to move more and more to the server side. If you follow this line of thinking you see a few interesting things become feasible:

* Your computer can become just a screen with an Ethernet cable in the back. The desktop comes from the cloud.

* Ditto for your TV/games console. Instead of buying a PSP or Xbox, that can all be run in the cloud and just streamed to your display.

A whole bunch of things we do today just get an order of magnitude easier: streaming video and audio to and from the house, downloading content, hosting services out of our home, etc.


But at the same time, everyone is moving away from desktops and certainly away from wired connections. Most people are using wi-fi in the home, and more laptops, tablets, and mobile devices, while desktops are dying. Wireless is still only at 200mbps. It's going to have to get faster for these speeds to mean anything for the average consumer.


200mbps is plenty. The problem is that real speeds are mostly lower. Note that most office networks still operate at 100Mbps and are doing just fine. Arguably, anyway.


>Your computer can become just a screen with a Ethernet cable in the back. The desktop comes from the cloud.

>Ditto for your TV/games console. Instead of buying a PSP or Xbox, that can all be run in the cloud and just streamed to your display.

Oh boy I can have even less control over my stuff than I do now.


A simple idea: look at how Apple now only releases OS updates via download. With speeds like this (~1Gbps) we may be able to forgo updates altogether and boot the entire OS from the cloud/remote server.

Perhaps then software will become versionless, and we just run the latest and greatest that has been pushed to the public.

I have 50Mb/s down / 10Mb/s up, which I pay Comcast handsomely for. No cap is nice. No packet inspection is nice. No QoS on ports is nice. I've had it a year. We very easily do 500-1000GB a month, and that's just pulling down SD video at about 300-500MB per file.

I have backups with Arq once an hour to S3 that take longer than I want. I keep a full-time VPN on the media server that downloads torrents; there are a few phones, tablets, etc. It's pretty easy to saturate the upstream. And I would love to light up the VPN network-wide.

Funny that the VPN service is unlimited and faster than my connection and I only pay $29.00 a month plus get much more than just VPN. It could be lower if I just wanted VPN.

The VPN provider can afford it, is making a profitable business out of it, and pushes it to 100% of their users. Yet Comcast and AT&T would like us to believe 1-5% of their users could even make an impact on their network if they tried?! If they can, it's sad that a small business can better manage its infrastructure than an actual upstream wholesale provider and/or cable supplier. Comcast shuffles uncompressed HD from their feeds at 6.6Gb/s [1] all day long. Netflix drops the data at their front door, so it's free. They have plenty of overhead for us Internet users.

http://www.ciena.com/connect/blog/How-much-bandwidth-does-Br...

Just for fun I throttled my laptop down to 2 Mb/s symmetric, remembering back to the day when I paid $600.00 a month for a T1 at 1.54 Mb/s and people were amazed. At the throttled 2 Mb/s, the connection felt unusable.
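Some rough arithmetic on those numbers, assuming an ideal link with no protocol overhead, using a hypothetical 500 MB file (the size of the SD video files mentioned upthread):

```python
# Rough download-time comparison for a 500 MB file at a few link speeds.
# Speeds are in megabits per second; file sizes in megabytes (8 bits/byte).

def download_seconds(file_mb, link_mbps):
    """Seconds to transfer file_mb megabytes over a link_mbps link (ideal, no overhead)."""
    return file_mb * 8 / link_mbps

for label, mbps in [("throttled 2 Mb/s", 2),
                    ("T1 (1.54 Mb/s)", 1.54),
                    ("700 Mb/s fiber", 700)]:
    print(f"{label}: {download_seconds(500, mbps):.0f} s")
```

At 2 Mb/s that file takes over half an hour; at 700 Mb/s it takes about six seconds, which is the whole difference in felt usability.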

We are falling behind. The only way this is going to get fixed is if Americans start to see that we are sitting at ~13th place in some "contest", and that China, Iran, Africa, Bulgaria, and third-world countries we have a general dislike for are winning that "contest".

At that point Americans will care. How dare the Russians beat us into space. How dare the Koreans build a better Internet than us!


I don't think people will care until those places can do awesome things we can't do. If they can buy a $500 TV that does everything their computers do (stream UHDTV films, games, etc.) then maybe they'll care. But, until then, so what?

And really, even then, so what? People's hearts just don't seem to be in that contest anymore. I think they're more likely to care about beating their friends in fashion than beating another nation. And if their friends don't have it, then they don't care much either. Hopefully this is where something like Google Fiber makes a difference. But then, without software to make it stand out...

I think it'll take true spectacle to get people really interested. As much as you or I would like this kind of thing, I just don't know what would sell it to the average person without requiring the purchase of additional tech as well.


> With speeds like this (~GB) we may be able to forgo updates altogether. Boot the entire OS from the cloud/remote server.

Speed isn't the biggest issue. Reliability is. Until my broadband is as reliable as my power supply, that's not an option I'd tolerate.

> Perhaps then software will become versionless, and we just run the latest and greatest that has been pushed to the public.

That's a terrifying prospect, given that pretty much every major software upgrade I've experienced has broken one feature or another that I like or need. I'll take my software upgrades when I'm prepared, thank you...


My internet service has been more reliable than my electric power for several years now... the power goes out once every year or two, but I still have internet access on batteries.

But I'm like you and prefer to have control over my own software and data. I might use it for tablets or ereaders though.


Streaming HD quality video, remote access to files, heck even something as simple as photographer's portfolio sites who could start using images that are way bigger, higher quality, etc. which is especially useful with ever increasing screen sizes.

So many applications for this, one can't even begin to name them all.

.edit: ooh, remote desktop gaming... sweet!


Check out Mozilla Ignite (https://mozillaignite.org/). We are trying to get people applying for their grants to move to our fiberhood.


I think one of the goals of the project is to see what develops when bandwidth is increased by 2+ orders of magnitude. The constraints suddenly change, and it's hard to predict where that might lead. Think of what we do with our connections today that we couldn't have imagined when connections were two orders of magnitude slower.


Anything involving video would be much faster. I work in entertainment and it takes hours to move files around, not to mention transcoding to lower resolutions to allow the files to even fit through the "pipes." Some people are still delivering DVDs by courier around here if you can believe that.


Anyone who works with video (editors, compressionists, whatever) could do their thing remotely in a much more feasible manner. Several dozen gigabytes of video still takes quite a while to move around even on a 100/100 Mbps connection (I have 100/10 Mbps at home myself), and a gigabit connection would really help with that.


4k video. It's here, now. Still camera resolution is also not going to stop expanding. The cure for discontent with digital audio is more bits. Hangouts that approach a sense of telepresence.

There is no shortage of high-value, broad-market cases.
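A back-of-envelope calculation shows why 4K stresses even a gigabit link. Assuming 3840x2160 resolution, 24 bits per pixel, and 30 frames per second (my assumptions, not figures from the comment):

```python
# Raw (uncompressed) bitrate of a video stream.
# width x height pixels, bits_per_pixel color depth, fps frames per second.

def raw_bitrate_gbps(width, height, bits_per_pixel, fps):
    """Uncompressed video bitrate in gigabits per second."""
    return width * height * bits_per_pixel * fps / 1e9

print(f"{raw_bitrate_gbps(3840, 2160, 24, 30):.2f} Gbps")  # ~5.97 Gbps
```

Roughly 6 Gbps uncompressed, so even 4K delivery over gigabit fiber still depends on heavy compression.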


You could have remote recording sessions between musicians within a 100-200 mile radius, with live video, if the packet switching latency is low enough. Within the same city, it could feel like being in the same room.
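A quick sanity check on that distance claim, assuming light travels at roughly 2/3 of c in fiber (~200,000 km/s) and ignoring switching and codec delay:

```python
# One-way propagation delay through optical fiber over a given distance.
# Assumes signal speed of ~200,000 km/s (about 0.66c in glass); real paths
# add switching, serialization, and codec latency on top of this.

KM_PER_MILE = 1.609
FIBER_SPEED_KM_S = 200_000

def one_way_ms(miles):
    """Approximate one-way fiber propagation delay in milliseconds."""
    return miles * KM_PER_MILE / FIBER_SPEED_KM_S * 1000

for miles in (100, 200):
    rtt = 2 * one_way_ms(miles)
    print(f"{miles} miles: ~{one_way_ms(miles):.2f} ms one way, ~{rtt:.2f} ms round trip")
```

Even at 200 miles the propagation round trip is only ~3 ms, so the musicians' latency budget is dominated by switching and codec delay, not distance.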


Tor.


The limiting factor is the slowest node in the circuit, not your connection to the network.


Yes, so if many more people have 1 Gbps connections then it will be easier to establish circuits where the slowest link is a gig link.


Perhaps more relevantly, the limiting factor is the upload bandwidth in the circuit. Faster consumer speeds would seem to have a very relevant effect there.


Telecommuting high def video. Upstreaming dozens of security cam/robot videos from your house. Caching data. Google can cache much of youtube and web right on your internet router so that you pay the electricity bill instead of them. Telemedicine.


What are the terms of service with this? Can I ask a friend in the area to install it and run some high-bandwidth services? I look forward to seeing whether this turns KC into a gold-mine town reminiscent of the 1800s. I don't even know what I'd do with 700 Mbps upload. Invent wall-to-wall video streaming? It's always been a dream of mine to have breakfast with my family from a distance with near-perfect, full-size resolution, like some sci-fi movie in the 80s. (Ok, seriously, if someone does this please invite me over for breakfast, even if it's remote breakfast with a neighbour.)


Their unlinkable TOS says no servers.


Can you run Torrent server, Minecraft server, Counter strike server, Web server for your blog, Web server for your micro business that uses 0.1 MByte/s - 2MByte/s upload?

I can do all those things on my 10USD/month 20/20 mbit home net in Poland.


Fair enough. But P2P applications can take the brunt of the load and you can run your API on Firebase or something. I look forward to innovation. I also would think that the boundary between server and P2P might start to blur, given sufficient saturation of such possibilities around the world.

This seems to be the ToS element you mention: http://support.google.com/fiber/bin/answer.py?hl=en&answ...


REALLY? Because I was just seriously considering moving to KC until I read that sentence. If the legal agreement says no servers, then what is the point? What is the point of moving your startup to KC if you technically aren't allowed to run a business off of Google Fiber?


What home ISP allows you to run a server? "Business class" is always required for that sort of thing.

That said, few startups should be hosting their own servers, no matter the speed of their internet connection. Everything else you do will be blazing fast though.


I've run servers off residential IPs before. I used to maintain a community box for me and a bunch of friends when I lived with my parents. It wasn't fast, but it met our needs. The only problems I ever ran into with the ISP were when I tried to run a mail server.


Optimum Online, on their Boost package


Productivity of your crew, mostly. Serving a consumer or b2b website is probably not something you're going to want to do out of a house any time soon. Google Fiber isn't redundant, it doesn't give you better RW security or route around any of the other problems of enterprise-grade hosting. It's just FAST internet.


> Serving a consumer or b2b website is probably not something you're going to want to do out of a house any time soon.

I always thought this was primarily because of bandwidth and scale issues. Removing the bandwidth issue leaves the scale issue, and that's just a matter of how far you're willing to go in your own home, isn't it?


Your home isn't nearly as reliable as a data center, not to mention the lack of elasticity compared to the cloud.


From the FAQ: "For businesses located in qualified fiberhoods, we plan to introduce a small business offering shortly". If you're seriously interested I'd give them a call and get more details on the exact timeline.


This whole thing is an experiment by Google to see how people will use the Internet in the future, when people can actually get gigabit/s connections. I think the point is to see how end users use it, not to host a lot of content.

I think that's the wrong view, though. Maybe what happens when people get that much bandwidth is, they will naturally start running more servers!


Why would you run your startup's servers from your residential ISP?


That is sort of a catchall to say that they can cut off any node that causes trouble. Not to say it is a great idea to run a CDN from your house, but they aren't trying to crush everyone like Comcast does to people on home contracts -- there is no more expensive business tier Google wants to push you up to.


It seems like a lot of risk to base your business on the proposition that Google probably won't enforce their TOS.


It's no more unreasonable than running a business off any other ISP's residential/consumer service. Google says in their FAQ that if you want to run a business off their service, contact them directly - the service they're advertising isn't for you.


Nobody's talking about moving to another city to take advantage of another ISP's residential/consumer service as a business decision.


I like to think about how this could change how ideas are spread and how people organize. I imagine something like Google Hangouts but with 100 like-minded people all present at the same time discussing a common interest, with something like a real-time voting mechanism in place so that those with (presumably) better ideas are more likely to be heard.

In one example I'm imagining a city council meeting, where anyone who just has the time to log in can take part. And then I realize that maybe this would lead to horrible popular ideas being more easily spread.


I'd like to think that this will remove people from an office environment and allow people to work from home more. This is probably a bit much for most people, but is there really a need for you to sit at a desk in an office when you can do the same thing at home? Real-time communication (via video) allows for instant communication between employees, and you can easily add someone to the conversation. The implications of something like this are actually quite big: a change to the 9-5 grind (for better or worse will depend on how it's implemented). The impact could be significant when you think of all of the things the 9-5 encompasses, for example, getting stuck in traffic at 8:30am.


If this were offered by Bell or Rogers, you'd get 1 Gb/s transfer with a 100 GB data cap.


Yes, I wonder what Google's plans with respect to Canada are (if any). It's hard to believe that the Canadian big telcos would let it happen, though.


I think current law prohibits non-Canadian companies from entering the market, but I might be wrong about this.


If it doesn't I'm sure the lobbyists will start sliming out of their holes soon enough.


Video conferencing is the main pain I have that this would help with.

Obviously, really high quality audio, HD video, with multiple participants.

Then I'd like a bit more than that - panorama video so I can see all the surroundings in detail of where the other person is. Not just point audio, but a mesh of audio so I can move around in the space. All streamed to Google Glasses or similar.

And if that isn't enough, Kinect-like 3D, so my body is scanned with depth as well.

Done well, so that tech geeks actually start to have their conferences online rather than in the flesh... This'll reduce carbon emissions, and mean people can spend more time with their families.

Will that use enough bandwidth?


No, 50 Mbps will be sufficient for this with the right codecs and gear. Now if you want it all as raw data - that makes a difference.


50Mbps is barely sufficient for high quality current HD.


One of my pet peeves is how data stream rates are always represented in bits per second. It's extremely deceptive to people who don't know the difference between a bit and a byte. It makes no sense to measure storage in bytes but internet speeds in bits; we should pick one.


You also have to factor in protocol overhead for TCP transmissions. I find it pretty accurate to shift the decimal point one place to the left (i.e., divide by 10 rather than 8) when estimating maximum real-world Internet transmission speeds in bytes.
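That divide-by-10 rule of thumb, next to the pure unit conversion, looks like this (a sketch of the heuristic, not a precise overhead model):

```python
# Converting an advertised link speed in megabits/s to megabytes/s.

def ideal_mbytes_per_s(mbps):
    """Pure unit conversion: 8 bits per byte, no protocol overhead."""
    return mbps / 8

def estimated_mbytes_per_s(mbps):
    """Divide-by-10 heuristic: folds ~20% TCP/framing overhead into the conversion."""
    return mbps / 10

print(ideal_mbytes_per_s(700))      # 87.5 MB/s on paper
print(estimated_mbytes_per_s(700))  # 70.0 MB/s as a realistic estimate
```

For the article's 700 Mbps figure, that's the difference between 87.5 MB/s on paper and roughly 70 MB/s in practice.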


I can only dream of the MMORPGs possible with this. Hopefully these speeds hit critical mass in NA within a decade.


Here in Australia the government is rolling out 1 Gbps fibre to everyone as part of the National Broadband Network project. Unfortunately, it's projected to take until 2021 before it's completed.


It is remarkable that disk seek speed will really be the next big bottleneck. In a few years, I expect these kinds of internet speeds to be relatively commonplace.


I really wish this would come to Canada, and Toronto specifically. The business internet market in this city is ridiculous and ripe for a real disruption.


The speed is just ridiculous. Especially the upstream. So many possibilities. So jealous. I like how he shares his connection though.


Is it just a coincidence that Kansas City is also home to a very small shop that frequently accounts for 10% of US equity volumes?


Yes. BATS has been a positive member of KCMO's tech community for years and had no part in KCK landing the Google Fiber deal.


Much smaller shop than BATS. I believe they have less than 20 employees.


Are you referring to BATS?


Pretty pimp. I'd love to see this hit more and more people, and not just KC.


Is anyone else afraid of outbound DDoS attacks? :) What do they have in place to cut these off really quickly?



