Hacker News
Setting up Starlink (jeffgeerling.com)
403 points by giuliomagnifico 71 days ago | 330 comments

I'm using Starlink right now. AMA.

I'm in East Idaho. Currently my dish angles itself to the north. It rarely moves itself north/south, and slightly moves east/west throughout the day. I've read that right now it locks onto a single satellite, although they're adding multi-satellite support later.

My speeds are inconsistent, and interestingly they start slow (around 60 Mbps) but after a couple seconds they'll get to 150-200 Mbps (which is awesome for downloads). Latency is consistently in the low 30ms. I get some downtime every day, so it really is a "beta" like they say. I have a backup WISP.

Setup was literally: take the dish out of the box, insert it into the tripod (included), plug in the cables, connect to the wireless router's SSID, and activate with the Starlink app. After that I put the included router into storage and plugged in my Protectli[1] running CentOS. Everything works great. My only complaint is the CGNAT, but given the difficulty of procuring IPv4 addresses, it's understandable.

[1] I love this thing. Highly recommend: https://smile.amazon.com/gp/product/B0741F634J/ref=ppx_yo_dt...

This is OT but can you share a little bit more about what it's like living in East Idaho? I grew up in Boise and now live in Chicago but with the way things have been going lately I've been considering moving back, and there's a real appeal to living in a smaller town in the eastern part of the state (my dad was from Twin Falls). Internet access is always a bit of a concern though, which maybe Starlink ameliorates? Sorry to be so off-topic but you're the first person I've seen on HN from eastern Idaho!

Sure! I love living here. The only downside is that the airports are small; you have to drive to SLC or Boise for a good selection of flights. Other than that, though, I love it. The population density is much nicer: all the stores and restaurants you want, but traffic isn't too bad.

There's a lot of fiber out here, but you'll need to either be near a city center or in a new enough area. The homes that are 10 to 15 years old are underserved and you're mostly stuck with a wireless ISP. But ... Starlink is about to negate that, in my opinion!

My family and I spend a lot of time in the northern Idaho panhandle near Coeur d'Alene. We love it up here. My wife is Asian (it matters to the story) and we are liberals, but the folks are generally* terrific and kind. I star "generally" because I would say that the average person is much, much nicer than the folks we meet on the west coast, but the not-so-great people are much more open with racist, hateful, and frankly scary confrontations. We are invited into people's homes, have made fast friends with many locals, and love the pure beauty of the place, but the lows are much lower when they happen.

I agree completely. There are tons of wonderful, tolerant people here. There are a few gross people too that tend to be a lot more gross than on the west coast, but overall it's not too bad.

> live in Chicago but with the way things have been going lately I've been considering moving back,

I live in Chicago (South Loop/Pilsen) too. What has been going on lately that is making you consider the move?

FWIW I lived and worked in Chicago for 10 years. Never liked it. Expensive, and life just seemed really complicated. I always felt stressed out. If you grew up in a rural or small town lifestyle, it's a big adjustment.

Eventually moved back to my hometown and wouldn't care if I never saw Chicago again.

I know a few people from the Seattle area who do part-time work there - unless you enjoy and agree with religious fundamentalists, die-hard MAGA anti-maskers, or people whose personality revolves around owning 35 guns, you might want to spend some time there first... Boise, by comparison, is much more secular and liberal than some of those areas.

As you specify people from Seattle, and these people went out of their way to point out that the people are religious, republicans who like guns, I have a feeling they aren't particularly open minded or tolerant people. Religious or secular, blue team or red team, basic politics shouldn't color your ability to get along with people. Fundamentally people are pretty similar, and American culture is quite homogeneous.

Well, particularly in the current situation where masks somehow became a political hill to die on, I think you are underselling the matter some.

It's a bit difficult to feel all that neighborly or friendly with people adamant that they won't even wear a mask to help protect your own health.

And all the topics listed - religion, political affiliation, guns - are all infamous for causing strife. Even the most tolerant person can easily wish to simply minimize the chances of a conflict. Tensions over these matters also only seem to be intensifying, which further exacerbates the matter.

While anti-maskers are definitely more likely to be politically right, due to the weird behavior of the last administration, the reality is that the majority of Americans are fine wearing masks in public. Polling I have seen showed 75%+ were fine with masks.

I think that likely shows the problem, rather than anything else.

(I'm going to assume you mean that number nationwide)

75%, or roughly so, nationwide would be one thing if it was evenly distributed. But as you noted, the issue is tied to politics (I think more than you imply, but doesn't matter). We also know political affiliation is not evenly distributed.

So, a number like 75% mask compliance makes it possible, and frankly very likely, that there are areas with substantially lower rates.


Speaking from experience or speculation? I can't stand how HN has started to turn into a political soapbox. The amount of assumptions being propagated makes me understand the social tension the US has.

How did a Starlink post devolve into this chain about rural ideologies?

> I know a few people from the Seattle area who do part time work there

GP did us the courtesy of stating the source of their views. Seems to be hearsay, but if this statement is true it's not speculation.

I would encourage you to listen to, and believe, the very real experiences of people of color, same-sex couples, and transgender or nonbinary people who have spent time in locations like Eastern Idaho.

I don't disagree, but I would caution heavily about stereotyping an entire geographic area based on anecdotal experiences. Different neighborhoods around here can be radically different. One street up is "the Mormon street" where everyone is LDS, but a few streets past that is the polar opposite.

I think when we blanket-stereotype people based on geographic location, the error rate is quite high. In my opinion, unacceptably so, but then I've always valued tolerance, acceptance, and open-mindedness (although for people that don't offer that to others I admit little patience, and I don't want to be around them).

> Different neighborhoods around here can be radically different.

Having lived in places with similar ideological demographics, I caution that living in a place means you're going to visit all sorts of neighborhoods there, and it would be (not saying you're recommending this) untenable to recommend that certain people would do just fine if they keep to the neighborhoods where they are welcome.

If a particular neighborhood in an area is hostile to certain people for no good reason, it makes the entire area hostile because all of the people involved can and do operate outside of the geofence.

I’m sure the generalization earned the downvotes but candid opinions good and bad help paint a more well rounded picture, thanks.

Eh. They aren't wrong per se, but there are a lot of folks that might look like they fit the picture but are just from a different cultural background. A little tolerance goes a long way in any direction.


City slickers should live in the city. Moving to a rural area where people want freedom because your 500k-a-year job lets you live anywhere, and then crying "my god, there is someone not double masking!", is not a solution.

Here I am trying to read about satellite internet and you have to pollute the discussion with toxic partisan cliches and irrelevant divisive anecdotes.

I wish you’d quarantine yourself to social networks like Reddit where the discourses are already sufficiently poisoned.

The person asked "what's it like living in east idaho" - and I answered based on some very detailed descriptions provided to me by people who've spent a lot of time there. I consider it an unavoidable fact that unless you are an ideological match with the majority of the persons who already live there, it's going to be a very miserable experience. Some of the social attributes that I described are statistically documented for that region in poll data.

Considering that probably 40% of your hn comments seem to be some variety of partisan axe grinding on American-specific political issues, maybe you might want to take a look in the mirror first.

We actually launched a service that provides you with a public IPv4 address and a /56 IPv6 block over WireGuard. It bypasses CGNAT and provides unblocked ports, including port 25. The cheapest plan starts at $8/month. Here's a link!


Looks pricey. What's the advantage of that vs setting up WireGuard on your own VPS for less money?

We are a managed service meaning you can sign up, download the WireGuard configuration, and forget about it. For a few extra dollars, you get reliability (we use BGP so we can failover to different datacenters), and clean IP addresses that aren't associated with spam or other cloud providers.

FWIW and since this is HN, I don’t think it’s pricey at all, it seems to very much match what I would expect a service like this to cost.

I wonder if I somehow can set up my Mikrotik routers to tunnel through you and transparently provide IPv6 for my whole LAN, that would be swell.

If you just want public IPv6 for your LAN, you could get address space for free from TunnelBroker.net and the setup for your router is well documented.

The Mikrotik RouterOS v7 beta does include Wireguard support, though I haven't tested it yet.
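For the curious, a WireGuard tunnel on RouterOS v7 only takes a few commands. This is an untested sketch: the interface name, endpoint, keys, and addresses below are all placeholders, and the real values would come from whatever configuration the provider hands you.

```
# Hypothetical RouterOS v7 sketch; keys, endpoint, and prefixes are placeholders
/interface/wireguard add name=wg-tunnel private-key="<your-private-key>"
/interface/wireguard/peers add interface=wg-tunnel \
    public-key="<server-public-key>" endpoint-address=gw.example.net \
    endpoint-port=51820 allowed-address=0.0.0.0/0,::/0 persistent-keepalive=25s
/ip/address add interface=wg-tunnel address=203.0.113.10/32
/ipv6/address add interface=wg-tunnel address=2001:db8:1234::1/64
```

You would still need routing (and NAT for IPv4) to put LAN clients behind the tunnel, but the interface itself is that small.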

Where's the IPv4 address from? Because if it's AWS, that's going to be rather useless.

We lease IPv4 /24 and IPv6 /48 blocks from a reputable IP address broker, check it against spam blacklists, and finally announce the clean IP addresses at a datacenter of our choice. We are not tied down to any hosting company. We tested a datacenter failover scenario and switched the BGP route from Chicago to Toronto, while a YouTube video was playing, and it didn't even stutter!

Do streaming services work? Any way to make the IP appear residential?

Google 'residential proxies for sale' if you really want to do this; the big content sources are getting pretty good at distinguishing between last-mile ISP blocks of DHCP customers and IP blocks that are announced in a colocation/datacenter environment.

For most services that distinguish between residential and commercial IPs, your traffic is outbound so CGNAT shouldn't be much of a problem to route normally.

Netflix works with just the "global" range. Their business vs. residential classification accuracy is probably near perfect.

The whois for the IPs in the screenshot says they are reassigned from OVH, and they geolocate to Quebec.

That's the prototype version.

I am curious about your WireGuard setup. Have you been able to automate the setup or is it still a manual process to set up a new customer connection? I have been waiting for wg-dynamic to be ready because I don't feel like using static assignments. It looks like it could be a while until wg-dynamic is ready. What has your experience been?

Yes, it is fully automated. I don't think wg-dynamic will be ready in the next few years, or even remotely cross-platform compared to the current WireGuard implementation and its popularity. Our code for choosing an IP address from our pool is really short: just a quick SQL query and a round-robin selection.
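Something in the spirit of that picker might look like the following. This is a hypothetical sketch using SQLite; the service's actual schema and database are unknown.

```python
import sqlite3

def pick_address(conn):
    """Hand out the free address used least recently (simple round robin).
    The `pool` table and its columns are hypothetical, for illustration."""
    row = conn.execute(
        "SELECT ip FROM pool WHERE assigned = 0 "
        "ORDER BY last_assigned_at LIMIT 1").fetchone()
    if row is None:
        raise RuntimeError("address pool exhausted")
    conn.execute(
        "UPDATE pool SET assigned = 1, "
        "last_assigned_at = strftime('%s', 'now') WHERE ip = ?", (row[0],))
    conn.commit()
    return row[0]

# Demo with an in-memory pool of two documentation addresses
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pool (ip TEXT, assigned INTEGER, last_assigned_at INTEGER)")
conn.executemany("INSERT INTO pool VALUES (?, 0, ?)",
                 [("203.0.113.1", 0), ("203.0.113.2", 1)])
print(pick_address(conn))  # 203.0.113.1 (least recently assigned)
```

A real deployment would also need to reclaim addresses when customers leave, but the core selection really can be one query.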

Can it bypass the Chinese GFW reliably?

According to the story it's locked to a designated "cell" with a diameter of something less than 60 miles, so it would be just a nice modern end-table in China. The fact that it depends on accurate GPS and phones home continuously makes the geofence hard to hack.

I think they're asking about the Hoppy network, not Starlink.

We would be glad to test it out with you, no contacts in China unfortunately.

FWIW, from my limited experience with the firewall there: they do less banning of protocols, and more banning of the specific endpoints people are using to bypass the firewall.

I never tested WireGuard, but, for instance, I had no issue connecting via OpenVPN to personal and work VPNs from China (on standard ports with no effort to hide). If you try and use OpenVPN to connect to any of the well-known VPN providers with it though, it will be blocked.

Even if you find it works initially, as soon as someone catches wind of it being used to bypass the firewall and/or they see a surge of traffic to your services, you're likely to end up blackholed.

Especially for a paid service, it's probably more of a support headache than it's worth.

Unless you're looking to make a stand, I'd probably just steer clear of the whole situation. It's illegal in China to operate an internet service without a license, and it could come back to bite you in the ass down the road if you ever intend to do business there or travel there.

> interestingly they start slow (around 60 Mbps)

For many of us that's reasonably fast ;). In the UK, for FTTC provided over POTS (often BT or some form of unbundling), the top end is 80 Mbps/20 Mbps.

I don't know what you'd expect with cable provisioned areas supplied by the likes of Virgin (DOCSIS).

Oh, 60 is fast around here too. I mostly meant that I've never seen a download start slow and progressively get faster. If anything it usually bursts at the beginning and then throttles back (Comcast did that to me the most).

I've observed download starting slow and getting faster on all connection types: dialup, DSL, fibre, mobile wireless from 2g to 4g. It's just how TCP operates. Interesting that your experience on Comcast is different.

I'm not sure that bursting at the beginning has much to do with the underlying tech but more with marketing.

I wouldn't be surprised if it had something to do with trying to trick speed tests into reading higher speeds before it slows down.

Downloads are expected to speed up over time. Google TCP window scaling to learn more.
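A toy model of slow start shows the ramp-up pattern described above. This is a deliberate oversimplification (no packet loss, fixed RTT, an arbitrary ssthresh), just to illustrate why the first seconds of a download are slower:

```python
def throughput_per_rtt(rtts, mss_bits=12000, rtt_s=0.03, ssthresh=64):
    """Simplified TCP slow-start model: the congestion window (in segments)
    doubles every round trip until ssthresh, then grows by one segment.
    Returns the Mbps achieved in each round trip. Real stacks are far
    more complex; all parameters here are illustrative assumptions."""
    cwnd, rates = 1, []
    for _ in range(rtts):
        rates.append(cwnd * mss_bits / rtt_s / 1e6)  # Mbps this round trip
        cwnd = cwnd * 2 if cwnd < ssthresh else cwnd + 1
    return rates

print([round(r, 1) for r in throughput_per_rtt(10)])
```

With a 30 ms RTT the first round trip moves well under 1 Mbps, and it takes several doublings before the link is anywhere near saturated, which matches the "starts slow, then speeds up" observation.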

It's usually quite common over e.g. 4g. You first have to wake up the radio and attach to a cell tower, which can take 50ms or so. Once you've done that latency for subsequent packets can be much lower (e.g. 5ms).

Virgin provide up to 300 or 500 Mbps depending on area

Wow, that's nice. Virgin... I've never seen cable in any of the houses I've lived in in London. Maybe it's not common in Zone 1?

They provided underlying connectivity to the premises at my prior built-to-rent apartment building in zone 2. The building had its own routers, Ethernet and WiFi infrastructure, and the management of everything was contracted out to an IT consultancy.

200 Mbps to each apartment was offered so I assume they had fibre and not DOCSIS.

I imagine the issue in zone 1 is having to tear up the roads to get cable installed. In new builds you have to do that anyway, so it's good business to get in on the action.

They have trial areas where they do 1Gbps lines as well, and when I was using one it would reliably hit that.

We’ve got that. We were on 200 and it wouldn’t go above 60. Since they finished the upgrade for docsis 3.1 and we switched to 1gbps it’s been what we wanted 200 to be - reliable and invisible. I hate them with a passion, but hate the UK gov more for its incompetent handling of fibre incentives and coordination nationally.

FTTH seems to be fairly prevalent in new builds in the UK now; my house is 5 years old and FTTH is the only option here. This does rather limit your ISP choices somewhat, but you can get pretty decent speeds. BT upgraded me to 950/140 just last week.

FTTH with asynchronous speeds? That's bullshit. The only reason for that is that they use cable or DSL somewhere in the path from you into their core network.

I take it you mean asymmetric speeds.

It is done for market segmentation purposes, not because there is a technical requirement for it. Asymmetric vs symmetric is used to separate residential from business customers, and charge the latter category higher prices.

If they offer symmetric speeds to home users, they worry that business customers will try to switch to residential plans in order to save money.

> If they offer symmetric speeds to home users, they worry that business customers will try to switch to residential plans in order to save money.

If their key differentiator between the two connection types is upload speed, then they should adjust the pricing and/or clarify (or actually add some) value for the business line. That is just unfortunate.

As another user commented, it's about market segmentation and not equipment limitations. I can get near-symmetric speeds from other FTTH ISPs but it was significantly more expensive than I was willing to pay, and I don't really need it anyway.

138/20 down/up for me on Virgin (Google speed test, Manchester to London, using wifi). Was sold 200/20, and have got that in the past using Ethernet.

Besides Starlink, I'm curious to hear more about your Protectli. Why do you like it so much? What can it do, that you can't do on your own computer with some scripting and such?

Not OP, but the Protectli is just hardware and you install your own OS. So it is running your own computer with "scripting and such", just a separate low-power, small footprint computer with dual NICs. This is good for managing network for your entire house/office/remote site.

I've got some PC-based firewalls like the Protectli: mini-ITX, Atom-based, 8 GB RAM. I can often find these for <$50 on auction sites. If it's one I'll be the main "owner" of, I like to run VyOS for firewall and routing. This is the open-source fork of Vyatta, and Ubiquiti's EdgeOS is a commercial sibling (granted, EdgeOS has, or at least had, some advantages regarding IPv6 PD). VyOS is debian arm based, so lots of packages like ZeroTier VPN can be added easily. I like VyOS/EdgeOS because of the full CLI/scriptable config.

I recently set up 3 of these for a radio club. These will live in mountaintop tower locations and provide VPN+NAT. Since these might get modified by others, I went with OPNsense. OPNsense is a fork of pfSense with a nice web UI and community support.

> VyOS is debian arm based

Not to quibble too much, but I think the bulk of VyOS development takes place on x86-64, and the standard build-your-own-install ISO guide is for x86-64. Though it certainly can be compiled for various types of ARM CPU.

This is because if you want things like eight or ten 10GbE interfaces, or eight 10GbE and four 100GbE in one server, the only economically viable and fully stable platform for VyOS right now is x86-64.

Right you are. In fact, I was thinking of the EdgeOS on EdgeRouter, and I was wrong there as well, since that is actually mips64.


Like the other respondent said, it's just a small general-purpose computer: you still need to load an OS like Linux. Mine is currently a pfSense firewall, mostly out of laziness and a desire for speed. If I had to do it again I'd buy something else, because the model I have (FW4B) struggles with gigabit internet speeds and the next model up is too expensive to consider vs just buying a small form factor computer instead.

Sorry I missed this question earlier!

I love that it runs vanilla Linux/BSDs and is a complete white box. I'm able to configure it exactly the way I want it, and I can even run non-trivial services on it. These days I mostly just port forward as I like to keep things as separate as possible so that when one thing goes down I don't lose everything, but it's great to have options.

Especially while I've got two ISPs, the ability to bond and use both at the same time is super neat.

These have not been prettied up for public consumption, but this is how I have it configured if you're interested: https://gist.github.com/FreedomBen/f8a50c7a98c07171a99c419a5...

Can you compare with T-Mobile’s 5G home Internet plan? Similar numbers advertised, $60 a month, no caps.

"No caps" - but watch what happens if you move 200 or 300 GB downstream on an LTE-based last-mile residential connection in a month. You'll either get shut off, throttled to a few Mbps, or billed extra. Look at the fine print in the terms of service. If you have a household with multiple people that watch Netflix, download movies, or even want to download a single Xbox One or PS4/PS5 game (they can be 120-180 GB now), watch out.

T-Mobile doesn't apply a traditional throttle; instead they deprioritize you on the towers. So as long as you're on a tower that doesn't typically get congested (typical in residential areas, since people use their home Wi-Fi instead), you're fine.

Um how do you expect Starlink to treat you once it’s GA?

A whole lot better than the quotas and caps enforced on consumer-grade geostationary satellite stuff right now (HughesNet, Viasat) that is available in the same price tier of $85-130/month. And also better than 90% of WISPs.

Relative to the helpful details on LTE you posted, though? (Obviously Starlink is incomparably better than the status quo satellite options...)

Why do you expect better than LTE?

And would you say more about WISP behavior? I know WISPs are all over the place, but in my small experience, a technically competent WISP will not look at your usage unless there’s contention impacting other customers.

Like, Starlink has said they won’t service urban areas, to prevent degradation due to contention. So I expect them to use the usual TOS and technical controls... to prevent contention. I don’t see what makes them special here. If they had some special sauce to provide more cumulative bandwidth to subscribers than LTE and WISPs, I’d expect them to open up to urban areas stuck with Comcast, and profit massively.

Replying to your reply, I don’t buy it. Your experience now on Starlink is analogous to being the first WISP subscriber on a newly deployed AP. Time will tell anyway. (And I will still be a grateful Starlink customer at my off-grid cabin, even with the throttling and caps. Starlink is a game changer!)

My perspective is informed by having been in locations previously dependent on geostationary based satellite services, as a comparison.

For people who have budgeted, procured and installed 'serious' two-way geostationary stuff in the past (one example of which would be a 2.4m two-port linear compact cassegrain antenna, an NJR PLL LNB, a 40W BUC, a Comtech CDM760 modem and a 1U-sized Cisco router), for 1:1 SCPC dedicated transponder capacity based services, starlink is astonishingly fast.

I could pull out a checkbook and spend $45,000 on buying terminal hardware and $30,000 a month in transponder space and not be able to achieve the speeds that starlink can do right now. Even if starlink was only 20 Mbps down and 4 Mbps up, go price what 20x4 service will cost via traditional geostationary right now (hint: start looking at $1200 per Mbps per month and multiply by N number of Mbps).
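Taking the rough $1200-per-Mbps figure above at face value (it is the commenter's estimate, not a published price), the arithmetic is stark:

```python
# Back-of-envelope SCPC cost for the 20x4 Mbps example quoted above
scpc_rate = 1200              # USD per Mbps per month, dedicated transponder space
down_mbps, up_mbps = 20, 4
monthly = (down_mbps + up_mbps) * scpc_rate
print(f"20x4 Mbps via dedicated SCPC: ~${monthly:,}/month")
```

Roughly $29k a month for speeds well below what the Starlink beta delivers, before counting the $45,000 in terminal hardware.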

It is indeed a good theory that I'm seeing unreasonably higher than normal speeds right now, and better latency, jitter and packet loss, because I'm in a similar situation to being the first customer on a new WISP PtMP AP sector. But I also have a great deal more confidence that spacex's continued pace of launches and satellite deployment will keep up with providing at least a 100 Mbps down x 15 Mbps up service. I do not think that they will let it degrade into a contended-service hell where customers see a very poor end user experience.

My perspective on starlink is also informed by knowing the price right now for Inmarsat and Iridium based offshore and aviation data services (sub 2 Mbps) and the $ per megabyte costs. There are already starlink aviation terminals in beta, and terminals for maritime and offshore use. It'll be a game changer there. The market for a globe-covering LEO high throughput satellite network is much larger than just the US48-state consumer residential internet/small business last mile internet market.

If your cabin is off-grid, why do you want to be online? Doesn't that sort of defeat the point?

Not the person you replied to, but off-grid doesn't mean no power. I have an off-grid cabin with enough solar capacity to run my laptop, as well as quite a few other things. I'm looking forward to getting Starlink there myself. It's a fantastic, quiet place to work with no opportunity for interruptions.

photovoltaic power systems are a lot less costly than they used to be - if you're fully off grid and won't be connecting to it, having absolute fidelity to electrical code might not be a high priority. If you have some technical ability and can follow documentation and instructions, big 360W panels are like $0.45/watt now, and the setup process for a charge controller and battery bank isn't excessively difficult.
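As a back-of-envelope check on that $0.45/watt figure (the panel count is an arbitrary example, and this excludes the charge controller, battery bank, and wiring):

```python
# Illustrative panel-only cost for a small off-grid array
panel_watts, price_per_watt = 360, 0.45   # figures quoted in the comment
panels = 3                                # arbitrary example array size
panel_cost = panels * panel_watts * price_per_watt
print(f"{panels} x {panel_watts}W = {panels * panel_watts}W of panels "
      f"for about ${panel_cost:.0f}")
```

A kilowatt-scale array of panels for a few hundred dollars is what makes running a dish and a laptop off-grid plausible.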

Sometimes there's not an option to be on the grid.

Typical WISP last mile: Something like a ubiquiti rocket 5ac gen2 on an RF elements 60 degree horn antenna, in a 40 MHz TDD channel somewhere in the 5.x GHz band, aggregate capacity of the entire AP might be around 240 Mbps. Shared between 20 to 30 customers. I'm regularly seeing 250-330 Mbps down on Starlink right now with beta equipment, and very, very few WISPs except those who are doing 60 GHz based PtMP micro-POP setups can match that. It's a real challenge for a WISP to have a few dozen houses all trying to download the latest 180GB Call of Duty update connected to one AP.
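The contention math behind that observation is straightforward; the numbers below simply restate the figures from the comment (25 subscribers is the midpoint of the quoted 20-30 range):

```python
# Fair-share math for the WISP sector described above
ap_capacity_mbps = 240
subscribers = 25                          # midpoint of the quoted 20-30
fair_share = ap_capacity_mbps / subscribers
game_gb = 180                             # one large game update
hours_at_share = game_gb * 8000 / fair_share / 3600  # GB -> Mb, then seconds -> hours
print(f"~{fair_share:.1f} Mbps each; a {game_gb} GB download at that "
      f"rate takes ~{hours_at_share:.0f} hours")
```

Under 10 Mbps per subscriber at full load, and nearly two days per big game download, which is why a handful of simultaneous downloads can swamp a sector.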

I'm not nearly as optimistic about WISPs in the long run, compared to my views 8-10 years ago. Really difficult to reach locations will go to starlink or similar (as a replacement for consumer grade geostationary), other places where the customers per square km density is sufficient will eventually get overbuilt with GPON last mile that provides vastly more throughput and capacity.

The most clueful and forward thinking WISPs I know are all making every effort possible, within whatever capital resources are available to them, to develop in house capacity for doing rural aerial FTTH. Buying bucket trucks, getting training, learning how to splice fiber, design GPON architecture, working with state PUCs for pole access, etc.

WISP and Musk are ruled by the same Shannon, so I'm skeptical that there will be a significant advantage over the long term, but satellite has a big advantage in terms of coverage -- the difference between "can you see that tiny hilltop in the distance over the trees?" and "can you see the sky?"

Same Shannon limit, but very different bands and channel sizes as well. Most WISPs are limited by the unlicensed frequency bands that exist in FCC part 15, and things like 3.5 and 3.65 GHz. One of the things that can go wrong with that is many WISPs in the same area fighting over the same bands from 5100-5900 MHz.

By comparison LTE fixed last mile services in some places (where expensive spectrum is owned by entities like tmobile) have some of the prime tree-penetrating frequencies in the 600 and 700 MHz bands, and 2.5 GHz band. One of the reasons why clearwire was acquired by sprint was for their 2500 band.


Typical is to quietly throttle you into oblivion.

Above-average users are not, and never will be, welcome.

I can't compare but I think one of the selling points is going to be the satellite coverage for Starlink versus T-Mobile's cell towers.

I do like T-Mobile's offering as an alternative to these crappy cable companies.

Thinking about it, they may complement each other.

T-Mobile will probably have cell towers/coverage where people live, vs Starlink, which grants coverage where people do not live.

I can't because the nearest city to me with 5G is several hours away. The 4G at my house is super spotty. I've used it for Webex calls when my home internet went down and it will mostly handle that, but I would never want to rely on it for my primary connection.

Have you tried T-Mobile? I don't know what they have configured, but they certainly own a lot of midband 2.5 GHz spectrum in east Idaho. They have 140 MHz in Idaho Falls and Twin Falls, 180 MHz in Pocatello, and at least 60 MHz everywhere else in east Idaho. It looks like they have a decent number of sites too. I'd have to dig a bit more, but if you don't have access to 5G on T-Mobile today, it's not gonna be more than a couple months away.

Fascinating! I go to Idaho Falls fairly often. My phone is on Verizon but my wife's is on T-Mobile (an MVNO). I don't think her phone is 5G capable though.

I have the T-Mobile service. I use at least 500 GB/mo without problems. Others on the Reddit forum use over a TB without problems. The terms of service say T-Mobile will deprioritize us when there is congestion, but I haven't experienced it. Also, it appears that T-Mobile is limiting the number of subscribers in a given area to prevent degraded service. Overall, I like it (full disclosure: I have a dual-WAN setup with Xfinity as the other provider).

At first I thought “30s” meant “30 seconds” rather than “thirties (ms).”

My opinion of Starlink and what it means for the world was being completely reshaped until I read your comment again!

Ah yes, thanks! I did mean thirties but I changed it to `ms` to avoid confusion :-)

I have never heard of adding 'ies' onto the end of numbers to mean 'ms'. I'm (probably) not alone, fyi!

What industry is that lexicon from?

Internet connection speeds, in context. Latencies faster than milliseconds are not found in home internet connections (or anywhere maybe?) and 30 second latencies in Starlink would have prevented the product from even existing or would imply faulty hardware.

I don't think it meant "ms" specifically, it meant in the range of 30-40 ms; the milliseconds was implied from the context.

That just means 30-39. Probably would be clearer if written 30's. The ms is implied, due to the context.

What is clearer about implying the 30 possesses something? The post said 'low 30s', so to me that would imply 30-35, or even 30-33 if one wanted to call 34-36 mid 30s and 37-39 high 30s.

That would be a greengrocer's apostrophe.

Can you set up Starlink without the app, i.e. no cell phone? I'm getting real tired of devices that have no management/setup UI of their own.

I do think you need the app and a phone for at least 5 minutes. I did the initial power-up and setup with that, and after 5 minutes of verifying it worked, replaced the Starlink-provided router with my own. Anything that is an ordinary 1000BASE-T, 1500-MTU DHCP client will get an address when connected to the PoE injector.

The weird angled router they hand out is just a convenience for non-technical consumers who want an all-in-one 802.11ac box. The app on the phone also does the very basic first-time setup step of defining an SSID and WPA2-PSK key.

Their router uses OpenWRT and supposedly you can hit it at, but when I try that I get a redirect to www.starlink.com

I did the same thing and got the same redirect. I haven't experimented with anything except GET / but it would be interesting to try throwing some params in there and trying other paths.

all of the interesting data is served from the phased array antenna unit itself, not the router...



Yeah, after another week or so to get my initial impressions with the router as-is, I'm going to do some more experiments and also swap out a couple other routers to see what I can do.

Sweet man, would you be interested in teaming up a bit? I'm fairly busy the next couple of weeks but I can find a little time to do some hacking. I'm FreedomBen on Keybase, or if you want to email me freedomben <at> proton mail dot com I can give you a real email address. Totally fine if not though!

It's not the official place where I'm working on this particular project, but if you have any notes or feedback you'd want to track/share through my internet-monitoring project, please feel free! Email is a bit tough, as the volumes right now mean I sometimes see a message quickly, other times after days or weeks :)


Awesome, will do! And that project looks great by the way. Thanks for the link

I do not believe so. I hate it as well but decided to tolerate it because at least once it is setup, I don't have to continue using it.

I set mine up without using the router or app.

Yes. I’m not even using their wifi box.

How's the gimbal? I read it's auto-stabilizing, is that true? And how well built is the dish? Do you think it can handle long periods of constant wind exposure of 15+ knots?

I've been contemplating putting one on my boat for use while at anchor. There is constant movement, but it's horrible

It's not a gimbal, it's a set of stepper motors and gears. It's not designed for constant movement or tracking. The current starlink terminal is a dual beamforming phased array that will align itself to have its flat face oriented towards the area above you that has the highest simultaneous density of satellites at any given point in time. Beta terminals in north WA state, for example, are angled about 10 degrees off flat, looking slightly north.

In fact a current starlink terminal (which has a 6-axis sensor and GPS receiver built into it) will turn itself off if it detects movement. The terminals for things like yachts are not available to the public yet, though I have no doubt they're in the works.

The dish basically locks into an orientation after it links up with the satellites, so it's not really a "gimbal" in that it's not constantly moving around.

The dish is heavy and feels tough; I'd be more worried about your mount than the dish itself with regard to wind; we're having 40 mph gusts today in St. Louis and dishy's working fine.

I'm more worried about hail, though... hopefully we can avoid the golf ball variety this spring.

I’ve had 80km/h winds and it’s just sitting on the bare ground without being fastened. Also had -25C temp and knee high snow drifts. No issues.

I use pfSense with the Protectli -- what exactly are you doing with CentOS? Just manually configuring it as a FW for your LAN?

Is the upload/download speed the same? Does your public IP frequently change?

EDIT: Ah -- CGNAT. Missed that part.

Yep, I just enable IPv4 packet forwarding and use firewalld to manage ports. I use dnsmasq to provide DHCP and DNS services.

Here are my config files if you're interested. I've redacted my domain and some of the mac addresses. One of these is a shell script that sets up the firewall: https://gist.github.com/FreedomBen/f8a50c7a98c07171a99c419a5...
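For anyone curious what that kind of setup looks like without digging through the gist, here's a minimal sketch of a CentOS-style router configuration along the same lines. The interface names (eth0 = Starlink WAN, eth1 = LAN) and the DHCP range are illustrative assumptions, not taken from the linked files:

```shell
# 1. Enable IPv4 packet forwarding persistently
echo 'net.ipv4.ip_forward = 1' > /etc/sysctl.d/99-router.conf
sysctl --system

# 2. firewalld: WAN in the "external" zone (masquerades/NATs by default),
#    LAN in "internal", which we open for DHCP and DNS
firewall-cmd --permanent --zone=external --change-interface=eth0
firewall-cmd --permanent --zone=internal --change-interface=eth1
firewall-cmd --permanent --zone=internal --add-service=dns
firewall-cmd --permanent --zone=internal --add-service=dhcp
firewall-cmd --reload

# 3. dnsmasq provides DHCP + DNS on the LAN side
cat > /etc/dnsmasq.conf <<'EOF'
interface=eth1
dhcp-range=192.168.10.50,192.168.10.150,12h
EOF
systemctl enable --now dnsmasq
```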

Thanks for taking the time to post the files. I actually learned a few things and like the setup. Might give it a shot. I haven’t been happy lately with the business behind pfsense, this might push me to configure my own.

How is the power consumption compared to a regular router + fiber/ADSL dongle ?

I can answer that I've been monitoring the full setup (dish + router) through a Kill-A-Watt for a few days, and it never goes below 94W, and averages a little over 100W.

I don't have numbers for just the starlink stuff since I have other things plugged in to my UPS, but that sounds about right. The power brick is always warm to the touch.

My other ISP also uses PoE to power a wireless dish (line of sight) and uses a little less power but not a ton less.

The power consumed by a WISP PtMP last-mile CPE radio is considerably less than Starlink's. A typical Ubiquiti, Cambium or similar CPE antenna will draw 8-12W at the AC wall side of the PoE injector. Starlink is more like 100W constant.

I see reports of 5-10W on Ubiquiti gear.

How much of that is dumped into the antenna? It’s LOS so it shouldn’t need too many watts for TX.

ISPReview have a good article on this. It's more than I expect. https://www.ispreview.co.uk/index.php/2021/03/electricity-co...

From the article, the author suggests 100W continuous using PoE++.

Thank you for this! I was going to sign my parents up. They live closer to Boise.

Are the speeds symmetrical? In other words upload is the same as download?

No; I've had a beta terminal for months, and upstream averages 15-18 Mbps with brief bursts to 30.

15-30 Mbps is about what I get upstream on my “gigabit” cable service.

That's typical on DOCSIS 3 cable, since they intentionally allocate and bond a much greater number of channels for downstream capacity, to match the usage patterns of hundreds of users in aggregate. The actual amount of RF spectrum available for upstream is quite small in a typical configuration.

You can get a modem with 32 down and 8 up channels. 8 up channels can support over 200 Mbps. But Comcast will still limit you to 5-10 Mbps.

Then there's DOCSIS 3.1, which actually supports up to 1 Gbit/s up, but Comcast still only gives you 35 Mbps on their gigabit plan.

IMO it just comes down to Comcast and other cable providers being cheap and not investing in their infrastructure to provide better upload speeds, even though the tech itself is capable of it.

Nearly all cable networks use a low split, cutting off upstream bandwidth around 42MHz. Not all bandwidth from 0 to 42MHz is usable due to external interference; most cable systems can fit 4 full-sized 8MHz DOCSIS 3.0 channels into this space, plus one partial channel of 3 to 4MHz.

Certain ISPs like Cox have started using OFDMA (DOCSIS 3.1) upstream channels, as it is 50% more efficient than classic DOCSIS channels and can operate closer to spectrum with interference, since it can run subchannels at lower modulation.

Just because the modem is capable of 8 upstream channels does not mean your local segment to the CMTS is configured that way.
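Putting the numbers from this sub-thread together as back-of-envelope arithmetic (the per-channel throughput is an assumed ballpark for a wide upstream channel, not a figure from the thread):

```python
# How many full-sized upstream channels fit in a low-split plant?
LOW_SPLIT_MHZ = 42   # upstream spectrum ends around 42 MHz
CHANNEL_MHZ = 8      # channel width cited above
print(LOW_SPLIT_MHZ // CHANNEL_MHZ)  # → 5 in theory; ~4 usable after interference

# Capacity of an 8-channel bonded upstream, assuming ~27 Mbps usable/channel
PER_CHANNEL_MBPS = 27
print(8 * PER_CHANNEL_MBPS)  # → 216, i.e. "over 200 Mbps" as claimed above
```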

I hope they can change that; we all need more upload these days, working from home.

So, how do you use these devices in an SD-WAN active/active configuration? What is your unified VPN solution that lets you use both upstream channels simultaneously?

Have you been using ssh at all? How's that experience? That's my main use case.

How about video/audio calls? does Wi-Fi calling work well?

Yes, I SSH all day every day, and it has been totally fine (at least while the service is up; there is daily downtime currently). Video calls are the same: no issues (when it's up).

That said, when I'm on my own machines or ones that I can install things on, I can't recommend mosh highly enough. I've literally gotten on a plane and had the shell pick up thousands of miles away without missing a beat once the laptop was back online at the hotel.

Not OP, but I've set up a continuous reverse SSH from a farm in the South to a Comcast residential service in the Bay Area.

SSH works, but there's enough latency and general network variation that I don't think it's quite a good enough experience to spend a day remotely editing files.

For anything not requiring really low-latency, Starlink absolutely shines. Watching the local news from my childhood farm on the other side of the country via satellite internet feels like the future.

Ahh, kids of today.

I'm on traditional satellite for one week out of every two - at 600ms RTT. I'm either SSH'ing into hosts over that link, or worse, using a Citrix VDI to access (mostly) SSH terminals at the far end.

It's tolerable but far from enjoyable.

30ms latency would be an utter delight, not just compared to geostationary, but also compared to what we had in the late 20th century in terms of terrestrial connections.

I’m on MS Teams video calls a lot. It’s fine for speed, except sometimes the connection drops for a minute. I assume because a satellite is not overhead at that time. Supposed to improve as more satellites are sent up.

Is the app absolutely required?

I wish I had 60 Mbps :( That would be a big upgrade for a lot of people.

Wow, I'm impressed! I thought it would take more than a decade to implement when it was announced, but it's here and it works. Just amazing.

100/16 Mbps is pretty decent I guess, hopefully it doesn't go down as the number of users goes up. The latency is great imo, 40ms using satellites? I don't think anyone has achieved that before.

Would a bigger dish work better or not?

Aiming for 20 ms later this year. (Elon time)

A bigger dish would not lower latency but may increase signal strength, leading to better throughput. But it's not a simple parabolic antenna. It is the first consumer-oriented phased-array antenna, tracking the satellites as they move across the sky, so a bigger one would increase the antenna cost even more.


> A bigger dish would not lower latency but may increase signal strength leading to better throughput.

A better signal strength would probably lead to better modulations being used, and therefore less transmission and reception time and better latency, or am I wrong? (At least that's the case for Wi-Fi: the better the signal strength, the lower the transmission time, so the better the latency.)

Latency in this case is dominated by time-of-flight to the satellite.

At 100 Mbps with a 40 ms latency, there are about 2 megabits in the air between the ground station and CPE.
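That "2 megabits in the air" figure is just the bandwidth-delay product, which can be checked directly:

```python
# Bits in flight one way: link rate x one-way delay (half the 40 ms RTT).
rate_bps = 100_000_000       # 100 Mbps
one_way_s = 0.040 / 2        # 20 ms each direction
print(rate_bps * one_way_s / 1_000_000)  # → 2.0 (megabits)
```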

> Latency in this case is dominated by time-of-flight to the satellite.

Is it though? Starlink orbits at 550 km, time-of-flight from ground to satellite to ground would be only 3.7ms, twice that makes ~20% of the roundtrip latency.
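A quick check of that time-of-flight figure, ignoring slant angle and processing delay:

```python
# Ground -> satellite -> ground at 550 km altitude (one direction of the RTT).
C_KM_S = 299_792.458
one_way_ms = 2 * 550 / C_KM_S * 1000
print(round(one_way_ms, 1))              # → 3.7 ms
# Both directions as a share of a 40 ms round-trip latency:
print(round(2 * one_way_ms / 40 * 100))  # → 18 (%), i.e. roughly 20%
```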

There are more than 2 roundtrips. Any MAC that has to perform time division multiplexing on a shared uplink has to poll all base stations over time to figure out which ones have data to transmit, and how much data is queued. Once the satellite knows how much data the ground station has to transmit, it assigns sufficient timeslots, transmits the assignment and then waits for data to come back. This is very similar to PON networks where upstream is shared, but the difference is sub-1ms latency vs 4-5ms latency. Sadly, this does have unfortunate latency implications for how long web pages take to load and render. Streaming video should, however, work swimmingly.

It’s interesting they’d do TDMA over some flavor of code division multiple access (CDMA).

From what I read it has to do with the fact the antennas are high gain directional antennas and not omnidirectional ones like on cell phones. With cell phones you are kinda walking around in a soup of cell signals all sharing the same spectrum at once... you and hundreds of other people are broadcasting in the same frequencies at the same time and they all tell each other apart because they all use a different “language”; the Wikipedia CDMA article does an excellent job explaining this.

I would think that as more satellites get launched they could use WCDMA and signal from your station could be seen by multiple satellites in orbit much like a cell phone can reach multiple towers.

Writing it out... I bet TDMA is required because the FCC would never grant a block of spectrum where hundreds of thousands of ground stations were using low gain, somewhat omnidirectional antennas to reach a constellation of satellites in space....

It's even more complicated than that. Thanks to MIMO antenna arrays, signals from multiple ground stations can be received and decoded at the same time (MU-MIMO). The advances in radio MACs over the past 20 years are seriously impressive compared to what was considered high tech in the 1990s, and it's all a result of Moore's law making it cheaper to do more math in the same size and power envelope as older semiconductors.

Had to look up MIMO.

That is actually really cool.

So it is doing some combination of time division and spatial division.

550km is the closest approach. Usually it will be at least sqrt(2) times that (ie at least a slant angle of 45 degrees), maybe 2 times that. Plus the latency from the Gateway to the actual server. And the Gateways can have even greater slant angle to the satellite than the mobile terminals.

I don't think it'll usually be at a slant angle of at least 45°. The beta requires a field of view of 100° after tilting. I can't find the maximum tilt angle, but SpaceX has authorization to transmit only 25° above the horizon, so the maximum slant angle is 65°.

But think what that means in terms of SOLID angle, not linear angle.

Orbital planes are so close together that it doesn't make much difference. However, now that I've actually calculated it, your sqrt(2) factor seems to be about right for the average distance -- there's too few satellites per plane in the current phase.

In this phase Starlink uses 72 orbital planes, with 22 satellites per plane, so 1440 satellites in total (they're almost there). It orbits at 550km above Earth's surface, so the orbit has radius 6921km, which gives an orbital length of 43486km.

Separation between orbital planes varies depending on your latitude, but assume the worst case, where it is 43486km / 72 / 2 = 302km¹. Thus, the nearest orbital plane is at most 302km / 2 = 151km away from the orbital plane directly overhead. However, since the planes precess, on average the nearest orbital plane is only half that, or 76km away from the plane overhead.

Satellites within each plane have a separation of 43486km / 22 = 1976km. Thus, there's always a satellite at most 1976km / 2 = 988km away¹ from any point in each orbital plane, and on average there's a satellite half that away, or 494km.

Adding all this together, the nearest satellite is on average √(550^2 + 76^2 + 494^2) = 743 km away (at the worst latitude).

[EDIT: Actually, that's improper averaging, the correct average is obtained with ∫√(550^2 + x^2 + y^2) dx dy / ∫ dx dy on x=0..151, y=0..988, which yields 777km].

The original plan used 24 planes with 66 satellites, which reduces average distance to 617km. At more favorable latitudes the difference with the current design would be even larger.

[EDIT: This should be 635km.]

¹ This is distance on the surface of the orbital sphere, straight-line distance is a bit less. It probably doesn't make much difference.
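The averaging in the parent comment can be reproduced numerically (all figures from the comment; the Riemann sum stands in for the double integral):

```python
import math

# 72 planes x 22 satellites at 550 km altitude, worst-case latitude.
R_EARTH = 6371.0
ALT = 550.0
orbit_len = 2 * math.pi * (R_EARTH + ALT)   # ≈ 43486 km

x_max = orbit_len / 72 / 2 / 2   # ≈ 151 km: max distance to the nearest plane
y_max = orbit_len / 22 / 2       # ≈ 988 km: max distance along the plane

# Improper average (plug in the mean offsets directly):
print(round(math.sqrt(ALT**2 + (x_max / 2)**2 + (y_max / 2)**2)))  # → 743

# Proper average: midpoint Riemann sum of sqrt(550² + x² + y²) over the cell.
n = 400
total = sum(
    math.sqrt(ALT**2 + ((i + 0.5) * x_max / n)**2 + ((j + 0.5) * y_max / n)**2)
    for i in range(n)
    for j in range(n)
)
print(round(total / n / n))  # → 777
```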

This is a great comment. Thank you for writing it.

It could also mean less retransmission, depending on how the MAC works. That would lower the average latency.

So, could multiple single units be coupled together?

I assume that would be less expensive than creating larger circuit boards, shipping and packaging.

I read that as "future versions of the antenna will be much improved", in price and performance.

I know Moore's Law is being repealed, but that's still how new types of electronics work, right?

Moore’s law was specific to the way IC manufacturing worked. While it does (/did?) help reduce the price of electronics simply because they relied on ICs, the big price decreases you see in products after their early-adopter phase are likely just general economies of scale.

Making small early batches of anything is more expensive per unit than making a ton of them, regardless of whether it’s cutting-edge tech or a plastic chair.

I think in the antenna's case, the primary goal would be to reduce costs for now (while still maintaining reliability), since the things currently cost $1500 per unit to make (SpaceX takes a loss on each new customer initially, for now).

I have seen latency as low as 15.85 ms from my Starlink terminal to downtown Seattle. On average it's more like 22-23 ms.

Centurylink DSL from across the sound to Seattle is generally 21ms, although there is little deviation. (Two line bonded vdsl)

Just the segment from your modem to the DSLAM is probably 14-16ms, which is typical for VDSL2.

> 40ms using satellites?

Starlink sats are in low earth orbit (about 340 miles / 550 km up). The really high-latency sats that people used to use were geosynchronous sats parked about 22,000 miles away, which makes the round-trip delay of earth, bird, earth painfully long.
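The difference is easy to see from the speed-of-light round trip alone (a lower bound, ignoring slant range and processing delay):

```python
# Round trip = four traversals of the altitude: up, down, and back again.
C_KM_S = 299_792.458
for name, alt_km in (("GEO", 35_786), ("Starlink LEO", 550)):
    rtt_ms = 4 * alt_km / C_KM_S * 1000
    print(f"{name}: ~{round(rtt_ms)} ms")  # GEO: ~477 ms, Starlink LEO: ~7 ms
```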

> 100/16 Mbps is pretty decent I guess

It's fantastic; can't wait until this is available where I live. Currently paying $200+ a month for 20 Mbps from a local wireless company.

Two miles north of here, AT&T fiber and Comcast are available; 5 miles south there is Comcast (150 Mbps). But I'm in a small community of homes where the only options are satellite or fixed wireless.

Make friends with someone on fibre 2mi away with an old TV tower or at the top of a hill. Set up your own link.

Yeah, that's how we got most of our Internet access here in the Czech Republic, starting ~2005. The monopolistic telecom company was totally incompetent, so people banded together, bought some upstream connectivity, and then connected themselves to that link via wireless point-to-point links using commodity WiFi hardware with directional antennas.

Since then, most of these networks are still in place, now in 100 Mbit/Gbit territory. And while most turned into commercial ventures over time, some still operate as community organizations to this day.

This seems weirdly specific and not terribly practical advice

how so?

Find an old tower? c'mon

The latency may well be better than landlines in some cases. For instance, data from an edge cache in your city will be faster than starlink. But data from china? Starlink may win.

I wonder if it could improve gaming/video conferencing with people far away.

Right now, that's not really the case though, as the satellite interlinks aren't operational yet. All your traffic goes through a base station relatively close to you, and continues over "regular" fiber around the world.

This article and accompanying videos are interesting in that regard: https://www.circleid.com/posts/20191230_starlink_simulation_...

It suggests they could be lower latency than a great-circle path ground fiber without the satellite interlinks.

If you’re paying 20+ms per bounce through the atmosphere, no chance.

Not true as the slant angle can be pretty large for Gateways. It is still in principle possible to beat fiber.

I know my Spectrum cable connection gets high 30s range for latency. Starlink is close at low 40s, and as they get more sats up, that latency average may go down.

> I thought it would take more than a decade to implement when it was announced,

The idea of low orbit satellites for internet has been around at least since the late 1990s [0]


The cell grouping is interesting. A colleague likes the outdoors, so he's preordered one for his Suzuki Jimny, to mount on it. I wonder if Starlink are considering this use case.

I haven't been able to preorder mine, because we're planning on moving out from the city to a small village next year, but the Starlink website requires a street address.

Our villages are quite primitive, no street names (I think it's cos nobody's thought of it). So, the nearest town where there's street names, is quite far.

I was feeling uneasy about using it as an address, this article sort of cements that concern.

I have 50/50Mbps fiber, but reckon we could still be served by 20 down if needed. Exciting!

Starlink has said that they're planning on offering a roaming use-case, but they aren't there yet (https://www.slashgear.com/spacex-starlink-cell-location-limi...).

Part of the issue is that Starlink cells are going to be very limited in capacity for the foreseeable future: https://www.ispreview.co.uk/index.php/2020/09/analyst-probes.... Cowen suggests that Starlink "should eventually be able to serve 485,000 simultaneous data streams in the USA with 100Mbps speeds or 1.5 million streams with over-subscription." That's in late 2026 or 2027 when Starlink has deployed around 12,000 satellites (they're at about 10% of that now). If a bunch of people decide to bring Starlinks to a popular area, the cell for that area simply won't have the bandwidth to support all those users. Imagine people going to Burning Man with lots of Starlinks or bringing their Starlink when they go on vacation. It's not meant to be a portable WiFi hotspot. I'm guessing that portable use might cost more since Starlink has to assume that you might be taking up capacity in places where bandwidth is more scarce.

In terms of preordering, you can order without a street address. Starlink's website says, "Can't find your address? Try a Plus Code with City" (and links to https://support.google.com/maps/answer/7047426?co=GENIE.Plat...).

One thing I would also point out about Starlink is that they only guarantee that you'll be able to use the $500 dish (plus $50 shipping) for 12 months before being forced to replace it. Starlink is new tech and I'm guessing SpaceX wants to be able to upgrade things without maintaining support for less-efficient, older equipment. I don't expect them to force upgrades on people on a whim, but they do spell out that the $500 dish might not be allowed on their network a year after your purchase. I don't think they want to make customers unhappy, but I think they want to make sure they can upgrade their network without getting sued for not supporting expensive customer-purchased equipment forever.

> they only guarantee that you'll be able to use the $500 dish (plus $50 shipping) for 12 months

I would guess that if they disconnect your dish from the network, they'll give you a new one for free. I know that isn't what their terms and conditions state, but it would be bad business to do anything else.

They'll hope that most users choose to upgrade first (for more speed or other features).

$500 is pretty much free for this equipment. Production price is currently around $3000 according to SpaceX president Gwynne Shotwell [0].

[0]: https://twitter.com/jeff_foust/status/1379457265435078665

That tweet says <$1500. ("less than half the original $3000")

33% of the manufacturing cost isn't nearly free, in my opinion, but it is definitely a loss leader product.

Since the antenna can't be used for anything other than receiving starlink service, it isn't really logical not to consider the cost as "$500 + $100/month forever"

> Imagine people going to Burning Man with lots of Starlinks or bringing their Starlink when they go on vacation.

I deeply hope people aren’t going to those to download OS updates and watch Netflix. Might have some crazies that feel the need to livestream the whole thing, but should be okay if only a handful at a time.

A neat way to prioritize users is to give priority to whoever has downloaded least in the past 24 hours.

Then those who abuse the data get slow service and nobody else.
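That scheduling idea is essentially a sort by trailing usage. A toy sketch (the names and byte counts are made up for illustration):

```python
# Serve whoever has used the least data in the trailing window first.
usage_mb = {"alice": 1200, "bob": 40, "carol": 300}

# Lowest recent usage gets highest priority.
priority_order = sorted(usage_mb, key=usage_mb.get)
print(priority_order)  # → ['bob', 'carol', 'alice']
```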

> Imagine people going to Burning Man with lots of Starlinks or bringing their Starlink when they go on vacation.

That is exactly why I would be interested in it. :) Not just Burning Man, but other events. We do special event medical and having some reliable roving internet access like Starlink would be absolutely magical for us.

You could imagine a handful of starlinks providing the backhaul for public wifi at an event.

"he's preordered one for his Suzuki Jimny, to mount on it. I wonder if Starlink are considering this use case."

It sounds like they don't support a roaming base station for now. From the FAQ: https://www.starlink.com/faq

Can I travel with Starlink, or move it to a different address?

Starlink satellites are scheduled to send internet down to all users within a designated area on the ground. This designated area is referred to as a cell.

Your Starlink is assigned to a single cell. If you move your Starlink outside of its assigned cell, a satellite will not be scheduled to serve your Starlink and you will not receive internet. This is constrained by geometry and is not arbitrary geofencing.

Sounds like some sort of authorization comes down from the satellite, and they don't want to have to push all authorizations from all satellites. Which is odd for a full duplex service. Searching around google suggests a "cell" is roughly a 4.5 mile radius circle, and you probably aren't in the center. So movement would be pretty restricted.

This is changing soon. SpaceX filed with the FCC for approval for mobile ground stations on vehicles like RVs, boats, and planes. It's coming! https://fcc.report/IBFS/SES-LIC-INTR2021-00934/3877177.pdf

I’m guessing roaming clients is how they are actually going to make money.

I wonder if you can schedule changes to this.

I imagine they'll eventually have more expensive plans available for mobile/roaming applications, but obviously you want a well spread base load to avoid saturating any one area. It sounds from some of the other posts like they also do a bunch of active assignment/handoff stuff as the satellites pass, and that gets a lot more complicated once you're also accounting for moving ground stations.

You don’t need an address. I’m in the preorder queue for a small island with no streets, much less addresses.

whoa. Is it your island? What region? How large is it? I need more details.

You can put a google plus code (like coordinates) into the starlink address finder for places without street addresses.

Why not just coordinates? Talk about overcomplicating.

Probably because a plus code is harder for someone to screw up.

Find the place on Google Maps, copy the plus code.

Rather than entering lat/lon coords... except did you get the sign right, or maybe you flipped lat/lon, or got a significant digit out.

… or you used minutes with a decimal minute, or minutes and seconds, or maybe decimal degrees.

There comes a time when a format has so many confusable variants, it’s best to make a new unambiguous format.

I prefer “three words”. You get three words and they identify a location suitably accurate for navigation. https://what3words.com/

If only 3 words wasn't a proprietary black box, which demands that you use their service / api and that you are not allowed to reproduce it without their assent. Plus there's a bunch of other drawbacks to w3w if you just search a bit online.

At least the algorithm for plus codes is known and can be reused even if Google decides to drop it in the future.

A system that depends on a functioning proprietary API to resolve coordinates is idiotic. That’s so unreliable that it can’t be used for anything more than an ephemeral exchange. At that point you might as well have a 3 word url shortened link.

I prefer an unambiguous and open way of communicating locations on the surface of the Earth, to a proprietary service from a company with a long history of pulling the plug on products.

"Plus codes" are also known as "open location codes", and they are open source, not proprietary, nor do they rely on a central service.

You appear to be conflating other proprietary systems with this open one.



Interestingly, the plus code for me is almost working. It's sending me to another place once I've put in the location on the site.

It's a good start though, I'll try other close plus codes until I get it right. Would still have been better to use coordinates.

Clicking on a map location in Google maps shows a context menu where the first entry is the latitude and longitude, click that and it copies it to the clipboard. No need to use a Google specific encoding.

I'm not sure that using Google Maps to demonstrate copying a single way of encoding coordinates vs using an open standard is a terribly convincing argument.

Having a field that takes lat and lon has all sorts of ways to enter data either incorrectly, or in an unexpected format - if someone has coordinates from some other source they might be typing it in, rather than copy/pasting.

Also, as others have mentioned - plus codes (or Open Location Codes) are an open standard that can be implemented by anyone under an Apache 2.0 license with a whole bunch of example implementations on github[1]

[1] https://github.com/google/open-location-code

> Probably because a plus code is harder for someone to screw up

Probably because a plus code is harder for someone to use with non-Google services and open source map tools

It's open source... anyone can encode and decode plus codes: https://github.com/google/open-location-code
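To illustrate how open the format is, here's a minimal 10-digit encoder written from the published algorithm (a sketch, not the official library linked above):

```python
# Minimal Open Location Code ("plus code") encoder, 10-digit precision.
ALPHABET = "23456789CFGHJMPQRVWX"  # base-20 digit set from the spec

def encode(lat: float, lng: float) -> str:
    # Shift to positive ranges, then quantize to 1/8000 of a degree.
    lat_val = int((lat + 90) * 8000)
    lng_val = int((lng + 180) * 8000)
    code = ""
    for place in (160000, 8000, 400, 20, 1):  # five base-20 digit pairs
        code += ALPHABET[lat_val // place % 20]
        code += ALPHABET[lng_val // place % 20]
    return code[:8] + "+" + code[8:]

print(encode(47.365590, 8.524997))  # → 8FVC9G8F+6X (example from the OLC docs)
```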


Interesting edge case of no street names (which in retrospect is not an edge case at all given the use case for remote users!). Wonder why they haven’t allowed GPS coordinate input?

They do when you sign up in a geographic location as an "available to ship now" beta customer: after putting in your address, it brings up a satellite/Google Maps view of your area and asks you to zoom in and click on the precise location where the terminal will be installed.

Yeah man strapping this puppy on my camper was my plan. It will be interesting to see how it works out

Aside: The router design is as impractical as it is futuristic. The thing would fall over if you looked at it sideways, and the solitary LED on the front was hard to see unless in a dark room or looking closely, straight at it. Hopefully a 2nd iteration will be better!

Wow, he's right about the shape of the router. It also looks like it forces the ethernet cable in front if you want to be able to see the LED, and the cable itself is pointed downward: https://preview.redd.it/42rc9fkqwnw51.jpg?width=960&crop=sma...

I suppose that's a minor nit, given what Starlink delivers, though. I'm curious how practical it might be for on-board aircraft wi-fi. That's a space that could use a leap in bandwidth/tech, as FAA certification makes it difficult to keep equipment current. I'm curious if tracking is hard since the satellites and the "ground station" are both moving around...the aircraft on all axis points.

>Wow, he's right about the shape of the router. It also looks like it forces the ethernet cable in front if you want to be able to see the LED, and the cable itself is pointed downward: https://cdn.hackaday.io/images/8430831598574092393.jpg

I'm not sure the image you link is a fair representation of the router. The one he actually shows is SIGNIFICANTLY shorter/smaller/wider, and honestly, while it's similar, it's a different shape.


I have one and it's really lightweight and prone to tipping over; it was only used for five minutes during the initial setup. Based on reports from people who found the serial pins and watched its bootloader, it's based on a customized OpenWrt; the circuit board is pretty basic.

there's some good shots in here of the router removed from its enclosure: https://www.youtube.com/watch?v=ObCTB8ol3Ng

It doesn't seem much more complicated to me than the PCB of something like a MikroTik hAP ac2 ($65); based on that simplicity I'd say it's like a $40-50 router BOM, max.

Hrm, okay. Linked to a different image that shows the shape from two different aspects.

Acting like the LED not being bright is a /negative/ thing is odd to me. Why would you care if it’s lit up unless you’re investigating an issue and you’re right in front of it anyways? Just seems well designed to me.

I absolutely detest bright LEDs, but in this case, it's a dim LED set back in a recessed hole in the router that is hard to make out at all unless viewing it straight on.

Better this than a blaring blue LED I guess, but it would be nice to just have it easier to see in general. Having a shiny brushed reflective surface to the router (instead of, say, black or pure white) exacerbates the problem.

The router looks 'cool' like the Cybertruck, but could use some refinement physically. The shape also makes it hard to mount on a wall, or even to place it horizontally if the need arises.

Note that one can use any router with the dish. Basically the dish provides you with a NATed private IP which you can assign to the WAN interface of your router of choice and go from there.

It’s a bit more complicated than that if you want reliability. They have two IP ranges on the same physical link, and their patched DHCPD is not standards compliant. If the dish loses its connection, you will be stuck with a useless local IP unless you block DHCP leases in the 192.168 range.

Oh, that's interesting. Is the router then completely just a commodity thing that has no specific Starlink functionality at all?

Edit: Some Reddit posts suggest the Starlink app doesn't work if you don't use the router. But also that the app isn't terribly useful outside of the "obstructions view", which is mostly a one-time need during installation.

The Starlink app works with your own router; you just need to set up a static route.


It works fine if you add a static interface route on the WAN interface of your router.
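As an illustration on a Linux-based router — the WAN interface name `wan0` is assumed, and 192.168.100.1 is the address commonly reported for the dish's status endpoint (verify for your setup):

```shell
# Send traffic for the dish's management address straight out
# the WAN interface, bypassing the default route, so the app
# can reach the dish even behind your own router.
ip route add 192.168.100.1/32 dev wan0
```

Most consumer router UIs expose the same thing as a "static route" entry with that destination and the WAN port as the interface.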

That's correct: the Starlink app only works when using the included router. The app is useful if you like to see numbers and graphs of dropped packets, throughput, etc., but otherwise isn't needed.

I haven't tested this yet, but apparently if you use your own router you may be able to get an IPv6 address assigned.

I use the Command interlocking strips for all my devices that I want standing up: https://www.command.com/3M/en_US/command/products/~/Command-...

E.g. mounting Echo Dots near a ceiling, keeping little lamps in place so children can't tip them, standing routers up, etc.

I like the satisfying ‘click’ when things snap into place :)

> I'm curious how practical it might be for on-board aircraft wi-fi

I suspect it'll work well for this scenario. My understanding is that the cruising speed of a plane is relatively slow compared to all the other motion involved (LEO satellites move fast!). The US Air Force is reportedly working with SpaceX to test Starlink in various conditions including in flight https://www.avweb.com/aviation-news/air-force-testing-starli....

> It's powered through PoE++ (using around 100W of power continuously)

Not familiar with Sat gear, but this number seems extremely high to me for just powering an antenna (and some motors occasionally)

Other users have measured power consumption around 100W constant in operation. https://www.reddit.com/r/Starlink/comments/l22f1u/power_cons...

Power consumption could probably be a bit lower, but there are limits. Keep in mind that this is a phased array maintaining a relatively high bandwidth and high SNR link with the satellite. This is complicated microwave electronics design, a whole different ball game than old grandpas bragging about how they made a contact with someone across the ocean with 5W on their homebuilt HF radio. Starlink may have also made a conscious decision to make the user terminals "overpowered" so they can use a less sensitive receiver on the satellite, saving SpaceX weight, power usage, construction costs, and launch costs. The cost of more power usage is paid for by the user with an extra penny per month in their electric bill.

More like $10/month, not a penny.

That's assuming that you could get the power consumption down to zero, which is impossible. While a penny was an exaggeration, a 5W power savings over a month is about 50 cents savings on someone's power bill.

Yep, for me it would be $8.50.
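For anyone checking these numbers, the arithmetic is straightforward; the US electricity rate of ~$0.14/kWh below is an assumption, so plug in your own:

```python
def monthly_cost(watts, price_per_kwh, hours=730):
    """Energy cost of a constant load over ~one month (730 h)."""
    kwh = watts / 1000 * hours
    return kwh * price_per_kwh

# 100 W continuous at an assumed ~$0.14/kWh
print(round(monthly_cost(100, 0.14), 2))  # prints 10.22 -> roughly $10/month
# A 5 W saving at the same rate
print(round(monthly_cost(5, 0.14), 2))    # prints 0.51 -> ~50 cents/month
```

This matches both figures in the thread: ~$10/month for the full 100 W draw, and ~50 cents for a hypothetical 5 W saving.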

For reference, this is an order of magnitude greater than the typical power consumption for existing satellite dishes / 4G modems, which typically operate at around 10W of power consumption.

For reference, most existing satellite dishes are receive-only, and 4G modems only have to close a link with a tower a few miles away, at frequencies where atmospheric attenuation is much lower.

Huh? You think existing satellite internet is receive only? How exactly would the web work in such a system?

> Keep in mind that this is a phased array maintaining a relatively high bandwidth and high SNR link with the satellite.

Is it transmitting anything to the satellite when you are not actually trying to send internet data?

It's definitely doing... something. I had my Kill-A-Watt on it for a few hours with some usage and lots of idle periods, and between the dish and router, power consumption was never below 94W. Average around 104W, peaks at 124W.

Motors weren't being used during my measurement period either.

The dish gets pretty warm in operation (it was already a warm spring day, so it wasn't trying to de-ice or anything).

My guess is that it's always communicating with the satellites to adjust its position.

For anyone interested in a teardown, Shahriar at The Signal Path is a pro and he did one here:


Highly recommended. Goes deep into the antenna design and overall components (and why it costs so much to manufacture).

Are you in the UK? (Wondering where pennies are still in common usage.)

Here in Australia, a constant 100W draw will cost me ~A$250/year to keep it powered. That's about 160 euros, or 200 USD.

And it has 2 motors for dynamic aiming

There’s a pretty crazy amount of electronics in the dish. They are likely selling it at well below cost at present.

Teardown showing PCB at: https://youtu.be/iOmdQnIlnRo?t=2152

And more detailed RF analysis at https://youtu.be/h6MfM8EFkGg

The CEO just mentioned first version of the dish cost $3000 to manufacture, and it's currently down to $1500. Major goal is to reduce manufacturing cost, since they're selling the thing for $500 and taking a loss right now.

The closest thing I have previously seen to a Starlink phased array is the flat-panel phased-array radar in the nose radome of an air-superiority interceptor type aircraft.

It's not just an antenna.

It's actually a whole satellite terminal as well as all the electronics for controlling the phased array. It can operate entirely without the router they give you — plug in a computer, or another router (into the white port on the PoE injector), and off you go.


If you're interested in watching a Teardown of Dishy, you can see one here: https://www.youtube.com/watch?v=iOmdQnIlnRo

100W is actually really low for the class of two-way satellite equipment that's capable of 100/15 Mbps speeds or greater.

I am not talking about ordinary cheap consumer-grade HughesNet or Viasat stuff, but if you were to do the power budget for an iDirect X3 modem and a traditional geostationary Ku-band VSAT setup with a 20W BUC, the actual AC wall power consumed would be quite a lot more. Just the BUC is going to be 200W.

Hughes and Viasat modems idle at about 30W and peak at 70W. Both are capable of 100/15 speeds.

But in real-world use, often much, much less. The average speed-test screenshot I see from unhappy Viasat and HughesNet customers is like 2 to 5 Mbps down and 0.9 Mbps up. Maybe at 0300 in the morning.

What I was referring to is the class of VSAT equipment much more costly than a proprietary Viasat or HughesNet terminal, which the ordinary consumer will never see, that's capable of doing a dedicated 100/15.

Fair enough, and you seem reasonable based on all your comments. Why is anyone giving any credence to the Starlink speed tests? Beta testers are maybe 5% of the target user base they'll need to be profitable with this number of satellites, yet again and again, nobody mentions that. The funny thing is that everyone dismisses the common outages as a beta issue, but the speed and latency are never considered a beta feature.

Just for comparison, a satellite TV dish uses approximately 30 watts.


A satellite dish does not transmit any information to the satellite. Satellite TV is a pure broadcast system in the forward direction.

Moreover, comparing a parabolic receiver with a phased array is quite unfair. The amount and complexity of the electronics and processing power required is several orders of magnitude different.

A better comparison might be HughesNet. Not perfect since it's different classes of satellites, but their modem is spec'd at drawing up to 60 watts on their latest gen. Though, in practice, it tends to draw around 15-25 watts, at least according to comments I can find online.

One of the benefits of this (maybe justified retroactively, but very real) is that it melts snow off the dish.

Seems to me that most "normal" satellite gear like satellite TV is receive-only.
