Deploying a global, private CDN (edgemesh.com)
114 points by jloveless 51 days ago | 88 comments



Ugh, I hate these P2P "CDNs". A few years back, CNN tried this for their streaming video with technology from a company called Octoshape. Users (including myself) were unwittingly conned into accepting the plugin in order to watch live video. This created a huge mess for big corporate IT departments, who suddenly had hundreds or thousands of desktop machines streaming out video whenever there was a major news event.

I realize that this is a server-side option now, however. Still, it's a crappy deal. A decently-sized deployment of public cloud boxes to support your private CDN is going to cost far, far more than an actual CDN. Public cloud bandwidth is obscenely priced compared to what you can get it for on the CDN market.


Author here:

>"Public cloud bandwidth is obscenely priced compared to what you can get it for on the CDN market"

Amen to that! I think where this comes into play is when you've already got colo space and excess capacity (e.g. eBay etc.) and/or you'd like to leverage other edge PoPs outside of your provider (e.g. mainland China). But it also adds some level of protection against correlated backbone issues if you can add p2p edges across other providers (similar to Netflix's design). When we looked at the correlation across existing CDN providers we found it was ~95%[1].

Video streaming specifically is _especially_ bandwidth intensive and will definitely cause issues in corporate LANs. It's one of the reasons we added ASN-categorized blacklisting (e.g. residential vs. hosting vs. corporate).

[1] https://blog.edgemesh.com/understanding-diversification-netw...


Hello, lead engineer here. Just to clarify, you don't need a plugin to enable Edgemesh as a user. Everything is 100% browser compliant. Your webmaster adds our one line of code and our one JavaScript file, and you are done. Your users will never even see a pop-up.


Obviously the plugin isn't the issue; the issue is that my browser will now be uploading X MB/sec to random peers, which I have no control over. This sucks, especially because I have capped monthly bandwidth from my penny-pinching ISP.

Why should I host and seed your data for free?


We detect if you are on a metered connection and we disable all seeding functionality to ensure you are not paying for the bandwidth we use. We also have an opt-out mechanism, but that is up to the installing site to implement. The reason you would want to host data is that you get a faster internet experience. We fill your cache with assets that YOU are likely to request in the future. Performance is our driving metric.


You and jloveless have been commendably open in this submission. Personally, I think that if you're not eating into someone's limited data cap, then you're meeting what should be expected of you. Demanding that you deliver your content in a certain way is silly, IMO.

Say I have a web application that displays a complex rendering of millions of constantly changing points, and for some reason it's very expensive for me to do the computation server-side. However, it's easy to write some JavaScript that renders the millions of points on the user's computer. It's absurd to say I'm being unreasonable for streaming more data to the user instead of rendering frames and streaming video. Using my upload speed is annoying, but it's still stupid to pretend that using a website is entirely one-sided. It's like complaining about ad bandwidth.

Abuse is one thing, but this isn't categorically bad. Plus, it's really cool!


Thank you! We work _really_ hard to ensure we're staying off the CPU, managing disk, and making every replication event count (generally intra-ASN). But it's also really valuable for our NGO clients (and other non-profits). My personal favorite is an aid program where they literally bring a Supernode on a laptop, set up a WiFi[1] point in the middle of nowhere, and can support a fully interactive site for refugees who have devices when they reach the camp. They can then find out where they are and what's going on - and you can power a surprisingly large site from a single laptop. It's also really helpful in places like sub-Saharan Africa, where in-region bandwidth capacity _dramatically_ outstrips off-country bandwidth.

[1] http://www.meshpoint.me/


That's fantastic! I have a personal vendetta against heavy websites specifically because of how unusable they are in remote countries, so that sounds just fucking awesome to me.


> Demanding that you deliver your content in a certain way is silly IMO.

Perhaps, but expecting that visiting a website won't transparently insert my computer into another company's CDN distribution scheme is not.

via: https://edgemesh.com/product

" ClientRecieve & Render

When a user visits an edgemesh-enabled site, their browser begins to execute the client side Smart Mesh™ accelerator. This code uses our patent pending distribution method to transparently and seamlessly join the edgemesh overlay network. While your web page assets are requested, the client side code analyzes the response time from your servers to the browser and will optimally decide when to request assets (images, videos, etc.) from the mesh network vs. fetching the assets from your server as normal. If the client obtains the assets from your servers, it alerts the Hub process to store these new assets on the mesh. Best of all, this dynamic crawling of your webpage means no more management of cache settings, even on dynamic content.

Smart Mesh™ ensures your users always have the most recent copies of the most requested assets, automagically. "

" HubMesh & Store

The Hub process is a client side Javascript engine which loads in parallel to the user's page load process. The Hub is the client side brains behind edgemesh, and allows the browser to effectively pre-cache content. The Hub communicates with the edgemesh signal servers and gets the optimal list of assets for this browser. Unlike simple peer enhanced solutions, the Hub allows for Cross Origin asset replication.

For example, if your users are viewing https://example.com the Hub process allows their browser to request cached assets from other active edgemesh users - even those currently viewing other sites! The Hub intelligently replicates the edge caches across geographies and networks, and in most cases ensures your visitors have a local copy of your content before they even know they need it. Best of all, the Hub ensures that your site joins the millions of other mesh enabled users - allowing you to tap into the colocated acceleration of peers across the entire community. "

Not quite sure how this is much different from a JS-based bot client / trojan horse, TBH - except that the traffic isn't officially "malicious", but rather part of some 'innovative and disruptive new startup tech'.

I look forward to seeing this go the way of Bonzi Buddy and Clippy.


I'm talking about seeding though. Seeding data to random peers will make my internet slower, not faster. I don't want to seed. If your service makes my machine start seeding to random people because I accidentally visited a website that uses your malware, then that sucks!

Maybe implement some kind of blockchain solution so that I get paid for the data I seed? (/s)


I have no interest in having my storage or bandwidth abused for anything that is not being shown on my screen right that very moment. And even then, uploading this content to other people is ludicrous. I will be sure to block your assets.

Anyone know of a good way to detect sites that are rude enough to abuse my network connection for their own gain?


Probably the best way is to check the WebRTC stats [1] if using Chrome. We sit atop the WebRTC stack for p2p functionality.

[1] https://testrtc.com/find-webrtc-active-connection/
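
If you want something more proactive than checking after the fact, a rough userscript sketch like this (illustrative only; it has to run before the page's own scripts) will flag any page that opens a data channel:

    // Sketch: log any page that opens a WebRTC data channel.
    (function () {
      const NativePC = window.RTCPeerConnection;
      if (!NativePC) return; // no WebRTC in this browser
      window.RTCPeerConnection = function (...args) {
        const pc = new NativePC(...args);
        const nativeCreate = pc.createDataChannel.bind(pc);
        pc.createDataChannel = function (label, opts) {
          console.warn('Page opened a WebRTC data channel:', label);
          return nativeCreate(label, opts);
        };
        return pc;
      };
      window.RTCPeerConnection.prototype = NativePC.prototype;
    })();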


Bravo for your transparent and open approach to user feedback. The standard these days seems to have fallen low, with countless companies implementing sneaky ways to exploit users however they can, with sugar-coated, obfuscated language. It's refreshing to see such honest replies from this project, especially considering the question is about how to avoid participating unwittingly. I also would prefer not to share bandwidth in this way without knowing, but as you described in another comment, I see there are real positive benefits when used ethically.


> We detect if you are on a metered connection

How so? What makes you think that is even possible?


What we do is we have a mapping of ASNs that are flagged as metered. When your client comes online, we take the IP, map it to the ASN, and determine if it is able to upload (e.g. on cellular/metered etc.). We buy this data today, and you can always drop an email to meter_notice@edgemesh.com with your IP and we will add it in. We also prioritize upload partners for known ASNs (e.g. you're more likely to be chosen for upload if you are on Verizon Business than Verizon Fios than Telstra).
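
Roughly, the decision looks like this (a sketch only - asnOf() and the policy table are stand-ins for the data we buy, and the AS numbers come from the RFC 5398 documentation range, not real networks):

    const asnPolicy = {
      'AS64496': { metered: false, uploadPriority: 2 }, // e.g. business fiber
      'AS64497': { metered: false, uploadPriority: 1 }, // e.g. residential
      'AS64498': { metered: true,  uploadPriority: 0 }, // e.g. cellular carrier
    };

    function canUpload(clientIp) {
      const policy = asnPolicy[asnOf(clientIp)]; // asnOf: IP -> ASN (hypothetical)
      return policy ? !policy.metered : false;   // unknown ASN: treat as metered
    }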


Verizon's FIOS explicitly forbids this in section 4.3:

3. Restrictions on Use. The Service is a consumer grade service and is not designed for or intended to be used for any commercial purpose. Except as otherwise set forth in this Agreement, you may not resell, re-provision or rent the Service, (either for a fee or without charge) or allow third parties to use the Service via wired, wireless or other means. For example, you may not provide Internet access to third parties through a wired or wireless connection or use the Service to facilitate public Internet access (such as through a Wi-Fi hotspot), use it for high volume purposes, or engage in similar activities that constitute such use (commercial or non-commercial). If you subscribe to a Broadband Service, you may connect multiple computers/devices within a single home to your modem and/or router to access the Service through a single Verizon-issued IP address, and if available through the Service, you may permit guests to access the Internet through your Service’s Wi-Fi capabilities. You also may not exceed the bandwidth usage limitations that Verizon may establish from time to time for the Service, or use the Service to host any type of server. Violation of this Section may result in bandwidth restrictions on your Service or suspension or termination of your Service.

Source: http://www.verizon.com/about/sites/default/files/Verizon-Onl...


Let me first say ... IANAL. That being said, this (like most legal language) is as broad as possible, by design. Having spoken with Verizon (Wholesale, Wireless and Edgecast teams), there seems to be a consensus that models that limit their (the telecoms') transit costs are encouraged, and there are a number[1][2] of commercial examples where that's the case. Indeed - their own CDN offerings don't (yet) have the economics to support more distributed caches, so something like this, which is lightweight and requires no DNS/infrastructure changes, is interesting. A place where this is getting a lot of discussion is where we'd least expect it: on the LTE networks. Since there isn't yet[3] a solution for mobile peering, there's a lot of discussion around solutions to run low-cost, lightweight caches _inside_ the Radio Access Network.

[1] Xbox One | https://www.nanog.org/sites/default/files/wed.general.palmer...
[2] Spotify | https://community.spotify.com/t5/Desktop-Linux-Windows-Web-P...
[3] http://datacenterfrontier.com/vapor-io-teams-with-tower-tita...


So you've blacklisted AT&T, CenturyLink, Cox, Exede, HughesNet, MediaCom, StarTouch, SuddenLink, and Comcast?

All of those have the majority of their subscribers paying extra fees once they cross an invisible usage line, AKA a "data cap".


Let me see if I'm understanding correctly.

You're using visitors' upload bandwidth, and you see not notifying them as a feature? I'm not sure I can see the justification for that.


But will users' bandwidth get abused?


Probably.

https://sig.edgeno.de/edgemesh.client.min.js is being added to my uBlock list.


Good call. I just submitted an issue to uBlock to get their JS client added to the block lists:

https://github.com/uBlockOrigin/uAssets/issues/659

Sorry Edgemesh team, but this kind of activity without user opt-in is not okay.
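
In the meantime, anyone can block it locally: adding a static rule like ||sig.edgeno.de^ under uBlock's "My filters" tab stops the client script from loading at all.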


No worries - that's exactly what uBlock is there for! :) We've struggled to find the best way to add opt-out on the client side without affecting the actual page itself (e.g. a pop-up etc.) and would love to get some thoughts on this - please PM me if you have any. In the meantime, we went with a more aggressive approach on network-based detection (e.g. metered connections, cross-ASN vs. intra-ASN replication, etc.).


A little notification does seem like the best idea to me. Obviously it would need to be zero-effort, way unobtrusive, and nice and reassuring. "Hey- on unmetered connections, this page may balance network load with your unused bandwidth. You shouldn't notice any difference in speed! [Learn More][Edgemesh CDN]" Maybe even the slightly more aggressive "Your unused bandwidth is helping to speed up other people's connections!" Pop a neat little box in the lower right hand corner on the first visit, and have it minimize/disappear after a couple seconds.

I think it's a little skeevy to have it be completely silent. That doesn't mean it has to be super loud though.
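
Something along these lines (a rough sketch - copy, styling, and timing all illustrative):

    // Rough sketch of the suggested notice; everything here is illustrative.
    const note = document.createElement('div');
    note.textContent =
      'Your unused bandwidth is helping speed up other visitors. Learn more';
    note.style.cssText =
      'position:fixed;right:12px;bottom:12px;padding:8px 12px;' +
      'background:#222;color:#fff;border-radius:4px;font-size:12px;z-index:9999';
    document.body.appendChild(note);
    setTimeout(() => note.remove(), 5000); // tuck it away after a few seconds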


We've added this in 1.7.2 [1]. That will roll into production tomorrow evening.

Thanks for all the feedback HN community!

[1] https://github.com/edgemesh/edgemesh/releases


Man, you guys are awesome and endlessly tolerant of the slightly fanatical hyperbole being directed at you.


Although I am not at all affiliated with Edgemesh as a company, I can tell you (first/second? hand) the personality and good-naturedness in this team is through the roof.

Plus it's just a killer product made by killer devs. Pretty sure Spotify does P2P cache-sharing too, BTW.


I think it should be a browser-level feature, just like allowing notifications or location access.

Browsers should have an option to follow one of four behaviors: a) allow all P2P connections, b) always ask, c) allow low-volume (say, <32 kbps) P2P traffic, but throttle and ask if the rate tries to go beyond the safe threshold, and d) deny all P2P connections. With b) or c) being a sane default, and a JS API to check permissions programmatically.

While this doesn't solve the problem right now (and would probably take a long while to happen), as a long-term solution, I think that would be the best way for everyone, providers and consumers.
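
Hypothetically, riding on the existing Permissions API, it could look like this ('p2p' is an invented permission name and mesh.start() an invented client hook - nothing like this ships today):

    // Hypothetical only: no browser ships a 'p2p' permission, and
    // mesh.start() is an invented stand-in for a P2P client library.
    navigator.permissions.query({ name: 'p2p' }).then((status) => {
      if (status.state === 'granted') {
        mesh.start();                            // option a): full participation
      } else if (status.state === 'prompt') {
        mesh.start({ maxUploadBps: 32 * 1024 }); // option c): throttled by default
      }                                          // 'denied': open no peer connections
    });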

I just think that if you (as a company) raise an issue with the mainstream vendors (Mozilla, Google, Opera, Vivaldi), this idea may have slightly better chances of being heard than just some random end-user suggestions.

If this idea fits your vision, of course.

---

As a short-term fix, I guess maybe you can implement some proprietary API and suggest that your users (webmasters) show a confirmation panel that fits their site's look-and-feel. With some readily available sample implementation that they can just use if they don't want to spend time on it at all (besides adding a line of code).


We've documented it here [1] and put an example footer implementation on our homepage [2] as well. Thanks for the feedback!

[1] https://edgemesh.com/docs/getting-started/opt-out
[2] https://edgemesh.com


I like the idea of a browser-level feature for opting in to WebRTC (on a per-Origin basis) - it was proposed circa 2011 when WebRTC was coming of age. It's probably worth revisiting that discussion.

Also, with regard to detecting metering client side, you're 100% correct - you can't reliably do it in any way on the client side (although for mobile there are some APIs to detect cellular vs. wifi [1]). What we do is we have a mapping of ASNs that are flagged as metered. When your client comes online, we take the IP, map it to the ASN, and determine if it is able to upload. We buy this data today, and you can always drop an email to meter_notice@edgemesh.com with your IP and we will add it in.

[1] https://developer.mozilla.org/en-US/docs/Web/API/Navigator/c...


How can you even know if a connection is metered from within a browser?!

That's crazy talk right there. There are so many variations that all happen completely outside of the browser's domain and/or the connection destination.


Aside from the connection API - what we do is we have a mapping of ASNs that are flagged as metered. When your client comes online, we take the IP, map it to the ASN, and determine if it is able to upload (e.g. on cellular/metered etc.). We buy this data today, and you can always drop an email to meter_notice@edgemesh.com with your IP and we will add it in. We also prioritize upload partners for known ASNs (e.g. you're more likely to be chosen for upload if you are on Verizon Business than Verizon Fios than Telstra).


That's noble, really. But far from deterministic.


Via the Network Information API.

https://developer.mozilla.org/en-US/docs/Web/API/Network_Inf...

Connections start metered and are then upgraded when an unmetered connection is successfully detected.
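
In practice (support is patchy - mostly Chrome, especially on Android), the "metered until proven otherwise" check is only a few lines:

    // Default to metered; only upgrade when the Network Information API
    // positively reports an unmetered-looking connection.
    function looksUnmetered() {
      const conn = navigator.connection;
      if (!conn) return false;          // API unavailable: stay metered
      if (conn.saveData) return false;  // user requested reduced data use
      if (conn.type === 'cellular') return false;
      return conn.type === 'wifi' || conn.type === 'ethernet';
    }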


Yes, I've seen the phrase "seeding to random peers" pop up a few times. Peer selection is actually extremely smart and is by no means "random".


It's in the official uBlock filters list now. https://github.com/uBlockOrigin/uAssets/commit/7a32aa2efb033...


welcome to the future


Yes, and this is utter shit. You want to push CDN bandwidth constraints onto consumer networks in some p2p model and trust the content coming out of it? The idea is ridiculous in every aspect, except to those who don't know any better and are buying this paradigm.

The DoS possibilities are endless, and the MD5 + layered approach already has chinks in the armor. Come on. You filter every participant through some DDoS-filter provider you don't own, filter good content from bad based on some persistent hash-database state, and take a look at the content you are introducing in some heuristic (probably comparative) profile.

Garbage, move along.


BitTorrent tried to market this product, under the BitTorrent DNA name, when video streaming on the web was still fairly new. As I recall, over the year it was being developed, bandwidth prices dropped something like 80%, which made the market for it pretty much evaporate at that time.

Most cloud bandwidth is crazy overpriced, since in the datacenter you typically pay for peak bandwidth, not bytes. You can see this with cloud providers like DigitalOcean, where you can essentially buy 1TB for the cost of running a $5/mo instance. You can build a poor man's CDN using these types of services and geo DNS that saves you a ton of coin.


Totally agree - and bandwidth rates on cloud are crazy expensive [1]. The more common use case for Supernodes is on colocated servers where you have excess capacity already - or when you need to deploy additional capacity for a moment (Black Friday etc.), or when you have private links that are uncorrelated with the common CDN backbones (e.g. areas in Asia and Africa)[2]. If setting up geo DNS with healthchecks is a bit much to get going, this is a self-bootstrapping option that doesn't require other changes. That being said, we run geo DNS as well :)

[1] https://blog.edgemesh.com/its-time-to-change-the-web-and-sto...
[2] https://blog.edgemesh.com/understanding-diversification-netw...


I have also been contemplating making one. After all, the web is all about decentralization. However, such crowdsourcing of bandwidth must come with full transparency and needs a configurable soft limit on a per-user basis.


For those hosting assets on S3, you can use something like http://idiallo.com/blog/creating-your-own-cdn-with-nginx or https://github.com/alexandres/poormanscdn with Geo routed DNS on Route 53. Seems a lot simpler than this (but probably not as feature-rich).


With this there's no DNS to even set up, which is nice. Route53 is great, but getting the failover and geo-routing to work is ... challenging. I would definitely still keep a base NGINX and/or Varnish cache at the origin, for sure. You can also look at AWS CloudFront[1].

[1] https://aws.amazon.com/cloudfront/


Yep, CloudFront is great. The disadvantage is missing Let's Encrypt support, which is trivial with the two options above.


AWS has Certificate Manager; you don't need Let's Encrypt.


Or you know, have your lunch :P


Sorry if this has been mentioned, but it would be worth having a link back to your main website from Medium. I've hunted around and finally had to reach for my URL bar to get to your product landing page.


Thank you! We just added a navigation element to get you back to the home page. Can't believe that hasn't been there all this time!


What's the difference between this and Peer5 and other WebRTC-based p2p CDNs?


Peer5 and Streamroot.io are focused solely on video - and tap into WebRTC media functionality to scale Video on Demand etc. Both are great pieces of tech. We are lower level and use DataChannels[1] to p2p-replicate _almost_ all assets (images, video, fonts etc.) required to build the page. This is primarily enabled by using the ServiceWorker[2]. We also focus on updating the client-side cache as opposed to stepping in front of a page load. E.g. we replicate in assets that will be used to render the page, and when you request the page we simply serve those assets from the (now populated) cache. For video it's a bit more complex, as we replicate in the first N seconds of video (to help with buffer lag) and then switch to a similar mode as Peer5 and Streamroot. Feel free to PM me for more info; there will also be an ACM article out this month that goes into more detail.

[1] https://developer.mozilla.org/en-US/docs/Web/API/RTCDataChan...
[2] https://developer.mozilla.org/en-US/docs/Web/API/Service_Wor...
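
The cache-serving half of that is close to a textbook cache-first ServiceWorker. A generic sketch, just to illustrate the idea (this is not our actual client code):

    // sw.js - generic cache-first fetch handler; illustrative only.
    const CACHE = 'mesh-assets-v1';

    self.addEventListener('fetch', (event) => {
      event.respondWith(
        caches.open(CACHE).then((cache) =>
          cache.match(event.request).then((hit) => {
            if (hit) return hit; // pre-replicated asset: no network round trip
            return fetch(event.request).then((resp) => {
              cache.put(event.request, resp.clone()); // populate for next time
              return resp;
            });
          })
        )
      );
    });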


I want to set up a private CDN pointed at some kind of hostname / reverse proxy, caching that hostname. Almost all of my users are on metered connections, so p2p might not be the best fit.

I have existing infrastructure and unused bandwidth. What are my choices for easy deploy?


One option is Varnish [1] with some DNS routing to your caches. It's well tested and widely deployed. If most of your users are on metered connections, you're correct that they won't be able to provide upload capacity (but they will be able to download). In those cases you can also just deploy the server version [2] mentioned here and disable the browser client.

[1] http://varnish-cache.org/trac/wiki/Introduction
[2] https://edgemesh.com/product#Supernode


Is there any way I can selectively enable/disable upload capacity on the client side with some kind of JS method? I know, on my own end, specific netblocks that are severely metered (<1-5 GB/mo) but likely won't show up as such, because they're usually 3G WiFi modems or similar, and so will just look like a WiFi connection instead of mobile.

Are my supernodes used for any other site / are my users' browsers used for any other site than mine?


You can limit your supernode to your Origins [1] by setting the EM_ORIGINS environment variable.

With regard to the first point, we should detect it (based on your ASN; if your users are on 3G modems, they won't be able to upload). E.g. even though your laptop/tablet is on 'WiFi', the actual IP that comes to the backplane will be from your network block (the cellular address block), and so your client will be automatically removed from the available upload pool (although you can still download). Feel free to PM me directly if you have more questions.

[1] https://edgemesh.com/docs/supernode/configuration


Sent you an email w/some more details. Thanks!


Looking at the image on the link, the "checksums" are a suspicious 32 characters... Hoping you guys are not using md5sums.

Am I missing something, or would this let any node (supernode/browser) in the system potentially replace arbitrary content with their own content? [1]

Hopefully JS isn't being served by this mechanism (attack vector pretty obvious there), but even images are still a concern [2] [3].

[1] https://en.wikipedia.org/wiki/Collision_attack#Chosen-prefix...

[2] https://threatpost.com/apple-patches-ios-flaw-exploitable-by...

[3] https://imagetragick.com/


There is a 3-part hash going on: an Origin ID hash, a URL hash, and then an MD5 of the actual payload. When a new asset is registered on the mesh, the Edgemesh backplane downloads the asset directly to confirm the MD5. If it doesn't match, it won't allow the asset to register. On a replication, the destination node receives the asset and calculates the MD5 again. If the MD5 doesn't match, it signals Edgemesh, who then takes that (source) node out of the mesh. E.g. if you modify an asset and attempt to replicate it, the receiving party will invalidate the object and signal back to Edgemesh. Replication directives come from the Edgemesh backplane. PM me if you'd like to go into this in more detail.
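
On the receiving side, the check amounts to something like this (a sketch - md5() and signalBackplane() are stand-ins, since MD5 isn't in WebCrypto):

    // Sketch of the destination node's validation; md5() and
    // signalBackplane() are hypothetical stand-ins.
    function onReplicated(asset, payload) {
      if (md5(payload) !== asset.checksum) {
        signalBackplane({ badSource: asset.sourceNode, assetId: asset.id });
        return null; // invalidate: never serve a payload that fails the checksum
      }
      return payload;
    }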


21 fucking years ago.

> In 1996, Dobbertin announced a collision of the compression function of MD5 (Dobbertin, 1996). While this was not an attack on the full MD5 hash function, it was close enough for cryptographers to recommend switching to a replacement, such as SHA-1 or RIPEMD-160.

https://en.wikipedia.org/wiki/MD5#History_and_cryptanalysis


:) You're dead right, and it's why we use it inside two other top-level hashes (e.g. you'd need to collide inside the OriginID space as well). It's certainly possible though (for extremely large sites), and we're experimenting with an xxHash64 implementation for a later release.


You have SHA256 built into the browser. Use it.

Stop inventing your own crypto protocols, as you clearly have no idea what you're doing in that area (as evidenced by any usage of MD5).

xxHash64 is not a cryptographic hash function. Collisions and pre-images matter here, as they allow for substitution of content by an adversary.
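
For reference, the built-in version really is one call away via WebCrypto:

    // SHA-256 via the standard WebCrypto API, available in modern browsers.
    async function sha256Hex(buf) {
      const digest = await crypto.subtle.digest('SHA-256', buf);
      return [...new Uint8Array(digest)]
        .map((b) => b.toString(16).padStart(2, '0'))
        .join('');
    }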


>if you modify an asset and attempt to replicate it - the receiving party will invalidate the object and signal back to Edgemesh

If I understand your explanation correctly, the receiving party will invalidate the object if the MD5 of the object doesn't match the advertised MD5? That would leave you open to people serving other objects with the same MD5 hash as the original.


It also has to match on the OriginID and AssetID hash as well - the checksum is a final check on the actual payload (once decompressed).


Right, but if I modify your client to be malicious, I can spoof those two IDs, right?


You can, but our backplane won't know about your local modifications. When your client informs the backplane (on a sync), it will see that those IDs and hashes don't match what was registered, and it will instruct your client to delete them.


E.g. modifications that happen in your local instance are checked against our backplane. If an asset hasn't been registered (and verified independently via our backplane), it won't be available for replication.


I'm working on a platform (Peerweb) similar to the product being discussed, and I think I've put more thought into the security and autonomous self-policing aspects of P2P CDNs. I don't waste my time with MD5, and I deeply considered the PKI that I designed.

Also, my platform can offload all assets including the page itself and enables sites to get free failover during content server downtime. Due to my DNS-seeded PKI, your users stay secure and content continues to be correctly authenticated in your P2P CDN cache even when your site would normally be down.


>Am I missing something, or would this let any node (supernode/browser) in the system potentially replace arbitrary content with their own content?

collision attack != preimage attack (what you're thinking of).


Ah I see, I forgot that in the SSL attack the attacker had to choose both certificate prefixes as opposed to just one. Thanks!

It does seem to me, though, that if I could coerce/direct the site into accepting one image that I created, I could manage to replicate a second, different file throughout the network. Obviously this assumes I computed both images ahead of time and both image formats were unperturbed by the nonsense appended to the file by the attack.


When you register a new asset, the Edgemesh backend downloads it from origin itself to validate the hash you've calculated. And on replication the destination recalculates it on the payload (to make sure the asset replicated correctly).
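
In sketch form (all names invented; this is just the shape of the check, not our actual backend):

    // Backplane-side registration as described above; illustrative only.
    async function registerAsset(origin, url, claimedMd5) {
      const payload = await fetchFromOrigin(origin, url); // independent download
      if (md5(payload) !== claimedMd5) {
        return { registered: false }; // claimed hash doesn't match origin: refuse
      }
      return { registered: true, assetId: hash(origin) + ':' + hash(url) };
    }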


Right. So let's say we have file A, which is an innocuous image file, and file A', which is a malicious image file, where MD5(A) == MD5(A'). Based on the MD5 prefix collision attack, I should be able to construct two such files A and A'.

I get an edgemesh site to accept file A (perhaps the site allows me to upload a user avatar, upload an image on a forum, etc). I then behave as a node in the mesh, and receive file A. When I get a request to replicate file A to someone else, I send them file A', they check the MD5 hash, and the hash matches. Not seeing how that doesn't work?

It is admittedly a narrow attack, but I think it works.


Why not upload the malicious file directly?


Because you could bypass filtering / approval mechanisms, or automatic image processing that could defang a malicious image.


For those looking for even more detail, our ACM article is now available on Queue [1]

[1] http://queue.acm.org/detail.cfm?id=3136953


This breaks down running a P2P CDN on Google Cloud, but the same can be done on AWS, Azure, or DigitalOcean. DigitalOcean definitely has the best bandwidth rates AFAIK and has enough regions to serve as a solid backbone.


Vultr is half the price of DO and has even more regions.


You can actually get even cheaper.

Check out https://git.io/vps, where I made a comparative listing of different providers.


This is actually a pretty good list. The VPS hosting industry is one of the most awful, bottom-feeding industries in existence. I've been buying VPSes from many different providers for fifteen years or so, and the one thing I learned is that the vast majority of VPS providers are scumbag thieves and fly-by-night scammers. I'd also include RamNode on this list of good VPS providers, but otherwise I'd stay far away from any provider not listed here.


I wanted to add RamNode, but didn't have the time (work, open source projects) to do that yet. Thanks for reminding me!


See also: https://github.com/joedicastro/vps-comparison comparing "≤5$" VPS options.


+1 this is great. Thank you!


And looks like there's a docker-machine api for Vultr as well [1]. Thanks again for this!

[1] https://github.com/janeczku/docker-machine-vultr


And Vultr gives you BGP Anycast :)


Even better! Writing deployment scripts for that. Thanks for the tip!


Great article, I'll definitely take a look at this! I think it's a nice solution for startup companies that are looking to cut costs on CDNs as well as experiment with P2P tech.


Can I propose an alternative?

https://github.com/andreapaiola/P2P-CDN


Would this be a potential competitor to netlify.com - or perhaps a partial competitor, or a complement?


Author and Edgemesh employee here: I think it sits pretty squarely in the 'complementary' category. E.g. we have customers who run Fastly, Akamai, Cloudflare etc. but add us as well to get increased resiliency, Real User Metrics, and edge acceleration. One customer saw a 35% drop in page load time when they added Edgemesh to an already-fast Fastly-supported site. Enterprise customers use Supernodes to add capacity via their own datacenters.


Since netlify is a traditional static CDN, this would most likely serve to complement that.



