i'm seeing a lot of "my router" and "my computer" threads so it's probably worth saying this isn't for your home network. Mikrotik is targeting larger customers with a product that handles offloading to the ASICs on the board, which is far more performant and scalable than COTS ethernet cards or the onboard gigabit.
the reason you would slap a router card in your rackmount server is IOMMU passthrough to a k8s service load balancer, or straight up just openstack and the push toward hyperconvergence. the switch has been virtual inside KVM on openvswitch for a decade now, but the router is still hardware, and this product aims to solve that problem.
You aren't wrong, but honestly I'm having a hard time envisioning a target audience for this device besides the ardent homelab crowd, or existing MikroTik users who just want to eliminate one more piece of gear, like a normal CCR, from their setup and move it into the server itself. I don't see many "larger customers" moving to something like this instead of competitors. It's not like it's priced out of homelabs; $200 MSRP is the price of an entry-level 2x10G Intel card, and I'd consider that table stakes for actually adventurous home networking.
The bandwidth on the interfaces isn't high enough to match most enterprise customers' needs -- 25GbE/40GbE had pretty marginal market penetration compared to 10G, where you don't need hyperconverged solutions, and beyond that most major hyperscalers and others have skipped straight to 100G as far as I can see, to leverage economies of scale. And the CPU complex and ASIC together aren't powerful enough, with enough resources, to offload serious "service provider compute" workloads; they even specifically note that it reaches "line rate with Jumbo Frames", where most of those other solutions aim for line rate @ MTU, so I'm suspicious of that wording. And on top of that you need actual dedicated engineering (operations, engineers) to utilize a solution like this versus just reserving AWS instances with ENA adapters or whatever. Anything this can do, something like BlueField will just do better in every way, if you need the hardware yourself.
So I legitimately have a hard time envisioning anyone other than random nerds buying these. Any large customer is probably better off just going with Nvidia (BlueField) or Intel (Mount Evans). But hey, for two 25GbE ports at the price of a normal 10GbE card, as long as I can pass them through directly I suppose I can handle RouterOS or whatever, and if the software gets more advanced that's cool too. And if it gets more people on the whole converged infrastructure bandwagon, sounds good!
There are more markets than homelab and hyperscale data center; this is solid for software network services at the edge, where cost is a concern and flexibility is a plus. MikroTik tends to fill these kinds of niches at a cost-competitive price point. They don't aim to sell just to consumers or realistically compete with established vendors in the high-end segments, just those niche cases where they think they can be a low-cost option where there wasn't one before.
My hope (once I can actually get my hands on one) is that this can integrate well for us by offloading a lot of the routing and NAT-type functions for the software-based box we sell as part of a managed network service offering, which handles all of the "smart" network functions at the site and acts as the egress point.
There may be more of us here, even in the FTTH-flavor.
There are some imperfections, mostly related to bonding+OSPF and VRRP grouping, but if one does not mind some warnings and a few little scripts, one can make things work nicely.
Let's just say that BGP-signaled redundant uplinks and routing as close to the customer as possible are worth striving for, instead of starting with an L2 port-isolation-pyramid nightmare.
25 to the server is pretty popular in mid-tier IaaS providers. Means you can use 48x25GbE switches on the edge, which are pretty economical now.
I don't see this card being that popular in that market, however; if you want solid TCP offload and ASIC acceleration, there are Xilinx cards with a good reputation already.
I think for niche portable use cases this could be very cool or anywhere you are super space constrained.
I agree with you on most points though - and finding good people who know how to even use RouterOS seems like it would be a pain for companies as well.
I've got a few hundred Mikrotiks, mainly CCRs and 1100AHs. I guess I could merge my monitoring machine and my router, and it's handy if I just want to deploy a single device somewhere but manage it in the same way (firewalls, VPNs, etc.), but it's certainly not something I've been waiting for.
It's also worth saying that Mikrotik is a common platform for "homelabbers" who use enterprise-grade (ish) hardware in their homes. RouterOS isn't without its flaws and pain points, but Mikrotik brings high quality features into a low cost package that appeals to many. It's the lesser-known (and polished) brother of what Ubiquiti used to be.
I'm sure Mikrotik gear is on plenty of HN (and /r/homelab, et al.) members' home networks. It's the budget end of 'serious gear', so there's a decent amount of it in homes; it probably rivals the gamer-marketed consumer network gear that a totally different demographic buys.
> this device can handle a lot: firewalls, user management and access control for home media and file servers, and even some traffic control in data centers – without the need for a stand-alone router.
It's definitely got home networks as a target market. It's one of the suggested use cases.
At USD $199 I think this could be a steal. Combine this with the new low-cost Intel CPUs and they could be a game changer, especially in edge and data center environments [1].
Please also check out the Kamuee software router: it reaches 100 Gbps of performance without a SmartNIC, though it's proprietary NTT technology rather than being based on the open source Quagga/FRR routing suite [2]. Perhaps it's no coincidence that Zebra, the open source routing suite that was the precursor to Quagga/FRR, was also written by a Japanese developer.
Another related and promising software router technology comes from the startup Netris, using SmartNICs [3][4]. It aspires to provide an automatic network operations platform that turns the physical network into a cloud-like service.
[1] Welcome to the Intel Ice Lake D Era with the Xeon D-2700 and D-1700 series:
Quagga was my top scoring word in words with friends back in the day. Played it against a friend who couldn't believe I wasn't cheating. It's an extinct Half-Striped Zebra.
Essentially it's a single board computer with two network interfaces, one on the PCIe side, one on the bracket side.
This has been done before, for example with DSL "modems" that weren't actually modems but routers-on-a-card: a Realtek PCI chip on the bus side had its GMII interface hooked up directly to a Conexant DSL modem/router package, which itself then connected to the actual on-board modem.
No, but at that point you are building a router anyway, so you might as well run the router software on the host directly instead of on an SBC inside the main host.
Keep in mind that most routers are just computers too. Sometimes they are low-power computers with special hardware components to offload specific tasks, so you trade power for specialisation (which also comes with a rigidity trade-off: you can't change the hardware after the fact, e.g. for new protocols).
No. The ports on this "NIC" are actually connected to the router, though they can be passed through to the host if needed.
The ports on another NIC would be assigned directly to the host. While I'm sure you can theoretically redirect them to this router with a combination of VLANs and other Linux networking magic, you will be limited by your CPU, and it's unlikely you'll manage more than a few Gbps.
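For what it's worth, here is a minimal sketch of that "VLAN magic" redirect, assuming the card's host-facing port shows up as an ordinary netdev; the interface names (`eth1`, `mtik0`) and the VLAN ID are made up for illustration:

```shell
# Hypothetical: steer traffic from a second NIC (eth1) into the router
# card's host-facing port (mtik0) by bridging them through a VLAN.
# Every packet traverses the host kernel's bridge, hence the CPU ceiling.

ip link add link eth1 name eth1.100 type vlan id 100   # tag eth1 into VLAN 100
ip link add br0 type bridge                            # software bridge
ip link set eth1.100 master br0                        # VLAN subinterface -> bridge
ip link set mtik0 master br0                           # card port -> bridge
ip link set eth1 up && ip link set eth1.100 up
ip link set mtik0 up && ip link set br0 up
```

Everything here is software bridging in the kernel, which is exactly why the few-Gbps estimate above is plausible.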
Neat concept but I wonder why the PCIe initialization delay can't be handled with an option ROM. I don't know that a fully fledged option ROM would add value but it seems like it could be a good workaround/hack to not require additional BIOS configuration or support a BIOS that doesn't allow configuration of a delay.
I've seen some option ROMs take 10 seconds or more depending on the card - hardware RAID controllers being a well known example.
i don't know anything about mikrotik's hardware development practices and don't want to besmirch them, but some places basically just fab reference designs, or smoosh a couple of reference designs together, and have a product.
it helps to keep costs down if you don't have any NRE.
While this seems cool for some implementations, there is a reason we often have separate boxes for compute / storage / routing. Some of these are much more critical to keep consistently running than others, and separation also makes it easier to swap out and do upgrades without worrying about affecting the other parts of the pie. I think virtualized networking devices like routers are definitely the future, but I would still much rather have the router as its own separate physical box, so that if some hardware fault in a server takes it down, the network still functions (not to mention having them on different UPS hardware or different levels of redundancy). And with servers getting smaller and smaller and the compute required getting more and more power-friendly, I do not see this as something I would like to use unless I was EXTREMELY space constrained.
Where I can see this being super cool, though, is niche use cases like highly portable servers for things like VFX shoots. I was once contracted to build a set of highly mobile and durable servers for mobile rendering of 8K footage. I built the servers into some super durable hard cases of the kind usually used for shipping expensive camera equipment, military hardware, etc. The cases even have a valve to equalize pressure in case they get pushed deep underwater (like in the event of a boat capsizing) and a very robust waterproof gasket. Of course, for the servers to be running the case must be open (mainly for cooling), but it would have been interesting to network multiple of them together AND other equipment without needing a separate physical device for routing. It would also have made scaling the system much easier if each server could also act as a router - you could bring one or ten and each could function independently.
Interesting ... so if I could find a server board with no other network ports and then put this card in, I could finally build a wire-speed multi-gigabit "network slug"[1].
Just watch out for Amazon Sidewalk! Your consumer TV could connect to your neighbors' Amazon Echo wirelessly to continue sending screenshots (or hashes of screenshots) to Amazon and its marketing partners.
The first time I read about it on ServeTheHome I had no idea what this could be used for. Then I saw the price and my jaw dropped; it is cheaper than a basic NIC with dual 25 Gbps ports. Together with the CPU and RAM on it, it makes a lot of sense for specific use cases, and the price is appealing: for a small or medium business with some servers and not a lot of dedicated network equipment, it allows moving the router/firewall inside the server case, combining it with the NIC at a good price and without eating up any of the server's resources.
Do you want a cheap dual-port NIC at 25Gbps? How about we add some solid router capabilities on it for no extra price?
How long ago was that? I've recently bought newish dual-port (SFP+, 10Gbit) ConnectX-3 Pro cards at 80 GBP per piece. And that was one of the better prices.
this is the cheapest I see right now on ebay for a dual port card ($35), so perhaps a bit higher than what I remember from a year ago (I guess the silicon shortage affects everything).
My Linux server already has "full router capabilities" AND I don't have to use RouterOS to configure it (which is just a shit abstraction on top of common Linux network services like iptables).
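As a rough sketch of what "full router capabilities" on a stock Linux box looks like these days (using nftables rather than legacy iptables; the interface name `wan0` is a placeholder):

```shell
# Enable IPv4 forwarding, then masquerade LAN traffic out the WAN port.
sysctl -w net.ipv4.ip_forward=1

nft add table ip nat
nft 'add chain ip nat postrouting { type nat hook postrouting priority 100 ; }'
nft add rule ip nat postrouting oifname "wan0" masquerade
```

Add firewall chains and routing protocol daemons (e.g. FRR) on top and you have the same feature set RouterOS abstracts over.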
This dual SFP28 (dual 25Gb cages) plus 1Gb Ethernet PCIe card has an MSRP of $199, meaning street price will be a bit under that.
10Gb NICs run around $100... and can't do any switching or routing. As mentioned, this card can offload 100% of routing needs from the server (i.e. zero CPU usage on your server to make routing decisions), can switch at line speed (well above line speed actually, rated for 100Gbps throughput), plus the server can still use one of the ports for its own needs. Sounds pretty powerful to me.
It's unlikely this is an interesting product for a home lab or business - it's likely more geared towards service providers. Still a pretty cool idea nonetheless, regardless of how you feel about RouterOS.
Bunch 'o VLANs to guest VMs without that routing consuming CPU cycles on the host hardware. RouterOS has a full API, so you could automate everything.
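As a sketch (not tested against real hardware): RouterOS v7 exposes a REST API whose paths mirror the CLI menus, so provisioning a VLAN interface for a guest VM could look roughly like this. The address, credentials, and port name are placeholders:

```shell
# Hypothetical: PUT to /rest/interface/vlan creates a new VLAN interface
# (RouterOS v7+ REST API; field names follow the /interface/vlan CLI menu).
curl -k -u admin:secret \
  -X PUT "https://192.0.2.1/rest/interface/vlan" \
  -H "content-type: application/json" \
  -d '{"name":"vm-net-100","vlan-id":"100","interface":"sfp28-1"}'
```

The `-k` skips TLS verification for the self-signed default certificate; for real automation you'd install a proper cert and loop this from your VM provisioning tooling.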
The card actually has 4 "ports": 1 virtual port dedicated to management (via PCIe passthrough), 1 GbE port, and 2 SFP28 cages. Plenty for a cloud hosting provider. Both SFP28 cages, plus the virtual 1GbE port, support passthrough via PCIe to the physical server.
That's just one use case. Another is a dedicated firewall for the actual physical server that's powering the thing, running at a full 25Gbps while consuming zero CPU time on the server itself and zero U's of space... all for less than $199 street price. The card supports PCIe x8 passthrough, so the physical connection to the server is 64Gbps - way more than a single SFP28 cage can support.
And yet none of this is actually attractive from a cost perspective compared to other options that would actually be used in the same space; that's my point. Everything I can think of to do with this card can be done better and cheaper somewhere else.
And yes, I've done this stuff at very large scales.
Cheaper than $199 (probably around $175 street price is my guess), 0U, no licensing or recurring fees, and consumes zero host resources? Even as just a physical firewall or hardware VPN endpoint in front of a colo'ed server, using one SFP28 in full passthrough... this thing sounds incredible.
I think people that feel this device is pointless just have not been in a situation where this device is exactly what they needed. It is a niche device, admittedly.
However, you have my attention - what device are you referring to that does this better and cheaper?
This is all better done with ACLs or direct programming on a Broadcom chipset in a traditional top-of-rack switch or similar, or better done in actual host CPU land because it's something complicated that can't be done with simple ACL / TCAM programming.
Seriously, there isn't a good problem here that this solution solves that isn't better done elsewhere.
Top of rack requires space, which must be paid for monthly in a colo'ed environment, or reduces usable space (ie. revenue generating space) for hosting/cloud providers.
Doing it on the actual host CPU consumes host resources.
Direct programming on a chipset is a lot more complicated than running an off the shelf router.
Yes this is a niche product - but your proposed alternatives are silly in most cases where just slapping a zero U device into an unused PCIe port is far less complicated, less expensive, and easier to maintain.
This thing costs less than $200, a one time cost. Just reading the Broadcom documentation alone will cost your organization more than that...
New ones do, but 5 year old PCIe3 ones, such as the SolarFlare 7000 series which are still supported with drivers by Xilinx can be had for around $30 off ebay for home use.
Almost everyone I know that's ever used JunOS from a command line for 'serious' ISP things finds RouterOS painful and cumbersome.
The way things are laid out in a hierarchy in a full system "/export" from a Mikrotik is so weird and annoying compared to a hierarchical junos configuration from a "show configuration" on a juniper router.
If people want to make a real router of an x86-64 system rather than putting a mikrotik pci-e card into it (wtf, why?) I'd recommend they go with vyatta or VyOS instead, or install something like a barebones centos or debian and then add FRR to it.
As a network engineer who’s worked on Cisco, Juniper, Foundry, Brocade, Extreme, HP, Dell, and even Netgear, let me assure you that while the urban legend is that “JunOS is IOS done right”, the reality is that they’re all terrible in their own ways.
JunOS is generally better than IOS(-XR), but it’s still got its sharp edges. VyOS / Vyatta are poor enough clones that they will bite and seriously suck to anyone who’s actually got real JunOS experience.
Let’s be real. The goal in improving network configuration standards is to suck less. That’s it. Everything in networks sucks. Anyone who tells you otherwise either lacks experience in general, lacks experience suffering at the bleeding edge, or lacks my cynicism and genuinely sees the world as a better place than I do (I envy them for any of the above)
I don't disagree with any of this - I've been using JunOS since the M40 was the absolute apex of service provider core router technology. Lots and lots of weird bugs in various versions of IOS and JunOS on all their platforms.
Big difference between what you might get spending $15,000 for a Juniper MX204 running JunOS and a Mikrotik $800 router. I mentally categorize Mikrotik RouterOS and similar ultra low cost things in the same tier as VyOS. It's cheap but there are tradeoffs to going cheap. One has to understand the risks and tradeoffs of running a lot of your traffic or important things through cheap routers. Sometimes it's a risk worth taking.
Foundry, as we've seen, was a straight knockoff of the IOS 12.2/12.4 CLI and interface. Used plenty of Foundry switches in a previous role.
Everything does suck. Some things suck less. Sometimes you can pay money to get things that suck less.
I have worked for a medium size ISP and we had Juniper, Cisco and lot of Mikrotiks.
For me the big gap in Mikrotik, compared to the bigger vendors, is the lack of real support. No TAC services, no SLA, etc. The only way to get support is via email, and you have to wait days for a response. The system is also not as stable as the big vendors'.
Anyway, the performance of Mikrotik is impressive for the cost.
and TAC/support is half the reason you buy from the known vendors in the first place (the other being well-rounded and actually trustworthy performance numbers when using more niche network technologies, especially with regard to encapsulation).
for a comparison, I once had an issue where both routers in a redundant setup failed within half an hour of each other. (was a pure coincidence, the setup was redundant).
then, the spare/fallback unit would not boot, and JTAC sent us a replacement within 3 HOURS...
100% agree.
At one point we decided to buy 6-8 CCRs instead of a couple of Junipers, keeping the unused ones as passive hot spares, just because it was cheap and sometimes the CCRs failed.
Another point is feature development: the BGP implementation in Mikrotik was single-core only, and this was a bottleneck, especially when you want to calculate the full routing table. Everyone in the forum asked for this feature, but Mikrotik always refused to work on it.
As someone who’s a home networking enthusiast, and has too much Mikrotik gear at home, I can kind of understand where they’re coming from. RouterOS has the usability of “enterprise-grade” network equipment (meaning it’s arcane and non-intuitive), but at the same time has lots and lots of half-working features.
I simply cannot believe how terrible their IPv6 support is (still no connection tracking!), and plenty of weird glitches, etc.
But! Their hardware is very reasonably priced, and an excellent gateway to “real” networking equipment for the hobbyist. It’s unfair to compare it against Juniper and the like: yes, theirs is much better, but their products are also 10x - 100x as expensive.
While everything that’s done in RouterOS can also be done under vanilla Linux, I buy Mikrotik precisely because I don’t want to build a custom Linux router. I want something that comes with a GUI, and I won’t have to spend too much time setting up.
Having said that, I would absolutely kill for an “escape” Linux shell. I know that RED supports ECN in Linux, please allow me to use it!
Seriously? Is it not possible to have stateful firewall rules for IPv6 traffic? Or is it just NAT that won't work (I don't care about NAT; NAT can die)? I was considering getting a MikroTik router, but this would be a dealbreaker.
7.1 is only required on their brand new router targeted at enthusiast home users: the RB5009, which specifically says it's targeting home labs and explicitly came with the caveat of 7.1 being the minimum version, with no LTS in the 7.x branch as of yet. This is the only product that requires the 7.x branch.
Everything else ships with 6.48.x LTS or 6.49.x Stable. Nearly all serious users are using the LTS branch. The 7.x branch is well known within the RouterOS community to not be "production" ready... although that's where new features and stuff are going. It will be, one day.
> make a real router of an x86-64 system rather than putting a mikrotik pci-e card into it (wtf, why?) I'd recommend they go with vyatta or VyOS instead
One thing I've been looking for is a hardware box that can replicate what Ubiquiti's EdgeRouter Infinity does: a handful of 10Gbps SFP+ ports (sorry, I know that the term is "cages" but I just can't) and a couple of copper 1Gbps ports.
So far I haven't found anything but I feel like my search will get motivated in the next couple of years since it feels like Ubiquiti has forgotten that EdgeRouter exists.
Do you have any rack form factor x86-type systems you like for VyOS?
> a hardware box [with] a handful of 10Gbps SFP+ [..] and a couple of copper 1Gbps ports
I have a couple of (fanless!) CRS305-1G-4S+IN[0] at home, one in my study and one in the utility room. They each connect with 10GbE fibre (or DAC) to ConnectX-3 cards in my PCs and servers.
I appreciate the recommendation but that's kind of a gap from the EdgeRouter Infinity (ER-8-XG). The Infinity has 8x10Gbps SFP+ ports, a single copper 1Gbps port, 16GB of RAM, and a multi-core processor because it's designed as an inexpensive core router for a mid-sized network.
Where I work, we use one of them as our main router with multiple peering sessions and two transit uplinks. According to Cacti, right now we're pushing about 30Gbps through the router.
That's what I'm looking to eventually replace, if Ubiquiti doesn't start up with software updates to the EdgeRouter line again. But I think that's the problem: the EdgeRouter line is so amazingly inexpensive for all of the power you get, there's no financial incentive for Ubiquiti to invest in it and all of the players with the "proper" routers--the Junipers and Ciscos and the like--start at three times the price of an ER-8-XG.
Have look at Mikrotik CCR2004-1G-12S+2XS (1G-12S+2XS means 1x1Gbps RJ45, 12xSFP+, 2xSFP28) or CCR2116-12G-4S+ (12G-4S+ = 12x1Gbps RJ45, 4xSFP+), depending how many ports and what kind of routing performance you need (check the block diagrams, they tell the story).
However, neither of them will route 80 Gbps full duplex.
Then there is CCR2216-1G-12XS-2XQ (1x1Gbps, 12xSFP28, 2xQSPF28); this one is supposedly capable of routing shy of 200 Gbps @1518 packet size.
Edit: another thing on Mikrotik naming conventions: CRS = switches; CCR = routers.
If people have anywhere near 80 to 200 Gbps of real-world IP traffic and are thinking of using a Mikrotik for it, they seriously need to re-examine the revenue from the customers generating that >50Gbps of traffic, their business risk profile, and how serious they are about things...
At that scale you'd better have a redundant identical twin pair of routers with 1+1 or N+1 redundant everything (fans, power supplies, routing engines, etc) 24x7x365 service contract, and so on. Not something you can or should do with mikrotik.
> Have look at Mikrotik CCR2004-1G-12S+2XS (1G-12S+2XS means 1x1Gbps RJ45, 12xSFP+, 2xSFP28) or CCR2116-12G-4S+
Both of these look fantastic. The second one, with the four SFP+ ports, looks like an almost drop-in replacement for the Infinity, particularly with its 16GB of RAM. (We use soft-reconfiguration inbound which bloats the amount of RAM needed for the tables.)
> However, neither of them will route 80 Gbps full duplex.
That's actually fine, at least for our needs. We only have 50Gbps of connectivity between peer, IXP, and transit links and today's 30Gbps is high because of end-of-month activities. We got the Infinity largely because it was the only EdgeRouter that could do what we needed. Like the gap between EdgeRouter Infinity and "every other router that can do what it does," there's a rather large gap in Ubiquiti's EdgeRouter line. The next one down in the list is the EdgeRouter-12 that is a small fraction of the capability of the Infinity.
> another thing on Mikrotik naming conventions: CRS = switches; CCR = routers
That's good to know. I hadn't started down the Mikrotik path yet but I'll give it a look. We have a leaf router at a small office where we experiment and maybe I can put one in there to start.
> that's kind of a gap from the EdgeRouter Infinity (ER-8-XG)
Indeed, not least on price. How much was your ER-8-XG? My CRS305-1G-4S+IN were about USD180 each.
EDIT: If there were a silent version of the CRS326-24S+2Q+RM[0][1] I'd have bought one already...
"The MikroTik CRS326-24S+2Q+RM is an insane switch. Its specs are relatively mundane by modern standards. It has 24x SFP+ 10GbE ports and 2x QSFP+ 40GbE ports making it not even as powerful as mainstream previous-generation switches like the QCT QuantaMesh T3048-LY8 that we installed in our lab years ago. Instead what makes the switch insane is that it offers all of that performance at $475"
For what it's worth - there is a healthy "modding" community for some of these Mikrotik switches. People convert them into fanless/silent units pretty regularly, or swap the fans for higher flow / lower rpm fans, etc.
a crs326 is a layer 2 switch - not comparable with a router. you could categorize it as more like a cisco 3750G from ten years ago in capability: 24 ports of gigabit in one place.
any mikrotik CRS series has very limited routing/layer 3 ability compared to a CCR series. Different things for different purposes.
look at the logical block diagrams mikrotik provides of their crs series equipment. it's all a bunch of ethernet switch chips in a few blocks of 8 ports and then something like a single 1GbE link to the CPU. the moment you start telling it to do layer 3 things its capability is very limited.
When space permits I prefer full-size 1U systems that have dual/hotswap power supplies and room for three low profile pci-e slots, such as a Dell R630/R640 or similar. With Intel chipset 4-port 10GbE SFP+ NICs this would max out at twelve ports plus whatever is on the motherboard daughtercard for network interfaces (2 x 10GbE + 2 x 1GbE copper, or whatever).
for smaller or shallow stuff, supermicro, msi, tyan, asus
if you want a mikrotik, buy a mikrotik hardware 1U router, despite the many issues with them the one thing they do have going for them are low power consumption and small space use. an actual ccr2004 1U box is not that large and can be mounted almost anywhere.
If you have enough traffic to need multiple SFP28 interfaces in colo and can't pay $150-250/mo extra to put in place a real hardware router, or stop paying by the 1U increment and get 1/4, 1/3 or 1/2 of a cabinet, priorities and risk tolerance are misaligned in my opinion.
if you have >10Gbps traffic flows and are putting the router and other hosting environment/linux things all together in one 1U piece of hardware as a single x86-64 server, that's a "too many eggs in one basket" problem.
also worth noting that many colo/hosting ISPs won't offer 25GbE circuits on SFP28 anyways, you can buy either a 10GbE transit link or 100GbE, or maybe 2x10GbE bundled together in a 802.3ad or similar.
In this case, I was thinking about moving what is currently half a rack worth of equipment from premises to colo, as the (internal) users are mostly WFH anyway. They would not generate 1 Gbps of external traffic, not even in spikes. Currently, as it is, it makes more sense to stay on premises, but if density increased, it could make some sense.
However, it is not going to happen; it would sit somewhere at the bottom of the priority list. It was just an exercise in what could be done.
We're all different, I find Cisco and Vyatta awkward because of different reasons. RouterOS is not the best there is but it's less awkward, in my opinion.
Products like this are, generally speaking, designed for service providers, where having more available host capacity directly translates to increased revenue.
Consider a cloud provider who offers virtual machines to users: the physical host machine typically is involved in whatever networking path is necessary (e.g. an SDN), as well as the control plane software for managing VMs, and other tidbits. Moving the entire networking and SDN layer off the host system and onto an accelerator card, with your own customizations to the data path, means you can take those host resources and use them for VMs instead -- effectively increasing the total amount of capacity you have available. It's not just CPU time either: things like this also effectively increase available PCIe bandwidth, memory bandwidth, etc, available to users, by moving the resources the operator needs elsewhere.
There are some other benefits too, like you can run the whole security framework on a card like this. Or QoS controls. You could for example rent out the entire bare metal server to someone more or less and use a device like this to implement throttling/QoS/SDN transparently.
Most of the vendors are calling these "Data Processing Units" or "Infrastructure Processing Units" or whatever, but the idea is all the same. Offloading the networking/data paths into accelerators allows you to offer more raw compute to your users. For example, Nvidia Bluefield or Intel's new Mount Evans IPU.
This Mikrotik is basically the bargain-bin version of those products. Which is actually pretty cool. I could actually use a couple of 25GbE breakouts for that price...
I'm not sure where the confusion is. OP mentioned that his Linux system can already do routing. The purpose of this card is to remove that load from the computer. The manufacturer suggests it can do up to 100Gbps, which isn't trivial.
It uses another CPU to do that. A GPU is fundamentally different: high memory bandwidth, embarrassingly parallel, virtually no branches, and what not. This is just using a different CPU to do more CPU work, running the same kind of OS the host already runs.
Then it requires its own security maintenance (+training) and patches.
Personally I think one of the real use cases for SmartNICs is isolation: for a cloud provider, you can rent out a bare metal instance and run all your networking security stack (think encapsulation, filtering, throttling, etc.) on the SmartNIC.
IOW the customer has full control of the host, but the cloud provider manages the SmartNIC. Incidentally, this is exactly what AWS does with their ENA adapters designed by Annapurna Labs, which they bought some years ago (:
Having known Mikrotik for like 2 decades, I think it should be doing better than UBNT, really. Mikrotik still produces great hardware, but it's been totally eclipsed by Ubiquiti Networks these years. It's kind of like watching DigitalOcean, the cool new kid playing the same tricks, overtake Linode, sigh.
Mikrotik misses the "polished" aspect still, that UBNT does well. As someone with moderate enterprise network experience, setting up RouterOS as a basic L3 switch was way more difficult than it should have been. That being said, once I was done I haven't had to think twice about the switch, it just works (which should be default, but isn't always the case).
I have mixed experiences with UBNT polish. It looks good in screenshots and lets you set up simple things, but that's where it ends. It is often impractical, shows nonsense data (basically any dashboard is just random, useless data with zero relevance), and if you want something slightly unexpected (like IPsec tunnels defined by hostnames rather than IP addresses), you are either stuck with JSON (on older models, via config.gateway.json) or it is straight up impossible.
RouterOS did have a learning curve, and there are some unexpected bugs, but compared to UBNT, I like it much more. Yes, it has more knobs, but they generally allow configuring whatever needs to be done.
Probably different target audiences. Mikrotik originally got big with WISPs years back, where it was common to have Mikrotik handling routing and UBNT handling wireless PtP/PtMP.
I've found UBNT's modern switches and routers to be nice from a UI perspective - but oh boy do they have strong opinions on how you should configure them. You have to jump through a ton of hoops to get the Dream Machine Pro to not be your actual gateway, for instance... tricking it into thinking it's the gateway and then unplugging that port, etc.
Mikrotik is happy to let you do whatever you want, to your detriment sometimes.
UBNT gear seems great for SMB/Home Labs where people just want it to work... Mikrotik is for those who want to tinker, and more power-oriented users looking for non-conventional setups.
To be fair to Mikrotik, if you just want a basic/intermediate switch they have SwOS, which is FAR easier to set up. I also find RouterOS extremely unintuitive, but SwOS is a breeze. I think most of their switches can run either, and even dual boot.
It's a dumbed down RouterOS (afaik they use the same base but I'm not certain) with a lot of features stripped out to focus on just basic switching features. It also has a vastly simplified webui. As the other commenter noted you lose a lot of the remote management features and of course it hides lots of the stuff RouterOS can do, but that's the point. If you only need set it once and be done basic switching, SwOS is vastly simpler.
This is something I've been wanting for years to solve a huge pain point in smaller colocation deployments. Basically, it sucks to have to pay extra for 1U of space and power just for a dedicated router. Trusting a hypervisor to keep a virtualized networking appliance (connected to the raw internet) separate is certainly not something I'm comfortable with. (Maybe I'm just old school?)
This completely solves the issue, in a much better way than me just shipping a 2u server to a DC with a small micro router shoved inside the case and powered with a micro-usb cable haha.
I currently have a Protectli Vault device running pfSense for my router. I also have a TrueNAS/FreeNAS device (Supermicro board with a Xeon 26xx processor, 2x 1Gbps ports).
I've been wanting 10 Gbps networking for some time but I've been undecided on how best to do that. Could I simply get this card, drop it in my FreeNAS box, then plug my Arris S33 modem into the card, then the card into my network switch? Would the FreeNAS host also get 10/25 Gbps virtually, or do I still need another card specifically for the FreeNAS box?
I got the four SFP+ port Mikrotik, some eBay 10GbE cards for my VM server and my ZFS NAS, and connected one port to each, along with one to the 10GbE uplink on the old Nortel switch and one to the 10GbE port on the Mac (that one is the only one that was copper ethernet instead of fibre or direct-attach).
Works fast and well. The fifth "management" 1GbE port goes to my router; 1GbE is way faster than my internet anyway.
Can you audit your CPU? Can you audit your mainboard, which was probably assembled in China and has some microcontrollers with firmware installed in China? Mikrotik is made in Europe in a democratic state.
I cannot. But this is another software stack, another CPU and another mainboard with DMA access.
It's a big bucket of additional weak links in my security chain. The whataboutery in these replies isn't a good approach. It only takes one component to get hacked. More than doubling your surface area isn't something to do lightly.
Many servers already have embedded devices in them that you can't audit. How is this anything new? (I'm thinking of remote management, like HP iLO, Intel AMT, etc.)
Does anybody know if OpenWrt for this is planned/feasible/complicated/...? ARM64 sounds like basic boot could be easy, but the top search hits for the CPU name (AL52400) are from the Mikrotik product page. Is anything known about the rest of the components?
It's too early to tell. This card was hit by the chip shortage; estimated delivery times here in the EU start in May. So it will take some time for OpenWrt devs to get their hands on this. There is a lot of ongoing successful work on other Mikrotik devices (for example the RB5009), and the HW is fairly hackable.
Anyway, RouterOS is worth a try; it probably gives you much better performance than OpenWrt. You can see a lot of comments about it being unstable and lacking features. IMHO that comes from bad release timing by MT: they released RouterOS 7 as stable when it was still more of a beta. It has seen lots of improvements lately, so by the time we can actually buy one, RouterOS 7 will probably be OK for production use.
Huh, all big vendors have made and are still making similar mistakes on a regular basis; just look up Cisco-related CVEs.
While I don't know for sure yet, as this card is not yet available at my favorite suppliers, this might very well work with Linux. While Mikrotik does not directly support installing Linux on their HW, they don't lock the HW down.
There is OpenWrt support for many different Mikrotik devices.
Anyway, I wouldn't replace RouterOS.
I'm not certain what you mean. 4GB of RAM is far more than plenty for nearly anything. This card isn't going to be the core router for Comcast or anything... but for what it's worth BGP definitely requires far less than 4GB of RAM, although it depends on the exact implementation of course.
And this card is highly unlikely to be targeted for home use - mostly service providers doing routing within their private networks.
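As a rough sanity check on the "far less than 4GB" claim, here's a back-of-envelope estimate in Python. The prefix count is ballpark for the current IPv4 DFZ, and the per-route byte figure is an assumption that varies a lot between implementations (BIRD, FRR, etc.):

```python
# Back-of-envelope: memory for a full IPv4 BGP table.
PREFIXES = 1_000_000       # approximate global IPv4 table size
BYTES_PER_ROUTE = 200      # assumed RIB state per route: prefix, next hop,
                           # path attributes, data-structure overhead

total_mib = PREFIXES * BYTES_PER_ROUTE / 1024 / 1024
print(f"~{total_mib:.0f} MiB")  # → ~191 MiB
```

Even if you triple the per-route cost for multiple peers and adj-RIBs, you're still comfortably inside 4GB, which is the point being made above.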
That's not what that says. Autonegotiation (ab)uses the 10BaseT link pulses to communicate, but that data rate is 1Mbps.
You could use the link pulses, but skip 10M support and leave out the 10M data encoding/decoding. Most likely, it's not a meaningful cost savings, although I've seen some devices that work at 10M, but don't turn on a link led.
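For anyone curious what actually rides on those link pulses, here's a rough Python sketch of decoding the 16-bit base link code word from IEEE 802.3 Clause 28 autonegotiation (bit positions per the standard; the example register value is illustrative, not from any specific PHY):

```python
# Technology ability bits of the base link code word (bits 5..10);
# the selector field occupies bits 0-4, remote fault is bit 13.
TECH_BITS = {
    5: "10BASE-T",
    6: "10BASE-T full duplex",
    7: "100BASE-TX",
    8: "100BASE-TX full duplex",
    9: "100BASE-T4",
    10: "Pause",
}

def decode_lcw(word: int) -> list[str]:
    """List the abilities advertised in a 16-bit link code word."""
    abilities = [name for bit, name in TECH_BITS.items() if word & (1 << bit)]
    if word & (1 << 13):
        abilities.append("remote fault")
    return abilities

# A partner advertising only 100BASE-TX half+full: neither 10M bit is set,
# which is how a PHY could autonegotiate without supporting 10BASE-T at all.
print(decode_lcw(0x0180))  # → ['100BASE-TX', '100BASE-TX full duplex']
```

This is exactly the "(ab)use" mentioned above: the pulses themselves are 10BaseT-shaped, but the advertised abilities don't have to include 10M.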
I have had bad cables degrade from 1000 to 100, and one time had to force a shoddy (and very temporary) connection to 10 for it to work at all. So there is definitely a use for it.
I still have quite a lot of equipment in the field that is 10/half. PLCs that control commercial HVAC are expected to last the life of the building, at least until a refurb or two.
Cisco has some switches that can't go down to 10, which makes it interesting when those show up on site and the HVAC system can't link up any more.
*-T1 Ethernet was designed by Broadcom and the car manufacturers to implement single pair ethernet for automotive applications. Specifically for things like backup cameras, ADAS, etc. The standard is less than 10 years old and has nothing to do with 10base-T.
Yeah, atrocious: I can reach roughly 20Gbps routed on my not-too-expensive home OPNsense box with firewalling and many other features (which is absolutely unnecessary for me). No, routing on a CPU is not for routing between Comcast and Google, but it has its place and works so well that a lot of open-source routing projects have been supported by big vendors and telcos. Also, networking for containers depends on this...
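To illustrate what per-packet CPU routing boils down to, here's a toy longest-prefix-match lookup in Python. Real stacks use tries, caches, and batching to hit numbers like the 20Gbps above; this linear scan only shows the semantics, and the routes and interface names are made up:

```python
import ipaddress

# A tiny routing table: (destination network, outgoing interface).
ROUTES = [
    (ipaddress.ip_network("0.0.0.0/0"), "wan0"),     # default route
    (ipaddress.ip_network("10.0.0.0/8"), "lan0"),
    (ipaddress.ip_network("10.1.2.0/24"), "dmz0"),
]

def lookup(dst: str) -> str:
    """Longest-prefix match: the most specific matching route wins."""
    addr = ipaddress.ip_address(dst)
    best = max((net for net, _ in ROUTES if addr in net),
               key=lambda n: n.prefixlen)
    return dict(ROUTES)[best]

print(lookup("10.1.2.7"))  # → dmz0 (the /24 beats the /8)
print(lookup("8.8.8.8"))   # → wan0 (falls through to the default)
```

An ASIC does this same match in hardware TCAM; the whole debate in this thread is really about where on the cost/flexibility curve that lookup should live.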