Times to use UDP over TCP
* When you need the lowest latency
* When LATE data is worse than GAPS (lost data).
* When you want to implement your own form of error correction to handle late/missing/mangled data.
Times to use TCP over UDP
* When you need all of the data to arrive, period.
* When you want the protocol to automatically make a rough best-effort use of the available connection's transfer rate.
Pretty much the only genre that needs UDP is first-person shooters.
MMOs (like World of Warcraft)
MOBAs (like Dota)
RTSs (like Starcraft)
Action RPGs (like Diablo)
All of these are action games and use TCP.
In most genres people would rather have the simulation pause when there is packet loss, and then run fast to catch up rather than have unreliable data about player locations.
Due to the large number of geographical locations you can have your servers in these days, it's common for most of your players to have pings <30ms. With that kind of latency and a few tweaks, it's possible for a single lost packet to be recovered very quickly with TCP.
Dota 2 definitely does not use TCP. The Starcraft games I assume can also work with TCP but I doubt that they use it as a primary choice of communication. I would love a source for this.
as the parent mentioned -- UDP where time is more important than reliability. I disagree with his last comment on building your own reliability -- use TCP for that.
tcp/1119, udp/dynamic (listed app definitions for app-based firewall)
some additional Blizzard info --> https://us.battle.net/support/en/article/configuring-router-...
There can also be some packet loss increase when using TCP mixed with UDP. Most games built recently probably use more RUDP than TCP. But when you need to use any UDP at all you may as well just go reliable UDP. TCP definitely works for non-realtime games and turn based especially but if you ever want some non essential systems updating where packet loss is ok (maybe world physics, or events that are just decorative/fun) then reliable UDP is best in my opinion.
SCTP was supposed to fix much of this but really reliable UDP does almost everything you need for realtime and not, so if investing in a network lib/tech that is the best choice.
enet is a popular reliable UDP framework that was the basis for many others. RakNet also has some nice implementations of the idea. There are many others now, but some larger common systems based their networking on these; Unity is one.
Characteristics of UDP Packet Loss: Effect of TCP Traffic
I'd argue that none of these games require time-critical data and often have latencies much higher than they would with UDP. None of these games have networked physics either, TCP breaks down badly in that case. I'd also argue that great network developers come in very short supply and therefore most companies are left to their own misconceptions. (Then again, most game developers grossly overestimate their own skills, nothing new here :D)
I don't know where you get that "people would rather have the simulation pause when there is packet loss, and then run fast to catch up" idea but my experience is the exact opposite.
"Unreliable data about player locations" is a myth. It's just as "unreliable" in TCP, with the difference being that in TCP the protocol will arrange for the packet to be resent; at which point it's no longer relevant to the current frame - you've wasted CPU and bandwidth, crippling your game designers in the process, often without realizing it. You rarely ever want a reliable world state, just reliable actions.
With UDP you actually rely on unreliability to only ever use the latest game state; actions you want reliable or ordered can be implemented very easily on top of UDP or left in TCP. Client-side prediction takes care of dropped frames such that missing a datagram (and therefore that frame's player positions) isn't noticed by the player. A delayed TCP packet will do the exact same, but waste CPU/bandwidth in the process.
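To make that concrete, here's a minimal sketch (not from this thread; the names are made up) of how a client can rely on sequence numbers to only ever apply the latest game state, silently dropping anything stale:

```python
# Sketch: keep only the newest world-state snapshot. Each datagram carries a
# monotonically increasing sequence number; anything older than what we've
# already applied is simply dropped.

class SnapshotReceiver:
    def __init__(self):
        self.latest_seq = -1
        self.latest_state = None

    def on_datagram(self, seq, state):
        """Apply a snapshot only if it is newer than the last one applied."""
        if seq <= self.latest_seq:
            return False          # stale or duplicate: ignore it
        self.latest_seq = seq
        self.latest_state = state
        return True

rx = SnapshotReceiver()
rx.on_datagram(1, {"x": 0})
rx.on_datagram(3, {"x": 5})       # datagram 2 was lost -- that's fine
rx.on_datagram(2, {"x": 3})       # arrives late, discarded
```

The lost datagram 2 never blocks datagram 3 from being applied, which is exactly the behavior TCP's in-order delivery forbids.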
I've seen my share of "senior" network programmers who couldn't make a networked engine that is actually stable. I can understand why some teams look at TCP and naively claim "problem solved!" But it doesn't make TCP the best choice because you can appeal to authority with big titles using it :)
Is it something that changed over time and that page is out of date? Is it just doing TCP over UDP for some esoteric cases?
I thought that modern games use interpolation to compensate for this, because network lags are always present in WAN.
You are talking about extrapolation, which is the prediction of future data from previous data points.
Once again, First Person Shooter games will tend to do extrapolation, but few games of other genres do that kind of thing.
This gives you (in theory, I've never tried it) better throughput, and you get the equivalent of TCP's retransmission without the window semantics, since you move all the retransmissions to the end. On the other hand, it's probably bad for everyone else, since there's no congestion control.
Except that if you ship them too fast, they will at best simply build up in a buffer somewhere (e.g. near a bottleneck link), or more likely the buffer will overflow, causing many of your packets to be lost. So not only did the recipient not get them (and you have to figure this out and resend them later), but you also wasted a bunch of resources sending packets only to have them dropped.
And if you send them too slowly, you are leaving some potential capacity on the table. So you need to send them fast, but not too fast.
So you add some additional logic to implement flow control and congestion control. And that's TCP, pretty much.
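The "fast, but not too fast" loop above is the heart of congestion control. As a toy illustration (the numbers are purely made up; this is nowhere near a real TCP implementation), an additive-increase/multiplicative-decrease controller looks roughly like:

```python
# Toy AIMD (additive increase, multiplicative decrease) rate controller,
# the core idea behind classic TCP congestion control.

class AimdRate:
    def __init__(self, rate=10.0):
        self.rate = rate                        # packets per tick we allow

    def on_ack(self):
        self.rate += 1.0                        # gently probe for capacity

    def on_loss(self):
        self.rate = max(1.0, self.rate * 0.5)   # back off hard on loss

r = AimdRate()
for _ in range(5):
    r.on_ack()          # 10 -> 15
r.on_loss()             # 15 -> 7.5
```

The sawtooth this produces is what lets many competing flows converge toward a fair share of a link.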
That sounds like FASP (http://asperasoft.com/technology/transport/fasp/)
I've tested it on "long fat pipes" when evaluating it for a project a number of years ago. The numbers don't sound that impressive now. I was able to saturate the 100Mbps NIC of the slower server in the transfer. The transfer was from Northern VA to LA over the Internet. This was from data center to data center and the network capacity was such that I knew that the transfer rate would not cause any serious packet loss.
Using simple TCP over the same link, same computers was limited to around 40Mbps. More modern TCP implementations have a larger window and are able to better saturate high-throughput, high-latency links.
We needed to send one type of message over UDP, split into multiple packets due to its size. We were sending and receiving on the same subnet, so it worked almost always. We were also using multicast.
It's very, very hard to figure out what's going on when things don't work (you don't know if the control messages are getting through, when to ask again, or if the message just isn't being sent). At some point you are reinventing TCP/IP.
I always liked the way UDP messages are received, though: all or nothing. Not the byte stream that is TCP, which needs to be broken into messages manually.
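A quick loopback demo of that all-or-nothing property, using Python's standard socket module (on loopback, delivery is effectively reliable and ordered, which it is not over a real network):

```python
# Each recvfrom() returns exactly one datagram: message boundaries survive
# without any framing work on the receiver's part.
import socket

rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))           # kernel picks a free port
rx.settimeout(2)
addr = rx.getsockname()

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"hello", addr)
tx.sendto(b"world", addr)

first, _ = rx.recvfrom(2048)        # one recv == one whole datagram
second, _ = rx.recvfrom(2048)
tx.close()
rx.close()
```

With a TCP socket, the same two sends could arrive as one 10-byte blob, or split at any byte boundary.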
The purpose of this is for games, not web traffic or torrent2.0.
In real-time games where you're syncing physics state or player locations, data >200ms old isn't just useless, it's actively harmful.
This article is 2+ years old, and is, I suspect, the foundation for libyojimbo, which combines low-latency UDP communication for large-player-count real-time games with reliable messaging (basically custom TCP?) for less time-critical things.
Games sit in this happy space where the data rate is ~10-20kb at most so they don't need to worry about congestion control as much.
That would be odd. It's not true. There's TFTP all the way up to UDT.
I see what you're getting at by trying to avoid processing the packets in order if it wastes time, but ultimately I think it's a micro-optimization that introduces other bottlenecks. That's why BitTorrent still typically uses TCP even though it could potentially receive file parts in any order.
The real answer is that chunking over TCP is efficient enough that little is gained by chunking over UDP.
When Linux came around it just followed suit. TCP support came to NFS only later with other bells and whistles in later NFS versions, and Linux stuck with basic NFSv2 for a good while.
NFS was designed for local-area networks, so it didn't need to worry about working over the Internet.
In addition to the network being much faster relative to CPUs back then, the TCP stacks hadn't seen the tuning they have now. Making TCP fast came later.
1) Bulk data transfer doesn't work over UDP.
There is no flow control whatsoever with UDP and no guarantees; it sends stuff without distinguishing between a 56k uplink and a 10-gigabit LAN. The packet loss and lack of reliability are atrocious when you send a non-negligible amount of data.
2) You'd need to create a protocol on top of UDP to handle the basics (flow control/error handling/retransmission), which is equivalent to re-inventing TCP. Don't re-invent your own TCP, just use TCP ;)
Tell that to all of the people that use TsunamiUDP to do bulk data transfer. It uses TCP as a control protocol, but the data transfer is all UDP.
Bulk data transfer doesn't work over UDP, but it can work over UDP and TCP.
In modern days, UDP is really only used in situations where you can afford packet loss and CPU is expensive, or when you need to bypass the built-in behavior of TCP on your OS. It's not so much about latency or data loss; usually UDP is used in places where processing power is expensive or load is too extreme to justify TCP.
Sometimes, like with DNS, it's better to just resend the request than burden a massively parallel server with maintaining TCP state and checksums.
The main issue I have with the above comment is that UDP absolutely does not guarantee better latency or have higher priority than TCP packets. The packets usually queue in the NIC buffers and downstream identically regardless of which protocol you use. The main draw of TCP besides its data and ordering guarantees (which are CPU costly) is that (depending on your operating system) it follows some kind of automatic rate-limiting algorithm, be it TCP Vegas (Linux, delay based) or Reno (Windows, loss based). These algorithms attempt to limit your connection "stream" to the line rate automatically and play somewhat nicely with each other at scale.
In contrast, the main problem with using UDP is figuring out a smart way to rate limit to avoid excessive packet loss. In some protocols, like DNS-anycast-type things where only raw throughput matters, who cares if responses drop. Other times, like with LEDBAT or Microsoft's BITS, your main reason for using UDP is to roll your own rate-limiting protocol, since otherwise you're stuck with what the OS gives you for TCP.
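For the rate-limiting half of that, a token bucket is the usual starting point. A deterministic sketch (the clock is passed in by hand so the behavior is testable; the parameters are arbitrary):

```python
# Minimal token-bucket pacer for UDP sends: allow short bursts up to `burst`
# packets, then sustain at most `rate` packets per second.

class TokenBucket:
    def __init__(self, rate, burst):
        self.rate = rate          # tokens added per second
        self.burst = burst        # bucket capacity
        self.tokens = burst
        self.last = 0.0

    def allow(self, now):
        """Return True if one packet may be sent at time `now` (seconds)."""
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

tb = TokenBucket(rate=10, burst=2)
sent = [tb.allow(0.0) for _ in range(3)]   # burst of 2 allowed, third refused
```

In a real sender you'd call allow() with the wall clock before each sendto(), and either drop or queue packets that get refused.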
The problem is, the OS throws everything into the same buffer, and unless you're clairvoyant you won't know to skip sending old packets into the buffer until it's too late. Basically, by the time you know the data is outdated it's already on the way to the NIC buffer and you can't stop it, making TCP and UDP equal in this regard.
Once TCP drops a packet, data sits in the receiving buffer getting older and older instead of being delivered to the program.
RED and hopefully CoDel can combat bufferbloat to some extent, but in the end all packets go to the same place.
As long as TCP happens outside of the "hot" loop, it's fine.
What routers? Is this something that stupid routers did in the '90s and people are still worried about, or are modern routers actually behaving this way? (And if so, why?)
TCP retransmission interfering with UDP sounds like something that shouldn't be a problem on any sane network, especially if the application authors are smart and use something like LEDBAT for their TCP traffic when they're also doing latency-sensitive UDP communication.
There is no general policy against non-TCP packets of course, as lots of important internet services are non-TCP. Just flows that don't respond to congestion.
It's not a vendor specific thing, all the major router vendors provide ways of doing this.
(Note: router != your home NAT box)
I think that's only if you're being less responsive to congestion signals (drops, ECN marks), and you're using more than a fair share of bandwidth or trying to use any available bandwidth. Fixed but low rate flows (eg. VoIP) shouldn't be penalized until the link is congested with so many flows that the VoIP is trying to use more than 1/N of the bandwidth.
Since UDP isn't as common a protocol, I can certainly see some routers treating it as second-class traffic when it comes to packet shaping.
That said, I doubt home routers are de-prioritising UDP to any extent, or it would be a big topic of discussion among gamers.
(Of course, I'm pulling all of this out of my nether regions without a lot of thought, and you may well be right, the sheer volume of such requests might lead to problems.)
DNS caching is a good thing, but it certainly doesn't eliminate this problem. Web browsing still produces a lot of DNS requests, and any cache upstream of the bottleneck (ie. any cache operated by your ISP rather than in your own router) doesn't help against loss during congestion.
Without active queue management, a single bulk upload can break DNS. When the bottleneck link (usually your cable/DSL modem) is saturated by the bulk upload, it will start dropping new packets that come in while the queue is full.
When a single packet worth of space frees up, the chances are that the bulk flow will instantly take it, such that nearly all outbound DNS packets get dropped, and DNS queries time out.
The solution is active queue management (CoDel is a good choice) at every bottleneck.
Exactly which routers are alleged to be doing this?
Use UDP only if you satisfy all the following conditions:
* Don't care about losing data
* Don't care about receiving data out of order
* Not sending a stream of data but individual messages (< 580 bytes)
* Don't care about the other side of the connection being dead or terminated, and not knowing about it
The requirements you listed don't make sense as written. Nobody would ever use UDP.
The requirements make sense and the common usages for UDP fit these requirements.
It's correct that [almost] nobody ever uses UDP. It's a niche protocol for a very few specific use cases, one of which is real-time FPS/RTS/MOBA games (which fits all the criteria I listed).
Skype, Facetime, WhatsApp, Hangouts, Telco VoIP, Facebook Live, YouTube Live, Surveillance Cams, most VPNs.
the TCP packets require an ACK every n packets, as defined by the FREQ field. If the ACK isn't received, no further packets are sent.
The sensical comparison would be TCP vs whatever protocol you build or use. It might be your homegrown protocol, but if you have problems with TCP, you should probably look at other ready-made protocols first. For games, there's eg ENet.
(Not to say that TCP is generally out of the question for games, eg. WoW uses it apparently successfully)
I once implemented a communications protocol that worked with variable-length independent messages over TCP. It felt a little silly pushing distinct datagrams into a stream that would be chopped up into packets, sent to another PC, where the OS would then reassemble those packets back into a stream, only to be chopped up again into messages.
Implementing the system for chopping up the stream really felt like it should have been done at a lower level as well, not in the application.
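For reference, the usual way to do that chopping in the application is length-prefix framing. A small self-contained sketch (not any particular protocol's wire format):

```python
# Length-prefix framing over a TCP byte stream: each message is sent as a
# 4-byte big-endian length followed by the payload, and the receiver slices
# the stream back into whole messages.
import struct

def frame(msg: bytes) -> bytes:
    return struct.pack(">I", len(msg)) + msg

def deframe(stream: bytes):
    """Split a contiguous byte stream back into the original messages."""
    msgs, i = [], 0
    while i + 4 <= len(stream):
        (n,) = struct.unpack_from(">I", stream, i)
        if i + 4 + n > len(stream):
            break                      # partial message: wait for more bytes
        msgs.append(stream[i + 4 : i + 4 + n])
        i += 4 + n
    return msgs

wire = frame(b"hello") + frame(b"") + frame(b"world!")
parts = deframe(wire)
```

A real receiver keeps the unconsumed tail around and appends the next recv() to it, since TCP can split a message across reads at any byte.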
if this is a consideration -- using TCP may be the best option.
1) The correct interpretation of later packets depends on side effects of earlier packets (e.g. commands in an SSH session, OpenGL commands)
2) Raw bytes-per-second throughput is a major consideration (e.g. file transfer, streaming video)
3) The application readily tolerates high latency and jitter (e.g. email, text chat)
For many (if not most) threat models, the lack of a sequence number does not provide adequate security.
Here's a joke about UDP: you might not get it, but I don't really care.
Here's a joke about TCP: I'll keep telling it to you until you get it.
All that said, I certainly see the argument for an all-UDP protocol in terms of defining your own retransmission approach, or attempting to avoid it altogether with forward error correction or whatever.
UDP will drop data if there is congestion. Basically, if you care about congestion and data loss, you should not use UDP.
If you do UDP and TCP, the UDP transfer will lose data during congestion, while the TCP transfer will slow down and try harder.
This has some uses. For instance, videoconferencing software will do TCP and UDP. TCP is used for control data - low-volume data that needs to work (authentication/joining a room/leaving/adjusting quality) - while UDP is used for audio/video (some pieces may be lost during congestion, and that's okay).
If you use UDP for everything, you'll randomly lose control AND audio/video. It's the worst of both worlds.
Not to be dismissive, but I think the guy who gave you the advice doesn't really understand networking (or doesn't explain it very well, or was talking about some weird usage we don't know about).
UDP ignores everything else and always expects resources to be available for itself.
We could say that UDP interferes with everything, we could say that everything interferes with UDP, it's both ways.
The 1997 paper only shows that certain simulation parameters can produce bad packet loss for UDP traffic. But those parameters are almost all irrelevant to today's Internet: the MTU is now 1500B, not 512B; TCP congestion control is usually something like CUBIC rather than Reno; buffers are never a mere 16 packets deep; and TCP traffic is more often relatively short-lived HTTP connections, not long-running FTP.
The paper is just another piece of evidence that TCP synchronization sucks. You are very unlikely to encounter such perfect TCP synchronization on today's Internet, and even if you did, the dynamics of introducing some UDP flows to that situation would be different.
As far as negative interactions between TCP and UDP on the same interface, I'm not aware of any but I'd love to be corrected if wrong. As you pointed out, it's not uncommon to use both. The BIND 9 DNS server, for example, allows the use of TCP for DNS queries over 512 bytes.
I was wondering about this too. Yesterday in the "WebRTC: the future of web games" post we had Matheus28, the creator of Agar.io, saying that his games are built on WebSockets, a TCP-based protocol. Agar.io is real-time and massively multiplayer, and seems to run smoothly enough. So is it completely critical to use UDP?
Any general advice from anyone is appreciated, I am new to both networking and game design but it's a lot of fun.
Agar.io is slow and simple. There isn't much state to encode, and the state doesn't change so drastically that a few re-sent packets here and there are going to ruin the gameplay. Granted, if you're on a terrible connection with high packet loss, even a great algorithm on top of UDP won't save you.
There are better explanations than I can provide on why TCP isn't great for (parts of) fast, complex games, but it's enough to say that there's a huge difference in the complexity of state and state updates between say Battlefield and Agar.io or the game you described. Modern TCP on modern network connections will get you very far, but there's only so much you can deal with when a split second can mean the difference between "A was under the crosshair of B" and "A actually went around a corner, sorry B".
What you described is likely not going to be so sensitive. I don't think you'll personally have to worry about implementing a custom networking protocol over UDP any time soon - maybe unless you think your game is going to be played by people on terrible connections (possible!), or you end up with some abysmally inefficient state sharing over the network.
And, from a different perspective, getting caught up in those details tends to risk getting burned out. Better to see your game through to MVP then go back and make it great.
And it's all tradeoffs. For example, Ultima Online used TCP and worked relatively fine on decent networks. In case of a packet drop, you'd observe a complete pause of the world, but most animations/movement had a 100ms-or-more buffer. If you wanted to move in some direction, your character would start moving; if there was no ack after a single step (100ms if running), it would stop. The world was being streamed as diffs from the previous state, so it depended on TCP's reliability.
Anarchy Online used UDP for the game and TCP for chat. While the OP argues against this, I think it is a good separation. I remember playing it on a congested network and the game state got fucked up really badly. Tunneling over a TCP stream to a VPS close to the game servers made it playable in my case. Maybe they should have gone with TCP. Or maybe their networking code just didn't compensate for UDP's issues well enough.
On the other hand, anything more action packed like a fps would probably be better off with UDP as the author suggests.
There is always going to be data transmitted other than the game protocol. Most likely that is going to be limited and not at the same time as real time data is being transmitted.
It's aimed at people who don't know the difference between UDP and TCP (and possibly wet string). Yet he recommends they implement their own reliability protocol over UDP, and that they avoid TCP because it's better to implement your own QoS?
Why not add obtaining a PhD in quantum mechanics just to round it out? It wouldn't much alter the odds of pulling it off.
The choice you make depends entirely on what sort of game you want to network.
So from this point on, and for the rest of this article series, I’m going to assume you want to network an action game.
You know, games like Halo, Battlefield 1942, Quake, Unreal, Counter-Strike, Team Fortress and so on.
It's definitely not an easy task for beginners, but these days you can pick up his library that implements this stuff.
For people who are not familiar with TCP and UDP, the answer should be to ALWAYS use TCP.
The only exception is some subset of games (FPS, RTS), that this article is exclusively intended for.
Even there, for the average user who just wanna make a test game project, TCP may be easier to use and program around. For simple games (e.g. Connect 4), TCP is a better option either way.
"I didn’t care for this article 7 months ago. I regret it. My whole game is useless now. I should have go with udp."
It's been a while since my networking class, but if I remember correctly with UDP you have some serious issues where you can end up clobbering your network, filling up buffers in the middle and dropping tons of packets. The lack of congestion control is a huge no-no.
For instance, in the example he gives, sure you can tolerate dropped packets for player-position data, but how do you know if you can tolerate sending at 10Hz, 100Hz, or 1000Hz? Even with TCP you can't (I think...) programmatically adapt to the size of your pipe. That's kind of abstracted away for you, so that you just say "send file A to B" and it does it for you.
Are you supposed to write your own congestion control in userland???? Seems like this should be a solved problem
- UDP: unordered unreliable datagrams
- DCCP: unordered unreliable datagrams with congestion control
- SCTP: optionally-ordered reliable datagrams with congestion control
- TCP: ordered reliable stream with congestion control
This is the first time I've read that a datagram can be duplicated. Is it true? Is it duplicated by the network, or does it mean the peer sent it again?
I think you can't send a UDP packet to a phone because the carrier will block it, but I'm not sure.
But be aware that it's a daunting task, because there are so many things you need to handle all together: lost packets, reordered packets, duplicate packets, connection handshakes, session handling, reliable/unreliable channels, packet resends, random disconnects, reconnects, network congestion, spoofing, protocol hacking attempts, dos-attacks, banning, encryption, etc...
If you're writing an UDP library, you also need to think of performance, object pooling, connection buffers, threading/async issues and on top of that you also want to provide a nice API to the outside world for the client and server... Well, it gets messy...
If you're into this kind of thing, I can advise you to look at Haxe libraries. I learned a lot from them. They are very simple, idiomatic server/client-side implementations which are easy to follow, even if you don't know Haxe.
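Just to illustrate how much of that list even one item demands, here's the bare-bones resend bookkeeping behind "packet resends" (hypothetical API, simulated clock so it stays deterministic):

```python
# Sketch of the resend bookkeeping a reliable-UDP layer needs: remember every
# unacked packet and retransmit it once its timer expires. Real libraries add
# RTT estimation, backoff, ordering, etc. on top of this.

class ReliableSender:
    def __init__(self, rto=0.25):
        self.rto = rto                  # retransmission timeout, seconds
        self.pending = {}               # seq -> (payload, deadline)
        self.next_seq = 0

    def send(self, payload, now):
        seq = self.next_seq
        self.next_seq += 1
        self.pending[seq] = (payload, now + self.rto)
        return seq

    def on_ack(self, seq):
        self.pending.pop(seq, None)     # acked: stop tracking it

    def due_for_resend(self, now):
        """Return (seq, payload) pairs whose timers expired; re-arm them."""
        out = []
        for seq, (payload, deadline) in self.pending.items():
            if now >= deadline:
                out.append((seq, payload))
                self.pending[seq] = (payload, now + self.rto)
        return out

s = ReliableSender()
s.send(b"a", now=0.0)
s.send(b"b", now=0.0)
s.on_ack(0)                              # b"a" acknowledged
late = s.due_for_resend(now=0.3)         # b"b" timed out -> resend it
```

An event loop would call due_for_resend() every tick and push the returned payloads back onto the wire.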
That fixes nothing. Now you are sending too many small packets using too many syscalls. Just like UDP, buffer in user space, send in one go. If you do that, TCP_NODELAY makes no difference. (The exception is user input, if you want to send those as they happen, use TCP_NODELAY, but think about the why ... it has little to do with what this article is talking about.)
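In code, the pattern being described (a purely illustrative loopback demo, assuming Python's socket module) is: set TCP_NODELAY, accumulate the tick's messages in user space, and hand them to the kernel in a single send:

```python
# Batch one tick's worth of messages in user space, then push them with a
# single sendall() on a TCP_NODELAY socket: one syscall per tick, and Nagle
# never gets a chance to hold anything back.
import socket

srv = socket.socket()                   # AF_INET, SOCK_STREAM by default
srv.bind(("127.0.0.1", 0))
srv.listen(1)

cli = socket.socket()
cli.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)   # disable Nagle
cli.connect(srv.getsockname())
conn, _ = srv.accept()
conn.settimeout(2)

tick = [b"pos:1,2", b"pos:3,4", b"fire"]
cli.sendall(b"".join(m + b"\n" for m in tick))   # one write for the whole tick

data = b""
while data.count(b"\n") < len(tick):    # stream may arrive in pieces
    data += conn.recv(4096)
cli.close(); conn.close(); srv.close()
```

The receive loop also shows why framing (here, newlines) is still needed: even on loopback, TCP is a byte stream with no message boundaries.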
Games likely send data only around 25 times per second, and ping is likely < 50ms. Waiting on a dropped packet and the delay it causes is unnoticeable. Added that clients will need some kind of latency compensation and prediction, independent of the TCP/UDP choice. Delays and then bursts of 100ms or such are doable.
The problem starts when the connection stalls for more than 100ms, especially in high bandwidth games. During the stall both behave the same. After the stall, TCP will be playing catchup and wasting more time receiving outdated data, and handing it to user space in order. UDP just passes on what is received, with a lot less catching up, and maybe some dropping of packets.
But gameplay has been degraded in both cases. UDP just has a higher chance of masking and shortening degradation more.
Anything more than that is basically cargo-culting, like this article.
He offered one solution to fix one behavior of TCP, the behavior being that if the data written to the buffer is not big enough, TCP might hold it. Setting TCP_NODELAY forces the protocol to avoid holding data. But this is only written as a note to coders who might think TCP_NODELAY will fix TCP for action games, but it doesn't actually, because the protocol has other characteristics that are undesirable in that type of games.
Moreover, you write this: waiting on a dropped packet and the delay it causes is unnoticeable. Unless you have a FPS game with TCP as its network protocol to validate this claim, I call bull. Many network programmers recommend against TCP. This is probably not simple cargo-culting.
That is the misunderstanding. It will send one non full packet, and only then "hold on to data" until the next packet is a full packet. But that condition will reset when all data has been acknowledged. If you buffer in userspace and write data in one go, at 25 network fps, you basically never trigger the condition where TCP_NODELAY makes a difference.
It is not Nagle's algorithm but a variant, Minshall's modification of Nagle, that modern systems use.
There is a reason why FPS games use UDP, but that discussion should not start with TCP_NODELAY. That one is misused enough ... often when it fixes anything, it is because it masks a real underlying problem, had you fixed that problem, your system would react much better under stress or bad networks.
Anyone here have any experience using QUIC in any application of their own?
The custom congestion control makes me wonder if it only works alongside TCP traffic - once everything goes QUIC, then what happens? I looked for a while for that story from ancient history about some blazing-fast server OS TCP implementation that broke the rules and so fell over when more than one such server was on the network, but couldn't find it.
The Internet Sucks: Or, What I Learned Coding X-Wing vs. TIE Fighter
Do you have any sources/data on that? Genuinely interested where you drew that conclusion from. From my vantage point (anecdotal), ISPs and carriers routinely under-provision, congest peerings, and don't find a problem well until after a number of customers complain.
Bufferbloat is the undesirable latency that comes from a router or other network equipment buffering too much data. It is a huge drag on Internet performance created, ironically, by previous attempts to make it work better. The one-sentence summary is “Bloated buffers lead to network-crippling latency spikes.”
The bad news is that bufferbloat is everywhere, in more devices and programs than you can shake a stick at. The good news is, bufferbloat is now, after 4 years of research, development and deployment, relatively easy to fix. See: fq_codel: wiki. The even better news is that fixing it may solve a lot of the service problems now addressed by bandwidth caps and metering, making the Internet faster and less expensive for both users and providers.
The introduction below to the problem is extremely old, and we recommend learning about bufferbloat via Van Jacobson's fountain model instead. Although the traffic analogy is close to what actually happens... in the real world, you can't evaporate the excess cars on the road, which is what we actually do with systems like fq_codel (wiki).
Still, onward to the below.
I work on protocols that use sketchy wifi on mobile units that roam among access points. Sometimes they roam into RF black holes. TCP is all kinds b0rk3d for what I am doing.
I'm not sure how UDP is going to help if you have no connectivity.
These days TCP copes as well as UDP with temporary complete loss of connectivity, due to fast recovery. It is really just packet loss that kills TCP, but that isn't generally an issue these days if you're in the USA/Canada/Europe/Japan/Korea/Australia, and your wifi isn't crapping out.
I don't know which reality bubble you live in, but this is utterly false. I live in the countryside and routinely get an average of 40% packet loss to many, many websites. There are plenty of occasions for packets to get lost somewhere between a client and a server.
It can be changed with a registry setting (TcpAckFrequency) but you can't expect even a significant fraction of your users to do that. Why this isn't a per-connection option sort of like TCP_NODELAY is beyond me.
BTW, the article's view angle is multiplayer game programming.
On his site you can also find an irate rant about people with no experience dismissing his claims as "reimplementing TCP". It has good points but it's a frustrated rant (reader beware). But it addresses a lot of the commonly believed fallacies.
His latest project is an open source library for game networking over UDP: https://github.com/networkprotocol/libyojimbo
And he is correct, for "real time" games you must use UDP or you will be in trouble in real world networking conditions. When everything runs smoothly, TCP and UDP work almost identically but when packet loss occurs, TCP will make things worse.
Worked with Glenn on Freedom Force and Tribes. He knows his stuff!
* TLS over 443/udp
Websockets are TCP connections.
You mean the networks in particular that you use? There's a lot of varied network equipment out there. Glenn wrote this article at least 8 years ago and for the types of games he works on (realtime 64-player online FPSes), I'm sure TCP still wouldn't work as well as a UDP solution.
> The trick is to hide the lag with animations
That's an understatement. A significant amount of work needs to be done with client-side prediction in order to give the appearance of playing in real time consistently with other players.
> I think google, facebook, and world of warcraft use tcp for their real time apps
Which real time apps are you referring to? In the context of this article, Glenn is talking about soft realtime systems (30 hz updates or more). There aren't any user-facing apps from facebook or google that I'm aware of that I would call "real time". For WoW, timing is less critical than an FPS and obviously TCP works well for them.
The second biggest mistake is trusting the client. People never cheat in online games, right!? :) Network latency is often less than it takes to render a frame, so naive solutions with simple broadcast servers work well until you need to deal with rogue clients using teleport hacks etc.
There are no set rules when making real time distributed and scalable applications like MMO Games. In my experience it's best to start with a naive demo/prototype to see where the bottlenecks are.
I would suggest starting with TCP and then only switching to UDP, or more likely making your own protocol (TCP on top of UDP :)), when you know the requirements.