What does that mean?
Here's an example diagram:
Also see "north-south" and "east-west" in sub-heading "Changing traffic patterns":
Amazon's AWS presentations also sometimes describe network traffic inside the datacenter in terms of “east-west” and “north-south” traffic.
If blade1 needs to talk to blade2, running the traffic through a firewall means the communication has to flow out of the blade back to the datacenter network (i.e. north to the top-of-rack switch). That adds latency and requires more network and firewall capacity, since all traffic has to leave the chassis.
If there is no firewall requirement, traffic flows east/west within the chassis on the blade backplane. Security can be layered on with a host firewall or similar technology (e.g. IPsec, or proprietary solutions like Unisys Stealth).
For years (15?) I have been putting very simple, very small ipfw rulesets in place on non-firewall systems that allow only the traffic I believe that system should be sending/receiving.
It's a firewall. It's on the host itself. It is a firewall that is securing "east/west traffic". It's a simple model that any host can implement and has very low (typically zero) cost.
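A minimal ruleset in that spirit might look like the sketch below. The interface, addresses, and ports are hypothetical examples, not the commenter's actual rules:

```shell
# Hypothetical minimal ipfw ruleset for a single app host.
# Addresses and ports are made-up examples.
ipfw -q flush

# Loopback traffic is always fine.
ipfw add 100 allow ip from any to any via lo0

# Let already-established TCP sessions continue.
ipfw add 200 allow tcp from any to any established

# Inbound: only the services this host is supposed to offer.
ipfw add 300 allow tcp from any to me 443 in setup          # HTTPS from clients
ipfw add 310 allow tcp from 10.0.5.10 to me 22 in setup     # SSH, admin host only

# Outbound: only what this host should be talking to.
ipfw add 400 allow tcp from me to 10.0.9.20 5432 out setup  # its own DB
ipfw add 410 allow udp from me to 10.0.0.53 53 out          # DNS

# Everything else is logged and dropped.
ipfw add 65000 deny log ip from any to any
```

The point is the shape, not the specifics: a short allow-list per host, with a final deny-and-log rule so unexpected east/west connections at least leave a trace.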
This is the first, and last, time I will ever use the term "east/west traffic". Christ.
If every service in your ecosystem implemented ipfw rules (or the equivalent), that's great. But if your box got popped, can I be sure it won't be used as an attack vector against other machines? I would turn off the ipfw ruleset locally and start connecting out to other systems. If there were a firewall sitting between me and those systems, that traffic would hit rules that should never be hit, and the NetSec team would get some alerts.
Now I believe, like most sane people, that if you've popped an appserver, it's already likely to be game over, and this is a moot point.
For most applications, the app server doesn't live in its own little DMZ; it usually has privileged access to the DB, and often shares the same authentication domain as other services, which is not properly secured (e.g. your [backup|log|monitoring|deployment] server connects to every machine with a service account that isn't SSH-protected, and now I have the service account for all machines).
You wouldn't be foolish enough to have mixed admin functions (content management?), and user functions on the same app server... right? Right? Oh... wait... almost everyone does that.
Apparently this has nothing to do with getting sun in your eyes.
Just a guess.
A sad, recurring theme in the INFOSEC industry. They're figuring out a lot of the old stuff, though, slowly but surely, especially in cutting-edge datacenters doing things like OpenFlow.
Typically in a campus you would see traffic going from the end devices up to the core, and then out of the core, either to datacenters/machine rooms, or out to the internet.
In a datacenter, historically, you had a few servers that talked to each other, connected to the same "Access" switch (commonly referred to as the top-of-rack or end-of-row switch), and almost all the traffic for those servers went "north" to the core, with a much smaller amount going south. Almost all the traffic was from clients out in the corporate network, down to their specific set of services.
However, over time, end users have come to represent a smaller and smaller portion of what an application does. More systems integrate with more systems - pulling in data from many other systems, doing analysis, backup, etc. This is the east-west traffic, which flows between things in the same tree diagram. East-west traffic is by far the largest throughput in a modern DC.
When the traffic was mostly north-south, network engineers secured it at the edge of the DC - where the DC joined the core/internet. Now that the traffic is between servers sitting in the same rack/row/room/DC, securing it the same way just doesn't work.