
Splunk aims to use augmented reality to monitor server racks and equipment - praveenscience
https://www.zdnet.com/article/splunk-aims-to-use-augmented-reality-to-monitor-server-racks-equipment-to-bring-data-to-multiple-screens/
======
amacalac
I worked on an idea like this at a Fortune 500 company several years ago. It
_seems_ like a great idea, but it's not scalable at all.

A better solution is to pipe these numbers to a machine and let it worry about
tracking all these values.
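The "let a machine track the values" idea can be sketched in a few lines; everything here (rack IDs, sensor names, thresholds) is invented for illustration, not from any real monitoring stack:

```python
# Hypothetical sketch: machine-side threshold tracking for rack metrics.
# Rack IDs, sensor names, and threshold values are invented for illustration.

THRESHOLDS = {
    "inlet_temp_c": (10.0, 32.0),   # acceptable (min, max)
    "fan_rpm": (2000.0, 14000.0),
}

def check_reading(rack_id, sensor, value):
    """Return an alert string if the reading is out of range, else None."""
    lo, hi = THRESHOLDS.get(sensor, (float("-inf"), float("inf")))
    if not (lo <= value <= hi):
        return f"ALERT rack={rack_id} {sensor}={value} outside [{lo}, {hi}]"
    return None

readings = [
    ("R12", "inlet_temp_c", 24.5),
    ("R12", "inlet_temp_c", 38.2),  # too hot
    ("R07", "fan_rpm", 1500.0),     # fan spinning down
]

alerts = [a for a in (check_reading(*r) for r in readings) if a]
for a in alerts:
    print(a)
```

In practice this is what tools like Nagios or Prometheus do at scale; the point is that the tracking lives in software, and a human only gets involved once an alert fires.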

Then if you need to guide a human to that rack (and for some reason you're
not using rows & columns or an ordering system that _makes any logical
sense_), _then_ use Augmented Reality... or better still, have LED strip
lighting in the floor, controlled by the same system, to guide the human
there.
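The floor-lighting alternative amounts to computing a tile path to the target rack and driving the strips along it. A minimal sketch, assuming a simple row/column grid and a hypothetical `set_led()` controller call (both invented here):

```python
# Hypothetical sketch: light floor LED strips along a path to a target rack.
# The grid layout and the set_led() controller interface are invented.

def path_to_rack(row, col):
    """Walk down the main aisle to the rack's row, then across to its column."""
    path = [(r, 0) for r in range(row + 1)]          # down the main aisle
    path += [(row, c) for c in range(1, col + 1)]    # across the row
    return path

def light_path(path, set_led):
    """Drive each tile on the path green via the supplied controller call."""
    for tile in path:
        set_led(tile, "green")

# Example: record which tiles would light up for rack at row 2, column 3.
lit = []
light_path(path_to_rack(2, 3), lambda tile, color: lit.append((tile, color)))
```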

~~~
existencebox
I helped build a POC of this precise idea at a university datacenter ~6-7
years ago.

I broadly agree that it's not super scalable if you try to use it for
"all-up" monitoring, but there were certain instances in which I found it to
be very useful.

- We used it, among other things, to visualize datacenter temperature as
volumetric clouds. It let us better diagnose airflow and cooling issues.

- We previously relied on flashing lights on disk controllers to identify the
bad slot of a disk that needed to be swapped. But when you have heterogeneous
SKUs/chassis/disks/backplanes, maintaining that infra is a PITA. AR offered
ways to provide the same feedback with slightly less complexity.
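The volumetric-cloud idea above boils down to interpolating a handful of point sensors into a temperature field you can render. A minimal inverse-distance-weighting sketch, with sensor positions and readings made up for illustration:

```python
import math

# Hypothetical sketch: estimate temperature at an arbitrary point from a few
# fixed sensors via inverse-distance weighting (sensor data is invented).
SENSORS = [
    ((0.0, 0.0, 0.0), 21.0),   # (x, y, z) in metres, temperature in C
    ((4.0, 0.0, 0.0), 27.5),
    ((0.0, 3.0, 2.0), 23.0),
]

def estimate_temp(point, sensors=SENSORS, power=2.0):
    """Inverse-distance-weighted average of sensor temperatures at `point`."""
    num = den = 0.0
    for pos, temp in sensors:
        d = math.dist(point, pos)
        if d < 1e-9:           # exactly at a sensor: return its reading
            return temp
        w = 1.0 / d ** power
        num += w * temp
        den += w
    return num / den
```

Sampling `estimate_temp` over a voxel grid and colour-mapping the result gives the cloud; hot spots between sensors show up as gradients you can walk around in AR.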

Basically, even if you "piped your data into a machine" a human still had to
eventually interface with systems (the datacenter, the machines within them,
networking topologies) that are highly opaque without augmented guidance.

To your final point as well: I see the lighting solutions vs. portable AR as
different points on the "AR spectrum" and they offer obvious tradeoffs in
portability, extensibility, etc.

