Beta testers constantly complained that falling didn't feel right, and that jetting and skiing (the game's main movement mechanics) felt slow and soupy. Players were leaking videos and photos showing the differences in motion.
At one point, Dynamix hired a top player to playtest, since he "remembered best" how it felt. All to reclaim the physics calculations of some game from the Leisure Suit Larry company.
The game is 16 years old... old enough to drive a car and it's still being played!
Tribes 1 was an absolute favourite in my household growing up. It was the first really viable, extendable MMO fighter I'd played. I don't know if I've had as much fun in a similar format since (and I do enjoy the occasional modern one like BF1). Maybe it's just sentimentality. It was the first game I got my younger siblings into, and they became obsessed. Even more than me... haha...
They can be downloaded here as iso files: https://www.tribesuniverse.com/
Smackdown in Tribestown happening Tomorrow at 8pm EST on SNAP LAK Server.
Midair guys are coming over to Tribes2 to play a few games and warm up for the Closed Beta Release of Midair. Should be a good time.
If you need a copy of the game preconfigured check out my Tribes 2 Config here.
Join us in Discord for more info.
...and download the map pack here (put it in Program Files / Dynamix / Tribes / maps / etc.): http://t2.branzone.com/
The crazy thing is that with T2 technology, they technically support(ed) 128 players on the same map! Epic on a scale that can't be imagined today for melee / team combat.
Or maybe that's just my nostalgia and lack of a Windows PC kicking in. 8v8 doesn't do it for me compared to Tribes. Eve looks awesome but it's not as punchy, crunchy, or visceral.
Any random game recommendations to recapture that feeling?
For altogether different reasons I really enjoy Star Wars Battlefront.
I'm not a huge gamer, though. There's probably something out there that I'm not aware of. I did buy a Windows PC/gpu just so I could do some gaming, though... (well, and for some basic VR dev project)
We had a community center near home, with about 10 gaming PCs. Tribes was a huge favorite, especially with all the different mods that were available.
We switched almost completely to AVP when that came out, though.
You just reminded me. I had real slack-off teachers in my 9th and 10th grade communications labs, and Tribes would run on the old Dell boxes they had in there for schoolwork. We had a whole system worked out, with a lookout posted and everybody playing over LAN in the lab whenever the teacher stepped out of the room. I'd forgotten all about it.
I had an interesting experience with that. I'd borrowed the game from a friend, and it didn't play nicely with the integrated SiS graphics chip on my computer (performance was fine, but a lot of elements of the game didn't render). I contacted Fox Interactive's support team. They asked for my address, and I got a "replacement" copy of the game a few months later. To be clear, I didn't ask for a replacement, and stated that I was borrowing the game from a friend. Still, a nice surprise.
Also, the engine itself was open-sourced years ago, and a Linux port was made. It should be perfectly possible to clean up some bitrot and get it working on modern systems, including natively on the (x86) Mac. Makes me curious if it would run on the PPC Mac that I've got in my closet.
Having in game mail, IRC, and an actual clan system I think was awesome and ahead of its time as well.
The original Tribes certainly remained in play while Tribes 2 development was happening.
Or you could just take some of the other AIs designed to play games based on video and wire them up. Then just let your system learn to play AND run Mega Metroid Brothers.
One of the many games on the triangle with vertices Mega Man, Super Metroid, and Mario Brothers?
What am I missing here, because I'm positive that I am missing something?
The set of facts is worthless for anything of complexity.
It does not really generate the rules itself. (They are directly derived from the facts.)
What they did is only a small improvement over a typical expert system or CNN for a very limited case.
Choice quote: "Notably each fact can be linked back to the characteristics of a sprite that it arose from."
Wrong. When you pick up a flower your sprite changes, but how does it know you can suddenly shoot bullets? Etc. And for more complex games, a lot of the data requires exploration well past the GUI. An action might change acceleration (suddenly you have nonlinear ice physics with momentum), or direction handling, or you can start flying, or many other things. What if the thing moves in a circle? What if an action only has some probability of producing a result?
The approach will fail at modelling as soon as Mario level 1-4 (the one with rotating fireballs), or else produce an insane representation of the engine. Note that it cannot even model the dampened triangle-wave motion of the fireballs in their own example - it just assumes they're a sparse line.
The paper presents no way to reduce this huge number of "if-then rules" to something actually useful either.
Since this doesn't even attempt to explore the state space, it also requires a huge database.
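To make the objection concrete, here's a toy sketch (entirely my own, not the paper's code; all names are made up) of what an engine "learned" as sprite-fact if-then rules amounts to, and why it needs a huge database instead of generalizing:

```python
# Toy model (my own, not the paper's) of an engine "learned" as
# sprite-fact if-then rules. Each observed frame transition adds a rule
# keyed on the exact pre-state, so the table grows with the state space
# instead of generalizing over it.

from collections import defaultdict

def learn_rules(transitions):
    """transitions: list of (state, action, next_state) tuples,
    where a state is a frozenset of (sprite, property, value) facts."""
    rules = defaultdict(dict)
    for state, action, next_state in transitions:
        rules[state][action] = next_state  # one rule per exact pre-state
    return rules

def predict(rules, state, action):
    # Fails on any state not seen verbatim during "training" --
    # there is no mechanism to interpolate between fact sets.
    return rules.get(state, {}).get(action)

mario_small = frozenset({("mario", "size", "small")})
mario_fire  = frozenset({("mario", "size", "fire")})

rules = learn_rules([
    (mario_small, "pickup_flower", mario_fire),
])

assert predict(rules, mario_small, "pickup_flower") == mario_fire
# An unseen but trivially related state falls straight through:
assert predict(rules, frozenset({("mario", "size", "big")}),
               "pickup_flower") is None
```

Every unexplored state is a hole in the table, which is exactly why the lack of state-space exploration matters.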
Calling this "recreate game engine" is akin to saying that since we have an algorithm that can solve checkers, it will solve poker, go and also whodunit. And can play Jeopardy too.
I even suspect it's not useful as a preprocessor to something that can actually play a game, as it will break later cases.
CNNs have done such impressive things that "outperforms convolutional neural nets" sounds like an achievement, but CNNs have never been the pinnacle of accuracy - their key advantage is flexibility. Feature learning costs some reliability, but gives a huge advantage in saving human time and effort.
This appears to be exactly the opposite approach: an AI system that gains its accuracy by working from heavily pre-defined rulesets. Feature engineering is fine in a stable, well-understood domain, but it reduces the impressiveness of the 'AI' result. More worryingly, it cripples the flexibility of the agent in an open domain like "video games".
Hand-authoring a set of functions required to derive the model means embedding a huge portion of the game engine in the engine-learning framework - what's left to learn is basically just parameter values. Mario without powerups is a game entirely defined by 2D movement, collisions, animation, and a tracking camera. That's the same feature list that had to be hand-defined for the engine.
I don't mean to attack the authors. This is still an interesting result, and they do acknowledge this in P2 of 'Limitations'. (Albeit with some lofty claims about eventually understanding real video - are they planning to encode physics as their ruleset?) But the article really oversells the capacity of a system that was spoon-fed the essentials of what it had to learn.
So what you're saying is they came up with some new, albeit small, way to do it?
Yes, the technique it uses only works for a certain space of possible games. That means there is an obvious path to increasing the size of that space.
In addition, it's wrong to present this as new; such attempts have been made before, with even stronger results and generality. For example, this (relatively dumb) approach from 2013 generalized fairly well - much better than I've seen even a deep network generalize: http://www.cs.cmu.edu/~tom7/mario/
So yes, they are overselling it a lot. I am 100% not impressed by this paper, as it lacks critical detail.
That it can parse stuff from 2D frames is not interesting; it's basic motion analysis that can be done even by a supremely stupid algorithm, not even a CNN.
I mean, Google's best AI can play 15 rooms of a simple game...
(The algorithm as described will require a huge database for a game that is even slightly more complicated than Infinite Mario. And we don't even have the sources to try that.)
Even the object motion tracker part will choke in a 3D environment. (It is a greedy matcher, as they describe it.)
Speaking of impressed, the Google DeepMind paper is far more feasible to improve upon and much richer in detail: https://arxiv.org/pdf/1606.01868v1.pdf
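For anyone wondering what "greedy matcher" implies: here's a toy version (my own reconstruction of the general idea, not their implementation) showing how greedy nearest-neighbour assignment swaps identities when two objects cross - an ambiguity that only gets worse in 3D:

```python
# Toy greedy nearest-neighbour sprite matcher (my reconstruction of the
# general idea, not the paper's code). Each detection in the new frame
# is claimed by the closest previous sprite, first come first served --
# with no global assignment, crossing objects get their identities swapped.

import math

def greedy_match(prev, curr):
    """prev/curr: dicts of id -> (x, y). Returns prev_id -> matched pos."""
    unclaimed = dict(curr)
    matches = {}
    for pid, ppos in prev.items():
        if not unclaimed:
            break
        cid = min(unclaimed, key=lambda c: math.dist(ppos, unclaimed[c]))
        matches[pid] = unclaimed.pop(cid)
    return matches

# Two sprites crossing: A moving right from (0,0), B moving left from
# (10,0). True positions at t+1: A=(6,0), B=(4,0). Greedy assignment
# swaps them, because A grabs whichever detection is nearest:
matches = greedy_match({"A": (0, 0), "B": (10, 0)},
                       {"d1": (6, 0), "d2": (4, 0)})
assert matches == {"A": (4, 0), "B": (6, 0)}  # identities swapped
```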
Compare the two papers in straight quality.
I understand why you'd publish any worthless junk in the current academic culture, but I don't agree that we should actually do it.
Section 3.1 of the paper outlines a list of 'hand-authored' functions the agent used to derive events from images. They include animation, sprite-entity relationships, motion, collision, and camera movement. Which is to say, every component of Super Mario level 1-1.
That doesn't mean the paper is uninteresting, or useless. Defining facts based on those possible rules is still an intriguing result. I'm having real trouble working out from the paper how well their agent understood conditional changes like size and fire flowers - if it accurately recreated those rules, then I am impressed.
But "modeled without accessing the code" is a dubious claim about an agent that started with a list of the core rules included in its code. The Engine Learning section (3.2) mentions that automatic derivation of possible facts is a key area for future work. That is to say "this would be flexible if it did feature learning instead of needing feature engineering". Unfortunately, that's the problem in agent design, and the value of CNNs isn't unbeatable performance but the capacity for flexible feature learning. The press release here elides the issue of feature learning entirely when comparing performance.
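To illustrate what "hand-authored derivation functions" means in practice, here's a rough paraphrase (names and signatures are entirely my invention, not the paper's): once primitives like collision and motion are supplied, what remains to "learn" is mostly constants.

```python
# My own paraphrase of the kind of hand-authored primitives section 3.1
# describes (names and signatures are mine, not the paper's). Once these
# exist, "learning" Mario 1-1 largely reduces to fitting parameters.

def overlaps(a, b):
    """Axis-aligned bounding-box collision -- hand-coded, not learned.
    Boxes are (x, y, w, h)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def velocity(pos_t0, pos_t1):
    """Per-frame motion fact -- again supplied, not discovered."""
    return (pos_t1[0] - pos_t0[0], pos_t1[1] - pos_t0[1])

assert overlaps((0, 0, 2, 2), (1, 1, 2, 2))
assert not overlaps((0, 0, 1, 1), (5, 5, 1, 1))

# With the primitives given, the learner's job shrinks to estimating
# constants, e.g. gravity from two consecutive velocity facts:
v0 = velocity((0, 0), (0, 3))   # (0, 3)
v1 = velocity((0, 3), (0, 8))   # (0, 5)
gravity = v1[1] - v0[1]         # 2 pixels/frame^2
assert gravity == 2
```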
The idea of producing a rule-based system from deep-learning, while not exactly a breakthrough, is an interesting direction to take.
It is research. It is not designed to solve a real-world problem but to give ideas to engineers. And really, I can see several simple systems that could be described by simple rules learned from inputs/outputs.
I thought the AI created a playable game engine from the reference video? If so, why did it need to replicate the exact movement of the game character? Why not come up with its own unique set of movements in a fully flexible game engine?
It's waaaayy less impressive if it's just programmatically processing video frames, tracking pixel coalescence to build a library of sprites, and then playing back the same sequence of sprites programmatically...
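Something like this, presumably - a toy frame-differencing sprite extractor (entirely my own illustration, not the actual system): pixels that change between frames get flood-filled into connected blobs, and each blob becomes a candidate sprite:

```python
# Toy frame-differencing sprite extractor (my own illustration, not the
# system under discussion). Changed pixels between two frames are
# flood-filled into 4-connected blobs; each blob is a candidate sprite.

def extract_sprites(frame0, frame1):
    """frames: 2D lists of equal shape. Returns a list of blobs, each a
    set of (row, col) coordinates where the pixels changed."""
    h, w = len(frame0), len(frame0[0])
    changed = {(r, c) for r in range(h) for c in range(w)
               if frame0[r][c] != frame1[r][c]}
    blobs = []
    while changed:
        stack = [changed.pop()]
        blob = set(stack)
        while stack:
            r, c = stack.pop()
            for n in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if n in changed:
                    changed.remove(n)
                    blob.add(n)
                    stack.append(n)
        blobs.append(blob)
    return blobs

f0 = [[0, 0, 0, 0],
      [0, 0, 0, 0]]
f1 = [[1, 1, 0, 0],
      [0, 0, 0, 1]]
sprites = extract_sprites(f0, f1)
assert sorted(len(b) for b in sprites) == [1, 2]  # two separate blobs
```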
I'm sorry, Dave, I'm afraid I can't answer that.
University press offices have a horrible tendency to oversell research in the name of getting news coverage, often completely burying possible flaws or limitations of the result. The University of Maryland infamously put out a major release on concussion treatment based on a study that didn't exist (1). Similar but lesser abuses appear to be almost constant. It seems like every worthwhile-but-unspectacular thesis result gets spun as a groundbreaking insight in its field.
so, their script/app doesn't reproduce a game engine at all, it instead analyzes pixel arrays from video frames and maintains rules about how the pixel arrays typically transform from one state to another. this sounds more like a useful video analytics tool than an "AI that makes game engines". if i was responsible for the headline/marketing, i would've gone with "Artificial Intelligence Can Predict the Ending of a Movie" or something (as long as the movie is 8-bit and only has 256 possible colors!)
in other words i dont think Unreal or Unity are worried about this tech.