> Obviously it’s Tinder posting the fake profiles because they want men using their software.
I have used Tinder a lot, but not in the US. Where I am, it's obvious that 95%+ of profiles are fake. However, I don't believe they are created by Tinder itself. It's a combination of scams/bots, people looking for followers, prostitutes, throwaway profiles created just to try Tinder, and other stuff.
On Tinder's side, a few things are happening:
- Tinder actively surfaces old/abandoned profiles to make you waste your swipes and super swipes. It's safe to assume that the vast majority of profiles you swipe on belong to people who are no longer active or who signed up once and never actually used Tinder.
- Tinder picks a subset of profiles (again, mostly abandoned ones) as "upsells" to make you buy likes. They are even labeled "upsell" in their API and it's the same profiles over and over again when you sign up.
- Most people that you swipe on will never see your profile.
While Tinder itself probably does not create profiles, it does absolutely everything it can to make you spend money while giving you the minimum number of real matches it can get away with: just enough to keep you on the app as long as possible, nothing more.
The system is completely broken and Tinder is extremely malicious. It's almost a scam. Less than ~1% of the profiles you see belong to real people actively looking for matches. I am still using it because even 1% is better than nothing, but I hate the company with a passion.
Also, it has gotten significantly worse in the past ~5 years.
For many RL problems you don't really need GPUs, because the networks used are relatively simple compared to supervised learning and most simulations are CPU-bound. Many RL problems are constrained by data, so running simulations (on the CPU) is the bottleneck, not the network.
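A rough sketch of what "CPU-bound" means here (everything below is a made-up toy: the busy loop stands in for an expensive physics step, and the one-line policy stands in for a small network; none of these names are a real API):

```python
import time

# Hypothetical toy environment: step() stands in for a CPU-bound
# physics simulation.
class ToyEnv:
    def __init__(self):
        self.state = 0.0

    def step(self, action):
        # Simulate expensive CPU-side physics with a busy loop.
        acc = 0.0
        for i in range(10_000):
            acc += (i * action) % 7
        self.state += action
        reward = -abs(self.state)
        return self.state, reward

# A deliberately tiny "policy network": a single linear unit.
# For networks this small, a GPU round-trip would cost more than it saves.
def policy(state, w=0.5, b=0.1):
    return w * state + b

def collect_rollout(env, steps):
    transitions = []
    s = env.state
    for _ in range(steps):
        a = policy(s)
        s, r = env.step(a)
        transitions.append((s, r))
    return transitions

env = ToyEnv()
t0 = time.perf_counter()
rollout = collect_rollout(env, 100)
elapsed = time.perf_counter() - t0
print(len(rollout))  # 100 transitions; nearly all wall time is env.step
```

Profiling a loop like this is one way to check, for your own problem, whether the simulator or the network forward pass dominates before deciding to pay for GPUs.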
Yes. Unfortunately my experience in this area is about four years old, so maybe obsolete at this point. But at that time, for our problem -- which used many of the typical benchmark gym simulators -- the main bottleneck was the CPU, not the GPU.
I suspect the equation changes depending on how complex the neural network is (if it's simple, not much is gained from a GPU), whether the simulation can take advantage of GPUs (the ones we used didn't, but for 3D graphics-heavy simulation and other kinds of computation I'm sure it can help), and the algorithm -- some algorithms rely more on online evaluation, while others make more of an effort to reuse older rollouts. (An extreme case is offline RL, which has also attracted a lot of interest recently. Since you were asking for references, this might be worth a read: https://arxiv.org/abs/2005.01643)
Yes. GPUs are extremely good at doing millions of small repetitive calculations, and not great elsewhere.
My capstone project used RL on a Raspberry Pi to train hardware-in-the-loop (essentially when to open and close valves based on sensor input). It was incredibly slow because it couldn’t be parallelized (without buying additional hardware for $500 each). Lots of professors asked why a Raspberry Pi was chosen when we had high end GPUs in the lab, and I had to explain that the Pi was NOT the bottleneck, and in fact stayed idle 95% of the time.
Agreed, deep architectures are really only needed for feature engineering. There have been a few papers showing that even for these very deep setups, the actual policy can almost always be fully captured by a small MLP.
One example with model-based RL is "World Models" by Ha and Schmidhuber. They pre-train an autoencoder to reduce image observations to vectors, then pre-train an RNN to predict future reduced vectors, then use a parameter-space global optimization algorithm (but any RL algorithm would work) to train a policy that's linear in the concatenated observation vector and RNN hidden state.
The important thing here is that the image encoder and the RNN weren't trained end-to-end with the policy. The learned "features" captured enough information to be an effective policy input, even though they only needed to be useful for predicting future states.
It's also interesting that the image encoder was trained separately from the RNN. I think that only worked because the test environments were "almost" fully observable - there is world state that cannot be inferred from a single image observation, but knowing that state is not necessary for a good policy.
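To make the "linear policy over frozen features" idea concrete, here is a toy sketch (the encoder, the recurrence, and all weights are invented stand-ins for illustration, not the paper's actual architecture or code):

```python
def encode(observation):
    # Stand-in for a pretrained, frozen autoencoder: reduce an "image"
    # (list of pixels) to a short feature vector. Here: crude pooled stats.
    n = len(observation)
    mean = sum(observation) / n
    return [mean, max(observation) - min(observation)]

class TinyRNN:
    # Stand-in for the pretrained predictive RNN; keeps a hidden state.
    def __init__(self, size=2):
        self.hidden = [0.0] * size

    def step(self, z):
        # Toy recurrence: exponential moving average of the features.
        self.hidden = [0.9 * h + 0.1 * zi for h, zi in zip(self.hidden, z)]
        return self.hidden

def linear_policy(z, h, weights, bias):
    # The policy is linear in the concatenation [z, h], as in the paper.
    x = z + h
    return sum(w * xi for w, xi in zip(weights, x)) + bias

rnn = TinyRNN()
weights, bias = [0.5, -0.2, 0.3, 0.1], 0.0  # found by e.g. CMA-ES in practice
obs = [0.1, 0.4, 0.3, 0.9]
z = encode(obs)       # frozen encoder, not trained with the policy
h = rnn.step(z)       # frozen RNN, not trained with the policy
action = linear_policy(z, h, weights, bias)
print(action)
```

The point is that only the handful of numbers in `weights` and `bias` are optimized by the RL loop; the encoder and RNN are trained separately and just provide the input features.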
I don't really see the dichotomy. Surely the features you need to learn depend on the task? Or do you mean running linear / shallow rl on top of unsupervised learning?
Also see my comment above, but you find this kind of storage commonly in game development [0] where you are optimizing for batch access on specific columns to minimize cache misses. It's usually used as the storage layer for Entity Component Systems. It's also called data-oriented design [1]
Very cool. This kind of storage is similar to what's typically used in Entity Component Systems like EnTT [0], which can also be interpreted as in-memory column-oriented databases.
Recently I've started to prefer this style of programming over OOP. Each part of your application accesses a global in-memory DB with the full state. But unlike a traditional persistent DB, it's really fast.
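The column-oriented ("struct of arrays") layout can be sketched in plain Python (real libraries like EnTT use sparse sets in C++; class and field names below are invented just to show the data layout):

```python
# ECS-style storage: one array per component field, rows share an index.
class Positions:
    # Each field is its own column; a system iterating over x and y walks
    # contiguous data instead of jumping between per-object records.
    def __init__(self):
        self.entity = []  # which entity each row belongs to
        self.x = []
        self.y = []

    def add(self, entity, x, y):
        self.entity.append(entity)
        self.x.append(x)
        self.y.append(y)

def movement_system(positions, dx, dy):
    # A "system": batch logic over component columns, no per-object methods.
    for i in range(len(positions.entity)):
        positions.x[i] += dx
        positions.y[i] += dy

pos = Positions()
pos.add(entity=0, x=1.0, y=2.0)
pos.add(entity=1, x=5.0, y=5.0)
movement_system(pos, dx=1.0, dy=-1.0)
print(pos.x, pos.y)  # → [2.0, 6.0] [1.0, 4.0]
```

In a language with flat value types (C++, Rust), each column is a contiguous buffer, which is where the cache-miss savings for batch access come from.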
Interfaces/protocols aren’t really the same though. An interface defines capabilities for an object; the capabilities are directly associated with the type.
In ECS (where Components are a bag of data, and systems handle all logic/operations), and like a DB, the object is defined by an id and its relations; from the relations, you can derive available capabilities.
That is, an object tells you what it can do. In a DB, what it can do tells you the object.
You can create the same system with interfaces by simply ignoring the methods part of it, and keeping the data part, but associating data with capabilities is pretty much the defining difference between objects and structs.
More importantly from an architectural perspective, in ECS the logic isn’t associated with the object, it’s associated with a system that takes the object as input. The system is shared across all objects. The object (entity) for ECS is little more than an id and some relations.
An ECS very directly corresponds to an RDBMS. To call it OOP is to deny the ORM’s classic Object-Relational mismatch.
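The "relations tell you the capabilities" point can be sketched in a few lines (the component tables and names below are invented for illustration):

```python
# The entity is just an id; capability is derived from which component
# tables the id appears in, not from the entity's type.
health = {0: 100, 1: 50}        # entities that have a Health component
velocity = {1: (1.0, 0.0)}      # entities that have a Velocity component

def can_move(entity):
    # Capability check is a relation lookup, not an interface on a class.
    return entity in velocity

print(can_move(0), can_move(1))  # → False True
```

A system would then iterate over the `velocity` table and update positions for exactly the entities that have that relation, without any per-object method dispatch.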
An interface in a component object model can be made only of properties.
Secondly, most languages with OOP support aren't Smalltalk/Java but rather multi-paradigm, e.g. Objective-C, Component Pascal, C++, Delphi, Python, among others, which was already the case when Component Programming came into CS papers for the first time.
To argue that Component Programming is not OOP is just religious hate that shows lack of knowledge regarding CS literature.
> An interface in a component object model can be made only of properties.
Yes, I addressed this case. It can be done, but it rips the “object” out of it — you’re no longer sensibly organizing things under an “OOP” paradigm, but something else altogether. You’re quite directly reducing the “object” to a struct.
> Secondly, most languages with OOP support aren't Smalltalk/Java but rather multi-paradigm, e.g. Objective-C, Component Pascal, C++, Delphi, Python, among others, which was already the case when Component Programming came into CS papers for the first time.
That’s fine, but it doesn’t really support the case that it’s an “OOP” architecture/mindset/organization. In fact, it rather undermines your case..? That a language which isn’t interested in being purely OOP (is any?) is happy to introduce an arbitrary construct is not an argument that the construct must therefore be “OOP”.
> To argue that Component Programming is not OOP is just religious hate that shows lack of knowledge regarding CS literature.
It’d be more about diluting the term into nothingness — closures are a poor man’s objects, and objects are a poor man’s closures. It’s technically true, but it wouldn’t be useful to go ahead and describe all functional programming as “OOP”, because the main interest in defining the terms is to indicate architectural and logical flow/patterns one would expect under such a paradigm/organization.
Interfaces and classes with no defined methods look and feel much closer to Haskell and C than to C#, Python, and Smalltalk.
OOP is not a technical definition. It’s an organizational strategy, and the term itself is just a marker on a continuum. Everything is OOP, and nothing is, just as all things are Turing machines. But that’s not the point.
That is surely the point, everything else is religious hate without any technical substance.
Apparently me in the mid-90's porting a particle engine from Objective-C on NeXT to Visual C++ using COM on Windows, with a component-based architecture, did not happen.
I’m not clear — did you just ignore everything I wrote and mindlessly restate your claim, in a manner akin to one undergoing a fit of religious fervor?