It's pretty cool, because both branches let you create custom generators. One thing we tinkered with was using t-digests to profile large datasets in order to produce synthetic data, rather than just fake data. Basically, we're working on something that takes in real data and spits out data with the same statistical properties. One thing you can use this for is stripping out confidential data (in theory at least). Another use case is expanding dataset sizes.
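To make that idea concrete, here's a rough sketch of the profiling approach using the `tdigest` package: build a digest per numeric column, then draw synthetic values by inverting the estimated CDF. The column values are made up, and treating each column independently like this is a simplification a real generator wouldn't make.

```python
# Sketch: profile a numeric column with a t-digest, then draw synthetic
# values by inverse-CDF sampling. The data below is illustrative only.
import random
from tdigest import TDigest

def profile_column(values):
    digest = TDigest()
    digest.batch_update(values)
    return digest

def sample_synthetic(digest, n):
    # A uniform quantile mapped through the digest's percentile estimate
    # roughly reproduces the column's marginal distribution.
    return [digest.percentile(random.uniform(0, 100)) for _ in range(n)]

ages = profile_column([23, 31, 45, 29, 52, 38, 61, 27])
synthetic_ages = sample_synthetic(ages, 1000)
```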
Having access to synthetic data like this would let you give lower-level analysts, or people with lower clearances, data similar to the confidential data you're working with. That's a good way to reduce costs while also developing skills that would otherwise be difficult to build without granting access to the data itself.
What I found is that it's a valuable tool for getting something to a state where it can be applied to real data and refined further. So, long story short: I think it certainly has uses, but those uses eventually lead back to using real data.
Dummy data, like what the Faker package (available in various languages) produces, has very little utility beyond testing systems and developing prototypes with a schema similar to the real thing. It can, however, be used as a starting point for making synthetic data, and that's what we did.
Getting into synthetic data, there's sequential and non-sequential synthetic data. Sequential generation produces a single datum, such as age, then uses it as the starting point to produce the rest. For instance:
age = 29 => income for age (20 < x < 30) drawn from distribution [30, 70]; for incomes in that distribution... etc.
Here you actually have really high-utility data and can use it to produce basic models that you can then apply to the real data. Non-sequential generation, on the other hand, creates each datum independently according to its own rule, but ignores the interdependence between those rules. For example, where a sequential dataset may contain less than 1% of people aged 20 to 30 who are retired, a non-sequential dataset may use a distribution based on the group average, leading to a skewed number of retired people aged 20 to 30.
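A toy contrast of the two approaches; all the distributions, brackets, and rates below are made up for illustration.

```python
# Toy contrast between sequential (conditional) and non-sequential
# (independent) generation. Every number here is illustrative.
import random

def sequential_record():
    age = random.randint(20, 70)
    # Income is drawn conditionally on the age bracket.
    income = random.uniform(30, 70) if age < 30 else random.uniform(50, 120)
    # Retirement depends on age, so 20-30 year olds are rarely retired.
    retired = random.random() < (0.01 if age < 30 else 0.3)
    return {"age": age, "income": income, "retired": retired}

def non_sequential_record():
    # Each field follows its own marginal rule; interdependence is lost,
    # so you can easily get a retired 22-year-old on a senior's income.
    return {
        "age": random.randint(20, 70),
        "income": random.uniform(30, 120),
        "retired": random.random() < 0.2,  # the group average
    }
```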
Photo-real 3D worlds are particularly appropriate for generating high-quality synthetic ML training data sets—I know a bunch of autonomous driving companies are doing it with great success. (We also use Houdini to generate our 3D data sets.)
We're using GANs to generate synthetic transactional data that preserves temporal and causal correlations.
Friends link to avoid the paywall: https://medium.com/towards-artificial-intelligence/generatin...
Generally, we use it to avoid using 'real' data. Accuracy is usually about the same with either synthetic or real data, though there are edge cases where one fails due to particular issues, such as the way synthetic data suppresses outliers.
It is difficult to create high-quality, photo-realistic images at scale with enough variance. It would be interesting to see whether one can train a network that maps images (both synthetic and real) to some intermediate representation and then train a detector/classifier/semantic-segmentation model on top of it...
I had a lot of fun playing around with an open-source game engine called VDrift to generate ground truth for optical flow, depth, and semantic segmentation. I think the video with the ground truth is nice, but the graphics of the game weren't that good. All the code is open-sourced on GitHub if somebody feels like playing around...
To my knowledge, the best public attempt at this is represented by the OpenAI Rubik's Cube / Shadow Robotics dexterous hand demo. https://openai.com/blog/solving-rubiks-cube/
NVIDIA are also doing some interesting work in this area, but again, I'm not really sure they put enough different modes of variation into it. https://research.nvidia.com/publication/2018-04_Training-Dee....
CVEDIA also gets really impressive results using similar techniques: https://www.cvedia.com/
One might argue that the point of synthetic data is to produce pseudo-random samples, but they are only ever going to reflect the biases of the interpreter and so the critique will still be worth holding for its cautionary significance.
Of greater concern are quality measures that look across the entire dataset. Here are some hypothetical metrics which (although impossible to compute in practice) will help get you thinking in the right way.
- How does the synthetic image manifold compare to the natural image manifold?
- Are there any points on the synthetic image manifold where the local number of dimensions is significantly lower than at the corresponding point on the natural image manifold? (This would indicate an inability to generalise across that particular mode of variation in that part of feature space.)
- Are there any points where the distance between the synthetic image manifold and the natural image manifold is large AND the variance of the synthetic image manifold in the direction of that difference is small? (This would indicate an inability to generalise across the synthetic-to-real gap at that point on the manifold; a crude proxy for this one is sketched after the list.)
- Does your synthetic data systematically capture the correlations that you wish your learning algorithm to learn?
- Does your synthetic data systematically eliminate the confounding correlations that may be present in nature but which do not necessarily indicate the presence of your target of interest?
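None of these can be computed exactly, but crude proxies exist. For instance, a rough stand-in for the synthetic-to-real gap is the distance from each synthetic sample to its nearest real neighbour in some embedding space; the random "embeddings" below are placeholders for whatever feature extractor you actually use.

```python
# Crude proxy for "how far is the synthetic manifold from the real one":
# embed both sets (e.g. with a pretrained network), then look at each
# synthetic point's distance to its nearest real neighbour. Large, isolated
# distances hint at regions where the synthetic-to-real gap is wide.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def synthetic_to_real_gap(real_embeddings, synthetic_embeddings, k=1):
    nn = NearestNeighbors(n_neighbors=k).fit(real_embeddings)
    distances, _ = nn.kneighbors(synthetic_embeddings)
    return distances.mean(axis=1)  # one gap estimate per synthetic sample

# Illustrative usage with random stand-in embeddings.
real = np.random.randn(1000, 64)
synth = np.random.randn(500, 64)
gaps = synthetic_to_real_gap(real, synth)
print("median gap:", np.median(gaps), "worst 1%:", np.quantile(gaps, 0.99))
```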
Engineering with synthetic data is not data mining. It is much more akin to feature engineering.
You can also A/B test for specific use cases: for example, train one model on the real data and another on the synthetic data, then compare the relevant metrics.
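A minimal sketch of that kind of A/B test (train on real vs. train on synthetic, evaluate both on the same held-out real data); the toy stand-in datasets are only there to make the snippet runnable.

```python
# A/B comparison: train identical models on real vs. synthetic data and
# score both on the same held-out slice of real data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def auc_for(train_X, train_y, test_X, test_y):
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(train_X, train_y)
    return roc_auc_score(test_y, model.predict_proba(test_X)[:, 1])

# Stand-in arrays so the snippet runs; in practice these come from your
# real and synthetic tables.
real_X, real_y = make_classification(n_samples=2000, random_state=0)
synth_X, synth_y = make_classification(n_samples=2000, flip_y=0.05, random_state=1)
train_X, test_X, train_y, test_y = train_test_split(real_X, real_y, random_state=0)

print("train-on-real AUC:     ", auc_for(train_X, train_y, test_X, test_y))
print("train-on-synthetic AUC:", auc_for(synth_X, synth_y, test_X, test_y))
```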
You can see some of these illustrated in, e.g., https://hazy.com/blog/2020/03/23/synthetic-scooter-journeys
What another commenter said about how synthetic data is useful for providing analysts with good-quality dummy data instead of confidential real data is correct. I think that's a great use case for synthetic data. But in general, I disagree with using synthetic data to make up for a dearth of real-world data unless you have reasonable certainty that your data conforms to a particular distribution with known features and parameters.
One such area is financial simulation. You can generally be reasonably certain that price data will be roughly lognormal, so it's okay to generate synthetic lognormal price data in place of real price data for certain types of analysis. But again, I would still stress that you can't use that to measure (for example) how profitable an actual trading strategy would be. You need real data for that (to analyze order fills, counter survivorship bias, etc.).
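As a quick sketch of what that looks like, here is synthetic daily price data under a lognormal / geometric Brownian motion assumption; the drift and volatility numbers are made up.

```python
# Minimal sketch: synthetic daily prices under a lognormal / geometric
# Brownian motion assumption. Drift and volatility are illustrative; as
# noted above, this is fine for some analyses but not for backtesting
# fills, slippage, or survivorship effects.
import numpy as np

def synthetic_prices(s0=100.0, mu=0.05, sigma=0.2, days=252, seed=0):
    rng = np.random.default_rng(seed)
    dt = 1.0 / days
    # Log-returns are normal, so the resulting prices are lognormal.
    log_returns = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.standard_normal(days)
    return s0 * np.exp(np.cumsum(log_returns))

prices = synthetic_prices()
```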
Another area is computer vision. As others have pointed out, since our understanding of roads is very good it's very effective to generate synthetic data for training self-driving vehicle models. But it's still tricky and it can be extremely confounding if misused.
Because you have this piecewise-linear sort of warping of the feature space going on, the NN is basically a whole bunch of lever-arms. The broader the support that you can give those lever arms, the less they will be influenced by noise and randomness ... hence my obsession with putting enough variance into the dataset along relevant dimensions.
To put this another way, I think that the synthetic data manifold has to be 'fat' in all the right places.
Can you give an example of successful synthetic data generation which doesn't need to map to the same distribution? I'm surprised at that idea.
So, as a simple example, the illumination in a real dataset might be strongly bimodal, with comparatively few samples at dawn and dusk, but in a synthetic dataset we might want to sample light levels uniformly across a range specified in the requirements document.
Similarly, on the road, the majority of other vehicles are seen either head-on or tail-on, but we might want to sample uniformly over different target orientations to ensure that our performance is uniform, easily understood, and does not contain any gaps in coverage.
Similarly, operational experience might highlight certain scenarios as being particularly high risk. We might want to over-sample in those areas as part of a safety strategy in which we use logging to identify near-miss or elevated-risk scenarios and then bolster our dataset in those areas.
In general, the synthetic dataset should cover the real distribution... but you may want it to be broader than the real distribution and to focus more on edge cases which may not occur all that often but which either simplify your requirements specification or provide extra safety assurance.
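A small sketch of what deliberately not matching the real marginals can look like in practice; all the scenario names, ranges, and weights below are illustrative.

```python
# Sketch: light level is sampled uniformly across the range in the
# requirements, target heading is sampled uniformly rather than being
# head-on/tail-on heavy, and scenarios flagged as high risk are
# over-sampled. Names and weights are made up for illustration.
import random

HIGH_RISK_SCENARIOS = ["occluded_pedestrian", "low_sun_glare", "merging_truck"]
ROUTINE_SCENARIOS = ["empty_highway", "light_urban_traffic"]

def sample_scene():
    return {
        "light_level_lux": random.uniform(10, 100_000),    # dusk through noon, uniform
        "target_heading_deg": random.uniform(0, 360),       # not just head-on / tail-on
        "scenario": random.choices(
            HIGH_RISK_SCENARIOS + ROUTINE_SCENARIOS,
            weights=[3, 3, 3, 1, 1],                         # over-sample the risky cases
        )[0],
    }
```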
Also, given that it's impossible to make synthetic data that's exactly photo-realistic, you want enough variation in enough different directions to ensure that you can generalize over the synthetic-to-real gap.
Also, I'm not sure how much sense the concepts of mean and variance make in these very very high dimensional spaces.
We're building something at Creation Labs that helps us narrow, or completely eliminate, the gap between synthetic data and real data, and we'll have some exciting things to show on our website (creationlabs.ai) in the next few weeks.
In fact, it works so well it feels like cheating.
Our ML algorithms are really good at finding correlations -- but we don't necessarily know if the correlations in our data are actually the ones we want our system to learn. When we're using synthetic data, we have many more levers at our disposal to ensure that this is the case.