Deep Schizophrenia is a deep neural network model, which I'm going to either open source or expose through an API, that can be used to generate new narratives of any size. And it doesn't have the semantic and narrative "fall-off" you get after a few sentences with models like OpenAI's GPT-2 and other attention-based or LSTM models. It's called "Deep Schizophrenia" because, like what Google Deep Dream, GANs and style transfer models do to create new images of people, landscapes etc., it sort of "warps" narratives to generate new ones. It's as if the model is having a hallucination (hence the nomenclature, in contrast to 'Deep Dream'), but instead of changing images it's changing the semantic and narrative embedding space.
You're talking about some kind of online trolling and harassment. That would be stupid and accomplish the exact opposite of the intended effect. But I'm talking about customized, generated digital content using the very same content channels that everyone else on here uses: social media, blogs, etc.
Let me explain how Deep Schizophrenia works.
I was able to construct a continuous, fractal, space-filling curve that satisfies Peano's/Cantor's definition (https://en.wikipedia.org/wiki/Space-filling_curve), but I was able to give it just enough additional "structure" to let it be treated as differentiable 'everywhere' (if you're not clear on the definition of 'differentiable', for now just know that it's very important for training machine learning models). This lets me normalize each document in my corpora, from narratives the size of a single, abstract sentence like "The quick brown fox jumped over the lazy dog." to the entire novel 'The Fox and the Hound', so that the entire narrative, no matter its length, can be embedded as a whole into a common narrative unit 'space' by simply adjusting the iterations of the curve according to word count.

In Deep Schizophrenia, individual word tokens aren't decoded from, nor treated like, discrete values (as in CBOW, Skip-Gram, BERT etc.) but as the localized, resultant values of wavelet functions 'passing through' the dimensions of the space. This lets me use techniques similar to GANs or style transfer, but heavily modified to take advantage of my curve's structure, to generate new narratives by 'warping' them along these dimensions while still maintaining narrative, grammatical and semantic 'cohesion'. So no "fall-off".

To borrow a metaphor, think of these 'semantic' wavelets like drawstrings on some n-dimensional piece of fabric: when you pull on a string, the entire garment, from 'hem' to 'hem', gets pulled, bunched, stretched or twisted cohesively, as one garment, which is what you'd expect. How consistently it conforms to one's expectations of narrative, grammatical and semantic cohesion is largely dependent on how much memory one can afford to throw at it during training.
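To make the encoding step concrete, here's a toy sketch in Python. My actual curve and wavelet family aren't something I've published, so the standard (non-differentiable) Hilbert curve and Ricker wavelets below are stand-ins, purely to illustrate the two ideas: (1) a narrative of any word count gets spaced evenly along a space-filling curve over a fixed unit space, and (2) a token's value is read off from wavelets passing through that space rather than looked up as a discrete symbol.

```python
# Illustrative sketch only. The custom differentiable curve and wavelet
# family are unpublished; a standard Hilbert curve and Ricker ("Mexican
# hat") wavelets stand in for them here.
import numpy as np

def hilbert_d2xy(order, d):
    """Map a distance d along a Hilbert curve of the given order
    to (x, y) coordinates on a 2^order x 2^order grid."""
    x = y = 0
    t = d
    s = 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        # Rotate the quadrant, per the standard Hilbert construction.
        if ry == 0:
            if rx == 1:
                x = s - 1 - x
                y = s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def embed_tokens(num_tokens, order=6):
    """Normalize a narrative of any length onto the unit square by
    spacing its tokens evenly along the space-filling curve."""
    n = 1 << order
    cells = n * n
    # Evenly spaced curve positions in [0, 1), one per token.
    positions = np.linspace(0.0, 1.0, num_tokens, endpoint=False)
    coords = np.array([hilbert_d2xy(order, int(p * cells)) for p in positions])
    return positions, coords / (n - 1)  # curve parameter, unit-square coords

def ricker(t, center, width):
    """Ricker wavelet: a localized bump standing in for the model's
    'semantic' wavelets."""
    u = (t - center) / width
    return (1.0 - u ** 2) * np.exp(-0.5 * u ** 2)

# A token's value is the superposition of wavelets 'passing through'
# its position on the curve, rather than a discrete lookup.
positions, coords = embed_tokens(num_tokens=9)  # e.g. "The quick brown fox..."
wavelets = [(0.2, 0.05), (0.5, 0.1), (0.8, 0.03)]  # (center, width) pairs
token_values = sum(ricker(positions, c, w) for c, w in wavelets)
print(np.round(token_values, 3))
```

Warping the wavelet parameters (centers, widths, amplitudes) is then what deforms the whole narrative at once, which is where the drawstring metaphor comes from.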
And the training set is prodigiously annotated and tagged with themes, prominent characters/persons, archetype categories, etc. So these inputs can be modified to get different predictions (stories). GPT-2 was trained on about 40 GB of largely unannotated data, and it's considered state of the art. But I have over 247 GB of annotated narratives, all of which could potentially be trained on (I estimate the GPU costs to train on any significant portion of it would be around $250K and take months. But it would be worth it).
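As a sketch of what "modifying the inputs" means here: each document carries annotations, and those annotations become a conditioning signal for the generator. The field names and tag vocabulary below are made up for illustration; the real annotation schema is far richer.

```python
# Hypothetical sketch of conditioning on annotations. The real schema
# and tag vocabulary are unpublished; these names are illustrative only.
import numpy as np

TAG_VOCAB = ["betrayal", "redemption", "hero", "trickster", "romance"]  # assumed

def conditioning_vector(annotations):
    """Turn a document's tags into a multi-hot vector the generator
    could be conditioned on; swapping tags swaps the predicted story."""
    vec = np.zeros(len(TAG_VOCAB))
    for tag in annotations.get("tags", []):
        if tag in TAG_VOCAB:
            vec[TAG_VOCAB.index(tag)] = 1.0
    return vec

doc = {"title": "Example", "tags": ["hero", "redemption"]}
print(conditioning_vector(doc))  # [0. 1. 1. 0. 0.]
```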
I chose a fractal structure because narrative expression has what appears to me to be a sort of infinite set-ness (https://en.wikipedia.org/wiki/Infinite_set). This allows Deep Schizophrenia to recursively fill up the story from the 'inside' with each pass, moving down from higher levels of abstraction as it increases the cardinality (detail) of the narrative set. So, in theory, with some clever memory management, it could allow you to fill up a narrative indefinitely (à la George R. R. Martin). Theoretically, it should also be possible to take a complete, encoded narrative, normalize it onto a fraction of the curve, say [0, 0.5], and, with some software architecture I haven't quite figured out yet, generate a "sequel" on the interval (0.5, 1].
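Both of those ideas reduce to simple operations on the curve parameter. A toy sketch (plain Python, and no claim that this is how the real refinement pass works):

```python
# Sketch of recursive 'filling from the inside': each refinement pass
# doubles the resolution of the narrative's sample points on the curve,
# adding detail between what's already there.
def refine(positions):
    """Insert a midpoint between each pair of existing curve positions,
    mimicking one more iteration of the fractal curve (more 'cardinality')."""
    out = []
    for a, b in zip(positions, positions[1:]):
        out.extend([a, (a + b) / 2])
    out.append(positions[-1])
    return out

level0 = [0.0, 0.5, 1.0]   # coarse outline: beginning, middle, end
level1 = refine(level0)    # [0.0, 0.25, 0.5, 0.75, 1.0]
level2 = refine(level1)    # 9 points, twice the detail
print(level1)
print(level2)

# And the 'sequel' idea: squeeze a finished narrative onto [0, 0.5]
# and leave (0.5, 1] open for the generator to fill.
prefix = [p * 0.5 for p in level2]
print(max(prefix))         # 0.5 -- the rest of the curve is open
```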
Narratives, common stories, myths etc. tell you more about people's beliefs and ideologies than mere factual data could ever hope to. Did it matter that the Boston Tea Partiers were actually dumping the tea to protest the new, lower prices of the East India Company's tea (thanks to the British lowering the tariffs), which were negatively affecting the sales of their own speciously sourced tea? Not really, because what's the story that made America what it is today? The one that reaffirmed the heroes of the American Revolution. Is Turgenev's 'Sketches from a Hunter's Album', and the effect it had on people's understanding of the morality and brutality of Russian serfdom, lessened at all by the fact that it's a fictional narrative? No; I would argue the fact that it's a fictional narrative loosely based in reality is what makes it that much stronger.
And we can now tell people much better stories, with better customized content (targeting), faster and much more efficiently than any human could hope to accomplish.
(sigh) A lot of hard work and bootstrapping. My friend, don't waste this moment. There is a much more pertinent question you should be asking me.
There's a lot to unpack here. But you have to understand that I built Deep Schizophrenia (though I didn't call it that at the time) SEVERAL YEARS BEFORE I built Grassland. Partly because I realized what D.S. could do to people. Let me explain....
We'll take a few of the arguments some people have commented here as an example. I imagine most of them are atheists. But it's irrational to think 100 years of Nietzsche is going to make a dent in 3 million years of evolution. We're hardwired for silly beliefs (no offense). Every culture and social group has their own pantheon of gods; they all just have different names for them. You can talk about 'privacy' and you can talk about 'Privacy'. Encryption and closing your blinds will give you privacy. That's rational. But there's no Privacy god who's going to make a data scientist suddenly unknow your pilfered Equifax credit history. The Privacy god won't make the former employees of Cambridge Analytica suddenly unknow how to make your aunt vote for candidate X. And they'll never outright say they believe that. But they do by their actions.
Like 4chan with Nazism, they at one point merely cajoled one another with this mocking, ironic detachment about the idea of a Flying Spaghetti Monster, because they thought they were too smart to believe in it. But then somewhere along the way, they actually did.
These are all different stories. Human beings are extremely susceptible to the power of a story. They're like those fungi that take over an ant's brain until the ant is controlled by the fungus. If you want a story to placate your fantasy, if you really want that, then I've built Deep Schizophrenia (well, I actually built it to generate romance novels; romance is a big industry) and its ancillary software to figure that out for you and provide the story/rhetoric that reaffirms your fantasy back to you. A virtual, virtual reality.
But for the rest of us, those who want data about the real world with a statistical guarantee of validity that we can calculate, and a clear separation between that and things that are mere stories, narratives and rhetoric told by humans and therefore subject to bias and interpretation (I enjoy The Lord of the Rings but I don't literally believe in Mordor), for those people there's Grassland.
And that's part of the reason why I built Grassland, and why I built it in such a way as to make it extremely costly to get false data into the system. Because I knew that eventually either I would release the Deep Schizophrenia software or, since simultaneous discovery is so common (https://en.wikipedia.org/wiki/List_of_multiple_discoveries), someone of untoward quality would possibly discover it, use it in secret, and not tell people about it like I did. And there'd be no safeguard against it if I didn't build Grassland.
I'm not saying the things I build are perfect or that they're going to fulfill your fantasy of a perfect world (again, Deep Schizophrenia can give you that fantasy if you're hell-bent on stupidity). But what I try to do is give mathematical arguments to support my conjectures.
That's why I wanted a license to prevent people from merging the software with things that would counteract Grassland's purpose. Yeah, maybe I wrote it wrong. It's a little difficult solving some of the world's oldest mathematics, AI, computer vision, cryptocurrency and surveillance problems in one's spare time. Adding a law degree to the mix must have slipped my mind. I'll fix the license. I'm only one guy...
So, as I understand it, the higher the word count, the deeper the "spaces" get, by generating filler content matching the space and fit for the dimensions of the space.
> ... the higher the word count, the deeper the "spaces" get ...
During training, what I would say is that the "space" gets denser. Imagine you live on a cliff overlooking a lake/sea (some body of water with known boundaries). You notice that on some days the wind produces long waves that are spaced far apart, while on other days the waves are very short and choppy. If you wanted to encode this, it would take more memory to encode the latter than the former, despite the lake being the same size.
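A quick back-of-the-envelope version of that analogy, using the Nyquist rate as a stand-in for "memory needed" (the numbers are arbitrary; it's the ratio that matters):

```python
# Sketch of the 'denser space' point: two wave patterns over the same
# lake need very different amounts of memory to encode faithfully.
# Nyquist: you need at least 2 samples per cycle to capture a wave.
lake_length = 100.0  # same body of water both days (arbitrary units)

def samples_needed(wavelength, domain):
    """Minimum samples to encode a wave of this wavelength over the
    domain at the Nyquist rate (two samples per cycle)."""
    cycles = domain / wavelength
    return int(2 * cycles) + 1

calm_day = samples_needed(wavelength=25.0, domain=lake_length)   # long swells
choppy_day = samples_needed(wavelength=0.5, domain=lake_length)  # short chop
print(calm_day, choppy_day)  # 9 vs 401: same lake, ~45x the memory
```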
If you have more questions about the Deep Schizophrenia model, I'd be happy to discuss. You'll see my email at the bottom of the site.