Is there any market/demand for AI companions for females (i.e. https://jaimee.ai), or has this trend of AI companions been overwhelmingly for males? I suspect the latter, but I'm curious if anyone has evidence beyond Jaimee AI of the former.
You'd be _very_ surprised. This isn't reddit so I probably don't have to write a disclaimer about generalizations... but just in case, I'm speaking in general terms.
Just look at who buys/consumes most written romance. Overwhelmingly women. Now, a super simple AI that just says what you want to hear is different from long-form romance novels... but I think we could see something approaching 50/50.
I personally can't see the appeal. It seems like a fun toy for a bit. Super impressive stuff but the idea of treating it like a human is a bit depressing to me.
>I personally can't see the appeal. It seems like a fun toy for a bit. Super impressive stuff but the idea of treating it like a human is a bit depressing to me.
I wouldn't mind a chatbot trained on Kant's or Hegel's work, so I could ask Hegel, for example, what he thinks of some modern-day issue. I know character.ai has historical characters as chatbots, but they seem like toys (I agree with you on that) and they don't cite historical sources.
I think the only problem would be we truly don't know if that is what a figure would think. I can't imagine how many people would invoke the "George Washington's AI agrees with me" lol
You'd be surprised at how deranged some people are, regardless of sex. Having a Prince Charming / Manic Pixie Dream Girl that always agrees with you would be utopia for them, amplifying their mental issues. Soon they'll have their worldview warped, and that's not even considering bad actors. If the model is poisoned by someone with political, criminal, or economic motives, they'll be very vulnerable.
>You'd be surprised at how deranged some people are, regardless of sex. Having a Prince Charming / Manic Pixie Dream Girl that always agrees with you would be utopia for them, amplifying their mental issues.
Aren't social media and niche internet communities already doing that? People call them echo chambers.
Thankfully I don't know of any of these types first hand, but I totally believe you.
I tried the "companion" and it was a fantastic work of technology, but the idea that there is _nothing_ behind the screen just makes me not really care to use it for "companionship."
It's mostly for males because the developers behind them are targeting themselves. However, I'd imagine the market for female users is likely equal, if not larger.
Well, there is an app called Tolan (alien AI friend) that has been very successful and the devs have said that 80% of the users are young women.
I released an app in this realm a few days ago; it's very much a work in progress, but my goal was to make the AI feel more like a computer and less like a companion/boyfriend. I think the relationships these companies are pushing will be harmful in the long term.
Of the few friends of mine who have talked about this, the majority have actually been women. I think image gen lagged behind LLM companions because of the computational intensity, plus VLMs are just more complex. Because of that, it seems to have been more interesting to women, who tend to prefer prose to visuals.
Yes, there was a Last Week Tonight episode on AI slop. There's a product that is popular among women; "biker boyfriend" and "sadistic boyfriend" were among the most popular AI-agent flavors.
If there were a list of falsehoods about NSFW content, "women must despise sexualized females, so all female depictions must be for men to consume" would be pretty high up on it.
I experienced something similar. My use case is summarizing bank statements (sums, averages, etc.). Gemini wouldn't do it; it said there were too many pages. When I asked for the max number of supported pages, it said 14. I tried both 2.0 Flash and 2.0 Pro in the Vertex AI console.
Try with https://aistudio.google.com
I think the page limit is a Vertex thing.
The only real limit is the number of input tokens it takes to parse the PDF. If those tokens plus the tokens for the rest of your prompt are under the context-window limit, you're good.
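That budget check can be sketched as a quick back-of-the-envelope calculation. This is a rough sketch, not anything from this thread: the 258-tokens-per-page figure and the 1M-token context window are assumptions based on Gemini's published document-understanding docs; check the current numbers for your model before relying on it.

```python
# Rough check: will a PDF plus the rest of the prompt fit in the
# model's context window?
TOKENS_PER_PDF_PAGE = 258      # assumed fixed per-page cost (verify in your model's docs)
CONTEXT_WINDOW = 1_000_000     # assumed long-context window size

def fits_in_context(num_pages: int, prompt_tokens: int,
                    context_window: int = CONTEXT_WINDOW) -> bool:
    """True if the PDF pages plus the remaining prompt tokens fit."""
    return num_pages * TOKENS_PER_PDF_PAGE + prompt_tokens <= context_window

print(fits_in_context(14, 500))     # a 14-page bank statement fits easily
print(fits_in_context(5_000, 500))  # thousands of pages blow the budget
```

Under those assumptions, the 14-page cap the model claimed has nothing to do with the actual context window; a long-context model could take thousands of such pages.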
We've been using technology to touch up movies for years. What's the big deal - just because it's the AI boogeyman? Accusations that the director is deliberately using AI to squeeze artists out of jobs are silly; he is simply perfecting the scene using the tools at his disposal, like every director before him.
You’re getting a lot of scared artistic people right now lashing out at their jobs disappearing. I genuinely can’t blame them. I’d probably do the same, and let’s keep in mind, a lot of these people weren’t doing particularly well up to this point. Imagine being a struggling actor and realizing acting roles were being given to a chatbot. That’s how this is seen by them.
There was a similar controversy recently in the UK where a radio show was produced with an AI host (with the AI’s voice based on a now-dead famous interviewer). It was a complete novelty. Regardless, there were outcries, but when you looked closer, they were outcries from people in the industry feeling insulted that they could have done those interviews and made a name for themselves in the radio interview circuit. An AI, particularly of someone who is dead and already had their time in the sun, felt like quite the insult. The territory they were all fighting over was already pretty tiny, and then it shrinks into almost nothing, with them left thinking, “Would it really hurt so much if I could just manually do this thing you’ve automated away?”
We’re going to see a lot more of this. I can almost guarantee there’s going to be a movie made at some point that doesn’t have any actors, or has very limited input from actors. Perhaps an animation with the voices done by amateurs put through some sort of “acting filter” that makes them sound just as good. Actors will definitely have a lot to say about that. Don’t even get me started on AI-generated scripts: the screenwriters’ guild is already putting things in their contracts to try to limit this.
This seems to still be very much an AWS/Amazon project with no clear path to becoming its own independent thing. For example, you want vulnerability scanning on the OS? Well you can use an Amazon product for that, otherwise *shrug* [1]. So I guess as long as you plan to run Bottlerocket in AWS, you're fine.
I wish the Bottlerocket team would do one of two things: either own up that this is just an AWS project, or start solving for things like this and actually be a product that "runs in the cloud or in your datacenter," as their website suggests.
To be fair, I think vulnerability management ("VM") on the OS for Flatcar / Bottlerocket / CoreOS is not a requirement in the same way as on RHEL etc.
Do you want to know if you are patched? Are you running the latest version? If so, you have all the available patches.
I appreciate this can cause difficulties in some regulated domains, because there's a "VM" box that needs to be ticked on the compliance worksheet.
Most of the reason we need VM on a "traditional" OS is to handle the fact that they have a very broad configuration space and their software composition can be - and often is - pretty arbitrary (incorporating stuff from a ton of sources / vendors and those versions can move independently).
But that's not how you're supposed to use a container OS.
If you do "extra work" to discover vulnerabilities in "latest", you are not really doing the job of a system owner (whose job is to apply patches from upstream in a timely fashion), you are doing the work of a security researcher.
It's not like something is stopping one from doing a vuln scan, right? Like, there's something that SSM's in (or uses the admin container) and then runs the scan. Couldn't you just do the same thing?
Genuine questions, I don't know if this is the case or not.
I just wrote a post on this. We have an eBPF + SBOM based security tool, and it runs great because it hooks the kernel directly via a Kubernetes DaemonSet: https://edgebit.io/blog/base-os-vulnerabilities/
tl;dr: Amazon prioritizes patching really well, fixing real issues first
Indeed, but it's just an example. Imagine it said "For example, you want Feature X on the OS? Well you can use an Amazon product for that, otherwise shrug" instead if it makes it easier.