>It's not just optimistic - it's qualitatively unjustified to think that neuroscience (in its current form, at least) is inevitably capable of cracking consciousness.
The fact that you had to add the parenthetical here to hedge your bet demonstrates that you don't even entirely believe your own claims.
That claim has a very robust history in philosophy of mind. Peter Hacker and M.R. Bennett, a philosopher and a neuroscientist respectively, cowrote Philosophical Foundations of Neuroscience[0]. There was also a fascinating response and discussion in a further book with Daniel Dennett and John Searle called Neuroscience and Philosophy[1]. Both books are excellent and have fascinating arguments and counter-arguments; you get very clear presentations of fundamentally different pictures of the human mind and of the role and idea of neuroscience.
Not an axiom, just a prior with enough evidence to smash the probability of ghosts quite near to zero. You're welcome to pretend that I said "zero" and continue shadow-boxing a straw-man, but if you want to fight my actual argument you need to contend with "near to zero."
The difference between Woo of the Gaps and Science of the Gaps is that science is on the advance and woo is on the retreat; it has been this way for centuries, and the pace always seems to be determined exactly by the rate at which science advances rather than by any actual opposition from the Woo camp. Nothing is over until it's over, but how much do you actually want to bet on a glorious turnaround? You do you, but for me the answer is "not much."
The rate of an army's advance through a particular town is 0 until they get there, but if you were to see the front lines moving towards you and put forward this argument as a reason to stay, you would be in for a rude surprise.
What argument? I don't see an argument, I just see [deleted].
If you mean this...
This is why I think strict materialism on consciousness is misguided. People like to think "we've cracked everything scientifically, from quantum physics to neuroscience, so even if we don't have a good explanation for consciousness now, we'll get there." Except the reality is that macroscopic neuroscientific findings are incredibly coarse, with many caveats and uncertainties - statements more like "this area of the brain is associated with X" than "this area of the brain causes X." It's not just optimistic - it's qualitatively unjustified to think that neuroscience (in its current form, at least) is inevitably capable of cracking consciousness.
Many STEM people hate this because they want to axiomatically believe materialist science can reach everything, despite the evidence to the contrary. shrug
... that wasn't an argument, it was a loosely formed set of vague and unsubstantiated claims. The fact that you immediately deleted it and started insulting anyone who responded to you kind of proves my point. I'm sorry I wasted my time on you, won't happen again.
I was going to help support your argument, but I can’t because you started throwing a fit and deleted everything. My 18-month-old has calmer manners. You can actually delete your comment (instead of editing it to [deleted]) and it kills the entire thread, you know.
While I agree in general, I think you overstate things here:
> Many STEM people hate this because they want to axiomatically believe materialist science can reach everything, despite the evidence to the contrary.
Do we have actual evidence that it can't reach everything? That would be "evidence to the contrary". What you have given is evidence of its inability to reach everything so far, in its current form. That's still not nothing - the pure materialists are committed to that position because of their philosophical starting point, not because of empirical evidence, and you show that that's the case. But so far as I know, there is no current evidence that they could never reach that goal.
[Edit to reply, since I'm rate limited: No, sauce for the goose is sauce for the gander. The materialists don't get the freebie, and neither do you. In fact, I was agreeing with you about your pointing out that the materialists were claiming an undeserved freebie. But you don't get the freebie, for the same reason that they don't.]
Science and philosophy as they currently stand have yet to settle on a single, universally agreed-upon definition of "consciousness" — last I heard it was about 40 different definitions, some of which are so poor that tape recorders would pass.
The philosophical definitions also sometimes preclude any human from being able to meet the standard, e.g. by requiring the ability to solve the halting problem.
Without knowing which thing you mean, we can't confidently say which arrangements of matter are or are not conscious; but we can still be at least moderately confident (for most definitions) that it's something material because various material things can change our consciousness. LSD, for example.
>because various material things can change our consciousness. LSD, for example.
I feel really encouraged here, because I think this has surfaced recently (to my awareness at least) as a good example of material impacts on conscious states that seems to get through to everybody.
Right, you can cite, say, lobotomies, concussions etc all day long but I think eyes glaze over and it hinges on the examples you choose.
I think the one about drugs is helpful because it speaks to the special things the mind does, the kind of romanticized essentialism that's sometimes attributed to consciousness, in virtue of which it supposedly is beyond the reach of any physicalist accounting or explanation.
A slightly-less-than-perfect analogy: I can mess with the execution of software by mis-adjusting the power supply far enough. It still runs, but it starts having weird errors. Based on that, would we say that software is electrical?
Is software electrical? It certainly runs on electrical hardware. And yet, it seems absurdly reductionist to say that software is electrical. It's missing all the ways in which software is not like hardware.
Is consciousness similar? It runs on physical (chemical) hardware. But is it itself physical or chemical? Or is that too reductionist a view?
(Note that there is no claim that software is "woo" or "spirit" or anything like that. It's not just hardware, though.)
Humans being unable to figure out how inanimate matter gives rise to consciousness is not evidence that "strict materialism on consciousness is misguided". Or is there some other evidence I'm unaware of?
> When disagreeing, please reply to the argument instead of calling names. "That is idiotic; 1 + 1 is 2, not 3" can be shortened to "1 + 1 is 2, not 3."
> Please don't fulminate. Please don't sneer, including at the rest of the community.
People lash out at anyone who says "aha! this is evidence against materialism" in the usual case where materialism predicts exactly the same observation. There are only a few areas where common materialist and dualist models diverge: "brains are complex and hard to understand" is not one of them.
I don't really care if I believe you or not, you deleted your comment so I can't even see what you're referring to, but getting into unproductive arguments on the internet is just gonna make you miserable.
Honest question, as someone who has thought of making an "AI wrapper" app myself - why would I use this rather than go to Gemini/ChatGPT/StableDiffusion/etc and prompting it myself?
Using a wrapper gives you a few benefits.
- It lets you shortcut the time to having a refined prompt that gives you a somewhat reliable output
- Flux (like some models) doesn't have a readily available interface, as the model is usually required to be self-hosted. For TattooPRO I'm using Together.ai as they host Flux, and I can then use their API instead of hosting it myself. The outcome is that users get a nice user interface to generate tattoos with Flux, plus some additional features like history and favorites to keep track of their generations.
I've also tried to make the experience as mobile-friendly as possible.
It's not that there's no benefit at all, it's more like: does it give me enough upside compared to something that is easy and free, doesn't require me trusting an app I've never heard of, taking out my credit card, worrying about getting ripped off, etc.
Hehe, the non-answer you got is spot on what I expected.
Here's the deal: AI tattoo generator, what could possibly go wrong? Liability. That is also why people want to pay, even if the dev cannot or tries not to be held accountable and even if it is for some Electron frontend for a customized prompt. Paying gives them [the (potential) customer] the feeling they get a worthy result. A lot of services work like this, btw, and it helps if the service is actually not cheap. Because why spend very little money on a tattoo design. You're worth it, right?
My take is simple. If you want a tattoo and CBA to do your own research (via a search engine, a professional tattoo artist, some kind of curated database, or gasp CAD it yourself like you'd do your 3D print) then ML-based search could be a viable, modern alternative but I would not want to get burned by '6 fingers' in hindsight. AI output needs to be qualified by a qualified human being, and you [random person who wants tattoo] are probably not said qualified human being. But could it aid a qualified human being? Absolutely, just a smaller customer base. So if you want to go for volume, you pretend to serve a customer base you cannot reasonably serve well.
Eh, I think it's a neat idea. No one's forced to use or buy this - as is the case with any offered service. Also, the 'qualified human being' in the end is still the tattoo artist who's actually doing the tattoo in this use case.
I assume most people who would use this won't just get a 1:1 copy tattoo of an AI-generated result; the artist can still iterate and use the designs as a draft or inspiration.
I think it can make sense when you have some secret-sauce mixed in for whatever the application is. A custom fine-tuned model, text embedding, LoRAs, etc. It's certainly less convincing to me when someone offers just a plain wrapper around free/cheap/easily-accessible models.
But I can see the appeal of making it a bit easier for non-technical people when you add in surrounding features (favorites, history, etc.).
At this point even the fine tuning isn’t a big differentiator. It costs a few bucks to make one in Replicate and you don’t even have to caption the photos because it can use another model to do that (I usually download and improve them for the second run). You just upload a zip file of images and give it a keyword.
There’s an art to fine tuning but plenty of laypeople have done it, it just takes time to experiment and some cash for the cloud providers.
I think your definition of laypeople and my definition of laypeople are different. If I talked to anyone not in my IT department about fine-tuning, their eyes would glaze over in 2 seconds.
These types of services are, in my opinion, targeted at the people who live their entire computer lives in Chrome & Excel. Not people who know what fine-tuning is or can recognize what "Replicate" is without Google.
I don’t mean it’s common knowledge among laypeople, just that someone determined enough to spend a weekend reading image gen documentation and the StableDiffusion subreddit can probably figure it out. It’s not like they need to take a months-long bootcamp to learn to code first. Once they sign up for Replicate (and I guess GitHub for SSO first), all they have to do is find the page for the fine tuning and upload a zipfile of images.
It's not that it has no appeal, it's that I expect it to be a tough sell to get people to actually take out their credit card for this when it's free and good enough to go to chat.com or Gemini. But I may be wrong.
Yeah I have the same issue with these types of projects as well. Could be interesting to map it on a 3D body part or scan so you could see how it looks on your body or next to your other tattoos.
I’d recommend looking at how Photoshop does it. Those asset marketplaces sell template images that contain a layer that maps the user’s image onto the template surface, like for t-shirts and other printing product mockups.
By that logic, no one should build a startup that generates tattoos then?
Speak to average, non-technical users. You'd be surprised how many people have only a very vague idea of what ChatGPT is capable of. They aren't using it every day like you and I. Relating this back to the original comment, expecting them to know about effective prompting techniques, Stable Diffusion, etc. is unrealistic.
One of the reasons OpenAI offers APIs is so you can build startups on top of their tech for average non-technical users.
> Is this like a law of the universe I'm not aware of, that you must be able to create a profitable startup that generates tattoo ideas?
That's not what I said. Your question/argument was why build a wrapper when someone can go to ChatGPT and generate a tattoo. You can make that argument for any startup that wraps AI image models. If everyone followed that argument, there would be no startups in this space.
> A profitable product will make use of APIs to do something that a user couldn't do almost just as well by just prompting ChatGPT themselves
Exactly, average users won't know how to steer ChatGPT to generate high quality tattoos. In fact, OP is not even using an OpenAI model, they are using Flux which has no direct consumer interface (from the creators of Flux) and much higher quality image generation.
It's funny that we went from "ChatGPT is going to unlock AGI and displace millions of workers" to "the only thing that came out of ChatGPT is a million API wrappers that do nothing worthwhile at all" in like two years.
I mean it's basically the same thing as NFT/crypto grifters, just on a different tech stack. It's not about actually solving problems, it's about speculation to them.
Time to make a markov chain as a service startup...
Therapy. Wealth and success are among the most massive crutches there are. They can make it almost impossible to be truly in touch with your insecurities and pain, because it's simply too easy to hide in your victory. Your toughest challenge now is to, despite your wealth, find a way to contact the pain that drove you to your hunger for success. As the Bible said, it's easier for a camel to get through the head of a needle than for a rich man to go to heaven. I interpret that metaphorically.
Therapy = Exactly. He thinks he has freedom and agency but he's just being puppeteered by conflicting subconscious forces he doesn't understand and seems to have no insight into. This is a man who's in a self-driving car turning a steering wheel that's connected to nothing.
"eye of the needle" refers to a small gate or passage in ancient city walls, used after the main gates were closed at night. A camel could only pass through this narrow opening if it was unloaded of its baggage and possibly crawled through on its knees.
Not as hard or impossible as it first appears but still harder.
From what I understand, this is actually highly debated among biblical scholars.
This idea that he meant "it's hard but not impossible" seems to generally be pushed by wealthy religions and "prosperity gospel" types.
Reading everything else Jesus said, I find it more likely that he literally meant the "eye of an actual needle". He did not seem to be a fan of the rich or powerful in any way.
It's not impossible for a rich person to develop spiritually and attain heaven. They just have to give up all their riches. So functionally it is easier for a camel to do this other equivalent nearly-impossible thing.
In Catholicism, the way those verses are interpreted is that it's not that you have to give up all your money, but that you have to give up greed. It basically means that you should not worship your wealth but place your highest of highs towards God; then and only then can you use your wealth towards the Good, as you have no more attachments.
I think Protestants have similar interpretations, but I could be wrong as they have many denominations.
To be fair, needles at the time probably weren't as fine as they are these days, so you may still have a gap a millimeter across instead of a fraction of that.
A common myth! No, no gate or passage was ever referred to as "the eye of the needle" in antiquity. [1] That verse is intended to be taken literally. Jesus Christ was quite outspoken on his feelings about the wealthy, but of course, wealthy Christians need a way for him to have meant something figurative when he told them to surrender their worldly riches.
Jesus literally told his followers to give up their worldly possessions, but… sure. He intended to give a free pass to those who came after, that hinged on a quirk of city planning that would not exist until centuries later.
Oh yeah, I just clicked the link and it's Dan McClellan. Maybe TikTok isn't the best place to cite, but he's highly credible in terms of Biblical scholarship and history.
It's just an unsuitable, attention-zapping, toxic format for sharing information. It would be best to link to an article that can convey the full message, not a 15-second jolt, which is what TT is all about.
Tiktok is particularly dangerous. And not because of the oft-repeated Chinese FUD (some of which may be true), but (my opinion here) it's the "crack" version of cocaine, or the "heroin" version of opium. Everything TT does goes straight to our psychological weaknesses.
Matters of religion (as well as philosophy) simply can't be covered in bite-size 2-minute "shorts".
If you prefer, he does a long form podcast called Data Over Dogma that is skeptical and informative. He uses TikTok because there are loads of people on the platform spewing misinformation about the Bible. He's meeting people where they are with empirical information
It's not true though (and no evidence that such a gate existed with that name).
It's more likely exaggeration referring to actual camel (the large animal of the area) and the eye of a needle (an example of the smallest hole one would be readily familiar with at the time).
If it was referring to a named place, the authors of the New Testament, very capable in both Jewish and Greek contexts, wouldn't have translated it as "τρυπήματος ῥαφίδος" (needle's opening) or "τρυμαλιᾶς ῥαφίδος" (needle's hole), as opposed to something like "narrow gate" or similar that would convey the point to people unfamiliar with Jerusalem.
A rich man won't be attracted to heaven in the first place, for it's a place for people who enjoy giving something to others and rarely think about themselves. Hell, on the other hand, would mesmerize a typical man of ambition for it's a world of selfish might and power.
This is pretty close to ancient eastern christian views on heaven and hell. In that view heaven and hell are the same situation: full exposure to the unattenuated light of god. A righteous & repentant person will experience that as love and mercy, and an unjust person will experience it as fear, shame & torture. But all get the same "treatment" so to speak.
This one was originally used to mock scholars who debated such seemingly obscure minutiae at the expense of more pressing issues, the canonical example being theological debate during the fall of Constantinople. But I remember reading somewhere that this debate was actually for a good reason, since Constantinople was looking for help from fellow Christians against the Ottomans but needed to convince the potential helpers that their beliefs were closely enough aligned to warrant them giving aid. Hoping someone here might know more (and apologies for derailing the thread even further...)
> it's easier for a camel to get through the head of a needle than for a rich man to go to heaven. I interpret that metaphorically.
I agree that there's a parallel between what Jesus meant and your comment—in both cases, wealth is dangerous because it distracts from what's important. To my understanding, Jesus meant that one's heart will be focused on money rather than wanting to follow God. And, like you said, it's really easy to be distracted by material success (money, degrees, fame, etc.). But, none of these things will follow us to the grave. IMO this sort of tunnel vision is really pernicious, because it's so, so easy to fall into.
If you'll allow a personal rant: I recently heard someone say that failure is—somewhat paradoxically—a crucial part of finding happiness, because it loosens our grip on things that are ultimately unimportant. I've been thinking about all this a lot recently myself. Last year I hit a bump in the road w.r.t. my career, due to factors outside of my control. So, for the first time, I was suddenly failing my subconscious goal to climb the ladder of achievement. I started feeling adrift and demotivated, and the obvious solutions (therapy, medication, more regular exercise) didn't help.
It eventually forced me to really sit down and take a hard look at my priorities in life. Speaking concretely, this meant 1) accepting that I might not get what I had wanted out of my career, and because I'm a Christian, 2) focusing instead on how I can serve God every day (love others more, be much more open about my faith, volunteer at church and elsewhere, etc.). That's much easier said than done, of course, but I've just gotta take the baby steps that I can and trust God with the rest.
It's only been a few months since I came to this conclusion, but I feel like it's changed my life. I've become much less stressed, and I feel much more fulfilled. Honestly, it's like I have hope again in my future.
Naïvely I want to say something like "therefore, everyone should try to find whatever brings them this fulfillment." But this might be too weak of a statement, because I really think there's only one true answer to this question.
P.S. As for the verse you quoted (Matthew 19:24), I'd be remiss not to point out what Jesus says a few verses later: "With man this is impossible, but with God all things are possible." :-)
I'm not Christian but I definitely resonate with what you're saying about failure sometimes being a gift, if you can make use of it.
I wish I could be more religious, in a sense, but I just can't get my head around the concept of "serving" or "fearing" god. It's not how I relate to "the divine" at all. Power to you, though.
Nah, it's death. People objectively are doing better than ever despite wealth inequality. By all metrics - poverty, quality of life, homelessness, wealth, purchasing power.
I'd rather just... not die. Not unless I want to. Same for my loved ones. That's far more important than "wealth inequality."
You don't mind living in a country with a population of billions [sic], piled on top of one another? You don't mind living in a country ruled by gerontocracy and probably autocracy, because that's what you'll eventually get without death to flush them out.
"You/your loved ones should die because Elon would die too" is a terrible argument. It's not great, but it's not worth dying over. New rich bad people would take his place anyways.
"You should die because cities will get crowded" is a less terrible argument but still a bad one. We have room for at least double our population on this planet, couples choosing longevity can be required to have <=1 children until there is room for more, we will eventually colonize other planets, etc.
All this is implying that consciousness will continue to take up a meaningful amount of physical space. Not dying in the long term implies gradual replacement and transfer to a virtual medium at some point.
> People objectively are doing better than ever despite wealth inequality. By all metrics - poverty, quality of life, homelessness, wealth, purchasing power.
If you take this as an axiom, it will always be true ;).
There's a few different cases here, but the general principle is that, for many people, words carry emotional or semantic connotations, and we want to use different words to manage these connotations.
1) For the "sexual assault vs. rape" case - Some words may be more unpleasant for many people to hear. Many people may find the word "rape" to evoke more unpleasant emotions than the more muted "sexual assault," so may use the latter in situations where we prefer not to elicit a strong emotional reaction, such as in a professional or therapeutic environment.
2) For the other cases - many people find certain terms more dehumanizing than others. With the homeless example, many people feel that "homeless" has become not just a descriptive term, but a term that carries connotations such as "crazy," "disgusting," "failed," and maybe above-all, "other" - outside the realm of people we consider "us". In contrast, the term "unhoused" carries more of a connotation of a temporary state a person is in, rather than an inherent label or trait of that person.
There is certainly a treadmill phenomenon to this - in a few years, if "unhoused" catches on, and if homeless people are still dehumanized, it's likely that the word will take on a dehumanizing connotation and many will seek to replace it.
> What do we achieve by focusing on words
All that said, while I think language is important, I do agree with you that there is an excessive focus on language and an inadequate focus on actual solutions. I would say that the current economic system and power structures make solving these problems very very hard, and as a result many people focus on policing and adjusting language because it is something that they feel they can actually change.
If I hear "homeless" I hear "home" + "no". If I hear "unhoused" I hear "no" + "house". To me they are the same. It's also why, when someone says "the X word" and I have to hear it in my head, I don't get their point - for all practical results you still said it, as I now have it in my mind. BTW I'm not advocating for saying the word; just don't say it at all. Language is tough, man - maybe it's because I'm not a native speaker.
Your POV is valid here - that's why I tried to say "for many people," because I know not everyone would have the same reaction. I do think that a native speaker would pick up more on these connotations.
I don't need to logically disprove theories like this, because I experience myself in a way that you can't argue against. Trying to convince me that my experience of myself is an illusion is, in my view, a horrible case of trying to fit reality to your model. Yet it's one that a surprising number of scientifically minded people enjoy doing, for some reason. Beats me.
Here's a different way of thinking about it. There are two things that are both very plausible: One - based on your direct personal experience - is that you are a non-boltzmann-brain-human living a normal life on earth. Two - based on well-accepted science - is that it is MUCH more likely for you to be a boltzmann brain than not.
Great; these two things are seemingly inconsistent. Which means one must be false. But if either of these is false, it's surprising! Because one is based on our direct experience of ourselves, as you have pointed out, and the other is based on well-established science. So what's interesting about Boltzmann brain (and similar) is that it shows that one part of our body of knowledge must be false. And this ought to motivate us to investigate exactly what it is that we have wrong.
None of these are even the interesting points. Arguing about whether you or I are a BB is meaningless.
What I want to know is this: will BBs exist in the immeasurably far future? If they will exist, how fast could they possibly think, how long could they last, and what is the limit on their intelligence? Could they comprehend their own existence from first principles? In the short instants that they exist, would they realize how short their life expectancy is?
>Two - based on well-accepted science - is that it is MUCH more likely for you to be a boltzmann brain than not.
We're not each individual BBs (and you're not a lonely BB imagining the rest of us). It's closer to the truth that our entire universe is one big BB that just blipped into existence one moment billions of years ago. If we accept the concept of a Boltzmann Brain at all, then it must be that some configurations exist where parts of the brain are disconnected from each other and each spawns an intelligence... or even just unintelligent matter/machinery. Scale that up to a few billion light years wide, and that's us.
If a BB could exist, it could also represent a type of intelligence that is so foreign to our experience that we wouldn't even recognize it as such, even if it could last long enough for us to encounter and study it.
Very likely BBs can't exist in our current high-stability regime, and only in the post-matter universe, where vacuum-decay-style events occur more often, would they manifest. I think they're incredibly far future only. As for the type of intelligence, it seems probable that they'd be completely alien to us, yes, as there must be modes of intelligence other than the kind evolved by social monkeys. Don't expect any of them to be friendly (though what sort of violence they could hope to commit is beyond me).
I don't think anyone wants to convince you that you're a Boltzmann brain. This is more a thought experiment. The fun is to try to explain on logical grounds why this isn't a viable option.
Not sure why you're so hostile toward this. No one is saying that you're a BB; it's a thought experiment on the nature of reality.
And if you were a BB, you probably wouldn't know it, and how you experience yourself is irrelevant. That's kinda the point. The problem with the BB thought experiment is that it isn't falsifiable, at least not with techniques we have now.
It's impossible to logically disprove a theory like this, because the entirety of your thought process would itself be a random fluctuation, never mind the inputs on which it is based.
The problem is that if you start with the reasonable assumption that you objectively exist and that your observations are valid, the model of the universe derived from those observations (or at least some otherwise viable models) includes prevalence of Boltzmann brains.
The truth is probably somewhere in between "social media/technology is the cause of all problems" and "social media/technology causes absolutely zero problems that wouldn't be caused anyway"
For sure, there are problems ascribed to all of these things consumed [ignorantly | irresponsibly | in excess]. Still, I've lived long enough to see many of these featured in hysterical "for the children" propaganda, and I find myself recoiling from that, maybe more than most. It's easy to see that AI (LLMs) are next on the list to be vilified, which seems absurd to me.
The standard definition of the derivative at c involves the assumption that f is defined at c.
However, you could also (probably) define the derivative as lim_{h->0} (f(c+h) - f(c-h))/(2h), so without needing f(c) to be defined. But that's not standard.
> However, you could also (probably) define the derivative as lim_{h->0} (f(c+h) - f(c-h))/(2h), so without needing f(c) to be defined. But that's not standard.
Although this gives the right answer whenever f is differentiable at c, it can wrongly think that a function is differentiable when it isn't, as for the absolute-value function at c = 0.
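A quick numerical sketch of that counterexample (plain Python; the function names are mine, not from the thread): for f(x) = |x| at c = 0, the symmetric difference quotient is (|h| - |-h|)/(2h) = 0 for every h != 0, so its limit exists and equals 0, even though the one-sided difference quotients approach +1 and -1 and the ordinary derivative does not exist.

```python
def symmetric_quotient(f, c, h):
    """Symmetric difference quotient: (f(c+h) - f(c-h)) / (2h)."""
    return (f(c + h) - f(c - h)) / (2 * h)

def one_sided_quotient(f, c, h):
    """Ordinary difference quotient: (f(c+h) - f(c)) / h."""
    return (f(c + h) - f(c)) / h

# For f(x) = |x| at c = 0: |h| == |-h|, so the symmetric quotient
# is exactly 0 for every nonzero h -- the "symmetric derivative" is 0.
for h in [0.1, 0.01, 0.001]:
    print(symmetric_quotient(abs, 0.0, h))  # 0.0 each time

# But the ordinary quotient disagrees from the two sides,
# so |x| is not differentiable at 0 in the standard sense.
print(one_sided_quotient(abs, 0.0, 0.001))   # 1.0 (from the right)
print(one_sided_quotient(abs, 0.0, -0.001))  # -1.0 (from the left)
```

For a function that genuinely is differentiable, the two definitions agree; the symmetric version just fails to detect the corner in |x|.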