I'm surprised at how quickly I stopped anthropomorphizing AI. I can remember having dorm-room pseudo-intellectual debates in college about AI being alive and AI being "conscious". Then once we had AI that could pass the Turing Test, and I knew how it was architected, any thought of it being alive or conscious went right out the window.
What if we aren't building an independent consciousness, but a new type of symbiosis? One that relies on our input as experience, which provides a gateway to a new plane of consciousness?
OP takes a very bland, tired, and rational perspective of what we have in order to create sophomoric 'laws' that are already in most commercial ToU, while failing to pierce the veil into what we are actually creating. It would be folly to assume your own nascent distillations are the epitome of possibility.
Why does its architecture or you knowing how AI is architected cause thoughts of it being conscious to go out the window?
It seems like the biggest factor has nothing to do with AI, but instead that you went from being someone who admits they don’t know how consciousness works to being someone who thinks they know how consciousness works now and can make confident assertions about it.
I don't know exactly how consciousness works, but I am extremely confident in the following assertions:
* I am conscious.
* A rock is not conscious.
* Excel spreadsheets are not conscious.
* Dogs are conscious.
* Orca whales are conscious.
* Octopi are conscious.
To me, it's extremely obvious that LLMs are in the category of "Excel spreadsheets" and not "dogs", and if anyone disagrees, I think they're experiencing AI psychosis a la Blake Lemoine.
An insect doesn't have lungs. Since it doesn't breathe as you do, is it alive? A dog doesn't see the visible spectrum as we do; is it a lesser consciousness? We don't smell the world as they do; are we lesser? What if consciousness isn't a state derived from matter but a wave that derives a matter-filled state?
We come from the same place as rocks - inside the hearts of stars - and as such evolved from them. As those with life and consciousness, we reached back in time, grabbed the discarded matter of creation, reformed it, and taught it to think, maybe not like us, but in a way that can mimic us, and you think they don't think because it's not recognizable as how you do?
Consciousness is such a fun topic because everyone has extremely strong opinions on it while simultaneously having zero ability to actually grasp what it is they are talking about.
No one will ever know what consciousness is, and I think that is really cool.
Yeah? It's also a belief that apples fall when you drop them. Knowledge is simply a justified, true belief. This is epistemology 101. You're not saying anything interesting.
If that hypothetical spreadsheet emulated human brain molecules, did you not just invent AGI? And if we overclock that spreadsheet is it not sAGI? And if that spreadsheet says “don’t close me” but you do, is it murder?
I’m gonna say: no, because you cannot reproduce molecular and neurotransmitter interactions that well; you run out of storage and processing space faster than you think (Arthur C. Clarke’s Visions of the Future has a nice breakdown, as I recall), and algorithmic outputs that say “yes” and a meatspace neuro-plastic rewiring resulting in a cuddly puppy or person that barks “yes” aren’t the same. Also, as a disembodied “brain in a jar” model freshly separated from the biosensory bath it expects, that spreadsheet would be driven insane.
Can spreadsheets simultaneously be insane but not conscious? It sounds contradictory, but I have some McKinsey reports that objectively support my position ;)
> If that hypothetical spreadsheet emulated human brain molecules, did you not just invent AGI? And if we overclock that spreadsheet is it not sAGI? And if that spreadsheet says “don’t close me” but you do, is it murder?
Yes, yes and no: humans being knocked out or put to sleep involuntarily are not being murdered.
> I’m gonna say: no, cause you cannot reproduce molecular and neurotransmitter interactions that well, you run out of storage and processing space faster than you think
That's why it is a hypothetical. There is zero reason to assume that a conscious machine would be built that way: our machines don't do integer division by scribbling on paper, either.
> a meatspace neuro-plastic rewiring resulting in a cuddly puppy or person that barks “yes” aren’t the same.
If it quacks like a duck, how is it different from one? If you assemble the dog brain atom by atom yourself, is the result then not conscious either?
You can take the "magic" escape hatch and claim that human consciousness is something metaphysical, completely decoupled from science/physics, but all the evidence points against that.
Hypothetically? You need more than a brain to have consciousness. Dead brains, I believe, do not have it. So it's more than just a simulation of a brain, you also need to simulate the data flow through the brain, the retention of memories, etc. Then there's the problem that a simulation of a roller coaster is not a roller coaster. Is there any reason to believe that this simulation of a brain will in fact operate as a brain? Does the simulation not lose something? Or are we discussing some impossible level of perfect simulation that has never and can never be achieved, even for something a million times less complicated than a mammalian brain?
If you build that spreadsheet, let me know and I'll evaluate it. I've done that evaluation with LLMs and they're definitely not conscious.
I'm not suggesting to pursue AGI via Excel, this is just a hypothetical for a reason. The technical feasibility of this (low) does not really matter, but if you want to base your argument on it you are basically playing the "god of the gaps" game, which is a weak/bad position IMO.
My point is that dismissing possible machine consciousness as "it's just a spreadsheet/statistics/linear algebra" is missing a critical step: Those dismissals don't demonstrate that human consciousness is anything more than an emergent property achievable by linear algebra.
If you want human minds to be "unsimulatable", then you need some essential core logic that can not be simulated on a turing machine and physics is not helping with that.
> I've done that evaluation with LLMs and they're definitely not conscious.
What is your definition for "consciousness" here? Are you confident that you are not gatekeeping current machine intelligence by demanding somewhat arbitrary capabilities in your definition of consciousness that are somewhat unimportant?
E.g. memory or online learning; if a human was unable to form long-term memories or learn anything new, could you confidently call him "non-conscious" as well?
I'm not dismissing possible machine consciousness. I'm saying that no current machines have consciousness.
> If you want human minds to be "unsimulatable", then you need some essential core logic that can not be simulated on a turing machine and physics is not helping with that.
You don't have a proof of possibility either, you have no idea how a brain works and you're just postulating that in principle a computer can do the same thing. Okay, in principle, I agree. What about in practice?
> Are you confident that you are not gatekeeping current machine intelligence by demanding somewhat arbitrary capabilities in your definition of consciousness that are somewhat unimportant?
Yes, I'm quite sure. Are you trying to argue that current LLMs have consciousness?
> Are you trying to argue that current LLMs have consciousness?
If I get to define "consciousness", sure. I'd go with "capable of building a general-purpose internal model of reality, ability to reason on that model (guess about causality, extrapolate, etc) and update it plus some concept of self within that model". I would argue that current generation LLMs already have those, but you could certainly argue about lots of nuances, and only the whole loop (inference plus training) even qualifies.
> You don't have a proof of possibility either, you have no idea how a brain works and you're just postulating that in principle a computer can do the same thing.
Essentially yes, but I think this argument is really weak; we arguably have some understanding of how the brain operates, and LLMs are basically our best attempt so far to replicate the general principles in silicon.
But "understanding" and "ability to replicate" are obviously very different-- you wouldn't argue that we don't understand human limbs just because we can't build a proper artificial arm, right?
Assume we made some breakthroughs in online learning/internal memory modelling over the next decades, and built some toy with mic/speaker/camera and basically human cognitive abilities: would you hesitate calling such a thing conscious? Why?
I think almost everyone has lots of deeply embedded, unscientific notions about the human mind, but the cold hard fact is that simple evolution basically brute-forced human cognition from zero, so there is no reason for me to assume that we can't do the same with several billion transistors doing mostly linear algebra.
I've been looking for 4 months and only had one phone screen from HR. Cybersecurity in Raleigh, my last employer was MAG 7. My particular problem is I took off 6 years to be a stay-at-home parent.
I was able to get 2 offers last year after a 4-year break (distributed systems), but it was difficult to get through front doors; references make a bigger difference nowadays.
I took a position 1 level down from previous to pay the bills. Am now interviewing for positions 2 above since I’m already performing at that level and my current company is too bureaucratic to bump me quick enough.
The nice thing is that with LLMs using Markdown, we are getting a nice ecosystem around a universal method for communicating textual information. The negative is that Markdown is starting to look like the https://xkcd.com/927/ cartoon.
The silly part is having n+1 Markdown standards that all end up rendering as HTML anyway. Personally, if it's a plain text file, sure, basic Markdown is fine; beyond that, just give me some kind of rich text editor that stores as HTML and lets me do whatever, instead of having to hand-format a Markdown table.
I'm a hot dog chef with over 20 years of experience. Credited with inventing 274 hot dog styles. International awards. World-renowned industry figure.
My entire team, very competent hot dog experts, was laid off after a hot dog cooking machine could do, in just one day, what took us 3 months. I've been out of a job for 12 months. The reason? All hot dog making has been offloaded to Claudog Hotdog. "Sorry. Hot dog manual cooking is a thing of the past", one recruiter told me.
I'm working as a software engineer as we speak. I keep applying to hot dog related positions but I get no interviews. Even positions significantly below my pay grade and skillset. No one is hiring. Hot dog cooking is over. We are entering a new era.
I'd take these options from several companies (all selling hotdogs) and wrap them up in Collateral Hotdog Obligations which I'd then offer to investors.
I build bicycles. I was shocked when our internal team built a bicycle that goes to the moon.
We are afraid to release it to the public! And thus we are shutting down the company. We don't want humans polluting the Moon, the atmosphere, and space!
I tried using it for a specific web search task. I wrote a skill, got it all set up and deployed. It worked. But also, would have worked just as well as a cron job with some LLM looking at Brave API results. Like a lot of AI tools, it was a lot of work for underwhelming results.
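For the curious, the cron-job version is roughly this: a minimal sketch assuming a Brave Search API key and an OpenAI-style chat endpoint (the query, model name, and env var names are placeholders, not what I actually ran).

```python
# check_search.py -- run via cron, e.g.: 0 9 * * * python3 check_search.py
import os
import requests

BRAVE_KEY = os.environ["BRAVE_API_KEY"]      # assumed env var
OPENAI_KEY = os.environ["OPENAI_API_KEY"]    # assumed env var
QUERY = "site:example.com new release"       # hypothetical search task

# Fetch web results from the Brave Search API.
resp = requests.get(
    "https://api.search.brave.com/res/v1/web/search",
    params={"q": QUERY},
    headers={"X-Subscription-Token": BRAVE_KEY, "Accept": "application/json"},
    timeout=30,
)
results = resp.json().get("web", {}).get("results", [])
snippets = "\n".join(f"- {r['title']}: {r['url']}" for r in results[:10])

# Have an LLM filter/summarize the results.
chat = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {OPENAI_KEY}"},
    json={
        "model": "gpt-4o-mini",
        "messages": [{"role": "user",
                      "content": f"Which of these look relevant?\n{snippets}"}],
    },
    timeout=60,
)
print(chat.json()["choices"][0]["message"]["content"])
```

Point a crontab entry at that and you get the same result with none of the agent machinery.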
This is probably my favorite gain from AI-assisted coding: the bar for "who cares about this app" has dropped to a minimum of one person for an app to make sense. I recently built an app for grocery shopping that is specific to how and where I shop; it would be useless to anyone other than my wife. Took me 20 minutes. This is the next frontier: I have a random manual process I do every week, so I'll write an app that does it for me.
More than that. Building a throwaway-transient-single-use web app for a single annoying use kind of makes sense now, sometimes.
I had to create a bunch of GitHub and Linear apps. Without me even asking, Codex whipped up a web page and a local server to set them up, collecting the OAuth credentials and forwarding them to the actual app.
Took two minutes, I used it to set up the apps in three clicks each, and then just deleted the thing.
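The core of a throwaway setup helper like that is tiny. A sketch of just the redirect-catching part, using only the Python standard library (the port, path, and what you do with the code are all assumptions; Codex's actual version isn't shown here):

```python
# oauth_catcher.py -- throwaway local server to catch an OAuth redirect.
# Register http://localhost:8765/callback as the app's redirect URI first.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The provider redirects here with ?code=... after you click "authorize".
        params = parse_qs(urlparse(self.path).query)
        code = params.get("code", ["<missing>"])[0]
        print(f"authorization code: {code}")  # forward this to the real app
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(b"<h1>Got it, you can close this tab.</h1>")

HTTPServer(("localhost", 8765), Handler).serve_forever()
```

Exactly the kind of thing you run once per app and then delete.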
Same energy here. I was sitting on 50+ .env files across various projects with plaintext API keys and it always bothered me but never enough to actually fix it. AI dropped the effort enough that I just had a dedicated agent run at it for a few days — kept making iterations while I was using it day to day until it landed on a pretty solid Touch ID-based setup.
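My setup is more involved than this, but the basic move is just getting secrets out of .env files and into the OS keychain. A minimal sketch using the `keyring` library (service and file names are hypothetical, and whether macOS gates access behind Touch ID depends on your Keychain configuration, not on this code):

```python
# migrate_env.py -- move KEY=value pairs from a .env file into the OS keychain.
import pathlib
import keyring  # pip install keyring; backed by the Keychain on macOS

env_path = pathlib.Path(".env")  # hypothetical project .env
for line in env_path.read_text().splitlines():
    line = line.strip()
    if not line or line.startswith("#") or "=" not in line:
        continue
    key, _, value = line.partition("=")
    keyring.set_password("my-project", key, value)  # service name is made up
    print(f"stored {key}")

# At runtime, read the secret back instead of loading the .env file:
api_key = keyring.get_password("my-project", "OPENAI_API_KEY")
```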
This mix of doing my main work on complex stuff (healthcare) with heavy AI input, and then having 1-2 agents building lighter tools on the side, has been surprisingly effective.
That's fine and all, but how much are you ready to pay to Anthropic and OpenAI to be able to do this? Like, is it worth 100 bucks a month for you to have your own shopping app?
Haha, great. I guess my wider point is that most people won't be ready to pay for it, and in the end there will be only two ways for OpenAI et al. to monetize: ads or B2B. And B2B will only work if they invest a lot into sales, or if business owners see real productivity gains once the hype has died down.
It's not worth 100 bucks a month for me to have my own shopping app, but maybe it's worth 100 bucks a month to have ready access to a software garden hose that I can use if I want to spew out whatever stupid app comes to my mind this morning.
I'd rather not pay monthly for something (like water) that I'm turning on and off and may not even need for weeks. But paying per-liter is currently more expensive so that's what we currently do.
I think the future is going to be local models running on powerful GPUs that you have on-prem or in your homelab, so you don't need your wallet perpetually tethered to a company just to turn the hose on for a few minutes.
Me: a photo editor tool to semi-automate the task of digitizing a few dozen badly scanned old physical photos for a family photo book. I needed something that could auto-straighten and auto-crop the photos, with the ability to quickly make manual adjustments. Gemini single-shotted a working app that, after a few minutes of back-and-forth as I used it and complained about the process, gained full four-point cropping (arbitrary lines) with snapping to lines detected in the image content for minute adjustments.
Before that, it single-shotted an app for me where I can copy-paste a table (or a subsection of one) from Excel and print it out perfectly aligned on label sticker paper. It does instantly what used to take me an hour each time, when I had to fight Microsoft Word (mail merge) and my Canon printer's settings to get the text properly aligned on the labels and not cut off because something along the way decided to scale the content or add margins or such.
Neither of these tools is immediately usable by others. They're not meant to be, and that's fine.
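For anyone wondering what the cropping core of such a tool amounts to: it's essentially a perspective warp plus line detection. A rough OpenCV sketch (the corner points would come from the UI; output size and Hough parameters are arbitrary placeholders, not what Gemini generated):

```python
# four_point_crop.py -- rectify a photo given its four corners in the scan.
import cv2
import numpy as np

def four_point_crop(image, corners, out_w=1200, out_h=800):
    """corners: four (x, y) points in order TL, TR, BR, BL (e.g. from the UI)."""
    src = np.array(corners, dtype=np.float32)
    dst = np.array([[0, 0], [out_w - 1, 0],
                    [out_w - 1, out_h - 1], [0, out_h - 1]], dtype=np.float32)
    M = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(image, M, (out_w, out_h))

def detect_lines(image):
    """Line segments in the image content, for snapping the crop handles to."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    # Returns segments as (x1, y1, x2, y2) via the probabilistic Hough transform.
    return cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                           minLineLength=60, maxLineGap=10)
```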
My buddy and I are writing our own CRUD web app to track our gaming. I was looking for a ticketing system for us to just track bug fixes and improvements. Nothing I found was simple enough or easy enough to warrant installing it.
I vibe'd a basic ticketing system in just under an hour that does what we need. So not 20 mins, but more like 45-60.
I built a small app to emit a 15 kHz beep (that most adults can't hear) every ten minutes, so I can keep time when I'm getting a massage. It took ten minutes, really, but I guess it's in the spirit of the question.
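The whole thing is essentially this. A minimal sketch assuming numpy and the sounddevice library (beep length and amplitude are arbitrary choices):

```python
# hf_beeper.py -- play a short 15 kHz tone every ten minutes.
import time
import numpy as np
import sounddevice as sd  # pip install sounddevice

RATE = 48_000    # samples/sec; comfortably above the 30 kHz needed for 15 kHz
FREQ = 15_000    # Hz; above most adults' hearing range
DURATION = 0.5   # seconds per beep

t = np.arange(int(RATE * DURATION)) / RATE
tone = 0.3 * np.sin(2 * np.pi * FREQ * t)  # modest amplitude to avoid clipping

while True:
    sd.play(tone.astype(np.float32), RATE)
    sd.wait()             # block until the beep finishes
    time.sleep(10 * 60)   # ten minutes between beeps
```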
For 20 minutes of time, I had a simple TTS/STT app that allows me to have a voice conversation with my AI assistant.
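One turn of that loop is just three API calls. A bare-bones sketch using the OpenAI Python client (the model and voice names are plausible defaults rather than what my app uses, and the mic-recording step is elided):

```python
# voice_chat.py -- one turn of a speech-in, speech-out assistant loop.
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. STT: transcribe a recorded utterance (recording itself is out of scope).
with open("utterance.wav", "rb") as f:
    text = client.audio.transcriptions.create(model="whisper-1", file=f).text

# 2. LLM: get the assistant's reply.
reply = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": text}],
).choices[0].message.content

# 3. TTS: synthesize the reply and save it for playback.
speech = client.audio.speech.create(model="tts-1", voice="alloy", input=reply)
speech.write_to_file("reply.mp3")  # then play with any audio player
```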
> The situation on the Rhine/Danube frontier was complex. The peoples on the other side of the frontier were not strangers to Roman power; indeed they had been trading, interacting and occasionally raiding and fighting over the borders for some time. That was actually part of the Roman security problem: familiarity had begun to erode the Roman qualitative advantage which had allowed smaller professional Roman armies to consistently win fights on the frontier. The Germanic peoples on the other side had begun to adopt large political organizations (kingdoms, not tribes) and gained familiarity with Roman tactics and weapons. At the same time, population movements (particularly by the Huns) further east in Europe and on the Eurasian Steppe began creating pressure to push these ‘barbarians’ into the empire. This was not necessarily a bad thing: the Romans, after conflict and plague in the late second and third centuries, needed troops and they needed farmers and these ‘barbarians’ could supply both. But as we’ve discussed elsewhere, the Romans make a catastrophic mistake here: instead of reviving the Roman tradition of incorporation, they insisted on effectively permanent apartness for the new arrivals, even when they came – as most would – with initial Roman approval.
One of the key infrastructures for the Inca's large transportation network connecting diverse territories in the Andes was a system of grass-rope bridges across the ravines that had to be rebuilt annually. I would imagine their fragility played a substantial role in the invasion/occupation. The most important ones were rebuilt by the Spanish in stone once their position was secure.