Ok, I think this has moved on from the original point. I quoted you saying: "As I see it, there is clearly no way to advance ChatGPT into anything more than it is today. Impressive as it is, the curtain is wide open for all to see, and the art can be viewed plainly as what it truly is: magic, and nothing more."
I was disagreeing with this, and showing ways it can advance. Your reply focuses on the fact that it will still get things wrong. However, I never claimed that it wouldn't make mistakes. I agree it will, and frequently. I just wanted to show that it can do more, and valuably and recognisably more.
The next step up for ChatGPT isn't perfection of mathematics and logic. Between the examples we've been playing with, experimentation with prompts, further training, and combination with other systems, I don't see why better couldn't be done, and I gave one such example. Sometimes the output it gives is what we're after, sometimes it isn't. Sometimes we'd be able to feed its own output back to itself and ask it to evaluate whether its previous output is what we're after. E.g., I recall hearing about someone feeding the output to the Rust compiler, then feeding the error messages back, in an iterative loop, until they got something that at least compiles (a sketch of the shape of that loop follows below). I don't know the best techniques yet, but I have no doubt that a variety of techniques, combined with other systems, could improve the output of ChatGPT significantly.
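Here's a minimal sketch of what I mean, assuming a hypothetical `ask_llm` function standing in for whichever model API you're calling (`rustc` is real; the rest of the names are mine, purely for illustration):

```python
import os
import subprocess
import tempfile
from typing import Optional


def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for a call to the model; returns Rust source."""
    raise NotImplementedError  # wire up whatever model you're using here


def compile_rust(source: str) -> Optional[str]:
    """Try to compile with rustc; return None on success, else the errors."""
    with tempfile.TemporaryDirectory() as tmp:
        path = os.path.join(tmp, "main.rs")
        with open(path, "w") as f:
            f.write(source)
        result = subprocess.run(
            ["rustc", path, "--out-dir", tmp],
            capture_output=True,
            text=True,
        )
        return None if result.returncode == 0 else result.stderr


def refine_until_it_compiles(task: str, max_rounds: int = 5) -> Optional[str]:
    source = ask_llm(task)
    for _ in range(max_rounds):
        errors = compile_rust(source)
        if errors is None:
            return source  # it compiles, which isn't the same as correct!
        # Feed the compiler's complaints back in and ask for a fix.
        source = ask_llm(
            "This Rust code fails to compile:\n\n" + source
            + "\n\nCompiler output:\n" + errors
            + "\n\nPlease return a corrected version of the code."
        )
    return None  # gave up after max_rounds attempts
```

The compiler is only a crude filter, of course, but it's exactly the kind of "other system" I have in mind.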
Regarding fair prompts, here's another example of a prompt and its reply, this time trying a more generic prompt:
Provide a list of mathematical questions contained in the following text, phrased as equations: "Bob was asked to add 234241.24211 and 58352342.52544, and he wanted to know the result. What is the result? An answer was given by Suzie, who had recently had to count how many new potatoes she had in her pile, after putting the seven new ones in the pile of 15 she already had."
1. 234241.24211 + 58352342.52544 = ___________
2. 15 + 7 = ___________
Or maybe we can ask it to do something like a school assignment:
Assuming the following text is given to students, create a list of mathematical equations that could serve as questions based on this text: "Bob was asked to add 234241.24211 and 58352342.52544, and he wanted to know the result. What is the result? An answer was given by Suzie, who had recently had to count how many new potatoes she had in her pile, after putting the seven new ones in the pile of 15 she already had."
1. 234241.24211 + 58352342.52544 = __?
2. 7 + 15 = __?
It doesn't matter if it fails sometimes; the point is that it can do a lot better.
You might be right in your claims about conceptualising, but you seem to be focusing on perfection as the next step. Perfection isn't the next step in this journey, because there's a lot more value to be added before trying to hit anything remotely close to it (if we even can!). (Here I'm referring to your comment: "I don't see any way to seed such a system to always produce logically correct results.")
Side note: I think that humans (and probably many other animals) are conscious and have a non-physical mind, while computers never will, or at least won't merely by virtue of us improving the physical characteristics of these systems. That being said, I think there's a lot about the way human brains work that can be mimicked by machines, and I think that human brains do an awful lot of guessing in our day-to-day movement through the world. That doesn't strike me as different from ChatGPT. It guesses a lot of things, and so do we.
That's my whole point. It can't. There is no room for the other system. Any other system will always be either too late in the process to work, or a complete replacement.
> Sometimes, we'd be able to feed its own input back to itself and ask it to evaluate if the previous output it gave is what we're after
And that doesn't accomplish what "evaluate" actually means. It only does the same thing it always does (find something semantically close to "evaluate the thing"), but with a new prompt. If you are lucky, the new prompt will produce output you like, but only if you are lucky. If we keep repeating this process, we are effectively guessing what password will match a hash. That's called "brute forcing", and the fact that it doesn't work is why your passwords are secure.
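To put that analogy in code, here's a toy sketch (the hash function and lengths are just for illustration):

```python
import hashlib
import itertools
import string
from typing import Optional


def brute_force(target_hash: str, max_len: int = 7) -> Optional[str]:
    """Guess blindly until something matches the target hash."""
    for length in range(1, max_len + 1):
        # Even restricted to lowercase letters, length 7 alone gives
        # 26**7 (about 8 billion) candidates, and each failed guess
        # tells you nothing about how close you were.
        for chars in itertools.product(string.ascii_lowercase, repeat=length):
            guess = "".join(chars)
            if hashlib.sha256(guess.encode()).hexdigest() == target_hash:
                return guess
    return None
```

Re-prompting in a loop has the same structure: the check at the end only ever says "match" or "no match", and nothing about it makes the next guess any smarter.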
> Doesn't matter if it fails sometimes
It fails every time. Sometimes it "fails up". Stumbling into success is still stumbling.
If I write a program that outputs random noise into a filter that hides all but the output I want, then eventually I will see that output. But crucially, the input is still random noise. What is the value of ChatGPT if it's no more than curated randomness?
My point is that there is a very clear difference between impressive and functional. ChatGPT is impressive, not functional. You can't make it functional. You can only make it better at impressing you.