'It is said that to explain is to explain away. This maxim is nowhere so well fulfilled as in the area of computer programming, especially in what is called heuristic programming and artificial intelligence. For in those realms machines are made to behave in wondrous ways, often sufficient to dazzle even the most experienced observer. But once a particular program is unmasked, once its inner workings are explained in language sufficiently plain to induce understanding, its magic crumbles away; it stands revealed as a mere collection of procedures, each quite comprehensible. The observer says to himself, "I could have written that." With that thought he moves the program in question from the shelf marked "intelligent" to that reserved for curios, fit to be discussed only with people less enlightened than he.'
The phrase "to explain is to explain away" is Shakespearean in its precision.
But such regret at a loss of magic (or, put another way, a loss of ignorance) is IMO not a good sign: it suggests a person wants to be somehow deceived, and I don't think that's healthy. A bit harsh perhaps, but just my view.
It's the idea that "AI is anything that has not been done yet". Or as I like to say: "AI is any algorithm you haven't understood yet."
So you can go:
* "That's not AI, that's just a regex over a string!"
* "That's not AI, that's just a lookup over a dictionary!"
* "That's not AI, that's just a series of if statements!"
* "That's not AI, that's just a search for keywords in text!"
* "That's not AI, that's just an optimized brute force over a large search space!"
* "That's not AI, that's just a linear regression!"
* "That's not AI, that's just a neural network!"
* "That's not AI, that's just Bayesian Statistics!"
Say, for example, I go on a quest to create "AI" from scratch and start by inventing string interning to keep track of symbols. It would be a pretty big deal for me, but it would absolutely not be AI, which was an ill-defined goal from the start.
String interning, though, will be useful in a lot of disciplines, and a good marketing department will start calling it AI to get more moolah out of it.
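For anyone unfamiliar with the technique being discussed: string interning just means keeping one canonical copy of each distinct string, so that symbol equality becomes a cheap identity check instead of a character-by-character comparison. A minimal sketch (the `intern` function and its table are illustrative, not any particular implementation):

```python
# A minimal string-interning sketch: map each distinct string to a
# single canonical object, stored on first sight.
_intern_table = {}

def intern(s: str) -> str:
    """Return the canonical copy of s, storing it if unseen."""
    return _intern_table.setdefault(s, s)

# Two strings built independently at runtime...
a = intern("".join(["lamb", "da"]))
b = intern("".join(["la", "mbda"]))
assert a is b  # ...resolve to the very same object after interning
```

Python itself ships this as `sys.intern`; Lisp symbols, Java's string pool, and compiler symbol tables are the same idea. Useful everywhere, and no one would call it AI.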
This is exactly what happened in the 80s, and it's practically what is happening today. "AI" is a great motivator for calling any of your projects a success, because it encompasses everything. Programming Languages, GUIs, Networking and whatnot have all come out of "AI" research.
In my book, AI means just one thing: a general-purpose machine that can do anything a human can, or more. Not chess, not Starcraft, not spying, but everything. People have started calling this hypothesis strong AI, but I think AI will do. This should be the final goal. Anything before that, Programming Languages, Deep Learning, Networking, Hardware Design, should be called by its own name and judged on its own merit.
A dog is intelligent, as is a pigeon. Even bees and some mollusks, like the cuttlefish, are intelligent. They can't think in all the ways a human can but at what they do, they are competent, even clever.
I feel the same is true for machine intelligences. It takes intelligence to learn Go or Chess, but it also takes intelligence to play, or at least that is what we say of humans. When a human is thinking about Starcraft, we consider only how good their thought patterns are for that game. We do not look at their vision, walking, social skills or whatever. The same should apply to a Chess or Go AI while it is playing Chess or Go. One can complain that all it knows how to do is play Chess, but judging it on anything else is unfair.
There is nothing wrong with those original ambitions, and it is to nobody's discredit that progress has been slower than originally hoped (the same could be said in many other fields, from space travel to curing cancer.)
One can certainly find people (though not usually here) who insist that everything achieved so far is "just database lookup" or "just a machine doing what a person programmed it to do", and who leave little doubt that they would continue to do so regardless of what had been achieved. There are also philosophers who make more sophisticated versions of the same argument, such as by imagining p-zombies, which are unfalsifiably merely faking intelligence. Such people want to take the goalposts off the field, but the rest of us should be able to discuss what has been achieved, and what remains undone, without being distracted by arguments over the precise semantics of the phrase "artificial intelligence."
Perhaps another way to see this is that humans use their intelligence to play Go and chess, but playing Go and chess does not require intelligence: a machine can do it, even though it's not intelligent; and it can do it better than any human. And perhaps it can do it better than any human because it's not intelligent.
Maybe then intelligence is not really useful for playing Go or chess, but for other tasks, that we haven't quite pinned down yet because we don't really understand what intelligence is in the first place. And maybe all the successes of AI that fall victim to the AI effect are all steps towards understanding what intelligence is, by pointing to what intelligence is not.
We think of intelligence as an absolute advantage, without downsides. But if humans, who are intelligent, are worse at tasks like chess and Go than machines, which are not intelligent, then perhaps we have to start thinking of intelligence as having both strengths and weaknesses. Perhaps we'll find that, while there are tasks that cannot be accomplished without intelligence, there are also tasks for which being intelligent is an impediment rather than an asset.
For example: playing Go or chess at all, never mind at a high level. Writing good music. Passing a Turing test. Driving at least as safely as the average driver.
Maybe a third of the population is going to struggle with ticking off even one of those requirements. 
Someone who can do all of the above is comfortably in the top 5% of the human ability range.
Curiously, the usual list of goals looks suspiciously like the interest profile of a tenured CS academic.
Things humans do but AIs don't include:
Parsing complex social and personal interactions and maintaining maps of social and political relationships. Improvising solutions to problems using available resources. Converting rule-of-thumb learning into memorable narratives, either as informal instruction or as a formal symbol system. Communicating with nuance, parable, irony, humour, metaphor, and subtext.
Some humans can also parse complex domains and extract an explicit rule set from them - but that's a much less common skill.
Except for that last one - maybe - these all seem like they're much closer to the human version of intelligence than any goal based on a specific output.
Even the driving, because many people can't drive at all, so it's not a 50% break at the average. And even the Turing test, because there are still a lot of humans with no Internet or computer experience, and they'd find the glass-terminal experience very strange and unsettling.
I am not sure what you mean with "can't do". Could you please clarify?
Further, could you describe how you would determine that a person "can't do" something? For example, how would you determine that a person "can't (do)" play chess or Go at all?
"Hey so-and-so, do you know how to play Chess?"
And I am not talking about philosophical or linguistic definitions, but strict mathematical definitions with proofs and experiments.
We had some pretty goofy ideas on what intelligence looked like and required back in the day. But I do think we realized that beating a chess grandmaster wasn't the defining point of intelligence, well before a chess program actually beat a grandmaster.
Do you mean they looked goofy back then, or only in hindsight? My point is not about chess or racism but how we justify things, and the implied question of why we have to justify things to make ourselves look good.
It says that every breakthrough in AI, once accomplished, forces us to reclassify the accomplishment as no longer being AI or intelligence. I'd suggest reading the full wikipedia article, but I quote:
> As soon as AI successfully solves a problem, the problem is no longer a part of AI. [...] practical AI successes, computational programs that actually achieved intelligent behavior, were soon assimilated into whatever application domain they were found to be useful in, and became silent partners alongside other problem-solving approaches, which left AI researchers to deal only with the "failures", the tough nuts that couldn't yet be cracked. [...] A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labelled AI anymore.
What happens when "strong" AI turns out to just be "whatever technique finally yields the full spectrum of human intelligence"? The AI effect will be back, claiming it's not AI, but just "whatever technique" was used to achieve the outcome.
Richard Feynman recalls a disagreement with a friend, in which he maintains that a scientist can appreciate the beauty of a flower just as much as an artist. Feynman believes the aesthetics of a flower can be recognised by everyone. He says that a scientist can also imagine the plant at a cellular level, and see the beauty in its complex inner structure. He wonders whether this beauty can also be appreciated by other creatures.
I am completely bored with version control, flaky automated testing systems, decaying source code, the "development process", not being empowered to do good work, etc.
Things are fun when they're new. They're also fun when they don't require a lot of work. Small software is fun. Shoehorning features into a massive codebase or hunting for obscure bugs in a complex system is not always fun. Even when it is fun, it is draining.
But, of course, it’s useful to maintain a sense of collective achievement here - we’re where we are now because generations of scientists and engineers did the work to figure out the magic and how to harness it.
I'm not sure I can say this of anyone, although I won't deny that there are a few people that tend to do a lot better at this than most ;)
If I suddenly stumbled upon a giant pile of money, I'd still do what I do because it's honestly amazing. The applied sciences are the closest things to magic we'll ever get.
As a teenager I studied every magic book in the library, practiced for hours, started performing and eventually made a living in college as a magician, even touring on occasion. As I became more skilled and knowledgeable I eventually got into studying magic theory, learning from some very experienced pros. The 'real' fun in magic for me was coming up with new effects and methods.
However, the tough part is once you get to a certain level, you find there are no magic tricks that give that 'zap' of delight you got when you didn't immediately know how they worked. I suspect this effect may be most severe in magic because the visceral impact relies on not knowing the method. You can be an expert musician, able to deconstruct chord progressions and rhythms yet still lose yourself in dancing to music you love. However, not so for advanced magicians.
I can still enjoy watching a really good magician on other dimensions like technical execution, creativity or even entertaining presentation but that momentary zap is gone forever.
Often, I compare magic to programming. It’s all about the “effect” and wowing people with something they did not believe possible. But software, as with a trick, loses its effect once you see it the second time. And, once you get into the details of how the sausage is made, it’s technical, time-consuming and probably not worth it for most people who just enjoy the effect.
There’s a book, “The Royal Road to Card Magic”, and there’s a line in it which says (paraphrasing) “there is as much joy in being the deceiver as there is in being deceived”. Perhaps that’s where the joy is to be found, on the other side of that transaction, rather than in “enjoying being deceived”?
So perhaps you lost the 1st and 2nd delight, but you still can have the 3rd.
You could also argue that you didn't really lose anything, no more than somebody "loses" virginity. There are people who simply aren't amazed by magic tricks regardless whether they know the method or not. Maybe we are lucky that we can be amazed at all!
The Joys of the Craft
First is the sheer joy of making things. As the child delights in his mud pie, so the adult enjoys building things, especially things of his own design. I think this delight must be an image of God's delight in making things, a delight shown in the distinctness and newness of each leaf and each snowflake.
Second is the pleasure of making things that are useful to other people. Deep within, we want others to use our work and to find it helpful. In this respect the programming system is not essentially different from the child's first clay pencil holder "for Daddy's office."
Third is the fascination of fashioning complex puzzle-like objects of interlocking moving parts and watching them work in subtle cycles, playing out the consequences of principles built in from the beginning. The programmed computer has all the fascination of the pinball machine or the jukebox mechanism, carried to the ultimate.
Fourth is the joy of always learning, which springs from the nonrepeating nature of the task. In one way or another the problem is ever new, and its solver learns something: sometimes practical, sometimes theoretical, and sometimes both.
Finally, there is the delight of working in such a tractable medium. The programmer, like the poet, works only slightly removed from pure thought-stuff. He builds his castles in the air, from air, creating by exertion of the imagination. Few media of creation are so flexible, so easy to polish and rework, so readily capable of realizing grand conceptual structures. (As we shall see later, this very tractability has its own problems.)
Yet the program construct, unlike the poet's words, is real in the sense that it moves and works, producing visible outputs separate from the construct itself. It prints results, draws pictures, produces sounds, moves arms. The magic of myth and legend has come true in our time. One types the correct incantation on a keyboard, and a display screen comes to life, showing things that never were nor could be.
Programming then is fun because it gratifies creative longings built deep within us and delights sensibilities we have in common with all men.
I consider programming the most magical thing we have: we put mysterious incantations into mysterious contraptions and mysterious things happen. Harry Potter's wand is not really impressive compared to a smartphone. I find software wonderful even after doing it for over twenty years.
You'll find yourself drunk one Saturday night fiddling with yet another side project, tears rolling down your face, wishing you could just work on code that you don't actively hate.
That being said, a seemingly simple (and long finished) hobby hardware+software project somehow led me down to the path of engulfing myself in the theory of Digital Signal Processing. It is mostly math, far more than I used to be confronted with in my career as a software engineer, even with a solid analog+digital hardware hobby, and I'm diving deep into the very fundamentals: Discrete Fourier transforms, Z-Transforms, Quadrature Amplitude Modulation... not just learning how to apply it, but how it's actually derived.
For some reason, this time the magic sticks. For one, there is the immensely satisfying feeling when I actually understand, really get a grasp on, a complex subject within that field. You would expect that, as usual, this is where the magic stops. But then actually applying that daunting theory to real-world problems, and watching it do what you intended, is still just very... magical.
It's also really cool having so many opportunities available for personal projects.
The number of “masters” I have met in my career who were really barely intermediate in a narrow area... I've been doing this since my early teens and I am in my 40s now; I am really, really good. But a master? No.
Many of the projects I had always wanted to do were suddenly within reach, and I learned a lot, and made lots of cool things.
Now though, when I talk to people there, and they ask me what I'm working on, I have no good answer. All the low hanging fruit that I had been reaching for has already been picked. The interesting problems to me are still out of reach, and now require even more specialized equipment or technology.
If anyone has a spare MRI machine or electron microscope they want to get rid of, I'll take it!
I’m really not sure where people go from here. I don’t know whether to stay in the industry or plan to switch careers; I’d probably be just as bored with something else. Age seems to be accompanied by disenchantment, and I can’t imagine ever being bright-eyed and full of ambition to excel in a field like I once was. Maybe I’m just jaded.
Reading the topic article, I’m a little glad I didn’t study CS. At least I got to feel the “magic” he mentions in the first few years of my career. This poor guy’s already spent and it doesn’t even sound like he’s in his first real job.
Based on every piece of software ever it sounds like OP just doesn't appreciate just how deep the rabbit hole goes. I mean, even something as simple as `cat` is 700+ lines of code, and it would probably take a novice years to understand every single nuance of that program to the point where they could build something comparable on their own. And programming is still in its infancy. If you want more rabbit holes than you can shake a stick at, just look at algorithms, data structures, new languages, networking, compression, high-performance computing, massively parallel systems, zero-knowledge proofs, formal verification, you name it.
If you find yourself in a technological field and yet very uninspired, it's more than likely you've lost contact with the users of that technology.
To revitalise yourself, engage with the users of your technology: go find them, see how they use it, see how your technology changes their lives. That is the purpose of technology, and it's where all the magic lies.
Users are key. Don't have users? That's your problem. Got no clue how your users use your stuff? Again, that's the problem. Don't see them improving their lives in some way with your technology? Then don't expect there to be that magic feeling.
Disclaimer: Have lost and found the magic over 30 years of experience as a software developer. This always works for me: put down the tools and go spend time with your users.
I think this is a huge problem in the tech world especially and businesses would be wise to find solutions to it. Sending workers on outings to interact with users sounds like a great idea.
It has always worked. Software projects (hardware too of course) get off the rails when too much time is spent in the woods and not enough time enjoying the trees. Get out there and use what you've made - you'll see what you need to do after assuming the perspective of a user for a while ..
For fun, look up "Strandbeest" or "Wintergatan marble machine" on YouTube to see some mechanical equivalents that (at least for me) trigger the same satisfaction at seeing the parts of a complex system come together.
However, I think the magic is now in seeing the excitement, seeing that spark getting lit in others. Whether it is people just getting into the profession or those studying, the magic has moved to an external locus.
Not the same kind of magic as the OP describes, but still satisfying.
Replace the word beauty with magic, and you get a similar point with regards to the "loss" of magic. It's not a loss at all, it's really just familiarity; "knowing" how it works doesn't make it any less magical (or beautiful). My suggestion to recapture that magic, is to delve further into the things you don't understand or know, and to ask further questions to unravel deeper layers, rather than continually having to use the knowledge you already have. It's worth recognizing that you can still appreciate that "magic", despite having looked behind the curtain to see how it works.
I think this is the key part. (See also: http://www.commitstrip.com/en/2014/11/25/west-side-project-s...)
An opposite of this is moving between different technologies so quickly that you feel like you're forever a newbie.
If the spark is fading, start teaching, either on the side or as a mentor. You'll get asked questions you can't answer off the top of your head, and they will make you dig, and remember.
It feels good to spread that spark and fan your own flames in the process.