Where Has the Magic Gone? (arnavdhamija.com)
106 points by shorts_theory 8 months ago | 89 comments



This seems to be the engineer's version of a type of sentiment that has been expressed for hundreds, if not thousands of years: once you really understand a thing, it's not magical anymore. A great example of this is Mark Twain's writing on his experience with the Mississippi river before and after being a riverboat captain ("Two Ways of Seeing a River")[1].

[1] https://wordenenglishiv.weebly.com/uploads/2/3/6/5/23650430/...


Indeed. ELIZA was created to kill the magic. Its author, Joseph Weizenbaum, said it precisely:

'It is said that to explain is to explain away. This maxim is nowhere so well fulfilled as in the area of computer programming, especially in what is called heuristic programming and artificial intelligence. For in those realms machines are made to behave in wondrous ways, often sufficient to dazzle even the most experienced observer. But once a particular program is unmasked, once its inner workings are explained in language sufficiently plain to induce understanding, its magic crumbles away; it stands revealed as a mere collection of procedures, each quite comprehensible. The observer says to himself, "I could have written that." With that thought he moves the program in question from the shelf marked "intelligent" to that reserved for curios, fit to be discussed only with people less enlightened than he.'

The phrase "to explain is to explain away" is Shakespearean in its precision.

But such regret at a loss of magic (or, put another way, a loss of ignorance) is IMO not a good sign: it suggests a person wants to be somehow deceived, and I don't think that's healthy. A bit harsh perhaps, but just my view.


I also like the AI Effect: https://en.wikipedia.org/wiki/AI_effect

It's the idea that "AI is anything that has not been done yet". Or as I like to say: "AI is any algorithm you haven't understood yet."

So you can go:

  * "That's not AI, that's just a regex over a string!"
  * "That's not AI, that's just a lookup over a dictionary!"
  * "That's not AI, that's just a series of if statements!"
  * "That's not AI, that's just a search for keywords in text!"
  * "That's not AI, that's just an optimized brute force over a large search space!"
  * "That's not AI, that's just a linear regression!"
  * "That's not AI, that's just a neural network!"
  * "That's not AI, that's just Bayesian Statistics!"


The AI effect also stems from naming your research "AI", which is pretty broad and whose meaning can change with context.

Say, for example, I go on a quest to create "AI" from scratch and start by inventing string interning to keep track of symbols. It would be a pretty big deal for me, but it would absolutely not be AI, which was an ill-defined goal from the start.
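
For anyone unfamiliar with the term, string interning just means keeping one canonical copy of each distinct string, so that comparing symbols becomes a cheap identity check instead of a character-by-character comparison. A minimal, purely illustrative Python sketch, not anyone's production implementation:

  # Minimal string-interning sketch: one canonical copy per distinct symbol.
  _intern_table = {}

  def intern(symbol):
      # Return the stored copy if we've seen this string before, else store it.
      return _intern_table.setdefault(symbol, symbol)

  a = intern("lambda")
  b = intern("lambda")
  assert a is b  # same object, so symbol comparison is effectively a pointer check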

String interning, though, will be useful for a lot of disciplines, and a good marketing department will start calling it AI to get more moolah out of it.

This is exactly what happened in the 80s, and it's practically what is happening today. "AI" is a great motivator to call any of your projects a success, because it encompasses everything. Programming languages, GUIs, networking, and whatnot have come out of "AI" research.

In my book, AI just means one thing: a general-purpose machine that can do anything a human can, or more. Not chess, not StarCraft, not spying, but everything. People have started calling this hypothesis strong AI, but I think AI will do. This should be the final goal. Anything before that - programming languages, deep learning, networking, hardware design - should be called by its own name and judged on its own merit.


I consider this argument flawed because it equates intelligence with human intelligence. The field is Artificial Intelligence, not Human Artificial Intelligence.

A dog is intelligent, as is a pigeon. Even bees and some mollusks, like the cuttlefish, are intelligent. They can't think in all the ways a human can but at what they do, they are competent, even clever.

I feel the same is true for machine intelligences. It takes intelligence to learn Go or chess, but it also takes intelligence to play, or at least this is what we say for humans. When a human is thinking about Starcraft, we consider only how good their thought patterns are for that game. We do not look at their vision, walking, social skills or whatever. The same should be applied to the chess or Go AI while it is playing chess or Go. One can complain that all it knows how to do is play chess, but judging it on anything else is unfair.


The founders of the field left no doubt that their goal was, indeed, human-like intelligence, so the position you are stating here was the original goalpost-shifting.

There is nothing wrong with those original ambitions, and it is to nobody's discredit that progress has been slower than originally hoped (the same could be said of many other fields, from space travel to curing cancer).

One can certainly find people (though not usually here) who insist that everything achieved so far is "just database lookup" or "just a machine doing what a person programmed it to do", and who leave little doubt that they would continue to do so regardless of what had been achieved. There are also philosophers who make more sophisticated versions of the same argument, such as by imagining p-zombies, which are unfalsifiably merely faking intelligence. Such people want to take the goalposts off the field, but the rest of us should be able to discuss what has been achieved, and what remains undone, without being distracted by arguments over the precise semantics of the phrase "artificial intelligence."


I agree. This is generally not the place to argue over semantics, but rather to discuss and discover aspects of technology so that they can be improved.


>> It takes intelligence to learn Go or Chess but it also takes intelligence to play, or at least this is what we say for humans.

Perhaps another way to see this is that humans use their intelligence to play Go and chess, but playing Go and chess does not require intelligence: a machine can do it, even though it's not intelligent; and it can do it better than any human. And perhaps it can do it better than any human because it's not intelligent.

Maybe then intelligence is not really useful for playing Go or chess, but for other tasks, that we haven't quite pinned down yet because we don't really understand what intelligence is in the first place. And maybe all the successes of AI that fall victim to the AI effect are all steps towards understanding what intelligence is, by pointing to what intelligence is not.

We think of intelligence as an absolute advantage, without downsides. But if humans, who are intelligent, are worse at tasks like chess and Go than machines, which are not intelligent, then perhaps we have to start thinking of intelligence as having both strengths and weaknesses. Perhaps we'll find that, while there are tasks that cannot be accomplished without intelligence, there are also tasks for which being intelligent is an impediment rather than an asset.


Many humans can't do any of the things CS textbook AIs are supposed to do.

For example: play Go or chess at all, never mind to a high level. Write good music. Pass a Turing test. Drive at least as safely as average.

Maybe a third of the population is going to struggle with ticking off even one of those requirements. [1]

Someone who can do all of the above is comfortably in the top 5% of the human ability range.

Curiously, the usual list of goals looks suspiciously like the interest profile of a tenured CS academic.

Things humans do but AIs don't include:

Parsing complex social and personal interactions and maintaining maps of social and political relationships. Improvising solutions to problems using available resources. Converting rule-of-thumb learning into memorable narratives - either as informal instruction, or as a formal symbol system. Communicating with nuance, parable, irony, humour, metaphor, and subtext.

Some humans can also parse complex domains and extract an explicit rule set from them - but that's a much less common skill.

Except for that last one - maybe - these all seem like they're much closer to the human version of intelligence than any goal based on a specific output.

[1] Even the driving, because many people can't drive at all, so it's not a 50% break at the average. And even the Turing test, because there are still a lot of humans with no Internet or computer experience, and they'd find the glass terminal experience very strange and unsettling.


Another thing that humans do but AIs don't: They recognize that they are doing everything on your list and wonder how they do it. This self-aware consciousness is something that goes beyond any particular skill.


I’m curious: has anyone actually tested how many humans “wonder how they do [a thing]”? What would such a test of the general population even look like?


I am not aware of any such study - not that that means anything. If one assumes, as seems reasonable to me, that a person's theory of mind is based on at least a tacit assumption that other people function somewhat like oneself, then one might make the working assumption that experiments on a person's theory of mind [1] also reveal something about how they tacitly perceive themselves. If you want to know something about their explicit thoughts about their mental capabilities, one could start by asking them.

[1] https://en.wikipedia.org/wiki/Theory_of_mind#Empirical_inves...


>> Many humans can't do any of the things CS textbook AIs are supposed to do.

I am not sure what you mean with "can't do". Could you please clarify?

Further, could you describe how you would determine that a person "can't do" something? For example, how would you determine that a person "can't (do)" play chess or Go at all?


> how would you determine that a person "can't (do)" play chess or Go at all

"Hey so-and-so, do you know how to play Chess?"


Edit: I don't think that's the meaning of "can't do" that the OP had in mind. Why not let them clarify what they meant?


The root, as I see it, is that intelligence is still very loosely defined (or not defined at all). We cannot simulate something undefined.

And I am not talking about philosophical or linguistic definitions, but strict mathematical definitions with proofs and experiments.


[flagged]


> I mean beating a chessmaster was considered (at one point, and by some) to be the defining point where AI could be defined as Intelligent.

We had some pretty goofy ideas on what intelligence looked like and required back in the day. But I do think we realized that beating a chess grandmaster wasn't the defining point of intelligence, well before a chess program actually beat a grandmaster.


> goofy ideas [...] back in the day

Do you mean they looked goofy back then, or only in hindsight? My point is not about chess or racism but how we justify things, and the implied question of why we have to justify things to make ourselves look good.


That's the AI effect!

It says that every breakthrough in AI, once accomplished, forces us to reclassify the accomplishment as no longer being AI or intelligence. I'd suggest reading the full wikipedia article, but I quote:

> As soon as AI successfully solves a problem, the problem is no longer a part of AI. [...] practical AI successes, computational programs that actually achieved intelligent behavior, were soon assimilated into whatever application domain they were found to be useful in, and became silent partners alongside other problem-solving approaches, which left AI researchers to deal only with the "failures", the tough nuts that couldn't yet be cracked. [...] A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labelled AI anymore.

What happens when "strong" AI turns out to just be "whatever technique finally yields the full spectrum of human intelligence"? The AI effect will be back, claiming it's not AI, but just "whatever technique" was used to achieve the outcome.


The people who point out that AI has not yet replicated key aspects of human intelligence are merely stating facts. There is no basis for saying that they would necessarily do so even if and when this is no longer the case. What is the point of making such an allegation? It looks rather like an attempt at building a straw man: "claims that AI isn't there yet can be ignored because these people will never be satisfied."


Anyone with this problem should switch to physics, because it gets better as you learn more. I bet there are a lot of other things like that.


Any excuse to post something from the BBC's Feynman Archive.

https://www.bbc.co.uk/programmes/p018w2zl (1m26s)

Richard Feynman recalls a disagreement with a friend, where he maintains that a scientist can appreciate the beauty of a flower just as much as an artist. Feynman believes the aesthetics of a flower can be recognised by everyone. He says that a scientist can also imagine the plant at a cellular level, and see the beauty in its complex inner structure. He wonders whether this beauty can also be appreciated by other creatures.


I disagree. I still love aspects of my job and will until the day I die. RF communication is still magic. The power of modern computers is absolutely amazing. Power efficiencies and transistor densities blow my mind. As a software engineer, I feel like I create something out of nothing.

I am completely bored with version control, flaky automated testing systems, decaying source code, the "development process", not being empowered to do good work, etc.

Things are fun when they're new. They're also fun when they don't require a lot of work. Small software is fun. Shoehorning features into a massive codebase or hunting for obscure bugs in a complex system is not always fun. Even when it is fun, it is draining.


But that’s the point - you fully understand the domain of software engineering, so it’s not magic, so at least some of the fun has gone out of it. You might not fully understand how RF communication works, or how the transistors in a chip are put together. An engineer working on either domain might well be bored of their work too. Imagine working 8 hours a day using suboptimal tools to model and tweak the radiant field of an RF antenna, or carefully optimizing the hardware demodulators, or dealing with the intricacies of Doppler frequency drift as your antenna sways in the wind (or is carried by a fast-moving object). Or, imagine a chip engineer having to redesign the scheduling pipeline for the 15th time to fix some silly flaw in one instruction, or a lithography engineer struggling to improve yields in the face of relentless quantum physical effects. Once you understand these things at a low enough level, the magic really can drain away.

But, of course, it’s useful to maintain a sense of collective achievement here - we’re where we are now because generations of scientists and engineers did the work to figure out the magic and how to harness it.


> you fully understand the domain of software engineering

I'm not sure I can say this of anyone, although I won't deny that there are a few people that tend to do a lot better at this than most ;)


Actually I'm an EE by education and an Extra class amateur radio operator. If you put me in a room with the components I could build a computer or a radio transceiver. I don't think understanding something necessarily removes the magic. Toil and frustration removes the magic. Working on things you don't believe in, or wasting energy on counterproductive tasks removes the magic.

If I suddenly stumbled upon a giant pile of money, I'd still do what I do because it's honestly amazing. The applied sciences are the closest things to magic we'll ever get.


OP and writer here, thanks for sharing this. I enjoyed that piece and the sentiment really resonates with me.


I can’t find the quote but it reminds me of Jonathan Creek. Paraphrasing: magic, once explained, is much more banal than it really seems.


Ha, that is always my go-to story for that idea. So far I've never met anyone who's heard it. Great autobiography.


I never understood the need for things to be magic. Isn't it cool that you can predict and harness these great forces?


"Newton has destroyed all the poetry of the rainbow, by reducing it to the prismatic colours."

- John Keats


The "magic" goes out of most things as you become expert in the domain but perhaps the greatest loss is the field of magic itself. As a kid I fell in love with watching magicians. That moment of amazement and delight was always intoxicating. It made me feel for just a moment as if anything was possible (despite knowing there's a trick behind it).

As a teenager I studied every magic book in the library, practiced for hours, started performing and eventually made a living in college as a magician, even touring on occasion. As I became more skilled and knowledgeable I eventually got into studying magic theory, learning from some very experienced pros. The 'real' fun in magic for me was coming up with new effects and methods.

However, the tough part is once you get to a certain level, you find there are no magic tricks that give that 'zap' of delight you got when you didn't immediately know how they worked. I suspect this effect may be most severe in magic because the visceral impact relies on not knowing the method. You can be an expert musician, able to deconstruct chord progressions and rhythms yet still lose yourself in dancing to music you love. However, not so for advanced magicians.

I can still enjoy watching a really good magician on other dimensions like technical execution, creativity or even entertaining presentation but that momentary zap is gone forever.


I also began as an amateur magician but mostly remained there. Never did more than a few impromptu shows for friends and family.

Often, I compare magic to programming. It’s all about the “effect” and wowing people with something they did not believe possible. But software, as with a trick, loses its effect once you see it the second time. And, once you get into the details of how the sausage is made, it’s technical, time-consuming and probably not worth it for most people who just enjoy the effect.

There’s a book, “The Royal Road to Card Magic”, and there’s a line in it which says (paraphrasing) “there is as much joy in being the deceiver as there is in being deceived” - perhaps that’s where the joy is to be found, on the other side of that transaction, rather than in “enjoying being deceived”?


I also dabbled in magic a bit in the past. I used to say that as a spectator, you're only amazed once - when you see the trick being done. But if you are actually learning magic, then you're amazed three times: once as a spectator, a second time when you get an explanation of the method (no, it can't be that such a simple thing fooled me), and a third time when you try to perform it on others and it works (surely I cannot do it as well as the guy who fooled me!).

So perhaps you lost the 1st and 2nd delight, but you still can have the 3rd.

You could also argue that you didn't really lose anything, no more than somebody "loses" virginity. There are people who simply aren't amazed by magic tricks regardless whether they know the method or not. Maybe we are lucky that we can be amazed at all!


'The loss of innocence is the price of applause'

- http://thecodelesscode.com/case/195


  The Joys of the Craft
Why is programming fun? What delights may its practitioner expect as his reward?

First is the sheer joy of making things. As the child delights in his mud pie, so the adult enjoys building things, especially things of his own design. I think this delight must be an image of God's delight in making things, a delight shown in the distinctness and newness of each leaf and each snowflake.

Second is the pleasure of making things that are useful to other people. Deep within, we want others to use our work and to find it helpful. In this respect the programming system is not essentially different from the child's first clay pencil holder "for Daddy's office."

Third is the fascination of fashioning complex puzzle-like objects of interlocking moving parts and watching them work in subtle cycles, playing out the consequences of principles built in from the beginning. The programmed computer has all the fascination of the pinball machine or the jukebox mechanism, carried to the ultimate.

Fourth is the joy of always learning, which springs from the nonrepeating nature of the task. In one way or another the problem is ever new, and its solver learns something: sometimes practical, sometimes theoretical, and sometimes both.

Finally, there is the delight of working in such a tractable medium. The programmer, like the poet, works only slightly removed from pure thought-stuff. He builds his castles in the air, from air, creating by exertion of the imagination. Few media of creation are so flexible, so easy to polish and rework, so readily capable of realizing grand conceptual structures. (As we shall see later, this very tractability has its own problems.)

Yet the program construct, unlike the poet's words, is real in the sense that it moves and works, producing visible outputs separate from the construct itself. It prints results, draws pictures, produces sounds, moves arms. The magic of myth and legend has come true in our time. One types the correct incantation on a keyboard, and a display screen comes to life, showing things that never were nor could be.

Programming then is fun because it gratifies creative longings built deep within us and delights sensibilities we have in common with all men.


Fred Brooks - The Mythical Man Month http://pages.cs.wisc.edu/~param/quotes/man-month.html


It is still magic even if you know how it's done. -Terry Pratchett

I consider programming the most magical thing we have: we put mysterious incantations into mysterious contraptions and mysterious things happen. Harry Potter's wand is not really impressive compared to a smartphone. I find software wonderful even after doing it for over twenty years.


You wait, kid, soon you'll be dealing with a terrible project manager on a doomed project in a company that does awful things to the world.

You'll find yourself drunk one Saturday night fiddling with yet another side project, tears rolling down your face, wishing you could just work on code that you don't actively hate.


There are deeper circles of hell yet. Imagine being dragged into a bureaucratic/operations/meetings soup of tasks. Then you could find yourself begging for coding tasks even if they'd be in the stack/project you now hate. The sad reality is that a lot of what we are paid to do as engineers is stuff that nobody would do for free: boring, ethically dubious, a bummer, repetitive, unnecessarily complex balls of mud, with people we don't like working with, etc. Balancing between the too-much-shite and good-enough is a whole new art that you get to know as you mature...


Call it demystification. It happens in every field. For me, it was when exotic vocabulary lost its luster, and my writing improved. (Ah, youth.) It happens in relationships too. Demystification is generally forward progress.


Hofuku said, "Right here is the peak of the mystic mountain." Chokei looked and said, "So it is, what a pity." -- Zen mondo

That being said, a seemingly simple (and long finished) hobby hardware+software project somehow led me down to the path of engulfing myself in the theory of Digital Signal Processing. It is mostly math, far more than I used to be confronted with in my career as a software engineer, even with a solid analog+digital hardware hobby, and I'm diving deep into the very fundamentals: Discrete Fourier transforms, Z-Transforms, Quadrature Amplitude Modulation... not just learning how to apply it, but how it's actually derived.

For some reason, this time the magic sticks. For one, there is the immensely satisfying feeling when I actually understand, really get a grasp on, a complex subject within that field. You would expect that, as usual, this is where the magic stops. But then actually applying that daunting theory to real-world problems, and watching it perform what you intended it to, is still just very... magical.
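
For instance, the discrete Fourier transform itself is just a literal transcription of its definition, X[k] = sum over n of x[n] * e^(-2*pi*i*k*n/N). A naive, purely illustrative Python version (real DSP work uses an FFT and a proper library, of course):

  import cmath

  def naive_dft(x):
      # Direct O(N^2) transcription of the DFT definition; an FFT computes the
      # same result in O(N log N), but this makes the derivation tangible.
      N = len(x)
      return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
              for k in range(N)]

  # A pure tone at bin 3 of an 8-sample window shows up as a single spike.
  tone = [cmath.exp(2j * cmath.pi * 3 * n / 8) for n in range(8)]
  print([round(abs(v), 6) for v in naive_dft(tone)])  # ~[0, 0, 0, 8, 0, 0, 0, 0]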


To counteract this, I recommend a career in the biomedical sciences; we fundamentally don't understand so much that "i know how it works now and so it's not special" hardly ever obtains!


Funny, I get a great sense of pleasure from understanding things well enough to apply them, especially in unconventional ways. Whenever I read articles like this I get the feeling that they might actually be depressed but misattribute it. It's very difficult to debug your own happiness.


I feel similarly. It's satisfying to feel that you know what you are doing rather than just randomly trying things and copying Stack Overflow. Every few months I get the realization that the things that I considered too hard are now simple.

It's also really cool having so many opportunities available for personal projects.


The article is not really about "randomly copying Stack Overflow". It's about the joy of exploration and learning, and how, once you have attained mastery, the details of applying it over and over again are not interesting (which is usually the case in real jobs).


The thing with mastery in software is that it is a fleeting thing. There is always more and more and more..

The number of “masters” I have met in my career who were really barely intermediate in a narrow area... I've been doing it since my early teens and I am in my 40s now, and I am really, really good. But a master? No.


I totally get this. I started a Makerspace, and was enchanted by the possibilities present in the dense technology I had gathered.

Many of the projects I had always wanted to do were suddenly within reach, and I learned a lot, and made lots of cool things.

Now though, when I talk to people there, and they ask me what I'm working on, I have no good answer. All the low hanging fruit that I had been reaching for has already been picked. The interesting problems to me are still out of reach, and now require even more specialized equipment or technology.

If anyone has a spare MRI machine or electron microscope they want to get rid of, I'll take it!


I've heard of electron microscopes being built in the garage. MRIs though I imagine would be more hefty. Could one build a portable MRI with less resolution?


I don't really want to build them to look at stuff. I need an ultra fine electron beam and extremely uniform magnetic field for my fusion reactor prototype.


Farnsworth Fusor?


It's a device of my own design. There is a bit more info about it at my website www.DDproFusion.com


Movable MRIs exist, but they're trailer-sized: you not only need to move the magnet (probably a superconductor, which then usually needs cryogens to keep it superconducting), but all the shielding too.


I've felt this too. During my learning phase with the Arduino, I had hastily bought a bunch of sensors and random pieces of hardware with some ideas but no cohesive plan of what I was going to do with it all. After a stint at my college's robotics club, I had a much better idea of how everything worked but I felt drained of the vague and exciting ideas which made me buy all the hardware in the first place.


Truth. The problem I find with makerspaces is that, because it's not "engineering", most of the things to make are pretty easy. The interesting problems require some specialized help that is hard to find. I am having huge issues with this for my machine vision drone project.


What are you trying to do in terms of machine vision for drones?


Set up control via mavlink <> ROS with a drone running gobot


I felt the "magic" of software as a teenager with my first programs. I've even faced long stretches of boredom over the years, but after two decades the awe of the magic has been replaced with the awe of my own mastery. I'm in awe when an elegant solution comes to mind, seemingly out of thin air. I find that magical.


I've been experiencing this for the last six months and unfortunately it's soured my current gig (that I started six months ago now) such that I dread my work life. Almost the entirety of tech is no longer magical once I actually read the TFS cards.


I feel this way too. The only way I can describe it is like pressing the sprint button in the video game is the only way to get any work done anymore, because my natural curiosity is largely gone. Wait until I have enough mental energy to deal with a task, press the sprint button, and wait until I can do it again. I wish I still enjoyed programming/software like I used to, but it's purely work to me now.


I’m in the same spot. Things that are well-understood seem bland, and complicated or unknown problems seem tedious. Productivity comes in 16-hour spurts separated by several days of boredom, guilt, and unsuccessful attempts to get something done. I’ve felt like this since maybe year 4 of programming.

I’m really not sure where people go from here. I don’t know whether to stay in the industry or plan to switch careers; I’d probably be just as bored with something else. Age seems to be accompanied by disenchantment and I can’t imagine ever being a bright-eyed and full of ambition to excel in a field like I once was. Maybe I’m just jaded.

Reading the topic article, I’m a little glad I didn’t study CS. At least I got to feel the “magic” he mentions in the first few years of my career. This poor guy’s already spent and it doesn’t even sound like he’s in his first real job.


For electronics engineers, the bleeding edge of wireless research feels like magic; try connecting an 802.11 antenna to a spectrum analyzer and you are guaranteed to have some serious "WTF, how does it even work?" moments.


What are some good magic retention strategies for people that feel this way? Personally I rotate through an ever growing bunch of topics of interest and find that some insight on one topic can open up the potential for magic in some other topic. I think the fascination some people have with things they don't understand is a natural incentive for seeking more knowledge.


> Once I had a robust mental model of the problem and its solution, writing code for it just felt like a perfunctory task.

Based on every piece of software ever, it sounds like OP doesn't appreciate just how deep the rabbit hole goes. I mean, even something as simple as `cat` is 700+ lines of code[1], and it would probably take a novice years to understand every single nuance of that program to the point where they could build something comparable on their own. And programming is still in its infancy. If you want more rabbit holes than you can shake a stick at, just look at algorithms, data structures, new languages, networking, compression, high-performance computing, massively parallel systems, zero-knowledge proofs, formal verification, you name it.

[1] http://git.savannah.gnu.org/gitweb/?p=coreutils.git;a=blob_p...
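
For contrast, the "conceptual" cat fits in a dozen lines - a toy Python sketch, not the coreutils implementation; the other several hundred lines are option flags (-n, -A, -s, ...), error handling, efficient buffering, and decades of edge cases:

  import sys

  def cat(paths):
      # Copy each file to stdout in binary chunks; no flags, no error recovery.
      for path in paths:
          with open(path, "rb") as f:
              while True:
                  chunk = f.read(65536)
                  if not chunk:
                      break
                  sys.stdout.buffer.write(chunk)

  if __name__ == "__main__":
      cat(sys.argv[1:])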


I think of this attitude as being a stereotype of mathematicians. "I've already proved that a solution must exist; finding it is just drudgery." Perhaps the author would enjoy branching into one of the more theoretical branches of the discipline, like theory of computation, cryptography, or mathematics proper?


The magic is where it's always been - in the hands of the user.

If you find yourself in a technological field and yet very uninspired, it's more than likely you've lost contact with the users of that technology.

To revitalise yourself, engage with the users of your technology - go find them, see how they use it, see how your technology changes their lives. That is the purpose of technology, and it's where all the magic lies.

Users are key. Don't have users? That's your problem. Got no clue how your users use your stuff? Again, that's the problem. Don't see them improving their lives in some way with your technology? Then don't expect there to be that magic feeling.

Disclaimer: Have lost and found the magic over 30 years of experience as a software developer. This always works for me: put down the tools and go spend time with your users.


I entirely agree, and what you write connects with Karl Marx's theory of alienation: that as workers become more specialized they become more alienated from the people and communities that benefit from their work.

I think this is a huge problem in the tech world especially and businesses would be wise to find solutions to it. Sending workers on outings to interact with users sounds like a great idea.


I've been brought into projects as a senior developer to try to revitalise the project and get it back on track after a catastrophe or a failure to produce, and I always recommend, straight off the bat, put down your tools and go be a user for a day/week/month, until you understand what you're doing and have a stable point around which to orient the rest of the chaos in the project.

It has always worked. Software projects (hardware too of course) get off the rails when too much time is spent in the woods and not enough time enjoying the trees. Get out there and use what you've made - you'll see what you need to do after assuming the perspective of a user for a while ..


I often have the opposite experience these days. A very (very) long time ago I knew almost everything about my computer system. I knew the physics of the transistor, the power systems, the schematic of the boards, I had practically all the source code that the machine ran and understood most of it. These days the machines are so complex that I have a very good understanding of my little corner of the machine but the rest might as well be magic for all I know about it. In a way it’s a huge victory for the engineers in our field that these things can be considered mundane.


I find the complete opposite, the less magic I have to deal with the happier I am, because it means maybe I can fix things when it doesn't do what I want it to do.


Yes, magic is a dirty word in my book. Give me bog-standard, boring, predictable systems all day long. There's still a kind of joy in being able to do the job well and efficiently, and it's easier to do the job well when you aren't spending your time scattering chickenblood and eye of newt around, muttering cryptic incantations.


At 54, I still find some wonder in the coordination of complex systems. Making a single process(or) step through a linear sequence of steps is BORING, but making a hundred or a thousand work together in some complex dance without skipping a beat can still feel pretty amazing. Maybe it's more like juggling than magic: the more balls are in the air at once, and the faster they're moving, the better. You can even get that feeling without true concurrency, when many pieces of a complex system each step in to do their brief essential part before stepping out again. Of course, when such systems fail the results can be spectacularly bad, but I guess that's the price you pay to experience the wonder when it's working.

For fun, look up "Strandbeest" or "Wintergatan marble machine" on YouTube to see some mechanical equivalents that (at least for me) trigger the same satisfaction at seeing the parts of a complex system come together.


I kind of understand the sentiment, having seen many enjoyable tasks become chores.

However, I think the magic is now in seeing the excitement, seeing that spark getting lit in others. Whether it is people just getting into the profession or those studying, the magic has moved to an external locus. Not the same kind of magic as the OP describes, but still satisfying.


I'm reminded of Feynman's discussion about The Beauty of the Flower https://www.youtube.com/watch?v=ZbFM3rn4ldo

Replace the word beauty with magic, and you get a similar point with regards to the "loss" of magic. It's not a loss at all, it's really just familiarity; "knowing" how it works doesn't make it any less magical (or beautiful). My suggestion to recapture that magic, is to delve further into the things you don't understand or know, and to ask further questions to unravel deeper layers, rather than continually having to use the knowledge you already have. It's worth recognizing that you can still appreciate that "magic", despite having looked behind the curtain to see how it works.


I feel I am not a very good developer because of a similar feeling. I do not enjoy producing lines of code, but I enjoy solving difficulties. That is one of the reasons I love making prototypes. One of my favourite professional activities is providing support. Some other team calls me in to help on issues. Often I know far less than them, and they do not always tell me everything they have done. It is like an Agatha Christie novel where I have to find the culprit (the cause of the issue). Someone else provides a clean fix. I love problems where the solution is surprising (for example https://stackoverflow.com/questions/41061400/perl-join-strin...).


Max Weber's concept of Disenchantment is essentially this applied at the societal level.

https://en.wikipedia.org/wiki/Disenchantment


Same is true of music too. Once you learn the ins & outs of playing a certain song, you almost forget why you liked it. Or worse, you master the playing style of your favorite player and suddenly you've killed your god!


One would think one potential solution is to start working on problems we don't actually understand. General AI, algebraic frameworks, high efficiency cross-cutting models, etc. Pie in the sky stuff.


I work in AI and I find it pretty discouraging at times. The biggest conference in the field had about 9000 attendees last time. There are thousands of publications coming out every year. It's basically impossible to keep up, and your chances of getting scooped (someone publishing your idea before you) are pretty high. Many of the more obvious research directions have already been tried. Many ideas don't work so well. It's tough. I personally have moments where I struggle to believe I can have any impact in this field.


I feel that a lot -- but not when I'm learning new ideas in Haskell. After years of study, that particular engineering landscape remains powerful and beautiful in ways I still don't understand.


Fuck that shit. Increased understanding of the world reveals deeper magic.


The magic is gone when you stop challenging yourself and you start looking more outwards than inwards. You have to think up crazy ideas and work on them. Most of my side projects are still magical; folks tell me I'm crazy or that it's impossible, and the first thing they almost always ask is "How will it work?" Of course, the magic is gone once I explain it. Understanding takes away the magic, which might be a good thing for you: it might mean you've grown and can now understand more things.


It has led me to think that the exciting part was never the actual implementation (or coding in this case), but figuring out the solution instead.

I think this is the key part. (See also: http://www.commitstrip.com/en/2014/11/25/west-side-project-s...)

An opposite of this is moving between different technologies so quickly that you feel like you're forever a newbie.



I'm more interested in where that magic came from. How come this thought-secreting organ in our heads feels good vis-a-vis something it does not understand, and why is this more prevalent at an early age?


To me the magic is gone because I feel that even if I create something good, I still need the permission from Google to make it popular.


At least you know to whom you should bend a knee. I do wonder if there was a time you didn't need to.


How much new do you use, before you use it all up?

If the spark is fading, start teaching, either on the side or as a mentor. You'll get asked questions you may not know the answer to off the top of your head, and it will make you dig, and remember.

It feels good to spread that spark and fan your own flames in the process.


Wireless electricity is quite literally the transmission of electrical energy without wires. People often compare the wireless transmission of electrical energy to the wireless transmission of information, for example radio, cell phones, or wi-fi internet.



