The standard way of representing mathematics today is tedious to write and extremely non-intuitive in many ways. Unfortunately, alternative systems for representing physical processes (e.g. block diagrams, Feynman diagrams, etc.) don't yet provide a way for the user to operate on higher-level objects while staying at an abstracted level. One has to tear open all the black boxes and rewrite them as integral/sigma/matrix/bra/ket soup before they can be operated upon.
Most of the time when I read a physics paper, even in my own field of research, I spend about 95% of my time and brain power parsing and 5% understanding. This should be reversed.
In addition, a startling problem is that it now takes about 25-30 years of education from birth before any individual can contribute productively to physics research. They are subsequently productive for at most another 30-40 years. As fundamental science becomes more advanced, the growing number of years of education required to catch up with all of human history, even in a highly narrow, specialized field, is not a sustainable trend. Either modifying the brain or adopting a new framework of thought will be necessary to sustain progress; a notation revolution may facilitate the latter.
The traditional static symbolic notation of math may be the most efficient one. It's constrained, but that constraint forces abstraction to emerge. If people had had Excel back then, we'd never have found Gauss's formula for the sum of a sequence; we'd have let the system do the work for us.
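To make the Gauss example concrete, here is a minimal sketch (in Python; the function names are mine, purely for illustration) contrasting the spreadsheet-style brute force with the closed form 1 + 2 + ... + n = n(n+1)/2 that working with the symbols forces you to discover:

```python
def brute_force_sum(n):
    # What a spreadsheet would happily do for us: add the terms one by one.
    total = 0
    for k in range(1, n + 1):
        total += k
    return total

def gauss_sum(n):
    # The closed form that emerges from the notation:
    # 1 + 2 + ... + n = n * (n + 1) / 2
    return n * (n + 1) // 2

# Both agree, but only the second embodies an abstraction.
print(brute_force_sum(100), gauss_sum(100))  # 5050 5050
```

The point being: with the brute-force route always available, there is little pressure to ever find the second function.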
I think some people in this thread must be using "notation" in a different sense than I am accustomed to. I don't think of the computer's ability to automate addition as an innovation in "notation", per se.
I don't see how any improvement can be anything more than cosmetic.
Re Scheme as notation (or APL/J many years ago) -- when I hear a programming language proposed as a replacement for math, I always wonder "how does that make proving theorems easier?" I haven't read Sussman's book, and he is way smart, but I have listened to lots of APL weenies say "Really, it's just like math" but ... no, it isn't...
EDIT: I just glanced at the first part of Sussman's book on Classical Mechanics, and it looked like Lagrangian mathematics to me, no new notation in the first few sections.
What I meant is that, no, I don't think that GA is a substantial improvement in notation, even though a vocal minority might have made you believe otherwise. One piece of evidence is that physicists would have happily picked up the notation otherwise (and no, this is not an appeal to authority ;). Also, I am not sure what you mean by "in the works", since it is quite an old concept.
Anyway, the model is great, and IMO should be taught in high schools. It would displace a lot of the redundant and obnoxiously inconsistent models we use instead, and ultimately save students a lot of time.
It takes a bit of work to get used to reasoning in GA vs. the standard vector model, or via trigonometry, etc., but there’s a pretty big payoff, I find. Many problems are substantially easier to reason about in GA terms, and many bits of bookkeeping can be avoided compared to the standard approaches.
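As one concrete illustration of the bookkeeping GA can absorb, here is a minimal hand-rolled sketch (not from any particular library; conventions are one common choice, assumed for illustration) of the 2D geometric algebra, where rotating a vector is just the rotor sandwich v' = ~R v R rather than a special-case rotation matrix:

```python
import math

# A 2D multivector as a tuple of (scalar, e1, e2, e12) components.

def gp(a, b):
    """Geometric product in the plane: e1*e1 = e2*e2 = 1, e12*e12 = -1."""
    a0, a1, a2, a3 = a
    b0, b1, b2, b3 = b
    return (
        a0*b0 + a1*b1 + a2*b2 - a3*b3,  # scalar part
        a0*b1 + a1*b0 - a2*b3 + a3*b2,  # e1 part
        a0*b2 + a1*b3 + a2*b0 - a3*b1,  # e2 part
        a0*b3 + a1*b2 - a2*b1 + a3*b0,  # e12 (bivector) part
    )

def rotor(theta):
    """R = cos(theta/2) + e12*sin(theta/2): encodes a rotation by theta."""
    return (math.cos(theta / 2), 0.0, 0.0, math.sin(theta / 2))

def reverse(m):
    """Reversion: flips the sign of the bivector part."""
    return (m[0], m[1], m[2], -m[3])

def rotate(v, theta):
    """v' = ~R v R -- the same sandwich formula works in any dimension."""
    R = rotor(theta)
    return gp(gp(reverse(R), v), R)

e1 = (0.0, 1.0, 0.0, 0.0)
print(rotate(e1, math.pi / 2))  # approximately (0, 0, 1, 0): e1 rotated onto e2
```

The payoff shows up less in 2D than in 3D and higher, where the same rotor sandwich replaces rotation matrices, quaternions, and their conversion bookkeeping with one uniform operation.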
More generally, the study of geometry has been systematically deemphasized and devalued over the past 100 years in favor of analysis. This has IMO caused some serious problems for students' and scientists' geometric fluency and reasoning. GA is IMO one way of helping reunify algebraic/analytic reasoning with geometric reasoning.
By “in the works” he probably means that GA seems to be picking up steam: David Hestenes has put in a career’s worth of work reintroducing and popularizing ideas that were mostly ignored after Grassmann’s time, with the singular exception of Clifford (and outside of highly technical pure mathematics contexts, and a few physicists studying general relativity), and especially in the last 20 years or so a slightly bigger community has gotten involved. Many more people have heard of or thought about it today than 10 years ago, there are now a handful of textbooks, etc. In particular, some computer graphics / computer vision / robotics folks have found GA very useful in their work.
As for why physicists haven’t been champing at the bit: GA as a model doesn’t allow people to solve any new problems that they couldn’t solve before using existing models; it just makes understanding what’s going on a bit easier. It’s sort of like refactoring all your code: it takes a lot of effort to reframe everything in terms of a new model, and the payoff is mostly for the people who are learning the material for the first time rather than for current professional mathematicians and physicists. Radical changes to low-level models take a long time to propagate through society, or are sometimes altogether impossible (which is why we still use base 10 instead of base 12, why America is stuck with imperial units, why we have a completely hacked-together calendar of unequal-length months, why we use π as the circle constant instead of either 2π or π/2, why we haven’t normalized English spelling, etc.)
I thought that I was inured to future speculation (except in the sense of "what wrong things are intelligent people thinking today?"—a comment on general such speculation, not Wilczek's in particular), but I think that this is one of the most exciting ideas I have ever read. I'm being quite serious.
"The quantum revolution gave this revelation: we’ve finally learned what Matter is. The necessary equations are part of the theoretical structure often called the Standard Model. That yawn-inducing name fails to convey the grandeur of the achievement, and does not serve fundamental physics well in its popular perception. I’m going to continue my campaign, begun in The Lightness of Being, to replace it with something more appropriately … Standard Model → Core Theory"
One thing that is rather hot now and will likely become a richer field over the next decade or so (and will likely leave its imprint on technology and life in 100 years) is the merging of physics with biology. In my opinion, the use of physics and quantitative methods in biology is the next area of innovation for this century. That might fall more under "Biology in 100 years," however.
What's so hard to explain? Airplanes and cars? You had them then, they're just more common now. TV? Yeah, it's like radio and movies combined. Cellphones? Ok, phones are portable now. Computers? Alright, just tell them what a computer is.
They can handle it. They were human then too, not some alien species of comparative primitive idiots.
(If the argument were instead that it's unreasonable to expect someone to accurately predict the world of 100 years hence, I would agree; long-term prediction has not historically been something anyone has been all that good at. But the difficulty of predicting the future doesn't mean one couldn't handle exposure to it.)
In 1915, radio was far more primitive than you are thinking, and very few people had one. 'Talking pictures' had not even been invented yet. If you took a more technically minded person from, say, New York, they might be able to wrap their head around what is occurring. If you took a farmer from the Midwest, they would be totally lost. WWI had not been going on long, and the world was beginning a rapid set of changes from manpower to machine power. 1914 is commonly considered the end of the Steam Age.
Next, it would be far harder for them to cope socially than to comprehend the technology as such. Think of the common language we use when talking to each other. People would almost be speaking your language, but they wouldn't be speaking it at all dude, post that to your blog and tweet it. Even common things like food would seem totally foreign, in the most literal sense. Foods were very regional and always made at the time of eating. Now you can get food from practically anywhere, anytime, frozen solid, and you can heat it up in magic beeping boxes with some kind of magic rays inside.
Lastly, unless you were from one of the always-on cities in 1915, the pace of the modern world would wear you down so quickly you'd be in a constant state of exhaustion. The rate of change we consider normal these days is unprecedented, as is the speed at which we travel. The copious communication we are expected to maintain with large numbers of people is not something that commonly occurred in the past. Now imagine a future where brain implants allowed you to instantly connect to devices or other people's minds all around you. You could understand it, but the feed of information to and from you would likely be something your mind would have trouble dealing with, to say nothing of its implications for privacy, something we focus on quite often these days.
The social/cultural changes are what matter, I think, and the technology is actually much less important.
On the other hand, I think if you brought an Afghan tribesman from the 1950s into the modern US, they would be more blown away and have a harder time adjusting (EDIT: even though they would be more familiar than an ancient Greek or Roman with telephones, cars, etc.).
Do you really believe that it would be so simple? I think that it's hard to appreciate how much computers have changed the shape of our lives in large and small ways. For example, imagine bringing Turing to the present day. Would he have any trouble understanding what a computer is, as a mechanical or mathematical device? Probably not. Would he be baffled by, and initially uncomprehending of, the way that computers have changed our lives? I'd say almost certainly. (I often am, just thinking back on the way things were as recently as 20 years ago, and I lived through the change.)
Computers have changed our lives in many large and small ways. Great. Tell that to the traveler from 1915. Show them. They'll understand; they've witnessed the same phenomenon with the technological developments of their own time. They won't be used to our modern world, but their mind won't violently reject exposure to the concept.
Consider these technologies and innovations that had already been developed:
* Jacquard loom (http://en.wikipedia.org/wiki/Jacquard_loom)
* Player pianos (http://en.wikipedia.org/wiki/Player_piano)
* Widespread use of interchangeable parts (http://en.wikipedia.org/wiki/Interchangeable_parts#Late_19th...)
* Assembly lines (http://en.wikipedia.org/wiki/Assembly_line)
* Tabulating machine for the 1890 US Census (http://en.wikipedia.org/wiki/Tabulating_machine)
* Transoceanic telegraph networks (http://en.wikipedia.org/wiki/Electrical_telegraph)
* Use of humans for distributed computation (http://en.wikipedia.org/wiki/Human_computer)
Automated computation is an amazing concept, but--at the risk of historical bias--what we have now is merely an optimized and widely available form of what we had then.
What I imagine they'd have a really difficult time accepting is modern physics, particularly quantum theory. Then again, the vast majority of the public today (including me) struggles with it.
And culturally? Well, for one (U.S.-focused) example, there's that whole Civil War/end-of-slavery thing smack dab in the middle of it.
It's by no means clear to me that the changes of 1915 - 2015 somehow completely outstrip the changes of 1815 - 1915.
(I see that you agree with my general point, and with good illustration, too. I just wanted to point this out as well.)
The UK still had slums with limited indoor plumbing during the 1950s. It wasn't until the slum clearances of the 1960s that the standard became an indoor bathroom in a house with central heating.
The big change between 1915 and 2015 is that the technological Gini coefficient has shrunk so much.
Being rich gets you few technological perks. You can buy a supercar or a private jet, but that's a difference of degree, not a difference in absolute access.
Most people have cars, and almost everyone can afford to fly. And all but the very poorest have Internet and electronic media access.
Likewise for cultural differences. Sexual morality - at least as publicly presented - is completely different now.
Work culture is somewhat different. Politics and finance have probably changed least of all.
The point being that someone from 1915 may just about be able to understand the Internet and computing. But they're going to have a really tough time learning how to parse Buzzfeed or TechCrunch or Reddit. Never mind what happens when they find PornHub or Tinder.
There are three things to learn, not one. The first is what the technology does. The second is the vocabulary of new names and the new concepts used to describe. The last is the social scripting that defines appropriate and inappropriate behaviour.
Those last two will take longest and be hardest.
I'd expect similar challenges in 2115, with the difference that there's likely to be much more social and political change, and not less.
It would surprise me if the definition of "human" hadn't changed fundamentally by then, together with almost everything we think we know today about culture, politics, and economics.
Also, I didn't mean to imply full agreement with chaosfactor (https://news.ycombinator.com/item?id=9279631)'s "die from culture shock", which I think overstates it. I just meant to say that there's a wide gap between being told the facts (about anything profound, not just computers) and really coming to terms with their implications, and that I think that achieving the first is easy and the second hard.
> “What amazed him most of all,” Peskov recorded, “was a transparent cellophane package. ‘Lord, what have they thought up -- it is glass, but it crumples!’”
Everything else can be learned - after all, even dogs get used to Roombas...