The differential equations course was one of the most baffling experiences I ever had. The professor on the first day told us the course would be rote, as opposed to proofs, and every day he copied the methods to the chalkboard. He specifically instructed us to copy them verbatim into our notebooks. In this way there was not a lot to discuss, and from time to time the professor would gently steer us back to merely copying and memorizing the methods, even though no one had questioned him out loud. The methods were entirely disconnected. I had no indication of how they were derived or what the original motivation might be. What a differential equation was or why I wanted to "solve" one---this generated a second equation---was a mystery. None of the problems in the physics sequence looked like this. The engineering students claimed to have them, but reminded me that "this was all done by computers now." In the textbook there were no word problems, only formulas, and so I was never able to infer what this might all be about. The problems gave no opportunity for insight beyond recognizing the form. On the homework I manipulated one formula into another. On the test I did the same thing. Through memorization, I got an A in the course. I never encountered a differential equation before or since.
This is standard and unavoidable. There are like a dozen tricks that solve a few special cases, and they were found after heroic brute-force search in the void. (The real fact that is somewhat hidden is that most differential equations can't be solved analytically. You solve analytically only the few cases that are solvable analytically; otherwise you just get a numerical solution or an approximation.)
> In this way there was not a lot to discuss, and from time to time the professor would gently steer us back to merely copying and memorizing the methods, even though no one had questioned him out loud.
I phrase it as “The class I cheated my way to an A but didn’t commit any academic dishonesty violation.”
> The real fact that is somewhat hidden is that most differential equations can't be solved analytically. You solve analytically only the few cases that are solvable analytically
I discovered this very early in my semester of Differential Equations. We were allowed a single 8.5x11 notesheet for the exams. As there were only a handful of "most general" cases which are solvable on paper with a basic calculator, I simply copied, for each technique we were going to be tested on, the step-by-step solution of the most general case, completely worked out.
The professor was an engineer before becoming a math professor, so he only liked to include real-world ODEs on exams, which further reduced the potential problem space.
While it greatly confused the professor/grader who scored my exam that I kept adding zero-coefficient terms before solving the differential equation perfectly…I got 100% on all the exams.
The catch was that I didn’t learn anything. The next semester it turned out that I needed to know those techniques for Reaction Kinetics and Heat&Mass Transfer and Biochemical Engineering (these courses involved deriving and solving many equations from first principles).
I had to crawl back to my Differential Equations professor's office hours for 3 weeks and beg him to actually teach me differential equations. He was very confused after asking me what grade I got (an A), and I had to explain to him how I got an A without learning anything.
To his credit, he did a fantastic job assigning me custom work for 3 weeks and reviewing it with me and I was able to learn what I needed for the more advanced courses.
But without his help and some additional tutelage from my peers, I would have been completely screwed for the rest of my Chemical Engineering major.
> most differential equations can't be solved analytically.
Exactly, so why don't they teach the numerical analysis for actually solving the PDEs that matter? These equations are highly relevant to a wide array of real-world science, and it would be extremely beneficial for many people to know them, even if (like calculus or even algebra) most people may not need them later.
I ended up wandering into a career where I work with PDEs nearly every day in some form or other, and would have greatly appreciated some basic training as part of my formal education.
Luckily there are many interesting examples that can be solved, in particular the linear differential equations. Many equations can be approximated by a linear version.
Also, in physics, a lot of ODEs are mysteriously integrable if the variable is x instead of t. (One reason is that it's easy to measure forces/fields, but the "real" thing is the potential, so you are measuring the derivative of a hopefully nice object.)
Also a lot of the theoretical advanced stuff to prove analytical solutions and to estimate the error in the numerical integrations use the kind of stuff you learn solving the easy examples analytically.
And also historical reasons. We have less than 100 years of easy numerical integration, and the math curriculum advances slowly. Anyway, I've seen a reduction in the coverage of the weirdest stuff, like the substitution t = tan(x/2) (or something like that, I always forget the details). It's very useful for some integrals with too many sins and cosines, but it's not very insightful, so it's good to offload it to Wolfram Alpha.
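For reference, the substitution being half-remembered here is the tangent half-angle (Weierstrass) substitution, t = tan(x/2), which turns any rational expression in sin and cos into a rational function of t:

```latex
t = \tan(x/2), \qquad
\sin x = \frac{2t}{1+t^2}, \qquad
\cos x = \frac{1-t^2}{1+t^2}, \qquad
dx = \frac{2\,dt}{1+t^2}
```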
Hmm, maybe. How would that impact the larger curriculum? Are you thinking a new class, or just change how differential equations is taught?
I think there is a little bit of an annoying situation where at least Electrical Engineering students are going to want Differential Equations pretty early on as they are pretty important to circuits (IIRC, I don't touch analog stuff anymore). Like maybe as a first semester 200 level class. This doesn't afford space to put a Linear Algebra class in beforehand (needed for numerical analysis).
Maybe the symbolic differential equations stuff could be stuck at the end of integral calculus, but
1) curriculum near the end of the semester is risky (students are feeling done, and it can suffer from schedule shifts).
2) Transfer students or students who satisfied their calc requirements in highschool (pretty common for engineering students) wouldn't be aware of your curriculum changes.
Or, a numerical-focused PDE class could be added elsewhere. I bet most math departments have one nowadays, but as an elective.
They do, but if you want to solve PDEs numerically instead of DEs analytically, you should enroll in the "Numerical Methods for PDE" course instead of "Analytical Methods for DE".
They do, but they’re fairly advanced level courses. For e.g. if you go down the Theoretical Physics or Applied Maths routes you’ll do perturbation theory and asymptotic analysis, probably in Master’s level or grad school.
Most people will do some computational courses that at least have them solving basic PDEs in their first or second year of undergrad now.
(This reflects the state of those in the UK at least)
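For a sense of what "solving basic PDEs computationally" looks like in such a course, here's a minimal explicit finite-difference sketch for the 1-D heat equation u_t = u_xx on [0, 1] (toy code; `heat_1d` is a hypothetical name, not from any particular curriculum):

```python
import math

def heat_1d(nx=21, r=0.25, steps=80):
    """Explicit finite differences for u_t = u_xx on [0, 1].

    dx is the grid spacing and dt = r * dx**2; the scheme is stable
    for r <= 0.5. Initial condition sin(pi*x), ends pinned at zero.
    """
    dx = 1.0 / (nx - 1)
    u = [math.sin(math.pi * i * dx) for i in range(nx)]
    for _ in range(steps):
        # Each interior point moves toward the average of its neighbors.
        interior = [u[i] + r * (u[i + 1] - 2.0 * u[i] + u[i - 1])
                    for i in range(1, nx - 1)]
        u = [0.0] + interior + [0.0]
    return u, steps * r * dx * dx

u, t = heat_1d()
# The exact solution is sin(pi*x) * exp(-pi**2 * t), so the midpoint
# (x = 0.5) should track exp(-pi**2 * t) closely.
```

Twenty-odd lines of first-year code, and you can already compare against the analytic decay rate of the sine mode.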
“Unavoidable” is a bit too strong I think. For an ODE course, what does the usual list of elementary methods really include?
- Separation of variables. If one is fine with differentials (or their modern cousins differential forms), there isn’t much to explain here.
- Linear equations solved with quasipolynomials. The only ODE-specific observation is that d/dx in the ( x^k e^x / k! ) basis is a Jordan block; the rest is the theory of the Jordan normal form, which makes interesting mathematical points (an embryonic form of representation theory) but exists entirely within linear algebra (even if it was motivated by linear ODEs historically).
- Riccati equations. These were always a mystery to me, but it appears they could also be called "projective ODEs" to go with the linear ones, and they have pretty nice geometry behind them (even if, as you said, they were first discovered by brute force search).
- Variation of parameters. Despite the mysterious appearance, this is simply the ODE case of Green’s method beloved in its PDE version by physicists and engineers. (This isn’t often included in textbooks, in fear of scaring students with Dirac’s delta, but Arnold does explain it, and IIRC Courant–Hilbert mentions it in passing as well.)
- Integrating factors. Okay, I can’t really explain what that one means, even though it feels like I should be able to.
Not that teaching it like this would make for a good course (too general, and ODEs ≠ methods for solving ODEs), but that’s essentially it, right? There are certainly other methods you could mention, and not unimportant ones (perturbation theory!.. -ries?), but this basically covers the standard litany as far as I can see. And it’s no haphazard collection of tricks—none of these is just pulling solutions out of a hat.
(In the interest of changing things up and not spending an hour on a single comment, I will omit the barrage of references I’d usually want to include with this list, but I can dig them up if somebody actually wants them.)
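(For concreteness, the first bullet really is a one-liner; the canonical separable example goes:)

```latex
\frac{dy}{dx} = k\,y
\;\Rightarrow\;
\frac{dy}{y} = k\,dx
\;\Rightarrow\;
\ln|y| = kx + C
\;\Rightarrow\;
y = A\,e^{kx}
```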
Here the first ODE course is half a semester. If you spend a week or two proving existence and uniqueness, you get one week to study each method and work a few examples, and then you must move on to the next week's trick.
Fourier/Laplace and other advanced stuff are in a more advanced course.
I never used perturbation theory for ODEs. I've seen it for finding the eigenvalues/eigenvectors of operators in QM. But perhaps it's one tool I don't know.
Around 50 years ago, I was a math major and consequently required to take a course on differential equations. For perhaps the first time, I was taking a class on mathematics that just didn't click for me; nothing was intuitive. The class was just a big bag of complicated tricks. Each category of differential equations covered in the course had its own special trick. There was no general or universal approach to solving a DE; one had to recognize a particular problem was a DE from a particular category and apply the special trick to solve it. The methods were often long or complex and they only solved some differential equations. Many differential equations have no trick at all to solve them. It turned out to be a tough class for me because I'd never before needed to learn math purely by rote.
Those who have taken integral calculus may be thinking that solving DEs sounds akin to integration, where one may have to apply substitutions, integration by parts, trigonometric substitutions, or partial fractions. Yes, calculus requires learning a bag of tricks too, but it's a small bag of simple tricks with wide applicability. So many of the functions one needs to integrate succumb to this small bag of tricks that it's almost fun to hone one's technique. A class on elementary differential equations is just depressing.
To be fair, differential equations are important. Physical phenomena are often best described by differential equations. Fortunately, programs like Mathematica can be used to tackle real world differential equations one way or another (perhaps with numerical methods) to obtain solutions.
I was fortunate to have my Probability course (sadly, not my differential equations course) taught by Gian-Carlo Rota.
Math was all easy peasy for me until I took a course on differential equations. For the first time I couldn't just visualize the problem, and due to a lack of discipline on my part I dropped out.
I've been coding for years and have been able to fake it with my limited math education but would love to have the time to learn more for the sake of understanding.
Unfortunately it seems this way with a lot of higher level math and it's not really unique to differential equations. The difference is, unlike calculus in general, in diffeq you have actual rote formulas to solve most of the known solvable cases.
I found the classes to be rote. The derivations are truly non-trivial. The book Ordinary Differential Equations by Arnold goes into more detail. Basically, if we taught the reasons, we'd require everyone to take analysis and differential geometry to truly understand how they work. Given that the MAJORITY of students in diffeq are engineers and not math majors, 99.9% don't want to know and/or don't care about this detail. You see a similar occurrence in calculus, where you're basically told "don't think about it too hard" for your own safety. If you start wondering a little too hard about calculus you end up switching majors to math and taking two semesters of real analysis. It's also EXTREMELY common for engineering professors to teach differential equations rather than math professors. This further waters down the rigor because (obviously) an engineer will not know/care about the rigor. Part of the reason I've pursued a math degree is because there was so much handwaving in engineering/computer science that it became just an extremely annoying grab bag of math tricks, and I wasn't satisfied.
To me we have too many inter-dependent classes to teach each class with full rigor. As a result you end up with a collection of half-understandings for most of your undergraduate career and only if you take a math major itself (or a minor in math) will you actually unlock the other half. A better path through math might be basic algebra I, II-> geometry -> trig -> abstract algebra I+II -> analytic geometry -> calculus I, II, III -> real analysis I+II -> differential equations I+II, but this would basically make every degree a math degree. What you experienced is the compromise.
And why it took a long time for backpropagation to be introduced into machine learning...
Backpropagation is (almost) just a fancy word for differentiation: taking the derivative of the output error, measured against your training data, with respect to the weights.
As someone who's starting to learn a bit about machine learning, it feels like the whole field is full of fancy terms like this that seem to mostly map to simpler or more familiar ones. "linear regression" instead of fitting a line, "hyperparameter" instead of user-provided argument. Half the battle seems to be building this mental translation map.
You are looking at it from a programmer standpoint rather than a mathematical standpoint.
Linear regression isn't just fitting a line; it's a statistical technique for fitting a line of best fit. Hyperparameters are a Bayesian term for parameters outside the system under test, or "algorithm". "User input" really misses the Bayesian aspect.
These terms actually have meaning, so I'd be careful about ascribing simpler definitions. The underlying meaning is important to the reason they work. If you don't have a really strong background in probability theory and statistics, trying to dig into machine learning will take work. I'd recommend taking an MITx course or picking up a textbook on probability so the terminology feels more natural.
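To make the "statistical technique" point concrete, the ordinary-least-squares line has a small closed form; a minimal stdlib-only sketch (`ols_fit` is a hypothetical helper name):

```python
def ols_fit(xs, ys):
    """Ordinary least squares for y = a*x + b: minimize squared residuals."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope is cov(x, y) / var(x); the intercept puts the line
    # through the point of means (mean_x, mean_y).
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b
```

The "best fit" is what the covariance/variance ratio is doing, and the statistical framing (residuals, estimators, assumptions about noise) is exactly what the closed form alone doesn't show you.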
A user-provided argument could also be an input parameter or a regular function parameter altogether.
Yes, hyperparameters are often set by the user of a model, but more specifically they are parameters that exist separately from the data put into a model (input parameters) or the structure inside of neural networks (hidden parameters). Hyper- meaning above, helps conceptualize these parameters as existing outside the model.
Yes, backpropagation isn't the chain rule itself, but just an efficient way to calculate the chain rule. (In this respect there are some connections to dynamic programming, where you find the most efficient order of recursive computations to arrive at the solution).
I think of it as: computing the chain rule in the order such that we never need to compute Jacobians explicitly; only vector-Jacobian products.
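That ordering can be sketched with a toy scalar reverse-mode autodiff class (an illustration only, not any library's API; `Var` is a hypothetical name):

```python
class Var:
    """A scalar that records how it was computed, for reverse-mode AD."""

    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents  # pairs of (parent Var, local partial)
        self.grad = 0.0

    def __add__(self, other):
        return Var(self.value + other.value, ((self, 1.0), (other, 1.0)))

    def __mul__(self, other):
        return Var(self.value * other.value,
                   ((self, other.value), (other, self.value)))

    def backward(self):
        # Topologically order the graph so each node's gradient is fully
        # accumulated before it is pushed to its parents. That ordering is
        # the whole trick: no Jacobians, only scalar chain-rule products.
        order, seen = [], set()

        def visit(node):
            if id(node) not in seen:
                seen.add(id(node))
                for parent, _ in node.parents:
                    visit(parent)
                order.append(node)

        visit(self)
        self.grad = 1.0
        for node in reversed(order):
            for parent, local in node.parents:
                parent.grad += local * node.grad

x, y = Var(3.0), Var(4.0)
z = x * y + x          # z = 15; dz/dx = y + 1 = 5, dz/dy = x = 3
z.backward()
```

One backward sweep gives every partial derivative at once, which is why the same idea scales from this toy to networks with millions of weights.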
I also didn't totally grasp its significance until implementing neural networks from matrix/array operations in NumPy. I hope all deep learning courses include this exercise.
Yes, they are not the same. The chain rule is what solves the one non-trivial problem with backpropagation. Besides that, it's just the quite obvious idea of changing the weights in proportion to how impactful they are on the error.
Is that why it took long? I was under the impression it was because of vanishing gradients in backprop once you stack a huge number of layers (the "deep" in deep neural networks).
The reverse mode has famously been re-discovered (or re-applied) many times, for example as backpropagation in ML, and as AAD in finance (to compute "Greeks", ie partial derivatives of the value of a product wrt many inputs).
I'm temporarily leaving my Telco/EE degree due to this. I have passed and done well in all the subjects except those "memory-heavy" math ones, and that's what I have left.
We have to memorize a lot of information without any explanation of why it's done that way (due to the lack of time in those subjects), and we are also encouraged to study how previous years' exams were written more than the content itself. This is one of the big reasons only 10% to 15% (IIRC) of the enrolled students pass those exams every year.
That prospect, knowing that I have to do a task that is time-consuming, pretty hard, artificial, and useless for the rest of my academic life, my work life, or my life in general, is what made me leave this year. I don't have the mental health left for such a big undertaking.
PS: Sorry for the rant. I'm having too much time at home due to COVID and maybe wrote too much.
I am middle aged and completed my EE degree when I was 20, but it was 90% theory with very little practical use (mostly useful if you were to continue climbing up the education chain). Completing the degree made me despise working with electronics, a topic I had deeply loved and had spent my teenage years learning for myself. Most courses were rote learning, and I was very good at passing exams, but it was two years before I realised how pointless the majority of the “knowledge” was, and then I forced myself to finish the degree (sunk cost), which I now regard as one of the few true mistakes of my life (wasted years, for valueless academic “knowledge”). The degree got me a software job, so there is that, but I am sure I would have ended up in software anyway (early love of computers).
I had the same experience but with a final exam that was way too long and covered all of the types of diff. equations given to us throughout the semester (expected to just memorize everything). Result was that the average score was ~29%, only reason anyone passed the class was that it had a curve. By far the worst university level math class I had, it has been much easier to learn to solve them depending on need with physics.
The required memorization made things especially difficult for me because I tend to work off intuition rather than memorization. I also usually can't name theorems despite knowing them from practice (this also used to be a huge pain for exams where solutions were unreasonably only considered correct if you named everything used).
Wow, I could have written that. The prof teaching my diff eqs course was a nice enough guy, and I think he tried to get the engineers interested by making the optimization problems as applicable to real life as possible, but it was dull and rote, and I don't remember any of it 15 years later. I feel like I could still pass a statics final, though, so I don't think this is my problem per se.
Differential equations show up all the time. Your professor could have provided concrete examples -- maybe they were more interested in their own research work than their teaching duties.
In high school, I looked into analytically calculating a ball's maximal trajectory length (or something like that) and was told it required solving differential equations and would be taught in college.
This is exactly my experience with the differential equation lessons in my maths classes in Years 1 and 2 of undergraduate chemical engineering. The way we were told to just follow the instructions, without any critical thinking at all about what we were doing, made me so unmotivated that I sort of gave up learning differential equations. I was lucky this was during lockdown, so I was assessed by online tests and was able to get through it, but my god was the teaching so, so unengaging.
Very similar experience to mine, except I failed to memorize the stuff and had to ask the prof how I could improve my score; he told me I should do my best and he would fix the rest. He was kind of a grandpa figure.
They removed the human element from the content. They've focused on the outcomes, the resulting inventions of the scientists and mathematicians. They only teach how to use the techniques, not how they were made.
Paving the way (or building a wall) such that few can understand how people came up with that stuff. This is intended. This literally constructs knowledge as power.
The ways of thinking used to come up with the techniques are hidden, restricted. The academics who know the whole story (who know the ending -- which is what is taught, as well as how mathematicians of old came up with such ideas) hold this kind of power.
This gets even more interesting when the academics who know the histories cannot really use the techniques. Then the only people who knew both are historical figures (who get bathed in myth).
I cannot forgive them for this, given that they are still actively doing it. E.g., finding out how they make shredded wheat cereal is not possible [1], and this must be technology from the early 20th or late 19th century... anything more recent is just hopeless.
How to make shredded wheat has been publicly known since at least 1895. [0]. How to make it efficiently at large scale is a trade secret that the company invested in and has a right to protect. None of this is related at all to the teaching of differential equations.
Again, in my own very stretchy way of thinking (which involves big leaps in reasoning): you're saying that a company has a right to protect its secrets, but I'm hearing something comparable to (e.g.) "colonialist superpowers have the right to enslave people from Africa". I suppose I may be tuning into an ethical framework from the future when I take 'offense' at the "rightful" actions of companies to keep knowledge bound and locked.
The relation is ideological and cultural (in the sense of being close to the intention), not direct, causal, or material (in the sense of relating to the actual implementation).
You can motivate the methods somewhat - if that weren't the case, no one could have thought of them. I can't usefully explain without an example, so I apologize if the math that follows bores anyone.
One of the standard methods is "integrating factors for first-order linear equations". You are told that, faced with an equation

y' + p(x) y = q(x)

you should multiply both sides by e^(the integral of p(x)).
For example, you might have
y' + (2/x) y = x.
Then you multiply by e^(integral of 2/x), which is x^2.
[Sometimes I wish Hacker News had TeX available.] If that's all you tell people, it looks like some random abracadabra, and it's no wonder people feel they just don't get it. So you might try to explain this way:
"The equation has a derivative in it. To undo a derivative, you need to integrate. But if you integrate as-is, you have no idea how to integrate y' + (2/x) y."
"Well, you know that the integral of df (the derivative of f) would be just f. So if you could make the left side look like the derivative of something, then you could just integrate both sides."
At this point, you scratch your head and think: "What could I do to make the left side be the derivative of something?" This kind of thought is impressionistic - you have to think in a vague way of things the left side "is like". Daydreaming for a while, you might realize it's a sum of two terms, so you think: If this is a derivative and it's the sum of two terms, what derivative rule gives a sum of two terms? And you might think of the Product Rule.
But the given thing is not the derivative of a product as is. What to do? So continuing this line of thought, you might think - maybe I can multiply it by something to make it the derivative of a product. Once again, you have to search through your experience with derivatives and maybe mess around on scratch paper. Finally, you realize x^2 works - multiplying by x^2 makes the equation
x^2 y' + 2 x y = x^3.
The left side is d(x^2 y), so you can integrate both sides and get x^2 y = (1/4) x^4 + c.
The final step is to think whether you can generalize what you did with "p(x)" instead of "2/x". After some additional messing around, you come up with the integrating factor I gave at the start.
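As a sanity check on the concrete example, here's a quick numerical comparison (a sketch in Python; `rhs` is a hypothetical helper name). Rearranged for y', the equation is y' = x - (2/x) y, and choosing y(1) = 1/4 makes c = 0, so the analytic solution is y = x^2/4:

```python
def rhs(x, y):
    # The example equation rearranged for y': y' = x - (2/x) * y
    return x - (2.0 / x) * y

# Forward Euler from x = 1 to x = 2, starting from y(1) = 0.25
# (the particular solution with c = 0, i.e. y = x**2 / 4).
n = 10000
h = 1.0 / n
x, y = 1.0, 0.25
for i in range(n):
    y += h * rhs(x, y)
    x = 1.0 + (i + 1) * h
# Analytic value at x = 2 is 2**2 / 4 = 1.0; Euler lands very close.
```

The march of small steps agrees with the integrating-factor answer, which is reassuring in both directions: the trick is right, and the crude numerical method converges to it.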
I have no idea who discovered this method, or what their thought process was (if they even explained it at the time). This was about the extent of the motivation I got when I was taught this stuff in high school/college. I'd tell students this sort of thing when I taught differential equations. But I don't know what other people do in teaching, and I'm not sure this helps. For people who feel their differential equations courses were baffling/unmotivated, is this the kind of explanation you want? Or do you want something completely different, like applications?
At some point, explanation ends. Can a painter say why he put a daub of paint of that color in that place in a painting, or can a writer say why he had a character do this or say that? There are several points in the motivation above where all I can say is "you have to sit there and think and mess around", even after paragraphs of writing. I'm not sure how to do better.
> numbers (and other sorts of mathematics) do not pre-exist reality. Numbers are abstractions of regularities we see in nature.
This is an open and ancient question. I don't purport to have the answer. I will say, the case for mathematics pre-existing is stronger than what you refute here.
You're right, of course, that this is an open question. But the posted article saunters about the fields of an incredibly deep question by glibly asserting idealism about mathematical structures. It is an awfully weak foundation upon which to build an answer to such a fundamental question.
It is _at least_ plausible that numbers supervene upon existence and not the other way around, which makes the entire exercise in the article seem suspect in its presentation, at the very least.
Would you _really_ say that the case for mathematical idealism is that strong? The PhilPapers survey seems to suggest philosophers are approximately evenly split on this question (idealists at 39%, nominalists at 38%).
There is an entire world here beyond registering for a signal that the comment seems unaware of. Even the simplest of preliminaries: registering for a signal is arguably non-trivial and incorrectly specified in many places since sigaction() supersedes signal().
> it's not rocket science to handle it in a sane fashion even in a multi-threaded application. Modern languages make this trivial. The author makes it sound like some dark art
Which language? I'll specify one so we can begin the process of picking each apart. Python? There is a sibling thread indicating Python issues. I don't know what the actual internal status is with Python signal handling, but I am guessing I'd find the interpreter doesn't handle it correctly if I spent any time digging. Do you mean apps implemented in Python? They will almost certainly not be internally data-consistent. Exposing a signal handling wrapper means very little, particularly when they frequently do this by ignoring all of the bad implications. I just checked Python's docs, and not surprisingly, Python guarantees you'll be stuck in tight loops: https://docs.python.org/3/library/signal.html That's just one gotcha of many that they probably aren't treating. This dialogue is going to play out the same way regardless of which language you choose.
Do you mean Postgres? I haven't used it recently but the last comment I read on HN seemed to indicate you needed to kill it in order to stop ongoing queries in at least some situations. If by a stroke of luck it does support SIGINT recovery (which would be great), what about the hundreds of other db applications that have appeared recently? You can't just call the signal handler wrapper and declare victory.
I've done plenty of signal handling in Python and it's extremely straightforward. Like other languages, the runtime takes care of safely propagating information from the signal handler to other execution contexts, which requires being careful in a language like C (it's not hard, but you can't be naive). I wouldn't be surprised if there were bugs in Python; it's a mess generally and I'm not a fan.
Postgres queries run as subprocesses. You can send them any signals you want. Postgres tries very hard to be durable, and it handles signals carefully but often to the dismay of the operator who can't force it to stop without SIGKILL.
> registering for a signal is arguably non-trivial and incorrectly specified in many places since sigaction() supersedes signal().
This isn't a good argument; no one uses signal(2). I'm not aware of that ever being recommended in recent history, and even the docs on my system scream "never use this" quite clearly.
Look, if you're not going to read the docs, signal handling will be the least of your concern. Signal behavior is extremely well documented.
It is true this improves the bad path. It ignores desired happy path cases: downstream processes, custom debugging, graceful shutdown, preserved workspaces, and so on.
The way exceptions are handled as a result of siglongjmp'ing out of a signal handler is currently platform-inconsistent and one of the many dark areas I alluded to. It isn't even consistent on Linux between compilers.
This conversation is silly. You are snapshotting the transformative window. If you snapshotted the internet in the 70s or 80s you'd get a much more mundane picture.
I don't know if I think cryptocurrency will be as transformative as the internet or the mobile phone, but I think it will be within a level or two of that scale. It makes good on its promise year after year. NFTs just became popular in the last few years, and that is only one example of many. Are you going to bet against NFTs in the long run? I certainly wouldn't. We don't even know all the technologies crypto is going to enable yet.
GC researchers insist on conflating GC with all of automatic memory management. The public doesn't do this and neither does the article.
> Secondly, you know what's cheaper... Not doing anything at all.
These techniques are on the level of resetting a stack pointer or calling `sbrk()`. Incorporating them doesn't produce more-advanced GC schemes; it just means you neglected to consider similar allowances for RC.
The line of contention is at traversing the object graph and pausing threads.
Of course I have considered similar allowances for RC (this has nothing to do with stack allocation, by the way; I'm hoping this is not a misunderstanding on your part). I referenced RC Immix throughout the post. It is extremely clear that the author of the article did not consider such allowances, because they do not see that there is a problem. Even if the author had done so, there is a big difference between saying "whatever, I could do that same thing for RC!" and actually doing it: a hugely nontrivial one that has not been fully bridged until quite recently. These kinds of techniques are still not used in production languages with reference-counting garbage collection, including Swift.
Your comment is radically naive. You can't literally believe what a business writes in its PR spin. The only thing you can take away from this is that Visa intends to tip the balance towards merchants and away from consumers when it comes to chargebacks. The problem with this is chargebacks just barely work as is.
Punishing honest consumers is not the right way to go about this.
Your approach will not work. Either continue attempting proofs or give up. It's fine to read the answers after you've tried.
> I've heard many many times that you can only learn math by doing it, which is certainly true
yes
> and is akin to saying that you can only learn a language by...
no. this is you evading the main point.
Taking a graded class with homework can help. So can finding an elementary book on a subject that interests you (topology, combinatorics, algebra, ...). Linear algebra is dry; that may be your issue.
I don't think that's fair. Basic proofs are fairly rote, but if you don't know where to start it can be challenging to replicate them. You sort of begin by learning to translate definitions into algebra, and only branching out from there. Until you get a sense for which tools to reach for when, you could bang your head against that wall for a while without making any real progress (and pick up some terrible habits in the process).
I don't know where this method of argumentation came from, but it's obnoxious and I'm over it. There are even soyjaks for this. Either you don't know and you're being lazy (I don't think this is the case), or you do know and you've generated an asymmetric work request for OP: one you could have answered yourself, and one that, left unanswered by him, casts doubt on his argument. If you have a case to make, make it, instead of this nonsense DoS attack. This is not a defense of OP's position.
Your outrage is unwarranted. The proposed changes are:
1) Limiting PFOF because it creates possible conflicts of interest.
2) Limiting "gamification" of trading via engagement prompts.
3) Adding sub-penny prices to exchanges to harmonize them with market makers. This is to encourage more orders to be sent to exchanges instead of market makers.
How are any of these things capitulating to institutional investors?