This reminds me of my Ph.D. crisis (which I'm sure many former grad students can relate to).
I was in my 6th year. All my friends had graduated, and my stipend had run out. I was 2 weeks away from submission and discovered that one of my assumptions was wrong, which potentially distorted/invalidated all my studies -- to fix these studies would have potentially delayed submission for months. It was a very subtle assumption violation (and it wasn't even that wrong) and my committee probably wouldn't even have noticed. I was tempted to sweep it under the carpet and not let it keep me from graduating.
But I knew it was wrong. I felt that if I sacrificed my integrity then, the moral failure would mark me for life. No one would know -- but I would know. So I decided to fix the issue, re-do the studies and live with the reality that I would have to delay my defense.
Turns out when you're desperate -- and many grad students can attest to this -- a resourcefulness that you never thought you had kicks in ("where were you during all my years of grad school?"). I don't remember how, but I somehow managed to wrangle new studies out in 3 days (which would have previously taken me months). I made the deadline in the end.
The lesson I learned was that committing to doing the right thing has its costs, but in some cases it also forces one to explore attacks never previously considered. Asking on MathOverflow is one such attack.
There's another (slightly kinder) version of this:
When people go to grad school and then, during their studies, get married and have a kid -- suddenly their productivity goes up dramatically and their output per unit time increases severalfold. What happened?
It turns out that when you have self-motivating reasons to get out (and a target on your head from your spouse to earn some money, for God's sake), you find ways to focus on what's important and drop the rest.
No more idling away for hours on silly ideas that don't get you closer to handing in your thesis. No more trying random libraries that get your code to run 2% faster. No more goofing around after 6pm with other students just because you have the time -- you have to get home and be a breadwinner for your family. You have to get shit done.
You start to ask, "even if I don't know exactly what the thesis will say, how should it be organized and what kinds of conclusions will make up the writing? And what experiments do I need to fill in those charts/paragraphs, and no more?" What's the minimum I need to do to get out of here? Not, "What amazing interesting thing could I explore?"
Limits and constraints sometimes free the mind dramatically. The side effect is that maybe you don't get to explore ideas that go nowhere, but that's a discussion about the purpose of the PhD, and a topic for another time I guess.
(And if you're thinking, well, I don't have a kid, so what's the rush? Someday you might have a spouse and a kid, and every extra day you spend in grad school is a day -- and the $$ that go with it -- taken from your future self and family. Work to free your future self... now, while you have the time.)
I might offer a corollary as a practice to adopt (in the absence of the above possibility) -- learn to set up your research to make visible, incremental gains on a known path every day, rather than hoping (hoping!) for some amazing breakthrough at the end.
Amazing breakthroughs are high-risk, and make it highly likely you'll have a crisis when the breakthrough doesn't happen.
And what I mean is that, even if you don't know what the answer is going to be at the end of your research, you must think about, or have an idea about, the format of what that amazing answer is going to be. Write the outline of your thesis and "ghost out" what the major charts will be. Write the intro sentences of each chapter -- what are they? (and I don't just mean the boring review of the field part, but your findings part)
You should know what major finding, plot, or table your research is going to output. What are the columns and rows of that table, or axes of that plot? How many data points are required? How many of them can already be guessed? Where is the surprise going to be? What is the conclusion going to be?
For most graduate research, the finding is not an amazing field-changing big bang. Few grad students are that fortunate; at the very least, the bulk of the work won't be of that sort. You should be able to predict what the answer is going to look like from past work, and the error bars.
Draw out the answer you're aiming for, now. If you can't even articulate what the answer will look like, you may be in for a bad time, so work on fixing that. It will also push you and your advisor to be specific about what the output of your thesis will be.
> You should be able to predict what the answer is going to look like from past work, and the error bars.
If you basically know the answer in advance, chances are you are not doing very interesting research.
The worst work I have done has been of this sort. The best has had me completely change my view of (mathematical) reality multiple times during the process.
In empirical fields as well, I would assume the purpose of doing experiments is that you don't already know what the result will be. I accept that this may be an idealised or naive view...
I'm not talking about the exact result, right? Just that someone engaging in research should know what the format of the plot/table/output should look like, how much work is needed to populate that table, and what kind of conclusions will come out of it.
If you (the general "you") are in industry breakthrough territory, you're an advanced student and my advice isn't for you. Otherwise, I think it's a good practice.
Fair. I think considering what possible conclusions could come is certainly important. I worried about your original post suggesting actually writing the conclusion in advance.
One of my professors' recommendations was to write the actual paper that you want to write -- introduction, related works, methods, discussion, conclusion -- leaving the figures, tables, etc. blank, before you actually start doing the experiments.
Many advisors don't push their students to do this, and they should. Or they haven't had enough practice to know to do this. Or they're not taking their responsibility seriously enough.
An effective PhD advisor/thesis isn't wandering the woods to find something. It's a guided coaching exercise, with an outcome in mind. If your advisor doesn't know this, maybe time to find a new advisor...
> An effective PhD advisor/thesis isn't wandering the woods to find something. It's a guided coaching exercise, with an outcome in mind. If your advisor doesn't know this, maybe time to find a new advisor...
You don't know me, but I really needed to hear this. I left grad school essentially because of a dearth of coaching. Thank you for framing this so succinctly.
There are multiple slightly different versions of the seminar (another is linked below that one), but unfortunately they all came out after I'd finished my PhD, and I didn't hear that advice from anyone else.
The process I arrived at after losing time on failed projects was basically "fail fast": find the simplest quickest way to demonstrate that your idea won't work, and do that. Then find the next simplest, and so on, until either it works or you've proven it doesn't and moved on.
Introduction, related works, methods, OK, but writing the conclusion before doing experiments sounds like an integrity issue (assuming you re-write the conclusion after the experiments -- else I'd label it fraud). What is the idea behind it?
As someone who was encouraged to do something similar, it was akin to creating a template manuscript for the project at hand: sketch out the experiments you plan to do, the likely results, and illustrative figures. This is meant as an exercise very early on in the project discussion phase, before any experiments have been performed, and while you're still figuring out the papers to read.
No. Obviously you rewrite the whole paper. Academics rewrite their papers multiple times before submitting them. The whole thing is a draft of a draft. The idea is to make your ideas very very concrete by writing them down as a paper.
Really annoys me when the word fraud is thrown around so freely, so I'd appreciate it if you don't.
It is not mentioned and was not obvious to me. Still, I'd postpone writing the conclusion to after you actually have the data. To do otherwise would be steering the experiments to get the expected results. I also reckon this depends on your field, how long the experiments take and your confidence in the results.
> Really annoys me when the word fraud is thrown around so freely, so I'd appreciate it if you don't.
I agree, 'academic dishonesty' is a more appropriate term here.
You typically have a finite set of overall results you expect to get, usually something like positive or negative. It helps to write out what you would conclude when you get either results. Think of this like a process that helps make your ideas clearer. And makes writing your final paper a lot easier.
This is amazing advice I hadn’t heard before, it rings very true.
I remember after my first paper was finished and submitted -- wow, it seemed easy to bang out papers after that; all you needed was four to five figures :)
My doctoral advisor had similar advice, though he suggested adding in aspirational figures as well. They can help guide your initial readings and your experiments.
> For most graduate research, the finding is not an amazing field-changing big bang. Few grad students are that fortunate. You should be able to predict what the answer is going to look like from past work, and the error bars.
Does it ever work that way, even for field-changing big-bang ideas? Every human achievement was built on the back of previous achievements and work, either by you or by someone else.
It's easy to forget that grad school is the start of a career rather than the summit, so the work just has to be publishable, not earth-shattering.
Most Nobel/Turing/Fields-level work doesn't happen while the researcher is still in graduate school, because a) it usually benefits from additional experience, knowledge, and resources, and b) it can sometimes take years to complete.
It also might not be obvious to a dissertation committee (or anyone else) how field-changing certain work actually is until many years have passed, and they might dismiss it early on. Consider Tim Berners-Lee, who was actually a physicist but received the Turing Award because the web (and HTTP/HTML) turned out to have much greater impact than predecessors like FTP, NNTP, PLATO, HyperCard, Director, Xanadu, Intermedia, Gopher, AOL, etc.
Another reason that more mundane research tends to work better is that groundbreaking work is often hard to publish (and get support/funding for), particularly if it isn't completely solid and seems to contradict or supersede established theory or practice.
However, if you are choosing between multiple projects that all seem doable within the allotted time frame, picking the one with higher positive impact on the field (and/or elsewhere) may be a good approach.
Hmm, this is different from my experience when my coworkers had kids. Usually their productivity dropped due to being sleep deprived and not having the time on nights and weekends to come up to speed on any new technology.
Not that I minded; just saying that time constraints and sleep deprivation seemed to have the effect you'd expect from them.
> Usually their productivity dropped due to being sleep deprived
High levels of sleep deprivation are only common for the first few months.
> not having the time on nights and weekends to come up to speed on any new technology.
This tends to be a problem on teams that don’t properly evaluate costs of new technology and so churn like crazy for very minor productivity increases.
It wasn't about coworkers... when people are employed, they often get paid the same no matter what their productivity is, so the motivations are different.
This rings so true to me. I got engaged during the last 6 months of my PhD student tenure, and lo and behold, I defended my thesis in the same month we got married. In the same vein as above, the added perspective and fire under me resulted in a number of novel solutions to problems that probably would have been major issues in the past, including a full change in direction of my research conclusions after realizing a substantial issue in the practical application of the technology.
I credit my now wife with the (passive) motivation to shave at least 6-12 months off my time in grad school.
This happened to me. I typed the last paragraph of my thesis shortly after my wife went into labour. It was very satisfying to hear it printing while I was filling the birthing pool in our living room!
This is so spot on. I work for a biotech start-up that is obviously very research intensive. It's amazing to see the CEO, who has an undergrad in bio and an MBA, corral the PhD researchers and get them on track.
He understands about 40% as much about the biology, yet he's very good at getting people to drop/trim things that are not going to be worthwhile. He often adds tons of constraints that allow our researchers to home in on what's going to drive the company forward instead of languishing in a million possibilities.
Before I got engaged to my wife (who was a year ahead of me in college), I had plans to do some kind of individualized honors project across 2 or 3 different majors (physics / math / CS).
I very quickly found myself reprioritizing how quickly I could graduate and find a job (since my wife's field didn't have many lucrative career prospects).
> Turns out when you're desperate -- and many grad students can attest to this -- a resourcefulness that you never thought you had kicks in
This is getting millions of students through high school, college, and university every year.
Everybody has at least one thing they put off for way too long and then developed almost superhuman abilities to get done. Doesn't guarantee a good grade, though.
Good on you for doing the right thing. We have far too little thinking like this in the world.
It reminds me of a story I heard from a businessman who was walking with an associate down the street. He stopped at one of those old metal newspaper dispenser machines to buy a paper. For those who don't know, the way they work is that once you have put in your quarter, the machine opens up and all the papers are just stacked in there.
His associate asked him to grab a newspaper for him. So he pulled out one paper, closed the machine and put in another quarter and then pulled out another. The associate asked, "Why didn't you just grab a second paper for free? It's just a quarter." The businessman's response was, "If I'm willing to sacrifice my integrity for a quarter, how much easier will it be to sacrifice it when something serious is on the line?"
> 10 “He who is faithful in a very little is faithful also in much; and he who is dishonest in a very little is dishonest also in much.
> 11 If then you have not been faithful in the unrighteous mammon, who will entrust to you the true riches?
It's right there in the gospels, plain as day, and yet I agree with you that there's far too little thinking like this in the world, even among people who claim to be "Christians".
My laptop broke down just hours before I had to submit the thesis. It was the day of the deadline. I did have printed copies of the thesis (I had to submit printed copies), but I discovered bad typos on the front page. And my laptop was down, with the only digital version on it. In the end, I corrected the typos using a photocopier. I overlaid the typos with corrections on cut-out pieces of paper, and photocopied. It was quite a struggle, but I made it!
The “point” of graduate school should be taking long walks with friends from completely different departments, talking about abstract ideas that aren’t related to your work at all. Maybe later in the shower you realize some subtle connection to your work that gives you an idea for an innovation, but maybe not. The point of graduate school is play. The switch from “play” to “product” is difficult but necessary. Both phases are important to the budding PhD. You need to demonstrate the ability to think creatively AND the ability to process that creativity into a deliverable. That’s the value of a PhD. If you’re interested in a focused grind to a specified finish line, skip graduate school.
I agree with you and I'm surprised I haven't seen anyone else make this point. Graduate school, and to a lesser extent post doc, is when you're supposed to dilly dally. Of course at the time it doesn't quite feel like that, but you are paid a meagre wage for a reason--to survive, but to not be comfortable. No extra money to go party, travel, etc. The devotion to the craft is supposed to be an almost pious experience. Even though it's simple, it is an incredible luxury most have never and will never get to experience (only a small percentage of the world obtains or even attends grad school for a PhD). With an eye on "getting out", you miss out on part of the beauty. I had many contemporaries through my studies like that. They were always stressed out and pushing to get home at 5pm, unlike us slackers who hung around and discussed tangential things but always with an eye toward connecting the dots. The ones who really excelled post grad school were the ones with vision and drive, not just drive alone, and the former always hung around shooting the shit because they appreciated the value derived from informal, idle chat.
One of my colleagues was in grind mode and switched to a 6 day life week by living 28 hour days so that he could be on certain machines all night when normal humans were sleeping.
Yeah he burned out quickly and left with a masters.
I wish I knew the answer to this. When everything is totally screwed up and on fire and I need to get something done by 5:00 today that would normally take a week and a half, by God, every millimeter of fat that can be trimmed becomes apparent, and I can usually scrape together what's needed by then. Then the deadline is over, I get a similar project the next day, and it's back to taking 5 days to finish.
There are real quality differences between the two scenarios, but they aren't nearly as bad as what you'd think they are. The reality is that I can just work a lot more efficiently when stressed. If I could do that all the time I'd be able to work 3 hours a day and remain as productive.
It's not typically sustainable to operate at such high levels of stress, but the stress brings with it a myriad of hormones and behavioural adaptations that, for some people, aid in sustaining focus and effort.
I wrote a special chapter where I discussed my results with myself, and ended up suggesting that someone take a closer look if interested.
The jury was initially 6 people; 1 refused to review a thesis where the candidate discusses with himself, because it breaks tradition. The others were quite happy, with one of them referring to my discussion and thanking me for the time she gained from it, which allowed her to spend more time with friends.
For some small issues it is sometimes better to mention them upfront and highlight how genius the rest of the thesis is.
I discovered that the proof of a fundamental proposition in my maths PhD thesis had a mistake in it two weeks before my viva, after I had submitted. Not sure if that was a better or worse time to discover it! After a rather stressful few days, I managed to re-prove it and took the correct proof along to the viva. The examiners' reaction was 'well, the result was obviously correct, so we weren't worried about it'!
The book “Gödel, Escher, Bach” was printed with a special process only available at one printer far away from where Hofstadter was living, and it was extremely time-consuming. At the last minute the whole thing had to be redone, so he had to get on a plane, print a few pages, then fly back to his job. This took several months. (He talks about it in the 20th-anniversary edition.)
> From OP's point of view this could be viewed as glass half-full rather than glass half-empty. Their dissertation results hold unequivocally on the sphere and might hold on the torus, though it is an open problem if they do. It is certainly legitimate to study what follows from a given conjecture being true. It could even be spun as a feature rather than a bug of the dissertation. If the results in fact fail on the torus then you know that the conjecture must be false. Potentially, it could open up a fruitful avenue of attack.
When Terence Tao writes stuff like this, I'm always very happy that I got to experience the Moore method for learning math (at UT Austin). A group of us would be dumped into a class with a common topic and we'd just have to prove things (topology, algebra, analysis) ... on the blackboard, in front of everyone. The best work we did was when something started going wrong and then we'd all start arguing about the proof, building counter-conjectures on the fly and riffing on the math. The worst work was when someone went and found a proof ahead of time and just showed the answer. There's so much to learning where the sharp bits of math are; proofs are the razor-thin path through the briar patch.
It was only later that I found out that history, the study of art & literature, and philosophy can all teach you the same thing. The important part is that you're interested in the topic.
> There's so much to learning where the sharp bits of math are; proofs are the razor-thin path through the briar patch.
As a student representative for my undergraduate mathematics course, I got really pissed off at lecturers for exactly this reason: they'd write out a perfect correct proof on the whiteboard, but wouldn't explain where it had come from or how people had arrived at the solution. We were left to figure that out on our own.
They then complained that students were rote-learning for exams, rather than coming to a full understanding of the material. I'm not sure what they were expecting, given that that's exactly how they were teaching it.
My discrete mathematics professor was like that. He would regurgitate a proof onto the whiteboard. Then he'd do it a few more times with proofs of other things.
He has an identical twin brother, who is also a math professor at the same college. The regular professor was out for a day, and his brother came in to teach the class. His teaching style was completely different. "Ok, we need to prove X. Where should we start?" and would sit on the table and look at us with an inquisitive look on his face. Then learning happened.
Everyone's mind was blown. Most people didn't realize it was a different person. Then on Thursday it was back to same-old same-old.
My math teacher used to have only a tiny piece of paper with the things he had to talk about during the lecture; since we had to prove everything we learned in this class, more than once he couldn't remember how to prove something and happily sent a student to the blackboard to think together about how to prove the proposition. I thought that was a great way of teaching maths.
If you're taking a class with proofs, you're being prepared for research. Doing your own research into, and reverse engineering, proofs is an important skill.
Proofs in math journals are given 'as-is' and you learn the intuition through social means and discussions.
I think general math courses would benefit from teaching that skill _at all_. You'd be hard-pressed to find people who took that lesson away from a course.
Just felt the same way about the portion after the semicolon; it's quite catchy as well! Please give a reference or else if I use this later I'll have to say it came from "some guy on hackernews".
I guess this is mine? I would hesitate to believe I made it up, but I don't remember it from anywhere. If you don't feel comfortable with that, then credit it to Michael Starbird, who was my first Moore-method professor 20-odd years ago.
This is what Putnam seminars are like, the whole class goes through problems together. A few of them are on Youtube, maybe more will show up as everything is remote now.
I suspect the reason OP's thesis worked out okay is because his intuition wrt the problem is correct, even if his formulation was a bit off. Very cool, sounds like a good mathematician to me
Stack Overflow and its cousin sites have many serendipities like this -- and I happily conjecture this happens more here than on Facebook or Twitter.
I think the reason is that, despite being a walled garden (i.e. proprietary), it still promises to open up the content and makes an effort to moderate and grow the community -- in other words, what they are really selling is not the SEO but the sweet spot between the "anyone posts anything" of an "ideal" internet, where no rentiers exist but no one can find anyone else, and the much more corporate hand of Facebook.
I am not sure reddit exists in this sweet spot either - mostly because there is just sooo much reddit.
A couple relevant bits of info about MathOverflow:
- The site is operated by Stack Overflow/Exchange, but is owned by MathOverflow, Inc., a non-profit corporation[0]. As such, it retains the right to exist independently of the Stack Overflow company -- to my knowledge, it is the only public Stack Exchange site for which this is true.
- Like all public Stack Exchange sites, authors retain ownership of their work, which is published under a CC-BY-SA license. Regular archives are uploaded to Archive.org and can be obtained there or via BitTorrent[1].
In short, not a walled garden, and not Stack Overflow's garden.
Re: MathOverflow Inc - I had not only not heard of this, but never even considered it was possible :-)
Yes, I think I am wrong to use the term walled garden, but it's hard to think of something else.
In a "platonic ideal" of the internet everyone would have their own internet connection, and a server and say post their own interesting queries and somehow others would find and answer them.
Perhaps search was assumed to solve it all then.
But the universe is much more "clumpy" than that, so people will gather around certain locations; in nature these are natural oases.
Perhaps we should drop the walled garden idea -- gardens, walled or otherwise, need tending and upkeep, and that passed beyond the ability of one or two people working in their spare time somewhere around 1991, on Usenet.
Tending a garden is a costly affair.
I think perhaps walled city might be a better term? It implies the "never leaving", which is what Facebook seems to aspire to.
Perhaps a better analogy is "chargeable car parking". :-)
This is yet another one of those situations where the use of a dying metaphor[0] hurts communication; you wall up a garden to protect what is inside from the harsh conditions outside: wind, cold, vermin... The implication is that the people in the garden are delicate flowers who would be destroyed by the conditions on The Greater Internet if they were to be exposed.
The antonym to the walled garden is the open garden or field, with hardy plants able to withstand and even thrive in the local atmosphere. They're still cultivated - weeded, fertilized - but there's no need to create a microclimate to just to accommodate them.
In this context, Facebook does make some effort toward walling off its gardens, but... As you note, Facebook's primary goal isn't protecting, it's dominating -- Facebook is just as happy to own major portions of the 'Net in pursuit of this goal, and more than a little reluctant to provide any real protection beyond what is absolutely necessary.
Beyond that... We all garden. From little personal websites and blogs, to big community gardens[1] like Wikipedia, Reddit, Stack Overflow, and even Hacker News. We plant, we harvest, we tend these plots, alone or together, but make little effort to isolate them from the larger world - indeed, we generally recognize that the strength of the Web is based on its interconnections, its inherent ability to draw together different sources of information.
Interesting - my understanding of the walled garden was the Omar Khayyam style of a luscious oasis walled off to allow only a few people to enjoy it whilst keeping most out (the implication that you had to pay to be one of the few).
Neither definition actually makes much sense when talking about incompatible protocols.
One of the major differences, when you think of concrete items like flowers and gardens, is that they are private goods. Public goods -- especially information, laws of physics, ideas, mathematics, etc. -- are hard to create but cost nothing to consume. Hence the walled garden is not about keeping most people from viewing (i.e. the Wall Street Journal model) but about keeping the noise away from the creation, so there is a point for the OP to post his question in a panic. A public good like F=ma is very hard to create, and what motivates someone to create one is a big question, especially if it is a networked public good. And some public goods are useful to one person or group but not to others. That is the fundamental problem here.
I guess the poster above is right that the analogy is not totally apt. Still, if one assumes the flower can be viewed a trillion times, but the question is creation, then walled garden is a good metaphor.
The first usage I remember of "walled garden" in an online sense was about AOL and their refusal to use common internet protocols -- you literally could not email from outside AOL and, if I remember correctly, could not view websites outside AOL. There was a wall around their garden. IIRC it became a common description in tech columns of papers.
I still find the walled garden analogy more useful.
If people are confusing their definition of a metaphor with mine (I don't think Orwell mentioned that, but maybe it's low down the list of stylistic errors), then that can cause problems. But I struggle to see how Facebook / Twitter "protects" creators from the harsh winds of the outside.
This is where the "dying metaphor" thing comes into play: AOL was a walled garden in the sense that I outlined - like so many other early online services (and BBSs), it had walls to protect its members. That was the original sense of the metaphor.
But... That sense is dying; it is a poor metaphor because few people actually build real-world walled gardens[0][1] for that purpose; the metaphor has no currency, and thus the meaning shifts. Now it is just as frequently used to describe any sort of system which restricts the flow of people OR of information, in or out. So while Instagram might be considered a "walled garden" in the original sense (no outbound links on posts), it may ALSO be considered a "walled garden" in the sense that it restricts inbound access for non-members, or even in the sense that it forces certain onerous licensing terms on contributors. In this manner, the metaphor becomes problematic, as what one intends to convey is not necessarily the meaning which is understood by others. If/when the metaphor dies entirely, becoming an idiomatic way of saying "not completely open", this problem disappears - no one will attempt to relate people to plants, or content to flowers.
With this in mind, I suspect your original intent was focused on the "garden" aspect: that the value MO provides comes from imposing a structure and certain expectations which facilitate productive interactions like this, with the "sweet spot" being that it remains open enough for the rest of us to benefit from the outcome of those interactions.
>In a "platonic ideal" of the internet everyone would have their own internet connection, and a server and say post their own interesting queries and somehow others would find and answer them.
That is how it used to be
I was late to the party, but I had my own website with a guestbook around 20 years ago (well hosted by AOL, but you would not know that through the url redirection)
I always say that the wonder of the Internet is collaborative Wikipedia, not the Facebook walled garden. The Stack Overflow network of sites is another of the great Internet wonders. Non-technical people cannot grasp how fantastic they are. Younger developers can't imagine a world without SO.
As a private company, they will probably someday lose their techno-utopian magic (as Google already has). It will be a very sad day.
Both reddit/SO and, say, classic forums each have their own method of content discovery (reddit/SO always prioritize new items; forums push you to long-running threads), but both have their blind spots. With reddit you can end up with a lot of duplicates because a subreddit's dashboard decays stuff pretty fast based on the frequency of posts, which makes long-running discussion impossible. With forums you can have your long-running discussions, but you sometimes have to wade through page after page to find those special nuggets of info.
I look at this as a fundamentally hard problem of information design. You have a series of posts to present to a reader. Which posts do you give primacy to?
If your reader already knows most of the context and has read the previous posts, then "re-posts" are bad, long-running threads are good, and they just want to see the latest updates.
If your reader is coming to the material cold, a "re-post" may be completely new to them, and posts that presume pages and pages of existing context are completely impenetrable. You want to lean towards fresh, short threads.
The challenge for designing a forum, then, is balancing the competing needs of those users. You can make some progress by tracking, on a per-user basis, which comments they've already seen. Reddit does that (maybe just for gold accounts?), where new comments are blue. That makes it pretty easy to skim through a comment thread and see just the new leaves.
But there's still the question of how to sort the threads themselves. It might be interesting if that sorting was also user-specific. Maybe deprioritize threads that you've seen but not interacted with, and prioritize "live" threads that you've participated in and are still changing.
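To make that concrete, here's a minimal sketch of what per-user, per-thread prioritization could look like (all names, weights, and decay rates here are hypothetical, just to illustrate the idea, not any real forum's ranking):

    from dataclasses import dataclass, field

    @dataclass
    class Thread:
        id: int
        last_activity: float              # unix timestamp of the newest post
        participants: set = field(default_factory=set)

    def score(thread, user_id, seen_ids, now):
        """Higher score = shown earlier. Weights are made up for illustration."""
        hours_old = (now - thread.last_activity) / 3600.0
        recency = 1.0 / (1.0 + hours_old)     # newer threads decay less
        if user_id in thread.participants:
            return 2.0 * recency              # "live" threads you're in float up
        if thread.id in seen_ids:
            return 0.5 * recency              # seen-but-ignored threads sink
        return recency                        # fresh threads rank by recency alone

    now = 1_700_000_000.0
    threads = [Thread(1, now - 3600, {42}), Thread(2, now - 300), Thread(3, now - 300)]
    ranked = sorted(threads, key=lambda t: score(t, 42, {3}, now), reverse=True)
    print([t.id for t in ranked])             # [1, 2, 3]

The interesting design choice is that the same thread gets a different score for each reader: the thread you're participating in outranks a fresh one, and the one you've seen but skipped sinks, which is roughly the behavior described above.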
PG recently tweeted something relevant to this thought [1]
"Twitter is a few people saying interesting things amidst a much larger number saying mean or mistaken things. So are books. But you don't suddenly get sentences from bad books in the middle of reading one of the good ones. Maybe this gets fixed in version 2 of social media. Maybe version 2 is halfway between the randomness of Twitter and the predictability of Substack."
StackExchange sites feel like sites that you can browse without them trying to get you addicted and trying to stop you leaving. They do have the sidebar which sometimes shows interesting content but it doesn’t feel optimised for addiction or stickiness - more genuine discovery.
I’m not sure what sort of advertising they do on other stack exchange sites if any, but sticking to a job board on SO makes it so much more pleasant to read than a social feed throwing random junk products at you every few minutes until you go away.
As far as notational clusterfucks go, crossing numbers (along with the three standard definitions of a ring) are one of the best-known ones to still be biting people on a regular basis. ("Positive" and "natural number" are sufficiently well-known that people are careful.) But imagine how it felt to do group theory back when "group" could mean any of "abstract group", "subgroup of GL(n)", "finite group", "monoid", "semigroup" and combinations thereof.
The simplest gotcha I know is: is f(x) = 1/x piece-wise continuous? This is calculus 1 level material, and yet authors disagree significantly on this point, sometimes without specifying it! Some say yes; others would require f to have finite left and right limits at every point. This mattered for a point of my thesis, and my advisor was very unhappy with me calling these functions piece-wise continuous.
I thought this was only an issue in K-12, as anyone in research math considers the domain and the target to be part of a function, and then the problem disappears: The function R \ {0} -> R, x |-> 1/x is not just piecewise continuous but continuous on-the-nose. The function R -> R, x |-> 1/x doesn't exist. The function R |-> some completion of R, x |-> 1/x is continuous or not depending on which completion you choose (the one with two infinities or the projective line).
But I do recall a similar confusion happening with "piecewise linear" (the question is whether the pieces have to fit together).
> I thought this was only an issue in K-12, as anyone in research math considers the domain and the target to be part of a function
Anyone in research math does consider the domain and target to be part of the function. Alas, that doesn't mean they'll actually write down which domain and target they have in mind.
No, you've missed the distinction. In all cases the domain and range are R (you can fill in a value at 0; it doesn't matter which). See the MathWorld page, which leaves the definition intentionally ambiguous:
"A function or curve is piecewise continuous if it is continuous on all but a finite number of points at which certain matching conditions are sometimes required."
I don't get this. If you require f to be piecewise continuous outside of a finite set of points and to satisfy left limit = right limit at each of these points, then you just have a continuous function. Why another word for it?
The left and right limits are required to exist (and be finite), but not necessarily to be equal to each other. So f(x) = 1/x and sin(1/x) are out, but x/abs(x) is not.
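For concreteness, here's one way to write down that stricter definition (my phrasing, not a canonical one):

    % f : [a,b] -> R is piecewise continuous (strict sense) if there are
    % finitely many points a = x_0 < x_1 < ... < x_n = b such that f is
    % continuous on each open piece (x_{i-1}, x_i), and at every x_i the
    % one-sided limits exist and are finite:
    \[
      \lim_{t \to x_i^-} f(t) \in \mathbb{R}
      \qquad\text{and}\qquad
      \lim_{t \to x_i^+} f(t) \in \mathbb{R}.
    \]

Under this reading, x/abs(x) passes at 0 (one-sided limits -1 and +1), while 1/x fails (infinite one-sided limits) and sin(1/x) fails (no one-sided limits at all), matching the examples above.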
I teach first-year university math, and it's not a good moment to discuss the subtle details of topology and definitions. So every time someone asks me that, I reply, "It is not continuous for all the real numbers."
It is continuous in the natural domain[1], that is (-∞,0)∪(0,+∞).
The problem is that first- or second-year university students will then try to use Bolzano's theorem / the intermediate value theorem to prove that it has a zero in the interval [-1, 1].
So I must answer NO, but the problem is that the domain is usually implicit in the question, so it causes a lot of confusion.
I think the definition of "piecewise differentiable" is clearer. I'm not sure what the "official" definition of "piecewise continuous" is. The definition in MathWorld looks a little fuzzy. I'd probably require non-horrible behavior at the borders of the glued intervals, as in:
* sign(x) -> yes
* 1/x -> no
* sin(1/x) -> no
One of the best-kept secrets in math is that the definitions are somewhat arbitrary.
[1] At least that is what we call it in Argentina; the names/definitions sometimes vary by country.
f doesn't even have a right or left limit at 0.
f is maybe (piece-wise) continuous over what pseudo-domain? R or R\{0}?
You could axiomatize that an infinite discontinuity is like an unbounded function as x->infinity, but then how would you avoid 1/x being regular continuous?
I think you are claiming that a set being incomplete "at the end" is different from a set having a hole in the middle -- that the question of continuity presumes connected sets. That's not standard, but it might be an appropriate assumption for the context of your paper.
Anyway, arguing over terminology is boring unless it raises conceptual issues. The point is to communicate, which has at least 2 stakeholders. Clarify your terms and move on.
I think you missed the point of the example, which is that people rarely clarify this because to the author their definition seems obviously correct. I made it 90% through a PhD without having considered that there could be more than one possible meaning for "piece-wise continuous".
The domain is R in this case. The less restrictive definition would be that f is piece-wise continuous if there is a discrete set of points ... < x_0 < x_1 < ... with f continuous on each interval (x_i, x_{i+1}). The alternative definition is that, plus requiring that the restriction of f to those intervals have limits at the endpoints. For piecewise smooth functions there's an even larger variety of possible meanings, yet it's often stated without clarification.
> The less restrictive definition would be that f is piece-wise continuous if there is a discrete set of points ... < x_0 < x_1 < ... with f continuous on each interval (x_i, x_{i+1}). The alternative definition is that, plus requiring that the restriction of f to those intervals have limits at the endpoints.
This sounds kind of surreal; putting them in shorter terms, we have two rival definitions for "piecewise continuous":
1. A function f is piecewise continuous if there exist one or more intervals over which f is continuous.
2. A function f is piecewise continuous if there exist one or more intervals over which f is (1) continuous, and (2) bounded.
I agree that it sounds obvious which of those is more appropriate as a definition of "piecewise continuous"...
(It also worries me that the definition you give requires the intervals to be adjacent, but doesn't require that more than one interval exist. A function that is continuous over (-2, -1) and also over (1, 2), but not anywhere else, would meet this definition, but you wouldn't be able to include both of those intervals of continuity in the set of endpoints.
I would prefer to either have two sets of endpoints, such that we end up saying f is continuous over (x_i, y_i), (x_{i+1}, y_{i+1}), etc. (if we want to allow for intervals of discontinuity), or to say that the intervals (-inf, x_0) and (x_n, +inf) also count (if we don't).
However, if we take that second approach, and we go with the definition of piecewise continuity that requires the function be bounded over every interval, we've just defined functions like f(x) = x as being not piecewise continuous despite the fact that they are continuous.)
I would generally want the intervals to cover the entire domain (or rather, for their closures to cover it). Half-infinite intervals (or all of R) would also be allowed, but I couldn't think of a good way to state that concisely. And I would only require the limits at the finite endpoints, so that boundedness is not a concern (piecewise continuity should be a local property anyway). Surprisingly complicated to specify fully!
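One attempt at a full statement under those assumptions (my own wording, not a standard reference):

    % f : R -> R is piecewise continuous if there is a discrete set
    % D = { ... < x_0 < x_1 < ... } (possibly finite or empty) such that
    %   (1) f is continuous on every connected component of R \ D, and
    %   (2) at each x_i in D both one-sided limits
    \[
      \lim_{t \to x_i^-} f(t)
      \qquad\text{and}\qquad
      \lim_{t \to x_i^+} f(t)
    \]
    % exist and are finite.
    % Half-infinite components like (-inf, x_0) are allowed, and since limits
    % are demanded only at the finite points x_i, a function like f(x) = x
    % stays piecewise continuous despite being unbounded.
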
The idea is that what 1/x is is intuitively simple enough, so we should have some standard terminology for it.
And saying the left and right limits are -infinity or +infinity really isn't that weird, either. I'm pretty sure other metric spaces can be likewise extended and end up with algebraic laws similar to those of the "extended real numbers". Again, this isn't very profound, but it is good for education and efficient communication, and so should be pursued.
Finally, it's interesting that measure spaces with infinite measure are already a thing. I would like to see more connections between metric and measure spaces; e.g. we could have an n-point metric defined using the measure of the (n-1)-simplex. Just as regular metric spaces have a "triangle inequality", 3-point metric spaces should have a "tetrahedron inequality", and n-point metric spaces should have an "(n-1)-simplex inequality". Again, this is not profound, but it is good for communication, and connections between definitions help one compress everything for better mental storage.
The way I was taught it, back in the olden days "group" always referred to groups of permutations (and the operation was always composition), and it was Cayley who introduced the much more general and abstract notion of groups that we use now. He could do that because it's relatively trivial to prove that the two definitions are basically the same: every group is isomorphic to some group of permutations, according to Cayley's theorem: https://en.wikipedia.org/wiki/Cayley%27s_theorem
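For reference, the theorem itself is short to state (standard content, matching the linked article):

    % Cayley's theorem: every group G embeds in the symmetric group Sym(G)
    % via the left-regular representation
    \[
      \varphi : G \to \mathrm{Sym}(G), \qquad \varphi(g)(x) = gx,
    \]
    % an injective homomorphism, so G is isomorphic to the subgroup
    % \varphi(G) of Sym(G).
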
Sure you can embed any group in a permutation group, but that doesn't mean the two notions behave identically in all respects. For example, two groups being isomorphic is not the same as two permutation groups being isomorphic, as the latter come with their embeddings.
OP is incredibly fortunate. Or maybe mathoverflow is that active/supportive.
As a STEM grad student (not in math), I had more than a couple such moments of crises, when I posted my questions on various stackexchange websites. I got either useless replies, or no replies.
Mathoverflow is different from most of the other SE sites in that it's only for research level questions. There is a separate site, math.stackexchange, for other math-related questions.
I have to say also that this type of crisis is not surprising (unfortunately) for math or similarly theoretical, loner fields. I can guess that the student asking the question is not being very open with his/her advisor, has worked and struggled for long hours alone thinking they have to solve it on their own, and is not communicating or checking in about important aspects of the thesis -- because he/she thinks it has to be a surprise "breakthrough" result, a heavy obligation of the field's expectations.
No responsible advisor would let the work get to such a state, so late in the game. Major fault of the advisor too, here.
Advisors are also very much at fault, not just students.
The last year of my PhD I ended up being pretty much alone, because my advisor had changed research topics a year before and therefore was neither interested nor up to date, so her input was not very useful.
A couple friends of mine also struggled with their advisor because he actively avoided communication for some reason. I guess he had a personal or health issue.
So even in good faith, advisors can end up making students' lives quite stressful for one reason or another.
I was contacted by someone in a PhD thesis crisis who wanted me to provide speech samples they were apparently missing. The thesis was due imminently.
As far as I could tell, the analysis was already done -- but my samples were needed for some other reason. I was kind of bemused by the idea that the analysis would be invalid with nothing behind it, but valid with unrelated data behind it.
My insight came from an engineering student whose novel outcome, from a maths model in Fortran on a mainframe, depended on his not understanding what uninitialised arrays were. This was in the 80s.
There was no interesting novel outcome: he was random-sampling prior states of memory.
I felt very bad for him, it was mid-stage. I didn't hear how he resolved it.
The other side of this is the crisis which only emerges in the viva. I was working at Leeds uni in the 80s and overheard an external examiner discussing a case he had: it was obvious the results were fraudulent. They made the student and his supervisor do the sums in the room, on the blackboard. He didn't get his thesis.
I was in a PhD crisis, but I did not post it anywhere. Not sure if it is allowed to ask for outside help.
Although now I have finished the thesis without that part (it should have become an additional chapter). Perhaps I should post it around (although that might spoil it for a paper)
Consider n polynomial equations in the variables x_1, ..., x_n, with constants a_1,..,a_n, b_1,..,b_n, c_1,..,c_n, d_1,..,d_n:
Under which circumstances does a (unique) solution for x_1,..,x_n exist in terms of the constants?
I have found a recursive approach that results in a quadratic equation, containing only a single variable x_i (and the constants). (It is too much for a comment, here is a PDF: http://benibela.de/tmp/quadratic-equations-recursion.pdf )
For example, for n = 2 it is very simple: x_1^2 (a_2 b_1 - a_1 c_2) + x_1 (a_2 d_1 + b_2 b_1 - a_1 d_2 - c_1 c_2) + b_2 d_1 - c_1 d_2
This gives 2 solutions. But I do not know what happens if the terms cancel each other out: if a_2 b_1 - a_1 c_2 = 0, for instance, there would only be one solution. But since the full solution in the pdf is so complex, I do not see which constraints would lead to cancellation there.
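One cheap way to probe the cancellation question for the n = 2 case is to treat the coefficients symbolically and look at the discriminant. A minimal sketch with sympy, using just the quadratic quoted above (this is my guess at a useful probe, not the poster's method):

    import sympy as sp

    x1 = sp.Symbol('x1')
    a1, a2, b1, b2, c1, c2, d1, d2 = sp.symbols('a1 a2 b1 b2 c1 c2 d1 d2')

    # The n = 2 quadratic in x_1 quoted above, with symbolic constants.
    A = a2*b1 - a1*c2
    B = a2*d1 + b2*b1 - a1*d2 - c1*c2
    C = b2*d1 - c1*d2
    p = A*x1**2 + B*x1 + C

    # Degenerate case 1: leading coefficient vanishes (quadratic becomes linear).
    print(sp.solve(sp.Eq(A, 0), a2))      # [a1*c2/b1]

    # Degenerate case 2: discriminant vanishes (the two roots coincide).
    disc = sp.discriminant(p, x1)
    print(sp.factor(disc))

For the constrained version below (constants tied together by a graph), one could substitute the identifications into A, B, C before factoring, and see which graphs force the leading coefficient or the discriminant to zero.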
---
And that is not the full problem I was trying to solve. In the full problem there are constraints on the a, b, c, d. There is a given graph, and depending on which nodes are connected in the graph, the constants are the same. For example, if node 3 and node 7 are connected, then b_3 = c_7 and c_7 = b_3 (it's even more complex than that, actually). And then the question is: do these constants cancel in the solution of those equations? And the final problem we want to solve: which graphs lead to exactly one solution, and which graphs lead to no solution of the equations?
Ambiguous and poorly explained. (Note the question immediately afterwards asking for clarification.) But probably something along the general lines of "My advisor said that, if my main theorem is an asymptotic estimate instead of an exact formula, then this would not be judged to be novel/strong enough to earn a Ph.D."
If you don't mind, could you explain the practical difference or the reasoning behind such a requirement? Are there situations where boundaries appear to be asymptotic but the exact solution shows this not to be the case?
Once again, I don't understand the Math Overflow poster's exact situation.
But roughly speaking, imagine you have two functions f(t) and g(t), which are described in completely different ways, and you want to prove that f(t) = g(t). If you try and fail, then you might instead aim for a proof that the difference between f and g is bounded, or that f(t) = O(g(t)) and vice versa, or that the limit of the ratio between f and g is 1, or something along these lines.
In many cases, such partial results are also of interest. In general, partial successes in math are considered to be successes.
But in some cases, partial results aren't really considered all that interesting -- or perhaps are known already or can be obtained very easily.
I re-implemented a quasi-polynomial algorithm. Experimentally, it shows exponential behaviour. Back-of-the-envelope calculation shows this behaviour can continue until the input size is >>10^21 before the asymptotic bound asserts itself.
(For comparison, input size 30 is infeasible.)
Note: another implementation doesn't show this behaviour for the family of inputs I use. It's an implementation detail that has no effect on correctness. Thus, for the other implementation, another such family should exist.
On the surface, I agree: there's an interesting problem that's worth solving, and a purely artificial limit is forcing people to do a bang-up job at solving it.
But if you dig a bit deeper, I can see two counter-arguments:
1. The real risk -- by which I mean "the risk I have most often observed in the wild" -- is that a Ph.D. expands to fill the time it's given, without ever wrapping up and producing a publishable result. This happens so often that it's practically expected in some places.
2. Having a deadline, oddly enough, also serves as a catalyst for birthing an idea... for "pinching it off" as the expression goes. At some point you have to stop planning and start executing. You can see the deadline as a forcing function.
Ph.Ds are needlessly traumatic and procedural in many ways, but I'm no longer sure that hard deadlines are a net negative.
I feel like this wisdom isn't tapped into enough. We're often burdened with individual tasks and challenges, while utilizing crowd knowledge is looked down upon or seen as an inferior solution-finding mechanism. E.g., imagine if companies worked together to figure out self-driving cars rather than competing?