Ask HN: How and where can I publish my research on AI as a college dropout?
270 points by james1234 on Oct 9, 2017 | 81 comments



I really like this question. I’ll make a few brief points:

- Most computer science research can be categorized into one of three forms: preprints published on arXiv or similar sites, papers published in conference proceedings, and source code. In artificial intelligence research, it’s typical for work, if there’s demand, to be promoted from a preprint, to a paper in conference proceedings, to publicly released source code. Most research doesn’t make it past the first step, and that’s usually okay.

- Many computer science labs are open to collaboration with outside contributors. This is especially true for academic labs, where day-to-day research is usually led by graduate students and postdocs. In either case, regardless of where they are, that person is underpaid and overworked and will eagerly accept help. However, a lot of computer science research comes from for-profit institutions (e.g. Facebook AI Research, Google Brain, and Microsoft Research are three well-known institutions in artificial intelligence research). In my experience, labs inside for-profit institutions will readily work with academic and non-profit institutions for the prestige but are less likely to work with independent contributors since there’s less incentive.

- When you start looking for collaborators, I’d look for academic or non-profit labs that match your specific interests, I’d look for the people in that lab who are actually publishing (i.e. check arXiv), and I’d directly offer your skillset from the start (e.g. “I’m a programmer who’s worked on such-and-such and I’m looking to contribute”).

Finally, if you’re working on computer vision, please reach out! My lab at the Broad Institute of MIT and Harvard is always looking for free help and is more than willing to help people bootstrap their publication record.


> labs inside for-profit institutions will readily work with academic and non-profit institutions for the prestige but are less likely to work with independent contributors since there’s less incentive

Well, that and lawyers. Even getting academic collaborations approved gets super complicated once lawyers are concerned about patents, etc.


Can you share your contact information? I am working on computer vision and I would love to contribute to advancing science if I can.


Where's the best place to reach you? I work in CV and would possibly be interested in contributing.


What background are you looking for? Do you accept contributions to the project from people without a master's in a related field?


What’s the best way to contact you?


I do research on deep-learning-based communication systems and would love to get in touch. My contact information is in my profile.


I'm curious, what is a deep learning based communications system?


Deep learning based optical communications :D

https://arxiv.org/abs/1709.03222

It has the potential to significantly reduce complexity in various areas of the signal processing stack (in exchange for greater software complexity and black boxes).

We plan on using it to process signals for LED-based satellite free-space optical (FSO) systems.
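In case it helps make this concrete for anyone reading: the general idea behind "learned" physical-layer systems (a generic toy sketch in PyTorch, not necessarily what the linked paper does, and all names and numbers below are just for illustration) is to train a transmitter network and a receiver network end to end, with a simple additive-noise layer standing in for the physical channel:

    # Toy end-to-end "autoencoder" communication system (generic illustration).
    # A transmitter net maps a message to channel symbols, a Gaussian-noise
    # layer stands in for the physical channel, and a receiver net decodes.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    M = 16          # number of possible messages
    n_channel = 7   # channel uses per message
    snr_db = 7.0

    transmitter = nn.Sequential(nn.Linear(M, 32), nn.ReLU(), nn.Linear(32, n_channel))
    receiver = nn.Sequential(nn.Linear(n_channel, 32), nn.ReLU(), nn.Linear(32, M))

    def channel(x):
        # Normalize average power to 1, then add white Gaussian noise.
        x = x / x.pow(2).mean(dim=1, keepdim=True).sqrt()
        noise_std = 10 ** (-snr_db / 20.0)
        return x + noise_std * torch.randn_like(x)

    opt = torch.optim.Adam(list(transmitter.parameters()) + list(receiver.parameters()), lr=1e-3)

    for step in range(2000):
        msgs = torch.randint(0, M, (256,))
        onehot = F.one_hot(msgs, M).float()
        logits = receiver(channel(transmitter(onehot)))
        loss = F.cross_entropy(logits, msgs)  # train both ends jointly
        opt.zero_grad(); loss.backward(); opt.step()

A more realistic version would replace the toy noise layer with a model of the actual optical channel, which is where most of the interesting work is.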

Would you like to get in touch to discuss it further? We could use some advice on software for atmospheric simulations.


Out of curiosity: what does that username mean?!


Please answer this.


Steps to publishing your research:

-Do a literature survey and read LOTS of papers. If you are not coming from the standard academic route you are probably vastly underestimating how much existing work has been published. To publish your research you need to place it in the right context, with citations, and really understand what is novel about your ideas. Read lots of papers, take notes, keep track of the bibliographic details. Follow up on citations to find more papers.

-Write up your understanding of the relevant field as a survey of existing literature with citations. This will clarify your thinking and help you become familiar with standard terminology. This will also be part of your finished paper.

-Write up your idea using terminology and notation consistent with your existing survey. Discuss what is novel and different about your idea.

-Analyze your idea from the point of view of other paradigms. Answer possible critiques that would come from other ways of thinking about the same problem. If you've discovered "standard" ways of evaluating your type of idea, do the evaluation to see how you compare.

-Get feedback. Get opinions from as many people as you can, and from the best people you can find. In the draft paper, thank everyone who gives you any feedback. If people give you substantial ideas that improve the work, ask if they want to be co-authors and work with you a bit more.

-Once you feel there is a real research contribution in your paper draft, and people you have shown it to think it is good, start working on getting it published. Put it up on preprint sites and send it to conferences or journals that are relevant. By this point you should know the right venues based on your survey work.


I'm a published researcher in AI.

* Where: depends on the field of research. Top general AI conferences include AAAI and IJCAI, which occur once per year. Papers are submitted online and reviewed anonymously. There are many others. Top machine learning conferences include NIPS and ICML; same comments apply.

* How: well, you write a paper conforming to the length and style requirements (this requires learning LaTeX, at least the basics). The paper should describe the problem you are trying to solve, briefly describe and cite relevant prior work and explain either why it does not solve the problem, or why the solution you will be presenting is better. It should then present your solution and evidence for it, be that theorems and proofs or experiments or simulations. You then submit the paper to a conference online.

You didn't ask how you can conduct your research on AI (your phrasing makes it sound as if it's already completed), so I'll try to avoid too much unsolicited advice, but this is the difficult part. It's important that you have read the papers most relevant to the problem you're trying to solve (this will help you see how to structure your own paper) and that you have a clear idea of where your research fits into the existing literature and improves on it.

Frankly, this will be very difficult unless you have some help from a mentor or collaborator who has been through the process before. Ideal would be to get in touch with someone who is doing research in the specific area you are interested in and ask to meet with them, explain what you are doing, and get their advice on related work, whether your work is publishable, and how to present it. That doesn't make them a co-author.

If you are interested in getting into research long term, the ideal would be to ask a similar person whether you can help with or join one of their projects, or collaborate on your idea. After some experience it will be much easier to figure things out for yourself.


As with others, I would suggest that you consider finding someone who you can collaborate with. That person should be doing research in AI as well. This serves two purposes:

1) It will help you determine if you can explain what you're doing to someone else, well enough that they can duplicate your results. If you cannot do this, then writing a paper will be a failure for you.

2) If they are in academia, then you will not only have someone who already has an understanding of how to get published, they can help with the paper's structure to make it useful for others (increasing the likelihood it will get published).

The challenge is not that you are a "college dropout"; the challenge is that you didn't get a chance to learn all of the meta information that college was going to teach you: how to structure a research paper, how to explain the problem and your solution, how to help the reader understand the "big question" you are trying to answer, and how to show the effort you have gone through to make sure you haven't deluded yourself into thinking you're onto something new (that you have avoided confirmation bias).


There is no requirement to be affiliated with an institution to publish, or even hold any kind of degree. If the journal or conference is double blind, they will never even know about your credentials.

Of course, credentials do imply some skills and content quality that cause publications to be accepted, but the reverse isn't true.


> If the journal or conference is double blind, they will never even know about your credentials.

That is not necessarily true. The style sheet for the journal may require the author to specify his/her institutional affiliation right under his/her name. Then, before the paper goes to a referee in a double-blind peer review process, it is first examined by the editor or editorial board who decide if it is even a good match for that journal. If the journal staff have had a problem with crank submissions or even just a vague fear of cranks, they may simply decline to handle a paper without a clear institutional source.

I left academia for an outside career but I still publish a paper in my field from time to time. However, I have heard through the grapevine that my submissions are welcome purely because everyone knows I used to be affiliated with one respected university department. Without that established reputation, my papers might have been refused straight away.


My area (programming languages) is dominated by conferences, and I'm pretty sure the chair doesn't screen out submissions, even ones that are obviously cranky.


Funny you should say that, I just recently registered for a conference where the HTML form could not even be submitted until the "Institution" and "Role at institution" fields were filled in. I don’t know if unknown people who wrote "None" would be filtered out, but it shows the general expectations that the organizers have.


None works. Unaffiliated also works, or you can make something up (e.g. Ministry of Truth if you are Gilad Bracha).

Submission management platforms are pretty generic, no one (in their right mind) rolls their own, so I wouldn't take the way they work as signals of intent.


For this one, the host university did roll their own, based on a form-generating service that Google offers. Note that here I was speaking about a conference registration form, not a paper-submission form.


Then that's just what they need to print attendance badges. They aren't going to not take your money because you have no proper affiliation. Put something pithy in there and you are sure to make some new friends at the conference.


> They aren't going to not take your money because you have no proper affiliation.

There is no fee for the conference I mentioned, nor for most conferences in my field. I feel sorry for anyone in a field where they have to pay just to visit another university and present a paper.

I will have no problem presenting a paper at that conference because, again, people vaguely know of my past affiliation. However, proposals for papers are screened by an editorial board just like journals before the person’s registration is accepted, and it may well be that someone without any affiliation would find themselves unwelcome.


I would be surprised if your area's leadership was so petty, but I guess it's possible. Double-blind review works well in this case; nothing else will, actually.

Our conferences cost money because...well...ACM. Heck, even when we use university resources, the universities still charge us. You'd be surprised how much a biggish room goes for at UCLA, for example.


There is a good progression of effort you may want to put in:

First, discuss your idea on reddit machine learning. It will be shot down quickly if there is a gaping flaw or something you overlooked.

You can combine this with the second step: write a blog post on what you've got.

Third, write a preprint and link to it from the blog post so there is no risk of you not getting credit.

Overall, the effort increases significantly with each step. You might be tempted to go straight for the preprint, but let me tell you that this is most likely a waste of your time, because you (very likely) don't know how to write with a certain ML flavour, how to present certain ideas, or how to design the right experiments and contextualise your arguments.

If you write a good blogpost, you have a nice starting point to find someone to help you formalise your ideas.


Since you mention that you are a "dropout", another interesting path to consider is to use your publications to eventually get a PhD.

Since in many countries there is a financial incentive for universities to deliver people with a PhD title, it should be easy to find a professor and committee willing to help you get a PhD, once you have a number of publications. But you should choose your publication channels wisely (e.g. respected journals earn you more credibility, and citations, which can also help).


Do you think I could possibly use this path for psychology?

I was diagnosed with schizophrenia at 18 and bipolar at 31. I spent a few years doing research to create an alternative model of understanding, and by experimenting on myself I have had amazing success without psychiatric medications.

I've also helped other people through sharing my experience and coaching them.

When I shared my story with Rick Doblin from maps.org he said he felt my particular application of using MDMA would spur new lines of research.


If you've spoken to Doblin I presume you've read this https://www.maps.org/resources/students/181-so-you-want-to-b...

A PhD by publication still needs a supervisor so I'd suggest contacting all the academics involved in psychedelic research and asking. For obvious reasons there aren't very many of them, so it shouldn't take long! I imagine you'll have to face a bunch of issues with legality and subjectivity that most researchers don't, but they will be the best people to ask.


PS: I have a BA in Psychology and an MS in Urban and Regional Planning with a focus on community development.



First, start from a blog post (with working code!) and then share it on https://www.reddit.com/r/MachineLearning/.

See: https://www.reddit.com/r/MachineLearning/comments/756xt2/p_e...

Academic journals are not a good starting point for pushing your findings unless you are already very familiar with the field (publications, style of writing, etc.). It does not suffice to have a new idea. And you can very easily get discouraged, for no good reason.



If you feel you have made an actual contribution to or extension of existing work, I would try contacting the authors to see if they will review your work and potentially co-author something with you.


If he did the work, why should he need a co-author?


If it’s your first paper, co-authors can be extremely useful in guiding you through the publication process. It’s also helpful for marketing purposes. Hell, this is basically the graduate school route.


It seems like having all those arbitrary requirements is good for nothing but gatekeeping higher education. Why not have those requirements publicly known and not unnecessarily complicated, so that anyone can submit their work?


The requirements are straightforward and publicly known. Here's everything you need to know to submit a paper to JMLR (http://www.jmlr.org/author-info.html). Here are the complete instructions for NIPS (https://nips.cc/Conferences/2017/CallForPapers).

The problem with amateurs publishing papers is emphatically not that the requirements are secretive and arbitrary. The problem is that virtually no one is able to write a publishable paper without basically going to graduate school, where you have years to become an expert in the minutiae of your chosen field of study. There are a lot of amateur researchers who have the ability to contribute to cutting-edge research, but the vast majority of them lack the background to know exactly where the cutting edge lives and how to design, conduct, and then describe a convincing experiment proving to others that the idea stands up to scrutiny. And those aren't things that anyone can or does secret away so they can act as some artificial gatekeeper. They're just things that usually require a lot of concentrated effort to study and learn.


They are publicly known. But I would say most paper requirements are "necessarily complicated", simply because science is hard.

Writing a scientific paper requires both solid science (both good results and good scientific practices) and solid writing skills to convey an important idea in a clear and precise way. Mastering both aspects requires time and effort.

An academic author is a person who made a job out of it. Most people don't like doing that, and that's fair, but I don't think it's fair to say that "they are gatekeeping education" when those same uninterested people suddenly find out that the pro level is hard.

(Disclaimer: I'm working in academia, although not in AI)


>Writing a scientific paper requires both solid science (both good results and good scientific practices) and solid writing skills to convey an important idea in a clear and precise way.

The only person who would read a journal paper and say that the author has good writing skills is another academic.

No one outside of the research community would look at a typical journal paper and claim that the writing skills demonstrated are "solid".

Asimov once wrote about his own dilemma when he was ready to write up his PhD thesis: he had spent over a decade up to that point honing his writing skills, and he truly did not know whether he had it in him to write as poorly as is needed for academia.

From my experience in academia:

1. Far too many people still prefer the passive voice over the active voice.

2. Far too many prefer to refer to themselves in the 3rd person.

3. Trying to explain how I arrived at an expression that took me weeks to derive was discounted, and my advisor told me to remove it: as long as I wrote the starting point, "any competent researcher should be able to derive it" (hint: probably half cannot).

4. Writing any background so that someone who is not already an expert can understand was strictly forbidden - always guaranteed to draw a complaint from some reviewer or other. If it appears in a textbook or in another paper, do not think about including it in your paper. As a result, the only people who can fully understand your paper are those who happen to have read the same textbooks and papers you have. A poor new graduate student may need to look through several papers and a book or two to get the background needed for one section of your paper, even though you could have explained it all in a page or two. But nope - they may have to scan over a hundred pages of material to get the idea.

Just a few I remember off the top of my head. Results may vary with discipline.


> The only person who would read a journal paper and say that the author has good writing skills is another academic.

You lack an understanding of the purpose and target audience of journal papers. Papers are written by experts, for experts, to be the most concise presentation possible of new, field-advancing facts. Prior knowledge of basic, and even intermediate-level, material has to be assumed, otherwise it places too much burden on the authors.

You sound like a grad student who's sick of reading papers. I'm sorry, that part sucks. But eventually you don't have to anymore, and then you'll understand.


>Papers are written by experts, for experts

I understand that fully. You are merely restating the original complaint, which was "gatekeeping higher education".

Essentially, it's: "We write only for insiders".

And it's not for any expert, but an expert in a subfield of a subfield. Someone who is merely an expert in the field will understand it broadly, but often not well enough to reproduce, and often not well enough to even gauge the legitimacy of the techniques.

This becomes quite clear when you see some of the inane stuff reviewers write, which often indicates they did not understand your paper - yet they were picked as experts who were asked to review.

>otherwise it places too much burden on the authors.

There are multiple reasons I do not believe this is the real explanation:

I can understand a lot of researchers not wanting to bother, but the reality goes deeper than that. They actively do not want others to put in more explanations. If an author wants to put in the time, why are they getting in the way? It's not unusual for a reviewer to ask to excise material that is explanatory.

And frankly, in many research teams in universities, we have grad students who are just starting out and are not to the point of being productive yet. It is very beneficial to have them write the more intermediate stuff. It's not at all time wasted for them.

I'm not saying we need to include standard textbook material in all papers. Often an additional 1-5 pages (depending on the scope of the work) will suffice. Any researcher who complains about writing an extra page or two for a project they worked on for months cannot say with a straight face that they care about propagating knowledge. The additional time it takes to write those few pages is vastly offset by the time saved for everyone (including experts) who reads the paper. That equation that took me days to derive will likely take most experts days as well, whereas a few pages of derivation would save them all the effort (and would help a reviewer find errors).

I've heard mathematicians be proud if they've read N pages a day (where N is in {1,2,3}) of a typical paper in their field. Yet when I've asked, they've all admitted that had the author put in more effort, that N would be much larger.

They're written for experts, as you say, but even experts have trouble reading them. However, since they mostly only deal with experts, their baseline is very low compared to what the rest of the world would consider "readable". If I wrote a report in my industry where it took an expert a full day to read and understand six pages, I would be in trouble.

Again, this may vary from discipline to discipline. I definitely have read papers that don't suffer from the above. Ultimately it's a cultural issue. Some are more welcoming of it, others are not.


(1) and (2) greatly depend on the field. A lot of people are discouraged from using the active voice in, for example, an academic thesis because it invites the nasty question of "So how much of this did you do?", and writing "I did this" comes off as a bit weird. In journal papers the use of the active voice is slowly improving. I still find it a bit odd that people use "we" for single-author work, but that's probably just ingrained.

To address point (3) a common route is to add an appendix or supplementary material. This is commonly done in deep learning for derivatives of novel functions so that others can implement the backward pass. Whether you can get it past a supervisor is another issue entirely! A blog post or accompanying website is an easy option.
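For instance (a generic sketch, not tied to any particular paper; the function here is made up purely for illustration), spelling out the backward pass explicitly in an appendix might look like this in PyTorch:

    import torch

    class SoftClip(torch.autograd.Function):
        # y = x / (1 + |x|); the derivative is dy/dx = 1 / (1 + |x|)^2,
        # which is exactly the kind of detail worth writing down so that
        # others can re-implement the backward pass.

        @staticmethod
        def forward(ctx, x):
            ctx.save_for_backward(x)
            return x / (1 + x.abs())

        @staticmethod
        def backward(ctx, grad_out):
            (x,) = ctx.saved_tensors
            return grad_out / (1 + x.abs()).pow(2)

You would call it as SoftClip.apply(x); the point is just that the derivative is stated somewhere a reader can find it.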

(4) is another tricky one. How many times have you read "Deep learning has revolutionised <field>" followed by the usual citations of Hinton, LeCun, Krizhevsky et al.? The spiel is identical in every paper; how many ways can you describe this stuff, after all? At some point you just have to assume that the reader is familiar with the background, but you still have to cite everyone to keep the reviewers happy.

Again, if you need exposition, write a blog post about it.

(This isn't just for deep learning - every field I've worked in suffers from the same copy-pasta introduction.)

This also depends again on field anyway - some journals are much more approachable than others. Nature and Science, controversy aside, generally publish very readable papers.


To be fair, only a few of these sound like actual problems.

>The only person who would read a journal paper and say that the author has good writing skills is another academic. No one outside of the research community would look at a typical journal paper and claim that the writing skills demonstrated are "solid".

Scientific papers are written for people in the research community.

>3. Trying to explain how I arrived at an expression that took me weeks to derive was discounted, and my advisor told me to remove it: as long as I wrote the starting point, "any competent researcher should be able to derive it" (hint: probably half cannot).

This seems very context dependent.

>4. Writing any background so that someone who is not already an expert can understand was strictly forbidden - always guaranteed to draw a complaint from some reviewer or other. If it appears in a textbook or in another paper, do not think about including it in your paper. As a result, the only people who can fully understand your paper are those who happen to have read the same textbooks and papers you have. A poor new graduate student may need to look through several papers and a book or two to get the background needed for one section of your paper, even though you could have explained it all in a page or two. But nope - they may have to scan over a hundred pages of material to get the idea.

The poor new grad student should pore over multiple papers to get a solid foundation in the field. Most researchers do not want to read an epic tome to figure out how your research is novel.

I agree with your first two writing points though.


The requirements are generally clearly communicated. The strictness is about publishing quality science. Someone who has not been through the peer review process before might be quite shocked by the stringent requirements to get a paper published. Again, it is NOT arbitrary; it is about making sure the journal is publishing real science and not quackery. It is entirely possible for someone outside of academia to publish, but having guidance to get through the peer review process is invaluable.


Normative publication behavior varies by field, but is generally well known and openly discussed, critiqued, modified, etc. inside each field. It's not exactly kept quiet.


The "requirements" that are being referred to go well beyond objective rules (e.g., no more than 4 journal-formatted pages) and include things like lingo, citation standards, ordering of sections, and general academic writing voice.

Written communication of science is hard, especially when you are new to it. I think a good 50% of what I have learned as a PhD student was how to more effectively communicate through writing. And that builds on me (honestly) being in the top quartile of writers in my program when I walked in the door. I have worked with many co-authors who have had great work rejected by journals for writing reasons. It isn't about scholarly snobbery - a lot of it just isn't understandable to anyone but the author. Writing advertising copy / business plans / anything related to startups I worked with was orders of magnitude easier.

In a strict sense it's gatekeeping; in a more realistic sense it's about norming. These standards/expectations often develop organically and are sometimes hard to write down in the sense you are asking for.


It's worth mentioning that some of those standards may not be producing the best outcomes (see https://lemire.me/blog/2017/08/15/on-melissa-oneills-pcg-ran..., for example).

In my own experience, I once wrote a paper in a much more casual and approachable style. I had what I felt was the same scientific rigor around the experiments and data, but tried to make the text more generally readable. I had Simpsons references sprinkled here and there, that sort of thing. The response was generally negative. The paper was accepted for the mid-tier conference I submitted it to, but on the condition that I revise it to take care of reviewer concerns, many of which boiled down to "you should write with a more appropriate tone". No one complained that the paper was imprecise as a result. They just didn't like that it didn't read in the standard passive and bland academic voice. I found that somewhat depressing.


> I had Simpsons references sprinkled here and there, that sort of thing.

I should offer to read your paper before criticising it, but I can't even begin to understand why you would think that references to a TV programme would help make a paper more readable. What if you don't happen to watch the Simpsons? Is the paper then even more opaque because none of the references mean anything?

I quite like the bland academic voice - because it lets me focus on what I came for which is the science. I don't want to be amused, and I don't want academics wasting time trying to be witty or fit in references to whatever TV programme is fashionable.

I want just the science, please! Delivered as clearly and simply as possible.


There are parts of the standard academic paper that aren't really necessary for understanding it. You typically have background on the problem, for example. I can't say for certain I didn't lose a reader, but none of the references detracted from meaning in my view.


I totally TOTALLY agree. I was trying to stay away from offering a value judgement and be a little more descriptive. It is not necessarily effective, but it is what it is.

I have done similar things, with similar reactions. I think such approaches in fact negatively impact science...especially through the public's engagement with it. Perhaps my 'favorite' was trying to do a conference session about active learning using...active learning. Got told that it was unprofessional and that those listening were 'peers not students'.

There was a really wonderful paper on academia I read not long ago [1] that basically made the argument that a Darwinian biology framework explains all this. It asked the question: what is a defense that arose to solve a problem and has now transformed into what looks like a defect, and how do we separate those (or should we) from actual defects? Today I stumbled across a similar article [2].

[1] Lohmann, S. (2004). Darwinian medicine for the university. Governing Academia: Who Is in Charge at the Modern University, 71-90.

[2] ...and of course I can't re-find it. I'll update if I do.


Overlooking the social or communal aspect of research is a mistake. People want to publish their research for a number of reasons (prestige, priority, profits, simple enthusiasm), and people want to read published research for other reasons (to further their own research, to understand the current state of the art, simple curiosity).

Publishing a paper no one reads is pointless. To make people want to read a paper, you need to make sure it is novel (so they don't waste their time reading about something you only rediscovered) and that it is integrated into the existing understanding of the field, with citations of related research, using common terminology, etc (so they can more easily understand your work and what it means).

This is what working with an existing researcher gets you. If you don't work within this framework, why would other researchers waste their time working out that when you say "frazzles" you mean "frizzles", and once you work that out the whole thing is equivalent to a conference paper from '06?


Totally. While ACM, IEEE, and similar organizations provide some of this information and do substantial outreach, more progress on this type of work would benefit everyone.


High school dropout here, with AI research cited in Nature.

You are going to have a hard road in front of you if you want to seriously make a dent in AI research. ML research is a bit more accessible to outsiders than AI research, because a lot of fundamental AI research can be a bit "out there", and is praised/discarded depending on the tenure/authority of the authors. But you probably already knew this. Just be careful to avoid (meta-)theoretical research that is close to futurism or philosophy without any credentials: It is easier to label outsiders as kooks.

First, create a blog. Write articles in a way that they are accessible to your skill/knowledge level one year ago. Get in the habit of writing and performing write-ups of your experiments. Share good articles on social media (Twitter, DataTau, /r/machinelearning).

(Co-)author a workshop paper for an AI or ML conference. Workshops have lower bars for acceptance. If anything, you'll receive valuable feedback.

Find an (assistant) professor who is an expert in a topic/subject you are interested in (don't go for Hinton on the first try). Familiarize yourself with their work and send them a polite, short email asking for pointers (search terms) on your research, their research, or related work.

Replicate as many research papers as you can. Implement the papers that don't have accompanying source code. Get in the habit of running many experiments. Post these on Github. Publish on social media, mentioning original authors. Mint a DOI.

Benchmarks (competitions) don't care about your credentials. Win one, or do very well, and you'll create a platform for your research and methods, based purely on practical results and comparison with many other techniques. Papers based on winning results are fairly easy to write and do well impact-wise.

Really, don't worry about getting scooped, or about making an error that embarrasses your supervisor for years. Leave that to the PhDs. Just get something out there; a Wordpress blog is enough. If your research is useless, no harm done (it wouldn't get any cites even if you managed to publish it in a journal). If your research has value, you'll have plenty of researchers read it and get inspired: this is your contribution to science (though, unfortunately, don't expect many cites to a blog post).

Optionally, solve the college dropout problem instead. If you want to dedicate yourself to being a researcher on AI, give your "startup" some rocket fuel and get a higher education in the field of data science/physics/computer science/AI. The synergy of young smart people and older wiser academics is something that is very hard to replicate on your own.


One I forgot is to work with either algorithms or datasets created by other researchers. Once I sent an email asking for access to a certain dataset, describing the idea I had for it, and a very famous researcher replied that they'd be interested in a collaboration.

The other side of this advice is to create a dataset that is interesting to other researchers.


Once I sent an email asking for access to a certain dataset, describing the idea I had for it, and a very famous researcher replied that they'd be interested in a collaboration.

I had a similar experience, as somebody who is also a "college dropout" and not formally associated with academia at all. I emailed a professor who wrote a book I was reading and asked for access to some of the datasets he cited in the book, and explained that I wanted to try re-implementing his technique using a newer tech stack (the book was from the 80's mind you) and then look at extending the ideas somewhat.

He quickly replied with the data, a pre-print of a new paper he was working on, and an invitation to keep him in the loop on my work. Not an outright invitation to collaborate, but I suspect if I achieve a useful result, the opportunity may well be there.


Those are some great tips.


anyone can upload to arxiv.org.

I, and many others, regard single-author publications with a higher degree of suspicion, so you should still reach out to other researchers in the area and foster collaborations first.


I thought you still needed at least an affiliation, which is why the crackpot version (which doesn't require anything) exists? http://vixra.org/ https://en.wikipedia.org/wiki/ViXra


Off topic, but you weren't kidding about it being the "crackpot" version. A quick check of the recent papers in the "Set Theory and Logic" section gave me this gem: http://vixra.org/abs/1708.0156 . Which seems to argue that Cantor Diagonalization, a method of proof known for over a century and taught in most undergraduate abstract math courses, is completely wrong. There's also a 2-page proof that P != NP http://vixra.org/abs/1709.0076

EDIT: this one probably takes the cake: http://vixra.org/abs/1406.0180 . Completely nonsensical: ramblings about fuzzy logic and "topological experiments", the same sentences repeated multiple times. The actual work seems to be taking the formulas for a circle with a line through it, showing all of his work for some Algebra I + derivative-level manipulations while still making basic mistakes (the one I caught is that the final equation should be "= 1", not "= 0", because he didn't take the second derivative properly), to come up with a differential equation to describe his "intersection of a circle and a line" problem. Then he concludes that this "fuzzy topological non-linear differential equation" will prove very useful in solving quantum gravity, somehow.


The author in your last reference is pure gold. Here's another one of his gems, and I quote from the abstract:

"""

The author proposes several new concepts of physics such as it is not mass but the attracting force of mass which warps space, electricity can be generated from space at zero cost, making of graviton bombs are for real, it is possible to include gravitons and darkons in to the standard model of particle physics, unification of quantum physics with Einstein’s relativity can be performed on the foundations of a new spherical geometric applications, & new thoughts on big bang theory , particles and pre big bang.

"""

http://vixra.org/abs/1306.0017


If you want to seed doubts in your mind about the real numbers, you can get that at:

https://arxiv.org/abs/math/0404335

...as well (skip to chapter 5 for the impatient).


I'm fully aware of the complexities of the real numbers. But I also know they are essential for deriving any advanced mathematics. All the normal derivations and definitions of calculus depend on properties unique to the reals. And while it might be possible to re-derive calculus using computable numbers (I'm fairly sure it is, but not completely), a great many results in topology absolutely depend on properties of the reals that the computable numbers simply don't have.


This is super interesting, thanks! I love articles that dive into the historical context and philosophy underlying important math discoveries. Context is everything!


> Which seems to argue that Cantor Diagonalization, a method of proof known for over a century and taught in most undergraduate abstract math courses, is completely wrong.

FWIW, this is a very common topic for mathematical cranks. It's somewhat understandable that in the 19th century this was kind of a controversial result, but it's really impressive that we still see crackpots swinging at the same target.


ahh, this looks like an early beta version of The Book from Anathem:

http://anathem.wikia.com/wiki/Book


https://arxiv.org/help/registerhelp You only need an email address, I think.

I've seen papers where the 'institution' of the author is their home address both on arxiv submissions and in traditional journals. I didn't trust them very much, but I have seen them.


>anyone can upload to arxiv.org

I thought you needed an endorsement from someone already in the system, or to be affiliated with a university?

https://arxiv.org/help/endorsement


Sure, but that does not exclude people like OP. They just need to find someone (ideally someone to collaborate with) who will vouch for them before they can submit their work.


> "college dropout"?

For the two best papers I published, both as sole author, I just submitted the papers to journals I selected as appropriate to the content of the papers. Both journals were highly respected.

The submissions had nothing about my educational background.

Net, being a "college dropout" should be irrelevant.


GitHub with code samples is more useful these days.


It really depends on the type of research. Writing, in artificial intelligence, can be especially useful when it’s still unclear whether your research is reasonably implementable. Machine learning, for example, was mostly theoretical until the engineering issues were resolved.


Maybe you can try to publish via

https://distill.pub/

or you can try to publish via OpenReview:

https://openreview.net/


A GitHub repo and a Medium post are usually more informative.


In principle there's no reason why you can't publish something as an independent researcher in a normal scientific journal. However, you do of course have to convince them that it's worth publishing, and you may have to pay publication fees.

Other than that try preprint servers like arxiv. Unless your research is obviously bogus they'll usually accept almost anything. If your research is worthy you may still be able to publish it in a peer reviewed journal afterwards.


I would say screw papers; make a web document, make it easy to repeat your experiments, and publish code that works as is.


One idea: crowd-fund your research using Experiment (https://experiment.com)

Then publish with PLoS


You don't need money to publish on PLoS, strictly speaking. If you are not currently funded to do the research you are seeking to publish, you can request a waiver of the author page charges.



Could you link to your research here please? That would help us advise on routes to publication.


This is a great question. Not to hijack the topic, but I think it would be great to have a clear path for publishing novel (if credible) research in reputable journals.

Could be a great hack for someone to enter a grad program for which they're otherwise unqualified.


I have the same question, except I'm an intermittent student at a faculty. I have a paper related to computer graphics that I'd like to get published, but I don't know which journals would accept my submission.


Before you do anything, you should patent your work or some big corporation will most likely profit off of you. Good luck.



