I'm sorry if this is an unsatisfying answer, but if you mean convincingly pitched, I couldn't answer a question like that without disclosing the long-term plans of startups that would prefer to keep them secret.
If you mean unconvincingly pitched, it would probably be the applications we get from people who've discovered new power sources that violate the laws of physics.
Can you say where the scariest and most ambitious convincing pitch was on the following scale?
1) We're going to build the next Facebook!
2) We're going to found the next Apple!
3) Our product will create sweeping political change! This will produce a major economic revolution in at least one country! (Seasteading would be change on this level if it worked; creating a new country successfully is around the same level of change as this.)
4) Our product is the next nuclear weapon. You wouldn't want that in the wrong hands, would you?
5) This is going to be the equivalent of the invention of electricity if it works out.
6) We're going to make an IQ-enhancing drug and produce basic change in the human condition.
7) We're going to build serious Drexler-class molecular nanotechnology.
8) We're going to upload a human brain into a computer.
9) We're going to build a recursively self-improving Artificial Intelligence.
10) We think we've figured out how to hack into the computer our universe is running on.
I have a drug that could massively raise the IQ of the world's population - a total gain in brainpower greater than all the world's computers combined. And it costs only a couple of pennies per person per year.
Unfortunately, the iodine has to be available before the third trimester for the full 10-15 point effect. :)
For everyone reading this, it's already far far too late. I've been compiling some of the child & adult iodine studies into a little meta-analysis: http://www.gwern.net/Nootropics#iodine
Current conclusion: for 13+ year olds, the effect size is (95% CI) -0.11 to 0.29. Yes, we can't even rule out that iodine is harmful to adults.
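If anyone's curious how a CI like that comes out of pooling a handful of small studies, here's a minimal fixed-effect (inverse-variance) sketch in Python -- the per-study numbers below are made up for illustration, not taken from the linked analysis:

```python
import math

# Hypothetical per-study effect sizes (standardized mean difference d) and
# standard errors -- placeholder numbers, NOT the actual studies in the analysis.
studies = [
    (0.25, 0.20),
    (-0.05, 0.15),
    (0.10, 0.18),
]

# Fixed-effect, inverse-variance pooling: weight each study by 1 / SE^2.
weights = [1.0 / se ** 2 for _, se in studies]
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

low, high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled d = {pooled:.2f}, 95% CI [{low:.2f}, {high:.2f}]")
```

An interval that straddles zero, like the -0.11 to 0.29 above, is exactly the "can't even rule out harm" situation.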
« According to WHO, in 2007, nearly 2 billion individuals had insufficient iodine intake, a third being of school age. Iodine deficiency can have serious consequences, causing abnormal neuronal development, mental retardation, congenital abnormalities, spontaneous abortion and miscarriage, congenital hypothyroidism, and infertility. Later in life, intellectual impairment reduces employment prospects and productivity. »
Go after "Railroad Baron Money." That is to say, invest in a fundamentally new kind of very useful infrastructure, like a subset network of computers where DRM actually works - if not absolutely, then well enough in practice. That would mean it would take something like 2 to 3 years for people to jailbreak new hardware, and almost a year to break revisions to old devices that fix an earlier jailbreak. (We're getting close to this level of security for some organizations.)
The trick is to be one of the Barons when networks are becoming widespread, and not an early innovator who gets in the history books but dies penniless. (Again fits with the railroad analogy.)
(And yes, trusted computation [DRM] as it is practiced now is bad, in the way that many things possessed only by the powerful are also bad.)
> Yeah, some kind of trustworthy computing would be around a 4-5 on the scale. Maybe 6.
A great sign for this idea is that it gets pooh-poohed and shouted down, particularly by people who don't even hear the entire thing and just pattern-match on the security part. The idea that DRM can be useful and beneficial to society as a whole is precisely "What You Can't Say" for large swathes of the tech community, and even more so for mainstream society.
Right, it was a game console, and not even a particularly good one.
A real computing device (I'd accept tablets, but really, enterprise desktops and especially servers) would be entirely different.
What I really care about is servers which can be trusted to be "fair" by all parties -- server operators, software operators, and end users. There is absolutely nothing like that today, and it's impossible without trusted computing. It's unclear if trusted computing itself is feasible (it's theoretically possible).
If it works, we end up with Vernor Vinge's _True Names_.
A mobile-based distributed trust system that reflects what other people think of you and your skills, is resistant to gaming, and doesn't start with the premise of lots of Bitcoins.
I have one idea I'm still incubating in the back of my head that's a 6. It's achievable technologically, but I'm not yet in the right place in my life to be the one to achieve it.
Edit: It's not a drug - but it will, imho, produce a change in the basic human condition, if/when implemented.
I downvoted you because you added nothing to the conversation by claiming you have a 6 but then adding no details. Hey, I can do that too: I have an idea that ranks a 7! I am cooler than you! But I am not going to tell you what it is, because I think having a good idea is harder than executing well, I don't have many good ideas, and I am afraid you are going to steal mine.
We've got a solid 5 with what we're building at LocalSense: https://localsense.com -- but it contains elements of #9 which is strangely high on the list.
The only way I can read that comment as anything but shameless self-promotion (or, more generously, over-the-line entrepreneurial delusion) is if they intend to morph their use of social status to get things into whuffies.
No worries, it's easy to be down on the public message right now. We're all pretty confident in the plan to overhaul the way commerce works in general, possibly to the end of supplanting "money" entirely. :)
Thanks... we have no plans on raising insects for human consumption! Right now, our prototype is producing fish, berries and vegetables; we'll be experimenting with fruit trees, beans, and eventually poultry (for eggs at first), mushrooms, and honey.
Also, I think you should rework the Yudkowsky Ambition scale a bit:
* 3 and 4 are really on the same level, just interchangeable
* 6 should really be an 8 or a 9, since it enables 7, 8 and perhaps 9
* the whole scale should be expanded so that 2-10 become 11-20, and 1 becomes e.g. a weekend hack that can bring in enough revenue to buy me a new toy every once in a while.
* While 3) and 4) both create sweeping political changes, 4) is decidedly scarier owing to the sheer destructive power. 3) encompasses positive change as well.
* 6) may enable 7), 8) and perhaps 9), but is less ambitious in that it doesn't seek to bring those frighteningly ambitious ideas into reality (it only possibly enables them).
Hmm, Chrome wants to install an app from the store, but Firefox renders the feed just fine (it's not empty). What browser/OS are you using, or do you have a stand-alone reader?
I clicked through looking for an auto-micro-composting mechanism. And auto-harvesting. Did you have ideas about handling blights, molds, and other diseases?
For auto-composting (not sure how micro you need to go...), we're thinking of using Black Soldier Fly Larvae to process the compost and auto-harvest into the fish tank for a snack for the fish. Another option is using earthworms, and manually (eventually automatically) harvesting them into the fish tanks, thus recycling one's food scraps back into food.
As for blight and mold, those problems have been identified and solved in greenhouse and hydroponic growing. The key with these and other diseases is to make them easy to prevent or, failing that, to treat. But again, this is a solved problem.
What a way to reframe the entire conversation like a boss. Prior to reading your comment, I thought my scale went from 1 to 10, but it only went from 1 to 2.
So anything > 3 requires a bunch of "hard science". That does a great job of highlighting the fact that almost all of the readers of HN will be incapable of producing any of these.
- you could try to change career, become a renowned research scientist and invent number 3-8 on the Yudkowsky Ambition scale, but ...
You are just one person, probably without the right inclinations to be a world-class researcher (or, frankly, you would already have become one - it would have been an irresistible calling).
So a more worthwhile use of your time and effort would be as part of a coordinated effort to select, support, and reward a worldwide network of interdependent researchers, and then layers of secondary innovators and implementors who in totality will bring the benefits of scientific progress to all humanity.
They also serve who stand and insist politicians use empirically based testing to validate their spending on our behalf.
I know within my own batch (W12), the highest a startup pitched itself at demo day was about a 2.5 - creating a company substantially bigger than Apple. I wouldn't be surprised if you could find a 4 or 5 somewhere in all of YC, with something along the lines of creating the next version of the internet (especially from the startups doing hardware). The tradeoff there is that the variance is even higher; doing a software startup is safe, relatively speaking!
After reading the original essay from PG, I was hoping that this comment would finally appear. I think number 3 would start at "frightening." Anything below that would simply be "quite" or "very."
Co-operative existence models of supermassive intelligent entities (like Matrioshka brains) could equally end up being dominant vs. 'last man standing' scenarios. But at least in LMS (which of course we are not fans of), there is no doubt as to the eventual outcome.
I don't think this scale applies so well in the healthcare vertical. Any major healthcare breakthrough is going to hit the latter half of 6 at least a bit, but I'm not sure that those all induce changes that scale above 5.
A major healthcare breakthrough that improved the quality of life for a large number of people would qualify as a solid 3 -- a major political-level change. I'd save 6 for things like defeating the aging process -- something that, like raising the IQ of the population at large, would also fundamentally change the human condition. I'm not really sure how many medical breakthroughs of this magnitude there are.
Improving world health status might not improve IQ as some inherent human intelligence capacity, but would certainly have an immeasurable population-level effect on "effective IQ" i.e. ability to practice whatever intelligence capacity exists already.
Consider just the challenge of defeating the obesity epidemic. Better diet improves energy and massively increases, again at the population level, the number of raw thoughtful hours available to a country.
This is even more widely applicable in global health where health initiatives can also be important components of social change. Women's/sexual health is tied deeply to women's rights which is a leading indicator for social revolution.
While medical breakthroughs that change the space of healthcare at large might be rare, there's a lot of room instead for the kind of breakthrough that changes how people receive and are impacted by the healthcare knowledge we already have today.
I think the idea of level 6 is not merely an incremental increase in effective IQ -- the internet and smartphones putting most of human knowledge at anyone's fingertips could qualify as that -- but something that fundamentally changes what it means to be human. Defeating obesity, the African AIDS epidemic, or even cancer would be amazing, heroic, and worldchanging, but still wouldn't qualify as changing what it means to be human.
I'd've rated Adipotide at 2 (make a large difference to the lives of hundreds of millions of people), and even with cognitive effects factored in, well, Apple hasn't had zero cognitive effect either. The thing is, there was already an age before high-fructose corn syrup when almost nobody was obese - you couldn't call it a novel change in the human condition to put things back to how they used to be. If you consider Apple as having popularized icon-based GUIs then it's got a substantially better claim to 5 or even 6 than a completely effective anti-obesity drug.
"Hitting a bit" doesn't count, you'd have to think outside of established verticals - curing cancer would be a major healthcare breakthrough, but it's something like a 3 on that scale, no more.
For that, you'd have to think seriously big - doubling our expected lifespan is important, but it wouldn't bring that much of a change in our planet. A significantly improved homo sapiens species may qualify. Or a way to keep us going on forever - permanently fixing or replacing our bodies. Or a way to "upload" skills to brains, matrix-style.
6 is not making the current condition slightly more bearable - it's transferring us to a radically different state.
Where would the World Wide Web be on this scale? It would look below 4, but one instance of it, Wikileaks, arguably sparked several revolutions and is a 3 by itself.
It's a solid 5 -- it made the kind of change that electrification did, in terms of there being massively more ways to get informed, fact-check, collaborate, and organize, changes to the importance of location, and so on.
I don't know... what kind of impact has the internet actually made on human lifespans, or manufacturing output?
The internet has affected global GDP a lot, but almost entirely in the "moving numbers around in a database" fields (high finance, entertainment), with all the profit ending up in the pockets of a few hundred hyper-rich.
I think this is one of the cases where GDP misses what we're trying to measure. If, before the Web, I would have bought a product just barely useful enough to pay for, then post-Web, buy something different instead that is much, much better for me (has a higher consumer surplus), GDP will be the same, even though human welfare has significantly increased.
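To make that concrete with invented numbers (a toy sketch of my own, not anything from the comment above):

```python
# Toy illustration: same spending, very different welfare.
price = 50
value_pre_web = 51     # barely worth the price: consumer surplus = $1
value_post_web = 200   # much better match found online: surplus = $150

gdp_contribution = price                  # identical in both cases
surplus_pre = value_pre_web - price
surplus_post = value_post_web - price
print(gdp_contribution, surplus_pre, surplus_post)   # 50 1 150
```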
Normally, you could gloss over such a case, but the internet is basically filled with this kind of thing. Moreover, a lot of the stuff that you would normally have to pay for, you get effectively for free, and I'm not talking about pirated movies. I mean the collaboration, fact-checking, knowing of more options, info you'd normally need to buy a book for, etc. Again, cash payments for that stuff might have dropped, but it made people that much better off.
So if the people who were doing things obviated by the Web go and start doing something different, then, well, "GDP is GDP" -- but that is a massive efficiency improvement.
I don't know about human lifespans or manufacturing output (though the widening collaboration opportunity increases the number of people that can solve foundational manufacturing problems), but that's only one of many human values, and I think it's an artificial requirement to compare to electricity on that basis -- almost like penalizing it because it didn't enable better hunting of large animals.
So stepping back to the level of human welfare and modes of life it enables, I think the Web is comparable to electricity.
Here's an informal measure: what does the technology do to the "quaintness" of (non-sci-fi) stories composed before it? After electricity, you might look at a book with a plot element "character can't work at night", and scoff at how backward it is.
Does the Web do something similar? I'd say it does a lot more. Per Steven Landsburg, there was a novel written in '91 (right at the Web's infancy) with plot elements like "someone is endlessly searching bookstores to find an obscure book" and "someone sells expensive encyclopedias to people with a tremendous demand for easy access to knowledge" -- very, very quaint from today's perspective.
Does anyone know of a sort of hacker news for 3 and up?
I think it would be good for my blood pressure to find a place where people don't equate "changing the world!" with "finding a new way to sell socks on the internet!"
"Aim high, you may still miss the target, but at least you won't shoot your foot off." Mercedes Lackey, I forget which book, but I think it was Tarma that said it.
I'm not sure where it sits on the scale, but the internet, the web, and associated technologies allow anyone to share an idea with any other person, without the limitations of physical media - across any distance, from across the room to as far as people spread in the future, and across time, from instantly through to the end of civilization.
I'd say that's a fairly fundamental change for humanity.
...It's interesting... before Google, all the investors were saying that you could not compete against Yahoo... and before that, that you could not compete against IBM... and that you could not compete against Microsoft... All this was BS then and it is all BS today...
> We think we've figured out how to hack into the computer our universe is running on
Not sure someone with that kind of access will need seed funding with YCombinator or anywhere else. Unless their hack is unproven and unreliable, and up for immediate rejection.
Several years ago, a good friend came to me and excitedly pitched his latest invention - perpetual motion. He grew more and more animated as he explained how his device used magnets...
Being young, I drew a diagram showing why it would fail. With each line, I could sense him deflating, but I kept drawing. Finally, by the time I brought up Newton, I realized that I'd gone too far.
Our friendship didn't really work out too well after that and I still feel somewhat guilty...
Once upon a time a good friend tried to convert me to Scientology. That didn't work out so well for our friendship either. All I did was ask to see him levitate...
Appreciate the honesty in revealing that you can't actually talk about the biggest idea you've been pitched. But how about not taking the easy way out and instead answering: what is the most frighteningly ambitious idea you have been convincingly pitched that you can talk about?
The irony is that over the next few decades, maybe it will turn out that the single biggest YC black swan could be a new power source that appears to violate the laws of physics.
I don't like ideas that need to be kept secret. I think it's just psychological or emotional to feel or think you need to keep your idea a secret to increase its chance of success.
If it's so easy to execute and replicate... it's not much of an idea...
A few years back, this guy (Ramar Pillai) fooled the whole country and, if I remember correctly, even got an audience with the Indian PM and the State Chief Minister. Probably the best pitch ever :).
He claimed he could turn water into fuel using some herbs :).
Thank you very much for taking this step. (Not disclosing long-term plans just because they're frighteningly ambitious). Hopefully this goes double for ideas you chose not to take 7% of for a couple of grand :)
Imagine a table with a 3 course meal laid out...it's nice. Now imagine a table with a 100 course meal. You'll still eat only three plates of food before you're full but you'd still prefer the 100 course table.
PG writes of the Maker Schedule: "I evolved another trick for partitioning the day. I used to program from dinner till about 3 am every day, because at night no one could interrupt me. Then I'd sleep till about 11 am, and come in and work until dinner on what I called "business stuff." I never thought of it in these terms, but in effect I had two workdays each day, one on the manager's schedule and one on the maker's."
Does kinda imply he may very well be awake at 3AM. That said, he's also wrapping things up then and unlikely to take on a public Q&A requiring nontrivial attention.
We never pitched him on the Singularity Institute (http://singularity.org/research/), but I doubt he's ever been seriously pitched on anything more ambitious than "build a recursively self-improving Artificial Intelligence with a stable, specifiable goal system so that it can improve into a superintelligence and do world optimization." If he's been pitched on anything more frightening than that, I'd really like to know what.
A cynic would say that the main activity of SI's Research Fellows is to type up their Science Fiction and Philosophy musings and upload them as PDFs to their website. Someone less cynical would say that these musings are fundamental research in the area of self-replicating AIs that will enable other people to build said AIs in the future.
But the appropriate area for fundamental research is in explaining consciousness and intelligence so that in the future someone trying to build an AI can actually define what they are trying to build.
That kind of science takes large scale organization and specialization by each person in a narrow field. It carries enormous organizational costs. It's also frequently quite a lot of boring work.
Easier to just speculate on a wide area of interesting topics while you wait for people to do the real work.
Looks like I leaned so far towards the cynical interpretation that I fell over on it.
My hope is that if the spun-off Center for Applied Rationality takes off, we can all get out of the business of "Trying to explain to people why this is an urgent problem that needs funding" and get back to "Actually researching the problem." It does take more than 2 people to actually research the problem (we know, we tried it with two people) and up until just recently there wasn't a very obvious path to how to find and fund 3 or more. Of course we've also produced various think-tank-ish analytic papers along that trajectory, whose value is not zero, but actual serious end-of-the-world basic-math research has always required more funding and more attention than we had.
All I can say is - it's starting to look like it did work (the "spend years convincing people to fund it" path) and it's not at all obvious even in retrospect what else we could've done.
There exists an axis which all institutions lie upon, where (i) those that do direct business and support short-term incremental innovations sit on the left-hand side, and (ii) abstract research institutes supporting long-term paradigm shifts of innovation sit on the right-hand side.
Totally agree. Ambitious is not the same as pompous, people forget that.
For example: "We are building a social platform that will change the way the upper class communicates, with a chat site exclusively for the rich" versus "We are building technology that removes the need for cars and allows people to get around faster and safer."
I think "build a recursively self-improving Artificial Intelligence WITHOUT a stable, specifiable goal system so that it can improve into a superintelligence and do world optimization" is more frightening, even if less ambitious :)
"We want to make our own rockets and spaceships from scratch. But that's just the beginning, to pave the way to settling Mars and making it affordable for middle-class customers to relocate to Mars." I imagine if Elon Musk had pitched SpaceX for YC, it would be a strong candidate for this.
Yeah, pretty much any of his recent ideas/companies are "frighteningly ambitious".
I do wonder how YC would handle someone that is as ambitious and skilled as Musk, but without his financial success. (Un?)Fortunately for them I think that is a pretty rare combination, so they probably don't have to worry too much.
I think it's pretty clear that the amounts YC invests wouldn't be enough to really "build an MVP" for Elon Musk's ideas. It's not something you do unless you already have a lot of money.
Elon Musk talks about this in one of his interviews - how Tesla and SpaceX are the kind of thing that requires a lot of money to begin with, because they are big risks.
It is a mantra here that there is an enormous amount of timing and luck involved in creating a successful startup.
I would expect the majority of people with Musk's skill and ambition to lack his level of financial success. Certainly the percentages will be better than the general population, but it's not guaranteed or even likely.
In my opinion, if people really think timing and luck matter more than skill and determination, then they shouldn't be doing startups (unless their goals aren't to create an awesome/important company, but to have fun or something else).
Timing and luck (from what I have seen) really only seem to matter for things where it doesn't really require a lot of domain expertise. Of course, those are what first time entrepreneurs (at least of the Hacker News variety) are most drawn to, for what should be obvious reasons.
Perhaps it is just due to there being more information about Musk than other founders, but I haven't heard of any YC founders having the level of technical skills and abilities that Musk has. He is older than most, though, and has been at it for a while, so this shouldn't really be surprising.
You kinda can. Life isn't a comic book, and if you saw Elon Musk when he was 15, he wouldn't have any special aura of destiny glowing around him. I'm not saying that everyone can grow up to be Elon Musk, but he didn't have any sort of special license which allowed him to do it, either. In the end, superheroes are composed of people who try to become superheroes even though the rest of the world spends the first decade thinking they're committing lese majeste because they don't have a superhero license.
Great point. And if you look at Elon's record, he's shown a pattern of setting increasingly ambitious goals after working his way through (relatively) easier projects -- e.g., going from two software startups into hacking hardware, going from trying to contract with existing Russian rockets to designing and building his own rockets, going from modifying an existing car into an electric to designing and building his own electric car from scratch.
Might not have ever pitched pg, since they're hardware and probably needed a bigger initial investment than YC provides to make sense, but Blue River Technologies is probably the most ambitious thing I've heard of.
I'd like to write a program which uses Quantum Electrodynamics to simulate an electron in the presence of a point charge (i.e. a hydrogen atom).
Next, I'd like to run two instances, put them in the same space, and see if the simulation spontaneously forms a hydrogen molecule.
I'd then like to add a second electron, and see if the result is a helium atom.
Keep adding electrons until the simulation can do 6 and 8 electrons (carbon and oxygen).
Now, I'd like to put all three types of simulated atoms (H,C,O) in the same space and see if the result is organic chemistry. At this point, additional optimisations might be necessary, maybe along the lines of starting the simulation by computing simplified models and using those where full accuracy is not required.
I'd then like to continue building the system up, adding optimisations as necessary. Keep adding atoms, and seeing if ever more complex molecules result. Extend to atoms with additional electrons as required (eg. need to add Nitrogen for DNA bases).
At some point, custom hardware would be required. First GPUs/FPGAs, then custom silicon, and whatever else is state-of-the-art at the time.
See how far the optimise/extend cycle can be pushed. Amino acids, DNA, proteins, cells, organs, organisms?
Who knows how far it would get? It might remain an interesting toy, at the 0 level. The closer it is to simulating a human, the closer it is to a 9 or 10 on the "Yudkowsky Ambition Scale".
---
update: spelling
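For a sense of what even "step one" of this involves: long before QED, the non-relativistic picture of a single electron around a point charge is already a nice numerical exercise. A rough finite-difference sketch of the radial Schrodinger equation in atomic units (my own toy, far simpler than what's proposed above) recovers the hydrogen ground-state energy of about -0.5 Hartree:

```python
import numpy as np

# Radial Schrodinger equation for hydrogen (l = 0), atomic units:
#   -1/2 u''(r) - u(r)/r = E u(r),  with u(0) = u(rmax) = 0.
# Discretize on a uniform grid and diagonalize the Hamiltonian matrix.

n, rmax = 2000, 60.0
r = np.linspace(rmax / n, rmax, n)      # avoid the r = 0 singularity
h = r[1] - r[0]

# Kinetic term: second-derivative finite-difference stencil, tridiag(1, -2, 1).
kinetic = (-0.5 / h**2) * (
    np.diag(np.full(n - 1, 1.0), -1)
    - 2.0 * np.eye(n)
    + np.diag(np.full(n - 1, 1.0), 1)
)
potential = np.diag(-1.0 / r)           # Coulomb attraction to the point charge

energies = np.linalg.eigvalsh(kinetic + potential)
print(f"ground state energy ~ {energies[0]:.4f} Hartree (exact: -0.5)")
```

Going from there to two interacting electrons, let alone full QED or organic chemistry, is where the cost explodes: exact treatments scale exponentially with particle count, so the optimise/approximate cycle in the plan above ends up doing all the heavy lifting.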
That reminds me of Peter Thiel's Venn diagram of 'Sounds like a bad idea' and 'Is a good idea'. It's a pretty slim area in the middle and the 'Sounds like' part means it's hard to know until you do it.
Presumably PG can't answer because of confidentiality, but the most ambitious one I've heard is a pitch for a company claiming they could build a trillion-dollar company.
(They were working on a cement/concrete replacement that was cleaner and cheaper)
How would you even know? Sometimes the most ambitious ideas don't require more than a few hundred lines of code. It's often how you want the rest of the world to use your product that makes it ambitious, not the product itself.
I don't think Instagram set out to be acquired for $1B. That was just a pleasant after-effect. And here's pg's answer on what he would have done had he been pitched on Instagram:
"Instagram is the one we'd most likely have missed. It all depends when we'd talked to them. They were a kind of overnight success in traffic. If we'd talked to them even a day after they launched we would certainly have said yes. But before that it might have seemed too speculative."
When we invested in Instagram, it wasn’t actually Instagram. It was a company called Burbn, and the idea was roughly to build a mobile micro blogging service. Technologically, it was also different: an HTML 5 application rather than a native app...
Of course you never know what then happens. Instagram pivots to photos, and exits at ~$1B. But...
As Kevin iterated on Burbn, we made another investment in an excellent entrepreneur, Dalton Caldwell. Dalton's company, Mixed Media Labs, initially built a product called PicPlz. PicPlz aimed to be a mobile photo sharing service...
Of course, now Dalton (as app.net) is doing what Instagram (then as Burbn) set out to do - a micro-blogging service.
Huh? I have been assuming that a large fraction of VC-funded startups try for exits of at least $1 billion. (The more rational ones know they will probably fail, but that does not prevent their sincerely aiming for that target.)
I am almost certain their investors want them to aim at least that high.
Are you sure you're not confusing billion and trillion?
ADDED. I'm not trying to appear bad-ass or hardcore (and in fact, I've never been a founder because I judge that I cannot afford that level of risk of being left with a severely suboptimal monetary reward for my efforts). I am honestly trying to understand VC-funded startups.
But to answer your question and being a startup founder myself - doing a startup is just an amazing experience. It's a bit like trying heroin (not that I've done that, but from what I hear). The highs are extremely high and the lows are extremely low. Going back to a "normal" job after doing a startup is like taking methadone.
Sometimes I wish I could think rationally around this (like you) and just take a normal job again. That would most likely be better for me in every measurable way (salary, benefits etc) but I'm just having too much fun doing startups.
Just to clarify: some young men find life (making friends, college, dating, getting and holding a good job) easy, and for those young men, I do not doubt that founding a startup is the rational path -- especially if they care as much about improving the world as they do about themselves. (If you're trying to improve the world, a 1% chance of making 5 billion dollars is a much more attractive choice than it is if you care only about yourself and your family and close friends because your ability to improve the world is approximately linear in how much money you can spend whereas your ability to stay safe and happy and to keep your friends and family safe and happy is distinctly sub-linear in "spending power" once spending power gets above $100,000 a year.)
However, I do not find life easy (nor am I young).
Investors generally aim for a 10x+ return, so if an investor invests $3m at $10m pre-val they'd be aiming for a $100m+ exit.
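Rough arithmetic behind that (assumed numbers, ignoring later dilution and option pools):

```python
# Back-of-the-envelope: a 10x return on a $3m investment at a $10m pre-money.
investment, pre_money = 3_000_000, 10_000_000
ownership = investment / (pre_money + investment)   # ~23% of the company
target_return = 10 * investment                     # investor wants $30m back
required_exit = target_return / ownership           # ~$130m exit
print(f"ownership ~ {ownership:.1%}, exit needed ~ ${required_exit / 1e6:.0f}m")
```

So the "$100m+ exit" figure is the right order of magnitude without needing a billion-dollar outcome.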
They're definitely not only aiming for $1bn+ exits, although obviously those are a bonus (in the last decade there have probably been fewer than 50 billion-dollar exits in the tech space).
PG is an investor and he has stated multiple times that he'd prefer that all of his investments were aimed at the high end of the range of possible outcomes (mentioning Google's exit more than once in this context). Of course he has also stated that he understands that it is usually not in the best interests of the founders, so no pressure.
I don't see any way for him to give an answer to this that wouldn't cause hundreds of people to apply to the next YC with the exact same idea (or with the same idea with tiny meaningless variations).
I'm going to speculate for a bit, and say that some of the "Frighteningly Ambitious" ideas probably involve hardware or hardware/software combinations. Kickstarter is full of updates on how getting designs into production is harder than expected.
A guy who claimed that high-speed Internet service could be delivered over the magnetic field of AC power lines and into homes. He wanted the US Government to mandate it because it was so amazing and cheap.