The Influence of Bell Labs (construction-physics.com)
229 points by mooreds 4 days ago | 134 comments





During my teenage and college years in the 2000s, I was inspired by what I read about Bell Labs, and I wanted to work as a computer science researcher in industry. I've also been inspired by Xerox PARC's researchers of the 1970s and 1980s. I pursued that goal and worked for a few industrial research labs before switching careers to full-time community college teaching a few months ago.

One thing I lament is the decline of long-term, unfettered research across the industry. I’ve witnessed more companies switching to research management models where management exerts more control over the research directions of their employees, where research directions can abruptly change due to management decisions, and where there is an increased focus on profitability. I feel this short-term approach will cost society in the long term, since current funding models promote evolutionary work rather than riskier, potentially revolutionary work.

As someone who wanted to become a researcher out of curiosity and exploration, I feel alienated in this world where industry researchers are harangued about “delivering value,” and where academic researchers are pressured to raise grant money and to publish. I quit and switched to a full teaching career at a community college. I enjoy teaching, and while I miss the day-to-day lifestyle of research, I still plan to do research during my summer and winter breaks out of curiosity and not for career advancement.

It would be great if there were more opportunities for researchers to pursue their interests. Sadly, though, barring a cultural change, the only avenues I see for curiosity-driven researchers are becoming independently wealthy, living like a monk, or finding a job with ample free time. I'm fortunate to be in the latter situation, with 16 weeks per year I can devote to research outside my job.


Back in 1963, I had the privilege of participating in a program the summer before my senior year of high school. This was at GE's research lab in Schenectady. I was working (not very productively) on a project involving platinum catalysis of hydrocarbons – what eventually became catalytic converters. The kid who tutored me in calculus (and later won a MacArthur grant and founded his own think tank) worked on what was then called nuclear magnetic resonance, or NMR. Now it's the guts of MRI machines (they left off the "nuclear" for PR reasons). I wonder how many kids these days have the chance to do shit like that, and if there are any labs where long-term research like that is funded.

> the only avenues I see for curiosity-driven researchers are becoming independently wealthy, living like a monk

I came to the same conclusion. This is the path I'm following (trying to set up a company and lean FIRE). It's sad in a way because those efforts and years could have been directed to research but we have to adapt.


That was mostly how the big scientific breakthroughs came in the 1800s and 1900s: independent wealth.

That’s what a “scholar” is, and universities provided the perfect environment for that to thrive, which is no longer the case.


In the 1800’s and early 1900’s, maybe…

In post-WW2 America, though, increased funding from the state meant that large research universities, institutes, and national labs could be created. In the era when all that was working at full speed, “big scientific breakthroughs” came at such a pace that it became hard to see what was big or not.


Edison's research lab was funded by companies wanting specific inventions developed.

I think that this was Paul Graham's original ambition for YC, really: a hope that some at least of the successful founders would choose to take their winnings and implement the next Lisp Machine and similar projects. Unfortunately, as with other things, winning the SV VC game just seems to incline people to either keep climbing that same greasy pole, or to do unstrenuous rich-guy things, or some combination of those two.

I've seen that many times over by now, and sort of done it myself. It doesn't really work: you end up trading one problem for another. There is also a heavy dose of procrastination and escapism related to it. Think about how many people could do it, and do, but how few results there are.

All it took was one bored patent clerk spending idle time thinking about something that he couldn't just let go and now we have General Relativity and black holes.

I was going to make a snarky remark like “Researchers are just going to have to write on Substack”.

Then I read this: http://mmcthrow-musings.blogspot.com/2020/10/interesting-opi...

I think the alt-economy that you describe may turn up soon, at least the one I'm imagining, which doesn't involve registering for Substack.


> Our economy promotes short-term gains and not long-term initiatives; I blame over 30 years of artificially-low interest rates for this.

Don't low interest rates promote long-term thinking, perhaps to an absurd degree (e.g. the "it's okay that we hemorrhage money price dumping for 10 years as long as we develop a monopoly" playbook)? Bigger interest rate = bigger discount for present value of a future reward.
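To make the discounting arithmetic concrete, here is a minimal sketch in Python (hypothetical numbers, not from the thread): the same $100 payoff ten years out is worth far less under a higher rate.

    # present value of a future payoff under a given annual discount rate
    def present_value(future_value, rate, years):
        return future_value / (1 + rate) ** years

    print(present_value(100, 0.02, 10))  # ~82.0: at 2%, distant payoffs still look attractive
    print(present_value(100, 0.10, 10))  # ~38.6: at 10%, the same payoff is heavily discounted

So, all else equal, low rates make patient, long-horizon bets easier to justify on paper.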


I'm guessing that they're referring to the practice of investors borrowing at low interest rates to buy large amounts of stock in a company, then milking the company of all its value for short-term gains. Unless there's a significant plan in place, many companies can't really get away with long-term playbooks if they are responsible to shareholders using ownership for short-term gains (so they can quickly move to the next money-making asset).

The truth is that kind of research can only happen with a very rich monopoly.

Bell labs came about when AT&T was the monopoly telephone provider in the US.

PARC happened when Xerox had a very lucrative monopoly on copy machines.


I have come to realize that over the years, though I still believe that wealthier companies like Apple, NVIDIA, Facebook, and the like could fund curiosity-driven research, even if it’s not at the scale of Bell Labs or Xerox PARC.

On a smaller scale, there is The Institute for Advanced Study where curiosity-driven research is encouraged, and there is the MacArthur Fellowship where fellows are granted $150,000 annual stipends for five years for them to pursue their visions with no strings attached. Other than these, though, I’m unaware of any other institutions or grants that truly promote curiosity-driven research.

I’ve resigned myself to the situation and have thus switched careers to teaching, where at least I have 4 months of the year “off the clock” instead of the standard 3-4 weeks of PTO most companies give in America.


If Zuck's obsession with VR isn't curiosity-driven research, then nothing is.

$10 billion in yearly losses for something that, by all accounts, isn't close to magically becoming profitable. It honestly just seems like something he thinks is cool and therefore dumps money into.


It's an example of Zuck's curiosity. When I refer to curiosity-driven research, I mean curiosity driven by the researchers, where the researchers themselves drive the research agenda, not management.

To be fair, though, Facebook (I mean, Meta) is a publicly traded company, and if the shareholders get tired of not seeing any ROI from Meta's VR initiatives, this could compel Zuck to stop. Even Zuck isn't free from business pressures if the funding is coming from Meta and not out of his personal funds.

Back to Bell Labs and Xerox PARC: my understanding of how they worked is that while management did set the overall direction, researchers were given very wide latitude in pursuing it, with little to no pressure to deliver immediate results or to show that their research would lead to profits. Indeed, at one point AT&T was forbidden by the federal government from entering businesses outside of its phone business, and in the case of Xerox PARC, Robert Taylor was able to negotiate a deal under which Xerox's executives wouldn't meddle in the affairs of PARC for the first five years. (Once those five years ended, the meddling began, culminating in Bob Taylor's famous exit in 1983.)


As far as I know, Mr. Zuckerberg still owns a controlling interest in Meta Platforms.

Since he has 57% of the votes, he can tell everyone to pound sand.


That's right. Research needs to be bottom-up.

Managers heard that too but dramatically misunderstood what that phrase means.

I mean, at some point you either have to find someone rich who has the same curiosity as you and wants to fund it, or fund it yourself.

Is patronage a thing that even really happens anymore?

I bet (literally: I founded an XR game development company in February) that XR/VR games will indeed become a mainstream gaming platform in the next 5 years, maybe even next year. If or when that happens, headsets may become as ubiquitous as smartphones and replace a lot of monitors, especially if they succeed in shrinking them down to smart glasses, which they are clearly progressing toward.

If that happens, Meta gets 30% of the revenue associated with it.

If it does not, I'm pretty sure they can now make good smartphones, and they even have a dedicated OS. I'm pretty sure they can find a way to make money with it.

A Meta Quest 3S in itself is an insane experience for 330€, and its current main disadvantages for gaming are the lack of players and the catalogue size. Even using it as a main monitor with a Bluetooth keyboard is "possible". I would have found that improbable a few years ago, even as an enthusiast; now I could totally imagine a headset replacing my screen in a few years, with a few more improvements.


What about Musk and the push to reach Mars? While I haven't liked Musk for a long time, SpaceX has given some steely-eyed rocket men and women a pretty successful playground.

Maybe it's rare to do curiosity-driven research.

But since the days of Bell Labs, haven't we greatly improved our ability to connect a research concept to the idea of doing something useful, somewhere?

And once you have that, you can be connected to grants or some pre-VC funding, which might suffice, given that the tools we have for conceptual development of preliminary ideas (simulation, for example) are far better than what they had at Bell?


I believe this depends on the type of research that is being done. There are certain types of research that benefit from our current research grant system and from VC funding. The former is good when the research has a clear impact (whether it is social or business), and the latter is good when there is a good chance the research could be part of a successful business venture. There are also plenty of applied research labs where the research agenda is tightly aligned with business needs. We have seen the fruits of applied research in all sorts of areas, such as self-driving vehicles, Web-scale software infrastructure (MapReduce, Spark, BigTable, Spanner, etc.), deep learning, large language models, and more.

As big of a fan as I am of Xerox PARC and Bell Labs, I don't want to come across as saying that the Bell Labs and Xerox PARC models of research are the only ways to do research. Indeed, Bell Labs couldn't convert many of its research ideas into products due to the agreement AT&T made with the federal government not to expand into other businesses, and Xerox PARC infamously failed to monetize many of its inventions; many of its researchers left Xerox for companies that saw the business potential in their work, such as Apple, Adobe, and Microsoft, to name a few.

However, the problem with our current system of grants and VC funding is that they are not a good fit for riskier avenues of research where the impacts cannot be immediately seen, or the impact will take many years to develop. I am reminded of Alan Kay's comments (https://worrydream.com/2017-12-30-alan/) on how NSF grants require an explanation of how the researchers plan to solve the problem, which precludes exploratory research where one doesn't know how to attack the problem. Now, once again, this question from the NSF is not inappropriate; there are different phases of research, and coming up with an "attack plan" that is reasonable and is backed by a command of the prior art and a track record of solving other problems is part of research; all PhD programs have some sort of thesis proposal that requires answering the same question the NSF asks in its proposals. With that said, there is still the early phase of research where researchers are formulating the question, and where researchers are trying to figure out how they'd go about solving the problem. This early phase of research is part of research, too.

I think the funding situation for research depends on the type of research being done. For more applied research with more obvious impact, especially business impact, I believe there are plenty of opportunities out there that are more appropriate than old-school industrial research labs. However, for more speculative work where impacts are harder to see or are not immediate, the funding situation is much more difficult today than in the past, when industrial research labs were less driven by the bottom line and academics faced fewer "publish-or-perish" pressures.


I thought I had read somewhere that 2 weeks of vacation is more common in the USA, at least for software companies, before things like "unlimited vacation". Which is right, 3-4 weeks or 2?

I think most "tech/bay" companies offer 3-4 weeks of vacation + holidays. Some have mandatory minimums a year and ability to accrue up to 30 days of PTO at a time in my experience. (e.g. not "unlimited", but specific amounts of PTO earned/used)

I’ve had anywhere from three to six weeks depending on seniority.

What makes you think they don't fund it?

That's my conclusion as well, since the closest thing we have to Bell Labs now is Google R&D: Google has a virtual monopoly on Internet search and is able to hire excellent, well-paid researchers [1].

[1] US weighs Google break-up in landmark antitrust case:

https://news.ycombinator.com/item?id=41784599


Bell Labs was also funded by a massive monopoly.

This is not the mindset of monopolies; cutting research for the sake of short-term profits is the mindset of Wall Street's modern monopoly.

I agree with you that the modern corporate world seems to be allergic to anything that doesn't promise immediate profits. It takes more than a monopoly to have something like Bell Labs. To be more precise, monopolies tend to have the resources to create Bell Labs-style research labs, but it also takes another kind of driving factor to create such a lab, whether it is pleasing government regulators (I believe this is what motivated the founding of Bell Labs), staying ahead of existential threats (a major theme of 1970s-era Xerox PARC was the idea of a "paperless office," which Xerox saw as an existential threat to its photocopier monopoly), or purely giving back to society.

In short, Bell Labs-style institutions not only require consistent sources of funding that only monopolies can commit to, but they also require stakeholders to believe that funding such institutions is beneficial. We don't have those stakeholders today, though.


The time, motives, and management were different; the essence of my point was that in today's world even a monopoly would cut research for profit.

>The truth is that kind of research can only happen with a very rich monopoly.

A classification which includes government funding, note.


I hope that the #DeSci movement will help with this. ResearchHub is the closest so far in the space to putting a financially sustainable, crowdsourced research ecosystem together, but it's early days.

I think FAANG companies have a lot of curiosity-driven research; the output is just open source software instead of physical products. AI especially.

I lament the decline also. Hopefully all of that lost energy has been put into good use on side projects.

On a tangent, but I think it's related: the curiosity, exploration, and research by kids may be stalling too.

Just a thought.


Okay, I'm really in a sad mood, so: tell me there will be places like that again, somewhere, ever?

We need this. Like, really, we need someone to have created the Xerox PARC of the 21st century, somewhere about 20 years ago.

I honestly thought Google would be that - but apparently it's easier to fund R&D on "selling copying machines" than on "selling ads". Maybe "selling ads" earns _too much_ money? I don't know.

I know, I know, DeepMind and OpenAI and xAI are supposed to fix climate change any time soon, and cure cancer while they invent cold fusion etc, etc... and it's only because I'm a pessimistic myopist that I can only see them writing fake essays and generating spam, bad me.

Still. Assuming I'm really grumpy and want to talk about people doing research that affects the physical world in a positive way - who's doing that on the scale of PARC or Bell?


The secret hero of that time was the US government. I’m not talking about the MIC, which is still quite robust and more bad than good. I am speaking more broadly. If you had a practical PhD and were willing to show up at a place at 9:00, you could get a solid upper middle class job with the Feds where you couldn’t get fired unless you broke the law.

The government also has always kept academia afloat. It is a privilege afforded to professors to believe they do not work for the state, but they do.

Great government and academic jobs forced companies to create these labs where it was better to hire great people and “lose” some hours to them doing whatever they want (which was still often profitable enough) than have zero great people. Can you imagine Claude Shannon putting up with the stuff software engineers deal with today?

The other main change is that how to run big companies has been figured out well enough that “zero great people” is no longer a real survival issue for companies. In the 1970s you needed a research level of talent but most companies today don’t.


Something that just dawned on me is the downstream effect of the United States’ science policy during WWII and the Cold War. The Manhattan Project, NASA, the NSA and all of its contributions to mathematics and cryptography, ARPA, DARPA, and many other agencies and programs not only directly contributed to science, but they also helped form a scientific culture that affected not only government-run and government-funded labs but also private-sector labs, as people and ideas were exchanged over the years. It is a well-documented fact that Xerox PARC’s 1970s culture was heavily influenced by ARPA’s 1960s culture.

One of the things that has changed since the 1990s is the ending of the Cold War. The federal government still has national laboratories, DARPA, NASA, the NSF, etc. However, the general culture has changed. It’s not that technology isn’t revered; far from it. It’s just that “stopping Hitler,” “beating the Soviets,” and grand visions for society have been replaced with visions of creating lucrative businesses. I don’t hear about the Oppenheimers and von Neumanns of today’s world, but I hear plenty about Elon Musk and Sam Altman, not to disrespect what they have done (especially with the adoption of EVs and generative AI, respectively), but the latter names are successful businessmen, while the former names are successful scientists.

I don’t know what government labs are like, but I know that academia these days has high publication and fundraising pressures that inhibit curiosity-driven research, and I also know that industry these days is beholden to short-term results and pleasing shareholders, sometimes at the expense of the long term and of society at large.


> I don’t hear about the Oppenheimers and von Neumanns of today’s world

Sadder still is the underlying situation behind this: the fact that there's nothing of even remotely comparable significance happening in the public sphere for such minds to devote themselves to, as those men did. And that is even though the current civilizational risk is, if anything, significantly greater than in their time.


Even if you don't buy that LLMs and the transformer architecture in particular will lead to AGI, and then artificial super intelligence (ASI), the quest to try and make ASI is far more significant than anything that's ever come before. ASI would be the last thing that humans need to invent.

https://youtu.be/fa8k8IQ1_X0


We'll see if it comes to anything, first.

Until then, it doesn't have the property of being significant.


I mean it’s possible we don’t see it. Or perhaps don’t see it very much before it’s ridiculously smart. A lot of AI research is at private companies who only share some details when they formally release a model. If they could make an AI researcher AI, they may be inclined to keep it to themselves for a bit while they let it improve itself for a few generations.

I feel like having access to instant high definition video communication and to the world’s repository of text, audio, and video information at a moment’s notice from a device in your pocket is of comparable, if not more, significance.

These innovations in LEDs, battery technology, and low-power, high-performance microchips with features measured in numbers of atoms are extraordinary, and seemingly taken for granted.

Then we also have medicines that can even bend one’s desire to overeat or to drink alcohol, not to mention better vaccines, cancer therapies, and so on and so forth.


And the smartphone is making sure you'll get plenty of access to the people who will prevent you from taking the vaccines or the cancer therapies (which, sadly, have _not_ been progressing as much as we would need.)

Google immunotherapy bro.

Vaccines are in a golden age, except political assholes are stoking ignorance and rejection.

In fact the two are advancing together. Bespoke vaccines for your cancer are in trials now.


No it's not, but thanks for assuming.

I'll be sure to call you when I meet the first real person cured of cancer by immunotherapy. So far I have only met people who died of cancer. But if you tell me we're in a "golden age", I suppose it will be over soon.


A good friend was in a Nivo/Ipi clinical trial in 2018/19 for metastatic melanoma. Melanoma is among the most aggressive cancers - it killed my beautiful wife in about 6 months. My friend is still alive and doing well. Prior to those trials, the 3 year survival rate was 0. Today, the 5 year survival rate is 65%.

Before the anti-vax lunatics moved in and captured the narrative in social media, President Trump’s extensive funding and regulatory acceleration of the COVID vaccines saved many thousands of lives. The design of the vaccine was done in days and available in weeks. That’s a golden age.


Sorry for your loss. Happy for your friend.

I guess it's a case where the future is there, but we badly need to get it "more evenly distributed".


> It’s not that technology isn’t revered; far from it. It’s just that “stopping Hitler,” “beating the Soviets,” and grand visions for society have been replaced with visions of creating lucrative businesses.

Any kind of societal grand vision we had has been falling apart since about 1991. Slowly at first (all the talk about what to do with the "peace dividend" we were going to get after the fall of the Soviet Union), then faster with the advent of the internet, and faster still when social media came on the scene. We no longer have any kind of cohesive vision for what the future should look like, and I don't see one emerging any time soon. We can't even agree on what's true anymore.

> I don’t know what government labs are like

Many of these are going to be in danger in the next administration especially if the DOGE guys get their way.


> successful businessmen, while the former names are successful scientists

We’ve seen this before with Thomas Edison.


>It’s not that technology isn’t revered; far from it. It’s just that “stopping Hitler,” “beating the Soviets,” and grand visions for society have been replaced with visions of creating lucrative businesses

Universities are tripping over themselves to create commercialization departments and every other faculty member in departments that can make money (like CS) has a private company on the side. Weird that when these things hit, though, the money never comes back to the schools


The academic entrepreneur phenomenon is an absolute sink, but it exists for a reason and ought to wake people up.

Universities put a lot of pressure on faculty to win grants, and take 60-70% of the proceeds as “overhead”, which is supposed to fund less sellable research and provide job security but is, in practice, wasted.

You have to be a fundraiser and a seller if you want to make tenure, but if people are forced to basically put up with private sector expectations, can you fault them when they decide to give themselves private sector pay?


Yup. Silicon Valley would not exist without large government spending.

You can bet this spending is going to be among the first things slashed by DOGE-like efforts ("Scientists? They're just liberal elites wasting our hard-earned money researching vaccines that will change your dog's gender in order to feed it to communist immigrants.")

I suppose I could be cheered up by the irony, but, not today.


> I honestly thought Google would be that - but apparently it's easier to fund R&D on "selling copying machines" than on "selling ads". Maybe "selling ads" earns _too much_ money? I don't know.

I'm pretty sure Google Brain was exactly what you are looking for. People like to think of DeepMind, but honestly, Brain pretty much had the Bell Labs/PARC strategy: they hired a bunch of brilliant people and told them to just "research whatever you think is cool". And think of all the AI innovations that came out of Brain and were given to the world for free: Transformers, Vision Transformers, diffusion models, BERT (I'd consider that the first public LLM), Adam, and a gazillion other cool things I can't think of right now... Essentially, all of the current AI/LLM craze started at Brain.


Yes, it was basic research (guided toward the field of machine learning), but between a search monopoly and their autonomous car project, they definitely have a great economic engine to use that basic research and the talent it pulled into Google, even if a lot of it escaped.

Right. And I'm sure that if I ever get in a better mood, I'll find that the current AI/LLM craze is good for _something_.

Right now the world needs GWh batteries made of salt, cheap fusion from trash, telepathy, a cure for cancer, and a vaccine for the common cold - but in the meantime, advertisers can generate photos for their ads, which is _good_, I guess?


Your problem stems from assuming our natural state is some Star Trek utopia, and only our distraction by paraphernalia is preventing us from reaching such a place. Like we are temporarily (temporally?) embarrassed ascended beings.

Humanity’s natural state is abject poverty and strife. Look at any wealth graph of human history and note how people are destitute right up until the Industrial Revolution, and then the graph explodes upward.

In a way we (well, especially the West) are already living in utopia. You’re completely right that we can still vastly improve, but look back at the progress we already made!


It does sound like you're in a particularly bad mood, so yes, maybe our outlook does change. Maybe it helps to think of a darker timeline where Google would have kept all of these advances to itself and improved its ad revenue. Instead it shared the research freely with the world. And call me naive, but I use LLMs almost daily, so there definitely _is_ something of value that came out of all this progress. But YMMV, of course.

I would love to find a way LLMs could help me right now.

"Hey, xchatgclaudma, please conjure up time and energy out of thin air?"


Current LLMs seem to be OK code generators when constrained well. I'm building a product with my hands tied behind my back when I only use Claude. It's... working.

Can't you get telepathy from training AI on functional MRI data? And then finding a way to pinpoint and activate brain regions remotely?

I mean brain-machine interfaces have been improving for quite a while.

Telepathy might even already exist.


Indeed. Come to think of it, telepathy would bring chaos to society, ruin everyone's life, and open considerable advertising space - it's way more likely to be invented soon than a vaccine for the common cold.

“Want to remember a long lost cherished moment with a now deceased loved one? Think about this product for 15 seconds first.” I’m positive it’s being worked on.

Rolling back the 1980s neoliberal cultural ideals of letting markets and profits be the highest arbiter of societal direction is the key.

Silicon Valley hippies have been replaced by folks focussed on monetisation and growth.

It’s not great for the West, but those problems are being tackled. We just don’t get to read about it because of ‘China bad’ and the fear of what capital flight might do to arguably inflated US stock prices.

https://www.energy-storage.news/byd-launches-sodium-ion-grid...


Once we get superintelligence — some time next year I’d say — then we will have a tool to make all those dreams come true.

What are you basing such a bold prediction on?

It's not bold if the comment author lives in a state that switched clocks to EPT (Elon-Prediction-Time) this weekend.


For instance, ARC prize now at human level… https://x.com/akyurekekin/status/1855680785715478546

I don't know why it seems bold. So many signs.


Extreme ultraviolet lithography originated with a paper out of Bell Labs in 1991; then the US government funded multiple research efforts via the national nuclear research labs, which came up with a potential implementation method, but it took 20 more years of trial and error by ASML to make a practical machine they could sell. Other companies tried and gave up because of the technical challenges. This advance is responsible for modern chip fabs' fastest chips.

No, but you also shouldn't romanticize Bell Labs _too_ much. It was not exactly a fun place to work. You got a 2 year postdoc and then you were out, and those two years were absolutely brutal. Its existence was effectively an accounting fluke. Nothing like it really exists now because it would largely be seen as an inefficiency. Blame Jack Welch, McKinsey, KKR, HBS or whoever you like.

I was a postdoc there and I would not say it was brutal. I got a very good salary (far above an academic postdoc), health benefits, relocation, the ok to spend $1000/day on equipment with no managerial review[0], and access to anyone and everyone to whom I felt like speaking. I read horror stories in Science and other journals about people's experiences elsewhere and am grateful that I was spared so much nonsense. It was the greatest university I have ever set foot in. I still feel unworthy of the place.

[0]This was in the early '90s when $1000 went a long way.


> It was not exactly a fun place to work.

I couldn't disagree more, but perhaps the time I was there (late 90s) was different.


Just curious, are you speaking from personal experience?

Hmm

There are companies that push many different technologies.

The Samsung conglomerate does everything; Intel does hard things (semiconductor research, manufacturing) and soft things (computer science/software).

Maybe we're at the point where you need to specialize in one industry, so achieving a range of things like they did at Bell is harder?


The answer is Google. I'm not sure why you're being so dismissive.

When we look back in 20 years, things like the transformer architecture, AlphaFold (which just won a Nobel prize), and Waymo are going to have improved the world in a positive way as much as anything Bell Labs did, and certainly more than PARC.


>I honestly though Google would be that - but apparently it's easier to fund R&D on "selling copying machines" than "selling ads". Maybe "selling ads" earn _too much_ money ? I don't know.

Google has put quite a bit of resources into quantum computing research. It's not just for selling ads, though I have no doubt that it will be used for that among other things. But right now there's still no guarantee it's going to pay off at all.


Everyone wants Bell Labs, but not the thing that made it possible — high corporate profit taxes. They were making bucketloads of monopoly money and had to put it somewhere or taxes would take it away.

I think a lot of people want high corporate profit taxes

People own corporations. If you tax the income of the people who own corporations, it's not strictly necessary to tax the corporations themselves; just pass their profits through to the people and tax the people. There are good reasons not to double-tax, first at the corporate level and then again at the personal level.

This is similar to the perverse incentive of share buybacks. It's just more tax efficient to pump stock value than pay proper dividends.

Corporations are apparently people too! They should be taxed like any other flesh and blood human!

I really want gov around the world to take back governance to be solely for the benefit of people. None of this greedy corruption lobbyist stuff.


How else will we know if seatbelts are effective in Africa?

Congrats, you've been brainwashed by billionaires into voting against your interest.

Har

Yes, disincentivize dividends and buybacks, reincentivize investments, R&D and others.

That makes no sense. Dividends and buybacks are both taxed.

From what I understand, they weren't really allowed to sell anything not related to telephony, either.

So they could create Unix, but they weren't allowed to profit off of it. So they just gave it away, because why not.


I'm just reading the book The Idea Factory; the amount of innovation was incredible. Lasers, early satellites, transistors.

And it was all done, apparently, at least in the beginning, because they hired smart people and they let them do what they wanted.


Almost all of the things I can think of that came from Bell Labs are things that helped their business. The only ones I can't see a business case for are Hemo the Magnificent and similar films, but I'm sure those helped with PR.

Monopoly may have helped them pay for such R&D, but vertical integration is what made it possible for so much R&D to be relevant to the business.


I think it ended up helping their business because they deliberately made it so later, but it's not clear how some inventions would have helped their business in advance. From the book, there were quite a few things that sat on the shelf for years until someone figured out a way to use them.

All true, but monopoly profits sure help.

As do high corporate tax rates

I know the author, Jon. Delightful guy

Unix, C, and C++ too.

And S, the statistical data language that was the ancestor of S-PLUS and R.

did not know about that, thank you.

Haven't gotten to that part of the book.

They don't cover it in the book, unfortunately. A serious omission, in my eyes.

Pasting my comment on the article https://www.construction-physics.com/p/what-would-it-take-to... :

> RCA Laboratories/the Sarnoff Research Center is surely one of the most important of the American corporate labs with similarities to Bell Labs. (It features prominently in Bob Johnstone's We Were Burning https://www.hachettebookgroup.com/titles/bob-johnstone/we-we... : it has a big role in the history of the Japanese semiconductor industry, in large part because of its roles in the development of the transistor and the LCD and its thirst for patent-licensing money.)

>> In Dealers of Lightning, Michael Hiltzik argues that by the 1990s PARC was no longer engaged in such unrestricted research decoupled from product development.

> According to Hiltzik and most other sources, the PARC Computer Science Lab's salad days were over as early as 1983, when Bob Taylor was forced to leave, while the work of the other PARC labs focussed on physics and materials science wasn't as notable in the period up to then.

Seriously: if this kind of thing interests you at all, go and read We Were Burning.


Bell Labs is wonderful to read about, and I've really loved delving into it. Alan Kay's talks in particular.

However, it should be seen as a starting point! Alternative hypothetical pasts and futures abound. One issue is that the stuff from the past always looks more legendary seen through the lens of nostalgia; it's much harder to look at the stuff around you and to go through the effort of really imagining the thing existing.

So that's my hypothesis - there isn't a smaller volume of interesting stuff going on, but viewing it with hope and curiosity might be a tad harder now, when everyone is so "worldly" (i.e., jaded and pessimistic).

Proof:

https://worrydream.com/ (Bret Victor)

and the other people doing dynamicland and realtalk, both discussed straightforwardly here:

https://dynamicland.org/2024/FAQ/

https://solidproject.org/about -- solid, tim berners-lee and co, also.

https://malleable.systems/catalog/ -- a great many of the projects here are in the same spirit, to me, as well!

https://spritely.institute/ -- spritely, too

https://duskos.org/ -- duskOS, from Virgil Dupras

https://100r.co/site/uxn.html -- 100 rabbits, uxn, vibrating with new ideas and aesthetics

https://qutech.nl/ -- quantum research institute in the netherlands, they recently established a network link for the first time I believe

etc etc. These are off the top of my head, and I'm fairly new to the whole space!


> and the other co-inventor of the integrated circuit was Fairchild Semiconductor, which as far as I can tell didn’t operate anything like a basic research lab.

Kind of a strange statement. Fairchild took the "traitorous eight" from Shockley Semiconductor, which was founded by William Shockley, who famously co-invented the transistor at Bell Labs (and who named the "traitorous eight" as such.)

So while Fairchild "didn’t operate anything like a basic research lab", its co-invention of the IC was not unrelated to having a large amount of DNA from Bell Labs.


The research done at Bell Labs is the foundation of the information age; however, Bell Labs also sowed the seeds that made the post-divestiture AT&T a doomed enterprise from the start. There is a reason they only lasted ~20 years from divestiture 'til they were bought by one of their former children, SBC.

For most of its history, AT&T provided the best-quality telephone service in the world, at a price comparable to anyone else's, anywhere.

There were structural issues with the AT&T monopoly however, for example cross subsidization - the true cost of services was often hidden because they would use optional services (like toll calling) to subsidize basic access, and business lines would cross subsidize residential service.

The lengths to which AT&T fought foreign connections (aka bring-your-own-phone) probably hastened its demise. In the end, the very technologies that AT&T introduced would turn long distance from a high-margin into a low-margin business. The brass at AT&T had to know that, but they still pinned the future of their manufacturing business on it, a manufacturing business that had never had to work in a competitive environment yet was now expected to. Because of this and other factors, divestiture was doomed to failure.

I'm a believer in utilities being a natural monopoly, but AT&T was an example of effective regulatory capture. It did not, and does not, have to be this way; however, it was.


Eventually the cable guys were coming for AT&T's lunch, regardless of what happened with their monopoly. It's the rare circumstance where two seemingly unrelated utilities converged into the same business (moving bits, instead of analog video or audio), and we lucked into having two internet facilities in large portions of the country.

Local access is a very different issue, and I don't really disagree, but the local copper loop becoming broadband is an accident of technological evolution.

When the decisions were made about divestiture, that bit was not obvious.


My dream was to do research, but I didn't see academics as happy people. Instead, I had a great career, and now I mess around on my own programming language. Maybe it will become something, maybe it will not. Life is a mystery.

It has been fun spending a few years building my vision ( https://www.adama-platform.com/ ), but it's hard to communicate it all. I've come to the conclusion that the real magic that makes research work is having a place where people can collaborate and work together in a shared culture.

At this point, I don't want to hire people into my endeavor, and I can't seem to find a co-founder to do commercialization right.


What were the other major R&D labs, corporate or otherwise of this era (roughly: 1880s through 1980s)?

I can think of: AT&T, DuPont, Kodak, Xerox PARC, Westinghouse, IBM, GE, the original Edison labs (best as I can tell acquired by Western Union), Microsoft, Rockefeller University, Google Research.

Of notable industries and sectors, there's little I can think of in automobiles, shipping, aircraft and aviation (though much is conducted through NASA and the military), railroads, steel (or other metals/mining), petroleum, textiles, or finance. There's also the Manhattan Project and the energy labs (which conduct both general energy research and, of course, much weapons development).

(I've asked similar questions before, see e.g., <https://news.ycombinator.com/item?id=41004023>.)

I'd like to poke at this question in a number of areas: what developments did occur, what limitations existed, where private-sector or public / government / academic research were more successful, and what conditions lead to both rise and fall of such institutions.


MIT's AI Lab with Lisp, ITS and such. They were far ahead of Unix, Bell Labs and Berkeley.

Which suggests SAIL (Stanford) and early Internet collaboration (UCLA, UCSB, and others).

Various advanced computer facilities: UCSD, University of Illinois Urbana-Champaign. Probably others at Carnegie-Mellon, Georgia Tech, and elsewhere.


It's coincidental that this article has reached the front page today. I just picked up The Idea Factory: Bell Labs and the Great Age of American Innovation by Jon Gertner. (I'll let you know how it is when I finish.)

One thing to consider is that Bell Labs didn't innovate for altruistic reasons like furthering the human race or scientific understanding. They innovated to further AT&T's monopoly and to increase shareholder value. This doesn't seem that different from what Meta, Google, NVIDIA, etc. are doing. Maybe in 10, 20, or 30 years we will view the research that modern tech companies are doing through the same lens.

Although, I do admit that the freedom with which these scientists and engineers were able to conduct research is something special. Maybe that's the real difference here.


People always say "oh if you had this or that system we would be so much more innovative", but it's all wishy-washy unprovable statements pulled from 3rd hand sources. If you distill the problem down to it's essentials you are asking "how do I design (human in the loop) AGI", and until you can write a program for it, it will remain an artform, and the success or failure of art depends entirly on it's cultural context.

Rather than p(r)aying for the smartest people who have ever been born, design a corporation that can have the average high school dropout work in R&D and you will print money, innovation and goodwill.


In line with this, the talk from Richard Hamming, "You and Your Research" (June 6, 1995):

https://m.youtube.com/watch?v=a1zDuOPkMSw


If you want to understand and appreciate the Bell Labs innovation ecosystem from the first-hand account and perspective of one of its researchers, while at the same time learning to perform research-based innovation, please check out this book written by one of its celebrated researchers, Richard Hamming [1].

[1] The Art of Doing Science and Engineering:

https://press.stripe.com/the-art-of-doing-science-and-engine...


"Unpacking the reasons for this increase goes beyond the extent of this essay, and it certainly wasn’t just because of AT&T and Bell Labs. But Bell Labs’ achievements, particularly the invention of the transistor in 1947, seem to have prompted the decisions of many companies to start their own R&D labs, and influenced how those labs were structured."

1947 was a magical year. That announcement had profound implications. They had effectively invented something that would replace the huge existing base of vacuum tube components with miniaturized transistors. This miniaturization significantly influenced von Neumann's recommendations for the ballistic missile program. Many of the discrete-component systems manufactured during this time remained in service into the 1980s.

This is a photo of a D-17B guidance computer that deployed on the Minuteman Missile in 1962, 15 years after creating the transistor, and was typical of military printed circuitry at the time for general purpose computers, disk/drum storage drives, and printers.

https://upload.wikimedia.org/wikipedia/commons/3/38/Autoneti...

"The D-17B weighed approximately 62 pounds (28 kg), contained 1,521 transistors, 6,282 diodes, 1,116 capacitors, and 5094 resistors. These components were mounted on double copper-clad, engraved, gold-plated, glass fiber laminate circuit boards. There were 75 of these circuit boards and each one was coated with a flexible polyurethane compound for moisture and vibration protection."


I'm a bit surprised that RCA's research division didn't get a nod here. They came up with color TV, then did basically nothing of note for a good 25 years. They're a big part of the reason why RCA is nothing more than a label slapped on imported slop nowadays.

Once Facebook and tiktok got invented that was pretty much it for big new discoveries. Everybody now just posting crap and eating Cheetos.

Isaac Morgan is an unsung hero from those ages, man I miss working for that man

Deets?

Share the why, not (just) the what.


What was the influence of the Manhattan Project on Bell Labs?

Impostor syndrome was real at that joint!

New Jersey back to this when!?

I think the mythos of Bell Labs and other think tanks of the Cold War era is overstated to some extent. https://greyenlightenment.com/2024/07/19/the-decline-of-cold...

These organizations employed too many people of relatively mediocre ability relative to output, leading to waste and eventual disbandment. Today's private-sector FAANG+ companies are making bigger breakthroughs in AI, apps, self-driving cars, etc. with fewer people relative to population and more profits. This is due to more selective hiring and performance metrics. Yeah, those people from the '60s were smart, but today's STEM whiz kids are probably lapping them.


I’m hoping this is sarcasm deserving my heartfelt belly laugh. FANG (or whatever the backronym is these days) “selective hiring” is just puzzle-driven mediocrity with a ridiculous amount of self-congratulation at their own good fortune. And “performance metrics”, give me a break - product innovation is in the toilet and product quality even further down the drain. Unless you’re talking about advanced PR and market manipulation techniques to capture and retain ad revenue… definitely genius there.

I agree with the vibes of your comment, but I have to reply to this:

> Unless you’re talking about advanced PR and market manipulation techniques to capture and retain ad revenue

Those very much _are_ the goals at those enterprises.


Indeed they are, but my point is that _these_ goals are hardly admirable. At the same time, the claimed innovations aren’t real, at least in the sense that anything in any issue of the Bell Labs Technical Journal was. “Apps”, etc? This is like giving the Medellín cartel credit for their hippo culture while ignoring the basis of their real success.

Off the top of my head: Shannon, Nyquist, Hamming. Lapping them you say.

I think my reading of your comment is: that's just wrong. And I tend to agree. The current STEM and FAANG activity is second order work in the main. I wouldn't hold AI work up as a paragon, myself. It's diverting from progress across a field.

I have hopes of a resurgence of operations research and linear optimisation as goods in themselves: we could be plotting more nuanced courses in dark waters of competing pressure. Decision systems support across many fields would remove subjective, politicised pressures.


Do you think there is room for a resurgence in linear optimization?

Linear programming, and even integer linear programming are pretty well solved practically speaking.
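As a rough illustration of how routine this has become, here is a toy LP solved with SciPy's off-the-shelf HiGHS backend (made-up coefficients, just a sketch):

    # maximize 3x + 2y subject to x + y <= 4, x <= 3, y <= 2, x, y >= 0
    # linprog minimizes, so negate the objective
    from scipy.optimize import linprog

    res = linprog(c=[-3, -2],
                  A_ub=[[1, 1], [1, 0], [0, 1]],
                  b_ub=[4, 3, 2],
                  bounds=[(0, None), (0, None)],
                  method="highs")
    print(res.x, -res.fun)  # expected: x = 3, y = 1, objective = 11

The hard part is rarely the solver; it's getting the model and the data right.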


I tried using some online systems to help formulate weighted-sum decisions over unrankable choices, and it's bloody hard work getting people on board. I think how the logic is presented could improve.

This stuff, while old, is not routine for decision makers. They don't seem to grok how to formulate the questions and the choices.
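For what it's worth, the arithmetic behind a weighted-sum decision is tiny; the sticking point really is the framing. A minimal sketch (made-up options, criteria, and weights):

    # score each option as the weighted sum of its criterion scores (all on a 0-1 scale)
    weights = {"cost": 0.5, "risk": 0.3, "speed": 0.2}
    options = {
        "option_a": {"cost": 0.8, "risk": 0.6, "speed": 0.9},
        "option_b": {"cost": 0.6, "risk": 0.9, "speed": 0.7},
    }
    scores = {name: sum(weights[c] * vals[c] for c in weights)
              for name, vals in options.items()}
    print(scores, max(scores, key=scores.get))  # option_a wins at 0.76 vs 0.71

Eliciting the weights and the criteria from a room of stakeholders is where it gets hard.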


I think it’s fundamentally hard to make tools like that because models can be sensitive to specifics, so dumbing them down is generally not great

> making bigger breakthroughs in AI, apps, self-driving cars

Those weren’t really the topics people were interested in at the time (depending on your definition of AI).

The shoulders of giants, as they say.



