
Here's why GPTs are going to have a massive impact on business:

AI in the 2010s

- As described in the book Prediction Machines, deep learning enabled a significant drop in the cost of prediction, which enabled new technical capabilities (better object recognition, etc.) and therefore new applications (automatically identifying people in images, etc.)

- Business adoption lagged. Here's an article by McKinsey "AI Adoption Advances, but Foundational Barriers Remain" that describes some of the barriers such as lack of talent and lack of available data: https://www.mckinsey.com/featured-insights/artificial-intell...

- The drop in the cost of prediction in the 2010s was primarily a drop in the MARGINAL cost of prediction.

- It still required a substantial investment (i.e. FIXED cost) to build these systems, which became a major barrier.

AI in the 2020s

- Generative Pre-trained Transformers (GPTs) have enabled a significant drop in the FIXED cost of prediction.

- Whereas AI in the 2010s required massive datasets and large, highly specialized machine learning teams, AI in the 2020s requires minimal (or no) data and minimal technical skill: basic software developers, and even non-technical folks, can build with it.

Implications

- This will enable many new classes of applications and a drastically greater adoption of AI.

- Workflows -- Dropping the fixed cost of prediction to near-zero means that the "glue" tasks completed by humans to stitch together workflows can easily be automated, enabling 100% automation of many workflows.

- Systems -- As these 100% workflow automations arise, those who can drastically rethink the design of systems will be the ones who benefit most.
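To make the workflow point concrete, here's a minimal sketch of replacing a human "glue" step (routing an incoming ticket to the right team). The `classify_ticket` function is a hypothetical stand-in for a call to a hosted LLM; the stub logic here just illustrates the interface: free text in, routing decision out.

```python
# Hypothetical sketch: an LLM standing in for a human "glue" step.
# classify_ticket is an assumed stand-in for a prompt to a hosted LLM API.
def classify_ticket(text: str) -> str:
    # A real implementation would send `text` to an LLM with a routing
    # prompt; this trivial stub only demonstrates the shape of the call.
    if "refund" in text.lower():
        return "billing"
    if "crash" in text.lower():
        return "engineering"
    return "support"

QUEUES = {"billing": [], "engineering": [], "support": []}

def route(ticket: str) -> None:
    # The glue task: read free text, decide where it goes, hand it off.
    QUEUES[classify_ticket(ticket)].append(ticket)

route("App crashes on startup")
route("Please process my refund")
```

The point is that the routing step previously needed either a human or a bespoke trained classifier (data collection, labeling, an ML team); with a general-purpose model it becomes a prompt, so the fixed cost of automating the step collapses.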


Here's my theory - I think two key developments have occurred in AI/ML:

1. Relevance Machines

- The attention mechanism in transformers has caused a significant drop in the cost of predicting relevance.

- These systems correspond to the implicit System 1 of dual-process theory, the fast, automatic mode of animal and human cognition.
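The "relevance" framing maps directly onto scaled dot-product attention from the transformer paper: each query scores every key for relevance, and softmax turns those scores into weights over the values. A minimal NumPy sketch (toy shapes, random data):

```python
import numpy as np

def attention(Q, K, V):
    # Scaled dot-product attention: scores[i, j] is how relevant
    # key j is to query i, scaled by sqrt(d_k) for stability.
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    # Softmax (numerically stabilized) turns relevance scores into
    # weights that sum to 1 across the keys.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Output is a relevance-weighted mixture of the values.
    return weights @ V, weights

# Toy example: 2 queries attending over 3 key/value pairs of dimension 4.
rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out, w = attention(Q, K, V)
```

Computing these relevance weights is just matrix multiplies, which is what makes "predicting relevance" so cheap at scale.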

2. Reasoning Machines

- Predicting relevance over human language in LLMs has caused a significant drop in the cost of reasoning (most notably in GPT-4).

- These systems correspond to the explicit System 2 of dual-process theory, the slow, deliberate mode of human cognition.

When combined, I think these systems lay the foundation for AGI and turn it into more of an engineering problem than a fundamental research problem. While that effort unfolds, I think we will begin leveraging cheaper reasoning in business and other settings.

What do you think? Do you agree/disagree? Why?

