Businesses are in for a mighty debt hangover (economist.com)
99 points by mfiguiere on May 16, 2023 | 217 comments



> Suddenly, borrowing money in order to fork it over to shareholders makes less sense in a world of higher interest rates

What a lot of people probably don't realise when they read the term "business" is how big a fraction of management attention, incentives, and remuneration is pure financial engineering: a parasitic game between financiers, accountants, the tax code, central banks, and company treasuries that has nothing to do with the mission of the company.

A modern corporation is a gigantic financial spreadsheet with its real assets a mere footnote.

In its most benign form, debt is an accelerator for a healthy, growing business. It is possible to get a debt hangover by misestimating actual business prospects, and that is a serious enough challenge on its own.

The virtual reality of Wall Street and the obese financial system only serves to inflate volatility. That is by design, because volatility is precisely what it is milking.


>What a lot of people probably don't realise when they read the term "business" is how big a fraction of management attention, incentives and remuneration is pure financial engineering

An anecdote I've shared before: I was talking to an acquaintance from my child's kindergarten who was an investment guy, and I said I thought that financial services were increasingly capturing too much value. He countered that that was because finance was where "all the innovation is."

He later got indicted for fraud and couldn't explain how a lot of money disappeared.


As a finance guy I don't see a lot of core innovation. Somehow I've been able to understand pretty much every financial instrument I've come across using a few basics that you learn immediately when you get into the business:

Time value of money, optionality, non-arbitrage.

There's a million ways to cook up a financial product from those and I'm not going to pretend I know them all, but it's like how a chef can look at a dish from another culture and still more or less describe what it is: sugar, fat, acidity, heat, etc.

When exciting things have happened in finance it's often been purely because the exciting thing became legal. Things like buybacks or the repeal of Glass-Steagall. The other thing that can happen is that some sales team finds a way to sell the thing, which creates an ecosystem around it, like credit default swaps. Rarely is it the financial contract itself that is so inventive as to create a market.


> Time value of money, optionality, non-arbitrage.

Another big one is "opacity". A financial product that is convoluted and needlessly complicated allows the most sophisticated financial players to consistently take advantage of the rest.

Much in finance is zero sum. Your outperformance is somebody else's underperformance. Financial products that are understood by everybody will not make you a lot of money, because they will trade approximately at their fair value. This is why so much in finance is complex instead of simple.

By analogy: PDF is terribly complicated because Adobe didn't want their PDF software to become commoditized. A bad and illogical file spec benefits Adobe at the expense of everybody else.


JavaScript frameworks?


An important aspect of innovation in financial services is improved risk management.

A mildly cynical person might start giggling uncontrollably at the idea that the financial sector propagates good risk management technology, yet the picture is quite nuanced: our world would be a more brutal place without insurance contracts and pension schemes. So there is a grain of truth. There are genuine, if unfinished, financial innovations, and they are important. The trouble is that, by and large, recent decades have seen nothing of lasting utility come from the financial sector. In fact, the whole thing seems to be unwinding.

Take, for example, the suddenly hot topic of interest rate risk, which is blowing up regional US banks left and right. This was supposed to be a solved risk management problem. Interest rate derivatives were developed after the Savings and Loan crisis, and investment banks have made gazillions in revenue peddling "interest rate risk management" tools.
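The mechanics behind that risk are simple enough to sketch: a fixed-rate bond's market price falls when yields rise, and the longer the maturity, the bigger the hit. A minimal illustration (all numbers hypothetical, not any bank's actual book):

```python
# Illustrative sketch of interest rate risk: a long-dated bond loses
# market value when yields rise. Numbers are made up for illustration.

def bond_price(face, coupon_rate, yield_rate, years):
    """Present value of annual coupons plus principal at maturity."""
    coupon = face * coupon_rate
    pv_coupons = sum(coupon / (1 + yield_rate) ** t for t in range(1, years + 1))
    pv_principal = face / (1 + yield_rate) ** years
    return pv_coupons + pv_principal

# A 10-year bond bought at par when yields were 1.5%...
p0 = bond_price(100, 0.015, 0.015, 10)   # ~100.0
# ...marked to market after yields jump to 4.5%.
p1 = bond_price(100, 0.015, 0.045, 10)

print(f"price before: {p0:.2f}, after: {p1:.2f}, "
      f"loss: {100 * (p0 - p1) / p0:.1f}%")
```

A loss of roughly a quarter of the book value, from a rate move of the size actually seen in 2022, is the kind of hole that the "solved problem" tools were supposed to hedge away.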

Much has been said about the role of regional bank managers, the role of social media, the role of watered-down regulation, etc. What I have not seen discussed much is the role of those risk management intermediaries. MIA?

I bet at some point a smart cookie will follow the money and publish a lurid account of how the "solved problem" went all pear-shaped.


Is there a book about this you recommend? Those basic concepts?


Hull. Options, Futures, and Other Derivatives.

Also Wilmott. Can't remember the name but it will be obvious.

It's pretty much high school math: you establish what the cash flows are, and the value of the thing flows from there. When it comes to optionality you probably haven't done stochastic calculus in high school, but you can follow along anyway. Both books will explain interesting things like how to price an option on an option, that kind of thing. Non-arbitrage is what holds the whole thing together: if the price didn't follow {rules}, then you could do {steps} to make free money.
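To make the "high school math" claim concrete, here is a sketch of two of those basics: time value of money and one no-arbitrage relation (put-call parity for European options). All numbers are invented:

```python
# Sketch of two basics the comment names; figures are invented.

def present_value(cash_flows, rate):
    """Time value of money: discount each future annual cash flow to today."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

# A note paying 50 a year for 3 years, discounted at 5%:
pv = present_value([50, 50, 50], 0.05)

# Non-arbitrage in one line: put-call parity for European options says
# C - P = S - K/(1+r)^T. If prices violate it, you can trade the pieces
# against each other for free money, so the market pushes the gap to zero.
def parity_gap(call, put, spot, strike, rate, years):
    return (call - put) - (spot - strike / (1 + rate) ** years)

print(f"PV of the note: {pv:.2f}")
```

Pricing an option still needs a model for the optionality piece (Black-Scholes etc.), but the scaffolding really is just discounting and "no free lunch" constraints like the one above.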


Paul Wilmott Introduces Quantitative Finance


your anecdote is very apposite and imho applies beyond just the financial realm.

financial innovation is dirt cheap. get enough aligned interests around a table and you can create new contracts, new financial markets, etc. out of thin air. it's all fundamentally just a game around information control: an intermediary forcefully interjected into the economy, extracting rents.

now, the quiz: in which other sector is most "innovation" dirt cheap and essentially just a game around information control, an intermediary forcefully interjected in the economy and extracting rents?

the only thing that could be worse than what we have today is when those two "innovative" sectors merge. yet this outcome is inevitable and it will happen fairly soon.


What's the other sector? Tech?


Not the poster you are replying to, but IMO it clearly applies to a large number of tech sectors, medical care, real property, and higher education.


Pharmaceuticals?


It may be true that at the top of any corporation is a lot of financial engineering and games. But at the bottom is the actual work, and when the leaders neglect the bottom is when the vultures swoop in to remind everyone.

There is a sick game in the world. The cause is not the workers, it’s the elites. However cold water washes away the crabs.


That actual work is in many cases just for keeping up appearances. The company owners and leadership know that the product, or whatever their staff is working on, is doomed to fail from the beginning, but they need something to show to bankers and politicians to increase borrowing.

Just think about how many truly awful competitors there are in any market or product sector. If you noticed, don't you think they've noticed?


Both hands are required to clap, and frankly workers play their own games too. The vast majority of folks have no issues enabling terrible situations for a paycheck, and pretending to work isn’t exactly new either.


>The vast majority of folks have no issues enabling terrible situations for a paycheck

Is it really "enabling" when there's a metaphorical (and often literal, in the case of police) gun to one's head? Most of us can't opt out of this arrangement, and that's by design. There isn't even a "run away into the forest" option, because all the forests are already owned by someone. Being homeless is effectively illegal. Being dispossessed in general is illegal, even though the system is predicated on dispossessing the populace to enrich those at the top.

It's less like a dysfunctional relationship and more like a hostage situation. Don't blame the hostages, please.


If the hostages are also usually the ones holding the guns, it’s an apt comparison.

If labor never walks, it has no power.


>If labor never walks, it has no power.

Agreed, the power we have must be exercised. Use it or lose it. There are more of us than there are of them, but getting started is the hard part. It's a tall order to ask someone to stick their neck out first.

I'll stop here as HN is not the place to plan a revolution :)


If someone won’t take any risk - they get the corresponding reward. Law of the jungle and all.


They pretend to pay us, we pretend to work.

What's my incentive to make the considerable effort, burn what political capital I may have, and use my little spare energy to overcome the inertia of the terrible situation? I save the company $400k by eliminating unnecessary services. Great, a challenge coin. I'll remember this next time you ask for savings opportunities or volunteers.


Yup. And in many places that’s ‘the deal’.

It may be worth considering overlap with burnout, and how staying (without really considering other options or addressing the underlying things going on personally) ends up damaging the person who stays.


All money is created by banks issuing debt, so it makes complete sense that people want to be as close to the money tap as possible to siphon from it. The alternative is trying to build up a productive business in the extremely competitive and harsh free market, where you have to deliver to customers, who are free to choose. Who wants that hassle when you can just pretend to be a business and keep on borrowing?


>so it makes complete sense that people want to be as close to the money tap as possible to siphon from it

This is known as the "Cantillon Effect"[0], and it also explains the concept of biflation[1]: if you're close to the money tap and there's a surge of money, you can profit from arbitrage before that money hits the larger economy and spreads out. The further away from the money tap you are, the less you're able to profit from changes in money supply. Everyone who isn't close to the money tap is on the losing end of this arbitrage opportunity.

This creates perverse incentives for large well-connected business entities, typically centered around schemes involving juggling debt.

>Who wants that hassle when you can just pretend being a business and keep on borrowing?

Yep.

Eventually every business model starts turning into some variation of a pump and dump scheme. We see this most prominently in crypto, but also in the traditional economy (stock buybacks, real estate speculation, startups running on VC fumes, etc.)

When enough of the economy turns away from production to speculation, the entire thing falls over and implodes. We don't know the exact point when this happens, and it's probably impossible to model, but history tells us it eventually does happen. People like Michael Burry believe this implosion is imminent, but things keep getting patched at the last second and the machine keeps lurching along.

Despite the fancy math and terminology, the stock market (and the larger financialized economy) is ultimately based on "vibes" and the vibes now are bad.

Even the 2008 crash didn't put a stop to these speculation games. Nobody knows how long "we" can keep this up. It's like a slot machine with a bomb inside. The rich guys pulling the lever are getting most of the coins, and the rest of us will get mostly blast damage.

[0] https://en.wikipedia.org/wiki/Richard_Cantillon

[1] https://www.investopedia.com/terms/b/biflation.asp


> A modern corporation is a gigantic financial spreadsheet with its real assets a mere footnote.

This is the opposite of the reality. Financial statements following GAAP (USA) or IFRS (EU + many other major countries) explicitly list assets on the balance sheet with associated footnotes that explain more complex areas in greater depth. Often, footnotes will include more detailed information for those assets, such as depreciation schedules (particularly for PP&E: Property, Plant, and Equipment).

A company can raise capital either via debt or equity financing[0]. Relevant footnotes usually include the terms of how capital was raised, which is normally to fund the core business operations for most companies I have come across.

> The virtual reality of Wall Street and the obese financial system only serves to inflate the volatility. That is by design because volatility is precisely what it is milking.

I agree there are indeed Wall Street firms that conduct unhealthy business, but I believe this is more common within Wall Street banks and investment firms rather than most American businesses as your comment seems to be implying.

I encourage you to read a set of financial statements to better understand how a business truly operates from an accounting perspective. Apple’s financial statements[1] are a good place to start since they have many PP&E assets, conduct financial services, and regularly deal with more complex accounting topics such as foreign currency conversion and derivatives for hedging. It’s truly valuable insight into how a company that’s incredibly expansive operates in (my opinion) a rather healthy manner all things considered.

[0] https://www.investopedia.com/ask/answers/032515/what-are-dif...

[1] https://s2.q4cdn.com/470004039/files/doc_financials/2022/q4/...


This is my gut feeling about a lot of the activity going on in modern corporations. Private equity firms, for example, have historically been blamed for combining a lot of once-great engineering companies in the UK into a bloated mess of activities, which ends up ultimately sacrificing competitiveness.

However, I feel like your average voter, angrily shaking my fist at bankers, when trying to put it into words (I honestly don't have a lot of insight into how it all works).

Have you got any good book/essay recommendations on over-financialisation?



It's a necessary pain. Better for the bubble to gradually deflate and find a steady state than to keep expanding unchecked till we get to a repeat of 2008.


What a coincidence it will be a pain absorbed mostly by those working pay check to pay check and not the politicians and bankers with a legal moat of grifting off our agency.


How are politicians going to absorb an economic shrinkage? What percentage of total personal net worth is owned by federal politicians in the US? Maybe it's higher than I would have guessed, but there are only 537 federally elected offices and no federally elected billionaires.

I see some numbers thrown around that the total net worth of congress is $2-3 billion. That's only $9.00 per American at the upper end. Even a complete personal bankruptcy of all of them wouldn't have measurable economic effect on its own.


The second order effect of bankrupting them would potentially be worth it though.


My reading of their comment was the politicians enable such unchecked bubble growth.


Publicly that’s their wealth and benefit for holding office. You think the politicians pay for much of the luxury they experience?

You should Google the recent headlines about SCOTUS


How much can their personal consumption really add up to, even off the books? It's 537 people.


TIL Mitt Romney is only worth $300 million


Who does a recession/depression hurt? Also the working class. This is the best option for everyone.


That’s a way to look at it. Not the only one.

Depressions/recessions have mental health impacts; people kill themselves in despair, lose homes, savings.

But go ahead and live in your hypernormalized fiat economics bubble where there are no externalities. The price of an ounce of gold and the speed of light are both immutable properties of reality I guess.


Recessions were much worse for everyone when the gold standard was still in use. The Bank of England has data going centuries back.


The best option for everyone would be to dethrone the capitalists and set up democratic socialism.


Socialism still relies on capitalism to function.


There's the capitalism where I have $200 and give it to you to paint my house.

Then there’s whatever Wall Street does, that has nothing to do with the average sugar coated view of what capitalism is.


I don't even have a problem with much of what Wall Street does. I just think that (via democracy) people have the right to allocate labor and resources towards non-profitable but humanitarian ends.


Everyone except the capitalists. And the Kulaks.


I'm not so sure about the capitalists. I think it's actually pretty bad for a person's character to pursue money at the expense of other things and to have a lot more money than you need.


You'd be surprised who the greatest allies of democratic socialist governments are. Take a look into it; it's out there in the open.


No idea why people are downvoting me. Socialist democratic governments have always been openly allied with the large private banks, i.e. the capitalists. It is not something they try to hide; they are proud of that model. Socialist and capitalist leaders are often the same person, changing roles depending on whether he or she is in office.


Right--a gradual deflation with absolutely no consequences for the grifters.


I'm an economics noob, but from a lay perspective, I do wonder how sustainable the American economic model is.

The debt is higher than the GDP. That's not really a problem as long as you can keep printing money and that money will be absorbed by other economies.

However, for the first time in over 200 years, we're reaching a point where the largest global economies are NOT Western nations. While Japan, France, the UK, and other EU allies could be counted upon to keep buying your currency, can you be as dependent on India, China, Indonesia, and Brazil?

What happens in 2050 when the amount of money needed to keep propping up the system keeps going up, yet the economies with the money and scale to absorb that extra money are not your allies?


> The debt is higher than the GDP.

My mortgage is higher than my annual income

Unlike me, a country doesn't have an end-of-income date in sight either; it doesn't retire.


Tax revenue is a tiny fraction of GDP, so the national debt is already many multiples over income. The current $1 trillion annual interest payments on the debt are larger than the entire defense budget. Worse, annualized interest payments are still going up by about $200B per quarter.


This is a valid point, but at some point a threshold is hit where you are basically doing nothing other than servicing past debt, and that would seem to be a real problem.


The US will print more money to pay its debts. Inflation will make US debt an unpopular asset to buy, so debts will naturally shrink. Americans will need to deal with higher inflation, like the rest of the world does all the time.


> The debt is higher than the GDP

The "% of GDP" measure of debt is misleading. Nothing special happens or "runs out" at 100%.

A better way to say "150% of GDP" here is "18 months of GDP".
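The restatement is just arithmetic, and the same trick expresses debt in the "multiples of tax revenue" framing mentioned upthread. A quick sketch with hypothetical round figures:

```python
# Two ways to restate "debt is X% of GDP" (all figures hypothetical).
gdp = 25_000          # annual GDP, $bn
debt = 37_500         # 150% of GDP
tax_revenue = 4_500   # annual federal receipts, a fraction of GDP

months_of_gdp = debt / gdp * 12            # 18.0 months of GDP
multiples_of_income = debt / tax_revenue   # ~8.3x annual revenue

print(f"{months_of_gdp:.1f} months of GDP, "
      f"{multiples_of_income:.1f}x annual tax revenue")
```

Neither framing marks a cliff at any particular value; they just make the stock-versus-flow comparison explicit.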


That's the trillion dollar question everyone will soon be asking…


Generally when the jig is up the bankers will incite and finance another world war.


Tax the people who are creditors and return some of the money to debtors. We could do this for all debts, and sure, it would be painful, but life would go on. The current situation, where the wealthy farm out their money to people and make outsized profits at extremely low risk, has to be blunted at some point; the debt owed by the US government to the rich is so excessive at this point that there has to be a plan to recover some of it. We must find a way to redistribute some of the rent-seeking at some point or watch society collapse as the rich acquire every last bit of wealth.


You realize that the problem has been that debtors have made out like bandits, with interest rates manipulated lower for 25 years, right?

Now you want to give them another handout, the first time interest rates get to almost normal levels?


Eventually all wealth will end up with fewer and fewer people until the system completely collapses. Debt burden is just another instance of the rich making themselves even richer at the expense of working people: this year taxpayers are expected to pay over $660bn to rich people in interest payments.

There needs to be some way of taxing the ultra rich effectively to be able start balancing the books. Every time this is discussed people start talking about high earners and never about the types of people lending the government hundreds of millions of dollars.

For example the average tax rate paid by the top 400 wealthiest people in the US was just 8.2% (2010-2018) and that is on additional income not the billions they already have. Let's not get started with Trusts handed down generationally to avoid inheritance tax completely.


This mindset is backwards to me. Of course we don't tax things people have that have already been taxed. Why should we?

We need to balance the books by not having unsustainable vote-buying policies that cost outsize amounts of money.

> For example the average tax rate paid by the top 400 wealthiest people in the US was just 8.2%

Citation needed. What does "average" mean? What is the denominator on the fraction that generated the percentage?

Thinking if only we could take other people's money forever to pay for things is pointless.

If it's expensive to do things, for whatever reason, then inflation goes up for everyone. If it's pointless to start things, because the government will dip in whenever it likes, then the endless progress-train will stop.


>We need to balance the books by not having unsustainable vote-buying policies that cost outsize amounts of money.

Balanced budgets with perpetual motion machine money is unsustainable. It is simply mathematically impossible.

>then the endless progress-train will stop.

There is no such thing.


> What is the denominator on the fraction that generated the percentage?

100. That's what "per cent" means, percentages are a way of comparing ratios without needing to worry about scaling both numbers. The % sign is shorthand for /100 .

As for the citation, I'm not the OP, but it was the 400 richest families, and it was the White House itself: https://www.whitehouse.gov/cea/written-materials/2021/09/23/...


No.

The numerator is theoretically their tax bill.

The denominator is not 100. It should be their income - but that's complicated to count.


The silliness aside, the answer is in the link:

> For the denominator, we use changes in the reported wealth of the Forbes 400 to estimate the income of the 400 wealthiest families.
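The dispute is easier to see with numbers: the same tax bill yields very different "rates" depending on whether the denominator is the change in reported wealth (including unrealized gains) or realized income. All figures below are invented purely for illustration:

```python
# The disputed denominator, with invented figures.
taxes_paid = 8.2          # $bn paid over the period
realized_income = 25.0    # wages + realized gains actually on tax returns
wealth_change = 100.0     # change in reported (Forbes-style) wealth,
                          # including unrealized gains

rate_vs_wealth_change = taxes_paid / wealth_change   # 8.2% (the headline method)
rate_vs_realized = taxes_paid / realized_income      # 32.8% (critics' framing)

print(f"vs wealth change: {rate_vs_wealth_change:.1%}, "
      f"vs realized income: {rate_vs_realized:.1%}")
```

Which denominator is the "right" one is exactly what the two sides of this subthread disagree about.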


So in other words it's completely bogus, at the very least due to unrealized capital gains, and probably through a few more fudge factors, like what counts as "reported" wealth.


It does seem disingenuous to use a percentage, when the error bars are likely an order of magnitude either side of the stated figure.


You do realize that it is not billionaires in top hats buying the government treasuries? It is pension funds and other institutional investors mostly.


You do realise who makes profits from owning the companies who own the pension and hedge funds that invest in these things?

I have no idea what the best solution is but the road to serfdom we are all on is unsustainable.


How many pension funds are private and how many are owned by governments? I ask because I don't know, and it's a global market with foreign funds buying treasuries (or from the other perspective local funds buying foreign treasuries).

I completely agree with your second sentence.


I'd say the best solution is to not let power accrue where responsibility is not. Allow incumbents, be they companies, regulators, legislators, or anything else, to get too entrenched, and the system calcifies around fakeable outcomes.

Keeping productivity and personal choice at the forefront means people have to keep getting value. I can switch away from Netflix at any time. I can't switch away from my local council or choose to stop paying them. And so my local council is allowed to have a terrible track record and keep on existing.

I also don't know the detailed answer, but I think more choice where people individually decide where to spend their money is a big part of building useful things that don't result in crippling expense on an economy.


"normal levels". Interest rates have been declining for almost eight centuries and now that population growth is no longer exponential, it is unreasonable to expect exponential compensation.

In fact, in the face of real physical laws like the constant thermodynamic increases in entropy, it would be foolish to expect to even get a zero nominal yield. Life is the gradient between our endowed energy reserves and the heat death of the universe. Something that violates this is akin to a perpetual motion machine.

You either get inflation or negative interest. Any pretense of a permanent zero yield will require political redistribution of income from one part of the economy to another.


What you call “normal levels” is a price floor on the cost of capital the market would not support.


If you remove the fed from the market and the interest rate is determined by available savings what happens to the interest rate?


It will get stuck at liquidity preference. In other words, no, interest rates won't be determined by available savings, in fact the opposite will happen. People will actively withdraw money from the economy to extort interest beyond what the economy can pay.

Companies will pile up inventories until they realize that they should stop producing and fire their workers and instead just hold this perpetual motion machine money instead of running a productive business. Yields from speculation trump yields from production, see cryptocurrencies.

Then as more and more businesses quit, prices will rise and you get to see inflation. The speculative bubble will collapse and the cycle repeats, just like with cryptocurrencies, except you will lose your job and your income will fall.

In short, the interest rate will get stuck at some rate above the market-clearing rate.

If we assume the presence of Oeconomia Augustana banks next to the regular banks and that people will quickly switch to this type of banking, then the absence or presence of central banks or whatever policies they make will be utterly irrelevant because under OA banking, the interest rate is actually market oriented.


I read what you wrote and I believe I understand what you're saying.

Interest rates in the early 1980s in the US went as high as 14% and the economy didn't shut down. Businesses kept right on doing business, despite the high interest rates.

That doesn't seem to be what your theory would predict and yet it happened in real life. https://en.m.wikipedia.org/wiki/Federal_funds_rate


Well, I'm not sure; my guess would be not that much, because of fractional reserve banking. I'm not sure how the original money supply works without central banks, though.


Without central banks, private banks would basically have to issue their own promissory notes.


You realize that you are a creditor to your bank right? This just means all savings at a bank will be taxed.


The median savings level in America is about $4k.

Taxing even as much as 5% of that per year would be $200, peanuts compared with the tax on earned income from doing useful work.


I don’t make any returns on that at all. I think a short sharp one off redistribution is better than endless austerity for the poor and socialism for the rich.

I’d love to hear some other proposals about how to pay our debts, I think interest payments for governments are set to surpass GDP or something. Seems like we need to find a way to tax back some of the money the super wealthy are lending to the government.


One approach, which tends to be unpopular around these parts, is to print money and hand it to people in the form of universal programs, UBI, etc. This increases inflation which makes past capital depreciate (effectively what brought down SVB) and so level the playing field for those who are currently not holding much capital, which is most people.

One of the reasons it isn't very popular is that the traditional way of injecting money into the economy is by adjusting interest rates, which makes it relatively neutral for capital (because it can earn more interest) at the cost of the working class. As Piketty points out in his book [1], though, the times when inflation was very high because of large-scale government programs (like the post-WWII reconstruction era) were booming times for the middle class. The current covid recovery had unemployment rates at all-time lows, right up to the point where governments decided tackling inflation was worth killing jobs.

Of course, reducing unemployment to improve returns was never popular so it's being sold to everyone by pushing a scary inflation narrative. Personally, I'd rather pay more for my boxes of cereals than being laid off.

[1] https://en.m.wikipedia.org/wiki/Capital_in_the_Twenty-First_...


I hope you realize that the majority of federal and state funding works as you describe with the exception that the benefits are not Universal and targeted at the poor. Wealth transfers from the rich to the poor are at an all-time high.


Inequality is also high (when compared to the post war era anyway) so clearly there is plenty of room for more transferring.


The interesting aspect to me is that the lower inequality wasn't because there were more transfers then.


> I don’t make any returns on that at all.

That doesn't make you any less of a creditor; the money in a bank account is a liability for the bank from an accounting standpoint. The bank is indebted to you for the exact amount in your bank account. How much of your money do you think should be taxed to repay debts of the poor?


Not even necessarily the poor: how much of your money should be taxed to repay debts of the people who lived splendidly, way beyond their means?


How much responsibility do the rich have for a functioning society? Ever more of the money is with them after 2008 and COVID, I’m happier just saying let’s go back to say 2008 levels of inequality which would mean transferring trillions to the poor. Did the poor agree to the transfer of around $13tn in the US in printed money which largely filtered through to the rich?


> I don’t make any returns on that at all.

You might want to consider switching to a High Yield Savings Account so that you do.


>socialism for the rich

There's no such thing. You're describing an aspect of capitalism (the part where capitalists benefit).

"Socialism" does not mean "benefitting from wealth redistribution". By that logic, someone like Genghis Khan could be considered the ultimate 'socialist' because he pillaged more than anyone and redistributed all that wealth to himself.


Honourable intentions, but you've got it all wrong. What you want, or at least the consequence of what you want, is called hyperinflation. It's better to read about it on Wikipedia than to have savings in a country experiencing it. Good thing you're not leading the Fed.


Aren’t the creditors that get screwed essentially the citizens of the countries? They pay money into huge pension programs that are pilfered with management fees and poorly structured investment portfolios. These citizens also, more importantly, backstop financial mayhem through funding of depository insurance programs (that apparently insure accounts beyond the insured amount … for arbitrary banks), direct bailouts to failed enterprises, and central banks that provide cheap money for the well-connected fat cats to disburse (or hold onto) as they wish.

The system in the US is essentially corrupt and engineered to support the status quo. It is far from a free market supported by savvy players making decisions based on actual risk/reward principles.


Pensioners comprise the biggest creditor in the world.

Are you sure you want to tax people who've worked all their lives, saving up money, who are now living off the interest payments?


How about a deal: They stop taxing us first?

The amount of wealth being sucked from the labour of younger generations is beyond ridicule. Extremely high taxes on labour - that go to the old; extremely high real estate prices - to the old; extremely high rents - to the old. Yet it is not enough, and they take on huge debts through the government - to be paid by the young. Top that with the forced pension fund contributions of young workers, who can expect never to receive any pension when they grow old.

I know this is a high-earner tech forum, and these things don't affect most readers, but for people working in other sectors the perspective is different.


Every culture, going back thousands of years has young people taking care of old people. We've just made the system so efficient that you don't have to talk to the old people and wipe their poo.


I'm not against taking care of older people, and you have to make an effort to interpret my post like that.

Every culture has also had older people stepping down at some point to let fresh blood take charge and carry the burden. In Western societies we don't see that so much anymore. Instead it seems the old prefer to see things rot rather than let younger people take over. With land, farms, houses, businesses, etc.

If younger people got an honest chance all would benefit, including the old. Instead young people are mostly seen as people to exploit or ignore.

It is not difficult to look up history. When you look up the histories of the current old people who hold power, in small ways and large, they all got help to start their ventures. Very few of them are self-made, so why the act?


Sounds like you are discussing the demographic pyramid. 100 years ago, life expectancy was roughly 40 years. Death cleared out the old.


I think that is a misunderstanding of life expectancy. Death didn't clear out the old 100 years ago, it was infant and child mortality that pulled the average down.


Young people took care of old people because there was a larger social contract in place: old people's responsibility was to build an environment in which their successors could thrive, so that later everyone could share in that thriving.

>We've just made the system so efficient

It's only efficient at robbing the young.

One half of this social contract, the young-giving-to-old, was formalized (and forced) while the other half, old-building-for-young, was utterly abandoned. There's a great meme about this: "no take, only throw!"[0]

Eating all the seed corn[1] will eventually lead to famine.

[0] https://knowyourmeme.com/memes/no-take-only-throw

[1] https://en.wiktionary.org/wiki/eat_one%27s_seed_corn#English


> Pensioners comprise the biggest creditor in the world. Are you sure you want to tax people who've worked all their lives, saving up money, who are now living off the interest payments?

Yeah, and I say that as someone on the wrong side of forty.

The relative growth in wealth of pensioners compared to working-age groups in most Western nations is completely unsustainable, and will end very badly eventually. Trying to flatten the curve now is preferable to the alternatives.


So in future no one will lend?

And people who over-borrowed will get free money but people who didn't (and saved instead, and put up with 0.5% rates) will be punished?


Anyone who funded (multiple!!) stock repurchases with bond issues in the last five years deserves whatever they get.

Particularly in low margin, high capital industries like aerospace.


To be fair, equity is just another type of debt. If you can retire it and replace it with debt that costs even less, then it's quite a sound move.


I wasn't aware that was a thing - how is equity a type of debt? They seem to differ in their core concepts. I'm not being snarky or anything - I'm genuinely curious about the reasoning!

Equity debt, I get that, it's like a home equity loan, or, well, a corporate bond, kinda.

Assuming that equity≅debt - which, respectfully, I'm not - isn't it comparing an unknown rate of depreciation + an unknown future commodities market versus a very known interest rate environment? The latter of which, circa 2018, there was nowhere to go but up, and everyone knew it. Depreciation . . aerospace, forecasting is crap even assuming it even exists; I'm lucky if they know what's on their own shelves. A lack of fundamental comprehension of what the word risk actually means . . eh, that's endemic, actually.


Equity is an ownership stake in a company, including an ownership of future profits. In some sense a company is promising in the future to return profits to shareholders. While the company is in a growth stage and healthy, that need to return profits to shareholders is put off indefinitely. When a company is healthy but not growing, they are expected to return those profits to shareholders (the debt comes due).

I agree that it is a shaky metaphor, but the idea that equity/debt is a promise to return money to the holder eventually can allow you to put them in the same bucket if you squint.


The equity is the part of the capital of the company which is owed to shareholders. It's their money, retained by the company to fund its operations. It's not a metaphor; it's actually what it is. It's literally a liability of the company.
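A toy sketch of that balance-sheet view (the figures are made up for illustration): equity is simply the residual of assets over liabilities, which is also why a debt-funded buyback trades one claim for the other.

```rust
// Balance-sheet identity: assets = liabilities + equity,
// so equity is the residual claim owed to shareholders.
fn equity(assets: f64, liabilities: f64) -> f64 {
    assets - liabilities
}

fn main() {
    let assets = 1000.0; // illustrative figures, not real data
    let debt = 600.0;
    assert_eq!(equity(assets, debt), 400.0);

    // A debt-funded buyback swaps equity for debt:
    // total assets are unchanged, equity shrinks.
    let buyback = 100.0;
    assert_eq!(equity(assets, debt + buyback), 300.0);
}
```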


Something I've noticed when reading financial news lately is that having interest rates so low for so long has raised an entire generation of people who no longer distinguish between having money and borrowing money. Carefully read news articles about recent businesses in trouble, and notice how casually the articles implicitly claim that the problem is solved by extending them more credit. Banks that are in trouble because their assets became worth less have their problems "solved" by lending them more money, on time frames that can't even remotely bridge the problems. It's also subtly a theme in a lot of other writing, where borrowing money, if not outright conflated with having money, comes very close to it.

It's not the same, though. Not even in a ZIRP environment, and certainly not in a non-ZIRP environment.


Not just businesses - countries too!


[flagged]


What's the point of keeping track of debt if you can keep raising it indefinitely? Why not just forget about it and use the available resources as needed.


It's just a legacy system. We're still in the process of converting it.


An alternative to repaying debt is inflating it away, which is what we've been doing.
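The arithmetic behind "inflating it away" is simple: the nominal balance stays fixed while its real value shrinks by the inflation rate each year. A toy illustration (the 7% rate is made up):

```rust
// Real (inflation-adjusted) value of a fixed nominal debt
// after a number of years of constant inflation.
fn real_value(nominal: f64, inflation: f64, years: i32) -> f64 {
    nominal / (1.0 + inflation).powi(years)
}

fn main() {
    // At 7% inflation, a fixed 100-unit debt loses roughly half
    // its real value in a decade, without a cent being repaid.
    let v = real_value(100.0, 0.07, 10);
    assert!((v - 50.83).abs() < 0.01);
}
```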


US government debt is mostly bought by Americans, who might reasonably be expected to step up and take a loss for their country. The rest is held by countries like China that are looking to safely stash some money or to get dollars to make purchases denominated in dollars.

The US government has, from its inception, pretty much operated with a deficit and done fine. With such a gargantuan GDP and a supreme position in foreign relations, the US should be a utopia for its citizens.


I too have been told for 50+ years that the national debt will bury us. Often by people like Ross Perot.

Now that I actually have savings, I load up on T-bills paying 5 percent and I bonds paying like 6 or 7, and now I'm thinking "God, I love the national debt, it's so awesome, it's literally free money, I wish there was more national debt. The national debt should sell t-shirts and drop an album".


Is it possible for the US to take on bigger and bigger debt because it has a huge national defense budget?

I have often heard that household/consumer debt cannot and should not be treated the same way as the debt a government holds. But it is still something to be concerned about: Argentina, I believe, printed a lot of money to pay off debts, and that devalued their currency a lot.


No, the economy must decide between guns and butter. Military expenditures are destructive and wasteful. The US can afford its disproportionate and non-productive military expenses because it is subsidized by the rest of the world buying US treasuries and the dollar as the world reserve currency since the end of Bretton-Woods combined with sovereign fiat.

This is a world historical anomaly that is underappreciated. When the world dollar regime goes, so too will our unbounded budget and ability to impose global sanctions unilaterally.


Defense spending is not the biggest part of government expenses. It's about 12%.

https://fiscaldata.treasury.gov/americas-finance-guide/feder...

And that doesn't even include state and local government spending.


Yes, that's basically along the lines of the Chartalist / Modern Monetary Theory view of money, combined with the petrodollar theory of money.

Sorry - edit - tried to keep this short, kept messing up. There are a lot of deep rabbit holes if you want to dig into those theories online.

Just keep in mind a lot of economists disagree with MMT / Chartalism, and some even disagree about what a petrodollar is.


This works great until it doesn’t. Interest rates are rising and debt isn’t as cheap as it was. At some point it will be unsustainable but we just don’t know when. I fear that we will push the limits on this until it’s too late.


> Interest rates are rising

This doesn't affect old debt unless it needs to be refinanced. Only new debt. Plus, interest rates will most likely not be 5-6% within the next 12-36 months.

Will they be higher than they were at 0.25%? Yeah, probably. But isn't a 2-3% interest rate "healthier"? It provides alternative investment options instead of "let's all pile into equities even though valuations are already detached and the entire market is trading at 35.96x (which it was at Jan 1, 2021)"
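The point about old debt can be made concrete with basis-point arithmetic (the figures here are illustrative): a fixed coupon is locked in until the principal rolls over, and only then does the new rate bite.

```rust
// Annual interest expense, with the rate given in basis points
// (1% = 100 bps) so the integer arithmetic stays exact.
fn annual_interest(principal: u64, rate_bps: u64) -> u64 {
    principal * rate_bps / 10_000
}

fn main() {
    let principal = 1_000_000;
    // Old debt issued at 2% keeps costing 2% until it matures...
    assert_eq!(annual_interest(principal, 200), 20_000);
    // ...but rolling the same principal over at 5% more than
    // doubles the annual expense.
    assert_eq!(annual_interest(principal, 500), 50_000);
}
```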


A lot of non-government debt is likely to be floating-rate. America is somewhat unusual, with a large fixed-rate consumer debt market in mortgages thanks to government subsidies, but that's not the norm.


> This doesn't affect old debt unless it needs to be refinanced.

That's a really big unless. That's the whole point.


How much debt is refinanced at what rate/how often versus new issue?


Donald Trump once addressed this by saying "So long as oil is traded in US dollars, debt doesn't really matter." And I kind of have to agree with him. So long as the hemoglobin of the global economic engine is still there, the debt can keep being issued without consequence.

But this only holds while oil is still relevant and/or available.

So it works short term but over the course of decades it is a big problem.

The market will remain solvent and irrational far beyond all logic.


Honestly I can't see any candidates to actually replace the dollar. To become the world reserve currency the issuing country has to be very stable, make their capital markets as open and transparent as the US, accept running a large trade deficit and issuing government bonds to almost anyone no matter the geopolitical situation.

China is stable, but without a regime change they are not at all interested in opening their financial markets. India is large and a little more open, but the big question mark is stability (both external wrt Pakistan/China and internally). The EU could do it but first they'd have to get buy-in from the export giants which seems unlikely and they'd also strain the relationship with the US.


The debt does matter because it is a source of inflation.

But the US never experienced that inflation, because of the dollar being the reserve currency. All the debt created in the US just becomes assets for lenders worldwide. This doesn't happen with other currencies; no one is running around buying Indonesian rupiah-denominated debt.

If the demand for US dollars declines, one can imagine all the debt not finding buyers, causing inflation to go through the roof.

All that said, the US is very privileged but it's not likely to last as other countries are actively hunting for alternatives.


While many think it’s a hype cycle, which it is to some degree, I think generative AI is about to pour gasoline on economic growth.


Be careful with your estimations. Generative AI currently resembles VR in 2018. Very impressive as a technology, and will certainly find its niches, but grossly overestimated by the public.


I’ve been working with the tech for a long time now, and I’m convinced we’ve only scratched the surface of the value waiting. People are focused too much on its ability to write and emit bad code, say silly things, and read poetry in the voice of a Dalek. Generative multimodal AI in particular, hooked up into a feedback loop with classical AI and reasoning techniques, solvers, optimizers, etc., will be remarkable.

For instance: I uploaded an image of a saloon with cowboys playing cards and asked one of the multimodal models to describe the scene. It did, with details about the game and the card table etc. Acting as a goal-based agent, I told it we need $20. It said it could either play cards or work at the saloon, and recommended saloon work as it’s assured. I advised it to play cards, and to describe how to walk to the card table and provide an image with the card table labeled and the route annotated with obstacles. It was able to do this. This could be fed into a navigation system. I then uploaded an image of a bunch of card hands and asked it to classify each hand by what it’s holding and its relative strength. It was accurate. Using an optimizer, it could take the visual card state and play an optimal game. Etc. Each one of these tasks would have been absurdly hard five years ago.


Like crypto and VR, sounds pretty cool but it’s not clear how this fuels economic growth.


It’s a toy example of solving an abstract task that would have been impossible using classical techniques, but that becomes fully achievable by combining the abstract semantic capabilities of LLMs with classical techniques. The problem with classical agents and other techniques is that they fail hard when faced with a landscape that was unanticipated or that requires comprehensive semantic awareness beyond the model’s domain. LLMs provide a space where abstract semantic “reasoning” can happen. An LLM isn’t a deterministic optimizer in any way, no matter the attempts to show it can do basic math tasks etc. with larger models and context lengths. But why does it have to be? There are tasks, such as classification, description, and semantic analysis, that it does very well at. By combining techniques you can create agents that are much more robust and autonomous in many different situations, in a way that wasn’t practical before.

This example could very well have been a factory robot that uses an LLM to assist in solving unanticipated challenges, expanding its generalized capabilities beyond a very narrow and specific task set. For instance, the same machinery could assemble many different things with minimal additional programming by being provided with specifications, drawn plans, quality control criteria and examples, etc. The LLM can use those to encode instructions for its specialized subsystems, which operate with classical techniques to achieve some goal-based optimization. The “glue”, so to speak, for achieving the abstract task is supplied by the LLM. If it does things wrong as it performs, reprogramming is as simple as prompting it with the error and examples of better output. Through reinforcement techniques, the more it performs the tasks and encounters edge cases and errors, the better it’ll get. Compared to current factory-floor robotics this would be a remarkable advancement, as floors could be reconfigured more rapidly, and the set of specialized machines could be reduced to a more general and commodity set.


The difference is that generative AI is actually useful for ordinary people.


And it also doesn’t require you to buy a $3000 computer to use it.

The price of VR hardware is still a massive drag on adoption - even if it was AMAZING most people aren’t going to buy the necessary hardware

ChatGPT, and for a while Midjourney, are FREE

And they save you time and effort

VR=fun(ish) AI=fun+useful


> ChatGPT, and for a while Midjourney, are FREE

They are free for now. Eventually OpenAI will no longer be able to subsidize the compute costs of GPT-3.5 and GPT-4 for everybody... But yes, let's enjoy GPT-4 at cheap prices while we still can. I don't think we'll be able to in the near future.

I've been interested for a while in running some AI models locally, but ironically, I'd need to purchase a $3,000 computer to do so, and consider the steep hikes to my electricity bill as I run that machine.


MS have put it in Bing. Google will catch up and put it in google search.

I will happily pay whatever OpenAI charge to not have ads

It’s just too useful not to

Midjourney is the same. Saves soooooooo much time. It pays for itself for the month in a day


Fortunately I specced my laptop for VR, so it should be able to run quantized LLaMA 7B at least. ;)


You can get a standalone headset for $399, but I guess that being a secret even among tech people is also a data point.


The bigger problem imo is lack of space. I used to have a VR space set up and would play frequently. But I have now moved and do not want to dedicate a huge amount of precious space to VR gaming.


Yeah, I think this is a big part of it. Just the general UX problems of hopping into it is a major blocker. I think this is a much bigger hurdle than unit price. Maybe MR will help with this a bit but who knows.


I have a Quest. It’s not a secret. But it’s still $400 more than FREE, isn’t it?

Even $20 a month is a lot more manageable than a $400 up front + games

And ChatGPT actually saves you time and money. It’s not just a bit of fun.


AFAIK all the cheap-ish (< $500) headsets either have awful specs, run android, or come bundled with spyware (or all of the above). Unless I missed a big product launch lately.

It's not a secret, more just that tech people don't want what's for sale.


Generative AI is really expensive hardware-wise, with high marginal costs. For now, a limited quantity of usage is sponsored by the companies as PR to increase adoption, but in the long run, for practical applications, it won't be (can't be!) either free or cheap. Current models are intriguing but still need improvements; we know how to get those improvements with larger models, but that means the next iteration(s) will cost more money per 'answer'.


What I mean is, for an end user, it’s currently cheap and that must be a big factor in adoption.

IF you could get average VR for free and awesome VR for £20 a month, then I think the adoption curve would look a hell of a lot different.


All I know is that computer programming is qualitatively an entirely different thing than it was six months ago, different than it has ever been in the decades preceding. Programmers that never had to think about "what" to do and are just implementation machines are going to be hurt by this. Programmers that have to think hard about strategy, design, and specification are about to be massively more productive. The ability to clearly communicate specifications - "what to do" - is more valuable than ever, because it's the one way to unlock massive productivity gains.


> All I know is that computer programming is qualitatively an entirely different thing than it was six months ago

Citation needed. My day to day has not changed one bit, even if I'm using Copilot and ChatGPT. I get marginally faster autocomplete and a buggy template engine. This type of comment is exactly what GP means when they say "grossly overestimated."


I use ChatGPT throughout the day. I don’t find code completion very useful, but I ask questions about esoteric areas I’m unsure of, or about some numerical technique I’m trying to recall. It unblocks me faster than anything I’ve ever used and I’ve learned more about some topics in the last six months than in the last thirty years. The fact I can ask elaborative questions and get answers that are fairly deep and nuanced is remarkable. Code it emits is often specious but fairly close to usable, more than enough to open a door for me in what used to be hours of fruitless searching and reading countless stack overflow answers telling the questioner they’re asking the wrong question and answering something unrelated instead.

That you guys aren’t finding the value tells me you’re using the tool wrong and likely have already decided with prejudice, because I have primary experience that once you try to figure out how it’s useful, you’ll find remarkable utility waiting. It’s ok, man. I’m sure many folks found the computer useless too.


The quality of answers goes down dramatically the more niche your questions are. Further, while it might be helpful for "research", the answers it gives will have very wide variations of accuracy and isn't very useful for the day to day work of an actual software engineer which involves much more than just writing code. It works great if you treat it like a freshman in college who loves hearing their own voice and bullshits answers, but not great at being a fact engine.

And to be clear, I didn't say that it wasn't useful. I said that it didn't change my job in any meaningful way.


This explains fairly clearly a bit of Rust that, while not the most obscure, is IMO fairly poorly explained in most resources I’ve found, and covers a lot of edges that are really useful to understand. I can keep digging and digging into the details presented. I find it absurd to say it’s like a freshman in college - I’d love to hire those freshmen if you can point them out:

> Explain when pin is important in rust when dealing with asynchronous code, particularly when dealing with Box<dyn> types.

In Rust, Pin is a concept that is central to understanding the implementation of asynchronous operations. To explain why, let's first cover the basics.

Asynchronous tasks often involve future types that encapsulate a computation that will produce a value at some point in the future. These futures may need to poll other futures, which in turn might not be ready, causing them to be suspended and resumed later on. This suspension and resumption is where the Pin concept comes in.

By default, Rust allows moving types around in memory. However, when you have self-referential structures, like many of the future types in asynchronous programming, this can lead to problems. A future may hold a reference to another part of itself, and if the future is moved in memory, that internal reference may become invalid.

Pin is Rust's solution to this problem. If a type is "pinned" (wrapped in a Pin), it is a guarantee that the memory location of the value inside will not change. This means that you can safely keep references to different parts of the same structure because you know they won't move around.

Box<dyn Future> is a heap-allocated future object. Using Pin with Box<dyn Future> (i.e., Pin<Box<dyn Future>>) is common in async code because it guarantees that the future object will not be moved around in memory, making it safe to poll.

However, one should note that simply creating a Pin<Box<T>> does not mean that the T will stay put; it's also necessary to ensure that T does not move itself or its interior around. The contract of Pin is such that, once a value has been pinned, it is not allowed to move or have its memory invalidated until it is dropped. For a type to uphold these guarantees, it must be Unpin or ensure that all its operations maintain these guarantees. Most native Rust types are Unpin and therefore don't need to worry about these guarantees.

In short, Pin is important in Rust's async code as it provides a mechanism to safely handle self-referential structures, which are common in async programming. The Rust async ecosystem uses Pin to ensure that once a future begins executing (is polled), it will remain at a fixed location in memory until it has completed.


This is also a good demonstration of how it can be slightly wrong, or maybe misleading

> If a type is "pinned" (wrapped in a Pin), it is a guarantee that the memory location of the value inside will not change.

Pin is a wrapper around a pointer type and disallows moving the value that that pointer points at. You don’t pin a value, you pin a reference to a value. IMHO this sentence is unclear about that.

You sorta get the right vibes from the rest of the post, but there’s a majorly important concept in here that it explains decently but gets just wrong enough that you may internalize the incorrect thing.


Interestingly I read what it said as precisely what you said. But you’re right it elides the crucial nuance that the pointer is pinned, and “the value inside” can ambiguously be read as the type is a value and not a pointer to a value. I read “inside” to mean “inside the pinned pointer” but it can (maybe more accurately?) be read as inside the pin is a value.

I agree - it’s not a panacea devoid of human involvement. I think really importantly, whatever failings the technique has, it’s reflective of humanity itself, and humans are often slightly wrong or misleading. I think confusing its output with that of a deterministic program is wrong - it’s much more akin to the output of a human than not.

My argument though is that it’s not a freshman-in-college level of human failings. Its clarity is far beyond that on many subjects, and its facility to “understand” is remarkable for a human, let alone a computer. The fact that I can interrogate it and ask it the clarifying question you bring up is beyond useful - I can dig into the specific areas of confusion I have, and often if it’s wrong it’ll cop to it, unlike a human oracle.

> Clarify then: If a type is "pinned" (wrapped in a Pin), it is a guarantee that the memory location of the value inside will not change.

When we say "the memory location of the value inside will not change," what we mean is that the value that the pointer points to is guaranteed to not move in memory. This is a critical guarantee when working with certain types of data, like self-referential structs or certain async code in Rust.

Let's take an example of Pin<Box<T>>. Here's what it means:

Box<T>: This is a heap-allocated instance of T. The Box is a smart pointer that points to this heap-allocated T.

Pin<Box<T>>: This is a Pin wrapping the Box. This means that the T that the Box points to is pinned. This does not mean that the Box itself cannot be moved. The Box is a pointer, and pointers can be copied and moved freely. But because we have used Pin, the actual heap-allocated T that it points to cannot be moved. This is what we mean when we say "the memory location of the value inside will not change." We're talking about the T, not the Box.

This is important when the type T has self-references, because moving T in memory would invalidate those self-references. By using Pin, we ensure that T stays put, and those self-references remain valid. This is a key part of the design of Rust's async programming model.

However, this guarantee is only upheld as long as the type T does not implement Unpin, which is a marker trait that indicates that it is safe to move a type out of a Pin. Types that are not Unpin uphold the guarantee that they cannot be moved out of the Pin once they are placed in it.


> I think really importantly, whatever failings the technique has, it’s reflective of humanity itself, and humans are often slightly wrong or misleading.

Absolutely, and I don't think every explanation has to be perfect to be useful, hence me saying "you gotta watch out" and not "fuck this tool, nobody should use it."


It may give you explanations/code of numerical methods that run and look very plausible, but are subtly wrong; and you have to know both the language and methods in order to untangle the bug.

But I have found asking questions like "how would I ..?" or "explain ..?" can be useful, as long as you don't treat it like a final answer.

Programming is one of the fields where correctness actually matters in a binary way. But jobs where "sort of" or 80% is good enough can be replaced with language models.


The quality of the answers is often too poor to be used as reference.


Somehow all the non-programmers are out in full force telling actual coders how their job is changing.


This is so blindingly obvious to anyone and I can't believe it's not said enough


I am the primary source of my opinion, give me a break. I'm sharing my own personal experience; it is therefore not an "estimation".

I will say that checking back a day later, I am stunned at how many smart programmer types are clearly not even close to getting how useful this tool is. For people that don't find it very helpful, to me that just means they're not asking enough of it yet. More people just need to spring for the $20/month and start asking GPT4 about any old technical subject they think they know well but about which they might also have holes in their knowledge.

It's a tool that rewards creativity, so you kind of get out what you put in. Many folks here are too young, but back when search engines were coming out, there was a phase for many people where they only knew how to treat it as a toy. Like, search for their own web page to see if it showed up; search for a question they already knew an answer to to see if it was correct. It required a bit of a mental reset; a "click"; before each person would know automatically, without reminding themselves, that they could actually search for things they didn't know.

It seems like there is something similar going on here. It's not a search engine so it's not exactly about stuff we do or don't know, but I gather that a lot of people thus far have not uncovered the advantages it actually gives them.


> All I know is that computer programming is qualitatively an entirely different thing than it was six months ago

^ Your citation, sir.


I work as a programmer for a Fortune 100 company that everyone on the planet has heard of. I have heard nothing from any other developers about using AI. The programmers are not talking about it, the company is not talking about it. I don't know if we are behind the times or if this isn't as big a thing as you are making it out to be.


At least so far, it's really not a big deal. It's sometimes better than search at answering questions about well-known APIs or whatever, but can't do anything very complex without running into trouble, and will happily make up plausible-looking but entirely wrong code when it doesn't know the answer.

It's like a barely-got-a-passing-grade coding bootcamp grad with above-average-for-that-cohort Googling skills and a problem with dishonesty. Sometimes that's helpful, a little, but mostly it's a time-wasting distraction.

What it can revolutionize in its current form is mainly limited to high-automation bullshit industries: spammers, scammers, certain kinds of marketing content that's currently farmed out to the lowest bidder, astroturfed media campaigns, targeted mass propaganda. That part, I think, will be big, but will be entirely bad for society. Unless it makes another leap forward at least as big as the last one, programming's nowhere near being threatened yet, nor, even, is most writing that's not already bullshit-heavy (though it'll help make normal writing somewhat more efficient, probably).


The above is only true for GPT3.5 (and probably underestimates that as well). But completely false for GPT4.

I had a bunch of deeply nested map/flatMap code and it rewrote it into a short, pretty for-comprehension, error-free. For those that haven't learned those desugaring rules cold, it's a massive help.


Isn't that the kind of thing an IDE could be able to do, just by programmatically swapping between the choice of syntax to express the same logic? Aren't there a relatively small, finite number of these cases that would cover a large percent of typical transformations, and do so with 100% reliability?


Sure, given a buggy and brittle bundle of special cases. That sort of thing can work, I'd presume, if you use an IDE that's deeply integrated with your compiler and language development. With third-party tools, version skew alone makes it untenable, let alone that the amount of transformations you could want is basically unbounded.

The cool thing about GPT is that it understands code contextually and flexibly. I can literally throw it syntax that was developed specifically for a single project, that it's almost certainly never seen before, and as long as it's similar to something it's seen elsewhere, GPT can figure it out. There's no way you're getting that from an IDE feature.

I got GPT 3.5 to transform a (simple) function into return-early style a few days ago. It didn't quite get it right, but it got the correct idea. I did this by typing "Transform this code into return-early style." With an IDE feature, I'd have to hope the developers had heard of and cared about return-early style in the first place, and then that their tool was powerful enough to realize when and how to apply it. (Highly unlikely, IMO.)
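For anyone unfamiliar with the style, here's a minimal Python sketch of a return-early (guard clause) refactor, using a hypothetical validation function:

```python
# Nested-if style: the happy path is buried inside the conditionals.
def register_nested(name, age):
    if name:
        if age >= 18:
            return f"registered {name}"
        else:
            return "error: too young"
    else:
        return "error: no name"

# Return-early style: each failure case exits immediately, so the
# happy path reads flat at the bottom with no nesting.
def register_early(name, age):
    if not name:
        return "error: no name"
    if age < 18:
        return "error: too young"
    return f"registered {name}"
```

The two are behaviorally identical; the transformation is exactly the kind of mechanical-but-unbounded rewrite being discussed.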

edit: In other words,

> Aren't there a relatively small, finite number of these cases that would cover a large percent of typical transformations, and do so with 100% reliability?

I believe the opposite is the case.


Can an IDE answer follow-up questions about how those desugaring rules work, and suggest other alternative formulations if I want part of it to be more explicit?


"Hi ChatGPT, here's a schema for an old MySQL 5.5.x table. Can you give me a "Legacy" play-slick case class and table class that reflects that schema, using Scala Play 2.8.x and play-slick 5.x? Can you rewrite that schema to a MySQL 8.x table, using your recommendations of best practices and naming conventions? Can you suggest the new schema to me in the form of a play-slick evolution? Also can you give me a slick case class and table class that reflects the new schema?"

I've rewritten portions of a legacy side project in three days, what would have taken me weeks without GPT4. And beyond the simple time estimates, there are the tasks that I just simply would not have attempted otherwise.


I've been doing programming for 30+ years and having a hard time seeing how AI will help me.

I have templates and scaffolds for most of my repetitive work already. I feel that I'm good enough with SQL that it would take me longer to prompt an AI to write a query than it would for me to just write it.

I am open to the possibility that I'm old and set in my ways but it's been a long time since the act of writing the code has been the bottleneck in my work.


The greatest benefit I have seen so far is using AI for composing code in unfamiliar frameworks and languages. You do not need to read the docs. It is a Stack Overflow killer.

If you already can speak the language, the best it can do is provide the boilerplate.


I see. That makes sense. I don't compose code in unfamiliar frameworks and languages. I try as much as possible to use the frameworks and languages I know best. I got off of the treadmill of "framework of the year" chasing a while ago.


I think this is an example of how it's all different now though. Switching costs have all of a sudden decreased dramatically. There is less cost to picking the right tool for the job. Programmers that have the perfect set of templates and shortcuts and that can code in their chosen language like water and so therefore have no interest in switching to any other language... they're more at risk than the ones with broader experience.


I've been using it for a few days. It's not very useful (to my usecases), but it is extremely fun. It's like pairing with a golden retriever.


I work as a programmer for a Fortune 100 company that everyone on the planet has heard of. I am sick of hearing about using AI. Everyone is talking about it, the company has meetings about it every week, and we've come out with policies about using public LLMs like ChatGPT for company purposes.


> Programmers that never had to think about "what" to do and are just implementation machines are going to be hurt by this

what is the most popular low-code/no-code solution that i could bring into an enterprise organization that can:

* read/write to/from a database

* read/write to/from a message queue

* read/write to/from a cache

* make HTTP API calls and do light ETL/logic on the request/response bodies

* OutSystems: This is a popular choice for enterprise organizations, as it provides robust low-code development capabilities. It supports integration with databases, message queues, and caches. It also allows developers to make HTTP API calls and perform light ETL operations.

* Mendix: Another popular choice for enterprise-level applications. It offers a wide range of features, including database integration, message queue handling, cache interaction, and HTTP API calls.

* Microsoft Power Apps: This platform is part of the Microsoft Power Platform, which also includes Power BI for data visualization and Power Automate for process automation. It supports data integration, including reading and writing to databases, and can make HTTP API calls.

* Appian: This is a low-code platform that allows the creation of apps that can integrate with databases and make HTTP API calls. It may require additional configuration or use of additional services for interaction with message queues and caches.

* Zoho Creator: Zoho Creator is a low-code application development platform that allows users to create custom applications with minimal coding. It can integrate with databases and make HTTP API calls, but may not natively support interaction with message queues and caches.

These aren't new and I'm still employed


Yeah, all these doomsday prophecies for software engineers are just a situation that keeps repeating itself. I've seen OK code generation around 2006. So what if it got a bit better? That's not how entire projects get done; the code part is rather minuscule in most organizations that aren't single-focused startups.

The less you are just a pure code monkey and the more you add value by actually managing the whole software dev lifecycle, the less replaceable you are (and the more senior you become, with all the + and -). Chasing requests from teams that couldn't care less about your projects, navigating process hell, synchronizing with global teams, evaluating various pitches and proposals, hiring, architecture design, testing team interaction/management, internal politics, and so on and on. This is still software engineer territory, albeit senior territory (in my massive org I am at the lowest dev position, but with a salary 2 levels higher, and I get to do this and much more).

I've not seen even an attempt to put any dent in the job market for those skillsets. If you are worried, ramp up your skills; that's what one should be doing regardless.


> That's not how entire projects get done

Devil's advocate: it takes visionaries at the top (CEO/management) to push for things like "that may not be not how entire projects get done today, but maybe in the future if we can get a good low-code solution that does what we want well, we could save a lot of money"

Is it a pipe dream? I don't know. That's where good execution comes into play.


I've been playing around with PowerQuery in Excel to do some ETL work. It's pretty nifty and goes a lot farther than Excel used to, and in many ways it's better than other tools I've used.

And yet...

To do a few things I consider basic, you need to write nontrivial code. The two examples recently:

1. To strip non-digits from phone numbers, I needed to edit a step in the query by hand to add Text.Select. It doesn't seem to be accessible from the GUI.

2. To roll three columns up into one by replacing nulls with values from the next column, I had to write a lambda for List.Accumulate. Again, not available in the GUI, and probably not something a low-code user could hack. I think they'd have to pull it into Excel and then use a long IF formula to get there.
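For readers who don't know M, here's roughly what those two steps amount to, sketched in Python (the original uses Power Query's Text.Select and List.Accumulate; the function names here are illustrative, not part of either tool):

```python
# 1. Strip non-digits from a phone number -- the job Text.Select
#    does in Power Query M.
def digits_only(s):
    return "".join(ch for ch in s if ch.isdigit())

# 2. Roll several columns up into one by taking the first non-null
#    value -- the coalescing that needed a List.Accumulate lambda.
def coalesce(*values):
    for v in values:
        if v is not None:
            return v
    return None

print(digits_only("(555) 123-4567"))  # 5551234567
print(coalesce(None, None, "home"))   # home
```

Trivial to write as code, but awkward to reach from a GUI that only exposes a fixed menu of transformations.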

Low code can also be really slow when you think it's just gotta do what you want, but it doesn't, and you need to code it. However, a good platform can get the annoying bits out of the way, and the auto-preview features the platforms usually have are great.


Won’t this actually be bad for the economy? I’m personally not spending as much because I’m not clear whether I’ll have a job when ChatGPT-5 comes out.

I’d say many people are doing similar things?

Fewer jobs is not necessarily a good thing?


> Fewer jobs is not necessarily a good thing?

The rollout of the lightbulb significantly reduced the economic potential of candlemakers, but it improved the quality of life for society overall. So, whether technological advancement is a "good thing" depends on which group you are in: the one with obsolete skills or the one that benefits from cheaper and/or better goods and services. Either way, the overall economy benefits from creative destruction.

[0] https://en.m.wikipedia.org/wiki/Creative_destruction


Not having to pay developers 6-figure salaries might be a good thing in the big picture.


Or ideally, we can keep making 6-figure salaries, but be able to accomplish more in the same number of hours.


I admit GPT4 is useful, and I have been using it extensively in a fair number of different projects lately. And all it does for me is save time typing. I still have to think exactly the same amount I had to before. I do think it's going to improve efficiencies and there will be moderate economic effects, but I also wouldn't say programming is qualitatively a different thing. In a few years, who knows? But the main issue as I see it is that ultimately programs must interact with the real world and solve problems for which there is not an abundance of training data. It isn't that I think it is impossible to build AIs to work in those domains, but I think our current approaches might be slower getting there than the leap to GPT4 encourages us to believe.


>Programmers that never had to think about "what" to do and are just implementation machines

Do such programmers even exist today? I thought the Dot Com Crash wiped out any remnants of such jobs.

Every software development job now is a composite job of some sort, demanding skills and duties that were previously separated into other roles. This trend of condensing/combining roles applies to non-tech industries as well.

Specialization is still demanded, too, but the predicted future of hyper-specialization where every professional exists in a nanoscopic niche never materialized. We instead got multi-specialization due to an ever-shifting and highly unreliable labor environment.

This is why the job title of "GPT Prompt Engineer" is a joke, and will never actually exist.


Have you ever considered that "how" to do something is pretty much always several orders of magnitude more complicated than "what" to do?

Wouldn't it make more sense to have LLMs replace middle management? Knowing "what" to do is way easier to predict. Even the worst programmer is far better organized than the people gathering the requirements and they have to fill in the blanks all the time. These are junior programmers who know fuck all about a business yet manage to know more about the project overnight compared to management that's been there for many years!


My perspective is pretty much opposite yours. A really common trouble is when programmers implement a working solution to a poorly understood problem. Building a good specification is oftentimes much, much harder than implementing that specification.

Maybe what you're referring to is what happens in a lot of software organizations, where the "what" is often just phrased as a poorly-thought-out, loose problem, and it's actually up to the developer to come up with the specification and the implementation at the same time. But that's not what I'm talking about. What I'm saying is that if you're good at specification, your job is about to get a whole lot easier, and therefore more valuable, by using these models.


You're right. I appreciate this comment. This is what people come to HN for. Thanks. Fat chance getting any traction with devs at the orgs you just described, though, even with the LLM backing them. People don't like to be wrong. The status quo will continue, perhaps indefinitely. I also still have my doubts that an LLM can be smarter than a fresh-out-of-college junior dev.


Thanks for the nice exchange. ;)


Here’s another example of real-life improvements - my wife, who isn’t an English speaker, asked ChatGPT to write her a letter to the immigration service requesting a visa for her brother to visit. She filled in basic details like names and visit details and it produced a beautiful letter, conforming to what an immigration lawyer would typically write. She then took that, edited it slightly, and sent it off. What would have been a multi-hour task of research, struggling to write it, and editing (or hiring a lawyer) was reduced to a casual prompt and some light amendment.

You might consider this example pedestrian and trivial. But it materially improved my wife’s life. VR and crypto have never done that for people in their day-to-day tasks. Neither has Alexa. And I’ll go further: the web made it easier, but this entirely solves the problem.

These situations happen all the time in varying forms. I don’t need to convince anyone though. Y’all will see in five years.


VR at its height was a hobby platform. ChatGPT alone is used by 100m+ people a month now.


I bought a quest and enjoyed it. But it mostly gathers dust in my cupboard. I used it for maybe a month regularly.

I am using generative ai every single day for many months so far. And every day I am still amazed by it - especially GPT-4.

Virtually every day I get excited about a new thing I could do with it.

It’s amazing (like VR was, for a bit) but also a lot more terrifying and awesome.

I don’t think it’s the same


>> Virtually every day I get excited about a new thing I could do with it.

That is pretty much the definition of a hype cycle. I am old enough to remember the flurry of microwave cookbooks that came out in the 90s when the microwave became commonplace. People were finding out new things they could do with the microwave each day. These days it is mostly used for warming up food and microwave popcorn.


But we all have a microwave… because ultimately they are really useful


Yes, they are useful, just not for cooking five-course meals. AI will have its uses; the only question is for what.


The question is what *else*.

It already has so many uses. That’s my point. There will be more, unknown uses, but there are a lot right now that are already potentially world changing.


The microwave oven was a byproduct of inventing radar, and radar was not invented for the purpose of heating food.

Radar continued to evolve (klystron, traveling-wave tube, etc.) and eventually led to huge technological developments like satellite communications and WiFi, while microwave ovens stagnated since their first commercial release in 1947.

In the 1970s they got cheaper/smaller/more reliable, but those 70s models are functionally identical to ones we use today. Half a century has passed and either model will safely heat up your dinner, in the same way, in the same amount of time.

LLMs are radar. ChatGPT is the microwave oven, or perhaps just a radar dish crudely pointed at a pizza. We will see a lot more uses for LLMs beyond things like ChatGPT.


All things in a hype cycle have this characteristic, but not all things that have this characteristic are a hype cycle. The iPhone is an example. I remember when the iPad came out, people laughed: “it’s a giant phone! So silly!” Etc. To the point made, something can be in a hype cycle because it is obviously useful but for what we aren’t sure - so there’s a huge creative grasping, and a convergence of hucksters and con artists, that boils down into what the long-lasting utility is. (Ala microwave)

The thing is, the utility is being realized right now by a lot of folks, and the only tool folks are using is a crappy web UI onto a chatbot. Crypto had no utility ever, just some crypto-anarchist dream of a world without the Treasury. VR is neat but rarely useful.


Even if all it did was enable a new level of college cheating, which we can already read about today, it'd have hundreds of times the impact of VR.

(Not an endorsement of cheating, just an observation of practical effect)


Maybe colleges will actually have to figure out this thing called "teaching" again instead of just dectupling[1] fees and deferring everything else to some half-baked MOOC and some YouTube lectures.

Any university course that can be cheated on using ChatGPT now has about as much value as a Masters in Pi Digits did in late 1949[2].

And any university that doesn't teach the use of the technology will be underserving its students, since like it or not, it's out there now.

[1]: https://www.bloomberg.com/news/articles/2012-08-23/college-t...

[2]: https://www.pocket-lint.com/laptops/news/109122-computer-cal...


I don’t see this resemblance at all. VR has peaked and plateaued at nothing more than a gimmick just like crypto. The hype around AI is because it was immediately useful from day 1 of a commercial product.


And with very little moat, allowing any company big or small to dip their feet in the waters. It's likely to be a commodity.


I anticipate I will find it somewhat useful in Microsoft Office products in the future. Beyond that I don’t know. Logos are easy to make now. Knowledge workers should probably think about becoming plumbers and AC repairmen. Something where you use your hands, because AI doesn’t have hands.


I think that's what makes it a hype cycle. A period of massive short term growth/expansion/copy-cats followed by a selection process for the actually useful products, like the dot com bubble


Boy, I don't know.

VR really isn't positioned to disrupt entire industries while companies are already figuring out how to replace people with LLMs.


Having engaged in a couple previous hype cycles (2017 crypto, 2020 tools for thought, 2022 Twitter alternatives), this one feels qualitatively more mainstream, and thus overall larger - it's not just tech bros getting into these tools, it's virtually everyone. So I tend to agree that we're going to see a big AI boom in the coming years.

My main question at this point is whether regulation and/or lawsuits will hamper any of that progress or at least slow it down.


In a hype cycle, people overestimate the short-term impact and underestimate the long-term impact. Developers won't be made redundant; however, sites like Stack Overflow may struggle. There will be job losses, but not in the way people expect; there will also be job gains in other areas that were previously uneconomical.


What are some areas dependent on LLMs where you see job gains proportional to the losses related to LLMs?


A good equivalent example is translation. You would expect Google Translate to have made translators redundant. However, they are busier than ever: since they can now translate documents a lot faster, it has become more economical for companies to get documents translated.

Regarding job gains, we initially will see a whole bunch of snake oil sales people selling AI.


Err.. but AI has pretty much obsoleted most translation work. Of course, well-written poetry and prose will still need to be translated by an expert in both languages, as well as possibly literature and linguistics. But even then, GPT-4 may be better than this person some of the time.


Absolutely not. OpenAI can't be taken to court if a mistranslation causes issues. That is why it will always require a human to confirm the translation is correct.

As discussed before, OpenAI doesn't know it doesn't know, and will dream up something.


Gonna be prompt engineers (and human langchains)


This one feels like more than all of those, but less than web 1.0. It'll be big for a bit, but it can't do everything everyone wants.


AI feels far more like the internet hype cycle of the mid to late 90's and the smart phone hype cycle of the mid to late 2000's than anything you listed. Your examples didn't make much noise outside of niche interested audiences. AI hype is everywhere.


I have heard completely non-computer people mention it, whereas the other things never came up.


> this one feels qualitatively more mainstream, and thus overall larger - it's not just tech bros getting into these tools, it's virtually everyone.

This may just mean the bubble is inflating faster than with the previous examples. Mainstream excitement often = it's about to pop.

The time to sell high and bail on crypto was when it became mainstreamed in the general public. I think the exact day was when Elon Musk hosted SNL and mentioned Dogecoin. The time to bail on Real Estate was when the receptionist in your office quit to become a Realtor. The time to bail on dotCom 1.0 was when your dad had a web site idea.


Dogecoin is down from the 2021 high but this Internet thing seems to have had legs. AMZN is up at $113 from shy of $4 circa 2001. Sounds like your dad's one for one. What's his thoughts on ChatGPT?


If only AMZN represented the entire dot com bubble.


Exactly. Talk about entirely missing the point!


The question is how?

Same old “everyone starts a tech company and white-collar workers continue to exploit manual laborers”? Or does local AI crater entertainment and software employment as viable options for the masses, since the machine can just generate whatever is asked of it, so we kick off rebuilding/modernizing all the crumbling real infra?


I suspect it's like most future tech: its impact on the next couple of years is over estimated and its impact 10-20 years out is under estimated.


Generative AI still needs time to mature before it will have a massive impact. It's just hard to form a company around a product that changes on a week-by-week or even day-by-day basis. The only companies winning big in the immediate term are those creating the models themselves.

As soon as generative AI can largely manage and automate the integration of IT components effectively, that's when I expect to see the explosion.


I believe this as well. The productivity gains from AI promise to be astounding.


AI will push lots of people out of work, and will let capitalists make money at a faster rate. That might cause economic growth, technically.


I hope you're right. I think it's more likely to bring economic devastation.



