Change in Data Sources Led to Lower Inflation Reading
Excerpts:
“On its merits, you can defend the change,” said Omair Sharif, founder of Inflation Insights, a forecasting firm. “Optically, it’s just not a good look in an environment when people are worried about political interference.”
Mr. Sharif said he did not believe the change was politically motivated. But Courtney Shupert, an economist at MacroPolicy Perspectives, another forecasting firm, said such decisions undermine public confidence in the statistical system.
“It seems like we are moving to more of a vague, uncertain, cloudy data quality environment that is going to make market participants less confident in the data that we do receive,” Ms. Shupert said.
Also relevant: The DOGE team set up a Starlink satellite at the White House [1].
DOGE staff installed the terminal on the Eisenhower Executive Office Building roof in February 2025 without notifying White House communications or cybersecurity teams, ignoring their prior warnings [2]. The resulting "Starlink Guest" Wi-Fi used only a password—no usernames or two-factor authentication—unlike standard networks requiring full VPN tunneling and device logging.
This allowed devices to evade monitoring, transmit untracked data outside secure channels, and potentially enable leaks or hacks, as noted by former officials and experts like ex-NSA hacker Jake Williams. A confrontation ensued with Secret Service when DOGE accessed the roof unannounced [3].
They don't "just have taps" in whatever ISP you come across. And they certainly, and I cannot be clear enough on this, do not just spy on Americans. It's literally the one thing they are expressly forbidden from doing.
Even in the days of telegrams, FDR was opening and reading millions of Americans’ telegrams to use the information therein to target his political enemies.
You can’t build centralized systems that enable spying and not expect people to do the “forbidden” thing. We have to build systems that make this impractical.
I’m sure you thought GSM was secure too. They absolutely have taps, just search for YouTube videos then cross reference the exact places and situations they talk about.
> When Claude writes tests for code Claude just wrote, it's checking its own work.
You can have Gemini write the tests and Claude write the code. And have Gemini do review of Claude's implementation as well. I routinely have ChatGPT, Claude and Gemini review each other's code. And having AI write unit tests has not been a problem in my experience.
yeah i have started using codex to do my code reviews and it helps to have “a different llm” - i think one of my challenges has been that unit tests are good but not always comprehensive. you still need functional tests to verify the spec itself.
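The cross-model workflow described above can be sketched as a simple loop. This is a hypothetical sketch: `ask_model` is a stand-in for whatever wrapper you use around each provider's API, not a real library call.

```python
# Hypothetical sketch of cross-model review: one model implements,
# a different model writes the tests and reviews the implementation.

def ask_model(model: str, prompt: str) -> str:
    """Stand-in for a call to the named provider's chat API."""
    return f"<{model} response>"  # replace with a real API call

def cross_review(spec: str) -> dict:
    impl = ask_model("claude", f"Implement this spec:\n{spec}")
    tests = ask_model("gemini", f"Write unit tests for this spec:\n{spec}")
    review = ask_model(
        "gemini",
        f"Review this implementation against the spec:\n{spec}\n{impl}",
    )
    return {"impl": impl, "tests": tests, "review": review}
```

Keeping the test-writer and reviewer on a different model than the implementer is the whole point: no model grades its own homework.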
When I open a new query window, the AI tab is selected by default which is annoying. I just want to write SQL without having to switch tabs like I could before. Not only has it ruined my muscle memory, it's also more inefficient.
The problem with red-light cameras is that enforcement becomes robotic. Robots are perfect—they don’t make mistakes (at least in theory), and they don’t show leniency. If policing is done by robots, then humans are expected to be infallible.
This is a complete non-issue. It's a traffic light, you are supposed to stop when it turns yellow! The yellow is the leniency. If you can't manage to stop before it turns red, you are either: 1) speeding, 2) driving a vehicle with defective brakes, or 3) mentally impaired. In all three cases you are a danger to fellow road users.
Besides, it's not a "the machine says so and not even the Supreme Court can overturn it" scenario. If there's genuinely a reason to cross into the intersection while the lights are red (such as there having been an accident, and a cop is temporarily managing traffic) the ticket will be waived. Heck, there will probably even be photographic evidence of it!
Most countries even have cops judge the tickets, just to already filter out those weird cases. The registration is done by a robot, but the policing is still done by a human.
Or you have a heavy, unbalanced object in your car you don't want sliding, or something fragile in tow you don't want subjected to hard deceleration, or you just don't have superhuman reaction time, since some lights have extremely short yellows.
Or, a deer jumped out on the side and you briefly looked away at it.
Or you could tell the driver behind you wasn't slowing down, so the safer option is to go.
Or. Or. Or. Real life is messy, and there are a million reasons to go through a yellow instead of slowing down.
> Most countries even have cops judge the tickets, just to already filter out those weird cases. The registration is done by a robot, but the policing is still done by a human.
This is common in the US as well. The machine takes the picture, filters out the illegible ones, and sends the rest to an actual officer who will issue the ticket.
> and they don’t show leniency. If policing is done by robots, then humans are expected to be infallible.
This is bad when applied to laws that were written with an expectation of leniency and selectivity in enforcement, which is quite a lot of them. For running red lights, though? I don't mind if the robots take you off the road automatically.
Running red lights? That's not all the cameras are used for. If you are making a right turn on red and didn't come to a complete stop first, you can get a ticket.
Okay? Rolling through a red light is dangerous whether you do it straight or to the right. Hell, the latter probably kills more pedestrians. I don't really mind holding drivers to high standards.
But why would you do that? Especially if you know there are robots enforcing that you come to a complete stop?
There are many places that don't even allow rights (or lefts) on red.
I got a right on red ticket once, and then I made it a point to obey the law -- especially at the intersections with the robots.
For things like traffic laws especially (where there are very simple cut and dry rules), why is it okay to break the law, and why is it not okay for robots to enforce the law?
> If policing is done by robots, then humans are expected to be infallible
The reality is that the people doing the policing are counting on humans not being infallible
Fines have become an important revenue stream, that's why they are being automated.
Now that this is becoming more widespread, there's a perverse incentive for governments to maximize the difficulty in avoiding fines. Lower the speed limit on roads designed for higher speeds for "safety", etc
There are many citizens, like me, begging for red light cameras so something can be done about the rash of crashes and killings from willfully reckless drivers.
> Fines have become an important revenue stream, that's why they are being automated
Maybe we should legislate traffic fines out of existence, and just use points. Or at the very least the fines should never go back in any recognizable way to the budget of the police doing the enforcement.
Subjectivity in applying the law is a huge problem, especially given how corrupt and violent police are. Red light cameras remove police from the equation for that infraction and apply the law evenly. They also scale in a way that police just can't, and that's extremely important for safety.
I live in a city where red light running is an epidemic. Drivers flagrantly just don't stop, and it kills people all the time. Red light cameras - plus actually revoking drivers licenses, and then actually throwing people in jail for driving on suspended licenses - are the only way to fix this.
It's far past time that drivers are no longer immune to consequences for violent, sociopathic behavior.
Well, in 2019 an estimated 840 people died in the U.S.A. from red-light running (<https://ncsrsafety.org/stop-on-red/red-light-running-fatalit...>). That's about 2.3 people a day, so the last person killed by someone running a red light was, statistically, about 10.5 hours ago, and the one before that about 21 hours ago.
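A quick sanity check on those intervals, assuming the 840 deaths are spread uniformly over the year:

```python
# Average interval between red-light-running deaths,
# assuming 840 deaths per year spread uniformly.
deaths_per_year = 840
deaths_per_day = deaths_per_year / 365   # ~2.3 per day
hours_between = 24 / deaths_per_day      # ~10.4 hours

print(round(deaths_per_day, 1), round(hours_between, 1))  # → 2.3 10.4
```

Roughly one death every 10.4 hours, which matches the "about 10.5 hours ago" framing above.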
The "content over chrome" trend was started by Microsoft's Metro design language. Windows 8 and Metro are one of the biggest UI/UX disasters since the dawn of computing. Why would Apple keep copying the worst ideas from Microsoft?
Metro worked perfectly well on tablets. And every version of Windows since 8 has kept some form of Metro (e.g. larger touch targets), because having a single Windows UI for both touchscreen and mouse-and-keyboard computers is what enabled the "2-in-1" or "convertible" touchscreen notebook, a design that basically every modern Windows notebook instantiates.
Liquid Glass also makes more sense on tablets. I think Apple is copying Microsoft because Apple is also moving toward full UI-level unification between their desktop mouse-and-keyboard UI and their mobile/tablet touchscreen UI. They've already done it for some apps (e.g. Notes.)
MacBook Neo is getting a lot of attention for good reason. It is a great laptop. The fact that it isn't a "convertible touchscreen" notebook doesn't seem to bother anyone.
Apple copying Microsoft is a mistake. It used to be the other way around.
The server edition contemporary with Windows 8 also included the upgrade to the Metro UI. I don't know, I guess MS figured IT wanted to provision Windows services using a Surface tablet?
I actually really did like Windows Phones though. I can imagine a world with a third competitor in that space today... But MS didn't seem to have any understanding or ability to develop an ecosystem that works. Even when they were literally paying people to write apps for their app store, it was just terrible.
> Why would Apple keep copying the worst ideas from Microsoft?
Remember also the "Get a Mac" ads that parodied Windows Vista permission dialogs, but now macOS is a permission dialog hell.
Tim Cook was an IBMer. I'm sure that Cook was a fine hire as an operations manager, but I doubt that Steve Jobs intended for someone like Cook to be in charge of everything at Apple, including UI design. (Jobs never put Jony Ive in charge of software, by the way, whereas Cook did.) Indeed, I doubt that Jobs groomed anyone to be his successor. By the time Jobs learned he had a fatal illness, it was too late, and he had to turn over the company to someone the board of directors would accept, which was Cook. Jobs was CEO but didn't own the company; infamously, the Apple board of directors chose John Sculley over Jobs in an earlier power struggle.
You are rewriting history. Any time Jobs had to step aside from the CEO position, Cook took over immediately. He was Jobs' designated successor for a decade when he learned he was sick. They merely implemented the succession plan they already had.
When Cook took over, he was unequivocally the only choice. He steered the company in his own direction, with a focus on operational health to the detriment of other things. He kind of lost the plot somewhere in there and has been spinning his wheels for a while. That's not what I'm contesting. It's your idea that Jobs didn't want Cook. Jobs loved Cook.
> Any time Jobs had to step aside from the CEO position, Cook took over immediately.
Any time Jobs had to step aside from the CEO position temporarily, Cook took over immediately. Metaphorically speaking, Cook kept the trains running on time. Cook did not set or change the direction of the company at the time, and Jobs was still available for consultation.
Sick is not the same as dying. Jobs initially didn't think he was dying, and tried to treat his illness with some hippie-dippie "alternative" medicine, when aggressive treatment might have saved his life.
> He was Jobs' designated successor for a decade when he learned he was sick.
Citation needed.
> Jobs loved Cook.
In what way? According to biographer Walter Isaacson, Jobs lamented that Cook was "not a product person".
It worked incredibly well on Windows Phone 7, but translated horribly to the Windows 8 desktop: especially the weird mouse gesture to get to the neutered Settings panel, the redundancy of that panel to begin with, and the entire UWP app experience. Windows 10 was a good marriage of the two concepts; even if the Settings menu was still redundant, it was functional. Then along comes Windows 11, which even in its most recent feature updates feels like a half-finished UI.
That article was written in 2014, just a few years after the trend started, and still today, over a decade later, Apple, once famous for its UX, is still failing to follow it.
What puzzles me is that information like this is out there. How did Apple get it so wrong?
I am hopeful for the new UX VP. He has his work cut out for him.
Speaking for myself : it's a bit creepy and unsettling. Using brain cells is probably inching closer to consciousness than today's silicon is, and consciousness isn't well understood so I'd fear this line of research could eventually lead to the "I have no mouth and I must scream" the other commenter referenced. Many decades from now we might be wondering how much of a human brain needs to be grown in a lab before it's considered unethical.
Is that an issue only because these neurons are biological (still artificial, since they are lab-grown)? Silicon neurons could also become more powerful and lead to an "I have no mouth and I must scream" scenario. In fact, top tech companies are investing hundreds of billions of dollars a year to make their silicon neurons more powerful.
Poor CEO my abs. When ChatGPT came out Microsoft was singing victory songs, and predicted Google's imminent death. 3 years later Google has one of the best models and Microsoft is still borrowing OpenAI's model. Not only that, Google is running their models on their own hardware, not Nvidia's.
One of the things that a CEO drives is vision and innovation.
Sundar misses the mark on these things. AI is a good example. Google invented the transformer architecture, but simply published it for its competitors to use. It took a code red in 2023 to finally push Google to develop products based on this.
Cloud. Years late to the game. All it would have taken is a letter similar to the famous Bezos memo to eventually get all of Google's world-class scaled infra pointing externally and generating revenue. Instead, Google Cloud started late, and couldn't reuse much of the internal infrastructure.
Stadia, another example. That architecture is probably the future. It's not clear how gamers in developing countries are going to afford thousands of dollars in hardware that sits idle 90% of the time.
> Google invented the transformer architecture, but simply published it for its competitors to use.
That's how innovation works in this industry. If companies didn't allow researchers to publish their work it would set us back decades. Researchers building on each other's work is how this industry was built.
> It took a code red in 2023 to finally push Google to develop products based on this.
So Google executed. Ability to execute is one of the things that makes a good CEO. Other CEOs have additional qualities such as vision, and getting others to believe in the vision. But not every CEO needs to be a Steve Jobs!
Plenty of innovations are coming out of Google, just look at Nano Banana Pro for example.
Google invented the basis of LLMs, but under Pichai failed to come up with the idea of ChatGPT. Getting Gemini into a workable state required the return of Page and Brin. It seems to be working out for Google, but how they got here is a very big mark against Pichai's leadership.
1. Proprietary Data (YouTube, Docs, Gmail, cloud logs, Waymo, website analytics, ads, search; the list is huge)
2. Commercial Datacenters (they're ahead, at least)
3. Chip production (Google is manufacturing proprietary chips)
4. Consumer OS (Chrome, Android)
5. Consumer Hardware (Pixel)
Basically google has access to data that OpenAI will never have access to, can lower costs below what OpenAI can, and is already a leader in all the places OpenAI will need massive capex to catch up.
You can't train LLMs on proprietary data, at least not if you want to make that LLM as accessible as Gemini. Otherwise random people can ask it your home address.
So it matters less than one would think. Also, ChatGPT can already do internet search as a tool, so it already has access to, say, Google Maps' POI database of SMBs.
And ChatGPT also gets a lot of proprietary data of its own as well. People use it as a Google replacement.
>You can't train LLMs on proprietary data, at least not if you want to make that LLM as accessible as Gemini. Otherwise random people can ask it your home address.
If this is your only criterion, I think you misunderstand what proprietary data is and the ways companies can mitigate the situation at the inference stage.
What if the CEO isn't just deciding how much the company invests, but also influencing how that money is used? Google's relative success, if it exists (I'd rather not judge), isn't from investing more than everybody else, because the money keeps pouring into these things for all contenders.
https://www.nytimes.com/2026/03/13/business/economy/inflatio...