If LLMs are able to write better code with more declarative, locally-scoped programming components and Tailwind, then I could imagine a future where a new programming language is created to maximize LLM success.
Such a language would also be harder for the LLM to work with. Much like with humans, the model's ability to understand and create code is deeply intertwined and inseparable from its general NLP ability.
Would this be addressed by better documentation of code and APIs, along with examples? All of that would go into the training data and become the body of knowledge.
If a company can align its business model with user goals, then it can work in the long run. Apple has somewhat aligned its integrated-hardware sales model with user privacy. Google and Meta are advertising companies, so capturing user data and attention will always drive the business.
Yes, but it's not a meaningful part of their revenue, unlike Google, where it's their entire revenue.
They are very different companies in structure, and it certainly is a "pick your poison," but it's completely stupid to act like they're the same on this front. Apple is better on user privacy.
...unless you care about state actors, which you should, in which case your data is the US government's either way.
The selling point of software that monitors social media for support is brand reputation. If people are saying bad things about your company on social media, that hurts your public image, so you should respond to those issues and make sure they get resolved. In practice, this means the people responding are less likely to be outsourced, because the funding is more than just a support cost. As a result, you can often get better support through Twitter than through email.
Are there any successful models that weren't trained with RLHF, or built on a system that was? I'm curious whether this could be done without a fine-tuning step that meaningfully biases the result.
This article does a poor job of representing induced demand, so it can't be trusted to critique it. Traffic congestion is defined by (number of trips) × (distance traveled). So if 10 people take 1-mile trips every day, or 5 people take 2-mile trips every day, the traffic congestion will be the same.
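That vehicle-miles arithmetic can be sketched in a few lines of Python (the function name and figures here are illustrative, not from any traffic model):

```python
def congestion(trips: int, miles_per_trip: float) -> float:
    """Congestion measured as vehicle-miles traveled: trips * distance."""
    return trips * miles_per_trip

# 10 people taking 1-mile trips vs. 5 people taking 2-mile trips:
print(congestion(10, 1))  # 10 vehicle-miles
print(congestion(5, 2))   # 10 vehicle-miles -- the same total congestion
```

The point is that fewer drivers does not mean less congestion if each remaining driver travels proportionally further.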
Induced demand says that by building roads, you enable people to buy a cheaper house further out of the city and drive further, so congestion stays the same. This is true for other trips as well: you might drive further to Costco for cheaper groceries rather than to your neighborhood store if there is a fast road there.
I think there is a legitimate criticism of induced demand: it usually doesn't offer a tradeoff for when you have enough roads. Zero roads in cities of every size isn't the answer. At some point a city has enough roads and should focus on mass transit or other transportation. I've never seen an induced-demand argument attempt to define this threshold, or explain why it sits where it does.
Economics should define that threshold. There are things you cannot do on mass transit, like getting the lumber to build a new house or apartment. There are things you don't want on mass transit (I don't want you taking your smelly garbage to the dump via the train, even though that's possible). Thus a small town will need to build roads. However, as the town grows into a city, eventually the minimum road network to reach all lots is not enough. At this point we need to ask what is more cost-effective: building transit in this town or building more lanes. Unfortunately, transit depends on the whole system (this applies to roads too, but we started with them!), which means that long term, transit might have been the better answer, but right now more roads are cheaper.
Doing REST microservices is incredibly slow because of how much work it takes to agree on what a "clean" and "consistent" API looks like for each service. It's an endless well of trying to establish best practices without refactoring constantly.
It's worth it for your public API, but it's such a huge time sink for internal APIs.
When senior leadership is non-technical, they can't tell whether non-customer-visible changes are useful or whether they just have mediocre engineering. They also can't tell when their software stack is such a mess that it's beyond fixing.
That's the most important job of a CTO: communicating the cost and implications of any development done by the tech team, whether it's a feature required by product, sales, or the CEO, or work initiated internally by the tech team.