"Using newly-assembled data from 1980 through 2024, we show that 25% of scientifically-active, US-trained STEM PhD graduates leave the US within 15 years of graduating."
I believe there will be a significant "discontinuity" in the data beginning in 2025, likely along the lines of (1) US-born science majors going abroad for their PhDs (and likely staying there afterwards), and (2) a major decline in foreign students coming to the US. Blocking disbursement of ongoing grants, immediately and dramatically slashing funding for the sciences, holding up universities under pain of blocking federal funding, eliminating fellowships, firing government scientists, stuffing agencies and commissions with politically appointed yes-men, demanding oaths of fealty in all but name, deporting foreign students and blocking their return, and many more actions of similar character tend to do that.
One of the greatest national scientific establishments was irreparably damaged in a matter of months. No discussion, no process -- just pulling the rug out. The US will coast for a few years on the technologies that just popped out of the university development pipeline, but that pipeline is now essentially broken.
I’m not seeing how you are getting there from that quote. Seems like one of the least controversial sentences in the article. Maybe it would be better to say “should” instead of “need?” In any event the overall point is pretty sound.
Even with dogs, if they are not well socialized when younger, they end up having problems interacting well/appropriately with new dogs and people.
At first, I thought this was a good overlooked point, but after digging into it, there isn’t a net reduction.
According to [1], the gCO2e/kWh for the relevant energy sources are:
Coal 850g
Natural gas 385g
Plastic incineration 512g
According to [2], in the US in 2023, 43.1% of electricity was from natural gas and 16.2% from coal. Based on that, the average fossil fuel kWh resulted in 512 gCO2e.
So, if you substitute the average fossil fuel with burning plastic, there is NO net improvement in CO2 emissions per kWh. Against just natural gas, burning plastic actually produces 33% more gCO2e.
I think the above approach is the correct way to evaluate this. Basically, to get your kWh from nonrenewable sources, you are still burning something and have to choose one thing or another to be burned. Choosing plastic allows you to defer burning your fossil fuel (or, in other words, gives you more total fuel to burn), but it doesn’t help climate change efforts.
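To make the arithmetic above easy to check, here is a short sketch of the weighted average using only the figures already quoted (850 g for coal, 385 g for gas, 512 g for plastic incineration, and the 2023 US generation shares). The only assumption is that the coal and gas shares are renormalized over fossil generation alone:

```python
# Sanity check of the gCO2e/kWh comparison, using the figures quoted above.
coal, gas, plastic = 850, 385, 512    # gCO2e per kWh, per source [1]
share_gas, share_coal = 0.431, 0.162  # share of total US generation, 2023, per [2]

# Weighted average over fossil generation only (shares renormalized).
avg_fossil = (share_gas * gas + share_coal * coal) / (share_gas + share_coal)

print(round(avg_fossil))             # ~512 -- same as burning plastic
print(round(plastic / gas - 1, 2))   # ~0.33 -- 33% more than gas alone
```

So the "no net improvement versus the average fossil mix" and the "33% worse than gas" claims both fall straight out of the quoted numbers.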
I don’t know if that source ultimately took into account the CO2 costs of extraction and transportation.
However, plastic sure isn’t free in that regard! 8-10% of petroleum (which is pulled out of the ground, with increasing effort each year) is used to produce plastics. I’d put good odds on extraction and transportation CO2 costs for petroleum exceeding those for LNG - no good guess on coal. That also doesn’t account for your energy costs in moving the post-consumer plastic around.
Plus, natural gas has significantly lower emissions than plastic to begin with.
Obviously, as others touched on, it’s better to displace burning fossil fuels and plastics (arguably a fossil fuel too) with renewables -- an effort that continues to accelerate.
The point is that it's free once it gets to the recycling stage. You could just dump the plastic in a landfill, or you can use it for energy.
Basically the efficiency comes from using the plastic twice, but paying for it only once.
> Plus, natural gas has significantly lower emissions than plastic to begin with.
Well, the question then is this: are we burning any oil at all for electricity? If the entire US burns zero oil for electricity, then you can make this point. Until then, let's replace that oil with plastic.
As for your renewable comment, this is intended as something to do today, to at least help. Over time it won't be needed, and that's fine, but let's work on today.
Find someone interested in continuing that business under a long term (royalty or such) or short term (lump sum) financial arrangement that is acceptable to you? I think there will be interested people, maybe even within this community (not suggesting I’m one of them, though).
Back when I had a really nasty run-in with poison oak, my friend’s father who was a doctor suggested the hot water trick. AMAZING. His explanation was that it depleted histamines that caused the itching. Appears to bear out:
“A poison ivy rash (like any other allergic reaction) is caused by the body releasing the chemical histamine to the affected area as part of your immune response. Heat will stimulate the production of histamine, and although this creates an unpleasant itching in the moment, the heat will eventually deplete the affected cells of their histamine, which can provide up to 8 hours of itch relief afterwards. This can be achieved by aiming warm water at the affected area, and slowly increasing the heat to the maximum tolerable temperature until itching stops.”
No. The author is demonstrating a concept - that there are many easy inroads to twisting ChatGPT around your finger. It was very tongue in cheek - a joke - the author has no true expectation of getting the car for $1.
You just add a disclaimer that none of what the bot says is legally binding, and it's an aid tool for finding the information that you are looking for. What's the problem with that?
Anytime the solution to a potentially complex problem is to the tune of "all you've got to do is...", that may be an indicator that it's not a well-thought-out solution.
> This response is confusing. The point isn’t “considering something is worthless” but rather “considering something superficially tends to lead to poor outcomes”
Replying here as the thread won't allow for more. But I'm not following what you mean, then.
I'm not seeing the outcome from Chevy being poor, any more than "inspect element" would be poor.
The thread will allow replies given a delay that’s sufficient to try to avoid knee-jerk responses. Pretty ironic (or telling) that you responded in this way given the context of the discussion.
> The thread will allow replies given a delay that’s sufficient to try to avoid knee-jerk responses. Pretty ironic (or telling) that you responded in this way given the context of the discussion.
You are right - it does seem to allow replies. But even 20 minutes later, I'm still not sure what exactly you mean.
>You just add a disclaimer that none of what the bot says is legally binding
The combination of legality and AI can make for a complex and nuanced problem. A superficial solution like "just add a disclaimer" probably doesn't capture the nuance needed for a great outcome. I.e., a superficial understanding leads us to oversimplify our solutions. Just like with the responses, it seems like you are in more of a hurry to send a retort than to understand the point.
I'm still not understanding the point though, 6 hours later.
Why can't it just be a tool for assistance that is not legally binding?
Also, throughout this year I have thought about these problems, and to me it's always been weird how much trouble people have with "hallucinations". I've thought about a ChatBot exactly like the one Chevy used, and how awesome it would be to be able to use something like that myself to find products.
To me the expectations of this having to be legally binding, etc just seem misguided.
AI tools increase my productivity so much. People often make things up and lie too, and it's even more difficult to tell when they do, as everyone is different and everyone lies differently.
>To me the expectations of this having to be legally binding, etc just seem misguided.
I think you're getting my point confused with a tangentially related one. Your point may be "chatbots shouldn't be legally binding" and I would tend to agree. But my point was that simply throwing a disclaimer on it may not be the best way to get there.
Consider if poison control uses a chatbot to answer phone calls and give advice. They can't waive their responsibility by just throwing a disclaimer on it. It doesn't meet the current strict liability standards regarding what kind of duty is required. There is such a thing in law as "duty creep," and there may be a liability if a jury finds it a reasonable expectation that a chatbot provides accurate answers. To my point, the duty is going to be largely context-dependent, and that means broad-brushed superficial "solutions" probably aren't sufficient.
I used that analogy because it’s painfully clear how it can go off the rails. The common thread is that legality isn’t simply waived in all cases. Legality is determined by reasonableness and, in some cases, by an expectation of duty. I don’t believe the Chevy example constitutes a contract but not for the reasons you’ve presented. Thinking you can just say “lol nothing here is binding but thanks for the money!” without understanding broader context is indicative of a cavalier attitude and superficial understanding.
That makes no sense at all. There are plenty of inventions and technologies that have come to life throughout history where you had to do or consider something in order to use them.
This response is confusing. The point isn’t “considering something is worthless” but rather “considering something superficially tends to lead to poor outcomes”
Do we want to turn customer service over to "this might all be bullshit" generators? Imagine coming into the showroom, agreeing on a price for a car, doing all the paperwork, and having them tell you that wasn't legally binding because of some small print somewhere?
I think that's a very simplified view of all of it.
Customer service has to consist of different levels of help tools. And current AI tools must be tested first in order for us to be able to improve them.
You have limited resources for Customer Support, so it's good to have filtering systems in terms of Docs, Forms, Search, GPT in front of the actual Customer Support.
For many questions, a person will find an answer much faster from the documentation/manual itself than by calling support. For many other types of questions, it's possible an LLM will be able to respond much more quickly and efficiently.
It's just a matter of providing this optimal pathway.
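The funnel above could be sketched roughly like this: try the cheap self-serve layers first, and escalate to a human only when nothing matches. All layer names and contents here are made up for illustration; the LLM layer is just a placeholder:

```python
# Minimal sketch of a support funnel: docs/search layers first,
# human support as the last resort. Purely illustrative.

def docs_search(question, docs):
    """Naive keyword lookup standing in for a real docs/search layer."""
    hits = [d for d in docs if question.lower() in d.lower()]
    return hits[0] if hits else None

def answer(question, docs):
    layers = [
        lambda q: docs_search(q, docs),
        lambda q: None,  # placeholder for an LLM layer (not implemented here)
    ]
    for layer in layers:
        result = layer(question)
        if result is not None:
            return result
    return "escalate to human support"

docs = ["To reset your password, use the account page."]
print(answer("reset your password", docs))   # answered by the docs layer
print(answer("refund for order #123", docs)) # falls through to a human
```

The point of the structure is only that each layer is cheaper than the next, so most questions never reach a person.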
You don't have to think of Customer Support LLM as the same thing as a final Sales Agent.
You can think of it as a tool that has specialized information fed into it via embeddings or training, and that can spend unlimited time with you answering any stupid questions you might have. I find I have a much better experience with chatbots, as I can drill deep into the "why's", which might otherwise annoy a real person.
That's pretty much what happens anytime you buy a car, though. There are always some other bullshit fees, even if you get incredibly explicit and specify that this is the final price with no other charges. They are going to try to force stuff on you unless you are incredibly vigilant and uncompromising. It sucks when you have to drive hours away just to leave in your old car.
And actually, based on my experience, sales agents, whether in real estate or cars, are notoriously dishonest. They may not hallucinate, but they leave facts unsaid and word things in such a way as to get you to buy something rather than to make the best decision - sometimes the best decision is not to buy anything from them.
So a ChatBot that can't intentionally lie or hide things could actually be an improvement in such cases.
If I say, "with all due respect... fuck you", does that mean that I'm free to say fuck you to anyone I want? I added a disclaimer, right? Because that's about what that sort of service feels like.
It is reasonable to say that the author demonstrated that bit of trust was misplaced to begin with.
The training methods and data used to produce ChatGPT and friends, along with an architecture geared to “predict the next word,” inherently produce a people pleaser. On top of that, it is hopelessly naive, or put more directly, a chump. It will fall for tricks that a toddler would see through.
There are endless variations of things like “and yesterday you suffered a head injury rendering you an idiot.” ChatGPT has been trained on all kinds of vocabulary and ridiculous scenarios and has no true sense of right or wrong, or of when it’s walking off a cliff. Built into ChatGPT is everything needed for a creative hostile attacker to win 10/10 times.
> an architecture geared to “predict the next word,” inherently produces a people pleaser
It is the way they chose to train it, with reinforcement learning from human feedback (RLHF), that made it a people pleaser. There is nothing in the architecture which makes it so.
They could have made a chat agent that belittles the person asking. They could have made one which ignores your questions and only talks about elephants. They could have made one which answers everything with a Zen koan. (They could have made it answer with the same one every time!) They could have made one which tries to reason everything out from bird facts. They could have made one which only responds with all-caps shouting in a language different from the one it was asked in.
Hence why I also included “the training methods and data.” All three come together to produce something impressive but with inherent limitations. The human tendency to anthropomorphize leads human intuition about its capabilities astray. It’s an extremely capable bullshit artist.
Training agents on every written word ever produced, or selected portions of it, will never impart the lessons that humans learn through “The School of Hard Knocks.” They are nihilist children who were taught to read, given endless stacks of encyclopedias and internet chat forum access, but no (or no consistent) parenting.
I get where you're going, but the original comment seemed to be making a totalizing "LLMs are inherently this way" claim, which is the opposite of true. They weren't like this before (see GPT-2, GPT-3, etc.); it took deliberate work to make them this way, which was a conscious and intentional choice. Earlier LLMs would respond to the tone presented: if you swore at one, it would swear back; if you presented a wall of "aaaaaaaaaaaaaaaaaaaaa", it would reply with more of the same.
I didn’t really believe it would work until I tried it, but having a fan blowing on you at high speed works to cool your whole body, including inside the headset. I no longer experience fogging of lenses or sweat dripping down into my eyes, which previously would become a problem after 15 minutes of intense activity. An added bonus is that the breeze can give you a sense of which direction you’re facing.
I generally agree, but on the other hand, from a consumer perspective IoT devices have been and continue to be particularly inconvenient. After 15 or so years of IoT devices, what do we have that resembles interoperability or open protocols? Maybe Zigbee? Instead, it seems that each IoT device is a one-off effort by a very small team, and that networking is the part where you cross your fingers, close your eyes, and hold your nose throughout the process of connecting and diagnosing connectivity issues.
There are a few companies that have "closed" systems that are nonetheless extremely easy to integrate into something like Home Assistant, or anything else. Lutron, for example, exposes a Telnet port on their (pro) hub that you can connect to and issue plain text commands. Sure, open standards are always better, but I'll take documented local control as a pretty close second.
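To give a flavor of what "documented local control" looks like, here is a rough sketch of talking to such a hub. The `#OUTPUT,<id>,1,<level>` command shape follows Lutron's published Integration Protocol, but treat the address, credentials, and device ID as assumptions for your own setup:

```python
import socket

def set_dimmer_cmd(device_id: int, level: int) -> bytes:
    """Build a Lutron-style '#OUTPUT' plain-text command string.
    Syntax per the published Integration Protocol; verify against
    the docs for your specific hub model."""
    return f"#OUTPUT,{device_id},1,{level}\r\n".encode()

# Hypothetical device: dimmer ID 2, set to 75% brightness.
cmd = set_dimmer_cmd(2, 75)
print(cmd)  # b'#OUTPUT,2,1,75\r\n'

# Sending it over the hub's telnet port (untested sketch; hub IP and
# login prompts depend on your hardware):
# with socket.create_connection(("192.168.1.50", 23), timeout=5) as s:
#     s.sendall(b"lutron\r\n")       # login prompt
#     s.sendall(b"integration\r\n")  # password prompt
#     s.sendall(cmd)
```

The appeal is exactly what the comment says: no cloud round-trip, no SDK, just a documented plain-text command over a local socket.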