In my experience, having done it both ways, first on VMs, then on lots of fully or mostly managed services, I generally prefer the latter because the systems tend to be a lot more "self-healing" - because they're someone else's responsibility. This has had a dramatic effect on my sanity and on sleeping well at night. I only wish I could migrate to an even more fully managed stack that's more reliable and still less work. The cases where I haven't been able to are either too expensive or would be too difficult to migrate.
This is a great question IMO. For those saying "just don't share passwords", consider that many of the vendor accounts you might need to use in business don't have team accounts, and there is no option but to share accounts. It's a legitimate thing that comes up rather often.
I would use Hetzner more but most of my user base is in the USA, and there is a lot of added latency to having your servers in Europe. I sort of think of Hetzner like a Costco for servers - does anyone know the closest equivalent in the USA?
Update: Hetzner does have USA servers, but apparently not dedicated bare-metal servers in the USA (only shared/VM).
I hadn't noticed, but the dedicated machines are indeed "vCPU". But isn't that how all VPS providers do it? How else would they manage/sandbox the machines?
The point is that their dedicated servers in the EU are not VPSes. They are dedicated servers, not virtual private servers. This usually means better performance (at the same price point, though dedicated servers have a higher starting price), among other pros.
A VPS with dedicated vCPUs is not the same as a dedicated server.
Different companies; those are just both places with high concentrations of data centers. Ashburn specifically has been a data center hub since the AOL days.
Nope, different companies.
But they may be colocated in the same data centers.
Hetzner USA is located inside NTT Global Data Centers Americas, Inc. and QTS Investment Properties Hillsboro, LLC.
I kind of suspect that user agents are being misidentified. I have heard that sometimes Android gets identified as Linux (which I guess is actually not wrong, but you get the point...).
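For what it's worth, here's a minimal sketch of how that misidentification can happen: Android user-agent strings literally contain "Linux", so a naive parser that checks for "Linux" first will bucket Android traffic as desktop Linux. The UA string below is a typical made-up example, not from any real dataset.

    # Naive OS detection misfiles Android as Linux, because Android
    # user-agent strings contain "Linux". Checking "Android" first fixes it.
    ua = ("Mozilla/5.0 (Linux; Android 14; Pixel 8) AppleWebKit/537.36 "
          "(KHTML, like Gecko) Chrome/124.0.0.0 Mobile Safari/537.36")

    def naive_os(ua: str) -> str:
        if "Linux" in ua:
            return "Linux"      # matches before we ever check for Android
        if "Android" in ua:
            return "Android"
        return "Other"

    def fixed_os(ua: str) -> str:
        if "Android" in ua:     # check the most specific token first
            return "Android"
        if "Linux" in ua:
            return "Linux"
        return "Other"

    print(naive_os(ua))  # -> Linux
    print(fixed_os(ua))  # -> Android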
Freeways are much, much less forgiving of abrupt speed changes and braking, which is something Waymo used to have quite an issue with. Moving to freeways shows they are confident that won't be an ongoing issue.
Yes, I was driving on a freeway last week and traffic went from 80mph to a dead stop about as fast as I could stand on the brakes. I stopped with only a few feet to spare behind the car in front of me, and fortunately the driver behind me was also paying attention. The jam eventually cleared, and there was absolutely no indication of what caused it.
>> and there was absolutely no indication of what caused it
That's the Accordion Effect.
With enough cars on the road, one little tap of someone's brakes flashes their brake lights, and it ripples upstream until the 80mph column of cars is forced to go down to zero ASAP.
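A toy model of that ripple, just for illustration (the 5mph tap and the 40% overbraking factor are made-up numbers, not measured values):

    # Toy accordion-effect model: each driver reacts a bit late and brakes
    # a bit harder than the car ahead, so a small dip amplifies upstream.
    speed_drop_mph = 5.0   # the first driver's brief tap costs 5 mph
    amplification = 1.4    # assume each following driver overbrakes ~40%
    cruise_mph = 80.0

    car = 0
    while speed_drop_mph < cruise_mph:
        car += 1
        speed_drop_mph *= amplification
    print(f"By roughly car #{car}, the ripple has grown into a dead stop.")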
Emergency braking is much harder at freeway speeds.
At 35mph you can have something (radar/cameras) look a few meters ahead; if there is a stationary obstacle, you slam on the brakes.
At 60 that doesn't work because braking distances are much longer. There might be an obstacle directly ahead of you on the pavement, but you won't hit it, because the car will turn with the road before reaching it. This means that your emergency braking system needs to be aware of the steering, the road layout, and the expected route.
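Rough numbers for the braking-distance point, using the idealized formula d = v^2 / (2 * mu * g) and an assumed dry-pavement friction coefficient of 0.8 (real values vary with tires, load, and weather):

    # Idealized stopping distance, ignoring driver/system reaction time.
    MU, G = 0.8, 9.81        # assumed friction coefficient; gravity (m/s^2)
    MPH_TO_MS = 0.44704

    for mph in (35, 60):
        v = mph * MPH_TO_MS
        d = v ** 2 / (2 * MU * G)
        print(f"{mph} mph: ~{d:.0f} m to stop")
    # 35 mph: ~16 m;  60 mph: ~46 m -- roughly 3x the distance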
Whenever I drive on highways in heavy rain I wonder how a self-driving car would behave. Virtually all human drivers drive unsafely in these conditions by following too closely. Would a Waymo keep the distance? Seems difficult to do in heavy traffic. The alternative I guess is to drive very slowly.
I thought this as well, but I think it emerged in a Waymo interview that any weird thing that can happen on a city street can also happen on a freeway, and on a freeway you have less time to react and the consequences are higher.
Freeways have some of the same challenges as streets, but not all of them. Treating the environment as if anything can happen at any time is just an "abundance of caution" thing. To use a real example, you don't get people walking up and throwing an egg at the side of your vehicle on controlled access freeways.
Although tools like ChatGPT, GitHub Copilot, and other stochastic parrots are useful in specific contexts, AI still has nothing of "I"; on the contrary, what they really do is generate text that makes sense (grammatical text, as opposed to ungrammatical text), which was already possible with Prolog in 2011.
If you think LLMs are no more capable than Prolog, I'm not sure if you'll ever be impressed by anything.
There is so much wrong with this contentless article.
- It's not a scam, I paid $199 for the device and I have one
- The hardware is nicely designed and fun to use, the UI for interacting with the LLM is better than other options (my grandma could play with it easily)
- The action model idea of training on UI usage as a mechanism is interesting.
- The author is just so incredibly wrong about the shift in capability from Prolog and GOFAI to now that it's almost a caricature.
Just because the device exists and appears to work doesn't mean it isn't a scam.
For example, if someone sells you a machine that they claim is powered by magic, but actually there is a hidden physical mechanism, you can claim the machine still performs the function, but they have misled you about the workings in a way that has huge implications for the capabilities of the product.
That is what is going on here. Look at MKBHD's review: the main potential he pointed out was the LAM functionality and the possible future uses of that. It turns out that entire thing was faked, and the "magic" that was the LAM is a "hidden physical mechanism" of duct-taped-together manual automations.
> That is what is going on here. Look at MKBHD's review: the main potential he pointed out was the LAM functionality and the possible future uses of that. It turns out that entire thing was faked, and the "magic" that was the LAM is a "hidden physical mechanism" of duct-taped-together manual automations.
I suspect the plan is/was to replace this with an actual LAM after release, or possibly they thought they'd have it ready for release but ran into the 'last 5%' issue that seems to plague AI automation of anything. Definitely not an ethical move, but I can see how someone who bought into the AI hype might think it was a reasonable gamble to become a billionaire.
When I watched the keynote/preordered, it seemed obvious the LAM UI training was a work in progress. The core of the device was a new UI for interacting with an LLM via voice in a way that's better than the phone. It does this.
I think the people calling it a scam are the same group that just complains about anything. The author drawing an equivalence between Prolog and LLMs is a lot of evidence in favor of this sort of bullshit; I just have a really low tolerance for this kind of argument and person.
I'm not delusional - I think the Humane device is extremely disappointing (and their marketing was way more misleading, imo), but I'm glad they built and shipped something/tried something in this space. I don't believe the Rabbit complaints are earnest; it's just cool for some people to hate new stuff. It's the same batch that hated the iPhone in 2007, just tedious and uninteresting.
> I think the people calling it a scam are the same group that just complains about anything. The author drawing an equivalence between Prolog and LLMs is a lot of evidence in favor of this sort of bullshit; I just have a really low tolerance for this kind of argument and person.
Thanks! I'm not a huge fan of people like you either.
You're looking for the Juicero: a real tangible (over)engineered product that certainly did a thing, but nowhere near what the manufacturers said it aimed to do.
Juicero might be an example, though I'm not sure they made any specific large claim that was totally false, unlike in this case. Perhaps if Juicero had explicitly claimed that some special magic was going on during the squeezing process, but I don't think they did.
They claimed that you needed the large amounts of force from their machine to squeeze the juice from their packets. People found that you can open them up and squeeze the juice out by hand because the machine isn't actually doing any sort of fresh juice squeezing.
I mean, it is a bit of a scam though right? I can pay for an NFT, 'own' the NFT, and still have been scammed.
In the case of this device, they told users it has functionality that, plainly, it does not have. They marketed it based on these features. That is a scam.
They say it's going to be free forever with no subscription, but they have to pay for ChatGPT API calls. Even if you forgive them for overhyping their ChatGPT wrapper, they're still a Ponzi scheme.
I still think of NFTs like the art market. If you bought the thing and you are happy with just owning what you got then there isn't a problem.
On the other hand if you bought with the expectation to sell in the future for a profit, you are trading on the perceived value. Scams rely on making you believe that the future accepted value will be much higher than what it will actually be.
That would be analogous to creating the expectation of absent functionality in the Rabbit.
In both cases the fraud is not in the item being sold but in the misrepresentations about it.
I think the reason this matters is that these days it feels like everything is misrepresented. I can buy a graphics card and be happy with its performance even though it is almost guaranteed to be well below the manufacturer's promised level.
What is the returns policy for the Rabbit? Can people get a refund if they don't like it?
'In 2011' is weirdly recent though, isn't it, not to mention weirdly specific? Was there some particular advancement in Prolog-based NLP around then that those of us with only a basic introduction to it (and who didn't use it in an NLP course) wouldn't be aware of?
Because I agree with you, that sounds ridiculous to me, but my level of Prolog is a couple of toy facts and dimly recalling what a 'cut' is. (I'm exaggerating a bit, but I've barely used it since university, and I'm not even 100% sure about the syntax.)
So I'm prepared to believe the state of the art for ChatGPT-like thing done in Prolog is a lot more impressive than I might have expected if someone asked me yesterday.
In fact, when I compared it to Prolog, it was exactly with the view that, back in 2011, it was already possible to classify a sentence/text as grammatical or ungrammatical with just a few lines of code. What LLMs like ChatGPT do is generate text based on corpora gathered from across the web, grouped, tokenized, and trained for general-purpose use, but, in the end, they still need the same kinds of rules as Prolog to determine whether or not they can regurgitate the text to the user.
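To make the kind of check being referred to concrete, here is a minimal sketch in Python of what a Prolog DCG does; the toy grammar and vocabulary are mine, not from any actual 2011 system:

    # Toy grammaticality check: S -> NP VERB NP, NP -> DET NOUN.
    # Accepts "the cat sees a dog"; rejects scrambled word orders.
    DET = {"the", "a"}
    NOUN = {"cat", "dog"}
    VERB = {"sees", "chases"}

    def noun_phrase(words):
        # Returns the remaining words on success, None on failure.
        if len(words) >= 2 and words[0] in DET and words[1] in NOUN:
            return words[2:]
        return None

    def grammatical(words):
        rest = noun_phrase(words)
        if rest is None or not rest or rest[0] not in VERB:
            return False
        return noun_phrase(rest[1:]) == []

    print(grammatical("the cat sees a dog".split()))   # True
    print(grammatical("cat the sees dog a".split()))   # False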
The problem is that most people can't see that the Rabbit R1 is, at the least, a deceptive product. ChatGPT (and Gemini, Claude, and many others) doesn't do this (it doesn't trick its users into thinking that the product does one thing when it actually does another).
I know how they work, and I think it's a really good job. But in the end, they still have to perform a grammatical check (like Prolog sentences) to be sure the text should be sent.
As I say, I know I pushed the limits; that's on me.
Ironically, the "stochastic parrot" could very likely put together a better argument than this ridiculous comparison with Prolog and 2011 tech.
Sure, Markov chains are also picking the next token, but so am I when writing this comment. Am I just a "stochastic parrot"? Or is it the author who is parroting other people's opinions without giving them any thought?
Impressed? Here are a few things that have impressed me in the long decades I've been online.
In 1995 we discussed the HP48G calculator on Usenet and Dave Arnett who designed the calculator chimed in.
When my uncle illegally left Hungary in 1981 for the US, communication was sparse. We went to my grandmother's for Sunday lunch and wrote a letter, together. Answers came in like two months. By the end of the 1980s phone calls began to happen, but they were supremely expensive and short. By the time my grandfather passed in 2011 at the tender age of 98, he easily spent an hour every day video calling his son over Skype, for free.
I wintered in Israel in 2007. It was almost impossible for me to get around on public transit, as I do not read Hebrew. I also spent more than a week at the turn of 2015/2016 in Israel: Tel Hazor, Tel Megiddo, Avdat, Mitzpe Ramon, Eilat. I used public transit for all of that; thanks to smartphones with GPS, maps, and real-time transit directions, it was trivial.
I am easy to impress. You just need to knock down barriers of communication.
When these LLMs roared onto the scene, I was neither impressed nor entertained. I was frightened. Nothing has happened since that would prove us wrong; on the contrary. Australia leads the way by banning deepfake porn. More of that, please, especially bans on deepfakes of people running in elections and on the mass generation of texts about them.
As an aside, I enjoy the translate-in-camera functionality of smartphones very much so I am not against all AI -- it just needs to be used wisely.
I mean, LLMs are pretty different from deepfakes, but are "fear" and "being impressed" mutually exclusive?
I agree that deepfaking stuff with politicians is extremely concerning, but that doesn't really detract from the fact that Stable Diffusion is pretty cool. I was very impressed the first time I tried out Midjourney and Udio.
To be fair, I made this comparison with Prolog thinking more about the text generated by LLMs than about the tech behind it. My fault. But I still think the Rabbit R1 is a SCAM in the sense that it misleads its users.
I get where that comment is coming from. As far as I can tell, when asked to generate text, what all of the popular AI products are doing is the equivalent of smooshing together the first page of Google results for the same prompt, xor'ing the shit out of it, then running the soup through a Word grammar checker, and then slowly typing out the results at like 300 baud, and everyone is losing their shit over it.
Asking AI to do anything that requires "I" has it falling flat on its face in a cruel mockery of the word "I", and I just don't understand the hype. Well, that's a lie: I do understand the hype, and the mechanism by which piles of cash and gigawatt-hours of energy are being burned through, and it makes me sad.
When all of the upcoming virtual brand ambassadors prove to be embarrassing failures maybe a splinter of reality will penetrate the hype-reinforced skulls of all of the "visionaries" funding this nonsense.
I don't know if I completely agree with that summary; ChatGPT (and Claude and Gemini) also has some degree of memory and context. It can at least to some degree remember what I and it typed and adjust things based on that. I would consider that some level of "intelligence", even if it's kind of baby intelligence.
ETA:
I should point out that it doesn't just "smoosh Google results together", and it can actually do pretty interesting transformations beyond simply grabbing the first result. If I ask ChatGPT to give me ten unique Java homework assignment problems, it will give me exactly that, and in my experience they actually are unique; at least they don't show up immediately when I search for similar things on Google. I can then ask it to give me those questions in LaTeX so that I can render them into something pretty. I guess you could argue that that is just an AST, so fair enough, but I think it's pretty easy to see why people are losing their shit over it.
It has also, for more than a year, hooked into Wolfram Alpha, so in addition to usually parsing your problem correctly, it can send that parsed problem to a more objective source and get a correct answer.
ChatGPT has been an immense timesaver for me. It's been great to generate stuff like homework assignments, or to summarize long text into something more palatable. I don't automatically trust its output obviously, but it's considerably more useful than a Markov chain.
I'm always surprised by this kind of article, or by comments from people who don't know anything about how LLMs work or what they can do. The problem is that, as is the case for most tools, there is a learning curve. Prompting is not always straightforward, and after using these models for a while, you start discerning what should be prompted and what won't work.
The best example I have is documentation that I wrote in Word and wanted to convert to Markdown on a GitHub site (see https://github.com/naver/tamgu/tree/master/documentations). I split my document into 50 chapters of raw text (360 pages) and asked ChatGPT to add Markdown tags to each of the chapters. Not only did it work very well, but I also asked the same system to automatically translate each of these chapters into French, Spanish, Greek, and Korean, keeping the Markdown intact. It took me a day to come up with 360 pages translated into these languages as GitHub-ready documents. The electricity consumption was certainly high for this task, but you have to compare it to doing the same task by hand over maybe a few weeks of continuous work.
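For anyone curious, that workflow can be scripted in a few lines. This is a hypothetical sketch using the OpenAI Python client; the directory layout, file names, and model choice are my assumptions, not what was actually run:

    # Hypothetical reconstruction of the chapter-by-chapter workflow.
    from pathlib import Path
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def transform(text: str, instruction: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o",  # assumed model
            messages=[
                {"role": "system", "content": instruction},
                {"role": "user", "content": text},
            ],
        )
        return resp.choices[0].message.content

    for path in sorted(Path("chapters").glob("*.txt")):
        md = transform(path.read_text(),
                       "Add Markdown tags to this documentation chapter. "
                       "Keep the wording unchanged.")
        out = Path("docs/en") / (path.stem + ".md")
        out.parent.mkdir(parents=True, exist_ok=True)
        out.write_text(md)
        for lang in ("French", "Spanish", "Greek", "Korean"):
            translated = transform(md, f"Translate this Markdown document "
                                       f"into {lang}. Keep the Markdown intact.")
            out = Path("docs") / lang.lower() / (path.stem + ".md")
            out.parent.mkdir(parents=True, exist_ok=True)
            out.write_text(translated)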
Every token emitted is a full pass through the network, with the prompt and previous tokens (sent by both you and the AI) given as input.
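In loop form, that's roughly this (a schematic sketch; `model` and `tokenize` are stand-ins, not a real library API):

    # Autoregressive decoding: each new token requires running the model
    # over the prompt plus everything generated so far.
    def generate(model, tokenize, prompt, max_new_tokens=50):
        tokens = tokenize(prompt)
        for _ in range(max_new_tokens):
            # One forward pass over the whole context so far.
            next_token = model.most_likely_next(tokens)
            tokens.append(next_token)
            if next_token == model.eos_token:
                break
        return tokens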
And I agree that there is certainly a capacity for reasoning, no matter how flawed it is. There is plenty of evidence of AI solving novel problems zero-shot. Maybe not 100% of the time, but even if you have to run it 100 times and it gets it right 75% of the time on pure reasoning problems, it's doing better than chance.
I completely agree. I'm not philosophically-brained enough to know how to define "intelligence", but I do think that ChatGPT qualifies as at least "intelligence-lite".
Yeah, as a distiller of the collective knowledge captured over the first 30 years of the commercial internet, it does exactly what it should. That's not always "right", but it still provides huge value even if all it does is practically filter and distill, leaving you to nitpick or correct.
It doesn't have to be "magic" to displace a lot of things people spend time on every day.
I'm not getting anything like that, for some reason. I assume I changed a setting and forgot (probably something to make bookmarklets work). Pasted in the URL bar in a new tab, private browsing, FF version 126.0, up-to-date Fedora, history doesn't save.
Thus only a very small number of companies (currently TSMC, Samsung, and Intel) attempt to operate leading-edge nodes, and the industry has shifted to a “fabless” model where companies like Apple and Nvidia design their chips but have them manufactured by “foundries” like TSMC. By pooling the orders of many different chip companies, the foundries can achieve the scale necessary to afford cutting edge fabs.
I wonder if AI training will end up being similar in the long-term (it's already partially true today).