newfocogi's comments on Hacker News

AWS Trainium is a machine learning chip designed by AWS to accelerate training of deep learning models. AWS Bedrock is a fully managed service that allows developers to build and scale generative AI applications using foundation models from various providers.

Trainium == Silicon (looks like Anthropic has agreed to use it)

Bedrock == AWS Service for LLMs behind APIs (you can use Anthropic models through AWS here)


I'd be curious to know if being exposed to more conspiracy theories correlates to believing them. At the extremes, if I'd never seen a conspiracy theory before, it would be hard to identify it. And if everything I heard claimed to be a conspiracy, I imagine I might just block them all out. I imagine there's a goldilocks zone where conspiracy theories are most palatable.

What I can tell you is that Gen X and Millennials seem to be able to recognize online scams/ads/paid content more easily than both Boomers and Gen Z

As an outside observer, I'm interested to see how poolside handles keeping up with their competition on two fronts. On one side, they're trying to build a better code model than the large labs (OpenAI, Anthropic, Microsoft). On the other side, they need to outperform the UX of Cursor and VSCode w/ integrations.

It seems to me they raised on the pedigree of their founders, but to win they need to be better than the competition. The margin for being better than the competition on both fronts seems quite narrow.

But I also wouldn't mind having $500M to train a foundation model, deploy fine-tuned and RAG-based solutions on top of it for big customers and see what sticks.


Wow, my initial thought on more jersey variation is "that's kinda fun, but sometimes they're ugly". I'm surprised you think it makes the sport hard to be passionate about.

I assume you're referring to football (soccer) since you called them matches.

Do you really think jersey changes are affecting your passion for sports, or is this just another example of the system breaking down, where pay-to-watch is what's really hurting your interest?


It's hurting because they're constantly trying to sell you something; even the jersey is full of sponsors in soccer


In this age of endless expertise, it's easy to be fooled into thinking someone is a true authority until you hear them speak on a topic you know well. There's a certain thrill in getting a glimpse behind the curtain, seeing the man (or woman) behind the rhetoric. While I tell myself that 40% of what they say is just made up or misinterpreted, I can't help but keep listening, captivated by the illusion of insight. Even when we know better, the siren song of perceived wisdom is hard to resist. At the end of the day, true expertise is rarer than we'd like to admit - but the fantasy is always enticing.


I think vibes are underrated. The smart people can easily mislead you because they're smart. So you can cover things up with "official statistics", maliciously or by accident.

For instance, inflation is a big one. I remember during the first spike in inflation (2021, I believe), I started noticing prices had gone up between 25-50%. We were told at the time inflation was something like 7%, but that would mean paying $5.35 for something that used to cost $5, which was obviously not what was happening. In short, they play games with the numbers.

Bezos was on Fridman talking about something similar. He learned that Amazon’s metrics said the typical wait time to reach customer service was less than 1 min. But everyone complained about how long it took. So in a meeting he called Amazon’s customer service line and was put on hold for over 10 minutes, far exceeding the promised wait time. He stated, “When the data and the anecdotes disagree, the anecdotes are usually right.”

The All-In guys go off vibes and try to tie them to reality, but sometimes miss the mark. Still, I think the vibes are often more right than the data.


> I think vibes are underrated. The smart people can easily mislead you because they're smart. So you can cover things up with "official statistics", maliciously or by accident.

> For instance, inflation is a big one. I remember during the first spike in inflation (2021, I believe), I started noticing prices had gone up between 25-50%. We were told at the time inflation was something like 7%, but that would mean paying $5.35 for something that used to cost $5, which was obviously not what was happening. In short, they play games with the numbers.

When there is a mismatch between your personal gut feeling and some official number or alleged fact in the world, there are different ways you can react:

A) You could think "Hmm, that's weird, is it possible that I'm missing something?"

B) You default to thinking that clearly you are right, so this is just another case of those so-called experts lying to you.

Had your response been A), you would have looked a bit more into it and realized that the overall inflation number is not based just on a subset of a few grocery items, but based on all different kinds of living expenses that people have. Many of those prices increased much less in 2021 than the overall 7% inflation rate (e.g., prescription drugs, cell phone plans, airline fares, motor vehicle insurance), so naturally, inflation in other categories was much higher to result in an overall rate of 7%.

If your gut feeling also tells you to doubt the inflation numbers for individual item categories released by the U.S. Bureau of Labor Statistics ([1]), you can get the raw data for those too, if I remember correctly.

One problem with your gut feeling is that it's very susceptible to various biases. For instance, the price of one grocery item increasing by 30% will be much more noticeable to you than the price of another item staying the same. It's also very easy to not realize that you are comparing the current price to the one from two years ago or so, thereby dramatically overestimating the yearly inflation rate.

I didn't mean to single you out, but the tendency by so many people to have overconfident knee jerk reactions to various information, instead of at least considering that they might have unknown unknowns or things they don't fully understand, is something that really concerns me.

[1]: https://www.bls.gov/opub/ted/2022/consumer-price-index-2021-...


My gut tells me that there are literally trillions of dollars tied to the "official" inflation numbers, so there is a huge incentive to nudge them one way or another, not to mention that no politician likes to be blamed for high inflation.

But if you want to get into it, sure. The inflation numbers are not based on a fixed basket of goods. They take into account elasticity and shift the basket to weight less expensive items more heavily as inflation goes up.

For instance, suppose you have only two goods, bread and butter. Bread costs $5 and butter costs $10, and suppose the inflation numbers are based off 50% bread and 50% butter. Now suppose both these prices double. What happens to inflation? The naive response is that inflation is 100%. But no, the BLS in its infinite wisdom realizes that if butter doubled, you'd likely consume less of it and opt for more bread! So maybe now the breakdown would be 75% bread and 25% butter, so your basket that cost you $7.5 now costs you $12.5 (0.75 * 10 + 0.25 * 20). Inflation is only 67% compared to 100%. Trillions of dollars of government spending tied to inflation (e.g. pensions, wage increases, etc.) have been saved!
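The bread-and-butter arithmetic above can be sketched in a few lines (all numbers are the hypothetical ones from the example, not real CPI data):

```python
# Toy comparison: fixed-basket inflation vs. substitution-adjusted inflation.
old_prices = {"bread": 5.0, "butter": 10.0}
new_prices = {"bread": 10.0, "butter": 20.0}  # both prices double

fixed_weights = {"bread": 0.5, "butter": 0.5}
shifted_weights = {"bread": 0.75, "butter": 0.25}  # consumers substitute toward bread

def basket_cost(prices, weights):
    return sum(weights[item] * prices[item] for item in prices)

old_cost = basket_cost(old_prices, fixed_weights)                         # 7.5
fixed_inflation = basket_cost(new_prices, fixed_weights) / old_cost - 1    # 100%
shifted_inflation = basket_cost(new_prices, shifted_weights) / old_cost - 1  # ~67%

print(f"fixed basket: {fixed_inflation:.0%}, shifted basket: {shifted_inflation:.0%}")
```

With the shifted weights, the same doubling of every price shows up as 67% instead of 100%, which is the whole point of the complaint.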

In some respects it's true, consumption will obviously shift to the cheaper items. But on the other hand, I want a simple objective measure of what increased money supply is doing to the price of goods. I'll figure out myself how much bread and butter I should buy.

Hence, I don't exactly "trust the experts", especially when there are trillions at stake.

But they would never play games right? The BLS is above reproach. What percentage of Americans can name anyone at the BLS or the methodology? Doesn't matter. Obviously the relative importance of Cakes, cupcakes, and cookies is 0.113, shifting from 0.188 just last month. Pretty obvious objective move.

https://www.bls.gov/news.release/pdf/cpi.pdf


This is not how it works. The weighting is based on the Consumer Expenditure Survey; BLS does not arbitrarily assign the weights. If they change the weighting between butter and bread, it's because they found that people were buying more bread in the CES, not because they assume that's what will happen.

CPI's methodology is transparent and the data is available if you wish to reproduce it. They aren't playing games with the data. There are all kinds of reasons your personal inflation rate might differ from CPI but it's not because BLS is putting their thumb on the scale to try and show less inflation.


Why did vibes take over from "gut feeling" I wonder


Every new generation reinvents slang. "Politically correct" became "woke". "Hip" became "cool" became "fire" or whatever the kids are saying these days.



Wow, I didn't know this effect had a name. I've experienced it so many times.

I've also seen how politicians lie and tell half truths about things where I know the full story as well as they do.


It was in the article...


Shows that it's not safe to assume that a random know-it-all on HN has read the article. In my defence, I did skim through it though.


I consider it the cost of information rather than amnesia.

When I read articles about something that I don't know much about, I usually don't have time to fact-check everything individually if it's not obviously wrong and seems to be plausibly presented, so I use it as a base theory until I receive evidence to the contrary - while knowing that it is likely still full of errors.


These guys are clearly at least somewhat intelligent and have brought up arguments in the past that I, in my infinite wisdom, haven’t considered. It’s up to me whether I take those arguments onboard after a sufficient amount of research. So I don’t think we should not listen at all. We should just not be all-in.


Totally. Combine with a nice sweater, a headset mic, a giant screen behind them, an audience and boom! Insta credibility. Looking like a TED talk is just as good as being a TED talk - and of course then it's all true! Deep expertise.. (maybe not these guys in particular. Just musing on some very good looking disinfo. Same thing as dressing people up in lab coats)


I remember them talking about self driving and Teslas being so far ahead, and then not being able to tell the difference between Cruise and Waymo. As someone who uses them in SF, Waymo is so far ahead of everyone else it's not even funny. It was definitely my Gell-Mann amnesia moment with them.


TLDR: Quantized versions of Llama 3.2 1B and 3B models with "competitive accuracy" to the original versions (meaning some degraded performance; plots included in the release notes).


Quantization schemes include post-training quantization (PTQ), SpinQuant, and QLoRA.
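As a rough illustration of the simplest of these, here's a per-tensor symmetric PTQ sketch in plain Python (toy values and a made-up 4-bit recipe for illustration, not Meta's actual scheme):

```python
# Minimal post-training quantization (PTQ) sketch: map float weights to
# 4-bit signed integers with a single per-tensor scale, then reconstruct.
def quantize_ptq(weights, bits=4):
    qmax = 2 ** (bits - 1) - 1  # 7 for 4-bit signed
    scale = max(abs(w) for w in weights) / qmax
    q = [max(-qmax - 1, min(qmax, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [qi * scale for qi in q]

weights = [0.9, -0.35, 0.02, 0.7, -0.88]
q, scale = quantize_ptq(weights)
restored = dequantize(q, scale)
max_err = max(abs(w - r) for w, r in zip(weights, restored))
# rounding error is bounded by half a quantization step (scale / 2)
```

SpinQuant and QLoRA both go beyond this: SpinQuant rotates the weights before quantizing to tame outliers, and QLoRA fine-tunes low-rank adapters on top of the quantized base.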


Thx, I prefer not to visit meta properties :X

They were already pretty small but I guess the smaller the better as long as accuracy doesn't suffer too much.


For the group, because I can never keep the terms straight:

Semaglutide is one specific drug within the class of GLP-1 agonists. Other examples include Liraglutide (Victoza, Saxenda), Dulaglutide (Trulicity), and Tirzepatide (Mounjaro, though this is technically a dual GIP/GLP-1 agonist).

Semaglutide is available under several brand names for different uses: Ozempic (type 2 diabetes), Wegovy (weight management), and Rybelsus (oral form for type 2 diabetes).


I'm enthusiastic about BitNet and the potential of low-bit LLMs - the papers show impressive perplexity scores matching full-precision models while drastically reducing compute and memory requirements. What's puzzling is we're not seeing any major providers announce plans to leverage this for their flagship models, despite the clear efficiency gains that could theoretically enable much larger architectures. I suspect there might be some hidden engineering challenges around specialized hardware requirements or training stability that aren't fully captured in the academic results, but would love insights from anyone closer to production deployment of these techniques.
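The core trick in the BitNet b1.58 papers is ternary weights; a minimal sketch of absmean ternarization (illustrative values only, and real BitNet models train with this quantization in the loop rather than applying it after the fact):

```python
# BitNet b1.58-style ternary quantization sketch: each weight becomes
# {-1, 0, +1} times a per-tensor scale (the mean absolute value).
def ternarize(weights):
    scale = sum(abs(w) for w in weights) / len(weights)  # absmean scale
    q = [max(-1, min(1, round(w / scale))) for w in weights]
    return q, scale

weights = [0.8, -0.05, 0.3, -0.9, 0.02]
q, scale = ternarize(weights)
# matmuls against q need only additions/subtractions -- no multiplies
approx = [qi * scale for qi in q]
```

That multiply-free matmul is where the claimed compute and memory savings come from, but it only pays off with kernels (or hardware) built for packed ternary operands.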


I think that since training must happen on a non-BitNet architecture, tuning towards BitNet is always a downgrade of its capabilities, so they're not really interested in it. But maybe they could be if they offered cheaper plans, since its efficiency is relatively good.

I think the real market for this is for local inference.


I find it a little confusing as well. I wonder if it's because so many of these companies have gone all in on the "traditional" approach that deviating now seems like a big shift?


I suppose hardware support would be very helpful, new instructions for bitpacked operations?
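For the fully binary (±1) case, at least, a dot product already maps onto XOR plus popcount, which existing instructions accelerate; a toy sketch (the packing convention here is my own assumption, with element i stored in bit i):

```python
# Dot product of two ±1 vectors packed into integer bitmasks
# (bit set = +1, bit clear = -1): dot = n - 2 * popcount(a XOR b),
# since XOR counts the positions where the vectors disagree.
def binary_dot(a_bits, b_bits, n):
    disagreements = bin(a_bits ^ b_bits).count("1")
    return n - 2 * disagreements

# pack [+1, -1, +1, +1] -> 0b1101 and [+1, +1, -1, +1] -> 0b1011
a, b = 0b1101, 0b1011
# true dot: (+1)(+1) + (-1)(+1) + (+1)(-1) + (+1)(+1) = 0
assert binary_dot(a, b, 4) == 0
```

Ternary (1.58-bit) weights need one more bit per element to encode the zeros, but the arithmetic stays in the same add/popcount family.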


People are almost certainly working on it. The people who are actually serious and think about things like this are less likely to just spout out "WE ARE BUILDING A CHIP OPTIMIZED FOR 1-BIT" or "WE ARE TRAINING A MODEL USING 1-BIT" etc, before actually being quite sure they can make it work at the required scale. It's still pretty researchy.


This doesn't strike me as much of a problem as it appears for you. What are the biggest issues you foresee?

I'm an avid podcast listener, but I already ignore 99.9% of podcasts out there. I'm not concerned that this is going to become 99.99%.

If these AI generated podcasts are all bad, I will just continue to ignore them. If some turn out to be good, it seems like a win to me.

If you're worried about an existential "what happens to the world if all media is machine generated", I guess I'm willing to hop on the ride and see what we find out.


99.9? There are roughly 3 million podcasts out there right now - I listen, regularly, to about 10 over a year (in any given week maybe 3-4). I'm therefore ignoring 2,999,990, or 99.9997%, of podcasts. I definitely agree with you that this isn't a problem.

(Also - ironically, one of the podcasts out of those 10 that I listen to regularly is the Deep Dive on AI. A NotebookLM production!)


It could poison the well - make it hard for people to find new good podcasts, and reduce discovery and revenue. Also they could fragment our society even more, disconnect people from people. Doesn't seem worth the risk.

If people want to listen to AI generated podcasts, they can just make them themselves. They don't need publishing on a platform alongside human-made podcasts. If I were Apple, who ultimately controls curation of podcasts, then I'd prevent them. After all, Apple Intelligence will soon do as good a job of making your custom podcast if that's what you want.


How are all of the 99.99% podcasts that currently are not worth listening to not already poisoning the well? If the current ranking algorithms work, I don't see why it can't work with more podcasts, AI or not.


Maybe the current ones aren't trying to poison the well.

As an exception, consider Infowars. Now imagine someone 10 times smarter, maybe even with no monetary goals.


Imagine if reality wasn't actually real! That would be nuts.


Could you share some links? I’m not familiar with this.


I don't have links, but look at Detroit during the crash of 2008. There were a lot of photos at the time of entire neighborhoods abandoned by people whose mortgages were underwater.

