sometimes "founding" contributors join long after founding.
sometimes they join during funding.
Neither case justifies a particular number either. Think of a founding team that secures $3M based on something and wants to bring in engineers,
while the founder(s) spent a year researching on their own dime.
Then it's also about who bears the load. Founders typically bear the load; if other members do that too, they should be cofounders.
Otherwise, it's more or less Pareto / power law with a risk adjustment.
This, but broader. Goodness and morality are subjective and, more importantly, relative measures, which makes them useless in many situations (such as this one).
While knowing this seems useless, it's actually the missing intrinsic compass, and its absence is the cause of a lot of bad and stupid behavior (by the definition that something is stupid if it's chosen while knowing it will cause negative consequences for the doer).
Everything should be measured primarily against its primary goal. For "for-profit" companies, that's obvious from their name and definition.
That nothing should be assumed beyond what's stated is the premise of any contract, whether commercial, public, or personal (like friendship), and it's a basic tool for debate and decision making.
I'm assuming you're using your Bilt card when this happens.
Your Bilt agreement stipulates how itemized transaction data is handled (level 3 in payment terms, with level 2 being "enriched" with subtotals/tax and merchant information, which is what you typically see with your normal bank).
Card networks (Mastercard, VISA) have different fee structures that incentivize more detailed information like level 3 by offering lower processing fees to merchants - here are more details on the levels: https://na-gateway.mastercard.com/api/documentation/integrat...
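For illustration only - these field names are assumptions, not any network's actual schema - the difference between the two levels is roughly:

    # Hypothetical sketch of level 2 vs. level 3 transaction data (fields invented for illustration)
    level_2 = {                      # "enriched" summary: what most bank feeds show
        "merchant": "ACME GROCERY #42",
        "total": 17.72,
        "tax": 1.24,
    }

    level_3 = {                      # itemized detail: line items on top of the level-2 summary
        **level_2,
        "line_items": [
            {"description": "Milk 1L", "qty": 2, "unit_price": 1.99},
            {"description": "Coffee beans", "qty": 1, "unit_price": 12.50},
        ],
    }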
It's both a mistake to believe that something is not political in a general sense and a mistake to confuse different specific politics.
Every organization has its own politics. Diversity of opinions is good and enables progress - and that's the whole purpose of science, if you think about it. However, confusing national politics or petty politics with the main competing forces is a mistake.
To make it even more complicated, there isn't even a single main political struggle within any organization. Simplified, there's one about ideas around the scope of work and one about the organization itself - and yes, these are or should be intersecting, but things are in motion and can't be perfectly aligned.
So everything is political and everything will mix, but there should be a ranking of priorities and a common sense of why extremes of any kind are deeply wrong. And firing people in science for national political reasons would be extreme and wrong.
Everyone should remember that Germany was leading in science before it decided that nationalism should be absolute, which practically destroyed its scientific leadership - it never recovered. Moreover, many of those kicked out found their place in the United States and eventually built the modern scientific world, from computers to the atomic bomb.
I might be missing some context here - what specifically does your comment refer to?
I'm asking because I don't see you in the conversation, and your comment seems like an out-of-context self-promoting plug.
Hey! I'm sorry you feel that way. Several people have subscribed to OpenAI updates from my comment, so there is clearly value for other commenters. I understand not everyone is interested, though. It's just a free side project I built, and I make no money from it.
Additionally, I believe my contribution to the conversation is that gpt-4o-mini, the previous model advertised as low-cost, works pretty well for my use case (which may help others here). I'm excited to try out o3-mini for web scraping purposes, depending on what the cost looks like. Happy to report back here once I try it out.
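A minimal sketch of that kind of low-cost extraction call, assuming the official openai Python SDK and an OPENAI_API_KEY in the environment (the URL, prompt, and fields here are made up for illustration):

    import requests
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    html = requests.get("https://example.com/pricing").text  # placeholder page to scrape

    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # the low-cost model mentioned above
        messages=[
            {"role": "system", "content": "Extract product names and prices from the HTML as JSON."},
            {"role": "user", "content": html[:20000]},  # truncate to keep token costs down
        ],
    )
    print(resp.choices[0].message.content)

Swapping in o3-mini later would mostly be a matter of changing the model name, cost permitting.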
I think it's less a problem of cost for the average person and more a problem of setting the market price for them at a fraction of the current one. This has such a deflationary impact that it's unlikely to be captured, or even conceived of, by current economic models.
There's a problem of "target fixation" on the capabilities, and it captures most of the conversation, when in fact most public focus should be on public policy and on ensuring this has the impact that society wants.
IMO whether things are going to be good or bad depends on having shared understanding, thinking, discussion, and decisions around what's going to happen next.
Exactly, every country should urgently have a public debate on how best to use that technology and make sure it's beneficial to society as a whole. Social media are a good example of how a technology can have a net negative impact if we don't deploy it carefully.
Ok, this conversation about social media has cropped up time and time again, and things haven't improved - they've gotten even worse. I don't expect we'll be able to solve this problem with discussion alone; so much money is being poured in that any discussion is likely to be completely neglected. I'm not saying we shouldn't discuss this, but more action is needed. I think the tech sector needs to be stripped of political power, as it has gotten way too powerful and is interfering with everything else.
I agree, though while everyone is having public debates, these companies are already in there greasing palms. I personally think the fact that we are allowing them to extract so much value from our collective work is perverse. What I find even more sickening is how many people are cheering them on.
Let them make their AI if we have to. Let them use it to cure cancer and whatever other diseases, but I don't think we should be allowing it to be used for commercial purposes.
For better or worse, there's a system and a range of possibilities, and any actionable steps need to be within the realm of this reality, regardless of how anyone feels about it.
Public information, and the ability of the public to analyze, understand, and eventually decide what's best for them, is by and large the most relevant aspect. Your decisions are drastically different if you learn something can or cannot be avoided.
You can't disallow commercial purposes. You can't even realistically enforce property rights over illegally used training data, but maybe you can argue that the totality of human knowledge should go towards the benefit of humans, regardless of who organizes it.
However, there's a lot that can be done, like understanding the implications of the (close to) zero-sum game that's about to happen and whether they are solvable within the current framework, without a first-principles approach.
Ultimately, it's a game of resource ownership and resource utilization efficiency. Everyone's resource ownership can't be drastically changed, but their resource utilization efficiency can, as long as the implications are made clear.
People want things. Information doesn't want anything.
People want property rights for both physical and intellectual property.
Likely the rules are ill-suited for the levels of scaling achieved with current technology. A single person can read a ton of stuff and retain only a fraction of what a system can.
So "information wants to be free" is nonsense. People want access to more information; at the same time, they don't want others to have access to the same information. However, having access to more information has marginal effects when capacity is limited. Hence it's information capacity that makes all the difference, and this puts a small class at a disproportionate advantage.
It usually means that the US government should give for-profit entities billions of tax dollars to benefit the elite and corpo class.
A funny anecdote I found the other day: during the Korean War, the US government wanted to break up AT&T, but AT&T got the Army to argue that AT&T was instrumental to winning the war. I'll leave it as an exercise to the reader to find out whether AT&T won the Korean War, but one of the end results was rewarding AT&T with additional lucrative military contracts.
Not putting words in OP's mouth from the comment you're replying to, but from my understanding, the default state of information is to spread itself; it's an inherent characteristic. You have to put in effort to suppress it (for example, making info classified or enforcing secrecy). If you don't actively put effort into suppressing information, it will spread.
The defining characteristic of information is its reproducibility. Even verbal speech: "And then she said..." But digital information in particular is almost infinitely reproducible, without loss, worldwide.
So when people say "information wants to be free", what they actually mean is that trying to restrict the movement of information is fighting against the essential nature of information.
> Information wants to be free (as in libre) in the same way water wants to flow downhill
Water flows downhill to maximise entropy. The equivalent for information is dissolution into randomness. If anything, by this analogy, the “freedom” that involves information being copied and transmitted is the equivalent of pumping water uphill.
And I disagree about the work when it comes to information. Our natural inclination as humans is to share things we find interesting. Like "check out this song" or "check out this article". I don't think this takes much work. It just happens. In this sense the information is free like the stream is free to flow through the hills.
On the flip side there is substantial effort put into impeding this free flow of information with schemes like DRM. Similar to building a dam. But once cracks form the free flow resumes.
Continuing the water analogy, you could say there is also substantial effort put into building the infrastructure that makes information accessible to many more people - a library being the equivalent of a city's plumbing infrastructure.
When the phrase was coined in 1984 [1], it was a valid hypothesis. The last forty years have given evidence for the null.
The more plentiful information has become, the more we've sought (and in some cases, needed) to corral and control it. Sometimes for our own purposes. In many cases because, absent such archiving, entropy takes its toll.
The problem with "information wants to be free" is that it presumes a natural force which doesn't exist. There also isn't a natural force that wants to make DRM and NFTs. But there is one that wants to forget, to corrupt, and to re-interpret. (There are very human forces that wish to control.) Sit back and let information do what it wants, which is precisely nothing, and the forces that beckon us into control and forgetfulness will win.
sometimes "founding" contributors join long after founding. sometimes they join during funding.
Neither case justifies a number either. Think founding team secures $3M based on something, wants to bring in engineers. Founder(s) worked for 1 year researching on their own dime.
Then it's also about who bears the load. Founders typically bear the load, if other members do that, they should be cofounders.
Otherwise, it's more or less pareto / power law with a risk adjustment.
reply