My intuition went for a video compression artifact instead of an AI modeling problem. There is even a moment directly before the cut that can be interpreted as the next key frame clearing up the face. To be honest, the whole video could have fooled me. There is definitely a skill in discerning these videos that can be trained just by watching more of them with a critical eye, so try to be kind to those who have not concerned themselves with generative AI as much as you have.
Yeah, it's unfortunate that video compression already introduces artifacts into real videos, so minor genAI artifacts don't stand out.
It also took me a while to find any truly unambiguous signs of AI generation. For example, the reflection on the inside of the windows is wonky, but in real life warped glass can also produce weird reflections.
I finally found a dark rectangle inside the door window, which at first stays fixed like a sign on the glass. However, it then begins to move like part of the reflection, which really broke the illusion for me.
For IoT devices, the upcoming regulations will probably include a stipulation that vendors need to specify a guaranteed support period for the devices. I would prefer that same kind of commitment and dependability for games over a simple badge. It would combine free choice in how to build your business model with the ability for customers to make an informed choice ("they can pull the plug in 5 months? I'm not paying EUR 60 for that"). At least as long as there isn't a malicious compliance cartel, e.g. all big vendors only guaranteeing a month and "kindly" supporting it for longer…
(And my highest preference would be for vendors to be forced to publish both server and client code as free software, if they don't continue selling their service for reasonable prices. Not only for games, but for all services and connected devices. Getting political support for such regulations is, of course, extremely hard.)
> it is stupid for anyone who knows how git or GitHub API works?
You need to know how git works and GitHub's API. I would say I have a pretty good understanding of how (local) git works internally, but I was deeply surprised about GitHub's brute-forceable short commit IDs and the existence of a public log of all reflog activity [1].
When the article said "You might think you’re protected by needing to know the commit hash. You’re not. The hash is discoverable. More on that later." I was not able to deduce what would come later. Meanwhile, data access by hash seemed like a non-issue to me – how would you compute the hash without having the data in the first place? Checking that a certain file exists in a private branch might be an information disclosure, but it is not usually problematic.
And in any case, GitHub has grown so far away from its roots as a simple git host that implicit expectations change as well. If I self-host my git repository, my mental model is very close to git internals. If I use GitHub's web interface to click together a repository with complex access rights, I assume they have mechanisms in place to thoroughly enforce these access rights. I mean, GitHub organizations are not a git concept.
> You need to know how git works and GitHub's API.
No; just knowing how git works is enough to understand that force-pushing squashed commits or removing branches on remote will not necessarily remove the actual data on remote.
The GitHub API (or just using the web UI) only makes these behaviors more obvious. For example, you can find and check a commit referenced in MR comments even if it was force-pushed away.
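To make that concrete, here is a small illustrative sketch (the repository name and hash are made up) of fetching a commit through GitHub's REST API purely by its hash, even if no branch references it anymore:

    # Sketch: a "deleted" (force-pushed-away) commit is still retrievable by hash
    # via the GitHub REST API. "someorg/somerepo" and the hash are placeholders;
    # abbreviated hashes are typically accepted too, which is what makes
    # brute-forcing them feasible.
    import requests

    resp = requests.get(
        "https://api.github.com/repos/someorg/somerepo/commits/deadbeef0123",
        headers={"Accept": "application/vnd.github+json"},
        # a private repo would additionally need: "Authorization": "Bearer <token>"
    )
    if resp.status_code == 200:
        commit = resp.json()
        print(commit["commit"]["message"])                        # the commit is still there
        print([f["filename"] for f in commit.get("files", [])])   # and so are its changes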
> was deeply surprised about GitHub's brute-forceable short commit IDs
Short commit IDs are not a GitHub feature, they are a git feature.
> If I use GitHub's web interface to click together a repository with complex access rights, I assume they have mechanisms in place to thoroughly enforce these access rights.
Have you ever tried to make a private GitHub repository public? There is a clear warning that code, logs and activity history will become public. Maybe they should include an additional clause about forks there.
Dereferenced commits which haven't yet been garbage collected on the remote are not available to your local clones via git... I suppose there could be some obscure way to pull them from the remote if you know the hash (though I'm not actually sure), but either way (via web interface or CLI) you'd have to know the hash.
And it's completely reasonable to assume that no one who was external to the org while it was private would have those hashes.
It sounds like GitHub's antipattern here is retaining a log of all events, which may leak these hashes – and that's really not something I'd expect a git user to assume.
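That event log is queryable, by the way. A rough illustrative sketch (the repository name is a placeholder, and this endpoint only covers recent activity – third-party archives keep much more) of how commit hashes surface in a repo's public event stream even after the branch was rewritten:

    # Sketch: commit hashes can leak via a repository's public event stream,
    # even for commits that were later force-pushed away.
    # "someorg/somerepo" is a placeholder.
    import requests

    resp = requests.get(
        "https://api.github.com/repos/someorg/somerepo/events",
        headers={"Accept": "application/vnd.github+json"},
    )
    for event in resp.json():
        if event["type"] == "PushEvent":
            for commit in event["payload"]["commits"]:
                # these hashes remain valid references even if the branch
                # no longer contains the commits
                print(commit["sha"], commit["message"])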
I'm sorry, but that is just elitist bullshit. First, even if we accept your implicit premise, that it is a training hurdle only, there is enormous value in accessible science, literature and education. In our connected society and in a democracy everyone benefits from everybody else understanding more of our world. In software engineering we have a common understanding that accidental complexity reduces our ability to grasp systems. It's no different here.
Second, your implicit premise is likely wrong. Different people have different talents and different challenges. Concrete example: In German we say eight-and-fifty for 58. Thus 32798 becomes two-and-thirty-thousand-seven-hundred-eight-and-ninety, where you constantly switch between higher and lower valued digits. There are many people, me included, who quite often produce "Zahlendreher" – transposed digits – because of that when writing those numbers down from hearing alone, e.g. 32789 instead of 32798. But then, there are also people for whom this is so much of a non-issue that when they dictate telephone numbers they read them in groups of two: 0172 346578 becomes zero-one-seven-two, four-and-thirty, five-and-sixty, eight-and-seventy. For me this is hell, because when I listen to these numbers I need to constantly switch them around in my head with active attention. Yet others don't even think about it and value the useful grouping it provides. My current thesis is that it is because of a difference between auditory and visual perception. When they hear four-and-thirty they see 34 in their head, whereas I parse the auditory information purely by sound.
What I want you to take from my example is that these issues might not be training problems alone. I have used German number words since childhood and have worked in a number-intensive field, and yet I continue to have these challenges. While I have never been deeply into history, I suspect that my trouble with Xth century versus x-hundreds might persist, or persist for a long time, even if I got more involved in the field.
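(As an aside, here is a tiny toy sketch – my own illustration, nothing standardized – of the swap that trips me up: in the spoken form the unit digit comes before the tens digit.)

    # Toy illustration: German-style two-digit number words put the unit before
    # the tens ("eight-and-fifty" = 58), so the spoken order inverts the written order.
    UNITS = ["zero", "one", "two", "three", "four", "five", "six", "seven", "eight", "nine"]
    TENS = ["", "", "twenty", "thirty", "forty", "fifty", "sixty", "seventy", "eighty", "ninety"]

    def german_style(n: int) -> str:
        """Spell a number between 21 and 99 (skipping the teens) German-style."""
        tens, unit = divmod(n, 10)
        if unit == 0:
            return TENS[tens]
        return f"{UNITS[unit]}-and-{TENS[tens]}"

    print(german_style(58))  # eight-and-fifty: you hear the 8 first but write it last
    print(german_style(34))  # four-and-thirty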
That's fine, the thought-stopping accusation of "elitism" doesn't bother me. It's a preoccupation of people preferring equality based on dumbing things down to the lowest common denominator, lest - god forbid - someone has to make an effort.
I don't think lowly and condescendingly of people like that, I think they're capable of learning and making the effort - they're just excused and encouraged not to.
People who actually have learning difficulties (because of medical conditions or other issues) or people from different cultures accustomed to other systems are obviously not the ones I'm talking about - and they don't excuse the ones without such difficulties who live in countries that have used this convention for 1000+ years.
>Second, your implicit premise is likely wrong. Different people have different talents and different challenges
The huge majority that confuses this doesn't do it because they have a particular challenge or because their talents lie elsewhere. They do it because they never bothered, same way they don't know other basic knowledge, from naming the primary colors to pointing to a major country on the map. They also usually squander their talents in other areas as well.
Besides, if understanding that the 18th century is the 1700s is "a challenge", then the rest of studying history would be even more challenging. This is like asking to simplify basic math for people who can't be bothered to learn long division, thinking this will somehow allow them to do calculus.
> This is like asking to simplify basic math for people who can’t be bothered to learn long division, thinking this will somehow allow them to do calculus.
Oh dang, you kinda sorta had me until here. This sways me towards @Perseid’s point. :P Math education is full of unnecessary mental friction, and it pushes lots of people away. We know that finding better, simpler ways to explain it does, statistically, allow more people to do calculus. Long division is a good example, because it’s one of the more common places where kids separate & diverge between the ones who get it and the ones who don’t, and there are simpler ways to explain long division than the curriculum you and I grew up with, alternatives that keep more people on the path of math literacy.
We can see similar outcomes all over, in civil and industrial design, and in software and games, from cars to road signs and building signs to user interface design: making things easier to understand, even by small amounts, affects outcomes for large numbers of people, sometimes meaningfully affecting safety.
The numbering of centuries is admittedly a simple thing, but maybe it actually is unintentionally elitist, even if you don’t think condescendingly, to suggest people shouldn’t complain about a relatively small mental friction when having to convert between century and year? Yes, most educated people can handle it without problems, but that doesn’t tell us enough about how many more people would enjoy history more, or become educated, if we smoothed out how we talk about it and made it slightly easier to talk about. This particular example might not change many lives, but it adds up if we collectively improve the design of writing and education traditions, right? Especially if we start to consider the ~20% of people who are neurodivergent and the ~50% who are below average.
> They do it because they never bothered
Why should people have to bother, if it’s not necessary? Your argument that some people are lazy might be deflecting. Is there a stronger argument to support the need to continue using this convention? Being able to read old history might be the strongest reason, but why should we waste energy and be okay excluding people, even if they are just lazy, by perpetuating a convention that has a better alternative?
There are a lot of shibboleths and pointless conventions in the world. Flutists are called "flautists" because classical musicians aspire to be Italian, and if you don't pretend to be Italian too you'll embarrass yourself. Minute hands were originally long because they pointed to an outer dial of minutes, while hour hands were distinguished by being decorated, but now being slightly longer is just a stylization that means minute hand (even though "minute" means "small") and we have two pointers using the same dial for different enumerations, one without the relevant numbers and distinguished by fractional differences in its width and length. British English spellings are substantially French, and this is perpetuated as a matter of national pride.
Like the word shibboleth, these examples are all kinds of language. Even the clock hands are a sort of visual language. Nth century is another language element. The conventions make outsiders stumble, but for insiders they're familiar and shedding them would be disturbing. Over time they become detached from their origins, and more subtle and arbitrary.
In programming we have "best practice", which takes good intentions and turns them into more arbitrary conventions. These decisions are undone again later by people saying "no that's dumb, I'm not going to do that", even if it is "how we do it" and even if learning it is a sign of cleverness. We have to be smart to learn to do dumb pointless things like all the other smart people.
Is this good? Keeps us on our toes, maybe? Or keeps us aligned with bodies of knowledge? I think it's definitely good that we have the force of reformist skeptics to erode these pointless edifices, otherwise we'd be buried in them. But new ones are clearly being built up, naturally, all the time. Is that force also good? Alright, yes, it probably is. Put together, this is a knowledge-forming process with hypothesises (I don't like using Latin plurals, personally) and criticisms, and it's never clear whether tradition or reform is on the dumb or overcomplicated side: it remains to be seen, as each case is debated (if we can be bothered).
Market concentration is really the underlying problem. Microsoft should never have been allowed to buy GitHub. Microsoft Windows should long have been split into a company separate from Microsoft Office etc. If there wasn't this one gigantic business, then whichever smaller business made Teams would be on much more equal footing with other competitors, as it would not have the unfair advantage of integration into other currently-Microsoft-owned products, nor the aggressive bundling Microsoft does with Teams.
The number of anti-Microsoft people who still use GitHub and then just blame Microsoft for buying it is astounding to me.
At some point, if people want an alternative to Github, perhaps it starts with people not using Github and switching to alternatives.
Honestly, it would seem people like market concentration. I don't think people like having to use multiple repository management websites. However, I do wish it were centralization of experience over a federated system, rather than what we have now, e.g. a "source control browser" that normalizes github, bitbucket, sourceforge, sourcehut, etc. into a single seamless interface.
But even that doesn't seem to be high on anyone's list.
> "The number of anti-Microsoft people that still use Github is astounding to me, and then just blame Microsoft for buying it."
Voting with your wallet (or with your attention & time for free things) makes sense if there's an alternative you can choose that's as good as the one from the company you dislike, or if you consider the impact on you of any deficits in the alternative to be less important than sending a message by voting with your wallet/time.
But it's completely understandable, and very common, for people to be in a situation where, while they want to boycott a company/product because of how it acts in some way (from software UI decisions to using child labour in sweatshops to...), they are faced with the choice between using/buying one of its products or suffering from what they consider to be a significantly worse and/or more expensive product.
And if you wish that one or both of Microsoft selling / giving away Github, or MS changing how they run Github, would happen, then why not publicly express blame in the hope that enough similar complaints build pressure, regardless of whether you're avoiding it or feeling you need to use it?
(Personally I don't feel I use Github enough to be a useful voice on how MS have handled it since the acquisition, but I feel like many people have expressed being pleasantly surprised that they've broadly let Github be Github, at least compared to worst-case fears of how much they might try to make it more Microsofty.)
Network effect. Especially for open source. The thinking is basically that GitHub is where developers find your project so if you don’t use GitHub you won’t find developers.
I think this ignores just how much better GitHub is compared to its competitors — at least from my experience of using bitbucket at work. GitHub rightfully should have more market share.
I'm not sure that's so obvious these days at least. The era of tech mega corps just being able to buy up all the competition seems to be mostly over(ish) for now (e.g. Figma, ARM, Broadcom/Qualcomm, Visa/Plaid).
> At the end of the day it should only matter if Microsoft's practices are hurting consumers rather than their competitors.
Focusing on short term repercussions for consumers has significantly hurt long term consumer interests and there is evidence that it hurt the economy in general. In the decades preceding the 1980s it was generally understood that competition itself is a necessity for effective free markets and that extreme power concentration (as we e.g. see today in the IT sector) is hard to reconcile with efficient markets and political freedom.
See [1] for details, here is an excerpt:
> An emerging group of young scholars are inquiring whether we truly benefitted from competition with little antitrust enforcement. The mounting evidence suggests no. New business formation has steadily declined as a share of the economy since the late 1970s. “In 1982, young firms [those five-years old or younger] accounted for about half of all firms, and one-fifth of total employment,” observed Jason Furman, Chairman of the Council of Economic Advisers. But by 2013, these figures fell “to about one-third of firms and one-tenth of total employment.” Competition is decreasing in many significant markets, as they become concentrated. Greater profits are falling in the hands of fewer firms. “More than 75% of US industries have experienced an increase in concentration levels over the last two decades,” one recent study found. “Firms in industries with the largest increases in product market concentration have enjoyed higher profit margins, positive abnormal stock returns, and more profitable M&A deals, which suggests that market power is becoming an important source of value.” Since the late 1970s, wealth inequality has grown, and worker mobility has declined. Labor’s share of income in the nonfarm business sector was in the mid-60 percentage points for several decades after WWII, but that too has declined since 2000 to the mid-50s. Despite the higher returns to capital, businesses in markets with rising concentration and less competition are investing relatively less. This investment gap, one study found, is driven by industry leaders who have higher profit margins.
What makes this so difficult is that it would be hard to fix even if there was agreement on the problem.
If governments were to parcel up markets and stop companies from crossing rather arbitrary dividing lines, it would effectively stop all investment in disruptive technologies because any real disruption most likely infringes on some of these laws.
If you stop large companies from expanding into neighbouring industries, e.g. by bundling new stuff with their existing offering, you stop them from becoming bigger, but at the same time you are reducing competition. The risk is that you might end up with smaller companies but even less competition.
I'm not ideologically opposed to government intervention. I just don't know how to do it. All discussions on how to break up some tech giant quickly reveal how devilishly complex the problem is. And it's different for each of them and for each industry.
What would be a general rule to prevent growing concentration without damaging innovation, ossifying existing market structures, or making impossible demands on the political system in terms of keeping all those detailed rules up to date and fit for purpose?
>> If governments were to parcel up markets and stop companies from crossing rather arbitrary dividing lines
>There is absolutely no need to do this until you become Microsoft's size and no government has or will.
I'm not so sure. Debates about how to break up the tech giants often revolve around which particular activities shouldn't be under the same roof because there is an intrinsic conflict of interest.
For instance, some of the accusations against Amazon appear to be pointing to a potential solution where Amazon would no longer be allowed to compete with Amazon Marketplace traders or with publishers. Not sure if Lina Khan has anything like this in mind or not.
We also had many debates about whether media companies should be allowed to be internet access providers or operate internet backbones. Net neutrality is supposed to stop any misuse of power, but net neutrality itself is under constant fire from deregulators.
The thing is, it doesn't make much sense to break up a specific company because doing both A and B causes a conflict of interest but then let other companies do A and B. That's why in my view any such breakup implies a need for defining boundaries between markets that cannot be crossed.
> boundaries between markets that cannot be crossed.
This only applies to dominant companies/ monopolies.
But there is merit to the idea - like, should investment banks be allowed to profit from taking a position against the position of their customer, even if that was done on their advice?
> defining boundaries between markets that cannot be crossed
So basically entrenched companies in specific markets would be extremely hard to challenge unless you have very large amounts of capital just lying around doing nothing? Even start-ups would struggle a lot more to get funding, because no established company outside of that specific market would be allowed to purchase them. I'm not sure overall that would benefit consumers that much (IMHO the complete opposite, but it's debatable).
Of course it depends on how the boundaries are defined, but just in tech:
Apple (being a computer company) would have never been allowed to develop the iPod/Phone/Pad without spinning them off into independent companies?
Google (being an OS provider) wouldn't have been able to sell Pixel phones themselves, but that wouldn't be an issue since Android probably wouldn't have been a thing in the first place.
So we'd be permanently stuck with Symbian and Nokia/Sony Ericsson/Blackberry/etc.
The same applies to MS, which is a great counterexample: despite all their resources and power, they completely failed to leverage that in the mobile market. Then you have Intel vs ARM, Google and social media, even Kodak to an extent.
Having a lot of money, resources and great engineering is not necessarily such a huge competitive advantage when trying to enter an adjacent market. You must also be capable of developing competitive/innovative products while not being afraid to cannibalize your current revenue streams. Especially if we're talking about major public companies. Pouring billions into some (potential) boondoggle without any immediate return is hard to pull off without generating a severe backlash from your investors.
Having a seemingly "perfectly" competitive market (i.e. margins are close to the "risk-free" rate of return) doesn't necessarily lead to a lot of innovation because companies in such markets can't afford to make risky investments and tend to just focus on maximizing efficiency of current technologies. e.g. yes Google being able to fund Waymo with their Search/Ad revenue/etc. is not exactly fair to their potential competitors but IMHO preventing that would have significantly slowed down any real progress in the field.
That's exactly what worries me. So if something is done to prevent growing market concentration, it had better be something that doesn't rely on this sort of fine-grained market segmentation.
One alternative that could work is to mandate open APIs and a requirement for large platforms to carry all legal traffic and content. I know this is incredibly tricky as well. Who pays for the infrastructure? What about security and privacy issues? It raises many questions but it seems more promising as a direction of travel.
Yeah I don't know how you would break up Microsoft Office or regulate that. There are competitors but it's so pervasive, most companies use it. You'd have to create a public API that other competitors could use, and the HR lady is going to be pissed!
> preceding the 1980s it was generally understood that competition itself is a necessity for effective free markets and that extreme power concentration (as we e.g. see today in the IT sector)
Yet Bell wasn't broken up until 1982, so I'm not sure it was such a turning point. IMHO allowing AT&T's monopoly to exist for that long was much more detrimental to consumers than whatever MS, Apple and other tech companies are doing these days.
But yeah, I certainly agree overall that competition has generally been the driving force behind most of human progress and economic growth, at least over the last few hundred years. It's just not entirely clear what measures governments should use to maximize the competitiveness of markets without introducing inefficiencies and costs that slow down economic growth and technological progress (while not providing that many benefits to consumers either).
I fully believe we lost more than we gained from the breakup of AT&T - local access prices went up significantly, and while long distance rates declined, they did so roughly linearly with the decreasing cost of bandwidth.
In the end we pay about as much as we ever have in aggregate - but with the loss of all the benefits of the AT&T monopoly (subsidized general science research from the labs, a plethora of union jobs) and an overall loss of US manufacturing capacity.
My belief, having worked in the sector, is that anything that looks like a utility is better off as a tightly regulated monopoly than open to the winds of competition.
I was a teenager at the time, and what I remember, above and beyond pricing alone, was just how firm a grip Ma Bell had on our entire civil communications infrastructure. The Carterfone decision was still a relatively-recent thing with radical implications -- you mean I can plug stuff besides phones into the wall socket?! -- and it was definitely time for things to open up further.
Intra-LATA calling between neighboring towns got a bit more expensive for a while, yes, but long distance almost immediately became much cheaper. It was like the move from film photography to digital -- suddenly everybody was taking photos freely, because the marginal cost was almost gone.
Post-breakup long distance calling became something people weren't inherently reluctant to use, and that was a big deal. Especially with the concurrent rise of BBSes. There's no way I'd ever agree that we were better off with the status quo.
What you were probably thinking of is that 0% of the irrational numbers between 0 and 1 can be described by language as single entities. Or phrased differently: If you had a magic machine that could pick a random real number between 0 and 1, with 100% probability you would get a number that no finite phrase / definition / program / book could define. That is because everything we can abstractly define is part of a countable set, and the set of irrational numbers (and real numbers) is uncountable.
For that reason, quite a few mathematicians view the real numbers as a useful, but ultimately absurd set. Much more sane is the set of computable numbers, that is the set of numbers for which you can find an algorithm that computes the number to arbitrary precision. (More formally: A number x is computable if there exists a Turing machine that gets as input a natural number n, terminates on all inputs, and outputs a rational number y such that |x-y| < 10^-n.) Every number you ever thought of is computable, but as a mathematician, working with the set of computable numbers is much more tedious than working with real numbers.
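For a concrete feel of that definition, here is a minimal sketch (my own illustration, not anything standard) of one computable number, sqrt(2): an algorithm that, given n, returns a rational within 10^-n of it.

    from fractions import Fraction
    from math import isqrt

    def sqrt2_approx(n: int) -> Fraction:
        """Rational y with |sqrt(2) - y| < 10**-n, i.e. sqrt(2) presented as a
        computable number: scale up, take the integer square root, scale down."""
        k = 10 ** n
        return Fraction(isqrt(2 * k * k), k)  # floor(sqrt(2) * 10^n) / 10^n

    print(sqrt2_approx(5))          # 141421/100000
    print(float(sqrt2_approx(30)))  # 1.4142135623730951 (display limited by float)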
What about numbers computable with random draws? Doesn't that create the chance of hitting something totally irrational among the reals? Or how are computable numbers defined to avoid this?
> Much more sane is the set of computable numbers, that is the set of numbers for which you can find an algorithm that computes the number to arbitrary precision.
But perhaps still not as sane as one may hope. It would be very sane to be able to compute, for any two numbers, which one is larger (or whether they're equal), but sadly this is not computable for the computable numbers.
> Every number you ever thought of is computable, but as a mathematician, working with the set of computable numbers is much more tedious than working with real numbers.
I mean, I've thought of noncomputable reals like Chaitin constants.
> It would be very sane to be able to compute, for any two numbers, which one is larger (or whether they're equal), but sadly this is not computable for the computable numbers.
I'd like to understand - Can you explain this? It seems like it would be easy to have a Turing machine that uses the other two Turing machines, adding one digit at a time until it finds a difference.
> I mean, I've thought of noncomputable reals like Chaitin constants.
Heh, but how many digits can you actually provide? Not too many. So have you really thought of the number in any meaningful sense when you barely know any of its digits?
Also interesting that computer languages themselves are countable, so while it's hard to specify the digits algorithmically for the Chaitin constant of any computer language, you already know that the set of ALL Chaitin constants is countable.
>I'd like to understand - Can you explain this? It seems like it would be easy to have a Turing machine that uses the other two Turing machines, adding one digit at a time until it finds a difference.
I assume it runs into problems when you try to check if 2 > 2.
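Right – comparing successive approximations only lets you decide once the uncertainty intervals separate, and that never happens when the two numbers are equal. A rough sketch, assuming each number is given as a function from n to a rational within 10^-n of it (my framing of the definition upthread, nothing official):

    from fractions import Fraction

    def compare(a, b):
        """Try to decide whether a < b or a > b, where a(n) and b(n) return rational
        approximations within 10**-n of the true values. Terminates iff a != b."""
        n = 1
        while True:
            eps = Fraction(1, 10 ** n)
            an, bn = a(n), b(n)
            # The true values lie in [an - eps, an + eps] and [bn - eps, bn + eps];
            # we can only decide once these intervals no longer overlap.
            if an + eps < bn - eps:
                return "a < b"
            if bn + eps < an - eps:
                return "a > b"
            n += 1  # for equal numbers the intervals overlap forever, so this never halts

    two_a = lambda n: Fraction(2)                            # one program computing the number 2
    two_b = lambda n: Fraction(2) - Fraction(1, 2 * 10**n)   # another one: 1.95, 1.995, 1.9995, ...
    print(compare(lambda n: Fraction(3, 2), two_a))          # decides "a < b" after a few steps
    # compare(two_a, two_b) would loop forever, because both compute the same number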
> Heh, but how many digits can you actually provide? Not too many. So have you really thought of the number in any meaningful sense when you barely know any of its digits?
I’ve never thought through very many digits of pi either. Or even 1/3 for that matter!
Given that the Artemis program is motivated by space settlement, I'm surprised nobody has referenced "A City On Mars" by Kelly and Zach Weinersmith (of https://www.smbc-comics.com/ acclaim) yet. I went into the book with lots of excitement for extraterrestrial colonies, and finished it convinced that it's better to wait.
They argue that if you actually look into the details, especially into the "dry" political, legal and social ones, trying to settle mars or the moon likely actually increases our risk of existential crises (at the current point in time at least). Think conflicts between nuclear powers over the (surprisingly few) good spots on the moon, or rocks (=asteroids) flung to earth by space settlers (there is a lot of deadly potential energy floating above all our heads).
Furthermore, there are loads of open space biology questions that quickly become ethical questions when permanent settlements are considered. Can you have babies in low/micro gravity? How can you do it without too much harm to your child? The responsible approach is to do a few more decades of targeted research first.
Regardless of the downers it delivers, it's actually a fun read and I can recommend it wholeheartedly.
That's a very engineering way to approach the problem. The issue it runs into is that the question "should we go to mars" isn't a settled matter that leads into the question of "how do we go to mars". The first question is as flexible as the second.
Getting to mars means that the question "can you have babies on mars" now becomes highly emotionally charged, which means the answer to "should you have babies on mars" becomes obvious. Without any pressure, the former question will always be answered by asking the latter.
It's probably pretty rare nowadays, since it's off by default and rather hidden in the settings dialog ("Search for text when you start typing"). I had it activated up until (quite) a few years ago, and I think I switched it off because of bad JavaScript interactions.
I've tried it with a question which requires deeper expertise – "What is a good technique for device authentication in the context of IoT?" – and the Search mode is also worse than the Chat mode:
The search was heavily diluted by authentication methods that don't make any sense for machine-to-machine authentication, like multi-factor or biometric authentication, as well as by the advice to combine several methods. It also falls into the, admittedly common, trap of assuming that certificate-based authentication is more difficult to implement than symmetric-key (i.e. pre-shared key) authentication.
The chat answer is not perfect, but the signal-to-noise ratio is much better. The multi-factor authentication advice is again present, but it's the only major error, and the answer also adds relevant side-topics that point in the right direction (secure credential storage, secure boot, logging of auth attempts). The Python example is cute but completely useless, though: Python on embedded devices is rare; in any case you wouldn't want a raw TLS socket but would use TLS inside an MQTTS / HTTPS / CoAP+DTLS stack; and, last but not least, it provides a server instead of a client, even though IoT devices mostly communicate outbound.
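For contrast, here is a rough sketch of what a more device-shaped example might look like: an outbound MQTT-over-TLS connection with certificate-based (mutual TLS) authentication using paho-mqtt. The broker host, topic and file paths are placeholders, and error handling is omitted.

    # Sketch: an IoT device authenticating outbound to an MQTT broker over TLS
    # with a client certificate (mutual TLS), instead of opening a raw TLS server
    # socket. Requires paho-mqtt >= 2.0; broker, topic and paths are placeholders.
    import paho.mqtt.client as mqtt

    BROKER = "mqtt.example.com"  # hypothetical broker endpoint
    PORT = 8883                  # standard MQTT-over-TLS port

    client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2, client_id="device-0001")
    client.tls_set(
        ca_certs="ca.pem",           # CA that signed the broker's server certificate
        certfile="device-cert.pem",  # this device's certificate (its identity)
        keyfile="device-key.pem",    # private key, ideally held in a secure element
    )

    client.connect(BROKER, PORT)   # the device dials out; no inbound listener needed
    client.loop_start()            # handle network traffic in a background thread
    info = client.publish("devices/device-0001/telemetry", '{"temp": 21.5}', qos=1)
    info.wait_for_publish()        # block until the broker acknowledged the message
    client.disconnect()
    client.loop_stop()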