The company raised nearly $2B in investment, was acquired for half that, and the founders somehow still made a profit. No wonder he's a fan. What other system would allocate resources so effectively?
Serious question, but how exactly does Atlassian hope to recoup that billion dollars?
Loom is useful, but given that Atlassian's investors are expecting a large return, do they really expect a screen recorder to eventually add $2B+ to their value? The AI features will eventually be commoditized, so those can't act as a moat.
It will integrate with their other products and add value through lock-in, same as all of their other acquisitions. Atlassian is all about building an ecosystem so valuable you don't even want to escape.
It's not just the screen recorder: it's also the large user base, the corporate connections, and the IP. If the stock goes up a few percent, that alone covers the purchase.
I think most AI research to date is a dead end. Assuming that intelligence is a problem solvable by computers implies that intelligence is a computable function. Nobody has yet been able to give a formal mathematical definition of intelligence, let alone a proof that it can be reduced to a computable function.
So why assume that computer science is the key to solving a problem that cannot even be defined in terms of math? We had formal definitions of computers decades before they became a reality, but somehow cannot make progress in formally defining intelligence.
I do think artificial intelligence can be achieved, but by making it a multidisciplinary endeavor with biological engineering at its core, not computer science. See the work of Michael Levin for real intelligence in action: https://www.youtube.com/watch?v=Ed3ioGO7g10
> Nobody up to this day has been able to give a formal mathematical definition of intelligence, let alone a proof that it can be reduced to a computable function.
We can't prove the correctness of most of physics either. Should we call that a dead end too?
If you believe in functionalism (~mental states are identified by what they do rather than by what they are made of), then current AI is not a dead end.
We wouldn't need to define intelligence: building something big and efficient enough to replicate what currently exists would count as intelligence by that definition.
My point is that if you use biological cells to drive the system, which already exhibit intelligent behaviors, you don't have to worry about any of these questions. The basic unit you are using is already intelligent, so it's a given that the full system will be intelligent. And not an approximation but the real thing.
Thanks for pointing me to this. This is a proposed definition of intelligence. Is it the same as the real thing, though? Even assuming that it were:
> Like Solomonoff induction, AIXI is incomputable.
That would mean that computers can, at best, produce an approximation. We know the real thing exists in nature though, so why not take advantage of those competencies?
I interviewed there once and they asked me what I would do if a service broke after a deployment. I said the first step was to revert to the last known good version and then investigate. Color me surprised when that was not the answer they expected.
Cloudflare's internal release tool suggests a revert when monitoring detects failures during deployment, so the expected answer doesn't reflect Cloudflare's actual practices. There must have been something more to it, or it was a misunderstanding.
If I ever interview at Cloudflare and get this question I might answer with "call the sales team and have them fix it by selling someone an enterprise subscription paid upfront by the decade" just to see if the interviewers read Hacker News :P
Depending on the service’s criticality, the cost of rolling back versus pushing a fix, service dependencies in their environment… pushing a fix forward might have been the better approach.
Without more details about the environment, it is a 50/50 call.
The mongols sacked Baghdad, killed everyone, and destroyed all institutions of higher learning and the people working in them. Then the age of exploration permanently shifted the center of world power away from the Middle East and the Silk Road. Then European colonial powers subjugated those territories. Afterward, Saudi Arabia started exporting its very "particular" (to put it lightly) branch of Islam. Sprinkle some US intervention and you get to the present day.
Not much of a turning away, but a systematic and violent dismantling of institutions of learning (and a supporting culture) that took centuries to build. No surprise that those conditions didn't happen again in the same place. In most places, such a golden age never happens.
Yes, but not the kind of engineering mentioned in this article. Recently I went on a bit of a YouTube-fueled permaculture/afforestation binge.
Reforesting without planting trees, by pruning shrubs (which are really trees with too many branches to grow tall) so they can grow into trees: https://www.youtube.com/watch?v=RBP2uRQk5pQ
The solutions are there and rely on accelerating positive natural processes. But they don't funnel public money into private pockets, so let's fund harebrained schemes instead.
The first time I saw the rock dam one, it blew my mind. Dude just went nuts and casually built hundreds or thousands (can’t recall and haven’t rewatched yet) of these mini dams over a lifetime and completely terraformed his valley. It’s really inspiring because there is something genuine there that is simple, cost-effective, and scalable (to some not insignificant degree).
It’s crazy because some outstanding solutions like this are right here in front of us but we’re not really doing anything to aid or incentivize them.
Only 12.5% of the population scores at Level 4 or 5 (my guess is they had to group them together because so few reached Level 5). This is a disgrace. There's no reason why every adult shouldn't be able to read proficiently. We are talking about reading, not some obscure skill.
I wonder what these figures would be in Cuba. From what I remember reading, they were much higher because of widespread literacy campaigns.
> Only 12.5% of the population scores in the levels 4 or 5 (they had to group them together because there were so few is my guess)
The reason they did that is called out in endnote 3: "This analysis combines the top two proficiency levels (Levels 4 and 5), following the OECD’s reporting convention (OECD 2013), because across all participating countries, no more than 2 percent of adults reached Level 5."
Most of my experience is in customer-facing roles, and I would argue that reading a customer email chain where they try to describe what is happening, sometimes with pictures, requires level 4 understanding.
You often end up with multiple documents (several emails, pictures, logs). There's often competing information (customers are speculating about what's wrong, but they likely include lots of other information because they don't actually know). And you definitely need background knowledge about the product.
Add in translating that into a bug report for the engineering team? A successful high-level customer support agent needs level 5 reading ability.
But my experience asking questions of my teammates in the company Slack channels tells me very few of them are actually even at level 3.
It bothered me so much that nothing like this was already available, when the idea seemed so simple, that I ended up building it myself. The user experience is not very polished yet, but I managed to turn the courses at https://improviseforreal.com/learning-materials into courses for this software that can teach you ear training and how to improvise on any instrument in all keys, all modes, and most common chord progressions. Currently, I am doing it for piano, but I've only reached about 15% of the total jam tracks so far. Obviously, I am not distributing the tracks myself.
I am pretty sure the main issue is that no one is funding the implementation of these ideas. We've known about mastery learning, spaced repetition, interleaving, etc. for decades, but it's never all been put together into a coherent system. Something like https://mathacademy.com/ is similar, but it's not open source and cannot be used to create your own materials. No need for LLMs or anything fancier when there's so much low-hanging fruit that hasn't been implemented yet. The core of my software is just a depth-first search over a graph, lol.
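To make the "just a depth-first search over a graph" remark concrete, here is a minimal sketch of how a mastery-learning scheduler could pick the next skill to practice. All names here (`PREREQS`, `next_skill`, the example skills) are invented for illustration, not taken from the actual software:

```python
# Hypothetical prerequisite graph for a music-learning course:
# each skill maps to the skills that should be mastered first.
PREREQS = {
    "improvise_in_C": ["hear_scale_degrees", "play_C_major_scale"],
    "hear_scale_degrees": ["sing_do_re_mi"],
    "play_C_major_scale": [],
    "sing_do_re_mi": [],
}

def next_skill(goal, mastered, prereqs):
    """Depth-first search from `goal`: return the first unmastered
    skill whose own prerequisites are all already mastered."""
    for dep in prereqs.get(goal, []):
        if dep not in mastered:
            found = next_skill(dep, mastered, prereqs)
            if found is not None:
                return found
    return goal if goal not in mastered else None

# With only "sing_do_re_mi" mastered, the search surfaces the next
# unlocked prerequisite on the path to the goal.
print(next_skill("improvise_in_C", {"sing_do_re_mi"}, PREREQS))
```

The recursion bottoms out at skills with no unmastered prerequisites, which is what makes mastery learning enforceable with such a small amount of code: the scheduler can never propose a lesson the student isn't ready for.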
I think they meant incest. If we already did our share of it in the distant past, then fun with your cousin leads to a baby with a pig's tail. Or at least it did for the Buendía family: https://en.wikipedia.org/wiki/One_Hundred_Years_of_Solitude
The whole idea that the GPL propagates to unrelated works that happen to use GPL-licensed software is a misunderstanding. One that the FSF is happy to propagate, but not one that would hold in court.
The concept of a derived work in copyright law has nothing to do with how binaries are linked together, nor is an entire work derived from a GPL library just because it happens to call it at one point. Lawyers look at this very differently.
I don’t see definitive statements in that article. It’s a lawyer stating opinion, using qualifiers like “in most cases” and “I would argue”. Most concerning is:
> This is a complex topic that courts and lawyers disagree on
I would argue, in most cases, the benefits aren’t worth the risk, nor the legal fees spent to ascertain and manage that risk.