I've often felt that we (humanity) have pushed the boundaries of our knowledge right up against some really hard lines to cross: relativity won't allow interstellar travel anytime soon, we are making progress in "soft AI" but "hard AI" is still insurmountable, and even fusion seems as far away as it's always been. Further "real" progress sometimes seems nearly impossible, or very arduous and slow at best.
Then this guy shows up, cuts off a frog's leg, shines some weird lights on the wound... and the frog regenerates it! (Frogs do not normally regenerate legs.) This has been really astonishing to me. I hope this whole research area lives up to my excitement after listening to that talk, because really awesome things may come out of it.
I highly recommend this talk to literally anybody who has scientific curiosity in general!
It does feel like this research crosses a big line in science. Especially with the recent articles about how gene editing can cause weird defects that we don't understand, this shows we don't need to tinker at the lower levels when life already has the tools for higher level control.
Really cool talk / highly recommend! Some things you might learn / amazing facts:
- Children 7-11 can regrow fingertips
- Planarians (a kind of flatworm) are immortal
- If you teach a planarian to find "home", chop off its head and let it regrow its brain, it will still remember where home is
Note also that this was presented at NeurIPS so the talk also includes some discussion of non-neural information systems this kind of thing could inspire.
We are currently good at manipulating molecules and cells (the machine-code/hardware level), but if we understood the algorithms (the software level) which control large-scale form and function, it would bring radical change.
The hardware/software distinction in biology is a very interesting take, and we all know what device independence can do.
Another excellent talk by the creator of Clojure, and like the previous ones, relevant for all programmers.
Rich is a good speaker but I don’t find the line he toes to be agreeable.
He spent many minutes motivating it. If you provide some functionality to a client (as a library, or a service, or more abstractly), and then later want to relax the constraints that you require from those clients, this should be an acceptable change that they shouldn't have to worry about / be broken by. Like if all of a sudden some REST API no longer requires a specific HTTP header to be present in requests, then this change shouldn't break clients that have been including that formerly-required header all along.
Similarly, if you provide something to clients, and you add functionality to provide a response in all circumstances, not just some, then this should not break clients either.
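To make the two directions concrete, here's a minimal Haskell sketch (hypothetical names). Contractually, each v2 requires less or provides more than its v1, yet neither change typechecks against old call sites:

    -- Requiring less: v1 demanded the argument, v2 relaxes the requirement.
    greetV1 :: String -> String
    greetV1 name = "hello " ++ name

    -- Existing callers pass a plain String; against v2 they must now
    -- write Just, so the "relaxation" still breaks them at the type level.
    greetV2 :: Maybe String -> String
    greetV2 (Just name) = "hello " ++ name
    greetV2 Nothing     = "hello stranger"

    -- Providing more: v1 could fail to answer, v2 always answers.
    -- Callers that pattern-matched on the Maybe break the same way.
    lookupV1 :: Int -> Maybe String
    lookupV1 _ = Nothing

    lookupV2 :: Int -> String
    lookupV2 _ = "default"

    main :: IO ()
    main = do
      putStrLn (greetV2 (Just "world")) -- the call site had to change
      putStrLn (lookupV2 0)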
This clearly is not true of `Maybe` / `Option[T]` and I think it's a pretty fair critique. Maybe we should be using Union types in these situations instead.
Union Types don't compose straightforwardly because they flatten; maybe this is desirable in some cases but the rationale in Hickey's talk leaves me unconvinced. The only practical use case for union types I'm aware of is smoother interfacing to dynamically typed APIs; any nice examples for something beyond that?
That would be a breaking change. And should be, if you're into that sort of thing.
The objection is to the opposite case: What was a T is now an Option[T]. I don't know Scala specifically, but that's a breaking change in every typechecked language I know. Rich is arguing that it shouldn't be. But it could be possible even in typed languages through union types. For example, you can do this in TypeScript by changing T to T | undefined, which is a superset of T.
I think the point remains that, while this is not a breaking change from a contractual viewpoint, most type systems would deem it incompatible.
I've learned that videos of his talks are just not worth my time.
His point is that Maybe a should be composed of all the values of a, plus one more value, nil. A value of type a is a member of the set (nil + a). Why should having a more specific value reduce the things you can do with it? It breaks composition, fundamentally. It's like saying (+) works on integers, but not on 3. I'm saying this as someone who really enjoys type systems, including Haskell.
No, that's a simple union type. There are very good reasons for Maybe to be different than unions (Maybe can nest meaningfully, simple unions can't.)
Maybe (Maybe a) is a valid, and often useful, type.
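For instance, a minimal sketch (hypothetical names) where the nesting distinguishes "key absent" from "key present but carrying no value", a distinction a flattening union (a | nil | nil = a | nil) cannot make:

    import qualified Data.Map as Map

    -- A store where a key may exist yet explicitly hold no value.
    store :: Map.Map String (Maybe Int)
    store = Map.fromList [("a", Just 1), ("b", Nothing)]

    -- The outer Maybe says whether the key exists; the inner one
    -- says whether it carries a value.
    probe :: String -> Maybe (Maybe Int)
    probe k = Map.lookup k store

    main :: IO ()
    main = do
      print (probe "a") -- Just (Just 1)
      print (probe "b") -- Just Nothing: key exists, no value
      print (probe "c") -- Nothing: key absent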
Of course, if you have a function of type a -> b and find out you need a more general Maybe a -> b, instead of a breaking change, you just write a wrapper function that produces the correct result for Nothing and delegates to the existing function for Some(a) and you're done without breaking existing clients.
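A minimal Haskell sketch of that wrapper (hypothetical names); the Prelude's `maybe` eliminator does exactly this, given a result for the Nothing case:

    -- Existing function, unchanged; existing clients keep working.
    original :: Int -> String
    original n = "got " ++ show n

    -- New, more general entry point: handles Nothing itself and
    -- delegates to the original for Just values.
    generalized :: Maybe Int -> String
    generalized = maybe "got nothing" original

    main :: IO ()
    main = do
      putStrLn (generalized (Just 3)) -- "got 3"
      putStrLn (generalized Nothing)  -- "got nothing"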
(Now, I suppose, if you had something like Scala implicits available, having an implicit a -> Maybe a conversion might sometimes be useful, though it does make code less clear.)
Fundamentally because it would require you to conjure up a value of type b from nowhere when the Maybe a is Nothing. If we view the function type as implication this would not be valid logically without some way of introducing that value of type b.
You could imagine some function Nothing -> b that could rescue us. But since it only handles one case of the Maybe type, it is partial (meaning it could give undefined as an answer). There are basically two total functions that we could change it to:
* Maybe a -> b in which case we are back where we started.
* Unit -> b which essentially is just b, which can be summed up as meaning we need some kind of default value to be available at all times.
Now this is only "illogical" because we don't postulate a value of type b to be used as this default.
> It's like saying (+) works on integers, but not on 3
No, it's like saying (+) must work on all integers AND a special value nil that is not like any other integer, but is somehow included in them and all other data types. We can't do anything with this nil value since it doesn't carry any data, so in the case of (+) we would essentially have to treat it as an identity element.
This is fine for (+), since it has 0 as an identity element, so we can just treat nil as 0 when we encounter (+). However, when we want to define multiplication we still need to treat nil as an identity element (since it still doesn't carry any data), except the identity element for multiplication is 1. This would be repeated for every new function that deals with integers.
So by mashing together Maybe and Integer we have managed to get a Frankenstein data type with an extra element nil which sometimes means 0 and sometimes means 1.
Why not just decompose them into Maybe and Integer and supply the default argument with a simple conversion function like fromMaybe?
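Something like this minimal sketch, where each use site picks its own default instead of one baked-in nil meaning 0 here and 1 there:

    import Data.Maybe (fromMaybe)

    -- The caller picks the identity appropriate to the operation.
    addM :: Maybe Integer -> Integer -> Integer
    addM mx y = fromMaybe 0 mx + y -- identity for (+) is 0

    mulM :: Maybe Integer -> Integer -> Integer
    mulM mx y = fromMaybe 1 mx * y -- identity for (*) is 1

    main :: IO ()
    main = do
      print (addM Nothing 5) -- 5
      print (mulM Nothing 5) -- 5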
(FWIW, I actually agree with Hickey that using Maybe in api design is problematic and I've encountered what he's talking about. But while that might be an argument for where he wants to take Clojure, it's not an argument for dismissing type theory the way he does.)
I guess I overlooked it because the other way is so logically trivial: a caller holding an A can always call an (A || Nothing) -> B, since getting from A to (A || Nothing) is just an or-introduction. So if you wanna implement Maybe generically the "work" lies on the other side.
But since Hickey's argument sort of is that we shouldn't implement Maybe generically, I guess my argument here becomes circular. (Begging the question maybe?)
Yeah, that's (part of) Hickey's point. That the "best" type systems fail this test, and require manual programmer work to solve this problem. Again, I'm saying this as someone who really appreciates Haskell.
In this regard I really like OCaml:
    let get_something = function
      | Some x -> x
      | None -> raise (Invalid_argument "Option.get")
Maybe Clojure's standard library just isn't that focused on vectors? Python's stdlib doesn't support this well either, but NumPy does.
Further, his exposition on `Either a b` was built on a lack of understanding of bifunctors.
The icing on the cake was his description of his own planned type theory. What he described was, as far as I could decipher from his completely ignorant ravings, a row-polymorphic type system. However he passes off his insights as novel rather than acknowledging (or leveraging) the decades of research that have gone into type theory to describe the very system he is trying to build.
Worse, he continued to implore his audience to spread FUD about type theory, claiming several times, "it is wrong!"
Here's my response to his talk, which I found insightful:
Also, I think your comment suffers from the problem here, where you invoke "type theory" without any elaboration:
Rich has taken the time to explain his thoughts very carefully, and he's clear about what his experience is and which domains he is talking about. Whereas I see a lot of vague objections that lack specifics and aren't backed up by experience.
From his talk he hasn't convinced me that he understands Haskell's type system. Not only does he misunderstand the contract given by `Maybe a`, but he conflates `Either a b` with logical disjunction, which is definitely a false premise. He builds the rest of his specious talk on these ideas and declares, "it [type theory] is wrong!"
He goes on to describe, in a haphazard and ignorant way, half of a type theory. As I understood it, these additions to "spec" are basically a row-polymorphic type system. Why does he refuse to acknowledge or leverage the decades of research on these type systems? Is he a language designer or a hack?
I can't even tell to be honest. He has some good ideas but I think this was one of his worst talks.
To apply your own standards, have you been a prolific Clojure contributor/hacker/user? Have you contributed to any major open source Clojure projects?
Can you actually say what he got wrong about Maybe/Either? Because it seems like he understands them perfectly well, speaking as a Scala developer.
I was trying to address the points he made but parent appealed to his authority which I haven't found convincing.
> To apply your own standards, have you been a prolific Clojure contributor/hacker/user? Have you contributed to any major open source Clojure projects?
I have been a user on a commercial project. It's a fine enough language. I wouldn't call myself an expert. And I haven't given a keynote address where I call out Clojure for getting things I don't understand completely wrong.
> Can you actually say what he got wrong about Maybe/Either? Because it seems like he understands them perfectly well, speaking as a Scala developer.
His point about Maybe was misguided at best. The function `a -> Maybe a` is a different function than `a -> a`. Despite his intuition that the latter provides a stronger guarantee and so shouldn't break code, callers may be relying on the Functor instance that the `Maybe` provides, and therefore the change is a breaking one.
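A minimal sketch of such a caller (hypothetical names):

    -- v1 of a library function: the lookup may fail.
    findUser :: Int -> Maybe String
    findUser 1 = Just "alice"
    findUser _ = Nothing

    -- A caller leaning on the Functor instance of the result.
    greeting :: Int -> Maybe String
    greeting uid = fmap ("hello " ++) (findUser uid)

    -- If v2 "strengthens" findUser to Int -> String, the fmap above
    -- stops typechecking (a String is not the Maybe it expected),
    -- even though v2 provides strictly more in Hickey's sense.
    main :: IO ()
    main = print (greeting 1) -- Just "hello alice"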
His point about `Either a b` was perhaps further from the mark. It is not a data type that represents logical disjunction. That's what the logical connective, disjunction, is for. Haskell doesn't have union types to my knowledge. Either is not a connective. It's a Bifunctor. His point that it's not "associative" or "commutative" or what-have-you simply doesn't make sense. In fact he calls Either "malarkey" or, charitably, a "misnomer."
To his credit he says he's not bashing type systems, only Maybe and Either. But he later contradicts himself in his conclusions. He goes on about `reverse` and how "the categoric definition of that is almost information free." And then, "you almost always want your return types to be dependent on their argument..." So basically, dependent types? But again, "you wouldn't need some icky category language to talk about that."
So again, I think he has some surface knowledge of type systems but I don't think he understands type systems. I'm only a beginner at this stuff and his errors were difficult to digest. I think if he wanted to put down Maybe and Either he should've come with a loaded weapon.
He's had better talks to be sure! I just don't think this one was very good. And in my estimation was one of the poorer talks this year.
I don't agree with Hickey, but isn't there a connection between Either as a basic sum type and logical disjunction via the Curry-Howard correspondence?
And wouldn't "forall a b. Either a b" be the bifunctor, since it has a product type/product category as its domain, while the result "Either X Y" (where X and Y are concrete types, not type variables) has the semantics of logical disjunction, i.e. it represents a type that is of type X or type Y?
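As a rough sketch of that correspondence: or-elimination is just case analysis on Either, which is exactly the Prelude's `either`:

    import Data.Char (toUpper)

    -- Curry-Howard reading: a value of Either a b is a proof of "a or b".
    -- Or-elimination: from proofs of (a -> c) and (b -> c), and a proof
    -- of (a or b), conclude c.
    orElim :: (a -> c) -> (b -> c) -> Either a b -> c
    orElim f _ (Left  x) = f x
    orElim _ g (Right y) = g y

    main :: IO ()
    main = do
      print (orElim show (map toUpper) (Left (42 :: Int)))                -- "42"
      print (orElim show (map toUpper) (Right "hi" :: Either Int String)) -- "HI"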
That’s what makes his talk strange. He talks about types as sets and seems to expect Either to correspond to set union? If he understood type theory then he’d understand that we use isomorphism and not equality.
You can express type equality in set theory and that is useful and makes sense.
But it doesn’t make sense in his argument.
Malarkey? Come on. Doesn't have associativity? Weird.
I don't really follow that. How can it be a breaking change? Can you give an example?
Where your downstream parsers match on `Nothing` and assume the stream hasn't been consumed in order to try an alternative parser or provide a default.
If you change an equation to use `option` instead you have a completely different parser with different semantics.
I was thinking of a case where I use your function in a combinator that depends on the Functor and Monoid instances provided by the `Maybe` type. If you change your function to return only the `a` and it doesn't provide those instances then you've changed the contract and have broken my code. And I suspect it should be easy to prove the equations are not equivalent.
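A minimal sketch of the kind of combinator I mean (hypothetical names):

    -- yourFunction returns Maybe, so its results pick up Maybe's
    -- Functor and Semigroup/Monoid instances.
    yourFunction :: Int -> Maybe [Int]
    yourFunction n = if n > 0 then Just [n] else Nothing

    -- This combinator uses <> from Maybe's Semigroup instance and
    -- fmap from its Functor instance.
    myCombinator :: Int -> Int -> Maybe [Int]
    myCombinator x y = fmap reverse (yourFunction x <> yourFunction y)

    -- If yourFunction is changed to return plain [Int], the fmap no
    -- longer typechecks, and the <> silently becomes list append,
    -- erasing the Nothing / Just [] distinction.
    main :: IO ()
    main = print (myCombinator 1 2) -- Just [2,1]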
Perhaps, maybe, public APIs like protos shouldn't encode requiredness of any piece of data (I actually fully agree with this).
But that says nothing about whether or not my private/internal method to act on a proto should care.
Another way of putting this is that requiredness shouldn't be defined across clients (as in a proto), but defining it within a context makes a lot of sense, and maybe/optional constructs can do that.
Or, in other words, other people may use your schema in interesting and unexpected ways. Don't be overly controlling. Your code, on the other hand, is yours, and controlling it makes more sense. So the arguments that rely on proto-stuff don't really convince me.
(I work at Google with protos that still have required members).
That's pretty much my point. Hickey is very clear what domains he's talking about, which are similar to the domains that protos are used in -- long-lived, "open world" information systems with many pieces of code operating on the same data.
People saying "Rich Hickey doesn't understand type systems" are missing the point. He's making a specific argument backed up by experience, and they are making a vague one. I don't want to mischaracterize anyone, but often I see nothing more than "always use the strictest types possible because it catches bugs", which is naive.
I agree with your statement about private/internal methods. I would also say that is the "easy" problem. You can change those types whenever you want, so you don't really have to worry about modelling it incorrectly. What Hickey is talking about is situations where you're interfacing with systems you don't control, and you can't upgrade everything at once.
Here's my understanding of one of your points: required fields in a data serialization format place an onerous burden on consumers. So in proto3, every field is optional, and this permits each consumer to define what's required for its own context.
Unfortunately, I can't find any connection between the dilemma at Google and the suitability of the Maybe type. You say this:
>The issue is that the shape/schema of data is an idea that can be reused across multiple contexts, while optional/required is context-specific. They are two separate things conflated by type systems when you use constructs like Maybe.
I agree - the value of a field might be a hard dependency for one consumer and irrelevant to a second consumer. But Maybe has nothing to do with this. If the next version of protobuf adds a Maybe type, it would not obligate consumers to require fields that they treat as optional. It would just be another way to encode the optionality - not optionality as a dependency, but optionality of existence. A required input could still be encoded as a Maybe because the system can't guarantee its existence. So Maybe is simply an encoding for a value that isn't guaranteed to exist. And that's exactly the scenario you described in proto3 - now every field could be encoded as a Maybe.
A second point that stuck out to me:
>I didn’t understand “the map is not the territory” until I had been programming for awhile. Type systems are a map of runtime behavior. They are useful up to that point. Runtime behavior is the territory; it’s how you make something happen in the world, and it’s what you ultimately care about. A lot of the arguments I see below in this thread seemingly forget that.
Your worldview here is very different from my own, and perhaps while this difference exists, there won't be much mutual understanding. I don't find any relationship between types and anything I understand as "runtime behavior". Types are logical propositions. The relationship between programs and types is that programs are proofs of those propositions. Runtime does not enter into the picture. That's why constraint solvers work without running the program.
I would say that "X doesn't understand type theory" is becoming a common form of "middlebrow dismissal", which is discouraged on HN.
> And that's exactly the scenario you described in proto3 - now every field could be encoded as a Maybe.
No, in protobufs, the presence of fields is checked at runtime, not compile time. So it's closer to a Clojure map (where every field is optional) than a Haskell Maybe.
This is true even though Google is using statically typed languages (C++ and Java). It would be true even if Google were using OCaml or Haskell, because you can't recompile and deploy every binary in your service at once (think dozens or even hundreds of different server binaries/batch jobs, some of which haven't been redeployed in 6-18 months.) This is an extreme case of what Hickey is talking about, but it demonstrates its truth.
> I don't find any relationship between types and anything I understand as "runtime behavior".
Look up the concept of "type erasure", which occurs in many languages, including Haskell as I understand it. Or to be more concrete, compare how C++ does downcasting with how Java does it. Or look at how Java implements generics / parameterized types.
Some refactorings of code are basically just relaxations of requirements or tightenings of return values - and Maybe is littered everywhere / changed everywhere. It just makes the code hard to read and creates a lot of busy work - but to no real value.
This is the same in Java code. Too many Optional<MyActualClass> / Nullable everywhere. Unit tests littered everywhere to deal with Optionals. But no real functionality change or new information for future maintainers. Just extra cruft.
spec seems to pick up where Optional / Maybe left off.
I was left a little unclear on how it'd work in the end. I guess your program would specify what spec/input-output it expects from a library's API, and then the library's git history is traversed until you hit the last "version"/hash that matches the spec you require. Then we could just get rid of version numbers entirely, and I guess you would get a warning when a library made a breaking change and your dependency is stuck on an old version/hash in the git tree.
Am I understanding that right? I get that the new way of development he describes would prevent breaking changes entirely (though it honestly sounds messy with lots of legacy stuff floating around)
Typically, you need to move back and forth a number of times before it clicks. So you have all these people on the net fighting over which end of the spectrum is the Right one, and next month they're on opposite sides.
They're tools: hammers, screwdrivers, whatever; it doesn't make any kind of sense to identify with them.
Unless your goal is to be a tool, I guess.
With that comes the next benefit: less debugging and lower maintenance costs. It has to be balanced against annotation costs. Most projects claimed to come out ahead when the goal was just safety rather than full correctness. You can also generate tests and runtime checks from specs to get those benefits.
So, for high reliability with lower maintenance, there's definitely an advantage to knocking out entire classes of error. That leads to the last advantage I'll mention: the ability to warranty (market) and certify (regulators) your code as free of those defects [if the checker worked]. The checkers are also tiny on some systems. Rarely fail. Inspires extra confidence.
The academic approach doesn't look very constructive to me, never did. Once you have a language nailed down so hard that errors aren't possible, the complexity of dealing with it will be more or less the same as dealing with the errors in the first place.
I would much prefer a focus on more powerful tools that fit into the current "unsafe" ecosystem while offering a more gradual and flexible path to improved safety.
Argument for popularity = superiority. By your same logic, we should've stuck with COBOL for important apps since all the big businesses were using it. I have a feeling you don't write new apps in COBOL.
"the complexity of dealing with it will be more or less the same as dealing with the errors in the first place."
The best empirical comparison of C and Ada showed the opposite. All the studies showed the safer languages had fewer defects, usually with more productivity due to less rework later on. The evidence is against your claim so far.
"I would much prefer a focus on more powerful tools that fit into the current "unsafe" ecosystem while offering a more gradual and flexible path to improved safety."
Me too. Rust and Nim are taking that approach. People are finding both useful in production so far.
* Contracts For Getting More Programs Less Wrong: https://www.thestrangeloop.com/2018/contracts-for-getting-mo...
* "It's Just Matrix Multiplication": Notation for Weaving: https://www.thestrangeloop.com/2018/its-just-matrix-multipli...
* Hackett: a metaprogrammable Haskell: https://www.thestrangeloop.com/2018/hackett-a-metaprogrammab...
* Git from the Ground Up: https://www.thestrangeloop.com/2018/git-from-the-ground-up.h...
* Beyond Unit Tests: Taking Your Testing to the Next Level: https://www.youtube.com/watch?v=MYucYon2-lk
* Dataclasses: The code generator to end all code generators: https://www.youtube.com/watch?v=T-TwcmT6Rcw
* Automating Code Quality: https://www.youtube.com/watch?v=G1lDk_WKXvY
There are probably more I might add later as I remember them.
+1 Very interesting. I will make sure to check the `Hypothesis` module.
Really good discussion of event-collaborative microservice systems that scale well for performance and maintenance, as well as a good argument and use case for server and client code generation.
Definitely my favorite so far.
What was called microservices was, imo, just a higher-level modularization of code to allow easier parallel development and maintenance/scaling. But since each service needs a server, that meant some boilerplate to initialize and keep the app running. A lot of people went about this in different ways, which made things a bit chaotic for those new to the idea, I think. The next logical step would be to remove that server code by running on a serverless architecture, so you can focus on the functions that operate on data rather than having to maintain the initialization code and other boilerplate as well.
Among all of this advancement, an architect thinking the way Michael Bryzek does would have no problem using either approach since the focus is on the data and how data is used.
Still, server-based microservices remain very prominent, and I'd argue they are also much more platform-agnostic if done right, whereas (if I'm not mistaken) much of serverless is tied to specific platforms, leaving you vendor-locked due to different deployment processes for each platform. I think research is being done to make platform-agnostic serverless deployments easier, but I haven't read much on it yet and am unaware of current advancements in that area.
The streams will later be edited and uploaded to https://media.ccc.de/, where they will appear incrementally a few days (or sometimes even hours) after the talks are over. So it's usually not a problem to miss a talk.
I learned about the ECS pattern and got to see some Rust refactoring in action.
Previous HN discussion here: https://news.ycombinator.com/item?id=17977906
That's why I like ECS. I also like the EC part to just be a table I can query, while the S (systems) should be very tight, with dependencies used explicitly. That way you avoid things like dynamic ordering of the update function based on object pools and other nonsense.
Nabil Hassein: Computing, Climate Change, and All Our Relationships
Tom 7: Reverse emulating the NES to give it SUPER POWERS!
Let's Program a Banjo Grammar was also one of my favorites. :D
Edit: I just looked at your username...that you, Ryan? :)
Gary Bernhardt had a creative concept for selecting speakers. Half were invited, internet-famous people like Tom 7 and Julia Evans. Half were people who'd never spoken at a conference before, like Nabil and me. For the call for proposals, we submitted 2-minute videos instead of abstracts. Gary invested a lot of time mentoring the new speakers, which I'm so grateful for!
In September, I did another, kind of similar talk at Strange Loop, called "Picasso, Geometry, Jupyter." https://www.youtube.com/watch?v=GYJ77F_8kq0
The topic and ideas were good, probably better than my Deconstruct talk. But my execution and presentation were uneven and unpolished, so I'd like to try it again at another conference. I'm submitting to PyCon — we'll see!
To build a CAD program for synthetic biology, a chemistry professor writes his own implementation of Common Lisp.
Another favorite one of mine was Jimmy Chin talking about how they shot Free Solo.
To be honest, I thought this was a guy with a wife and children (there was a story earlier about him doing a free solo). Since he's single, I guess his life is his own.
But it's the massive prep they do and the skills they have that make it genius.
I'll watch Jeff Dean talk about pretty much anything, and his keynote "stump speeches" are more or less a greatest hits compilation of recent stuff from Google Brain.
Kevlin Henney - Procedural Programming: It's Back? It Never Went Away
If you're interested in a deep analysis of software development behaviours, through patterns at either the code or organisational level, his talks are gold.
I particularly enjoyed his take on management analysis, it resonates with a lot of "failure" situations that I had experienced in my previous job:
Kevlin Henney - Agility != Speed https://youtu.be/kmFcNyZrUNM
Edit: nm, found this from the US National Bureau of Economic Research: https://www.nber.org/chapters/c6876.pdf
Comparing the Bretton Woods system to the Eurozone is not an apples-to-apples comparison.
Here is an interview that explains his position a bit better than my paraphrasing: https://amp.theguardian.com/books/2017/mar/07/yanis-varoufak...
He may be good at writing books or giving lectures but he was a complete disaster as minister of finance.
He’s very articulate, and not only does he make a lot of sense, I’ve never seen him caught in a lie.
And I have no idea what makes you think it’s pub talk, I’m from the land of pubs (Ireland) and I’ve never heard anyone talk like he does in one.
“Can’t discern from overengineered public talk” is plausible, but it would be very unusual to phrase it that way.
It’s not really clear that he meant "public."
> he seemed to have human values and of course knowledge about the topic
It was really nice to see that the current DevOps/SRE best practices are reflections of engineering/operations best practices in other fields.
I don't really know anything about biology, but this piqued my interest. One of the most interesting talks of 2018.
> Titled “A New Golden Age for Computer Architecture: Domain-Specific Hardware/Software Co-Design, Enhanced Security, Open Instruction Sets, and Agile Chip Development,” the talk covers recent developments and future directions in computer architecture. Hennessy and Patterson were recognized with the Turing Award for “pioneering a systematic, quantitative approach to the design and evaluation of computer architectures with enduring impact on the microprocessor industry.”
Also not 2018, but his elevator talk with Howard Payne:
https://youtu.be/oHf1vD5_b5I (1h version)
https://youtu.be/ZUvGfuLlZus (2h version)
And the search for the perfect door:
Emmy-winning TV comedy writer becomes award-winning AI conversation designer, frets about the dismissive way humans interact with computers and the corrosive social effects of badly-designed bots. Explains why conversation interfaces need to stick up for themselves and push back on bad behavior, for the good of human discourse.
Joe Armstrong - Keynote: The Forgotten Ideas in Computer Science - Code BEAM SF 2018
It's an approach to living your life in a way that lets you be effective at accomplishing whatever you're after. Starting with something similar to Maslow's hierarchy (in that you must establish each level before going to the next one), you eventually find a way of not only living better and more effectively yourself, but also of expanding that to your team and those around you. I didn't expect it to be good, but I liken it to Carnegie's "How to Win Friends and Influence People": it gets to the core of the problem in the best way possible.
"Building 100 year systems in the shadow of PDF".
I saw another version at Scala Exchange, but that one requires registration and it seems to be almost the same talk.
It's a very impressive summary of her latest book ("The Art of Logic") about the importance of abstract logic in everyday life and how abstraction can help us understand each other (together with emotions).
"Failing to Fill: The Spiderweb Software Way" https://youtu.be/stxVBJem3Rs
They rock at presenting great content in a great way.
Monica Dinculescu (https://github.com/notwaldorf)
PWA starter kit: build fast, scalable, modern apps with Web Components (Google I/O '18)
Kelsey Hightower (https://github.com/kelseyhightower/)
Keynote: Kubernetes and the Path to Serverless - Kelsey Hightower, Staff Developer Advocate, Google
It was about different generations in the workplace, what each grouping typically liked, disliked, was motivated by, and how they communicated differently.
I really liked it because of the content, but also because it was given by a small panel that I was a member of. :)
Why did I propose such a talk? I usually submit technical talks to conferences; sometimes they are chosen. This year I did NOT have a technical talk selected by the Summit committee, but on a whim I had submitted this 'softball/managerial' title. It was the one that got me a ticket to Summit! You never know.
I'll have to watch it though.
I find almost all the talks from GOTO 2018 very insightful, especially the ones listed below.
GOTO 2018 • The Do's and Don'ts of Error Handling • Joe Armstrong
GOTO 2018 • Pragmatic Event-Driven Microservices • Allard Buijze
GOTO 2018 • Why Business Cases are Toxic • Chris Matts
GOTO 2018 • Unconditional Code • Michael Feathers
Also really dug Jeremy Rifkin's The Third Industrial Revolution: A Radical New Sharing Economy. Outlines the rollout of a possible new engineering platform consisting of free energy, satellite internet, and AI.
Keep the links coming, please. Terrific counter-programming for holiday down time ;)
Joe Rogan interviews Dr. Andrew Weil https://youtu.be/WjYYdMNUXF8
Joe Rogan interviews Elon Musk
Joe Rogan interviews James Hetfield
Great explanation of this space and their views on augmenting existing regulated markets.
A pretty apt typo given what feeling many of his blog posts induce in readers :).
"Algorithmic extraction of keywords, concepts, and vocabularies."
Sorry, only slides and no video:
Please, people in this situation: learn how to make a screencast! A screencast is the most amazing piece of technology that most people don't know how to use. If you're on a Mac, you can do it already - QuickTime has a dead-simple screencast tool. If you're on Windows or Linux, you'll probably have to download some small piece of software, there are tons of good choices.
Then do a screencast of the actual talk along with the slides, and post that instead of just the bare slides.
And this talk I gave about Starting a Business in Silicon Valley got a lot of traction on HN: