
Some excerpts from recent Alan Kay emails - dasmoth
http://worrydream.com/2017-12-30-alan/
======
pixelmonkey
There's an interesting book on the topic discussed in these emails, entitled
"Why Greatness Cannot Be Planned":

[http://amzn.to/2CtIrRR](http://amzn.to/2CtIrRR)

There's a YouTube talk by the author on the subject here:

[https://youtu.be/dXQPL9GooyI](https://youtu.be/dXQPL9GooyI)

The rough idea is this:

- creativity often arises from stumbling around in a problem space, or from
operating "randomly" under an artificially imposed constraint

- modern life is obsessed with metrics and goal-setting, and this has
extended into creative pursuits including science, research, and business

- sometimes a short-term focus on the goal defeats the goal's own aims (see,
e.g., the focus on shareholder value)

- the authors point out that when they were researching artificial
intelligence, they discovered that systems that focused too much on an
explicitly coded "objective" would end up producing lackluster results, but
systems that did more "playful" exploration within a problem space produced
more creative results

- using this backdrop, the authors suggest that perhaps innovation is not
driven by narrowly focused heroic effort but instead by serendipitous
discovery and playful creativity

I found the ideas compelling, as I do Kay's description of the "art" behind
research.

~~~
tacon
That book was written by computer scientists, and they demonstrate how the
optimal search strategy is often to try the most unusual thing you haven't
tried yet. I'd like to see more discussion of this as an approach to personal
development. Many people advise students and new graduates to sample the
search space broadly, without detailed goals, just to see what's out there.
It almost becomes the "do something every day that scares you" maxim. There
is an old saying: "Everything you want in life is waiting just outside your
comfort zone."
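
A minimal sketch of that selection rule, which the authors call "novelty
search" (the one-dimensional "behaviors" and the distance function below are
illustrative simplifications, not the authors' actual code):

```python
import random

def novelty(candidate, archive, k=5):
    """Novelty score: mean distance to the k nearest behaviors tried so far."""
    if not archive:
        return float("inf")
    nearest = sorted(abs(candidate - past) for past in archive)[:k]
    return sum(nearest) / len(nearest)

def novelty_search(sample, rounds=50, pool=10):
    """Each round, keep whichever candidate is most unlike anything tried yet."""
    archive = []
    for _ in range(rounds):
        candidates = [sample() for _ in range(pool)]
        archive.append(max(candidates, key=lambda c: novelty(c, archive)))
    return archive

# Toy usage: "behaviors" here are plain numbers; the real algorithm measures
# distance in a domain-specific behavior space and pairs the score with an
# evolutionary algorithm.
print(novelty_search(lambda: random.gauss(0.0, 1.0))[:5])
```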

~~~
maroonblazer
I've not watched the video nor read the book but I take it slightly
differently.

Reinertsen, in "Managing the Design Factory", describes the need for
companies to generate high-value information as efficiently as possible.
Using Shannon's theory of information, he argues that low-information events
are those where the likelihood of either success or failure is high. It
follows that you maximize the information output when the probability of
failure is 50%.
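
For reference, the 50% figure is just the maximum of the binary entropy
function (this is the standard Shannon argument, sketched here; Reinertsen's
own derivation may differ in detail). For an experiment that fails with
probability p, the expected information gained is

```latex
H(p) = -p \log_2 p - (1 - p) \log_2 (1 - p), \qquad
\frac{dH}{dp} = \log_2 \frac{1 - p}{p} = 0
\;\Rightarrow\; p = \tfrac{1}{2}, \qquad
H(\tfrac{1}{2}) = 1 \text{ bit}.
```

A near-certain success (p close to 0) or near-certain failure (p close to 1)
tells you almost nothing you didn't already know.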

How many of us take risks - professionally or personally - where the
probability of failure is 50%?

------
dang
For anyone who hasn't run across this, Alan has shown up on HN periodically,
and the resulting post history is just astonishingly rich:
[https://news.ycombinator.com/posts?id=alankay1](https://news.ycombinator.com/posts?id=alankay1).
Threaded conversations at
[https://news.ycombinator.com/threads?id=alankay1](https://news.ycombinator.com/threads?id=alankay1).

If you read the threads you'll see that he has frequently continued
conversations long after the submission dropped off the HN front page, and
many topics have gotten developed further than one usually sees. HN's
archives are full of gold, and for me this is one of the more obvious veins
of it.

If anyone wants to spend some time aggregating and collating these writings,
as the OP has done from emails, I would see about getting YC to fund the
effort, subject to Alan's permission of course. Email hn@ycombinator.com if
you're interested.

~~~
crispinb
Irony of ironies!

~~~
crispinb
[This bit of pointless & unexplained snark would have been deleted or edited
had I revisited in time]

------
tzahola
> Socrates didn't charge for "education" because when you are in business,
> the "customer starts to become right". Whereas in education, the customer
> is generally "not right".

Very important takeaway!

~~~
weinzierl
The whole paragraph is gold:

> Socrates didn't charge for "education" because when you are in business, the
> "customer starts to become right". Whereas in education, the customer is
> generally "not right". Marketeers are catering to what people _want_,
> educators are trying to deal with what they think people _need_ (and this is
> often not at all what they _want_). Part of Montessori's genius was to
> realize early that children _want_ to get fluent in their surrounding
> environs and culture, and this can be really powerful if one embeds what
> they _need_ in the environs and culture.

This is so true on so many levels. Schools and universities came to mind
immediately. For me, the most striking fit is to educational articles and
blog posts on the web.

When the www was young it was full of stuff that catered to my intellectual
curiosity. Nowadays _content creation_ is driven by _keyword research_,
_search volume_ and _CPC_. After three paragraphs I usually know the piece
will end in a _call to action_. Not that there isn't any interesting stuff
out there, but it is drowned in a sea of ever-repeating trivialities.

~~~
agumonkey
Remember when Google was useful for finding good stuff faster, instead of
driving what should be there?

~~~
jononor
I remember that when I started googling, I would often go to pages 2, 3, 4, 5
to find what I wanted. These days I never go past the first page. I actually
had to check just now that there is still a page switcher...

What changed the most? How good I am at formulating a query? What kind of
information I'm looking for? How good Google is at curating the first page?
How much information is available? I sure hope it's not that my queries are
more trivial, or that my preference for answers favors simpler, easier
ones...

Thankfully I still get lost on Wikipedia (and Youtube), in a kind of what-is-
all-this-I-need-to-know-more learning spree.

~~~
agumonkey
A bunch of factors:

- we became impatient, if not lazy
- Google does a huge job of parsing complex queries
- the content itself is more organized (HTML semantic markup, meaningful CSS
classes, etc.)

The other day I tried Tails/Tor and... I can't recall... IPFS, probably. That
network is bare, so bare I felt excited, just like in the 56K days. It
quickly went bad because there's a huge percentage of bad/crazy content, but
I still felt I was entering more interesting territory as a visitor.

------
roi
The thing is that Licklider's vision of computers as "interactive intellectual
amplifiers for all humans, pervasively networked world-wide" has already come
to pass, and created huge economies of scale and exponential pressures for
compatibility and conformity that didn't exist before.

In the 1970s a few dozen brilliant people could create a completely new and
self-contained computer system, because the entire computing world was tiny
and fragmented. There wasn't the imperative to be compatible with
all-pervasive standards (even IBM's dominance in business was being
challenged by the minis).

These days if you want to create a new computer system that people will use,
you need at a minimum to provide a networking stack and a functional web
browser, some emulation or compatibility system to support the legacy
software that people rely on, device drivers for a huge range of hardware,
etc. All this not only takes a huge amount of work, it also punctures the
design integrity of your system, making it into a huge mountain of
compatibility hacks before you even start on your own new concepts. But the
deadliest enemy of innovation is the mental inertia of masses of users with a
long history of interacting with computers. They are no longer the blank
slates you had in the 70s: users who had never seen a computer.

Even in the realm of art, people realized that the romantic or modernistic
model of artistic revolution that Kay invokes is untenable, and retreated
into postmodernism.

~~~
panic
_> All this not only takes a huge amount of work, it also punctures the design
integrity of your system, making it into a huge mountain of compatibility
hacks before you even start on your own new concepts._

One possible approach that doesn't require writing a full compatibility layer
is virtualization. Run your new system alongside a mature system like Linux,
then rig it up so windows can appear inside whatever new environment you come
up with. You do still need to write the code to send events and get pixel
buffers, but it seems like much less work overall.
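
A toy sketch of the kind of bridge described above; nothing here is a real
API. A working bridge would speak an actual remote-display protocol to the
guest (e.g. VNC/RFB or a virtio-gpu channel), but the shape of the interface
is roughly this:

```python
from dataclasses import dataclass

@dataclass
class Event:
    kind: str      # e.g. "key" or "pointer"
    payload: dict  # hypothetical event details

class LegacyWindow:
    """Stands in for one window of a mature OS running in a VM."""

    def __init__(self, width: int, height: int):
        self.width, self.height = width, height
        self._pixels = bytearray(width * height * 4)  # RGBA framebuffer

    def send_event(self, event: Event) -> None:
        # Forward input from the new environment into the guest.
        print(f"to guest: {event.kind} {event.payload}")

    def get_pixels(self) -> bytes:
        # The guest's latest frame, for the new environment to composite
        # however its own (possibly unconventional) model sees fit.
        return bytes(self._pixels)

win = LegacyWindow(640, 480)
win.send_event(Event("pointer", {"x": 10, "y": 20, "button": 1}))
print(f"{len(win.get_pixels())} bytes of pixels to composite")
```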

~~~
mkl
Then you're constrained by the existing window and input systems, which
probably makes many interesting new concepts impossible to implement.

I don't know an alternative except to do everything new within one window,
which would be at least as bad.

------
chubot
Does anyone know what happened to HARC? It was announced in May 2016, with
Alan Kay's involvement and Bret Victor's:

[http://blog.ycombinator.com/harc/](http://blog.ycombinator.com/harc/)

[https://news.ycombinator.com/item?id=11679680](https://news.ycombinator.com/item?id=11679680)

In September, "eleVR" said it was leaving YC Research:

[http://elevr.com/elevr-leaving-ycr/](http://elevr.com/elevr-leaving-ycr/)

I presume these e-mail excerpts are inspired by the end of HARC too? What
happened?

In the thread about the first-year reports a couple of months ago, someone
suspected that funding had ended too. Also, Bret Victor's work and name are
conspicuously missing from the first-year reports.

[https://news.ycombinator.com/item?id=15564072](https://news.ycombinator.com/item?id=15564072)

[https://harc.ycr.org/reports/](https://harc.ycr.org/reports/)

~~~
ontouchstart
Thanks for the link by @j_s; I found the following insightful comment by you
(@chubot):

> The misleading thing about Linux is that it IS IN FACT a big idea -- it's
> just not a technological idea. We already knew how to write monolithic
> kernels. But the real innovation is the software development process. The
> fact that thousands of programmers can ship a working kernel with little
> coordination is amazing. That Linus wrote git is not an accident; he's an
> expert in software collaboration and evolution.

[https://news.ycombinator.com/item?id=15164092](https://news.ycombinator.com/item?id=15164092)

From your comment, I think the funding and development process of these
post-Bell Labs/PARC research projects might need a paradigm change, from
technology-idea-oriented to social-development-oriented, given the reality of
our highly networked social environment (the dream of Licklider).

If we read the CRAZY eleVR thread as a tree/forest,
[https://news.ycombinator.com/item?id=15162957](https://news.ycombinator.com/item?id=15162957)

we can see how much misunderstanding of each other exists in our tech
community. There are much deeper social-development problems to be solved
than simple VC or philanthropic funding.

@vihart was right in saying

> HN. You're a weird place.

It is a more fundamental problem that vihart and gang can’t solve. And that
would be an opportunity at another meta level.

Think what Lick would do.

~~~
chubot
Thanks, glad you got something out of the comment.

You have a point that, ironically, the success of Licklider's vision may have
changed the nature of collaboration for the worse in some ways. It's been a
huge net positive, in that humanity as a whole is more productive, but there
are things that have been lost.

(Or maybe another way to look at it is that Licklider/Engelbart's vision has
been watered down, and you could solve these communication problems with
better software design.)

Another example I recall is that Rob Pike said that Bell Labs was a wonderful
environment because everyone had to physically sit in the same room, huddled
around big shared computers. There is a natural exchange of ideas in that
environment, which leads to big research breakthroughs. When Ethernet and
personal workstations came around, suddenly you could sit at your desk and
close the door and put your headphones on.

I personally noticed that irony at my own job. Often people would be talking
more to people in a remote office than to the person sitting next to them.
And sometimes people would chat electronically with someone in their own
office, e.g. if they didn't want to take off their headphones or disturb
other officemates!

I also wonder what a new model for funding basic research looks like. It seems
like YCombinator would be a logical entity to fund such research, but as far
as I can tell, it was a failed experiment.

And I think there is some danger in longing for the "golden age". I was
reminded in reading "The Dream Machine" that a lot of it was driven by fear
of the Soviets (Sputnik, etc.). And I also wonder if what made it work back
then is no longer present, for better or worse.

~~~
ontouchstart
The fact that we are having this conversation across time and space shows the
power of Lick's "Intergalactic Computer Network". We have reasons to be
optimistic.

On the other hand, on top of the computer hardware/software communication
layers where this conversation takes place, there is a profound layer that we
seem to take for granted and are not consciously aware of (like fish in
water): written language composed and consumed by real human beings (English
in this case). To communicate effectively with this distributed (non-local in
space) and asynchronous (non-local in time) layer, we need more context than
the words themselves. In Rob Pike's Bell Labs and Alan Kay's bean-bag-chaired
PARC, local, synchronous, face-to-face communication was much more effective
because the audience came from the same background and shared the same
context.

Let's take a look at the contexts of Bret Victor's blog post "Some excerpts
from recent Alan Kay emails". These are excerpts of private written
communication (emails) between Alan Kay and Bret Victor. We could imagine
what Alan had in mind when he wrote those emails, or what Bret had in mind
when he read them, and furthermore why Bret made those excerpts and posted
them on his blog. However, since we are neither Bret nor Alan, we can only
read them within the constraints of our own contexts. We can only guess what
message Bret wanted to send with his blog, and we are not even sure we (the
HN community) are his intended audience.

I am sure there are some YC funders reading this, and the fact that we don't
see any of them making a statement here shows that they are also under the
constraints of their contexts. So I guess the challenges of, and solutions
to, research funding lie in these contexts as well.

It is all about pink and blue planes.

------
signa11
> An example of the vision was Licklider's "The destiny of computers is to
> become interactive intellectual amplifiers for all humans, pervasively
> networked world-wide". This vision does not state what the amplification is
> like or how you might be able to network everyone in the world.

i am going through 'the dream machine', the book about licklider. it is quite
long and dense, but it ties together lots of individual threads (for me at
least) that were kind of floating loose otherwise. i had known about
luminaries in the field (and their work) specifically: licklider, shannon,
norbert wiener, cerf and kahn, metcalfe, von neumann, vannevar bush (to name
a few), but this book brings them together.

interestingly though, licklider's background was in psychology, as opposed to
ee/maths. did we lose some of the diversity of perspective?

another book that i found to be pretty good is 'the idea factory'; it
complements 'the dream machine' quite well.

edit-001 : fmt changes, and added reference to another book.

~~~
baq
> interestingly though, licklider's background was in psychology, as opposed
> to ee/maths. did we lose some of the diversity of perspective ?

The answer to this question is almost right there in the emails: the focus of
research funding switched from the visionary work of defining the problem
space to exploring the solution space. Engineers are people who are educated,
and perhaps sometimes predisposed, to solve problems, not to find problems
worth solving.

------
evaneykelen
The name Licklider is mentioned a couple of times. Recently I read a
fascinating account of his work. Great read for those who want to know more
about the history of computing and the internet:
[https://www.amazon.com/dp/B01FIPHEXM/](https://www.amazon.com/dp/B01FIPHEXM/)

------
pdkl95
> "The destiny of computers is to become interactive intellectual amplifiers
> for all humans, pervasively networked world-wide". This vision does not
> state what the amplification is like or how you might be able to network
> everyone in the world.

That's a vision that recognizes the value in creating _tools_, not
constrained solutions for the _current_ problem. Freedom can be
frightening[1], but letting people explore what is possible with your new tool
sometimes leads to unexpected uses and the occasional paradigm-shifting
discovery.

[1] (with apologies to DHH[2]) If we give people unconstrained tools, they
might change the String class to shoot out fireworks at inappropriate times!

[2] [https://vimeo.com/17420638#t=27m27s](https://vimeo.com/17420638#t=27m27s)

------
mr_tristan
Another way of thinking about this is that large companies have basically
decided that executing optimally is more important than necessarily creating
huge leaps themselves. They can just focus on making money, create a sizable
amount of cash, and acquire new technology.

I'm sure there's a cultural effect here as well. Seems like there are fewer
large companies that are willing to just let technologists "muck around for a
bit" swinging for the fences without some kind of goal.

But I just wonder: if VC-funded startups are where new tech grows, do we end
up focusing on incremental improvements instead of potential large leaps?

~~~
forapurpose
> Seems like there are fewer large companies that are willing to just let
> technologists "muck around for a bit"

I wonder if this is true. Many of the major computer industry companies have
labs: Google, Microsoft, IBM ... (Apple? Facebook? Amazon?)

I wonder if it's more or less than before.

~~~
madhadron
Compared to Bell Labs or PARC, these are purely applied labs.

~~~
forapurpose
Thanks. Do you know of any source that discusses this issue (applied vs
basic research in the major corporate labs) at a somewhat sophisticated
level?

------
sgentle
What this makes me wonder is: if the problem is not enough funding for
visionary work, or, more accurately, that modern funders misunderstand how
and why to find visionary work, then what will change their minds?

Because I feel like I've read a few of these posts, watched similar talks,
and I nod along and agree every time. The only problem is, I'm not a wealthy
benefactor, the director of a research institute, or a business exec with
control of discretionary R&D funding.

Where do those people hang out, what would convince them like this post
convinces me, and is anyone working on that?

~~~
auggierose
I think it is hard to get people to understand what they do not understand
yet, and harder still when they have opposing experiences. It seems to me
making money is easy if you a) work hard and b) are extremely goal-driven
(and obviously you need to be sufficiently smart). Now this seems to be
working in research too, where the extremely goal-driven part means you focus
on publishing research papers in prestigious places.

I think it is working in research (given, for example, that some of the
papers in conferences like POPL are just great). Of course you won't hear
about the instances where it is not working (until it does).

The problem is you somehow have to decide who to give money to, and you cannot
possibly know who carries a vision inside of them that they are actually able
to realise given enough funding and time.

------
chauhankiran
> set up deadlines and quotas for the eggs. Make the geese into managers. Make
> the geese go to meetings to justify their diet and day to day processes.
> Demand golden coins from the geese rather than eggs. Demand platinum rather
> than gold. Require that the geese make plans and explain just how they will
> make the eggs that will be laid. Etc.

-- I have seen these types of situations so many times during project
implementation.

------
Jyaif
Is the underlying assumption behind this text that less innovation is
happening right now? Do the billions poured into AI not count?

~~~
bboreham
The comparison is with an environment where the mouse, windowing and Ethernet
were created.

Billions poured into AI have produced some interesting tricks but little that
changed the entire world of computing in the way those things have.

~~~
Jyaif
It's funny you should mention "the mouse, windowing and Ethernet", because in
the last 5 years the majority of people have stopped using all three of those
(thanks to mobile). So that takes care of the "little that changed the entire
world of computing" argument.

Yes, billions have been poured into AI and we still have no strong AI. That
just shows that it's a problem way more complicated than inventing the
concept of a pointing device, making computers talk to each other, or having
applications show something on a screen that isn't text. The people of the
80s weren't smarter (or dumber) than the people of today; there was just a
lot of low-hanging fruit back then.

~~~
icc97
> because in the last 5 years the majority of people have stop using all 3 of
> those (thanks to mobile).

Even if this were true, which I dispute, those things changed the world. Just
because we don't use the iPod anymore doesn't make it any less brilliant.

> The people in the 80s weren't smarter (or dumber) than the people of today,
> there were just a lot of low hanging fruits back then.

They didn't just take the low-hanging fruit. In the 60s it was assumed that
computers would just get bigger and bigger. They had to take the highest-
hanging fruit, the kind everyone thought was ridiculous. They had to envision
an entirely different world. So perhaps they weren't more intelligent, but
they had a more powerful vision of the future.

------
imrehg
Makes me wonder: what could we do, or what could current organizations do, to
recreate that environment of research and creativity in today's (and
tomorrow's) world? Also, what do you do in your own life (if anything) to
create the sort of context that is conducive to this kind of creativity?

~~~
osullivj
Committees impose compromises and conditions on funding, so government or
public-company money won't work, as Kay points out. It must be private money,
like the Medicis funding the Renaissance. Something like the Gates
Foundation. It could be a prestige project for Bezos or Zuck.

~~~
discreteevent
DARPA was government though

~~~
MaysonL
And DARPA is nowhere near as good as ARPA was.

------
jancsika
> Parc was "effectively non-profit" because of our agreement with Xerox, which
> also included the ability to publish our results in public writings (this
> was a constant battle with Xerox).

Parc was "effectively non-profit" because there was no way to collect or
monetize reams of user data like there is today. If there had been, either Kay
wouldn't have gotten that agreement or Parc would have been much more
selective about what constituted "results."

For example: how many companies have solved the talk-to-our-computers-and-our-
computers-talk-back problem now?

~~~
marshray
> there was no way to collect or monetize reams of user data like there is
> today

Xerox was at the time the world leader in handling literal reams.

A huge portion of data processing was related to managing customer data,
mailing lists, etc.

~~~
jancsika
I'm talking about a different type of user data -- what would be input into
things like the visual programming languages and devices that Alan Kay talks
about -- Grail, light-pen inputs, etc. I assume what Kay is talking about is
the ability to publish insights, data, and specifications for the languages
and UIs that were being developed at the time. So when you see Kay doing user
studies with kids, the stuff he made public was the actual research on what
was learned about human-computer interaction during that time, the languages
used and developed, etc. It wasn't the position of the kids' eyes, the
fingerprint of their typing patterns, their mouse movement patterns, etc.

Someone can correct me if I'm wrong, but I've never seen anything Kay wrote
in the 70s or 80s that discussed the implications of mining user data on such
an enormous and regular basis as is currently done. And I doubt Xerox itself
thought of that category of user data as being valuable outside of research,
or in any way significant in the way "customer data, mailing lists, etc."
were. But today, that user data is assumed to be valuable and can cheaply be
collected and monetized. So even if some Parc-like team existed today, the
results would be practically locked down to keep a competitive advantage over
others. Or it would make much smaller strides with specific goals in mind,
like Mozilla with Rust. Or it would clone and improve upon existing
proprietary technology and make it available to the public, like GNU.

It's a bit like reading someone wax nostalgic about Bitcoin's initial
bootstrapping mechanism and complaining that today's altcoins don't put
enough emphasis on distributing the tokens to people who aren't speculators.
That's fine as a description of a problem, but it's a fairly toothless
observation in terms of solving the problem. Distributing tokens when nearly
nobody is watching is a completely different problem from distributing them
when everyone is not only watching but also speculating. Similarly, having a
research team do work that is a public good when nobody thinks the data
generated is something that can be directly monetized is a completely
different problem from trying to do the same when the greater part of the
economy depends on _everybody_ watching, collecting, and monetizing the data
that your software/hardware generates.

------
Jare
This bit

"An important part of any art is for the artists to escape the "part of the
present that is the past", and for most artists, this is delicate because the
present is so everywhere and loud and interruptive. For individual
contributors, a good ploy is to disappear for a while"

reminded me of John Carmack explaining in his .plans how he would go off and
seclude himself for a couple of weeks to start a new iteration of the id
technology.

------
mitchtbaum
I have intimately enjoyed the work of Engelbart, Kay, Nelson, Victor, etc over
these years. They make computer history meaningful and believable.

------
cateye
It is a very difficult narrative, mainly because while reading it I had to
think constantly about survivorship bias and appeal to authority. Would I
really read this if it were written by a 20-year-old without a track record?
Or how many Alan Kays are out there who completely failed and would prescribe
a totally different approach, attributing cause and effect to other factors?

I also totally get the point of funding problem-finding versus funding
business plans that are full of bullshit and pretend to have a risk-free
solution with a playbook that just needs some money to execute it linearly,
step by step. So, for that part it is written very well and the point is
clear.

Another topic is: would all kinds of innovations really have emerged if the
funding mechanism had been radically different in the last 20 years? Would we
see all kinds of big leaps, or was ARPA-Parc just a serendipity moment that
couldn't be repeated no matter what the funding structure was?

~~~
BaronSamedi
I think Parc was a unique "serendipity moment". These Cambrian-Explosion-type
moments happen frequently in history: high creativity in which a new form is
developed, followed by an increasing focus on small incremental improvements,
and finally stagnation. The amount of potential a new form has determines how
long this process takes.

I believe that for workstation/desktop computing we have already hit
stagnation. I also suspect that programming languages have not advanced in any
significant way since around 1990. By this perhaps surprising claim, I mean
that the languages available today are no more capable than those available in
the 1980s.

~~~
lmm
I don't know how you're defining "capable"; I'd take the view that
Haskell/Scala/... are immensely more effective than anything that was
comparably mainstream in 1990 (and that Haskell/Scala/... development in
practice today is a lot more effective than Haskell in the '90s, even if a
language that's notionally the same was available), and that the interesting
research languages of today have made similar progress beyond the interesting
research languages of 1990. It takes a frustratingly long time for things to
percolate into the mainstream, so it's easy to think that, say, ML existed in
1970, and forget that ML is still a long long way ahead of what were commonly
used industrial languages even as late as 1990.

~~~
BaronSamedi
I would define "capable" in terms of developer productivity -- the time it
takes to deliver quality software. For example, if you decided to write a DNS
server from scratch, would choosing Haskell or Scala (both fine languages)
really enable a massive (order of magnitude) improvement over choosing
Smalltalk, C, Ada, or Common Lisp? I doubt it. What we do benefit from is the
availability of libraries, IDEs, and various frameworks. These, I would
argue, rather than the innate abilities of the modern languages, are what
enable us to be more productive (assuming that we are).

~~~
lmm
> For example, if you decided to write a DNS server from scratch would
> choosing Haskell or Scala (both fine languages) really enable a massive
> (order of magnitude) improvement over choosing Smalltalk, C, Ada, or Common
> Lisp?

Well, for a sufficiently low defect rate, I suspect it would take an order of
magnitude longer (for normal developers) to achieve it in C, Smalltalk, or
most Lisps. Ada perhaps not, but then maybe it would've taken an order of
magnitude longer to implement, particularly if you factor in making enough
money to buy the development tools first. Then as now, the future is not
evenly distributed.

I mean, you're undeniably right on one level: for a small, standalone
executable whose main task is bit-banging, where a high defect rate is
acceptable, it was possible to bash that out 20 years ago at a similar speed
to what you can do in today's languages. But I think that's actually a much
smaller niche than it first appears, and not where the focus of our efforts
has been because that's not actually what businesses most need.

> What we do benefit from is the availability of libraries, IDE's, and various
> frameworks. These, I would argue, rather than innate abilities of the modern
> languages are what enable us to be more productive (assuming that we are).

I think the ability to write long-term-maintainable, general-purpose libraries
or frameworks is actually very tightly tied to the languages we use.

------
jasonkostempski
Are the full emails available somewhere?

------
quantumofmalice
_> Most don't think of the resources in our centuries as actually part of a
human-made garden via inventions and cooperation, and that the garden has to
be maintained and renewed._

Just right.

------
ridewinter
What if a stepping stone to UBI were a basic income for everyone who has
demonstrated some technical aptitude? If scientists and engineers didn't have
to work for, say, Zuckerberg to survive, and instead worked because they
genuinely wanted to collaborate, maybe the tech giants wouldn't be so
disproportionately powerful. And creativity in the tech world would
skyrocket.

~~~
CuriousSkeptic
No need to constrain ourselves to engineers. There are probably lots and lots
of people who would produce more value on their own agenda.

~~~
dreamache
And there would be plenty more that would produce far less.

~~~
xtian
Is that a fact or a fear?

~~~
dreamache
"Give a man a fish and you feed him for a day. Teach a man to fish and you
feed him for a lifetime."

UBI has a lot wrong with it, but I'll just start and end with a first
principle: taxation is theft, and therefore government and UBI are immoral.

I advocate for freedom and consent, especially as it pertains to economics.
The income I earn through producing goods and services that I exchange
voluntarily with other individuals should be mine to keep; all of it. Anything
less is not freedom and certainly not consent.

~~~
forapurpose
What about all the goods you consume that you do not pay for, but that were
developed over decades, centuries, and millennia, and by your contemporaries
too: language, liberty, civil rights, the system of laws, security, roads,
economics, finance, wheels, computers, mathematics, much of health care and
education. Are you paying the Einstein estate for relativity? King James'
translation team for their Bible? Alan Kay for all he did that you use? All
the FOSS developers? The Suffragettes for your vote?

If not for taxation, who will pay for the police that keep you safe, the
education programs that likely educated you (assuming you had some publicly
funded education) and most of your customers, employees, and business
partners? Public health that prevents epidemics? The list goes on forever.

