I know it sounds tangential, but this does resonate with my own roundabout way of going from studies -> work and the gross lack of fundamental science companies in Africa as a whole. I did not have depression, but rather my own cocktail of problems as a student, and a lot of it (at least for me) stemmed from a clear dissonance between my current situation and an inability to anticipate a future self that would be acceptable to my current self.
With depression, the situation is similar and, more importantly, you become more and more subjective; people outside your situation are needed, as probably is medication, to bring you back into consonance with what you want in life.
The article doesn't say what the reasons were, so Wang may have had different reasons.
But there is certainly a process for handling advisor-advisee issues at a big school like Stanford:
You are in a foreign country, with no friends, no family, working overtime, in some cases with extremely little money.
If you are burned out, the first thing you need to do is stop. Make friends and lovers, travel, and work outside for a year or so.
Using medication or drugs is not going to solve your problems if you continue living such an unnatural lifestyle.
And I say that having lived that lifestyle myself, though I took my breaks from the toxic system.
When I went to university I ended up in deep debt.
It took me years to pay off that debt, and it felt like literal slavery: I would take whatever job was offered that could pay toward the debt (I had to decline many jobs because their pay was lower than my debt payments, meaning that taking them I would just stay in debt forever), and then I would stay until I was fired or had to quit.
Even then, those were not "real" jobs according to our government. In my country my "real" job registry is empty; I never "worked", because I was never legally hired full-time. I couldn't afford to be: whenever someone offered pay that was good enough, I would accept, even if the employer was obviously hiring me as a "freelancer" just to avoid paying taxes.
The times I burned out during all this, I had to keep going. Travelling for a year? I don't know if I will ever be able to afford that. Making friends? How? When? I am not in a foreign country, yet I have two friends at most, and one of them moved to another country; I see him once every 5 years.
So it is not like people have a choice after they went to college. I still feel that going to college was the biggest mistake I ever made in my life: I should never have gone, and never taken on that debt. It is paid now, but I am working in marketing instead of programming, and I have little room for error. No debt, but no surplus either; if something goes wrong with the business, I am screwed.
> to pay off that debt, it felt like literal slavery
This is what I feel is almost criminal about systematically committing university students to years-long debt. Education shouldn't have to be traded for years of someone's life - especially when most of the labor market expects them to have at least a degree.
I think the root of the problem is deeper than education (as a business) though - it's how society is organized. To have a minimum standard of living, one must eat every day and have a place to stay - which is already a kind of debt, a constant need for money. Poor people are essentially enslaved to dead-end jobs, just to be able to survive.
Hopefully, in a sensible/utopian future, we will look back on this social arrangement as barbaric, inhumane and uncivilized. Until then, best of luck navigating, adapting, flourishing despite the setbacks.
But this ignores that the source of the problem could be brain chemistry itself. Or at least that it is part of the problem. If it were situational only, everyone in the same situation would have the same outcome. But the physical structure and chemical cocktail of each individual brain plays a huge role in how those situations affect each person. The brain is an organ, and just like any other organ, there are situations that cannot be fixed with “lifestyle changes”. Medication should not just be hand waved away.
And my comment there. And also my comment here (which links to more): https://news.ycombinator.com/item?id=21629112
A good resource on suicide prevention: https://www.metanoia.org/suicide/
"When pain exceeds pain-coping resources, suicidal feelings are the result. Suicide is neither wrong nor right; it is not a defect of character; it is morally neutral. It is simply an imbalance of pain versus coping resources. You can survive suicidal feelings if you do either of two things: (1) find a way to reduce your pain, or (2) find a way to increase your coping resources. Both are possible."
A Basic Income could make a big difference to make it possible for people to live a graduate student lifestyle without the stress of navigating the social dysfunction of many (not all) graduate programs...
Seems like these people came a bit too early to the party.
Some people have left biomimicry in the dust and are manipulating matrices to find optimal parameter sets representing concepts.
Other people (https://www.humanbrainproject.eu/en/) haven't! There is a big community of people building novel types of neural network using spikes and so on.
And you are right - it isn't clear who is right. On one side the "no nature" folks might say: we don't even understand neurons to the point where we can definitively categorize them in a mouse brain, let alone explain their behaviour, or even properly measure their behaviour, and so on and so on - and this is before understanding any of it, really.
On the other hand, the "let's mimic things we don't understand" folks would say: we need to understand this, because the human brain outperforms all AI today on a 20-watt budget.
My view is that the "understand stuff" agenda is right - but not now, and probably not for 20 years (in a big way). We need small-scale capability and direction-finding work - and we will need to do that for a long time.
The ReLU was originally introduced by computational neuroscientists as a more biologically accurate approximation compared to the other activation functions. E.g. https://www.nature.com/articles/35016072
I would expect similar history behind convolutional neural networks, given how the early stages of the vision system have been understood to operate.
I wouldn’t dismiss an approach because it isn’t popular right now. Biological neurons force you to think about constraints you normally wouldn’t, and crucially they have solved temporal credit assignment, memory, etc. in interesting ways that have already inspired some breakthroughs.
As an example, the replay memory idea was directly inspired by the rough understanding neuroscientists developed of the role of the hippocampus in learning.
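Rough sketch of the replay-memory idea in Python (class and names made up for illustration; real implementations live in RL libraries):

```python
import random
from collections import deque

class ReplayBuffer:
    """Stores past transitions so a learner can revisit them out of order,
    loosely analogous to hippocampal replay during rest and sleep."""
    def __init__(self, capacity):
        # Oldest memories fall off the end once capacity is reached.
        self.buffer = deque(maxlen=capacity)

    def add(self, transition):
        self.buffer.append(transition)

    def sample(self, batch_size):
        # Uniform random sampling breaks the temporal correlation
        # between consecutive experiences, which stabilizes training.
        return random.sample(list(self.buffer), batch_size)

buf = ReplayBuffer(capacity=1000)
for t in range(100):
    buf.add((t, "state", "action", "reward"))
batch = buf.sample(8)  # a decorrelated mini-batch of old experiences
```

The same trick is what made DQN-style training stable enough to work in practice.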
(The activation function is the neuron('s mathematical model) - we elide everything that happens internal to the neuron and model it as only its inputs and outputs.)
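A minimal sketch of that model in plain Python (weights and inputs are made up):

```python
def relu(x):
    # Rectified linear unit: zero below threshold, linear above it,
    # a crude analogue of a neuron's firing rate.
    return max(0.0, x)

def neuron(inputs, weights, bias):
    # The entire "neuron" model: a weighted sum of inputs pushed through
    # the activation function. Everything internal to a biological
    # neuron is elided.
    pre_activation = sum(w * x for w, x in zip(weights, inputs)) + bias
    return relu(pre_activation)

print(neuron([1.0, 2.0], [0.5, -0.25], 0.1))  # 0.5 - 0.5 + 0.1 = 0.1
```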
The main problem is that the neurons in this article are not trainable (you can't update the weights dynamically) and not differentiable (you can't use the usual gradient-based training procedures to update the weights). They could be used for inference only though.
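To see why, here's a toy contrast between a hard-threshold "neuron" and a ReLU: a numerical derivative of the step function is zero on either side of the threshold, so gradient descent gets no signal (illustrative only; spiking-network research works around this with tricks like surrogate gradients):

```python
def step(x):
    # Hard-threshold "spike": fires 1 above zero, else 0.
    return 1.0 if x > 0 else 0.0

def relu(x):
    return max(0.0, x)

def finite_diff(f, x, eps=1e-6):
    # Numerical derivative; stands in for the gradient backprop needs.
    return (f(x + eps) - f(x - eps)) / (2 * eps)

print(finite_diff(step, 0.5))  # 0.0 -- flat on both sides of the threshold
print(finite_diff(relu, 0.5))  # ~1.0 -- a usable gradient signal
```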
When I learned how to design an ALU to, say, add, and wait for the propagation of carry bits - that's like O(n), where n is the number of bits in the number - it made me want to just use superposition for addition, which is physical and instantaneous. Of course, that has all sorts of other problems that make it worse (so much worse).
Once you learn how to slow down the earlier bits, you end up with all the bits arriving at the same time, and you can have up to n adds pipelined and timed to the clock when output matters.
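The serial carry chain is easy to see in a software model of a ripple-carry adder (a sketch; little-endian bit lists):

```python
def ripple_carry_add(a_bits, b_bits):
    """Add two little-endian bit lists the way a ripple-carry ALU does:
    each full adder must wait for the carry from the previous stage,
    so worst-case delay grows linearly with the number of bits."""
    carry = 0
    out = []
    for a, b in zip(a_bits, b_bits):
        s = a ^ b ^ carry                    # sum bit of one full adder
        carry = (a & b) | (carry & (a ^ b))  # carry-out feeds the next stage
        out.append(s)
    out.append(carry)  # final carry-out
    return out

def to_bits(n, width):
    return [(n >> i) & 1 for i in range(width)]

def from_bits(bits):
    return sum(b << i for i, b in enumerate(bits))

print(from_bits(ripple_carry_add(to_bits(13, 8), to_bits(29, 8))))  # 42
```

The loop-carried dependency on `carry` is exactly the O(n) chain: no stage can finish before the one to its right.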
But with NNs I think now would be a fun time (and likely the last decade as well) to rethink things right down to the basics. Everything need not be a NAND gate. :)
It's difficult to overcome that, particularly because it's not a comparison between 'analogue vs 64-bit float', but 'analogue versus 8-bit int'. (It's tough for analogue circuits at scale to operate even at 8-bit accuracy.)
Very interesting chips.
It sorta makes sense, considering your body is generally electrically conductive everywhere. Signals propagate as waves of charge sustained by cells opening and closing little pores that selectively release ions. "Conductors" have pores, and "insulators" don't. That's also why nerve signals are relatively slow to propagate vs. straight up electricity. That's also why you can measure muscle activity via an ECG; you're measuring the transient voltages being generated by the waves of charge moving around. The voltages themselves don't have much meaning to your body. Of course, if you apply enough voltage to build up a charge inside a muscle, you can cause it to actuate.
I recently had to get a pacemaker, and apparently the particular lower threshold to trigger my heart muscle to contract is around 0.7 V for 350 µs, at whatever impedance the lead happens to be. Below that, my heart muscle does nothing. Anything above that, and my heart muscle does a full beat. The device applies a 2 V pulse so that there's plenty of margin, and can go up to 5 V if needed, in case the lead's impedance increases. The lower the voltage, the better, in terms of battery life. The cool thing is that the device can safely coexist with any natural electrical activity my faulty nerves may have, since the muscle simply responds to whichever pulse happens to arrive first.
The human brain would like to have a word with you.
You should note the O(log n) delay growth is for a logic (ideal) model. Considering the 2D geometry of real circuits, there are models with more detail, in particular VLSI models. 
I suspect that, due to routing and path lengths, the delay would turn out to be O(n) in VLSI, or at least a power of n (with a small exponent).
 Seems like a good introduction: http://cs.brown.edu/people/jsavage/book/pdfs/ModelsOfComputa...
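For reference, the O(log n) figure in the ideal logic model comes from computing all carries with a parallel prefix over (generate, propagate) pairs instead of a serial chain. A Kogge-Stone-style sketch (software model only; it ignores exactly the wiring costs the VLSI models account for):

```python
def prefix_carries(a_bits, b_bits):
    """Compute every carry via a parallel prefix over (generate, propagate)
    pairs. Each of the log2(n) doubling rounds would be one gate-delay
    layer in hardware, which is where the O(log n) bound comes from."""
    n = len(a_bits)
    # (g, p): this bit position generates a carry / propagates one.
    gp = [(a & b, a ^ b) for a, b in zip(a_bits, b_bits)]
    dist = 1
    while dist < n:
        new = list(gp)
        for i in range(dist, n):
            g_hi, p_hi = gp[i]
            g_lo, p_lo = gp[i - dist]
            # Combine two adjacent spans of the carry chain.
            new[i] = (g_hi | (p_hi & g_lo), p_hi & p_lo)
        gp = new
        dist *= 2
    # Carry into position i+1 is the group-generate of bits [0..i].
    return [0] + [g for g, _ in gp]

def prefix_add(a_bits, b_bits):
    c = prefix_carries(a_bits, b_bits)
    sums = [a ^ b ^ c[i] for i, (a, b) in enumerate(zip(a_bits, b_bits))]
    return sums + [c[len(a_bits)]]

def to_bits(n, width):
    return [(n >> i) & 1 for i in range(width)]

def from_bits(bits):
    return sum(b << i for i, b in enumerate(bits))
```

In silicon, those long skip wires at each doubling distance are precisely what routing-aware models charge for.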
Someday I'll sit down and actually try to design a prototype for such a thing.
No, it's O(1). The bit size of how you represent a single datum does not grow with size of your model.
"The closest analogy I can think of is a sand filter, an item of apparatus used in water-purification plants. As contaminated water flows through a bed of sand and gravel, sediment gradually clogs the pores of the filter and thereby increases resistance. Reversing the flow flushes out the sediment and reduces resistance." -- https://www.americanscientist.org/article/the-memristor
A sand filter has no limit (as it gets more clogged, it starts resembling a closed valve), and it can't release built-up energy (flowing in the opposite direction just "opens the valve").
I wonder if there is any truth to that notion - that to solidify a memory, there can't be outside help; the electrochemical signal has to trigger firing of the neurons associated with the memory to strengthen their connections.
Sample size of 1, yadda, yadda. Make of it what you will. I think it is interesting.
Current hardware approaches to biomimetic or neuromorphic artificial intelligence rely on elaborate transistor circuits to simulate biological functions. However, these can instead be more faithfully emulated by higher-order circuit elements that naturally express neuromorphic nonlinear dynamics [1-4]. Generating neuromorphic action potentials in a circuit element theoretically requires a minimum of third-order complexity (for example, three dynamical electrophysical processes) [5], but there have been few examples of second-order neuromorphic elements, and no previous demonstration of any isolated third-order element [6-8]. Using both experiments and modelling, here we show how multiple electrophysical processes—including Mott transition dynamics—form a nanoscale third-order circuit element. We demonstrate simple transistorless networks of third-order elements that perform Boolean operations and find analogue solutions to a computationally hard graph-partitioning problem. This work paves a way towards very compact and densely functional neuromorphic computing primitives, and energy-efficient validation of neuroscientific models.
But then, sure enough, Wolfram Alpha: 100,000,000,000,000 carbon atoms is ~0.166 nanomoles.
Or ~2 nanograms of carbon. Dang.
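A quick sanity check of that arithmetic in Python (Avogadro's number, carbon at ~12 g/mol):

```python
AVOGADRO = 6.022e23          # atoms per mole
CARBON_MOLAR_MASS = 12.011   # grams per mole

atoms = 100_000_000_000_000  # 1e14 carbon atoms
moles = atoms / AVOGADRO
grams = moles * CARBON_MOLAR_MASS

print(f"{moles * 1e9:.3f} nanomoles")  # ~0.166 nanomoles
print(f"{grams * 1e9:.2f} nanograms")  # ~1.99 nanograms
```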
The human brain is not smart in and of itself - it learns everything from the environment, including fundamental concepts and reasoning. We are surrounded not just by nature, with all its glorious detail, but also by human society and culture. Our environment is very complex and rich. That's something neural nets have to replicate some other way, and simulation is one.
It shows that just inventing a better artificial neuron does not meaningfully advance the problem of artificial intelligence. It's just one leg, and it needs two legs to stand on.
We don't even really know how many synapses an average neuron in the human brain has.
It was the last remaining basic two-wire circuit element (the others being the resistor, capacitor, and inductor). We could model cells, up to the memristive portion, entirely with basic circuit components (albeit in very complicated arrangements). We were missing that part from our bits box. Now we aren't - well, sorta. It'll be a while before we get these in our hands.
Also, these things are a bit bigger than just modern hardware. It's as if you just added a new primary color to Bob Ross; everything changes. We're going to need to redesign computers from the electrical-engineering ground up.
Well maybe, it'll be a few years (hopefully not decades)
These single-device "neurons" also hint at things to come. Even larger neural nets with more efficient structures may, perhaps, cross a threshold in terms of what they can learn and express. I'm thinking AlphaGo Zero and beyond.
I believe this is only the start of this type of hardware.
What makes "this time different"?
They were going to revolutionize the speed of data access, right? https://past.date-conference.com/proceedings-archive/2017/pd... Making them behave like neurons is a cool hack, but sort of tangential to why anyone cares about memristors.
I keep an open mind though, so if this time really is different, then that's valid. It just seems like more hype.
> I believe this is only the start of this type of hardware.
I agree, but I also suspect a pragmatic solution is still decades away, much like nuclear fusion. Of course I no longer follow this area closely and could be completely wrong.
First Single Device To Act Like a Neuron
One thing that’s kept engineers from copying the brain’s power efficiency and quirky computational skill is the lack of an electronic device that can, all on its own, act like a neuron. It would take a special kind of device to do that, one whose behavior is more complex than any yet created.
This is just getting it wrong. Modern neural networks run as vectorized operations on a GPU. They are efficient because they do not use a single device to act like a single neuron, they can do massively parallelized work by optimizing for the hardware we have. If we had memristors, it doesn't seem like GPUs are the first thing they'd replace. More likely they would replace some sort of memory, since what makes them unique is storing information rather than performing analog operations faster.
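To make the contrast concrete: a whole layer of "neurons" is just one matrix-vector product, with each neuron a row of the weight matrix - nothing in the hardware corresponds to an individual neuron. A pure-Python sketch (weights made up; a GPU would do the same math as one batched kernel):

```python
def dense_layer(weight_matrix, inputs, biases):
    """One whole layer of neurons as a single matrix-vector product
    followed by a ReLU. Each 'neuron' is just a row of the matrix."""
    return [
        max(0.0, sum(w * x for w, x in zip(row, inputs)) + b)  # ReLU
        for row, b in zip(weight_matrix, biases)
    ]

# Three "neurons", two inputs each: each neuron is one row of W.
W = [[0.5, -0.25],
     [1.0, 1.0],
     [-1.0, 0.0]]
b = [0.1, 0.0, 0.0]
print(dense_layer(W, [1.0, 2.0], b))  # [0.1, 3.0, 0.0]
```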
Solid state batteries, hydrocarbon fuel cells etc. all work - at high temperatures.
But from my point of view, the difficult part is the many-to-many connectivity.
Depends on the neuron, but things like NMDA and AMPA receptors are very well studied at this point and are the primary mediators of synaptic plasticity - again, depending on the neuron: https://en.wikipedia.org/wiki/AMPA_receptor
Again, depends on the neuron, but dendritic integration is a classic example: https://www.sciencedirect.com/topics/neuroscience/dendritic-...
> Represent information
They do so in the frequency space. But again, it's highly dependent on the system of neurons: https://en.wikipedia.org/wiki/Neural_coding
That said, holy cow yes. The brain is really something when it comes to power usage.
Neither is learning; we know some things, sure, we have models for long- and short-term plasticity, but we have no idea how learning actually works.
With computation, there are many hypotheses, and dendrites do indeed do more than you would think, but again, these are just small details.