Several options are available, but the following are superior to the others in terms of capturing real neuron/synapse behavior with a simple and compact VLSI implementation.
For neurons: exponential adaptive integrate-and-fire (AdEx) models @ real-time.
For synapses: STD/STP + LTP/LTD circuits @ real-time.
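For the curious, the AdEx dynamics are simple enough to sketch in a few lines of Python. This is only an illustrative forward-Euler simulation, not what any chip actually implements; the parameter values are the standard published ones from Brette & Gerstner (2005), and the function name and input current are my own choices:

```python
import math

def adex_spikes(I_ext=1e-9, t_sim=0.5, dt=1e-5):
    """Simulate one AdEx neuron with forward Euler; returns spike times (s).

    Dynamics: C dV/dt = -gL(V-EL) + gL*DT*exp((V-VT)/DT) - w + I
              tau_w dw/dt = a(V-EL) - w
    On spike (V >= V_peak): V -> V_reset, w -> w + b.
    """
    # Parameters from Brette & Gerstner (2005), SI units
    C, g_L, E_L = 281e-12, 30e-9, -70.6e-3
    V_T, Delta_T = -50.4e-3, 2e-3
    tau_w, a, b = 144e-3, 4e-9, 0.0805e-9
    V_reset, V_peak = -70.6e-3, 0.0

    V, w = E_L, 0.0
    spikes = []
    for i in range(int(t_sim / dt)):
        dV = (-g_L * (V - E_L)
              + g_L * Delta_T * math.exp((V - V_T) / Delta_T)
              - w + I_ext) / C
        dw = (a * (V - E_L) - w) / tau_w
        V += dt * dV
        w += dt * dw
        if V >= V_peak:              # spike: reset V, bump adaptation current
            spikes.append(i * dt)
            V, w = V_reset, w + b
    return spikes
```

The spike-triggered increment `b` is what produces spike-frequency adaptation: interspike intervals lengthen as `w` builds up.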
Yes, that's what I mean. IF models are approximations. I thought the HBP was implementing compartmental simulations. Also, the models we have for LTP are at best speculative.
Every model is an approximation :) I don't work at the HBP, so I can't speak for their implementations. These are the SoTA of neuromorphic engineering. What do you mean by compartmental simulation?
Multi-compartmental models of neurons simulate the local voltage dynamics of the membrane in detail by treating the neuron as a graph of connected cylinders, using the cable equation along with Hodgkin-Huxley-like models of the membrane's ionic channels. It's the most detailed way to simulate neuronal dynamics, and the closest to reality. Most neuromorphic chips simulate much more abstract integrate-and-fire neurons. In both cases, plasticity is the big unknown, because we know so little about it (and it's so complex).
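As a toy illustration of the idea (not real morphology): discretizing the cable equation gives you compartments, each with its own leak, coupled by an axial conductance. A minimal passive two-compartment sketch in Python, with made-up parameter values chosen only for illustration:

```python
def two_compartment(I0=0.1e-9, t_sim=0.2, dt=1e-5):
    """Passive two-compartment neuron (soma = 0, dendrite = 1).

    Each compartment: C_m dV/dt = -g_L(V - E_L) + g_c(V_other - V) + I.
    Current is injected into compartment 0; returns [V0, V1] at t_sim.
    """
    C_m, g_L, E_L = 100e-12, 5e-9, -65e-3   # per-compartment values (illustrative)
    g_c = 10e-9                              # axial coupling conductance
    V = [E_L, E_L]
    for _ in range(int(t_sim / dt)):
        dV0 = (-g_L * (V[0] - E_L) + g_c * (V[1] - V[0]) + I0) / C_m
        dV1 = (-g_L * (V[1] - E_L) + g_c * (V[0] - V[1])) / C_m
        V[0] += dt * dV0
        V[1] += dt * dV1
    return V
```

At steady state the depolarization in the uninjected compartment is attenuated by g_c/(g_L + g_c), which is the discrete analogue of voltage decay along a cable; real simulators (NEURON, Brian2's spatial neurons) do the same thing with hundreds of compartments and active HH-style channels.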
> It's the most detailed and closest to reality way to simulate neuronal dynamics
Nope, you can keep going with the same logic: HH is just an approximation of the molecular kinetics, and what you really need are 3D FEM models with all the protein pathways, etc. etc.
But this leads to an entirely intractable project. The science lies not in reproducing the exact neurons bug for bug, but in keeping only the details necessary, within technical possibility, to explain some observed phenomenon. LIF and AdEx neurons seem like a good compromise for neuromorphic hardware.
The HH model is good enough to reproduce almost everything that is recorded from neurons (that's why H & H got the Nobel prize, after all). Even dendritic regenerative spikes are reproduced by such models. I think it's generally accepted that one does not need to do molecular dynamics to recreate membrane voltages. LIF and AdEx are very crude approximations, though (i.e. no dendritic spikes, no plateaus, impossible to use them to approximate the compartmentalized calcium levels that induce plasticity). And if you go that route, you have to justify why they are a better choice than e.g. Izhikevich neurons or indeed just sigmoid units.
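Since Izhikevich neurons came up: they sit in the same "cheap phenomenological" class as LIF/AdEx but reproduce many more firing patterns. A minimal Python sketch of the classic Izhikevich (2003) "regular spiking" parameter set with forward Euler; the step current, time step, and function name are my own illustrative choices:

```python
def izhikevich(I=10.0, t_sim=400.0, dt=0.25):
    """Izhikevich 'regular spiking' neuron; times in ms, v in mV.

    v' = 0.04 v^2 + 5v + 140 - u + I
    u' = a (b v - u)
    On spike (v >= 30 mV): v -> c, u -> u + d.
    Returns spike times (ms).
    """
    a, b, c, d = 0.02, 0.2, -65.0, 8.0   # regular-spiking parameters (Izhikevich 2003)
    v, u = c, b * c
    spikes, t = [], 0.0
    while t < t_sim:
        dv = 0.04 * v * v + 5 * v + 140 - u + I
        du = a * (b * v - u)
        v += dt * dv
        u += dt * du
        if v >= 30.0:                    # spike: reset and increment recovery variable
            spikes.append(t)
            v, u = c, u + d
        t += dt
    return spikes
```

Swapping `(a, b, c, d)` gives bursting, chattering, fast-spiking, etc., which is exactly the expressiveness argument people make for it over plain LIF.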
> I think it's generally acceptable that one does not need to do molecular dynamics to recreate membrane voltages
Everyone draws the line somewhere; this is yours. Even if we take this statement to be true, you still have metabolic networks changing transmitter concentrations, dendritic arbors evolving in entirely unidentifiable ways, etc. The goal of modeling is not to include every detail, but the ones relevant to account for specific features of the data.
While there is more detail that could be simulated, there is a vast, very successful literature using compartmental models.
Markram's work was the simulation of a cortical column with compartmental models anyway.
The problem with this, of course, is that the necessary level of detail depends on the phenomenon: simulating seizure propagation is probably different from simulating emotions. For more complex tasks we have no clue what that level is.
Somewhat off topic... but do you happen to know if the dendritic tree and its functional subunits in multi-compartment models can be treated mathematically as a multilayer network of simpler neurons?
I've been wondering if dendritic arborization means that current deep learning ANNs are hopelessly far away from biological reality, or if perhaps with deep networks simple ANNs could indeed learn to compute in a similar way to complex biological neurons, just over many layers of artificial neurons.
Absolutely, dendritic trees are generally considered active and can elicit dendritic spikes. This 2-layer model is well studied in hippocampal CA1 neurons for example[1]. ANNs are far from reality but they do validate connectionism as probably the correct abstract model of learning. The thing that's harder to crack is plasticity i.e. the learning rules. Plasticity in real neurons is a very complex process and there is no indication that anything like backpropagation takes place.
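To make "learning rules" concrete: the textbook pair-based STDP rule reduces to an exponential weight update as a function of the pre/post spike-time difference. A minimal Python sketch, where the amplitudes and time constant are typical illustrative values, not taken from any specific study:

```python
import math

def stdp_dw(delta_t, A_plus=0.01, A_minus=0.012, tau=20.0):
    """Pair-based STDP weight change for delta_t = t_post - t_pre (ms).

    Pre-before-post (delta_t > 0) potentiates (LTP);
    post-before-pre (delta_t < 0) depresses (LTD);
    both effects decay exponentially with |delta_t|.
    """
    if delta_t > 0:
        return A_plus * math.exp(-delta_t / tau)
    elif delta_t < 0:
        return -A_minus * math.exp(delta_t / tau)
    return 0.0
```

Even this "simple" rule is already a heavy abstraction of the calcium- and protein-dependent processes underlying real plasticity, which is the point being made above.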
I am surprised that nobody has mentioned spacemacs[1] and its org-mode support. If you are used to vim shortcuts but want to give fully-supported org-mode a chance, you should definitely try spacemacs.
Still, I don't think I am ever going to use spacemacs to write code, but for note-taking (with inline LaTeX preview), todos, and the calendar, I am 100% happy with it.
+1 to this, coming from vim I found spacemacs' defaults pretty painless.
I only use it for org-mode right now; I still do most of my heavy editing in other tools, but it's already been extremely useful for note-taking and todo tracking.
I can also recommend pandoc[1], I've used it to convert some org-mode documents into various wiki/markdown formats for presenting to others.
I want to like spacemacs, I really do, but using spacemacs is like having a bipolar cat. One moment it is nice and warm and cuddly, the next moment it's randomly knifing you and trying to assert dominance by missing the box.
Spacemacs doesn't magically liberate you from having to learn Emacs - it simply makes certain things more obvious, but it also obscures some others. Be patient with it and you'll be rewarded.
Whenever something breaks in my config due to an upstream change (a package gets updated with a breaking change), it usually takes me less than 2 minutes to triage it and find a workaround. It doesn't happen very often, and I update packages multiple times a week.
> Spacemacs doesn't magically liberate you from having to learn Emacs
Understandable, and not an issue for me.
My first issue was that the main version throws an error right from the start. That bug was fixed, but syl20bnr refused to do a hotfix release. To make matters worse, the main download that a new user gets is 13 months old at this point.
So a new user can either install the hotfix themselves or move to the dev branch. A new user isn't informed that they should just be on the dev branch, and that, due to the amount of changes since the last main-branch release, they're better off deleting everything and starting over straight from the dev branch.
Experienced users know it's better to be on the development branch than on master. Sadly, that's the current state of affairs with Spacemacs. The development branch, ironically, is more stable: yes, things break sometimes, but they also get fixed rather quickly. AFAIK the maintainers are currently working on the next stable release and trying to figure out better strategies to move forward.
I mostly use Spacemacs for org-mode, but I've found it quite useful for the occasional remote ssh session (TRAMP). Can highly recommend that.
I'm also considering playing around with the interactive shell, in particular for various scripts and whatnot, and I'm thinking of broadening my horizons a bit more. Could the Emacs experts in here give me some suggestions as to what might be good next steps, alongside the shell stuff?
This is a very cool repository indeed! Have you compared the speed/accuracy of spikeflow with other SNN libraries like Brian2 on decent GPUs (or even TPUs)?
Performance issues of Emacs (made much worse by many layers of plugins), combined with Vim’s counterintuitive bimodality, originally meant to compensate for slow terminals? Thanks, but no thanks; for me, Spacemacs took the worst from both editors. I love the eye candy, though (when it works); however, it’s not enough to compensate for said curious choices.
And don't forget inline LaTeX support. With this feature, taking notes at technical conferences and lectures, or jotting down mathematical ideas, is really comfortable.
For books, PDFs: Zotero (if a book/PDF needs to be synced, keep a copy in iCloud).
- Because I usually need to cite these things.
For notes: first condense the thought, then put it neatly into Dynalist.
- Because it is all easily searchable and linked, with a think-then-write tree structure.
- A mobile app is a must for capturing ideas and thoughts on the go.
For todo: Things/Todoist etc. would work.
- Because you need deadlines, reminders.
It really depends on how you do note-taking. In my case, I take notes related to my research projects, meetings, and things I find useful, which requires software with LaTeX/MathJax support. I don't like LaTeX code inside my notes; I like the rendered view. The second requirement is mobile support: on the bus, in a waiting line, etc., I should be able to edit and add notes.
Outliners are good for noting ideas. They make you think in a more structured and organized way. The fewer bullets you spend, the simpler the idea becomes, and that way reading it requires less energy.
I use Dynalist because of the reasons above and its mobile app is 10/10.
Uniquely, AFAIK, the Dynalist team make their Trello kanban board available for us all to see. We can also subscribe to and vote for our wished-for features.
I am currently comparing OmniFocus 3 with Things 3 to decide which one better represents my working habits, or preferably improves them.
- Things 3 is simple and captures everything I need from a todo-app. It is like writing down the things you need to do on a paper.
- OmniFocus 3 is complicated. There are ~5 buttons wherever you go in the app. No question it is more customizable, but does it really make a difference?
I wonder what could be added to this list from models explained in Eliezer Yudkowsky's Rationality: From AI to Zombies book (which is available online: https://intelligence.org/rationality-ai-zombies/)
PS: Created an OPML file for 113 models with explanations: