Think of all the projects you'll never get around to making!
In the old days you had a piece of plastic with a copper layer. You coated it in wax, then used a needle to remove the wax in the right places (you basically draw the inverse of the circuit).
Place that board in acid, and after the exposed copper dissolves you clean it up and have a printed PCB.
There are many techniques people use, and modern ones involve printers, but you could always make one at low cost. It just might not look that nice.
You know those devices that severely disabled people, like the late Stephen Hawking, use to interact with their computers? They require only a very small amount of muscle movement to control a switch, etc. What if an implant could be put directly into a muscle or a group of muscles (or a nerve ending somewhere more suitable) so that a tiny movement could trigger some action? It would essentially give humans a set of re-programmable multi-function buttons. Connect it to your phone over Bluetooth, and you can set up an action to be triggered by it. How would you filter out regular/involuntary input vs. intentional input? Not sure! But this avenue of development is exciting.
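One naive approach, sketched below with entirely made-up thresholds: treat a single twitch as noise and only fire the action on a deliberate double pulse, like a double-click. Nothing here comes from a real device; it's just one way the filtering could work in principle:

    import time

    # Hypothetical sketch: ignore isolated twitches, fire only on a
    # deliberate "double pulse" of muscle activation.

    THRESHOLD = 0.8        # normalized activation level counting as a pulse
    PAIR_WINDOW = 0.40     # seconds within which two pulses count as intent
    REFRACTORY = 0.10      # seconds to ignore input after a detected pulse

    def make_detector(on_trigger):
        last_pulse = None
        cooldown_until = 0.0

        def feed(level, now=None):
            nonlocal last_pulse, cooldown_until
            now = time.monotonic() if now is None else now
            if level < THRESHOLD or now < cooldown_until:
                return
            cooldown_until = now + REFRACTORY
            if last_pulse is not None and now - last_pulse <= PAIR_WINDOW:
                last_pulse = None
                on_trigger()          # two quick pulses: treat as intentional
            else:
                last_pulse = now      # first pulse: wait for a confirming one

        return feed

    feed = make_detector(lambda: print("action!"))
    for t, lvl in [(0.0, 0.9), (0.2, 0.9), (5.0, 0.9)]:  # simulated stream
        feed(lvl, now=t)  # first two pulses fire the action; the third doesn't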
It's currently targeted at blind people, but I'd imagine sighted people could use it just as well if it picked up, say, infrared, ultraviolet, X-rays, radio waves, ionizing radiation, or virtually anything else normally undetectable to human senses.
The potential of human sensory enhancement is truly staggering.
 - https://www.newyorker.com/magazine/2017/05/15/seeing-with-yo...
"Ready to buy or try a BrainPort Vision Pro? Fill out the patient survey here to determine if you are a good candidate. A Certified Trainer will contact you to discuss the next steps!"
Oh, never mind... I gathered from the quoted text that it would probably be prohibitively expensive, and that if I tried to find out more I'd probably divert professionals from helping actual blind people in need of this tech.
I wonder how hard it'd be to engineer something similar for a lot less money, though.
It just looks like a tiny camera mounted in regular sunglasses and connected to a computer, which then translates the pixels from the camera into electric impulses on a grid of electrodes on a "lollipop" paddle that goes into the mouth.
The camera, the computer, and the software translating the signals seem like the easy part. The only tricky part would be the electrodes: they would have to be spaced pretty closely, create enough of a signal to be felt without interfering with each other, be safe to put into the mouth, not be affected by saliva, not taste bad, resist being accidentally bitten, and probably meet some other engineering challenges I can't think of off the top of my head.
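Just to make the signal path concrete, here's a rough sketch of the easy part: downsampling a camera frame onto a small electrode grid and mapping brightness to a drive level. The 20x20 grid size and the drive scale are my assumptions, and it skips exactly the hard electrode part:

    import numpy as np

    # Rough sketch of the pipeline described above: camera frame in,
    # per-electrode stimulation levels out.

    GRID = 20          # hypothetical 20x20 electrode "lollipop" grid
    MAX_LEVEL = 255    # hypothetical drive level per electrode

    def frame_to_grid(frame):
        """frame: 2-D grayscale array (H, W) with values in [0, 255].
        Returns a GRID x GRID array of electrode drive levels."""
        h, w = frame.shape
        # Crop to a multiple of the grid, then average pixels per cell.
        cells = frame[: h - h % GRID, : w - w % GRID]
        cells = cells.reshape(GRID, h // GRID, GRID, w // GRID).mean(axis=(1, 3))
        return (cells / 255.0 * MAX_LEVEL).astype(np.uint8)

    camera_frame = np.random.randint(0, 256, size=(480, 640))  # stand-in frame
    print(frame_to_grid(camera_frame).shape)  # (20, 20)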
It's definitely an interesting challenge.
It's a huge win all around, and the world would be a much better place if more people in tech did stuff like this instead of figuring out more effective ways of getting people to click ads or spying on them.
 - https://www.youtube.com/watch?v=4eMQuhxNDJM
This has been standard in arm prostheses since at least the 1960s.
If you are at all interested in this, I'd strongly encourage you to check out Muffwiggler, a huge forum for modular synthesizer enthusiasts.
Also useful is ModularGrid, a gigantic database of thousands of amazing modules (which also lets you design your own modular setup using those modules).
VCV Rack is also great for playing around with software versions of some of these modules, which can be pretty expensive (the guy in the video has likely invested thousands and thousands of dollars into his Eurorack setup, while VCV Rack is free). On the other hand, you can also build your own Eurorack modules (see the DIY forum on Muffwiggler for that), which can be a lot cheaper.
Something else I was thinking was that if the main problem the guy in the video was trying to solve was that the knobs and switches on his modules were too difficult to manipulate with his prosthesis, then he could have tried the HackMe Vectr, which would have let him simply wave his whole hand above the module to send out three voltages derived from his hand's position in three-dimensional space. Mind control is, of course, way cooler, and doesn't require movement at all.
Finally, I was also thinking about how his modular interface might have been designed with opto-isolators, to minimize the risk of voltage inadvertently being sent back from the modular into his arm, should he plug his cable into an output rather than an input. Someone with more of an electronics background than me can probably comment on whether this would be a good idea.
 - https://en.wikipedia.org/wiki/Eurorack
 - https://en.wikipedia.org/wiki/Modular_synthesizer
 - https://muffwiggler.com/forum/
 - https://vcvrack.com/
 - https://www.youtube.com/watch?v=HSoVBz-1eQ0
 - http://hackmeopen.com/Vectr/
Plugging into an output would merely make some op-amps unhappy. Maybe a little magic smoke, but nothing lethal...
(Not a real EE and I'm definitely not confident enough to design interfaces between fleshy bits and mains-powered hardware.)
There are a limited number of binary LV2 bundles available for Windows and macOS. That number has been slowly increasing, though just a fraction of LV2 plugins use CV ports so far; it is early days yet.
Many non-musicians are under the impression that composers think up music in their heads and then write it down. While some do work that way to some extent (the myth of Mozart composing like that, as depicted in the movie Amadeus, is probably the best-known and most extreme example), for most composers (including Mozart) it is much more of an iterative, experimental process.
In recent decades, and especially with the advent of electronic music and computers, a lot more randomness and elements not fully or directly under the composer's control have entered in (though, again, this goes back at least to the age of Mozart, with his dice game).
There's a whole field of algorithmic composition where music generation becomes either semi-automated or even completely automated, and the composer's role is one of writing or choosing the algorithms and of choosing various other aspects of the music (such as which instruments or sounds get used), but without predetermining the final work in its totality or in advance.
Brian Eno talks about this much more eloquently and deeply than I can, so I again encourage you to read his talk.
 - https://www.edge.org/conversation/composers-as-gardeners
 - https://en.wikipedia.org/wiki/Musikalisches_W%C3%BCrfelspiel
 - https://en.wikipedia.org/wiki/Algorithmic_composition
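For a concrete taste of the dice-game idea linked above, here's a minimal sketch. The real game maps two-dice rolls onto a published table of pre-written measures of music; I've replaced those with placeholder labels, so all the names below are made up:

    import random

    # Minimal sketch of a Musikalisches-Wuerfelspiel-style composer.

    NUM_MEASURES = 16            # the waltz is 16 measures long
    OPTIONS_PER_MEASURE = 11     # two dice give sums 2..12: 11 outcomes

    # Hypothetical lookup table: measure_table[m][roll - 2] stands in for
    # the pre-composed measure to use at position m for a given roll.
    measure_table = [
        [f"measure_{m}_{opt}" for opt in range(OPTIONS_PER_MEASURE)]
        for m in range(NUM_MEASURES)
    ]

    def roll_two_dice():
        return random.randint(1, 6) + random.randint(1, 6)

    def compose_waltz():
        """Pick one pre-written measure per position by dice roll."""
        return [measure_table[m][roll_two_dice() - 2]
                for m in range(NUM_MEASURES)]

    print(compose_waltz())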
Even without music theory or knowledge of a specific instrument, we are able to hum or vocally replicate a piece of music. Why is it not possible to transform that directly into some instrument?
Is it not possible to hum, grab the pattern, and generate instrument music from that?
I mean, I can articulate something best with my voice, which has been with me since I learned to produce it on my own, so why don't we exploit that with the help of some software?
There is also a good chance someone can learn to encode quite a bit into vocalizations.
Anyone who has seriously learned an instrument, or to sing, knows that it takes years of hard practice to improve.
Yes, but vocalizations may well improve more quickly.
Frankly, I have toyed with this kind of thing. Software could do a lot; in particular, it could allow the user to adapt their expression in various ways.
As someone who's done some experiments with this, I can say there are some major caveats:
- Most people can't sing a melody as well as they think they can. Figuring out what they were trying to do is where most of the effort goes. Even I'm mediocre at it, and I'm at a professional level on one instrument and competent on a few others.
- You have to deal with overtones. Almost all naturally produced notes are a stack of waveforms, and figuring out which one carries the intended signal isn't always trivial (and can be even more complicated in polyphonic harmony).
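On that overtone caveat: a common first step is autocorrelation-based pitch detection, which tends to land on the fundamental because the harmonics repeat at the fundamental's period too. A minimal sketch, assuming a mono float array (a real hum would need windowing and voicing detection on top):

    import numpy as np

    def estimate_pitch(frame, sample_rate, fmin=80.0, fmax=1000.0):
        """Estimate the fundamental frequency of a mono audio frame via
        autocorrelation. The overtones are also periodic at the
        fundamental's period, so the strongest peak within a plausible
        lag range usually lands on the fundamental, not a harmonic."""
        frame = frame - frame.mean()                 # remove DC offset
        corr = np.correlate(frame, frame, mode="full")
        corr = corr[len(corr) // 2:]                 # keep non-negative lags
        lo = int(sample_rate / fmax)                 # shortest plausible period
        hi = int(sample_rate / fmin)                 # longest plausible period
        lag = lo + np.argmax(corr[lo:hi])
        return sample_rate / lag

    # Toy test: a 220 Hz "voice" with strong overtones at 440 and 660 Hz.
    sr = 44100
    t = np.arange(sr // 10) / sr
    hum = (np.sin(2 * np.pi * 220 * t)
           + 0.5 * np.sin(2 * np.pi * 440 * t)
           + 0.3 * np.sin(2 * np.pi * 660 * t))
    print(round(estimate_pitch(hum, sr), 1))  # ~220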
This is reading muscle signals with electrodes and translating those muscle activations into a variable voltage signal. One could argue that because you're not reading mechanical action it could be considered "thought control", but I'll leave that up to each individual to gauge for themselves.
It's a game with a defined assembly-like language: you hack into things to make them do other things. The catch is that some of the things you hack into are actually your own nerves, so that you can still function properly.
Pretty cool game by Zachtronics.
Combine this with a little ML and you could probably have a general-purpose interface, trainable in a few seconds or minutes, to interact with any suitable output based on your personal signals! Imagine piloting a vehicle without moving a muscle, or playing a video game, or interacting with someone remotely...
Hell, now that I think of it, there's no reason you have to cut off your arms to do so. Is anyone aware of any open source prosthetic tech/code that one could conceivably hook up to functioning appendages?
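To make the "trainable in minutes" idea a bit more concrete, here's a minimal sketch assuming you already get windows of EMG samples from somewhere. The feature set is the classic cheap one (mean absolute value, waveform length, zero crossings); everything else, names included, is hypothetical, and a real pipeline would add filtering, artifact rejection, and a rest class:

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    def features(window):
        """Cheap per-channel EMG features for a (samples, channels) window:
        mean absolute value, waveform length, and zero-crossing count."""
        mav = np.mean(np.abs(window), axis=0)
        wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)
        signs = np.signbit(window).astype(np.int8)
        zc = np.sum(np.diff(signs, axis=0) != 0, axis=0)
        return np.concatenate([mav, wl, zc])

    def train(gesture_windows):
        """gesture_windows: dict mapping gesture name -> list of windows."""
        X = [features(w) for ws in gesture_windows.values() for w in ws]
        y = [g for g, ws in gesture_windows.items() for _ in ws]
        return KNeighborsClassifier(n_neighbors=3).fit(X, y)

    # Fake data standing in for a few seconds of recording per gesture.
    rng = np.random.default_rng(0)
    data = {g: [rng.normal(scale=s, size=(200, 4)) for _ in range(20)]
            for g, s in [("relax", 0.1), ("clench", 1.0)]}
    model = train(data)
    print(model.predict([features(rng.normal(scale=1.0, size=(200, 4)))]))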