It is, but I could never get it to work on my Windows machine. The process is always suspended upon startup, and it only continues after I detach Frida. I am working with his examples here:
I haven't tested it yet, but I like being able to inject into running native apps and control them, debug them, etc.
On Mac OS X, there is the SIMBL project, which enables such a plugin architecture for any OS X app. E.g. you can get extra features like always-on-top windows, window transparency, etc. And you can automate/script some apps which would not otherwise be scriptable, or not to such an extent. Or you could add specific features to applications. E.g. Dropbox on Mac OS X uses a technique like this to display status icons for the Dropbox directory in Finder.
SIMBL would inject native code into some app; in most cases you would inject ObjC code.
I like the idea of dynamically scripting or messing around with an app. For that reason, I used SIMBL to inject Python + iTerm + PyObjC. That way, you can interact with any app from Python, interactively. https://github.com/albertz/Pyjector
Oops, in retrospect I should have resisted my OCD-fueled urge to link every single reference to be consistent. :p
Speaking as its creator and maintainer, it's been a pet project for the last 5 years (7 if you start from frida-gum, the code instrumentation engine), but to this day it's still rather obscure considering its potential. I regret not spending more time marketing the project over the years, so these days I'm doing my best to make up for that. :)
Looks like they also removed a number of posts in the forum where the discussion started, including a post where a moderator wrote that customers asked Lenovo for this kind of service. The same moderator also pointed out that it was against the community rules to argue with moderators...
Can someone explain a bit about the problems? I am not a neuroscientist but a machine learning researcher, and given the recent results in deep learning / artificial neural networks, which reach human performance on certain specific tasks, it almost looks as if, were you to put together an ANN that is big enough and wired somewhat similarly to the human brain, you would get something similar. And the point is, it doesn't really matter whether that is exactly like the human brain; also, you don't really need to understand in detail how it works, just as you cannot really explain even the current simple ANNs. It just works anyway. So, under this view, it's just a matter of scaling up and waiting until we have enough computing power. And then you would add more details to make it closer to the real human brain.
As an ML practitioner, you certainly know that the ANNs you use have not much in common with biological (real) NNs. First, ANNs tend to be mostly feedforward, while rNNs are highly recurrent. ANNs are therefore not very good at tasks where memory is needed (I know about the developments involving LSTM neurons, but they are not analogous to the implicit memory of recurrent neural networks). Second, ANNs usually don't have a time dimension; the firing is essentially a floating-point value instead of an action potential. Third, recurrent neural network structures do not scale: an efficient/reliable/redundant system of 100B neurons will probably have extremely different structures than one with 100M neurons, because due to recurrency and other factors an rNN is a chaotic process (in the sense of sensitivity to parameters) that has to be stabilized by its structure.
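To make the time-dimension point concrete, here is a toy leaky integrate-and-fire neuron in plain Python. This is only an illustrative sketch, not a model of any real neuron; every constant (time step, time constant, threshold) is an arbitrary value I picked for the demo. The key contrast with an ANN unit is that the output is not one floating-point activation but a train of discrete spike events in time:

```python
# Toy leaky integrate-and-fire (LIF) neuron, pure Python.
# Unlike a typical ANN unit, whose output is a single floating-point
# activation, this model integrates its input over time and emits
# discrete spikes. All constants are arbitrary illustration values.

def simulate_lif(current, steps=200, dt=1.0, tau=20.0, threshold=1.0, reset=0.0):
    v = reset          # membrane potential
    spikes = []        # spike times; the "output" is a train of events
    for t in range(steps):
        # leaky integration, Euler step of: dv/dt = (-v + current) / tau
        v += dt * (-v + current) / tau
        if v >= threshold:
            spikes.append(t)  # fire an action potential...
            v = reset         # ...and reset the membrane potential
    return spikes

weak = simulate_lif(current=0.9)    # settles below threshold: no spikes
strong = simulate_lif(current=1.5)  # crosses threshold: periodic spike train

print(len(weak), len(strong))
```

With the weak input the potential converges below threshold and nothing fires; with the strong input you get a regular spike train, and the input strength is encoded in spike timing rather than in a single output value.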
And there are many, many other differences. Note that our task is not to solve problems but to figure out how the brain works.
Also, recurrent neural network simulation cannot really be scaled right now; we don't have the hardware. It is not parallelizable with our current tools because of the huge number of connections.
(Disclaimer: I was involved in a project trying to model real neural networks. It wasn't a huge success, but we learned a lot.)
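The sensitivity-to-parameters point is easy to demonstrate with a toy random recurrent tanh network. Again a sketch only: the size, weight gain, number of steps, and seeds are all arbitrary choices for illustration, and a gain well above 1 is used because that is where such random networks typically behave chaotically. Two trajectories started a billionth apart drift to a macroscopic distance:

```python
# Sensitivity to initial conditions in a toy random recurrent tanh net.
# All sizes, gains, and seeds are arbitrary illustration values.
import math
import random

def step(W, x):
    # one synchronous update of a fully connected tanh recurrent net
    n = len(x)
    return [math.tanh(sum(W[i][j] * x[j] for j in range(n))) for i in range(n)]

def divergence(seed, n=20, g=3.0, steps=100, eps=1e-9):
    """Peak distance between two trajectories started eps apart."""
    rng = random.Random(seed)
    # random weights with gain g, well above the g = 1 edge-of-chaos scale
    W = [[rng.gauss(0.0, g / math.sqrt(n)) for _ in range(n)] for _ in range(n)]
    x = [rng.uniform(-1.0, 1.0) for _ in range(n)]
    y = list(x)
    y[0] += eps  # perturb a single unit by one billionth
    peak = 0.0
    for _ in range(steps):
        x, y = step(W, x), step(W, y)
        peak = max(peak, max(abs(a - b) for a, b in zip(x, y)))
    return peak

peaks = [divergence(seed) for seed in range(5)]
print(peaks)  # the tiny perturbation typically gets amplified enormously
```

Now imagine stabilizing that at 100B units instead of 20; that is the kind of structural problem the brain has somehow solved.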
This is a good, informed summary, thank you. I asked the same question of my Alzheimer's-researcher friend, and he gave a pretty similar response, including aspects like the huge computational requirements and the basic non-similarity of rNNs and ANNs (albeit with a disclaimer that he wasn't in the field).
Nonetheless, I look forward to seeing more simple rNNs being created over time (besides the C. elegans one that was modeled recently). Who knows what strange organizational rules or structures we will discover from this strand of research?
I would say that the problem with your proposal here is that those "more details" you mention consist of a vast horde of information we know nothing about, one that far outweighs what we do know. At best, we know that certain areas of the brain concentrate certain functionality, and we can map rough levels of interconnectedness and activity traversal. That is not nearly enough information to build a brain. The neural nets that exist may sometimes achieve "human performance" at narrow tasks, but it's a big leap to infer that those neural nets are actually functional models of subsystems in the brain, or that their isolated functionality could be plugged into a larger group of such neural nets with something coherent emerging. Our actual brains are far more interconnected and overlapping than is obvious from reading the latest fMRI study.
Cross-platform .NET usually means using a native toolkit on each platform, so the result is WPF on Windows, GTK# on Linux, and MonoMac on Mac. The point of this is to avoid the uncanny valley of cross-platform toolkits like Java Swing, at the expense of writing extra UI code.
Some wrappers like Eto.Forms and XWT exist, but don't expect drag-and-drop visual designer tooling.
Thanks to Mono, WinForms is sort of cross-platform to a certain extent, but if you give it a try you'll see why it is not recommended.
WPF is very heavily tied into DirectX and other Windows-only APIs.
Open-sourcing it is one thing; converting it into an engine that can do layout and rendering identically across multiple platforms is non-trivially difficult. Talk to the browser folks about how hard they had to work on that, and they weren't dealing with 3D stuff.
That's the toolkit if you want Metro apps, dealing with MS's crappy store, etc. etc. If you want a traditional Windows application that just works, then WPF is fine. (And I think WPF has an option for non-blurry text rendering, whereas every Metro app seems to be blurry.)
Okay, yeah, so still XAML. I've only recently looked into Windows RT; my time was always spent in ASP.NET, so WPF/WinRT/Silverlight all seem like the same thing to me. But I do know a lot of Windows app developers who bitch up a storm about the diffs between WinRT XAML and what can be done in WPF :)
I've been messing with Eto.Forms recently. You define everything through code (or optionally in a JSON blob) rather than in a designer, and it tries to use native controls to display your design.
It's pretty solid and easy to use and I'm quite happy with it. If you want an exquisite, beautifully-designed OS X app then you should be looking into having separate frontends per platform anyway, but having tried Eto I find it fits my needs.
It's interesting: somehow, because this has been reverse engineered and I can do funny things with it and hack around with it, I'm actually considering buying this thing.
I wish more companies would allow modifications/hacks right from the start and actually support them. That would attract the hacker community and might result in many more interesting use cases.
I got one as a present, and oh, how I hate Nike for not even caring about Android users, much less letting you use this kind of sensitive data without uploading it to their cloud.
The device itself is usable as a quite OK step counter, but it requires Windows-only software to do _anything_ with it.
As it mainly measures walking, I cannot even recommend it for its intended purpose.
I wonder if there is a market boost that comes from having your stuff hacked.
Using the Keurig 2.0 as an example: adding DRM to the coffee maker turned off a lot of people, but once a hack was discovered, it generated more publicity for the company. It makes me want to buy one just because I know I can get around the DRM. If it had never had DRM, I don't think I would want one as badly.
I'm just fooling around; I'm sure everyone on the project will rightfully get a ton of return on their investment. I'm actually working on a browser UI myself. It's only because of platform OS integration that I'm using Java, but one thing I did learn from using a lot of Kivy is how much of a pain it is to be far from the platform libraries.
Unlike Kivy, which is Python, you've chosen JS, so at least your UI sits on a more naturally portable layer, given that you have a browser runtime right there. Documenting your plumbing into the system libraries will be really critical. The JS community will hack on every other piece of the system on their own, but user after user will shy away from the system plumbing because it's not what attracted them initially, so that's where I would prioritize.
Writing cross-platform UIs may be a pipe dream for now. OpenGL is headed toward a weird place with Mantle, Metal, etc. while OpenGL Next comes together. The HSA Foundation (notably missing Nvidia and Intel) might offer some hope that CPU architectures and graphics APIs (and GPGPU) will converge again, but right now HTML5 with JS is probably the best we have for sheer everywhere-ness.
Probably where I would diverge most sharply (and somewhat irreconcilably) from the web community, and perhaps even from the high-minded ideals of Mozilla, is on HTML/CSS; but the "superior" tools I find in Android are themselves relatively new. Do I wish that HTML/CSS/JS would go away? Yes, but with no need for a burial: some technology will displace them someday through sheer elegance that unicorns have not yet fathomed. If it's at all recognizable in what I'm using today, it's an early-stage experiment.