They use graphene to approximate the ReLUs. The device is a few microns on each side.
The 2D image is first converted to a 1D wavefront by some external means, then launched into the side of the device.
Training happens in a digital computer, by simulating the behavior of the device.
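For intuition, "training in simulation" can be sketched as: model the device as a parameterized function from input wavefront to detector intensities, then run gradient descent on those parameters. Everything below is a hypothetical toy (a plain linear model with made-up sizes and names); as I understand it, the actual paper optimizes a material distribution using full electromagnetic simulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy: the device is a linear "transmission matrix" T mapping
# an input wavefront x (flattened image) to intensities at 10 detector
# spots. This is only a sketch of the train-in-simulation loop, not the
# paper's actual physics.
n_in, n_out = 64, 10
T = rng.normal(scale=0.1, size=(n_out, n_in))   # stand-in "impurity" params

def forward(x):
    return T @ x                                # simulated detector readout

def train_step(x, label, lr=0.01):
    """One gradient step on L = 0.5 * ||T x - onehot(label)||^2."""
    global T
    target = np.zeros(n_out)
    target[label] = 1.0
    err = forward(x) - target
    T -= lr * np.outer(err, x)                  # exact gradient dL/dT
    return 0.5 * float(err @ err)

# Toy "digits": one fixed random prototype wavefront per class.
protos = rng.normal(size=(n_out, n_in))
for _ in range(200):                            # repeated passes over the classes
    for k in range(n_out):
        train_step(protos[k], k)

# The frozen T now plays the role of the fabricated glass.
pred = int(np.argmax(forward(protos[3])))       # classify prototype 3
```

Once training converges, `T` is fixed; fabricating the glass corresponds to baking those parameters into matter.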
Full paper: https://arxiv.org/pdf/1810.07815.pdf
A daily 3-5 sentence (+ 1 picture?) email summary of an interesting scientific paper might be worth pursuing.
Thank you Dang and PG.
That could have some interesting applications for things that need that kind of speed. Or scale: a gazillion ML calculations per second. With zero power consumption. From a piece of glass.
And as far as timing goes: 1 cm of glass delays light by ~50 picoseconds, which corresponds to around 20 gigahertz. Fast, but not mind-blowingly fast.
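The arithmetic in that comment checks out as a back-of-the-envelope calculation, assuming a typical refractive index for glass of about 1.5:

```python
# Transit time of light through 1 cm of glass, and the repetition rate
# that delay would cap a feed-forward loop at.
c = 3.0e8            # speed of light in vacuum, m/s
n = 1.5              # assumed refractive index of glass
length = 0.01        # 1 cm, in metres

delay = length * n / c   # transit time: 5e-11 s, i.e. 50 ps
rate = 1.0 / delay       # corresponding rate: 2e10 Hz, i.e. 20 GHz
```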
(You can have a hundred worker CPU cores doing the necessary conversions; you just need to worry about the parallelization complexity. But, then again, this is exactly what already happens when we feed data to hefty devices like GPUs and TPUs.)
Most of what we call “AI” today also uses pre-learned weights for its neural networks, and in many use cases these weights are not touched after deployment.
I don’t see why a neural network encoded in glass should not be an AI while the same neural network on a computer is one — either you have to call both AI or neither.
I suppose the question is, if an AI has learned and you export that final learned state to use in a now-hardcoded classifier, is that classifier still AI (or part of the overall AI) or is it simply the output? I can imagine arguments on both sides. If you accept that as AI, then sure, this fits the bill!
Most of what we call AI are hardcoded solutions once in production. There may be ongoing offline improvements being made, but once the improvements are established the production AI is replaced with a point-in-time snapshot of the AI undergoing offline training. Self-learning in production causes all kinds of problems, but most significantly it's a security issue since it gives an attacker the ability to manipulate behavior by curating examples.
But you’re right, I think many “AIs” shouldn’t really be named that either!
That's exactly as self-assessing and self-modifying as a neural network implemented using any other kind of computation substrate.
It is like using AI to design, say, the most aerodynamic plane.
Only here they used an AI to design something that performs a task that we traditionally use as a benchmark for AI models. But this piece of glass, just like the plane mentioned above, is not learning anything and it's not an AI.
If I understand correctly, it’s the design process, not the glass, that involved learning. By the same analogy, the sculpture in London (the glass, here), which was designed using a random walk (the neural nets), would be the same: the sculpture itself isn’t a “random walk”, but the design process was.
(I couldn’t recall the name of the sculpture. Here’s the wiki link: https://en.m.wikipedia.org/wiki/Quantum_Cloud )
Edit: I read the other comments and it’s getting more confusing! AI, from my school courses, would be an implementation of algorithms like hill climbing, where a system is online: it takes some input and tries its best to find a solution. Now if I take the output itself for use in, say, signal processing — that “output” would be a “device” to do something and won’t be an “AI device”. Does this make any sense at all? I’d love to get some pointers on this to read.
FWIW having worked with some serious ML heads at Google and Facebook in the past, none of them referred to ML as AI.
And you're a piece of matter that has been designed by a lot of trial and error to perform one specific task: to propagate your species.
The parent is not saying the glass is useless, just that it isn't intelligent.
and silicon is?
Just seems a bit like being told you can buy a flying car, only to find out that it has a short flight time, doesn't fly very high, and depends on approaching a ramp at speed.
> Training the AI involved presenting the glass with different images of handwritten numbers. If the light did not pass through the glass to focus on the correct spot, the team adjusted the size and locations of impurities slightly. After thousands of iterations, the glass “learned” to bend the light to the right place. (...)
This sounds pretty much like they're training a neural network.
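The quoted procedure, nudging the impurities and keeping only adjustments that move light toward the right spot, is essentially hill climbing. Here is a hypothetical toy version where the "impurities" are just entries of a matrix; the actual paper uses gradient-based optimization, so treat every name and size below as an assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in: "params" plays the role of the impurity pattern, "protos"
# the input wavefronts, and the loss measures how far each detector
# response is from its one-hot target.
n_in, n_out = 16, 4
params = rng.normal(scale=0.1, size=(n_out, n_in))
protos = rng.normal(size=(n_out, n_in))

def loss(p):
    # Squared distance of all detector responses from one-hot targets.
    return float(np.sum((protos @ p.T - np.eye(n_out)) ** 2))

init_loss = loss(params)
best = init_loss
for _ in range(5000):                    # "thousands of iterations"
    trial = params + 0.02 * rng.normal(size=params.shape)  # small nudge
    trial_loss = loss(trial)
    if trial_loss < best:                # keep only improvements
        params, best = trial, trial_loss
```

After thousands of iterations the loss has dropped substantially, which is all "the glass learned" amounts to here.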
This single piece of glass can do as many visual recognitions as physics allows, using no energy (light aside).
It's definitely AI - it's still recognizing a handwritten digit. But not all AI applications require further training. Sometimes you just need the end result.
- Thinking Humanly - problem solving, introspection, and learning (Cognitive Science)
- Thinking Rationally - the ability to perceive, reason, and act (Logic)
- Acting Humanly - simply doing things that (at the moment) humans do better (the Turing Test)
- Acting Rationally - the design of intelligent agents (the focus of the textbook, Betty's Brain)
Yes, they all vaguely sound the same, but the point is: if you took the glass and had to select which definition it ticks off, which would it be? My point is, what about this glass makes it "intelligent"? The end of the article starts to talk about how a combination of AI-glasses could form some sort of efficient image-recognition process. Since we still don't have a clear definition of what intelligence is, is it simply a combination of tiny little perceptrons (as in neural networks) that have specific differentiation tasks?
source, me: Stanford CS grad.
Because this system can’t adapt, I agree it is probably not meaningfully “intelligent”.
I suppose the next stage is to develop another digital analog of a physical system which can decide for readers if the original digital analog of a physical system is "real AI".
Presumably with sufficient training it would be able to read and assess New Scientist articles about its own development.
+ Can you embed it in a mirror to display information when a face is recognized?
+ Or can you make glass semitransparent when something is recognized?
+ What about windows that blur anything that looks like a face, preferably keeping other objects the same? A bit of an autoencoder where only certain objects are blurred.
+ Can the amount of light shining on it lead to different results? Then you could show a little movie by shining more and more light through the glass, using it in reverse.
It's a simple embodiment but I guess there are thousands of applications we haven't thought of yet. Really cool train of thought!
I don't think movies, but some monotonic changes should be possible using critical refractive angles.
Maybe a zero power trigger for things like a touchless hand soap dispenser.
Light goes into face A of a digital sundial predominantly from a particular angle and comes out face B patterned like a number. Presumably if you shined light in the pattern of a number on face B, you'd get light coming out of face A predominantly at a particular angle.
Only because the sundials are purpose-built for producing clear digits. This would produce blurry messes that, if you squint hard, would look a bit more like one digit than like the others. But the principle doesn't appear to be different.
This doesn't "recognize" anything. It's just a complex waveguide. You can see it in their ray path diagrams.
Which is how classifiers work, and also how our brains work; the only real difference is the medium. Recognition is the reliable ability to condense a large form into a very small amount of information that distinguishes it from other forms. When I look at a hand-drawn "8", I don't store all the curves I see as the number 8; I have an abstraction of the number in my head, and I signal recognition by activating that abstraction, very similarly to how this glass computer functions. How else does one recognize something?
I don't see how your first and second statements are mutually exclusive.
The whole presentation of this article is very "woo". They say bullshit like "the glass learned to bend the light". This is clearly stupid and false. The system DESIGNING the glass learned how to construct the glass such that the light bent appropriately. The glass hasn't learned shit.
> The system DESIGNING the glass learned how to construct the glass such that the light bent appropriately
With a bit of rephrasing you could surely say exactly the same about a neural network. The learning was embodied in the glass, so depending on definitions, the glass was 'taught' something. It 'learnt' something.
I kind of see what you're saying, but it does seem to be discounting something actually interesting and new (in my experience).
Our grasp of materials science to date has not permitted compact, stable, passive optical computational systems at the degree necessary to implement digit parsing. That’s the advancement shown here.
Not to diminish the advance demonstrated in the paper.
The problem with voice is that it's the cadence, content, and variations in tone that help discern an individual. For that you need to factor in a variable amount of past data; you can't just take a snapshot. Think of taking a picture of a single frame of a movie and trying to know what the entire movie is about.
However, you could make something that responds to a certain set of frequencies, and with that, something that could distinguish male from female voices using the general rule that male voices are lower in frequency. That would be a start. Though even with that simple task, you start to see exceptions.
But if you can flowchart it and have ways to handle the logic, who knows what is possible and impossible.
Not analogue, but it certainly would be zero power.
We now have the computing power to use a huge training set to set the parameters of a function. But after that it is like this glass.
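The "after that it is like this glass" point can be made concrete: once training is done, inference is a pure function of hard-coded weights, with nothing updated at run time. A trivial, entirely hypothetical illustration:

```python
import numpy as np

# Hypothetical hard-coded weights, standing in for the result of a long
# offline training run (or for the impurity pattern baked into the glass).
FROZEN_WEIGHTS = np.array([[1.0, 0.0, 0.0],
                           [0.0, 1.0, 0.0],
                           [0.0, 0.0, 1.0]])

def classify(x):
    # Pure function: nothing is learned or updated here at run time.
    return int(np.argmax(FROZEN_WEIGHTS @ x))

print(classify(np.array([0.1, 0.9, 0.2])))  # prints 1, today and forever
```

Whether you implement that frozen function in silicon or in glass changes the substrate, not the behavior.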
I really like this as an example. It shows how a function reads numbers by visualising the trained function.
But organizations (DARPA, for example) are already taking the next step in AI: creating a realtime feedback loop to update the trained function.
The outcome of that will be scary AI as far as I'm concerned.
And that's very interesting. This is the first application of what could be a very very powerful new technology.
So far as I know addressable super-high resolution 2D holographic displays are still a lab curiosity. There are some obvious technical problems with adding depth, but as soon as you have a 3D addressable light matrix that can be reconfigured dynamically, you have a whole new base for physical computing.
As others have stated here, this construction is due to machine learning whether or not the device itself has a capability to adapt further.
Makes you wonder what kind of spin some journalistic minds would put on a "Hello World" program.
Number system converter using light and glass - core logic
Letters to ASCII code converter - Application 1
ASCII code to binary converter - Application 2
Imagine if the converters were placed on top of one another.
The light source need not be a problem, as we can stack lenses to power it up.
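The stacking idea above reads as plain function composition: each layer is one converter, and placing them on top of one another composes them. A digital stand-in, with hypothetical names:

```python
def letter_to_ascii(ch: str) -> int:        # "Application 1"
    return ord(ch)

def ascii_to_binary(code: int) -> str:      # "Application 2"
    return format(code, "08b")

def stacked(ch: str) -> str:                # the two layers placed together
    return ascii_to_binary(letter_to_ascii(ch))

print(stacked("A"))  # prints 01000001
```

In the optical version, the output pattern of one slab would have to line up with the input face of the next, which is where the engineering would get hard.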
Based on reading others' comments, I can see they are comparing it to an AI.
It's a problem solver using light and stacked glass. It uses the structure of an AI: a neural network filtering data. You could implement that with a digital (electronic) device, or a purely electrical one.
Input - Processing - Output: that's a conventional computation engine, and nothing says it has to be electrical or mechanical. The first computer was a mechanical device.
This uses light!
Here is another application.
How can a blind person tell whether it is day or night? Give them a light-bending device that focuses light on the person's palm. If it heats up, it's day.
Use stacked light-bending layers to get a yes/no answer.
EDIT: Actually they describe some nonlinear material inclusions in the paper, interesting.