Researchers at UCLA have printed their own type of neural network, one that uses light instead of electrons to process inputs. The D2NN can be trained for feature recognition, analysis, and classification, and performs those tasks at a much faster rate than a conventional neural net. Another benefit is that once the plates are printed with the correct patterns to successfully classify objects, the only power required is the light source being processed; no internal electricity is needed. This passive design suggests that retraining the network may require printing new sheets, though the link at Slashdot was not entirely clear on that detail … much like the sheets themselves.
"Matt Kennedy from New Atlas reports of an all-optical Diffractive Deep Neural Network (D2NN) architecture that uses light diffracted through numerous plates instead of electrons. It was developed by Dr. Aydogan Ozcan and his team of researchers at the Chancellor's Professor of electrical and computer engineering at UCLA."
Here is some more Tech News from around the web:
- 'Unhackable' Bitfi crypto-currency wallet maker will be shocked to find fingernails exist @ The Register
- Elon Musk's Tesla to drop Nvidia hardware in favour of its own AI chips @ The Inquirer
- Intel 10nm delay raises speculation of foundry business scale-down @ DigiTimes
- Linux kernel 4.18 delayed: Bug ate my rc7, says Linus Torvalds @ The Register
- The Hackaday Superconference Takes Over Pasadena This Fall