Google has launched the second iteration of its no-code Teachable Machine so that users with no coding experience can build bespoke machine learning (ML) models and apply them to projects such as classroom activities.
Teachable Machine 2.0 carries over the features of the original, allowing users to record images and video from a webcam and use them to train ML models for tasks like pattern recognition. These models can now also be exported for use in websites, apps and physical machines.
Open source curriculums are making use of the tool to give children their first taste of ML, without the intimidating aspect of learning to code.
One such example is a programme run out of MIT's Media Lab by education researcher Blakeley H. Payne for six to 10-year-olds. The children are invited to the lab where they use Teachable Machine 2.0, among other things, to build a broader understanding of technology and what it can do.
"Parents - especially of girls - often tell me their child is nervous to learn about AI because they have never coded before," said Payne. "I love using Teachable Machine in the classroom because it empowers these students to be designers of technology without the fear of 'I've never done this before.'"
The tool is entirely browser-based: all the data fed into it stays on the user's computer, and the processing happens locally in the browser.
Teachable Machine can record from a computer's webcam and microphone and be trained to recognise images, sounds or poses. It can identify different people or objects and detect when they leave or return to the shot.
Other real-world use cases include helping people with impaired speech use voice-powered computer products. Neurological conditions that affect speech, such as motor neurone disease, can impede an individual's ability to interact with voice-controlled software. Teachable Machine, however, can convert audio into a spectrogram and be trained to recognise speech that isn't produced typically.
Elsewhere, educators at New York University's Interactive Telecommunications Program used the tool's pose recognition feature to create video games in which characters could be controlled with hand gestures.
Connor Jones has been at the forefront of global cyber security news coverage for the past few years, breaking developments on major stories such as LockBit’s ransomware attack on Royal Mail International, and many others. He has also made sporadic appearances on the ITPro Podcast discussing topics from home desk setups all the way to hacking systems using prosthetic limbs. He has a master’s degree in Magazine Journalism from the University of Sheffield, and has previously written for the likes of Red Bull Esports and UNILAD tech during his career that started in 2015.