This Week in AI
‘Godfather of AI’ quits Google with regrets and fears about his life’s work. Geoffrey Hinton, known as one of the ‘Godfathers of AI’, has resigned from his position at Google to speak freely about the potential risks of AI technology. Hinton’s work directly contributed to the development of technologies such as ChatGPT and Google Bard. He warns that the race among companies to deploy ever more capable AI will be impossible to stop, flooding the world with so much misinformation that the average person will no longer be able to tell what is true. Moreover, the technology could eliminate jobs and might even pose a threat to humanity itself once it becomes capable of writing and running its own code.
Hinton’s departure from Google is significant given his stature in the AI community. In his resignation statement, he expressed his desire to speak openly about the dangers of AI without worrying about the impact on his employer, and he even admitted to regretting parts of his life’s work, believing bad actors would inevitably exploit the technology. Joining other concerned critics such as Eliezer Yudkowsky and Gary Marcus, Hinton adds considerable weight to the ongoing debate about the potential risks and benefits of AI.
Scientists develop A.I. system focused on turning people’s thoughts into text. Semantic decoders are AI systems that can translate brain activity into text, potentially benefiting patients who have lost the ability to communicate physically. Researchers at the University of Texas have developed a non-invasive semantic decoder that uses a transformer model. It can produce text that closely or precisely matches the intended meaning of words around half the time, without requiring any surgical implants. However, it cannot yet be used outside a lab setting because it depends on an fMRI scanner; the researchers believe they could eventually create a more portable system.
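To make the decoding idea concrete, here is a deliberately toy sketch of one way such a system can work: candidate phrases are mapped to semantic features, a fitted encoding model predicts the brain response each phrase would evoke, and the phrase whose predicted response best matches the observed scan is selected. All names, dimensions, and data below are illustrative stand-ins, not the actual University of Texas system.

```python
import numpy as np

rng = np.random.default_rng(0)

N_FEATURES, N_VOXELS = 8, 32
# Stand-in for a fitted linear encoding model (features -> voxel responses).
W = rng.normal(size=(N_FEATURES, N_VOXELS))


def embed(phrase: str) -> np.ndarray:
    """Toy semantic embedding: a deterministic pseudo-random vector per phrase."""
    seed = abs(hash(phrase)) % (2**32)
    return np.random.default_rng(seed).normal(size=N_FEATURES)


def predict_response(phrase: str) -> np.ndarray:
    """Brain response the encoding model predicts for a candidate phrase."""
    return embed(phrase) @ W


def decode(observed: np.ndarray, candidates: list[str]) -> str:
    """Pick the candidate whose predicted response is closest to the scan."""
    errors = [np.linalg.norm(predict_response(c) - observed) for c in candidates]
    return candidates[int(np.argmin(errors))]


candidates = ["she opened the door", "the dog ran outside", "he started the car"]
# Simulate a noisy scan evoked by the second phrase.
observed = predict_response("the dog ran outside") + rng.normal(scale=0.1, size=N_VOXELS)
print(decode(observed, candidates))
```

The real system replaces the toy embedding with features from a language model and searches over generated continuations rather than a fixed candidate list, but the match-predicted-response-to-scan loop is the core idea.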
Amazon is developing an improved LLM to power Alexa. Amazon’s Alexa might finally be getting the upgrade many have been waiting for since the rise of language models last year. CEO Andy Jassy is leading the charge to build the world’s best personal assistant, with Alexa as the natural starting point. Although the assistant can be frustrating to use today, incorporating powerful state-of-the-art language models could be the solution. Amazon is currently developing an improved large language model (LLM) to power Alexa, capable of understanding more complex and nuanced language and making it easier for users to interact with the voice assistant. The new model will also support more languages, making Alexa more accessible globally. Additionally, Amazon is working on enhancing Alexa’s privacy and security to instil greater confidence among users running the voice assistant in their homes.
Interesting Projects

Nvidia Jetson Iron Man HUD. Engineer and avid cosplayer Kris Kersey has created a superhero helmet with the help of NVIDIA technology. The helmet, inspired by the superhero Iron Man, is equipped with two cameras — one by each eye slot — that see what the helmet’s wearer is seeing. The HUD then presents information, including the current temperature, humidity, altitude and GPS location. It can also classify what’s in the user’s view using deep neural networks for object detection.
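The overlay logic such a HUD needs is straightforward to sketch: merge sensor readings with object-detection results into the text lines the display renders each frame. The names and fields below are hypothetical, not taken from Kersey’s actual Jetson code.

```python
from dataclasses import dataclass


@dataclass
class Detection:
    label: str          # class name from the object-detection network
    confidence: float   # detector score in [0, 1]


def hud_lines(detections: list[Detection],
              temp_c: float, humidity: float,
              altitude_m: float, gps: tuple[float, float]) -> list[str]:
    """Format sensor readings and detections as display-ready text lines."""
    lines = [
        f"TEMP {temp_c:.1f}C  HUM {humidity:.0f}%",
        f"ALT {altitude_m:.0f}m  GPS {gps[0]:.4f},{gps[1]:.4f}",
    ]
    # Show only reasonably confident detections, strongest first.
    for d in sorted(detections, key=lambda d: d.confidence, reverse=True):
        if d.confidence >= 0.5:
            lines.append(f"{d.label.upper()} {d.confidence:.0%}")
    return lines


frame_info = hud_lines(
    [Detection("person", 0.92), Detection("car", 0.31)],
    temp_c=22.5, humidity=40, altitude_m=310, gps=(33.7490, -84.3880),
)
print("\n".join(frame_info))
```

In a real build, the detections would come from a network such as those in NVIDIA’s Jetson inference libraries, and the lines would be drawn onto the camera frame rather than printed.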
Hardware

M5Stack Launches the Compact, Camera-Packing CoreS3 with the IoT and TinyML in mind. M5Stack has launched a compact, camera-equipped development kit, the CoreS3, designed with IoT and tinyML in mind. At its heart is the Espressif ESP32-S3, featuring a dual-core Tensilica Xtensa LX7 processor running at up to 240MHz with 512kB of static RAM (SRAM) and 384kB of on-chip flash, plus Wi-Fi and Bluetooth 5.0 Low Energy (BLE) connectivity and vector acceleration designed specifically for tinyML workloads at the edge.