Patrol Robot and AI-Enabled Architectural Masterpiece Design Solution


Once every two weeks, we collect the most interesting news from the AI world for our staff. We thought you might find it worth reading, too!

In this first issue of our #InfocusAI digest, you will discover how Microsoft’s neural network learns in polynomial time, why it takes so much effort to recognize speech from brain activity, and how a robot with computer vision patrols secured facilities in Australia. You will also find out how AI can help improve traffic safety and assist architects with their creative projects.

AI-focused digest – News from the AI world

Issue 1, 02-15 September 2022

Microsoft and Harvard present a neural network architecture that can learn well in polynomial time

A research team from Microsoft and Harvard University has come up with a neural network architecture that learns in polynomial time. The architecture is based on concurrent sharing of weights between recurrent and convolutional layers, which keeps the number of parameters constant even in networks with trillions of nodes. The research shows that this simple architecture is capable of learning as well as any other algorithm with a bounded sample size. The authors refer to this property as “Turing optimality.” Read more here. The study with a detailed description of the architecture is available here.
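As a rough illustration of the weight-sharing idea, the hypothetical PyTorch sketch below reuses a single convolutional block at every step of a depth-wise unrolling, so the parameter count stays the same no matter how deep the unrolled network gets. It is not the architecture from the paper, just a toy example of tying weights across layers.

```python
# Hypothetical illustration of weight sharing across depth (NOT the paper's architecture):
# one small convolution is reused at every "layer" of a recurrent unrolling, so the
# number of trainable parameters does not grow with depth.
import torch
import torch.nn as nn


class WeightTiedConvNet(nn.Module):
    def __init__(self, channels: int = 16, depth: int = 100):
        super().__init__()
        # A single convolution whose weights are shared by every unrolled step.
        self.shared_conv = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.depth = depth

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Apply the same convolution `depth` times, like a recurrent network in depth.
        for _ in range(self.depth):
            x = torch.relu(self.shared_conv(x))
        return x


net = WeightTiedConvNet(channels=16, depth=100)
params = sum(p.numel() for p in net.parameters())
print(f"Parameters: {params}")  # stays the same whether depth is 10 or 10,000
out = net(torch.randn(1, 16, 32))
```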

AI learns to recognize speech from brain activity 

A recently published study on the ability of artificial intelligence to decode speech from brain activity offers modest hope to people who are unable to communicate by voice. Researchers trained an AI model to recognize speech in 53 languages and applied this language model to databases of human brain activity. Participants in the experiment listened to excerpts from famous works of literature while their brains were scanned with electroencephalography (EEG) or magnetoencephalography (MEG). An AI-powered system then matched the recorded stories with brain activity patterns and suggested what the person could have been hearing. According to the findings, the AI can identify a three-second speech segment from MEG signals with 72.5% accuracy when choosing among the 10 best-fitting options. Experts call these striking results, but the ultimate goal of helping people who cannot communicate through speech is still a long way off, both because language is open-ended and because brain-scanning equipment is not widely available. On top of that, the system still has to learn to “guess” not only what a person has heard but also what they want to say. Read more on this story here.
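To make the matching setup easier to picture, here is a hypothetical sketch of the “choose among 10 candidates” evaluation: a brain-signal segment and each candidate speech segment are mapped into a shared embedding space, and the candidate with the highest cosine similarity wins. The random-projection “encoders” are stand-ins for the trained networks in the actual study.

```python
# Hypothetical sketch of matching one MEG segment against 10 candidate speech segments
# via a shared embedding space. The "encoders" are random projections used only to make
# the example runnable; the real study uses trained neural networks.
import numpy as np

rng = np.random.default_rng(0)


def embed(x: np.ndarray, projection: np.ndarray) -> np.ndarray:
    """Project a signal into the shared embedding space and L2-normalize it."""
    z = projection @ x
    return z / np.linalg.norm(z)


# Stand-in encoders for brain signals and speech (assumed shapes, not from the paper).
meg_encoder = rng.normal(size=(64, 300))     # 300 = flattened MEG features
speech_encoder = rng.normal(size=(64, 480))  # 480 = flattened audio features

meg_segment = rng.normal(size=300)
candidates = [rng.normal(size=480) for _ in range(10)]  # the 10 best-fitting options

brain_emb = embed(meg_segment, meg_encoder)
scores = [float(brain_emb @ embed(c, speech_encoder)) for c in candidates]
best = int(np.argmax(scores))
print(f"Predicted segment: candidate #{best} (cosine similarity {scores[best]:.3f})")
```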

Australia will get a site patrol robot 

Stealth Technologies and Honeywell have teamed up to bring a patrol robot for secured facilities to market. The fully autonomous vehicle will be able to cruise the grounds and stream video back to base while testing microwave, photoelectric, and electromagnetic security sensors along the way. The robotic security guard runs for up to eight hours on lithium-ion batteries and uses computer vision to recognize faces and license plates. So far, the solution has only been tested at a correctional facility in Western Australia. Its developers now plan to give the system a full trial run and then start selling it to telecom and defense customers in Australia and New Zealand. Read more here.

Artificial intelligence for architects

We keep tracking the progress of the Midjourney and DALL-E 2 neural networks. This time our focus is on “text-to-image” – an AI-enabled technology that converts text into images and is being actively adopted by leading architecture firms. Neural networks trained on billions of images and their textual descriptions can generate breathtaking visuals and inspire creative professionals. Architects and designers use these tools early in their projects to quickly develop novel visual concepts for buildings and test how they fit into the urban environment. For example, HDR, an international architecture firm, used AI to design a new building in Ontario: a neural network studied local cultural heritage sites and generated sketches of the new building in the style of those landmarks. Read this article for more examples and spectacular images.
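For readers who want to try the text-to-image workflow themselves, here is a minimal sketch using the open-source Stable Diffusion model through Hugging Face’s diffusers library as a stand-in for Midjourney or DALL-E 2 (which are accessed through their own interfaces); the prompt and file name are purely illustrative.

```python
# Minimal text-to-image sketch with the open-source Stable Diffusion model via the
# diffusers library. The prompt is an illustrative architectural concept, not taken
# from any of the projects mentioned above.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")  # a GPU is assumed; on CPU, drop torch_dtype and use "cpu"

prompt = (
    "concept sketch of a public library facade inspired by local heritage "
    "architecture, integrated into a dense urban street, watercolor style"
)
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("architecture_concept.png")
```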

Canada runs AI pilot to detect distracted drivers

The University of Alberta in Edmonton is testing an AI-powered system to detect drivers who use mobile phones behind the wheel. The solution captures images through the windshield and detects moments in the video stream when a driver is distracted by a smartphone. The project is still in the pilot phase and is not yet being used by the traffic safety department; for now, the researchers are using it mainly to gauge the scale of the problem. In the future, however, the pilot may help determine whether AI can improve traffic safety.
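As a rough sketch of how a frame-level “phone in view” check could work, the hypothetical example below runs a general-purpose COCO-pretrained detector from torchvision on a windshield frame and flags it when a cell phone is detected with sufficient confidence. It is not the University of Alberta’s system, just an illustration of the kind of detection involved.

```python
# Hypothetical frame-level check for a visible cell phone using a COCO-pretrained
# Faster R-CNN from torchvision (not the actual pilot system described above).
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

COCO_CELL_PHONE = 77  # "cell phone" label id in the 91-class COCO label list

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()


def frame_shows_phone(frame: Image.Image, threshold: float = 0.6) -> bool:
    """Return True if a cell phone is detected in the frame above the score threshold."""
    with torch.no_grad():
        predictions = model([to_tensor(frame)])[0]
    for label, score in zip(predictions["labels"], predictions["scores"]):
        if label.item() == COCO_CELL_PHONE and score.item() >= threshold:
            return True
    return False


# Usage with a hypothetical captured frame:
# frame = Image.open("windshield_frame.jpg").convert("RGB")
# print(frame_shows_phone(frame))
```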
