Bee Protector, Ocean Explorer and English Tutor


In the new issue of the #InfocusAI digest, we tell you how artificial intelligence assists in studying ocean currents, and how it is going to help fight Asian hornets in the UK, teach children English in China and operate robots in South America. We also explain why AI is not always able to judge rule violations correctly and why humans are “at fault”.

AI-focused digest – News from the AI world

Issue 18, 4–18 May 2023

US researchers found a better way to study changes in ocean currents

Researchers at the Massachusetts Institute of Technology, together with statisticians at Columbia University and oceanographers at the University of Miami and the University of California, have developed a new ML model to assess and predict changes in ocean currents more accurately. The standard statistical model currently used for these purposes, based on a Gaussian process, reconstructs currents from GPS data transmitted by buoys and identifies zones where sea water diverges, but not accurately enough, as it rests on incorrect assumptions about water motion. The new model makes predictions by analysing buoy data together with prior knowledge of hydrodynamics that reflects the physics of ocean currents. It requires only a modest amount of additional computation, yet it predicts currents and identifies divergences more accurately than the traditional model. This will help experts to better assess buoy data and to monitor the transport of biomass (such as Sargassum seaweed), carbon, plastics, oil and nutrients in the ocean more efficiently. Learn the details and access the academic paper on the MIT news website.
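For intuition, here is a minimal Python sketch of the general idea of reconstructing currents with Gaussian processes: fit GPs to buoy velocity observations, then probe the fitted field for divergence. The synthetic flow, the RBF kernel and the finite-difference divergence estimate are all illustrative assumptions; the researchers' actual model instead builds the physics of ocean currents into the prior.

```python
# A minimal, illustrative sketch (not the authors' model): fit a Gaussian
# process to simulated buoy velocity data, then estimate divergence on a
# grid. The synthetic current field and RBF kernel are assumptions made
# for demonstration; the paper's model encodes hydrodynamic priors instead.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Synthetic "buoy" observations: positions and current velocities (u, v).
X = rng.uniform(-1, 1, size=(200, 2))
u = -X[:, 1] + 0.1 * rng.standard_normal(200)   # toy rotational flow
v = X[:, 0] + 0.1 * rng.standard_normal(200)

kernel = RBF(length_scale=0.5) + WhiteKernel(noise_level=0.01)
gp_u = GaussianProcessRegressor(kernel=kernel).fit(X, u)
gp_v = GaussianProcessRegressor(kernel=kernel).fit(X, v)

# Estimate divergence du/dx + dv/dy by finite differences on a grid.
h = 1e-3
grid = np.stack(np.meshgrid(np.linspace(-0.9, 0.9, 20),
                            np.linspace(-0.9, 0.9, 20)), -1).reshape(-1, 2)
dudx = (gp_u.predict(grid + [h, 0]) - gp_u.predict(grid - [h, 0])) / (2 * h)
dvdy = (gp_v.predict(grid + [0, h]) - gp_v.predict(grid - [0, h])) / (2 * h)
divergence = dudx + dvdy
print("max |divergence|:", np.abs(divergence).max())  # ~0 for this toy flow
```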

Researchers found out why AI and humans often judge rule violations differently

Training an ML model to reproduce human judgements about rule violations requires a modified approach to labelling data: that is the conclusion reached by an international team of scientists studying how artificial intelligence draws its conclusions and makes predictions. Their findings and experiments are reported in the Science Advances journal. Read the news about it on the MIT website.

Currently, normative conclusions from AI and humans diverge very frequently. The reason is that ML models are often trained on data with descriptive labels. In other words, people are first asked to identify the factual features of a text or image: whether the text contains profanity, or whether the dog in the photo looks aggressive. The artificial intelligence is then trained on these labels to imitate normative human judgements: whether the text complies with the platform's language policy, whether the law on keeping aggressive animal breeds is violated, and so on. Models trained to “deliver judgements” or make predictions on this principle often fail to reproduce human logic: the AI judges rule-breaking more strictly than humans do. This divergence occurs because people label data differently depending on the task at hand. They are stricter when applying descriptive labels to an object or phenomenon, even if those labels signal a rule violation, while their judgements are more lenient when the task is to apply normative labels, such as whether the rules have been breached. In the example of aggressive dogs, the discrepancy in labelling was as high as 20%. The researchers concluded that correct labelling is critical for model training in such cases: it is more effective to train AI to make normative judgements on datasets labelled by people who answered whether a given phenomenon complies with certain rules. Descriptive datasets are not appropriate for this purpose.
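To see why the choice of labels matters, here is a toy Python sketch (not the study's experiment): we simulate annotators who flag a factual feature more readily than a rule violation, train the same classifier on each label set, and compare how often each model flags items. All thresholds and numbers are invented for illustration.

```python
# A toy illustration (not the study's experiment) of how label choice
# shifts a model's judgements. We simulate annotators who flag a factual
# feature (descriptive) more readily than a rule violation (normative),
# train the same classifier on each label set, and compare flag rates.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.standard_normal((5000, 5))          # stand-in item features
score = X @ np.array([1.0, 0.8, 0.5, 0.0, 0.0])

# Descriptive annotators use a lower threshold than normative ones,
# mirroring the leniency gap the study observed in human labelling.
y_descriptive = (score > 0.0).astype(int)   # "is the feature present?"
y_normative = (score > 0.7).astype(int)     # "is the rule violated?"

clf_d = LogisticRegression().fit(X, y_descriptive)
clf_n = LogisticRegression().fit(X, y_normative)

X_test = rng.standard_normal((2000, 5))
print("flag rate, descriptive-trained:", clf_d.predict(X_test).mean())
print("flag rate, normative-trained: ", clf_n.predict(X_test).mean())
# The descriptively trained model flags far more items as violations.
```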

South American scientists proposed a new Deep Q-Network-based method for controlling robots

Researchers from Ecuador and Chile have proposed a new approach to building hand gesture recognition (HGR) systems for use in human-machine interfaces. Their method is based on reinforcement learning, more specifically the Deep Q-Network (DQN) algorithm, which is used to classify electromyography (EMG) and inertial measurement unit (IMU) signals from Myo Armband sensors. The HGR system developed with this method achieves an accuracy of 97% in classification and 88% in gesture recognition, exceeding the results of other approaches described in the scientific literature. The system has already been tested on two robotic platforms: a helicopter test-bench and the UR5 robot. Experimental results show that the proposed DQN-based HGR system controls both platforms effectively, with a fast and accurate response. You will find more about the architecture and components of the system in the Nature journal.
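The following sketch illustrates the core idea in Python, under heavy assumptions (synthetic features instead of real EMG/IMU windows, an invented network and hyperparameters; it is not the paper's implementation): classification is framed as reinforcement learning, where the agent's action is a gesture label and the reward signals whether the guess was correct.

```python
# A minimal sketch of the idea, not the paper's implementation: treat
# gesture classification as RL, where the agent's action is a gesture
# label and the reward is +1/-1 for a correct/wrong guess. Synthetic
# features stand in for Myo Armband EMG/IMU windows; the network size
# and hyperparameters are assumptions.
import torch
import torch.nn as nn

N_FEATURES, N_GESTURES = 16, 5
q_net = nn.Sequential(nn.Linear(N_FEATURES, 64), nn.ReLU(),
                      nn.Linear(64, N_GESTURES))  # Q(state) -> per-action value
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)

# Synthetic dataset: each gesture class clusters around its own mean vector.
means = torch.randn(N_GESTURES, N_FEATURES)
labels = torch.randint(0, N_GESTURES, (4000,))
states = means[labels] + 0.5 * torch.randn(4000, N_FEATURES)

eps = 1.0
for step in range(4000):
    s, y = states[step], labels[step]
    # Epsilon-greedy action selection over gesture labels.
    if torch.rand(()) < eps:
        a = torch.randint(0, N_GESTURES, ())
    else:
        a = q_net(s).argmax()
    r = 1.0 if a == y else -1.0          # reward for the guess
    # One-step episode: the target Q-value is just the reward.
    q = q_net(s)[a]
    loss = (q - r) ** 2
    opt.zero_grad(); loss.backward(); opt.step()
    eps = max(0.05, eps * 0.999)         # decay exploration over time

with torch.no_grad():
    acc = (q_net(states).argmax(1) == labels).float().mean()
print(f"greedy-policy accuracy: {acc:.2f}")
```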

The UK plans to fight Asian hornets with AI

Asian hornets, which are rapidly spreading throughout Europe and killing local bees, have motivated British conservationists to team up with technology companies to find ways to control them. Pollenize, which works on projects to conserve insect pollinator populations in the UK, and computer vision system developer Encord are now developing a prototype device based on CV and ML to help detect the “outsider” hornets. Its task will be to distinguish them from local insects of similar appearance (European hornets, bumblebees, bees and others) and to promptly notify the relevant authorities of their appearance on the island, so that immediate action can be taken to destroy the intruders' nests. To appreciate the significance of the project, consider the potential scale of the disaster: the spread of Asian hornets could mean the death of one bee every 14 seconds. This Forbes article will tell you more about the problem and the project.
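Pollenize and Encord have not published their model, but a species classifier of this kind is commonly built by fine-tuning a pretrained CNN. Below is a hypothetical Python sketch along those lines; the class list, the frozen-backbone setup and the dummy batch are all assumptions made for illustration.

```python
# A hypothetical sketch of the kind of classifier such a device could run
# (not Pollenize/Encord's actual model): fine-tune a pretrained CNN to
# separate Asian hornets from similar-looking local insects.
import torch
import torch.nn as nn
from torchvision import models

CLASSES = ["asian_hornet", "european_hornet", "bumblebee", "honeybee"]

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(CLASSES))  # new class head

# Freeze the backbone; train only the classification head.
for p in model.parameters():
    p.requires_grad = False
for p in model.fc.parameters():
    p.requires_grad = True

opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch (stand-in for real
# labelled photos of insects at the trap).
images = torch.randn(8, 3, 224, 224)
targets = torch.randint(0, len(CLASSES), (8,))
loss = loss_fn(model(images), targets)
opt.zero_grad(); loss.backward(); opt.step()

# At inference, an "asian_hornet" prediction would trigger an alert.
model.eval()
with torch.no_grad():
    pred = model(images[:1]).argmax(1).item()
print("detected:", CLASSES[pred])
```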

An AI-powered smartphone to be released in China

Xiaodu, a subsidiary of Chinese tech giant Baidu, is set to launch a smartphone with artificial intelligence designed specifically for school students next week, the South China Morning Post reports. The device will help with school assignments and with learning English: for example, it can scan essays and suggest improvements. Curiously, these functions are based not on Ernie, Baidu's most advanced Chinese language model, but on an LLM that Xiaodu designed specifically for learning. The phone also includes features for parents: location tracking, detection of unhealthy usage habits (such as a child watching videos or reading while lying down in dim light), a contact whitelist, a blue light filter and others. In China, smart products for children are now a promising and highly lucrative area for tech companies, as demand from parents is enormous, and the range of devices offered to young users is extensive, from the familiar smartwatches to pens that translate scanned text and robots that teach chess.
