Logic vs. Bias and AI Drawing Thoughts


In this issue of #InfocusAI, you will learn how Japanese scientists trained a neural network to generate images from brain fMRI, what method of combating bias in language models was proposed at MIT, and how a computer vision application from Cambridge researchers will help monitor the health of forests. We will also touch on fraud that uses speech synthesis and tell you which AI services Russian employers now expect job candidates to be able to work with.

AI-focused digest – News from the AI world

Issue 13, 22 February – 9 March, 2023

Logic to help overcome bias in language models

Scientists at MIT’s Computer Science and Artificial Intelligence Laboratory wondered how to make language models less biased. This is one of the main problems of LLMs, acknowledged even by the language models themselves: ChatGPT, for example, honestly reports that it can be biased because it is trained on data that reflects existing stereotypes in society. The researchers suggested that logic could help overcome this drawback. They trained their language model to predict the relationship between sentences based on context and semantic meaning: whether a phrase follows from the previous one, contradicts it, or is neutral with respect to it. This helps avoid situations where, for example, the word “doctor” is automatically assumed to be masculine even when there is nothing in the text to support that assumption. MIT’s logic-aware language model scored 90% on the iCAT test, which “measures” fairness, while other language-understanding models score between 40% and 80%. More can be found on the institute’s news website.
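At its core, this is the classic natural language inference task: deciding whether a hypothesis is entailed by, contradicts, or is neutral with respect to a premise. Below is a minimal sketch of that task using an off-the-shelf NLI model from the Hugging Face transformers library; the model choice and example sentences are our own illustrative assumptions, not MIT’s actual model.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Off-the-shelf NLI model, used here purely for illustration.
model_name = "roberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

premise = "The doctor finished the night shift."
hypothesis = "He finished the night shift."

# Encode the sentence pair and score entailment / neutral / contradiction.
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
probs = logits.softmax(dim=-1)[0]

for i, p in enumerate(probs):
    print(f"{model.config.id2label[i]}: {p:.2f}")

# A logic-aware model should treat the gendered hypothesis as neutral,
# since nothing in the premise fixes the doctor's gender.
```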

AI learns to generate images based on brain fMRI

Scientists from Osaka University in Japan trained the Stable Diffusion neural network to generate images not only from a text description but also guided by human brain activity recorded with fMRI. This required a sizable database of fMRI recordings: participants’ brain activity was scanned while they viewed thousands of different photos. Text descriptions of those photos were then linked to the brain scans taken while viewing them, and the model was further trained on this data. You can find out how well the neural network copes with the task in this preprint, and then join the discussion of the dangers and benefits of the technology on any social network, where heated debates on the topic have been going on for several days.
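At a high level, the approach maps fMRI voxel activations into the conditioning spaces of Stable Diffusion, so that a brain scan can stand in for a text prompt. Here is a minimal, hypothetical sketch of that kind of mapping, a ridge regression fitted on synthetic data; the array shapes, embedding size and regression choice are assumptions for illustration, not the authors’ exact pipeline.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Hypothetical shapes: 1,200 scans, 5,000 fMRI voxels,
# 768-dimensional text-conditioning embeddings (CLIP-like).
rng = np.random.default_rng(0)
fmri = rng.normal(size=(1200, 5000))      # voxel activations per viewed photo
text_emb = rng.normal(size=(1200, 768))   # embedding of each photo's caption

X_train, X_test, y_train, y_test = train_test_split(
    fmri, text_emb, test_size=0.2, random_state=0
)

# Linear decoder: predict the conditioning embedding from brain activity.
decoder = Ridge(alpha=10.0)
decoder.fit(X_train, y_train)

# A predicted embedding could then be fed to a diffusion model's
# conditioning input in place of a real text prompt.
pred = decoder.predict(X_test[:1])
print(pred.shape)  # (1, 768)
```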

Cambridge scientists develop a computer vision application to monitor forest health

Scientists from the University of Cambridge have developed a computer vision application for automatically measuring tree trunk diameter, an important indicator for monitoring forest health and carbon sequestration. Such measurements are usually made either manually, which takes a long time, or with expensive LiDAR sensors. The new application is based on an algorithm that determines the diameter using the relatively inexpensive sensors already built into many mobile phones to support augmented reality features. It works both in managed forests, where trees stand straight and at regular intervals, and in more natural ecosystems with curved trunks and low-hanging branches. Read more about the development in the scientific article here.
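As a rough illustration of the geometry involved (not the Cambridge algorithm itself), once an AR framework reports the distance to the trunk and the trunk’s width in the camera image, the diameter can be estimated with basic trigonometry. The function and parameter names below are hypothetical.

```python
import math

def estimate_trunk_diameter(distance_m: float,
                            trunk_width_px: float,
                            image_width_px: float,
                            horizontal_fov_deg: float) -> float:
    """Rough trunk-diameter estimate from an AR distance reading.

    distance_m         -- camera-to-trunk distance reported by the AR framework
    trunk_width_px     -- width of the trunk in the image, in pixels
    image_width_px     -- total image width in pixels
    horizontal_fov_deg -- camera's horizontal field of view
    """
    # Angular width subtended by the trunk in the image.
    angle = math.radians(horizontal_fov_deg) * (trunk_width_px / image_width_px)
    # Simple geometry: diameter is roughly 2 * distance * tan(angle / 2).
    return 2.0 * distance_m * math.tan(angle / 2.0)

# Example: trunk 3 m away, spanning 120 of 1920 pixels, 60-degree FOV.
print(round(estimate_trunk_diameter(3.0, 120, 1920, 60.0), 3), "m")
```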

AI falls into the hands of scammers

This week, The Washington Post raised the alarm about the growing number of scammers using speech synthesis. The criminals rely on the standard telephone fraud scenario: “your loved one is in trouble and needs money.” To make the message sound more convincing, they use text-to-speech services available online and tune the generated voice so that it resembles the voice of the victim’s loved one. To “clone” a voice with this technology, a short voice sample taken from social networks is enough, the newspaper writes, citing Hany Farid, a professor at the University of California, Berkeley. Even without speech synthesis, however, this type of fraud is flourishing: in 2022, more than 36,000 cases were registered in the United States in which attackers posed as friends and relatives to extract money. Technology has simply made it a little easier. The best defense here is one’s own vigilance.

Demand for skills to work with ChatGPT and generative AI services grows in Russia

In Russia, over the past six months, the number of vacancies requiring skills in working with neural networks has increased by 62%, the Vedomosti newspaper reports, citing a representative of the job search service hh.ru. Demand for ChatGPT specialists is growing fastest: in the last month alone, the number of vacancies mentioning OpenAI’s much-hyped brainchild has grown 13-fold. Job requirements also increasingly list skills in working with the DALL-E, Midjourney and Stable Diffusion generative models, as well as with Colorize, an application for colorizing black-and-white photos. Most of the vacancies mentioning these services are in the IT and finance sectors.
