AI Governance and OpenAI Competitors’ Alliance


We’re happy to bring you the next issue of our #InfocusAI digest. This time, we talk about a newly formed alliance to promote open AI, MIT’s recommendations on the governance of AI, and the institute’s latest innovation for improving human interaction with artificial intelligence. You will also learn about a Chinese LLM tailored for Southeast Asia and a neural network that will help teach programming at Russian universities.

AI-focused digest – News from the AI world

Issue 31, November 30 – December 14, 2023

MIT released a set of policy briefs on the governance of AI

This week, the Massachusetts Institute of Technology began releasing a series of policy briefs on the governance of artificial intelligence. The goal is to help the U.S. government develop an effective regulatory system for the AI industry, one that lets the field thrive while minimising risks and increasing benefits to society, MIT News reports.

The series consists of four papers. The key one is titled “A Framework for U.S. AI Regulation: Creating a Safe and Thriving AI Sector.” The researchers argue that the AI field can, in part, be regulated by existing organisations and laws. However, the purpose and intent of different AI tools must be clearly defined, so that new rules can be created and existing ones applied according to the purpose of a particular AI application. For example, if an AI impersonates a doctor, making a diagnosis and prescribing medicine, this should be as illegal as a human practising medicine without a licence. The scientists also call for the creation of a self-regulatory organisation that would stay in constant contact with the technology industry in order to react as quickly as possible to changes and progress in it. Additionally, the paper discusses how responsibility should be distributed between the creators of general-purpose tools (which include language models), the providers of AI solutions built on them, and consumers.

The other three briefs focus on generative artificial intelligence. They address the labelling of AI-generated content, the risks of LLMs and the technological innovations to regulate them, and the impact of GenAI on the labour market along with measures to help develop this technology without compromising people’s interests. All the materials are published here.

MIT developed an automated system for human-AI team training

MIT researchers, in collaboration with the MIT-IBM Watson AI Lab, have developed an automated system that teaches users how to interact with artificial intelligence more effectively. For example, if a radiologist is using an AI model to analyse X-rays, the system can train them to determine when the AI assistant’s conclusions can be trusted and when it is best to ignore them. In short, it works like this:

1. The system collects data and identifies instances when a human specialist trusts the model’s advice but should not, because the advice is wrong.

2. It then automatically learns the rules of collaboration with AI and describes them in natural language.

3. Based on those rules, an adaptation programme with training exercises is created. While working through them, the specialist learns to better recognise situations in which the model should or shouldn’t be trusted, based on feedback about their own decisions and the AI’s.

The scientists note that the system is fully automated, to the point that it learns to create training programmes from data on the human-AI interactions. It is also highly adaptable and can be used in various fields, from social media content moderation to programming and medicine. Learn more from the article on MIT News and in this research paper.
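For readers who want a more concrete picture, below is a minimal sketch in Python of the three steps described above. All names, data fields and the simple threshold heuristic standing in for the automated rule learning are our own illustrative assumptions, not the researchers’ actual implementation, which learns its rules directly from interaction data.

# Hypothetical sketch of a human-AI onboarding pipeline in the spirit of the
# MIT system described above. Names, fields and the rule heuristic are assumptions.
from dataclasses import dataclass
from typing import List

@dataclass
class Interaction:
    case_id: str
    features: dict          # e.g. {"image_quality": 0.4, "finding": "nodule"}
    ai_advice: str          # the model's recommendation
    ai_correct: bool        # ground truth: was the advice right?
    human_followed: bool    # did the specialist follow the advice?

def find_unwise_reliance(log: List[Interaction]) -> List[Interaction]:
    # Step 1: collect cases where the specialist trusted advice that was wrong.
    return [x for x in log if x.human_followed and not x.ai_correct]

def learn_rules(mistakes: List[Interaction]) -> List[str]:
    # Step 2: describe recurring failure conditions in natural language.
    # A crude hard-coded heuristic stands in here for the automated rule learning.
    rules = []
    low_quality = [m for m in mistakes if m.features.get("image_quality", 1.0) < 0.5]
    if mistakes and len(low_quality) > len(mistakes) / 2:
        rules.append("Do not rely on the AI when image quality is low.")
    return rules

def build_training_exercises(rules: List[str], log: List[Interaction]) -> List[dict]:
    # Step 3: turn the rules into exercises that give the trainee feedback
    # on their decision and on how the AI actually performed.
    return [
        {
            "case_id": x.case_id,
            "question": "Would you trust the AI's advice here?",
            "feedback": f"The AI was {'right' if x.ai_correct else 'wrong'}. "
                        f"Rule: {rules[0] if rules else 'n/a'}",
        }
        for x in log[:5]
    ]

In this toy version, the rules are derived from a single feature; the point is only to show how logged interactions can be turned into feedback-driven training exercises.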

Meta Platforms* and IBM launched an alliance to promote open AI

Meta Platforms* and IBM have launched a coalition of more than 50 tech companies and research institutions to promote a so-called open model of AI, according to The Wall Street Journal. In addition to the two initiators, the AI Alliance includes Intel, Oracle, Cornell University, the National Science Foundation and several dozen other scientific organisations and companies. They state that the goal of the alliance is “to stand behind open innovation and open science in AI”. This will give customers more alternatives and reduce the risk of becoming dependent on a single AI vendor, a risk that has become more apparent since the “November upheaval” at OpenAI. The WSJ notes that many members of the new association are strong supporters of open source, and many of them have their own developments and achievements in the AI field and want to compete with OpenAI for attention to them. “If you think the future of AI is going to be determined by two, three or five institutions, you’re mistaken. I hope that it gives more clarity and confidence that the world of open innovation is a world to bet in,” said IBM senior vice president Darío Gil, as quoted by the WSJ.

*Meta Platforms is recognised as extremist and banned in the Russian Federation.

A neural network will be used to train programmers in Russia

Scientists at Bauman Moscow State Technical University and the Moscow Institute of Physics and Technology have created a neural network to teach students programming. The tool is already being tested on one of their educational platforms, Kommersant reports. The AI service analyses the code and actions of a novice programmer, identifies which algorithms they used to solve particular tasks, and determines their level of programming knowledge and skills. Based on this analysis, the AI builds an individual learning path for the student and offers a set of consolidation exercises drawn from more than 7,000 options across 30 topics. After testing, the service is planned to be launched at other universities.

Alibaba Group launched a large language model for Southeast Asia

Damo Academy, Alibaba Group’s research unit, has launched an LLM tailored for Southeast Asian languages. The model is called SeaLLM. The South China Morning Post reports that this is the company’s first region-specific language model, and its launch confirms Alibaba’s ambition to grow in Southeast Asian markets. SeaLLM was pre-trained on Vietnamese, Indonesian, Thai, Malay, Khmer, Lao, Tagalog and Burmese data sets. A chat assistant based on the model has already been released to help businesses engage with Southeast Asian markets. Read more in this article.
