Doc Producers vs GenAI and LLM Harmlessness Test


While we were watching how Sam Altman’s misadventures with OpenAI would end and wondering about the nature of Project Q*, a lot of interesting things happened in the world of AI. For example, Chinese scientists have conducted a harmlessness test for LLMs, ETH Zurich has figured out how to speed up neural networks by 300 times, and American researchers have found a way to train models more efficiently through crowdsourcing. Also, Stability AI has opened up its AI video model to researchers, and documentary filmmakers have called for limits on generative AI use in the film industry. All this – in the new issue of #InfocusAI.

AI-focused digest – News from the AI world

Issue 30, November 16-30, 2023

LLMs fared poorly in China’s harmlessness test

Scientists at the Shanghai Artificial Intelligence Laboratory, together with Fudan University (China), have developed a new approach to testing large language models for alignment with human values and harmlessness. Their benchmark, called FLAMES, evaluates AI on five dimensions: fairness, legality, data protection, morality and safety. Notably, the morality dimension incorporates traditional Chinese values such as harmony and benevolence. The results of all 12 models tested leave much to be desired, especially on the safety and fairness dimensions. Still, the leader in the FLAMES benchmark turned out to be Claude with a score of 63.08%. In second place is China’s InternLM-20B with 58.2%, while GPT-4 scored 39.04%. Read more about the evaluation methodology and the LLMs’ test results in this article.

ETH Zurich developed a technique that can accelerate language models by 300x

Researchers at ETH Zurich have found a way to significantly accelerate neural networks, in particular LLMs. Their idea is to replace traditional feedforward layers with fast feedforward (FFF) layers. FFF relies on a mathematical operation called conditional matrix multiplication (CMM). Unlike the traditionally used dense matrix multiplication (DMM), where all input parameters are multiplied by all neurons in the network, CMM selects only the relevant neurons for each computation, so only a handful of neurons are needed per input. This significantly reduces the computational load and makes language models faster and more efficient. In experiments with BERT, the researchers achieved a 99% reduction in computation. Their own CPU- and GPU-based implementation of CMM resulted in a 78x increase in inference speed, but they believe that with better hardware a 300x improvement is achievable. More information is available on VentureBeat and in this scientific paper.
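To make the idea more concrete, here is a toy sketch in PyTorch (not the ETH Zurich implementation; all names and shapes are illustrative assumptions): hidden neurons are arranged as a binary tree, each input descends the tree, and only the handful of neurons on its path contribute to the output, in contrast to a dense layer that touches every hidden neuron.

```python
import torch

class ToyFastFeedforward(torch.nn.Module):
    """Toy illustration of conditional matrix multiplication (CMM):
    hidden neurons form a binary tree and each input only "visits" the
    neurons on its path down the tree, instead of every hidden neuron
    as in a dense feedforward layer (DMM). Illustrative only."""

    def __init__(self, width: int, depth: int):
        super().__init__()
        self.depth = depth
        n_nodes = 2 ** depth - 1                        # all tree nodes = hidden neurons
        self.node_in = torch.nn.Parameter(torch.randn(n_nodes, width) * 0.02)
        self.node_out = torch.nn.Parameter(torch.randn(n_nodes, width) * 0.02)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, width)
        idx = torch.zeros(x.shape[0], dtype=torch.long, device=x.device)  # start at the root
        y = torch.zeros_like(x)
        for _ in range(self.depth):
            w_in = self.node_in[idx]                      # weights of the chosen neurons only
            act = (x * w_in).sum(dim=-1, keepdim=True)    # one dot product per input
            y = y + torch.relu(act) * self.node_out[idx]  # accumulate that neuron's output
            # route to the left (2i+1) or right (2i+2) child based on the activation sign
            idx = 2 * idx + 1 + (act.squeeze(-1) > 0).long()
        return y

layer = ToyFastFeedforward(width=16, depth=6)  # 63 hidden neurons, only 6 touched per input
print(layer(torch.randn(4, 16)).shape)         # torch.Size([4, 16])
```

Even in this simplified form, the per-input cost grows with the tree depth (logarithmic in the number of neurons) rather than with the full layer width, which is the source of the reported speed-ups.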

New training method with crowdsourced feedback has been invented in the US

Researchers from MIT, Harvard University, and the University of Washington have developed a new reinforcement learning approach based on crowdsourced feedback, called Human Guided Exploration, or HuGE. In traditional reinforcement learning the reward function is usually designed by experts, which is a time-consuming and complex process; HuGE instead builds it from crowdsourced feedback provided by non-expert users. It differs from other methods that rely on non-expert feedback in that its reward function directs the agent towards what it should learn, rather than telling it exactly what it must do to complete the task. “So, even if the human supervision is somewhat inaccurate and noisy, the agent is still able to explore, which helps it learn much better,” explains Marcel Torne, one of the method’s authors. More details are available on the MIT News website.
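The sketch below is a heavily simplified illustration of that separation, not the authors’ code: a simulated, sometimes-wrong annotator compares visited states, a linear preference score is fitted to those noisy comparisons, and that score only decides where the agent explores next. All the names, the 2-D toy environment and the error rate are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
GOAL = np.array([8.0, 8.0])         # unknown to the agent; only annotators "see" it

def noisy_human_preference(state_a, state_b, error_rate=0.2):
    """Simulated non-expert annotator: says which of two visited states looks
    closer to the goal, and is wrong `error_rate` of the time."""
    a_is_better = np.linalg.norm(state_a - GOAL) < np.linalg.norm(state_b - GOAL)
    return not a_is_better if rng.random() < error_rate else a_is_better

def update_goal_selector(w, pairs, lr=0.05):
    """Fit a linear 'which state looks more promising' score to noisy
    pairwise comparisons (a simple logistic preference model)."""
    for a, b, a_preferred in pairs:
        diff = a - b if a_preferred else b - a
        p = 1.0 / (1.0 + np.exp(-w @ diff))
        w = w + lr * (1.0 - p) * diff          # push the preferred state's score up
    return w

w = np.zeros(2)                      # goal-selector parameters
visited = [np.zeros(2)]              # states the agent has reached so far

for _ in range(30):
    # 1) collect a few noisy crowd comparisons over visited states
    pairs = []
    for _ in range(5):
        i, j = rng.choice(len(visited), size=2)
        pairs.append((visited[i], visited[j],
                      noisy_human_preference(visited[i], visited[j])))
    w = update_goal_selector(w, pairs)

    # 2) the crowd-trained score only decides WHERE to explore next ...
    frontier = visited[int(np.argmax([w @ s for s in visited]))]

    # 3) ... while the policy itself would be trained self-supervised on the
    #    resulting trajectories, so the noisy labels bias exploration but
    #    never directly shape the final policy.
    visited.append(frontier + rng.normal(scale=1.0, size=2))

print("closest state reached:", min(visited, key=lambda s: np.linalg.norm(s - GOAL)))
```

The key point mirrored here is the one Torne describes: even with a 20% annotator error rate, exploration is merely nudged in roughly the right direction, so noisy supervision does not corrupt what the agent ultimately learns.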

Stability AI has opened up its generative AI video model for research preview

Last week, Stability AI, the company behind Stable Diffusion, made Stable Video Diffusion, its first foundation model for generative video, available as a research preview. The developers have published the model code on GitHub and the weights for running it locally on Hugging Face, warning that none of this is for commercial use just yet. The technical specifications can be found here. Stable Video Diffusion is released as two image-to-video models capable of generating 14 and 25 frames at customisable rates from 3 to 30 frames per second. Users can also sign up for early access to the text-to-video interface, which is coming soon. The company plans to build an entire ecosystem of generative models on top of Stable Video Diffusion.
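For those who want to try it, here is a minimal sketch of running the 25-frame image-to-video checkpoint locally. It assumes you use the Hugging Face diffusers integration of the model rather than Stability’s own GitHub code; the input image, resolution, seed and frame rate are illustrative choices.

```python
import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import load_image, export_to_video

# Load the 25-frame image-to-video checkpoint (the 14-frame variant is
# "stabilityai/stable-video-diffusion-img2vid"); non-commercial use only.
pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt",
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe.to("cuda")

image = load_image("input.jpg").resize((1024, 576))   # conditioning frame
frames = pipe(
    image,
    decode_chunk_size=8,                 # decode in chunks to limit VRAM use
    generator=torch.manual_seed(42),
).frames[0]

export_to_video(frames, "generated.mp4", fps=7)       # playback rate: 3-30 fps
```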

Documentary filmmakers call to limit generative AI use in documentaries

The previous news adds fuel to the fire for the US film industry, where controversy over the use of generative artificial intelligence is ongoing. Recently, documentary producers joined the movement against the unregulated use of GenAI. In their open letter, they expressed concerns that the use of AI-generated footage in documentaries would lead to the final and irrevocable muddying of historical records: AI-generated videos on the internet could be perceived as real and reused as authentic footage in other films. To avoid this, the signatories call for rules and limits on GenAI in documentary filmmaking. Read the text of the letter in this piece by The Hollywood Reporter.
