What did MIT come up with to protect images from unauthorized modifications with neural networks? How does GPT-3 handle analogical reasoning tasks? What chatbots is Meta* (recognized as an extremist organization and banned in the Russian Federation) going to surprise social media users with? How is Hollywood’s “rebellion” against AI coming along? When will the world try an energy drink designed entirely by artificial intelligence? All this awaits you in the new issue of #InfocusAI.
AI-focused digest – news from the AI world
Issue 23, July 20 – August 3, 2023
MIT creates a tool to protect images against manipulation with neural networks
Researchers at MIT have developed a tool capable of protecting photos from modification by neural networks. The invention is called PhotoGuard. It is based on the perturbation technique: minuscule alterations in pixel values that are invisible to the human eye but detectable by AI. PhotoGuard uses two “attack” methods to protect an image: an encoder attack and a diffusion attack. In the first case, the image’s mathematical representation is changed so that the generative model perceives the image as a random entity, which makes any manipulation difficult. The second approach is easier to explain with an example. Take two pictures: one is the original, and the other is a completely different target. The diffusion attack alters the first image so that it looks unchanged to humans but resembles the target image to the AI. Any generative model trying to modify the original picture will effectively be modifying the target picture, and so the original is protected. More details, including illustrations, developers’ comments and a link to the research paper, are available on MIT News.
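To make the idea more concrete, here is a minimal, hedged PGD-style sketch of the general perturbation approach described above (not PhotoGuard’s actual code): nudge an image within a tiny pixel budget so that a generative model’s image encoder maps it close to a chosen target’s latent (a blank or random image for an encoder-style attack, another picture for a diffusion-style attack). The `encode` callable, the stand-in encoder, and all hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def perturb_toward_target(encode, image, target, eps=8/255, step=1/255, iters=200):
    """PGD-style perturbation: keep `image` visually unchanged (within an
    eps-ball in pixel space) while pushing its latent toward `target`'s latent,
    so a generative model "sees" the target instead of the original."""
    with torch.no_grad():
        target_latent = encode(target)                    # latent we want to imitate
    adv = image.clone().requires_grad_(True)
    for _ in range(iters):
        loss = F.mse_loss(encode(adv), target_latent)     # distance in latent space
        grad, = torch.autograd.grad(loss, adv)
        with torch.no_grad():
            adv -= step * grad.sign()                     # signed gradient step
            adv.clamp_(min=image - eps, max=image + eps)  # keep the change imperceptible
            adv.clamp_(0.0, 1.0)                          # stay a valid image
    return adv.detach()

# Toy usage with a stand-in "encoder" (a fixed random projection); in practice
# `encode` would be the image encoder of the generative model you want to fool.
torch.manual_seed(0)
proj = torch.nn.Conv2d(3, 4, kernel_size=8, stride=8)
image, target = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
protected = perturb_toward_target(proj, image, target, iters=50)
print((protected - image).abs().max())  # stays within the eps budget
```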
GPT-3 handles analogical reasoning tasks at the college student level
This week, researchers at the University of California, Los Angeles published an article in Nature Human Behaviour on the ability of the large language model GPT-3 to reason by analogy without special training. Analogical reasoning is a cornerstone of human intelligence: it involves finding a reasonable solution to a new problem by comparing it with a more familiar, previously solved one. The researchers showed that GPT-3 performs well on a range of analogical reasoning tasks of the kind commonly found on tests like the SAT Reasoning Test used for US college admissions; at the very least, its results are no worse than those of American college students. The AI did fail in some areas, though, in particular in tasks that involve using tools to solve physical problems. Still, the public is already accustomed to demonstrations of AI abilities comparable to human ones. More interesting is the question the authors raise in the discussion section of the article: do LLMs mimic human reasoning because they have learned from vast datasets, or do they employ a fundamentally new type of cognitive process? To learn the scientists’ hypotheses on this topic, we suggest reading the study.
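For readers unfamiliar with the task format, the small sketch below builds a letter-string analogy of the general kind used in such zero-shot benchmarks. It is illustrative only and does not reproduce the study’s actual prompts; you would send the generated prompt to the model of your choice and compare its completion with the expected answer.

```python
import string

def successor_analogy(base="abcd", source="ijkl"):
    """Build a simple 'successor' letter-string analogy and its expected answer.

    Example: if 'a b c d' changes to 'a b c e', then 'i j k l' should change
    to 'i j k m' by the same rule (advance the last letter).
    """
    def bump_last(s):
        alphabet = string.ascii_lowercase
        return s[:-1] + alphabet[(alphabet.index(s[-1]) + 1) % 26]

    prompt = (
        "Let's try to complete the pattern.\n"
        f"If {' '.join(base)} changes to {' '.join(bump_last(base))}, "
        f"what should {' '.join(source)} change to?"
    )
    return prompt, " ".join(bump_last(source))

prompt, expected = successor_analogy()
print(prompt)    # send this text to the LLM you want to test
print(expected)  # "i j k m" is the answer a human reasoning by analogy would give
```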
US film studios are hiring ML professionals over the protests of actors and screenwriters
US film studios are actively hiring artificial intelligence specialists. According to The Hollywood Reporter, Disney alone currently has half a dozen open positions related to generative AI. Apple and Amazon, which run media businesses among other things, have dozens of ML vacancies. Warner Bros. Discovery is looking for several AI specialists in its video games division and corporate sector, while Paramount is seeking a machine learning engineer for its CBS division. Sony Pictures Entertainment and Netflix are not far behind. This active recruitment of a wide range of ML specialists is taking place against the backdrop of ongoing protests by actors and screenwriters over the use of generative AI to the detriment of their rights and interests. Actors fear that studios will exploit their likenesses and voices for little or no pay, while screenwriters worry that large language models like ChatGPT will take their jobs away altogether. It will be curious to see what the Academy Awards look like as AI’s role in the film industry grows…
Meta* creates a series of chatbots with different personalities for its social networks
Meta Platforms* is planning to launch a series of AI-based chatbots imitating various personalities as early as September. According to Reuters, citing sources familiar with the company’s plans, the tech giant hopes the project will help retain users and boost engagement on its social networks. A bot mimicking Abraham Lincoln and a surfer persona offering travel advice are reportedly already being tested.
*Meta Platforms is recognized as an extremist organization and prohibited in the Russian Federation.
AI designs a new energy drink recipe in Hungary
Last week, Hell Energy Drink, a Hungarian drink manufacturer, announced the imminent arrival of the world’s first energy drink created entirely by artificial intelligence. When developing the new recipe, the AI took consumer expectations and taste preferences into account and included vitamins, amino acids and herbal ingredients. The recipe is kept strictly secret on a single computer with an advanced security system, with a copy stored in a vault in Switzerland. Incidentally, the packaging design for the new drink is also the AI’s handiwork (brainwork?): the visuals reflect the company’s brand book and the philosophy of the name. Project details are available on the Food Navigator website. By autumn, the drink will be on sale in 60 countries, so everyone can taste it and judge how much soul the neural network put into it.