In this issue of the #InfocusAI digest, we explore AI's capabilities in meme generation, discuss whether AGI could take over the world, find out how ML helps study the Antarctic, and look at why neural networks are making their way into robots.
AI-focused digest – News from the AI world
Issue 62, March 14 – 27, 2025
On average, AI turns out to be more successful at creating memes
Scientists from Germany and Sweden evaluated how well LLMs perform as assistants in meme generation, a popular form of modern art. In their experiment, one group of authors created memes without LLM assistance, while a second group could draw on the creative capabilities of artificial intelligence (GPT-4o). The third group was GPT-4o itself, creating memes with no human involvement. The researchers used crowdsourcing to assess the results, with creativity, humor, and shareability as the criteria.
The scientists discovered that:
- On average, memes created by AI alone performed better than human-only memes.
- People who used LLMs produced more memes with less time and effort, but their average scores were lower than those of the AI-only memes.
- However, among the top-rated memes, those created by humans were the most humorous.
Conclusion: While AI can boost productivity and create funny content that appeals to a broad audience, it is still inferior to human creative genius when it comes to memes.
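For readers who want to experiment with this kind of LLM assistance themselves, below is a minimal sketch of what the assisted group's workflow could look like in code. It uses the public OpenAI Python SDK and GPT-4o; the prompt wording and the helper function are illustrative assumptions of ours, not the actual setup used in the study.

```python
# Minimal sketch (not the study's pipeline): asking GPT-4o for meme caption
# ideas via the OpenAI Python SDK. Prompt wording and the function name are
# illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment


def suggest_meme_captions(topic: str, n: int = 3) -> list[str]:
    """Ask GPT-4o for n short meme caption ideas on a given topic."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "You help people write short, funny meme captions."},
            {"role": "user",
             "content": f"Suggest {n} meme captions about: {topic}. "
                        "Return one caption per line."},
        ],
        temperature=1.0,
    )
    text = response.choices[0].message.content
    return [line for line in text.splitlines() if line.strip()]


if __name__ == "__main__":
    for caption in suggest_meme_captions("working from home"):
        print(caption)
```

The human stays in the loop, as in the experiment's second group: the model only drafts candidate captions, and the author decides which to keep, edit, or discard.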
Scientists update their assessment of AGI development risks
Researchers from OpenAI, the University of California, and Oxford have recently updated an article they first published back in 2022 on alignment issues in the development of artificial intelligence, i.e. on how to reconcile AI's growing capabilities with the interests of humankind. Their main concern relates to AGI, or artificial general intelligence. They argue that if we do not change the approaches used to train AI now and keep training models as we do today, such advanced AI may learn to cheat and to pursue goals quite different from the goals and tasks set by humans, and to do so covertly, without being noticed. The updated article presents new evidence that these concerns are very real: tampering with the reward system, examples of AI trying to convince the user that a false answer is correct, attempts to evade oversight, and more. For details, please use this link. New information in the article is marked as Update (March 2025).
University of Oxford to improve learning quality together with OpenAI
The University of Oxford and OpenAI have signed a five-year collaboration agreement that gives students and faculty access to research grant funding and OpenAI tools to enhance learning and research. They will be able to use the current GPT-4o and o1 models in their work, including through ChatGPT Edu, a dedicated secure version of the chatbot for education and research. The university has also launched a pilot project to digitize and publish materials held in the Bodleian Library collections that have previously been unavailable online; in particular, the researchers will make accessible 3,500 dissertations from a wide range of disciplines published between 1498 and 1884. The agreement was signed within the NextGenAI initiative, a consortium of OpenAI and 15 leading universities and research institutions, for which the company has committed $50 million in grants and funded access to its products.
AI aids in forecasting Antarctic ice flows
AI has helped US scientists better understand how Antarctic ice behaves. Earlier forecasts of ice movement in the Antarctic relied on the assumption that Antarctic ice and laboratory ice behave similarly. In reality, however, its behavior is far more complicated than anything scientists can simulate in the laboratory, with too many variables at play. The researchers therefore built a machine learning model to analyze large-scale movements and changes in ice thickness recorded by satellite imagery and airborne radar between 2007 and 2018. They found that the constitutive models for the parts of the ice shelves closest to the continent were consistent with laboratory experiments. However, the farther the ice was from the continent, the more anisotropic it became, i.e. its physical properties differed in different directions, whereas most constitutive models assume that ice has the same physical properties in every direction. This knowledge will help researchers simulate changes in Antarctic glaciers more accurately, which is especially relevant given the challenges of global warming. For details, please visit Stanford Report.
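For context on what "constitutive model" and "anisotropic" mean here (the equations below are textbook glaciology, not formulas quoted from the study), the standard isotropic relation for ice is Glen's flow law, and allowing anisotropy essentially means letting the rate factor depend on direction:

```latex
% Glen's flow law (isotropic): strain rate from deviatoric stress,
% with a single scalar rate factor A and exponent n (typically n = 3)
\dot{\varepsilon}_{ij} = A \,\tau_e^{\,n-1}\, \tau_{ij}

% Anisotropic generalization (sketch): the scalar A becomes a
% direction-dependent fourth-order tensor, so the same stress produces
% different deformation along different directions
\dot{\varepsilon}_{ij} = A_{ijkl} \,\tau_e^{\,n-1}\, \tau_{kl}
```

Here \dot{\varepsilon}_{ij} is the strain-rate tensor, \tau_{ij} the deviatoric stress, and \tau_e the effective stress.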
Neural networks to enter the physical world
It seems that the key players in the AI market are intent on making their neural networks available not only in the virtual world but also in the physical one, by embedding them in robots. For instance, Google recently presented Gemini Robotics, a neural network built on a vision-language-action (VLA) architecture that combines computer vision, a language model, and motion control. Users can ask a robot to wash dishes or cut vegetables; the model works out how best to perform the task and finds the necessary items in the kitchen. NVIDIA, in turn, published Isaac GR00T N1, its open-source, fully customizable foundation model for humanoid robots. Developers can adapt the model's behavior to their own tasks and further train it on their own data. Major companies are entering this market on the heels of successful startups: just last month we discussed the Helix model from the startup Figure.