Elastic Batteries and the Struggle for Fairness of AI Decisions


Today, #InfocusAI covers elastic batteries for flexible electronics, coalitions for secure AI development, a new MIT study on the fairness of ML models, and the impact of generative AI on writers’ creativity.

AI-focused digest – News from the AI world

Issue 46, July 11-25, 2024

Chinese scientists have figured out how to create a fully elastic battery for flexible electronics

Chinese scientists have created a polymer electrolyte layer for fast transport of lithium ions that can stretch to 50 times its length (by 5,000%) and maintain its charge storage capacity even after 67 charging/discharging cycles. This makes it possible to design more efficient elastic lithium-ion batteries, which are needed for flexible electronics used in medicine and for the increasingly popular soft robots. Various researchers have attempted to create flexible batteries before, but all previous versions offered only moderate elasticity and were difficult to assemble. Another common problem was the loss of charge storage capacity over time, attributable to the instability of the liquid electrolyte and the weak bond between the electrolyte layer and the electrodes. The Chinese researchers eliminated these shortcomings by placing the electrolyte not in a liquid but in a polymer material fused with two flexible electrode films. To learn more, see this press release or the scientific article in ACS Energy Letters.

Randomization can improve fairness of ML models for allocating limited resources

Researchers from MIT and Northeastern University in the US have put forward the idea that randomization — adding an element of randomness to decisions derived from ML models’ predictions — can increase fairness in a number of cases involving the allocation of limited resources. Examples include ranking job candidates with AI or identifying patients for kidney transplantation. To ensure fair AI decisions in such situations, various methods are typically used, ranging from adjusting the features the model considers when making predictions to calibrating its scores. However, the scientists believe this is not enough. Consider candidate selection: several companies may use the same model to evaluate applicants, and it may consistently place the same worthy candidate at the bottom of the ranking, depriving him or her of any chance to fill a vacancy. The researchers have shown that randomization can be useful in such cases: when the same group of people constantly receives negative decisions, and when those decisions involve uncertainty, randomization may increase fairness. However, different situations require different “levels of randomization” in decision-making. To learn more about how the scientists propose to determine the required “level of randomization”, read this article on MIT News.
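To make the idea concrete, here is a minimal sketch of one simple form of randomization: a weighted lottery that selects candidates with probability proportional to the model’s score, instead of always taking the deterministic top-k. The function names and the proportional-to-score rule are illustrative assumptions, not the researchers’ actual method; the point is only that a low-scoring candidate retains a nonzero chance of selection while high scorers are still favored.

```python
import random

def deterministic_top_k(scores, k):
    # Conventional approach: always pick the same k highest-scoring
    # candidates, so the lowest-ranked person never gets a chance.
    return sorted(scores, key=scores.get, reverse=True)[:k]

def weighted_lottery(scores, k, rng=None):
    # Randomized approach (illustrative): draw k candidates without
    # replacement, each with probability proportional to their score.
    rng = rng or random.Random()
    pool = dict(scores)
    chosen = []
    for _ in range(min(k, len(pool))):
        total = sum(pool.values())
        r = rng.uniform(0, total)
        cum = 0.0
        for name, s in pool.items():
            cum += s
            if r <= cum:
                chosen.append(name)
                del pool[name]
                break
    return chosen

scores = {"alice": 0.9, "bob": 0.8, "carol": 0.1}
print(deterministic_top_k(scores, 2))          # always ['alice', 'bob']
print(weighted_lottery(scores, 2))             # varies run to run
```

Over many runs, the lottery occasionally selects the lowest-scoring candidate, which is exactly the kind of distributed chance the researchers argue can improve fairness when predictions carry uncertainty.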

Generative AI increases writers’ individual creativity but decreases content diversity

Generative AI increases writers’ creativity at the individual level but reduces the diversity of content taken collectively. Scientists from the UCL School of Management at University College London and the University of Exeter reached this paradoxical conclusion through an experiment involving 293 writers. The participants, whose task was to write short stories, were divided into three groups: the first did not use AI at all, the second was allowed to get only one idea from ChatGPT, and the third could request up to five ideas from the AI bot. The results were as follows. The stories written with access to generative AI were rated as more creative, unexpected, and even more enjoyable and useful. At the same time, the authors with lower creativity scores before the experiment showed the greatest progress thanks to AI. However, the stories written with the help of artificial intelligence turned out to be more similar to each other than those created by humans alone. Experts and policymakers responsible for supporting creativity have a lot to think about… The experiment and its conclusions are described in detail in Science Advances.

Global IT giants have joined forces in their fight for secure AI

At the Aspen Security Forum in the United States last week, the creation of a coalition for secure artificial intelligence — CoSAI (Coalition for Secure AI) — was announced. According to the new coalition’s press release, its goal is to develop methodologies, tools, guides, and standards that will help create, implement, and operate secure AI systems. Many leaders of the global technology industry have already joined CoSAI: Google, IBM, Intel, Microsoft, OpenAI, NVIDIA, PayPal, Anthropic, Cisco, Chainguard, Cohere, and GenLab.

A research laboratory in the field of AI security to appear in Russia

Continuing the topic of secure artificial intelligence: in Russia, the T-Bank AI Research Group has joined forces with the Central University to create a research laboratory for the development of sovereign and secure AI, named Omut AI. TASS reports that it will specialize in developing methods and approaches for the control and secure use of AI technologies, as well as in searching for more efficient architectures for building LLMs and multimodal models. The lab promises to open access to its research to the entire scientific and industrial community.
