AI in Finance and Finance for Development of AI

The new issue of #InfocusAI will tell you about the ability of large language models to predict public opinion and stock performance and to read interest rate signals. In addition, you will learn about the EU’s new structure for auditing AI algorithms and why Sam Altman predicts an imminent end to the era of developing ML models by scaling.

AI-focused digest – News from the AI world

Issue 16, 6-20 April 2023

Development of AI by scaling is coming to an end — Altman predicts

Last week, speaking at MIT, OpenAI CEO Sam Altman suggested that further progress in artificial intelligence will not come from creating ever-larger models; other ways will have to be found to make them better. As VentureBeat writes, a likely reason for this statement is financial, namely the exorbitant cost of creating giant models. Training a modern LLM requires hundreds of millions of dollars’ worth of computing resources, and ever more of them are needed, so expenses keep growing. Besides, demand for GPUs for AI is now so high that they are not always available when needed. Even technology giants such as Microsoft, and Elon Musk, who recently founded his own company to join the AI race, have to wait in the queue. There is, however, another explanation for Altman’s comment: many experts believe that modern models are larger than they need to be and that parameter count is a “false measure of model quality.” Future breakthroughs in AI are expected to come from improved model architectures, better data efficiency and refined algorithms.

ChatGPT can differentiate between “hawks” and “doves”, and predict stock performance from headlines

The last business week started with a discussion of a Bloomberg article about using GPT technologies in finance. Citing two recent studies, the agency reports that the hype of the past few months about the influence and capabilities of generative AI in this field is entirely justified. According to the first study, ChatGPT can decipher Federal Reserve monetary policy statements as well as humans can. In particular, it does an excellent job of identifying hawkish (interest rate hikes) and dovish (interest rate cuts) signals in Fed statements, and the bot can justify its reading as intelligently as a financial analyst. The second academic paper demonstrates ChatGPT’s ability to predict future stock price movements. The researchers prompted the bot to pretend to be a financial expert and judge, from corporate news headlines, whether the news was good or bad for the company’s stock price. The model correctly analysed the implications of the news. The Bloomberg article stresses that ChatGPT handles these tasks even without special training, but fine-tuning it on specific examples could yield even better results.
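To illustrate the kind of prompting the second study describes, here is a minimal sketch using the OpenAI Python client. The exact prompt wording, the model choice and the score_headline helper are assumptions for illustration, not the authors’ original setup.

```python
# A minimal sketch of headline-based sentiment scoring with a chat LLM.
# Assumes the openai Python package (>=1.0) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

PROMPT = (
    "Pretend you are a financial expert. Answer YES if the headline below is "
    "good news for the company's stock price, NO if it is bad news, or UNKNOWN "
    "if you cannot tell. Answer with one word.\n\n"
    "Headline: {headline}"
)

def score_headline(headline: str) -> str:
    """Ask the model whether a corporate news headline is bullish, bearish or unclear."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # model choice is an assumption
        messages=[{"role": "user", "content": PROMPT.format(headline=headline)}],
        temperature=0,  # deterministic output makes the answers easier to aggregate
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    print(score_headline("Acme Corp beats quarterly earnings estimates and raises guidance"))
```

In the study, answers of this kind were aggregated across many headlines and compared with subsequent price movements; the sketch only shows the per-headline step.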

Bloomberg plans to incorporate generative AI

This time, the news is not from Bloomberg but about Bloomberg. The financial news agency intends to integrate a GPT-like ML model into its software to automate some functions normally performed by humans. For example, Bloomberg’s internal AI model can come up with headlines by analysing an article’s text, determine whether the news is bearish or bullish for investors, suggest names of people and companies, and so on. Remarkably, the agency’s technology division did not use OpenAI; it took freely available tools on the market and adapted them to its proprietary data repository. Read more at CNBC.

Language models trained on media diets can predict public opinion

Researchers from MIT and Harvard University have shown that language models trained on media diets (the media content consumed by an individual or group) can predict public opinion. The starting point for the paper was the shortcomings of traditional surveys commonly used to examine societal attitudes towards certain phenomena. The researchers took the BERT language model as their base and adapted it to different media diets. As input, the model takes information about the news, online sources, radio and TV programs consumed by a particular group of people, and it outputs a prediction of how that group will respond to a question of interest to researchers. To test how well the model performs, the researchers compared its predictions with the results of nationally representative polls about COVID-19 conducted in the USA. The conclusion: the model is capable of predicting human judgements, and it does this particularly well for people who follow the media closely. In addition, the test results are consistent with other academic literature on how the media influence opinions. As always, this ability of language models can be used for good and for ill: to identify messages potentially damaging to people, or to manipulate public opinion, for example. But it is better to read the study in full before making your own judgement. The preprint is here
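To make the approach more concrete, here is a minimal sketch of the probing step, assuming a BERT checkpoint that has already been further pretrained on a media-diet corpus. The cloze wording and the candidate answer words are hypothetical placeholders, not taken from the paper.

```python
# A minimal sketch: probe a media-diet-adapted BERT with a fill-in-the-blank survey item
# and compare the probabilities it assigns to opposing answer words.
# Assumes the Hugging Face transformers package is installed.
from transformers import pipeline

# Swap in the media-diet-adapted checkpoint here; the base model is used only as a stand-in.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

CLOZE = "Wearing a face mask in public places is [MASK]."  # example survey-style probe
candidates = ["important", "unnecessary"]                   # hypothetical answer words

# Restrict scoring to the answer words of interest and compare their probabilities.
for result in fill_mask(CLOZE, targets=candidates):
    print(f"{result['token_str']:>12}: {result['score']:.4f}")
```

The relative probabilities of the answer words stand in for the group’s predicted response; the paper compares such scores against actual poll results.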

AI algorithms under European Commission’s control

The European Commission has created a new division, the European Centre for Algorithmic Transparency, to examine the AI algorithms that underpin online platforms and search engines. The goal is to identify and address, in a timely manner, any risks their use entails. The centre will literally look under the hood of search engines and online platforms to see, for instance, whether they facilitate the distribution of illegal or malicious content and whether they violate EU digital legislation. The list of platforms receiving special attention will be drawn up in the near future. While it is being drawn up, you can read this article on TechCrunch for details and learn about some political aspects of the initiative.
