Could an AI design a better medicine to fight cancer, solve math problems that baffle the best human minds, or help us find answers for a cleaner planet? We may not be there yet, but AI is rapidly getting better at solving complex problems. It’s also creating exciting possibilities – and unexpected risks. The year 2023 was a turning point for AI, and changes are coming even faster in 2024. Drawing on Stanford’s 2024 AI Index Report, we’ll examine the cutting edge of AI, the economic changes it’s bringing, and the serious questions its use raises.
Note: Keep in mind that these are our (decidedly human) interpretations and key takeaways. While we’re cautiously excited about the transformative potential of technologies like AI, our company’s mission is to help everyday humans not only survive, but prosper in an uncertain future.
Who’s leading the AI race?
The race to create groundbreaking AI is no longer just between researchers at top universities – it’s now dominated by the deep pockets and vast resources of industry players. In 2023, industry released a staggering 51 notable machine learning models, more than triple the 15 produced by academia. There were also 21 models released through industry-academia collaborations, a new high, demonstrating that some of the field’s top talent is drawn to the resources big tech offers.
This dominance comes at a literal cost. OpenAI’s GPT-4 system is estimated to have used $78 million worth of compute power during training, while Google’s Gemini Ultra cost a jaw-dropping $191 million. These are not investments most universities can match, leaving them at a disadvantage.
Much of this recent progress rests on “foundation models”: large, complex AI systems trained on vast amounts of data, which enables them to be adapted for a wide range of tasks. These models serve as a “foundation” for building more specialized AI applications.
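To make the idea concrete, here’s a minimal sketch of how a general-purpose pretrained model might be adapted to a narrower task using the Hugging Face Transformers library. This is our illustration, not from the report; the model name and task are arbitrary assumptions.

```python
# Sketch: adapting a pretrained "foundation" model to a specialized task.
# Model name and task are illustrative assumptions, not from the report.
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base = "distilbert-base-uncased"  # small general-purpose pretrained model
tokenizer = AutoTokenizer.from_pretrained(base)

# Attach a fresh two-class head for a specialized task (e.g. sentiment).
# The pretrained weights carry over; only the task head starts untrained.
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=2)

inputs = tokenizer("The new treatment shows real promise.", return_tensors="pt")
logits = model(**inputs).logits  # task-specific scores (before fine-tuning)
# Fine-tuning on labeled examples (e.g. with transformers.Trainer) would
# specialize the model while reusing the broad pretrained foundation.
```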
In 2023, the number of foundation models released globally more than doubled to 149. Importantly, the trend towards open-source models continued, with 65.7% of newly released models being open to the public. This allows smaller companies and researchers access to the technology, even if the best-performing models remain closed-source within large corporations.
Key Takeaways:
- Industry Dominance in AI Research: In 2023, industry released 51 notable machine learning models, compared to just 15 from academia and 21 from industry-academia collaborations. This trend reflects the vast resources big tech can dedicate to AI research and development.
- Soaring Costs of Top-Tier Models: Training costs for top AI models are skyrocketing. OpenAI’s GPT-4 is estimated to have cost $78 million in training compute, while Google’s Gemini Ultra reached a staggering $191 million. These figures highlight the financial barriers for universities and smaller players in the field.
- Rise of Foundation Models: Foundation models, trained on massive datasets for diverse applications, are a driving force behind recent AI advancements. The number of foundation models released more than doubled in 2023, to 149 globally.
- Open-Source Movement in AI: There’s a growing trend towards open-source foundation models, with 65.7% of new models being publicly accessible. This allows wider participation in AI development beyond the confines of big tech companies.
- US Leads in AI Models: The United States remains the global leader in AI models, outpacing China, the EU, and the UK. This dominance could impact future AI development and intellectual property.
What’s AI now capable of?
The year 2023 saw impressive leaps in AI capabilities, but also reminders of lingering limitations. One major advancement is the rise of multimodal AI. Systems like Google’s Gemini and OpenAI’s GPT-4 demonstrate remarkable flexibility in handling both text and images, and in some instances, even audio. This opens up possibilities for AI assistants that can not only comprehend your question, but illustrate the answer, or systems capable of generating original visuals based on textual descriptions.
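As a rough illustration of what multimodal input looks like in practice, here’s a sketch of a single request that combines text and an image, using OpenAI’s Python SDK. The model name and image URL are placeholder assumptions, and API details may change.

```python
# Sketch: sending text and an image to a vision-capable model in one request.
# Model name and image URL are placeholders; requires the openai package
# and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4-turbo",  # assumed vision-capable model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What's happening in this photo?"},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/photo.jpg"}},
        ],
    }],
)
print(response.choices[0].message.content)
```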
Despite these exciting developments, AI systems still face significant performance hurdles. While exceeding human levels on image classification and some language understanding tasks, AI struggles with complex reasoning, mathematical problems, and demonstrating true “common sense.”
Progress on harder tasks is being made using an intriguing approach: AI-generated data. Systems like Meta’s Segment Anything can generate specialized data for complex tasks like image segmentation and 3D reconstruction. This AI-created data offers a new avenue to improve performance in areas where acquiring human-labeled datasets is costly or time-consuming.
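For a sense of what this looks like in code, here’s a sketch of using the open-source Segment Anything model to automatically generate segmentation masks, which could then serve as training data for downstream models. The checkpoint filename and model type follow the segment-anything repository’s conventions, but treat the details as assumptions.

```python
# Sketch: auto-generating segmentation masks with Segment Anything (SAM).
# Requires the segment-anything and opencv-python packages plus a downloaded
# checkpoint; file names here are assumptions based on the public repo.
import cv2
from segment_anything import SamAutomaticMaskGenerator, sam_model_registry

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
mask_generator = SamAutomaticMaskGenerator(sam)

image = cv2.cvtColor(cv2.imread("scene.jpg"), cv2.COLOR_BGR2RGB)
masks = mask_generator.generate(image)  # one dict per mask: 'segmentation', 'area', ...
print(f"Generated {len(masks)} candidate masks for training data")
```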
One other important development is the rise of human evaluation in benchmarking. Now that AI systems can produce compelling text, images, and more, judging the quality of their output with automated metrics alone is increasingly difficult. Initiatives like the Chatbot Arena Leaderboard are a step towards more nuanced assessment.
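Leaderboards like Chatbot Arena turn pairwise human votes into model ratings. As a simplified sketch of the underlying idea, here’s a basic Elo-style update; the K-factor and starting ratings are conventional choices, not Chatbot Arena’s exact configuration.

```python
# Simplified Elo-style rating from pairwise human votes, the rough idea
# behind arena-style leaderboards. K=32 and a 1000 start are conventional
# assumptions, not Chatbot Arena's actual parameters.
def elo_update(rating_a: float, rating_b: float, a_wins: bool, k: float = 32.0):
    """Return updated (rating_a, rating_b) after one human-judged matchup."""
    expected_a = 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))
    delta = k * ((1.0 if a_wins else 0.0) - expected_a)
    return rating_a + delta, rating_b - delta

ratings = {"model_a": 1000.0, "model_b": 1000.0}
for a_wins in (True, True, False):  # three human votes, two favoring model A
    ratings["model_a"], ratings["model_b"] = elo_update(
        ratings["model_a"], ratings["model_b"], a_wins
    )
print(ratings)  # model_a ends with the higher rating
```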
Key Takeaways:
- Rise of Multimodal AI: Systems like Gemini and GPT-4 demonstrate remarkable abilities to process both text and images, potentially leading to new kinds of AI assistants and creative image generation tools.
- AI Performance on Benchmarks: While exceeding humans on established benchmarks like image classification, AI still struggles with complex reasoning, advanced math, and common-sense understanding.
- Closed AI Models Outperform Open Ones: Closed models outperformed open ones on 10 LLM benchmarks, by a median advantage of 24.2%. This carries important implications for AI policy debates.
- AI-Generated Training Data: Systems like Segment Anything show promise in generating specialized training data for complex tasks. This is particularly useful for image segmentation and 3D reconstruction, where human-labeled data can be expensive or slow to acquire.
- Shift Towards Human Evaluation: The increasing sophistication of AI output, especially in text and image generation, necessitates more nuanced evaluation methods. Initiatives like the Chatbot Arena Leaderboard highlight the rise of human evaluators in AI benchmarking.
Can we trust AI to be responsible?
The lack of standardized evaluation for responsible AI is a major obstacle. Leading developers like OpenAI, Google, and Anthropic primarily rely on their own internal testing, each using different benchmarks. This lack of consistency makes it difficult to systematically analyze potential risks and compare the limitations of different top AI models.
The threat of political deepfakes is escalating. Deepfakes are synthetic media, such as videos or images, generated or manipulated with AI to appear realistic. They can depict people saying or doing things they never actually said or did, making them a potent tool for spreading disinformation or damaging reputations.
AI tools capable of generating highly realistic fake videos and audio of individuals are becoming increasingly accessible and difficult to detect. Research has revealed that current AI deepfake detection methods often have inconsistent accuracy, making them unreliable. This technology has the potential to erode public trust, undermine elections, and spread harmful disinformation.
Beyond these immediate threats, AI poses other risks. Studies show that AI systems can generate outputs that contain copyrighted material, such as scenes from popular movies. Whether this constitutes copyright infringement is an emerging legal concern that has yet to be fully resolved.
Businesses are increasingly concerned about responsible AI. A global survey on responsible AI reveals that privacy, data security, and reliability are among the top AI-related concerns for companies. Despite this awareness, most organizations worldwide have so far taken only limited steps to mitigate these risks.
Key Takeaways:
- Lack of Standardization in Responsible AI: Leading AI developers like OpenAI, Google, and Anthropic use their own internal benchmarks to assess the safety and bias of their models, creating a lack of consistency that hinders comparisons and risk analysis across the field.
- Threat of Political Deepfakes: AI tools can now generate highly realistic fake videos and audio, and existing detection methods often have unreliable accuracy, raising concerns about potential manipulation of elections and spread of disinformation.
- Emerging Legal Issues with AI-Generated Content: AI models can inadvertently generate outputs containing copyrighted material, raising questions about potential copyright infringement in creative content.
- Business Concerns Regarding AI Risk: A global survey found that data security, privacy, and model reliability are among the top concerns for businesses when it comes to AI. However, despite this awareness, many companies are still in the early stages of implementing mitigation measures.
How is AI transforming the economy?
The growth of AI is already transforming the economy, and that transformation is accelerating. Despite a recent slowdown in overall AI investment, spending on the powerful new field of generative AI skyrocketed in 2023, nearly eight times the previous year’s total, to reach $25.2 billion. Big players like OpenAI, Anthropic, Hugging Face, and Inflection reported substantial fundraising rounds, indicating massive investor confidence in the potential of this technology.
The economic impact of AI isn’t just about investments. New research shows companies using AI (including generative AI) are achieving significant productivity gains. In a 2023 McKinsey survey, 42% of companies reported lower costs through AI, while 59% experienced revenue increases.
The picture in the job market is more complex. While the number of AI-related job postings declined in the U.S. in 2023, other studies show that workers using AI complete tasks faster and produce better work, with the potential to bridge skill gaps across industries. Still, caution is warranted: studies warn that AI used without proper oversight can also hurt worker performance.
Lastly, 2023 saw a surge in mentions of AI, especially generative AI, in earnings calls by Fortune 500 companies. This signifies a growing prioritization and implementation of AI across various business sectors, potentially leading to even wider adoption in the near future. The true economic transformation driven by AI is still in its early stages.
Key Takeaways:
- Generative AI Investment Boom: Investment in generative AI surged in 2023, reaching $25.2 billion, an almost eightfold increase from the previous year. This highlights the significant investor confidence in this new field of AI.
- Productivity Gains with AI: Businesses that use AI, including generative AI, report substantial productivity gains. A 2023 McKinsey survey found 42% of companies lowered costs through AI adoption, while 59% experienced revenue increases.
- AI’s Impact on Jobs: AI has the potential to both boost productivity and displace workers. Studies show AI can help workers complete tasks faster and better, but it could also displace a large number of jobs, making careful management of its integration essential.
- AI Adoption by Major Companies: Mentions of AI, particularly generative AI, skyrocketed in Fortune 500 company earnings calls during 2023. This signifies a growing prioritization and implementation of AI across various business sectors, potentially leading to even wider adoption in the near future.
Can policy keep pace with innovation?
As AI becomes more powerful, governments worldwide are taking action. The past year saw a dramatic increase in AI-focused legislation and policy initiatives. In the United States alone, the number of AI-related regulations jumped to 25 in 2023, from just a single regulation in 2016. Meanwhile, both the European Union and the United States took major steps towards comprehensive AI policy. The EU reached an agreement on the landmark AI Act, and President Biden signed an Executive Order on AI – the most significant U.S. AI policy action that year.
The legislative focus on AI is truly global. In 2023, mentions of AI in policy debates and proceedings nearly doubled. Lawmakers from 49 countries discussed AI, with at least one country on every continent addressing the topic. This highlights the rapidly growing awareness of AI’s impact, and a drive to formulate policies across the world.
An increasing number of U.S. regulatory agencies are turning their attention to AI. The number of agencies issuing AI regulations rose to 21 in 2023. This broadening regulatory landscape includes agencies like the Department of Transportation and the Department of Energy, signaling concern about AI’s impact across diverse sectors of society.
Key Takeaways:
- Sharp Rise in AI Regulations: AI regulation is surging, with the U.S. jumping from just one AI-related regulation in 2016 to 25 by 2023. This reflects a growing focus on AI governance across the globe.
- Major AI Policy Initiatives: The European Union reached agreement on its landmark AI Act in 2023, while the U.S. saw President Biden sign a significant Executive Order on Safe, Secure, and Trustworthy AI. These represent significant steps towards comprehensive policy frameworks for AI.
- Global Discussions on AI Policy: Mentions of AI in policy debates and proceedings nearly doubled in 2023, spanning lawmaking bodies from 49 countries. This highlights the international reach and urgency of establishing AI governance.
- Expanding US Regulatory Landscape: The number of U.S. regulatory agencies involved in AI oversight jumped to 21 in 2023. This includes agencies like the Department of Transportation and the Department of Energy, reflecting concerns about AI’s impact across diverse sectors.
Will the public embrace AI, or fear it?
As AI becomes more integrated into daily life, the public is paying attention. Global research shows people are increasingly aware of AI technologies and their potential impact. However, this growing awareness comes with a mix of excitement and nervousness. A 2023 survey found 52% of Americans are more concerned than excited about AI in their everyday lives, a significant increase from 38% in 2022.
This cautious sentiment is echoed across several developed nations. Despite acknowledging the potential benefits of AI products and services, citizens in Germany, the Netherlands, Australia, Belgium, Canada, and the U.S. were among the least positive about AI in 2022. However, there’s been a gradual increase in AI acceptance in each of these countries since then.
When it comes to AI’s impact on the economy, people remain skeptical. An Ipsos survey showed that globally, only 37% of people believe AI will improve their jobs, 34% expect it to boost the economy, and even fewer (32%) believe it will enhance the job market.
There are signs of demographic differences in AI optimism. Younger generations, especially Gen Z, tend to be more positive about AI’s potential to improve entertainment, health, and the economy. Additionally, those with higher incomes and levels of education typically express less anxiety about AI taking over jobs than their lower-income and less-educated counterparts.
Key Takeaways:
- Public Awareness and Cautious Optimism: Global awareness of AI is on the rise, but public sentiment is a mix of interest and concern. A 2023 survey found 52% of Americans are more concerned than excited about AI in daily life, compared to just 10% who are more excited.
- Global Cautiousness with Gradual Acceptance: While acknowledging potential benefits of AI products and services, citizens in developed nations like Germany, the Netherlands, and Australia expressed cautiousness in 2022. However, a gradual increase in acceptance has been noted since then.
- Skepticism Regarding AI’s Economic Impact: Public skepticism prevails regarding AI’s positive impact on the economy and jobs. An Ipsos survey found only 37% globally believe AI will improve their jobs, and 34% expect it to boost the economy in general.
- Demographic Differences in AI Optimism: Younger generations, particularly Gen Z, tend to be more optimistic about AI’s potential for entertainment, healthcare, and economic benefits. Additionally, individuals with higher income and education levels generally express less anxiety about AI job displacement compared to their lower-income or less-educated counterparts.
What will the future of AI bring?
Artificial intelligence has progressed from a niche academic field into a force shaping businesses, policies, and our daily lives. This transformation is only going to accelerate. The rise of generative AI, capable of creating everything from realistic images to persuasive essays, is driving both investment surges and public apprehension. The lack of universal standards for responsible AI and the growing threat of deepfakes underscore the very real risks this technology poses alongside its potential benefits.
This is just the beginning. As AI advances, we face even greater societal changes and complex ethical questions. For example, as self-driving cars become a reality, AI systems will increasingly face life-or-death decisions no human wants to program. We’ll be forced to grapple with the ethics of machine decision-making. Governments globally are working towards comprehensive AI policies, but whether regulation can keep up with the pace of innovation is an open question. Public opinion, too, will shape the future of AI adoption.
What is certain is this: The world of 2025 won’t look the same as the one in 2024. AI will continue to transform industries, create new ethical dilemmas, and quite probably, surprise us in ways we can’t yet imagine. Will it be primarily a force for bettering lives, or will its risks outweigh its promise? The next few years hold many of the answers.
Note: Data and insights for this article are based on Stanford’s 2024 AI Index Report.