
How AI Has Changed the Startup Landscape

Forrest Wright
on November 10, 2023

In the span of just a couple of years, AI startups have driven revolutionary changes across multiple industries. In healthcare, for example, startups such as Babylon Health and PathAI are using AI to detect diseases and improve patient care. In finance, AI-enabled financial planning platforms such as Trim and N26 are scaling consumer financial solutions at levels never thought possible. In transportation, startups such as Argo AI and Waymo are leveraging AI for autonomous vehicle deployment, with significant financial backing from large corporations.

Other startups have taken a more cross-sector approach, providing AI-powered services across multiple industries. The most prominent example so far is ChatGPT, the AI-enabled chatbot developed by OpenAI and launched at the end of 2022. Within just a few months of launch, the natural-language-processing model had been used by millions of people, an unprecedented scale-up in the history of startups. As the impact of AI-powered startups accelerates, many governments have correspondingly hastened their efforts to regulate the exciting yet potentially risky technology.

The State of AI Legislation

The E.U. is leading the charge in drafting a regulatory framework with the world’s most comprehensive AI legislation, the AI Act, which is making its way through the European Parliament and is expected to pass by the end of 2023.

The Act identifies numerous requirements for AI developers, depending on which of four categories their technology falls under: unacceptable risk, high risk, limited risk, or minimal risk. Examples of unacceptable-risk AI, which would be banned outright by the AI Act, include systems that manipulate behavior (such as children’s toys that encourage risky behavior), social scoring schemes, and real-time biometric identification technology. High-risk AI, such as autonomous vehicles or surveillance technology, requires government approval before proceeding. Companies and startups building limited- and minimal-risk AI will likely be able to proceed with their products without explicit government approval, provided they clearly disclose to users that content is AI-generated and that user data is not used for illegal purposes.

The AI Act has raised concerns among some European startups that such comprehensive regulation might make their products less competitive than those of startups based in less-regulated markets. A December 2022 survey of 113 E.U.-based AI startups found that 50% of those surveyed thought the AI Act would slow down innovation in Europe, while 16% were considering stopping operations or moving outside the E.U. In addition to regulatory pressure, European AI startups also face public wariness. In July 2023, a Morning Consult Pro poll of adults in five major European markets found that the vast majority of respondents felt that society was not yet ready for AI technology. The rate was highest in Germany, at 74% of respondents, followed by France at 70%. This suggests that European AI startups, in addition to managing regulatory compliance, may also need to invest in communicating their value to the public to achieve the consumer traction afforded to previous, less controversial technologies.

The U.K., meanwhile, has sought to differentiate itself from the E.U. by crafting a “pro-innovation approach” to AI legislation, as outlined in its Spring 2023 policy paper. In the paper, the U.K. government states that it does not intend to regulate AI companies and startups (at least in the near future) beyond what is currently required to operate a business in the U.K. Rather, it intends to observe AI’s progress and produce regulatory frameworks if needed in the future, while working directly with AI companies to ensure safety.

However, while intending to take a lighter touch with AI companies at home, the U.K. has simultaneously sought to lead global efforts on AI regulation, putting these goals somewhat in tension with each other. In November 2023, the U.K. led 28 governments, including the U.S., China, and the European Union, in publishing the Bletchley Declaration on AI safety. The declaration establishes the signatories’ agreement on the risks and opportunities of AI while pledging to collaborate on AI safety research, but stops short of specific international agreements on regulation.

Against this backdrop, the U.K. also announced an AI Safety Institute, which will test new models developed by AI firms before they are launched publicly. With the institute, the U.K. intends to become the leading global hub for AI startup development. Potentially launching in late 2024, the institute has secured participation from Google DeepMind, OpenAI, the Alan Turing Institute, and international institutions including the U.S.-based Artificial Intelligence Safety Institute (USAISI).

Until recently, the U.S. federal government had taken a similar “light touch” approach to regulating AI. However, in an effort to secure America’s role as a leader in setting AI standards, President Biden issued the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The order charges multiple federal agencies with adhering to its eight priorities for AI safety, which span from civil rights and public health to housing policy and national defense. Rather than creating a new federal agency to regulate AI, the order requires the federal agencies that touch the eight priority areas to address the AI risks within their areas of expertise.

The creation of the U.S. Artificial Intelligence Safety Institute (USAISI) was announced on November 1, 2023. Like the U.K.’s AI Safety Institute, the USAISI will be responsible for developing standards related to the safety, security, and testing of AI models. Additionally, it will establish standards for authenticating AI-generated content, which is quickly becoming more difficult to verify as AI models advance.

It should also be noted that several U.S. states have implemented their own AI legislation, some in response to pressing concerns that had previously gone unaddressed by the federal government. In 2022, New York passed the AI Bias Law, which prohibits employers from using AI tools to screen potential candidates unless they can demonstrate their models are free of biasing factors. In 2023, North Dakota passed House Bill No. 1361, amending its state code to clarify that personhood does not include “Environmental elements, artificial intelligence, animals, inanimate objects, corporations, or governmental entities…”.

China has differed from other major world economies by introducing legislation both earlier and more swiftly in response to specific AI developments. Its first official regulatory framework, the New Generation AI Development Plan, was released in 2017, and several additional laws have since been introduced. In 2022, China banned AI-generated media that does not contain watermarks, and it has more recently banned ChatGPT and any proxy servers hosting its services.

On August 15, 2023, the Cyberspace Administration of China (CAC) released its Generative AI Measures, the first rules of their kind regulating generative AI. The provisions include additional requirements for labeling generative AI content, as well as mandates that generative AI providers conduct security assessments and register their algorithms with the government, particularly if their services have the potential to sway public opinion or advocate subversive activities.

While relatively comprehensive compared with current Western legislation, the final version of China’s Generative AI Measures ended up less restrictive than what was initially proposed in early 2023. Some analysts have suggested this may be due to fears expressed by Chinese businesses and entrepreneurs that overly burdensome regulation could smother the nascent AI industry at a time when other world economies are accelerating their own AI development.

Towards a Better AI Policy Framework

Collectively, these competing models for AI regulation represent a spectrum of potential responses to this emergent technology. So far, China’s Generative AI Measures represent the most top-down approach, with the CAC ultimately determining what AI-generated content is acceptable. The E.U. AI Act, if implemented, would represent a cautious approach that requires close collaboration between startups and government regulators, particularly for those startups on the frontier of advanced AI models. The U.S. and U.K. have so far taken a more laissez-faire approach, in part due to a hesitancy to interfere with the technology’s potential, but both have recently unveiled strategic policies aimed at setting standards rather than drafting legal regulation.

While the fine details of each model are still being determined, the hope is that governments keep in mind the needs of AI startups operating in this environment. Lessons can be drawn from previous policy regimes, such as the U.K.’s fintech regulatory sandbox, established in 2015. Given the public risk associated with new financial products, the U.K.’s policy offered startups a sanctioned, regulated testing environment in which to try out their products on a limited set of customers before a full-scale launch. The result protected U.K. consumers while boosting London’s status as a global financial hub, as companies from both the U.K. and abroad clamored to participate; successful fintech startups such as Zilch and Bud got their start in the sandbox.

Similar approaches have worked in other subsectors, such as autonomous vehicles (AVs). Over the last several years, Singapore has amended its transportation regulations to include AVs and has allowed AV testing provided the vehicle meets several technical requirements, including fail-safe mechanisms such as a safety driver who can take control of the vehicle in certain conditions. Rather than issuing substantial new legislation to cover AVs, Singapore has elected to set rigorous but clear standards for participation, which has enabled startups such as Movel AI and nuTonomy to compete alongside established car firms.

As these examples illustrate, governments can balance public interest with innovation by setting clear standards that allow startups to participate. Making compliance too cumbersome will extinguish the hopes of many innovative startups, while benefiting larger firms with the resources to comply. It will be difficult to get this right, but the societies that do so have much to gain. Startups that emerge from jurisdictions with clear transparency guidelines will have stronger levels of trust when scaling their operations to new countries. Customers will also feel more confident in their products and services, expanding the potential market.


A version of this article was published in the Global Startup Ecosystem Report 2023. Read the report to learn more.

