
Year of the dragon: We have entered the AI age


If you were hoping that the world would get over its AI fever in 2024, you are going to be disappointed. Advancements in hardware and software are opening the floodgates to dynamic applications of generative AI, suggesting that in 2023 we only began to scratch the surface.

This year, the Year of the Dragon in the Chinese zodiac, will see widespread and strategic integration of gen AI across all sectors. With risks assessed and strategies beginning to take shape, businesses are poised to leverage gen AI not just as a novel technology, but as a core component of their operational and strategic frameworks. In short, CEOs and business leaders, having recognized the potential and necessity of gen AI, are now actively seeking to embed these technologies into their processes.

The resulting landscape is one where gen AI becomes not just an option but an essential driver of innovation, efficiency and competitive edge. This transformative shift signifies a move from tentative exploration to confident, informed application, marking 2024 as the year when gen AI transitions from an emerging trend to a fundamental business practice.

Volume and variety

A key dimension is the growing understanding of how gen AI allows for both increased volume and variety of applications, ideas and content.  

The staggering amount of AI-generated content will have ramifications that we are only beginning to discover. Given the sheer volume of this content (since 2022, AI users have collectively created more than 15 billion images, a number that previously took humans 150 years to produce), historians will have to view the internet post-2023 as something completely different from what came before, much as atmospheric atom bomb testing disrupted radiocarbon dating.

However, regardless of what gen AI is doing to the internet, for enterprises this expansion is elevating the standard for all players across all fields and signals a critical juncture: Not engaging with the technology may be not just a missed opportunity but a competitive disadvantage.

The jagged frontier

In 2023, we learned that gen AI raises the bar not only across industries but also for employee capabilities. In a YouGov survey last year, 90% of workers said that AI is improving their productivity. One in four respondents uses AI daily, and 73% of workers use AI at least once a week.

A separate study found that, with the right training, employees completed 12% more tasks 25% faster with the help of gen AI, and that overall work quality rose 40%, with lower-skilled workers making the biggest gains. However, for tasks outside AI’s capabilities, employees were 19% less likely to produce correct solutions.

This duality has given rise to what experts term the “jagged frontier” of AI capabilities. On one end of the spectrum, we witness AI’s remarkable prowess: Tasks that once seemed insurmountable for machines are now executed with precision and efficiency.

Yet on the flip side, there are tasks where AI falters, struggling to match human intuition and adaptability. These are areas marked by nuance, context and intricate decision-making, realms where the binary logic of machines (currently) meets its match.

Cheaper AI

This year, as enterprises begin to grapple with and master the jagged frontier, we will see gen AI projects start to land and become normalized. Underlying this adoption is the declining cost of training foundational large language models (LLMs), thanks to advancements in silicon optimization; that cost is estimated to halve every two years.
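Taking the halving estimate at face value (it is an estimate, not a law), the implied cost curve is simple compound decay:

```python
# Relative training cost after t years, assuming cost halves every two years.
for years in (2, 4, 6):
    print(f"after {years} years: {0.5 ** (years / 2):.1%} of today's cost")
# after 2 years: 50.0% ... after 6 years: 12.5%
```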

Despite increased demand and global shortages, the AI chip market looks set to become more affordable in 2024 as alternatives to industry leaders like Nvidia emerge from the woodwork.

Likewise, new fine-tuning methods that can grow strong LLMs out of weak ones without additional human-annotated data, such as Self-Play fIne-tuNing (SPIN), are leveraging synthetic data to do more with less human input.
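To make the mechanism concrete, here is a minimal sketch of a SPIN-style training step, an illustration of the idea rather than the authors’ reference implementation; the hyperparameters and the single-example loop are simplifying assumptions. A frozen copy of the current model plays the “opponent” that generates synthetic responses, and a DPO-style logistic loss pushes the trainable model to prefer the human-written responses from the original fine-tuning data:

```python
import copy

import torch
import torch.nn.functional as F

def seq_logprob(model, tokenizer, prompt, response, device):
    """Summed log-probability of `response` given `prompt` (single example)."""
    ids = tokenizer(prompt + response, return_tensors="pt").input_ids.to(device)
    prompt_len = tokenizer(prompt, return_tensors="pt").input_ids.shape[1]
    logits = model(ids).logits[:, :-1]              # position i predicts token i+1
    token_logps = torch.log_softmax(logits, dim=-1)
    picked = token_logps.gather(-1, ids[:, 1:].unsqueeze(-1)).squeeze(-1)
    return picked[:, prompt_len - 1:].sum()         # score response tokens only

def spin_iteration(model, tokenizer, sft_pairs, optimizer, beta=0.1, device="cpu"):
    """One SPIN-style pass over (prompt, human_response) pairs from SFT data."""
    opponent = copy.deepcopy(model).eval()          # frozen self-play opponent
    for prompt, human in sft_pairs:
        enc = tokenizer(prompt, return_tensors="pt").to(device)
        with torch.no_grad():                       # opponent self-generates
            out = opponent.generate(**enc, max_new_tokens=128, do_sample=True)
            synth = tokenizer.decode(out[0, enc.input_ids.shape[1]:],
                                     skip_special_tokens=True)
            opp_human = seq_logprob(opponent, tokenizer, prompt, human, device)
            opp_synth = seq_logprob(opponent, tokenizer, prompt, synth, device)
        # DPO-style logistic loss: widen the gap (relative to the opponent)
        # between the human response and the model's own synthetic one.
        margin = beta * (
            (seq_logprob(model, tokenizer, prompt, human, device) - opp_human)
            - (seq_logprob(model, tokenizer, prompt, synth, device) - opp_synth)
        )
        loss = -F.logsigmoid(margin)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

Each iteration re-snapshots the opponent, so the model is always trying to out-argue its previous self, which is what lets synthetic data substitute for fresh human annotation.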

Enter the ‘modelverse’

This reduction in cost is opening doors for a wider array of companies to develop and implement their own LLMs. The implications are vast and varied, but the clear trajectory is that there will be a surge in innovative LLM-based applications over the next few years.

Likewise, in 2024, we will begin to see a shift from predominantly cloud-reliant models to locally executed AI. This evolution is driven partly by hardware advancements like Apple Silicon, but it also capitalizes on the untapped potential of raw CPU power in everyday mobile devices.
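To see how low the barrier to local inference has become, consider this minimal sketch using the Hugging Face transformers pipeline to run a small open model entirely on CPU; the checkpoint named here is just one example of a small open-weights model:

```python
from transformers import pipeline

# device=-1 forces CPU execution; swap in any small open checkpoint you like.
generator = pipeline(
    "text-generation",
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # example small open model
    device=-1,
)
result = generator("In one sentence, why does on-device AI matter?",
                   max_new_tokens=60)
print(result[0]["generated_text"])
```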

Similarly, in terms of business, small language models (SLMs) are set to become more popular across large and medium-scale enterprises as they fulfill more specific, niche needs. As their name suggests, SLMs are lighter weight than LLMs, making them ideal for real-time applications and integration into various platforms.

So, while LLMs are trained on vast amounts of diverse data, SLMs are trained on more domain-specific data, often sourced from within the enterprise, making them tailored to specific industries or use cases while helping ensure relevance and privacy.

A shift to large vision models (LVMs)

As we transition into 2024, the spotlight will also shift from LLMs towards large vision models (LVMs) — particularly domain-specific ones — that are set to revolutionize the processing of visual data. 

While LLMs trained on internet text adapt well to proprietary documents, LVMs face a unique challenge: Internet images predominantly feature memes, cats and selfies, which differ significantly from the specialized images used in sectors like manufacturing or life sciences. Therefore, a generic LVM trained on internet images may not efficiently identify salient features in specialized domains. 

However, LVMs tailored to specific image domains, such as semiconductor manufacturing or pathology, show markedly better results. Research demonstrates that adapting an LVM to a specific domain using around 100,000 unlabeled images can significantly reduce the need for labeled data while enhancing performance. Such domain-specific models excel at computer vision tasks like defect detection and object localization.
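As an illustration of the general adaptation recipe (not the specific method behind the figures cited above), the sketch below continues training a generic ImageNet-pretrained backbone on unlabeled domain images using rotation prediction, a simple self-supervised pretext task; production systems use stronger objectives such as masked image modeling:

```python
import torch
import torch.nn as nn
from torchvision import models
from torchvision.transforms.functional import rotate

backbone = models.resnet50(weights="IMAGENET1K_V2")
backbone.fc = nn.Linear(backbone.fc.in_features, 4)  # predict 0/90/180/270 deg
opt = torch.optim.AdamW(backbone.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

def adapt_step(unlabeled_batch):
    """One self-supervised step on a batch of unlabeled domain images (N,3,H,W)."""
    angles = torch.randint(0, 4, (unlabeled_batch.shape[0],))
    rotated = torch.stack([rotate(img, int(a) * 90)
                           for img, a in zip(unlabeled_batch, angles)])
    loss = loss_fn(backbone(rotated), angles)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# After adapting on ~100K unlabeled domain images, swap the 4-way head for a
# task head (e.g. defect / no-defect) and fine-tune on a small labeled set.
```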

Elsewhere, we will begin to see businesses adopt large graphical models (LGMs). These models excel at handling tabular data of the kind typically found in spreadsheets or databases, and they stand out in their ability to analyze time-series data, offering fresh perspectives on the sequential data common in business contexts. This capability is crucial because the vast majority of enterprise data falls into these categories, a challenge that existing AI models, including LLMs, have yet to adequately address.

Ethical dilemmas

Of course, these developments will have to be underpinned by rigorous ethical consideration. The consensus is that we got previous general-purpose technologies (technologies with broad-based applications that profoundly impact diverse areas of human activity and fundamentally change the economy and society) very wrong. While presenting immense benefits, tools such as the smartphone and social media also came with negative externalities that permeated all facets of our lives, whether or not we engaged with them directly.

With gen AI, regulation is considered paramount to ensure past mistakes are not repeated. However, regulation may fail, stifle innovation or simply take time to come into effect, so we will see organizations, as opposed to governments, leading the regulatory charge.

Perhaps the best-known ethical quagmire gen AI introduced last year was copyright. As AI technologies advanced rapidly, they brought to the fore pressing questions about intellectual property rights. The crux of the issue lies in whether and how AI-generated content, which often draws upon existing human-created works for training, should be subject to copyright laws.

The AI/copyright tension exists because copyright law was created to prevent people from using other people’s intellectual property unlawfully. Reading articles or texts for inspiration is allowed; copying them is not. If a person reads all of Shakespeare and produces their own version, that is considered inspiration. The challenge is that AI can consume limitless volumes of data, whereas a human is constrained by time and capacity.

The copyright/copywrong debate is just one facet of a media industry in flux. In 2024, we will see the outcome of landmark, precedent-setting cases such as The New York Times v. OpenAI (though it is unclear whether this will ever go to trial or is simply a bargaining tool for the publisher), and we will witness the ways in which the media landscape adapts to its new AI reality.

Deepfakery to run rampant

In terms of geopolitics, the AI story of the year will inevitably be how this technology intersects with the biggest election year in human history. This year, more than half of the world’s population heads to the polls, with presidential, parliamentary and referendum votes scheduled in nations including the U.S., Taiwan, India, Pakistan, South Africa and South Sudan.

AI-driven election interference has already occurred in Bangladesh, which headed to the polls in January: Some pro-government media outlets and influencers actively promoted disinformation created using low-cost AI tools.

In one instance, a deepfake video (subsequently taken down) showed an opposition figure appearing to retract support for the people of Gaza, a stance that could be damaging in a Muslim-majority nation where solidarity with Palestinians runs strong.

The threat of AI imagery is not theoretical. Recent research revealed that subtle changes designed to deceive AI image-recognition systems can also influence human perception. The finding, published in Nature Communications, underscores the parallels between human and machine vision; more importantly, it highlights the need for further research into the effects of adversarial images on both people and AI systems. The experiments showed that even minimal perturbations, imperceptible to the human eye, can bias human judgments in the same direction as the errors they induce in AI models.
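For a concrete sense of what such a perturbation is, here is a minimal sketch of the fast gradient sign method (FGSM), the textbook adversarial attack; the study used its own stimuli and protocol, so this is purely illustrative:

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=2 / 255):
    """Return `image` (1,3,H,W, values in [0,1]) nudged to raise the loss."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Shift every pixel by +/- epsilon in the direction that hurts the model;
    # at epsilon = 2/255 the change is invisible to the naked eye.
    return (image + epsilon * image.grad.sign()).clamp(0, 1).detach()
```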

While a global consensus is emerging around watermarking (or content credentials) as a means to distinguish authentic content from synthetic, the solution is fraught with its own complexities: Will detection be universal? If so, how do we prevent people from abusing it by labeling work as synthetic when it is not? On the other hand, denying everyone the ability to detect such media cedes considerable power to those who retain it. Once again, we will find ourselves asking: Who gets to decide what is real?
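To ground the debate, here is a toy sketch of invisible watermarking that hides a bit pattern in an image’s least significant bits. Real content-credential schemes such as C2PA rely on signed metadata and far more robust embedding; indeed, how trivially this naive mark can be stripped or forged is exactly why the questions above are hard:

```python
import numpy as np

def embed(pixels: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Write `bits` (0/1, uint8) into the least significant bits of the image."""
    flat = pixels.flatten()                  # flatten() copies; input untouched
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(pixels.shape)

def extract(pixels: np.ndarray, n_bits: int) -> np.ndarray:
    """Read the first `n_bits` least significant bits back out."""
    return pixels.flatten()[:n_bits] & 1

image = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
mark = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)
assert np.array_equal(extract(embed(image, mark), mark.size), mark)
```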

With public trust across the world at a nadir, 2024 will be the year when the world’s biggest election year intersects with the most defining technology of our time. For better and for worse, 2024 marks the year in which AI is applied in real, tangible ways. Hold on tight.

Elliot Leavy is founder of ACQUAINTED, Europe’s first generative AI consultancy.
