
How AI will Evolve from Research to ROI in 2024


2023 was the year of enterprise AI adoption, with 55% of organizations incorporating AI into their workflows, according to a recent report from McKinsey & Co. This adoption has been led by Large Language Models (LLMs) that promised to fulfill numerous use cases across the digital workplace. However, the failure of LLMs to live up to their hype will be the story of 2024, as generic models become relegated to consumer-centric applications and enterprise users turn to smaller, more targeted AI models purpose-built to meet their business needs.

Over the past year, companies have shown their willingness to experiment with AI, but long-term success depends on AI's ability to solve specific business problems and achieve positive outcomes, and LLMs are failing to meet those expectations. There are growing concerns around the quality, accuracy, and security of these models, to the extent that companies are already prohibiting employees from using ChatGPT to shield their data, and content owners across the broader market are filing lawsuits to prevent the use of their data for model training.

This calls into question the long-term sustainability and financial viability of LLMs, which take billions of tokens to train. Without a steady influx of good, clean and cheap data, it will become increasingly difficult and expensive to build, deploy, and refresh models. Adding to this pressure is the ongoing GPU shortage, impacting the computing capabilities and running costs of AI models, with some companies having to wait almost a year to access these chips. Combined with challenges of hallucinations, data privacy, ethics, data traceability, and responsible AI, you have a perfect storm of headwinds facing LLMs going into 2024.

While some predict a slowdown in AI adoption as a result of these challenges, at Aware we predict the opposite. Instead of giving up on AI, businesses will look for more accurate, cost-effective and custom models that solve real, complex business problems.

Trends to Watch in 2024

AI's purpose shifts from research to ROI

LLMs were created by research teams exploring the capabilities of AI technology, rather than as models designed to solve specific business problems. As a result, their capabilities are broad and shallow: writing a fairly generic email or press release, for example. For the modern business, they offer limited capabilities beyond that, requiring more data to produce results with any depth.

While the AI landscape was once dominated solely by OpenAI, major names in the tech world are beginning to outperform ChatGPT with their own LLMs, including Google's new Gemini model. However, because these new large language models are so broad in capability, the text- and image-based benchmarks used to measure their prowess are just as general, ranging from simple multi-step reasoning to basic arithmetic.

If an AI company's gauge for a successful generative AI platform is how well it can solve rudimentary math problems, that benchmark has little to no relevance to the work of an enterprise organization. Realizing this, companies will increasingly prioritize AI solutions designed and built to solve real use cases and drive tangible ROI.

Companies truly recognize the value of their data

Data serves as fuel for LLMs, and that data is traditionally sourced from end-user prompts, books, articles, social media sites and more. This method of training provides the broad base of knowledge LLMs are known for, but it also raises data leakage and security concerns. Failure to manage how this data is used creates blind spots that bad actors can exploit, jeopardizing a company's place in the market. These blind spots can be found in almost any internal data reservoir. Worst of all, this accessible data could house troves of personally identifiable information, leading to serious compliance issues down the road. To address these concerns, as many as 75% of businesses worldwide are beginning to prohibit the use of LLM solutions like ChatGPT while they look for solutions that can better protect their ingested data.

This emerging drive to secure proprietary data has brought the sheer volume of enterprise data to the forefront. This data exhaust, originating from anything from collaboration data to support tickets, holds deep insight into the risks and opportunities that sit within a business. Recognizing the value of the data they hold, companies will seek to secure it by taking a "hybrid cloud by design" approach, rather than "hybrid cloud by default." Ultimately, data protection will emerge as a key pillar in a successful AI strategy, and companies will move towards prioritizing AI solutions that are trustworthy and responsible.

Beyond internal data troves, companies in 2024 will begin using AI models to proactively analyze external sources, such as Reddit, to gauge customer, employee, and general public sentiment. This data is begging to be harnessed, and companies will be looking for opportunities to extract these insights. By analyzing content on public external platforms, companies could become aware of issues months in advance. The same holds for competitive intelligence: this publicly available data can give companies a clearer read on how their biggest competitors are perceived.
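As a rough illustration of what this kind of external sentiment monitoring could look like, the sketch below scores a handful of public posts with a generic pretrained sentiment model. The hard-coded posts and the choice of model are assumptions for illustration only, not a description of Aware's platform.

```python
# Minimal sketch: scoring public posts for sentiment with a generic pretrained model.
# The posts below are hard-coded placeholders standing in for data pulled from a
# public platform such as Reddit; the model choice is an assumption, not any vendor's stack.
from transformers import pipeline

posts = [
    "The new release fixed the sync issues, support was quick to respond.",
    "Still waiting on a refund after three weeks, really disappointed.",
    "Pricing change announced today seems reasonable for what you get.",
]

# A general-purpose sentiment classifier; any comparable model would work here.
classifier = pipeline("sentiment-analysis",
                      model="distilbert-base-uncased-finetuned-sst-2-english")

for post, result in zip(posts, classifier(posts)):
    print(f"{result['label']:8} ({result['score']:.2f})  {post}")
```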

Hybrid and targeted AI models will supplement, and may overtake, LLMs

The push toward AI that drives value for enterprises will force companies to pursue hybrid strategies: the coexistence of open-source, closed-source, and custom targeted language models trained on very specific internal data sets for very specific use cases.

Each of these models offers its own benefits, with closed-source models bringing security, industry models bringing specificity, and open-source models bringing agility for those with the technical resources to use them.

This will be especially important when the company's proprietary or sensitive data requires stricter controls to meet compliance and legal obligations. Targeted models can help teams develop intellectual property around machine learning as a competitive advantage, training them on closely curated datasets while reducing reliance on large engineering teams or GPU instances that can add cost and complexity. Combined with the judicious use of larger AI models when appropriate, businesses can invest in solutions that fulfill their specific needs.
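As a rough illustration of the targeted-model approach, the sketch below trains a small, purpose-built text classifier on a closely curated internal dataset. The tiny hard-coded dataset, labels, and use case (routing support tickets) are assumptions for illustration only, not a description of any specific vendor's pipeline.

```python
# Minimal sketch: a small, targeted text classifier trained on a closely curated
# internal dataset (hard-coded placeholders here) for one specific use case,
# e.g. routing support tickets. Illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny stand-in for an internal, curated dataset.
texts = [
    "Password reset link never arrives in my inbox",
    "Invoice shows the wrong billing address for March",
    "App crashes when I open the reporting dashboard",
    "Need to update the credit card on file",
]
labels = ["account", "billing", "bug", "billing"]

# A lightweight, purpose-built model: fast to train, cheap to run, easy to audit.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(texts, labels)

print(model.predict(["Charged twice on the latest invoice"]))  # expected: ['billing']
```

The point of the sketch is the trade-off described above: a narrowly scoped model trained on curated internal data can be run without large engineering teams or GPU instances, reserving larger AI models for the cases that genuinely need them.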

The rise of MLOps to manage new workflows

MLOps businesses offer managed infrastructure for running AI and ML models, providing simpler management and reduced operating expenses. As this nascent market matures, customers will elect their preferred deployment option, and data teams will become software teams. DevOps created a movement within software development that empowers developers to run the software they wrote; the same thing is now happening in data, with products emerging to map each of the core functions and responsibilities of the MLOps movement. The most sophisticated data teams already run like software engineering teams, with product requirement documents, ticketing systems, and sprints, so they can build efficient and scalable models. And with the MLOps market projected to grow by nearly 40% by the end of the decade, end-to-end data science and MLOps functions will emerge as a hot career path.

Human-Centered Intelligence: AI makes enterprise data actionable

Data is everywhere, and it's growing exponentially. As businesses get a handle on managing their data and using it to train AI models, they must also consider what they will do with the results AI provides. Decisions, not dashboards, will be the yardstick by which AI is measured. The ability to make decisions faster and with greater certainty will fuel the future of AI as businesses continue to iterate and refine models that drive progress with precision.

As a result, humans and AI will emerge as partners within enterprise work environments, not rivals. The outputs of AI should amplify, not replace, workers' natural abilities. When organizations harness AI to augment human capabilities, they unlock tremendous value by scaling human-centered intelligence across the enterprise, driving greater efficiency and dependability from the workforce.

Before companies can derive results from AI-driven platforms, they must first determine whether their choice of data platform is a) accurate and b) cost-effective. This is where targeted AI models shine, producing over 85% fewer false negatives and false positives than LLM competitors in testing.
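To make that accuracy comparison concrete, the short sketch below shows how false positive and false negative counts would be computed when evaluating two classifiers against the same labeled test set. The labels and predictions are invented placeholders, not figures from the testing cited above.

```python
# Minimal sketch: comparing false positives / false negatives for two models
# on the same labeled test set. All numbers are invented placeholders.
from sklearn.metrics import confusion_matrix

y_true          = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]  # ground-truth labels
y_pred_targeted = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]  # purpose-built model
y_pred_generic  = [1, 1, 0, 1, 0, 1, 1, 0, 0, 1]  # general-purpose LLM baseline

for name, y_pred in [("targeted", y_pred_targeted), ("generic LLM", y_pred_generic)]:
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    print(f"{name:12}  false positives: {fp}  false negatives: {fn}")
```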

On top of greater accuracy, narrowly trained models prove more cost-effective than large-scale LLMs: the operating budget of a platform built on a model like Llama-2-13b can reach a staggering $182,000 per month, whereas targeted offerings can cost less than $1,000 per month.

ROI is everything to the enterprise. No matter how great the allure of AI may be, it has to bring a clear benefit to a workforce, not just be another line on a budget sheet. This can only come from internal data troves lying in wait to be utilized effectively.

About Aware

Conversations are at the heart of every enterprise. Aware's AI-Powered Data Platform connects workplace conversations across the enterprise and transforms daily conversations into the contextual intelligence leaders need to shape the trajectory of any business. Aware's natural language processing (NLP) and computer vision (CV) models are purpose-built to understand the unique human context of workplace conversations taking place on Slack, Teams, WebEx by Cisco, Zoom, and WorkJam. Aware equips the world's most iconic brands to apply that contextual intelligence to solve a broad set of use cases, from Experience Management and Cybersecurity to eDiscovery, supported by platform APIs that connect these insights into existing workflows for over 2,500 different applications. Using Aware, companies can finally combine a meaningful employee experience and enhanced customer experience with the operational rigor needed to thrive in the future of work. Aware was founded in 2017 and is headquartered in Columbus, Ohio.

