Artificial Intelligence
According to Statista, the global AI market is projected to reach $184 billion in 2024, with an annual growth rate of 28.46% from 2024 to 2030, resulting in a market volume of $826.7 billion by 2030. Driving these numbers is the global generative AI market, which is expected to reach $36.06 billion in 2024 and grow at a compound annual growth rate (CAGR) of 46.47% between 2024 and 2030, resulting in a market volume of $356.10 billion by 2030. A subset of generative AI is large language models (LLMs), whose global market is set to experience rapid growth, projected to surge from USD 6.4 billion in 2024 to USD 36.1 billion by 2030 at a CAGR of 33.2% over the forecast period, according to a new report by Markets and Markets.
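As a quick sanity check on these headline figures, the compound-growth arithmetic can be reproduced in a few lines of Python. This is a minimal sketch, not part of any cited report: the helper function and the six-year horizon are our own assumptions, and the inputs are the figures quoted above.

```python
# Reproduce the cited market projections via compound annual growth.
# Inputs are the figures quoted in the text (in billions of USD).

def project(base: float, cagr: float, years: int = 6) -> float:
    """Compound a starting market size at a fixed annual growth rate."""
    return base * (1 + cagr) ** years

print(f"Overall AI:    ${project(184.0, 0.2846):.1f}B")  # ~826.9 vs. cited 826.7
print(f"Generative AI: ${project(36.06, 0.4647):.1f}B")  # ~356.1, as cited
print(f"LLMs:          ${project(6.4, 0.332):.1f}B")     # ~35.7 vs. cited 36.1
```

The first two projections match the cited 2030 figures almost exactly; the LLM figure lands within rounding distance, suggesting the reports use the same simple compounding model.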
However, these projected growth rates seem dubious. For instance, training any generative AI model, including an LLM, entails certain challenges, including how to handle bias and the difficulty of acquiring sufficiently large data sets. LLMs also face some unique problems and limitations. One significant challenge is the complexity of text compared with other types of data. Think about the range of human language available online: everything from dense technical writing to Elizabethan poetry to Instagram captions. That's not to mention more basic language issues, such as learning how to interpret an odd idiom or use a word with multiple context-dependent meanings. Even advanced LLMs sometimes struggle to grasp these subtleties, leading to hallucinations or inappropriate responses.
Another challenge is maintaining coherence over long stretches. Compared with other types of generative AI models, LLMs are often asked to analyze longer prompts and produce more complex responses. LLMs can generate high-quality short passages and understand concise prompts with relative ease, but the longer the input and desired output, the likelier the model is to struggle with logic and internal consistency. This latter limitation is especially dangerous because hallucinations aren't always as obvious with LLMs as with other types of generative AI, such as image generators; an LLM's output can sound fluent and seem confident even when it is inaccurate.
Generative AI language models such as ChatGPT compound this problem by requiring substantial computational resources to perform even simple learning operations. The more complex the task, the more unpredictable the results, requiring expensive, time-consuming adjustments that still produce unexplainable results. Current generative AI solutions are generally unable to adapt their decision-making in real time, which makes their behavior erratic when they encounter new situations. Their cost and lack of adaptability significantly limit their application at the edge.
Despite continuous research and innovation from tech giants such as Amazon, Google, and IBM, these barriers and gaps persist. Many efforts to drive the adoption of AI technology into the automotive, healthcare, retail, finance, and manufacturing industries have been crippled by these problems. As a result, the projected revenue is not being generated, significantly cooling expectations.
From 2012 to 2020, driven by the development of neural networks and deep learning, a new era of increased funding and optimism about AI use cases arose. Large technology vendors such as Google used these advances to develop autonomous cars and intelligent systems such as AlphaGo, which was capable of beating highly skilled humans at complicated board games.
Even with the many AI advancements over the last decade, numerous experts and scientists in the field currently believe that the industry may be entering a third AI Winter. Their belief is driven by the bubble forming in the AI industry, where huge research projects are failing to return on their investments and their promised capabilities are falling far short of expectations. For example, Tesla recently recalled 54,000 self-driving cars because they unexpectedly ran stop signs. Many are beginning to believe that human-level intelligence cannot be achieved through either machine intelligence or deep learning, and that the industry's current path is incapable of delivering on the expectations that industry giants are promising.
Services and Hardware
Artificial intelligence services include installation, integration, and maintenance and support. This segment is projected to grow at a significant rate over the forecast period. The artificial intelligence hardware market is dominated by Graphics Processing Units (GPUs) and CPUs due to the high computing requirements of current AI frameworks. The incorporation of AI into service offerings is a growing trend, seen in deals such as Atomwise partnering with GC Pharma to offer AI-based services to help develop more effective novel hemophilia therapies. The services and hardware segments have each historically earned about a third of the market's revenue, with hardware expected to take the lead.
Software
Software solutions are leading the artificial intelligence market, with Machine Learning (ML) accounting for more than 38% of global revenue in 2020. ML is growing in conjunction with the accessibility of historical datasets. Because data storage and recovery have become more economical, healthcare institutions, government agencies, and commercial companies are building massive repositories of unstructured data, all accessible to AI. From historical rainfall trends to clinical imaging, ML can now draw on rich datasets to advance the analytical understanding of the human condition and other complex processes. ML-derived intelligence is highly marketable and can drive revenue in many businesses and industries.
Deep Learning is another category of AI software, in which Artificial Neural Networks (ANNs) combined with representation learning are used to recognize images, speech, signals, and written language. Recognition datasets are queried, added to, updated, and presented, and learning can be supervised, semi-supervised, or unsupervised. When Siri recognizes a request and then plays a song, that is an example of speech recognition and deep learning in a supervised setting, as sketched below. Deep Learning's revenue (software, hardware, and services) was 38% of global AI revenue in 2020. Deep Learning took off in 2016, when its largest user, Facebook, represented 40% of the market revenue gained. The aerospace and defense sector also contributed over 20% of market revenue, owing to its need to perform remote sensing, object detection and localization, and spectrogram analysis. Deep Learning intelligence has a limited overall market due to excessive costs and limited applicability.
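To make the supervised setting concrete, the sketch below trains a tiny artificial neural network to classify synthetic feature vectors into two made-up command classes. It is a minimal illustration in PyTorch, not any vendor's actual speech pipeline; the synthetic data, layer sizes, and class labels are all assumptions for demonstration.

```python
# Minimal supervised deep-learning sketch (illustrative only):
# a small feed-forward ANN learns to classify synthetic feature
# vectors into two hypothetical commands ("play song" vs. "other").
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic stand-in for extracted audio features: two Gaussian
# clusters, one per command class. A real system would learn
# representations of actual speech (representation learning).
n_per_class, dim = 200, 16
x0 = torch.randn(n_per_class, dim) + 1.0   # class 0: "play song"
x1 = torch.randn(n_per_class, dim) - 1.0   # class 1: "other"
X = torch.cat([x0, x1])
y = torch.cat([torch.zeros(n_per_class, dtype=torch.long),
               torch.ones(n_per_class, dtype=torch.long)])

# A small fully connected network: the artificial neural network.
model = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()            # supervised: labels drive the loss

for epoch in range(50):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)            # compare predictions to labels
    loss.backward()                        # backpropagation
    optimizer.step()

with torch.no_grad():
    accuracy = (model(X).argmax(dim=1) == y).float().mean()
print(f"training accuracy: {accuracy:.2%}")
```

The defining feature of the supervised setting is the labeled pairs (X, y): the network's errors against known answers drive learning, in contrast to unsupervised methods that must find structure in the data alone.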
Recent Developments
Automakers have realized the advantages of autonomous cars and have been aggressively researching and adopting this AI technology. For instance, Audi now utilizes deep learning algorithms in its camera-based technology to recognize traffic signs by their characters and shapes. The auto industry was the largest end user of Deep Learning until 2019. In 2020, advertising and media began adopting Deep Learning and accounted for more than an 18% share of global AI revenue. The healthcare sector is now anticipated to gain a leading share of revenue by 2028, with this segment focusing on areas such as robot-assisted surgery, virtual nursing assistants, and automated image screening.
Hindrances and Potential
According to Gartner, there are three primary AI market barriers. The first barrier is skills. Business and IT leaders acknowledge that AI will change the skillsets needed to accomplish new and existing jobs. Fifty-six percent of respondents said that acquiring the necessary skills to integrate AI into everyday work tasks will be a challenge. The second barrier is poorly defined value propositions. Forty-two percent of respondents do not fully understand AI's benefits or how it should be used. Quantifying the benefits of AI projects poses a major challenge for business and IT leaders. Some benefits can be well-defined values, such as revenue increases or time savings. Others, such as customer experience, are difficult to define and measure accurately.
The third barrier is the scope and quality of the data on which AI depends. Successful AI initiatives depend on a large volume of data from which organizations can draw intelligence about the best response to a situation. Organizations are becoming aware that without sufficient quality data, the likelihood of encountering an unknown situation rises tremendously, leading the AI to fail or not respond. Some are even beginning to realize that mimicking human activity in complex situations is not just an exercise in data gathering. To be successful, AI needs the ability to adapt to the unknown, where there is no previous data, a situation in which both Machine Intelligence and Deep Learning fail miserably.
Signal Edge Neuro-Symbolic technology takes a different approach from Machine Intelligence and Deep Learning to solving the intelligence-generation problem. Our data-driven ByteWiseIoT Teleporter produces far superior transmittal results with just a fraction of the computational requirements. Our Neuro-Symbolic Intelligence then converts field activity from all sources of information into symbols that are displayed within our COI interface. As the operator uses the COI to react to field activity, our AI learns the organization's tribal knowledge and becomes humanized. The benefits of Signal Edge's Humanized AI™ approach are both apparent and far-reaching. Humanized AI™ will be able to deliver human-level decision quality even where no previous data exists. The AI industry's weaknesses and shortcomings are where Neuro-Symbolic technology soars and where we expect high product acceptance and domination of any market segment we choose.