Generative AI Survey: Understanding Enterprise Adoption and Challenges

By Jay Matsuda

Posted Nov 19, 2024

Credit: pexels.com, an artist's illustration of artificial intelligence (AI) depicting language models that generate text, created by Wes Cockx as part of the Visualising AI project.

More than 60% of respondents in our survey have already implemented generative AI in their organizations, with a significant majority (85%) planning to increase investment in the technology over the next two years.

The survey highlights the growing importance of generative AI in the enterprise, with 70% of respondents citing improved productivity as a key benefit. This is no surprise, given that generative AI can automate repetitive tasks and free up human resources for more strategic work.

However, the survey also reveals challenges in implementing and scaling generative AI, with 60% of respondents citing data quality and bias as major concerns. This is an area where organizations need to focus on developing robust data management practices to ensure that their generative AI systems are fair and accurate.

Study

We conducted a survey experiment in May 2024 on a conversational AI platform, randomly assigning participants to either interact with a textbot or not.

The textbot performed elaboration and quality probing on selected open-ended questions, which helped to improve response quality.

We asked respondents, "What do you think is the most important issue facing the country today?" and measured several outcomes related to response quality.

These outcomes included indicators of high-quality responses, such as relevance, specificity, and explanation, as well as low-quality responses, like incompleteness, incomprehensibility, and redundancy.

Three trained human coders identified these six criteria through an inductive thematic analysis of open-ended responses and then coded a sample of open-ended responses using these indicators.
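
To make the probing mechanic concrete, here is a minimal sketch of how a textbot might decide whether to follow up on a short answer to this question. The word-count heuristic, function name, and probe wording are illustrative assumptions; the study's textbot was driven by a large language model rather than a simple rule like this.

```python
def build_elaboration_probe(question: str, answer: str, min_words: int = 8) -> str | None:
    """Return a follow-up probe for a short or vague open-ended answer.

    Illustrative only: a word-count heuristic stands in for the LLM-driven
    decision about when and how to probe.
    """
    if len(answer.split()) >= min_words:
        return None  # the answer already looks reasonably detailed; no probe needed

    return (
        f'You answered "{answer}" to the question "{question}". '
        "Could you say a bit more about why this issue matters to you, "
        "or give a specific example?"
    )


# Example usage
probe = build_elaboration_probe(
    "What do you think is the most important issue facing the country today?",
    "The economy.",
)
print(probe)
```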

A total of 1,200 participants were involved in the survey, which aimed to provide insights for large companies to make informed decisions about implementing generative AI.

Survey Methodology

Our research methodology combines elements of systematic and narrative reviews, but with a focus on a qualitative approach due to the limited number of technical works available on XAI for GenAI.

We used a narrative literature review as our main approach, which allowed us to synthesize existing works and combine essential properties of GenAI and prior XAI desiderata. This approach is similar to a meta-survey, where we looked at concepts and characteristics from pre-GenAI taxonomies and surveys to inform our work.

We searched Google Scholar for several terms between February 15, 2024, and March 15, 2024, using keywords like "survey explainability" and "XAI for generative AI." We also performed forward and backward searches to ensure we didn't miss any relevant works.

We filtered results based on title, abstract, and full-text, preferring peer-reviewed works but also including articles from arxiv.org after quality assessment. We were stricter with older works on pre-GenAI and more open to works on XAI for GenAI containing novel aspects.

System Architectures

GenAI models can function as stand-alone applications with a simple user interface, allowing textual inputs or uploads, as seen with OpenAI's ChatGPT.

A system might essentially be a single large model: a deep-learning model that takes an input, processes it through a neural network, and yields an output.

For multi-modal applications, systems that consist of a Large Language Model (LLM) and other generative models, such as diffusion models, are typically employed.

GenAI-powered systems may involve external data sources and applications interacting in complex patterns, as illustrated in Fig. 2.

An orchestration application may determine actions based on GenAI outputs or user inputs, like in ChatGPT-4 where a user can include a term like “search the internet” in the prompt.

The orchestration application is responsible for performing the web search and modifying the prompt to the GenAI model, e.g., enhancing it with an instruction like “Answer based on the following content:” followed by the retrieved web information.
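
As a rough illustration of this orchestration pattern, the sketch below augments a prompt with retrieved content before calling a model. The `web_search` and `generate` callables are hypothetical stand-ins for a search client and a GenAI model; this is not ChatGPT's actual orchestration logic.

```python
def answer_with_search(user_prompt: str, web_search, generate) -> str:
    """Orchestration sketch: detect a search instruction, retrieve content, ground the answer."""
    if "search the internet" in user_prompt.lower():
        # Strip the instruction and use the rest of the prompt as the search query.
        query = user_prompt.lower().replace("search the internet", "").strip(" :,.")
        retrieved = "\n".join(web_search(query))
        prompt = (
            "Answer based on the following content:\n"
            f"{retrieved}\n\n"
            f"Question: {query}"
        )
    else:
        prompt = user_prompt
    return generate(prompt)


# Example usage with stub callables standing in for real services
def fake_search(query):
    return [f"Snippet 1 about {query}", f"Snippet 2 about {query}"]

def fake_model(prompt):
    return f"[model answer grounded in a prompt of {len(prompt)} characters]"

print(answer_with_search("Search the internet: enterprise GenAI adoption figures", fake_search, fake_model))
```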

Research Methodology

Our research methodology is a combination of systematic and narrative reviews, but with a twist. We focused on a more qualitative approach, like a narrative literature review, because XAI for GenAI is a relatively new field with limited technical works available.

We searched Google Scholar for several terms between February 15, 2024, and March 15, 2024, combining either "survey" or "review" with terms like "explainability" and "XAI for generative AI". This was followed by forward and backward searches to ensure we didn't miss any relevant works.

We filtered results based on title, abstract, and full-text, preferring peer-reviewed works but also including articles from arxiv.org after performing our quality assessment as reviewers. We were stricter with older works and more open to newer works containing novel aspects.

Our taxonomy development process was iterative, following the approach outlined by Nickerson et al. in 2013: we alternated between conceptual iterations, which drew on synthesized concepts, and empirical iterations, which drew on primary research papers, to derive our dimensions.

Scope

Survey methodology encompasses a broad range of techniques used to collect data from a sample of a population.

The scope of survey methodology can be divided into two main categories: probability sampling and non-probability sampling.

Probability sampling involves selecting a sample based on a random process, which ensures that every member of the population has an equal chance of being selected.

Non-probability sampling, on the other hand, involves selecting a sample based on non-random criteria, such as convenience or expert judgment.
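
A toy illustration of the difference, assuming a simple list-based population and Python's standard library:

```python
import random

population = [f"person_{i}" for i in range(1000)]

# Probability sampling: simple random sample, every member has an equal chance.
random.seed(42)
probability_sample = random.sample(population, k=50)

# Non-probability sampling: convenience sample, e.g. whoever is easiest to reach
# (here, simply the first 50 members of the list).
convenience_sample = population[:50]

print(probability_sample[:5])
print(convenience_sample[:5])
```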

Survey design is a critical component of survey methodology, as it determines the overall structure and content of the survey.

The survey design should take into account the research question, target population, and available resources.

A well-designed survey can help ensure that the data collected is accurate, reliable, and relevant to the research question.

Survey implementation is the process of collecting data from the sample, which can be done through various methods, including in-person interviews, phone calls, or online questionnaires.

Survey implementation can be affected by factors such as sample size, response rate, and data quality.

Data analysis is the final step in the survey methodology process, where the collected data is examined and interpreted to draw conclusions.

Data analysis can involve various statistical techniques, such as descriptive statistics and inferential statistics.
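
As a small sketch of this step, the example below computes descriptive statistics on simulated responses and an approximate 95% confidence interval as a basic inferential measure; the data and the normal-approximation interval are illustrative assumptions.

```python
import math
import random
import statistics

# Toy survey responses on a 1-5 agreement scale (simulated data for illustration).
random.seed(0)
responses = [random.randint(1, 5) for _ in range(200)]

# Descriptive statistics summarize the sample itself.
mean = statistics.mean(responses)
stdev = statistics.stdev(responses)

# Inferential statistics generalize to the population, e.g. an approximate
# 95% confidence interval for the mean (normal approximation).
margin = 1.96 * stdev / math.sqrt(len(responses))
print(f"mean={mean:.2f}, sd={stdev:.2f}, 95% CI=({mean - margin:.2f}, {mean + margin:.2f})")
```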

Survey Findings

96% of companies view generative AI as a key enabler, with 82% anticipating rapid growth in its adoption across various departments. This widespread recognition is a testament to the potential of generative AI to drive business success.

The most significant benefits of generative AI will be seen in IT, security, and customer service, where it can improve efficiency, reduce costs, and enhance customer satisfaction. This is a significant opportunity for businesses to streamline their operations and improve the customer experience.

However, companies are also concerned about the need for enhanced security measures and data protection with generative AI applications. 95% of companies have concerns about security, and 94% have concerns about data protection. This highlights the importance of prioritizing security and data governance when implementing generative AI solutions.

Despite the potential benefits, only 10% of AI professionals expressed high confidence in their ability to develop effective in-house solutions. This highlights the challenges and limitations of building quality in-house generative AI solutions.

Model Architectures

Generative AI models have several key architectures that set them apart. One of the most notable is the Transformer architecture.

The Transformer architecture is a type of neural network designed specifically for sequence-to-sequence tasks, such as machine translation and text generation. It's based on self-attention mechanisms, which allow the model to focus on specific parts of the input sequence when generating output.
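
A minimal NumPy sketch of scaled dot-product self-attention, the core operation described above; it uses a single head with random projection matrices and omits masking, multi-head splitting, and the rest of the Transformer block.

```python
import numpy as np

def self_attention(x: np.ndarray, wq: np.ndarray, wk: np.ndarray, wv: np.ndarray) -> np.ndarray:
    """Single-head scaled dot-product self-attention over a sequence x of shape (seq_len, d_model)."""
    q, k, v = x @ wq, x @ wk, x @ wv          # project inputs to queries, keys, values
    scores = q @ k.T / np.sqrt(k.shape[-1])   # similarity of every position to every other
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over positions
    return weights @ v                        # each output is a weighted mix of all positions

# Example: 4 tokens, model dimension 8
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
wq, wk, wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(x, wq, wk, wv).shape)  # (4, 8)
```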

Diffusion models are another type of generative model that's gaining popularity. They work by iteratively refining a noise signal until it converges to a specific data distribution.

Other generative models, such as Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs), also exist but are not as widely used as Transformers and diffusion models.

Diffusion Models

Diffusion models are a type of AI model trained by repeatedly adding small amounts of noise to the input and learning to reverse that corruption.

The forward process distorts the input until it is indistinguishable from noise following a Gaussian distribution. Because the model learns to reverse this process step by step, it can generate new samples by starting from pure noise and progressively removing it.

The Denoising Diffusion Probabilistic Model (DDPM) is a prominent technique used for text-to-image generation. It acts as a Markov chain, where only the current state is relevant for the next output.

DDPM produces an output over a sequence of T sequential steps, sampling from a Gaussian distribution at each step. The distribution over the whole trajectory is the product of these per-step Gaussians.

The reverse pass used for generation starts from pure noise and, step by step, recovers a sample that should follow the true data distribution.
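
The sketch below illustrates the forward (noising) half of such a chain on a toy input; the linear beta schedule and number of steps are common illustrative choices rather than a specific paper's configuration.

```python
import numpy as np

def forward_diffusion(x0: np.ndarray, T: int = 1000, beta_start: float = 1e-4, beta_end: float = 0.02):
    """Forward (noising) pass of a DDPM-style Markov chain.

    At each step t, a small amount of Gaussian noise is mixed in:
        x_t = sqrt(1 - beta_t) * x_{t-1} + sqrt(beta_t) * noise
    After T steps the sample is close to pure Gaussian noise.
    """
    rng = np.random.default_rng(0)
    betas = np.linspace(beta_start, beta_end, T)
    x = x0.copy()
    for beta in betas:
        x = np.sqrt(1.0 - beta) * x + np.sqrt(beta) * rng.normal(size=x.shape)
    return x

x0 = np.ones((8, 8))          # a toy "image"
xT = forward_diffusion(x0)
print(xT.mean(), xT.std())    # roughly zero mean, unit variance after T steps
```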

VAEs, GANs

VAEs, GANs, and other generative models have been gaining traction in recent years. Generative Adversarial Networks (GANs) are trained using a generator that constructs an output from a random vector and a discriminator that aims to distinguish generated outputs from actual samples of the true data distribution.

A key characteristic of GANs is their ability to learn complex distributions, making them particularly useful for tasks such as image and video generation. The generator in a GAN tries to create new data samples that are indistinguishable from real data, while the discriminator evaluates the generated samples and, through its feedback, pushes the generator to improve.
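
A minimal sketch of this adversarial setup on toy one-dimensional data, assuming PyTorch is available; the network sizes, learning rates, and number of steps are arbitrary illustrative choices.

```python
import torch
import torch.nn as nn

# Toy GAN on 1-D data: the "real" distribution is N(3, 1).
latent_dim = 4
G = nn.Sequential(nn.Linear(latent_dim, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(200):
    real = torch.randn(64, 1) + 3.0            # samples from the true data distribution
    fake = G(torch.randn(64, latent_dim))      # generator output from random vectors

    # Discriminator update: label real samples as 1, generated samples as 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator update: try to make the discriminator label fakes as real.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

print(G(torch.randn(1000, latent_dim)).mean().item())  # should drift toward 3.0
```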

Variational Autoencoders (VAEs) are another type of generative model that constrain the latent space to a given prior distribution through a regularization term as part of the optimization objective. This constraint allows for easier sampling of the latent space, which is particularly useful when the latent space follows a known distribution.

Sampling in VAEs is facilitated when the latent space follows a known distribution, making it easier to generate new data samples. This is a significant advantage of VAEs over plain autoencoders, which do not constrain their latent space and therefore offer no principled way to sample from it.
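
To make the regularization term concrete, here is a sketch of a typical VAE objective with a Gaussian encoder and a standard normal prior, assuming PyTorch; the MSE reconstruction term and the shapes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def vae_loss(x, x_recon, mu, logvar):
    """VAE objective sketch: reconstruction error plus a KL regularization term.

    The KL term pushes the approximate posterior N(mu, diag(sigma^2)) toward the
    standard normal prior N(0, I), which is what keeps the latent space easy to
    sample from.
    """
    recon = F.mse_loss(x_recon, x, reduction="sum")                 # reconstruction term
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())    # KL(q(z|x) || N(0, I))
    return recon + kl

# Generating new data then amounts to sampling from the prior and decoding:
z = torch.randn(16, 8)  # 16 latent vectors of dimension 8, drawn from N(0, I)
# x_new = decoder(z)    # `decoder` is a hypothetical trained decoder network
```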

Findings

Our findings show that conversational probing can significantly enhance response specificity and detail, even with minimal fine-tuning of the large language model. This improvement is particularly notable for respondents in the conversational AI condition.

The gains in specificity and detail don't necessarily extend to other positive qualities such as relevance, completeness, or comprehensibility. For example, the "most important issue" question in the survey saw significant improvements, but other questions may not have seen the same level of enhancement.

Conversational probing can be an effective way to elicit more thoughtful answers from respondents, especially on topics like income, race, and sexual orientation and gender identity (SOGI). This could be particularly useful for survey researchers looking to gather more nuanced data.

To integrate large language models (LLMs) into surveys effectively, researchers should consider the user experience design (UXD) of the textbot integration. This includes the amount and placement of probing, as excessive probing early in the survey can increase respondent dropout.

Researchers should also conduct a pilot study to fine-tune the LLM to each question, identify cases where probing failed or should have been conducted, and refine LLM prompts for individual questions.

Different types of probing strategies should be considered depending on the specific goals of the study. For example, some research may benefit from probing for depth (asking respondents to explain why or provide details for existing examples), while other studies might optimize for breadth (eliciting secondary response categories besides the "main" category cited by a respondent).

Here are some practical insights for integrating LLM textbots into surveys:

  • Consider the user experience design (UXD) of the textbot integration.
  • Conduct a pilot study to fine-tune the LLM to each question.
  • Explore different types of probing strategies depending on the specific goals of the study.

By considering these factors, survey researchers can harness the potential of LLMs to enhance data quality while minimizing the risks and complexities associated with their use.

Jay Matsuda

Lead Writer

Jay Matsuda is an accomplished writer and blogger who has been sharing his insights and experiences with readers for over a decade. He has a talent for crafting engaging content that resonates with audiences, whether he's writing about travel, food, or personal growth. With a deep passion for exploring new places and meeting new people, Jay brings a unique perspective to everything he writes.
