Enterprise Generative AI
Real Applications, Real Safety
The race to enterprise Generative AI is on. Push beyond the boundaries of traditional chatbot functions and unlock the potential for meaningful, real-world enterprise applications. Get started today with this collection of Generative AI use cases, a ready-to-use RAFT framework for Responsible AI, and an interactive flowchart on getting started with LLMs — brought to you by Dataiku.
Generative AI Use Case Collection
Responsible Generative AI Framework
Meet Dataiku
Dataiku is the platform for Everyday AI.
Dataiku Key Capabilities
Move beyond the lab and build real and safe Generative AI applications at enterprise scale. Dataiku brings enterprise-grade development tools, pre-built use cases, and AI-powered assistants throughout the platform. Notably, the LLM Mesh provides the components in Dataiku that empower IT to take control and help teams build safe, secure Generative AI applications.
Data Preparation
Visualization
Machine Learning
DataOps
MLOps
Analytic Apps
Collaboration
Governance
Explainability
Architecture
Security
Extensibility
contact us
Capitalize on your existing infrastructure to deliver results at scale with Dataiku’s powerful, flexible, and open architecture.
DISCOVER
Connect, cleanse, and prepare data for Generative AI projects at scale.
Interactively explore data and create statistical analyses, charts, and dashboards to share insights with the broader team.
Accelerate the model development process with AutoML or go totally custom using Python, R, Scala, Julia, PySpark, and other languages.
Automate data pipelines for clean, reliable, and timely data across the enterprise.
Deploy, monitor, and manage machine learning models and projects in production.
Deliver analytic dashboards and custom applications to business users for better, more data-driven decisions.
Dataiku makes data projects a team sport by bringing everyone together, from AI builders to AI consumers.
Safely scale Generative AI with oversight and prioritize the data projects and models that deliver the most value.
Practice Responsible AI by understanding pipelines and interpreting model outputs to increase trust and eliminate bias.
Manage risk and ensure compliance with internal controls and external regulations.
Maximize Dataiku's potential by integrating custom components and accessing OpenAI's text completion APIs for business impact.
©2024 Dataiku. All rights reserved.
Legal Stuff
Privacy Policy
This short flipbook highlights the four main pathways to scaling Generative AI, with pros and pitfalls of each. Plus, get our recommendation for the most logical approach to ensure a future-proofed Generative AI strategy — including the Dataiku LLM Mesh.
Scale Generative AI With an AI Platform
#3
Commoditization Will Happen… Fast.
Leveraging an AI platform like Dataiku solves this problem, making the ability to build Generative AI into your business a competitive advantage. Notably, the LLM Mesh provides the components companies need to efficiently build safe applications with LLMs at scale. Organizations that are able to build their own Generative AI use cases to solve their most pressing and costly business challenges will have the upper hand.
The excitement around ChatGPT when it was first released in November 2022 led to a massive uptick in investors pouring funding into AI startups. There are now Generative AI-powered point solutions for pretty much everything. Needless to say, buying up “Generative AI for [Insert Use Case Here]” is becoming expensive, fast. Perhaps more importantly, proper governance to ensure responsible use remains a challenge, exposing the business to risks…
#2
This is still one of the main challenges to building real, lasting value with Generative AI. You need to make it easy for data experts (who understand these models) and domain experts (who know what the business needs) to work together to build Generative AI into your business where it will have the most impact. Without a real strategy for involving business people and not just tech people, you risk ending up in the same place: lots of time and investment, but little understanding of the real business problems to be solved.
When organizations first started using regular old data science and machine learning, it was relegated to the realm of data teams and data scientists. In many cases they built models with little to no direct communication with the business they were trying to impact. Unsurprisingly, many companies felt — and some still feel today — that they don’t get the value they were expecting from data science, machine learning, and eventually AI...
Collaboration Is (Still) the Name of the Game.
#1
It’s So Much Bigger Than Chatbots.
Generative AI and LLMs are going to transform your business, but not in the way you think. It’s not going to be through chatbots. Building chatbots is too complex, there are too many risks, and, most importantly, you don’t transform an enterprise through one-off questions and answers. Real transformation requires building human-like intelligence into thousands of processes throughout your business, fundamentally changing your cost structure for knowledge-intensive tasks.
Enterprise Basics for Generative AI
READ NOW
To succeed with Generative AI, you need a strategy that will both turn use cases into reality (quickly) and safeguard against risk. This quick yet thorough read provides an action plan for how to do both — today.
Generative AI: Now Is the Time.
Flowchart: How to Get Started With LLMs
LLM-Enhanced Demand Forecast
Optimize your supply chain with dynamic, Generative AI-driven recommendations based on demand forecast predictions.
HOMEPAGE
Clinical Trial Explorer
Build Generative AI-driven insights from clinical trial data using natural language queries and Dataiku, the platform for Everyday AI.
Customer Review Analyzer
Put customer reviews to work with Dataiku and the power of Generative AI to quickly identify issues and understand patterns.
LLM-Enhanced Next Best Offer
Enable banking sales professionals to generate customized follow-up messages with the power of Dataiku and Generative AI.
Predictive Maintenance Data Explorer
Ask questions in natural language and receive real-time suggestions and visualizations thanks to the power of Generative AI.
Sales Analytics Generator
Empower your sales organization with Generative AI insights from a central data source.
Drug Repurposing Graph Generator
Distill vast biomedical data into interactive graphs using Generative AI, showing links between genes, drugs, and diseases.
Insurance Contract Explorer
Simplify the management of customer requests with personalized, intelligent responses leveraging the power of Generative AI.
CO2 Forecast Analyzer
Instantly understand energy usage and CO2 impact with simple, natural language requests, thanks to the power of Generative AI.
Product Recommendation Template Builder
Improve consumer response rates by accelerating tailored email messaging with the power of Dataiku and Generative AI.
Medical Report Analyzer
Turn unstructured data from medical reports into precision-driven healthcare insights with the power of Generative AI.
IT Support Ticket Advisor
Leverage Generative AI technology to surface accurate answers from dense technical documents and accelerate the work of IT support agents.
The Generative AI Use Case Collection
Real-World Business Applications & Examples
Developed from experience working with over 600 customers, this collection of use cases goes beyond the boundaries of traditional chatbot functions. Unlock the potential for meaningful, real-world enterprise applications of Generative AI.
LLM-Enhanced ESG Document Intelligence
Generate ESG insights from a large and complex corpus of documents in seconds thanks to the power of Generative AI.
Production Quality Data Explorer
Quickly identify defect-related challenges at scale with Generative AI-powered self-service exploration of production quality indicators.
AI Policy & Regulation Explorer
Global markets mean a global variety of regulations and frameworks — Generative AI makes quick work of regulatory research.
Financial Forecast Emailer
Make building financial forecasts easy for FP&A teams with Dataiku, then harness the power of Generative AI to efficiently share those reports.
Heraeus: Improving the Sales Lead Pipeline With LLMs
Heraeus uses LLMs in Dataiku to support sales lead identification and qualification processes.
LG Chem: Creating GenAI-Powered Services to Enhance Productivity
The services help LG Chem employees find safety regulations and guidelines quickly and accurately.
Whataburger: Using LLMs to Hear What Customers Are Saying
An LLM-powered dashboard is used to comb through thousands of customer reviews each week.
Ørsted: Monitoring Market Dynamics With LLM-Driven News Digest
Ørsted uses the digest to ensure its executive management has a more aligned understanding of market dynamics.
Responsible Generative AI
Mitigating Risk With New Technology
A baseline approach to mitigating risk when implementing Generative AI is to assess each use case across the two dimensions presented below. To access the complete RAFT (Reliable, Accountable, Fair, and Transparent) framework for Responsible AI, download the full ebook.
download ebook
Output Shared as a Report, Recommendation, or Suggested Actions
Corporate or Business Documents
While social risks from the use of these documents are lower, it is important to ensure the documents are up to date and relevant for the question at hand.
Generally this type of delivery ensures that end users have control over how the output is shared or deployed down the line.
CLICK ON EACH CARD TO DISCOVER MORE
DOWNLOAD THE EBOOK
Deep dive into the risks Generative AI can present plus explore the RAFT (Reliable, Accountable, Fair, and Transparent) framework for Responsible AI.
Responsible & Governed Generative AI
Target of Analysis
The type of data or documents the model will make use of to generate output.
Delivery Method
How the output of a model is distributed to end users.
Individual or Personal Data
Personal data on customers, users, or patients should be protected from unauthorized access.
Virtual Assistant
Virtual assistant delivery includes chatbots, text to voice responses, and question answering.
Academic or Domain-Specific Texts
Texts should be carefully cultivated to ensure fit with the intended use case.
Automated Process
Results of a Generative AI model may be passed directly to an end user without review.
While social risks from the use of these documents are lower, it is important to ensure the documents are up to date and relevant for the question at hand. This should include designing the input parsing strategy to leverage the correct documents and content to answer a given query. Prevent leakage of sensitive or copyrighted material into the model output: users should not be able to circumvent model parameters to gain access to unauthorized documents.
Generally this type of delivery ensures that end users have control over how the output is shared or deployed down the line. It provides an opportunity for review of outputs before action is taken, allowing the end user a chance to review explanations and provide feedback.
Personal data on customers, users, or patients should be protected from unauthorized access. Models that make use of personal information to draw conclusions about individuals should be tested for fairness across subpopulations of interest, language output should not be toxic or reinforce stereotypes about groups of people, and end users should know that their data is being used to train and deploy models (with the ability to opt out from this usage). Take care that personal information or details that are not relevant to the specific use case are not used in the model or shared back to the end user.
Texts should be carefully cultivated to ensure fit with the intended use case. Model outputs should be able to provide citations to real texts in the corpus to support any generated answers or recommendations.
Virtual assistant delivery includes chatbots, text to voice responses, and question answering. These interactions should be clearly marked as a computer/model and not a human agent. Extra precautions should be taken to make it clear the model is not a sentient agent and could potentially provide incorrect responses. Additionally, generative text models should be developed with guardrails around the type of language and discussion permissible by the model and end users to prevent toxic or harmful conversation from occurring.
Results of a Generative AI model may be passed directly to an end user without review. Automated processes are typically used to scale AI use, meaning real-time human intervention is not possible. Regular review of model outputs, quality, and the ability to pause automated processes when necessary is critical to catch any potential harms. Ensure you clearly document accountability over stopping an automated process.
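The two dimensions above can be combined into a simple screening heuristic before a use case is approved. The sketch below is illustrative only: the category names and numeric scores are assumptions made for this example, not part of the RAFT framework itself.

```python
# Toy two-dimension risk screen for a Generative AI use case.
# Scores are illustrative assumptions, not official RAFT values.
TARGET_RISK = {
    "corporate_documents": 1,   # business docs: lower social risk
    "academic_texts": 1,        # curated domain corpora
    "personal_data": 3,         # customer/user/patient data
}
DELIVERY_RISK = {
    "report_or_recommendation": 1,  # human reviews before action
    "virtual_assistant": 2,         # interactive, needs guardrails
    "automated_process": 3,         # no real-time human review
}

def assess(target: str, delivery: str) -> str:
    """Combine both dimensions into a coarse review recommendation."""
    score = TARGET_RISK[target] + DELIVERY_RISK[delivery]
    if score <= 2:
        return "lower risk: standard review"
    if score <= 4:
        return "moderate risk: add guardrails and human review"
    return "higher risk: require fairness testing, opt-outs, and a kill switch"
```

For example, personal data flowing into a fully automated process lands in the highest tier, matching the framework's call for regular output review and documented accountability for stopping the process.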
get the llm starter kit
Do you have specific data restrictions or infrastructure considerations? Should you use commercial AI services or self-hosted open-source models? What are the relative effort and cost levers to keep in mind when choosing among providers and LLMs? This video highlights the answers to some of these key model selection questions (and more!).
model selection
UTILIZE MORE ADVANCED TECHNIQUES LIKE LLM AGENTS AND TOOLS
No
Yes
Here, a knowledge bank refers to document collections such as support tickets, manuals, web pages, contracts, and so on.
Does a knowledge bank exist?
Does it require special or topical knowledge?
(In other words, are there already models or advanced analytics that can solve this problem, even if they’re not as efficient or modern an approach as LLMs?)
Examples include translation, sentiment analysis, text classification, etc.
Is your use case a commoditized (NLP) task?
More on this in the next level
Prompt engineering refers to the practice of crafting specific instructions or queries to effectively guide AI models in generating desired outputs or responses.
Discover LLM complexity levels
Want to use an LLM for a specific use case, but not sure where to start?
Examples include creating personalized marketing messages, analytics-based reports and charts, and narratives derived from predictive analytics insights.
Generate: These use cases create new unstructured content based on structured inputs, typically in the form of tailored text or visualizations (e.g., reports, emails, dashboards, images).
Examples include chatbots and semantic search applications that help knowledge workers locate and fetch relevant and accurate information in a fraction of the time.
Answer: These use cases significantly streamline Q&A and document retrieval activities.
Examples include accelerated document classification or summarization, sentiment analysis, and automatic entity extraction.
Structure: These use cases involve accelerating the transformation of unstructured data into structured formats by enriching it with new attributes.
Most use cases for LLMs fall into one of three buckets:
Tip
Explore the LLM Flowchart
Ready to leverage the power of Large Language Models (LLMs) to enrich data applications and generate business value, but aren’t sure exactly where to start? Curious how to choose the right model and approach for your use case? This interactive flowchart will walk you through a framework with four levels of increasing complexity for customizing an LLM’s behavior, along with the technical methods to apply and how Dataiku makes the techniques accessible to more people.
Successfully Navigate LLM Complexity
How to Get Started With LLMs
Homepage
watch the session
In this one-hour masterclass, go deeper on the four levels of LLM complexity outlined in the flowchart. Plus, hear example scenarios, use cases, and Dataiku features available at each level.
Choose an LLM for Your Generative AI Use Case
Restart the flowchart
CONTACT US
Want to use an LLM for a specific use case, but not sure where to start?
IS YOUR USE CASE A COMMODITIZED (NLP) TASK?
NO
YES
NLP MODELS AND SERVICES
In Dataiku, new LLM-powered, visual NLP recipes make it easy to perform common language tasks and infuse AI-generated metadata into your data pipelines.
By modernizing traditional NLP pipelines with LLM-powered workflows, you can save precious time and resources while also delivering more accurate and robust results.
The multi-purpose, context-understanding nature of LLMs renders many legacy data preparation and post-processing steps obsolete, so now you can go from raw text to insights in a single step.
You can use LLMs out of the box for commoditized NLP use cases such as detecting topics and sentiment in documents, content categorization or translation, or entity extraction — all without needing any expertise in text analytics and computational linguistics.
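As a sketch of what "out of the box" looks like in practice, the snippet below wraps one commoditized task (sentiment analysis) in a plain instruction prompt. The `call_llm` parameter is a hypothetical stand-in for whatever LLM client your platform exposes, not a real API.

```python
# Prompt-based sentiment classification: no task-specific training,
# no linguistic preprocessing -- just an instruction around raw text.
def build_sentiment_prompt(text: str) -> str:
    """Constrain a general-purpose LLM to a fixed label set."""
    return (
        "Classify the sentiment of the following review as exactly one of: "
        "positive, negative, neutral.\n\n"
        f"Review: {text}\n"
        "Sentiment:"
    )

def classify_sentiment(text: str, call_llm) -> str:
    """`call_llm` is any callable taking a prompt and returning text."""
    raw = call_llm(build_sentiment_prompt(text))
    label = raw.strip().lower()
    # Guard against free-form answers by falling back to neutral.
    return label if label in {"positive", "negative", "neutral"} else "neutral"
```

The fallback branch illustrates a common guardrail: when the model strays from the requested label set, the pipeline degrades to a safe default rather than passing arbitrary text downstream.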
MOVE ON TO MODEL SELECTION
DOES IT REQUIRE SPECIAL OR TOPICAL KNOWLEDGE?
Use this option if your task is more specialized and you need to provide additional context or instructions to bridge the gap between an LLM’s natural outputs and your specific task.
With Dataiku’s Prompt Studios, you can design, compare, and evaluate prompts across models and providers to identify and operationalize the best context for achieving your business goals.
Prompt Studios also provides an estimated cost to run a given prompt against one thousand records, so teams can assess the financial impact of embedding Generative AI into data pipelines and projects during the design phase.
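A rough version of that per-thousand-records estimate can be computed by hand. The character-based token heuristic and the per-token price below are illustrative assumptions for this sketch, not Dataiku's or any provider's actual figures.

```python
# Back-of-the-envelope cost estimate for running one prompt template
# over many records. All numbers here are illustrative assumptions.
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    return max(1, len(text) // 4)

def estimate_cost(prompt_template: str, sample_record: str,
                  n_records: int = 1000,
                  price_per_1k_tokens: float = 0.002) -> float:
    """Estimated spend: (template + record tokens) x records x unit price."""
    tokens_per_call = estimate_tokens(prompt_template) + estimate_tokens(sample_record)
    total_tokens = tokens_per_call * n_records
    return total_tokens / 1000 * price_per_1k_tokens
```

Even this crude arithmetic makes the design-phase trade-off visible: halving a verbose prompt template halves the cost of every one of the thousand calls it drives.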
INSTRUCTION-TUNED MODELS READY FOR PROMPT ENGINEERING
DOES A KNOWLEDGE BANK EXIST?
RETRIEVAL AUGMENTED GENERATION (RAG)
How does RAG work, in a nutshell?
The RAG approach augments a model’s foundational knowledge with specialized or proprietary information, improving the accuracy and relevance of responses generated for “Answer”-type applications.
Some practical examples for RAG include when knowledge workers (customer service reps, technical support agents, legal analysts) need to look up facts from policy manuals, case law, and other reference material to answer questions. In some cases, the answers may be sourced from internal documents or require a citation of where the answer came from for compliance purposes.
RAG IN DATAIKU
In Dataiku, we make RAG easy by providing visual components that:
Create a vector store based on your documents,
Execute a semantic search to retrieve the most relevant pieces of knowledge,
Orchestrate the query to the LLM with the enriched context,
And even handle the Q&A user interface your knowledge workers will interact with, so you don’t need to develop a custom front-end or chatbot.
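The retrieve-then-read steps above can be sketched end to end in a few lines. This toy version uses a bag-of-words "embedding" and an injected `call_llm` function in place of the real embedding model, vector database, and LLM client a production setup would use.

```python
import math
from collections import Counter

# Minimal RAG sketch: vector store -> semantic search -> enriched prompt.
def embed(text: str) -> Counter:
    """Toy embedding: word-count vector (stand-in for a real model)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def build_vector_store(documents):
    """Step 1: index each document alongside its vector."""
    return [(doc, embed(doc)) for doc in documents]

def retrieve(store, query: str, k: int = 2):
    """Step 2: rank documents by similarity to the query."""
    qv = embed(query)
    ranked = sorted(store, key=lambda d: cosine(qv, d[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

def answer(store, query: str, call_llm) -> str:
    """Step 3: send the query plus retrieved context to the LLM."""
    context = "\n".join(retrieve(store, query))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return call_llm(prompt)
```

Because the model only sees the retrieved passages, its answers stay grounded in the knowledge bank, which is what makes citation and compliance review possible.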
UTILIZE MORE ADVANCED TECHNIQUES LIKE LLM AGENTS AND TOOLS?
For even more customization, you can incorporate external tools with an approach referred to as "LLM agents," orchestrate the underlying logic of these retrieve-then-read pipelines with LangChain, or use the ReAct method for complex reasoning and action-based tasks.
Advanced techniques such as supervised fine-tuning, pretraining, or reinforcement learning may also be appropriate. These methods adjust the inner workings of a foundational model so that it can better accomplish certain tasks, be more suited for a specialized domain, or align with your instructions more closely.
Please note that these approaches often require copious amounts of high-quality training data and a significant investment in compute infrastructure.
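To make the agent idea concrete, here is a toy decision loop: at each step the model either calls a tool or gives a final answer. The `TOOL:`/`FINAL:` text protocol and the `call_llm` function are invented for this sketch; frameworks such as LangChain implement a far more robust version of the same reason-act pattern.

```python
# Toy LLM agent loop: the model's reply is parsed as either a tool
# call or a final answer, and tool observations feed the next step.
def run_agent(question: str, call_llm, tools: dict, max_steps: int = 5) -> str:
    transcript = f"Question: {question}"
    for _ in range(max_steps):
        decision = call_llm(transcript)
        if decision.startswith("FINAL:"):
            return decision[len("FINAL:"):].strip()
        if decision.startswith("TOOL:"):
            name, _, arg = decision[len("TOOL:"):].strip().partition(" ")
            observation = tools[name](arg) if name in tools else "unknown tool"
            # Append the action and its result so the model can react.
            transcript += f"\n{decision}\nObservation: {observation}"
    return "No answer within step budget."
```

The step budget is the guardrail that matters in production: without it, a model that never emits a final answer would loop (and bill) indefinitely.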
GET THE LLM STARTER KIT
WATCH THE SESSION
PROMPT ENGINEERING
MODEL SELECTION
Which LLMs should your organization make available in these pre-built NLP recipes? Do you have specific data restrictions or infrastructure considerations? Should you use commercial AI services or self-hosted open-source models? This video highlights the answers to some of these key model selection questions (and more!).
Which models should you consider augmenting with your own knowledge bank? Do you have specific data restrictions or infrastructure considerations? Should you use commercial AI services or self-hosted open-source models? What are the relative effort and cost levers to keep in mind when choosing among providers and LLMs? This video highlights the answers to some of these key model selection questions (and more!).