OpenWebUI in Strategic IT Procurement – Making the Most of LLMs

🧠 LLMs in Procurement: Intelligence Meets Strategy
Strategic IT and software procurement is under growing pressure from rising requirements for compliance, security, efficiency, and innovation. At the same time, procurement departments are often understaffed and overworked. This is where Large Language Models (LLMs) come into play – as cognitive tools that handle repetitive tasks, flag risks, and accelerate decision-making.
OpenWebUI provides a simple, locally controlled platform to evaluate and use a wide range of LLMs. In this article, we show real-world examples of how LLMs can be practically applied in strategic procurement – and which model fits which task.
📋 Use Cases in Strategic Procurement
1. 📄 Contract Analysis and Risk Assessment
LLMs can analyze standard contracts and highlight common risk clauses:
- Payment terms vs. performance obligations
- Vendor lock-in clauses
- Unclear SLA definitions
Recommended models: llama3.3 or llama4 for deep analysis, phi4 for concise assessments.
Example prompt:
“Analyze this software license agreement and list all clauses that could pose financial risk to the buyer.”
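For automation beyond the chat window, the same prompt can be sent straight to a local Ollama instance (the backend OpenWebUI typically runs against). A minimal sketch, assuming Ollama's `/api/generate` endpoint on its default port 11434; the model name and prompt wording are illustrative:

```python
# Sketch: send a contract to a local Ollama instance for risk analysis.
# Assumes Ollama is running at http://localhost:11434 (its default port).
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(contract_text: str, model: str = "llama3.3") -> dict:
    """Assemble the JSON payload for Ollama's /api/generate endpoint."""
    prompt = (
        "Analyze this software license agreement and list all clauses "
        "that could pose financial risk to the buyer.\n\n" + contract_text
    )
    return {"model": model, "prompt": prompt, "stream": False}

def analyze_contract(contract_text: str, model: str = "llama3.3") -> str:
    """POST the payload and return the model's answer."""
    data = json.dumps(build_request(contract_text, model)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

The same skeleton works for any of the prompts in this article – only the prompt text and the model name change.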
2. 🛠️ Preparing for Supplier Negotiations
With dolphin-mistral or nous-hermes2, you can simulate negotiation dialogues – including tone, cultural nuances, and typical counterarguments.
Example:
“Simulate a negotiation with a U.S. software vendor about a planned price increase. The client is not willing to agree.”
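A negotiation simulation is naturally multi-turn, which maps onto Ollama's `/api/chat` endpoint with role-tagged messages. A sketch under the assumption of a local Ollama on the default port; the system prompt wording is illustrative:

```python
# Sketch of a multi-turn negotiation simulation via Ollama's /api/chat.
# The model plays the vendor; you type the buyer's side.
import json
import urllib.request

CHAT_URL = "http://localhost:11434/api/chat"

def start_simulation() -> list:
    """Seed the conversation with the scenario from the prompt above."""
    return [
        {"role": "system",
         "content": ("You are a U.S. software vendor defending a planned "
                     "price increase. The buyer is not willing to agree.")},
    ]

def send_turn(history: list, user_message: str,
              model: str = "dolphin-mistral") -> list:
    """Append the buyer's message, call the model, append its reply."""
    history = history + [{"role": "user", "content": user_message}]
    data = json.dumps({"model": model, "messages": history,
                       "stream": False}).encode("utf-8")
    req = urllib.request.Request(
        CHAT_URL, data=data, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        reply = json.loads(resp.read())["message"]
    return history + [reply]
```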
3. 📊 Tool Comparison and Vendor Selection
Choosing between competing tools is a recurring challenge. LLMs such as mistral, gemma, or llama3.1 can:
- Extract and structure feature lists
- Compare pricing and licensing models
- Evaluate compatibility with existing systems
Example prompt:
“Compare Microsoft Power BI, Tableau, and Looker in terms of license cost, cloud support, deployment flexibility, and data protection features.”
4. 📑 Automating Proposal Evaluations
In large tenders, dozens of bids must be reviewed. LLMs support structured evaluation:
- Identify missing data
- Cross-check against minimum requirements
- Derive total cost of ownership (TCO) advantages
Tip: Combine tinydolphin for quick checklists with llama3.3 for more in-depth scoring.
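The two-stage idea – a cheap completeness check first, deep scoring only for bids that pass – can be sketched without any model call for stage one. The field names below are illustrative assumptions; in stage two, the bids in `ready` would then be sent to a larger model such as llama3.3 with a scoring prompt:

```python
# Sketch of two-stage bid screening: a local completeness check first,
# so that only complete bids consume tokens in the deep-scoring stage.
# REQUIRED_FIELDS is an illustrative assumption, not a standard.
REQUIRED_FIELDS = {"price", "delivery_date", "sla", "support_model"}

def missing_fields(bid: dict) -> set:
    """Stage 1: flag missing data before any LLM is involved."""
    return {f for f in REQUIRED_FIELDS if not bid.get(f)}

def triage(bids: list) -> tuple:
    """Split bids into 'ready for deep scoring' and 'incomplete'."""
    ready, incomplete = [], []
    for bid in bids:
        gaps = missing_fields(bid)
        if gaps:
            incomplete.append((bid, gaps))
        else:
            ready.append(bid)  # stage 2: score these with a larger model
    return ready, incomplete
```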
5. 🔐 Understanding Licensing Models
Software licensing – especially with Oracle, Microsoft, or SAP – can be complex and opaque. Models such as codellama or deepseek-coder-v2 can:
- Decode license types
- Formulate audit-compliant conditions
- Highlight hidden cost drivers
Prompt:
“Explain the differences between Microsoft CSP, NCE, and EA from a procurement perspective.”
💡 Model Overview – Strategic Insights
Model Name | Procurement Strengths | Best Used For
---|---|---
mistral | Versatile, structured comparisons | Feature analysis, writing
dolphin-mistral | Dialogue-driven, emotionally intelligent | Simulations, communication
nous-hermes2 | Balanced and adaptable | Scenario planning
codellama | Parsing code and license text | Licensing, contracts
phi / tinydolphin | Lightweight and fast, for early-stage filtering | Checklists, summaries
llama3.3 | Deep reasoning and context awareness | Contract assessment
llava | Vision model (e.g., screenshots, UI review) | Offer screenshots, UIs
deepseek-coder-v2 | Technical depth, API logic | Automation evaluation
nomic-embed-text | Semantic search – good for document matching | RFP comparisons
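The last row deserves a concrete sketch: nomic-embed-text turns text into vectors, and cosine similarity between those vectors scores how well a vendor answer matches an RFP requirement. Assumes Ollama's `/api/embeddings` endpoint on the default port; the texts are illustrative:

```python
# Sketch: semantic matching of RFP requirements against vendor answers
# using nomic-embed-text via a local Ollama instance.
import json
import math
import urllib.request

EMBED_URL = "http://localhost:11434/api/embeddings"

def embed(text: str, model: str = "nomic-embed-text") -> list:
    """Fetch the embedding vector for a piece of text."""
    data = json.dumps({"model": model, "prompt": text}).encode("utf-8")
    req = urllib.request.Request(
        EMBED_URL, data=data, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["embedding"]

def cosine(a: list, b: list) -> float:
    """Cosine similarity: 1.0 means identical direction, 0.0 unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Usage idea (requires a running Ollama):
# score = cosine(embed("Single sign-on via SAML"), embed(vendor_answer))
```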
🧰 Using OpenWebUI Effectively
✅ Step 1: Choose the Right Model
Use the OpenWebUI model table to select what fits your need. Consider:
- RAM and CPU use
- Content filtering level
- Response latency and model size
✅ Step 2: Provide Context
LLMs perform better when given precise context. For example:
I am a strategic IT buyer evaluating offers for a cloud SaaS platform. The company has 800 users in 3 countries. The budget is €250,000 per year.
✅ Step 3: Iterate and Refine
Work with several models in parallel – e.g., use phi for a quick scan, and llama3.3 for refined insights.
Use system prompts like:
“Respond as a software license manager with 10 years of experience. Avoid marketing language.”
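Running the same question through several models, each primed with this system prompt, can be sketched as a small loop over Ollama's `/api/chat` endpoint. The endpoint and default port are assumptions about a standard local Ollama setup:

```python
# Sketch: ask the same question to several models for comparison,
# using the system prompt from the article.
import json
import urllib.request

CHAT_URL = "http://localhost:11434/api/chat"
SYSTEM = ("Respond as a software license manager with 10 years of "
          "experience. Avoid marketing language.")

def build_chat_payload(model: str, question: str) -> dict:
    """Payload with the system prompt prepended to the user question."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": SYSTEM},
            {"role": "user", "content": question},
        ],
        "stream": False,
    }

def ask_all(question: str, models=("phi", "llama3.3")) -> dict:
    """Query each model in turn and collect the answers side by side."""
    answers = {}
    for model in models:
        data = json.dumps(build_chat_payload(model, question)).encode("utf-8")
        req = urllib.request.Request(
            CHAT_URL, data=data,
            headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            answers[model] = json.loads(resp.read())["message"]["content"]
    return answers
```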
🚧 Risks and Limitations
LLMs are tools, not decision-makers. Be aware of:
- Hallucinations in figures and legal references
- Outdated training data
- Lack of source attribution → always verify
Recommendation: Use LLMs as “thinking partners,” not final authorities.
🎯 Conclusion: Rethinking Procurement
With OpenWebUI and the right LLMs, strategic IT procurement becomes:
- Faster
- More informed
- More dialog-driven
LLMs won’t replace your team – but they will amplify its capabilities with a layer of cognitive support that wasn’t possible before.
➕ Next Steps
- Build a prompt library for tenders
- Tag models by use case
- Ensure data protection compliance (run locally)
My Local Dev/Test Setup with Ollama & OpenWebUI
An overview of available models and ideal use cases:
Model Name | Size | Use Case | Speed ⚡ | Censorship 🛡️
---|---|---|---|---
tinydolphin | 636 MB | Quick chats, short answers | 🟢 Very fast | 🔓 Uncensored
phi / dolphin-phi | 1.6 GB | Lightweight reasoning, summaries | 🟢 Very fast | 🔓 Light
gemma:2b | 1.7 GB | Local testing | 🟢 Fast | 🛡️ High
phi4:14b | 9.1 GB | Compact, dialog-oriented | 🟡 Medium | 🔓 Light
gemma:7b / gemma2 | 5–5.4 GB | All-rounder, Google-style tone | 🟡 Medium | 🛡️ High
mistral | 4.1 GB | Versatile, fast, coding & QA | 🟢 Fast | 🔓 Light
dolphin-mistral | 4.1 GB | Empathetic tone, fine-tuned | 🟢 Fast | 🔓 Uncensored
nous-hermes2 | 6.1 GB | Balanced LLM | 🟡 Medium | 🔓 Light
codellama:13b | 7.4 GB | Code analysis, contract parsing | 🟡 Medium | 🔓 Uncensored
command-r | 18 GB | Task following, instruction-heavy | 🔴 Slow | 🔓 Uncensored
deepseek-coder-v2 | 8.9 GB | Specialized in programming logic | 🟡 Medium | 🔓 Uncensored
devstral | 14 GB | Creative dialog, roleplay | 🔴 Slow | 🔓 Open
llava (Vision) | 4.7 GB | Image & visual QA | 🟡 Medium | 🛡️ Moderate
nomic-embed-text | 274 MB | Semantic text search | 🟢 Very fast | –
llama3.1:latest | 4.9 GB | Compact, general use | 🟡 Medium | 🛡️ Low
llama3.3:latest | 42 GB | Broad knowledge, creative | 🔴 Slow | 🔓 Light
llama4:latest | 67 GB | Advanced, deep understanding | 🔴 Very slow | 🛡️ High
llama2:70b | 38 GB | GPT-style, offline | 🔴 Very slow | 🛡️ High
dolphin-llama3:70b | 39 GB | Uncensored variant of LLaMA 3 | 🔴 Very slow | 🔓 Uncensored
deepseek-v3:671b | 404 GB | Research only – not production-ready | ⚫ Extremely slow | 🔓 Uncensored
hammad70012/deepseek-cyber | 13 GB | Cybersecurity, reverse engineering | 🔴 Slow | 🔓 Open
Legend
- 🟢 Fast = suitable for real-time interaction
- 🟡 Medium = usable with short delay
- 🔴 Slow = requires patience or strong hardware
- ⚫ Extremely slow = for research/testing only
- 🔓 Uncensored = freeform answers, no safety filters
- 🛡️ High = safe, moderated responses
Recommendations for Beginners
- 💬 Quick chat: tinydolphin or phi
- 🧠 Balanced & creative: mistral, dolphin-mistral, nous-hermes2
- 💻 Coding & tech: codellama, deepseek-coder-v2
- 🖼️ Visual content: llava
- 📚 In-depth analysis: llama3.3, llama4