insora Research Summary: Adaptive AI Assistant for SME Productivity
Executive summary
Small and medium-sized enterprises (SMEs) are central to Europe's economy yet face structural productivity headwinds. Knowledge work inefficiencies - search, duplication, and coordination - drive measurable time and cost losses. insora is an adaptive, agentic RAG-based assistant designed to reduce these losses through context-aware search, multi-agent orchestration, and collaborative workspaces. Mixed-methods research across European SMEs identified substantial adoption potential, tempered by value-perception gaps and trust requirements. Results indicate high early-adoption likelihood among small information and communication firms, short implementation cycles, and strong demand for security and transparency features. The system's architecture prioritizes secure, compliant handling of organizational data and cost-efficient context management for large knowledge bases.
Table of contents
- 1. SME context and productivity challenge
- 2. System overview and philosophy
- 3. Functional architecture
- 4. Adoption barriers and drivers
- 5. Market segmentation and early adopters
- 6. Key quantitative findings
- 7. Security, privacy, and compliance
- 8. Cost model and efficiency
- 9. Next steps
1. SME context and productivity challenge
SMEs represent over 99% of European firms and account for a majority share of private-sector employment and value added. Despite their central role, SMEs exhibit persistent productivity gaps relative to larger enterprises and US peers. Interviews and surveys with European SME stakeholders highlight administrative burden, fragmented tools, and the rising complexity of knowledge work as structural obstacles to productivity.
External context (indicative): Knowledge workers commonly lose time to information search, duplication, and coordination overhead. Studies have estimated that roughly 19% of working time is spent searching for information (about 7.6 hours of a 40-hour week), that duplication and waiting cause multi-hour weekly losses, that "work about work" occupies a substantial share of time, and that interruptions carry significant costs.
2. System overview and philosophy
insora is an adaptive, agentic assistant for SMEs that combines retrieval-augmented generation (RAG) with multi-agent orchestration and collaboration. The system shifts from pre-configured, industry-specific software to context-aware software that learns from organizational behavior to optimize workflows at the organization, team, and individual levels. It integrates with existing tools via standardized APIs, emphasizing rapid time-to-value through minimal setup and optional expert controls for customization.
The system delivers adaptive, context-aware responses grounded in internal organizational data through RAG. It employs behavioral learning from usage patterns, including preferred information types, response formats, and automation pathways. The platform integrates smoothly with current systems, preserving prior investments while enabling low-friction onboarding with automatic context inference and optional expert tuning. Additionally, insora applies strict quality and filtering mechanisms to mitigate LLM hallucinations and increase user trust.
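To make the hallucination-mitigation idea concrete, the sketch below shows one very simple form such a filter could take: sentences in a drafted answer are kept only if they overlap sufficiently with the retrieved source passages. The function names, the lexical-overlap heuristic, and the threshold are illustrative assumptions and do not describe insora's actual quality mechanisms.

```python
"""Minimal sketch (not insora's implementation) of a grounded-answer filter:
a draft LLM answer is accepted sentence by sentence, based on lexical overlap
with the retrieved source passages. All names are hypothetical."""

import re


def _tokens(text: str) -> set[str]:
    """Lowercase word tokens, used for a crude lexical-overlap check."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))


def grounding_score(sentence: str, passages: list[str]) -> float:
    """Fraction of the sentence's tokens that appear in the retrieved passages."""
    sent = _tokens(sentence)
    if not sent:
        return 1.0
    support = set().union(*(_tokens(p) for p in passages)) if passages else set()
    return len(sent & support) / len(sent)


def filter_answer(draft: str, passages: list[str], threshold: float = 0.6) -> str:
    """Drop sentences that are not sufficiently supported by retrieved passages."""
    sentences = re.split(r"(?<=[.!?])\s+", draft.strip())
    kept = [s for s in sentences if grounding_score(s, passages) >= threshold]
    return " ".join(kept) if kept else "No sufficiently grounded answer found."


if __name__ == "__main__":
    passages = ["Invoices are archived in the finance workspace after approval."]
    draft = ("Invoices are archived in the finance workspace after approval. "
             "They are also printed and mailed to headquarters every Friday.")
    print(filter_answer(draft, passages))  # keeps only the grounded sentence
```

A production filter would more likely rely on embedding similarity or an entailment model rather than token overlap, but the control flow is the same: generate, check against sources, suppress unsupported content.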
3. Functional architecture
The architecture maintains dynamic, searchable representations of organizational knowledge using vector embeddings and semantic search across multiple data types, including documents, email, structured data, images, and diagrams. It supports retrieval at scale over very large corpora while keeping the runtime context sent to LLMs small through statistical selection of the most relevant content.
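The sketch below illustrates the retrieval-and-selection idea under stated assumptions: chunks are embedded as vectors, ranked by similarity to the query, and only what fits a small token budget is passed to the LLM. The hashed bag-of-words embedding stands in for a real embedding model, and the names and budget value are hypothetical.

```python
"""Illustrative sketch of retrieval with a strict context budget. The hashed
bag-of-words embedding is a stand-in for a real embedding model; the budget
and function names are assumptions, not insora's actual parameters."""

import math
import re

DIM = 256  # toy embedding dimensionality


def embed(text: str) -> list[float]:
    """Hashed bag-of-words vector, L2-normalized."""
    vec = [0.0] * DIM
    for tok in re.findall(r"[a-z0-9]+", text.lower()):
        vec[hash(tok) % DIM] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]


def cosine(a: list[float], b: list[float]) -> float:
    """Dot product of normalized vectors equals cosine similarity."""
    return sum(x * y for x, y in zip(a, b))


def select_context(query: str, chunks: list[str], max_tokens: int = 512) -> list[str]:
    """Rank chunks by similarity to the query and keep only what fits the budget."""
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    selected, used = [], 0
    for chunk in ranked:
        cost = len(chunk.split())  # crude token estimate
        if used + cost > max_tokens:
            break
        selected.append(chunk)
        used += cost
    return selected


if __name__ == "__main__":
    corpus = [
        "Travel expenses must be submitted within 30 days.",
        "The cafeteria menu changes every Monday.",
        "Expense reports are approved by the team lead.",
    ]
    print(select_context("How do I submit travel expenses?", corpus, max_tokens=20))
```

The key design point is that the corpus can grow arbitrarily large while the per-query LLM context stays bounded by the selection budget.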
The system features a semantic knowledge base with multi-level retrieval and automated ingestion capabilities. A multi-agent layer lets an orchestrator coordinate task-specific agents, created dynamically as needed, that execute tool and API calls across systems. Shared collaboration spaces provide real-time co-working, action logs, and personal workspaces with invite-based sharing. The platform continuously improves through implicit and explicit feedback signals at the organization, team, and individual levels.
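As a rough illustration of the orchestration pattern (not insora's actual design), the sketch below routes each task to the first task-specific agent whose predicate matches; each agent wraps a tool or API call. The agent names, routing rules, and tool functions are all hypothetical.

```python
"""Highly simplified orchestration sketch: an orchestrator dispatches tasks to
task-specific agents, each wrapping a tool/API call. Names and routing rules
are hypothetical placeholders."""

from dataclasses import dataclass
from typing import Callable


@dataclass
class Agent:
    name: str
    can_handle: Callable[[str], bool]   # crude routing predicate
    run: Callable[[str], str]           # wraps one or more tool/API calls


def search_tool(task: str) -> str:
    return f"[search agent] retrieved documents relevant to: {task!r}"


def calendar_tool(task: str) -> str:
    return f"[calendar agent] proposed meeting slots for: {task!r}"


class Orchestrator:
    """Dispatches each task to the first agent whose predicate matches."""

    def __init__(self, agents: list[Agent]):
        self.agents = agents

    def handle(self, task: str) -> str:
        for agent in self.agents:
            if agent.can_handle(task):
                return agent.run(task)
        return f"[orchestrator] no agent available for: {task!r}"


if __name__ == "__main__":
    orchestrator = Orchestrator([
        Agent("search", lambda t: "find" in t.lower(), search_tool),
        Agent("calendar", lambda t: "schedule" in t.lower(), calendar_tool),
    ])
    print(orchestrator.handle("Find last year's audit report"))
    print(orchestrator.handle("Schedule a kickoff with the sales team"))
```

In a production system the routing decision would typically be made by an LLM or a learned policy rather than keyword predicates, and agents would be instantiated dynamically per task rather than registered up front.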
4. Adoption barriers and drivers
Across industries, several barriers consistently emerged during our research. Companies expressed concerns about complexity and expertise, particularly regarding implementation, maintenance, and limited internal technical skills. Privacy and security requirements emerged as critical considerations, with decision-makers emphasizing data protection, reliability, and safety. Perhaps most significantly, a value-perception gap exists where decision-makers struggle to quantify benefits in financial terms.
Key drivers for adoption include innovation orientation, demonstrable efficiency gains, social proof through peer adoption, and strong trust signals such as security certifications, encryption, auditability, explainability, and human control mechanisms.
5. Market segmentation and early adopters
Our segmentation integrated structural factors such as industry, size, and digital maturity with behavioral factors including time valuation, AI readiness, and implementation time. Early adopters are expected to experience acute pain points, possess higher technical and business acumen, tolerate uncertainty, and use the system intensively, enabling faster iteration cycles.
The optimal early adopter segment consists of small firms averaging approximately 24 employees in information and communication industries. These companies show short implementation cycles of approximately 2.2 months. The scaling path starts with product iteration alongside small technology firms, then expands to mid-sized and large enterprises to maximize revenue.
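Purely to illustrate how structural and behavioral factors might be combined into a single prioritization score, the sketch below weighs likelihood to adopt against implementation speed. The weights, the formula, and the large-enterprise figures are hypothetical and do not reflect insora's segmentation methodology; only the early adopter LTA (0.34), implementation time (2.2 months), and average size (24 employees) come from the findings above.

```python
"""Illustrative only: one way structural and behavioral segment attributes
could be folded into a prioritization score. Weights and the large-enterprise
figures are assumptions, not insora's methodology or data."""

from dataclasses import dataclass


@dataclass
class Segment:
    name: str
    likelihood_to_adopt: float    # LTA, 0..1
    implementation_months: float
    avg_employees: int            # shown for context; not weighted in this toy score


def priority_score(s: Segment, w_lta: float = 0.6, w_speed: float = 0.4) -> float:
    """Higher is better: favors high adoption likelihood and fast implementation."""
    speed = 1.0 / s.implementation_months
    return w_lta * s.likelihood_to_adopt + w_speed * speed


if __name__ == "__main__":
    segments = [
        Segment("small ICT firms (early adopters)", 0.34, 2.2, 24),
        Segment("large enterprises", 0.18, 6.0, 1500),  # months and size assumed
    ]
    for s in sorted(segments, key=priority_score, reverse=True):
        print(f"{s.name}: score={priority_score(s):.2f}")
```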
6. Key quantitative findings
Our quantitative analysis revealed several critical insights. Approximately 77.2% of respondents underestimate the assistant's benefits, indicating a significant value-perception gap. Trust and security emerged as decisive factors, with approximately 80% of companies rating security and transparency features as critical to adoption decisions.
Early adopters show a likelihood to adopt (LTA) of approximately 0.34, with implementation taking approximately 2.2 months. Large enterprises demonstrate a lower but still significant LTA of approximately 0.18 with substantial total segment value. From an industry perspective, Information and Communication sectors show high adoption potential, while Manufacturing exhibits strong value potential.
7. Security, privacy, and compliance
The system prioritizes GDPR compliance, encryption, and audit trails as core design requirements. We adopt a cloud-first approach for speed and scale, while offering an on-premises option for organizations with specific security requirements. LLM calls to leading providers use encrypted inputs, with Data Processing Agreements restricting the use of data for model training and its geographic transfer.
Security certifications such as ISO 27001 are targeted to build trust and reduce procurement friction. Given European privacy priorities and trust barriers, transparent safeguards and third-party attestations serve as critical adoption levers.
8. Cost model and efficiency
LLM usage typically dominates operating costs. The system minimizes in-context tokens by statistically selecting the most relevant information for each task, yielding orders-of-magnitude cost reductions compared to naïve full-context strategies.
The platform provides a predictable cost profile: expected token consumption is projected over time (via time-series analysis) and multiplied by per-token pricing. Efficient context construction reduces costs by approximately 10³ to 10⁴ times compared with full-context retention approaches.
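The back-of-the-envelope sketch below shows how such a cost estimate is computed and where the reduction factor comes from. The per-token price, query volume, and token counts are illustrative assumptions rather than measured insora figures; with these assumptions the ratio lands at roughly 10³, consistent with the range stated above.

```python
"""Back-of-the-envelope cost model: expected monthly spend is projected token
volume times per-token price. All numbers below are illustrative assumptions,
not measured insora figures; the point is the ratio between a full-context
strategy and a selective one."""

PRICE_PER_1K_INPUT_TOKENS = 0.01   # assumed USD price, varies by provider/model
QUERIES_PER_MONTH = 10_000         # assumed query volume

FULL_CONTEXT_TOKENS = 400_000      # naive: ship (most of) the knowledge base per query
SELECTED_CONTEXT_TOKENS = 400      # selective: only statistically relevant chunks


def monthly_cost(tokens_per_query: int) -> float:
    """Projected tokens per month divided by 1k, times the per-1k-token price."""
    return QUERIES_PER_MONTH * tokens_per_query / 1_000 * PRICE_PER_1K_INPUT_TOKENS


if __name__ == "__main__":
    naive = monthly_cost(FULL_CONTEXT_TOKENS)
    selective = monthly_cost(SELECTED_CONTEXT_TOKENS)
    print(f"naive full-context: ${naive:,.2f} / month")
    print(f"selective context:  ${selective:,.2f} / month")
    print(f"reduction factor:   {naive / selective:,.0f}x")  # ~10^3 with these inputs
```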
9. Next steps
Based on the research findings, several key development priorities have been identified. The system requires continued refinement of agent orchestration and retrieval policies to maximize quality while minimizing operational costs. Security and compliance features must be enhanced, including the pursuit of certifications such as ISO 27001 to build trust and reduce procurement friction.
Product development should focus on improving the semantic knowledge base with more sophisticated multi-level retrieval capabilities and expanding automated ingestion support for diverse data types. The multi-agent orchestration system needs further development to enable more dynamic agent creation and more seamless tool and API integration across enterprise systems.
User experience improvements are critical, particularly around low-friction onboarding, automatic context inference, and intuitive design that balances security requirements with usability. The platform should expand collaboration features, including enhanced real-time co-working capabilities and more sophisticated personal workspace management.
To accelerate adoption, the system should develop comprehensive enablement resources including quickstart integrations, sample workflows, transparent logging systems, and ROI calculation tools. Expanding case studies and references across industries will provide social proof and accelerate mainstream adoption.
Disclaimer
This independent research summary paraphrases internal research findings and does not disclose or reproduce any confidential or NDA-protected documents. All product descriptions and quantitative insights are presented at a high level and may evolve as the product develops. External statistics are indicative and sourced from publicly available reports.