Responsible use of AI at Nordic Innovators, protecting client data and intellectual property
AI is useful, but imperfect by design:
Separating AI hype from reality
AI is becoming a standard tool in knowledge work for professional services firms. It can support research, summarisation, drafting, and refinement. It is often presented as a transformative solution that saves time, increases throughput, and boosts quality by automating repetitive tasks and augmenting our most valuable work.
Our experience does not support that narrative. Large Language Models (LLMs) generate probabilistic outputs based on pattern recognition and training data. Output quality therefore depends on structured prompts and context, and the model does not inherently “know” what is true in your specific case. As a result, LLMs often miss the nuances of proposal narratives and lack genuine technological understanding.
AI systems fail at many valuable tasks and can produce text that is convincing yet inaccurate, over-generalised, or shaped by underlying biases. Success in our work instead depends on understanding companies’ technologies, aligning partner perspectives, and maintaining a clear, coherent strategic narrative across deliverables.
Nordic Innovators is no exception. In complex R&D&I projects, especially those involving DeepTech innovations, AI often struggles to add novel value. Without experienced consultants and domain experts guiding and critically reviewing its outputs, AI can reduce efficiency; in other words, a human in the loop is always needed. Otherwise, it wastes senior staff’s time, adds to their mental load, and diverts them from doing truly valuable work for our clients.
Yet AI remains a driving force, and its progress is staggeringly quick. At Nordic Innovators, we cautiously use AI where it brings value, and not where it falls short: it supports our workflows while deliverables stay human-led, quality-assured, and aligned with client expectations, so that we apply our own, and our clients’, domain expertise more effectively without trying to substitute it.
Why protecting client data and intellectual property is important
Equally important is our clients’ trust when we use AI tools. That trust depends on confidentiality, and in many R&D&I projects, IP sensitivity matters as much as personal data. Using AI irresponsibly increases risk, for example by sharing confidential project details with tools that are not appropriate for sensitive information.
“Innovation encompasses not only the generation of ideas but also their protection; safeguarding intellectual property — including ensuring confidential data is never exposed to external AI tools — remains paramount in a digital environment”.
Alexander Bjørnå, Partner at aera (a European IP consultancy and trusted partner of Nordic Innovators)
Our baseline is simple: strict AI governance and data protection practices are paramount, and sensitive information must only be shared on secure systems. This is not limited to personal data; it applies to any proprietary technical details, claims, and novelty-sensitive descriptions that could affect IP strategy.
We also stay close to specialist perspectives on IP protection, such as from aera, especially as innovation teams increasingly experiment with AI in early-stage drafting and analysis.
Our compliance view on human-in-the-loop and AI literacy
Under the EU AI Act, Nordic Innovators acts as a deployer of AI systems. Our compliance focus is on how AI is used in practice: human oversight, accountable decision-making, and staff competence. We require human-in-the-loop review for AI-assisted outputs, meaning a consultant remains responsible for validation, editorial control, and the final result. (EU AI Act, Article 50: Transparency Obligations)
Furthermore, the Act explicitly expects providers and deployers to take measures to ensure a sufficient level of AI literacy among staff and others operating AI on their behalf. We address this through internal guidance, training, and continuous iteration of best practices, so colleagues understand both the value and the limitations of AI, including risks related to confidentiality and IP. (EU AI Act, Article 4: AI Literacy)
To ensure both transparency and AI literacy across the Nordic Innovators Group, as well as the compliant deployment of AI, Nordic Innovators has a dedicated centralised team, the Nordic AI Hub. The AI Hub oversees the Group’s AI architecture, while aligning security and data protection obligations with standards and regulations like GDPR and the EU AI Act.
Underlining our stance
AI can be a practical tool for improving how we structure, refine, and quality-assure complex work, but it does not replace expertise, judgment, or accountability, which remain inherently human capabilities. Nordic Innovators’ position is to use AI where it brings value, and with clear safeguards: protect sensitive data and IP, keep humans responsible for validation and outcomes, and build organisation-wide AI literacy in line with evolving expectations.
Benjamin brings a strong academic and professional background along with a sharp eye for where strategy meets technology. He leads and manages our AI initiatives, aligning business requirements, strategic objectives and technical constraints to design scalable AI solutions and drive digital transformation across business units.

