Building trust-first AI software
Natan Voitenkov
Sep 12, 2023
7 min read
Artificial intelligence (AI) is already driving profound innovation and societal transformation. Yet as systems like Genway AI gain traction in their respective areas (for us, AI-led, insight-driven decision-making), legitimate concerns arise regarding transparency, safety, and accountability. The answer to these concerns doesn't lie in halting technological progress, but in crafting a foundation of trust from the outset. Building trust-first AI software means ensuring it aligns with foundational humanistic values, safeguards user data, and operates within ethical boundaries. But how do you actually do it?
The core trust challenge when building AI products
AI technologies operate on unseen patterns within massive datasets, leading to decision-making processes that are often opaque, even to us as the team building the platform. This "black box" nature raises questions: Is our system reliable? Are the insights it’s generating sound? Is it biased in ways that exacerbate societal inequalities?
When things go wrong, the mistrust that's generated erodes adoption and limits the value AI can offer. Consider our own realm of insight-gathering functions within tech: when researchers can't understand the basis for a research synthesis and the action items tied to it, they naturally lack the confidence to roll out an AI-led research tool more widely.
Because companies can easily squander the transformative potential of AI by ignoring these challenges, at Genway AI we’ve chosen to build with a “trust-first” approach.
Our trust-first approach encompasses three key areas of focus:
Governance and societal well-being: We’re building a platform that upholds transparency, data rights, and the responsible use of personal information through stringent adherence to privacy and security standards. In addition, we’re aligning Genway from the get-go with sustainable development goals and a focus on positive societal impact.
Safety: Building a safe AI platform requires a multipronged approach. The base layer of our safety approach is technical robustness: resilience against errors, misinterpretations, and attack vectors. On top of that, we introduce transparency: the ability to understand the drivers of AI decisions, explained in plain language. This in turn supports the third and final safety layer, human oversight: researchers remain the ultimate decision-makers (a minimal sketch of this oversight gate follows this list).
Accountability: It’s our responsibility to build AI systems in which we can trace the causes of harm and their consequences, and resolve any issues swiftly.
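To show what we mean by human oversight, here is a minimal, hypothetical sketch (in Python, not our production code): an AI-suggested insight stays in a pending state and cannot be published or acted on until a named researcher reviews it. All names here are illustrative assumptions.

```python
from dataclasses import dataclass
from enum import Enum

class ReviewStatus(Enum):
    PENDING = "pending"     # the AI produced it, no human has looked yet
    APPROVED = "approved"   # a researcher confirmed the insight
    REJECTED = "rejected"   # a researcher overruled the AI

@dataclass
class SuggestedInsight:
    summary: str
    status: ReviewStatus = ReviewStatus.PENDING
    reviewed_by: str | None = None

    def approve(self, researcher: str) -> None:
        """The researcher, not the model, is the final decision-maker."""
        self.status = ReviewStatus.APPROVED
        self.reviewed_by = researcher

    def reject(self, researcher: str) -> None:
        self.status = ReviewStatus.REJECTED
        self.reviewed_by = researcher

def publish(insight: SuggestedInsight) -> None:
    # Oversight gate: only human-approved insights leave the review queue.
    if insight.status is not ReviewStatus.APPROVED:
        raise PermissionError("Insight has not been approved by a researcher.")
    print(f"Published: {insight.summary} (approved by {insight.reviewed_by})")

suggestion = SuggestedInsight("Participants struggle with the pricing page.")
suggestion.approve("researcher@example.com")
publish(suggestion)
```

The design choice the sketch illustrates is simple: the system can suggest, but only a person can decide.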
The frameworks that guide a trust-first AI approach
We rely on a diverse set of frameworks to put this trust-first approach into practice across our three areas of focus.
Governance frameworks have helped us establish our guiding values and processes. We’ve reviewed and implemented documents like the OECD Principles on AI, reflecting in particular on the principle of inclusive growth, sustainable development, and well-being.
This principle “recognizes that guiding the development and use of AI toward prosperity and beneficial outcomes for people and planet is a priority.”
Genway AI naturally plays an important role in advancing the mission of inclusive growth, sustainable development, and well-being by bridging the gaps that have led to disparities in who we focus on as we build technology. Too often, our limited capacity to conduct qualitative research at scale has perpetuated existing biases. AI-led research can and should be used to give all members of society a voice and help reduce those biases. In turn, as the OECD terms it, this “responsible stewardship” will drive the development of more human-centered products with beneficial outcomes for all.
AI review boards at publicly traded companies, a relatively novel form of oversight, have already approved Genway AI for deployment. These internal bodies assess the ethical implications of an AI system before deployment and proactively evaluate its potential harms and how to mitigate them.
Technical frameworks have helped us make Genway AI more reliable and trustworthy. One of our core areas of focus here has been transparency. In the context of research, transparency helps insight-gathering functions understand the outputs of a system like Genway AI, namely the analysis and synthesis of rapid interview data. Genway AI applies complex algorithms to vast datasets of potentially thousands of interviews, and transparency provides visibility into how our AI arrived at a specific output (such as an insight or research summary). The simplest manifestation of in-product transparency is that we deep-link to the specific parts of interview transcripts that fed into an AI-generated insight or summary. This is crucial in decision-support situations where researchers need to comprehend why an AI system suggested a particular insight or follow-up action item.
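To make deep-linking concrete, here is a minimal, hypothetical sketch (again in Python, not our production code) of the kind of data structure that supports it: every generated insight carries explicit references to the transcript spans that back it, so a researcher can always jump from a claim to its source. The field names and URL shape are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class TranscriptSpan:
    """A reference to the exact interview excerpt an insight is grounded in."""
    interview_id: str
    start_ms: int      # offset into the recording, in milliseconds
    end_ms: int
    text: str          # the quoted transcript excerpt

@dataclass
class Insight:
    """An AI-generated insight that always carries its evidence."""
    summary: str
    evidence: list[TranscriptSpan] = field(default_factory=list)

    def deep_links(self, base_url: str) -> list[str]:
        """Build links that jump straight to each supporting excerpt."""
        return [
            f"{base_url}/interviews/{s.interview_id}?t={s.start_ms}"
            for s in self.evidence
        ]

# Example: an insight traceable to two specific interview moments.
insight = Insight(
    summary="Users abandon onboarding when asked to connect a calendar.",
    evidence=[
        TranscriptSpan("intv-042", 754_000, 781_000,
                       "I just closed the tab when it asked for calendar access."),
        TranscriptSpan("intv-107", 1_203_000, 1_240_000,
                       "Calendar permissions felt like too much, too early."),
    ],
)
print(insight.deep_links("https://app.example.com"))
```

The point of the structure is that an insight without evidence simply cannot exist: provenance is part of the data model, not an afterthought.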
Finally, Responsible AI frameworks are guiding our thinking on accountability, meaning that from day one we’ve been creating mechanisms to establish who is responsible for the proper functioning of our AI system and how we address any negative or unforeseen consequences that arise from its use.
In this realm, there are two types of measures we employ (and will continue to evolve):
Preventative, e.g., documentation: keeping records of system architecture and design choices, training datasets, and validation tests throughout the development process (a sketch of such a record follows this list).
Proactive, e.g., assigning responsible parties and conducting third-party audits, in addition to customer-level audits of our AI’s output.
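As an illustration only, a documentation record of this kind can be as simple as a structured file committed alongside each release. The fields below are a hypothetical sketch of what "keeping records" means in practice, not a prescribed schema.

```python
import json
from datetime import date

# A hypothetical per-release documentation record: design choices, data
# sources, validation results, and responsible parties are written down as
# the system evolves, so issues can later be traced to a specific decision.
release_record = {
    "release": "example-release",
    "recorded_on": date.today().isoformat(),
    "architecture_notes": "Interview analysis pipeline: retrieval plus LLM summarization.",
    "design_choices": [
        "Insights must cite transcript spans (no uncited claims).",
        "A researcher approves summaries before they are shared.",
    ],
    "training_and_reference_data": ["Anonymized, consented interview transcripts"],
    "validation_tests": {
        "citation_coverage": "Every insight links to at least one transcript span",
        "bias_spot_checks": "Sampled interviews reviewed by research leads",
    },
    "responsible_parties": ["ML lead", "Research ops lead"],
}

with open("release_record.json", "w") as f:
    json.dump(release_record, f, indent=2)
```

Keeping records like this is what makes the proactive measures possible: an auditor, internal or external, has something concrete to audit.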
We’ve been inspired by companies like Microsoft and Google, who have been sharing their responsible AI journeys for years. In fact, this article was specifically inspired by Microsoft's principle of sharing learnings about developing and deploying AI responsibly. We’ll continue to do so as we go through our own trust-first, responsible AI journey.
As always, if you’d like to learn more about what we’re up to at Genway AI, check out our website at www.genway.ai. We’re working hard to leverage AI in ways that benefit our society and help us build technology inclusively.
We’re perfecting the end-to-end process of conducting interviews by leveraging AI to refine and enhance how research teams schedule research, synthesize their learnings, and integrate them into their workflows for maximal impact on everyone, for everyone.