Generative AI

Harnessing Frameworks for Generative AI Development

New advances in artificial intelligence — especially in generative AI systems like ChatGPT and Midjourney — have the potential to radically improve productivity and the bottom line. While genAI can drive productivity and automation at a once-unimaginable scale, it can also generate devastating results — at scale. To manage and counterbalance generative AI risk, Agile development methodologies and Responsible AI frameworks offer teams important structures for navigating this new landscape.

Embracing Frameworks for Safer AI Integration

“No matter where you are — a corporation, a startup, or a university — you’re also humans first. There are also human responsibilities. Everything we do in life needs to have a framework, and I do believe we need to have an ethical, human-centered framework.”
Dr. Fei-Fei Li, speaking about AI on the podcast On with Kara Swisher

If your organization is just starting out in generative AI, you have the benefit of decades of expertise and experience to aid decision-making, much of which is captured in existing frameworks. Among them are myriad responsible AI, ethical, and Agile frameworks your teams can employ to account for the unknown unknowns that too often derail the best-intentioned genAI projects.

By leveraging a framework, you put learned experience into practice. Frameworks capture guidelines and principles that let teams systematically and consistently weigh the many legal, moral, social, and philosophical dimensions of their work, both within and across contexts.

Using the AI Incident Database to Focus and Apply Frameworks

Before we dive fully into how frameworks allow us to safely integrate AI, let’s refresh ourselves on why they’re so essential. When AI goes wrong, it can potentially cause outsized harm because the nature of the tech is to work quickly and at an extraordinary scale. This technology disproportionately harms disadvantaged groups because of the historical biases ingrained in the source data. The ethical implications of this tech are enormous, and the business world is quickly catching on to that.

In a survey of 1,200 global CEOs, 67% agreed that these implications need focused attention, and 64% said businesses aren’t appropriately managing the unintended consequences of AI use.

The AI Incident Database (AIID) collects some of AI’s most terrifying failures for all to see. A pair of self-driving taxis blocked an ambulance headed to the ER, leading to a patient’s death. A professor failed his students after accusing them — wrongly — of using ChatGPT. And an AI chatbot offered harmful advice to a woman with an eating disorder, telling her to cut back on calories and pinch her skin to gauge her body fat. You can imagine the fallout.

And while Wired magazine called AIID the “Artificial Intelligence Hall of Shame,” the real goal of the site isn’t to shame anyone. It’s to help organizations learn to use AI more responsibly. Teams can use these worst-case examples to understand where to focus in their own domains and envision how to apply frameworks.

Understanding and Embracing Responsible AI Development

With greater care and intentional strategies, companies can responsibly develop AI and make valuable tools while mitigating and minimizing harm. But doing so requires one crucial understanding: AI projects operate very differently from traditional software projects and must be treated accordingly.

Traditional software design and development centers around the idea of predictability. Software teams plan a system to solve specific problems, design the functions, and know what outcomes to expect. When results don’t match expectations, they figure out what needs to change, and they change it. And sure, every project has some unknowns, but teams often benefit from comparable projects and (hopefully) well-defined user requirements they can lean on to hit delivery milestones.

Generative AI is very different in that it centers around unpredictability (typically as a feature rather than a bug).

  • With genAI, there are many more unknowns at the outset. We may first need to determine whether a solution is even feasible given the available data. For instance, until data scientists run and adapt models through various scenarios and hypothesis tests, it’s not a foregone conclusion that a dataset will yield the intended benefits at a reasonable cost.
  • Further complicating matters, genAI outputs can be as murky as their feasibility. Data scientists must corral training data from myriad systems, train the system to look for patterns, and then ask it to create new content based on the recognized patterns. Even the developers can’t know with 100 percent certainty what genAI will create.

Yet, this unpredictability is the very thing that makes generative AI so powerful: it can do things we haven’t explicitly taught it to do. It can make decisions on its own and generate results that surprise us!
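
To make that unpredictability concrete, here is a minimal, self-contained sketch of temperature-based sampling, the mechanism behind much of genAI’s variability. The toy token distribution is hard-coded purely for illustration; a real model derives these probabilities from billions of parameters.

```python
import random

# Toy next-token distribution. A real LLM computes these probabilities from
# billions of parameters; they are hard-coded here purely for illustration.
NEXT_TOKEN_PROBS = {
    "profits": 0.40,
    "risks": 0.30,
    "headlines": 0.20,
    "lawsuits": 0.10,
}

def sample_next_token(probs, temperature=1.0):
    """Sample one token. Raising each probability to the power 1/T is the
    probability-space equivalent of softmax temperature scaling: higher T
    flattens the distribution, making unlikely tokens more probable."""
    weights = {tok: p ** (1.0 / temperature) for tok, p in probs.items()}
    total = sum(weights.values())
    r = random.uniform(0, total)
    cumulative = 0.0
    for token, weight in weights.items():
        cumulative += weight
        if r <= cumulative:
            return token
    return token  # guard against floating-point rounding at the boundary

prompt = "Generative AI can produce unexpected"
for run in range(3):  # three runs of the same prompt can yield three answers
    print(f"run {run}: {prompt} {sample_next_token(NEXT_TOKEN_PROBS, temperature=1.2)}")
```

Run it a few times: identical inputs produce different outputs by design, which is exactly why genAI outcomes cannot be fully specified up front.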

So, how do we make the most of generative AI to create impactful and even transformational solutions while ensuring that our AI tools are reliable and safe? The answer, in part, lies in coming to terms with genAI’s unique risks and ambiguities by addressing them directly and grounding them in workflows.

Adopting well-regarded frameworks provides the structure and flexibility to help your team adapt and integrate changes even at the eleventh hour of your generative AI project, maintaining a responsive working environment.

The Agile Approach to Responsible AI

Generative AI has prompted us to reevaluate nearly every process in our arsenal. In the case of Agile methodologies, many frameworks have been updated to account for the special circumstances of AI development. The Agile mindset shouldn’t be confined to development, either; it extends to the discovery stage, where possible harms and biases must be recognized and factored into the development process.

When developing generative AI projects, proceed with both enthusiasm and caution. Assume that hallucinations are inevitable, and make exploring use cases, edge cases, and potential biases in the data part of normal expectations. To help you think this through, we suggest resources like the NIST AI Risk Management Framework, HHS’s Trustworthy AI Playbook, and WillowTree’s defense-in-depth approach to AI hallucinations.
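
As one illustration of what defense-in-depth can look like in practice, here is a minimal sketch of layered output checks. Every name in it (APPROVED_FACTS, guarded_answer, the keyword list) is a hypothetical stand-in, not WillowTree’s actual implementation; a production system would wire in its own LLM client, retrieval layer, and escalation path.

```python
# A minimal sketch of layered ("defense-in-depth") output checks.
# All names here are hypothetical stand-ins for illustration only.
import re

APPROVED_FACTS = {
    "refund_window_days": 30,  # source-of-truth value the model must not contradict
}

def violates_policy(text: str) -> bool:
    """Layer 1: block content the assistant should never produce (toy keyword check)."""
    banned = ("medical advice", "diagnosis")
    return any(term in text.lower() for term in banned)

def contradicts_known_facts(text: str) -> bool:
    """Layer 2: cross-check figures the model cites against source-of-truth data."""
    cited_days = re.findall(r"(\d+)-day refund", text)
    return any(int(d) != APPROVED_FACTS["refund_window_days"] for d in cited_days)

def guarded_answer(draft: str) -> str:
    """Layer 3: if any check fails, escalate to a human instead of guessing."""
    if violates_policy(draft):
        return "I can't help with that directly. Connecting you with a specialist."
    if contradicts_known_facts(draft):
        return "Let me confirm that detail with a human agent before answering."
    return draft

# A hallucinated draft gets caught by the fact check and escalated:
print(guarded_answer("We offer a 90-day refund on all plans."))
```

The point of the layering is that no single check needs to be perfect: each one catches a different class of failure, and anything suspicious routes to a human rather than reaching the user.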

You’ll also want to create a yardstick against which to measure progress. Make sure your team understands the user and business value of what they’re trying to create. That way, team members can judge whether their models are performing adequately and adding value in the intended ways.
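
One lightweight way to build that yardstick is a small golden set of prompts tied to business-critical facts, scored automatically before each release. The grading rule, canned answers, and 90% threshold below are illustrative assumptions, not fixed recommendations.

```python
# A minimal sketch of an evaluation "yardstick": score outputs against a
# small golden set tied to business-critical facts. The grading rule and
# the 90% threshold are illustrative assumptions, not recommendations.

golden_set = [
    {"prompt": "What is the refund window?", "must_contain": "30 days"},
    {"prompt": "How do I contact support?", "must_contain": "support@example.com"},
]

def grade(output: str, case: dict) -> bool:
    """Pass only if the output contains the business-critical fact."""
    return case["must_contain"] in output

def ready_to_ship(model_fn, cases, threshold=0.9) -> bool:
    """Run every golden case and compare the pass rate to the bar the team set."""
    passed = sum(grade(model_fn(case["prompt"]), case) for case in cases)
    rate = passed / len(cases)
    print(f"pass rate: {rate:.0%}")
    return rate >= threshold

# Stand-in for a real model call; a live system would query its LLM here.
canned_answers = {
    "What is the refund window?": "Refunds are accepted within 30 days of purchase.",
    "How do I contact support?": "Email support@example.com any time.",
}
print("ship it:", ready_to_ship(lambda p: canned_answers[p], golden_set))
```

Because the golden set encodes user and business value directly, the whole team (not just data scientists) can read the pass rate and judge whether the model is adding value in the intended ways.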

One useful set of guidelines for developing responsible AI systems comes from the Scaled Agile Framework (SAFe), which describes how AI can be applied at every level of the framework.

Leveraging an AI Decision-Making Framework

These four principles make up an AI decision-making framework enabled by SAFe (yes, a framework within a framework):

  1. Alignment with Strategy: Direct your AI tool toward profit-yielding outcomes for your business, as well as toward enhancing brand reputation and market value.
  2. Customer Centricity: Ensure your AI tool solves real customer problems. Customer satisfaction is a crucial determinant of a product's success.
  3. Continuous Exploration: Keep an inquisitive mindset during AI development. Test and validate your AI tool and underlying business hypothesis relentlessly.
  4. Empirical Milestones: Track your development journey. Deliver and deploy the AI tool iteratively, gathering and responding to customer feedback at each point.


Ensure that it’s not just the data scientists judging these outcomes. The users impacted by the systems are important stakeholders and should be positioned to pinpoint potential harms and benefits (and whether the trade-offs are worth it).

Similarly, encourage frequent and meaningful information-sharing within your team and across areas of expertise. Studies have shown that genAI projects in particular demand special attention to externalizing thinking and collaborating regularly.

With this kind of enhanced teamwork and thoughtful strategy, you’ll be well-positioned to unlock the staggering benefits that generative AI promises.

Diligence Gives You the Edge

While it’s true that any process needs to account for the risks of possible harm to individuals, communities, organizations, and society, AI development requires unwavering attention in this arena. Frameworks set your team up for success when confronting the intimidating unknowns of artificial intelligence.

So, remain inspired by the possibilities but wield this technology thoughtfully. With care, creativity, and compassion, you can develop AI tools that revolutionize human potential while respecting human values. The path forward lies in working hand-in-hand with stakeholders across disciplines, grounded in our shared humanity.

To embark on this thoughtful AI journey, consider partnering with AI consultancy companies like WillowTree that can offer the requisite insights and expertise. The future of artificial intelligence is not just innovative; it should be responsible, and employing well-established frameworks is a significant stride towards that future.
