Video: AI Maturity: The Next Evolution of Proposals | Duration: 2604s | Summary: AI Maturity: The Next Evolution of Proposals | Chapters: Webinar Introduction (2.96s), AI Maturity Model (70.385s), Early AI Adoption (607.005s), AI Workflow Standardization (930.69s), Metrics and Transformation (1492.78s), AI Proposal Evaluation (1912.315s), Scaling AI Challenges (2004.26s), Reducing AI Hallucinations (2076.915s), AI Proposal Prompts (2157.065s), AI Tool Selection (2247.44s), Small Team Applications (2349.785s), AI Maturity Model (2648.81s), AI Maturity Levels (3240.5s), Scaling AI Advantage (3558.25s), AI-Enabled Transformation (4158.415s), Next Steps Forward (4271.59s), AI Proposal Evaluation (4544.355s), Scaling AI Challenges (4631.82s), Reducing AI Hallucinations (4705.675s), AI Prompt Strategies (4786.145s), AI Tool Selection (4875s), AI Writing Guidelines (5003.945s), Q&A and Resources (5135.21s), Closing Remarks (5213.08s)
Transcript for "AI Maturity: The Next Evolution of Proposals":
Hello, everyone, and welcome to today's Deltek thought leadership webinar. I'm Nora Bashur, product director of AI proposals here at Deltek. Before we get started, just a few quick reminders. For the best webinar experience, we recommend you use Google Chrome or Firefox. If you have a question, please type it into the Q&A box anytime during the presentation. You don't have to wait for the end. We will address as many questions as we can after this session. And if we don't get to your question live, your contact information will be shared with Lohfeld Consulting to answer your question after the webinar. Resources, including today's slide deck, can be downloaded in the doc section, which should be in the upper right corner of your screen. In case you need to leave early or you wanna relisten to the session, you'll receive a link to an on-demand recording of today's webinar via email within twenty-four hours after the session ends. Deltek is pleased to partner with Lohfeld Consulting Group to bring you actionable intelligence and strategies you can use today. Today's discussion topic is very timely. It is AI maturity, the next evolution of proposals, and our expert presenters are Beth Wingate and Brenda Crist. Brenda and Beth, the floor is yours. Thank you. And, Alex, if you can share our slides. There we go. Alright. Welcome, everybody. I'm Beth Wingate, APMP Fellow and CEO of Lohfeld Consulting Group. I am joined by my colleague, Brenda Crist, also an APMP Fellow and vice president of Lohfeld Consulting. Brenda holds APMP Professional certification and teaches several of our Lohfeld Consulting classes. We each have over thirty years of experience in bid and proposal management and writing, and we've been helping proposal teams implement AI since 2023. We published the first book on AI use in proposals for bid and proposal professionals in April 2024. AI is already part of proposal development, but in most organizations, its application is inconsistent and used without clear guardrails. Today, Brenda and I will talk about our new book, From Prompts to Proposals: An AI Maturity Model, a practical framework specifically designed for proposal, capture, and operations teams. The model outlines five maturity levels, and we'll go through each level and show how organizations can progress from ad hoc AI use to disciplined practice, better security, and results you can actually measure. Most teams are surprised by where they actually land in the model. At the end of our presentation, we'll give you a QR code to download a free copy of our new book, and you'll be able to take your AI maturity self-assessment. In the early days of AI adoption, the dominant use case was drafting. Writers used AI to generate their first drafts, rephrase content, and accelerate production. That remains true today. Drafting content still leads at 58% of responses in our 2026 poll. But the real story of where value lands is expanding. Between 2024 and 2026, we saw measurable growth in three areas: proposal reviews and scoring, up from 15% to 19%; compliance validation, up from 7% to 12%; and brainstorming and solutioning, holding steady at around 12 to 11%. What is also changing is how we use AI. In 2024, most teams relied on single prompts to generate content and considered that a breakthrough. By 2026, mature teams have moved well beyond that. They are building configured AI workflows in persistent workspaces. Let me define persistent workspaces. They are dedicated environments within an AI platform that retain files, standing instructions, and conversational histories across multiple sessions. Rather than reuploading documents or re-explaining background at the start of every conversation, the workspace holds that context so AI enters each session already oriented to the work. Within persistent workspaces, you will find structured process instructions, which tell AI how to execute specific tasks in the same disciplined way every time.
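To make that definition concrete, here is a minimal sketch of how a team might represent a persistent workspace and one structured process instruction in code. This is purely illustrative; the field names and the example instruction are hypothetical assumptions, not any particular vendor's API.

```python
from dataclasses import dataclass, field

@dataclass
class Workspace:
    """Illustrative stand-in for a persistent AI workspace: it retains
    files, standing instructions, and history across sessions."""
    name: str
    reference_files: list[str] = field(default_factory=list)  # e.g., RFP files, style guide
    process_instructions: str = ""  # how to execute a task the same way every time
    session_notes: list[str] = field(default_factory=list)    # context carried between sessions

# A structured process instruction: the task runs the same disciplined
# way every time, instead of being re-explained in ad hoc prompts.
COMPLIANCE_REVIEW = """\
1. Read the attached RFP Section L and Section M files.
2. Build a compliance matrix: requirement, mapped proposal section, page limit.
3. Flag any requirement with no mapped section as NON-COMPLIANT.
4. Cite the RFP paragraph number for every requirement you list.
"""

ws = Workspace(
    name="Agency X recompete",
    reference_files=["rfp_section_L.pdf", "rfp_section_M.pdf", "style_guide.md"],
    process_instructions=COMPLIANCE_REVIEW,
)
print(ws.name, len(ws.reference_files))
```

The point isn't the code itself; it's that the workspace, not the individual writer, carries the context and the rules from session to session.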
I'd like to share a story with you now. We've seen a pattern repeated over the past year: a federal contractor enthusiastically adopts AI. Within three months, they're producing first drafts significantly faster. The team is thrilled. The leadership is impressed. But during the formal proposal reviews, something subtle begins happening. Reviewers keep writing the same comment: clear writing, but where is the differentiation? They lose recompetes. The evaluator feedback is not negative. It's just neutral. AI has amplified their existing boilerplate but hasn't strengthened their approaches. These organizations are using AI immaturely. Once they shift and start using AI strategically to improve their proposal scores and quality, they also find their win rate increases. So let me talk to you about the core premise of this presentation. AI maturity is not about mastering prompts. It's about designing systems. AI amplifies whatever operational structure it enters. If your workflows are fragmented, AI accelerates fragmentation. If your content library is disorganized, AI scales the inconsistency. Without the controls listed on the slide, organizations plateau quickly. With them, organizations maximize the advantages that AI offers. Now Beth is gonna give us an overview of our maturity model. Beth? Thanks, Brenda. As a result of our observations, we created the Lohfeld AI Maturity Model. The model consists of five levels. Level one, explore: individual experimentation and high variability. Level two, apply: we see defined use cases and repeatable patterns. Level three, standardize: workflow is integrated and governance is defined. Level four, scale: we see embedded quality systems and measured return on investment. And level five, transform: AI-orchestrated proposal operations with executive ownership. Most organizations operate across multiple levels simultaneously. Writers may be at level one, managers attempting level three, and executives expecting level four outcomes. This misalignment creates friction. The purpose of this model isn't speed. It's sequencing, because skipping levels increases instability, and instability reduces trust. Organizations can use this model as a benchmarking tool to identify where they're stalling and where they should focus next. If you'd like a structured assessment version of the model, we'll share access at the end of this presentation. We can also help you conduct an assessment. This model matters because it prevents premature scaling. Many organizations expand AI use before they define governance, which creates inconsistent outputs across contributors, security and governance gaps, and loss of reviewer and leadership confidence. The model provides structure. It gives leadership an accurate picture of where the organization actually is. It keeps teams from scaling practices that just aren't ready. Without structure, early gains plateau. With it, each level builds on the last, and teams stop having the same problems twice.
Finally, our AI maturity model provides a tool to assess your current state. Maturity initiatives do not fail because of technology limitations, but because of unclear ownership. Who approves the tools? Who defines acceptable use? Who validates the content? Who monitors the risk? When roles are clearly defined, maturity can advance. Without defined accountability, AI adoption can become a political issue. Governance does not slow innovation. It stabilizes it. Consider creating a simple RACI chart to clarify who is responsible, who is accountable, who must be consulted, and who must be informed. When roles are explicit, adoption accelerates safely and maturity can advance. This model is a diagnostic tool. It is designed to help you answer one question: what is our next disciplined step? So benchmark at every level with the understanding that most organizations operate across multiple levels simultaneously. Identify your next credibility gap, then close that gap intentionally before expanding further. Don't skip levels, because doing so introduces risks, including AI fatigue, credibility erosion, and governance backlash. Now we'll walk through the levels in sequence, starting with where most organizations begin, which is exploration. Beth is gonna talk about early adoption at levels one and two. Beth? Thanks. Level one is where nearly every organization begins. This is the exploration phase. Individual writers and proposal managers begin experimenting with AI tools to accelerate drafting, summarize RFPs, or brainstorm ideas. Enthusiasm is typically high because early results can feel like a breakthrough, but this stage is highly variable. Output quality depends heavily on the individual using the tool. One writer may produce strong, structured drafts. Another may produce generic language that requires significant rewriting and costs time. There's little consistency across the team. Governance at this stage is usually minimal. Most use is unofficial, ungoverned, or both. Security and data handling are rarely addressed at this stage. AI is being used, but it's not being managed. Don't get us wrong. Exploration is necessary. However, level one can't be the long-term operating state. Organizations that treat exploration as permanent adoption tend to plateau quickly, and those who intentionally close the exploration phase and move toward structured application create the foundation for sustainable progress. Level one carries identifiable risks. The first is hallucination: those confident, well-written statements that aren't fully verifiable. The second risk is what we call AI speak: generic phrasing, overpolished transitions, and language that says nothing specific about your solution, and that can become detectable. As evaluators read more AI-influenced proposals, their tolerance for generic language is going to decrease. Third is security exposure risk. Proposal content often includes pricing strategies, proprietary processes, and sensitive partner information. Uncontrolled tool usage may introduce compliance concerns. Finally, bias. Your AI tool is pulling content from a limited data pool or from unvalidated old proposals, which can introduce bias. For example, it can skew how you describe your capabilities or misrepresent your past performance. These risks don't mean AI should be avoided. They mean exploration must be stabilized.
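One simple stabilizer, and one the presenters echo later in the Q&A, is to build standing verification language into every prompt instead of trusting ad hoc requests. Here is a minimal sketch, assuming a generic prompt-building helper rather than any specific tool's API; the rule text and file names are invented for the example.

```python
# Illustrative guardrail: every drafting request carries the same standing
# verification rules, so output quality depends less on the individual.
VERIFICATION_RULES = (
    "Use only the source material provided. "
    "Do not invent facts, credentials, or past performance. "
    "After drafting, re-check your work for unsupported claims, "
    "and cite the source document for every factual statement."
)

def guarded_prompt(task: str, sources: list[str]) -> str:
    """Wrap a drafting task with the standing anti-hallucination rules."""
    source_block = "\n".join(f"- {s}" for s in sources)
    return f"{VERIFICATION_RULES}\n\nSources:\n{source_block}\n\nTask: {task}"

print(guarded_prompt(
    "Draft the staffing approach section.",
    ["resume_summaries.docx", "past_performance_3.pdf"],
))
```

The value is consistency: the verification rules travel with every request rather than depending on whoever happens to be typing.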
Guardrails, approved tools, and human review at key checkpoints reduce risk without eliminating innovation. In addition, NIST's AI Risk Management Framework establishes governance, validation, and accountability as core enterprise requirements, not optional controls. I put the link to the framework on the slide for you. The organizations that move intentionally from experimentation to structured use reduce backlash and increase leadership confidence. Risk isn't the enemy. Unmanaged risk is. Level two represents the shift from curiosity to intentionality. At this stage, organizations identify specific, approved, and high-value use cases for AI, like RFP analysis, compliance matrix development, structured brainstorming, or first draft generation. Please note the book is filled with use cases for your consideration. Level two is necessary. It builds confidence and competence. But organizations that remain here without advancing toward standardization will struggle to convert speed into measurable advantage. Intentional use here is progress, but integrated use is transformation. The most important evolution within level two is strategic application. At this stage, AI becomes a strategic assistant rather than just a writing tool. It begins supporting more than just drafting. It helps map strengths to evaluation criteria. It helps identify credibility gaps. It supports bid/no-bid analysis by summarizing complex requirements. However, reliability becomes critical. If outputs vary widely or require heavy correction, confidence can erode. Strategic decisions cannot depend on inconsistent input. This is where discipline begins to matter more than experimentation. There is a key mindset shift at this point, going from "how can AI help me draft these sections?" to "how can AI strengthen our overall positioning and decision making?" That shift prepares organizations for level three, which is the turning point. Let's stop here at level three, which is where we move from repeatable prompts to organized workflows. It's time for a poll to see where you are on your standardization journey. How standard are your AI workflows and prompt engineering? Well established: you have standardized prompts, persistent workspaces, and structured process instructions; AI tools are integrated with other tools, and they're secure. Somewhat standardized: you have partial integration and standardized workflows, but you haven't set up persistent workspaces or structured process instructions. Or not yet standardized: your tools aren't integrated, you don't have a standardized prompt library, and you haven't set up persistent workspaces or structured process instructions. Alright. So we have almost 7% well established, about 25% somewhat standardized, and 69% not yet standardized. Okay. So wherever you landed, level three is where the path forward gets clearer. Let's go ahead and move on. Alright. Level three is the turning point in AI maturity. It's where AI stops being layered on top of your process and becomes integrated into it. Approved tools are documented. Security requirements are clear. Persistent workspaces and structured process instructions are being built, and they're being shared. Prompt libraries are shared across your teams. AI is embedded into defined stages of the proposal life cycle: capture, compliance, content development, and review. Variability decreases. Reviewers spend less time correcting tone and more time evaluating strategy and strengths. Compliance checks become more systematic. Draft quality becomes more predictable.
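As an illustration of what a shared prompt library tied to lifecycle stages can look like, here is a hypothetical sketch; the stage names follow the lifecycle just described, and the prompt text and lookup function are invented for the example.

```python
# Hypothetical shared prompt library, keyed to proposal lifecycle stages
# and stored in version control so every contributor pulls the same prompts.
PROMPT_LIBRARY: dict[str, dict[str, str]] = {
    "capture": {
        "strength_mapping": "Map each discriminator below to the Section M evaluation criteria...",
    },
    "compliance": {
        "matrix_build": "Extract every 'shall' and 'must' statement and build a compliance matrix...",
    },
    "content": {
        "first_draft": "Draft this section using only the approved content provided...",
    },
    "review": {
        "strength_scan": "Score this draft against the criteria and flag generic, unsupported claims...",
    },
}

def get_prompt(stage: str, task: str) -> str:
    """Fetch the one approved prompt for a stage/task pair; fail loudly otherwise."""
    try:
        return PROMPT_LIBRARY[stage][task]
    except KeyError:
        raise KeyError(f"No approved prompt for {stage}/{task}; add one before improvising.")

print(get_prompt("compliance", "matrix_build"))
```

A structure like this is what turns prompts from personal tricks into a governed, team-wide asset.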
This is important because without standardization, scaling introduces instability. But here's the critical distinction. Standardization stabilizes AI, but it doesn't differentiate you. Level three reduces chaos, risk, and variability, but stabilization alone doesn't create competitive separation from other bidders. You can be fully standardized and still sound like everybody else. You can be fully governed and still plateau in your win rates. That's why level three is necessary. It creates a foundation for scaling without introducing new risk. What differentiates level three is content readiness. AI retrieval is only as strong as the content foundation beneath it. At this stage, many organizations discover their knowledge libraries just aren't AI-ready. Past performance examples may be outdated. Boilerplate language may lack ownership or a regular update schedule. Content organization and tagging may be inconsistent or nonexistent. When AI pulls from disorganized, outdated content, it just amplifies the problem. That's the hidden barrier most organizations don't see coming. Content readiness requires clear ownership of artifacts, validation dates, structured tagging, and version control. Strengthening the content foundation is one of the highest-return investments a company can make in its proposal operations. Those who skip this step often blame the tool for inconsistent results.
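To illustrate what that content readiness can mean in practice, here is a minimal sketch of library-item metadata carrying ownership, a validation date, structured tags, and a version. The fields and the one-year freshness rule are representative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ContentArtifact:
    """Illustrative metadata that makes a library item safe for AI retrieval."""
    title: str
    owner: str            # who is accountable for keeping it current
    last_validated: date  # stale items get excluded from retrieval
    tags: list[str]       # structured tagging for consistent lookup
    version: str

def retrievable(item: ContentArtifact, max_age_days: int = 365) -> bool:
    """Only let AI pull content validated within the freshness window."""
    return (date.today() - item.last_validated).days <= max_age_days

past_perf = ContentArtifact(
    title="Past performance: Agency X help desk",
    owner="capture.lead@example.com",
    last_validated=date(2025, 9, 1),
    tags=["past-performance", "help-desk", "agency-x"],
    version="3.2",
)
print(retrievable(past_perf))
```

Even a simple gate like this keeps AI from amplifying outdated or unowned content.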
There's a moderate degree of risk associated with using AI. When asked what concerns you the most about using AI for proposal writing, bid and proposal professionals who responded to our late-2025 LinkedIn poll answered as follows: 42% hallucination, 33% AI speak, 17% security, and 8% bias. Mature organizations should maintain an AI risk mitigation plan to ensure the accuracy and integrity of outputs produced by AI, train their teams in risk mitigation processes, and maintain lessons learned loops. At level three, teams must understand workflow integration, risk mitigation, and content architecture. AI-enabled proposal professionals are not replaced by technology; rather, they use technology to advance proposal operations. This shift requires enablement, not just tool familiarity. Many organizations are investing in structured AI implementation workshops or sprints to align governance, workflows, and team capabilities, such as those Lohfeld Consulting offers. It's also a great time to formalize the use of GovWin's AI tools in your standard processes. I know Lohfeld Consulting has, and we have definitely reaped the rewards. So thanks to GovWin. Now let's look at level four, which is scaling for advantage. Beth? Level four is where AI stops being a drafting accelerator and becomes a performance system. This is where you stop asking, are we using AI? And start asking, are we winning more effectively? Here's what changes. Review teams' comments shift. You stop seeing "clarify" and "generic." You start seeing "strong, push it further." The conversation moves from fixing weaknesses to amplifying your strengths. Late-stage rewrites drop predictably, not because people worked harder, but because variability decreased. Strength density increases across volumes. Strengths aren't isolated. They're systemic, explicitly tied to criteria, and reinforced throughout your proposal. And evaluator debriefs reflect it. You hear clear differentiation, compelling strengths, and easy to evaluate. That's not efficiency. That's a competitive lift. Level four organizations don't just move faster. They compete more deliberately. Here's an example. One organization we worked with had stabilized AI at level three, with governance defined, content validated, and prompts standardized. They didn't stop there. They moved into measurement: strength-to-weakness ratios, review findings, compliance defects, and rewrite cycles, tracked for every submission. Within six months, late-stage rewrites dropped by nearly a third, and evaluator debriefs started reflecting it. Leadership stopped asking, are we using AI effectively? And started asking, what are the performance trends across our last five bids? That's the shift level four produces. What does leadership notice when an organization reaches level four? Proposal timelines become more reliable. Reviews happen when they're scheduled. Rewrite cycles shrink, and surprises around submission deadlines decrease. Executives are no longer pulled in at the last minute to solve preventable issues. Compliance errors decline. Tone inconsistencies are caught earlier. Governance works. Capture managers begin to trust that the strengths they identified early will actually show up clearly in the final proposal. Leadership's questions get more specific. How has our strength articulation improved? What are our rewrite trends? Where are we seeing measurable gains? That discipline shows up in the numbers. Your win rates increase, and your defect trends and rewrite cycle times decrease. Metrics make ROI concrete. At level four, ROI is measured in competitive impact. It shows up in debrief language, where evaluators begin referencing clear differentiation, compelling strengths, or well-articulated advantages. ROI shows up in strength density. Strengths are no longer situational. They are systematic, clearly labeled, explicitly tied to evaluation criteria, and consistent across volumes. Next, ROI shows up in review team trends. Instead of finding recurring weaknesses, review teams see fewer structural gaps. Weaknesses decrease quarter over quarter. Next, ROI shows up in capture confidence. Capture managers begin trusting that the strategy will survive the translation into writing. Level four organizations do not just write faster. They compete better, and that is the difference between efficiency and advantage. At level four, metrics shift from "are we using AI?" to "is AI helping us improve our competitive outcomes?" This slide suggests several measurements you might consider at level four. With metrics, AI becomes a strategy for continual improvement and increased accountability.
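As a sketch of how lightweight that measurement can be, here is an illustrative per-submission tracking example built around metrics named in the presentation; the record layout and all the numbers are invented.

```python
# Illustrative level-four tracking: one record per submission, so trends,
# not single data points, drive the conversation with leadership.
submissions = [
    {"bid": "A", "strengths": 18, "weaknesses": 9, "rewrite_cycles": 4, "compliance_defects": 3},
    {"bid": "B", "strengths": 22, "weaknesses": 7, "rewrite_cycles": 3, "compliance_defects": 1},
    {"bid": "C", "strengths": 25, "weaknesses": 5, "rewrite_cycles": 2, "compliance_defects": 0},
]

for s in submissions:
    ratio = s["strengths"] / max(s["weaknesses"], 1)  # strength-to-weakness ratio
    print(f"Bid {s['bid']}: S/W ratio {ratio:.1f}, "
          f"{s['rewrite_cycles']} rewrite cycles, "
          f"{s['compliance_defects']} compliance defects")
```

A spreadsheet works just as well; the discipline is recording the same few numbers for every bid.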
Evaluator trust is fragile. AI must operate as an assistant and never as an unverified author. Claims must be substantiated. Credentials must be accurate. Tone must remain authentic. And in GovCon, trust is a competitive differentiator. Ethical AI applies at every level and becomes a measurable competitive differentiator at level four. Now let's talk about level five, transformation. Level five organizations do not simply use AI inside proposals. They redesign proposal operations around AI-enabled workflows. Capture, compliance, drafting, review, and lessons learned are no longer separate stages connected by handoffs. They become part of an integrated operating system. Deloitte research finds that organizations are twice as likely to exceed their AI ROI expectations when they design around how work gets done, not just how they deploy new tools. At level five, the conversation shifts from how can we write faster to how can we systematically improve competitive performance across every bid. This level is not achieved accidentally. It's designed deliberately. Transformation is not about adopting better tools. It's about rethinking how proposals are produced, measured, and improved. Also, the organization can safely evolve as AI capabilities advance without sacrificing trust, control, or competitiveness. The proposal organization of the future is not AI-driven. It's AI-enabled. Systems are structured. Content is governed. Metrics are aligned with leadership objectives. Continuous improvement loops refine performance over time. Organizations that deliberately march through the maturity model will arrive at a disciplined, human-led orchestration system, one that enhances performance over time while keeping humans firmly in control of every consequential decision. Let's talk about next steps. Your next step depends on where you are today. If you're exploring, formalize some use cases and go to level two. If you're applying, standardize your workflows and go to level three. If you're standardizing already, start measuring performance indicators and go to level four. If you're scaling, optimize your operations and then go to level five. Progress is sequential. Consider building your next capability in thirty- or sixty-day focused sprints, one level at a time. Remember, the only way to eat an elephant is one bite at a time. So be realistic about what you can do in a given time frame. Don't drive yourself and your team completely nuts trying to complete a level in an unrealistic time frame, all while trying to get multiple winning proposals out the door. And remember, this is an iterative process with technologies that change and expand their capabilities constantly, sometimes on a daily basis. You'll need to monitor updates and capabilities and determine how to incorporate them into your processes and workflows. We began today's discussion by saying that AI is now a baseline capability. Almost every organization has access to the same tools. What separates proposal teams now isn't access. It's the maturity of their implementation. The question is no longer, are we using AI? The question is, is our AI use improving our win rate, and can we prove it? If you leave today and do only one thing, benchmark your AI maturity honestly. The organizations that act on that assessment will produce stronger proposals, catch issues earlier, and make better bid decisions. The ones that don't will keep getting faster at producing the same results. The next move is yours. Let me leave you with four practical ways to continue this conversation, depending on where you are in your AI maturity journey. If you're still exploring or want to benchmark your thinking, the first QR code gives you access to a complimentary download of our new book, From Prompts to Proposals. It includes the full maturity model and the self-assessment tool in Appendix A that we referenced earlier, and that's a great place to start. You can use the second QR code to download a copy of our first AI book, Insights Volume 5, Harnessing the Power of Gen AI for Bid and Proposal Professionals, where we share our hands-on experience using Gen AI to develop better, higher-scoring proposals faster. I'll pause for a second so you can take a picture of the QR code or take a quick screenshot. If you want to build capability across your team, this third QR code links to our classes.
All of them have AI built in. And finally, if you're ready to move from experimentation to consistent practice, governance, workflows, and measurements, the fourth QR code connects you to our workshops and implementation sprints. That's where we work alongside your team to make the changes stick. Wherever you are, the next step matters. Move deliberately, and we're here when you're ready. Here's our contact information again, and please reach out if we can help you. We're really happy to. Here are our other nine books on capture, proposals, and AI. You can download many of them free on our website, or you can get them on Amazon. And now let's turn to your questions. We received literally hundreds of questions before and during this webinar, and it looks like they're still coming in. We've answered many of them already in our articles, so go to our website and check for those articles. We've also answered a lot of them in our book, which you can download after the presentation. Please add your questions to the Q&A window, and we'll answer as many questions as we can during this Q&A session. You can also email your questions directly to Brenda and me. If we don't get to all the questions, we'll answer them after the webinar by email and in our articles. So let me pull those over to look at them. Okay. Alright, Brenda. Here's one for you. What do we know about how defense agencies are currently using or exploring how to use AI tools to evaluate proposals? Okay. Well, they've been very clear. They are using them. I can cite here the Army's DORA tool, which is the Determination of Responsibility Assistant, and, outside of defense, GSA's CALI, which is the Contract Acquisition Lifecycle Intelligence tool. And there are many others. We've cited them in our book, and I'm sure the government is using some bots and just not telling us about it. But what we see is they're using them to scan our proposals for compliance, making sure all the documents are there and that we've answered all of their questions, and that's becoming more and more prevalent as the government talks about it formally and informally. And we talked about that in an article that Brenda and I put in Federal News Network recently. So you can go there, Federal News Network, and look up that article. And we listed a few more in that article. Yes. Alright. If I could add just one more thing: you can be sure that a human used to be the first to evaluate a proposal. Now it won't be infrequent that a bot does that first, but, of course, a human will make the ultimate decision about who the contract is awarded to. Right. Exactly. What are the most common failure points you see when organizations try to scale from ad hoc AI use to an enterprise model? We talked about a lot of them during the presentation today. For me, it's that unorganized content and data that's all over the place in different servers, folders, and hard drives. It's not organized consistently or tagged. It's not validated. There's no process or accountability to keep the company's data up to date. It ends up being garbage in, garbage out, only much faster with AI. Brenda? Well, for me, it is inconsistent training or knowledge sharing across the organization. We have a poll coming out this Tuesday asking where do you acquire your knowledge about AI, and 60% of respondents said it's through self-study.
About 20% said it's through peer interactions and learning, and a much smaller rate, less than 10%, through training. So what I see is an inconsistent pattern of implementation across organizations. Mhmm. Okay. I'm gonna combine a couple of questions here. What are some ways to reduce hallucinations in AI, and what is the best way to prevent AI from just giving you made-up answers and information? So, you wanna take this one? Yeah. Okay. I'll start. Every single one of my prompts, agents, and projects, they call them Gems in Gemini, and there are other terms for that, but basically wherever you're interfacing with your AI, has instructions, lots of instructions, that say: do not hallucinate, check your work when you finish to confirm that you didn't hallucinate, and provide references for me to all the content used in your response. Additionally, be sure you provide your AI with concrete data and information for it to use in formulating its responses. Just like all of us do. If we don't have the facts, we don't have the information, and we're not sure what good looks like, we write in generalities and try to make our responses sound like we know what we're doing and hope that works. And that's what AI does when it doesn't have the data that it needs. Brenda, do you have anything to add to that? That's a good point there. Okay. Next. Do you have any prompts for using AI in proposals? Yes. We have, in fact, lots of prompt suggestions and types of prompts to consider and build on in our articles on the website and in the book. Brenda's put out several articles that have item after item of ways that you can use prompts and prompt suggestions. If I could give everyone just one, it's very easy: create a style guide for yourself. The reason I say that is because if we're gonna have bots looking at our proposals first, just for compliance, to make sure they're not thrown out, they're looking for how you follow instructions, whether you're using their buzzwords, things like that. And if you realize they're looking for patterns, I think creating a style guide that you communicate to your entire team will help improve their early review of your proposal. Sure. And ask the AI to help you develop your prompts and your agents. Tell it what you're trying to do or what you do in your day-to-day job, and ask it to help you figure out what kinds of prompts or agents or Gems or whatever might be useful for you. Then iterate those, save them, share them with your team, get everybody working on that, and create a library of what works best for your group. Do you endorse any particular AI tools for proposal development? If so, which one and why? Brenda? No, we do not, because I think AI tools are very specific to each organization. And in fact, we use many AI tools. You know, one I already talked about: we use the AI tools associated with GovWin. And at our own organization, we use several AI tools. And luckily, several vendors have been very nice in sharing AI tools with us. We also use large language models. We wrote a blog that is on the Lohfeld Consulting website about 70 different AI tools and grouped them according to functionality. So I would say use the ones that are best positioned for your organization. I mean, you might have security concerns. You might have a big budget. You might not have a big budget.
So I think it's just worth the time to take a look at more than one tool, and they're changing all the time, so I would not get locked into one long term. That's my suggestion. Beth, do you have anything to say? Right. Yeah. I was gonna say that list of 70 tools gives a good idea of the kinds of tools that are out there, but don't limit yourself to it. There are new ones coming out every day. For those that are gonna be at the APMP conference in Denver in a couple of weeks, there'll be a lot of vendors there. Come ready to go with lots of questions if your organization is still looking for the right tool for you. And then as well, take advantage of the ones that GovWin is putting out. Alright. Would love to hear how this can be utilized for small teams slash small businesses. In fact, we have specific notes to small businesses in every chapter, right at the beginning, about how you can apply AI at each maturity level, as well as a discussion of the basic training needed at each maturity level. Brenda, anything to add there? No. I think you hit the nail on the head, Beth. We realize that not everybody is working for a thousand- or five-hundred-person organization. It might only be, you know, a handful of people in a shop. So we make sure we tailor the book to that. I think you'll find it pretty easy to use. Good. Okay. I think we have time for one more question. How do we incorporate healthy AI writing with human writing in proposals? Now that AI is becoming a daily tool in society, our technical team says AI writing saves them time in proposal development. As the proposal lead, AI write-ups cost me significant time in rewriting and reformatting the proposal compared to quality human writing as the first draft. It's a constant battle. Brenda? Oh, this is a great question. What I have seen, you know, by teaching proposal writing and as a practitioner, is that when you put AI in the hands of these wonderful technical solution architects or SMEs who are not as familiar with proposal writing, what you get out is not necessarily what you want to put in a proposal. So if I was gonna let them use it, I would put up some heavy guardrails. I would give them prompts that are very specific, that have a style sheet applied to them, and that guide them in how to produce a prompt, because, you know, garbage in, garbage out. So you wanna be as specific as you can about how they should be framing the response in an AI prompt. And I'd add, like we said earlier, give them examples of what good looks like, what the end goal is. Sometimes people have trouble because they just aren't sure what they're supposed to do. So besides helping them with the prompts or the agents, just show them what good looks like and why that's good: that it's full of strengths, it's full of validated information that will make your proposal section easy for the evaluators to score. Beth? Alright. I see in the comments: could you please roll back a few slides to the QR code? We have quite a few people in the chat looking for the QR code for the book. Alright. There we go. Thank you. And maybe we could respond to one more question while they get that QR code. Okay. Let me go back over to that. Sorry, I have a million different things here. One question was from Steven here. How does one benchmark their AI maturity?
And inside the book, there is an appendix, Appendix A, I believe, that has an AI maturity test or quiz where you can go to benchmark yourself. Yes. Okay. Well, with that, back over to you, Nora. And everybody will get copies of these slides, so you will have access to the QR codes. Mhmm. Thank you so much, Beth and Brenda. That was incredibly insightful and very, very timely. Before we conclude today's session, we wanna remind everyone that you will receive a recording of today's webinar by email within twenty-four hours. And if we were not able to answer your question today live, we'll be sure to follow up with you afterwards. If you could also please fill out the short survey that you'll see at the conclusion of this webinar, we'd greatly appreciate it. Thank you again so much for joining us today, and please visit deltek.com for more valuable Deltek events. Have a great day, everybody. Bye bye.