Categories
Meeting

Controlling and Constraining LLMs

12 Aug 2024 / 6:00 PM / ATLAS Building, CU Boulder

We’ve all struggled to get the response we want from a Large Language Model. Emerging techniques now offer better ways to control LLM responses, ensuring more consistent formats and reliability in product workflows. We’ll also discover how advanced observability can enhance your LLM experience and learn the importance of protecting sensitive information across generative AI interactions.

Our first speaker, Uche Ogbuji, will give a talk exploring emerging techniques to better control LLM outputs, enhancing their reliability in product workflows.

Uche Ogbuji is an AI engineering lead, consultant, and startup founder with a long history in AI, data, and network technologies. He contributes to open-source projects like OgbujiPT, co-founded the AI DIY YouTube show, and is also a writer, public speaker, and artist. He also leads the AI for Entrepreneurs and Startups (AES) Subgroup for RMAIIG.

Next, SallyAnn DeLucia, Product Manager at Arize AI, will explore the use of guardrails, datasets, and experiments to enhance AI application performance and reliability. This talk will give AI practitioners actionable insights into optimizing their development processes to achieve cutting-edge results once applications hit production.

SallyAnn DeLucia is a Product Manager at Arize AI, a leader in LLM observability & evaluation. A passionate machine learning enthusiast and generative AI specialist, she holds a master’s degree in Applied Data Science. DeLucia combines a creative outlook with a dedication to developing solutions that are not only technically sound but also socially responsible.

Finally, Aaron Bach, CTO of Liminal, will discuss how organizations are navigating the challenges of data security and privacy when it comes to LLM use, including employee attitudes towards AI security policies and the importance of creating tools that satisfy both security teams and end users.

Aaron Bach is a seasoned product development leader with over 15 years of experience in software, hardware, and innovation for Fortune 500 companies. He has led diverse teams and delivered impactful solutions, including overseeing venture concepts and patentable IP at FIS. Previously, Aaron was SVP of Software Development at Four Winds Interactive, where he played a key role in its acquisitions and platform development.

Thanks to NexusTek for sponsoring pizza! NexusTek provides consulting and managed hybrid cloud solutions to reduce the cost and risk of AI development and operations. https://www.nexustek.com/

Notes

The meeting focused on controlling and constraining large language models (LLMs). Dan Murray discussed the group’s 1,600 members and upcoming events, including an engineering subgroup meeting on LLM output stages and a Women in AI meeting with 44 RSVPs. Uche Ogbuji emphasized the importance of structured output and tool calling to enhance LLM reliability. SallyAnn DeLucia highlighted Arize AI’s Copilot, which uses datasets and experiments for testing. Aaron Bach addressed data security and privacy concerns in AI, noting that 75% of regulated employees use unapproved LLMs, and Liminal ensures secure AI usage by redacting sensitive data.

Key Takeaways

Some of the key takeaways from the meeting:

Uche Ogbuji

  • Discussed the challenges of using large language models (LLMs) in production environments and the need for structured output and guidance
  • Explained the concept of “prompt engineering” and how it is becoming less important as LLMs become more advanced
  • Introduced the idea of “structured output” and “grammar-structured output” as a way to control and constrain LLM outputs
  • Described the two-step process of training the LLM to understand grammar rules, and then guiding the LLM to follow those rules
  • Provided an example of using a JSON schema to define the structure and allowed outputs for an LLM-powered restaurant menu ordering system
  • Emphasized the importance of “tool calling” and agentic frameworks, which allow LLMs to interact with external APIs and applications in a controlled manner
  • Highlighted the recent announcement by OpenAI regarding structured outputs in their API, which significantly improves control and reliability
  • Discussed the concept of “vector steering,” which is a more sophisticated approach to prompt engineering using latent representations
  • Stressed that the future of LLM integration will involve less manual prompt engineering and more automated, constrained workflows
  • Provided examples of open-source tools like llama.cpp that support negative prompting and other advanced prompt control techniques
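The JSON-schema idea in the list above can be sketched on the application side. This is a minimal illustration only: the menu schema and field names are invented for this example, and in practice grammar-constrained decoding (e.g. llama.cpp grammars or OpenAI’s structured outputs) enforces the schema during generation rather than validating after the fact.

```python
import json

# Hypothetical restaurant-menu schema; item names and fields are invented.
MENU_SCHEMA = {
    "type": "object",
    "properties": {
        "item": {"enum": ["margherita", "pepperoni", "veggie"]},
        "size": {"enum": ["small", "medium", "large"]},
        "quantity": {"type": "integer"},
    },
    "required": ["item", "size", "quantity"],
}

def validate_order(raw: str) -> dict:
    """Parse an LLM reply and check it against the schema's enums and types."""
    order = json.loads(raw)
    for field in MENU_SCHEMA["required"]:
        if field not in order:
            raise ValueError(f"missing field: {field}")
    for field, rule in MENU_SCHEMA["properties"].items():
        value = order[field]
        if "enum" in rule and value not in rule["enum"]:
            raise ValueError(f"{field}={value!r} not allowed")
        if rule.get("type") == "integer" and not isinstance(value, int):
            raise ValueError(f"{field} must be an integer")
    return order

reply = '{"item": "margherita", "size": "large", "quantity": 2}'
print(validate_order(reply))
```

The same schema, handed to an API that supports structured outputs, constrains what the model can emit in the first place; the post-hoc check above is then a safety net rather than the primary control.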

SallyAnn DeLucia

  • Provided an overview of Arize AI’s AI assistant, Copilot, and the importance of optimizing its performance and reliability
  • Explained the architecture of Copilot, including the use of a “router/planner” to select the appropriate skills and functions to call
  • Discussed the key components of the router/planner, such as platform data, debugging advice, and state management
  • Highlighted the popularity of Copilot’s search function and the need to expand its capabilities based on user feedback
  • Described the process of building data sets and experiments to test and validate Copilot’s function selection
  • Outlined the steps involved in creating an experiment, including defining the data set, task, evaluator, and GitHub action
  • Demonstrated how the GitHub action automatically runs the experiment whenever changes are made to the Copilot search functions
  • Explained the use of the Arize AI dashboard to view the results of the experiments and analyze the performance of the LLM
  • Discussed the importance of providing explanations for the LLM’s decisions, which helps with understanding and debugging
  • Mentioned the concept of “LLM light analytics,” where the LLM is used to categorize data and provide insights
  • Emphasized the value of data sets and experiments in iterating on the Copilot product and improving its reliability
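The dataset-and-experiment loop described above can be sketched as follows. Every name here is an illustrative stand-in, not Arize’s actual API, and the router is a trivial keyword matcher where a real system would call the LLM:

```python
# Hypothetical router experiment: each dataset row pairs a user query with the
# function the Copilot-style router should select.
DATASET = [
    {"query": "why is my model drifting?", "expected": "debugging_advice"},
    {"query": "show me traces from yesterday", "expected": "search"},
    {"query": "what does this latency spike mean?", "expected": "platform_data"},
]

def route(query: str) -> str:
    """Stand-in for the LLM router; a real system would call the model here."""
    if "traces" in query or "show me" in query:
        return "search"
    if "drift" in query:
        return "debugging_advice"
    return "platform_data"

def run_experiment(dataset, task):
    """Evaluator: score each row, return accuracy plus per-row results for debugging."""
    results = [
        {"query": row["query"], "predicted": task(row["query"]), "expected": row["expected"]}
        for row in dataset
    ]
    correct = sum(r["predicted"] == r["expected"] for r in results)
    return correct / len(results), results

accuracy, results = run_experiment(DATASET, route)
print(f"function-selection accuracy: {accuracy:.0%}")
```

Wiring a script like this into CI (the GitHub action mentioned above) means every change to the routing logic re-runs the experiment automatically.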

Aaron Bach

  • Emphasized the importance of addressing regulatory compliance and sensitive data leakage in AI applications, particularly in regulated industries like healthcare and finance
  • Highlighted the paradox of AI’s potential benefits and the need for secure and compliant usage in these regulated environments
  • Discussed the challenges faced by Chief Information Security Officers (CISOs) in these industries, who are often tasked with saying “no” to new technologies
  • Explained how Liminal aims to enable CISOs to say “yes” to generative AI by providing a secure and compliant platform
  • Described Liminal’s approach to detecting and handling sensitive data in AI prompts, including redaction and intelligent masking
  • Discussed the three main modalities of generative AI usage that Liminal supports: chat, in-app, and app development
  • Demonstrated the Liminal platform’s administrative dashboard, which allows for fine-grained control and governance over AI models
  • Highlighted Liminal’s ability to connect to multiple AI model providers, giving organizations flexibility and choice
  • Explained Liminal’s policy controls, which allow administrators to define how sensitive data should be handled
  • Provided an example of how Liminal would handle a prompt containing sensitive information, redacting or masking the data while preserving context
  • Emphasized the importance of not just focusing on technical solutions, but also understanding the needs and constraints of end-users in regulated industries
  • Shared insights from conversations with healthcare and financial services professionals, highlighting their desire for AI tools that can enhance productivity and efficiency
  • Discussed the concept of “model agnostic assistance,” where Liminal aims to route prompts to the most appropriate AI model based on the task and user needs
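The redaction-with-preserved-context idea from the list above can be sketched with a couple of regular expressions. The patterns, placeholder format, and mapping are invented for illustration; a production system like Liminal’s would use far richer detection than two regexes.

```python
import re

# Illustrative patterns only; real sensitive-data detection goes well beyond this.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(prompt: str) -> tuple[str, dict]:
    """Replace sensitive spans with numbered placeholders, keeping context intact."""
    mapping = {}
    for label, pattern in PATTERNS.items():
        def mask(match, label=label):
            key = f"[{label}_{len(mapping) + 1}]"
            mapping[key] = match.group(0)  # remember original so replies can be un-masked
            return key
        prompt = pattern.sub(mask, prompt)
    return prompt, mapping

clean, mapping = redact("Patient jane@example.com, SSN 123-45-6789, reports chest pain.")
print(clean)
```

Because the placeholders keep their position in the sentence, the model still sees the surrounding context (“Patient …, SSN …, reports chest pain”), and the mapping allows the original values to be restored in the response before it reaches the user.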

Q&A

During the Q&A portion:

  • A question was asked about Liminal’s pricing and whether it could be a cost-effective solution for small startups. Aaron Bach responded that Liminal is designed to be affordable for organizations of all sizes, even individual practitioners, as they aim to provide a more cost-effective alternative to the infrastructure costs of running large language models.
  • Bill McIntyre asked about Liminal’s approach to handling HIPAA compliance and data security in the healthcare industry. Aaron Bach explained that Liminal works closely with customers to ensure they meet all necessary regulatory requirements, including signing business associate agreements. He also discussed Liminal’s focus on providing transparency and observability around AI interactions.
  • A question was raised about the potential for prompt injection vulnerabilities and how Liminal addresses this. Aaron Bach acknowledged that prompt injection is a valid concern, but stated that Liminal’s focus has been on the more common use cases they’ve observed in regulated industries, where malicious prompt injection has not been a significant issue. He emphasized Liminal’s approach of working closely with customers to understand their specific needs and risks.
  • There was a discussion around the concept of “extractive AI” versus “generative AI” and how the terminology is used. Uche Ogbuji provided some context on the historical evolution of these terms and why “generative AI” has become the more common parlance, even when the AI system is primarily extracting and transforming data.
  • A question was asked about Liminal’s approach to handling attachments or additional data sources provided to the AI system, and how they ensure the security and integrity of that information. Aaron Bach and SallyAnn DeLucia discussed Liminal’s strategies for detecting and handling sensitive data in these scenarios, including the use of heuristics and ongoing collaboration with customers.
  • The speakers were asked about their strategies for explaining higher-order prompting and prompt engineering concepts to non-technical users. The panelists acknowledged the challenge and discussed approaches like using frameworks from software engineering, as well as focusing on the end-user experience rather than the technical details.

Click here to see a full transcript/recording.


How GenAI is Transforming Companies

12 June 2024 / 6:00 PM / ATLAS Building, CU Boulder

At this meeting, we explored the impact of generative AI on businesses, including strategic integration to enhance efficiency and innovation. Attendees learned AI-first leadership principles, how to prepare for Generation AI, and how to identify key roles for AI transformations. We highlighted practical AI applications, demonstrating task automation, time-saving, and fostering relationships. Additionally, we examined real-world examples and future potential, showcasing how generative AI drives creativity and productivity across sectors.

Our first speaker, Brett Schklar, presented a talk on “What Your CEO is Being Told About AI.” In speaking with CEOs across the country, Brett helps owners, presidents, and CEOs, as well as their organizations, get their arms around AI, its potential, and how to align the organization for success. During the talk, he covered: The 9 AI-First Leadership Thinking Principles, Preparing for Generation AI, and Who Should Be Leading AI Transformation.

Brett Schklar helps and teaches CEOs to harness generative AI to grow, streamline, and expand their businesses to outflank competitors and gain market share. With over 20 years as a go-to-market driver, agency owner, CMO and now Fractional CMO, Brett captures the attention of CEOs seeking an AI-first transformation of their business, operations, and market penetration.

The second speaker, Matt Fornito, covered “Automate the Boring to Spend Time on the Meaningful.” Matt Fornito empowers business professionals to harness AI, helping them reclaim their most precious resource – time – and allowing them to prioritize what truly matters – fostering meaningful relationships. AI has the potential to revolutionize how we work, but if implemented poorly, it becomes just another tool that debilitates instead of enables. Matt guided attendees through identifying tasks that can be automated, implementing AI effectively, and most importantly, using the time saved to build meaningful relationships. It’s time to take back control and let technology serve us, not the other way around.

Matt Fornito, founder and CAIO at the AI Advisory Group, has over two decades of experience in AI and data science. Previously a CDO, Head of AI, and Data Scientist, Matt built AI practices for two firms, earning accolades such as NVIDIA’s Partner of the Year Award. He has worked with Fortune 500 companies, developing strategic AI roadmaps that drive significant business outcomes. With billions in managed spend and generating hundreds of millions in revenue, Matt is a seasoned consultant and keynote speaker, known for his ability to blend theory, practicality, and storytelling to inspire audiences about AI’s future potential. His thought leadership has been featured in CIO/CDO Roundtables, Evanta events, podcasts, and various news outlets.

Our final speaker, Travis Frisinger, spoke on “Myth and Reality of Generative AI in the Wild.” He shared insights from the frontlines of those building generative AI solutions. The overview illustrated how generative AI is enhancing creativity and productivity through both large-scale innovations and small, impactful enhancements to existing systems. We explored the current state of generative AI, highlighting practical applications. The discussion also touched on the future potential of this transformative technology, showcasing its ability to drive innovation and change across multiple sectors.

Travis Frisinger is a seasoned software engineer with over 20 years of global experience, from mainframe operator to CTO. As Technical Director and AI Adventurer at 8th Light, he leverages AI to drive innovation in product and software development. Travis is a thought leader in AI, dedicated to solving complex problems and promoting technological advancements. Connect with him on LinkedIn for AI insights and visit his blog at aibuddy.software for more on software engineering and AI.

3LC sponsored the pizza for this event! 3LC enables you to create more accurate and smaller ML models. Attach 3LC to your training scripts for instant insights and data tweaks. No SaaS, no data duplication, no uploads, no sign-up required. Simply pip install 3lc! Visit 3lc.ai to see how we illuminate the ML black box with minimal changes to your workflow.

Notes

This meeting covered various topics related to AI adoption in business. Brett spoke to CEOs’ differing reactions to AI, from resistance to excitement about opportunities. Matt discussed using AI to automate mundane tasks so employees can focus on meaningful work. Travis outlined current uses of generative AI like conversational interfaces, augmented productivity tools, and enhanced user experiences. Speakers emphasized the importance of change management in AI adoption and focusing on incremental improvements. They also debated issues like job disruption and the need for continuous education. Overall, the discussion provided insights into how companies are leveraging AI today and perspectives on challenges and benefits in this evolving field.

Key Takeaways

Some of the key takeaways from the meeting:

Brett Schklar

  • Brett speaks to CEOs across the country about AI and helps them harness generative AI to grow their businesses
  • There are different types of CEO reactions to AI: those who see it as cheating, those who are resistant but want to use it in their organizations, those who see it as an IT issue rather than a change-management one, and those excited to leverage AI initiatives
  • CEOs’ biggest competitors (e.g. McKinsey) are making AI their top initiative, forcing others to pay attention
  • AI is already replacing about half of marketing functions through increased efficiency, scalability and faster/stronger results than human-led efforts
  • Brett shares an example of a faucet company CEO who started leveraging generative AI to explore new product design concepts
  • CEOs want to know how AI can provide competitive advantages through better communication, data analysis and helping their organizations evolve

Matt Fornito

  • Matt’s company takes a people-centric approach to AI adoption focused on innovation, culture change and cohesive strategic planning
  • He shared experiencing a concussion that made him reevaluate constantly working 80+ hour weeks and prioritizing meaningful vs mundane work
  • Automating routine tasks through AI can provide 10-20 hours of weekly time savings for employees to focus on more impactful work
  • Matt outlined building an internal prospecting workflow using tools like LinkedIn, Hubspot, Apollo, Zapier and an LLM to streamline connections, CRM integration, scoring and personalized outreach
  • The goal is to have more impact working less by being smart about workload allocation rather than just increasing work hours
  • Case studies like Klarna show AI reducing costs and jobs in some customer support and marketing roles while increasing productivity

Travis Frisinger

  • Travis discussed generative AI from an engineering perspective, focusing on how it is currently being applied in products rather than hype/potential
  • He identified 3 main “lenses” – conversational interfaces like ChatGPT, augmented productivity tools integrated into workflows, and enhanced user experiences
  • Examples of conversational interfaces included ChatGPT, Claude, and Perplexity, while augmented productivity included GitHub Copilot, Excel, Google Sheets and Oakowl
  • Enhanced experiences included digital twins of retired scientists at NASA and Walmart’s targeted advertising campaigns created with generative AI
  • He sees the future trending towards “sentient design” with interfaces that intuitively adapt to user needs through contextual awareness
  • Travis emphasized starting with incremental improvements rather than replacement, ensuring seamless integration and trust, and not assuming generative AI can solve all problems

Q&A

During the Q&A portion:

  • Matt discussed their tech stack including LinkedIn, Hubspot, Apollo, Zapier and nas.ai for their internal workflow automation. He said they may offer this as a product.
  • Speakers debated the impact of AI on jobs, with Matt noting predictions of no displacement by 2029 but concerns about retraining displaced workers.
  • Travis recommended resources like OpenAI docs and asking ChatGPT for AI overviews rather than a single source.
  • Brett shared that few CEOs are evangelists but they want competitive advantages, and generative AI is helping with communication styles.
  • Matt emphasized the importance of change management and addressing fears in adoption through frameworks and incremental improvements.
  • Questions also covered CEO reactions, the percentage who are evangelists, quantum computing opportunities in CS, and managing failure/uncertainty in new technologies.

Click here to see a full transcript/recording.

Categories
Next Meeting

Meeting TBD

TBD 2024 / 6:00 PM / ATLAS Building, CU Boulder

More info to follow.


Developing Products with Generative AI

13 March 2024 / 6:00 PM / ATLAS Building, CU Boulder

At this meeting, we focused on two different aspects of Developing Products With Generative AI. First, we looked at using GenAI tools, through the lens of one of the world’s leading AI companies, to improve the development process itself and increase developer efficiency. Next, we explored the challenges and opportunities of building GenAI into our products.

Our first two speakers, Carolyn Ujcic and David Soto, both from Google, will talk on Improving Developer Efficiency with Generative AI.

GenAI can help improve developer productivity through assisting code development, DevOps, and non-coding processes. In this demo, we will showcase an AI-powered solution to provide assistance to developers and operators across the software development lifecycle built on Google’s state-of-the-art generative-AI foundation models.

Carolyn Ujcic, Director of AI Services at Google, leads an organization of AI consultants and engineers in Google Cloud Consulting. In this role, Carolyn and her teams help Google Cloud customers adopt AI in the enterprise. Carolyn has held positions of increasing responsibility, including Machine Learning Engineering Manager, AI Consultant, Fiber Learning Lab Lead and Global Training Lead. She joined Google as a Change Management Lead for Enterprise customers in 2010. Prior to Google, Carolyn served as a management consultant for multinationals at Accenture.

David Soto is a Data Scientist at Google specializing in machine learning, deep learning and software development. With over 10 years of expertise in systems architecture, IP Core Networks, and Cloud solutions, he has a passion for continuous learning and delivering accurate data driven results to enhance company decisions.

Next, Ian Cairns will present an “Intro to LLMs for Builders: Challenges & Opportunities of Using GenAl In Your Products.”

Ian is co-founder & CEO at Freeplay, an AI infrastructure startup based in Boulder. Freeplay builds experimentation, testing & monitoring tools that help product development teams make use of generative AI in their products. He’s spent most of his career in product management for developer products, including as a PM at the Boulder startup Gnip and as head of product for the Twitter Developer Platform. He’s also a University of Colorado graduate.

Notes

The conversation revolved around the applications and potential benefits of AI within various industries and applications. Speakers discussed the use of AI in software development, including improved productivity and business impact, and highlighted the importance of understanding and improving code quality. They also discussed the challenges of designing and deploying large language models (LLMs) and the potential of AI-powered tools to enhance user experience. Speakers shared their experiences with different AI platforms and tools, such as Gong, Raycast, and Loom, and emphasized the importance of balancing flexibility and opinionated functionality to create a more seamless user experience.

Key Takeaways

Some of the key takeaways from the meeting included:

  • Gen AI tools like code generation, documentation, refactoring and testing can improve developer productivity by 20-45% according to estimates.
  • Features like code completion, summarization and explanation in integrated development environments (IDEs) and tools can make developers more efficient.
  • Large language models can help understand code functionality even without good documentation or comments by providing explanations.
  • Tools can generate unit test cases automatically based on code, providing a starting point for testing.
  • Integrating ML throughout the development process allows more roles beyond just engineers to get involved, like PMs, designers and QA testing code.
  • Running ML models locally on devices allows experimentation and prototyping without requiring internet access or paid API calls.
  • AI and ML can improve developer productivity through features like code generation, documentation, refactoring, and testing. Tools like Freeplay and cloud coders were discussed.
  • Building AI-powered software requires a focus on data quality, capturing inputs/outputs, and continuous evaluation/improvement through feedback loops.
  • Defining what constitutes “good” output from ML models is challenging and requires considering multiple dimensions of quality.
  • Adopting ML models in production environments requires monitoring what systems are producing and addressing changes over time.
  • Integrating ML throughout the software development process involves more actors like PMs, designers, and QA in addition to engineers.
  • There are opportunities to apply ML to roles like enterprise architecture, though it also presents unique challenges around model sizes and monitoring.

Click here to see a full transcript/recording.


The ABCs of GPTs

07 February 2024 / 6:00 PM / ATLAS Building, CU Boulder

We will learn about custom versions of ChatGPT, known as GPTs. These personalized editions of ChatGPT combine instructions, extra knowledge, and skills for specific needs or tasks. You can build these yourself, without code, and have the opportunity to share your customized GPTs with others. OpenAI has created a GPT Store which highlights a reported 3 million of these custom AI bots.

Our first speaker, Liza Adams, will help us understand what GPTs (custom Generative Pre-trained Transformers) make possible, their impact, and the different types of GPTs. As GPTs accelerate innovation and creativity, see some GPTs in action and learn about how you can get the most value from them as a builder and user.

With over 20 years of experience in B2B technology, Liza Adams has held marketing executive roles at industry leaders like Smartsheet, Juniper Networks, Brocade (now Broadcom), Pure Storage, Encompass Technologies, and Level 3 (now Lumen). As a Managing Partner at GrowthPath Partners, she serves high-growth businesses in three distinct roles: as a fractional Chief Marketing Officer, an executive advisor, and an AI consultant. A recognized thought leader in the AI space, Liza is a prolific writer and public speaker. Her work focuses on the responsible use of AI, its strategic value, the future of work, and its application in strategic go-to-market and marketing use cases.

LinkedIn: https://www.linkedin.com/in/lizaadams/
Website: https://www.growthpath.net/

Our second speaker, Daniel Ritchie, will start with the basics of GPTs and go deeper, covering what they are, what they are not, and how you can leverage them for your own AI powered solutions. We will touch on various aspects of GPTs in this approachable overview, and you will walk away with an understanding of the massive transformative potential of GPTs.

Daniel was one of the hosts and judges at the recent GPT Hackathon event, hosted by RMAIIG’s AI for Entrepreneurs and Startups (AES) Subgroup. He is an entrepreneur, dreamer, and forward thinking technologist captivated by the disruptive nature of AI. His current focus is building the Brain Wave Collective, a groundbreaking approach to employment and equity building, exploring new models at the intersection of technology and community. He provides services through LetsBuildGPTs.com, where he shows that the learning curve for mastering these advancements is more manageable than often perceived.

Notes

Speakers Liza Adams and Daniel Ritchie provided an overview of GPT technology, explaining how generative AI models like GPTs can be used to build custom applications with ease. Daniel demonstrated how uploading an API schema to a GPT allows the quick creation of a weather application. The speakers discussed ethical considerations around AI such as mitigating bias, ensuring data privacy, and promoting responsible use. Both highlighted how GPTs have potential to improve productivity, decision making, and allow rapid prototyping of ideas. However, concerns were also raised about potential negative impacts on work-life balance. Attendees asked questions about tracking GPT usage, integrating proprietary data while preserving privacy, optimizing token costs for API calls, and accounting for biases in business applications of AI. Meeting participants were encouraged to explore GPT capabilities through the various AI interest subgroups and share their own ideas.

Some of the items discussed:

  • Discussed the concept of “giving grace” in navigating the new GPT landscape, given its rapid evolution and differing experiences each day
  • Showed examples of using GPTs for data analysis, personalized experiences, automation, ideation, and more through demonstrations of interacting with GPT models
  • Highlighted potential applications of GPTs in marketing use cases like competitive analysis, personalized experiences, and strategic business decision-making
  • Emphasized the importance of ethical and responsible AI practices like overseeing GPTs, narrowing prompts, and preventing made-up responses
  • Explained how AI assistants can supplement knowledge gaps through natural language interactions
  • Demonstrated uploading an API schema to a GPT to quickly build the weather application functionality
  • Discussed more advanced uses like integrating calendar data and optimizing for privacy and costs
  • Emphasized that AI lowers the barrier for non-experts through focused, curated models rather than general-purpose knowledge
  • Encouraged sharing prototype ideas with technical friends to incorporate AI into real products and services
  • OpenAI does not currently provide analytics on GPT usage or visits, though workarounds exist to analyze network traffic.
  • Large language models like GPTs can access vast amounts of data through API integrations, like the medical journal API used by the “Consensus” GPT.
  • Making GPT outputs more deterministic for applications requires lowering the temperature setting, but this reduces interesting responses.
  • Proprietary APIs and self-hosted assistants provide better data privacy than GPTs, allowing access control and limiting what data is shared.
  • Personalizing responses based on large user profiles will be possible as GPTs continue evolving to analyze more complex inputs.
  • Token costs for commercial GPT applications need optimization to ensure business viability, though techniques are still emerging.
  • Bias in AI results from its training and must be actively managed through testing, prompts, and human oversight of complex tasks.
  • Open source and commercial options exist beyond OpenAI for hosting custom models, though integration capabilities still lag the GPT builder tools.

Click here for a full transcript/recording.

Categories
Meeting

Fine Tuning and Optimizing LLMs

10 January 2024 / 6:00 PM / ATLAS Building, CU Boulder

Launching a Large Language Model (LLM) like GPT-4, which powers ChatGPT, can involve at least three distinct processes: training, fine-tuning, and optimizing. Each of these has its own purpose and methodology in the development of AI models.

At a high level, training establishes the foundational knowledge of the LLM, fine-tuning adapts it to specific tasks or domains, and optimizing enhances its performance and efficiency for practical use. Each process is crucial in developing an LLM that is both powerful and applicable to real-world tasks. This meeting will focus on steps 2 and 3: fine-tuning and optimizing LLMs.

Our speaker, Mark Hennings, will cover what fine-tuning is (and isn’t), when to use it, its benefits, and limitations. Mark will also cover how to optimize LLM performance from a broader perspective. See how fine-tuning, prompt engineering, and Retrieval-Augmented Generation (RAG) can all work together to improve LLM performance.

Mark is the founder of Entry Point AI, a modern platform for fine-tuning large language models. He’s a serial entrepreneur, Inc 500 alumni, and self-taught developer who is passionate about UX and democratizing AI.

Notes

This meeting discussed various techniques for optimizing and fine-tuning large language models (LLMs), including prompt engineering, retrieval augmented generation (RAG), and fine-tuning. The presenter, Mark Hennings, explained each technique and how they can be used together or separately to improve LLM outputs. Some key topics discussed included reducing hallucinations, preventing harmful outputs, connecting LLMs to traditional software, and narrowing an LLM’s scope to specialized tasks through fine-tuning. There was also discussion around bias in training data and synthetic data, as well as legal and ethical considerations around certifying AI systems.

Some of the specifics discussed include the following: 

  • Prompt engineering: This involves carefully crafting the input prompt/context to steer model behavior. Techniques include priming, examples, and “chain of thought” reasoning.
  • Retrieval augmented generation (RAG): This supplements the prompt with relevant external knowledge by searching a text corpus for similar embeddings and including them. This can reduce hallucinations and allow referencing real-time or proprietary data.
  • Inference parameters: Settings like temperature, top-p/k, and frequency/repetition penalties can affect which tokens models select during output generation.
  • Function calling: Models can intelligently recommend actions/functions for an application to take based on the prompt, like calling APIs. This gives models more capabilities but requires carefully controlling what functions they can access.
  • Fine-tuning: Re-training models on more domain-specific data narrows their behavior and bakes in desired formatting, style, and capabilities. Task tuning creates specialists for very focused use cases.
  • Measuring input diversity, for example by computing cosine similarity between input embeddings, to evaluate how truly varied a set of fine-tuning examples is.
  • The differences between RAG and fine-tuning, with RAG acting more as a wrapper around the LLM and not modifying it.
  • Diminishing returns with adding more fine-tuning examples, especially if they are too similar to existing ones. New examples for under-served cases are most impactful.
  • Appropriate model sizes for tasks, with larger models generally better for complex writing but smaller models sufficient for classifiers or specialized tasks.
  • Bias in models, including bias in pre-training data, synthetic training data, and challenges around certifying “unbiased” models when real-world data contains biases.
  • Practical workflows for fine-tuning, including identifying and removing unnecessary data attributes and focusing on examples that teach desired behaviors rather than facts.
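The diversity measurement mentioned above can be sketched in a few lines. This is a minimal plain-Python illustration, not anything from the talk itself; in practice the vectors would come from an embedding model (e.g. an embeddings API), and the average pairwise similarity gives a rough diversity score, where lower means more diverse inputs:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def mean_pairwise_similarity(embeddings):
    """Average cosine similarity over all pairs of embeddings.

    A value near 1.0 suggests the inputs are near-duplicates (low
    diversity); lower values suggest a more varied dataset.
    """
    n = len(embeddings)
    total, pairs = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            total += cosine_similarity(embeddings[i], embeddings[j])
            pairs += 1
    return total / pairs
```

This connects to the diminishing-returns point: if a candidate fine-tuning example is highly similar to ones already in the dataset, adding it contributes little new signal.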

Click here for a full transcript/recording.

Categories
Meeting

Thriving in an AI-Paced World

11 December 2023 / 6:00 PM / ATLAS Building, CU Boulder

Our meeting will feature discussions on adapting to the challenges of an AI-driven era, focusing on strategic time management, aligning business goals with user needs, and reimagining business strategies for future growth. Join us to gain insights into leveraging AI for organizational success and navigating the complexities of rapidly evolving technology landscapes.

Our first speaker, Brad Perkins, is a seasoned professional in design thinking, digital product strategy, and organizational transformation. With over a decade of experience leading large digital-experience improvements and organizational workflow optimization efforts, Brad has excelled at creating clarity for leaders and execution teams by aligning end users’ fundamental needs and behaviors with business goals, while emphasizing purpose-driven design and implementation plans.

Brad will discuss adapting behaviors and business structures for success in the AI-driven world, emphasizing the importance of time management and strategic focus to harness AI’s potential while upholding human values. This talk aims to prepare us for the future by reorienting our focus and strategies in the context of AI’s growing influence.

Our second speaker, Bill Quinn, is a futurist with TCS. With more than 25 years of leadership experience in both venture-backed start-ups and large enterprise businesses – including innovation, strategy development, product management and marketing – he helps business leaders “connect the dots” to reimagine and navigate the future.

Bill will discuss how AI is accelerating already-changing industry dynamics, science and technology advancements, emerging ecosystems, and increasingly complex customer and stakeholder expectations. He will make the case that enterprises must think differently and reimagine how they do business to optimize performance today and create new future-forward growth opportunities. He will then provide a framework for thinking about the future in an AI-driven world, helping unravel the complexity and offering an action-oriented path forward for businesses.

Notes

This meeting discussed the growing capabilities and impacts of artificial intelligence.

Brad Perkins (link to deck) began by talking about how AI will speed up existing problems in businesses and require companies to have clarity of purpose and vision. He provided a framework for creating clarity that involves understanding why a project is being done and who it is for, and mapping out the current and desired future states. Some of Brad’s key takeaways included the following:

  • AI will speed up existing problems in businesses if the underlying issues aren’t addressed
  • Companies need clarity of purpose and vision to effectively leverage AI
  • A framework for creating clarity:
    • Understand why, who, current and future states
    • Identify opportunities and define challenges
    • Ideate solutions and map out workflows
    • Consider MVP features and test concepts
  • Clarity is needed to organize thinking and ensure the right problems are being solved before implementing AI tools

Bill Quinn (link to deck) then discussed how AI will accelerate changes in many industries and require new ways of thinking. He emphasized the need to rehearse future scenarios and ask “what if” questions. He outlined a model for considering how AI could both positively and negatively impact different areas like health, education, work, and more. Some of Bill’s key takeaways include the following:

  • AI will accelerate already-changing industry dynamics across sectors
  • Enterprises must think differently and reimagine how they do business to optimize performance and growth with AI
  • A convergence model for considering how AI could impact and be impacted by various areas like the economy, environment, politics, etc.
  • Examples of potential AI impacts included personal robots, crisis response systems, and digital twins for modeling scenarios
  • Companies should rehearse future scenarios by asking “what if” questions to avoid problems and leverage opportunities
  • A maturity model was proposed comparing AI adoption to electricity in manufacturing, suggesting it may take decades to fully realize AI’s potential

There was discussion of how AI may disrupt jobs and careers but also create new opportunities. Speakers encouraged focusing on intangible skills like problem solving, collaboration and lifelong learning. Questions from attendees explored topics like adopting AI in conservative companies, potential guardrails as AI capabilities increase, and managing feelings of uncertainty or anger around technological changes.

Click here for a full transcript/recording.