Categories: Meeting

Developing Products with Generative AI

13 March 2024 / 6:00 PM / ATLAS Building, CU Boulder

At this meeting, we will focus on two different aspects of Developing Products with Generative AI. First, we will look at using GenAI tools, through the lens of one of the world’s leading AI companies, to improve the development process itself and increase developer efficiency. Next, we will explore the challenges and opportunities of building GenAI into our products.

Our first two speakers, Carolyn Ujcic and David Soto, both from Google, will speak on “Improving Developer Efficiency with Generative AI.”

GenAI can help improve developer productivity by assisting with code development, DevOps, and non-coding processes. In this demo, we will showcase an AI-powered solution, built on Google’s state-of-the-art generative AI foundation models, that provides assistance to developers and operators across the software development lifecycle.

Carolyn Ujcic, Director of AI Services at Google, leads an organization of AI consultants and engineers in Google Cloud Consulting. In this role, Carolyn and her teams help Google Cloud customers adopt AI in the enterprise. Carolyn has held positions of increasing responsibility, including Machine Learning Engineering Manager, AI Consultant, Fiber Learning Lab Lead and Global Training Lead. She joined Google as a Change Management Lead for Enterprise customers in 2010. Prior to Google, Carolyn served as a management consultant for multinationals at Accenture.

David Soto is a Data Scientist at Google specializing in machine learning, deep learning, and software development. With over 10 years of expertise in systems architecture, IP Core Networks, and Cloud solutions, he has a passion for continuous learning and delivering accurate, data-driven results to enhance company decisions.

Next, Ian Cairns will present “Intro to LLMs for Builders: Challenges & Opportunities of Using GenAI in Your Products.”

Ian is co-founder & CEO at Freeplay, an AI infrastructure startup based in Boulder. Freeplay builds experimentation, testing & monitoring tools that help product development teams make use of generative AI in their products. He’s spent most of his career in product management for developer products, including as a PM at the Boulder startup Gnip and as head of product for the Twitter Developer Platform. He’s also a University of Colorado graduate.

Notes

The conversation revolved around the applications and potential benefits of AI across a range of industries and use cases. Speakers discussed the use of AI in software development, including improved productivity and business impact, and highlighted the importance of understanding and improving code quality. They also discussed the challenges of designing and deploying large language models (LLMs) and the potential of AI-powered tools to enhance the user experience. Speakers shared their experiences with different AI platforms and tools, such as Gong, Raycast, and Loom, and emphasized the importance of balancing flexibility with opinionated functionality to create a more seamless user experience.

Key Takeaways

Some of the key takeaways from the meeting included:

  • GenAI tools for code generation, documentation, refactoring, and testing can improve developer productivity by an estimated 20-45%.
  • Features like code completion, summarization, and explanation in integrated development environments (IDEs) and related tooling can make developers more efficient.
  • Large language models can explain what code does, helping developers understand its functionality even when good documentation or comments are missing.
  • Tools can generate unit test cases automatically based on code, providing a starting point for testing.
  • Integrating ML throughout the development process lets roles beyond engineering get involved, such as PMs, designers, and QA.
  • Running ML models locally on devices allows experimentation and prototyping without requiring internet access or paid API calls (a minimal local-model sketch follows this list).
  • Tools such as Freeplay and cloud-based coding assistants were discussed as concrete aids for code generation, documentation, refactoring, and testing.
  • Building AI-powered software requires a focus on data quality, capturing inputs/outputs, and continuous evaluation/improvement through feedback loops (see the logging-and-evaluation sketch after this list).
  • Defining what constitutes “good” output from ML models is challenging and requires considering multiple dimensions of quality.
  • Adopting ML models in production environments requires monitoring what the systems produce and addressing changes in their behavior over time.
  • There are opportunities to apply ML to areas like enterprise architecture, though doing so also presents unique challenges around model size and monitoring.
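
To illustrate the point above about local prototyping, here is a purely illustrative sketch (not code shown at the meeting) that loads a small open model with the Hugging Face transformers library and generates text entirely on the local machine; the model name, prompt, and generation settings are placeholder choices.

# Illustrative local-prototyping sketch; assumes the `transformers` and
# `torch` packages are installed. The model below is a small placeholder,
# not one of the models discussed at the meeting.
from transformers import pipeline

# After the one-time model download, generation runs entirely on the
# local machine, with no paid API calls.
generator = pipeline("text-generation", model="distilgpt2")

snippet = "def add(a, b):\n    return a + b"
prompt = f"Explain what this Python function does:\n{snippet}\nExplanation:"

# Generate a short continuation; the parameters are arbitrary defaults.
result = generator(prompt, max_new_tokens=60, num_return_sequences=1)
print(result[0]["generated_text"])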
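
The takeaway about capturing inputs/outputs and building feedback loops can also be made concrete. The sketch below (again illustrative, not from the talks) logs every prompt/response pair to a JSONL file and runs a deliberately simple automated check over the log; call_model is a hypothetical stand-in for whatever LLM client a team actually uses.

# Sketch of capturing LLM inputs/outputs and closing a simple feedback loop.
import json
import time
from pathlib import Path

LOG_FILE = Path("llm_calls.jsonl")

def call_model(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM client call (hosted or local)."""
    return f"[model output for: {prompt[:40]}...]"

def call_and_log(prompt: str) -> str:
    """Call the model and record the input/output pair for later review."""
    output = call_model(prompt)
    record = {"ts": time.time(), "prompt": prompt, "output": output}
    with LOG_FILE.open("a") as f:
        f.write(json.dumps(record) + "\n")
    return output

def evaluate_log(max_chars: int = 500) -> float:
    """Deliberately simple automated check: the fraction of logged outputs
    that are non-empty and under a length limit. Real evaluations would
    combine several dimensions of quality (accuracy, tone, safety, etc.)."""
    records = [json.loads(line) for line in LOG_FILE.read_text().splitlines()]
    if not records:
        return 0.0
    passing = sum(1 for r in records if r["output"] and len(r["output"]) <= max_chars)
    return passing / len(records)

call_and_log("Summarize the release notes for version 2.3")
print(f"Outputs passing the basic check: {evaluate_log():.0%}")

In practice, the evaluation step would combine human review with automated checks across several quality dimensions, which echoes the point above about how hard it is to define “good” output.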

Click here to see a full transcript/recording.