Codetown ::: a software developer's community
Chicago Kotlin User Group x Android Listeners
Hosted at GrubHub, July 17
Coroutines are the new hot stuff, and right now they’re being added to lots of libraries. But what if you don’t want to depend on an alpha01 release in production code? What can coroutines do on their own, right now? In this talk, we’ll discuss the power behind structured concurrency and how we can use it to make our entire stack lifecycle-aware. We’ll look at examples of how to turn any callback or long-running code into a coroutine, and we’ll go over when and how to use Channels to handle hot streams of data without leaking. Finally, and most importantly, we’ll see how we can use these tools to inform our application architecture, so that we can quickly write maintainable and testable features. Thanks to GrubHub for hosting!
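As a concrete illustration of the callback-to-coroutine conversion mentioned in the abstract, here is a minimal sketch built on kotlinx.coroutines' suspendCancellableCoroutine. The LocationClient, LocationCallback, and Location types are invented for the example (they stand in for any one-shot callback API) and are not part of any real library:

import kotlinx.coroutines.suspendCancellableCoroutine
import kotlin.coroutines.resume
import kotlin.coroutines.resumeWithException

// Hypothetical callback-based API we want to adapt.
interface LocationClient {
    fun requestLocation(callback: LocationCallback)
    fun cancelRequest(callback: LocationCallback)
}

interface LocationCallback {
    fun onLocation(latitude: Double, longitude: Double)
    fun onError(error: Throwable)
}

data class Location(val latitude: Double, val longitude: Double)

// The coroutine suspends until the callback fires; cancelling the coroutine
// cancels the underlying request, so neither side leaks.
suspend fun LocationClient.awaitLocation(): Location =
    suspendCancellableCoroutine { continuation ->
        val callback = object : LocationCallback {
            override fun onLocation(latitude: Double, longitude: Double) {
                continuation.resume(Location(latitude, longitude))
            }
            override fun onError(error: Throwable) {
                continuation.resumeWithException(error)
            }
        }
        requestLocation(callback)
        // Tie the request's lifetime to the coroutine's lifetime.
        continuation.invokeOnCancellation { cancelRequest(callback) }
    }

Called from a lifecycle-aware scope, e.g. viewModelScope.launch { val location = client.awaitLocation() }, this reads as plain sequential code, and cancelling the scope unregisters the callback automatically. The same shape extends to multi-shot callbacks via callbackFlow or a Channel, where awaitClose plays the role that invokeOnCancellation plays here.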
Ana Petkovska discusses creating platform teams, establishing the team API, engaging early adopters, easing adoption, and providing a high-quality product.
By Ana Petkovska

DeepSeek open-sourced DeepSeek-V3, a Mixture-of-Experts (MoE) LLM containing 671B parameters. It was pre-trained on 14.8T tokens using 2.788M GPU hours and outperforms other open-source models on a range of LLM benchmarks, including MMLU, MMLU-Pro, and GPQA.
By Anthony Alford

LLM accuracy is a challenging topic to address and is much more multi-dimensional than a simple accuracy score. Denys Linkov introduces a framework for creating micro metrics to evaluate LLM systems, focusing on goal-aligned metrics that improve performance and reliability. By adopting an iterative "crawl, walk, run" methodology, teams can incrementally develop observability.
By Denys Linkov

Google has introduced Gemini 2.0 Flash Thinking Experimental, an AI reasoning model available in its AI Studio platform.
By Daniel Dominguez

Vertex AI RAG Engine is a managed orchestration service that aims to make it easier to connect large language models (LLMs) to external data sources so they stay up to date, generate more relevant responses, and hallucinate less.
By Sergio De Simone