Codetown ::: a software developer's community

Last week, we went over higher-order functions in Kotlin and saw how they can accept functions as parameters and return functions as results. This week, we will take a look at lambdas. Lambdas are another kind of function, and they are very popular in the functional programming world.
Computer programs are made up of two parts: logic and data. Usually, logic is described in functions, and data is passed to those functions. The functions do something with the data and return a result. When we write a function, we typically give it a name. As we saw last week, this is a typical named function:
fun hello(name: String): String {
    return "Hello, $name"
}
Then you can call this function:
fun main() {
    println(hello("Matt"))
}
Which gives us the result:
Hello, Matt
Functions as Data
There is a concept in the functional programming world where functions are treated as data. Lambdas (functions as data) can do the same things as named functions, but the body of a lambda can be passed directly into other functions. A lambda can also be assigned to a variable as though it were just a value.
Lambda Syntax
Lambdas are similar to named functions, but a lambda has no name and its syntax looks a little different. Whereas a function in Kotlin would look like this:
fun hello(): String {
    return "Hello World"
}
The lambda expression would look like this:
{ "Hello World" }
Here is an example with a parameter:
fun hello(name: String): String {
    return "Hello, ${name}"
}
The lambda version:
{ name: String -> "Hello, $name" }
You can call the lambda directly by passing the argument to it in parentheses after the closing curly brace:
{ name: String -> "Hello, $name" }("Matt")
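To see that in runnable form, here is a minimal sketch (the main function is added here for illustration) that invokes the lambda literal immediately and prints the result:
fun main() {
    // Invoke the lambda literal immediately with the argument "Matt"
    println({ name: String -> "Hello, $name" }("Matt")) // prints: Hello, Matt
}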
It’s also possible to assign a lambda to a variable:
val hello = { name: String -> "Hello, $name" }
You can then call the variable the lambda has been assigned to, just as if it were a named function:
hello("Matt")
Lambdas provide us with a convenient way to pass logic into other functions without having to define that logic in a named function. This is very useful when processing lists or arrays of data. We’ll take a look at processing lists with lambdas in the next post!
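To make that concrete, here is a minimal sketch of a higher-order function that accepts its greeting logic as a lambda; greetWith is a hypothetical example, not something from the Kotlin standard library:
// A hypothetical higher-order function: the greeting logic is passed in as a lambda
fun greetWith(name: String, formatter: (String) -> String): String {
    return formatter(name)
}

fun main() {
    val hello = { name: String -> "Hello, $name" }
    println(greetWith("Matt", hello))              // prints: Hello, Matt
    println(greetWith("Matt") { n -> "Hi, $n!" })  // prints: Hi, Matt!
}
Because the lambda is the last parameter, the second call can use Kotlin's trailing-lambda syntax and pass the logic outside the parentheses.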