Last week, we went over higher-order functions in Kotlin. We learned how higher-order functions can accept functions as parameters and can also return functions. This week, we will take a look at lambdas. Lambdas are another kind of function, and they are very popular in the functional programming world.
Computer programs are made up of two parts: logic and data. Usually, logic is described in functions, and data is passed to those functions. The functions do something with the data and return a result. When we write a function, we typically create a named function. As we saw last week, a typical named function looks like this:
fun hello(name: String): String {
    return "Hello, $name"
}
Then you can call this function:
fun main() {
    println(hello("Matt"))
}
Which gives us the result:
Hello, Matt
Functions as Data
There is a concept in the functional programming world where functions are treated as data. Lambdas (functions as data) can do the same thing as named functions, but with lambdas, the content of a given function can be passed directly into other functions. A lambda can also be assigned to a variable as though it were just a value.
Lambda Syntax
Lambdas are similar to named functions, but a lambda has no name and its syntax looks a little different. Whereas a function in Kotlin would look like this:
fun hello(): String {
    return "Hello World"
}
The lambda expression would look like this:
{ "Hello World" }
Here is an example with a parameter:
fun hello(name: String): String {
    return "Hello, $name"
}
The lambda version:
{ name: String -> "Hello, $name" }
You can call the lambda by passing the argument to it in parentheses after the closing curly brace:
{ name: String -> "Hello, $name" }("Matt")
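Wrapped in a main function, a minimal runnable sketch of that call might look like this:
fun main() {
    // The lambda is defined and invoked in a single expression; the result is printed.
    println({ name: String -> "Hello, $name" }("Matt"))  // prints: Hello, Matt
}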
It’s also possible to assign a lambda to a variable:
val hello = { name: String -> "Hello, $name" }
You can then call the variable the lambda has been assigned to, just as if it were a named function:
hello("Matt")
Lambdas provide us with a convenient way to pass logic into other functions without having to define that logic in a named function (see the sketch below). This is very useful when processing lists or arrays of data. We'll take a look at processing lists with lambdas in the next post!
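For example, connecting back to last week's higher-order functions, here is a minimal sketch of handing a lambda to another function. The greet function below is hypothetical, just to illustrate the idea:
// A hypothetical higher-order function that accepts a formatter as a parameter.
fun greet(name: String, formatter: (String) -> String): String {
    return formatter(name)
}

fun main() {
    // Because the lambda is the last argument, it can sit outside the parentheses.
    println(greet("Matt") { name -> "Hello, $name" })  // prints: Hello, Matt
}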