
Introduction

Last week, we went over higher-order functions in Kotlin and learned how they can accept functions as parameters and return functions as results. This week, we will take a look at lambdas, another kind of function that is very popular in the functional programming world.



Logic & Data

Computer programs are made up of two parts: logic and data. Usually, the logic lives in functions and the data is passed to them; the functions do something with the data and return a result. When we write logic, we typically put it in a named function. As we saw last week, this is a typical named function:

fun hello(name: String): String {
    return "Hello, $name"
}

Then you can call this function:

fun main() {
    println(hello("Matt"))
}

Which gives us the result:

Hello, Matt

Functions as Data

There is a concept in the functional programming world of treating functions as data. A lambda (a function as data) can do the same things a named function can, but its body can be passed directly into other functions, and it can be assigned to a variable as though it were just a value.
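As a small sketch of this idea (the function and parameter names here are illustrative, not from the post), a higher-order function can accept a lambda and apply it to its data:

```kotlin
// greet() is a higher-order function: its second parameter is itself
// a function that turns a name into a greeting.
fun greet(name: String, formatter: (String) -> String): String {
    return formatter(name)
}

fun main() {
    // The logic is passed in directly as a lambda; no named function needed.
    println(greet("Matt", { n -> "Hello, $n" }))  // prints: Hello, Matt
}
```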

Lambda Syntax

Lambdas are similar to named functions, but they have no name and their syntax looks a little different. A function in Kotlin might look like this:

fun hello(): String {
    return "Hello World"
}

The lambda expression would look like this:

{ "Hello World" }

Here is an example with a parameter:

fun(name: String): String {
    return "Hello, $name"
}

The lambda version:

{ name: String -> "Hello, $name" }

You can call the lambda by passing the parameter to it in parentheses after the last curly brace:

{ name: String -> "Hello, $name" }("Matt")

It’s also possible to assign a lambda to a variable:

val hello = { name: String -> "Hello, $name" }

You can then call the variable the lambda has been assigned to, just as if it were a named function:

hello("Matt")
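Putting the pieces together, a complete, runnable version of this example might look like the following (using the same hello lambda as above):

```kotlin
fun main() {
    // Kotlin infers the variable's type as (String) -> String.
    val hello = { name: String -> "Hello, $name" }
    println(hello("Matt"))  // prints: Hello, Matt
}
```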

Lambdas provide us with a convenient way to pass logic into other functions without having to define that logic in a named function. This is very useful when processing lists or arrays of data. We’ll take a look at processing lists with lambdas in the next post!
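As a small taste of what's coming (a sketch of my own, not the next post's actual example), lambdas can be passed straight into the standard library's list functions:

```kotlin
fun main() {
    val names = listOf("Matt", "Ada", "Grace")
    // map() is a higher-order function; the lambda supplies the logic
    // applied to each element of the list.
    val greetings = names.map { name -> "Hello, $name" }
    println(greetings)  // prints: [Hello, Matt, Hello, Ada, Hello, Grace]
}
```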
